1 UNSW Sydney, Australia
<EMAIL_ADDRESS>
2 Data61 CSIRO
# Participatory Funding Coordination:
Model, Axioms and Rules
Haris Aziz 1,2   Aditya Ganguly 1
###### Abstract
We present a new model of collective decision making that captures important
crowd-funding and donor coordination scenarios. In the setting, there is a set
of projects (each with its own cost) and a set of agents (that have their
budgets as well as preferences over the projects). An outcome is a set of
projects that are funded along with the specific contributions made by the
agents. For the model, we identify meaningful axioms that capture concerns
including fairness, efficiency, and participation incentives. We then propose
desirable rules for the model and study which sets of axioms can be satisfied
simultaneously. An experimental study indicates the relative performance of
different rules as well as the price of enforcing fairness axioms.
## 1 Introduction
Consider a scenario in which a group of house-mates want to pitch in money to
buy some common items for the house but not every item is of interest or use
to everyone. Each of the items (e.g. TV, video game console, music system,
etc.) has its price. Each resident would like as many of the items useful to
her purchased as possible. She may have concerns about whether she is getting
enough value for the contribution she makes. It is a scenario that is
encountered regularly in numerous shared houses or apartments.
As a second scenario, hundreds of donors want to fund charitable projects.
Each of the projects (e.g. building a well, enabling a surgery, funding a
scholarship, etc.) has a cost requirement. Donors care about coordinating
their donations in a way to fund commonly useful projects and they care about
the amount of money that is used towards projects that they approve. How to
coordinate the funding in a principled and effective way is a fundamental
problem in crowdfunding and donor coordination. The model that we propose is
especially suitable for coordinating donations from alumni at our university.
Both of the settings above are coordination problems in which agents
contribute money and they have preferences over the social outcomes. A
collective outcome specifies which projects are funded and how much agents are
charged.
### Contributions.
We propose a formal model that we refer to as _Participatory Funding
Coordination (PFC)_ that captures many important donor coordination scenarios.
In this model, agents have an upper budget limit. They want as many of their
approved projects funded. We lay the groundwork for work on the model by
formulating new axioms for the model. The logical relations between the axioms
are established and the following question is studied: which sets of axioms
are simultaneously achievable. We propose and study rules for the problem
that are inspired by welfarist concerns but satisfy participation constraints.
In addition to an axiomatic study of the rules, we also undertake an
experimental comparison of the rules. The experiment sheds light on the impact
that various fairness or participation constraints can have on the social
welfare. This impact has been referred to as the price of fairness in other
contexts. In particular, we investigate the effects of enforcing fairness
properties on instances that model real-world applications of PFC, including
crowdfunding.
## 2 Related Work
Our model generally falls under the umbrella of a collective decision making
setting in which agents’ donations and preferences are aggregated to make
funding decisions. It is a concrete model within the broad agenda of achieving
effective altruism (MacAskill, 2015, 2017, Peters, 2019).
The model we propose is related to the discrete participatory budgeting model
(Aziz and Shah, 2020, Aziz et al., 2018, Goel et al., 2019, Fain et al., 2016,
Talmon and Faliszewski, 2019). In discrete participatory budgeting, agents do
not make personal donations towards the projects. They only express
preferences over which projects should be funded. We present several axioms
that are only meaningful for our model and not for discrete participatory
budgeting. Algorithms for discrete participatory budgeting cannot directly be
applied to our setting because they do not take into account individual
rationality type requirements.
Another related setting is multi-winner voting (Elkind et al., 2017). Multi-
winner voting can be viewed as a restricted version of discrete participatory
budgeting. The Participatory Funding Coordination (PFC) setting differs from
multi-winner voting in some key respects: in our model, each project (winner)
has an associated cost, and we select projects subject to a knapsack
constraint as opposed to having a fixed number of winners.
Our PFC model relies on approval ballots in order to elicit agents’
preferences. Dichotomous preferences have been considered in several important
setting including committee voting (Lackner and Skowron, 2019, Aziz et al.,
2017) and discrete participatory budgeting (Aziz et al., 2018, Fluschnik et
al., 2019).
Another related model that takes into account the contributions of agents was
studied by Brandl et al. (2020). Just like in our model, an agent’s utilities
are based on how much money is spent on projects approved by the agent.
However, their model does not have any costs and agents can spread their money
over projects in any way. Our model has significant differences from the model
of (Brandl et al., 2019, 2020): (1) in our setting, the projects are
indivisible and have a minimum cost to complete, (2) agents may not be charged
the full amount of their budgets. The combination of these features leads to
challenges in even defining simple individual rationality requirements.
Furthermore, it creates difficulties in finding polynomial-time algorithms for
some natural aggregation rules (utilitarian, egalitarian, Nash product, etc.).
Our model is more appropriate for coordinating donations where projects have
short-term deadlines and a target level of funding which must be reached for
the project to be successfully completed. We show that the same welfarist
rules that satisfy some desirable properties in the model (Brandl et al.,
2019, 2020), fail to do so in our model. Just as the work of Brandl et al.
(2019, 2020), Buterin et al. (2019) consider donor coordination for the
divisible model in which the projects do not have costs and agents do not have
budget limits. They also assume quasi-linear utilities whereas we model
charitable donors who are not interested in profit but want their money being
used as effectively as possible towards causes that matter to them.
The features of our PFC model enable the model to translate smoothly to a
number of natural settings. Crowdfunding, in particular, is a scenario in
which we would like to capitalise upon commonalities in donors’ charitable
preferences (Corazzini et al., 2015). Furthermore, crowdfunding projects (e.g.
building a well, funding a scholarship, etc.) often have provision points (see
e.g. (Agrawal et al., 2013, Chandra et al., 2016, Damle et al., 2019)), and it
can be critical for these targets to be met (for example, a project to raise
funds for a crowdfunding recipient to pay for a medical procedure would have
to raise a minimum amount of money to be successful, otherwise all donations
are effectively wasted).
Crowdfunding projects have been discussed in a broader context with various
economic factors and incentive issues presented (Agrawal et al., 2013).
Bagnoli and Lipman (1989) discuss additional fairness and economic
considerations for the related topic of the division of public goods. The
discrete model that we explore, where projects have finite caps, has potential
to coordinate donors and increase the effectiveness of a crowdfunding system.
## 3 Participatory Funding Coordination
A _Participatory Funding Coordination (PFC)_ setting is a tuple $(N,C,A,b,w)$
where $N$ is the set of agents/voters, $C$ is the set of projects (also
generally referred to as candidates). The function
$w:C\rightarrow\mathbb{R^{+}}$ specifies the cost $w(c)$ of each project $c\in
C$. The function $b:N\rightarrow\mathbb{R^{+}}$ specifies the budget $b_{i}$
of each agent $i\in N$. The budget $b_{i}$ can be viewed as the maximum amount
of money that agent $i$ is willing to spend. For any set of agents $S\subseteq
N$, we will denote $\sum_{i\in S}b_{i}$ by $b(S)$. The approval profile
$A=(A_{1},\ldots,A_{n})$ specifies for each agent, her set of acceptable
projects $A_{i}$. An _outcome_ is a pair $(S,x)$ where $S\subseteq C$ is the
set of funded projects and $x$ is a vector of payments that specify for each
$i\in N$, the payment $x_{i}$ that is charged to agent $i$. We will restrict
our attention to feasible outcomes in which $x_{i}\leq b_{i}$ for all $i\in N$
and in which a project is funded only if it receives the entirety of its cost
in payments from the agents. For any
given PFC instance, a mechanism $F$ returns an outcome. We will denote the set
of projects selected by $F$ as $F_{C}$ and the payments by $F_{x}$.111PFC can
also be viewed as a matching problem in which the money of agents is matched
to projects. For any outcome $(S,x)$, since $x_{i}\leq b_{i}$, the money
$b_{i}-x_{i}$ can either be kept by the agent $i$ or it can be viewed as going
into some common pool. The main focus of our problem is to fund a maximal set
of projects while satisfying participation constraints.
We suppose that an agent’s preferences are _approval-based_. For any set of
funded projects $S$, any agent $i$’s utility is
$u_{i}(S)=\sum_{c\in S\cap A_{i}}w(c).$
That is, an agent cares about how many dollars are _usefully_ spent on his/her
approved projects. Our preferences domain is similar to the one used by Brandl
et al. (2020) who considered a continuous model in which projects do not have
target costs. In their model, agents also care about how much money is used
for their liked projects.
###### Example 1.
The following is an instance of a PFC problem with 5 agents and 6 projects.
The cost of each project is stated next to the project name and the budget of
each agent next to the agent name. A plus sign indicates that the agent
approves the project.
| | Budget | A (7) | B (6) | C (1) | D (1) | E (8) | F (7) |
|---|---|---|---|---|---|---|---|
| Agent 1 | 3 | + |  | + |  | + |  |
| Agent 2 | 3 | + |  |  | + | + |  |
| Agent 3 | 3 |  | + | + |  |  | + |
| Agent 4 | 2 |  | + |  | + |  | + |
| Agent 5 | 1 | + |  |  |  | + |  |

Table 1: Example of a PFC instance.
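For concreteness, the instance in Example 1 can be encoded directly; the sketch below (our own encoding, not part of the paper's formal development) computes each agent's utility $u_{i}(S)=\sum_{c\in S\cap A_{i}}w(c)$ for a candidate funded set:

```python
# Encoding of the Example 1 instance (project costs, budgets, approval sets).
costs = {"A": 7, "B": 6, "C": 1, "D": 1, "E": 8, "F": 7}
budgets = {1: 3, 2: 3, 3: 3, 4: 2, 5: 1}
approvals = {
    1: {"A", "C", "E"},
    2: {"A", "D", "E"},
    3: {"B", "C", "F"},
    4: {"B", "D", "F"},
    5: {"A", "E"},
}

def utility(i, funded):
    """u_i(S): total cost of the funded projects that agent i approves."""
    return sum(costs[c] for c in funded if c in approvals[i])

# Funding S = {A, C} (total cost 8) yields these utilities for agents 1..5:
print([utility(i, {"A", "C"}) for i in sorted(budgets)])  # [8, 7, 1, 0, 7]
```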
## 4 Axiom design
In this section, we design axioms for outcomes of the PFC setting. We consider
an outcome $(S,x)$. For any axiom $\mathbf{Ax}$ for outcomes, we say that a
mechanism satisfies $\mathbf{Ax}$ if it always returns an outcome that
satisfies $\mathbf{Ax}$.
We first present three axioms for our setting that are based on the principle
of participation:
* •
Minimal Return (MR): each agent’s utility is at least as much as the money put in
by the agent: $u_{i}(S)\geq x_{i}$. In other words, the societal decision is
as good for each agent $i$ as $i$’s best use of the money $x_{i}$ that she is
asked to contribute. We will use this as a minimal condition for all feasible
outcomes.222One can strengthen MR to a stronger version in which
$u_{i}(S)>x_{i}$ for each $i\in N$.
* •
Implementability (IMP): There exists a payment function $y:N\times
C\rightarrow\mathbb{R}^{+}\cup\\{0\\}$ such that $\sum_{c\in C}y(i,c)=x_{i}$
for all $i\in N$, $\sum_{i\in N}y(i,c)\in\\{0,w(c)\\}$ for all $c\in C$, and
there exists no $i\in N$ and $c\notin A_{i}$ such that $y(i,c)>0$. IMP captures the
requirement that an agent’s contribution should only be used on projects that
are approved by the agent.
* •
Individual Rationality (IR): the utility of an agent is at least as much as an
agent can get by funding alone: $u_{i}(S)\geq\max_{S^{\prime}\subseteq
A_{i},w(S^{\prime})\leq b_{i}}(w(S^{\prime})).$ Note that IR is easily
achieved if the project costs are high enough: if for $i\in N$ and $c\in C$,
$w(c)>b_{i}$, then every outcome is IR.
We note that MR is specified with respect to the amount $x_{i}$ charged to the
agent. It can be viewed as a participation property: an agent would only want
to participate in the market if she gets at least as much utility as the money
she spent. We will show IMP is stronger than MR. IMP can also be viewed as a
fairness property: agents are made to coordinate but they only spend their
money on the projects they like.
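The two simplest participation axioms translate readily into checks; the sketch below is our own illustration (not the authors' code), using the same `(costs, budgets, approvals)` encoding as before. Checking IR requires a knapsack computation, which we do by brute force:

```python
from itertools import combinations

def satisfies_mr(costs, budgets, approvals, funded, x):
    """MR: every agent's utility covers her charge, u_i(S) >= x_i."""
    return all(
        sum(costs[c] for c in funded if c in approvals[i]) >= x[i]
        for i in budgets
    )

def satisfies_ir(costs, budgets, approvals, funded):
    """IR: u_i(S) is at least what agent i could achieve funding alone,
    i.e. a max-weight approved subset of cost at most b_i (brute force)."""
    for i in budgets:
        u = sum(costs[c] for c in funded if c in approvals[i])
        approved = sorted(approvals[i])
        best = 0
        for r in range(len(approved) + 1):  # enumerate all approved subsets
            for sub in combinations(approved, r):
                w = sum(costs[c] for c in sub)
                if w <= budgets[i]:
                    best = max(best, w)
        if u < best:
            return False
    return True
```

The inner enumeration is exponential in $|A_{i}|$, which is consistent with the NP-hardness observed later in Proposition 1.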
###### Remark 1.
If there is an IMP outcome in which a set of projects is funded, then there is
also an IMP outcome in which any subset of these projects is funded. In order to
find an IMP outcome for any subset, simply take the original outcome and set
the payments of agents to projects that are being “de-funded” to zero.
Next, we present axioms that are based on the idea of efficiency.
* •
Exhaustive (EXH): There exists no set $N^{\prime}\subseteq N$ and project
$c\in C\setminus S$ with $c\in\bigcap_{i\in N^{\prime}}A_{i}$ such that
$w(c)\leq\sum_{i\in N^{\prime}}(b_{i}-x_{i})$. In words, agents
in $N^{\prime}$ cannot pool in their unspent money and fund another project
liked by all of them.
* •
Pareto optimality (PO)-X: An outcome $(S,x)$ is Pareto optimal within the set
of outcomes satisfying property X if there exists no $(S^{\prime},x^{\prime})$
satisfying X such that $u_{i}(S^{\prime})\geq u_{i}(S)$ for all $i\in N$ and
$u_{i}(S^{\prime})>u_{i}(S)$ for some $i\in N$. Note that Pareto optimality is
a property of the set of funded projects $S$ irrespective of the payments.
* –
PO is Pareto optimal among the set of all outcomes.
* –
PO-IMP: PO among the set of IMP outcomes.
* –
PO-MR: PO among the set of MR outcomes.
* •
Payment constrained Pareto optimality (PO-Pay): An outcome is PO-Pay if it is
not Pareto dominated by any outcome of at most the same price. Formally, there
exists no $(S^{\prime},x^{\prime})$ such that $\sum_{i\in
N}x_{i}^{\prime}\leq\sum_{i\in N}x_{i}$, $u_{i}(S^{\prime})\geq u_{i}(S)$ for
all $i\in N$ and $u_{i}(S^{\prime})>u_{i}(S)$ for some $i\in N$.
* •
Weak Payment constrained Pareto optimality (weak PO-Pay): An outcome is weakly
PO-Pay if it is not Pareto dominated by any outcome in which every agent pays
at most as much as before.
Formally, there exists no $(S^{\prime},x^{\prime})$ such that
$x_{i}^{\prime}\leq x_{i}$ and $u_{i}(S^{\prime})\geq u_{i}(S)$ for all $i\in
N$ and $u_{i}(S^{\prime})>u_{i}(S)$ for some $i\in N$.
A concept that can be viewed in terms of participation, efficiency, and
fairness is the adaptation of the principle of core stability for our setting.
* •
Core stability (CORE): There exists no set of agents who can pool in their
budget and each gets a strictly better outcome. In other words, an outcome
$(S,x)$ is CORE if for every subset of agents $N^{\prime}\subseteq N$, for every
subset of projects $C^{\prime}\subseteq C$ such that
$w(C^{\prime})\leq\sum_{i\in N^{\prime}}b_{i}$, the following holds for some
agent $i\in N^{\prime}$: $u_{i}(S)\geq w(C^{\prime}\cap A_{i}).$
We describe a basic fairness axiom for outcomes and rules based on the idea of
proportionality.
* •
Proportionality (PROP): Suppose a set of agents $N^{\prime}\subseteq N$ _only_
approve of a set of projects $C^{\prime}\subseteq C$ such that $\sum_{i\in
N^{\prime}}b_{i}\geq w(C^{\prime})$. In that case, all the projects in
$C^{\prime}$ are selected.
Finally, we consider an axiom that is defined for mechanisms rather than
outcomes. We say that a mechanism satisfies strategyproofness if there exists
no instance under which some agent has an incentive to misreport her
preference relation.
We conclude this section with some remarks on computation. The following
proposition follows via a reduction from the Subset Sum problem.
###### Proposition 1.
Even for one agent, computing an IR, PO, PO-MR, or PO-IMP outcome is NP-hard.
###### Proof.
Consider the Subset Sum problem in which there is a set of items
$M=\\{1,\ldots,m\\}$ with corresponding weights $w_{1},\ldots,w_{m}$, and a
real value $W$. The problem is to find a subset $S$ with maximum weight
$\sum_{j\in S}w_{j}$ such that $\sum_{j\in S}w_{j}\leq W$. The problem is
well-known to be NP-hard. We reduce it to our setting for a single agent by
taking an item for each project that our agent approves of, and choosing the
item weights to be the corresponding project costs. Then any set of projects
$S$ satisfies the axioms in the proposition if and only if the corresponding
set of items is an optimal solution to the Subset Sum problem. ∎
Note that IMP is a property of an outcome, not of a set of projects. We say
that a set of projects $S$ is IMP if there exists a feasible vector of charges
$x$ such that the outcome $(S,x)$ is IMP. The property IMP can be tested in
polynomial time via a reduction to network flows.
###### Proposition 2.
For a given set of projects $S$, checking whether there exists a vector of
charges $x$ such that $(S,x)$ is implementable can be done in polynomial time.
Similarly, we can also check whether a particular outcome $(S,x)$ is
implementable with a variation of the same construction, where the upper
bound on the total payment of each agent is $x_{i}$ instead of $b_{i}$.
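One concrete way to realise the network-flow reduction is the following sketch (our own implementation, not the authors'): a source feeds each agent up to $b_{i}$, each agent connects only to the funded projects she approves, and each project passes $w(c)$ to the sink; $S$ is IMP iff the maximum flow equals $w(S)$.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on an adjacency-dict capacity graph."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:      # BFS for an augmenting path
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                       # recover the s -> t path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:                     # update residual capacities
            cap[u][v] -= aug
            cap[v][u] = cap[v].get(u, 0) + aug
        flow += aug

def is_imp(costs, budgets, approvals, funded):
    """S is IMP iff agents can cover w(S) paying only toward approved projects."""
    cap = {"s": {}, "t": {}}
    for i in budgets:
        cap[("a", i)] = {}
        cap["s"][("a", i)] = budgets[i]           # agent i can pay up to b_i
    for c in funded:
        cap[("p", c)] = {"t": costs[c]}           # project c needs w(c) in total
    for i in budgets:
        for c in funded & approvals[i]:
            cap[("a", i)][("p", c)] = budgets[i]  # edges only to approved projects
    return max_flow(cap, "s", "t") == sum(costs[c] for c in funded)
```

Replacing the source capacities $b_{i}$ with $x_{i}$ gives the check for a particular outcome $(S,x)$ mentioned above.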
## 5 Axioms: Compatibility and Logical Relations
In this section, we study the compatibility and relations between the axioms
formulated.
###### Remark 2.
Note that IR and MR are incomparable. Any outcome in which an agent does not
pay any money trivially satisfies MR. However, it may not satisfy IR. On the
other hand, an IR outcome may not be MR. Consider the case in which an agent’s
utility is at least as high as by funding alone. However, the agent may have
been asked to pay more than the utility she gets which violates MR.
Next, we point out that PO-Pay is equivalent to weak PO-Pay.
###### Proposition 3.
PO-Pay is equivalent to weak PO-Pay.
###### Proof.
Suppose an outcome $(S,x)$ is not weakly PO-Pay. Then, it is trivially not PO-
Pay. Now suppose $(S,x)$ is not PO-Pay. Then, there exists another outcome
$(S^{\prime},x^{\prime})$ such that $\sum_{i\in N}x_{i}^{\prime}\leq\sum_{i\in
N}x_{i}$, $u_{i}(S^{\prime})\geq u_{i}(S)$ for all $i\in N$ and
$u_{i}(S^{\prime})>u_{i}(S)$ for some $i\in N$. Note that $S^{\prime}$ can be
funded with total amount $\sum_{i\in N}x_{i}^{\prime}$ irrespective of who
paid what. Since $\sum_{i\in N}x_{i}^{\prime}\leq\sum_{i\in N}x_{i}$, we can
redistribute the payments so that $x_{i}^{\prime}\leq x_{i}$ for all $i\in N$
(e.g. by scaling each $x_{i}$ by the ratio of the totals). Hence $(S,x)$ is
not weakly PO-Pay either.
∎
The next proposition establishes further logical relations between the axioms.
###### Proposition 4.
The following logical relations hold between the properties.
1. 1.
IMP implies MR.
2. 2.
PO implies PO-Pay.
3. 3.
PO-$X$ implies PO-$Y$ if $Y$ implies $X$.
4. 4.
PO-IMP implies EXH.
5. 5.
PO-IR implies EXH.
6. 6.
CORE implies IR.
7. 7.
The combination of PO-IMP and IMP imply PROP.
Next, we show that MR is compatible with PO-Pay.
###### Proposition 5.
Suppose an outcome is MR and there is no other MR outcome that Pareto
dominates it. Then, it is PO-Pay.
###### Proof.
Suppose the outcome $(S,x)$ is MR and PO constrained to MR. We claim that
$(S,x)$ is PO-Pay. Suppose it is not PO-Pay. Then there exists another outcome
$(S^{\prime},x^{\prime})$ such that $\sum_{i\in N}x_{i}^{\prime}\leq\sum_{i\in
N}x_{i}$, $u_{i}(S^{\prime})\geq u_{i}(S)$ for all $i\in N$ and
$u_{i}(S^{\prime})>u_{i}(S)$ for some $i\in N$. Note that $S^{\prime}$ is
affordable with total amount $\sum_{i\in N}x_{i}^{\prime}$ irrespective of
who paid what. So $S^{\prime}$ is still affordable if $x_{i}^{\prime}\leq
x_{i}$. Therefore, we can assume that $x_{i}^{\prime}\leq x_{i}$ for all $i\in
N$. Note that since $S^{\prime}$ Pareto dominates $S$ and since $(S,x)$ is MR,
$u_{i}(S^{\prime})\geq u_{i}(S)\geq x_{i}\geq x_{i}^{\prime}$ for all $i\in
N$. Hence $(S^{\prime},x^{\prime})$ also satisfies MR. Since
$(S^{\prime},x^{\prime})$ is MR and $S^{\prime}$ Pareto dominates $S$, this
contradicts the fact that $(S,x)$ is PO constrained to MR. ∎
###### Proposition 6.
There always exists an outcome that satisfies IMP, IR, PO-IMP and hence also
MR and EXH.
###### Proof.
Existence of an outcome that satisfies IMP, IR, PO-IMP: For each $i\in N$
compute $(S_{i},y_{i})$ that is an IR outcome. This can be computed by finding
a maximum total weight set of projects that has weight at most $b_{i}$. Then
consider the outcome $(\bigcup_{i\in N}S_{i},(y_{1},\ldots,y_{n}))$. In such
an outcome, we also keep track of which agent contributed to which project.
Note that if $c\in S_{i}$, then $i$ contributed $w(c)$ to that project. Since
the sets $S_{i}$ may overlap, $\sum_{i\in N}y_{i}\geq w(\bigcup_{i\in
N}S_{i})$. If $\sum_{i\in N}y_{i}>w(\bigcup_{i\in N}S_{i})$, we need to
return $\sum_{i\in N}y_{i}-w(\bigcup_{i\in N}S_{i})$ back to the agents to
ensure that no more money is charged than needed to pay for $\bigcup_{i\in
N}S_{i}$. We return the money
as follows. Recall that we know the amount paid by each agent to each project,
i.e., agent $i$ paid $w(c)$ to project $c$ if and only if $c\in S_{i}$. Some
projects may have received more money than needed. For each project $c$’s
surplus, we uniformly allocate it among the agents who paid for it. Suppose
the outcome satisfying IMP, IR and EXH does not satisfy PO-IMP. Then there
exists another outcome that satisfies IMP that Pareto dominates the outcome.
Such a Pareto improvement still satisfies IR because the utility of each agent
is at least as high. ∎
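The construction in the proof translates directly into code; below is our own sketch (brute-force knapsack per agent, uniform per-project refunds), not an implementation accompanying the paper:

```python
from itertools import combinations

def imp_ir_outcome(costs, budgets, approvals):
    """Construct an IMP and IR outcome as in the proof: each agent fully
    funds her own best affordable approved set; surpluses are refunded."""
    agent_sets = {}
    for i in budgets:
        best, best_w = set(), 0
        approved = sorted(approvals[i])
        for r in range(len(approved) + 1):  # brute force; NP-hard in general
            for sub in combinations(approved, r):
                w = sum(costs[c] for c in sub)
                if best_w < w <= budgets[i]:
                    best, best_w = set(sub), w
        agent_sets[i] = best
    funded = set().union(*agent_sets.values())
    x = {i: sum(costs[c] for c in agent_sets[i]) for i in budgets}
    for c in funded:                        # refund each project's surplus
        payers = [i for i in budgets if c in agent_sets[i]]
        refund = (len(payers) - 1) * costs[c] / len(payers)
        for i in payers:
            x[i] -= refund
    return funded, x
```

For instance, if two agents with budget 2 each both approve a single project of cost 2, each ends up charged 1 after the uniform refund.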
Note that PO-Pay and IMP are both satisfied by an empty outcome with zero
charges. PO-IMP and IMP are easily satisfied by computing a PO outcome from
the set of IMP outcomes. PO-Pay and PO-IMP are easily satisfied by computing a
PO outcome which may not necessarily satisfy IMP.
###### Proposition 7.
There always exists an outcome that satisfies MR, IR, PO-MR and hence also
EXH.
###### Proof.
Existence of an outcome that satisfies MR, IR, PO-MR: From the proof of
Proposition 6, we know that an IMP and IR outcome always exists. Also, from Proposition
4 we know that every IMP outcome is MR, so there always exists an MR and IR
outcome. Now suppose the outcome satisfying MR and IR does not satisfy PO-MR.
Then there exists another outcome satisfying MR that Pareto dominates the
original outcome, which is still IR. There cannot exist an infinite number of
Pareto improvements because the budgets of the agents are finite. Hence we can
reach a PO-MR outcome that is also IR and MR. ∎
We note that if no agent can individually fund a project, then every outcome
is IR. In crowdfunding settings in which projects have large costs, the IR
requirement is often easily satisfied.
## 6 Aggregation Rules
In this section, we take a direct welfarist view to formalize rules that
maximize some notion of welfare. We consider three notions of welfare:
utilitarian, egalitarian, and Nash welfare; and we define the following rules.
* •
UTIL: define the utilitarian welfare derived from an outcome $(S,x)$ as
$\sum_{i\in N}u_{i}(S).$ Then, UTIL returns an outcome that maximises the
utilitarian welfare.
* •
EGAL: given some output $(S,x)$, write the sequence of agents’ utilities from
that outcome as $u(S)=(u_{i}(S))_{i\in N}$, where $u$ is sorted in non-
decreasing order. Then, EGAL returns an outcome $(S,x)$ such that $u(S)$ is
lexicographically maximal among the outcomes.
* •
NASH: maximises the Nash welfare derived from an output $(S,x)$, i.e.
$\prod_{i\in N}\left(u_{i}(S)\right).$
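Since each rule reduces to optimising a welfare objective over affordable project sets (taking affordability as $w(S)\leq b(N)$, with charges then chosen to cover $w(S)$), the unconstrained rules can be sketched by brute force. The code below is our own illustration; it selects a project set only and ignores MR/IMP-type constraints on the payments:

```python
from itertools import combinations
from math import prod

def UTIL(u): return sum(u)      # utilitarian welfare
def EGAL(u): return sorted(u)   # leximin: compare sorted utility vectors
def NASH(u): return prod(u)     # Nash welfare

def best_outcome(costs, budgets, approvals, welfare):
    """Return an affordable project set (w(S) <= b(N)) maximising `welfare`.
    Exponential enumeration, consistent with the NP-hardness results."""
    total_budget = sum(budgets.values())
    names = sorted(costs)
    best_set, best_val = set(), None
    for r in range(len(names) + 1):
        for sub in combinations(names, r):
            if sum(costs[c] for c in sub) > total_budget:
                continue  # not affordable even with all budgets pooled
            utils = [sum(costs[c] for c in sub if c in approvals[i])
                     for i in budgets]
            val = welfare(utils)
            if best_val is None or val > best_val:
                best_set, best_val = set(sub), val
    return best_set
```

For EGAL, lexicographically comparing the sorted utility vectors implements the leximin maximisation described above.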
###### Proposition 8.
UTIL, EGAL, and NASH satisfy PO and hence PO-MR, PO-IMP, PO-Pay, and EXH.
Note that the rules UTIL, EGAL, and NASH do not satisfy minimal guarantees
such as MR. The reason is that an agent may be charged her entire budget
towards a widely approved project even if she does not approve it.
Given that the existing aggregation rules do not provide us with guarantees
that the outcomes they produce will satisfy our axioms, we can instead define
rules that optimize social welfare within certain subsets of feasible
outcomes. For a property X, we can define UTIL-X, EGAL-X, and NASH-X as rules
that maximise the utilitarian, egalitarian and Nash welfare respectively among
only those outcomes that satisfy property X. Next, we analyse the properties
satisfied by rules EGAL/UTIL/NASH constrained to the set of MR or IMP
outcomes. In the continuous model introduced by Brandl et al. (2020), there is
no need to consider the rule NASH-IMP, as the NASH rule in the case where
projects can be funded to an arbitrary degree (given there is sufficient
budget) already satisfies IMP.
Before we study the axiomatic properties, we note that most meaningful axioms
and rules are NP-hard to achieve or compute. The following proposition follows
from Proposition 1.
###### Proposition 9.
Even for one agent, computing a UTIL, UTIL-MR, UTIL-IMP, EGAL, EGAL-MR,
EGAL-IMP, NASH, NASH-MR, or NASH-IMP outcome is NP-hard.
Similarly, the following proposition follows from Proposition 5.
###### Proposition 10.
UTIL-MR, EGAL-MR, and NASH-MR satisfy PO-Pay.
In contrast, we show that UTIL-IMP, EGAL-IMP, and NASH-IMP do not satisfy
PO-Pay. In order to show this, we prove that it is possible in some instances
for the set of jointly IMP and PO-IMP outcomes to be disjoint from the set of
PO-Pay outcomes.
###### Proposition 11.
UTIL-IMP, EGAL-IMP and NASH-IMP do not satisfy PO-Pay. In fact it is possible
that no IMP and PO-IMP outcome satisfies PO-Pay.
###### Proof.
Consider the following instance in Table 2.
| | Budget | A (7) | B (4) | C (3) | D (7) | E (7) |
|---|---|---|---|---|---|---|
| Agent 1 | 4 |  | + |  |  | + |
| Agent 2 | 1 | + |  |  |  | + |
| Agent 3 | 1 | + |  |  |  | + |
| Agent 4 | 5 | + |  |  | + |  |
| Agent 5 | 3 |  |  | + | + |  |

Table 2: Example instance for proof of UTIL/EGAL/NASH-IMP not satisfying PO-Pay.
###### Claim.
Observe that no implementable outcome can fund project $E$ since it is too
expensive to be funded solely by its supporters.
###### Claim.
Any implementable outcome funds a subset of the following project sets:
$\\{A,B,C\\},\\{B,D\\}$. Note that for an implementable outcome, if $D$ is
funded, then only $B$ can also be funded (there is not enough money for agents
who approve of $A$ or $C$ to fund these projects after funding $D$).
| | A,B,C | B,D | D,E (not IMP) |
|---|---|---|---|
| Agent 1 | 4 | 4 | 7 |
| Agent 2 | 7 | 0 | 7 |
| Agent 3 | 7 | 0 | 7 |
| Agent 4 | 7 | 7 | 7 |
| Agent 5 | 3 | 7 | 7 |

Table 3: Utilities provided to each agent by outcomes that fund the project
sets $\\{A,B,C\\}$, $\\{B,D\\}$ and $\\{D,E\\}$.
Note that when we are looking for the optimal outcome under a certain rule, we
can ignore those project sets that are subsets of other project sets. Then,
from Table 3, we see that the unique UTIL-IMP, EGAL-IMP and NASH-IMP outcome
is the outcome that funds $\\{A,B,C\\}$ where each agent sends all their money
to the only project of those three that they approve of. But clearly, this
outcome is Pareto dominated by any outcome that funds $\\{D,E\\}$, which has
the same total cost. Thus we have shown that UTIL-IMP, EGAL-IMP and NASH-IMP
do not satisfy PO-Pay in general. In fact the set of IMP and PO-IMP outcomes
can be disjoint from the set of PO-Pay outcomes. ∎
The most striking aspect of Proposition 11 is that in the continuous domain
without project costs, NASH-IMP is equivalent to NASH and the rule satisfies
Pareto optimality and hence PO-Pay. In our context, NASH-IMP fails to satisfy
PO-Pay.
We also considered the issue of strategyproofness and found examples that show
that none of the UTIL/EGAL/NASH rules are strategyproof whether they are
unconstrained or constrained to MR or IMP outcomes. In contrast, there are
several natural rules such as UTIL that are strategyproof in the continuous
setting as well as in the multi-winner voting setting.
###### Proposition 12.
UTIL, UTIL-MR, UTIL-IMP, NASH, NASH-MR and NASH-IMP are not strategyproof.
###### Proof.
Consider the instance given in Table 4. Note that we only include project $Z$
for the purpose of making the NASH welfare of outcomes non-zero. Now, observe
that it is impossible to fund all three projects, so our possible candidate
project sets to be funded by the above rules are those where two projects get
funded.
| | Budget | X (10) | Y (4) | Z (9) |
|---|---|---|---|---|
| Agent 1 | 8 |  | + | + |
| Agent 2 | 1 |  | + | + |
| Agent 3 | 10 | + | + | + |

Table 4: Example instance where rules are not strategyproof.

| | $\\{X,Y\\}$ | $\\{X,Z\\}$ | $\\{Y,Z\\}$ |
|---|---|---|---|
| Utilitarian Welfare | 22 | 37 | 39 |
| Nash Welfare | 224 | 1539 | 2197 |

Table 5: Utilitarian and Nash welfares of certain project sets to be funded.
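The welfare figures in Table 5 can be reproduced directly from the instance; the snippet below is a quick sanity check in our own encoding:

```python
from math import prod

costs = {"X": 10, "Y": 4, "Z": 9}
approvals = {1: {"Y", "Z"}, 2: {"Y", "Z"}, 3: {"X", "Y", "Z"}}

for funded in ({"X", "Y"}, {"X", "Z"}, {"Y", "Z"}):
    # Utility of each agent, then utilitarian and Nash welfare of the set.
    utils = [sum(costs[c] for c in funded & approvals[i]) for i in (1, 2, 3)]
    print(sorted(funded), "UTIL:", sum(utils), "NASH:", prod(utils))
# {X,Y}: 22 / 224,  {X,Z}: 37 / 1539,  {Y,Z}: 39 / 2197, as in Table 5
```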
We check that there is an implementable outcome that funds $\\{Y,Z\\}$, and
find that the outcome where Agents 1 and 2 pay for $Z$ and Agent 3 pays for
$Y$ is implementable. Hence, $\\{Y,Z\\}$ is the result of UTIL, UTIL-MR, UTIL-
IMP, NASH, NASH-MR, NASH-IMP. Note that the utility for Agent 3 is 13.
Now, suppose Agent 3 were to misrepresent her preferences as in Table 6.
Again, according to this new (perceived) instance, it is impossible for all
projects to be funded, so in Table 7 we check the welfares produced by
funding any two of the projects.
| | Budget | X (10) | Y (4) | Z (9) |
|---|---|---|---|---|
| Agent 1 | 8 |  | + | + |
| Agent 2 | 1 |  | + | + |
| Agent 3 | 10 | + |  | + |

Table 6: Instance where Agent 3 is misrepresenting her preferences.

| | $\\{X,Y\\}$ | $\\{X,Z\\}$ | $\\{Y,Z\\}$ |
|---|---|---|---|
| UTIL | 18 | 37 | 35 |
| NASH | 160 | 1539 | 1521 |

Table 7: Perceived welfares of certain project sets to be funded if Agent 3
misrepresents her preferences.
Since $\\{X,Z\\}$ can be funded by an implementable outcome in which Agents 1
and 2 pay for $Z$ and Agent 3 pays for $X$, $\\{X,Z\\}$ is the result of
UTIL, UTIL-MR, UTIL-IMP, NASH, NASH-MR, and NASH-IMP. With this outcome, Agent 3
sees her utility rise to 19.
Then, by misrepresenting her preferences, Agent 3 can cause the choice of the
aforementioned rules to change from funding $\\{Y,Z\\}$ to funding
$\\{X,Z\\}$, hence increasing her own utility. Therefore, UTIL, UTIL-MR, UTIL-
IMP, NASH, NASH-MR and NASH-IMP are not strategyproof.
∎
Similarly, the following also holds.
###### Proposition 13.
EGAL, EGAL-MR and EGAL-IMP are not strategyproof.
Table 8 shows the axioms that are satisfied by restricting the aggregation
rules to optimising within the space of MR or IMP outcomes.
| | UTIL-MR | EGAL-MR | NASH-MR | UTIL-IMP | EGAL-IMP | NASH-IMP |
|---|---|---|---|---|---|---|
| MR | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| IMP | – | – | – | ✓ | ✓ | ✓ |
| PROP | – | – | – | ✓ | ✓ | ✓ |
| IR | – | – | – | – | – | – |
| PO | – | – | – | – | – | – |
| PO-MR | ✓ | ✓ | ✓ | – | – | – |
| PO-IMP | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| PO-Pay | ✓ | ✓ | ✓ | – | – | – |
| EXH | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| CORE | – | – | – | – | – | – |
| SP | – | – | – | – | – | – |

Table 8: Properties satisfied by UTIL-MR, EGAL-MR, NASH-MR, UTIL-IMP, EGAL-IMP
and NASH-IMP.
## 7 Experiment
In addition to the axiomatic study of the welfare-based rules, we undertake a
simulation-based experiment to gauge the performance of different rules with
respect to utilitarian and egalitarian welfare. Our study shows the impact of
fairness axioms such as MR and IMP on welfare.
We generate random samples of profiles in order to simulate two potential
real-world applications of PFC.
1. 1.
Share-house setting: In this example, we can imagine a group of house-mates
pooling their resources to fund communal items for their house. We operate
under the following assumptions:
* •
Number of agents from 3-6: this represents a reasonable number of house-mates
in a share-house.
* •
Number of projects from 5-12: projects may include buying items such as
tables, chairs, sofas, televisions, lights, kitchen appliances, washing
machines, dryers, etc.
* •
Agent budgets from 300-600 and project costs from 50-1000. We base these
figures on typical rent and furniture costs in Australia, as well as the prices
of the above items at first- and second-hand retailers. We expect that each
agent brings some money to the communal budget, and would spend around one or
two weeks’ worth of rent on one-time communal expenses.
2. 2.
Crowdfunding setting: In this example, we imagine a relatively small number of
expensive projects to be funded, and a large number of philanthropic donors,
and make the following assumptions.
* •
Number of agents from 20-50: A review of crowdfunding websites such as
Kickstarter and GoFundMe shows that the most promoted projects are typically
funded by thousands of donors, and smaller projects can attract tens of
donors. For the purposes of our simulation, we use between 20-50 donors, which
is still large relative to the number of available projects.
* •
Number of projects from 3-8: In crowdfunding, there are far more projects
available than a donor actually sees. However, we can estimate that in a
browsing session, a donor might view the top 3-8 promoted projects.
* •
Agent budgets from 0-400 and project costs from 1000-10000: Projects in real-
life crowdfunding can have vastly varying costs. For our simulation, we want
the agents, with all their money combined, to be able to afford some, but not
all, of the available projects, in order to create instances that are not
trivially resolved by funding all or none of the projects.
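The two settings above can be sampled as in the following sketch. The ranges follow the description given here; the uniform distributions and the 0.5 approval probability are our own assumptions, since the text specifies only the ranges.

```python
import random

def random_instance(setting, rng=random):
    """Sample a random PFC profile for one of the two simulated settings.
    The ranges follow the paper; the uniform distributions and the 0.5
    approval probability are assumptions, as the text only gives ranges."""
    if setting == "share-house":
        n, m = rng.randint(3, 6), rng.randint(5, 12)
        budgets = [rng.uniform(300, 600) for _ in range(n)]
        costs = [rng.uniform(50, 1000) for _ in range(m)]
    else:  # "crowdfunding"
        n, m = rng.randint(20, 50), rng.randint(3, 8)
        budgets = [rng.uniform(0, 400) for _ in range(n)]
        costs = [rng.uniform(1000, 10000) for _ in range(m)]
    # each agent approves each project independently with probability 0.5
    approvals = [{j for j in range(m) if rng.random() < 0.5} for _ in range(n)]
    return budgets, costs, approvals
```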
Imposing MR on a rule has a significant impact on both utilitarian and
egalitarian welfare on average. Of course, since IMP implies MR, we expect
that imposing IMP as a constraint will have an even greater cost on welfare,
but our experiment shows that this cost is only a relatively small increase
over the cost of imposing MR. It is worth noting that in worst-case scenarios,
it is possible that no non-trivial outcome satisfies the constraints, so a
rule subject to a constraint risks producing an outcome that gives all agents
zero utility.
When considering average performance, rules are more resilient to the
imposition of fairness constraints for instances that simulate crowdfunding
scenarios compared to share-house scenarios. When the number of agents is
large and the number of projects is small, and project costs are large
compared to agent budgets, it seems to be easier to achieve fairness
properties.
We typically expect the NASH rule to be a compromise between UTIL and EGAL.
This manifests in the results, where the performance losses for NASH with
respect to utilitarian welfare are considerably less than those for EGAL.
Likewise, NASH loses considerably less with respect to egalitarian welfare
than UTIL.
## 8 Conclusions
We proposed a concrete model for coordinating funding for projects. A formal
approach is important to understand the fairness, participation, and
efficiency requirements a system designer may pursue. We present a detailed
taxonomy of such requirements and clarify their properties and relations. We
also analyse natural welfarist rules both axiomatically and experimentally.
Our model is not just a rich setting to study collective decision making. We
feel that the approaches considered in the paper go beyond academic study and
can be incorporated in portals that aggregate funding for charitable projects.
We envisage future work on online versions of the problem.
In practical applications of PFC, it is important to balance welfare demands
with fairness conditions. Our experiment investigated the cost of fairness
when imposing MR or IMP on UTIL, EGAL and NASH rules over instances that model
crowdfunding and share-house scenarios. We find that imposing MR alone
significantly reduces welfare on average, but imposing IMP as well produces a
relatively small additional cost on welfare. The costs of imposing any
fairness condition are much more pronounced on instances that model a share-
house setting than a crowdfunding setting, suggesting that for a large number
of agents and large project costs, fairness conditions are more easily met.
## Appendix 0.A Remaining Proofs
Proof of Proposition 1
###### Proof.
Consider the Subset Sum problem in which there is a set of items
$M=\\{1,\ldots,m\\}$ with corresponding weights $w_{1},\ldots,w_{m}$, and a
real value $W$. The problem is to find a subset $S$ of maximum weight
$\sum_{j\in S}w_{j}$ such that $\sum_{j\in S}w_{j}\leq W$. The problem is
well-known to be NP-hard. We reduce it to our setting by taking a project for
each item, with project cost equal to the corresponding item weight. Then a
set of projects $S$ satisfies the axioms in the proposition if and only if the
corresponding set of items $S$ is a solution to the Subset Sum problem. ∎
Proof of Proposition 2
###### Proof.
In order to check whether a given set of projects $W$ is implementable, we
just need to check whether the following linear program has a feasible
solution or not. The following can also be checked via network flows.
$$\begin{aligned}
x_{i,j} &= 0 && \text{for all } i\in N,\ j\in[m] \text{ s.t. } p_{j}\notin A_{i}\\
\sum_{i\in N}x_{i,j} &= w(p_{j}) && \text{for all } p_{j}\in W\\
\sum_{i\in N}x_{i,j} &= 0 && \text{for all } p_{j}\notin W\\
\sum_{j\in C}x_{i,j} &\leq b_{i} && \text{for all } i\in N\\
x_{i,j} &\geq 0 && \text{for all } i\in N,\ j\in[m]
\end{aligned}$$
∎
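As the proof notes, the same feasibility check can be run as a maximum-flow computation: connect a source to each agent with capacity equal to her budget, each agent to each funded project she approves with unbounded capacity, and each funded project to a sink with capacity equal to its cost; the set is implementable iff the max flow covers the total cost. The following is a minimal Python sketch (the function name and node layout are our own illustrative choices, not code from the paper):

```python
from collections import deque

def is_implementable(budgets, approvals, costs, funded):
    """Max-flow feasibility check: source -> agent i (capacity b_i),
    agent i -> project j (unbounded, only if i approves j),
    project j -> sink (capacity w(p_j)). `funded` is implementable
    iff the max flow equals the total cost of the funded projects."""
    n, projects = len(budgets), sorted(funded)
    m = len(projects)
    S, T = 0, n + m + 1  # node ids: source, agents 1..n, projects n+1..n+m, sink
    cap = [[0.0] * (T + 1) for _ in range(T + 1)]
    for i in range(n):
        cap[S][1 + i] = budgets[i]
        for j, p in enumerate(projects):
            if p in approvals[i]:
                cap[1 + i][n + 1 + j] = float("inf")
    for j, p in enumerate(projects):
        cap[n + 1 + j][T] = costs[p]
    flow = 0.0
    while True:  # Edmonds-Karp: BFS for augmenting paths in the residual graph
        parent = [-1] * (T + 1)
        parent[S] = S
        queue = deque([S])
        while queue and parent[T] == -1:
            u = queue.popleft()
            for v in range(T + 1):
                if parent[v] == -1 and cap[u][v] > 1e-9:
                    parent[v] = u
                    queue.append(v)
        if parent[T] == -1:
            break
        bottleneck, v = float("inf"), T
        while v != S:  # find the bottleneck capacity along the path
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = T
        while v != S:  # augment along the path
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
    return abs(flow - sum(costs[p] for p in projects)) < 1e-6
```

For instance, with two agents of budget 20 approving only $X$ (cost 30) and a third agent of budget 5 approving only $Y$ (cost 10), funding $\\{X\\}$ is implementable but funding $\\{X,Y\\}$ is not.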
Proof of Proposition 3
###### Proof.
Suppose an outcome $(S,x)$ is not weakly PO-pay. Then, it is trivially not PO-
pay. Now suppose $(S,x)$ is not PO-pay. Then, there exists another outcome
$(S^{\prime},x^{\prime})$ such that $\sum_{i\in N}x_{i}^{\prime}\leq\sum_{i\in
N}x_{i}$, $u_{i}(S^{\prime})\geq u_{i}(S)$ for all $i\in N$ and
$u_{i}(S^{\prime})>u_{i}(S)$ for some $i\in N$. Note that $S^{\prime}$ is
affordable with total amount $\sum_{i\in N}x_{i}^{\prime}$ irrespective of
who pays what. So $S^{\prime}$ remains affordable if $x_{i}^{\prime}\leq
x_{i}$ for all $i$. ∎
Proof of Proposition 4
###### Proof.
We distinguish between the cases.
1. 1.
IMP implies MR: Suppose an outcome $(S,x)$ satisfies IMP. Then there exists a
set of vectors $\\{\bm{y}_{c}\\}_{c\in S}$ where $\bm{y}_{c}$ is a vector of
payments from each agent to project $c$ such that $\sum_{c\in S}\bm{y}_{c}=x$
and $y_{c,i}=0$ if $c\notin A_{i}$. Examining any row $i$ of the vector $x$,
which denotes the money charged from agent $i$, we see that $x_{i}=\sum_{c\in
S}\bm{y}_{c,i}$. But since $y_{c,i}=0$ if $c\notin A_{i}$, we have
$x_{i}=\sum_{c\in S\cap A_{i}}y_{ij}$. Now,
$u_{i}(S)=\sum_{c\in S\cap A_{i}}w(c)=\sum_{c\in S\cap A_{i}}\sum_{j\in
N}y_{c,j}\geq\sum_{c\in S\cap A_{i}}y_{c,i}=x_{i}.$
Hence, $(S,x)$ satisfies MR.
2. 2.
PO implies PO-Pay: Suppose an outcome is PO. Then, it is not Pareto dominated
by any other outcome. Hence, it is not Pareto dominated by any outcome of
lesser total cost, and so it is PO-Pay.
3. 3.
PO-$X$ implies PO-$Y$ if $Y$ implies $X$: Suppose some condition $Y$ implies
another condition $X$. Now, suppose some outcome $(S,x)$ is PO-$X$. Then,
$(S,x)$ is not Pareto dominated by any outcome that satisfies $X$. Since $Y$
implies $X$, the set of all outcomes satisfying $Y$ is a subset of the set of
all outcomes satisfying $X$. Thus, $(S,x)$ is not Pareto dominated by any
outcome that satisfies $Y$, and so $(S,x)$ is PO-$Y$.
4. 4.
PO-IMP implies EXH: Suppose for a contradiction that $(S,x)$ is an outcome
that satisfies PO-IMP but not EXH. Let an implementable payment function for
this outcome be $y:N\times C\rightarrow\mathbb{R}^{+}\cup\\{0\\}$. Since this
outcome is not exhaustive, there is a set of agents $N^{\prime}$ who can pool
together their unspent money to fund another commonly-liked project
$c^{\prime}$. We can construct a new outcome
$(S\cup\\{c^{\prime}\\},x^{\prime})$ with a payment function
$y^{\prime}:N\times C\rightarrow\mathbb{R}^{+}\cup\\{0\\}$ such that for all
agents $i\in N$ and projects $c\in S$, $y^{\prime}(i,c)=y(i,c)$, and for all
agents $j\in N^{\prime}$, $y^{\prime}(j,c^{\prime})=\delta_{j}$, where
$\delta_{j}$ is the contribution of each agent $j\in N^{\prime}$ to the new
project $c^{\prime}$. Note that $(S\cup\\{c^{\prime}\\},x^{\prime})$ is
implementable since its payment function has agents only funding projects they
approve of. Now, $(S\cup\\{c^{\prime}\\},x^{\prime})$ Pareto dominates $(S,x)$
which is a contradiction since $(S,x)$ is PO-IMP. Therefore, any PO-IMP
outcome must satisfy EXH.
5. 5.
PO-IR implies EXH: Suppose for a contradiction that $(S,\textbf{x})$ is an
outcome that satisfies PO-IR but not EXH. Then there is a set of agents
$N^{\prime}$ who can pool together their unspent money to fund another
commonly-liked project $c^{\prime}$. Since no agent’s utility decreases by
funding this project, a new outcome
$(S\cup\\{c^{\prime}\\},\textbf{x}^{\prime})$ is still IR, where
$\textbf{x}^{\prime}$ is any valid vector of payments. Also note that this
$(S\cup\\{c^{\prime}\\},\textbf{x}^{\prime})$ Pareto dominates
$(S,\textbf{x})$ since no agent’s utility decreases and at least one agent’s
utility increases. This is a contradiction as $(S,\textbf{x})$ is PO-IR by our
initial assumption. Hence, any PO-IR outcome is EXH.
6. 6.
CORE implies IR: Suppose an outcome $(S,x)$ satisfies CORE. Then for every subset of agents
$N^{\prime}\subseteq N$, for every subset of projects $C^{\prime}\subseteq C$
such that $w(C^{\prime})\leq\sum_{i\in N^{\prime}}b_{i}$, for some agent $i\in
N^{\prime}$ $u_{i}(S)\geq w(C^{\prime}\cap A_{i}).$ Now consider the case
where $|N^{\prime}|=1$, i.e. $N^{\prime}$ is a subset of one agent. We now
have that for every agent $i$, for all $C^{\prime}\subseteq C$ such that
$w(C^{\prime})\leq b_{i}$, $u_{i}(S)\geq w(C^{\prime}\cap A_{i}).$
Equivalently, for every agent $i$, $u_{i}(S)\geq\max_{S^{\prime}\subseteq
A_{i},w(S^{\prime})\leq b_{i}}(w(S^{\prime}))$ and so $(S,x)$ is IR.
7. 7.
The combination of PO-IMP and IMP implies PROP: Suppose an outcome does not
satisfy PROP. Then there is a set of agents $N^{\prime}\subseteq N$ who _only_
approve of a set of projects $C^{\prime}\subseteq C$ such that $\sum_{i\in
N^{\prime}}b_{i}\geq w(C^{\prime})$, but not all projects in $C^{\prime}$ are
selected. Then one of two cases occurs: (1) the money of agents in
$N^{\prime}$ is used for projects not approved by them, which violates IMP, or
(2) the agents in $N^{\prime}$ can pool unspent money to fund an additional
project in $C^{\prime}$ that is not funded, which means that the outcome is
not PO-IMP.
∎
Proof of Proposition 13
###### Proof.
Consider the instance given in Table 9. Due to the total budget constraint, at
most two of the projects can be funded, so we check the egalitarian welfare
derived by funding any two projects in Table 10.
| | X (3) | Y (2) | Z (1)
---|---|---|---|---
| Budget | | |
Agent 1 | 1 | | + | +
Agent 2 | 1 | | + | +
Agent 3 | 3 | + | + |
Table 9: Example instance where EGAL rules are not strategyproof. | $\\{X,Y\\}$ | $\\{X,Z\\}$ | $\\{Y,Z\\}$
---|---|---|---
Egalitarian Welfare | (2, 2, 5) | (1, 1, 3) | (2, 3, 3)
Table 10: Egalitarian welfares of certain project sets to be funded.
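The vectors in Table 10 appear sorted in ascending order, the form used for egalitarian (leximin) comparisons. They can be reproduced with a short script (the helper name is ours; utilities are the total cost of funded approved projects, as in the rest of the paper):

```python
def sorted_utilities(approvals, funded, cost):
    """Utility vector of an outcome, sorted ascending for leximin comparison."""
    return sorted(sum(cost[p] for p in funded if p in a) for a in approvals)

cost = {"X": 3, "Y": 2, "Z": 1}                   # project costs from Table 9
approvals = [{"Y", "Z"}, {"Y", "Z"}, {"X", "Y"}]  # Agents 1-3

for funded in ({"X", "Y"}, {"X", "Z"}, {"Y", "Z"}):
    print(sorted(funded), sorted_utilities(approvals, funded, cost))
# {Y,Z} gives [2, 3, 3], which leximin-dominates [2, 2, 5] and [1, 1, 3]
```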
Observe that there is an implementable outcome that funds $\\{Y,Z\\}$: for
instance, Agent 1 pays for $Z$, and Agents 2 and 3 split the cost of $Y$.
Hence, $\\{Y,Z\\}$ is funded by each of the above rules. Then, the utility for
Agent 3 is 2.
Now, suppose Agent 3 misrepresents her preferences to suppress the fact that
she approves of project $Y$. The new perceived instance is shown in Table 11
and again, we compute the egalitarian welfare produced by funding any two
projects in Table 12.
| | X (3) | Y (2) | Z (1)
---|---|---|---|---
| Budget | | |
Agent 1 | 1 | | + | +
Agent 2 | 1 | | + | +
Agent 3 | 3 | + | |
Table 11: Example instance where rules are not strategyproof. | $\\{X,Y\\}$ | $\\{X,Z\\}$ | $\\{Y,Z\\}$
---|---|---|---
Egalitarian Welfare | (2, 2, 3) | (1, 1, 3) | (0, 3, 3)
Table 12: Egalitarian welfares of certain project sets to be funded.
Note that there is an implementable outcome that funds $\\{X,Y\\}$, where
Agent 3 pays for $X$ and Agents 1 and 2 pay for $Y$. Hence, $\\{X,Y\\}$ is
funded by each of the rules. Since Agent 3 truly approves of both $X$ and $Y$,
her true utility rises to 5.
Thus, by misrepresenting her preferences, Agent 3 is able to increase the
utility she receives when egalitarian rules are used. Therefore, EGAL, EGAL-MR
and EGAL-IMP are not strategyproof.
∎
## Appendix 0.B Experiments
The results of the experiments are depicted in Figures 1, 2, 3, 4, 5, 6, 7,
and 8.
Figure 1: Average performance of rules with respect to utilitarian welfare in
share-house simulations as a percentage of the maximum achievable utilitarian
welfare.

Figure 2: Average performance of rules with respect to utilitarian welfare in
crowdfunding simulations as a percentage of the maximum achievable utilitarian
welfare.

Figure 3: Worst-case performance of rules with respect to utilitarian welfare
in share-house simulations as a percentage of the maximum achievable
utilitarian welfare.

Figure 4: Worst-case performance of rules with respect to utilitarian welfare
in crowdfunding simulations as a percentage of the maximum achievable
utilitarian welfare.

Figure 5: Average performance of rules with respect to egalitarian welfare in
share-house simulations as a percentage of the maximum achievable egalitarian
welfare.

Figure 6: Average performance of rules with respect to egalitarian welfare in
crowdfunding simulations as a percentage of the maximum achievable egalitarian
welfare.

Figure 7: Worst-case performance of rules with respect to egalitarian welfare
in share-house simulations as a percentage of the maximum achievable
egalitarian welfare.

Figure 8: Worst-case performance of rules with respect to egalitarian welfare
in crowdfunding simulations as a percentage of the maximum achievable
egalitarian welfare.
## Appendix 0.C Additional Propositions
###### Proposition 14.
UTIL, EGAL and NASH satisfy PO.
###### Proof.
Suppose there were an outcome that Pareto dominated the outcome returned by
one of these rules. Then, it would also have strictly greater
utilitarian/egalitarian/Nash welfare than that outcome, which is a
contradiction. ∎
###### Proposition 15.
UTIL and NASH do not satisfy MR (or IMP by corollary) or IR (or CORE by
corollary).
###### Proof.
For the profile in Table 13, UTIL and NASH would require that only project $X$
is funded by using all of both agents’ money. But then, Agent 2 could have
left the mechanism and derived better utility on her own (violating IR). Also,
her return is less than her contribution, so MR is not satisfied.
| | X (20) | Y (10)
---|---|---|---
| Budget | |
Agent 1 | 10 | + |
Agent 2 | 10 | | +
Table 13: Example profile where UTIL and NASH outcomes do not satisfy MR.
∎
###### Proposition 16.
EGAL does not satisfy MR (or IMP by corollary).
###### Proof.
For the profile in Table 14, the maximally egalitarian outcome is to implement
both projects. However, in this case, Agent 1 is charged 25 but receives a
utility of 20, so this outcome does not satisfy MR.
| | X (20) | Y (10)
---|---|---|---
| Budget | |
Agent 1 | 25 | + |
Agent 2 | 5 | | +
Table 14: Example profile where EGAL outcomes do not satisfy MR.
∎
###### Proposition 17.
EGAL does not satisfy IR.
###### Proof.
In the below example, given the total budget, the choice is to implement
either $X$ or $Y$ or neither. The egalitarian outcome would be to implement
$Y$, but then, by leaving the system, Agent 1 could pay for $X$ herself and
get a better outcome.
| | X (20) | Y (10)
---|---|---|---
| Budget | |
Agent 1 | 20 | + | +
Agent 2 | 5 | | +
∎
###### Proposition 18.
UTIL-IMP, EGAL-IMP and NASH-IMP do not satisfy PO-MR.
###### Proof.
In Table 15, the only implementable outcomes are those in which no projects
are funded or Agents 1 and 2 pay for project $X$. But the outcome where both
projects are funded, with all agents spending all of their money, satisfies
MR and Pareto dominates any of the implementable outcomes.
| | X (10) | Y (12)
---|---|---|---
| Budget | |
Agent 1 | 10 | + |
Agent 2 | 10 | + |
Agent 3 | 2 | | +
Table 15: Example profile where UTIL-IMP, EGAL-IMP and NASH-IMP outcomes do
not satisfy PO-MR.
∎
###### Proposition 19.
UTIL-MR and NASH-MR do not satisfy IMP.
###### Proof.
Consider the example below. UTIL-MR and NASH-MR will require that both
projects are funded. However, at least one of Agents 1 and 2 must give some
money to project $Y$, as Agent 3 cannot afford it by herself. Thus, this
outcome is not implementable.
| | X (30) | Y (10)
---|---|---|---
| Budget | |
Agent 1 | 20 | + |
Agent 2 | 20 | + |
Agent 3 | 5 | | +
∎
###### Proposition 20.
UTIL-MR and UTIL-IMP do not satisfy IR (or CORE by corollary).
###### Proof.
Consider the example below. The overall (unique) utilitarian outcome is
achieved by funding projects $Y$ and $Z$ with $x_{1}=4$ and $x_{2}=11$.
Observe that this is an implementable outcome, and so this would be the result
of UTIL-MR and UTIL-IMP. However, if Agent 1 left the system, she could have
individually funded projects $W$ and $X$ which would have returned to her a
greater utility. Therefore, this outcome does not satisfy IR.
| | W (3) | X (3) | Y (5) | Z (10)
---|---|---|---|---|---
| Budget | | | |
Agent 1 | 6 | + | + | + |
Agent 2 | 11 | | | + | +
∎
###### Proposition 21.
NASH-MR, NASH-IMP, EGAL-MR and EGAL-IMP do not satisfy IR (or CORE by
corollary).
###### Proof.
From the example above, we see that any egalitarian distribution will also
fund only projects $Y$ and $Z$. For this outcome to be implementable (and also
satisfy MR), we have $x_{1}=4$ and $x_{2}=11$ or $x_{1}=5$ and $x_{2}=10$. In
either case, we have seen above that this outcome does not satisfy IR. The
same argument applies to NASH-MR and NASH-IMP. ∎
###### Proposition 22.
UTIL, UTIL-MR, UTIL-IMP, NASH, NASH-MR and NASH-IMP are not strategyproof.
###### Proof.
Consider the instance given in Table 16. Note that we only include project $Z$
for the purpose of making the NASH welfare of outcomes non-zero. Now, observe
that it is impossible to fund all three projects, so our possible candidate
project sets to be funded by the above rules are those where two projects get
funded.
| | X (10) | Y (4) | Z (9)
---|---|---|---|---
| Budget | | |
Agent 1 | 8 | | + | +
Agent 2 | 1 | | + | +
Agent 3 | 10 | + | + | +
Table 16: Example instance where rules are not strategyproof. | $\\{X,Y\\}$ | $\\{X,Z\\}$ | $\\{Y,Z\\}$
---|---|---|---
Utilitarian Welfare | 22 | 37 | 39
Nash Welfare | 224 | 1539 | 2197
Table 17: Utilitarian and Nash welfares of certain project sets to be funded.
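The welfare figures in Table 17 can be reproduced with a short script (the `utility` helper is our own naming; utilities follow the paper's definition, i.e. the total cost of funded projects an agent approves):

```python
from math import prod

def utility(approved, funded, cost):
    # An agent's utility: total cost of the funded projects she approves.
    return sum(cost[p] for p in funded if p in approved)

cost = {"X": 10, "Y": 4, "Z": 9}                       # project costs from Table 16
approvals = [{"Y", "Z"}, {"Y", "Z"}, {"X", "Y", "Z"}]  # Agents 1-3

for funded in ({"X", "Y"}, {"X", "Z"}, {"Y", "Z"}):
    us = [utility(a, funded, cost) for a in approvals]
    print(sorted(funded), "utilitarian:", sum(us), "Nash:", prod(us))
# prints 22/224, 37/1539 and 39/2197, matching Table 17
```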
We check that there is an implementable outcome that funds $\\{Y,Z\\}$, and
find that the outcome where Agents 1 and 2 pay for $Z$ and Agent 3 pays for
$Y$ is implementable. Hence, $\\{Y,Z\\}$ is the result of UTIL, UTIL-MR, UTIL-
IMP, NASH, NASH-MR, NASH-IMP. Note that the utility for Agent 3 is 13.
Now, suppose Agent 3 were to misrepresent her preferences as in Table 18.
Again, according to this new (perceived) instance, it is impossible for all
projects to be funded, so in Table 19 we check the welfares produced by
funding any two of the projects.
| | X (10) | Y (4) | Z (9)
---|---|---|---|---
| Budget | | |
Agent 1 | 8 | | + | +
Agent 2 | 1 | | + | +
Agent 3 | 10 | + | | +
Table 18: Instance where Agent 3 is misrepresenting her preferences. | $\\{X,Y\\}$ | $\\{X,Z\\}$ | $\\{Y,Z\\}$
---|---|---|---
UTIL | 18 | 37 | 35
NASH | 160 | 1539 | 1521
Table 19: Perceived welfares of certain project sets to be funded if Agent 3
misrepresents her preferences.
Since $\\{X,Z\\}$ can be funded by an implementable outcome, with Agents 1 and
2 paying for $Z$ and Agent 3 paying for $X$, $\\{X,Z\\}$ is the result of
UTIL, UTIL-MR, UTIL-IMP, NASH, NASH-MR and NASH-IMP. With this outcome, Agent 3
sees her utility rise to 19.
Then, by misrepresenting her preferences, Agent 3 can cause the choice of the
aforementioned rules to change from funding $\\{Y,Z\\}$ to funding
$\\{X,Z\\}$, hence increasing her own utility. Therefore, UTIL, UTIL-MR, UTIL-
IMP, NASH, NASH-MR and NASH-IMP are not strategyproof.
∎
###### Proposition 23.
EGAL, EGAL-MR and EGAL-IMP are not strategyproof.
###### Proof.
Consider the instance given in Table 20. Due to the total budget constraint,
at most two of the projects can be funded, so we check the egalitarian welfare
derived by funding any two projects in Table 21.
| | X (3) | Y (2) | Z (1)
---|---|---|---|---
| Budget | | |
Agent 1 | 1 | | + | +
Agent 2 | 1 | | + | +
Agent 3 | 3 | + | + |
Table 20: Example instance where EGAL rules are not strategyproof. | $\\{X,Y\\}$ | $\\{X,Z\\}$ | $\\{Y,Z\\}$
---|---|---|---
Egalitarian Welfare | (2, 2, 5) | (1, 1, 3) | (2, 3, 3)
Table 21: Egalitarian welfares of certain project sets to be funded.
Observe that there is an implementable outcome that funds $\\{Y,Z\\}$: for
instance, Agent 1 pays for $Z$, and Agents 2 and 3 split the cost of $Y$.
Hence, $\\{Y,Z\\}$ is funded by each of the above rules. Then, the utility for
Agent 3 is 2.
Now, suppose Agent 3 misrepresents her preferences to suppress the fact that
she approves of project $Y$. The new perceived instance is shown in Table 22
and again, we compute the egalitarian welfare produced by funding any two
projects in Table 23.
| | X (3) | Y (2) | Z (1)
---|---|---|---|---
| Budget | | |
Agent 1 | 1 | | + | +
Agent 2 | 1 | | + | +
Agent 3 | 3 | + | |
Table 22: Example instance where rules are not strategyproof. | $\\{X,Y\\}$ | $\\{X,Z\\}$ | $\\{Y,Z\\}$
---|---|---|---
Egalitarian Welfare | (2, 2, 3) | (1, 1, 3) | (0, 3, 3)
Table 23: Egalitarian welfares of certain project sets to be funded.
Note that there is an implementable outcome that funds $\\{X,Y\\}$, where
Agent 3 pays for $X$ and Agents 1 and 2 pay for $Y$. Hence, $\\{X,Y\\}$ is
funded by each of the rules. Since Agent 3 truly approves of both $X$ and $Y$,
her true utility rises to 5.
Thus, by misrepresenting her preferences, Agent 3 is able to increase the
utility she receives when egalitarian rules are used. Therefore, EGAL, EGAL-MR
and EGAL-IMP are not strategyproof.
∎
# SimBle: Generating privacy preserving real-world BLE traces with ground truth

Abhishek Kumar Mishra12, Aline Carneiro Viana2, Nadjib Achir23
1 Ecole Polytechnique, Palaiseau, France
2 Inria, Palaiseau, France
{abhishek.mishra, aline.viana<EMAIL_ADDRESS>
3 Université Sorbonne Paris Nord, Paris, France
<EMAIL_ADDRESS>

This work has been partially funded by the ANR MITIK project, French National Research Agency (ANR), PRC AAPG2019.
###### Abstract
Bluetooth has become critical as many IoT devices are arriving in the market.
Most of the current literature focusing on Bluetooth simulation concentrates
on the network protocols’ performances and completely neglects the privacy
protection recommendations introduced in the BLE standard. Indeed, privacy
protection is one of the main issues handled in the Bluetooth standard. For
instance, the current standard forces devices to change the identifier they
embed within the public and private packets, known as MAC address
randomization. Although randomizing MAC addresses is intended to preserve
device privacy, recent literature shows many challenges that are still
present. One of them is the correlation between the public packets and the
emitters. Unfortunately, existing evaluation tools such as NS-3 are not
designed to reproduce this Bluetooth standard’s essential functionality. This
makes it impossible to test solutions for different device-fingerprinting
strategies as there is a lack of ground truth for large-scale scenarios with
the majority of current BLE devices implementing MAC address randomization. In
this paper, we first introduce a solution of standard-compliant MAC address
randomization in the NS-3 framework, capable of emulating any real BLE device
in the simulation and generating real-world Bluetooth traces. In addition,
since the simulation run-time for trace-collection grows exponentially with
the number of devices, we introduce an optimization to linearize public-packet
sniffing. This made the large-scale trace-collection practically feasible.
Then, we use the generated traces and associated ground truth to do a case
study on the evaluation of a generic MAC address association available in the
literature [1]. Our case study reveals that close to $90\%$ of randomized
addresses could be correctly linked even in highly dense and mobile scenarios.
This prompts the BLE standard to be revisited on privacy-related provisions.
We provide privacy recommendations based on our case study. Finally, we
discuss the consequences that real randomized traces bring to different
scientific research domains and how our proposed solution helps in overcoming
new challenges.
###### Index Terms:
Bluetooth, IoT devices, BLE (Bluetooth Low Energy), Simulation, Privacy, MAC
address randomization, MAC address association, Data-sets
## I Introduction
The Internet of Things (IoT) is expected to connect billions of low-end
devices to the Internet. It thereby drastically increases communication
without a human source or destination. The total count of products and
businesses that use IoT technologies has increased to about 25 percent, and
the number of connected devices is projected to reach 43 billion by 2023[2].
Bluetooth has been a significant backbone for most of these connected devices
and applications [3]. Sniffing Bluetooth traffic has not been straightforward
because of the manufacturer-dependent adaptive channel-hopping behavior and
the shared 2.4 GHz spectrum of Bluetooth devices. Various approaches have
predicted hop changes, allowing the user to be traced [4]. Nevertheless,
these hopping challenges mostly concern the private data packets exchanged in
Bluetooth. Public packets such as beacons and keep-alive messages, which are
emitted on three fixed channels, are much easier to sniff accurately. These
beacons reveal the sender’s device identity in the form of a MAC address.
Devices that perform MAC randomization can hide the device’s identity to some
extent. Bluetooth Classic (BT) does not randomize addresses and has already
been shown to be de-anonymized [5]. Even MAC address randomization in BLE has
been claimed to be defeated, both specifically for Apple devices [6] and for
devices in general [1]. The authors of [1] claim to achieve 100% device
association for a small set of devices by sniffing public packets in a
controlled environment (inside a Faraday cage), as seen in Figure 1. The
addresses shown in Figure 1 are the LAP (Lower Address Part) of anonymized MAC
addresses seen by [1] in the trace. There is a need to evaluate the
performance of [1] for a large population of devices in real-world scenarios.
If the results of Figure 1 hold in realistic environments, immense threats to
user privacy are posed in BLE.
Figure 1: Perfect association of MAC addresses achieved by [1] on sniffing
public-packets in the controlled environment for BLE with MAC randomization.
Each color represents a device broadcasting with anonymized addresses
Amidst rising privacy-intrusion findings in Bluetooth, there has been an
absence of frameworks to test these suggestions in scalable real-world
conditions. Current BLE simulators mostly focus on throughput, latency, and
signal-to-noise ratio (SNR) rather than the security and privacy aspects of
the standard. They are also unable to incorporate real-world device parameters
into the simulation framework. Without these advancements, it is impossible to
generate a realistic BLE trace that accounts for integral factors like MAC
address randomization, because the implementation of address randomization
depends on the device manufacturer. The lack of controlled simulated traces
presently prevents the retrieval of ground truth in large-scale scenarios.
Ground truth here refers to the knowledge of the set of randomized MAC
addresses that were emitted by a particular device. It is needed to evaluate
device-fingerprinting solutions and propose adjustments to the standard that
guarantee the user’s privacy.
To the best of our knowledge, none of the currently available BLE simulators
support privacy aspects, specifically MAC address randomization. The current
state-of-the-art open-source tool for simulating wireless communications in
general, NS-3 (https://www.nsnam.org/), has very weak support for the BLE
standard compared to the much more advanced WiFi stack it possesses. In fact,
the official release of NS-3 still lacks BLE support. Open-source
implementations of the BLE stack, without MAC randomization, have been
released on top of the NS-3 framework [7, 8], and there is also a BLE
implementation in the Omnet++ framework
(http://cc.oulu.fi/~kmikhayl/BLE.html). We rigorously tested the candidates
and chose [8] as the base BLE stack (BLE 4.1) of our proposed simulator:
firstly, it is currently the most accurate, efficient, and well organized;
secondly, it is built on the NS-3 framework, which gives users the freedom to
run BLE experiments co-existing with the latest WiFi standards.
Most BLE trace collection targets public packets and is done passively
through sniffers. Private packets are mostly encrypted, and capturing them is
illegal in many countries. Expensive hardware like Ubertooth One [9] is
required to sniff on data channels, and, as stated earlier, channel hopping
on BLE data channels makes capture even harder. Unfortunately, current
simulation tools are not designed for generating sniffed public BLE traces:
simulation time explodes with a large number of devices because the number of
simulation events grows when handling inter-node public packets, while we are
interested in fully processing broadcast packets only at the sniffer. SimBle
addresses this issue and proposes optimized sniffing in Section III-A2, which
eliminates the exponential run-time while generating the exact same trace.
In this paper, we first study and present different privacy guidelines across
released Bluetooth standards. Then, we develop and introduce the simulator
SimBle, which incorporates standard-compliant MAC address randomization
capable of emulating any BLE device. This is made possible as SimBle
introduces the notion of device class, which differentiates various kinds of
devices like phones, smartwatches, and headsets based on the frequency of
transmitted beacons.
The four major contributions of this paper are:
1. A study of the privacy features present in the BLE standard that need to
be introduced in simulation.
2. The architecture and implementation of a new privacy-aware BLE simulation
stack, SimBle, in NS-3, which distinguishes the devices spawned in it.
3. A case study of the only generic MAC address association algorithm in the
literature, made possible at scale by generating ground truth with our
solution.
4. The release of an open-source simulator along with tools and methods to
generate realistic Bluetooth traces with associated ground truth.
The rest of this paper is organized as follows. Section II gives an overview
of the privacy measures recommended by the BLE standard. We present our BLE
simulation stack, SimBle, in Sections III and IV. Section V validates the
functionality of SimBle. In Section VI, we perform a case study of the
generic MAC address association strategy available in the literature using
simulated ground truth; we show the strategy's effectiveness and then discuss
possible amendments to the BLE standard that this case study forces us to
consider. Finally, Section VII discusses the impact of privacy-preserving BLE
provisions on other research domains and how realistic traces from SimBle
would address major challenges. We also present the conclusion of our work
along with a look into future directions.
## II Background
This section discusses how BLE handles MAC-level addressing. We look into the
different addressing modes supported by BLE, with a particular interest in
private addresses, as they are fundamental to preserving user privacy.
Afterward, we present a study of the privacy provisions currently proposed by
the standard. Finally, we identify the factors that must be taken into
account when designing a simulator that respects user privacy.
### II-A BLE MAC addressing
Bluetooth has been around for quite some time now, but it is the Bluetooth
Low Energy (BLE) variant [10] that is used by the majority of IoT devices.
When a BLE device communicates, it keeps sending advertising packets on the
three public channels specified by the standard. These packets include a
link-layer MAC address, which acts as an identifier for the device [[11], p.
69]. To avoid leaking this identifier to the world, recent BLE standards
require all devices to keep updating their publicly advertised MAC addresses.
Various addressing modes are specified in the standard [[12], p. 2988], which
we briefly describe next.
In BLE, a device is identified by a device address together with an address
type [[12], p. 2988]. This means that when comparing two device addresses,
identical 48-bit addresses do not guarantee the same device, because the two
addresses could have different types. The address type is either a public
device address or a random device address, both 48 bits long. A device is
free to use one or both types of device address.
Public device addresses are traditional MAC addresses created in accordance
with the Universal addresses section of the IEEE 802-2014 standard [13]. They
are more prevalent, but it is the random device address that is privacy-
preserving.
A random device address is either static or private. A static address is a
48-bit randomly generated address meeting specific requirements of the
standard. Private addresses, in turn, are either resolvable or non-resolvable
[[12], p. 2991]. These subtypes are identified by the two most significant
bits of the random device address, as shown in Table I.
Address [47:46] | Address Sub-Type
---|---
0b00 | Non-resolvable private address
0b01 | Resolvable private address
0b10 | Reserved for future use
0b11 | Static device address
TABLE I: Sub-types of random device addresses
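As a quick illustration of Table I, the subtype of a 48-bit random device
address can be read off its two most significant bits. The helper below is
our own sketch (not part of any BLE stack) of that mapping:

```python
def random_address_subtype(addr: int) -> str:
    """Classify a 48-bit random device address by bits [47:46] (Table I)."""
    top2 = (addr >> 46) & 0b11
    return {
        0b00: "non-resolvable private address",
        0b01: "resolvable private address",
        0b10: "reserved for future use",
        0b11: "static device address",
    }[top2]
```

For example, an address whose two top bits are 0b11 is a static device
address, while 0b01 marks a resolvable private address.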
A BLE device's Identity Address is either its public device address or its
random static device address. A device that uses resolvable private addresses
must also possess an Identity Address.
### II-B BLE privacy provisions
The key to the privacy provided by the BLE link layer is the use of private
addresses, described in the previous subsection [[12], p. 3201]. This again
reflects the importance of the MAC address randomization introduced by
SimBle. BLE recommends that devices generate a resolvable private address.
The link layer corresponding to the host sets a timer and regenerates a new
resolvable private address when the timer expires. Moreover, once the Link
Layer is reset, a new resolvable private address is generated and the timer
restarts with an arbitrary value in the allowed range. To maintain the
efficiency of connection establishment, the standard recommends setting the
timer to 15 minutes.
BLE [14][12] does not allow private devices to use their Identity Address in
any advertising packet. The Host can instruct the Controller to advertise,
scan, or initiate a connection using a resolvable private address after
enabling the resolving list.
The link-layer state machine of BLE consists of various states [[12], p.
2985], and a device can be found in any of them. The advertising, scanning,
and initiating states, for instance, come with different guidelines from the
standard. In the advertising state, the link layer may perform device
filtering based on the device address of the peer device, to minimize the
number of devices it responds to. This is done according to a local white
list, which contains a set of records comprising both the device address and
the device address type (public or random) [[12], p. 3202]. A device in the
scanning or initiating state is recommended to use private addresses; the
scanning device should use a resolvable or non-resolvable private address as
its device address. Whenever a scanning device receives an advertising packet
containing a resolvable private address as the advertiser's device address,
the scanner's filter policy decides, after address resolution, whether to
respond with a scan request.
Having reviewed the BLE standard's privacy-related recommendations,
especially those of the latest release, BLE 5.2, we proceed to incorporate
the key elements into the simulator. The simulator should not only include
resolvable private addresses, which are integral to BLE privacy, but also
bring together the other aspects related to MAC address randomization. The
proposed simulation stack, SimBle, is thus designed so that adding further
privacy-specific features in the future is relatively straightforward.
## III SimBle: Design & Architecture
This section provides our solution to the problem of emulating devices that
follow the network and device privacy provisions of BLE. This step is key to
generating realistic traces with associated ground truth: with a device-
specific, privacy-preserving simulation, we can easily produce traces that
resemble real scenarios. This has profound implications, as it enables the
practical evaluation of any MAC-address-based device-fingerprinting or
privacy-intrusion solution suggested in the literature.
In the following, we introduce our BLE simulation stack, which we call
SimBle. We first look at the different design aspects of SimBle and then
present its architecture.
### III-A Design considerations
The first aspect to take into consideration is device heterogeneity. BLE
gives vendors the flexibility to implement privacy features while respecting
specific guidelines released by the standard. Therefore, different mobile
phone manufacturers like Apple and Samsung can have different implementation
parameters related to randomization, and even a single vendor can have a
range of devices supporting various BLE releases. Hence, device distinction
is an essential feature for BLE simulation, which is currently absent from
available simulators.
The second aspect to consider is the privacy provisions. As we saw in the
previous section, the central component of BLE privacy provisioning is the
MAC address randomization procedure. If devices violate these
recommendations and, for example, advertise their identity address, then
device and, thus, network privacy is compromised, leading to traceability.
SimBle needs to introduce these provisions, specifically MAC address
randomization, into its framework.
Finally, the last aspect is the flexibility to generate realistic traces. One
of the significant demands of the research community is the availability of
BLE traces that replicate real-world conditions such as mobility, crowd
density, and the kinds of devices present in the zone where the trace was
collected. Trace collection for a large population is impractical using
active means like installing specific applications on user devices, and even
passive methods, like the use of sniffers, require massive deployment and
user consent. That is why SimBle also aims to provide a ready-to-use
framework for trace generation in various user-specified scenarios. We show a
case study of a MAC address association algorithm in Section VI using traces
and associated ground truth from this framework.
In the following subsections, we detail how these design choices are
implemented in SimBle.
#### III-A1 Device heterogeneity
As discussed in the previous section, vendors have freedom, within some
bounds, in implementing the BLE stack on a device. For example, Apple picks
from a range of values to decide how frequently a device changes its
randomized MAC address. We need to distinguish each device introduced in
SimBle so that the simulation can replicate its behavior in terms of privacy
features. In the following, we define a device's type through two properties:
the device's class and the supported standard version.
(a) Notion of Device Class: We identify a property that classifies devices
into groups with similar behavior irrespective of manufacturer: the frequency
of transmitting beacons, which is characteristic of a device with a maximum
variation of 10 ms [14, p. 2751]. The base value of the beacon transmission
period lies in [20 ms; 10.24 s]. Based on this property, we classify BLE
devices into the following device classes:
* Frequent Emitters: the inter-beacon interval is drawn from a normal
distribution with mean 50 ms and standard deviation 10 ms. This represents a
highly active device like earbuds; we expect such devices to also swap their
randomized MAC address quickly.
* Moderate Emitters: devices with a moderate advertisement frequency, drawn
from a normal distribution with mean 300 ms and standard deviation 25 ms. In
our experimentation, most smartphones, especially iPhones, fall into this
category.
* Semi-Moderate Emitters: devices that are still active in transmitting
regular beacons on the broadcast channels, following a normal distribution
with mean 500 ms and standard deviation 25 ms. This class again mainly
includes phones.
* Low Emitters: devices that are least active in sending out advertisements,
with inter-beacon intervals drawn from a normal distribution with mean 2 s
and standard deviation 500 ms. Smartwatches generally fall into this
category.
When instantiating a node in SimBle, a user can choose any of the stated
device classes. If the user enables beacons, the node automatically sets its
behavior to that of the specified class. However, we also give the user the
flexibility to specify a device's exact beacon frequency if it is known
beforehand through experimentation.
(b) BLE standards: The frequency of changing a randomized MAC address depends
on the standard. In BLE 4.0, currently the most prevalent release in terms of
number of devices, devices change their MAC address every 15 minutes [11].
Recent releases like BLE 5.2 allow devices to change their address sooner.
Therefore, it is crucial to specify the standard for a BLE node before using
its privacy features in simulation. SimBle lets the user state the standard
they want to run on top of the declared node, which controls the associated
privacy features.
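The device classes above can be pictured as a small sampling table. The
sketch below is our own illustration: the (mean, std-dev) pairs and the
clamping to the standard's base range come from the text, while the dict keys
and function name are ours:

```python
import random

# (mean, std-dev) of the inter-beacon interval in seconds, per device class.
DEVICE_CLASSES = {
    "frequent":      (0.050, 0.010),  # e.g. earbuds
    "moderate":      (0.300, 0.025),  # most smartphones, especially iPhones
    "semi_moderate": (0.500, 0.025),  # mainly phones
    "low":           (2.000, 0.500),  # smartwatches
}

def sample_beacon_interval(device_class: str) -> float:
    """Draw one inter-beacon interval, clamped to the base range [20 ms, 10.24 s]."""
    mean, std = DEVICE_CLASSES[device_class]
    return min(max(random.gauss(mean, std), 0.020), 10.24)
```

A node configured as a "low" emitter would thus space its advertisements
roughly two seconds apart, while a "frequent" emitter beacons about every
50 ms.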
#### III-A2 Realistic trace generation
One of the major motivations of this paper is to finally address the issue of
generating realistic Bluetooth traces. We identify the following components
that SimBle must take care of in order to emulate real-world trace
collection:
1. Privacy features: As stated earlier, SimBle not only introduces BLE
network and device privacy features like MAC address randomization but also
identifies the key parameters necessary to obtain realistic traces. These
factors, introduced in Section III, are swapDelay, randInterval, the device
class, and the BLE release version. As mentioned above, using the correct
device-specific parameters enables SimBle to emulate the privacy features of
any vendor's device.
2. Passive sniffing: Trace collection using active methods like user
participation is not practical for BLE, as it requires recruiting volunteers
and installing a specific application on user devices. Contact tracing and
trajectory reconstruction using BLE have grown rapidly in recent years, and
the research community requires more real-world traces collected through
passive sniffing.
The capture of BLE packets must fall under the principle of "legal capture"
in the relevant country; this mostly rules out private packets, which require
special authorization. Therefore, BLE passive sniffing generally refers to
listening on the public channels. SimBle provides a framework in which the
user can deploy an arbitrary number of sniffers and nodes placed in a
sniffing zone. On top of it, different mobility models can be installed on
BLE nodes of varying density, which allows recreating realistic environments.
Hence, we can emulate real-world BLE sniffing.
3. Ground truth: Introducing privacy into BLE simulation automatically solves
the search for ground truth in randomized-address traces. Ground truth here
refers to knowing the history of randomized MAC addresses emitted by each
device. We need it to evaluate MAC association algorithms, and device
fingerprinting methods in general, which are increasingly being proposed [1]
[6] [5]. SimBle generates the ground-truth trace by matching each device's
generated private addresses to its Node ID, which acts as a unique identifier
for the device during simulation.
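The ground-truth bookkeeping just described amounts to a map from the
simulator's unique Node ID to the private addresses that node has emitted. A
minimal sketch of that idea (class and method names are ours, not SimBle's
API):

```python
from collections import defaultdict

class GroundTruthLog:
    """Map each Node ID to the history of private addresses it emitted."""

    def __init__(self):
        self._history = defaultdict(list)

    def record(self, node_id: int, mac: str) -> None:
        # Called each time a node starts advertising with a new address.
        self._history[node_id].append(mac)

    def addresses_of(self, node_id: int) -> list:
        # The ground truth for one device: its full address history.
        return list(self._history[node_id])
```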
#### III-A3 Optimizing trace generation
As discussed earlier, passive sniffing is the most practical method for BLE
trace collection, but we identify a major issue in generating real-world
traces inside a simulation: as the number of nodes increases, the number of
simulation events due to processing inter-node packets grows quadratically.
This has a significant impact on the time and resources needed for
simulation, even though, for public packet capture, we are only interested in
the node-sniffer interaction.
SimBle addresses this problem by giving the user a simulation flag that
induces filtered, optimized handling of broadcast packets at the nodes. This
reduces the simulation duration significantly and thus makes trace collection
feasible. We discuss this further and quantify the performance gain in
Section V.
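The optimization can be summarized as a filter applied at packet-handling
time: with the flag set, ordinary nodes skip full processing of broadcasts
they overhear, and only sniffer nodes process them. A schematic version (our
own helper, not SimBle's actual code; nodes are modeled as plain dicts):

```python
def should_process_broadcast(node: dict, optimized_sniffing: bool) -> bool:
    """Decide whether a node fully processes an overheard broadcast packet.

    With optimized sniffing enabled, only sniffer nodes process broadcasts,
    so the number of simulation events grows with the number of sniffers
    instead of quadratically with the number of nodes.
    """
    if optimized_sniffing and not node.get("is_sniffer", False):
        return False
    return True
```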
### III-B Architecture
Having settled the design, we briefly look at the architecture of a BLE node
inside SimBle in Figure 2. As discussed in Section I, we use the base BLE
stack of [8]. All components of the NetDevice except the PrivacyManager were
defined in the base stack; the Application and Packet socket interface are
NS-3-wide entities not specific to BLE. We created the new component,
PrivacyManager, which takes care of all BLE privacy features. A node in
SimBle carries the same meaning as in NS-3: it is a physical entity with a
unique integer ID that contains NetDevices and Applications.
In this paper, a Node can be thought of as equivalent to a device/hardware in
the real world. Figure 2 shows a single instance of Application and NetDevice
for illustration, but there can be multiple in principle. A NetDevice is an
integral object of a node representing a physical interface on it; here, we
are interested in the Bluetooth interface. The NetDevice communicates with
the Application through interfaces: the Packet socket interface connects the
application interfaces to the NetDevice. An IPv4/IPv6 stack can also be
installed on the node in parallel. Let us briefly review the roles of the
other NetDevice components already present in the base BLE stack [8].
Figure 2: Architecture of a node in SimBle
BroadbandManager adds a link to the list of links that can be associated with
a NetDevice; a link here refers to a BLE association between two nodes. It
also checks whether there are new packets in the NetDevice queue and forwards
them to the right LinkManager's queue.
LinkManager is the entity associated with a particular BroadbandManager. It
sets up a link to a specific receiver with the role (Master/Slave) expected
at the end of the setup process. LinkManager also manages the TransmitWindow,
which is the next time the device can send a packet over the associated link.
LinkController is mainly responsible for monitoring and handling re-
transmissions and state changes on the link. It checks whether an ACK was
received for a sent packet and fires a list of callbacks to other NetDevice
objects if the link changes. Lastly, the PHY mainly handles link bandwidth,
bit-rates, transmission power, and bit-errors.
We introduce a new module in SimBle, the PrivacyManager, which takes care of
all privacy-related aspects of a device. In the next section, we discuss how
MAC address randomization is managed by the PrivacyManager.
## IV SimBle: Privacy provisions
Hereafter, we describe the implementation of the PrivacyManager, or, to be
specific, of the MAC address randomization of BLE. All the introduced
algorithms follow the BLE standard guidelines [12].
Figure 3: PrivacyManager in SimBle
An overview of the PrivacyManager is illustrated in Figure 3. Main in the
figure represents the base class of the PrivacyManager from which member
functions are called. Observe that the function UPDATE is called on device
startup. UPDATE generates a new resolvable private address for the calling
node using the function GENERATE, and recursively calls itself after the
expiration of the time associated with the current private address. On packet
reception, or when checking the existence of a link to a destination,
CHECKVALIDATION is called. On every call, it queries RESOLVE with a
particular private address; RESOLVE in turn returns the validity status and
the identity address of the device that generated the private address. In the
following, we describe the functions of the PrivacyManager in detail.
### IV-A KEY generation and distribution
The PrivacyManager focuses on supporting resolvable private addresses, the
center of all privacy provisions in the current BLE release [12] (cf. Section
II-B). For a node to generate a resolvable private address, it must have
either the Local Identity Resolving Key (IRK) or the Peer Identity Resolving
Key (IRK). This 128-bit key is a proof of possession of a particular private
address. In real devices, IRKs are exchanged through specific control
messages; in SimBle, we generate the IRK randomly at each Node when it is
initialized in the simulation, and the delay caused by the key exchange on
real hardware is emulated by swapDelay, which we describe in the next
section. At initialization, the Node also generates an Identity Address,
which is a unique identifier for the device.
In this paper, the Node and the NetDevice are essentially interchangeable in
terms of BLE-associated parameters, because the remaining modules inside the
node (i.e., the socket and application modules) do not depend on the BLE
standard itself.
Finally, before links are created in SimBle and an application is installed
on top of the declared nodes, each node updates a list in its respective
NetDevice. This list contains the (IRK : Identity Address) pair of each
fellow BLE node instantiated in the simulator.
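Under this scheme, node initialization and key distribution can be sketched
as follows. The function names are ours, and `secrets` merely stands in for
whatever randomness source the simulator uses:

```python
import secrets

def init_ble_node():
    """Draw a fresh 128-bit IRK and a 48-bit Identity Address at spawn time."""
    irk = secrets.token_bytes(16)          # Identity Resolving Key
    identity_address = secrets.token_bytes(6)  # unique device identifier
    return irk, identity_address

def build_irk_table(nodes):
    """Each NetDevice stores the (IRK : Identity Address) pair of every peer."""
    return {irk: identity for irk, identity in nodes}
```

In SimBle this table plays the role that pairing-time IRK exchange plays on
real hardware, with the exchange latency folded into swapDelay.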
### IV-B Generation of Randomized MAC
The format of a resolvable private address is shown in Figure 4. The
resolvable private address is generated from the IRK and a 24-bit number
known as prand, and can be divided into two blocks of 24 bits each. The first
block consists of the 24-bit hash produced in [Alg. 1 line 7]. SimBle
incorporates AES (Advanced Encryption Standard) support, as recommended by
the standard [12], to encrypt the plaintext data into a ciphered block [15]
[16] in the process of randomized MAC address generation.
Figure 4: Format of a Resolvable Private Address
The second block consists of prand. For a resolvable private address, the two
most significant bits of prand are 1 and 0, as shown in Figure 4, and the
random part of prand must contain at least one bit set to 0 and one bit set
to 1. We describe in detail the generation of the resolvable private address
by the PrivacyManager in [Alg. 1].
Algorithm 1 SimBle’s Resolvable Private Address generation
1:procedure Generate($IRK$) $\triangleright$ Input variable
$\triangleright$ Prepare encryption inputs
2: $prand\leftarrow genPrand()$
3: $padding\leftarrow genPaddingBits(104)$
4: $plaintext\leftarrow Concatenate(padding,prand)$
$\triangleright$ AES encryption
5: $aesobj\leftarrow AES(IRK)$
6: $ciphertext\leftarrow aesobj.getEncrypt(plaintext)$
$\triangleright$ Getting MAC address
7: $prunedcipher\leftarrow getLeastSigBits(ciphertext,24)$
8: $macstr\leftarrow Concatenate(prunedcipher,prand)$
9: $macaddr\leftarrow toMacHex(macstr)$
10: return $macaddr$ $\triangleright$ Returns a Resolvable Private Address
11:end procedure
12:procedure Update($randInterval,swapDelay,IRK$) $\triangleright$ Input variables
13: $roundIndex=getCurrentRoundIndex()$
14: $macDevice=\textsc{Generate}(IRK)$
$\triangleright$ Check if this call is just after device initialization
15: if $roundIndex==1$ then
$\triangleright$ Calculate time offset for recursive callback
16: $nextUpOffset\leftarrow getURV(0,randInterval)+swapDelay$
17: else
18: $nextUpOffset\leftarrow randInterval+swapDelay$
19: end if
$\triangleright$ Schedule a callback after offset expires
20: $incRoundIndex()$
21: Schedule(Update, nextUpOffset)
22:end procedure
Each node in SimBle has an instance of the PrivacyManager, as illustrated
earlier in Figure 3. [Alg. 1] performs two major functions. GENERATE [Alg. 1
line 1] takes the IRK as input and generates a resolvable private address for
that node, while UPDATE [Alg. 1 line 12] takes care of the calls necessary to
update a device's MAC address according to the user-specified BLE standard
and device class being emulated.
Whenever GENERATE is called, we generate a 24-bit value whose two most
significant bits are 10; the remaining bits are random. We use this value as
prand, the trailing half of a resolvable private address [Alg. 1 line 2]. The
generated prand is then padded with 104 null bits such that the most
significant byte of prand becomes the most significant byte of the padding
[Alg. 1 line 4]; we call this value plaintext, as it is the input for
encryption. We then instantiate the AES algorithm initialized with the IRK of
the current node [Alg. 1 line 5]. The AES instance encrypts the plaintext
into 128 bits of ciphertext [Alg. 1 line 6]. We take the 24 least significant
bits of the ciphertext [Alg. 1 line 7] and concatenate them with the earlier
generated prand to form a 48-bit string [Alg. 1 line 8]. The generated string
is finally formatted in IEEE 802.11 MAC address format to produce a
resolvable private address [Alg. 1 line 9].
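The generation and resolution procedures can be prototyped end to end. Since
AES is the primitive mandated by the standard but is absent from Python's
standard library, the sketch below substitutes a keyed SHA-256 hash as a
stand-in PRF purely for illustration; a faithful implementation must use
AES-128 as [Alg. 1] requires. Bit ordering follows the paper's Figure 4 (hash
block followed by prand block), and all function names are ours:

```python
import hashlib
import secrets
from typing import Optional

def _prf(irk: bytes, plaintext: bytes) -> bytes:
    # Stand-in for AES-128 encryption under key IRK (illustration only).
    return hashlib.sha256(irk + plaintext).digest()[:16]

def gen_prand() -> bytes:
    # 24-bit prand with its two most significant bits fixed to 1 and 0.
    val = secrets.randbits(22) | (0b10 << 22)
    return val.to_bytes(3, "big")

def generate_rpa(irk: bytes) -> bytes:
    # [Alg. 1]: pad prand with 104 null bits, encrypt, keep 24 bits of the
    # ciphertext as the hash, and concatenate hash || prand.
    prand = gen_prand()
    plaintext = bytes(13) + prand       # 104 padding bits + 24-bit prand
    hash24 = _prf(irk, plaintext)[-3:]  # least significant 24 bits
    return hash24 + prand               # 48-bit resolvable private address

def resolve_rpa(private_addr: bytes, irk_table: dict) -> Optional[bytes]:
    # [Alg. 2]: recompute the hash under every known IRK; a match reveals
    # the sender's Identity Address, otherwise resolution fails.
    sender_hash, sender_prand = private_addr[:3], private_addr[3:]
    plaintext = bytes(13) + sender_prand
    for irk, identity in irk_table.items():
        if _prf(irk, plaintext)[-3:] == sender_hash:
            return identity
    return None
```

A round trip then looks like this: a node computes `addr = generate_rpa(irk)`,
and any peer holding `{irk: identity}` in its pair list recovers `identity`
via `resolve_rpa(addr, table)`.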
Once the randomized MAC address is generated, the next step is to change this
address dynamically while respecting the standard. This is done by the UPDATE
function of the PrivacyManager, which takes three arguments. One of them is
the IRK, the identity resolving key of the node, which we have already
discussed. The other two arguments are device-dependent, with the user free
to allocate specific values. They are as follows:
* randInterval: the time after which a specific device generates a new
resolvable private address. In the BLE 4.1 standard [11], the most prevalent
Bluetooth standard on current mobile devices, this interval is fixed at 15
minutes. In the most recent release, BLE 5.2 [12], the vendor is free to
randomize the MAC address before the 15-minute mark, but the standard
recommends not updating the address too frequently: the increased number of
control messages exchanged after generating a new address can affect the
performance of paired devices. SimBle takes the BLE standard and device class
as input from the user at node initialization to compute the respective
randInterval value.
* swapDelay: introduced to emulate the behavior of devices in practice. We
observe in experiments that devices take some time before they generate a new
randomized address and start advertising it; this delay is caused by the
resources used in address generation and in updating the current MAC-level
state. swapDelay can be device-specific; we empirically choose a value of 10
times the beacon transmission interval, after measuring this delay in
experiments on a large set of BLE devices broadcasting beacons.
On receiving its input arguments, UPDATE first checks the iteration index of
the call and stores it as roundIndex [Alg. 1 line 13]; roundIndex is always
greater than or equal to 1. It distinguishes the two states in which a node
can generate a new address: the first state (roundIndex = 1) is when a node
obtains a new address just after spawning inside the simulation, while the
second state (roundIndex $>$ 1) is when the node requests an address after
the expiration of the old one. GENERATE is called from UPDATE to assign the
device a new resolvable private address [Alg. 1 line 14].
After assigning the randomized address, UPDATE calculates the duration for
which this address is valid. If the device calls UPDATE for the first round,
we draw a random value from a uniform distribution over [0, randInterval] and
add swapDelay to it [Alg. 1 line 16]; this respects the standard's guidelines
for setting the address expiration timer, as discussed in Section II-B. If
the device has already changed its MAC address since spawning, the offset is
the sum of randInterval and swapDelay [Alg. 1 line 18]. Finally, we increase
roundIndex and schedule a recursive callback to UPDATE after the calculated
offset expires [Alg. 1 line 21], so that resolvable private addresses keep
being generated throughout the simulation.
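The timer logic of UPDATE reduces to a small offset computation. The helper
below sketches only [Alg. 1 lines 15-18], leaving the event scheduling to the
simulator; the function name is ours:

```python
import random

def next_update_offset(round_index: int, rand_interval: float,
                       swap_delay: float) -> float:
    """Seconds until the next address rotation ([Alg. 1 lines 15-18]).

    The first rotation after spawning draws from U[0, randInterval], so
    devices that start together do not rotate in lock-step; every later
    rotation waits the full randInterval. swapDelay emulates the hardware
    lag before the new address is advertised.
    """
    if round_index == 1:
        return random.uniform(0, rand_interval) + swap_delay
    return rand_interval + swap_delay
```

With the BLE 4.1 default of a 15-minute randInterval, a device's second and
later rotations are spaced 900 seconds plus its swapDelay apart.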
### IV-C Resolution of Randomized MAC
Generating a MAC address is not sufficient for a BLE device: the receiving
node must be able to "resolve", i.e., associate, the private address with the
sending device's identity. A resolvable private address can be resolved if
the sending device's IRK is available to the receiver; if the address is
resolved, the receiving device can associate it with the peer device.
To support this privacy-preserving feature, we must answer two major
questions inside a device: how do we resolve a device's private address, and
where inside SimBle do we need to check the validity of the private address
in the packet being handled? The first question is answered by RESOLVE [Alg.
2 line 1], and the second by CHECKVALIDATION [Alg. 2 line 20].
As briefly stated earlier, RESOLVE returns a tuple (resolved, resIDAdd).
Here, resolved states whether the resolution attempt on privateAddress
succeeded. If the private address is resolved, resIDAdd contains the Identity
Address of the node that created the private address; otherwise it is an
empty string. Whenever a node receives a resolvable private address, the
corresponding PrivacyManager calls RESOLVE with privateAddress and
irkIAddPairList as input. While privateAddress is the sending device's
randomized MAC address, irkIAddPairList is the locally maintained list of
(IRK, Identity Address) pairs at the resolving node, as described in Section
IV-A.
RESOLVE first extracts the hash and prand parts of the private address [Alg.
2 lines 2-3], as described in Figure 4. We pad the extracted senderPrand with
104 null bits such that its most significant byte becomes the most
significant byte of plaintext, the byte array resulting from the padding.
Algorithm 2 SimBle’s Resolvable Private Address resolution
1:procedure Resolve($privateAddress,irkIAddPairList$) $\triangleright$ Input variables
$\triangleright$ Extract hash and random part of privateAddress
2: $senderHash\leftarrow extractHash(privateAddress)$
3: $senderPrand\leftarrow extractPrand(privateAddress)$
4: $padding\leftarrow genPaddingBits(104)$
5: $plaintext\leftarrow Concatenate(padding,senderPrand)$
6: $resolved\leftarrow FALSE$
7: $resIDAdd\leftarrow NULLSTR$
$\triangleright$ Check if Sender hash is valid
8: for $IRK,IDAdd\quad in\quad irkIAddPairList$ do
9: $aesobj\leftarrow AES(IRK)$
10: $ciphertext\leftarrow aesobj.getEncrypt(plaintext)$
11: $localHash\leftarrow getLeastSigBits(ciphertext,24)$
12: $resolved\leftarrow isEqual(localHash,senderHash)$
13:
14: if $resolved==TRUE$ then
15: $resIDAdd\leftarrow IDAdd$
16: end if
17: end for
$\triangleright$ Return resolved status & Identity Address
18: return ($PAIR(resolved,resIDAdd)$)
19:end procedure
20:procedure CheckValidation
$\triangleright$ Call RESOLVE to validate private address if any of the
function calls below is triggered in SimBle
21: if
22: $\textbf{BroadbandManager:}LinkExists(),\newline
GetLinkManager(),GetLink()$
23: $\textbf{LinkController:}CheckReceivedAckPacket()$
then
24: $\textsc{Resolve}(privateAddress,irkIAddPairList)$
25: end if
26:end procedure
Before considering a privateAddress resolved, the handling node checks the
validity of the address. A valid private address is one that can be resolved
using one of the IRKs in the list available at the resolving node. To perform
this verification, we first take an (IRK, Identity Address) pair from
irkIAddPairList. We generate an instance of the AES algorithm initialized with
the IRK from the current pair [Alg. 2 line 9]. The AES instance then encrypts
the plaintext to generate 128 bits of ciphertext [Alg. 2 line 10]. We take the
24 least significant bits of the ciphertext to generate localHash [Alg. 2 line
11]. If the value of localHash matches the earlier extracted senderHash
[Alg. 2 line 2] in any iteration, RESOLVE successfully returns the (TRUE,
Identity Address) pair. Otherwise, resolution is considered a failure and
RESOLVE returns the (FALSE, "") pair.
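The resolution loop can be sketched compactly in Python. The sketch below mirrors the structure of RESOLVE under stated assumptions: HMAC-SHA256 stands in for the AES-128 cipher (so the example runs without a crypto dependency, unlike SimBle, which uses AES), and the byte layout is simplified, placing the 24-bit prand before the 24-bit hash.

```python
import hmac
import hashlib

def toy_e(irk: bytes, plaintext: bytes) -> bytes:
    # Stand-in for the AES-128 encryption e(IRK, plaintext). HMAC-SHA256 is
    # used only so this sketch runs without a crypto dependency; it is NOT
    # the cipher SimBle uses.
    return hmac.new(irk, plaintext, hashlib.sha256).digest()[:16]

def ah(irk: bytes, prand: bytes) -> bytes:
    # Pad 104 zero bits (13 bytes) before the 24-bit prand -> 128-bit
    # plaintext, then keep the 24 least significant bits of the ciphertext.
    plaintext = bytes(13) + prand
    return toy_e(irk, plaintext)[-3:]

def make_rpa(irk: bytes, prand: bytes) -> bytes:
    # Resolvable private address: prand (24 bits) followed by hash (24 bits).
    return prand + ah(irk, prand)

def resolve(private_address: bytes, irk_id_pairs):
    # Mirror of RESOLVE [Alg. 2]: try each locally stored (IRK, Identity
    # Address) pair until the recomputed hash matches the sender's hash.
    prand, sender_hash = private_address[:3], private_address[3:]
    for irk, identity_address in irk_id_pairs:
        if ah(irk, prand) == sender_hash:
            return True, identity_address
    return False, ""

# Example: a peer with a known IRK generates an RPA; the receiver resolves it.
peer_irk = bytes(range(16))
rpa = make_rpa(peer_irk, b"\x42\xa1\x07")
pairs = [(bytes(16), "11:22:33:44:55:66"), (peer_irk, "AA:BB:CC:DD:EE:FF")]
print(resolve(rpa, pairs))  # -> (True, 'AA:BB:CC:DD:EE:FF')
```

As in the algorithm, a non-matching IRK simply fails the hash comparison, so resolution succeeds only for peers whose IRK the receiver holds.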
Having settled how a private address is resolved, we examine the SimBle
framework to identify the modules that need address resolution. Two modules
must call the PrivacyManager's RESOLVE procedure through CHECKVALIDATION:
BroadbandManager and LinkController [Alg. 2 line 22]. Whenever
BroadbandManager receives a packet from the NetDevice, RESOLVE is called in
two cases: first, when it checks for or tries to fetch a link; second, when it
requests the LinkManager for the destination node. We do this to ensure that
the identity address resolved from the destination address matches the
identity address of the existing link. Finally, CHECKVALIDATION also checks
whether the sender address of a correctly received packet at the
LinkController can be resolved using one of the IRKs stored at the receiver
[Alg. 2 line 23].
## V Validation
To validate SimBle, it is fundamental to evaluate the functionalities of the
introduced PrivacyManager; therefore, resolvable private address generation
and resolution must be validated. Specifically, we must show that the
generated randomized addresses are very close to what real-world devices
advertise. We must also show that BLE data communication continues seamlessly
between paired devices even when they change their advertised MAC addresses.
In this case, we assume that the devices have exchanged each other's IRKs
during initialization. All the MAC addresses shown in the paper are hashed
using SHA-256 and truncated to the first 8 bytes for illustration purposes.
### V-A Validating private address generation
To determine whether SimBle can emulate a real-world trace, we first collect
traces from real experimentation. Then, we compare real traces obtained by
capturing public packets from actual devices with traces generated by
initializing devices of similar behavior inside the simulator. This comparison
aims to show that SimBle can emulate the same behavior in terms of randomized
MAC advertisements and the transmission of public packets.
#### V-A1 Experimental setup
As a sniffer, we use the Bluetooth chipset of a Raspberry Pi 4B to capture
Bluetooth public packets. Capture is done in a controlled environment inside a
Faraday cage. We place two devices, an Apple iPad Pro 3 and an iPad Mini 2, in
the cage, emitting public packets for 40 minutes using BLE 4.1; these are
captured by the Raspberry Pi. We are mainly interested in the captured
timestamps and the LAP (lower address part) of the advertised beacons in the
collected traces. The LAP refers to the least significant 24 bits of a BLE MAC
address. Even though we do trace collection in non-public environments, we
still present hashed values to protect the devices' privacy.
For the devices inside the simulator, we set the BLE standard at
initialization to release 4.1, which fixes the MAC address regeneration
interval to 15 minutes. Afterward, we install a broadcast application on top
of the spawned nodes. We set the frequency of beacon transmissions in the
application to the mean device broadcast interval observed in the real-world
sniffer capture; we found this value to be 2 seconds. Moreover, we place a
sniffer at the center of a square area of 10 meters in which the initialized
emitting devices are statically present. The sniffer captures on the three
public BLE channels. The chosen area is kept small to avoid transmission
errors caused by the distance between the devices and the sniffer, since such
errors are not present in the Faraday-cage real-world experiment described
earlier. The simulation parameters are listed in Table II.
Parameter | Value
---|---
Simulation area | 10 m × 10 m
Packet size | 20 bytes
Simulation duration | 2410 seconds
Packet sending duration | 2400 seconds
Path loss model | Nakagami
Number of nodes | N
Mobility model (nodes) | Static
Number of sniffers | M
Mobility model (sniffer) | Static
Beacon interval | 2 seconds
Connection interval | 6.25 ms
Swap delay | 10 × beacon interval
BLE standard | BLE 4.1
TABLE II: Simulation parameters for SimBle validation
Figure 5: Observed public packet addresses in (a) real-world vs (b) SimBle by
two devices. Each color represents a device broadcasting anonymized addresses.
#### V-A2 Observations
The first observation concerns the changing of MAC addresses. For the real
experiments, we turn on the Bluetooth of the two iPad devices at the start of
sniffing, since otherwise the first MAC address change would occur at a random
time, making the trace hard to use for validation. As we can see in Figure
5(a), the randomized MAC addresses change every 15 minutes throughout the
capture duration. Like the real iPad devices, the iPads emulated inside the
simulation change their MAC addresses after 15 minutes, as shown in Figure
5(b).
Figure 6: Real-world vs SimBle in inter public packet times
After validating the role of the PrivacyManager in private address generation,
we validate whether the rest of the BLE stack can emulate the chosen real
device. We do this by looking at the inter-packet times for public packets
observed at the sniffer inside SimBle and in the real world, maintaining the
same experimental setup and generated traces. We observe in Figure 6 that for
both devices, the real-world and SimBle inter-packet intervals at the sniffer
have a mean value of 2 seconds. A deviation of 20 milliseconds is expected, as
the sniffers capture on one of the three public BLE channels at random and may
miss some public packets on one of the three channels. A public packet on
Bluetooth is broadcast on all three public channels within a time frame of 20
milliseconds. This validates the overall handling of public packets in SimBle.
Figure 7: Sent and received data packets by two paired BLE devices inside
SimBle
### V-B Validating private address resolution
To validate the resolution of private addresses in SimBle, we consider a
simple scenario in which a transmitter node and a receiver node are paired
inside the simulator. This allows us to look into the global trace obtained
from send and receive logs and deduce whether data communication was
continuous in spite of the sender and receiver changing their MAC addresses.
As we can see in Figure 7, the sender changes its private address at around 13
minutes. However, the receiver's BLE application continues to process and
receive packets, as it can resolve the new private address to the sender's
Identity Address using the sender's IRK, which it possesses. Similarly, at
around 32 minutes, we observe that the receiver changes its private address.
The change is communicated to the sender through beacons, and hence the sender
in turn resolves and verifies the receiver's private address. The sender can
therefore be seen sending its data to the receiver seamlessly. This experiment
thus confirms that SimBle's resolution procedure [Alg. 2] correctly handles
BLE MAC randomization.
### V-C Validating optimized trace-collection
In Section III-A3 we discussed the need to optimize the trace-collection
procedure to obtain traces in a reasonable time. We validate the improvement
brought by SimBle in run-time by increasing the density of devices up to 1
device per square meter around a sniffer for a simulation duration of 30
seconds. The density is varied by increasing the number of devices up to 100
in 100 square meters around the sniffer. As we can observe in Figure 8,
optimized sniffing yields a performance gain in simulation run-time of up to a
factor of 100. In conclusion, since we generally have to simulate a
considerably longer duration to test BLE privacy provisions, as most MAC
addresses change around every 15 minutes, SimBle can optimize the sniffing to
generate traces in a reasonable amount of time.
Figure 8: Performance gain in run-time with optimized sniffing inside
simulation
## VI Case Study
MAC address association refers to defeating the anonymization techniques used
by devices and being able to track a particular device. Recently, many
strategies have been suggested to associate the different private addresses
advertised publicly by the same device [1][17][18][6]. For instance, [17][18]
show that manufacturers like Apple and Microsoft leak partial identifiers in
the data field of public packets, which can easily be exploited. In [6], the
authors reverse-engineer the continuity protocol messages of Apple devices.
They show that fingerprinting the device, as well as behaviorally profiling
users, is possible using the contents of public BLE messages. They also
demonstrate that predictable frame sequence numbers leave open the possibility
of tracking Apple devices across space and time.
As we mention in Section I, [5] also discuss a de-anonymization strategy. The
authors of [5] note that the focus of their solution is Bluetooth Classic
(BT), not BLE, because BT lacks MAC address randomization. Besides, the
proposed strategy requires specific sniffing devices and targets only private
packets. We believe this approach cannot be considered fully generic and
scalable.
Contrary to the above BLE strategies [17][6][18], which target specific
devices like Apple's, [1] propose a method that associates MAC addresses from
a device based on its emitted public packets. This makes [1] independent of
the device vendor and generic for any BLE device, as it relies only on
beacons, whatever the application used. They identify devices across time
using an identifier that discriminates a subset of devices at a given time,
that is, a weak identifier, and achieve close to $100\%$ accuracy in
controlled environments, as shown in Figure 1. Therefore, we decided to
implement and study the performance of [1] when using SimBle, since to the
best of our knowledge it is the only generic BLE MAC address association
strategy currently available in the literature. We evaluate it using the
traces and the ground truth generated by SimBle.
### VI-A Algorithm Overview
The association strategy proposed in [1] can be summarized in the following
three steps:
1.
Identifying MAC conflicts across time: Looking at passively sniffed traces of
public BLE packets across time, it is very probable that two or more devices
change their randomized MAC addresses around the same time. These are
identified as conflicts by [1] and are seen over the entire sniffing duration
as conflict clusters. The authors also define dswap as the time that separates
consecutive, distinct private addresses from a particular device. For each
address change seen in the trace, there is a set of appearing and disappearing
MAC addresses in the interval dswap. They are associated using Linear
Assignment [19], where the weights of possible associations are the distances
between weak identifiers, described next.
2.
Finding a weak identifier: A device constant can serve as a weak identifier if
it is accessible to the sniffer and splits the device population into a few
groups that are distributed as uniformly as possible. [1] choose the fixed
part of the time between advertising packets in BLE as the weak identifier and
call it the characteristic time.
3.
Resolving MAC conflicts: Union-Find [20] is used to break the conflict
clusters into groups of appearing and disappearing MACs. Finally, all
conflicts seen in the observed trace are resolved using the absolute
difference between characteristic times as association weights for the Linear
Assignment.
### VI-B Study of the association strategy
We identify three aspects to which the association strategy [1] is most
sensitive in terms of effectiveness:
1.
Conflict size and the chosen dswap: As the number of devices in the sniffing
zone increases, the number of devices that change their private addresses
around the same time also increases. As described in Section VI-A, the weak
identifier is used to resolve conflicts. We define the number of devices in a
single conflict as the conflict size. Growing conflict sizes in the conflict
cluster have two major consequences for [1]. First, weak identifiers become
less effective at resolving conflicts during Linear Assignment, because a
large number of devices causes more possible associations to have similar
weights. Second, we find the strategy [1] to be quadratic in run-time; using
Linear Assignment to resolve a huge set of conflicting MAC addresses is thus
practically infeasible for device-tracking purposes. We also see dswap as a
critical parameter in [1]. It cannot be chosen arbitrarily large, as this
results in very large conflict clusters containing MAC addresses that do not
belong to a single conflict. On the contrary, a relatively small value leads
to the exclusion of actual conflicts. For the evaluation of the association
strategy, we set dswap to 10 times the characteristic time, as recommended as
optimal by [1].
2.
Device diversity in the population: The effectiveness of association also
depends on the diversity of devices in the sniffed trace, because
characteristic times vary more across a diverse population. This makes it
easier for the Linear Assignment to separate possible associations whose
weights would otherwise be similar. [1] also use the vendor information in
public packets as an identifier while resolving conflicts: filtering out
possible associations whose advertised packets carry different vendors
increases the chance of correct MAC address association.
3.
Mobility observed in the trace: The characteristic time used as a weak
identifier is calculated from the sequence of packet timestamps observed in
the trace. If there is a high degree of mobility around the sniffer, devices
keep entering and leaving the sniffing zone. This introduces error into the
weights chosen by [1] for possible association pairs during conflict
resolution, so the accuracy of MAC address association naturally decreases.
### VI-C Evaluation
In the following, we evaluate the accuracy of MAC address association and the
growth of conflict cluster size for various realistic scenarios. In scenario
1, we choose BLE 4.1, since it is the most prevalent BLE release in devices
today, and a single device class, smartphones. Smartphones largely fall into
the device class of moderate emitters, as stated earlier in Section III-A1.
The randomization interval in BLE 4.1 is set to 15 minutes. For scenario 2, we
choose BLE 4.1 and multiple device classes: we emulate an environment with
co-existing smartphones, smartwatches, earbuds, etc. Finally, in scenario 3,
we consider BLE 5.2 and multiple device classes. Here we emulate a diverse
range of devices supporting the latest release, BLE 5.2. We choose this BLE
standard because, unlike BLE 4.1, vendors can keep the private address
generation interval below 15 minutes, although the standard advises against
randomization intervals smaller than 15 minutes as they could degrade
performance through longer connection times. We deliberately draw the
randomization interval from a uniform distribution over the range (3, 15)
minutes to observe how [1] performs as more and more vendors quicken private
address generation. We evaluate all the scenarios for the following
mobility-profiles:
1.
Static-Confined: The devices are static and always present in the sniffing
zone.
2.
Mobile-Free: The devices are mobile and free to leave and enter the sniffing
zone. We mimic human mobility by using a random-walk mobility model with a
speed of 1.5 $m/s$ and a direction change after every 2 $s$.
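The Mobile-Free profile can be sketched as a discrete-time random walk with the parameters above (1.5 m/s, new heading every 2 s); the time step and random seed below are illustrative assumptions, not values from the paper.

```python
import math
import random

def random_walk(duration_s, speed=1.5, turn_every=2.0, dt=0.1, seed=7):
    # Random-walk mobility sketch: constant speed, uniformly random heading
    # redrawn every `turn_every` seconds. `dt` and `seed` are illustrative.
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(0.0, 2.0 * math.pi)
    path = [(0.0, x, y)]
    steps = round(duration_s / dt)
    steps_per_turn = round(turn_every / dt)
    for i in range(1, steps + 1):
        if i % steps_per_turn == 0:           # direction change every 2 s
            heading = rng.uniform(0.0, 2.0 * math.pi)
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
        path.append((i * dt, x, y))
    return path

path = random_walk(40 * 60)  # one 40-minute trace, as in Section VI-C
# The node never ends up farther from its start than speed * elapsed time,
# so a 2500 m^2 area lets it drift in and out of a ~20 m sniffing range.
print(len(path), math.hypot(path[-1][1], path[-1][2]))
```

With such a walk, nodes repeatedly cross the sniffer's range boundary, which is what degrades the characteristic-time estimate in the Mobile-Free results.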
We generate all the traces and the associated ground truth by simulating
several BLE devices and a sniffer for 40 minutes using SimBle. We prefer a
single long run over multiple short runs, as it gives detailed insight into
how conflicts evolve with time. It is essential to note how accurately the
strategy of Section VI-A resolves the MAC addresses from a single device over
the capture duration. For the Static-Confined mobility-profile, we place a
sniffer at the center of a square of 100 square meters and vary the number of
BLE devices/nodes up to 100. We choose this area to make sure that the nodes
are always within the sniffer's range. As shown in Table II, we use the
Nakagami path loss model and consider the successful BLE transmission range to
be around 20 meters. For the Mobile-Free mobility-profile, we deliberately
take a square of 2500 square meters and place the sniffer in the middle of it;
the BLE nodes perform a random walk in that area and thus move in and out of
the sniffing range.
Figure 9: Accuracy of MAC address associations and average conflict size
observed by the MAC association strategy [1] on SimBle-generated traces for
the Static-Confined and Mobile-Free mobility-profiles described in Section
VI-C. Panels: (a, b) Scenario 1; (c, d) Scenario 2; (e, f) Scenario 3.
### VI-D Results and Analysis
1.
Scenario 1: First, we observe how well the algorithm [1] can defeat MAC
randomization and correctly associate private addresses for BLE 4.1 with
moderate emitters. MAC addresses change every 15 minutes in BLE 4.1. For
average conflict sizes below 10, we expect the algorithm of Section VI-A to
perform well in both run-time and accuracy. We observe in Figure 9(a) that the
accuracy of association is above $98\%$ for the Static-Confined
mobility-profile. Even in the case of Mobile-Free nodes, a minimum accuracy of
around $91\%$ is seen for 100 devices. Average conflict sizes increase with
the number of devices, as expected in Figure 9(b), but they remain well below
the bound of 10. Hence, the accuracy of MAC address association is very high
for both mobility-profiles.
2.
Scenario 2: We just saw how accurately MAC addresses from moderate emitters,
which are generally mobile phones, are associated. We now present a more
realistic scenario in which we allow all device classes (Section III-A1). This
favors MAC association, as described in Section VI-B. We again stick to the
privacy behavior of BLE 4.1, as it is the most prevalent standard in current
devices. As expected, we observe an increase in accuracy for both
mobility-profiles in Figure 9(c). While the MAC addresses of Static-Confined
nodes are associated with accuracy close to $100\%$, the minimum accuracy of
association for Mobile-Free devices also increases to $93\%$. Conflict sizes
remain small for up to 100 devices, as seen in Figure 9(d).
3.
Scenario 3: Finally, we consider multiple device classes with the privacy
behavior of BLE 5.2, which allows vendors to change the device's private
address before the 15-minute interval elapses (Section VI-C). We expect
conflict sizes to rise and hence the accuracy to decrease for a large number
of devices. Indeed, we see a relative decrease in accuracy in Figure 9(e)
compared to Figure 9(c). For 100 devices, the accuracy of MAC address
association decreases to around $89\%$ for both mobility-profiles. Conflict
sizes increase to a maximum value of 13, as seen in Figure 9(f), but this is
still not large enough to degrade the effectiveness of the association
strategy [1].
The results of the case study show that the current MAC address randomization
proposed by the BLE standard is not enough to safeguard user privacy. The
association strategy [1] can successfully defeat the randomization procedure
and correctly fingerprint close to $90\%$ of the devices even in highly dense
and mobile scenarios. An adversary could set up multiple sniffers
strategically and easily track a particular user device.
The high accuracy of MAC address association in the initial case study led us
to look into methods to avoid device traceability. We reduced the
randomization interval of the device population to 3 minutes: devices changing
their private addresses quickly should lead to larger conflict sizes and hence
lower association accuracy for [1]. Using the Mobile-Free mobility-profile, we
varied the number of devices inside SimBle up to 100 for this smaller
randomization interval, with devices belonging to multiple device classes. We
observe in Figure 10 that accuracy indeed decreases to a minimum of around
$78\%$, with conflict sizes growing to 97.
Figure 10: Accuracy of MAC address associations and average conflict size
observed by the MAC association strategy [1] on SimBle-generated traces for
the Mobile-Free mobility-profile with a randomization interval of 3 minutes
With a single device class, [1] might achieve lower accuracy, but $78\%$
accurate associations are still a threat to user privacy. Hence, lowering the
randomization interval is not the only measure the BLE standard should
consider.
Based on the case study, we make the following recommendations to lower the
likelihood of successful MAC address association:
1.
The recommended randomization interval should be lowered. This might lead to
increased connection times; optimizing the IRK exchange and the handling of
the resolving list at the receiver could allow BLE devices to change addresses
frequently without compromising performance.
2.
The parameter exploited by [1] in Section VI-A is the characteristic time,
which acts as a weak identifier. This parameter is unique to a device and
varies across the device population, which makes device identification
easier. We suggest that the standard recommend vendors adopt similar
characteristic times.
## VII Final remarks and future steps
MAC address randomization is indispensable for protecting user privacy in BLE,
as we saw in Section II. If devices keep advertising their true MAC address,
i.e., their Identity Address, they can easily be tracked by coordinated
passive sniffing. Widespread usage of resolvable private addresses could
protect the privacy of users to some extent.
On the other hand, vendor-dependent MAC address randomization has made the
retrieval of realistic BLE traces more and more challenging. The lack of
ground truth in randomized traces and the impracticality of large-scale
passive trace collection are making the testing of solutions based on
trajectory reconstruction or user identification [21][22][23][24][25][26][27]
almost impossible.
All existing and future works based on device identification using MAC
addresses in BLE must be revisited in light of BLE privacy provisions like
private addresses. SimBle answers this issue: researchers can now generate
large-scale traces with devices of their interest and use them to validate
their work. Sniffers can be deployed accordingly to emulate real-world passive
trace collection for BLE.
Works that perform BLE MAC address association or device fingerprinting
[1][17][18][6] are threats to the privacy provisions of BLE, as these
strategies lead to the tracking of users. SimBle allows the community to
compare the effectiveness of any two of these solutions, because such a
comparison requires identical conditions. It is not only hard for
experiments/test-beds to emulate identical conditions, but they are also not
scalable. Moreover, as discussed earlier, obtaining ground truth for
experimentally collected traces is practically impossible at large scale.
SimBle is the first BLE simulation stack capable of generating traces that
preserve privacy. It introduces resolvable private addresses, which are the
core of BLE device and network privacy provisions. We showed that it is
capable of emulating the behavior of any real BLE device/hardware; users only
have to choose the appropriate device class based on the targeted device.
SimBle resolves the lack of ground truth for scalable scenarios introduced by
MAC address randomization, providing the associated ground truth with every
trace it generates.
We presented a case study of the only generic MAC address association strategy
for BLE available in the literature, using SimBle with realistic device and
mobility scenarios. The case study revealed that user privacy remains at risk
even with MAC address randomization, as close to $90\%$ of private addresses
could be associated correctly in the worst case. This reinforces the need to
revise the recommendations currently proposed in the standard.
Regarding future work, key distribution could be done using control messages
rather than pre-installation at the node, and the BLE stack could be enriched
with different device-pairing modes. Also, since one of the aims of SimBle is
to emulate any real device, more vendor-specific information could be added to
improve usability. Finally, we aim to evaluate and compare more BLE
privacy-related works using SimBle.
## References
* [1] L. Jouans, A. C. Viana, N. Achir, and A. Fladenmuller, “Associating the randomized bluetooth mac addresses of a device,” in IEEE Annual Consumer Communications & Networking Conference, CCNC 2021, Las Vegas, NV, USA, January 9-12, 2021, pp. 1–6, 2021.
* [2] Fredrik Dahlqvist, Mark Patel, Alexander Rajko, and Jonathan Shulman, Growing opportunities in the Internet of Things.
* [3] K. Chang, “Bluetooth: a viable solution for iot? [industry perspectives],” IEEE Wireless Communications, vol. 21, no. 6, pp. 6–7, 2014.
* [4] W. Albazrqaoe, J. Huang, and G. Xing, “Practical bluetooth traffic sniffing: Systems and privacy implications,” in Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys ’16, (New York, NY, USA), p. 333–345, Association for Computing Machinery, 2016.
* [5] M. Cominelli, F. Gringoli, P. Patras, M. Lind, and G. Noubir, “Even black cats cannot stay hidden in the dark: Full-band de-anonymization of bluetooth classic devices,” in 2020 IEEE Symposium on Security and Privacy (SP), pp. 534–548, 2020.
* [6] J. Martin, D. Alpuche, K. Bodeman, L. Brown, E. Fenske, L. Foppe, T. Mayberry, E. Rye, B. Sipes, and S. Teplov, “Handoff all your privacy–a review of apple’s bluetooth low energy continuity protocol,” PoPETs, vol. 2019, no. 4, pp. 34–53, 2019.
* [7] Kartik Patel, http://kartikpatel.in/ns-3-dev-git/.
* [8] Stijn Geysen, https://gitlab.com/Stijng/ns3-ble-module/-/tree/master/ble.
* [9] Ubertooth One, https://greatscottgadgets.com/ubertoothone/.
* [10] B. SIG, Bluetooth SIG. 2010. Bluetooth Core Specification v4.0.
* [11] B. SIG, Specification of the Bluetooth System, Core v4.1. 2013-03-12.
* [12] B. SIG, Specification of the Bluetooth System, Core v5.2. 2019-12-31.
* [13] 802-2014 Standard.
* [14] B. SIG, Specification of the Bluetooth System, Core v5.1. 2019-01-21.
* [15] F. I. P. S. P. 197, “Announcing the advanced encryption standard (aes),” vol. 21, p. 51, 2001.
* [16] Jason Lee, Encryptions.
* [17] J. K. Becker, D. Li, and D. Starobinski, “Tracking anonymized bluetooth devices,” Proceedings on Privacy Enhancing Technologies, vol. 2019, no. 3, pp. 50–65, 2019.
* [18] G. Celosia and M. Cunche, “Saving private addresses: an analysis of privacy issues in the bluetooth-low-energy advertising mechanism,” in MOBIQUITOUS, pp. 444–453, 2019.
* [19] S. Martello and P. Toth, “Linear assignment problems,” in North-Holland Mathematics Studies, vol. 132, pp. 259–282, Elsevier, 1987.
* [20] G. C. Harfst and E. M. Reingold, “A potential-based amortized analysis of the union-find data structure,” ACM SIGACT News, vol. 31, no. 3, pp. 86–95, 2000.
* [21] G. Aceto, D. Ciuonzo, A. Montieri, V. Persico, and A. Pescapé, “Mirage: Mobile-app traffic capture and ground-truth creation,” in 2019 4th International Conference on Computing, Communications and Security (ICCCS), pp. 1–8, 2019.
* [22] G. Michau, A. Nantes, A. Bhaskar, E. Chung, P. Abry, and P. Borgnat, “Bluetooth data in an urban context: Retrieving vehicle trajectories,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 9, pp. 2377–2386, 2017.
* [23] Y. Xu, D. He, P. Chao, J. Kim, W. Hua, and X. Zhou, “Route reconstruction using low-quality bluetooth readings,” in Proceedings of the 28th International Conference on Advances in Geographic Information Systems, pp. 179–182, 2020.
* [24] A. Bhaskar, M. Qu, and E. Chung, “Bluetooth vehicle trajectory by fusing bluetooth and loops: Motorway travel time statistics,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 1, pp. 113–122, 2014.
* [25] A. Alghamdi, T. Nadeem, and M. Cetin, “Bluemap: A pervasive bluetooth-based vehicle trajectory reconstruction system,” in 2018 IEEE Global Communications Conference (GLOBECOM), pp. 1–7, IEEE, 2018.
* [26] A. Alhamoud, A. A. Nair, C. Gottron, D. Böhnstedt, and R. Steinmetz, “Presence detection, identification and tracking in smart homes utilizing bluetooth enabled smartphones,” in 39th Annual IEEE Conference on Local Computer Networks Workshops, pp. 784–789, IEEE, 2014.
* [27] W. Shao, T. Nguyen, K. Qin, M. Youssef, and F. D. Salim, “Bledoorguard: a device-free person identification framework using bluetooth signals for door access,” IEEE Internet of Things Journal, vol. 5, no. 6, pp. 5227–5239, 2018.
# $\tau^{9}$ Eri: A bright pulsating magnetic Bp star in a 5.95-day double-
lined spectroscopic binary
K. Woodcock,1 G. A. Wade,1 O. Kochukhov,2 J. Sikora,3 and A. Pigulski4
1Dept. of Physics and Space Science, Royal Military College of Canada, PO Box
17000 Station Forces, Kingston, ON, Canada K7K 0C6
2Department of Physics and Astronomy, Uppsala University, Box 516, SE-751 20
Uppsala, Sweden
3Department of Physics and Astronomy, Bishop’s University, Sherbrooke, Québec
J1M 1Z7, Canada
4Astronomical Institute, University of Wrocław, Kopernika 11, 51-622 Wrocław,
Poland
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
$\tau^{9}$ Eri is a Bp star that was previously reported to be a single-lined
spectroscopic binary. Using 17 ESPaDOnS spectropolarimetric (Stokes $V$)
observations we identified the weak spectral lines of the secondary component
and detected a strong magnetic field in the primary. We performed orbital
analysis of the radial velocities of both components to find a slightly
eccentric orbit ($e=0.129$) with a period of $5.95382(2)$ days.
The longitudinal magnetic field ($B_{\ell}$) of the primary was measured from
each of the Stokes $V$ profiles, with typical error bars smaller than 10 G.
Equivalent widths (EWs) of LSD profiles corresponding to only the Fe lines
were also measured. We performed frequency analysis of both the $B_{\ell}$ and
EW measurements, as well as of the Hipparcos, SMEI, and TESS photometric data.
All sets of photometric observations produce two clear, strong candidates for
the rotation period of the Bp star: 1.21 days and 3.82 days. The $B_{\ell}$
and EW measurements are consistent with only the 3.82-day period. We conclude
that HD 25267 consists of a late-type Bp star (M=
$3.6_{-0.2}^{+0.1}\leavevmode\nobreak\ M_{\odot}$, T= $12580_{-120}^{+150}$ K)
with a rotation period of 3.82262(4) days orbiting with a period of 5.95382(2)
days with a late-A/early-F type secondary companion (M= $1.6\pm
0.1\leavevmode\nobreak\ M_{\odot}$, T= $7530_{-510}^{+580}$ K). The Bp star’s
magnetic field is approximately dipolar with $i=41\pm 2\degr$, $\beta=158\pm
5\degr$ and $B_{\rm d}=1040\pm 50$ G. All evidence points to the strong
$1.209912(3)$ day period detected in photometry, along with several other
weaker photometric signals, as arising from $g$-mode pulsations in the
primary.
###### keywords:
stars: individual: HD 25267 - stars: early-type - stars: magnetic field -
stars: binaries: spectroscopic - stars: oscillations - stars: chemically
peculiar
Publication year: 2021.
## 1 Introduction
$\tau^{9}$ Eri (HD 25267) is a very bright ($V=4.66$ mag), nearby ($d=96$ pc),
late-type Bp star that exhibits significant ($\sim$80 km s${}^{-1}\,$)
periodic radial velocity (RV) variations consistent with orbital motion in a
binary system. The system has a long history of study. Frost (1908) first
documented HD 25267 as an RV variable. Struve & Hujer (1927) used 53
additional spectra to identify the system as an SB1 and found a period of
0.85437 d that did not seem to satisfy all of their data. Five additional RVs
were published by Campbell & Moore (1928) that were employed by Hujer (1928)
along with 10 new spectra to propose several alternative orbital periods, of
which one (5.9542159 d) is close to the currently-accepted value. Sahade
(1950) used 38 radial velocity measurements to identify three possible orbital
periods: 0.8542 d, 5.9542 d, and 1.1979 d, stating that the 5.9542 d period
has ‘a slight advantage’. Leone & Catanzaro (1999) combined Sahade’s radial
velocities with 6 of their own, reporting $5.9538\pm 0.0001$ d as HD 25267’s
likely orbital period.
Babcock & Cowling (1953) claimed detection of a magnetic field in this star
(an effective magnetic field of $+1360\pm 700$ G), but later Babcock (1958)
relegated $\tau^{9}$ Eri to his list of stars in which the presence of a
magnetic field is probable but not firmly established. Additional magnetic
measurements were reported by Borra & Landstreet (1980), ranging from $-350$ G
to 0 G with typical uncertainties of 80-95 G, and proposed that the magnetic
period was equal to the orbital period.
Using 83 $uvby$ photometric data points taken in 1975 and 1977, Manfroid et
al. (1985) proposed two additional periods: 1.21 days (previously reported by
Hensberge et al. 1981) and 3.8 days. These results were confirmed by Catalano
et al. (1991). Most recently, Bernhard et al. (2020) identified the 1.21 d
period as the rotational period, while Mathys (2017) reasserted that the 5.95
d period represents both the rotation and orbit periods (in agreement with
Borra & Landstreet 1980), and that the origins of the 1.21 d and 3.82 d
periods remain unclear.
Three periods (1.21 d, 3.82 d, and 5.95 d) consistently recur in modern
analyses of the system. The 5.95 d period seems firmly established to
represent the orbital period. It may also represent the rotational period of
the Bp star (as proposed by Borra & Landstreet 1980 and Mathys 2017). On the
other hand, the prominent (in photometry) 1.21 d or 3.82 d periods might be
the star’s rotational period. In this paper we analyze high resolution
spectropolarimetric observations of HD 25267 to detect the spectral lines of
the secondary star, revealing the system to be a double-lined spectroscopic
binary (SB2). We determine the radial velocities of both components to model
the 5.95 d orbit of the system and constrain the physical parameters of both
stars. We exploit measurements of the longitudinal magnetic field ($B_{\ell}$)
and equivalent widths of the primary’s mean spectral line to establish its
3.82 d rotational period and to model its magnetic field geometry. Finally, we
evaluate possible origins of the 1.21 d period.
## 2 Observations
### 2.1 Spectropolarimetry
Our investigation takes advantage of 17 spectropolarimetric (Stokes $V$)
observations taken using ESPaDOnS at the Canada-France-Hawaii Telescope
(CFHT). Four observations, acquired from the CFHT archive, were obtained on
August 20, September 23, 24, and 25, 2013. Thirteen new observations took
place almost nightly from December 26, 2017 to January 9, 2018. The spectra,
reduced at the CFHT with the Upena pipeline feeding the Libre-Esprit reduction
software, span a wavelength range from 369 – 1048 nm with a resolving power of
65,000, and a median signal-to-noise ratio (S/N) of approximately 850 per
pixel. The spectra were normalized to the continuum by performing iterative
fitting of continuum points in the unmerged orders using polynomial fits.
Further details about the acquisition and reduction of the data are provided
by, e.g., Wade et al. (2016). The log of spectropolarimetric observations is
reported in Table 1.
Table 1: Log of spectropolarimetric observations, including orbital and rotational phases (columns 3 and 4), longitudinal magnetic field measured from Stokes $V$ and diagnostic null (columns 5 and 6), equivalent widths (from Fe LSD profiles, column 7), exposure time and signal-to-noise ratio (columns 8 and 9), and measured radial velocities (columns 10 and 11). Orbital and rotational phases are computed using the ephemerides described in Eqs. (2) and (3). Spectrum | HJD | $\phi_{\rm orb}$ | $\phi_{\rm rot}$ | $B_{\ell}(V)$ | $B_{\ell}(N)$ | EW | Exp. | S/N | RVpri | RVsec
---|---|---|---|---|---|---|---|---|---|---
ID | | | | (G) | (G) | (km s${}^{-1}\,$) | (s) | (pix-1) | (km ${\rm s}^{-1}$)
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11)
1648693 | 2456525.142 | 0.646 | 0.061 | $-$142 $\pm$ 8 | $-$2 $\pm$ 6 | 1.525 $\pm$ 0.004 | 110 | 929 | 44 | $-$29
1655882 | 2456559.139 | 0.356 | 0.955 | $-$138 $\pm$ 8 | 4 $\pm$ 8 | 1.484 $\pm$ 0.004 | 110 | 808 | 48 | $-$42
1656054 | 2456560.009 | 0.502 | 0.183 | $-$214 $\pm$ 9 | 11 $\pm$ 9 | 1.590 $\pm$ 0.007 | 110 | 758 | 57 | $-$59
1656328 | 2456560.984 | 0.666 | 0.438 | $-$261 $\pm$ 7 | 3 $\pm$ 7 | 1.631 $\pm$ 0.004 | 110 | 866 | 41 | $-$24
2237404 | 2458114.712 | 0.629 | 0.895 | $-$145 $\pm$ 18 | 4 $\pm$ 18 | 1.459 $\pm$ 0.003 | 180 | 372 | 46 | $-$34
2237408 | 2458114.722 | 0.630 | 0.898 | $-$162 $\pm$ 10 | $-$5 $\pm$ 9 | 1.465 $\pm$ 0.003 | 180 | 747 | 46 | $-$34
2237413 | 2458114.740 | 0.634 | 0.902 | $-$148 $\pm$ 13 | 13 $\pm$ 13 | 1.471 $\pm$ 0.003 | 180 | 555 | 45 | $-$33
2237541 | 2458115.787 | 0.809 | 0.176 | $-$198 $\pm$ 9 | 0 $\pm$ 9 | 1.479 $\pm$ 0.003 | 180 | 767 | 8 | 45
2237725 | 2458116.731 | 0.968 | 0.423 | $-$238 $\pm$ 10 | 13 $\pm$ 10 | 1.560 $\pm$ 0.004 | 180 | 773 | $-$22 | 120
2237881 | 2458117.747 | 0.139 | 0.689 | $-$231 $\pm$ 7 | $-$3 $\pm$ 6 | 1.528 $\pm$ 0.003 | 180 | 1080 | 0 | 71
2238075 | 2458118.770 | 0.310 | 0.957 | $-$137 $\pm$ 7 | 3 $\pm$ 7 | 1.559 $\pm$ 0.003 | 180 | 894 | 41 | $-$24
2238187 | 2458119.762 | 0.477 | 0.216 | $-$207 $\pm$ 6 | 4 $\pm$ 5 | 1.564 $\pm$ 0.003 | 180 | 1137 | 56 | $-$59
2238459 | 2458120.827 | 0.656 | 0.495 | $-$260 $\pm$ 6 | 0 $\pm$ 6 | 1.613 $\pm$ 0.003 | 180 | 917 | 49 | $-$25
2238647 | 2458121.788 | 0.817 | 0.746 | $-$230 $\pm$ 6 | $-$2 $\pm$ 6 | 1.467 $\pm$ 0.003 | 180 | 975 | 7 | 51
2238771 | 2458122.793 | 0.986 | 0.009 | $-$131 $\pm$ 11 | $-$2 $\pm$ 10 | 1.509 $\pm$ 0.003 | 180 | 549 | $-$24 | 121
2238898 | 2458123.829 | 0.160 | 0.280 | $-$227 $\pm$ 5 | $-$4 $\pm$ 5 | 1.577 $\pm$ 0.003 | 180 | 1080 | 4 | 57
2239595 | 2458128.830 | 0.000 | 0.588 | $-$281 $\pm$ 7 | $-$6 $\pm$ 6 | 1.544 $\pm$ 0.004 | 180 | 1089 | $-$24 | 123
Table 2: A summary of most significant periods (in days) detected in the various magnetic, spectroscopic, and photometric datasets. Dataset | 1.21 | 3.82 | 5.95
---|---|---|---
Hipparcos | 1.2100(4) | 3.8227(11) | —
TESS | 1.209921(15) | 3.81879(15) | 5.928(4)
SMEI | 1.209912(3) | 3.82262(4) | —
$B_{\ell}$ | — | 3.8230(6) | —
EW | — | 3.8230(5) | —
RV | — | — | 5.95382(2)
Table 3: Parameters of the least-squares frequency fits to the SMEI and TESS photometric datasets. Phases are given in radians, for the following epochs: HJD 2454000.0 for SMEI and BJD 2458440.0 for TESS. The signal-to-noise ratio (SNR) was calculated by dividing amplitude (S) by the noise (N) calculated locally using frequency spectra of the residuals from the presented multi-frequency fits. “Notes” describe the proposed origin and relationship to any other detected signals. | SMEI | TESS |
---|---|---|---
ID | Frequency | Amplitude | Phase | SNR | Frequency | Amplitude | Phase | SNR | Notes
| (d-1) | (mmag) | (rad) | | (d-1) | (mmag) | (rad) | |
$f_{1}$ | 0.8265067(21) | 8.11(9) | 0.745(11) | 74.5 | 0.826500(10) | 8.938(9) | 5.0816(10) | 70.4 | $g$-mode
$f_{2}$ | 0.2616008(24) | 6.88(9) | 5.166(13) | 49.8 | 0.261863(10) | 8.379(9) | 2.0744(10) | 49.0 | $f_{\rm rot}$
$f_{3}$ | 0.5232017 | 1.08(9) | 4.75(8) | 8.6 | 0.523727(2) | 1.726(9) | 4.920(5) | 12.8 | $2\,f_{\rm rot}$
$f_{4}$ | 0.879600(18) | 0.99(9) | 2.28(9) | 9.4 | 0.88090(8) | 1.224(8) | 5.000(7) | 9.7 | $g$-mode
$f_{5}$ | 0.257385(23) | 0.75(9) | 5.96(11) | 5.4 | — | — | — | — | spurious
$f_{6}$ | 0.564906 | 0.56(9) | 3.63(15) | 4.5 | 0.564636(2) | 0.468(9) | 5.152(18) | 3.5 | $f_{1}-f_{\rm rot}$
$f_{7}$ | 0.61531(4) | 0.55(9) | 1.63(16) | 4.5 | 0.61498(16) | 0.553(8) | 2.917(15) | 4.8 | $g$-mode
$f_{8}$ | — | — | — | — | 1.01773(11) | 0.852(8) | 2.818(10) | 7.8 | $g$-mode
$f_{9}$ | — | — | — | — | 1.10199(16) | 0.563(9) | 0.543(15) | 4.8 | $g$-mode
$f_{10}$ | — | — | — | — | 0.16868(11) | 0.842(9) | 5.531(10) | 4.8 | $f_{\rm orb}$
### 2.2 Photometry
Our investigation employed Hipparcos photometry of HD 25267 extracted from the
Centre de Données Astronomiques de Strasbourg (CDS). A total of 193 data
points were observed over more than 3 years, and 2 outliers were removed from the analysis. The typical precision was a few mmag.
The Solar Mass Ejection Imager (SMEI) experiment (Eyles et al., 2003; Jackson
et al., 2004) was placed on-board the Coriolis spacecraft and was aimed at
measuring sunlight scattered by free electrons in the solar wind. We used
photometry of HD 25267 obtained during nearly 8 years and available through
the University of California San Diego (UCSD) web
page111http://smei.ucsd.edu/new_smei/index.html. The SMEI time series are
affected by long-term calibration effects, especially a repeatable variability
with a period of one year. The raw SMEI UCSD photometry of HD 25267 was
corrected for the one-year variability by subtracting an interpolated mean
light curve, which was obtained by folding the raw data with the period of one
year, calculating median values in 200 intervals in phase, and then
interpolating between them. In addition, the worst parts of the light curve
and outliers were removed. The data points were also assigned individual
uncertainties (typically 5-10 mmag) calculated using the scatter of the
neighbouring data. Then, a model consisting of the dominant frequencies was
fit to the data. Finally, the low-frequency instrumental variability was
filtered out by subtracting a trend using residuals from the fit. The last two
steps were iterated several times, yielding 22771 data points that were
ultimately analyzed.
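The fold-and-subtract correction described above can be sketched as follows. This is a minimal illustration on synthetic data: the one-year folding period and the 200 phase bins follow the text, while the function name, bin-median interpolation details, and the synthetic light curve are our own assumptions, not the actual SMEI pipeline.

```python
import numpy as np

def remove_yearly_trend(t, mag, period=365.25, n_bins=200):
    """Fold the series with `period`, take median magnitudes in `n_bins`
    phase bins, interpolate that mean curve at each epoch, and subtract it."""
    phase = (t / period) % 1.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    medians = np.array([np.median(mag[(phase >= lo) & (phase < hi)])
                        for lo, hi in zip(edges[:-1], edges[1:])])
    # periodic interpolation of the binned median curve at each observation
    trend = np.interp(phase, centres, medians, period=1.0)
    return mag - trend

# synthetic example: a one-year sinusoidal drift on top of white noise
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 8 * 365.25, 20000))
mag = 0.05 * np.sin(2.0 * np.pi * t / 365.25) + rng.normal(0.0, 0.007, t.size)
corrected = remove_yearly_trend(t, mag)
```

The subtraction removes nearly all of the one-year term while leaving the per-point scatter intact, which is the behaviour the real correction relies on.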
Finally, we employed new photometry from the Transiting Exoplanet Survey
Satellite (TESS). The primary goal of NASA’s TESS mission (Ricker et al.,
2014, 2015) is the detection of planets by means of the transit method. TESS
observations cover almost the entire sky, excluding only the regions with low
Galactic latitudes ($|b|<$ 6$\degr$). Observations were carried out with a
30-min cadence, but selected stars, including HD 25267, were observed with a
shorter, 2-min cadence. The star was observed in Sectors 4 and 5. The
observations spanned 54 d between October 19, 2018 and December 11, 2018, and
consisted of 33662 data points. In the subsequent analysis we used SAP fluxes
and removed all data points with quality flag different from 0.
## 3 Least-Squares Deconvolution
We employed Least-Squares Deconvolution (LSD; Donati et al., 1997; Kochukhov
et al., 2010) to compute high signal-to-noise ratio (S/N) pseudo-line profiles
following the essential procedure laid out by Grunhut et al. (2017). LSD was
applied to each spectrum to compute mean line profiles with a high S/N. A line
mask is produced using ‘Extract Stellar’ requests from the Vienna Atomic Line
Database (Piskunov et al., 1995), resulting in a list of all absorption lines
and related data expected to be present in a star at a given temperature and
surface gravity. The line mask was cleaned and tweaked using an interactive
graphical tool developed by Jason Grunhut (see e.g. Grunhut et al. 2017).
Line masks of various effective temperatures — ranging from 7 to 14 kK — were
tested on the spectrum. We were able to identify two spectral line features in
our LSD profiles: a strong line produced by the Bp primary star and a second,
weaker line, corresponding to the secondary component of the system. The 10 kK
mask was selected for our analysis, as it produced the deepest secondary
Stokes $I$ profile. All LSD Stokes $V$ profiles (see e.g. Fig. 1) show clear
Zeeman signatures in the mean spectral line of the primary star, indicative of
the presence of a magnetic field.
Figure 1: Stokes $V$ (red), diagnostic null $N$ (blue), and Stokes $I$ (black)
LSD profiles extracted from one of the spectropolarimetric observations of HD
25267. The secondary’s line profile is visible at $-35$ km s${}^{-1}\,$. The
Stokes $V$ and $N$ profiles have been scaled and shifted for display purposes.
In addition to LSD profiles extracted using the full mask, we also extracted
LSD profile sets using a sub-mask restricted to the lines of Fe.
## 4 Measurements
All measurements are reported in Table 1.
### 4.1 Radial velocities
Radial velocities of the primary and secondary stars were determined by
fitting a Gaussian function to both lines in each Stokes $I$ LSD profile.
Typical uncertainties (inferred from the RMS scatter about the best-fit
orbital model derived in Sect. 6) are about 1 km s${}^{-1}\,$. We were able to
successfully measure radial velocities for both components in all 17
observations.
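The Gaussian-fitting procedure can be sketched as below on a synthetic profile. The function names, the initial-guess values, and the synthetic line parameters are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, depth, v0, sigma, cont):
    """Inverted Gaussian absorption line on a flat continuum."""
    return cont - depth * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def measure_rv(v, I, guess_v0):
    """Fit a Gaussian to one line of a Stokes I LSD profile; the fitted
    centre v0 is the radial velocity, with its 1-sigma fit uncertainty."""
    p0 = [1.0 - I.min(), guess_v0, 10.0, 1.0]   # depth, v0, sigma, continuum
    popt, pcov = curve_fit(gaussian, v, I, p0=p0)
    return popt[1], float(np.sqrt(pcov[1, 1]))

# synthetic primary line at +44 km/s with realistic noise
rng = np.random.default_rng(1)
v = np.linspace(-100.0, 100.0, 401)
I = gaussian(v, 0.15, 44.0, 12.0, 1.0) + rng.normal(0.0, 0.002, v.size)
rv, rv_err = measure_rv(v, I, guess_v0=40.0)
```

In practice each of the two blended lines would be fit over its own velocity window; the sketch handles a single isolated line.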
### 4.2 Longitudinal magnetic field
To calculate the mean longitudinal magnetic field of the primary star, we
measured the first-order moment of the Stokes $V$ profile normalized to the
equivalent width of the Stokes $I$ profile according to the expression:
$B_{\ell}=-2.14\times 10^{11}\frac{\int(v-v_{0})V(v)\mbox{d}v}{\lambda
zc\int[1-I(v)]\mbox{d}v}$ (1)
(Donati et al., 1997; Wade et al., 2000), where $\lambda$ represents the
wavelength in nm, $z$ is the Landé factor, $c$ is the speed of light, and
$V(v)$ and $I(v)$ are the Stokes $V$ and $I$ profile intensities,
respectively. $v_{0}$ is the radial velocity of the center of gravity of the
Stokes $I$ profile. The equation was evaluated in a range $\pm 40$ km s${}^{-1}\,$ about the primary’s center of gravity.
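A numerical evaluation of Eq. (1) might look like the following sketch. The default wavelength and Landé factor here are illustrative placeholders (not the values used for the LSD mask), and the synthetic profile is our own construction.

```python
import numpy as np

def _trapz(y, x):
    """Plain trapezoidal rule (avoids NumPy-version differences in np.trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def longitudinal_field(v, I, V, v0, lam=500.0, z=1.2, vrange=40.0):
    """Evaluate Eq. (1): the first-order moment of Stokes V normalized by
    the Stokes I equivalent width.  v in km/s, lam in nm; the defaults for
    lam and z are illustrative, not the mask values."""
    c = 2.998e5                                   # speed of light, km/s
    m = np.abs(v - v0) <= vrange                  # +/- 40 km/s window
    num = _trapz((v[m] - v0) * V[m], v[m])
    den = lam * z * c * _trapz(1.0 - I[m], v[m])
    return -2.14e11 * num / den

# synthetic profile with a typical antisymmetric (S-shaped) V signature
v = np.linspace(-60.0, 60.0, 241)
I = 1.0 - 0.2 * np.exp(-0.5 * (v / 10.0) ** 2)
V = -1e-4 * (v / 10.0) * np.exp(-0.5 * (v / 10.0) ** 2)
bl = longitudinal_field(v, I, V, v0=0.0)          # positive, a few G here
```

Since the expression is linear in Stokes $V$, flipping the sign of the signature flips the sign of $B_{\ell}$, and a perfectly symmetric $V$ profile yields zero field.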
### 4.3 Equivalent widths
The Stokes $I$ profiles of the primary star exhibit significant variability.
We measured the equivalent widths (EWs) of the Fe Stokes $I$ LSD profiles by
first centering each profile of the primary star at $v=0$ km s${}^{-1}\,$,
then performing local renormalization of the continuum. We then performed
trapezoidal integration of the intensity between continuum and the profile in
the velocity range $\pm 40$ km s${}^{-1}\,$.
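The EW measurement reduces to a trapezoidal integral of $1-I(v)$ over the integration window; a minimal sketch on a synthetic Gaussian line (all profile parameters here are illustrative):

```python
import numpy as np

def equivalent_width(v, I, vrange=40.0):
    """Trapezoidal integral of (1 - I) about line centre; the profile is
    assumed already shifted to v = 0 and locally renormalized."""
    m = np.abs(v) <= vrange
    y, x = 1.0 - I[m], v[m]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Gaussian line of depth 0.2 and sigma 10 km/s: analytic EW = 0.2*10*sqrt(2*pi)
v = np.linspace(-60.0, 60.0, 601)
I = 1.0 - 0.2 * np.exp(-0.5 * (v / 10.0) ** 2)
ew = equivalent_width(v, I)                       # ~5.01 km/s
```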
## 5 Period analysis
We performed period analysis of the RV, photometric, $B_{\ell}$ and EW
measurements. We used a Lomb-Scargle approach for the RVs, EWs, and $B_{\ell}$
data, while we employed the Fourier approach of Period04 for the photometric
data. The key periods recovered from each data set are summarized in Table 2.
Details of frequencies detected in the SMEI and TESS timeseries are reported
in Table 3.
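To illustrate the Lomb-Scargle idea, the sketch below implements a simple least-squares (floating-mean) periodogram in pure NumPy and recovers a 3.8226-day signal from synthetic data. The actual analysis used dedicated tools, so this is only a toy reconstruction; the sampling, amplitudes, and noise level are assumptions.

```python
import numpy as np

def ls_power(t, y, freqs):
    """Fraction of variance explained by a sinusoid-plus-constant fit at
    each trial frequency (equivalent in spirit to a floating-mean
    Lomb-Scargle periodogram)."""
    y = y - y.mean()
    power = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f * t
        A = np.column_stack([np.sin(w), np.cos(w), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power[i] = 1.0 - np.var(y - A @ coef) / np.var(y)
    return power

# synthetic series: a 3.8226-day sinusoid sampled at 120 random epochs
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 60.0, 120))
y = -200.0 + 70.0 * np.sin(2.0 * np.pi * t / 3.8226) + rng.normal(0.0, 8.0, t.size)

freqs = np.arange(0.05, 2.0, 0.0005)              # trial frequencies in d^-1
best_period = 1.0 / freqs[np.argmax(ls_power(t, y, freqs))]
```

At high signal-to-noise the strongest peak lands at the injected frequency; with real, sparsely sampled data, aliases and window-function structure complicate the picture, as the discussion below shows.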
All three photometric datasets display two similarly strong peaks in their periodograms near 0.826 d-1 and 0.262 d-1, corresponding to periods of 1.21
and 3.82 d, respectively.
The Hipparcos photometry shows significant peaks yielding frequencies of
$f_{1}=0.8264(3)$ d-1 and $f_{2}=0.26160(7)$ d-1 (corresponding to periods of
$1.2100(4)$ days and $3.822(1)$ days, with respective amplitudes of 10.8 and
8.5 mmag), consistent with the results of Manfroid et al. (1985).
A significant advantage of the SMEI photometry is excellent frequency
resolution. The data, however, suffer from some instrumental effects. Again,
the amplitude spectrum of the data is dominated by two frequencies,
$f_{1}=0.8265067(21)$ d-1 and $f_{2}=0.2616008(24)$ d-1, corresponding to
periods of 1.209912(3) d (amplitude of 8.11(9) mmag) and 3.82262(4) d
(amplitude of 6.88(9) mmag). The amplitude spectrum after prewhitening these
two dominating terms is also interesting; see the upper panel of Fig. 2. There are at least two peaks that are clearly significant (Table 3). The first, at $f_{3}=0.5232017$ d-1 (amplitude 1.08(9) mmag), is the harmonic of $f_{2}$ ($f_{3}=2\,f_{2}$); given the superb frequency resolution of the SMEI data, this identification is secure. The other, $f_{4}=0.879600$ d-1 (amplitude of 0.99(9) mmag), is independent.
Figure 2: Frequency spectra of the photometric data of $\tau^{9}$ Eri. Top
panels: Frequency spectra of the original SMEI data (blue) and after
subtraction of the two dominant frequencies, $f_{1}$ and $f_{2}$ (red). Bottom
panels: Frequency spectra of TESS data after subtraction of $f_{1}$ and
$f_{2}$ (red) and after subtracting all terms listed in Table 3 (green). Grey
vertical lines in the upper panel mark six frequencies (excluding $f_{5}$,
deemed to be instrumental) detected in the SMEI data and listed in Table 3.
The SMEI amplitude spectrum also shows weaker signals, most of which nevertheless appear to be significant. $f_{5}=0.257385$ d-1 (0.75(9) mmag) could be a $g$-mode based on the magnitude of its frequency, but is more likely a remnant of the detrending procedure: the noise level in SMEI data increases towards low frequencies, and the detrending we applied suppresses signals at the lowest frequencies (hence the drop of signal below 0.25 d-1). We therefore doubt that $f_{5}$ is real. $f_{6}=0.564906$ d-1 (amp. 0.56(9)
mmag) is a combination frequency ($f_{6}=f_{1}-f_{2}$). It is still strong
enough to be significant, but more importantly, its frequency is exactly the
difference between the two dominant frequencies. This provides important
insights into the origins of $f_{1}$ and $f_{2}$ that will be discussed later
in this paper. Finally, we identify $f_{7}=0.61531$ d-1 (0.55(9) mmag), which
is again possibly a $g$-mode based on the magnitude of its frequency. Some
other smaller peaks are also present in the spectrum, but given that many are
close to $1,2,3$, and $4$ d-1 as well as the cluster of peaks at $\sim$4.6 d-1
— where such clusters at frequencies between 3 – 5 d-1 show up in almost all
SMEI data — we conclude that they are likely spurious.
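The harmonic and combination identifications above can be checked arithmetically against the SMEI frequency resolution, which is roughly $1/T$ for the $\sim$8-year baseline:

```python
# SMEI frequencies from Table 3, in d^-1
f1 = 0.8265067
f2 = 0.2616008
f3 = 0.5232017   # identified as the harmonic 2*f2
f6 = 0.564906    # identified as the combination f1 - f2

resolution = 1.0 / (8 * 365.25)   # ~3.4e-4 d^-1 for an ~8-year baseline

# both identifications hold to ~1e-7 d^-1, far below the resolution
assert abs(f3 - 2.0 * f2) < resolution
assert abs(f6 - (f1 - f2)) < resolution
```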
The TESS data are much more precise than SMEI, but also suffer from some
instrumental effects. Two fluxes are available in the photometry: the SAP_FLUX (hereafter SAP) and the corrected PDCSAP_FLUX (hereafter PDC). There is a
large drop in flux in SAP after the mid-time gap in the Sector 4 observations.
It is largely corrected in the PDC flux, but not perfectly. We decided to use
the SAP flux, because from our experience analyzing TESS data we conclude that
the PDC correction sometimes introduces unwanted signals. We only removed a
small part of the timeseries between TJD 1420 and 1424 (where
$\mbox{TJD}=\mbox{BJD}-2457000$).
The spectrum of the TESS data is dominated by the same two frequencies as the
SMEI data, $f_{1}$ and $f_{2}$. Given the results from SMEI, we were
interested in the signals which remain after subtraction of these two terms.
The residual frequency spectrum of the TESS data, after prewhitening these two
terms, is shown in Fig. 2. We identify the following significant frequencies:
$f_{3}$ (at 0.523727 d-1 $=2\,f_{2}$) is clearly present; $f_{4}$ (0.88090
d-1, a likely $g$-mode) is present as well. $f_{6}$ (0.564636 d-1 =
$f_{1}-f_{2}$) and $f_{7}$ (0.61498 d-1) are barely seen because of the poor
frequency resolution, but are possibly present. $f_{5}$ (0.257385 d-1) is
absent, which confirms our suspicion that it is an artefact in the SMEI data.
At least two more peaks are present, $f_{8}$ at 1.01773 d-1 and $f_{9}$ at
1.10199 d-1. Both can be seen in the SMEI data, although the frequencies close
to 1 d-1 are affected by instrumental effects in the SMEI data.
There are also two peaks at low frequencies in the TESS data. The first one,
at 0.033 d-1 (not included in Table 3), is likely of instrumental origin
because there is a small jump between the data before and after the gap. The
other one, $f_{10}$, is located at 0.16868 d-1 and corresponds to the known RV
period. Hence, the photometry appears to be weakly modulated according to the
5.95-day orbital period; this will be discussed in the next Section.
It is clear that the residuals in the TESS photometry after subtracting the
above nine terms (Fig. 2) still include significant variability, which can be
seen both in the residual light curve and its frequency spectrum. This will be
discussed later in the paper.
We next combined our RV measurements with 44 published velocities of the
primary component reported by Sahade (1950) (38 measurements) and Leone &
Catanzaro (1999) (6 measurements). Performing period analysis independently on
the primary and secondary RVs yields similar results. The lowest reduced
$\chi^{2}$ in the primary’s periodogram is located at $5.95382(2)$ d, while
the secondary’s periodogram gives $5.9538(4)$ days.
The periodogram of our $B_{\ell}$ measurements exhibits many close peaks
forming a broad envelope centered near $3.8$ days. The peak most compatible
with the frequencies detected in the photometric data is located at
$3.8230(6)$ days. Additional power is present near 1.4 d, the daily alias of
3.8 d. $B_{\ell}$ measurements by Borra & Landstreet (1980) suggested
rotational periods near 0.85 and 5.95 days. Our $B_{\ell}$ data rule out
periods near both of these values, as well as periods near 1.21 d. This is
discussed further in Sect. 8.
Finally, we analyzed the EWs measured from the Fe LSD profiles. As with the
$B_{\ell}$ data, the periodogram exhibits a broad envelope composed of sharp
peaks centered near 3.8 d, again with some power near 1.4 d. The peak most
compatible with the photometric and $B_{\ell}$ periods is located at 3.823(1)
days.
## 6 Modeling of the orbit
We used the IDL orbital fitting code Xorbit to model the orbit. This code
determines the best-fitting orbital period $P_{\rm orb}$, time of periastron
passage ($T_{0}$), eccentricity ($e$), longitude of the periastron ($\omega$),
semi-amplitudes of each component’s radial velocities ($K_{1}$ and $K_{2}$),
and the radial velocity of the center of mass ($\gamma$) (Tokovinin, 1992),
performing least-squares fits to the measured radial velocities. Using the RV
period identified in Sect. 5 as an initial guess for $P_{\rm orb}$, and using
the range of measured RVs as initial guesses for $K_{\rm 1}$ and $K_{\rm 2}$,
the solution converged quickly to an acceptable, stable fit.
Table 4: Final orbital solution for $\tau^{9}$ Eri: results of simultaneous modeling of the RV variations of the primary and secondary components. Quantity | Value | Uncertainty
---|---|---
$P_{\rm orb}$ (d) | 5.95382 | 0.00002
$T_{0}$ (HJD) | 2456991.65 | 0.08
$K_{1}$ (km s${}^{-1}\,$) | 40.0 | 0.6
$K_{2}$ (km s${}^{-1}\,$) | 89.9 | 1.4
$\gamma$ (km s${}^{-1}\,$) | 21.1 | 0.4
$e$ | 0.129 | 0.010
$\omega$ ($\degr$) | 183.2 | 4.3
RMS1 (km s${}^{-1}\,$) | 0.7 |
RMS2 (km s${}^{-1}\,$) | 1.0 |
$M_{1}/M_{2}$ | 2.25 | 0.07
$M_{1}\sin^{3}i$ ($M_{\odot}$) | 0.91 | 0.05
$M_{2}\sin^{3}i$ ($M_{\odot}$) | 0.41 | 0.03
$a\sin i$ (AU) | 0.0705 | 0.0012
Figure 3: Radial velocity and theoretical velocity curves of the primary
(blue) and secondary (red). Grey squares indicate published velocities of the
primary. The ephemeris used for phasing is given by Eq. (2).
Figure 3 illustrates the final model fit to the phased RV measurements
according to the orbital ephemeris:
$T_{0}(E_{\rm orb})=\mbox{HJD}\,2456991.65(8)+5.95382(2)\cdot{E_{\rm orb}},$
(2)
where $T_{0}(E_{\rm orb})$ represents the epoch of periastron passage and
$E_{\rm orb}$ is the number of orbital cycles elapsed from $T_{0}$.
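For example, the orbital phases listed in Table 1 follow directly from this ephemeris:

```python
def orbital_phase(hjd, t0=2456991.65, period=5.95382):
    """Orbital phase in [0, 1) from the ephemeris of Eq. (2)."""
    return ((hjd - t0) / period) % 1.0

# first spectrum in Table 1 (HJD 2456525.142): tabulated phase is 0.646
phi = orbital_phase(2456525.142)
```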
The measurements of both the primary and secondary components exhibit coherent
variations with an RMS scatter of approximately 1 km s${}^{-1}\,$ about the
model. The RVs imply a small but significant eccentricity $e=0.129\pm 0.010$.
The semi-amplitudes $K_{\rm 1}=40.0\pm 0.6$ km s${}^{-1}\,$ and $K_{\rm
2}=89.9\pm 1.4$ km s${}^{-1}\,$ imply a mass ratio $M_{\rm 1}/M_{\rm
2}=2.25\pm 0.07$. The total system mass is $(M_{\rm 1}+M_{\rm 2})=(1.32\pm
0.06)\,\sin^{-3}i\leavevmode\nobreak\ M_{\odot}$, with individual masses of
$M_{\rm 1}=(0.91\pm 0.05)\,\sin^{-3}i\leavevmode\nobreak\ M_{\odot}$ and
$M_{\rm 2}=(0.41\pm 0.03)\,\sin^{-3}i\leavevmode\nobreak\ M_{\odot}$. The
projected semi-major axis is $a\sin i=0.0705\pm 0.0012$ AU. The derived
orbital and stellar parameters are reported in Table 4.
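As a consistency check, the minimum masses and projected semi-major axis follow from $K_{1}$, $K_{2}$, $P_{\rm orb}$, and $e$ via the standard spectroscopic mass-function relations; the short sketch below reproduces the tabulated values (the constants are standard physical values, not taken from the paper):

```python
import math

K1, K2 = 40.0, 89.9          # RV semi-amplitudes, km/s
P = 5.95382 * 86400.0        # orbital period, s
e = 0.129

G = 6.674e-20                # gravitational constant, km^3 kg^-1 s^-2
M_sun = 1.989e30             # kg
AU = 1.495979e8              # km

# projected semi-major axis of the relative orbit
a_sini = (K1 + K2) * P * math.sqrt(1.0 - e**2) / (2.0 * math.pi)        # km

# total minimum mass, then split using M1/M2 = K2/K1
M_tot = (1.0 - e**2) ** 1.5 * (K1 + K2) ** 3 * P / (2.0 * math.pi * G)  # kg
m1 = M_tot * K2 / (K1 + K2) / M_sun    # M1 sin^3 i in solar masses
m2 = M_tot * K1 / (K1 + K2) / M_sun    # M2 sin^3 i in solar masses
```

Evaluating these expressions recovers $M_{1}\sin^{3}i\approx 0.91\,M_{\odot}$, $M_{2}\sin^{3}i\approx 0.41\,M_{\odot}$, and $a\sin i\approx 0.0705$ AU, in agreement with Table 4.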
Figure 4 illustrates the LSD Stokes $I$ and $V$ profiles stacked according to
orbital phase and overlaid with the predicted RV variations according to the
adopted orbital model. The primary’s profile is traced by the blue curve,
while the subtle secondary profile is traced by the red curve. The right-hand frame shows that the Stokes $V$ profiles coincide with the primary’s Stokes $I$ profile at all orbital phases. No Stokes $V$ profile is
observed in association with the secondary’s profile. In addition, it is clear
that the primary’s Stokes $V$ signatures show diverse shapes at similar
orbital phases. This strongly suggests that the primary’s rotational period is
not equal to the orbital period, in contradiction to the proposals of Borra &
Landstreet (1980) and Mathys (2017).
As discussed above, the frequency $f_{10}=0.16868$ d-1 detected in the TESS data
corresponds to the orbital frequency. It is therefore likely that the orbital
frequency is detected in the TESS photometry. We detrended the data to remove
the lowest frequency (0.033 d-1) and added the discussed frequencies to the
model, also including $f_{\rm orb}$ and $2\,f_{\rm orb}$ (the latter barely
seen in the frequency spectrum). The amplitudes are $\sim$0.87 and 0.38 mmag.
We used the Wilson-Devinney lightcurve modeling code (Wilson & Devinney, 1971)
to compute the predicted lightcurve based on the derived orbital and physical
parameters. Both the observed amplitudes and phases of the variability
components are acceptably explained by the main contributing proximity effects
from the distortion of the primary (with $2\,f_{\rm orb}$) and reflection
effect for the secondary (with $f_{\rm orb}$).
Figure 4: LSD profiles phased according to the orbital ephemeris described in
the text. Solid curves represent the theoretical RV variations described in
Sect. 3. Black line profiles were obtained in 2018. Red line profiles
represent those obtained in 2013.
## 7 Fundamental parameters
The fundamental parameters of the primary component (effective temperature
$T_{\rm eff}$, luminosity $L$, radius $R$, and evolutionary mass $M$) were
estimated by modelling the observed spectral energy distribution (SED) and the
observed line profiles, then situating the star on the Hertzsprung-Russell
diagram (HRD). As discussed in the following sections, we considered models
consisting of (1) only the primary component and (2) both the primary and
secondary components.
### 7.1 SED fitting
Various photometric measurements of HD 25267 are available in the literature.
This includes Johnson $U$, $B$, and $V$, _Gaia_ $G$, $G_{b}$, and $G_{r}$
(Gaia Collaboration et al., 2016, 2018), Geneva (Rufener & Nicolet, 1988),
2MASS (Cohen et al., 2003), and WISE (Wright et al., 2010) measurements. We
used the pyphot Python package222https://github.com/mfouesneau/pyphot to
convert each measurement from magnitudes to units of erg s-1 cm-2 Å-1. A
distance of $d=95.6\pm 1.9$ pc was inferred from the reported Gaia parallax
measurement of $10.67\pm 0.21$ mas (Lindegren et al., 2018); based on the
relatively low distance, reddening produced by the interstellar medium is
assumed to be negligible (e.g. Vergely et al., 1998).
The observed flux measurements were fit using the Markov chain Monte Carlo
(MCMC) routine emcee (Foreman-Mackey et al., 2013). Samples were drawn by
linearly interpolating the grid of synthetic spectral energy distributions
(SEDs) computed by Coelho (2014). This grid spans a range of effective
temperatures ($3000\leq T_{\rm eff}\leq 25000\,{\rm K}$) and surface gravities
($-0.5\leq\log{g}\leq 5.5\,{\rm(cgs)})$ and includes a range of metallicities
characterized by [$\alpha$/Fe] and [Fe/H]; we used the [$\alpha$/Fe$]=0$ and
$-1.0\leq{\rm[Fe/H]}\leq 0.2$ models. Each interpolated model was scaled by a
factor $(R/d)^{2}$ where $R$ corresponds to the star’s radius.
The MCMC analysis was carried out using both a single-star and double-star
model. In both analyses we adopted a Gaussian [Fe/H] prior defined by the
${\rm[Fe/H]}=0.0\pm 0.2$ value derived for a large sample of FGK dwarfs within
the solar neighbourhood (Casagrande et al., 2011). The [Fe/H] values for each
star in the double-star model were assumed to be equal and thus, were
characterized by a single [Fe/H] value. An initial mass function (Chabrier,
2003) prior was also included by deriving the stellar masses associated with
each model from $\log{g}$ and $R$. In the case of the double-star model, the
primary component’s $T_{\rm eff}$ and $R$ values were forced to exceed those
of the secondary component (i.e. $T_{\rm eff,A}>T_{\rm eff,B}$ and $R_{\rm
A}>R_{\rm B}$).
The best-fitting parameters and their associated $1\sigma$ uncertainties were
estimated from the marginalized posterior probability densities. We find that
$T_{\rm eff,A}$ and $R_{\rm A}$ are consistent for both the single- and
double-star models; the $T_{\rm eff,B}$ and $R_{\rm B}$ marginalized
distributions yielded maximum-probability values of $11200\,{\rm K}$ and
$0.6\,R_{\odot}$ with associated uncertainties $\sim$2000 K and $\sim$0.5
$R_{\odot}$. The statistical significance of the double-star model with
respect to the single-star model was evaluated by comparing each fit’s
Bayesian information criterion (BIC) (Schwarz, 1978), where $\Delta{\rm BIC}\gtrsim 10$ is considered significant (Kass & Raftery, 1995). We find $\Delta{\rm BIC}<2$; therefore, the improvement of
the fit yielded by the double-star model relative to the single-star model is
not considered to be significant. We therefore only consider the best-fitting
single-star model parameters for the primary component of $T_{\rm
eff,A}=12640_{-90}^{+70}\,{\rm K}$, $R_{\rm A}=3.06\pm 0.06\,R_{\odot}$,
$\log{g}=3.4\pm 0.1\,{\rm(cgs)}$, and ${\rm[Fe/H]}=0.1\pm 0.1$. In Fig. 5, we
show the best-fitting single-star model.
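The model comparison above can be sketched in a few lines. The BIC is $k\ln n - 2\ln\hat{L}$, so adding parameters must buy enough likelihood to overcome the $k\ln n$ penalty; the fit statistics below are hypothetical, chosen only to illustrate a case where the double-star model fails to do so:

```python
import math

def bic(log_likelihood_max, n_params, n_data):
    """Bayesian information criterion: BIC = k ln(n) - 2 ln(L_max)."""
    return n_params * math.log(n_data) - 2.0 * log_likelihood_max

# Hypothetical fit statistics: the double-star model improves ln(L_max)
# only slightly while adding two free parameters (e.g. T_eff,B and R_B).
n_data = 40                        # assumed number of photometric points
lnL_single, k_single = -10.0, 4
lnL_double, k_double = -8.5, 6

# Positive delta_bic >~ 10 would favour the double-star model; here the
# small likelihood gain does not offset the parameter penalty.
delta_bic = bic(lnL_single, k_single, n_data) - bic(lnL_double, k_double, n_data)
```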
Figure 5: The best-fitting single-star model compared with the photometric
measurements (blue circles). The solid black curve corresponds to the total
flux and the red ‘$\times$’ symbols indicate the model flux associated with
each bandpass. The W3 and W4 WISE measurements were not considered in the fit since they extend beyond the wavelength range of the adopted model SED grid; however, they are included here for completeness.
### 7.2 Spectral line fitting
We carried out a spectral modelling analysis in order to provide additional
constraints on HD 25267’s fundamental parameters and verify those values
derived from the SED. As with the SED modelling analysis, we consider both
single-star and double-star models. We used the open-source Dynamic Nested
Sampling Python package dynesty (Speagle, 2020) to generate posterior
probability distributions. Although slower than the MCMC algorithms employed
by emcee, dynesty is well-suited to cases involving complex and multi-modal
distributions.
Grids of synthetic spectra were generated using the Grid Search in Stellar
Parameters (gssp) code (Tkachenko, 2015), which uses the LTE-based radiative
transfer code SynthV (Tsymbal, 1996) and pre-computed LLmodel model
atmospheres (Shulyak et al., 2004). Two grids of synthetic spectra were
generated based on the SED fitting analysis presented above: one with
$9000\leq T_{\rm eff}\leq 16000\,{\rm K}$ and one with $4000\leq T_{\rm
eff}\leq 12000\,{\rm K}$, which correspond to the primary and secondary
components, respectively. Both grids spanned $3.0\leq\log{g}\leq
5.0\,{\rm(cgs)}$ and $-0.8\leq{\rm[M/H]\leq+0.8}$; instrumental broadening
assuming $R=65000$ was included in the models while microturbulent broadening
was fixed at $0\,{\rm km\,s}^{-1}$. Based on the grid of LLmodel atmospheres,
we used grid resolutions of $\Delta T_{\rm eff}=500\,{\rm K}$,
$\Delta\log{g}=0.2\,{\rm(cgs)}$, and $\Delta{\rm[M/H]}=0.2$. Models
corresponding to intermediate parameters were computed by linearly
interpolating the grids of synthetic spectra. In order to reduce the
computation time, rotational broadening was added by convolving the models
using the fastRotBroad function of the PyAstronomy Python
package (https://github.com/sczesla/PyAstronomy). Linear limb-darkening
coefficients were estimated by interpolating the grid computed by Díaz-
Cordovés et al. (1995).
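This convolution approach amounts to applying a single rotational broadening kernel on a uniform velocity grid. The following is a minimal pure-Python sketch of the standard linearly limb-darkened rotation profile (Gray 2005) and the convolution step, not the PyAstronomy implementation itself; it assumes the spectrum has already been resampled onto a grid of constant velocity step `dv`:

```python
import math

def rot_kernel(vsini, dv, epsilon=0.6):
    """Rotational broadening kernel G(v) for a linearly limb-darkened
    disc, sampled every dv km/s and normalized to unit sum."""
    n = int(vsini / dv)
    kern = []
    for k in range(-n, n + 1):
        x = k * dv / vsini            # velocity in units of v sin i
        y = max(1.0 - x * x, 0.0)
        kern.append(2.0 * (1.0 - epsilon) * math.sqrt(y)
                    + 0.5 * math.pi * epsilon * y)
    s = sum(kern)
    return [g / s for g in kern]

def broaden(flux, vsini, dv, epsilon=0.6):
    """Convolve a spectrum (uniform velocity grid, step dv) with the
    rotational kernel; edges are padded with the boundary values."""
    kern = rot_kernel(vsini, dv, epsilon)
    half = len(kern) // 2
    out = []
    for i in range(len(flux)):
        acc = 0.0
        for j, g in enumerate(kern):
            idx = min(max(i + j - half, 0), len(flux) - 1)
            acc += g * flux[idx]
        out.append(acc)
    return out
```

Because the kernel is normalized, a flat continuum is left unchanged and absorption lines are smeared symmetrically out to $\pm v\sin{i}$.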
The contribution of the secondary component is weak throughout the majority of
the observed spectra; however, absorption associated with each component’s
H$\alpha$ line and O i triplet near 7770 Å is discernible by eye. We
attempted to model (1) H$\alpha$ in order to constrain each component’s
$T_{\rm eff}$, $\log{g}$, ${\rm[M/H]}$, and ratio of the radii ($R_{\rm
B}/R_{\rm A}$) and (2) the O i triplet in order to constrain each component’s
$v\sin{i}$. (The significant non-LTE contributions affecting the O i triplet
make it unsuitable for more sophisticated interpretation using our LTE
models.)
The O i triplet modelling was carried out first using disentangled spectra,
which were obtained with the help of the code described by Folsom et al.
(2010) and applied in several other studies (Kochukhov et al., 2018; Kochukhov
& Shulyak, 2019). The fixed orbital solution found in Sect. 6 was adopted. The
spectral disentangling is challenging for $\tau^{9}$ Eri due to the weakness
of the spectral contribution of the secondary and the significant intrinsic
variability of the primary. The O i triplet is one of the few spectral
features for which disentangling yields satisfactory results. Each component’s
$v\sin{i}$ was estimated by modelling the disentangled spectra independently.
The effective temperatures were fixed at the most-probable values inferred
from the double-star SED fitting ($T_{\rm eff,A}=12690$ K and $T_{\rm
eff,B}=11200$ K) while $\log{g}$ and ${\rm[M/H]}$ were fixed at
$4.0\,{\rm(cgs)}$ and $0.0$, respectively. A scaling factor was included as a
free parameter to account for continuum dilution. The dynesty code yielded
$v\sin{i}_{\rm A}=26.8\pm 0.5$ km s${}^{-1}\,$ and $v\sin{i}_{\rm B}=15\pm 3$
km s${}^{-1}\,$ as estimated from the posterior probability distributions. The
derivation of $v\sin{i}_{\rm B}$ was also repeated using $T_{\rm eff}=7530$ K, based on an estimate of the secondary’s temperature inferred from
its HRD position (see Sect. 7.3 below), which yielded the same value. The fits
to the disentangled O i triplet are shown in Fig. 6 (top).
Figure 6: _Top:_ Comparison between the disentangled O i triplet and the model
spectra used to derive $v\sin{i}$ of each component. The residuals are shown
in the lower panel. _Bottom:_ Best-fitting synthetic spectrum (dashed red)
compared with the observed spectrum (solid black) for H$\alpha$.
The modelling of H$\alpha$ was carried out using the observed spectra obtained
at an epoch at which the two spectral components were found to exhibit some of
the largest relative radial velocities ($\phi_{\rm orb}=0.968$). We selected a
60 Å-width region approximately centered on H$\alpha$; the spectral window was
normalized by fitting a first order polynomial to the continuum located near
the boundaries. Similar to the SED-fitting analysis, two models were adopted:
one consisting of just the primary component and one consisting of both
components. The single-star model consists of three free parameters ($T_{\rm
eff,A}$, $\log{g}_{\rm A}$, and ${\rm[M/H]}$) while the double-star model
includes an additional four free parameters ($T_{\rm eff,B}$, $\log{g}_{\rm
B}$, ${\rm[M/H]}_{\rm B}$, and $R_{\rm B}/R_{\rm A}$); $v\sin{i}_{\rm A}$ and
$v\sin{i}_{\rm B}$ were fixed at the values derived from the O i modelling.
Uniform priors were adopted but only $R_{\rm B}/R_{\rm A}<1$ solutions were
permitted.
The single-star model yielded $T_{\rm eff,A}=12580_{-120}^{+150}\,{\rm K}$,
$\log{g}_{\rm A}=4.12_{-0.04}^{+0.05}\,{\rm(cgs)}$, and ${\rm[M/H]}_{\rm
A}=0.5\pm 0.1$, which are consistent with the values derived using the double-
star model. We note that the primary component exhibits the strong atmospheric
chemical peculiarities common to Bp stars. Since the peculiar abundances are
not well represented by a scaled solar abundance table, the global metallicity
parameter ${\rm[M/H]}$ has little physical meaning. A more sophisticated
chemical abundance analysis will eventually be required to characterize the
detailed chemical composition of the primary component’s atmosphere.
As with the SED fitting analysis, we evaluated the statistical significance of
the double-star model using the BIC. Notwithstanding that the secondary
contributes discernibly to the H$\alpha$ profile, we find $\Delta{\rm
BIC}=1.2$ implying that the single-star model is preferred. This is likely an
indication that the accuracy (in particular the broadening and lack of non-LTE
effects) of our H$\alpha$ profile models is insufficient to achieve a formal
improvement of the quality of the fit.
The single-star model fit to H$\alpha$ is shown in Fig. 6 (bottom).
Given that both the SED and H$\alpha$ modelling analyses did not yield a
statistically significant improvement to the fits using the double-star model
relative to the single-star model, we conclude that our analyses do not
provide useful fundamental parameter constraints (aside from $v\sin{i}_{\rm
B}$) for the secondary component.
### 7.3 Hertzsprung-Russell diagram
Figure 7: HRD position of the primary component (red circle) and its best-
fitting isochrone (dashed red). The position of the secondary component (blue
square) is estimated using the evolutionary models (see Sect. 7.3). The dot-
dashed black lines correspond to the solar metallicity, non-rotating MIST
stellar evolutionary models computed by Choi et al. (2016); the dotted black
line corresponds to the ZAMS.
The mass and age of HD 25267’s primary component were derived by locating the
star on the HRD using emcee (Foreman-Mackey et al., 2013) in conjunction with
the MESA Isochrones & Stellar Tracks (MIST) grid of evolutionary models (Choi
et al., 2016; Dotter, 2016). This grid has been computed using the Modules for
Experiments in Stellar Astrophysics (MESA) code (Paxton et al., 2011; Paxton
et al., 2013, 2015) over a wide range of masses ($0.1\leq M/M_{\odot}\leq
300$) and Fe abundances ($-4\leq{\rm[Fe/H]}\leq 0.5$). Interpolation of the
model evolutionary tracks was carried out using the isochrones Python package
(Morton, 2015), which interpolates over mass, [Fe/H], and the so-called
“equivalent evolutionary point” (i.e. evolutionary points defined by specific
phases such as the zero age and terminal age main sequence).
Three priors were adopted for the MCMC fitting: a Gaussian [Fe/H] prior based
on the $0.0\pm 0.2$ value reported for solar neighbourhood FGK dwarfs
(Casagrande et al., 2011), an initial mass function prior (Chabrier, 2003),
and an age prior (Eqn. 17 of Angus et al., 2019). Using $T_{\rm eff,A}$
derived from the spectral modelling analysis and $R_{\rm A}$ derived from the
SED modelling analysis, we derived a mass and age for the primary component of
$M_{\rm A}=3.6_{-0.2}^{+0.1}\,M_{\odot}$ and $t_{\rm age,A}=140_{-30}^{+40}$
Myr where the uncertainties correspond to $1\sigma$.
As noted in the previous section, no conclusive constraints for the secondary
component’s effective temperature and radius could be derived from
either the SED or H$\alpha$ modelling. We attempted to roughly estimate
$T_{\rm eff,B}$ and $R_{\rm B}$ based on the grid of evolutionary models. This
involved using $M_{\rm A}$ derived above and $M_{\rm A}/M_{\rm B}$ derived
from the radial velocity analysis presented in Sect. 6 to obtain $M_{\rm
B}=1.6\pm 0.1\,M_{\odot}$. We adopted this $M_{\rm B}$ value and assumed an
age equal to that derived for the primary component and carried out the same
MCMC analysis, which yielded $T_{\rm eff,B}=7530_{-510}^{+580}$ K and $R_{\rm
B}=1.5\pm 0.1\,R_{\odot}$. In Fig. 7, we plot the primary component’s location
and the secondary component’s estimated location on the HRD. These are
compared with the solar metallicity, non-rotating MIST grid.
Finally, we computed a synthetic SED and spectrum of the system’s H$\alpha$
region assuming the secondary properties derived above and confirmed that they
are consistent with the general lack of a strong contribution of the
secondary’s light in the observations.
Figure 8: From left to right: $B_{\ell}$ measurements phased to rotational periods of $3.82262$ d, $1.209911$ d, and $5.95382$ d. Curves are sinusoidal fits to the phased data. Figure 9: Rotational variability of the longitudinal field and photometric brightness of $\tau^{9}$ Eri. All data were phased with rotational period according to Eq. (3). Red and green curves represent TESS and SMEI photometry, respectively. The points are average values in 0.02-cycle phase intervals. Prior to phasing, the photometries were freed from the contribution of terms other than $f_{\rm rot}$ and its harmonic. Blue dots are the longitudinal magnetic field measurements. The fit consisting of two sinusoidal terms with $f_{\rm rot}$ and $2f_{\rm rot}$ is shown with the blue line. Table 5: Adopted fundamental parameters of the primary and secondary components derived in Sect. 7. $M_{\rm B}$ was derived by combining the primary component’s evolutionary mass with the dynamical mass ratio listed in Table 4; $T_{\rm eff,B}$, $R_{\rm B}$, and $\log{(L_{\rm B}/L_{\odot})}$ are estimated from evolutionary models (see Sect. 7.3). The orbital inclination, $i$, was inferred via comparison of the primary’s evolutionary mass and the dynamical mass from Table 4. Parameter | Primary | Secondary
---|---|---
$T_{\rm eff}$ (K) | $12580_{-120}^{+150}$ | $7530_{-510}^{+580}$
$R$ ($R_{\odot}$) | $3.06\pm 0.06$ | $1.5\pm 0.1$
$\log{(L/L_{\odot})}$ | $2.32\pm 0.03$ | $0.8\pm 0.1$
$v\sin{i}$ (km s${}^{-1}\,$) | $26.8\pm 0.5$ | $15\pm 3$
$t_{\rm age}$ (Myr) | $140_{-30}^{+40}$ |
$M$ ($M_{\odot}$) | $3.6_{-0.2}^{+0.1}$ | $1.6\pm 0.1$
$i$ ($\degr$) | $40\pm 1$ |
## 8 Rotation period and magnetic geometry of the Bp star
The period analysis presented in Sect. 5 and summarized in Table 2 indicates
that the 5.95 d orbital period is detected in the RV data and possibly weakly
in the TESS data. The $B_{\ell}$ and EW measurements appear to show
significant modulation only according to the 3.82 d period, while the
photometry shows clear signal arising from both the 3.82 d period and the 1.21
d period, with no evidence for the 5.95 d period.
To underscore these results, in Fig. 8 we show the $B_{\ell}$ measurements
phased according to each of the periods mentioned above. This figure shows
clearly that while the magnetic measurements phase well with the 3.82 d
period, no coherent variation is achieved for either the 1.21 d or 5.95 d
periods. As a consequence we conclude that the 3.82 d period corresponds to
the rotation period of the magnetic Bp star primary. We therefore adopt the
rotational ephemeris:
$T_{B_{\ell},{\rm max}}(E_{\rm
rot})=\mbox{HJD}\,2456528.73(5)+3.82262(4)\cdot{E_{\rm rot}},$ (3)
where $E_{\rm rot}$ represents the number of rotational periods from the
initial epoch at maximum $B_{\ell}$ field strength. We have adopted the
precise SMEI period as the rotational period. When phased with this ephemeris,
the $B_{\ell}$ measurements describe an approximately sinusoidal variation.
The phased measurements are shown with a superimposed least-squares fit in
Fig. 9, along with the extracted $3.82$ d photometric variation. The reduced
$\chi^{2}$ of a first-order fit is 4.1, versus 1.8 for a second-order fit.
This implies that non-sinusoidal variability, likely introduced by the
presence of non-uniform surface chemical abundance distributions, is present
in the longitudinal field curve.
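Phasing observations with the ephemeris of Eq. (3) is a simple modular-arithmetic operation; a minimal sketch, using the $T_0$ and period quoted in Eq. (3):

```python
def rot_phase(hjd, t0=2456528.73, period=3.82262):
    """Rotational phase from the ephemeris of Eq. (3); phase 0
    corresponds to the epoch of maximum longitudinal field."""
    return ((hjd - t0) / period) % 1.0
```

Measurements taken an integer number of periods after $T_0$ return to phase 0, so data from widely separated epochs can be overplotted on a single rotation cycle as in Fig. 9.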
Combining the adopted rotation period with the inferred stellar radius
($3.06\pm 0.06\leavevmode\nobreak\ R_{\odot}$; Table 5) and projected
rotational velocity ($26.8\pm 0.5$ km s${}^{-1}\,$; Sect. 7.2) implies a
rotation axis inclination of $i=41\pm 2\degr$. The similarity between this
value and the inferred orbital inclination will be discussed in further detail
in Sect. 11.
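The inclination follows from $\sin{i}=(v\sin{i})/v_{\rm eq}$ with $v_{\rm eq}=2\pi R/P_{\rm rot}$. A quick check using the adopted values from the text:

```python
import math

R_SUN_KM = 6.957e5
DAY_S = 86400.0

def rot_inclination(vsini_kms, period_days, radius_rsun):
    """Rotation-axis inclination (degrees) from
    sin(i) = (v sin i) / v_eq, with v_eq = 2*pi*R / P_rot."""
    v_eq = 2.0 * math.pi * radius_rsun * R_SUN_KM / (period_days * DAY_S)
    return math.degrees(math.asin(vsini_kms / v_eq))

# Adopted values: v sin i = 26.8 km/s, P_rot = 3.82262 d, R = 3.06 R_sun,
# giving i close to the quoted 41 +/- 2 degrees.
i_rot = rot_inclination(26.8, 3.82262, 3.06)
```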
We can infer the magnetic field strength and geometry of the primary’s dipole
component directly from the longitudinal field variation. We modelled the phased $B_{\ell}$ curve using Landstreet’s fldcurv code. Assuming a limb-darkening coefficient of 0.3 and adopting the $i=41\pm 2\degr$ derived above, we obtain $\beta=158\pm
5\degr$ and $B_{\rm d}=1040\pm 50$ G. Hence the primary star is inferred to
have a moderately strong ($\sim$1 kG) dipole that is slightly
($\sim$20$\degr$) offset from its rotation axis.
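fldcurv solves the full model numerically, but for a centred dipole the longitudinal field variation reduces to the standard oblique-rotator expression (Stibbs 1950; Preston 1967). The sketch below evaluates that expression with the adopted parameters; it is a simplified stand-in, not the fldcurv calculation, and the sign convention of the data is assumed:

```python
import math

def bl_curve(phase, b_d, incl_deg, beta_deg, u=0.3):
    """Longitudinal field of a centred-dipole oblique rotator with
    linear limb-darkening coefficient u (Stibbs 1950; Preston 1967):
    B_l = B_d * (15+u)/(20(3-u)) * (cos(b)cos(i) + sin(b)sin(i)cos(2*pi*phase))."""
    i = math.radians(incl_deg)
    b = math.radians(beta_deg)
    geom = (15.0 + u) / (20.0 * (3.0 - u))
    return b_d * geom * (math.cos(b) * math.cos(i)
                         + math.sin(b) * math.sin(i)
                         * math.cos(2.0 * math.pi * phase))

# With i = 41 deg, beta = 158 deg, B_d = 1040 G and u = 0.3, this model
# curve is everywhere of one sign and roughly sinusoidal, varying between
# about -130 G and -280 G over the rotation cycle.
extrema = [bl_curve(p, 1040.0, 41.0, 158.0) for p in (0.0, 0.5)]
```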
## 9 Magnetic field of the secondary star
The secondary component shows no evidence of circular polarisation associated
with its mean line in the 16 observations that are unblended with the primary.
We measured the longitudinal field from each observation using an integration
range of $\pm 27$ km s${}^{-1}\,$ about the centre-of-gravity of the profile.
All are consistent with zero field, with a median $B_{\ell}$ error bar of
about 18 G, and a best $B_{\ell}$ error bar of 12 G. We conclude that the
secondary component of $\tau^{9}$ Eri is unlikely to have a surface dipole
magnetic field stronger than about 10 times the median error bar, i.e. a few
hundred gauss. However, we can arguably obtain a more realistic constraint if
we assume that the secondary’s rotational axis is aligned with the orbital
axis. This is a reasonable assumption if the primary exhibits aligned
rotation, since the lower-mass secondary is expected to achieve orbital
alignment first. In this case we obtain an upper limit of about 250 G,
assuming an obliquity of $\beta=90\degr$.
## 10 Origin of the 1.21 day period
With the 5.95 d period established as the orbital period, and the 3.82 d
period established as the primary’s rotational period, we are left to explain
the origin of the 1.21 d period (corresponding to $f_{1}$). This period,
previously reported by Manfroid et al. (1985), Catalano et al. (1991), and
Bernhard et al. (2020), heretofore remained a mystery. However, the
temperature of the primary component, coupled with the compatibility of this
period with the known range of $g$-mode frequencies and the detection of
additional frequencies in this range (since $g$-mode pulsation is frequently
detected in multiple independent modes) in two independent data sets, strongly
suggests that the 1.21 d period corresponds to a $g$-mode. $g$-modes arise in mid-B-type stars (SPB stars) as well as in F-type stars ($\gamma$ Dor stars).
However, the detection of the difference frequency $f_{6}$ firmly establishes
the association of this frequency with the magnetic Bp star. This conclusion
is also supported by the large intensity of the 1.21 d modulation, given the
rather small contribution of the secondary star to the system’s flux.
For completeness we point out that the 1.21 d period could also potentially be
associated with the rotation of the secondary star. However, adopting this
period as its rotational period, the measured $v\sin i$ and inferred radius of
the secondary (15 km s${}^{-1}\,$ and $1.5\leavevmode\nobreak\ R_{\odot}$
respectively; Table 5) imply a very low rotation axis inclination
($\sim$10$\degr$) which is geometrically unlikely and difficult to reconcile
with the large observed photometric amplitude.
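The geometric argument above can be checked with the same $\sin{i}$ relation used for the primary. With the secondary's adopted radius and $v\sin{i}$, treating 1.21 d as its rotation period implies an inclination of order 10 to 15 degrees (the text's $\sim$10$\degr$ is recovered towards the lower end of the $v\sin{i}$ uncertainty):

```python
import math

R_SUN_KM = 6.957e5
DAY_S = 86400.0

# If the 1.21 d period were the secondary's rotation period, its
# equatorial velocity would be v_eq = 2*pi*R/P (~63 km/s for R = 1.5 R_sun),
# and the measured v sin i = 15 km/s would force a very low inclination.
v_eq = 2.0 * math.pi * 1.5 * R_SUN_KM / (1.21 * DAY_S)
i_sec = math.degrees(math.asin(15.0 / v_eq))
```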
We therefore conclude that the primary component of HD 25267 is a magnetic SPB
star, and the dominant 1.21 d modulation is the most intense of several
$g$-modes that are detected in its photometric amplitude spectrum.
The residual variability in the TESS photometry is likely due to additional $g$-modes, since the spectrum of $g$-modes is very dense. Whether some of this variability is also contributed by $g$-modes in the secondary remains an open question.
## 11 Summary, discussion and conclusions
We have analyzed 17 ESPaDOnS spectropolarimetric observations of the bright,
nearby spectroscopic binary $\tau^{9}$ Eri, supported by Hipparcos, SMEI and
TESS photometry. We have discovered the weak mean spectral line of the
secondary component, and detect a strong magnetic field in the primary
component. We conclude that $\tau^{9}$ Eri consists of a late-type main
sequence Bp star ($M=3.6_{-0.2}^{+0.1}\leavevmode\nobreak\ M_{\odot}$, $T_{\rm
eff}=12580_{-120}^{+150}$ K) with a rotation period of $3.82262(4)$ days in a
mildly eccentric $5.95382(2)$-day orbit with a late A/early F main-sequence
secondary companion ($M=1.6\pm 0.1\leavevmode\nobreak\ M_{\odot}$, $T_{\rm
eff}=7530_{-510}^{+580}$ K). The Bp star’s magnetic field is approximately
dipolar with Oblique Rotator parameters $i=41\pm 2\degr$, $\beta=158\pm
5\degr$ and $B_{\rm d}=1040\pm 50$ G.
Our detection of the secondary’s spectral lines establishes $\tau^{9}$ Eri as
the nearest and brightest spectroscopic binary containing a magnetic Ap/Bp
star.
One interesting feature of this system is the similarity between the
inclination of primary’s rotation axis and the orbital axis, consistent with
spin-orbit alignment. Since the lower-mass secondary star likely experienced
spin-orbit alignment before the primary, its rotation axis inclination is also
likely $\sim$40$\degr$. In this case its derived radius and $v\sin i$ indicate
that it has a rotation period similar to that of the primary. Given that the
system has clearly not achieved spin-orbit synchronization, we are unable to
provide a physical reason for the similarity of the measured primary rotation
period and inferred secondary rotation period, and conclude that it must be a
coincidence.
A second interesting feature of this system is the near anti-alignment of the
primary’s magnetic and rotation axes. Pablo et al. (2019) studied the strength
of tidal and electromagnetic interactions in the doubly-magnetized binary
system $\epsilon$ Lup. They derived that in that system, the lowest-energy
configuration for the magnetic axes has them anti-aligned and parallel with
the rotation axes. $\tau^{9}$ Eri is in many ways very similar to the system
HD 98088 (Folsom et al., 2013). Both have orbital periods near 6 d, both are
moderately eccentric, and both contain a magnetic Ap/Bp primary and a less
massive and less luminous secondary. Folsom et al. (2013) concluded that the
magnetic axis of the Ap star HD 98088A is nearly perpendicular to its rotation
axis, and that its magnetic axis points approximately at the secondary star as
the two stars orbit. (This is only possible because HD 98088 exhibits
synchronized orbital and rotational periods.) Hence HD 25267A provides another
rare and interesting datum concerning the magnetic geometries of magnetic
stars in close binary systems.
An additional period of $1.21$ d that is clearly detected in all photometric
datasets is interpreted as one of several likely $g$-modes detected in the
SMEI and TESS lightcurves. This makes $\tau^{9}$ Eri a very interesting system
in the modern context of stellar magnetism: a close spectroscopic binary
containing a (multi-periodic) pulsating, magnetic star. Moreover, the
detection of the 5.95 d orbital period in the TESS photometry (corresponding
to $f_{10}$) suggests that the stars are sufficiently close to interact
tidally. Given its brightness and proximity, this system is ripe for detailed
follow-up.
## Acknowledgments
We thank Dr. Alexandre David-Uraz for assistance with the TESS data. We also
thank Dr. Coralie Neiner for helpful suggestions. GAW is supported by a
Discovery Grant from the Natural Sciences and Engineering Research Council
(NSERC) of Canada. OK acknowledges funding from the Swedish Research Council
and the Swedish National Space Agency. APi acknowledges support from the
National Science Centre (NCN) grant 2016/21/B/ST9/01126. This work has made
use of the VALD database, operated at Uppsala University, the Institute of
Astronomy RAS in Moscow, and the University of Vienna.
## Data Availability
The spectropolarimetric data underlying this article are available from the
Polarbase database (http://polarbase.irap.omp.eu), and are uniquely identified
with the spectrum IDs listed in Table 1. The photometric data are available
from their respective public databases.
## References
* Angus et al. (2019) Angus R., et al., 2019, AJ, 158, 173
* Babcock (1958) Babcock H. W., 1958, ApJS, 3, 141
* Babcock & Cowling (1953) Babcock H. W., Cowling T. G., 1953, MNRAS, 113, 357
* Bernhard et al. (2020) Bernhard K., Hümmerich S., Paunzen E., 2020, MNRAS, 493, 3293
* Borra & Landstreet (1980) Borra E., Landstreet J., 1980, ApJ, 42, 24
* Campbell & Moore (1928) Campbell W. W., Moore J. H., 1928, Publications of Lick Observatory, 16, 1
* Casagrande et al. (2011) Casagrande L., Schönrich R., Asplund M., Cassisi S., Ramírez I., Meléndez J., Bensby T., Feltzing S., 2011, A&A, 530, A138
* Catalano et al. (1991) Catalano F. A., Kroll R., Leone F., 1991, A&A, 248, 179
* Chabrier (2003) Chabrier G., 2003, PASP, 115, 763
* Choi et al. (2016) Choi J., Dotter A., Conroy C., Cantiello M., Paxton B., Johnson B. D., 2016, ApJ, 823, 102
* Coelho (2014) Coelho P. R. T., 2014, MNRAS, 440, 1027
* Cohen et al. (2003) Cohen M., Wheaton W. A., Megeath S. T., 2003, AJ, 126, 1090
* Díaz-Cordovés et al. (1995) Díaz-Cordovés J., Claret A., Giménez A., 1995, A&AS, 110, 329
* Donati et al. (1997) Donati J. F., Semel M., Carter B. D., Rees D. E., Collier Cameron A., 1997, MNRAS, 291, 658
* Dotter (2016) Dotter A., 2016, ApJS, 222, 8
* Eyles et al. (2003) Eyles C. J., et al., 2003, Sol. Phys., 217, 319
* Folsom et al. (2010) Folsom C. P., Kochukhov O., Wade G. A., Silvester J., Bagnulo S., 2010, MNRAS, 407, 2383
* Folsom et al. (2013) Folsom C. P., Likuski K., Wade G. A., Kochukhov O., Alecian E., Shulyak D., 2013, MNRAS, 431, 1513
* Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
* Frost (1908) Frost E. B., 1908, Astronomische Nachrichten, 177, 171
* Gaia Collaboration et al. (2016) Gaia Collaboration et al., 2016, A&A, 595, A1
* Gaia Collaboration et al. (2018) Gaia Collaboration et al., 2018, A&A, 616, A1
* Grunhut et al. (2017) Grunhut J. H., et al., 2017, MNRAS, 465, 2432
* Hensberge et al. (1981) Hensberge H., et al., 1981, A&AS, 46, 151
* Hujer (1928) Hujer C., 1928, ApJ, 67, 399
* Jackson et al. (2004) Jackson B. V., et al., 2004, Sol. Phys., 225, 177
* Kass & Raftery (1995) Kass R. E., Raftery A. E., 1995, Journal of the American Statistical Association, 90, 773
* Kochukhov & Shulyak (2019) Kochukhov O., Shulyak D., 2019, ApJ, 873, 69
* Kochukhov et al. (2010) Kochukhov O., Makaganiuk V., Piskunov N., 2010, A&A, 524, A5
* Kochukhov et al. (2018) Kochukhov O., Johnston C., Alecian E., Wade G. A., 2018, MNRAS, 478, 1749
* Leone & Catanzaro (1999) Leone F., Catanzaro G., 1999, A&A, 343, 273
* Lindegren et al. (2018) Lindegren L., et al., 2018, A&A, 616, A2
* Manfroid et al. (1985) Manfroid J., Mathys G., Heck A., 1985, A&A, 144, 251
* Mathys (2017) Mathys G., 2017, A&A, 601, A14
* Morton (2015) Morton T. D., 2015, isochrones: Stellar model grid package (ascl:1503.010)
* Pablo et al. (2019) Pablo H., et al., 2019, MNRAS, 488, 64
* Paxton et al. (2011) Paxton B., Bildsten L., Dotter A., Herwig F., Lesaffre P., Timmes F., 2011, ApJS, 192, 3
* Paxton et al. (2013) Paxton B., et al., 2013, ApJS, 208, 4
* Paxton et al. (2015) Paxton B., et al., 2015, ApJS, 220, 15
* Piskunov et al. (1995) Piskunov N. E., Kupka F., Ryabchikova T. A., Weiss W. W., Jeffery C. S., 1995, A&AS, 112, 525
* Ricker et al. (2014) Ricker G. R., et al., 2014, in Space Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave. p. 914320 (arXiv:1406.0151)
* Ricker et al. (2015) Ricker G. R., et al., 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
* Rufener & Nicolet (1988) Rufener F., Nicolet B., 1988, A&A, 206, 357
* Sahade (1950) Sahade J., 1950, ApJ, 111, 437
* Schwarz (1978) Schwarz G., 1978, Annals of Statistics, 6, 461
* Shulyak et al. (2004) Shulyak D., Tsymbal V., Ryabchikova T., Stütz C., Weiss W. W., 2004, A&A, 428, 993
* Speagle (2020) Speagle J. S., 2020, MNRAS, 493, 3132
* Struve & Hujer (1927) Struve O., Hujer C., 1927, ApJ, 65, 300
* Tkachenko (2015) Tkachenko A., 2015, A&A, 581, A129
* Tokovinin (1992) Tokovinin A., 1992, in McAlister H. A., Hartkopf W. I., eds, Astronomical Society of the Pacific Conference Series Vol. 32, IAU Colloq. 135: Complementary Approaches to Double and Multiple Star Research. p. 573
* Tsymbal (1996) Tsymbal V., 1996, in Adelman S. J., Kupka F., Weiss W. W., eds, Astronomical Society of the Pacific Conference Series Vol. 108, M.A.S.S., Model Atmospheres and Spectrum Synthesis. p. 198
* Vergely et al. (1998) Vergely J.-L., Ferrero R. F., Egret D., Köeppen J., 1998, A&A, 340, 543
* Wade et al. (2000) Wade G. A., Donati J.-F., Landstreet J. D., Shorlin S. L. S., 2000, MNRAS, 313, 851
* Wade et al. (2016) Wade G. A., et al., 2016, MNRAS, 456, 2
* Wilson & Devinney (1971) Wilson R. E., Devinney E. J., 1971, ApJ, 166, 605
* Wright et al. (2010) Wright E. L., et al., 2010, AJ, 140, 1868
# Structures of bulk hexagonal post-transition-metal chalcogenides from
dispersion-corrected density-functional theory
S. J. Magorrian, National Graphene Institute, University of Manchester, Booth Street East, Manchester M13 9PL, United Kingdom
V. Zólyomi, Hartree Centre, STFC Daresbury Laboratory, Daresbury WA4 4AD, United Kingdom
N. D. Drummond, Department of Physics, Lancaster University, Lancaster LA1 4YB, United Kingdom
###### Abstract
We use dispersion-corrected density-functional theory to determine the
relative energies of competing polytypes of bulk layered hexagonal post-
transition-metal chalcogenides, to search for the most stable structures of
these potentially technologically important semiconductors. We show that there
is some degree of consensus among dispersion-corrected exchange-correlation
functionals regarding the energetic orderings of polytypes, but we find that
for each material there are multiple stacking orders with relative energies of
less than 1 meV per monolayer unit cell, implying that stacking faults are
expected to be abundant in all post-transition-metal chalcogenides. By fitting
a simple model to all our energy data, we predict that the most stable
hexagonal structure has P$6_{3}$/mmc space group in each case, but that the
stacking order differs between GaS, GaSe, GaTe, and InS on the one hand and
InSe and InTe on the other. At zero pressure, the relative energies obtained
with different functionals disagree by around 1–5 meV per monolayer unit cell,
which is not sufficient to identify the most stable structure unambiguously;
however, multi-GPa pressures reduce the number of competing phases
significantly. At higher pressures, an AB′-stacked structure of the most
stable monolayer polytype is found to be the most stable bulk structure; this
structure has not been reported in experiments thus far.
## I Introduction
The hexagonal post-transition-metal chalcogenides (PTMCs) GaS, GaSe, GaTe,
InS, InSe, and InTe are layered materials with hexagonal Bravais lattices [1,
2, 3]. Due to the possibility of isolating mono- and few-layer films, in their
ultrathin form they have received considerable attention in recent years as a
new class of two-dimensional (2D) semiconductor [4, 5, 6, 7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17]. The two dynamically stable structures of PTMC monolayers
are shown in Fig. 1, and are based on the honeycomb motif [6, 7]. Bulk PTMCs
have direct band gaps of $\sim 1.3$–$2.5$ eV [18, 19, 20, 21], light out-of-
plane effective masses [22, 23, 24, 25, 26, 27], and strongly nonlinear
optical properties such as second harmonic generation, optical gain, and
up-/down-conversion [28, 29, 30, 31, 32]. InSe exhibits high in-plane electron
mobility [33], which persists in the thin-film limit, and has enabled the
observation of the quantum Hall effect [13] and the demonstration of PTMCs as
candidate ultrathin transistors [4, 12]. InSe has also shown potential for
applications in photovoltaics [34, 35] and electron-beam-based data storage
[36].
Figure 1: (a) Top and (b) side views of the $\alpha_{\rm M}$ polytype of
monolayer GaS, and (c) top and (d) side views of the $\beta_{\rm M}$ polytype
[6, 7]. Gallium and sulfur atoms are shown in red and yellow, respectively.
Thin films of PTMCs exhibit high-sensitivity broadband photoresponse [5, 8, 9,
11]. They also show a substantial increase in the band gap, from $1.3$ eV in
bulk InSe to $\sim 2.8$ eV in monolayer InSe [13], and from $2$ eV in bulk
GaSe to $\sim 3.5$ eV in monolayer GaSe [14, 16]. An offset in the location of
the valence-band maximum has been shown to develop in the thinnest films [37,
17], yielding a slightly indirect band gap, unlike the bulk. Combined with the
high density of states at the band edge, this is expected to lead to strongly
correlated phenomena in p-doped monolayer PTMCs [6, 7, 10] as well as
interesting thermoelectric properties [15].
The high tunability of the physical properties of PTMC films stems from the
strong electronic coupling between states localized on neighboring layers
[38]. For this reason, PTMCs are likely to be highly sensitive to changes
brought about by variations in stacking order. The influence of stacking and
interlayer interactions has already been shown to be important in, for
example, the metallic transition-metal dichalcogenides [39, 40, 41], which
feature multiple stacking orders very close in energy. The local stacking
order will vary continuously in the moiré superlattices formed when monolayers
are stacked with a relative rotation or lattice-constant mismatch. In twisted
bilayers of 2D materials with small misalignments and/or lattice-constant
mismatches, the constituent monolayers can adjust to maximize the size of
regions of energetically favorable stacking [42, 43]. Compared to graphene and
transition-metal dichalcogenides, PTMCs have low Young’s moduli and are highly
flexible [44, 45, 46], so in-plane relaxation can be expected to occur more
readily, starting from larger twist angles, and featuring stronger nonuniform
strain fields, than in twisted transition-metal-dichalcogenide bilayers. To
describe such reconstruction in moiré superlattices of PTMCs it is essential
first to attain a proper understanding of the energetics of the various PTMC
polytypes and structures, and the factors contributing to their formation.
In this work, we use a range of dispersion-corrected density-functional-theory
(DFT) methods to investigate systematically the energies and stabilities of
the competing polytypes of bulk layered hexagonal PTMCs. We provide an
expression for the energy per monolayer unit cell of an arbitrary bulk
hexagonal PTMC polytype. We find that each PTMC generally admits a few
polytypes that are energetically very similar, implying that crystal-growth
conditions are likely to be important. Motivated by the observation of
electronic and structural changes in PTMCs under pressure [47, 48, 49, 50], we
also investigate the pressure-dependence of the relative stability of
competing polytypes.
The post-transition-metal (PTM) atoms that we consider are indium and gallium,
while the chalcogen atoms that we consider are sulfur, selenium, and
tellurium. The PTM atoms are strongly bonded in vertical dimers lying on a
hexagonal sublattice. Each PTM atom is strongly bonded to three chalcogen
atoms lying on a different hexagonal sublattice to the PTM dimers. There are
two different single-layer polytypes, as shown in Fig. 1: the chalcogen atoms
may all lie on the same sublattice, or the top and bottom chalcogen atoms may
lie on different sublattices [6, 7]. The former structure, referred to as the
$\alpha_{\rm M}$ monolayer polytype, is slightly more stable and has vertical
mirror symmetry $\sigma_{h}$ about the center of the layer, although it lacks
inversion symmetry ($D_{3h}$ point group). The latter structure, referred to as the
$\beta_{\rm M}$ monolayer polytype, does not have vertical mirror symmetry,
but it does have inversion symmetry ($D_{3d}$ point group). In bulk hexagonal PTMCs
there are further possibilities for polytypism due to the different ways in
which the layers can be stacked. Our reference structure is the simplest
possible bulk structure, which consists of AA-stacked $\alpha_{\rm M}$-PTMC
monolayers, with a four-atom primitive unit cell.
A range of polytypes and stacking orders have been reported for the bulk
structures of the PTMCs obtained in experiments [51]. The $\beta$ [52] and
$\varepsilon$ [53] 2H polytypes both have $\sigma_{h}$ reflection symmetry,
with the former also having an inversion center. Meanwhile, the $\gamma$ 3R
polytype [54] has a single-layer primitive unit cell, and has neither
inversion nor $\sigma_{h}$ reflection symmetry. A polytype known as $\delta$,
consisting of a four-layer unit cell with two interfaces between successive
layers stacked as in the $\beta$ polytype and the other two interfaces stacked
as in the $\gamma$ polytype, has also been reported for GaSe [55]. Note that
the $\beta_{\rm M}$ monolayer polytype should not be confused with the $\beta$
polytype of bulk PTMCs: the former refers to the inversion-symmetric monolayer
shown in Figs. 1(c) and 1(d), the latter to the bulk crystal in which
non-inversion-symmetric monolayers [the $\alpha_{\rm M}$ monolayer polytype
shown in Figs. 1(a) and 1(b)] stack in an AB-type arrangement that restores
inversion symmetry. To avoid confusion, in Sec. II we adopt a notation for
PTMC stacking that enables unambiguous characterization of all PTMC crystals
irrespective of the monolayer polytypes or stacking order of successive
layers.
The rest of this paper is structured as follows. Our approach for enumerating
physically relevant PTMC structures is described in Sec. II. We present a
fitting function to describe the energetics of PTMC polytypes in Sec. III. We
compare the DFT energies of PTMC polytypes obtained with different
exchange–correlation functionals in Sec. IV. Our analysis of the most stable
polytypes, including the effects of pressure, is presented in Sec. V. We
examine the relationship between the electronic band gap and the energetic
stability of polytypes in Sec. VI. Finally, we draw our conclusions in Sec.
VII. Our DFT simulation parameters can be found in Appendix A.
## II Characterization of structures
The bulk hexagonal PTMC geometries we have examined are as follows. (i) We
assume that each sublayer of chalcogen atoms and each sublayer of vertical PTM
dimers lie at the A, B, or C hexagonal sublattice sites, because energy minima
are overwhelmingly likely to occur at these high-symmetry configurations. (ii)
We assume that each chalcogen sublayer lies on a different hexagonal
sublattice to the PTM sublayer; our DFT calculations for InSe confirm that the
energy is around 2 eV per monolayer unit cell higher each time the chalcogen
atoms are on the same sublattice as the PTM dimers. In general a two-layer
structure is a 2H polytype in Ramsdell notation [56], and a three-layer
structure is a 3H polytype. However, there are exceptions; e.g., the $\gamma$
structure is a 3R polytype with a rhombohedral primitive Bravais lattice.
Nevertheless, for consistency and ease of automation, we have used a hexagonal
unit cell in all our calculations.
We have performed DFT calculations for all such two-layer and three-layer bulk
structures. We refer to each of these configurations by a character string
summarizing the 2D hexagonal sublattice sites for each sublayer of atoms in
the unit cell. Upper-case letters (A, B, and C) are used for PTM-dimer
sublayers; lower-case letters (a, b, and c) are used for chalcogen sublayers.
The 2D hexagonal sublattice sites for a single sublayer are shown in Fig. 2.
For example, the string “aBabCa” describes a two-layer bulk structure in which
the PTM dimers lie on the B and C sublattices, while the chalcogen atoms in
the first layer are all at the A sublattice sites and the chalcogen atoms in
the second layer are at the B and A sublattice sites. In this notation, the
$\varepsilon$ polytype [53] is aBabCb, the $\beta$ polytype [52] is aBabAb,
the $\gamma$ polytype [54] is aBabCbcAc, and the $\delta$ polytype [55] is
aBabAbaCacAc.
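As a sketch of how this notation is read, a structure string splits into three-character layer blocks "xYz" (bottom chalcogen, PTM dimer, top chalcogen). The helper below is purely illustrative and is not part of the code used in this work:

```python
def layers(s):
    # Split a structure string into three-character layer blocks "xYz":
    # bottom chalcogen, PTM-dimer, and top chalcogen sublattice labels.
    assert len(s) % 3 == 0
    return [s[i:i + 3] for i in range(0, len(s), 3)]

def describe(s):
    # Print the sublattice assignment of each sublayer, layer by layer.
    for i, (x, Y, z) in enumerate(layers(s), start=1):
        print(f"layer {i}: chalcogens on {x.upper()}/{z.upper()}, "
              f"PTM dimers on {Y}")

describe("aBabCa")
# layer 1: chalcogens on A/A, PTM dimers on B
# layer 2: chalcogens on B/A, PTM dimers on C
```

This reproduces the reading of the "aBabCa" example given above.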
Figure 2: (a) 2D hexagonal sublattice labels A, B, and C for each sublayer,
used to construct structure label strings for bulk PTMCs. (b) Color-coded
structure of the “aBabCb”-stacked $\varepsilon$-GaSe as an example. The colors
of the atoms in panel (b) correspond to the colors of the sublattice sites in
panel (a).
PTMC structures are energetically invariant under rigid operations
(translations, rotations, and reflections). In-plane translations from one
sublattice to another correspond to even permutations of the sublattice labels
A, B, and C; thus, e.g., “aBabCa” is energetically equivalent to “bCbcAb.” In-
plane point rotations through $60^{\circ}$ or reflections in vertical planes,
together with translations, correspond to odd permutations of the sublattice
labels; thus “aBabCa” is equivalent to “aCacBa.” PTMC structures are also
equivalent under vertical displacements, which correspond to rotating the
structure strings through three characters; thus “aBabCa” is equivalent to
“bCaaBa.” Finally, structures are energetically invariant under reflections in
horizontal planes; thus “aCacBa” is equivalent to “aBcaCa.”
A program was written to loop over all valid structure strings (i.e., strings
in which each chalcogen atom is at a different sublattice site to the
neighboring PTM dimer) for multilayer bulk structures. Energetically
equivalent structure strings were eliminated and DFT input files for the
remaining structures were generated. We find that there are $2$ inequivalent
one-layer structures (these being the $\alpha_{\rm M}$ and $\beta_{\rm M}$
polytypes with AA stacking), $12$ inequivalent two-layer structures (two of
these being supercells of the one-layer structures), $62$ inequivalent three-
layer structures, $494$ inequivalent four-layer structures, $4292$
inequivalent five-layer structures, and $42158$ inequivalent six-layer
structures. The atomic positions and lattice vectors were relaxed within DFT
at zero external pressure, subject to the constraint of the initial symmetry.
The imposition of symmetry constrains the unit cell to be hexagonal and
constrains the atoms to 2D hexagonal sites, but it allows the sublayers to
relax in the out-of-plane direction and it also allows the $a$ and $c$
hexagonal lattice parameters to relax.
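The validity rule and the equivalence operations above can be encoded compactly. The following Python sketch (an illustrative reimplementation, not the original program) enumerates valid structure strings, canonicalizes each one under sublattice relabelings, vertical displacements, and horizontal reflection, and reproduces the counts of inequivalent structures quoted above:

```python
from itertools import permutations, product

PTM, CHAL = "ABC", "abc"

def valid_layers():
    # A layer "xYz" is valid if each chalcogen sublayer (x, z) lies on a
    # different hexagonal sublattice to the PTM-dimer sublayer (Y).
    return ["".join((x, Y, z)) for x, Y, z in product(CHAL, PTM, CHAL)
            if x != Y.lower() and z != Y.lower()]

def orbit(s):
    # All structure strings energetically equivalent to s: the 6 relabelings
    # of the sublattices (even permutations = in-plane translations, odd
    # permutations = 60-degree rotations / vertical reflections), combined
    # with rotation of the string through 3 characters (vertical displacement
    # by one layer) and string reversal (horizontal reflection).
    out = set()
    for perm in permutations("abc"):
        table = str.maketrans("abcABC", "".join(perm) + "".join(perm).upper())
        t = s.translate(table)
        for r in (t, t[::-1]):
            for k in range(0, len(r), 3):
                out.add(r[k:] + r[:k])
    return out

def count_inequivalent(n_layers):
    layers = valid_layers()  # the 12 valid single-layer strings
    canon = {min(orbit("".join(ls)))
             for ls in product(layers, repeat=n_layers)}
    return len(canon)
```

For example, `count_inequivalent(2)` returns 12, matching the number of inequivalent two-layer structures found above.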
## III Fit to the bulk PTMC energies
To represent the energy of each structure $S$ we fit
$E(S)=E_{\rm c}+\frac{1}{N_{\rm l}(S)}\left[n_{\rm nc}(S)E_{\rm nc}+n_{\rm np}(S)E_{\rm np}+n_{\rm ab}(S)E_{\rm ab}+n_{\rm snn}(S)E_{\rm snn}\right]$ (1)
to the energy $E$ per monolayer unit cell, where $N_{\rm l}(S)$ is the number
of PTMC monolayers in structure $S$, $n_{\rm nc}(S)$ is the number of places
in the unit cell in which neighboring chalcogen atoms are on different
hexagonal sublattice sites, $n_{\rm np}(S)$ is the number of places in the
unit cell in which PTM dimers in neighboring layers are on different hexagonal
sites, $n_{\rm ab}(S)$ is the number of $\beta_{\rm M}$-polytype layers in the
unit cell, and $n_{\rm snn}(S)$ is the number of places in which the next-
nearest chalcogen atom is on the same hexagonal site as a PTM dimer. For our
aBa reference structure (AA-stacked $\alpha_{\rm M}$-PTMC), $n_{\rm
nc}(S)=n_{\rm np}(S)=n_{\rm ab}(S)=n_{\rm snn}(S)=0$. Hence the fitting
parameter $E_{\rm c}$ describes the total energy per monolayer unit cell of
the aBa structure. $E_{\rm nc}$ is the energy associated with neighboring
chalcogen atoms lying on different hexagonal sublattice sites rather than the
same sublattice site. $E_{\rm np}$ is the energy associated with PTM dimers in
neighboring layers not lying on the same sublattice. $E_{\rm ab}$ is the
energy of the $\beta_{\rm M}$ polytype of a single layer relative to the
energy of the $\alpha_{\rm M}$ polytype. Finally, $E_{\rm snn}$ is the energy
associated with second-nearest-neighbor chalcogen atoms lying on the same
hexagonal site rather than different sublattice sites. The energy of structure
$S$ relative to the aBa structure is $E_{\rm rel}(S)=E(S)-E_{\rm c}$.
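The fit of Eq. (1) is an ordinary linear least-squares problem in the five parameters $E_{\rm c}$, $E_{\rm nc}$, $E_{\rm np}$, $E_{\rm ab}$, and $E_{\rm snn}$. A minimal NumPy sketch is shown below; the structure counts and energies are synthetic placeholders chosen only to illustrate the setup, not our DFT data:

```python
import numpy as np

# Synthetic per-structure data (N_l, n_nc, n_np, n_ab, n_snn, E), with E in
# meV per monolayer unit cell. Real inputs come from the DFT relaxations.
data = [
    (2, 0, 0, 0, 0, -100.0),   # aBa-like reference, all counts zero
    (2, 2, 2, 0, 0, -151.0),
    (2, 2, 0, 0, 2, -149.0),
    (2, 1, 1, 1, 0, -115.5),
    (3, 3, 0, 0, 0, -150.0),
    (3, 3, 0, 3, 3, -129.0),
]

# Design matrix for Eq. (1):
# E = E_c + (n_nc*E_nc + n_np*E_np + n_ab*E_ab + n_snn*E_snn) / N_l
A = np.array([[1.0, nc / Nl, np_ / Nl, ab / Nl, snn / Nl]
              for Nl, nc, np_, ab, snn, _E in data])
E = np.array([row[-1] for row in data])

params, *_ = np.linalg.lstsq(A, E, rcond=None)
E_c, E_nc, E_np, E_ab, E_snn = params
```

With the synthetic data above, the fit recovers the parameters used to generate it ($E_{\rm c}=-100$, $E_{\rm nc}=-50$, $E_{\rm np}=-1$, $E_{\rm ab}=20$, $E_{\rm snn}=1$).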
The quality of the resulting fits is illustrated for the DFT-PBE-MBD* data in
Fig. 3. The fitted parameters and the root-mean-square (RMS) error in the fit
per degree of freedom are reported in Table 1. The two- and three-layer
structures are all distinct, with the sole exception of the aBa and aBc
structures. These were independently relaxed for the two- and three-layer
cases, and both the two- and three-layer versions were included in the fit.
The DFT-PBE-MBD* energy difference between the equivalent two- and three-layer
aBa and aBc structures is around $1$–$2$ meV per monolayer unit cell,
suggesting that the data suffer from a random error of this order of magnitude
due to the finite ${\bf k}$-point sampling grids and uncertainties in the
relaxed geometries. Thus the RMS errors shown in Table 1 are primarily due to
noise in the data rather than any shortcoming in the fitting function of Eq.
(1). The fit to the GaTe and InTe energy data is significantly poorer than for
the other PTMCs.
Table 1: Parameters in the fit of Eq. (1) to our two- and three-layer DFT-PBE-MBD* PTMC energy data, together with the RMS error per degree of freedom. The parameters and the RMS error are in units of meV per monolayer unit cell. Parameter $E_{\rm c}$ in Eq. (1) (the total energy of the aBa structure) contains an arbitrary, pseudopotential-dependent offset, and is therefore not reported here.
PTMC | $E_{\rm nc}$ | $E_{\rm np}$ | $E_{\rm ab}$ | $E_{\rm snn}$ | RMS error
---|---|---|---|---|---
GaS | $-47.529$ | $-1.709$ | $24.028$ | $-1.202$ | $1.11$
GaSe | $-56.010$ | $-1.692$ | $18.742$ | $-0.857$ | $1.02$
GaTe | $-79.411$ | $-0.662$ | $17.002$ | $-0.804$ | $2.04$
InS | $-69.786$ | $-1.793$ | $17.499$ | $-0.621$ | $0.842$
InSe | $-76.943$ | $-1.347$ | $15.917$ | $1.513$ | $1.09$
InTe | $-98.382$ | $0.041$ | $15.794$ | $2.931$ | $3.11$
Figure 3: Scatter plots showing the fit of Eq. (1) to the DFT-PBE-MBD* energy
data for (a)–(c) gallium chalcogenides and (d)–(f) indium chalcogenides.
$E_{\rm rel}$ is the energy relative to the AA-stacked $\alpha_{\rm M}$-PTMC
structure [aBaaBaaBa ($={}$aBaaBa${}={}$aBa)].
## IV Comparison of DFT functionals
We have computed the DFT energies within the local density approximation (LDA)
and the Perdew-Burke-Ernzerhof (PBE) variant of the generalized gradient
approximation [57]. We compare a representative set of semiempirical
dispersion-correction schemes: Grimme 2006 (G06) [58]; Ortmann, Bechstedt, and
Schmidt (OBS) [59]; and the many-body dispersion (MBD*) method [60, 61]. We
have also investigated the optB86b (MK) and optB88 (BO) nonlocal van der Waals
density functionals [62]. DFT simulation parameters such as the plane-wave
cutoff energy are summarized in Appendix A.
We obtained a complete set of DFT-PBE-G06 and DFT-PBE-MBD* total-energy
results for all two- and three-layer structures and fitted Eq. (1) to the
data. We also obtained DFT-LDA, DFT-PBE, DFT-LDA-OBS, and vdW-DF data for all
two-layer structures, to assess the performance of these functionals. The DFT
results for two-layer structures are shown in Fig. 4. The corresponding
results for three-layer structures are shown in Fig. 5. The disagreements
between different dispersion corrections indicate the limitations of DFT in
studies of layered structures. Alternative methods such as quantum Monte Carlo
approaches are required to provide independent benchmarks [63]. We regard the
DFT-PBE-MBD* method as somewhat more reliable than the other dispersion
corrections because it describes many-body interactions and screening effects
beyond a description by pairwise interatomic potentials, and because it has
been extensively benchmarked against diffusion quantum Monte Carlo data [64].
Figure 4: DFT energy $E_{\rm rel}$ of two-layer structures relative to the AA-
stacked $\alpha_{\rm M}$ polytype [aBaaBa ($={}$aBa)], for (a) GaS, (b) GaSe,
(c) GaTe, (d) InS, (e) InSe, and (f) InTe. Different exchange-correlation
functionals and dispersion-correction methods have been used. The “PBE-vdW”
results were obtained using the PAW method with the optB86b vdW-DF [62]. The
other vdW-DF data [65] are similar to the optB86b results shown.
Figure 5: DFT energy $E_{\rm rel}$ of three-layer structures relative to the
AA-stacked $\alpha_{\rm M}$ polytype [aBaaBaaBa ($={}$aBa)], for (a) GaS, (b)
GaSe, (c) GaTe, (d) InS, (e) InSe, and (f) InTe. Different dispersion-
correction methods are used.
## V Structural stability
### V.1 Zero external pressure
Using Eq. (1) together with the parameters shown in Table 1, we find that for
GaS, GaSe, GaTe, and InS the most stable hexagonal structure is aBabAb, which
corresponds to the $\beta$ polytype described in experiments [52]. This
structure consists of an AA′ stacking of $\alpha_{\rm M}$-polytype monolayers
and has $D_{6h}$ point group and P$6_{3}$/mmc space group. For GaSe, this structure
is more stable than the $\varepsilon$ and $\gamma$ polytypes by $0.86$ meV per
monolayer unit cell, and it is more stable than the $\delta$ polytype by a
mere $0.43$ meV per monolayer unit cell. The most energetically stable
structures of GaSe with unit cells of up to six layers are shown in Table 2.
Table 2: Most energetically competitive structures of GaSe with up to six layers in the unit cell, together with some other structures of interest. The DFT-PBE-MBD* energy of each structure relative to aBa is shown.
Structure | Energy (meV/cell/layer)
---|---
aBabAb ($\beta$-GaSe) | $-59.416$
aBabAbaBabAbaBabCb | $-59.130$
aBabAbaBabAbaBacAc | $-59.130$
aBabAbaBabAbaCacAc | $-59.130$
aBabAbaBabCbcBcbCb | $-59.130$
aBabAbaCacAc ($\delta$-GaSe) | $-58.988$
aBabAbaBabCb | $-58.988$
aBabAbaBacAc | $-58.988$
aBabAbaBabCbcAc | $-58.902$
aBabAbaCacAcbCb | $-58.902$
aBabAbaBabCbaBabCb | $-58.845$
aBabAbaBabCbaBacAc | $-58.845$
aBabAbaBabCbcAcbCb | $-58.845$
aBabAbaBacAcaBacAc | $-58.845$
aBabAbaBacAcbCbcAc | $-58.845$
aBabAbaCabAbaBabCb | $-58.845$
aBabAbaCabAbaBacAc | $-58.845$
aBabAbaCabAbaCacAc | $-58.845$
aBabAbaCabAbcBcbCb | $-58.845$
aBabAbaCacAcbCbcAc | $-58.845$
aBabAbaCacBcbCbcAc | $-58.845$
aBabAbcBcbAbaBacAc | $-58.845$
$\vdots$ | $\vdots$
aBabCb ($\varepsilon$-GaSe) | $-58.559$
aBabCbcAc ($\gamma$-GaSe) | $-58.559$
$\vdots$ | $\vdots$
aBacBc | $-56.010$
On the other hand, for InSe and InTe the most stable hexagonal structure is
aBacBc. This consists of an AB′ stacking of $\alpha_{\rm M}$-polytype
monolayers. For InSe this structure is more stable than the $\varepsilon$ and
$\gamma$ polytypes (aBabCb and aBabCbcAc) by just $0.17$ meV per monolayer
unit cell. The most stable structure differs from that of the gallium
chalcogenides and indium sulfide by a horizontal translation of every second
layer. Nevertheless, this structure also has $D_{6h}$ point group and P$6_{3}$/mmc
space group. The most stable structures of InSe in unit cells of up to six
layers are shown in Table 3.
Table 3: Most energetically competitive structures of InSe with up to six layers in the unit cell, together with some other structures of interest. The DFT-PBE-MBD* energy of each structure relative to aBa is shown.
Structure | Energy (meV/cell/layer)
---|---
aBacBc | $-76.943$
aBabCbaBacBcaBacBc | $-76.888$
aBabCbaCabCbaBacBc | $-76.888$
aBabCbaCabCbaCabCb | $-76.888$
aBabCbaCabCbaCacBc | $-76.888$
aBabCbaBacBc | $-76.860$
aBabCbaCabCb | $-76.860$
aBabCbaCacBc | $-76.860$
aBabCbaCabAbcAc | $-76.844$
aBabCbaCabCbcAc | $-76.844$
aBabCbaBabCbaBacBc | $-76.832$
aBabCbaBabCbaCabCb | $-76.832$
aBabCbaBabCbaCacBc | $-76.832$
aBabCbaBacAcaBacBc | $-76.832$
aBabCbaBacAcbAbcAc | $-76.832$
aBabCbaBacAcbAbcBc | $-76.832$
aBabCbaBacBcaCacBc | $-76.832$
aBabCbaBacBcbAbcBc | $-76.832$
aBabCbaCabAbaCabCb | $-76.832$
aBabCbaCabAbaCacBc | $-76.832$
aBabCbaCabCbaBacAc | $-76.832$
aBabCbaCacBcbAbcAc | $-76.832$
$\vdots$ | $\vdots$
aBabCb ($\varepsilon$-InSe) | $-76.777$
aBabCbcAc ($\gamma$-InSe) | $-76.777$
$\vdots$ | $\vdots$
aBabAbaCacAc ($\delta$-InSe) | $-76.020$
$\vdots$ | $\vdots$
aBabAb ($\beta$-InSe) | $-75.264$
As a test, we have relaxed the structures of aBabAb (the $\beta$ polytype) and
aBabCb (the $\varepsilon$ polytype) GaSe without any symmetry constraints. The
initial lattice vectors and atom positions were randomly offset by a small
amount from their exact hexagonal-cell values, and the positions and lattice
vectors were relaxed within DFT-PBE-MBD* at zero pressure. This did not lead
to a lowering of the total energy relative to the hexagonal cell, thus
providing direct evidence in support of our assumption that the unit cell is
hexagonal in all cases and that the atoms lie in horizontal sublayers on
hexagonal sublattice sites. Direct confirmation that the structures that we
have found to be most energetically stable in any of the PTMCs are also
dynamically stable is provided by the DFT-PBE-MBD* phonon dispersion curves
shown in Fig. 6. On the other hand, it is known that a monoclinic structure of
GaTe is more stable than the hexagonal structures studied here [66, 67].
Figure 6: DFT-PBE-MBD* phonon dispersion curves of (a) aBabAb (the $\beta$
polytype) GaSe and (b) aBacBc GaSe. The results were obtained using the method
of finite displacements in different sizes of supercell.
While experimental results [55, 53, 52, 54, 68] support our determination of
the $\alpha_{\rm M}$ monolayer structure as the most stable form, and also
agree that the most stable structure of GaS is aBabAb (the $\beta$ polytype)
[52], for InSe and GaSe the aBacBc structure calculated to be most stable is
not one of the commonly observed structures in experiments. Specifically, the
experimental work on InSe finds most often the $\gamma$ polytype (aBabCbcAc)
[54] and occasionally the $\varepsilon$ polytype (aBabCb) [68], neither of
which has inversion symmetry. For GaSe the $\varepsilon$ polytype (aBabCb)
[53] and the $\delta$ polytype (aBabCbcBcbAb) [55] are reported, against our
result of aBabAb (the $\beta$ polytype). It should be noted that our results
show several structures for each PTMC of comparable stability on a sub-meV-
per-monolayer-unit-cell scale. This has important consequences, not only on
the theoretical side, with the structure returned as the most stable being
sensitive to the van der Waals functional chosen, but also on the experimental
side, suggesting that the polytype of a PTMC crystal must be highly sensitive
to the crystal growth conditions. Indeed, it supports the observation of
multiple stacking faults and regions of different polytypes within a single
sample [69], and suggests that the synthesis of different PTMC polytypes
should be possible with careful tuning of experimental conditions. On the
theoretical side, an important conclusion is that a computational method with
an accuracy and precision of around $0.1$ meV per monolayer unit cell is
required to determine the most stable PTMC structure reliably. The $>10$ meV
per monolayer unit cell spread of DFT results with different van der Waals
correction schemes and the $\sim 1$ meV per monolayer unit cell disagreement
between independently relaxed equivalent two- and three-layer structures,
together with the disagreements with experiment regarding the most stable
structures, demonstrate that dispersion-corrected DFT is not currently capable
of such accuracy and precision.
We compare our relaxed lattice parameters with both previous DFT results and
experimental results in Table 4. Where comparison is possible, our dispersion-
corrected DFT-PBE calculations agree with experimental results to within $0.2$
Å (often an order of magnitude better). The hexagonal $a$ lattice parameter is
almost the same for all structures of a given PTMC, reflecting the in-plane
rigidity of the individual layers. However, the $c$ lattice parameter is much
more sensitive to the structure, as shown in Fig. 7. High-energy structures
generally have larger lattice parameters $c$.
Table 4: Hexagonal lattice parameters $a$ and $c$ of $\beta$-GaS (aBabAb), $\varepsilon$-GaSe (aBabCb), and $\gamma$-InSe (aBabCbcAc) obtained using various methods. Results without citation were obtained in the present work.
| $\beta$-GaS (aBabAb) | $\varepsilon$-GaSe (aBabCb) | $\gamma$-InSe (aBabCbcAc)
---|---|---|---
Method | $a$ (Å) | $c$ (Å) | $a$ (Å) | $c$ (Å) | $a$ (Å) | $c$ (Å)
DFT-LDA | 3.541 | 15.214 | 3.762 | 15.666 | |
DFT-LDA-OBS | 3.517 | 14.939 | 3.695 | 15.346 | |
DFT-PBE | 3.633 [67], 3.626 | 16.677 [67], 17.633 | 3.823 [67], 3.811 | 17.848 [67], 18.201 | 4.091 [67] | 26.982 [67]
DFT-PBE-G06 | 3.570 | 15.497 | 3.740 | 15.899 | 3.942 | 25.251
DFT-PBE-MBD* | 3.583 | 15.266 | 3.771 | 15.744 | 4.031 | 24.919
vdW-DF2-C09 | 3.575 [67] | 15.460 [67] | 3.761 [67] | 15.943 [67] | 4.028 [67] | 24.996 [67]
Experiment | 3.587 [52] | 15.492 [52] | 3.743 [70] | 15.919 [70] | 4.002 [54] | 24.946 [54]
Figure 7: Hexagonal lattice parameter $c$ divided by number of layers
$n_{\text{layers}}$ against ground-state total energy $E_{\rm rel}$ for DFT-
PBE-MBD*-optimized structures of (a) bulk GaS, GaSe, and GaTe and (b) bulk
InS, InSe, and InTe. In each case the ground-state total energy $E_{\rm rel}$
is plotted relative to that of the aBa structure. The dashed lines show linear
fits to $c/n_{\text{layers}}$ against energy for each material.
### V.2 Nonzero pressure
At zero temperature the most thermodynamically stable polytype is the
structure with the lowest enthalpy $H$. At sufficiently low pressures $p$ we
may approximate the enthalpy of a PTMC structure as
$H\approx E_{0}+pV_{0}+O(p^{2}),$ (2)
where $E_{0}$ and $V_{0}$ are the zero-pressure energy and volume. The
enthalpy is plotted against pressure for energetically competitive structures
of gallium chalcogenides and indium chalcogenides in Fig. 8. Of the two-layer
structures, aBabCb (the $\varepsilon$ polytype), aBacBc, and (in GaS, GaSe,
and InS) aBabAb (the $\beta$ polytype), have DFT-PBE-MBD* energies within a
few meV per monolayer cell of each other at zero pressure, but at higher
pressure, aBabAb (the $\beta$ polytype) is clearly disfavored. More generally,
the application of pressure simplifies the picture by reducing the number of
competing structures and increasing the relative enthalpies of those
structures. In GaSe, the three-layer structures aBabCbcAc (the $\gamma$
polytype) and aBabAbcAc are competitive at low pressure. This is not the case
in InSe. However, at very high pressures, three-layer structures may be
favored in InSe. Below 7.1 GPa, the aBacBc structure of InSe is favored; above
7.1 GPa, the aBacBacBc structure of InSe is favored. The structures favored at
high pressures feature PTM dimers on the same sublattice and neighboring
chalcogen atoms on different sublattices, as would be expected from steric
considerations. At low pressure it is once again clear that accuracy and
precision of around $0.1$ meV per monolayer unit cell are required to identify
the most stable polytype unambiguously.
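Within the linear approximation of Eq. (2), the crossover pressure between two competing structures follows directly from their zero-pressure energies and volumes: $p^{*}=(E_{0}^{B}-E_{0}^{A})/(V_{0}^{A}-V_{0}^{B})$. A minimal sketch, using hypothetical $E_{0}$ and $V_{0}$ values in arbitrary units rather than our DFT results:

```python
def enthalpy(E0, V0, p):
    # Low-pressure enthalpy per monolayer unit cell, H ~ E0 + p*V0 [Eq. (2)].
    return E0 + p * V0

def crossover_pressure(E0_a, V0_a, E0_b, V0_b):
    # Pressure at which H_a = H_b; the structure with the smaller V0 is
    # favored above this pressure.
    return (E0_b - E0_a) / (V0_a - V0_b)

# Hypothetical example: structure B lies higher in energy at p = 0 but is
# denser, so it takes over at high pressure.
E0_a, V0_a = 0.0, 1.00    # reference structure
E0_b, V0_b = 2.0, 0.95    # denser competitor
p_star = crossover_pressure(E0_a, V0_a, E0_b, V0_b)   # = 40.0
```

The same construction, applied to the DFT-PBE-MBD* $E_{0}$ and $V_{0}$ data, yields the crossover pressures read off from Fig. 8.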
Figure 8: Enthalpy against pressure [using Eq. (2)] for energetically
competitive structures of (a) GaS, (b) GaSe, (c) GaTe, (d) InS, (e) InSe, and
(f) InTe. The zero-pressure energy $E_{0}$ and volume $V_{0}$ data were
obtained from DFT-PBE-MBD* calculations. The enthalpies are plotted relative
to the enthalpy of the aBacBc structure (not the aBa structure). At any given
pressure, the structure with the lowest enthalpy is thermodynamically favored
at zero temperature. In panel (b) we also show DFT-PBE-MBD* enthalpies
obtained directly by relaxing the structure and lattice parameters at fixed
external pressure. The inset shows the low-pressure region in greater detail.
Note that the zero-pressure results shown here are obtained directly from the
DFT calculations and do not make use of the fit of Eq. (1); thus the relative
enthalpies shown here differ from the results shown in Tables 2 and 3 by
around a meV per monolayer unit cell.
In Fig. 8(b) we compare the linear approximation to the enthalpy [Eq. (2)] with
DFT enthalpies obtained by directly relaxing the lattice vectors at a given
external pressure. We find that the linear approximation is of quantitative
accuracy on a meV-per-monolayer-unit-cell scale for relative enthalpies up to
around $1$ GPa. Beyond this, the linear approximation provides a qualitative
picture that generally preserves the ordering of the structures, at least up
to $\sim 10$ GPa.
In all bulk PTMCs at multi-GPa pressures, the aBacBc structure is found to be
the most stable structure over a broad range of pressures in DFT-PBE-MBD*
calculations. This is the inversion-symmetric structure that is predicted to
be most stable for InSe and InTe at zero pressure, and consists of AB′-stacked
$\alpha_{\rm M}$ monolayers. Despite its ubiquity in our theoretical
calculations, we are not aware of any previous report of this structure.
## VI Electronic band structure
In Fig. 9 we plot the DFT-PBE electronic band structures of the theoretically
most stable polytypes of GaSe and InSe (aBabAb and aBacBc, respectively) and
the experimentally observed [53] $\varepsilon$ polytype (aBabCb) of GaSe. The
structures were relaxed using DFT-PBE-MBD*. In each case the polytypes exhibit
a direct band gap at the $\Gamma$ point of the two-layer hexagonal Brillouin
zone. The DFT-PBE band gaps, which are expected to be significant
underestimates of the true gaps [71], are $0.804$ and $0.742$ eV for the
aBabAb and aBabCb structures of GaSe, respectively. The low-energy band
structure is qualitatively similar for these two energetically competitive
structures of GaSe. The DFT-PBE band gap of the most stable structure of InSe
(aBacBc) is much smaller than the gap of GaSe, at $0.183$ eV.
Figure 9: Electronic band structures of low-energy structures of bulk PTMCs:
(a) aBabAb GaSe (the $\beta$ polytype and the lowest-energy structure in
theory), (b) aBabCb GaSe (the $\varepsilon$ polytype), and (c) aBacBc InSe
(the lowest-energy structure in theory). The horizontal dashed line shows the
Fermi energy in each case. The inset to panel (a) shows the hexagonal
Brillouin zone.
We have examined the band gap of a range of two-layer structures for each
material, finding that the vertical band gap at $\Gamma$ and the ground-state
energy of each structure are positively correlated, although with significant
noise: see Fig. 10. PTMC structures with smaller band gaps tend to be more
stable. In fact, the most stable two-layer structure of InTe has a direct gap
at $\Gamma$ of just $0.2$ meV. In most cases the vertical gap at $\Gamma$ is
the fundamental gap, especially for low-energy structures. A notable exception
is GaTe, where the vertical gap at $\Gamma$ is nonfundamental for all the two-
layer structures. In the most stable two-layer GaTe structure, the valence-
band maximum is at $\Gamma$ but the conduction-band minimum is on the
$\Gamma$–$M$ line. Previous work using DFT and many-body perturbation theory
has shown that $\gamma$-InSe changes from a direct-gap material to an
indirect-gap material under high pressure [72].
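The linear fits in Fig. 10 amount to a simple regression of the vertical gap at $\Gamma$ against the ground-state energy. A sketch with hypothetical data (not our DFT values), purely to illustrate the analysis:

```python
import numpy as np

# Hypothetical (E_rel, gap) pairs in (meV per monolayer unit cell, eV); the
# real inputs are the DFT-PBE gaps and DFT-PBE-MBD* energies of Fig. 10.
E_rel = np.array([-77.0, -75.0, -60.0, -40.0, -20.0, 0.0])
gap = np.array([0.18, 0.25, 0.40, 0.55, 0.80, 0.95])

# Linear fit gap ~ slope * E_rel + intercept, and the Pearson correlation.
slope, intercept = np.polyfit(E_rel, gap, 1)
r = np.corrcoef(E_rel, gap)[0, 1]   # positive: smaller gap, lower energy
```

A positive slope and correlation coefficient correspond to the trend described above: structures with smaller band gaps tend to be more stable.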
Figure 10: Vertical band gap at $\Gamma$ against ground-state total energy
$E_{\rm rel}$ relative to that of the aBa structure for DFT-PBE-MBD*-optimized
two-layer structures of (a) bulk GaS, GaSe, and GaTe and (b) bulk InS, InSe,
and InTe. The band gaps were calculated using DFT-PBE and the total energies
were evaluated using DFT-PBE-MBD*. The dashed lines show linear fits to the
gap against energy for each of the three materials. Where the symbols are
filled, the vertical gap at $\Gamma$ is equal to the fundamental band gap.
The experimentally measured gaps of $\beta$-InSe, $\gamma$-InSe and
$\varepsilon$-InSe are $1.28$ eV [73], $1.25$–$1.29$ eV [74, 75], and $1.4$ eV
[76], respectively, which are (as expected) very much larger than the DFT-PBE
InSe gaps of energetically stable polytypes shown in Fig. 10. Nevertheless, we
would expect the qualitative conclusion that the band gap of a PTMC polytype
is positively correlated with its energy to continue to hold.
We note that the dispersion of the band-edge states in the out-of-plane
direction along $\Gamma$-A is substantial. The electronic structure is very
much three-dimensional, despite the layered crystalline structure of the
PTMCs. This dispersion arises due to strong interlayer hybridization of
$p_{z}$ orbital states on chalcogen atoms in the band-edge wave functions
[38]. It is the restriction of out-of-plane momentum in ultrathin PTMC films
that gives rise to their strong thickness-dependent electronic and optical
properties, with an increase in band gap for a reduced number of layers [13,
14].
## VII Conclusions
We have used dispersion-corrected DFT methods to examine the relative
stability of a large number of candidate bulk hexagonal PTMC polytypes. For
all PTMCs there is a clear consensus among DFT functionals that the
$\alpha_{\rm M}$ monolayer polytype, in which the chalcogen atoms lie on the
same hexagonal sublattice, is $E_{\rm ab}=16$–$24$ meV per monolayer unit cell
more stable than the $\beta_{\rm M}$ polytype, in which the chalcogen atoms
lie on different hexagonal sublattices; indeed, all experimentally observed
bulk polytypes only feature $\alpha_{\rm M}$ monolayers [55, 53, 52, 54, 68].
Our DFT-PBE-MBD* calculations show that there is an energy gain of $-E_{\rm
nc}=50$–$100$ meV per monolayer unit cell from having neighboring chalcogen
atoms on different hexagonal sublattices; again, all experimentally observed
polytypes only have neighboring chalcogen atoms on different hexagonal
sublattices [55, 53, 52, 54, 68]. The DFT-PBE-MBD* energy gain associated with
PTM dimers in neighboring layers lying on different hexagonal sublattices is
$-E_{\rm np}=0.04$–$2.7$ meV per monolayer unit cell. This leads to a tendency
to avoid AA-stacked structures at zero pressure. However, in InSe and InTe
this is offset by an energy penalty of $E_{\rm snn}=1.5$–$2.9$ meV per
monolayer unit cell associated with PTM dimers and next-nearest chalcogen
atoms lying on the same hexagonal sublattice; it is geometrically impossible to
have an AB- or ABC-stacked $\alpha_{\rm M}$ structure in which PTM dimers and
next-nearest chalcogen atoms all lie on different hexagonal sublattices. The
interplay between these effects leads to a subtle, sub-meV competition between
polytypes. Disagreements between dispersion-corrected DFT total energies are
of order $10$ meV per monolayer unit cell. Disagreements between the relative
energies of the lowest-energy polytypes are of order $1$–$5$ meV per monolayer
unit cell. Only for GaS is the observed stable polytype ($\beta$) predicted by
DFT-PBE-MBD* to have the lowest energy; however, in GaSe and InSe the observed
polytypes are very close in energy to the theoretically most stable structure.
We conclude that dispersion-corrected DFT methods are not yet able to predict
the relative stability of bulk PTMC polymorphs reliably; however, they can
provide insights into the energy scales involved and the types of structures
that are favored. The small energy differences between competing polytypes
imply that a wide variety of different polytypes are likely to be found in
experiments, and that stacking faults must be common in PTMC samples.
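The competition among these energy scales can be illustrated with a toy additive tally. The sketch below is not the paper's actual model: the per-feature values are midpoints of the DFT-PBE-MBD* ranges quoted above, and the boolean feature flags assigned to each stacking are illustrative assumptions.

```python
# Toy additive tally of the quoted DFT-PBE-MBD* energy scales
# (meV per monolayer unit cell); midpoints of the quoted ranges.
E_AB = 20.0    # penalty for a beta_M rather than alpha_M monolayer (16-24 meV)
E_NC = -75.0   # gain for neighboring chalcogens on different sublattices (50-100 meV)
E_NP = -1.4    # gain for PTM dimers in neighboring layers on different sublattices
E_SNN = 2.2    # penalty for dimer and next-nearest chalcogen on the same sublattice

def stacking_energy(beta_monolayer, nc_different, np_different, snn_same):
    """Sum the per-feature contributions for one candidate stacking."""
    return ((E_AB if beta_monolayer else 0.0)
            + (E_NC if nc_different else 0.0)
            + (E_NP if np_different else 0.0)
            + (E_SNN if snn_same else 0.0))

# An alpha_M AB/ABC-type stacking gains E_NP but must pay E_SNN (the
# geometric constraint noted above); an AA-type stacking forgoes E_NP.
# The feature flags here are illustrative assumptions.
e_ab_like = stacking_energy(False, True, True, True)    # -75.0 - 1.4 + 2.2
e_aa_like = stacking_energy(False, True, False, False)  # -75.0
print(e_ab_like - e_aa_like)  # ~0.8 meV: the meV-scale competition in the text
```

With midpoint values, the dominant $E_{\rm nc}$ term cancels between candidates and the outcome is decided at the meV scale, consistent with the sub-meV competition described above.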
We find that application of pressure tends to favor an aBacBc PTMC structure
that has not previously been reported. In fact this polytype is found to be
most stable within DFT-PBE-MBD* at zero pressure for InSe and InTe. We also
find that there is a positive correlation between the ground-state total
energy and the electronic band gap; energetically stable PTMC polytypes tend
to have smaller band gaps.
###### Acknowledgements.
We acknowledge useful conversations with V. I. Fal’ko. All relevant data
presented in this publication can be accessed at URL???. Computational resources
were provided by Lancaster University’s high-end computing facility, and by
the University of Manchester Computational Shared Facility. SJM acknowledges
support from EC Quantum Technology Flagship Project No. 2D-SIPC.
## Appendix A Methodology
We used DFT as implemented in the castep [77] plane-wave-basis code to compute
the relative energies of PTMC crystals in a variety of bulk hexagonal
structures. We used ultrasoft pseudopotentials to represent atomic cores and
we used plane-wave cutoff energies of at least $566$ eV. The maximum distance
between ${\bf k}$ points in the Monkhorst-Pack grid was less than $0.0189$
Å$^{-1}$ in each case. The force tolerance for geometry optimization was
$0.514$ meV Å$^{-1}$. We verified that near-identical relative energies for PTMC structures
were obtained using the vasp [78] DFT code with projector augmented-wave (PAW)
pseudopotentials instead of castep. In the vasp calculations the basis
consisted of plane waves with a cutoff energy of $680$ eV and the Brillouin
zone was sampled by a Monkhorst-Pack grid of $18\times 18\times 4$ points. The
crystals were fully optimized with a force tolerance of $0.005$ eV Å$^{-1}$. We
also verified that castep DFT relative energies obtained using norm-conserving
pseudopotentials were in agreement (on a meV-per-monolayer-unit-cell scale)
with our results obtained using ultrasoft pseudopotentials. Full data sets can
be found in the Supplemental Information [65].
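The k-point-spacing criterion above can be turned into a Monkhorst-Pack grid size for a hexagonal cell. The sketch below is not from the paper: it assumes the CASTEP-style spacing convention (reciprocal-lattice-vector magnitude divided by $2\pi$), and the lattice constants $a=4.00$ Å, $c=25.0$ Å are hypothetical round numbers of the right order for a PTMC polytype.

```python
import math

def hex_mp_grid(a, c, max_spacing):
    """Smallest Monkhorst-Pack grid (n, n, n_c) for a hexagonal cell with
    lattice constants a, c (in Angstrom) such that the k-point spacing
    |b_i|/(2*pi*N_i) stays below max_spacing (in 1/Angstrom)."""
    b_in_plane = 2.0 / (math.sqrt(3.0) * a)  # |b1|/(2*pi) = |b2|/(2*pi)
    b_out_of_plane = 1.0 / c                 # |b3|/(2*pi)
    n = math.ceil(b_in_plane / max_spacing)
    n_c = math.ceil(b_out_of_plane / max_spacing)
    return (n, n, n_c)

print(hex_mp_grid(4.00, 25.0, 0.0189))  # -> (16, 16, 3)
```

A grid of this size is consistent in order of magnitude with the $18\times 18\times 4$ Monkhorst-Pack grid quoted for the vasp cross-check.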
## References
* Schubert and Dörre [1953] K. Schubert and E. Dörre, Kristallstrukturen des GaSe, Naturwiss. 40, 604 (1953).
* Hahn and Frank [1955] H. Hahn and G. Frank, Über die Kristallstruktur des GaS, Z. Anorg. Allg. Chem. 278, 340 (1955).
* Sugaike [1957] S. Sugaike, Synthesis, crystal lattices and some electrical properties of indium tellurides and selenides, Mineral. J. 2, 63 (1957).
* Late _et al._ [2012] D. J. Late, B. Liu, J. Luo, A. Yan, H. S. S. R. Matte, M. Grayson, C. N. R. Rao, and V. P. Dravid, GaS and GaSe ultrathin layer transistors, Adv. Mater. 24, 3549 (2012).
* Hu _et al._ [2012] P. Hu, Z. Wen, L. Wang, P. Tan, and K. Xiao, Synthesis of few-layer GaSe nanosheets for high performance photodetectors, ACS Nano 6, 5988 (2012).
* Zólyomi _et al._ [2013] V. Zólyomi, N. D. Drummond, and V. I. Fal’ko, Band structure and optical transitions in atomic layers of hexagonal gallium chalcogenides, Phys. Rev. B 87, 195403 (2013).
* Zólyomi _et al._ [2014] V. Zólyomi, N. D. Drummond, and V. I. Fal’ko, Electrons and phonons in single layers of hexagonal indium chalcogenides from ab initio calculations, Phys. Rev. B 89, 205416 (2014).
* Tamalampudi _et al._ [2014] S. R. Tamalampudi, Y.-Y. Lu, R. K. U., Raman Sankar, C.-D. Liao, K. M. B., C.-H. Cheng, F. C. Chou, and Y.-T. Chen, High performance and bendable few-layered InSe photodetectors with broad spectral response, Nano Lett. 14, 2800 (2014).
* Liu _et al._ [2014] F. Liu, H. Shimotani, H. Shang, T. Kanagasekaran, V. Zólyomi, N. Drummond, V. I. Fal’ko, and K. Tanigaki, High-sensitivity photodetectors based on multilayer GaTe flakes, ACS Nano 8, 752 (2014).
* Cao _et al._ [2015] T. Cao, Z. Li, and S. G. Louie, Tunable magnetism and half-metallicity in hole-doped monolayer GaSe, Phys. Rev. Lett. 114, 236602 (2015).
* Mudd _et al._ [2015] G. W. Mudd, S. A. Svatek, L. Hague, O. Makarovsky, Z. R. Kudrynskyi, C. J. Mellor, P. H. Beton, L. Eaves, K. S. Novoselov, Z. D. Kovalyuk, E. E. Vdovin, A. J. Marsden, N. R. Wilson, and A. Patanè, High broad-band photoresponsivity of mechanically formed InSe-graphene van der Waals heterostructures, Adv. Mater. 27, 3760 (2015).
* Sucharitakul _et al._ [2015] S. Sucharitakul, N. J. Goble, U. R. Kumar, Raman Sankar, Z. A. Bogorad, F.-C. Chou, Y.-T. Chen, and X. P. A. Gao, Intrinsic electron mobility exceeding $10^{3}$ cm$^{2}$/(V s) in multilayer InSe FETs, Nano Lett. 15, 3815 (2015).
* Bandurin _et al._ [2016] D. A. Bandurin, A. V. Tyurnina, G. L. Yu, A. Mishchenko, V. Zólyomi, S. V. Morozov, R. K. Kumar, R. V. Gorbachev, Z. R. Kudrynskyi, S. Pezzini, Z. D. Kovalyuk, U. Zeitler, K. S. Novoselov, A. Patanè, L. Eaves, I. V. Grigorieva, V. I. Fal’ko, A. K. Geim, and Y. Cao, High electron mobility, quantum Hall effect and anomalous optical response in atomically thin InSe, Nat. Nanotechnol. 12, 223 (2016).
* Ben Aziza _et al._ [2017] Z. Ben Aziza, D. Pierucci, H. Henck, M. G. Silly, C. David, M. Yoon, F. Sirotti, K. Xiao, M. Eddrief, J.-C. Girard, and A. Ouerghi, Tunable quasiparticle band gap in few-layer GaSe/graphene van der Waals heterostructures, Phys. Rev. B 96, 035407 (2017).
* Hung _et al._ [2017] N. T. Hung, A. R. T. Nugraha, and R. Saito, Two-dimensional InSe as a potential thermoelectric material, Appl. Phys. Lett. 111, 092107 (2017).
* Terry _et al._ [2018] D. J. Terry, V. Zólyomi, M. Hamer, A. V. Tyurnina, D. G. Hopkinson, A. M. Rakowski, S. J. Magorrian, N. Clark, Y. M. Andreev, O. Kazakova, K. Novoselov, S. J. Haigh, V. I. Fal’ko, and R. Gorbachev, Infrared-to-violet tunable optical activity in atomic films of GaSe, InSe, and their heterostructures, 2D Mater. 5, 041009 (2018).
* Hamer _et al._ [2019] M. J. Hamer, J. Zultak, A. V. Tyurnina, V. Zólyomi, D. Terry, A. Barinov, A. Garner, J. Donoghue, A. P. Rooney, V. Kandyba, A. Giampietri, A. Graham, N. Teutsch, X. Xia, M. Koperski, S. J. Haigh, V. I. Fal’ko, R. V. Gorbachev, and N. R. Wilson, Indirect to direct gap crossover in two-dimensional InSe revealed by angle-resolved photoemission spectroscopy, ACS Nano 13, 2136 (2019).
* Voitchovsky and Mercier [1974] J. P. Voitchovsky and A. Mercier, Photoluminescence of GaSe, Il Nuovo Cimento B 22, 273 (1974).
* Camassel _et al._ [1978] J. Camassel, P. Merle, H. Mathieu, and A. Chevy, Excitonic absorption edge of indium selenide, Phys. Rev. B 17, 4718 (1978).
* Alekperov _et al._ [1991] O. Alekperov, M. Godjaev, M. Zarbaliev, and R. Suleimanov, Interband photoconductivity in layer semiconductors GaSe, InSe and GaS, Solid State Commun. 77, 65 (1991).
* Ho and Lin [2006] C. H. Ho and S. L. Lin, Optical properties of the interband transitions of layered gallium sulfide, J. Appl. Phys. 100, 083508 (2006).
* Tredgold and Clark [1969] R. Tredgold and A. Clark, Hopping conduction in gallium selenide single crystals, Solid State Commun. 7, 1519 (1969).
* Ottaviani _et al._ [1974] G. Ottaviani, C. Canali, F. Nava, P. Schmid, E. Mooser, R. Minder, and I. Zschokke, GaSe: A layer compound with anomalous valence band anisotropy, Solid State Commun. 14, 933 (1974).
* Schlüter _et al._ [1976] M. Schlüter, J. Camassel, S. Kohn, J. P. Voitchovsky, Y. R. Shen, and M. L. Cohen, Optical properties of GaSe and GaS$_{x}$Se$_{1-x}$ mixed crystals, Phys. Rev. B 13, 3534 (1976).
* Kuroda _et al._ [1980] N. Kuroda, I. Munakata, and Y. Nishina, Exciton transitions from spin-orbit split off valence bands in layer compound InSe, Solid State Commun. 33, 687 (1980).
* Kress-Rogers _et al._ [1982] E. Kress-Rogers, R. Nicholas, J. Portal, and A. Chevy, Cyclotron resonance studies on bulk and two-dimensional conduction electrons in InSe, Solid State Commun. 44, 379 (1982).
* Gomes da Costa _et al._ [1993] P. Gomes da Costa, R. G. Dandrea, R. F. Wallis, and M. Balkanski, First-principles study of the electronic structure of $\gamma$-InSe and $\beta$-InSe, Phys. Rev. B 48, 14135 (1993).
* Fernelius [1994] N. Fernelius, Properties of gallium selenide single crystal, Prog. Cryst. Growth Ch. 28, 275 (1994).
* Cingolani _et al._ [1981] A. Cingolani, M. Ferrara, M. Lugarà, and F. Lévy, Optical gain in gallium selenide and indium selenide, Physica B 105, 40 (1981).
* Segura _et al._ [1997] A. Segura, J. Bouvier, M. V. Andrés, F. J. Manjón, and V. Muñoz, Strong optical nonlinearities in gallium and indium selenides related to inter-valence-band transitions induced by light pulses, Phys. Rev. B 56, 4075 (1997).
* Singh _et al._ [1998] N. Singh, D. Suhre, V. Balakrishna, M. Marable, R. Meyer, N. Fernelius, F. Hopkins, and D. Zelmon, Far-infrared conversion materials: Gallium selenide for far-infrared conversion applications, Prog. Cryst. Growth Ch. 37, 47 (1998).
* Allakhverdiev _et al._ [2009] K. R. Allakhverdiev, M. O. Yetis, S. Özbek, T. K. Baykara, and E. Y. Salaev, Effective nonlinear GaSe crystal. optical properties and applications, Laser Phys. 19, 1092 (2009).
* Segura _et al._ [1984] A. Segura, F. Pomer, A. Cantarero, W. Krause, and A. Chevy, Electron scattering mechanisms in $n$-type indium selenide, Phys. Rev. B 29, 5708 (1984).
* Segura _et al._ [1979] A. Segura, A. Chevy, J. Guesdon, and J. Besson, Photovoltaic efficiency of InSe solar cells, Sol. Energy Mater. 2, 159 (1979).
* Segura _et al._ [1983] A. Segura, J. P. Guesdon, J. M. Besson, and A. Chevy, Photoconductivity and photovoltaic effect in indium selenide, J. Appl. Phys. 54, 876 (1983).
* Gibson _et al._ [2005] G. A. Gibson, A. Chaiken, K. Nauka, C. C. Yang, R. Davidson, A. Holden, R. Bicknell, B. S. Yeh, J. Chen, H. Liao, S. Subramanian, D. Schut, J. Jasinski, and Z. Liliental-Weber, Phase-change recording medium that enables ultrahigh-density electron-beam data storage, Appl. Phys. Lett. 86, 051902 (2005).
* Ben Aziza _et al._ [2018] Z. Ben Aziza, V. Zólyomi, H. Henck, D. Pierucci, M. G. Silly, J. Avila, S. J. Magorrian, J. Chaste, C. Chen, M. Yoon, K. Xiao, F. Sirotti, M. C. Asensio, E. Lhuillier, M. Eddrief, V. I. Fal’ko, and A. Ouerghi, Valence band inversion and spin-orbit effects in the electronic structure of monolayer GaSe, Phys. Rev. B 98, 115405 (2018).
* Magorrian _et al._ [2016] S. J. Magorrian, V. Zólyomi, and V. I. Fal’ko, Electronic and optical properties of two-dimensional InSe from a DFT-parametrized tight-binding model, Phys. Rev. B 94, 245431 (2016).
* Ritschel _et al._ [2018] T. Ritschel, H. Berger, and J. Geck, Stacking-driven gap formation in layered 1T-TaS$_{2}$, Phys. Rev. B 98, 195134 (2018).
* Lee _et al._ [2019] S.-H. Lee, J. S. Goh, and D. Cho, Origin of the insulating phase and first-order metal-insulator transition in 1T-TaS$_{2}$, Phys. Rev. Lett. 122, 106404 (2019).
* Stahl _et al._ [2020] Q. Stahl, M. Kusch, F. Heinsch, G. Garbarino, N. Kretzschmar, K. Hanff, K. Rossnagel, J. Geck, and T. Ritschel, Collapse of layer dimerization in the photo-induced hidden state of 1T-TaS$_{2}$, Nat. Commun. 11, 10.1038/s41467-020-15079-1 (2020).
* Enaldiev _et al._ [2020] V. V. Enaldiev, V. Zólyomi, C. Yelgel, S. J. Magorrian, and V. I. Fal’ko, Stacking domains and dislocation networks in marginally twisted bilayers of transition metal dichalcogenides, Phys. Rev. Lett. 124, 206101 (2020).
* Weston _et al._ [2020] A. Weston, Y. Zou, V. Enaldiev, A. Summerfield, N. Clark, V. Zólyomi, A. Graham, C. Yelgel, S. Magorrian, M. Zhou, J. Zultak, D. Hopkinson, A. Barinov, T. H. Bointon, A. Kretinin, N. R. Wilson, P. H. Beton, V. I. Fal’ko, S. J. Haigh, and R. Gorbachev, Atomic reconstruction in twisted bilayers of transition metal dichalcogenides, Nat. Nanotechnol. 10.1038/s41565-020-0682-9 (2020).
* Chitara and Ya’akobovitz [2018] B. Chitara and A. Ya’akobovitz, Elastic properties and breaking strengths of GaS, GaSe and GaTe nanosheets, Nanoscale 10, 13022 (2018).
* Zhao _et al._ [2019] Q. Zhao, R. Frisenda, T. Wang, and A. Castellanos-Gomez, InSe: a two-dimensional semiconductor with superior flexibility, Nanoscale 11, 9845 (2019).
* Demirci _et al._ [2017] S. Demirci, N. Avazlı, E. Durgun, and S. Cahangirov, Structural and electronic properties of monolayer group III monochalcogenides, Phys. Rev. B 95, 115409 (2017).
* d’Amour _et al._ [1982] H. d’Amour, W. Holzapfel, A. Polian, and A. Chevy, Crystal structure of a new high pressure polymorph of GaS, Solid State Commun. 44, 853 (1982).
* Ulrich _et al._ [1996] C. Ulrich, M. A. Mroginski, A. R. Goñi, A. Cantarero, U. Schwarz, V. Muñoz, and K. Syassen, Vibrational properties of InSe under pressure: Experiment and theory, Phys. Status Solidi (b) 198, 121 (1996).
* Errandonea _et al._ [1999] D. Errandonea, A. Segura, V. Muñoz, and A. Chevy, Effects of pressure and temperature on the dielectric constant of GaS, GaSe, and InSe: Role of the electronic contribution, Phys. Rev. B 60, 15866 (1999).
* Errandonea _et al._ [2005] D. Errandonea, A. Segura, F. J. Manjón, A. Chevy, E. Machado, G. Tobias, P. Ordejón, and E. Canadell, Crystal symmetry and pressure effects on the valence band structure of ${\gamma}$-InSe and ${\epsilon}$-GaSe: Transport measurements and electronic structure calculations, Phys. Rev. B 71, 125206 (2005).
* Gouskov _et al._ [1982] A. Gouskov, J. Camassel, and L. Gouskov, Growth and characterization of III-VI layered crystals like GaSe, GaTe, InSe, GaSe$_{1-x}$Te$_{x}$ and Ga$_{x}$In$_{1-x}$Se, Prog. Cryst. Growth Charact. 5, 323 (1982).
* Kuhn _et al._ [1976] A. Kuhn, A. Chevy, and R. Chevalier, Refinement of the 2H GaS $\beta$-type, Acta Crystallogr. B 32, 983 (1976).
* Kuhn _et al._ [1975a] A. Kuhn, A. Chevy, and R. Chevalier, Crystal structure and interatomic distances in GaSe, Phys. Status Solidi (a) 31, 469 (1975a).
* Rigoult _et al._ [1980] J. Rigoult, A. Rimsky, and A. Kuhn, Refinement of the 3R $\gamma$-indium monoselenide structure type, Acta Crystallogr. B 36, 916 (1980).
* Kuhn _et al._ [1975b] A. Kuhn, R. Chevalier, and A. Rimsky, Atomic structure of a 4H GaSe polytype named $\delta$-type, Acta Crystallogr. B 31, 2841 (1975b).
* Ramsdell [1947] L. S. Ramsdell, Studies on silicon carbide, Am. Mineral. 32, 64 (1947).
* Perdew _et al._ [1996] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77, 3865 (1996).
* Grimme [2006] S. Grimme, Semiempirical GGA-type density functional constructed with a long-range dispersion correction, J. Comp. Chem. 27, 1787 (2006).
* Ortmann _et al._ [2006] F. Ortmann, F. Bechstedt, and W. G. Schmidt, Semiempirical van der Waals correction to the density functional description of solids and molecular structures, Phys. Rev. B 73, 205101 (2006).
* Tkatchenko _et al._ [2012] A. Tkatchenko, R. A. DiStasio, R. Car, and M. Scheffler, Accurate and efficient method for many-body van der Waals interactions, Phys. Rev. Lett. 108, 236402 (2012).
* Ambrosetti _et al._ [2014a] A. Ambrosetti, A. M. Reilly, R. A. DiStasio, and A. Tkatchenko, Long-range correlation energy calculated from coupled atomic response functions, J. Chem. Phys. 140, 18A508 (2014a).
* Klimeš _et al._ [2009] J. Klimeš, D. R. Bowler, and A. Michaelides, Chemical accuracy for the van der Waals density functional, J. Phys.: Condens. Matter 22, 022201 (2009).
* Mostaani _et al._ [2015] E. Mostaani, N. D. Drummond, and V. I. Fal’ko, Quantum Monte Carlo calculation of the binding energy of bilayer graphene, Phys. Rev. Lett. 115, 115501 (2015).
* Ambrosetti _et al._ [2014b] A. Ambrosetti, D. Alfè, R. A. DiStasio, and A. Tkatchenko, Hard numbers for large molecules: Toward exact energetics for supramolecular systems, J. Phys. Chem. Lett. 5, 849 (2014b).
* [65] See Supplemental Information at [URL???] for tables comparing the DFT relative energies of PTMC structures obtained with different codes, (van der Waals) exchange-correlation functionals, and pseudopotentials.
* Julien-Pouzol _et al._ [1979] M. Julien-Pouzol, S. Jaulmes, M. Guittard, and F. Alapini, Monotellurure de gallium, GaTe, Acta Crystallogr. Section B 35, 2848 (1979).
* Brudnyi _et al._ [2015] V. N. Brudnyi, S. Y. Sarkisov, and A. V. Kosobutsky, Electronic properties of GaSe, InSe, GaS and GaTe layered semiconductors: charge neutrality level and interface barrier heights, Semicond. Sci. Technol. 30, 115019 (2015).
* Grimaldi _et al._ [2020] I. Grimaldi, T. Gerace, M. Pipita, I. Perrotta, F. Ciuchi, H. Berger, M. Papagno, M. Castriota, and D. Pacilé, Structural investigation of InSe layered semiconductors, Solid State Commun. 311, 113855 (2020).
* Blasi _et al._ [1990] C. D. Blasi, D. Manno, and A. Rizzo, Study of the polytypism in melt grown InSe single crystals by convergent beam electron diffraction, J. Cryst. Growth 100, 347 (1990).
* Cenzual _et al._ [1991] K. Cenzual, L. M. Gelato, M. Penzo, and E. Parthé, Inorganic structure types with revised space groups. I, Acta Crystallogr. Section B 47, 433 (1991).
* Perdew [1985] J. P. Perdew, Density functional theory and the band gap problem, Int. J. Quantum Chem. 28, 497 (1985).
* Ferlat _et al._ [2002] G. Ferlat, H. Xu, V. Timoshevskii, and X. Blase, Ab initio studies of structural and electronic properties of solid indium selenide under pressure, Phys. Rev. B 66, 085210 (2002).
* Gürbulak _et al._ [2014] B. Gürbulak, M. Şata, S. Dogan, S. Duman, A. Ashkhasi, and E. F. Keskenler, Structural characterizations and optical properties of InSe and InSe:Ag semiconductors grown by Bridgman/Stockbarger technique, Physica E 64, 106 (2014).
* Manjón _et al._ [2001] F. J. Manjón, D. Errandonea, A. Segura, V. Muñoz, G. Tobías, P. Ordejón, and E. Canadell, Experimental and theoretical study of band structure of InSe and In$_{1-x}$Ga$_{x}$Se ($x<0.2$) under high pressure: Direct to indirect crossovers, Phys. Rev. B 63, 125330 (2001).
* Julien and Balkanski [2003] C. Julien and M. Balkanski, Lithium reactivity with III-VI layered compounds, Mater. Sci. Eng.: B 100, 263 (2003).
* Lei _et al._ [2014] S. Lei, L. Ge, S. Najmaei, A. George, R. Kappera, J. Lou, M. Chhowalla, H. Yamaguchi, G. Gupta, R. Vajtai, A. D. Mohite, and P. M. Ajayan, Evolution of the electronic band structure and efficient photo-detection in atomic layers of InSe, ACS Nano 8, 1263 (2014).
* Clark _et al._ [2005] S. J. Clark, M. D. Segall, C. J. Pickard, P. J. Hasnip, M. I. J. Probert, K. Refson, and M. C. Payne, First principles methods using CASTEP, Z. Kristallogr. 220, 567 (2005).
* Kresse and Furthmüller [1996] G. Kresse and J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54, 11169 (1996).
AA 2021
# Wave Dark Matter
Lam Hui Center for Theoretical Physics, Department of Physics, Columbia
University, New York, NY 10027, USA; email<EMAIL_ADDRESS>
###### Abstract
We review the physics and phenomenology of wave dark matter: a bosonic dark
matter candidate lighter than about $30$ eV. Such particles have a de Broglie
wavelength exceeding the average inter-particle separation in a galaxy like
the Milky Way, and are thus well described as a set of classical waves. We outline the
particle physics motivations for them, including the QCD axion as well as
ultra-light axion-like-particles such as fuzzy dark matter. The wave nature of
the dark matter implies a rich phenomenology:
* Wave interference gives rise to order unity density fluctuations on
de Broglie scale in halos. One manifestation is vortices where the
density vanishes and around which the velocity circulates. There is
one vortex ring per de Broglie volume on average.
* For sufficiently low masses, soliton condensation occurs at centers
of halos. The soliton oscillates and random walks, another
manifestation of wave interference. The halo and subhalo
abundance is expected to be suppressed at small masses, but the
precise prediction from numerical wave simulations remains to be
determined.
* For ultra-light $\sim 10^{-22}$ eV dark matter, the wave interference
substructures can be probed by tidal streams/gravitational
lensing. The signal can be distinguished from that due to subhalos
by the dependence on stream orbital radius/image separation.
* Axion detection experiments are sensitive to interference
substructures for wave dark matter that is moderately light. The
stochastic nature of the waves affects the interpretation of
experimental constraints and motivates the measurement of
correlation functions.
Current constraints and open questions, covering detection experiments and
cosmological/galactic/black-hole observations, are discussed.
###### doi:
10.1146/((please add article doi))
###### keywords:
dark matter, axion, ultra-light scalar, halo substructure, black hole,
structure formation, wave interference, axion detection experiments
journal: Annu. Rev. Astron. Astrophys.
###### Contents
1 INTRODUCTION
2 Terminology
3 Particle physics motivations
4 Wave dynamics and phenomenology
  4.1 Perturbation theory
  4.2 Soliton/boson star
  4.3 Numerical simulations
  4.4 Wave interference—granules and vortices
  4.5 Dynamical processes—relaxation, oscillation, evaporation, friction and heating
  4.6 Compact objects and relativistic effects—black hole accretion, superradiance and potential oscillation
5 Observational/experimental implications and constraints
  5.1 Early universe considerations
  5.2 Linear power spectrum and early structure formation
  5.3 Galactic dynamics and structure—density profile, stellar scattering, dynamical friction, subhalo mass function and interference substructures
  5.4 Probes using compact objects—superradiance, solitons, potential oscillation and stellar cooling
  5.5 Photon propagation in axion background
  5.6 Experimental detection of axions
6 Discussion—theory exploration, numerical simulations, astrophysical probes and experimental detection
## 1 INTRODUCTION
The astronomical evidence for the existence of dark matter, accumulated over
decades, is rich and compelling (e.g., Zwicky, 1933, Smith, 1936, Rubin &
Ford, 1970, Freeman, 1970, Ostriker & Peebles, 1973, Hoekstra et al., 2004,
Clowe et al., 2006, Bennett et al., 2013, Aghanim et al., 2020). Yet, the
identity and basic properties of dark matter remain shrouded in mystery. An
example is the constituent’s mass: proposals range from ultra-light $\sim
10^{-22}$ eV (Hu, Barkana & Gruzinov, 2000) to astronomical $\sim 10{\,\rm
M_{\odot}}$ (Bird et al., 2016, Garcia-Bellido & Ruiz Morales, 2017, Sasaki et
al., 2018, Jedamzik, 2020). In this vast spectrum, there is nonetheless a
useful demarcation point. Dynamical measurements tell us the dark matter mass
density in the solar neighborhood is about $0.4{\,\rm GeV\,cm^{-3}}$.
(Footnote 1: A range of local dark matter density values have been reported in
the literature: e.g. $0.008{\,\rm M_{\odot}/pc^{3}}=0.3{\,\rm GeV/cm^{3}}$
(Bovy & Tremaine, 2012), $0.0122{\,\rm M_{\odot}/pc^{3}}=0.46{\,\rm
GeV/cm^{3}}$ (Sivertsson et al., 2018), and $0.013{\,\rm
M_{\odot}/pc^{3}}=0.49{\,\rm GeV/cm^{3}}$ (McKee et al., 2015).) From this,
one can deduce the average
inter-particle separation, given a dark matter particle mass. We can compare
it against the de Broglie wavelength of the particle:
$\lambda_{\rm dB}\equiv{2\pi\over{mv}}=0.48{\,\rm kpc}\left({10^{-22}{\,\rm
eV}\over m}\right)\left({250{\,\rm km/s}\over v}\right)=1.49{\,\rm
km}\left({10^{-6}{\,\rm eV}\over m}\right)\left({250{\,\rm km/s}\over
v}\right)\,,$ (1)
where $v$ is the velocity dispersion of the galactic halo, and $m$ is the dark
matter particle mass, for which two representative values are chosen for
illustration. 222In this article, $\hbar$ and $c$ are set to unity. In most
cases, restoring $\hbar$ is a matter of replacing $m$ by $m/\hbar$. For
instance, the de Broglie wavelength is $\lambda_{\rm
dB}=2\pi\hbar/(mv)=h/(mv)$. The Compton wavelength is $\lambda_{\rm
Compton}=2\pi\hbar/(mc)$. It can be shown that the de Broglie wavelength
exceeds the inter-particle separation if $m\,\lower
3.22916pt\hbox{$\sim$}\hbox to0.0pt{\hss\raise 1.1625pt\hbox{$<$}}\,30$ eV. In
other words, in a Milky-Way-like environment, the average number of particles
in a de Broglie volume $\lambda_{\rm dB}^{3}$ is:
$N_{\rm dB}\sim\left({34{\,\rm eV}\over m}\right)^{4}\left({250{\,\rm
km/s}\over v}\right)^{3}\,.$ (2)
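The numbers in Eqs. (1) and (2) can be verified with a few lines of arithmetic. The sketch below is not part of the original article; the only inputs are $\hbar c\simeq 1.97\times 10^{-5}$ eV cm, the quoted $v=250$ km/s, and the quoted local density of $0.4$ GeV cm$^{-3}$.

```python
import math

hbar_c_eV_cm = 1.9732698e-5   # hbar*c in eV*cm
kpc_cm = 3.0857e21            # 1 kpc in cm
v_over_c = 250e5 / 2.9979e10  # v = 250 km/s in units of c
rho_eV_cm3 = 0.4e9            # local dark matter density, 0.4 GeV/cm^3

def lambda_dB_cm(m_eV, beta=v_over_c):
    """de Broglie wavelength 2*pi*hbar/(m v), in cm, for a mass m in eV."""
    return 2 * math.pi * hbar_c_eV_cm / (m_eV * beta)

def N_dB(m_eV, beta=v_over_c):
    """Average particle number in a de Broglie volume, n * lambda_dB^3."""
    return (rho_eV_cm3 / m_eV) * lambda_dB_cm(m_eV, beta) ** 3

print(lambda_dB_cm(1e-22) / kpc_cm)  # ~0.48 kpc, as in Eq. (1)
print(lambda_dB_cm(1e-6) / 1e5)      # ~1.49 km, as in Eq. (1)
# Mass at which N_dB = 1, i.e. the wave/particle demarcation of Eq. (2):
print((rho_eV_cm3 * (2 * math.pi * hbar_c_eV_cm / v_over_c) ** 3) ** 0.25)  # ~34 eV
```

The crossover mass at which $N_{\rm dB}=1$ comes out near $34$ eV, reproducing the prefactor of Eq. (2) and the $\sim 30$ eV demarcation used throughout.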
For $m\ll 30$ eV, the occupancy $N_{\rm dB}$ is so large that the set of
particles is best described by classical waves, much as, in electromagnetism, a
state with a large number of photons is well described by the classical
electric and magnetic fields. (Footnote 3: A more precise statement is that a
coherent state of photons has negligible quantum fluctuations if the average
occupation number is large. See e.g. the classic paper by Glauber (1963).) The
associated
wave phenomena are the subject of this review. We emphasize classical, for
large occupancy implies negligible quantum fluctuations. The question of how
the classical description relates to the underlying quantum one is a
fascinating subject. We unfortunately do not have the space to explore it here
(see Sikivie & Yang, 2009, Guth et al., 2015, Dvali & Zell, 2018, Lentz et
al., 2020, Allali & Hertzberg, 2020).
Such a light dark matter particle is necessarily bosonic, for the Pauli
exclusion principle precludes multiple occupancies for fermions—this is the
essence of the bound by Tremaine & Gunn (1979). For concreteness, we focus on
a spin zero (scalar) particle, although much of the wave phenomenology applies
to higher spin cases as well (Graham et al., 2016b, Kolb & Long, 2020, Aoki &
Mukohyama, 2016). There is a long history of investigations of dark matter as
a scalar field (e.g., Baldeschi et al., 1983, Turner, 1983, Press et al.,
1990, Sin, 1994, Peebles, 2000, Goodman, 2000, Lesgourgues et al., 2002,
Amendola & Barbieri, 2006, Chavanis, 2011, Suarez & Matos, 2011, Rindler-
Daller & Shapiro, 2012, Berezhiani & Khoury, 2015a, Fan, 2016, Alexander &
Cormack, 2017). Perhaps the most well motivated example is the Quantum
Chromodynamics (QCD) axion (Peccei & Quinn, 1977, Kim, 1979, Weinberg, 1978,
Wilczek, 1978, Shifman et al., 1980, Zhitnitsky, 1980, Dine et al., 1981,
Preskill et al., 1983, Abbott & Sikivie, 1983, Dine & Fischler, 1983). Its
possible mass spans a large range—experimental detection has focused on masses
around $10^{-6}$ eV, with newer experiments reaching down to much lower
values. For recent reviews, see Graham et al. (2015), Marsh (2016), Sikivie
(2020). String theory also predicts a large number of axion-like-particles
(ALP), one or some of which could be dark matter (Svrcek & Witten, 2006,
Arvanitaki et al., 2010, Halverson et al., 2017, Bachlechner et al., 2019). At
the extreme end of the spectrum is the possibility of an ALP with mass around
$10^{-22}-10^{-20}$ eV, with a relic abundance that naturally matches the
observed dark matter density (see Section 3). More generally, ultra-light dark
matter in this mass range is often referred to as fuzzy dark matter (FDM). It
was proposed by Hu, Barkana & Gruzinov (2000) to address small scale structure
issues thought to be associated with conventional cold dark matter (CDM)
(Spergel & Steinhardt, 2000). This is a large subject we will not discuss in
depth, though it will be touched upon in Section 5. It remains unclear whether
the small scale structure issues point to novelty in the dark matter sector,
or can be resolved by baryonic physics, once the complexities of galaxy
formation are properly understood (for a recent review, see Weinberg et al.,
2015).
In this article, we take a broad perspective on wave dark matter ($m\lesssim
30$ eV), and discuss novel features that distinguish it from particle dark
matter ($m\gtrsim 30$ eV). The underlying wave dynamics is the same whether
the dark matter is ultra-light like fuzzy dark matter, or merely light like
the QCD axion. The length scale of the wave phenomena (i.e. the de Broglie
wavelength) depends of course on the mass. For the higher masses, the length
scales are small, which can be probed by laboratory detection experiments.
(The higher masses can have astrophysical consequences too, despite the short
de Broglie wavelength, for instance around black holes or in solitons, as we
will see.) For the ultra-light end of the spectrum, fuzzy dark matter ($m\sim
10^{-22}-10^{-20}$ eV), the length scales are long and there can be striking
astrophysical signatures, which we will highlight. (Footnote 4: There is a
recent flurry of activity on this front, starting from the paper by Schive,
Chiueh & Broadhurst (2014a): Schive et al. (2014b), Veltmaat & Niemeyer
(2016), Schwabe et al. (2016), Hui et al. (2017), Mocz et al. (2017), Nori &
Baldi (2018), Levkov et al. (2018), Bar-Or et al. (2019), Bar et al. (2018),
Church et al. (2019), Li et al. (2019), Marsh & Niemeyer (2019), Schive et al.
(2020), Mocz et al. (2019), Lancaster et al. (2020), Chan et al. (2020), Hui
et al. (2020). A recent review can be found in Niemeyer (2019).) A mass
$m\,<\,10^{-22}$ eV is
possible, but only if the particle constitutes a small fraction of dark
matter, for the simple reason that an excessively large $\lambda_{\rm dB}$
precludes the existence of dark matter dominated dwarf galaxies (Hu et al.,
2000). When the mass approaches the size of the Hubble constant today $m\sim
10^{-33}$ eV, the scalar field is so slowly rolling that it is essentially a
form of dark energy (Hlozek et al., 2015). (The distinction between a slowly
rolling scalar field as dark energy, and an oscillating scalar field as dark
matter, is discussed in Section 3.)
An outline of the article is as follows. Particle physics motivations for
considering wave dark matter are discussed in Section 3. The bulk of this
review is devoted to elucidating the dynamics and phenomenology of wave dark
matter, in Section 4. The observational/experimental implications and
constraints are summarized in Section 5. We conclude in Section 6 with a
discussion of open questions and directions for further research. This article
is intended to be pedagogical: we emphasize results that can be understood in
an intuitive way, while providing ample references. We devote more space to
elucidating the physics than to summarizing the current constraints, which
evolve, sometimes rapidly.
## 2 Terminology
We use the term axion to loosely refer to both the QCD axion, and an axion-
like-particle (Section 3). The term fuzzy dark matter (FDM) is reserved for
the ultra-light part of the mass spectrum $m\sim 10^{-22}-10^{-20}$ eV. Wave
dark matter is the more general term, $m\lesssim 30$ eV, for which dark matter exhibits
wave phenomena. Wave dark matter, such as the axion, is in fact one form of
cold dark matter (CDM), assuming it is not produced by thermal freeze-out (see
Section 3). We use the term particle dark matter for cases where
$m\gtrsim 30{\,\rm eV}$, the primary example of which is the Weakly Interacting
Massive Particle
(WIMP). We sometimes refer to it as conventional CDM.
## 3 Particle physics motivations
In this section, we describe the axion—the QCD axion or an axion-like-
particle—as a concrete example of wave dark matter: (1) how it is motivated by
high energy physics considerations independent of the dark matter problem; (2)
how a relic abundance that matches the observed dark matter density can be
naturally obtained; (3) how it is weakly interacting and cold. Readers not
interested in the details can skip to Section 4 without loss of continuity.
We are interested in a scalar field $\phi$ that has a small mass $m$. A
natural starting point is a massless Goldstone boson, associated with the
spontaneous breaking of some symmetry. Non-perturbative quantum effects can
generate a small mass—hence, a pseudo Goldstone boson—or more generally a
potential $V(\phi)$, giving a Lagrangian density of the form:$^{5}$ By non-
perturbative effects, we mean something that is exponentially suppressed in
the $\hbar\rightarrow 0$ limit, analogous to how the tunneling amplitude in
quantum mechanics is exponentially suppressed $\sim e^{-S_{\rm
instanton}/\hbar}$. A moderate value for $S_{\rm instanton}/\hbar$ could yield
a small mass, starting from some high energy scale. See Marsh (2016) for
examples.
${\cal L}=-{1\over 2}\partial_{\mu}\phi\,\partial^{\mu}\phi-V(\phi)\,.$ (3)
A concrete realization is the axion, which is a real angular field, in the
sense that $\phi$ and $\phi+2\pi f$ are identified i.e. $\phi/f$ is
effectively an angle. The periodicity scale $f$, an energy scale, is often
referred to as the axion decay constant.
The classic example is the QCD axion, a particle that couples to the gluon
field strength and derives its mass from the presence of this coupling (and
confinement). It was introduced to address the strong CP (charge-conjugation
parity) problem: that a certain parameter in the standard model, the angle
$\theta_{\rm QCD}$, is constrained to be less than $10^{-9}$ from experimental
bounds on the neutron electric dipole moment.$^{6}$ The $\theta_{\rm QCD}$ term
in the Lagrangian takes the form ${\cal L}\sim\theta_{\rm QCD}G\tilde{G}$
where $G$ and $\tilde{G}$ are the gluon field strength and its dual. Such a
term is a total derivative, yet must be included in the path integral to
account for gluon field configurations of different windings. Such topological
considerations tell us $\theta_{\rm QCD}$ is an angle. With non-vanishing
quark masses, a non-zero angle signals the breaking of CP which is severely
constrained by experiments. The idea of the QCD axion is to promote this angle
to a dynamical field $\theta_{\rm QCD}\rightarrow\phi/f$, thereby allowing a
physical mechanism that relaxes it to zero, as suggested by Peccei & Quinn
(1977). The axion $\phi$ is the Goldstone boson associated with the breaking
of a certain global symmetry, Peccei-Quinn U(1), as pointed out by Weinberg
(1978), Wilczek (1978). See Dine (2000), Hook (2019) for reviews on axions and
alternative solutions to the strong CP problem. It has certain generic
couplings to the standard model, allowing the possibility of experimental
detection (see below). More general examples—namely, axion-like-particles
which have similar couplings to the standard model but do not contribute to
the resolution of the strong CP problem—arise naturally in string theory as
the Kaluza-Klein zero modes of higher form fields when the extra dimensions
are compactified (Green et al., 1988, Svrcek & Witten, 2006, Arvanitaki et
al., 2010, Dine, 2016, Halverson et al., 2017, Bachlechner et al., 2019).
The Peccei-Quinn U(1) is the symmetry associated with shifting $\phi$ by a
constant. Its spontaneous breaking is what makes the axion $\phi$ possible; its
small explicit breaking by non-perturbative effects gives $\phi$ a potential.
For illustration, consider a potential $V(\phi)$ of the following form:
$V(\phi)=\Lambda^{4}(1-{\,\rm cos\,}[\phi/f])\,.$ (4)
The cosine is consistent with the idea of $\phi/f$
being an angle. The additive constant is not important for our considerations,
and is chosen merely to make $V$ vanish at the minimum $\phi=0$. The mass of
$\phi$ can be read off from expanding the cosine around $\phi=0$:
$m=\Lambda^{2}/f$. Typically, $f$ is some high energy scale up to Planck
scale, while $\Lambda$ is exponentially suppressed compared to that (see
footnote 5), giving a small $m$. For instance, $f\sim 10^{17}$ GeV and
$\Lambda\sim 100$ eV gives $m\sim 10^{-22}$ eV. The QCD axion potential does
not have the exact form above (for a recent computation, see Grilli di Cortona
et al., 2016), but $m\sim\Lambda^{2}/f$ remains true with $\Lambda$ being the
QCD scale $\sim 100$ MeV. For instance, $f\sim 10^{13}$ GeV gives $m\sim
10^{-6}$ eV for the QCD axion.
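The $m=\Lambda^{2}/f$ estimate is easy to evaluate. A minimal sketch in Python, working entirely in eV with natural units ($\hbar=c=1$), reproducing the two examples quoted above:

```python
# Rough axion mass estimate m ~ Lambda^2 / f in natural units (hbar = c = 1).
# Everything is carried in eV, so only metric prefixes are needed.

GeV = 1e9  # eV

def axion_mass_eV(Lambda_eV, f_eV):
    """Mass of a pseudo Goldstone boson with potential scale Lambda and decay constant f."""
    return Lambda_eV**2 / f_eV

# Fuzzy-dark-matter-like example: f ~ 1e17 GeV, Lambda ~ 100 eV
m_fdm = axion_mass_eV(100.0, 1e17 * GeV)   # -> 1e-22 eV

# QCD-axion-like example: f ~ 1e13 GeV, Lambda ~ 100 MeV (the QCD scale)
m_qcd = axion_mass_eV(100e6, 1e13 * GeV)   # -> 1e-6 eV

print(f"m_FDM ~ {m_fdm:.1e} eV, m_QCD ~ {m_qcd:.1e} eV")
```

The steep dependence on $\Lambda$ (quadratic) versus $f$ (inverse) is why exponentially suppressed non-perturbative scales naturally yield tiny masses.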
What determines the contribution of $\phi$ to the energy content of the
universe today? Here we outline the misalignment mechanism (reviewed in Kolb &
Turner, 1990). Consider the equation of motion for a homogeneous $\phi$
(following from Equation 3, in an expanding background):
$\ddot{\phi}+3H\dot{\phi}+\partial_{\phi}V=0\,,$ (5)
where $H$ is the Hubble expansion rate. In the early universe, when $H$ is
large, Hubble friction is sufficient to keep $\phi$ slowly rolling i.e.
balancing the last two terms on the left. Thus $V(\phi)$ plays the role of
dark energy. The value of $\phi$ is essentially stuck at its primordial
value—we assume $\phi_{\rm primordial}/f$, the so called misalignment angle,
is order unity.$^{7}$ An interesting variant of the idea, where the primordial
$\phi$ has a significant velocity, was proposed by Co et al. (2020). The
expansion rate drops as time goes on, until $H$ reaches $\sim m$. After that
$\phi$ rolls towards the minimum of the potential and commences oscillations
around it. The expansion of the universe takes energy out of such
oscillations, diminishing the oscillation amplitude. Subsequently, $\phi$
oscillates close to zero, implying it is a good approximation to treat the
potential as:
$V(\phi)\sim{1\over 2}m^{2}\phi^{2}\,.$ (6)
The energy density contained in the $\phi$ oscillations is
$\rho={1\over 2}\dot{\phi}{}^{2}+{1\over 2}m^{2}\phi^{2}\,.$ (7)
It follows from Equation 5 that $\rho$ redshifts like $a^{-3}$ where $a$ is
the scale factor. The $\phi$ oscillations, which can be interpreted as a set
of particles, therefore have the redshifting behavior of (non-relativistic)
matter, making this a suitable dark matter candidate. Following this
cosmological history, it can be shown that the relic density today is (e.g.,
Arvanitaki et al., 2010, Marsh, 2016, Hui et al., 2017):
$\Omega_{\rm axion}\sim 0.1\left({f\over 10^{17}{\,\rm
GeV}}\right)^{2}\left({m\over 10^{-22}{\,\rm eV}}\right)^{1/2}\,$ (8)
where $\Omega_{\rm axion}$ is the axion density today as a fraction of the
critical density. It is worth emphasizing the relic density is more sensitive
to the choice of $f$ than to $m$. The value of $10^{17}$ GeV, close to but
below the Planck scale, is motivated by string theory constructions (Svrcek &
Witten, 2006).$^{8}$ See Kim & Marsh (2016), Davoudiasl & Murphy (2017), Alonso-
Álvarez & Jaeckel (2018) for recent explorations of model building. But a
slightly different $f$ would have to be paired with a quite different $m$, if
one were to insist on matching the observed dark matter abundance.
Nonetheless, this relic abundance computation motivates the consideration of
light, even ultra-light, axions.
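The frozen-then-oscillating history of Equation 5 can be verified numerically. A minimal sketch (assumptions: radiation domination with $H=1/(2t)$, $a\propto t^{1/2}$, and $m=1$ in arbitrary units) showing the field is frozen while $H\gg m$ and that afterwards $\rho\,a^{3}$ is approximately conserved, i.e. matter-like redshifting:

```python
# Numerical sketch of the misalignment mechanism (Equation 5) in a
# radiation-dominated background: H = 1/(2t), a ~ t^{1/2}, with m = 1.
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0

def rhs(t, y):
    phi, phidot = y
    H = 1.0 / (2.0 * t)                      # radiation-era Hubble rate
    return [phidot, -3.0 * H * phidot - m**2 * phi]

# Start deep in the frozen regime (H = 50 >> m) with misalignment phi = 1.
sol = solve_ivp(rhs, (0.01, 300.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

def rho(t):
    phi, phidot = sol.sol(t)
    return 0.5 * phidot**2 + 0.5 * m**2 * phi**2   # Equation 7

print("phi(t=0.1) =", sol.sol(0.1)[0])             # still ~ 1: frozen
print("rho*a^3 at t=100:", rho(100.0) * 100.0**1.5)
print("rho*a^3 at t=250:", rho(250.0) * 250.0**1.5)  # nearly equal: matter-like
```

The residual drift in $\rho\,a^{3}$ is of order $H/m$, consistent with the slow decay of the oscillation envelope.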
The reasoning above essentially follows the classic computation of the QCD
axion relic density (Preskill et al., 1983, Abbott & Sikivie, 1983, Dine &
Fischler, 1983)—the difference is that while $V(\phi)$ is constant here, it is
temperature dependent for the QCD axion. Besides the misalignment mechanism,
it is also possible axions arise from the decay of topological defects, if the
Peccei-Quinn U(1) symmetry is broken after inflation (for recent lattice
computations, see Gorghetto et al., 2020, Buschmann et al., 2020).
Aside from having the requisite relic abundance, a good dark matter candidate
should be cold and weakly interacting. The coldness is implicit in the
misalignment mechanism: the axion starts off as a homogeneous scalar field in
the early universe, with the homogeneity guaranteed for instance by inflation.
(There are inevitable small fluctuations as well, which is discussed in
Section 5.) The weakly interacting nature is implied by the large axion decay
constant $f$. Possible interactions include:$^{9}$ We list here only interactions
for a pseudo-scalar like the axion. For a scalar, there are other
possibilities; see e.g. Graham et al. (2015).
${\cal L}^{\rm self}_{\rm int.}\sim{m^{2}\over f^{2}}\phi^{4}\quad,\quad{\cal
L}^{\rm\gamma}_{\rm int.}\sim{\phi\over
f}F^{\mu\nu}\tilde{F}_{\mu\nu}\quad,\quad{\cal L}^{\rm\Psi}_{\rm
int.}\sim{\partial_{\mu}\phi\over f}\bar{\Psi}\gamma^{\mu}\gamma_{5}\Psi\,.$
(9)
The first interaction, a self-interaction of $\phi$, follows from expanding
out the potential $V(\phi)$ to quartic order; it is an attractive interaction
for the axion. The second interaction is with the photon, $F$ and $\tilde{F}$
being the photon field strength and its dual (there is an analogous
interaction with gluon field strength and its dual for the QCD axion). The
third interaction is with a fermion $\Psi$, which could represent quarks or
leptons. The last two interactions are both symmetric under a shift of $\phi$
by a constant, as befitting a (pseudo) Goldstone boson. The generic
expectation is that all three coupling strengths are of the order shown, but
models can be constructed that deviate from it (Kim & Marsh, 2016, Kaplan &
Rattazzi, 2016, Choi & Im, 2016). The important point is that $f$ is expected
to be large, keeping these interactions weak, for both the QCD axion and
axion-like-particles. For structure formation purpose, these interactions can
be largely ignored, though their presence is important for direct detection
and in certain extreme astrophysical environments, as we will discuss below.
## 4 Wave dynamics and phenomenology
The discussion above motivates us to consider a scalar field $\phi$ satisfying
the Klein Gordon equation:
$-\Box\phi+m^{2}\phi=0\,,$ (10)
which follows from Equation 3 with the potential approximated by Equation 6.
Much of the following discussion is not specific to axions—it applies to any
scalar (or pseudo-scalar) particle whose dominant interaction is
gravitational. Occasionally, we will comment on features that are specific to
axions, for instance in cases where their self-interaction is important.
Unlike in Equation 5, here we are interested in the possibility of $\phi$
having spatial fluctuations. In the non-relativistic regime relevant for
structure formation, it is useful to introduce a complex scalar $\psi$ ($\phi$
is a real scalar):
$\phi={1\over\sqrt{2m}}\left(\psi e^{-imt}+\psi^{*}e^{imt}\right)\,.$ (11)
The idea is to factor out the fast time dependence of $\phi$—oscillation with
frequency $m$—and assume $\psi$ is slowly varying i.e. $|\ddot{\psi}|\ll
m|\dot{\psi}|$. The Klein-Gordon equation reduces to the Schrödinger equation:
$i\,\partial_{t}\psi=-{\nabla^{2}\over 2m}\psi+m\Phi\psi\,.$ (12)
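The reduction can be checked directly. Substituting Equation 11 into Equation 10 and collecting the $e^{-imt}$ terms (a quick sketch; signs follow the conventions of Equation 10):

```latex
% Second time derivative of Equation 11, keeping the e^{-imt} piece:
\ddot{\phi}=\frac{1}{\sqrt{2m}}
  \left[\left(\ddot{\psi}-2im\dot{\psi}-m^{2}\psi\right)e^{-imt}+{\rm c.c.}\right]
```

The $m^{2}\psi$ term cancels against the mass term, the weak-field metric converts $m^{2}\phi$ to $m^{2}(1+2\Phi)\phi$ at leading order, and dropping $\ddot{\psi}$ relative to $m\dot{\psi}$ leaves $-2im\dot{\psi}-\nabla^{2}\psi+2m^{2}\Phi\psi=0$, which is Equation 12 after dividing by $-2m$.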
Several comments are in order. (1) In what sense is the assumption of
$\partial_{t}\ll m$ non-relativistic? From the Schrödinger equation, we see
$\partial_{t}\sim\nabla^{2}/m\sim k^{2}/m$. Thus $\partial_{t}\ll m$ is
equivalent to $k^{2}/m\ll m$ i.e. momentum is small compared to rest mass. (2)
We introduce the gravitational potential $\Phi$. Recall that
$\Box=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}$ contains the metric $g^{\mu\nu}$,
thus gravitational interaction of $\phi$ is implicit. For many applications,
this is the only interaction we need to include.$^{10}$ Wave dark matter
described as such can be thought of as a minimalist version: the primary
interaction is gravitational (though as we will see, other interactions
expected for an axion could be relevant in some cases). In the literature,
there are studies of models where additional interactions play a crucial role
e.g. Rindler-Daller & Shapiro (2012), Berezhiani & Khoury (2015b), Fan (2016),
Alexander & Cormack (2017), Alexander et al. (2019). Some of the phenomenology
described here, such as wave interference, applies to these models as well.
In principle, the metric should account for the cosmic expansion, which we
have ignored to simplify the discussion. Cosmic counterparts of the equations
presented here can be found in (e.g., Hu et al., 2000, Hui et al., 2017). (3)
Despite the appearance of the Schrödinger equation, $\psi$ should be thought
of as a (complex) classical field. The situation is analogous to the case of
electromagnetism: a state with high occupancy is adequately described by the
classical electric and magnetic fields. We will on occasion refer to $\psi$ as
the wavefunction, purely out of habit.
The non-relativistic dynamics of wave dark matter is completely described by
Equation 12, supplemented by the Poisson equation:
$\nabla^{2}\Phi=4\pi G\rho\quad,\quad\rho=m|\psi|^{2}\,.$ (13)
The expression for mass density $\rho$ can be justified by plugging Equation
11 into Equation 7, taking the non-relativistic limit and averaging over
oscillations i.e. $|\psi|^{2}$ has the meaning of particle number density.
Strictly speaking, the energy density should include gradient energy which is
not contained in Equation 7. The gradient energy contribution to $\rho$ is of
order $|\nabla\psi|^{2}/m$ which is negligible compared to the rest mass
contribution $m|\psi|^{2}$ in the non-relativistic regime.
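The Schrödinger-Poisson system (Equations 12 and 13) can be evolved with a standard split-step (kick-drift-kick) spectral scheme, which treats the kinetic term exactly in Fourier space and the potential term exactly in real space. A minimal 1D sketch on a periodic box, in arbitrary units with $\hbar=1$ (not a production code; all parameter values are illustrative):

```python
# Minimal 1D split-step integrator for Schrodinger-Poisson (Equations 12, 13).
import numpy as np

N, L, m, G = 256, 1.0, 50.0, 1.0
dx = L / N
x = np.arange(N) * dx
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

# Initial condition: a smooth overdense blob (arbitrary normalization).
psi = (1.0 + 0.5 * np.exp(-((x - L / 2) ** 2) / 0.01)).astype(complex)

def potential(psi):
    """Solve nabla^2 Phi = 4 pi G (rho - rho_bar) in Fourier space."""
    rho = m * np.abs(psi) ** 2
    rho_k = np.fft.fft(rho - rho.mean())   # mean subtracted (Jeans swindle)
    k2 = k**2
    k2[0] = 1.0                            # avoid division by zero at k = 0
    phi_k = -4.0 * np.pi * G * rho_k / k2
    phi_k[0] = 0.0
    return np.real(np.fft.ifft(phi_k))

dt = 1e-4
mass0 = np.sum(np.abs(psi) ** 2) * dx
for _ in range(200):
    psi *= np.exp(-0.5j * dt * m * potential(psi))                      # half kick
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2 / m) * np.fft.fft(psi))  # drift
    psi *= np.exp(-0.5j * dt * m * potential(psi))                      # half kick

mass1 = np.sum(np.abs(psi) ** 2) * dx
print("relative mass change:", abs(mass1 / mass0 - 1.0))  # ~ machine precision
```

Because each sub-step multiplies $\psi$ by a pure phase, the scheme conserves $\int|\psi|^{2}$ to machine precision, mirroring the exact mass conservation discussed below.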
An alternative, fluid description of this wave system is instructive. This is
called the Madelung (1927) formulation (see also Feynman et al., 1963). The
mass density of the fluid is $\rho=m|\psi|^{2}$ as discussed. The complex
$\psi$ can be written as $\psi=\sqrt{\rho/m}\,e^{i\theta}$. The fluid velocity
$\vec{v}$ is related to the phase $\theta$ by:
$\vec{v}={1\over m}\vec{\nabla}\theta={i\over
2m|\psi|^{2}}(\psi\vec{\nabla}\psi^{*}-\psi^{*}\vec{\nabla}\psi)\,.$ (14)
Notice the fluid velocity is a gradient flow, resembling that of a superfluid.
(A superfluid can have vortices as topological defects, see Section 4.4.) With
this identification of the fluid velocity, what is normally understood as
probability conservation in quantum mechanics is now recast as mass
conservation:
$\partial_{t}\rho+\vec{\nabla}\cdot(\rho\vec{v})=0\,.$ (15)
The Schrödinger equation possesses a U(1) symmetry, the rotation of $\psi$ by
a phase. In our context, conservation of the associated Noether current
expresses particle number conservation, or mass conservation, as appropriate
for the $\phi$ particles in the non-relativistic regime.
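Equation 14 is easy to verify numerically: for a plane wave $\psi=e^{ikx}$ the phase gradient gives $\vec{v}=k/m$. A small sketch using periodic central differences ($\hbar=1$; grid parameters are illustrative):

```python
# Check the Madelung velocity (Equation 14) for a plane wave psi = e^{ikx}.
import numpy as np

N, L, m, kmode = 128, 2.0 * np.pi, 3.0, 2.0   # kmode must fit the periodic box
dx = L / N
x = np.arange(N) * dx
psi = np.exp(1j * kmode * x)

grad = (np.roll(psi, -1) - np.roll(psi, 1)) / (2.0 * dx)  # periodic central diff
v = np.real(1j * (psi * np.conj(grad) - np.conj(psi) * grad)) \
    / (2.0 * m * np.abs(psi) ** 2)

print(v[:3], "expected:", kmode / m)   # v = k/m up to finite-difference error
```

The small discrepancy from $k/m$ is the usual $O((k\,dx)^{2})$ central-difference error.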
The Schrödinger equation is complex. Thus, besides mass conservation, it
implies an additional real equation, the Euler equation:
$\partial_{t}\vec{v}+(\vec{v}\cdot\vec{\nabla})\,\vec{v}=-\vec{\nabla}\Phi+\frac{1}{2m^{2}}\vec{\nabla}\left(\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}}\right).$
(16)
Equations 15 and 16 serve as an alternative, fluid description to the
Schrödinger or wave formulation. The last term in Equation 16 is often
referred to as the quantum pressure term. It is a bit of a misnomer (which we
will perpetuate!), for what we have is a classical system. Also, the term
arises from a stress tensor rather than mere pressure:
$\Sigma_{ij}={1\over
4m^{2}}(\rho^{-1}\partial_{i}\rho\partial_{j}\rho-\partial_{i}\partial_{j}\rho)=-{\rho\over
4m^{2}}\partial_{i}\partial_{j}{\,\rm ln\,}\rho\,,$ (17)
i.e.
$\partial_{i}(\nabla^{2}\sqrt{\rho}/\sqrt{\rho})/(2m^{2})=-\rho^{-1}\partial_{j}\Sigma_{ij}$.
$^{11}$ The Euler equation (combined with mass conservation) can be re-
expressed as $\partial_{t}(\rho v_{i})+\partial_{j}(\rho
v_{i}v_{j}+\Sigma_{ij})=-\rho\partial_{i}\Phi$. In other words, the standard
energy-momentum tensor components are: $T^{0}{}_{0}=-\rho$, $T^{0}{}_{i}=\rho
v_{i}$, and $T^{j}{}_{i}=\rho v_{i}v_{j}+\Sigma_{ij}$. It can be shown that
$T^{j}{}_{i}=T_{ji}=(4m)^{-1}(\partial_{i}\psi\partial_{j}\psi^{*}+\partial_{i}\psi^{*}\partial_{j}\psi-\psi^{*}\partial_{i}\partial_{j}\psi-\psi\partial_{i}\partial_{j}\psi^{*})$.
This $T^{j}{}_{i}$ can be rewritten in a more familiar looking way by adding a
tensor that is identically conserved:
$T^{j}{}_{i}\rightarrow(2m)^{-1}(\partial_{i}\psi\partial_{j}\psi^{*}+\partial_{i}\psi^{*}\partial_{j}\psi-\delta_{ij}[\psi\nabla^{2}\psi^{*}/2+\psi^{*}\nabla^{2}\psi/2+\vec{\nabla}\psi\cdot\vec{\nabla}\psi^{*}])$.
Note the Euler equation in Hui et al. (2017) has a factor of $\rho^{-1}$
missing in front of the divergence of the stress tensor ($\sigma_{ij}$ there
differs from $\Sigma_{ij}$ here by an overall sign). The stress tensor
represents how the fluid description accounts for the underlying wave
dynamics. It shows in a clear way how the particle limit is obtained: for
large $m$, the Euler equation reduces to that for a pressureless fluid, as is
appropriate for particle dark matter. We are interested in the opposite
regime, where this stress tensor, or the wave effects it encodes, plays an
important role.
Incidentally, the insight that the wave formulation in the large $m$ limit can
be used to model particle cold dark matter was exploited to good effect by
Widrow & Kaiser (1993). The wave description effectively reshuffles
information in a phase-space Boltzmann distribution into a position-space
wavefunction. It offers a number of insights that might otherwise be obscure
(Uhlemann et al., 2014, 2019, Garny et al., 2020).
In the rest of this section, we deduce a number of intuitive consequences from
this system of equations—Equations 12 and 13 in the wave description, or
Equations 15, 16, and 13 in the fluid description. Implications for
observations and experiments are discussed in Section 5.
### 4.1 Perturbation theory
Suppose the density is approximately homogeneous with small fluctuations:
$\rho=\bar{\rho}(1+\delta)$ where $|\delta|\ll 1$. We are interested in
comparing the two terms—gravity and quantum pressure—on the right hand side of
the Euler equation (16). Taking the divergence of both, we find:
$-\nabla^{2}\Phi+{1\over 4m^{2}}\nabla^{4}\delta\,,$ (18)
where we have expanded out the quantum pressure term in small $\delta$.
Employing the Poisson equation $\nabla^{2}\Phi=4\pi
G\bar{\rho}\delta$,$^{12}$ The removal of $\bar{\rho}$ as a source for the
Poisson equation (the so called Jeans swindle) can be justified in the
cosmological context by considering perturbation theory around the Friedmann-
Robertson-Walker background. Our expression is correct with $\nabla$
interpreted as derivative with respect to proper distance. Likewise,
$k_{J}^{-1}$ given below is proper distance. we see that the relative
importance of gravity versus quantum pressure is delineated by the Jeans
scale:
$k_{J}=(16\pi G\bar{\rho})^{1/4}m^{1\over 2}\,,$ (19)
where we have gone to Fourier space and replaced $\vec{\nabla}\rightarrow
i\vec{k}$. This gives $k_{J}\sim 70$/Mpc today for $m\sim 10^{-22}$ eV. On
large length scales $k<k_{J}$, gravity dominates; on small length scales
$k>k_{J}$, quantum pressure wins. The sign difference between the two terms
makes clear quantum pressure suppresses fluctuations on small scales. This is
the prediction of linear perturbation theory—we will see in Section 4.4 that
the opposite happens in the nonlinear regime.
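Restoring $\hbar$, Equation 19 reads $k_{J}=(16\pi G\bar{\rho}\,m^{2}/\hbar^{2})^{1/4}$, which can be evaluated directly in SI units. A sketch reproducing the quoted $\sim 70/$Mpc (assumed inputs: $H_{0}\approx 70$ km/s/Mpc and a dark matter fraction $\Omega_{\rm dm}\approx 0.26$ of the critical density):

```python
# Evaluate the Jeans scale (Equation 19) in SI units.
import numpy as np

G, hbar = 6.674e-11, 1.0546e-34          # SI
Mpc = 3.086e22                           # m
eV_to_kg = 1.783e-36

H0 = 70e3 / Mpc                          # s^-1
rho_crit = 3.0 * H0**2 / (8.0 * np.pi * G)
rho_dm = 0.26 * rho_crit                 # mean dark matter density today

m = 1e-22 * eV_to_kg
kJ = (16.0 * np.pi * G * rho_dm * m**2 / hbar**2) ** 0.25
print(f"k_J ~ {kJ * Mpc:.0f} / Mpc")     # ~ 70 / Mpc, as quoted in the text
```

The $m^{1/2}$ scaling means heavier particles push the Jeans suppression to ever smaller, less observable scales.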
This reasoning tells us the linear power spectrum of wave dark matter should
match that of particle dark matter (or conventional cold dark matter) at low
$k$’s but be suppressed at sufficiently high $k$’s. The precise transition
scale differs from $k_{J}$ given above—a proper computation must include the
effect of radiation in the early universe, and account for the full history,
from slow-roll to oscillations, outlined in Section 3. This was carried out by
Hu et al. (2000), who gave
$k_{1/2}=4.5\left({m\over 10^{-22}{\,\rm eV}}\right)^{4/9}{\,\rm Mpc}^{-1}\,$
(20)
as the (comoving) scale at which the linear power spectrum is suppressed by a
factor of two, and beyond which the power drops precipitously ($\sim
k^{-16}$). This is illustrated in the left panel of Figure 1. For more recent
computations, see Cookmeyer et al. (2020), Hložek et al. (2017), Hlozek et al.
(2015). If the scalar potential $V(\phi)$ is indeed of the form given in
Equation 4, the computation should in principle account for the full shape of
$V(\phi)$ rather than approximating it as quadratic, especially if the
primordial $\phi$ value is comparable to $f$. This was investigated by Zhang &
Chiueh (2017), Arvanitaki et al. (2020), who found that the predicted linear
power spectrum is largely consistent with earlier work, unless the primordial
$\phi$ is extremely close to $\pi f$ i.e. the top of the potential.
$^{13}$ Computations of the linear power spectrum discussed above assume the
fluctuations are adiabatic i.e. $\phi$ fluctuations, like fluctuations in
photons, baryons and neutrinos, are all inherited from the curvature, or
inflaton, fluctuation. The scalar $\phi$ can in addition have its own
isocurvature fluctuations (see Section 5).
The linear perturbative computation described above is phrased in the fluid
picture. A fluid perturbation theory computation up to third order in $\delta$
and $v$ was carried out in Li et al. (2019) to obtain the one-loop power
spectrum. One could also consider perturbation theory in the wave formulation,
expanding in small $\delta\psi\equiv\psi-\bar{\psi}$, where $\bar{\psi}$ is
the homogeneous contribution. Wave perturbation theory turns out to break down
at higher redshifts compared to fluid perturbation theory (Li et al., 2019).
$^{14}$ Wave perturbation theory requires not only the smallness of
$(\delta\psi+\delta\psi^{*})/\bar{\psi}$ (which equals $\delta$), but also the
smallness of $(\delta\psi-\delta\psi^{*})/\bar{\psi}$ (it is related to the
fluid velocity by
$\vec{v}=\vec{\nabla}(\delta\psi-\delta\psi^{*})/(2im\bar{\psi})$). In other
words, wave perturbation theory assumes small $\delta$ and $mv/k$, while fluid
perturbation theory assumes small $\delta$ and $v$. In large scale structure,
one is typically interested in situations where $m/k\gg 1$. Thus perturbation
theory breaks down sooner in the wave formulation.
### 4.2 Soliton/boson star
The Euler equation is useful for intuiting properties of certain nonlinear,
bound objects, known as solitons or boson stars (Kaup, 1968, Ruffini &
Bonazzola, 1969, Friedberg et al., 1987a, b, Seidel & Suen, 1994, Guzman &
Urena-Lopez, 2006a). We are interested in objects in which quantum pressure
balances gravitational attraction i.e. the two terms on the right hand side of
Equation 16 cancel each other:
${GM\over R}\sim{1\over m^{2}R^{2}}\,,$ (21)
where $M$ is the total mass of the object and $R$ is its radius, and we have
replaced $\nabla\sim 1/R$ and dropped factor of $2$. This implies the size of
the soliton/boson star is inversely proportional to its mass:
$\displaystyle R\sim{1\over GMm^{2}}\sim 100{\,\rm pc}\,{10^{9}{\,\rm
M_{\odot}}\over M}\left({10^{-22}{\,\rm eV}\over m}\right)^{2}$
$\displaystyle\quad\sim 300{\,\rm km}\,{10^{-10}{\,\rm M_{\odot}}\over
M}\left({10^{-6}{\,\rm eV}\over m}\right)^{2}\sim 50{\,\rm km}\,{5{\,\rm
M_{\odot}}\over M}\left({10^{-11}{\,\rm eV}\over m}\right)^{2}\,,$ (22)
where we give a few representative values of $M$ and $m$.$^{15}$ This rough
estimate is about a factor of 4 smaller than the exact relation (Chavanis,
2011). We focus on spherical solitons. Filamentary and pancake analogs are
explored in Desjacques et al. (2018), Alexander et al. (2019), Mocz et al.
(2019), and rotating solitons are discussed in Hertzberg & Schiappacasse
(2018). The example of $m\sim 10^{-22}$ eV corresponds to that of fuzzy dark
matter—such a soliton can form in the centers of galaxies (Schive et al.,
2014a, b, see Section 4.5 below). The example of $m\sim 10^{-6}$ eV
corresponds to that of the QCD axion—such an axion star (often called an axion
minicluster) could form in the aftermath of Peccei-Quinn symmetry breaking
after inflation (Kolb & Tkachev, 1993, 1996, Fairbairn et al., 2018, Eggemeier
& Niemeyer, 2019, Buschmann et al., 2020). The example of $m\sim 10^{-11}$ eV
could be an axion-like-particle—an object like this has been studied as a
possible gravitational wave event progenitor (Helfer et al., 2017, Widdicombe
et al., 2018).
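With $\hbar$ restored, the mass-radius relation of Equation 22 is $R\sim\hbar^{2}/(GMm^{2})$. A sketch checking the fuzzy dark matter example (order of magnitude only; exact soliton profiles differ by $O(1)$ factors, see footnote 15):

```python
# Soliton mass-radius relation (Equation 22): R ~ hbar^2 / (G M m^2).
G, hbar = 6.674e-11, 1.0546e-34   # SI
Msun, pc = 1.989e30, 3.086e16     # kg, m
eV_to_kg = 1.783e-36

def soliton_radius_m(M_kg, m_kg):
    return hbar**2 / (G * M_kg * m_kg**2)

R = soliton_radius_m(1e9 * Msun, 1e-22 * eV_to_kg)
print(f"R ~ {R / pc:.0f} pc for M = 1e9 Msun, m = 1e-22 eV")  # ~ 100 pc
```

The inverse $R\propto 1/M$ scaling is the opposite of ordinary self-gravitating gas clouds: heavier solitons are more compact.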
There is an upper limit to the mass of the soliton: $GM/R\lesssim 1$ to
avoid collapse to a black hole. Plugging in the expression for $R$, we deduce
the maximum soliton mass (a Chandrasekhar mass of sort):
$M_{\rm max}\sim{1\over Gm}\sim 10^{12}{\,\rm M_{\odot}}\left({10^{-22}{\,\rm
eV}\over m}\right)\sim 10^{-4}{\,\rm M_{\odot}}\left({10^{-6}{\,\rm eV}\over
m}\right)\sim 10{\,\rm M_{\odot}}\left({10^{-11}{\,\rm eV}\over m}\right)\,.$
(23)
Strictly speaking, as one approaches the maximum mass, one should use the
relativistic Klein Gordon description rather than the Schrödinger equation,
but the above provides a reasonable estimate (Kaup, 1968, Ruffini & Bonazzola,
1969, Friedberg et al., 1987b).
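Restoring units, Equation 23 is $M_{\rm max}\sim\hbar c/(Gm)$, i.e. the Planck mass squared divided by the particle mass. A sketch evaluating the three representative masses quoted above (order of magnitude only):

```python
# Maximum soliton mass (Equation 23): M_max ~ hbar c / (G m).
G, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8   # SI
Msun, eV_to_kg = 1.989e30, 1.783e-36

def m_max_Msun(m_eV):
    return hbar * c / (G * m_eV * eV_to_kg) / Msun

for m_eV in (1e-22, 1e-6, 1e-11):
    print(f"m = {m_eV:.0e} eV  ->  M_max ~ {m_max_Msun(m_eV):.1e} Msun")
```

The $1/m$ scaling spans thirty orders of magnitude in $M_{\rm max}$ across the axion mass range, from galactic cores down to asteroid-mass axion stars.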
Not all gravitationally bound objects are solitons, of course. The argument
above accounts for the two terms on the right of the Euler equation (16). The
velocity terms on the left could also play a role. In other words, a bound
object could exist by balancing gravity against virialized motion instead i.e.
$v^{2}\sim GM/R>1/(m^{2}R^{2})$. Most galaxies are expected to fall into this
category, supported by virialized motion except possibly at the core where a
soliton could condense (see Section 4.5).
The discussion so far ignores the possibility of self-interaction. For an
axion, we expect a $m^{2}\phi^{4}/f^{2}$ contribution to the Lagrangian
(Equation 9). It can be shown the relevant quantities to compare are: $v^{2}$
(virialized motion), $1/(m^{2}R^{2})$ (quantum pressure) balancing against
$GM/R$ (gravity) and $M/(m^{2}f^{2}R^{3})$ (attractive self-interaction of the
axion). This can be deduced by comparing the gravitational contribution to
energy density $\rho\Phi$ with the self-interaction contribution
$m^{2}\phi^{4}/f^{2}\sim\rho^{2}/(m^{2}f^{2})$, and using $\Phi\sim GM/R$ and
$\rho\sim M/R^{3}$. The attractive self-interaction is destabilizing, going as
$1/R^{3}$: if it dominates over gravity, there is nothing that would stop $R$
from getting smaller and making the self-interaction even stronger. Demanding
that the $M$-$R$ relation in Equation 22 satisfies $GM/R>M/(m^{2}f^{2}R^{3})$
modifies the maximum soliton mass to (Eby et al., 2016a, b, Helfer et al.,
2017):
$\displaystyle M_{\rm max}\sim{f\over G^{1/2}m}\sim 10^{10}{\,\rm
M_{\odot}}\left({f\over 10^{17}{\,\rm GeV}}\right)\left({10^{-22}{\,\rm
eV}\over m}\right)$ $\displaystyle\quad\quad\quad\sim 10^{-10}{\,\rm
M_{\odot}}\left({f\over 10^{13}{\,\rm GeV}}\right)\left({10^{-6}{\,\rm
eV}\over m}\right)\sim{\,\rm M_{\odot}}\left({f\over 10^{18}{\,\rm
GeV}}\right)\left({10^{-11}{\,\rm eV}\over m}\right)\,.$ (24)
Figure 1: Left panel: the dimensionless linear mass power spectrum
$\Delta^{2}(k)\equiv 4\pi k^{3}P(k)/(2\pi)^{3}$, where $P(k)$ is the
dimensionful version, as a function of comoving momentum $k$. This is the
linear power spectrum at redshift $z=0$. The top curve corresponds to that of
conventional cold dark matter. The other two are for wave dark matter with
$m=10^{-20}$ eV and $10^{-22}$ eV respectively, exhibiting the suppression of
power on small scales (high $k$’s). The transfer function is taken from Hu et
al. (2000). Right panel: a $z=5$ snapshot of the dark matter density in a
cosmological simulation of ultra-light dark matter with $m=10^{-22}$ eV. The
snapshot is $700$ kpc comoving on a side. The color scale reflects the density
(in ${\rm\,g/cm^{3}}$). Wave interference fringes can be seen along filaments
and in/around halos. Such interference patterns were first seen in simulations
by Schive, Chiueh & Broadhurst (2014a). Snapshot produced by Xinyu Li (Li et
al., 2019).
### 4.3 Numerical simulations
Great strides have been made in numerical simulations of structure formation
with wave dark matter (the Schrödinger-Poisson system), starting with the work
of Schive, Chiueh & Broadhurst (2014a). There are by now a number of different
algorithms, including spectral method and finite difference (Schive et al.,
2014a, Schwabe et al., 2016, Mocz et al., 2017, Du et al., 2018b, Li et al.,
2019, Edwards et al., 2018, Mocz et al., 2019, Schwabe et al., 2020), often
with adaptive mesh refinement. One key challenge to solving the Schrödinger-
Poisson system (Equations 12 and 13) is the high demand for resolution. In
cosmological applications, one is often interested in predictions on large
scales, say length scale $\lambda$. To accurately describe bulk motion on such
large scales, say velocity $v$, one must include waves with the corresponding
wavelength $2\pi/(mv)$. The trouble is that one is often in situations where
$2\pi/(mv)\ll\lambda$. For instance, with $m\sim 10^{-22}$ eV and a velocity
of $100$ km/s, the de Broglie wavelength $2\pi/(mv)\sim 1.2$ kpc is a lot
smaller than typical length scales of interest in large scale structure
$\lambda>1$ Mpc. A wave simulation, unlike an N-body simulation, thus must
have high resolution even if one is only interested in large scales. This is
why existing wave simulations are typically limited to small box sizes. A
related challenge is the requisite time-step: dimensional analysis applied to
the Schrödinger equation tells us the time-step scales as $m\times{\,\rm
resolution}^{2}$, i.e. the time-step has to be less than the de Broglie
wavelength divided by the typical velocity. Contrast this with the requirement
for an N-body simulation—a time step of $\lesssim\lambda/v$ suffices. A recent $\sim
10$ Mpc box, de-Broglie-scale-resolved, wave simulation was described by May &
Springel (2021).
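The resolution requirement can be put into numbers. A back-of-envelope sketch for the example above ($m=10^{-22}$ eV, $v=100$ km/s, a 10 Mpc box); the choice of 4 grid cells per de Broglie wavelength is an assumption for illustration:

```python
# Back-of-envelope cost of a de-Broglie-resolved wave simulation:
# the grid must resolve lambda_dB = 2 pi hbar / (m v) even in a large box.
import numpy as np

hbar = 1.0546e-34                 # SI
kpc = 3.086e19                    # m
m = 1e-22 * 1.783e-36             # kg
v = 100e3                         # m/s

lam_dB = 2.0 * np.pi * hbar / (m * v)
print(f"lambda_dB ~ {lam_dB / kpc:.1f} kpc")   # ~ 1.2 kpc, as in the text

box = 10e3 * kpc                               # a 10 Mpc box
cells_per_side = box / (lam_dB / 4.0)          # assumed: 4 cells per wavelength
print(f"grid ~ {cells_per_side:.0f}^3 cells")  # tens of thousands per side
```

Tens of thousands of cells per side, cubed, with a time-step shrinking as ${\rm resolution}^{2}$, is what makes large-box wave simulations so expensive compared to N-body.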
An alternative is to simulate the fluid formulation, expressed in Equations
13, 15 and 16 (Mocz & Succi, 2015, Veltmaat & Niemeyer, 2016, Nori & Baldi,
2018, Nori et al., 2019). With $\rho$ and $\vec{v}$ as variables (related to
the amplitude and phase of $\psi$), there is no need to have high spatial
resolution just to correctly capture the large scale flows. The downside is
that the fluid formulation is ill-defined at places where $\rho=0$. This can
be seen by looking at the form of the quantum pressure term in the Euler
equation (16), or more simply, by noting that the phase of the wavefunction
$\psi$ (which determines $\vec{v}$) becomes ill-defined at locations where
$\rho=m|\psi|^{2}$ vanishes. One might think occurrences of vanishing $\rho$
must be rare and have a negligible impact; this turns out to be false (Li et
al., 2019, Hui et al., 2020)—we will have more to say about this in Section
4.4. A promising approach to overcome this and the resolution challenge is a
hybrid scheme, where the large scale evolution proceeds according to the fluid
formulation or an N-body code (the vanishing-$\rho$ issue does not arise on
large scales), and the small scale evolution follows the wave formulation
(Veltmaat et al., 2018).
Recall that the Schrödinger equation originates as a non-relativistic
approximation to the Klein-Gordon equation. If one is interested in
applications where relativity plays a role, such as a soliton close to its
maximum possible mass (Section 4.2), or the scalar field close to black holes
or in the early universe, a Klein-Gordon code (or more generally, a code to
evolve a scalar with arbitrary potential) should be used. There are many
examples in the literature: Felder & Tkachev (2008), Easther et al. (2009),
Giblin et al. (2010), Amin et al. (2012), Helfer et al. (2017), Widdicombe et
al. (2018), Buschmann et al. (2020), Eggemeier & Niemeyer (2019).
Much of the recent progress in understanding halo substructure for wave dark
matter comes from numerical simulations, often in the ultra-light regime of
$m\sim 10^{-22}$ eV. Many of the qualitative features carry over to higher
masses; the quantitative implications for observations/experiments are mass
specific of course, as we will discuss.
### 4.4 Wave interference—granules and vortices
The right panel of Figure 1 shows the dark matter density in a snapshot of a
cosmological wave simulation (Li et al., 2019). A striking feature is the
presence of interference fringes, a characteristic prediction of wave dark
matter, first demonstrated in cosmological simulations by Schive, Chiueh &
Broadhurst (2014a), and subsequently confirmed by many groups (Schive et al.,
2014a, Schwabe et al., 2016, Veltmaat & Niemeyer, 2016, Mocz et al., 2017, Du
et al., 2018b, Li et al., 2019, Edwards et al., 2018, Nori & Baldi, 2018,
Veltmaat et al., 2018, Mocz et al., 2019, Schwabe et al., 2020). The
interference patterns are particularly obvious in the nonlinear regime, along
filaments and in/around collapsed halos. In these nonlinear objects, wave
interference causes order one fluctuations in density: blobs of constructive
interference of de Broglie size (sometimes called granules) interspersed
between patches of destructive interference.
As a simple model of a galactic halo, consider a superposition of plane waves:
$\psi(t,\vec{x})=\sum_{\vec{k}}A_{\vec{k}}e^{iB_{\vec{k}}}e^{i{\vec{k}}\cdot{\vec{x}}-i\omega_{k}t}\,,$
(25)
where $A_{\vec{k}}$ and $B_{\vec{k}}$ are the amplitude and phase of each
plane wave of momentum $\vec{k}$.$^{16}$ Here,
$\omega_{k}=|\vec{k}|^{2}/(2m)$. A more realistic model would superimpose
eigenstates of a desired gravitational potential (Lin et al., 2018, Li et al.,
2021), in which case $\omega_{k}$ would be the energy of each eigenmode
(labeled abstractly by $k$), with $e^{i\vec{k}\cdot\vec{x}}$ replaced by the
corresponding eigenfunction. In a virialized halo, it is reasonable to expect,
as a zero order approximation, that the phases $B_{\vec{k}}$’s are randomly
distributed. This is the analog of assuming random orbital phases for stars in
a halo. We refer to this as the random phase halo model. The amplitudes
$A_{\vec{k}}$’s should reflect the velocity (or momentum) dispersion within
the halo. For instance we can adopt $A_{\vec{k}}\propto e^{-k^{2}/k_{0}^{2}}$
(where $k=|\vec{k}|$), resembling an isothermal distribution, with a de
Broglie wavelength $\propto 1/k_{0}$. The density is:
$\rho=m|\psi|^{2}=m\sum_{\vec{k}}A_{\vec{k}}^{2}+m\sum_{\vec{k}\neq\vec{k}^{\prime}}A_{\vec{k}}A_{\vec{k}^{\prime}}e^{i(B_{\vec{k}}-B_{\vec{k}^{\prime}})}e^{i({\vec{k}}-{\vec{k}^{\prime}})\cdot\vec{x}-i(\omega_{k}-\omega_{k^{\prime}})t}\,.$
(26)
The first term comes from squaring each Fourier mode and summing them. The
second represents the contribution from interference between different Fourier
modes.$^{17}$ If we had built a more realistic model where the plane waves are
replaced by energy eigenstates (see footnote 16), the first term would be
${\vec{x}}$ dependent, but would remain time independent. It is the second
term that is responsible for the appearance of interference fringes in
numerical simulations such as shown in Figure 1. The typical difference in
momenta between different Fourier modes is of the order of $k_{0}$, which
fixes the characteristic size of the interference fringes or granules i.e. the
de Broglie wavelength $\sim 2\pi/k_{0}$. The typical difference in energy
between the modes is of the order of $\sim k_{0}^{2}/(2m)\sim k_{0}v/2$, where
$v$ is the velocity dispersion. This determines the characteristic time scale
over which the interference pattern changes i.e. the de Broglie time:
$t_{\rm dB}\equiv{2\pi\over mv^{2}}=1.9\times 10^{6}{\,\rm yr}\left({10^{-22}{\,\rm eV}\over m}\right)\left({250{\,\rm km/s}\over v}\right)^{2}=5.9\times 10^{-3}{\,\rm s}\left({10^{-6}{\,\rm eV}\over m}\right)\left({250{\,\rm km/s}\over v}\right)^{2}\,.$ (27)
There is some arbitrariness in the choice of the prefactor $2\pi$. Reasonable choices range within a factor of a few.
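A minimal 1D numerical sketch of the random phase halo model, with toy parameters chosen here for illustration, confirms both the order-unity density contrast and the de Broglie time prefactor of Equation 27 (the constant $\hbar$ below is a standard assumed value):

```python
# 1D sketch of the random phase halo model (Eq. 25): superpose plane waves
# with amplitudes A_k ~ exp(-k^2/k0^2) and random phases B_k, then measure
# the resulting density contrast. All model parameters are toy choices.
import math
import numpy as np

rng = np.random.default_rng(0)
k0, n_modes = 1.0, 400
x = np.linspace(0.0, 200.0, 4000)            # many de Broglie lengths (1/k0 = 1)

k = rng.normal(0.0, k0, n_modes)             # toy 1D mode momenta
A = np.exp(-k**2/k0**2)                      # isothermal-like amplitudes
B = rng.uniform(0.0, 2*np.pi, n_modes)       # random phases
psi = (A[:, None]*np.exp(1j*(B[:, None] + k[:, None]*x))).sum(axis=0)

rho = np.abs(psi)**2
contrast = rho.std()/rho.mean()
print(f"density contrast std/mean = {contrast:.2f}")  # close to 1: O(1) fluctuations

# De Broglie time t_dB = 2*pi*hbar/(m v^2) for m = 1e-22 eV, v = 250 km/s:
HBAR_EV_S = 6.582e-16                        # hbar in eV*s (assumed value)
t_dB_yr = 2*math.pi*HBAR_EV_S/(1e-22*(250.0/2.998e5)**2)/3.156e7
print(f"t_dB ~ {t_dB_yr:.2g} yr")            # ~1.9e6 yr, matching Eq. (27)
```

For many modes with random phases, $\psi$ is approximately complex Gaussian, so $\rho$ is exponentially distributed and its standard deviation equals its mean, which is the order-one fluctuation seen in simulations.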
In other words, wave interference produces de-Broglie-scale, order unity
density fluctuations which vary on time scale of $t_{\rm dB}$. Such
fluctuations can in principle take the density all the way to zero i.e.
complete destructive interference. What is interesting is that (1) such
occurrences are not rare, and (2) the locations of complete destructive
interference are vortices. This was explored in Chiueh et al. (2011), Hui et
al. (2020).$^{18}$ More generally, vortices in dark matter were studied in
Silverman & Mallett (2002), Brook & Coles (2009), Kain & Ling (2010), Rindler-
Daller & Shapiro (2012), Zinner (2011), Banik & Sikivie (2013), Alexander &
Cormack (2017), Alexander et al. (2020). Most of the studies focused on a
regime where self-interaction dominates over quantum pressure. Here, we
describe the opposite regime, relevant for weakly-coupled dark matter with a
long de Broglie wavelength, where gravity and quantum pressure completely
describe the physics. Vortices have long been studied in other contexts, such
as high energy and condensed matter physics (Nielsen & Olesen, 1973, Luscher,
1981, Onsager, 1949, Lund, 1991, Fetter, 2008). Below we summarize the
findings, following the line of reasoning in Hui et al. (2020).
Figure 2: Schematic illustration of vortices. Left panel: a vortex line, or
segment thereof (purple line). The loop with arrow indicates velocity
circulation (or phase winding) around the vortex. Right panel: a vortex ring
(purple line). The loops with arrows indicate velocity circulation. The arrow
in the middle indicates the bulk motion of the ring.
In three spatial dimensions, the set of points where the real part of the
wavefunction vanishes generically forms a surface. Likewise for the imaginary
part. Demanding both parts of the wavefunction vanish thus gives a line, where
the two surfaces cross. The purple line in the left panel of Figure 2 depicts
such a line of vanishing $\psi$ (i.e. the amplitude of $\psi$ is zero and the
phase is ill-defined on the line). Consider a loop going around this line: for
the wavefunction to be single-valued, the phase of the wavefunction must wind
by integers of $2\pi$. Recall the fluid velocity is given by the gradient of
the phase (Equation 14); integrating the velocity around a loop encircling the
line of vanishing $\psi$ gives:
${\rm circulation\,}\equiv\oint d\vec{x}\cdot\vec{v}={2\pi n\over m}\,,$ (28)
where $n$ is an integer. The line of vanishing $\psi$ is therefore a vortex.
$^{19}$ Note that the vortex is distinct from the axion string. The relevant
$U(1)$ for an axion string is the Peccei-Quinn $U(1)$, while that for a vortex
is the $U(1)$ associated with particle number conservation in the non-
relativistic limit. This raises the interesting question of how to view the
vortex from the perspective of the full $\phi$ theory. See discussions in Hui
et al. (2020). It is helpful to consider a Taylor expansion around a point on
the vortex (let’s take it to be the origin):
$\psi(\vec{x})\sim\vec{x}\,\cdot\vec{\nabla}\psi|_{0}\,,$ (29)
assuming $\vec{\nabla}\psi|_{0}$, the derivative evaluated at $\vec{x}=0$, does not vanish. It can be shown that the winding number is $n=\pm 1$ as long as $\vec{\nabla}\psi|_{0}$ does not vanish. If it vanishes, one would have to
consider the next higher order term in the Taylor expansion, yielding higher
winding. A vortex line, much like a magnetic field line, cannot end, and so
one expects generically a vortex ring, depicted in the right panel of Figure
2. It can be further shown that, in addition to velocity circulation around
the ring, the ring itself moves with a bulk velocity that scales inversely
with its size. Analytic solutions illustrating this behavior (and more) can be
found in Bialynicki-Birula et al. (2000), Hui et al. (2020).
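The winding argument can be checked numerically with the leading Taylor term of Equation 29 (a toy sketch; the loop radius and sampling are arbitrary choices):

```python
# Near a vortex, the leading Taylor term (Eq. 29) can be modeled as
# psi = x + i y; the phase then winds by 2*pi around any loop enclosing the
# vortex line, giving circulation 2*pi/m as in Eq. (28).
import numpy as np

theta = np.linspace(0.0, 2*np.pi, 2001)
xs, ys = np.cos(theta), np.sin(theta)     # unit loop around the vortex at origin
psi = xs + 1j*ys                          # toy wavefunction near the vortex

phase = np.unwrap(np.angle(psi))          # continuous phase along the loop
winding = (phase[-1] - phase[0])/(2*np.pi)
print(f"winding number n = {winding:.3f}")   # n = 1
```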
A number of features of vortices in wave dark matter are worth stressing. (1)
One might think these locations of chance, complete destructive interference
must be rare, but they are actually ubiquitous: on average there is about one
vortex ring per de Broglie volume in a virialized halo. This has been verified
analytically in the random phase halo model, and in numerical wave simulations
of halos that form from gravitational collapse.$^{20}$ In a numerical
simulation, checking that the density is low is not enough to ascertain that
one has a vortex (keep in mind the density almost never exactly vanishes
numerically). A better diagnostic is to look for non-vanishing velocity
circulation, or phase winding—this is also more robust against varying
resolution. Note that gravity plays an important role in the formation of
vortices in the cosmology setting. In the early universe, the density (and the
wavefunction) is roughly homogeneous with very small fluctuations; this means
nowhere does the wavefunction vanish. It is only after gravity amplifies the density fluctuations, to order unity or larger, that complete destructive interference becomes possible. (2) Vortex rings in a realistic halo are not nice round
circles, but rather deformed loops. Nonetheless, certain features are robust.
Close to a vortex, the velocity scales as $1/r$ where $r$ is distance from
vortex (following from Equation 28), and the density scales as $r^{2}$
(following from Equation 29).$^{21}$ More generally, the density scales as
$r^{2|n|}$ where $n$ is the winding number. However, simulations suggest
$|n|=1$ is the generic expectation: it is rare to have $\psi$ and
$\vec{\nabla}\psi$ vanish at the same time. Moreover, a segment of a ring
moves with a velocity that scales with the curvature i.e. curvier means
faster. (3) Vortex rings come in a whole range of sizes: the distribution is
roughly flat below the de Broglie wavelength, but is exponentially suppressed
beyond that. (4) Vortex rings are transient, in the same sense that wave
interference patterns are. The coherence time is roughly the de Broglie time
(Equation 27). Vortex rings cannot appear or disappear in an arbitrary way,
though. A vortex ring can appear by first nucleating as a point, and then
growing to some finite size. It can disappear only by shrinking back to a
point (or merge with another ring). This behavior can be understood as a
result of Kelvin’s theorem: recall that the fluid description is valid away
from vortices; conservation of circulation tells us that vortices cannot be
arbitrarily removed or created.
To summarize, wave interference substructures, of which vortices are a
dramatic manifestation, are a unique signature of wave dark matter. It is
worth stressing that while the wave nature of dark matter leads to a
suppression of small scale power in the linear regime (Section 4.1), it leads
to the opposite effect in the nonlinear regime, by virtue of interference. We
discuss the implications for observations and experiments in Section 5.
### 4.5 Dynamical processes—relaxation, oscillation, evaporation, friction
and heating
An interesting phenomenon in a wave dark matter halo is soliton condensation,
first pointed out by Schive et al. (2014a, b). It is observed that virialized
halos in a cosmological simulation tend to have a core that resembles the
soliton discussed in Section 4.2, with a soliton mass that scales with the
halo mass as:
$M_{\rm soliton}\sim 6.7\times 10^{7}{\,\rm M_{\odot}}{10^{-22}{\,\rm eV}\over
m}\left({M_{\rm halo}\over 10^{10}{\,\rm M_{\odot}}}\right)^{1/3}\,.$ (30)
$^{22}$ It is worth emphasizing that this relation is well-tested only over a
limited range of halo mass: $\sim 10^{9}-10^{11}{\,\rm M_{\odot}}$, because of
the difficulty in simulating large boxes (Section 4.3). The relation can be
roughly understood as follows (Schive et al., 2014b). Recall that $R_{\rm
soliton}\propto 1/M_{\rm soliton}$ (Equation 4.2). Thus, the gravitational
potential of the soliton $\sim GM_{\rm soliton}/R_{\rm soliton}\propto M_{\rm
soliton}^{2}$. Equating this with the gravitational potential of the halo
$\sim GM_{\rm halo}/R_{\rm halo}$, and assuming $M_{\rm halo}/R_{\rm
halo}^{3}$ is constant i.e. $R_{\rm halo}\propto M_{\rm halo}^{1/3}$, the
relation $M_{\rm soliton}\propto M_{\rm halo}^{1/3}$ follows. That the
gravitational potential of the soliton and of the halo roughly match can be
interpreted as some sort of isothermal condition. It would be useful to check
if the kinetic approach of Levkov et al. (2018) can reproduce this. See Bar et
al. (2018) for further discussions.
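The scaling argument can be checked in a few lines (a toy consistency check in arbitrary units, not a simulation):

```python
# Impose that the soliton potential Phi_soliton ~ M_soliton^2 (which follows
# from R_soliton ~ 1/M_soliton) matches the halo potential
# Phi_halo ~ M_halo/R_halo ~ M_halo^{2/3} (constant mean density), and recover
# the slope d log M_soliton / d log M_halo = 1/3 of Equation 30.
import numpy as np

M_halo = np.logspace(9, 11, 50)       # halo masses, arbitrary units
Phi_halo = M_halo**(2.0/3.0)          # GM_halo/R_halo with R_halo ~ M_halo^{1/3}
M_soliton = np.sqrt(Phi_halo)         # invert Phi_soliton ~ M_soliton^2
slope = np.polyfit(np.log(M_halo), np.log(M_soliton), 1)[0]
print(f"slope = {slope:.4f}")         # 1/3
```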
The condensation process was studied by solving the Landau kinetic equation in
Levkov et al. (2018) (see also Seidel & Suen, 1994, Harrison et al., 2003,
Guzman & Urena-Lopez, 2006b, Schwabe et al., 2016). Here, we describe a
heuristic derivation of the condensation, or relaxation, time scale (Hui et
al., 2017). Consider the part of a halo interior to radius $R$, with velocity
dispersion $v$. Suppose there is no soliton yet. Wave interference as
described in Section 4.4 inevitably produces granules of de Broglie size
$\lambda_{\rm dB}$. In this region, we have $\sim(2R/\lambda_{\rm dB})^{3}$
such granules or quasi-particles. The relaxation time for such a gravitational
system is roughly a tenth of the crossing time $2R/v$ times the number of
granules i.e.
$\displaystyle t_{\rm relax}\sim 0.1{2R\over v}\left({2R\over\lambda_{\rm
dB}}\right)^{3}\sim 10^{8}{\,\rm yr}\left({R\over 2{\,\rm
kpc}}\right)^{4}\left({v\over 100{\,\rm km/s}}\right)^{2}\left({m\over
10^{-22}{\,\rm eV}}\right)^{3}$ $\displaystyle\sim 10^{8}{\,\rm
yr}\left({0.14{\,\rm M_{\odot}}/{\,\rm
pc}^{3}\over\rho}\right)^{2}\left({v\over 100{\,\rm
km/s}}\right)^{6}\left({m\over 10^{-22}{\,\rm eV}}\right)^{3}\,.$ (31)
In essence, we have adapted the standard relaxation time for a gravitational
system (Binney & Tremaine, 2008) by replacing the number of particles/stars by
the number of de Broglie granules. The above estimate suggests the
condensation of solitons quickly becomes inefficient for larger values of $m$.
It remains to be verified, though, whether this is indeed the relevant time
scale for soliton formation in a cosmological setting where halos undergo
repeated mergers. For instance, in a numerical study of six halos by Veltmaat
et al. (2018), all halos have substantial cores from the moment of halo
formation, though two of them exhibit some core growth over time.
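Plugging the fiducial numbers into the heuristic estimate reproduces the prefactor in Equation 31 (a sketch; the physical constants below are standard assumed values):

```python
# Heuristic relaxation time, Eq. (31): t_relax ~ 0.1 * (2R/v) * (2R/lambda_dB)^3.
import math

HBAR_C_EV_M = 1.97327e-7    # hbar*c in eV*m (assumed standard value)
C_KM_S = 2.998e5            # speed of light in km/s
KPC_KM = 3.0857e16          # kpc in km
YR_S = 3.156e7              # year in seconds

def t_relax_yr(R_kpc, v_km_s, m_ev):
    # de Broglie wavelength 2*pi*hbar/(m v), converted to kpc
    lam_kpc = 2*math.pi*HBAR_C_EV_M/(m_ev*v_km_s/C_KM_S)/(KPC_KM*1e3)
    crossing_s = 2*R_kpc*KPC_KM/v_km_s
    n_granules = (2*R_kpc/lam_kpc)**3
    return 0.1*crossing_s*n_granules/YR_S

t = t_relax_yr(2.0, 100.0, 1e-22)
print(f"t_relax ~ {t:.2g} yr")   # order 10^8 yr at the fiducial parameters
```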
Figure 3: Left panel: Snapshots of the formation of a halo. Clockwise from
top-left: initial moment, $1$ Gyr, $1.2$ Gyr and $1.1$ Gyr. Each snapshot is
$10$ kpc on a side. Color coding denotes the projected density in ${\rm
M_{\odot}/\,\rm pc^{2}}$. The cross in the middle denotes the center of mass.
Note how the soliton core wanders. Right panel: Spherically averaged density
profile (density in ${\rm M_{\odot}/\,\rm pc}^{3}$ as a function of radius in
kpc) at several different moments, from $1.2$ Gyr to $1.26$ Gyr. The soliton
core exhibits persistent oscillations. Soliton oscillations and random walk
were first observed in simulations by Veltmaat et al. (2018), Schive et al.
(2020). Figure adapted from Li et al. (2021).
Detailed studies of simulations suggest the core of a fuzzy dark matter halo
is not an exact soliton. Veltmaat, Niemeyer & Schwabe (2018) pointed out that
the core object has persistent oscillations, and Schive, Chiueh & Broadhurst
(2020) demonstrated that it random walks (see Figure 3). This is another
manifestation of wave interference. Think of the halo gravitational potential
as approximately constant (in time); the halo can be decomposed into a
superposition of energy eigenstates (Lin et al., 2018). The ground state (i.e.
the solitonic state) contributes substantially to the density around the halo
center, but it is not the only state that does. Interference between the
ground state and excited states approximately matches the core oscillations
and random walk observed in simulations (Li et al., 2021, Padmanabhan, 2021).
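A toy two-state superposition illustrates the mechanism: the density at a fixed point oscillates at the beat frequency set by the energy difference of the interfering states (the eigenfunctions, energies and amplitudes below are arbitrary illustrative choices, with $\hbar=1$; this is not the halo computation itself):

```python
# Superpose two stationary states with energies E0 and E1; the density at a
# fixed point x0 then oscillates with period 2*pi/(E1 - E0).
import numpy as np

E0, E1 = 0.5, 1.5                         # illustrative eigenenergies
x0 = 0.3                                  # observation point
psi0 = np.exp(-x0**2/2)                   # toy "ground state" value at x0
psi1 = x0*np.exp(-x0**2/2)                # toy "excited state" value at x0

t = np.linspace(0.0, 6*np.pi, 3000)
rho = np.abs(psi0*np.exp(-1j*E0*t) + 0.5*psi1*np.exp(-1j*E1*t))**2

# Interior maxima of rho(t); their spacing should equal 2*pi/(E1-E0) = 2*pi.
is_peak = np.r_[False, (rho[1:-1] > rho[:-2]) & (rho[1:-1] > rho[2:]), False]
peaks = t[is_peak]
period = peaks[1] - peaks[0]
print(f"beat period = {period:.3f} (expect {2*np.pi:.3f})")
```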
It is well known that a subhalo embedded inside a larger parent halo can be
tidally disrupted. The tidal radius is roughly where the average interior
density of the subhalo matches that of the parent halo. Quantum pressure adds
a new twist to this story: even mass within the tidal radius of the subhalo is
unstable to disruption. The evaporation time scale of a soliton inside a host
halo was computed in Hui et al. (2017): a soliton would evaporate in $\lesssim 10$ orbits if its density is $\lesssim 60$ times the host density. This was verified in wave
simulations by Du et al. (2018b).
The wave nature of dark matter also has an impact on dynamical friction.
Recall how dynamical friction works: a heavy object ploughs through a sea of
dark matter particles; gravitational scattering creates an overdense tail of
particles in its wake; the overdense tail gravitationally pulls on the heavy
object, effecting friction. For wave dark matter, one expects a smoothing of
the overdense tail on the de Broglie scale. The dynamical friction is thus
suppressed. A computation, neglecting self-gravity of the dark matter and
assuming the unperturbed background is homogeneous, is described in Hui et al.
(2017) (see also Lora et al., 2012): while the frictional force is
$4\pi\rho(GM/v)^{2}\left({\,\rm ln}[2r/(GM/v^{2})]-1\right)$ in the particle
limit, it is $4\pi\rho(GM/v)^{2}\left({\,\rm ln}[2rmv]-1+\gamma\right)$ in the
wave limit.$^{23}$ The result is derived by integrating momentum flux over a
sphere surrounding $M$, as opposed to a cylinder like in Chandrasekhar’s
classic computation, hence a small difference in the Coulomb logarithm in the
particle limit. Also, $rmv\gg 1$ is assumed. See Hui et al. (2017) for
details. Here, $\rho$ is the background mass density, $M$ is the mass of the
heavy object (such as a globular cluster), $v$ is the velocity of the heavy
object, $r$ is the size of the galactic halo or the orbital radius of $M$ in
the halo, and $\gamma=0.577...$ is the Euler-Mascheroni constant. The
distinction between the particle limit (i.e. Chandrasekhar) and the wave limit
comes down to comparing two length scales: $GM/v^{2}$ (the impact parameter at
which significant deflection occurs) versus the de Broglie scale $\sim
1/(mv)$. The wave limit applies when the former is less than the latter i.e.
if the following ratio is small:
${GM/v^{2}\over(1/mv)}=0.002\left({M\over 10^{6}{\,\rm
M_{\odot}}}\right)\left({100{\rm\,km/s}\over v}\right)\left({m\over
10^{-22}{\,\rm eV}}\right)\,.$ (32)
Depending on the parameters of interest, dynamical friction can be suppressed
significantly, if $m$ is in the ultra-light range. A computation of dynamical
friction in more general fluid dark matter is carried out in Berezhiani et al.
(2019). Investigations of dynamical friction in fuzzy dark matter in more
realistic settings— inhomogeneous background, with de Broglie granules—can be
found in Du et al. (2017), Bar-Or et al. (2019), Lancaster et al. (2020).
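The criterion in Equation 32 is easy to evaluate numerically (a sketch; the constants below are standard assumed values):

```python
# Particle-vs-wave criterion, Eq. (32): compare GM/v^2 (the strong-deflection
# impact parameter) to the reduced de Broglie length 1/(m v).
import math

G_MSUN_KM3_S2 = 1.327e11    # G*M_sun in km^3/s^2
HBAR_C_EV_M = 1.97327e-7    # hbar*c in eV*m
C_KM_S = 2.998e5

def wave_ratio(M_msun, v_km_s, m_ev):
    gm_over_v2_km = G_MSUN_KM3_S2*M_msun/v_km_s**2        # GM/v^2 in km
    inv_mv_km = HBAR_C_EV_M/(m_ev*v_km_s/C_KM_S)/1e3      # hbar/(m v) in km
    return gm_over_v2_km/inv_mv_km

r = wave_ratio(1e6, 100.0, 1e-22)
print(f"(GM/v^2)/(1/mv) = {r:.4f}")   # ~0.002: the wave limit applies
```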
We close this section with a discussion of one more dynamical effect from the
wave nature of dark matter. Recall from Section 4.4 that the wave interference
pattern of granules and vortices is transient, on time scale of $t_{\rm dB}$
(Equation 4.4). The fluctuating gravitational potential leads to the heating
and scattering of stars (Hui et al., 2017, Amorisco & Loeb, 2018, Bar-Or et
al., 2019, Church et al., 2019, Marsh & Niemeyer, 2019, Schive et al., 2020).
A rough estimate can be obtained as follows. Consider a star undergoing
deflection by a de Broglie blob: the angle of (weak) deflection is $\sim
2GM/(bv^{2})$ where $M$ is the mass of the blob and $b$ is the impact
parameter. The deflection imparts a kick to the velocity of the star,
perpendicular to the original direction of motion: $\Delta v\sim 2GM/(bv)$.
Using $M\sim 4\pi\rho(\lambda_{\rm dB}/2)^{3}/3$ and $b\sim\lambda_{\rm
dB}/2$, one finds$^{24}$ Note that an underdensity, such as around a vortex
ring, would effectively cause a deflection of the opposite sign compared to an
overdensity. We are not keeping track of this sign. Note also if we were more
careful, we should have integrated over a range of impact parameters instead
of setting $b\sim\lambda_{\rm dB}/2$, yielding some Coulomb logarithm.
$\Delta v\sim 0.08{\,\rm km/s}\left({\rho\over 0.01{\,\rm
M_{\odot}\,pc^{-3}}}\right)\left({250{\,\rm km/s}\over
v}\right)^{3}\left({10^{-22}{\,\rm eV}\over m}\right)^{2}\,.$ (33)
This is a stochastic kick, and its rms value accumulates in a root $N$
fashion, where $N$ is the number of de Broglie blobs the star encounters,
which is roughly $Tv/\lambda_{\rm dB}$ where $T$ is the time over which such
encounters take place. Thus,
${\rm rms\,}\Delta v\sim 4{\,\rm km/s}\left({T\over 5{\,\rm
Gyr}}\right)^{1/2}\left({\rho\over 0.01{\,\rm
M_{\odot}\,pc^{-3}}}\right)\left({250{\,\rm km/s}\over
v}\right)^{2}\left({10^{-22}{\,\rm eV}\over m}\right)^{3/2}\,.$ (34)
See Bar-Or et al. (2019), Church et al. (2019) for more careful analyses of
such heating. We discuss the implications for tidal streams, galactic disks
and stellar clusters in Section 5.
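The two heating estimates above can be reproduced directly (a sketch of Equations 33 and 34; the physical constants are standard assumed values):

```python
# Single kick from one de Broglie blob, dv ~ 2GM/(b v) with
# M ~ (4*pi/3)*rho*(lambda_dB/2)^3 and b ~ lambda_dB/2, and the root-N
# accumulated rms over N ~ T v / lambda_dB encounters.
import math

G_MSUN_KM3_S2 = 1.327e11    # G*M_sun in km^3/s^2
HBAR_C_EV_M = 1.97327e-7    # hbar*c in eV*m
C_KM_S = 2.998e5
PC_KM = 3.0857e13           # parsec in km
YR_S = 3.156e7

def kicks_km_s(rho_msun_pc3, v_km_s, m_ev, T_gyr):
    lam_pc = 2*math.pi*HBAR_C_EV_M/(m_ev*v_km_s/C_KM_S)/(PC_KM*1e3)
    b_pc = lam_pc/2                                        # impact parameter
    M_blob = (4*math.pi/3)*rho_msun_pc3*b_pc**3            # blob mass, M_sun
    dv = 2*G_MSUN_KM3_S2*M_blob/(b_pc*PC_KM*v_km_s)        # one kick, km/s
    N = T_gyr*1e9*YR_S*v_km_s/(lam_pc*PC_KM)               # encounters in time T
    return dv, dv*math.sqrt(N)

dv, rms = kicks_km_s(0.01, 250.0, 1e-22, 5.0)
print(f"dv ~ {dv:.2f} km/s, rms ~ {rms:.1f} km/s")  # ~0.08 and ~4 km/s
```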
### 4.6 Compact objects and relativistic effects—black hole accretion,
superradiance and potential oscillation
What happens to wave dark matter around compact objects, such as black holes?
First of all, accretion onto black holes should occur. This includes accretion
of both mass and angular momentum. Second, for a spinning black hole, the
reverse can happen: mass and angular momentum can be extracted out of a Kerr
black hole, an effect known as superradiance.
To study these phenomena properly, because relativistic effects become
relevant close to the horizon, one needs to revert to the Klein-Gordon
description i.e. $\phi$ obeying Equation 10. There is a long history of
studying solutions to the Klein-Gordon equation in a Schwarzschild or Kerr
background (Starobinskiǐ, 1973, Unruh, 1976, Detweiler, 1980, Bezerra et al.,
2014, Vieira et al., 2014, Konoplya & Zhidenko, 2006, Dolan, 2007, Arvanitaki
et al., 2010, Arvanitaki & Dubovsky, 2011, Barranco et al., 2012, Arvanitaki
et al., 2017). The treatments generally differ in the boundary conditions
assumed: while the boundary condition at the horizon is always ingoing, that
far away can be outgoing (for studying quasi-normal modes), asymptotically
vanishing (for studying superradiance clouds), or infalling (for studying
accretion), or combination of infalling and outgoing (for studying
scattering).
For a black hole immersed in a wave dark matter halo, the infalling boundary
condition is the most relevant. In particular, the stationary accretion flow
around a black hole was investigated in Clough et al. (2019), Hui et al.
(2019), Bamber et al. (2020) i.e. the time-dependence of $\phi$ is a linear
combination of $e^{\pm imt}$ at all radii. The Klein-Gordon equation in a
Schwarzschild background takes the form:
$\left[\partial_{t}^{2}-\partial_{r_{*}}^{2}+U(r)\right](r\phi)=0\quad,\quad
U(r)\equiv\left(1-{r_{s}\over r}\right)\left(m^{2}+{\ell(\ell+1)\over
r^{2}}+{r_{s}\over r^{3}}\right)\,,$ (35)
where $t$ and $r$ are the time and radial coordinates of the Schwarzschild
metric, $r_{s}$ is the Schwarzschild radius, and $r_{*}$ is the tortoise
coordinate: $r_{*}=r+r_{s}{\,\rm log\,}(r/r_{s}-1)$. We have assumed the
angular dependence of $\phi$ is given by a spherical harmonic of some $\ell$.
For $\phi\propto e^{\pm imt}$, this resembles the Schrödinger equation with
some potential. For $\ell=0$, the radial profile of $\phi$ goes roughly as
follows: (1) for $r_{s}^{-1}\lesssim m$, we have $\phi\sim r^{-3/4}$ i.e.
there is a pile-up of the scalar towards the horizon;$^{25}$ This is the
particle limit, in that the Compton wavelength is smaller than the horizon
size. Note that here the relevant wavelength is Compton, not de Broglie. The
$r^{-3/4}$ behavior can be understood as follows. A stationary accretion flow
should have $r^{2}\rho v=$ constant, where $v$ is the radial velocity, and
$\rho$ is the dark matter density. Energy conservation for the dark matter
particle means $v^{2}\sim 1/r$. Thus, $\rho\sim r^{-3/2}$. Noting that
$\rho\sim\phi^{2}$ tells us $\phi\sim r^{-3/4}$. Such a dark matter spike
around a black hole was discussed in Gondolo & Silk (2000), Ullio et al.
(2001). (2) for $m\lesssim v_{\rm halo}\,r_{s}^{-1}$, where $v_{\rm halo}$ is the
velocity dispersion of the ambient halo, the scalar profile is more or less
flat; (3) for $m$ in between these two limits, $\phi$ exhibits both particle
behavior (the $r^{-3/4}$ pile-up) and wave behavior in the form of standing
waves.$^{26}$ The stationary accretion flow of $\phi$ onto the black hole can
be thought of as some sort of hair. The classic no-scalar-hair theorem of
Bekenstein (1972b, a) assumes $\phi$ vanishes far away from the black hole,
which is violated in this case. The boundary condition of $e^{\pm imt}$ can be
thought of as a generalization of the $\phi\sim t$ boundary condition
considered by Jacobson (1999) (see also Horbatsch & Burgess, 2012, Wong et
al., 2019). The computation described above assumes the black hole dominates
gravitationally: one can check that, for astrophysically relevant parameters,
the pile-up of the scalar towards the horizon does not lead to significant
gravitational backreaction. There is, however, the possibility that self-
interaction (the quartic interaction for the axion) might be non-negligible
close to the horizon due to the pile-up. As one goes to larger distances from
the black hole, the dark matter (and baryons) eventually dominates
gravitationally. An interesting setting is the wave dark matter soliton at the
center of a galaxy which also hosts a supermassive black hole (Brax et al.,
2020). Investigations of how the black hole modifies the soliton can be found
in Chavanis (2019), Bar et al. (2019b), Davies & Mocz (2020).
Even though the instantaneous gravitational backreaction of the scalar is
small close to the black hole, the cumulative accreted mass could be
significant. The accretion rate in the low $m$ regime (for $\ell=0$) is:
$\dot{M}_{\rm BH}=4\pi r_{s}^{2}\rho_{\rm halo}\sim 4\times 10^{-9}{\,\rm
M_{\odot}}{\,\rm yr}^{-1}\left({M_{\rm BH}\over 10^{9}{\,\rm
M_{\odot}}}\right)^{2}\left({\rho_{\rm halo}\over 0.1{\,\rm
M_{\odot}\,pc^{-3}}}\right)$ (36)
where $M_{\rm BH}$ is the mass of the black hole, and $\rho_{\rm halo}$ is the
ambient dark matter halo density.$^{27}$ This is simple to understand: in the
low mass regime, there is essentially no pile-up towards the horizon. Thus,
the dark matter density at horizon is roughly the same as $\rho_{\rm halo}$,
the density far away. At the horizon, dark matter flows into the black hole at
the speed of light, which is unity in our convention. Hence the expression for
$\dot{M}$. In the high $m$ regime, the pile-up enhances this by a factor of
$\sim 1/v_{\rm halo}^{3}$. For $v_{\rm halo}\sim 10^{-3}$, we see that
$\dot{M}_{\rm BH}$ goes up to $4{\,\rm M_{\odot}/yr}$ in the high $m$ limit,
though it should be kept in mind this estimate assumes $\ell=0$. (Note that
$r_{s}^{-1}=6.7\times 10^{-20}{\,\rm eV}(10^{9}{\,\rm M_{\odot}\,}/M_{\rm
BH})$.)
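Both quoted numbers follow from a short unit conversion (a sketch; the constants below are standard assumed values):

```python
# Low-m accretion rate, Eq. (36): Mdot = 4*pi*r_s^2*rho_halo*c, and the
# quoted r_s^{-1} in eV for a 1e9 M_sun black hole.
import math

G_MSUN_KM3_S2 = 1.327e11    # G*M_sun in km^3/s^2
C_KM_S = 2.998e5
HBAR_C_EV_M = 1.97327e-7    # hbar*c in eV*m
PC_KM = 3.0857e13
YR_S = 3.156e7

M_bh = 1e9                                    # black hole mass, M_sun
rho_halo = 0.1                                # ambient density, M_sun/pc^3
r_s_km = 2*G_MSUN_KM3_S2*M_bh/C_KM_S**2       # Schwarzschild radius in km
mdot = 4*math.pi*r_s_km**2*(rho_halo/PC_KM**3)*C_KM_S*YR_S  # M_sun/yr
inv_rs_ev = HBAR_C_EV_M/(r_s_km*1e3)          # r_s^{-1} in eV
print(f"Mdot ~ {mdot:.1e} Msun/yr, r_s^-1 ~ {inv_rs_ev:.1e} eV")
```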
Suppose one solves the Klein-Gordon equation with a different boundary
condition far away from the black hole: that $\phi$ vanishes. In that case,
assuming the time dependence is given by $e^{-i\omega t}$, the allowed
frequency $\omega$ forms a discrete spectrum, much like the energy spectrum of
a hydrogen atom. For a spinning black hole, some of these $\omega$’s are
complex with a positive imaginary part, signaling an instability, known as
superradiance (Zel’Dovich, 1972, Bardeen et al., 1972, Press & Teukolsky,
1972, Starobinskiǐ, 1973, Damour et al., 1976, Dolan, 2007, Arvanitaki et al.,
2010, Arvanitaki & Dubovsky, 2011, Arvanitaki et al., 2017, Endlich & Penco,
2017). The superradiance condition is:
${\,\rm Re\,}\omega<{am_{J}\over{r_{s}r_{+}}}$ (37)
where $r_{s}=2GM$, $r_{+}=(r_{s}/2)+\sqrt{(r_{s}/2)^{2}-a^{2}}$ is the
horizon, $a$ is the black hole angular momentum per unit mass (the
dimensionless spin is $2a/r_{s}$, between $0$ and $1$), and $m_{J}$ is the
angular momentum quantum number of the mode in question.$^{28}$ Re $\omega$ is
always of the order of the mass of the particle $m$, and Im $\omega$ is
maximized for the $\ell=m_{J}=1$ mode and $mr_{s}/2\sim 0.1-0.5$ depending on
the value of $a$. It is a weak instability in the sense that Im $\omega$ is at
best about $10^{-6}m$. See Dolan (2007). A superradiant mode extracts energy
and angular momentum from the black hole. That this mode grows with time means
the scalar need not be dark matter at all— even quantum fluctuations could
provide the initial seed to grow a whole superradiance cloud around the black
hole. In the process, the black hole loses mass and angular momentum (much of
which occurs when the cloud is big). At some point, the black hole’s mass and
spin are such that the mode in question is no longer unstable, and in fact
some of the lost energy and angular momentum flow back into the black hole,
until another superradiant mode—one that grows more slowly, typically higher
$\ell$—takes over (see e.g. Ficarra et al., 2019). The implied net black hole
spin-down is used to put constraints on the existence of light scalars, using
black holes with spin measurements (for recent discussions, see e.g. Stott &
Marsh, 2018, Davoudiasl & Denton, 2019). Other phenomena associated with the
black hole superradiance cloud include gravitational wave emission, and run-away explosion when self-interaction becomes important (Arvanitaki & Dubovsky,
2011, Yoshino & Kodama, 2014, Hannuksela et al., 2019).
It is worth stressing that these constraints do not assume the scalar in
question is the dark matter. An interesting question is how the constraints
might be modified if the scalar is the dark matter. For instance there can be
accretion of angular momentum from the ambient dark matter, much like the
accretion of mass discussed earlier.$^{29}$ There can also be accretion of
baryons, discussed in e.g. Barausse et al. (2014). The cloud surrounding the
black hole is thus a combination of superradiant unstable and stable modes.
This was explored in Ficarra et al. (2019): if the initial seed cloud (of both
unstable and stable modes) is large enough, the long term evolution of the
black hole mass and spin can be quite different from the case of a small
initial seed.$^{30}$ It is worth stressing that, while the Klein-Gordon
equation is linear in $\phi$, the evolution of the combined black-hole-scalar-
cloud system is nonlinear. As the black hole mass and spin evolve due to
accretion/extraction, the background geometry for the Klein-Gordon equation is
modified, which affects the scalar evolution. This feedback loop has non-
negligible effects, even though at any given moment in time, the geometry is
dominated by the black hole rather than the cloud. This is particularly
relevant if the scalar in question is the dark matter, and therefore present
around the black hole from the beginning. It would be worth quantifying how
existing superradiance constraints might be modified in this case. There are
also interesting investigations on how such a cloud interacts with a binary
system (Baumann et al., 2019, Zhang & Yang, 2020, Annulli et al., 2020).
We close this section with the discussion of one more relativistic effect,
pointed out by Khmelnitsky & Rubakov (2014). The energy density associated
with the oscillations of $\phi$ (which can be interpreted as a collection of
$\phi$ particles) is $\rho=(\dot{\phi}^{2}+m^{2}\phi^{2})/2$ (Equation 7). It
can be shown that the corresponding pressure is
$P=(\dot{\phi}^{2}-m^{2}\phi^{2})/2$. For $\phi\sim\sin(mt)$ or
$\cos(mt)$, we see that $\rho$ is constant while $P$ oscillates with frequency
$2m$. The Einstein equations tell us this sources an oscillating gravitational
potential. In Newtonian gauge, with the spatial part of the metric as
$g_{ij}=(1-2\Psi)\delta_{ij}$, the gravitational potential $\Psi$ has a
constant piece that obeys the usual Poisson equation $\nabla^{2}\Psi=4\pi
G\rho$, and an oscillating part obeying $-\ddot{\Psi}\sim 4\pi GP$. Thus
$\Psi$ oscillates with frequency $2m$ and amplitude $\pi G\rho/m^{2}$. In
other words, the oscillating part of $\Psi$ is suppressed compared to the
constant part by $k^{2}/m^{2}$. The typical (constant part of) gravitational
potential is of the order $10^{-6}$ in the Milky Way; the oscillating part is
then about $10^{-12}$. For $m$ in the ultra-light range, recalling $m^{-1}\sim
0.2{\,\rm yr}\,(10^{-22}{\,\rm eV}/m)$, pulsar timing arrays are well suited
to search for this effect, as proposed by Khmelnitsky & Rubakov (2014). See
further discussions in Section 5.4.
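As a concrete check of this argument, the sketch below (illustrative only: the field amplitude and mass in the first part are arbitrary, in natural units) verifies that $\rho$ is constant while $P$ oscillates at frequency $2m$, and evaluates the relevant timescales for an ultra-light mass.

```python
import math

# phi = A cos(m t)  =>  rho = (phidot^2 + m^2 phi^2)/2 is constant,
# while P = (phidot^2 - m^2 phi^2)/2 = -(A^2 m^2 / 2) cos(2 m t) oscillates at 2m.
A, m = 1.7, 3.0  # arbitrary amplitude and mass (natural units)
for t in (0.0, 0.3, 1.1, 2.5):
    phi = A * math.cos(m * t)
    phidot = -A * m * math.sin(m * t)
    rho = 0.5 * (phidot**2 + m**2 * phi**2)
    P = 0.5 * (phidot**2 - m**2 * phi**2)
    assert abs(rho - 0.5 * A**2 * m**2) < 1e-12                      # density: constant
    assert abs(P + 0.5 * A**2 * m**2 * math.cos(2 * m * t)) < 1e-12  # pressure: 2m mode

# Timescales for an ultra-light mass
HBAR_EV_S = 6.582e-16   # hbar [eV s]
YEAR_S = 3.156e7        # one year [s]

def coherence_time_yr(m_eV):
    """1/m in years: the text's 0.2 yr x (1e-22 eV / m)."""
    return HBAR_EV_S / m_eV / YEAR_S

def potential_period_yr(m_eV):
    """Period of the 2m oscillation of Psi: T = 2 pi/(2m) = pi hbar/m."""
    return math.pi * HBAR_EV_S / m_eV / YEAR_S
```

For $m=10^{-22}$ eV this gives a coherence time $1/m\approx 0.2$ yr and a potential-oscillation period of about $0.65$ yr, i.e. nanohertz frequencies, which is why pulsar timing arrays are the natural probe.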
## 5 Observational/experimental implications and constraints
In this section, we discuss the observational and experimental implications of
the wave dynamics and phenomenology explained above. The discussion serves a
dual function. One is to summarize current constraints—because of the wide
scope, the treatment is more schematic than in previous sections, but provides
entry into the literature. The other is to point out the limitations of
current constraints, how they might be improved, and to highlight promising
new directions. Astrophysical observations are relevant mostly, though not
exclusively, for the ultra-light end of the spectrum. Axion detection
experiments, on the other hand, largely probe the heavier masses, though new
experiments are rapidly expanding the mass range. Much of the discussion
applies to any wave dark matter candidate whose dominant interaction is
gravitational. Some of it, on axion detection experiments for instance, applies
specifically to axions with their expected non-gravitational interactions
(Equation 9).
Sections 5.2 and 5.3 focus on ultra-light wave dark matter i.e. fuzzy dark
matter. Table 1 summarizes some of the corresponding astrophysical
constraints. Sections 5.1, 5.4, 5.5 and 5.6 cover more general wave dark
matter, with Section 5.6 on axion detection experiments.
### 5.1 Early universe considerations
Within the inflation paradigm, the light scalar $\phi$ associated with wave
dark matter has inevitable quantum fluctuations which are stretched to large
scales by an early period of accelerated expansion (Axenides et al., 1983,
Linde, 1985, Seckel & Turner, 1985, Turner & Wilczek, 1991). These are
isocurvature fluctuations, distinct from the usual adiabatic fluctuations
associated with the inflaton $\varphi$, which is another light scalar. The
relevant power spectra are (e.g., Baumann, 2011, Marsh et al., 2013):
$\Delta_{\zeta}^{2}={1\over 8\pi^{2}\epsilon}{H_{\rm infl}^{2}\over m_{\rm
pl}^{2}}\quad,\quad\Delta_{\phi}^{2}={1\over\pi^{2}}{H_{\rm
infl}^{2}\over\phi_{i}^{2}}\,,$ (38)
where $\Delta_{\zeta}^{2}$ is the (adiabatic) curvature power spectrum,
$\Delta_{\phi}^{2}$ is the (isocurvature) density power spectrum for $\phi$,
$H_{\rm infl}$ is the Hubble scale during inflation, $m_{\rm pl}\equiv
1/\sqrt{8\pi G}\sim 2.4\times 10^{18}{\,\rm GeV}$ is the reduced Planck mass,
$\phi_{i}$ is the (axion) scalar field value during inflation, and $\epsilon$
is the first slow-roll parameter. [Footnote 31: The dimensionless power
spectrum $\Delta^{2}(k)$ is related to the dimensionful power spectrum $P(k)$
by $\Delta^{2}\equiv 4\pi k^{3}P(k)/(2\pi)^{3}$. We have suppressed a
$k$-dependent factor that depends on the spectral index $n$, i.e.
$\Delta^{2}\propto k^{n-1}$. For single field slow roll inflation,
$n-1=2\eta-6\epsilon$, where $\epsilon\equiv({\cal V}_{,\varphi}m_{\rm
pl}/{\cal V})^{2}/2=-\dot{H}_{\rm infl}/H_{\rm infl}^{2}$ and $\eta\equiv
m_{\rm pl}^{2}{\cal V}_{,\varphi\varphi}/{\cal V}$ are the first and second
slow roll parameters, with ${\cal V}$ being the inflaton potential. The
spectral tilt for $\zeta$ is observed to be $n\sim 0.97$ (Hinshaw et al.,
2013, Aghanim et al., 2020).] Microwave background anisotropies bound
$\Delta_{\phi}^{2}/\Delta_{\zeta}^{2}\lesssim 0.05$ (Hinshaw et al., 2013,
Aghanim et al., 2020), implying $8\epsilon(m_{\rm
pl}/\phi_{i})^{2}\lesssim 0.05$. Consider for instance $\phi_{i}\sim 10^{17}$
GeV (see Equation 8, where $\phi_{i}\sim f$). In that case, observations
require $\epsilon\lesssim 10^{-5}$. [Footnote 32: Given that the scalar
spectral index is observed to be $n-1=2\eta-6\epsilon\sim -0.03$, the smallness
of $\epsilon$ means the requisite inflation model is one where
$\eta\gg\epsilon$. For recent model building in this direction, see Schmitz &
Yanagida (2018).] Since $\Delta^{2}_{\zeta}$ is observed to be about
$10^{-9}$, this implies $H_{\rm infl}/m_{\rm pl}\lesssim 10^{-6}$. This is a
low inflation scale, suggesting a low
level of gravitational waves, or tensor modes (Lyth, 1990). One can see this
more directly by recalling that tensor modes suffer the same level of
fluctuations as a spectator scalar like $\phi$:
$\Delta^{2}_{\rm tensor}={2\over\pi^{2}}{H_{\rm infl}^{2}\over m_{\rm
pl}^{2}}\quad,\quad r\equiv{\Delta^{2}_{\rm
tensor}\over\Delta^{2}_{\zeta}}=16\epsilon\,,$ (39)
where $\Delta_{\rm tensor}^{2}$ resembles $\Delta_{\phi}^{2}$, with $\phi_{i}$
replaced by $m_{\rm pl}$, and a factor of $2$ for the $2$ polarizations. The
tensor-to-scalar ratio $r$ is thus constrained by the isocurvature bound to
be: $r\lesssim 0.1(\phi_{i}/m_{\rm pl})^{2}$. For $\phi_{i}\sim 10^{17}$
GeV, this means $r\lesssim 2\times 10^{-4}$, making tensor modes challenging to
observe with future microwave background experiments. Most axion models have
lower $\phi_{i}$’s which would strengthen the bound. This is thus a general
requirement: to satisfy the existing isocurvature bound, the inflation scale
$H_{\rm infl}$ must be sufficiently low, implying a low primordial
gravitational wave background. This holds as long as the scalar dark matter
derives its abundance from the misalignment mechanism, with the misalignment
angle in place during inflation. A way to get around this is to consider
models where the scalar $\phi$ becomes heavy during inflation (Higaki et al.,
2014).
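The chain of numbers above can be reproduced in a few lines. The sketch below is illustrative: it takes the measured amplitude $\Delta_{\zeta}^{2}\approx 2.1\times 10^{-9}$ and the fiducial $\phi_{i}\sim 10^{17}$ GeV from the text, and propagates the isocurvature bound through Equations 38 and 39.

```python
import math

# Fiducial inputs (values quoted in the text)
m_pl = 2.4e18         # reduced Planck mass [GeV]
phi_i = 1e17          # axion field value during inflation [GeV]
iso_bound = 0.05      # CMB bound on Delta_phi^2 / Delta_zeta^2
delta_zeta2 = 2.1e-9  # measured curvature power spectrum amplitude

# Isocurvature bound: Delta_phi^2/Delta_zeta^2 = 8 eps (m_pl/phi_i)^2 <= 0.05
eps_max = iso_bound / (8.0 * (m_pl / phi_i) ** 2)

# Curvature amplitude (Equation 38): Delta_zeta^2 = H^2/(8 pi^2 eps m_pl^2)
H_over_mpl = math.sqrt(8.0 * math.pi ** 2 * eps_max * delta_zeta2)

# Tensor-to-scalar ratio (Equation 39): r = 16 eps
r_max = 16.0 * eps_max

print(f"eps    <~ {eps_max:.1e}")     # ~1e-5
print(f"H/m_pl <~ {H_over_mpl:.1e}")  # ~1e-6
print(f"r      <~ {r_max:.1e}")       # ~2e-4
```

The output reproduces the bounds quoted above: $\epsilon\lesssim 10^{-5}$, $H_{\rm infl}/m_{\rm pl}\lesssim 10^{-6}$ and $r\lesssim 2\times 10^{-4}$.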
The requirement does not apply in cases where the relic abundance is
determined by other means. For instance, for the QCD axion, it could happen
that the Peccei-Quinn symmetry is broken only after inflation (recall the
axion as a Goldstone mode exists only after spontaneous breaking of the
symmetry), in which case the relic abundance is determined by the decay of
axion strings and domain walls (Kolb & Turner, 1990, Buschmann et al., 2020,
Gorghetto et al., 2020). There are also proposals for vector, as opposed to
scalar, wave dark matter: isocurvature vector perturbations are relatively
harmless because they decay (Graham et al., 2016b, Kolb & Long, 2020).
The above discussion includes only the gravitational interaction of scalar
dark matter. Other early universe effects are possible with non-gravitational
interactions. For instance, Sibiryakov et al. (2020) pointed out if the scalar
has a dilaton-like coupling to the standard model, Helium-4 abundance from big
bang nucleosynthesis can be significantly altered. [Footnote 33: Such a scalar
coupling to the standard model must be close to universal to satisfy
stringent equivalence principle violation constraints (Wagner et al., 2012,
Graham et al., 2016a).] The pseudo-scalar coupling to fermions (Equation 9)
gives rise to a spin-dependent force that can also be probed experimentally
(Terrano et al., 2015).
### 5.2 Linear power spectrum and early structure formation
As discussed in Section 4.1, light scalar dark matter—produced out of a
transition process from slow-roll to oscillations—has a primordial power
spectrum suppressed on small scales (high $k$’s). For fuzzy dark matter, the
suppression scale is around $k\sim 5$/Mpc (Equation 20). Observations of the
Lyman-alpha forest are sensitive to power on such scales. The Lyman-alpha
forest is the part of the spectrum of a distant object (usually a quasar)
between Lyman-alpha and Lyman-beta in its rest frame. Intergalactic neutral
hydrogen causes absorption, with measurable spatial fluctuations. With
suitable modeling, the spatial fluctuations can be turned into statements
about the dark matter power spectrum (Croft et al., 1998, Hui, 1999, McDonald
et al., 2005b, Palanque-Delabrouille et al., 2013). With this technique, a
limit of $m\gtrsim 3\times 10^{-21}$ eV was obtained by Iršič et al. (2017),
Kobayashi et al. (2017), Armengaud et al. (2017). Rogers & Peiris (2020) found
a stronger bound of $2\times 10^{-20}$ eV—among the differences in analysis
are assumptions on the reionization history.
In this type of investigation, often the only effect of fuzzy dark matter
accounted for is its impact on the primordial power spectrum. One might worry
about the effect of quantum pressure on the subsequent dynamics, but this was
shown to be a small effect at the scales and redshifts for the Lyman-alpha
forest (Nori et al., 2019, Li et al., 2019). Another assumption is that the
observed fluctuations in neutral hydrogen reflect fluctuations in the dark
matter. This need not be true, since astrophysical fluctuations modulate the
neutral hydrogen distribution, such as fluctuations in the ionizing background
(Croft, 2004, McDonald et al., 2005a, D’Aloisio et al., 2018), the
temperature-density relation (Hui & Gnedin, 1997, Cen et al., 2009, Keating et
al., 2018, Wu et al., 2019, Oñorbe et al., 2019) and from galactic winds
(McDonald et al., 2005a, Viel et al., 2013). Measurements of the power
spectrum growth from the forest suggest the astrophysical fluctuations are
sub-dominant, that gravity is sufficient to account for the observed growth
(McDonald et al., 2005b). Nonetheless, it is worth stressing that for the bound
on $m$, one has to worry about systematic effects at the few percent level.
[Footnote 34: For instance, the Lyman-alpha absorption power spectrum for
$m=10^{-21}$ eV fuzzy dark matter differs from that for conventional cold dark
matter at the few percent level (at $z\sim 5$; smaller as one goes to lower
redshifts), if one allows the intergalactic medium parameters (especially the
temperature) to float to fit the data. If the latter parameters were held
fixed, the two model predictions would differ significantly, by up to a factor
of a few. But that is not the relevant comparison. Since the intergalactic
medium parameters are unknown and need to be fit from the data, the relevant
comparison is between fuzzy dark matter at its best fit and conventional dark
matter at its best fit; they differ at the few percent level. Thanks are due
to Rennan Barkana, Vid Iršič and Matteo Viel for discussions on this point.]
The astrophysical fluctuations
were accounted for in the following way in deriving constraints (Iršič et al.,
2017, Kobayashi et al., 2017, Armengaud et al., 2017). Simulations with these
astrophysical fluctuations are compared against those without; the scale and
redshift dependence of the fractional difference in the predicted Lyman-alpha
power spectrum is then fixed, while the amplitude of the difference is treated
as a free parameter to be determined from the data. The question is to what
extent simulations of the astrophysical fluctuations have enough variety to
account for the range of possible scale and redshift dependence. The variety
in question derives from the distribution of ionizing sources, the
reionization history and the strength and form of galactic feedback.
[Footnote 35: The Lyman-alpha forest can also be used to constrain scenarios
where Peccei-Quinn symmetry breaking occurs after inflation. See Iršič et al.
(2020).]
Formation of the first nonlinear objects in the universe is also sensitive to
the small scale power spectrum. Recall in hierarchical structure formation, it
is the small, less massive objects that form first. A suppression of small
scale power implies fewer nonlinear objects at high redshifts, delaying
reionization (Barkana et al., 2001). The EDGES experiment (Bowman et al.,
2018) announced the detection of an absorption feature around $78$ MHz that
may result from the hyperfine transition (21cm) of hydrogen at redshift around
$15-20$. This suggests the spin temperature of the 21cm line is coupled to the
gas temperature at such high redshifts, and points to early star formation
which produces the requisite radiation to do so. This was used to place bounds
on fuzzy dark matter of $m\gtrsim 5\times 10^{-21}$ eV (Safarzadeh et al.,
2018, Schneider,
2018, Lidz & Hui, 2018). A few considerations should be kept in mind. The
EDGES detection remains to be confirmed (Hills et al., 2018). These bounds
assume (1) star formation tracking halo formation, and (2) an upper limit on
the fraction of halo baryons that turn into stars ($0.05$ in Lidz & Hui,
2018). Another important assumption is that the halo mass function can be
reliably predicted from the linear power spectrum by the standard Press-
Schechter or Sheth-Tormen relations (Press & Schechter, 1974, Sheth & Tormen,
1999, Marsh & Silk, 2014, Kulkarni & Ostriker, 2020). 363636 The idea is to
map the mass of a halo to a comoving length scale. The number density of halos
at that mass (i.e. the mass function) is then related to the linear power
spectrum at the corresponding length scale. These relations have been checked
for fuzzy dark matter models using only N-body, as opposed to wave,
simulations, i.e. the “fuzziness” enters only through the primordial power
spectrum (Schive et al., 2016). Typical wave simulations use too small a box
size to give a reliable halo mass function. It is conceivable that wave
interference phenomena might produce more small objects than expected from
Press-Schechter type arguments.
Looking towards the future, spectral distortion measurements of the microwave
background hold the promise of measuring the linear power spectrum down to
very small scales, comoving $k$ as high as $10^{4}$/Mpc (Kogut et al., 2019,
Chluba et al., 2019). [Footnote 37: An experiment like PIXIE can probe excess
power over the conventional cold dark matter prediction. To check if there is
a power deficit, from wave dark matter for instance, would require something
more ambitious, such as Super-PIXIE (Chluba et al., 2019).] From Equation 20,
this kind
of experiment can thus probe a wave dark matter mass as high as $\sim
10^{-15}$ eV.
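The final estimate can be made explicit. The sketch below assumes the suppression wavenumber scales as $m^{1/2}$ (the Jeans scale at matter-radiation equality), normalized to $k\approx 5/{\rm Mpc}$ at $m=10^{-22}$ eV as quoted above; both the normalization and the exponent are rough.

```python
# Suppression scale assumed to scale as k = k0 * (m/m0)^(1/2) (Jeans scale at
# matter-radiation equality), normalized per the text: k0 = 5/Mpc at m0 = 1e-22 eV.
K0_PER_MPC, M0_EV = 5.0, 1e-22

def mass_probed_eV(k_per_mpc):
    """Wave dark matter mass whose suppression scale sits at wavenumber k."""
    return M0_EV * (k_per_mpc / K0_PER_MPC) ** 2

m_sd = mass_probed_eV(1e4)  # spectral distortion reach, k ~ 1e4/Mpc
```

This gives $m\sim 4\times 10^{-16}$ eV for $k\sim 10^{4}/{\rm Mpc}$, consistent at the order-of-magnitude level with the $\sim 10^{-15}$ eV reach quoted above.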
Table 1: Some constraints in the literature on fuzzy dark matter
Method | Constraint | Sources of systematic uncertainties | Refs.
---|---|---|---
Lyman-alpha forest | $m>3\times 10^{-21}$ eV | Ionizing background/temp. fluctuations | 1
Density profile | $m>10^{-21}$ eV | Baryonic feedback/black hole | 2
Satellite mass | $m>6\times 10^{-22}$ eV | Tidal stripping | 3
Satellite abundance | $m>2.9\times 10^{-21}$ eV | Subhalo mass function prediction | 4
References: 1=Iršič et al. (2017), Kobayashi et al. (2017), Armengaud et al.
(2017), 2=Bar et al. (2018), 3=Safarzadeh & Spergel (2019), 4=Nadler et al.
(2020). See text on the methodology and systematic uncertainties of each
constraint.
### 5.3 Galactic dynamics and structure—density profile, stellar scattering,
dynamical friction, subhalo mass function and interference substructures
There is a wide variety of methods to constrain wave dark matter from galactic
structure or dynamics, especially at the ultra-light end of the spectrum.
Density profile. Wave simulations demonstrate that fuzzy dark matter halos
generically have a solitonic core, and an NFW-like outer density profile
(Schive et al., 2014b). There is a substantial literature on comparing this
prediction against observations. Investigations focusing on the inner density
profile (i.e. within the purported soliton) of Milky Way dwarf satellites
found reasonable agreement with $m\sim 10^{-22}-10^{-21}$ eV (Chen et al.,
2017, Calabrese & Spergel, 2016). A $10^{9}\,{\,\rm M_{\odot}}$ soliton at the
center of the Milky Way was reported by De Martino et al. (2020), though there
is substantial uncertainty because of the dominance of baryons (Li et al.,
2020). Investigations bearing on how the soliton connects with the outer halo
generally found tension with data for $m\lesssim 10^{-21}$ eV. Taking the
soliton-halo
relation (Equation 30) seriously, one expects an inner circular velocity that
matches the outer asymptotic value (a reflection of the rough equality of the
soliton potential and halo potential; see footnote 22), something not seen in
observations of disk galaxies (Bar et al., 2018). Moreover, dynamical
measurements of Milky Way dwarf satellites, when used to fit for solitonic
cores, predict halo masses that are too large, incompatible with their
survival under dynamical friction, giving a bound of $m\,>\,6\times 10^{-22}$
eV (Safarzadeh & Spergel, 2019). It was also pointed out by Burkert (2020)
that low mass galaxies have a universal core surface density $\sim 75\,{\,\rm
M_{\odot}/pc^{2}}$ while spanning a large range in core radius; this conflicts
with the soliton scaling of $M\propto 1/R$ (Equation 4.2) implying a surface
density $\propto 1/R^{3}$. On the other hand, Pozo et al. (2020) pointed out
that the stellar density profile of dwarfs matches well the mass density
profile in fuzzy dark matter simulations.
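The surface density tension noted by Burkert (2020) follows from the soliton scaling alone; a minimal sketch (arbitrary normalization, illustrative units):

```python
# Soliton mass-radius relation M ~ C/R (Equation 4.2) implies a core surface
# density Sigma ~ M/R^2 ~ C/R^3: a factor-of-10 spread in core radius would
# imply a factor-of-1000 spread in Sigma, in conflict with an observed
# near-universal ~75 Msun/pc^2.
C = 1.0  # arbitrary normalization

def surface_density(R):
    M = C / R           # soliton mass-radius relation
    return M / R**2     # Sigma ~ M / R^2

ratio = surface_density(1.0) / surface_density(10.0)  # = 1000
```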
Overall, it appears the fuzzy dark matter soliton does not straightforwardly
match galaxy cores seen in dynamical data, when viewed in the larger context
of the host halo. A number of possible mitigating factors
should be kept in mind. The relaxation time for forming a soliton scales as
$m^{3}$ (Equation 4.5), which can get quite long for the higher masses. Some
of the galaxies investigated are in dense environments; tidal interactions
could perturb them in significant ways that should be taken into account (see
Section 4.5). Inference of galaxy density profiles from dynamical data is
subject to uncertainty from the velocity anisotropy profile (see e.g., Walker
et al., 2009, Amorisco & Evans, 2012), or possible non-circular motions (Oman
et al., 2019). Baryons and central supermassive black holes could affect
galaxy density profiles in non-negligible ways. There has been a lot of work
in this direction for conventional cold dark matter, with some success and
some remaining puzzles e.g. Oman et al. (2015). 383838See also Kaplinghat et
al. (2020) on the self-interacting dark matter model. These considerations
are likely relevant for testing fuzzy dark matter from density profiles (Bar
et al., 2019a, b).
Heating/scattering of stars. Transient, de Broglie size substructures due to
wave interference heat up stars in a galaxy (Section 4.5). Such heating of the
Milky Way disc was investigated by Church et al. (2019) who put a bound
$m\,>\,0.6\times 10^{-22}$ eV to avoid overheating. Stellar streams from
tidally disrupted globular clusters can be heated up in a similar way, leading
to thickening. A bound of $m\,>\,1.5\times 10^{-22}$ eV was placed by Amorisco
& Loeb (2018) based on this argument. The stellar cluster at the center of the
ultra-faint dwarf Eridanus II was used to place constraints on $m$ by Marsh &
Niemeyer (2019). Solitons in wave simulations are observed to have
oscillations (Veltmaat et al., 2018). The oscillation time scale would be
shorter than the dynamical time scale of the stellar cluster for $m\gtrsim
10^{-21}$ eV, leading to heating and disruption of the stellar cluster for $m$
up to
$10^{-20}$ eV. [Footnote 39: For $m\lesssim 10^{-21}$ eV, the long soliton
oscillation time ($\sim 1/(mv^{2})$) means the impact on the stellar cluster
is adiabatic, i.e. no heating. For $m\gtrsim 10^{-20}$ eV, Marsh & Niemeyer
(2019) derived constraints not from heating by soliton oscillation, but from
heating by de Broglie granules.] The observation of soliton oscillations was
based on
simulations of isolated halos, while Eridanus II is a Milky Way satellite
subject to tidal forces. Recently, a simulation including an external tidal
field was described in Schive et al. (2020). They showed that tidal disruption
of the outer halo surrounding the soliton leads to suppressed heating of a
stellar cluster in the soliton. [Footnote 40: It was pointed out by Schive et
al. (2020) that the soliton in general undergoes random walks as well as
oscillations. Tidal stripping of the outer halo appears to suppress
excitations associated with such processes. Analytic arguments suggest the
same (Li et al., 2021).]
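The size of the heating granules invoked in this sub-section is set by the de Broglie wavelength. A short sketch (the halo velocity here is an assumed fiducial value):

```python
import math

HBAR_C_EV_M = 1.973e-7  # hbar * c [eV m]
PC_M = 3.086e16         # parsec [m]

def de_broglie_pc(m_eV, v_km_s=200.0):
    """de Broglie wavelength lambda = 2 pi hbar / (m v), in parsecs."""
    beta = v_km_s * 1e3 / 2.998e8                      # v/c
    lam_m = 2.0 * math.pi * HBAR_C_EV_M / (m_eV * beta)
    return lam_m / PC_M
```

For $m=10^{-22}$ eV and $v\sim 200$ km/s this gives granules of order half a kpc, comparable to dwarf-galaxy scales; at $m=10^{-20}$ eV they shrink to a few parsecs.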
Dynamical friction. The wave nature of dark matter can lead to a suppression
of dynamical friction, as explained in Section 4.5. It was argued by Hui et
al. (2017) that a fuzzy dark matter mass of $m\sim 10^{-22}$ eV helps explain
the survival of globular clusters against orbital decay in the halo of Fornax
(Tremaine, 1976, Oh et al., 2000). See Lancaster et al. (2020) for a numerical
exploration of this phenomenon, and Bar-Or et al. (2019) on how the
suppression of dynamical friction is tempered by diffusion. It is worth noting
that within the conventional cold dark matter model, a possible solution to
this dynamical friction problem is to invoke core-stalling (Goerdt et al.,
2006, Read et al., 2006, Inoue, 2011, Cole et al., 2012). Dynamical data with
higher precision, and on more systems, would be very helpful.
Subhalo mass function. Fuzzy dark matter, with its suppressed power on small
scales, predicts fewer low mass halos compared with conventional cold dark
matter. The same is expected to be true for subhalos of a parent galaxy, such
as the Milky Way. Several different ways to probe the subhalo mass function
have been discussed in the literature. One way is to infer the subhalo mass
function from the observed luminosity function of Milky Way satellites, using
abundance matching. This was carried out by Nadler et al. (2020) who obtained
the bound $m\,>\,2.9\times 10^{-21}$ eV. Another method is to use stellar
streams from tidally disrupted globular clusters or satellites in our galaxy
(Johnston et al., 2002, Ibata et al., 2002). Observed perturbations of streams
were used to place constraints on the subhalo mass function, which were then
turned into constraints on warm dark matter (Banik et al., 2019b) and fuzzy
dark matter (Schutz, 2020), obtaining $m\,>\,2.1\times 10^{-21}$ eV. Yet
another method is to use flux anomaly in strongly lensed systems to probe
subhalos in the lensing galaxies (Dalal & Kochanek, 2002). This was used by
Gilman et al. (2020) to constrain warm dark matter and Schutz (2020) to limit
fuzzy dark matter, obtaining $m\,>\,2.1\times 10^{-21}$ eV. A natural question
for these investigations is to what extent the subhalo mass function for fuzzy
dark matter is accurately known. It is typically computed using Press-
Schechter type formalism, meaning the effect of fuzzy dark matter enters only
through the initial power spectrum (i.e. its suppression on small scales).
Dynamical effects due to wave interference could influence the subsequent
evolution, and thus the subhalo mass function. It would be useful to quantify
it with wave simulations (see discussion at the end of Section 5.2). Moreover,
wave interference granules—not virialized subhalos—could by themselves give
rise to these signals, such as the scattering of stellar streams (Dalal et
al., 2020). Their effects should be taken into account.
Probing interference substructures. One generic prediction of wave dark matter
is the existence of interference substructures in halos. These are de Broglie
scale, order unity density fluctuations. The fluctuation can take the density
all the way to zero (complete destructive interference i.e. vortices; see
Section 4.4). There are different ways to probe these interference
substructures. One is through the heating and scattering of stars, already
discussed above. The other is through gravitational lensing by the
substructures. For instance, a de Broglie size blob in our own galaxy passing
over the line of sight to some distant object would cause the apparent
position of that object to shift (Weiner, 2019, Mondino et al., 2020, Mishra-
Sharma et al., 2020, Hui et al., 2020). The effect is small—Mishra-Sharma et
al. (2020) proposed the correlated shifts of many distant objects could be
used to look for small signals. Another context where a gravitational lensing
signal can be searched for is cases of strong lensing. The lensing flux
anomaly refers to the phenomenon that strongly magnified images of a distant
source have flux ratios that are discordant with expectations from a smooth
lensing halo (Mao & Schneider, 1998, Chiba, 2002, Metcalf & Madau, 2001, Dalal
& Kochanek, 2002, Hezaveh et al., 2016a, Alexander et al., 2020, Dai et al.,
2020). For instance, two images close to a critical line (corresponding to a
fold caustic) are expected to have the same magnification, barring
substructures on scales smaller than the image separation. It has been shown
that interference substructures can cause a $\sim 10\%$ difference in cases of
high magnification $\sim 100$ (Chan et al., 2020, Hui et al., 2020). Since
subhalos also give rise to such flux anomaly, to distinguish between fuzzy
dark matter and conventional cold dark matter, a measurement of the anomaly as
a function of image separation would be helpful. The anomaly power spectrum of
fuzzy dark matter would have a feature around the de Broglie scale.
### 5.4 Probes using compact objects—superradiance, solitons, potential
oscillation and stellar cooling
Superradiance. Superradiance constraints on the existence of light scalars, or
light bosons more generally— not necessarily dark matter—were summarized in
Stott & Marsh (2018). The idea is to use the measured spin of black holes to
put limits on scalars which could drain away their angular momentum, if their
Compton wavelength roughly matches the horizon size (see Section 4.6). The
boson mass probed this way covers a wide range, from $\sim
10^{-13}-10^{-12}{\,\rm eV}$ for black holes at tens of solar mass, to $\sim
10^{-18}-10^{-21}{\,\rm eV}$ for supermassive black holes. It was pointed out
by Davoudiasl & Denton (2019) that the spin constraint on the M87 supermassive
black hole, reported by the Event Horizon Telescope (EHT) collaboration
(Akiyama et al., 2019), disfavors ultra-light bosons around $10^{-21}$ eV. It
is worth noting that the EHT constraint comes not from measurement of the
famous shadow, but from modeling of the jet coming out of the galactic
nucleus.
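The quoted mass windows follow from matching the boson's reduced Compton wavelength to the black hole's gravitational radius. A sketch, parametrized by the gravitational coupling $\alpha\equiv GMm/(\hbar c)$ (the fiducial $\alpha=0.3$ is an illustrative choice; efficient superradiance requires $\alpha$ of order a few tenths):

```python
HBAR_C_EV_M = 1.973e-7  # hbar * c [eV m]
GM_SUN_C2_M = 1.48e3    # G M_sun / c^2 [m]

def superradiant_mass_eV(M_bh_msun, alpha=0.3):
    """Boson mass [eV] with gravitational coupling alpha = G M m / (hbar c)."""
    r_g = GM_SUN_C2_M * M_bh_msun   # gravitational radius [m]
    return alpha * HBAR_C_EV_M / r_g
```

This gives $\sim 10^{-12}$ eV for a $30\,{\rm M_{\odot}}$ black hole and $\sim 6\times 10^{-21}$ eV for M87's $\sim 6.5\times 10^{9}\,{\rm M_{\odot}}$, matching the ranges quoted above.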
The existing superradiance constraints were obtained by assuming the
superradiance cloud grows from a small initial seed of superradiance-unstable
modes (produced by quantum fluctuations for instance). As pointed out by
Ficarra et al. (2019), the existence of additional superradiance-stable modes
could significantly modify the long term evolution of the cloud, and therefore
the mass and spin of the black hole (see footnote 30). Such stable modes are
naturally present if the light boson in question were the dark matter. Dark
matter mass and angular momentum accretion onto the black hole inevitably
occurs (Clough et al., 2019, Hui et al., 2019, Bamber et al., 2020). It would
be useful to revisit the superradiance constraints for cases where the light
boson is the dark matter. It is also worth noting that enhanced interactions
of the axion could lead to relaxation of the superradiance constraints (Mathur
et al., 2020).
Boson stars. Light boson dark matter can be probed astrophysically in a
different way, by the boson stars or solitons that could form in the early
universe. Using the Chandrasekhar-like maximum mass as a guide (Equations 23
or 4.2), the interesting boson star mass could range from $10^{-10}{\,\rm
M_{\odot}}$ to $10^{10}{\,\rm M_{\odot}}$, for dark matter mass from $10^{-6}$
eV to $10^{-22}$ eV. Gravitational lensing could be used to detect or
constrain a population of such objects (Kolb & Tkachev, 1996, Fairbairn et
al., 2018). They could also contribute to merger events seen by gravitational
wave experiments if they are sufficiently compact (Macedo et al., 2013,
Palenzuela et al., 2017, Clough et al., 2018, Helfer et al., 2019). The
computation of the early universe production of boson stars, specifically
axion stars, was pioneered by Kolb & Tkachev (1993). Termed axion
miniclusters, they form due to large fluctuations from the breaking of the
Peccei-Quinn symmetry after inflation. The mass function of boson stars
subsequently evolves, due to mergers and condensation processes (Fairbairn et
al., 2018, Eggemeier & Niemeyer, 2019). Further computations to firm up the
prediction of the eventual mass distribution of boson stars would be helpful.
Gravitational potential oscillations. An oscillating scalar produces an
oscillating gravitational potential at frequency $2m$, as pointed out by
Khmelnitsky & Rubakov (2014). This effect can be searched for in pulsar timing
array data, which has a frequency coverage that probes $m\sim
10^{-24}-10^{-22}$ eV. The oscillating potential scales as $\rho/m^{2}$ (see
Section 4.6) so the constraints are stronger at smaller $m$’s. A bound of
$\rho\,<\,6{\,\rm GeV/cm^{3}}$ for $m\leq 10^{-23}$ eV was obtained by Porayko
et al. (2018) from the Parkes Pulsar Timing Array data. A bound of
$\rho\,<\,2{\,\rm GeV/cm^{3}}$ for $m\sim 10^{-23}$ eV was obtained by Kato &
Soda (2020) from the NANOGrav data. These are proofs of concept, since the
local dark matter density is already known to be $\rho\sim 0.4{\,\rm
GeV/cm^{3}}$ (Bovy & Tremaine, 2012, Sivertsson et al., 2018, McKee et al.,
2015). As a probe of wave dark matter, this method is interesting because it
directly probes the scalar field oscillations at frequency $m$, and has very
different systematics from other astrophysical probes. The solar system
ephemeris turns out to be an important source of systematic error. Forecasts
of future improvements, with the planned Square Kilometre Array, can be found
in Porayko et al. (2018). To place meaningful limits on $m\sim 10^{-22}$ eV,
it is important to have high cadence in addition to long integration time.
Stellar axion emission. To close this sub-section on compact objects, we
mention one classic probe: axion bounds from the cooling of stars. The axion
couples to photons, gluons and fermions in the standard model (Equation 9).
The interaction strength is weak, but deep in the interior of stars, there can
be enough axion production to affect stellar structure and evolution. (The
weak interaction strength also makes it relatively easy for the axion to
escape from the star.) This has been applied to the Sun (Schlattl et al.,
1999), red giants (Raffelt & Dearborn, 1987), supernova 1987A (Raffelt &
Seckel, 1988, Ellis & Olive, 1987, Turner, 1988, Mayle et al., 1988) and
neutron star mergers (Dietrich & Clough, 2019). 41 For 1987A, the axion
constraint comes from its effect on the neutrino burst duration. For ways to
evade such supernova or stellar cooling bounds, see Bar et al. (2020), DeRocco
et al. (2020). There are also experiments built specifically to detect solar
axions such as CAST (Anastassopoulos et al., 2017). Phrased in terms of the
axion decay constant $f$ (larger $f$ means weaker coupling; see Equation 9),
the strongest constraint from these considerations is about $f\gtrsim 10^{9}$
GeV. Note that these constraints on the axion assume only its existence, not
its viability as a dark matter candidate. A comprehensive recent review can be
found in Raffelt (2008). There are also proposals to detect axion dark matter
from the production of photons in strong magnetic fields around neutron stars
(Bai & Hamada, 2018, Hook et al., 2018, Foster et al., 2020a).
### 5.5 Photon propagation in axion background
The axion coupling to $\vec{E}\cdot\vec{B}$ (Equation 9) affects the
propagation of photons in the universe if dark matter is indeed made up of
axions. To be concrete, suppose the Lagrangian for the photon consists of
${\cal L}=-{1\over 4}F_{\mu\nu}F^{\mu\nu}+{1\over 4}g_{\gamma}\phi
F_{\mu\nu}\tilde{F}^{\mu\nu}\,$ (40)
where $F_{\mu\nu}$ is the photon field strength and
$\tilde{F}^{\mu\nu}=\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}/2$. The
coupling constant $g_{\gamma}$ plays the role of $\sim 1/f$ in Equation 9. The
modified Maxwell equations, setting $\vec{E}$ and $\vec{B}$ proportional to
$e^{-i\omega t+i{\vec{k}}\cdot{\vec{x}}}$, imply a dispersion relation of the
form (Harari & Sikivie, 1992):
$\omega=|\vec{k}|\pm{1\over
2}g_{\gamma}(\partial_{t}\phi+\hat{k}\cdot\vec{\nabla}\phi)\,,$ (41)
for the two circular polarizations ($\pm$). This is obtained assuming the WKB
limit (i.e. $\partial^{2}\phi\,\ll\,\omega\partial\phi$), and small
$g_{\gamma}$. The fact that the two circular polarizations have different
dispersion relations means a linearly polarized photon rotates in polarization
as it propagates. One can phrase this in terms of the phase difference between
the two circular polarizations:
$\Delta S=g_{\gamma}\int dt{D\phi\over Dt}\,,$ (42)
where $D/Dt$ is a total time derivative:
$\partial_{t}+\hat{k}\cdot\vec{\nabla}$ i.e. the phase for the respective
polarization is $S=-|\vec{k}|t+\vec{k}\cdot\vec{x}\pm\Delta S/2$. There have
been several attempts or proposals to search for this birefringence effect in
astronomical data, for instance the polarization of radio galaxies (Carroll et
al., 1990, Harari & Sikivie, 1992, Nodland & Ralston, 1997, Carroll & Field,
1997) and the microwave background (Harari & Sikivie, 1992, Lue et al., 1999,
Liu & Ng, 2017, Fedderke et al., 2019). 42 See also Agrawal et al. (2020)
for a proposal to look for axion strings in the microwave background
polarization data. Recently, Ivanov et al. (2019) proposed and searched for a
polarization signal that oscillates in time in observations of jets in active
galaxies (see also Caputo et al., 2019, Fedderke et al., 2019). The frequency
$m$ oscillations in $\phi$ cause the linear polarization angle to oscillate,
which can be searched for in data. A limit of $g_{\gamma}\lesssim
10^{-12}{\,\rm GeV}^{-1}$ was obtained for $m\sim 5\times
10^{-23}-1.2\times 10^{-21}$ eV. Note that the birefringence signal does not
depend on the distance over which the photon travels; it depends only on the
values of $\phi$ at the source and at the observer. A source in a high dark
matter density environment (therefore large $\phi$), such as at the center of
a galaxy, is therefore a promising target.
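For orientation, the size of the effect can be sketched numerically (assuming the rotation amplitude is $\Delta S/2$ with the field amplitude $\phi=\sqrt{2\rho}/m$, as in Equation 51, and that the observer-side value of $\phi$ dominates; the fiducial coupling and mass are illustrative):

```python
# Sketch: amplitude of the polarization rotation angle, Delta S / 2,
# for an axion-photon coupling g_gamma and local dark matter density rho.
# Natural units (hbar = c = 1); conversion: 1 cm^-1 = 1.973e-14 GeV.
import math

HBARC_GEV_CM = 1.973e-14  # GeV * cm

def rotation_angle(g_gamma_inv_gev, m_ev, rho_gev_cm3=0.4):
    """Rotation amplitude ~ g_gamma * sqrt(2 rho) / (2 m), in radians."""
    rho = rho_gev_cm3 * HBARC_GEV_CM**3   # density in GeV^4
    m = m_ev * 1e-9                        # mass in GeV
    phi = math.sqrt(2 * rho) / m           # field amplitude in GeV
    return g_gamma_inv_gev * phi / 2

angle = rotation_angle(1e-12, 1e-22)
print(f"rotation amplitude ~ {angle:.2e} rad")  # of order a degree
```

The angle scales as $g_{\gamma}\sqrt{\rho}/m$, which is why sources in high density environments, and the smallest masses, give the largest signal.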
The fact that the rotation of the linear polarization angle is independent of
propagation distance means one could also search for this effect in the
laboratory where high precision measurements are possible e.g. Liu et al.
(2019), DeRocco & Hook (2018), Martynov & Miao (2020), Blas et al. (2020).
This brings us naturally to the subject of the next section. We close by
mentioning that the same coupling of the axion to photons (Equation 40) gives
rise to a different effect that can be searched for: the conversion of photons
into axions in an environment with magnetic fields (Raffelt & Stodolsky, 1988,
Mirizzi et al., 2008). This effect does not require the axions to be dark
matter.
### 5.6 Experimental detection of axions
The experimental detection of axions is a large subject we cannot hope to do
justice here. For recent comprehensive reviews, see e.g. Graham et al. (2015),
Irastorza & Redondo (2018), Sikivie (2020). We instead focus on aspects of the
detection that have to do with the wave nature of axion dark matter. This sub-
section is less about summarizing current constraints, and more about
discussing ways to probe or take advantage of the wave dynamics and
interference substructures. 43 In this sub-section, we pick a few
experiments to illustrate how the wave nature of axions is relevant to
detection. There is a tremendous diversity in the variety of axion
experiments. Some aim to detect dark matter; some probe the existence of an
axion regardless of whether it is dark matter. See Graham et al. (2015),
Irastorza & Redondo (2018), Sikivie (2020). There are a number of papers on
this subject. Novel observables for the detection of the axion as a field (or
wave) rather than as a particle were discussed by Graham & Rajendran (2013).
Stochastic properties of the axion field were computed by Derevianko (2018)
and Foster, Rodd & Safdi (2018). Implications for the design and
interpretation of experiments were discussed by them, and by Roberts et al.
(2017), Savalle et al. (2019), Centers et al. (2019), Hui et al. (2020),
Foster et al. (2020b). The discussion here follows that in Hui et al. (2020).
A good place to start is to remind ourselves of the relation between the axion
$\phi$ and the wavefunction $\psi$:
$\phi(t,\vec{x})={1\over\sqrt{2m}}\left(\psi(t,\vec{x})e^{-imt}+\psi^{*}(t,\vec{x})e^{imt}\right)\,.$
(43)
Axion detection experiments measure $\phi$ or its derivatives via its coupling
to photons (${\cal L}\sim g_{\gamma}\phi F\tilde{F}$) and fermions such as
quarks or leptons (${\cal L}\sim
g_{\Psi}\partial_{\mu}\phi\bar{\Psi}\gamma^{\mu}\gamma_{5}\Psi$). 44 The
coupling constants $g_{\gamma}$ and $g_{\Psi}$ play the role of $1/f$ in
Equation (9). There is also the coupling to gluons, related to an oscillating
electric dipole moment for nucleons (Graham & Rajendran, 2013). Writing
$\phi$ in terms of $\psi$ reminds us there are two time scales of interest:
one is the fast Compton time scale $\sim m^{-1}$ of $\phi$ oscillations; the
other is the slow de Broglie time scale $\sim(mv^{2})^{-1}$ of $\psi$
fluctuations due to wave interference ($v$ is the velocity dispersion of dark
matter; see discussion around Equation 4.4):
$\displaystyle t_{\rm osc.}\equiv{2\pi\over m}=1.3{\,\rm
yr.}\left({10^{-22}{\,\rm eV}\over m}\right)=4.1\times 10^{-9}{\,\rm
s}\left({10^{-6}{\,\rm eV}\over m}\right)\,,$ $\displaystyle t_{\rm
dB}\equiv{2\pi\over mv^{2}}=1.9\times 10^{6}{\,\rm yr.}\left({10^{-22}{\,\rm
eV}\over m}\right)\left({250{\,\rm km/s}\over v}\right)^{2}\,$
$\displaystyle\quad\quad=5.9\times 10^{-3}{\,\rm s}\left({10^{-6}{\,\rm
eV}\over m}\right)\left({250{\,\rm km/s}\over v}\right)^{2}\,.$ (44)
Figure 4: Left panel: a schematic illustration of the time dependence of the
scalar $\phi$ at some fixed location. It has short time scale $t_{\rm
osc.}=2\pi/m$ oscillations (around $\phi=0$), and long time scale $t_{\rm
dB}=2\pi/(mv^{2})$ modulations. In practice, $t_{\rm dB}\gg t_{\rm osc.}$.
Right panel: the one-point probability distribution of density in two wave
dark matter halos. Here, $P(\rho)d\rho$ gives the probability that the density
$\rho$ takes the values within the interval $d\rho$ and $\bar{\rho}$ is the
(local) mean density. The solid lines are measured from numerical wave
simulations of two halos that form from mergers of smaller seed halos and
gravitational collapse. The blue line (II) is for a case where the halo is
well-mixed, and the black line (I) is for a case where the halo retains some
memory of the initial conditions. The blue dotted line shows the analytic
prediction from the random phase halo model,
$\bar{\rho}P(\rho)=e^{-\rho/\bar{\rho}}$, which describes case II well. The
black dotted line is an approximate fit to case I:
$\bar{\rho}P(\rho)=0.9\,e^{-1.06(\rho/\bar{\rho})^{2}}+0.1\,e^{-0.42(\rho/\bar{\rho})}$.
Figure adapted from Hui et al. (2020).
The time variation of $\phi$ at a fixed location is depicted in the left panel
of Figure 4. In addition, $\phi$ fluctuates spatially because $\psi$ does, on
the de Broglie length scale $\lambda_{\rm dB}$ (Equation 1 and Figure 1). In
other words, because the halo is composed of a superposition of waves of
largely random phases, the wavefunction $\psi$ is essentially a stochastic
field, which imprints $\sim t_{\rm dB}$ temporal modulations and
$\sim\lambda_{\rm dB}$ spatial fluctuations on the axion $\phi$. Existing
experiments are sensitive to a wide range of axion masses, from $m\sim
10^{-22}$ to $10^{-3}$ eV, though with significant gaps (Graham et al., 2015,
Irastorza & Redondo, 2018, Sikivie, 2020). In many cases, time scales from
$t_{\rm osc.}$ to $t_{\rm dB}$ and beyond are accessible to experiments.
A simple starting point for thinking about the stochastic fluctuations is the
random phase halo model, spelled out in Equation 25: $\psi$ consists of a set
of plane waves each with an amplitude $A_{\vec{k}}$ that depends on momentum
$\vec{k}$, and a random phase. A simple distribution of momentum would be
$A_{\vec{k}}\propto e^{-k^{2}/k_{0}^{2}}$, essentially an isothermal one,
though other distributions are possible. In the random phase model, $\psi$ is
a Gaussian random field obeying: 45 Note how the random phase for each
plane wave is sufficient to guarantee the complex $\psi$ is Gaussian random,
even if $A_{\vec{k}}$ is non-stochastic.
$\langle\psi(t_{1},\vec{x}_{1})\psi^{*}(t_{2},\vec{x}_{2})\rangle=\sum_{\vec{k}}A_{\vec{k}}^{2}\,e^{i\vec{k}\cdot(\vec{x}_{1}-\vec{x}_{2})-i\omega_{k}(t_{1}-t_{2})}\quad,\quad\langle\psi(t_{1},\vec{x}_{1})\psi(t_{2},\vec{x}_{2})\rangle=0\,.$
(45)
The higher point correlation functions obey Wick’s theorem, expressible as
products of the two-point function. From this, all statistical properties of
the axion $\phi$ follow, such as:
$\langle\phi(t_{1},\vec{x}_{1})\phi(t_{2},\vec{x}_{2})\rangle={1\over
2m}\left(\langle\psi(t_{1},\vec{x}_{1})\psi^{*}(t_{2},\vec{x}_{2})\rangle
e^{-im(t_{1}-t_{2})}+{\,\rm c.c.}\right)\,,$ (46)
where ${\,\rm c.c.}$ represents complex conjugate. The Gaussian random nature
of $\psi$ tells us the one-point probability distribution is Gaussian,
specifically a two-dimensional one since $\psi$ has real and imaginary parts
i.e. the Gaussian probability density ${\,\rm
exp}[-|\psi|^{2}/(2\Gamma^{2})]$, where
$\Gamma^{2}\equiv\sum_{\vec{k}}A_{\vec{k}}^{2}/2$, should come with the
measure $d{\rm Re}\psi\,d{\rm Im}\psi=2\pi|\psi|d|\psi|$. In other words,
$d|\psi|{|\psi|\over\Gamma^{2}}{\,\rm exp}\left[-{|\psi|^{2}\over
2\Gamma^{2}}\right]\,,$ (47)
gives the probability that $|\psi|$ takes the values within the interval
$d|\psi|$ (Centers et al., 2019). It can be checked that this is properly
normalized. Recalling that the density is $\rho=m|\psi|^{2}$, so that the average density is
$\bar{\rho}=m\langle|\psi|^{2}\rangle=m^{2}\langle\phi^{2}\rangle=2m\Gamma^{2}$,
the one-point distribution of density is thus: 46 This distribution can be
derived directly from $\phi$ without going through $\psi$, but it is important
to remember $\rho=(\dot{\phi}{}^{2}+m^{2}\phi^{2})/2$ is determined not by
$\phi$ alone, but also by its time derivative. Spatial gradient energy also
contributes to $\rho$ but is sub-dominant in the non-relativistic limit.
${d\rho\over\bar{\rho}}e^{-\rho/\bar{\rho}}\,.$ (48)
There is a non-negligible probability for the density to fluctuate to low
values, indeed all the way to zero (i.e. at sites of complete destructive
interference or vortices). The right panel of Figure 4 shows a comparison of
this analytic prediction with results from numerical simulations of two halos
that form from mergers and gravitational collapse, taken from Hui et al.
(2020). The analytic prediction works reasonably well, especially in the case
(II) where the halo is well mixed. It works less well in the case (I) where
some memory of the initial conditions persists—the halo has coherent
substructures in the form of subhalos. See also Veltmaat et al. (2018) for
correlation function measurements from numerical simulations.
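Since the exponential distribution of Equation 48 follows from the random phase model alone, it can be verified with a small Monte Carlo sketch (the equal mode amplitudes and mode count here are illustrative choices, not a halo model):

```python
# Sketch: Monte Carlo check that a superposition of random-phase plane waves
# gives rho = m |psi|^2 with an exponential one-point distribution,
# P(rho) d rho = exp(-rho / rho_bar) d rho / rho_bar (Equation 48).
import numpy as np

rng = np.random.default_rng(0)
n_modes = 200            # number of superposed waves (illustrative)
n_samples = 100_000      # number of sample points

# At each sample point, psi = sum_k A_k exp(i theta_k) with random phases.
# Equal amplitudes A_k suffice; only the phase randomness matters here.
phases = rng.uniform(0, 2 * np.pi, size=(n_samples, n_modes))
psi = np.exp(1j * phases).sum(axis=1)

rho = np.abs(psi)**2     # density, up to the constant factor m
rho_bar = rho.mean()

# For an exponential distribution, P(rho < rho_bar) = 1 - 1/e ~ 0.632.
frac_below_mean = np.mean(rho < rho_bar)
print(f"fraction with rho < rho_bar: {frac_below_mean:.3f}")  # ~0.632
```

By the central limit theorem the summed phasors make $\psi$ complex Gaussian, so $|\psi|^{2}$ is exponential; the sizeable fraction of points with $\rho$ well below $\bar{\rho}$ is the same feature that matters for detection experiments sampling the local density.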
The stochastic nature of the axion field $\phi$ and its derivatives has rich
implications for axion detection. For instance, given the average local
density $\bar{\rho}$ ($\sim 0.4{\,\rm GeV/cm^{3}}$), an axion experiment would
sample from the whole distribution of $\rho$’s depicted in Figure 4, if time
scales longer than the de Broglie time $t_{\rm dB}$ were accessible. In
particular, there would be a non-negligible probability of sampling
$\rho<\bar{\rho}$. As pointed out by Centers et al. (2019), experimental
constraints on the axion couplings, such as $g_{\gamma}$ or $g_{\Psi}$, should
take this into account. The full implications remain to be explored—depending
on the experiment of interest, the relevant correlation function can be
obtained by taking suitable derivatives of Equation 46.
Moreover, the stochastic nature of $\phi$ suggests it would be useful to
measure correlation functions. For instance, the signal for ADMX (Du et al.,
2018a) is often expressed in terms of the power output in a microwave cavity,
which is proportional to $\phi^{2}$, or $\phi^{2}$ averaged over the rapid,
frequency $m$ oscillations. 47 The idea was proposed by Sikivie (1983). It
involves looking for photons produced by axions in the presence of a magnetic
field. One can consider the following correlation function in time
(coincident location):
$\langle\phi(t_{1})^{2}\phi(t_{2})^{2}\rangle-\langle\phi^{2}\rangle^{2}={1\over
m^{2}}|\langle\psi(t_{1})\psi^{*}(t_{2})\rangle|^{2}={\bar{\rho}^{2}\over
m^{4}}\left(1+{k_{0}^{4}(t_{1}-t_{2})^{2}\over 16m^{2}}\right)^{-3/2}\,,$ (49)
where we have implicitly averaged $\phi^{2}(t)$ over the rapid oscillations,
and assumed the random phase model. Here, $k_{0}$ is the rms (3D) momentum
times $2/\sqrt{3}$, following from the distribution $A_{\vec{k}}^{2}\propto
e^{-2k^{2}/k_{0}^{2}}$. This correlation function can be measured in a
microwave cavity experiment. The characteristic power-law decay at large time
separation might be helpful in pulling signal out of noisy data. Some
experiments measure $\dot{\phi}$ by searching for a time varying magnetic flux
produced by the oscillating axion in the presence of an external magnetic
field, such as ABRACADABRA (Kahn et al., 2016, Ouellet et al., 2019). Others
are sensitive to $\vec{\nabla}\phi$, such as CASPEr (Graham & Rajendran, 2013,
Budker et al., 2014) or spin pendulum experiments (Terrano et al., 2019). The
idea is to measure the spin precession around the direction picked out by
$\vec{\nabla}\phi$, using the axion-fermion coupling (Equation 9). Correlation
functions thereof can be obtained by differentiating Equation 46.
More generally, with a network of detectors, one can measure the correlation
function in space-time:
$\displaystyle\langle\phi(t_{1},\vec{x}_{1})^{2}\phi(t_{2},\vec{x}_{2})^{2}\rangle-\langle\phi^{2}\rangle^{2}={\bar{\rho}{}^{2}\over
m^{4}}\left(1+\frac{k_{0}^{4}(t_{1}-t_{2})^{2}}{16m^{2}}\right)^{-3/2}\exp\left(-\frac{4k_{0}^{2}m^{2}\lvert\vec{x}_{1}-\vec{x}_{2}\rvert^{2}}{16m^{2}+k_{0}^{4}(t_{1}-t_{2})^{2}}\right)\,,$
(50)
where again we have implicitly averaged over the rapid oscillations. The
difference in dependence on time-separation versus space-separation originates
from the fact $\omega_{k}$, the frequency for a Fourier mode, goes as $k^{2}$
rather than $k$. The idea of using a network of detectors, much like an
interferometry array in radio astronomy, has been discussed in Pustelny et al.
(2013) for GNOME, and in Derevianko (2018), Foster et al. (2018), Roberts et
al. (2017), Savalle et al. (2019), Centers et al. (2019), Hui et al. (2020),
Foster et al. (2020b). Experiments that measure the rotation of photon
polarization in an axion background naturally measure $\phi$ at points
separated in time and/or space (Liu et al., 2019, DeRocco & Hook, 2018,
Martynov & Miao, 2020).
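A minimal sketch of Equation 50, useful for reading off the coherence scales a detector network would target (natural units; the values of $m$ and $k_{0}$ below are placeholders, with $k_{0}\sim mv$ and $v\ll 1$):

```python
# Sketch: the space-time correlation function of phi^2 (Equation 50),
# normalized to its zero-separation value rho_bar^2 / m^4.
# Natural units; dt, dx, m and k0 must be supplied consistently.
import math

def corr_normalized(dt, dx, m, k0):
    """(<phi^2 phi^2> - <phi^2>^2) * m^4 / rho_bar^2, from Equation 50."""
    time_factor = (1 + k0**4 * dt**2 / (16 * m**2))**(-1.5)
    space_factor = math.exp(-4 * k0**2 * m**2 * dx**2
                            / (16 * m**2 + k0**4 * dt**2))
    return time_factor * space_factor

m, k0 = 1.0, 1e-3   # placeholders with k0 = m v, v << 1
assert corr_normalized(0, 0, m, k0) == 1.0

# Mild suppression at one de Broglie time/length; strong decay beyond:
t_db, x_db = m / k0**2, 1 / k0
print(corr_normalized(t_db, 0, m, k0))       # order-one suppression
print(corr_normalized(10 * t_db, 0, m, k0))  # strong power-law decay
print(corr_normalized(0, 10 * x_db, m, k0))  # strong Gaussian decay
```

The slower power-law decay in time versus the Gaussian decay in space traces back to $\omega_{k}\propto k^{2}$, as noted in the text.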
It is worth pointing out that different experiments respond differently to the
passing of a vortex. As discussed in Section 4.4, at the location of a vortex,
$\psi$ vanishes but its gradient generically does not. This implies
experiments that probe $\phi$ or $\dot{\phi}$ have a vanishing signal while
those that probe $\vec{\nabla}\phi$ have a non-vanishing one. 48 In the
non-relativistic limit, $\dot{\phi}$ and $\phi$ are practically equivalent
i.e. $\phi\sim\psi e^{-imt}+{\,\rm c.c.}$ while $\dot{\phi}\sim-im\psi
e^{-imt}+{\,\rm c.c.}$. Perhaps more interesting is how the generic existence
of vortices (one vortex ring per de Broglie volume) points to interesting
structures in the phase of the axion oscillations. Plugging
$\psi=\sqrt{\rho/m}\,e^{i\theta}$ into Equation 43, the axion field $\phi$ can
be expressed as:
$\phi(t,\vec{x})=m^{-1}\sqrt{2\rho(t,\vec{x})}{\,\rm
cos\,}\left[mt-\theta(t,\vec{x})\right]\,.$ (51)
Dark matter detection, for good reasons, generally focuses on measuring the
amplitude of the axion oscillations, which tells us about the density of dark
matter $\rho$. The arguments in Section 4.4 tell us wave interference
generically produces non-trivial structures in the oscillation phase
$\theta(t,\vec{x})$ i.e. winding around vortices. It would be useful to
explore how such winding could be measured, and how it might be exploited to
enhance detection sensitivity. Doing so likely requires a network of
detectors, possibly combining different detection techniques that get at
different derivatives of $\phi$ (Hui et al., 2020).
## 6 Discussion—theory exploration, numerical simulations, astrophysical
probes and experimental detection
We have reviewed the particle physics motivations for considering wave dark
matter, and the observational and experimental implications, with the axion as
the prime example. We close with a list of open questions and directions for
further research.
Theory exploration. The dark matter sector could well be as rich as the
visible sector, with different kinds of particles. This has a certain
plausibility in string theory, which generically predicts a variety of axions.
Most of them would be too massive to be a suitable dark matter candidate. But
if one of them is light enough to be dark matter, perhaps there may be more
(Arvanitaki et al., 2010, Bachlechner et al., 2019, Luu et al., 2020)? And if
these light axions are coupled, how is the relic abundance computation
modified? What is the impact on galactic substructures if there is a mixture
of wave and particle dark matter, or a mixture of wave dark matter of
different masses (Schwabe et al., 2020)? If the axion as a field exists during
inflation, it has inevitable isocurvature fluctuations—if the energy scale of
inflation is high enough to saturate the existing isocurvature bound, what are
the implications for structure formation (Section 5.1)?
Numerical simulations. There is a great need for more and better simulations
of wave dark matter structure formation. Some of the existing constraints at
the ultra-light end of the spectrum ($10^{-22}-10^{-20}$ eV, fuzzy dark
matter) rely on the halo or subhalo mass function that has not been checked
with wave simulations (Section 5.3). Current estimates of the halo/subhalo
mass function account for the wave nature of dark matter primarily through its
impact on the initial condition i.e. the primordial power spectrum (Section
5.2). It is important to quantify how the wave dynamics affects the subsequent
evolution. Further simulations would also be useful for interpreting
constraints from galaxy density profiles (by including the effects of baryons
and tidal forces), and constraints from the Lyman-alpha forest (by exploring
the variety of fluctuations from the ionizing background, reionization history
and galactic winds). There is also room for improvement in numerical
algorithms: it is challenging to carry out wave simulations in large boxes with
the requisite de-Broglie-scale resolution (Section 4.3). The hybrid scheme of
Veltmaat et al. (2018) is one promising approach. In addition, there is a need
for more simulations of the early universe. If the Peccei-Quinn symmetry is
broken after inflation, large fluctuations are expected to lead to axion star
formation (Kolb & Tkachev, 1993). An accurate mass function of such objects,
accounting for the effect of subsequent mergers (Eggemeier & Niemeyer, 2019),
would be very useful. The axion in question can span a large range in mass and
need not be ultra-light (Sections 4.2 and 5.4).
Astrophysical probes. A striking prediction of wave dark matter is the
interference substructures inside a halo. These are order unity density
fluctuations on the scale of the de Broglie wavelength. The density can even
vanish, where complete destructive interference occurs. These are locations of
vortices—a unique wave phenomenon (Section 4.4). Such interference patterns
are distinct from subhalos as a form of halo substructure. Some observational
signatures, for ultra-light masses, have been worked out, such as the
scattering of stars and gravitational lensing (Section 5.3). Recent
measurements of the density power spectrum along globular cluster tidal
streams GD-1 and Palomar 5, from Gaia and Pan-STARRS data, suggest consistency
with scattering by subhalos in conventional cold dark matter (Bovy et al.,
2017, Banik et al., 2019a, b). 49 For more background on the streams and
the data, see Grillmair & Dionatos (2006), Ibata et al. (2016), Prusti et al.
(2016), Chambers et al. (2019). Are the same measurements consistent with
fuzzy dark matter? To answer this question, one must account for scattering by
both the subhalo contents (Schutz, 2020) and the interference substructures
(Dalal et al., 2020). In addition, it is important to clarify to what extent
the tidal stream density fluctuations can be attributed to the tidal
disruption process itself (Kuepper et al., 2010, Ibata et al., 2020). More
measurements spanning different orbital radii would be helpful in
differentiating between models: scattering by interference substructures is
expected to be more important at small radii relative to scattering by
subhalos (Dalal et al., 2020). It is also worth noting there are other
statistics that might have different sensitivity to the mass and compactness
of subhalos (e.g. Bonaca et al., 2018). Improvement in stellar stream data is
expected from further Gaia data release and the upcoming Vera Rubin
Observatory (Ivezić et al., 2019).
Anomalous flux ratios between gravitationally lensed images have been used to
constrain substructures in galaxy lenses (Hezaveh et al., 2016b, Hsueh et al.,
2020, Gilman et al., 2019, Dai et al., 2020). See Section 5.3. Typically these
constraints are obtained by fitting the data with a parametrized model of
subhalos, which is then checked against the prediction of conventional cold
dark matter. For fuzzy dark matter, two issues should be addressed. One is a
proper wave computation of the subhalo mass function, discussed earlier. The
other is the inclusion of wave interference substructures as an additional
source of flux anomaly (Chan et al., 2020, Hui et al., 2020). This is a
promising technique given the expected improvement in lensing data, e.g. from
ALMA (Vlahakis et al., 2015, Hezaveh et al., 2016b).
Observations of the high redshift ($z>5$) universe have the potential to probe
the linear power spectrum on small scales, and therefore constrain fuzzy dark
matter, as discussed in Section 5.2. Promising future data include those from
the James Webb Space Telescope (Gardner et al., 2006, Hirano et al., 2018) and
21cm experiments (DeBoer et al., 2017, Weltman et al., 2020, Bowman et al.,
2018). To take full advantage of these data, the fuzzy dark matter predictions
for early structure formation should be refined using wave simulations in
larger boxes (Mocz et al., 2019, May & Springel, 2021).
Another area where more data are needed is the study of dynamical friction.
The Fornax dwarf galaxy is the main example where there is possibly a
dynamical friction problem—that its globular clusters survive in its halo
despite efficient dynamical friction (Tremaine, 1976, Oh et al., 2000). One
resolution is to invoke fuzzy dark matter to weaken dynamical friction, though
it appears core stalling might also do the job (see Sections 4.5 and 5.3).
Data on more such systems would be instructive.
Detection experiments. The interference substructures are a robust prediction
of wave dark matter, regardless of the dark matter mass. Away from the ultra-
light end of the spectrum, the corresponding de Broglie wavelength is small,
making the interference substructures challenging to observe astrophysically.
But the substructures remain relevant for axion detection experiments which
are sensitive to much smaller scales. The axion field is effectively
stochastic, in a halo made out of a superposition of waves with random phases.
At a minimum, this stochastic nature should be accounted for in deriving
constraints. Moreover, the stochastic nature motivates the measurement of
correlation functions of the axion field. The correlation can involve both
time and space separations, further motivating the idea of a network of
detectors, like in radio interferometry. An under-explored area is the
information contained in the phase of the axion oscillations (Equation 51).
That vortices generically exist tells us there are non-trivial structures in
the phase, such as winding. An interesting question is whether searching for
such structures might help extract signal out of noisy data (Section 5.6).
## DISCLOSURE STATEMENT
The author is not aware of any affiliations, memberships, funding, or
financial holdings that might be perceived as affecting the objectivity of
this review.
## ACKNOWLEDGMENTS
Thanks are due to my collaborators for teaching me much of the subject: Jamie
Bamber, Jo Bovy, Greg Bryan, Katy Clough, Neal Dalal, Pedro Ferreira, Austin
Joyce, Dan Kabat, Michael Landry, Albert Law, Macarena Lagos, Xinyu Li, Adam
Lidz, Jerry Ostriker, Klaas Parmentier, Luca Santoni, Guanhao Sun, Gianmaria
Tomaselli, Scott Tremaine, Enrico Trincherini, Edward Witten, Sam Wong and
Tomer Yavetz. Thanks to Eric Adelberger, Emanuele Berti, Tom Broadhurst, Vitor
Cardoso, Gary Centers, Andy Cohen, Vincent Desjacques, Sergei Dubovsky, Mark
Hertzberg, Vid Iršič, Dima Levkov, Eugene Lim, Doddy Marsh, Philip Mocz,
Alberto Nicolis, Jens Niemeyer, Adi Nusser, Marco Peloso, Massimo Pietroni,
Alessandro Podo, Riccardo Rattazzi, Leslie Rosenberg, Hsi-Yu Schive, Sergei
Sibiryakov, Pierre Sikivie, Will Terrano, Cora Uhlemann, Tanmay Vachaspati,
Jacqueline van Gorkom, Matteo Viel and Dennis Zaritsky for useful discussions.
Special thanks to Xinyu Li for providing some of the figures, and to Kfir
Blum, Jo Bovy, Tom Broadhurst, Katy Clough, Neal Dalal, Anson Hook, Vid
Iršič, Eliot Quataert, Jerry Ostriker, Surjeet Rajendran, Leslie Rosenberg,
David Spergel, Will Terrano, Scott Tremaine, Matteo Viel and Dennis Zaritsky
for comments and suggestions on the manuscript. Support by a Simons Fellowship
in Theoretical Physics and the Department of Energy DE-SC0011941 is gratefully
acknowledged.
## References
* Abbott & Sikivie (1983) Abbott L, Sikivie P. 1983. Phys. Lett. B 120:133–136
* Aghanim et al. (2020) Aghanim N, et al. 2020. Astron. Astrophys. 641:A6
* Agrawal et al. (2020) Agrawal P, Hook A, Huang J. 2020. JHEP 07:138
* Akiyama et al. (2019) Akiyama K, et al. 2019. Astrophys. J. Lett. 875:L5
* Alexander et al. (2019) Alexander S, Bramburger JJ, McDonough E. 2019. Phys. Lett. B 797:134871
* Alexander & Cormack (2017) Alexander S, Cormack S. 2017. JCAP 1704:005
* Alexander et al. (2020) Alexander S, Gleyzer S, McDonough E, Toomey MW, Usai E. 2020. Astrophys. J. 893:15
* Allali & Hertzberg (2020) Allali I, Hertzberg MP. 2020. JCAP 07:056
* Alonso-Álvarez & Jaeckel (2018) Alonso-Álvarez G, Jaeckel J. 2018. JCAP 10:022
* Amendola & Barbieri (2006) Amendola L, Barbieri R. 2006. Phys. Lett. B642:192–196
* Amin et al. (2012) Amin MA, Easther R, Finkel H, Flauger R, Hertzberg MP. 2012. Phys. Rev. Lett. 108:241302
* Amorisco & Evans (2012) Amorisco N, Evans N. 2012. Mon. Not. Roy. Astron. Soc. 419:184–196
* Amorisco & Loeb (2018) Amorisco NC, Loeb A. 2018. arXiv:1808.00464
* Anastassopoulos et al. (2017) Anastassopoulos V, et al. 2017. Nature Phys. 13:584–590
* Annulli et al. (2020) Annulli L, Cardoso V, Vicente R. 2020. Phys. Rev. D 102:063022
* Aoki & Mukohyama (2016) Aoki K, Mukohyama S. 2016. Phys. Rev. D 94:024001
* Armengaud et al. (2017) Armengaud E, Palanque-Delabrouille N, Yèche C, Marsh DJ, Baur J. 2017. Mon. Not. Roy. Astron. Soc. 471:4606–4614
* Arvanitaki et al. (2017) Arvanitaki A, Baryakhtar M, Dimopoulos S, Dubovsky S, Lasenby R. 2017. Phys. Rev. D 95:043001
* Arvanitaki et al. (2010) Arvanitaki A, Dimopoulos S, Dubovsky S, Kaloper N, March-Russell J. 2010. Phys. Rev. D81:123530
* Arvanitaki et al. (2020) Arvanitaki A, Dimopoulos S, Galanis M, Lehner L, Thompson JO, Van Tilburg K. 2020. Phys. Rev. D 101:083014
* Arvanitaki & Dubovsky (2011) Arvanitaki A, Dubovsky S. 2011. Phys. Rev. D83:044026
* Axenides et al. (1983) Axenides M, Brandenberger RH, Turner MS. 1983. Phys. Lett. B 126:178–182
* Bachlechner et al. (2019) Bachlechner TC, Eckerle K, Janssen O, Kleban M. 2019. JCAP 1909:062
* Bai & Hamada (2018) Bai Y, Hamada Y. 2018. Phys. Lett. B 781:187–194
* Baldeschi et al. (1983) Baldeschi MR, Ruffini R, Gelmini GB. 1983. Phys. Lett. 122B:221–224
* Bamber et al. (2020) Bamber J, Clough K, Ferreira PG, Hui L, Lagos M. 2020. arXiv:2011.07870
* Banik et al. (2019a) Banik N, Bovy J, Bertone G, Erkal D, de Boer T. 2019a. arXiv:1911.02662
* Banik et al. (2019b) Banik N, Bovy J, Bertone G, Erkal D, de Boer T. 2019b. arXiv:1911.02663
* Banik & Sikivie (2013) Banik N, Sikivie P. 2013. Phys. Rev. D88:123517
* Bar et al. (2018) Bar N, Blas D, Blum K, Sibiryakov S. 2018. Phys. Rev. D98:083027
* Bar et al. (2020) Bar N, Blum K, D’Amico G. 2020. Phys. Rev. D 101:123025
* Bar et al. (2019a) Bar N, Blum K, Eby J, Sato R. 2019a. Phys. Rev. D 99:103020
* Bar et al. (2019b) Bar N, Blum K, Lacroix T, Panci P. 2019b. JCAP 07:045
* Bar-Or et al. (2019) Bar-Or B, Fouvry JB, Tremaine S. 2019. Astrophys. J. 871:28
* Barausse et al. (2014) Barausse E, Cardoso V, Pani P. 2014. Phys. Rev. D 89:104059
* Bardeen et al. (1972) Bardeen JM, Press WH, Teukolsky SA. 1972. Astrophys. J. 178:347
* Barkana et al. (2001) Barkana R, Haiman Z, Ostriker JP. 2001. Astrophys. J. 558:482
* Barranco et al. (2012) Barranco J, Bernal A, Degollado JC, Diez-Tejedor A, Megevand M, et al. 2012. Phys. Rev. Lett. 109:081102
* Baumann (2011) Baumann D. 2011. Inflation. In Theoretical Advanced Study Institute in Elementary Particle Physics: Physics of the Large and the Small
* Baumann et al. (2019) Baumann D, Chia HS, Porto RA. 2019. Phys. Rev. D 99:044001
* Bekenstein (1972a) Bekenstein J. 1972a. Phys. Rev. D 5:2403–2412
* Bekenstein (1972b) Bekenstein JD. 1972b. Phys. Rev. D 5:1239–1246
* Bennett et al. (2013) Bennett CL, Larson D, Weiland JL, Jarosik N, Hinshaw G, et al. 2013. ApJS 208:20
* Berezhiani et al. (2019) Berezhiani L, Elder B, Khoury J. 2019. JCAP 10:074
* Berezhiani & Khoury (2015a) Berezhiani L, Khoury J. 2015a. Phys. Rev. D92:103510
* Berezhiani & Khoury (2015b) Berezhiani L, Khoury J. 2015b. Phys. Rev. D92:103510
* Bezerra et al. (2014) Bezerra VB, Vieira HS, Costa AA. 2014. Class. Quant. Grav. 31:045003
* Bialynicki-Birula et al. (2000) Bialynicki-Birula I, Bialynicka-Birula Z, Śliwa C. 2000. Phys. Rev. A 61:032110
* Binney & Tremaine (2008) Binney J, Tremaine S. 2008. Galactic Dynamics, 2nd ed. Princeton, NJ, Princeton University Press
* Bird et al. (2016) Bird S, Cholis I, Muñoz JB, Ali-Haïmoud Y, Kamionkowski M, et al. 2016. Phys. Rev. Lett. 116:201301
* Blas et al. (2020) Blas D, Caputo A, Ivanov MM, Sberna L. 2020. Phys. Dark Univ. 27:100428
* Bonaca et al. (2018) Bonaca A, Hogg DW, Price-Whelan AM, Conroy C. 2018. arXiv:1811.03631
* Bovy et al. (2017) Bovy J, Erkal D, Sanders JL. 2017. Mon. Not. Roy. Astron. Soc. 466:628–668
* Bovy & Tremaine (2012) Bovy J, Tremaine S. 2012. Astrophys. J. 756:89
* Bowman et al. (2018) Bowman JD, Rogers AEE, Monsalve RA, Mozdzen TJ, Mahesh N. 2018. Nature 555:67–70
* Brax et al. (2020) Brax P, Cembranos JA, Valageas P. 2020. Phys. Rev. D 101:023521
* Brook & Coles (2009) Brook MN, Coles P. 2009. arXiv:0902.0605
* Budker et al. (2014) Budker D, Graham PW, Ledbetter M, Rajendran S, Sushkov A. 2014. Phys. Rev. X4:021030
* Burkert (2020) Burkert A. 2020. arXiv:2006.11111
* Buschmann et al. (2020) Buschmann M, Foster JW, Safdi BR. 2020. Phys. Rev. Lett. 124:161103
* Calabrese & Spergel (2016) Calabrese E, Spergel DN. 2016. Mon. Not. Roy. Astron. Soc. 460:4397–4402
* Caputo et al. (2019) Caputo A, Sberna L, Frias M, Blas D, Pani P, et al. 2019. Phys. Rev. D 100:063515
* Carroll & Field (1997) Carroll SM, Field GB. 1997. Phys. Rev. Lett. 79:2394–2397
* Carroll et al. (1990) Carroll SM, Field GB, Jackiw R. 1990. Phys. Rev. D 41:1231
* Cen et al. (2009) Cen R, McDonald P, Trac H, Loeb A. 2009. Astrophys. J. 706:L164–L167
* Centers et al. (2019) Centers GP, et al. 2019. arXiv:1905.13650
* Chambers et al. (2019) Chambers KC, Magnier EA, Metcalfe N, Flewelling HA, Huber ME, et al. 2019. arXiv:1612.05560
* Chan et al. (2020) Chan JH, Schive HY, Wong SK, Chiueh T, Broadhurst T. 2020. Phys. Rev. Lett. 125:111102
* Chavanis (2011) Chavanis PH. 2011. Phys. Rev. D84:043531
* Chavanis (2019) Chavanis PH. 2019. Eur. Phys. J. Plus 134:352
* Chen et al. (2017) Chen SR, Schive HY, Chiueh T. 2017. Mon. Not. Roy. Astron. Soc. 468:1338–1348
* Chiba (2002) Chiba M. 2002. Astrophys. J. 565:17
* Chiueh et al. (2011) Chiueh T, Woo TP, Jian HY, Schive HY. 2011. Journal of Physics B 44:115101
* Chluba et al. (2019) Chluba J, et al. 2019. arXiv:1909.01593
* Choi & Im (2016) Choi K, Im SH. 2016. JHEP 01:149
* Church et al. (2019) Church BV, Ostriker JP, Mocz P. 2019. Mon. Not. Roy. Astron. Soc. 485:2861–2876
* Clough et al. (2018) Clough K, Dietrich T, Niemeyer JC. 2018. Phys. Rev. D 98:083020
* Clough et al. (2019) Clough K, Ferreira PG, Lagos M. 2019. Phys. Rev. D100:063014
* Clowe et al. (2006) Clowe D, Bradač M, Gonzalez AH, Markevitch M, Randall SW, et al. 2006. ApJ 648:L109–L113
* Co et al. (2020) Co RT, Hall LJ, Harigaya K. 2020. Phys. Rev. Lett. 124:251802
* Cole et al. (2012) Cole DR, Dehnen W, Read JI, Wilkinson MI. 2012. Mon. Not. Roy. Astron. Soc. 426:601
* Cookmeyer et al. (2020) Cookmeyer J, Grin D, Smith TL. 2020. Phys. Rev. D 101:023501
* Croft et al. (1998) Croft R, Weinberg DH, Katz N, Hernquist L. 1998. Astrophys. J. 495:44–62
* Croft (2004) Croft RA. 2004. Astrophys. J. 610:642–662
* Dai et al. (2020) Dai L, Kaurov AA, Sharon K, Florian MK, Miralda-Escudé J, et al. 2020. Mon. Not. Roy. Astron. Soc. 495:3192–3208
* Dalal et al. (2020) Dalal N, Bovy J, Hui L, Li X. 2020. arXiv:2011.13141
* Dalal & Kochanek (2002) Dalal N, Kochanek CS. 2002. Astrophys. J. 572:25–33
* D’Aloisio et al. (2018) D’Aloisio A, McQuinn M, Davies FB, Furlanetto SR. 2018. Mon. Not. Roy. Astron. Soc. 473:560–575
* Damour et al. (1976) Damour T, Deruelle N, Ruffini R. 1976. Lett. Nuovo Cim. 15:257–262
* Davies & Mocz (2020) Davies EY, Mocz P. 2020. Mon. Not. Roy. Astron. Soc. 492:5721–5729
* Davoudiasl & Denton (2019) Davoudiasl H, Denton PB. 2019. Phys. Rev. Lett. 123:021102
* Davoudiasl & Murphy (2017) Davoudiasl H, Murphy CW. 2017. Phys. Rev. Lett. 118:141801
* De Martino et al. (2020) De Martino I, Broadhurst T, Tye SHH, Chiueh T, Schive HY. 2020. Phys. Dark Univ. 28:100503
* DeBoer et al. (2017) DeBoer DR, et al. 2017. Publ. Astron. Soc. Pac. 129:045001
* Derevianko (2018) Derevianko A. 2018. Phys. Rev. A97:042506
* DeRocco et al. (2020) DeRocco W, Graham PW, Rajendran S. 2020. Phys. Rev. D 102:075015
* DeRocco & Hook (2018) DeRocco W, Hook A. 2018. Phys. Rev. D 98:035021
* Desjacques et al. (2018) Desjacques V, Kehagias A, Riotto A. 2018. Phys. Rev. D 97:023529
* Detweiler (1980) Detweiler SL. 1980. Phys. Rev. D22:2323–2326
* Dietrich & Clough (2019) Dietrich T, Clough K. 2019. Phys. Rev. D 100:083005
* Dine (2000) Dine M. 2000. TASI lectures on the strong CP problem. In Theoretical Advanced Study Institute in Elementary Particle Physics (TASI 2000): Flavor Physics for the Millennium
* Dine (2016) Dine M. 2016. Supersymmetry and String Theory: Beyond the Standard Model. Cambridge University Press
* Dine & Fischler (1983) Dine M, Fischler W. 1983. Phys. Lett. B 120:137–141
* Dine et al. (1981) Dine M, Fischler W, Srednicki M. 1981. Phys. Lett. B 104:199–202
* Dolan (2007) Dolan SR. 2007. Phys. Rev. D76:084001
* Du et al. (2018a) Du N, et al. 2018a. Phys. Rev. Lett. 120:151301
* Du et al. (2017) Du X, Behrens C, Niemeyer JC. 2017. Mon. Not. Roy. Astron. Soc. 465:941–951
* Du et al. (2018b) Du X, Schwabe B, Niemeyer JC, Bürger D. 2018b. Phys. Rev. D 97:063507
* Dvali & Zell (2018) Dvali G, Zell S. 2018. JCAP 07:064
* Easther et al. (2009) Easther R, Giblin John T. J, Hui L, Lim EA. 2009. Phys. Rev. D 80:123519
* Eby et al. (2016a) Eby J, Kouvaris C, Nielsen NG, Wijewardhana L. 2016a. JHEP 02:028
* Eby et al. (2016b) Eby J, Suranyi P, Wijewardhana L. 2016b. Mod. Phys. Lett. A 31:1650090
* Edwards et al. (2018) Edwards F, Kendall E, Hotchkiss S, Easther R. 2018. JCAP 1810:027
* Eggemeier & Niemeyer (2019) Eggemeier B, Niemeyer JC. 2019. Phys. Rev. D 100:063528
* Ellis & Olive (1987) Ellis JR, Olive KA. 1987. Phys. Lett. B 193:525
* Endlich & Penco (2017) Endlich S, Penco R. 2017. JHEP 05:052
* Fairbairn et al. (2018) Fairbairn M, Marsh DJE, Quevillon J, Rozier S. 2018. Phys. Rev. D 97:083502
* Fan (2016) Fan J. 2016. Phys. Dark Univ. 14:84–94
* Fedderke et al. (2019) Fedderke MA, Graham PW, Rajendran S. 2019. Phys. Rev. D 100:015040
* Felder & Tkachev (2008) Felder GN, Tkachev I. 2008. Comput. Phys. Commun. 178:929–932
* Fetter (2008) Fetter AL. 2008. Laser Physics 18:1–11
* Feynman et al. (1963) Feynman RP, Leighton RB, Sands M. 1963. The Feynman Lectures on Physics. Addison Wesley Longman
* Ficarra et al. (2019) Ficarra G, Pani P, Witek H. 2019. Phys. Rev. D 99:104019
* Foster et al. (2020a) Foster JW, Kahn Y, Macias O, Sun Z, Eatough RP, et al. 2020a. Phys. Rev. Lett. 125:171301
* Foster et al. (2020b) Foster JW, Kahn Y, Nguyen R, Rodd NL, Safdi BR. 2020b. arXiv:2009.14201
* Foster et al. (2018) Foster JW, Rodd NL, Safdi BR. 2018. Phys. Rev. D97:123006
* Freeman (1970) Freeman K. 1970. Astrophys. J. 160:811
* Friedberg et al. (1987a) Friedberg R, Lee T, Pang Y. 1987a. Phys. Rev. D 35:3640
* Friedberg et al. (1987b) Friedberg R, Lee T, Pang Y. 1987b. Phys. Rev. D 35:3658
* Garcia-Bellido & Ruiz Morales (2017) Garcia-Bellido J, Ruiz Morales E. 2017. Phys. Dark Univ. 18:47–54
* Gardner et al. (2006) Gardner JP, et al. 2006. Space Sci. Rev. 123:485
* Garny et al. (2020) Garny M, Konstandin T, Rubira H. 2020. JCAP 04:003
* Giblin et al. (2010) Giblin John T. J, Hui L, Lim EA, Yang IS. 2010. Phys. Rev. D 82:045019
* Gilman et al. (2020) Gilman D, Birrer S, Nierenberg A, Treu T, Du X, Benson A. 2020. Mon. Not. Roy. Astron. Soc. 491:6077–6101
* Gilman et al. (2019) Gilman D, Birrer S, Treu T, Nierenberg A, Benson A. 2019. Mon. Not. Roy. Astron. Soc. 487:5721–5738
* Glauber (1963) Glauber RJ. 1963. Phys. Rev. 130:2529–2539
* Goerdt et al. (2006) Goerdt T, Moore B, Read J, Stadel J, Zemp M. 2006. Mon. Not. Roy. Astron. Soc. 368:1073–1077
* Gondolo & Silk (2000) Gondolo P, Silk J. 2000. Nucl. Phys. B Proc. Suppl. 87:87–89
* Goodman (2000) Goodman J. 2000. New Astron. 5:103
* Gorghetto et al. (2020) Gorghetto M, Hardy E, Villadoro G. 2020. arXiv:2007.04990
* Graham et al. (2015) Graham PW, Irastorza IG, Lamoreaux SK, Lindner A, van Bibber KA. 2015. Ann. Rev. Nucl. Part. Sci. 65:485–514
* Graham et al. (2016a) Graham PW, Kaplan DE, Mardon J, Rajendran S, Terrano WA. 2016a. Phys. Rev. D 93:075029
* Graham et al. (2016b) Graham PW, Mardon J, Rajendran S. 2016b. Phys. Rev. D 93:103520
* Graham & Rajendran (2013) Graham PW, Rajendran S. 2013. Phys. Rev. D88:035023
* Green et al. (1988) Green MB, Schwarz J, Witten E. 1988. SUPERSTRING THEORY. VOL. 2: LOOP AMPLITUDES, ANOMALIES AND PHENOMENOLOGY
* Grilli di Cortona et al. (2016) Grilli di Cortona G, Hardy E, Pardo Vega J, Villadoro G. 2016. JHEP 01:034
* Grillmair & Dionatos (2006) Grillmair CJ, Dionatos O. 2006. Astrophys. J. Lett. 643:L17–L20
* Guth et al. (2015) Guth AH, Hertzberg MP, Prescod-Weinstein C. 2015. Phys. Rev. D92:103513
* Guzman & Urena-Lopez (2006a) Guzman F, Urena-Lopez L. 2006a. Astrophys. J. 645:814–819
* Guzman & Urena-Lopez (2006b) Guzman FS, Urena-Lopez LA. 2006b. Astrophys. J. 645:814–819
* Halverson et al. (2017) Halverson J, Long C, Nath P. 2017. Phys. Rev. D96:056025
* Hannuksela et al. (2019) Hannuksela OA, Wong KW, Brito R, Berti E, Li TG. 2019. Nature Astron. 3:447–451
* Harari & Sikivie (1992) Harari D, Sikivie P. 1992. Phys. Lett. B 289:67–72
* Harrison et al. (2003) Harrison R, Moroz I, Tod KP. 2003. Nonlinearity 16:101–122
* Helfer et al. (2019) Helfer T, Lim EA, Garcia MA, Amin MA. 2019. Phys. Rev. D 99:044046
* Helfer et al. (2017) Helfer T, Marsh DJE, Clough K, Fairbairn M, Lim EA, Becerril R. 2017. JCAP 03:055
* Hertzberg & Schiappacasse (2018) Hertzberg MP, Schiappacasse ED. 2018. JCAP 1808:028
* Hezaveh et al. (2016a) Hezaveh Y, Dalal N, Holder G, Kisner T, Kuhlen M, Perreault Levasseur L. 2016a. JCAP 1611:048
* Hezaveh et al. (2016b) Hezaveh YD, et al. 2016b. Astrophys. J. 823:37
* Higaki et al. (2014) Higaki T, Jeong KS, Takahashi F. 2014. Phys. Lett. B 734:21–26
* Hills et al. (2018) Hills R, Kulkarni G, Meerburg PD, Puchwein E. 2018. Nature 564:E32–E34
* Hinshaw et al. (2013) Hinshaw G, et al. 2013. Astrophys. J. Suppl. 208:19
* Hirano et al. (2018) Hirano S, Sullivan JM, Bromm V. 2018. Mon. Not. Roy. Astron. Soc. 473:L6–L10
* Hložek et al. (2017) Hložek R, Marsh DJE, Grin D, Allison R, Dunkley J, Calabrese E. 2017. Phys. Rev. D 95:123511
* Hlozek et al. (2015) Hlozek R, Grin D, Marsh DJE, Ferreira PG. 2015. Phys. Rev. D91:103512
* Hoekstra et al. (2004) Hoekstra H, Yee HK, Gladders MD. 2004. Astrophys. J. 606:67–77
* Hook (2019) Hook A. 2019. PoS TASI2018:004
* Hook et al. (2018) Hook A, Kahn Y, Safdi BR, Sun Z. 2018. Phys. Rev. Lett. 121:241102
* Horbatsch & Burgess (2012) Horbatsch M, Burgess C. 2012. JCAP 05:010
* Hsueh et al. (2020) Hsueh JW, Enzi W, Vegetti S, Auger M, Fassnacht CD, et al. 2020. Mon. Not. Roy. Astron. Soc. 492:3047–3059
* Hu et al. (2000) Hu W, Barkana R, Gruzinov A. 2000. Phys. Rev. Lett. 85:1158–1161
* Hui (1999) Hui L. 1999. Astrophys. J. 516:519–526
* Hui & Gnedin (1997) Hui L, Gnedin NY. 1997. Mon. Not. Roy. Astron. Soc. 292:27
* Hui et al. (2020) Hui L, Joyce A, Landry MJ, Li X. 2020
* Hui et al. (2019) Hui L, Kabat D, Li X, Santoni L, Wong SSC. 2019. JCAP 1906:038
* Hui et al. (2017) Hui L, Ostriker JP, Tremaine S, Witten E. 2017. Phys. Rev. D95:043541
* Ibata et al. (2002) Ibata R, Lewis G, Irwin M. 2002. Mon. Not. Roy. Astron. Soc. 332:915
* Ibata et al. (2020) Ibata R, Thomas G, Famaey B, Malhan K, Martin N, Monari G. 2020. The Astrophysical Journal 891:161
* Ibata et al. (2016) Ibata RA, Lewis GF, Martin NF. 2016. The Astrophysical Journal 819:1
* Inoue (2011) Inoue S. 2011. Mon. Not. Roy. Astron. Soc. 416:1181–1190
* Irastorza & Redondo (2018) Irastorza IG, Redondo J. 2018. Prog. Part. Nucl. Phys. 102:89–159
* Iršič et al. (2017) Iršič V, Viel M, Haehnelt MG, Bolton JS, Becker GD. 2017. Phys. Rev. Lett. 119:031302
* Iršič et al. (2020) Iršič V, Xiao H, McQuinn M. 2020. Phys. Rev. D 101:123518
* Ivanov et al. (2019) Ivanov M, Kovalev Y, Lister M, Panin A, Pushkarev A, et al. 2019. JCAP 02:059
* Ivezić et al. (2019) Ivezić v, et al. 2019. Astrophys. J. 873:111
* Jacobson (1999) Jacobson T. 1999. Phys. Rev. Lett. 83:2699–2702
* Jedamzik (2020) Jedamzik K. 2020. JCAP 09:022
* Johnston et al. (2002) Johnston KV, Spergel DN, Haydn C. 2002. Astrophys. J. 570:656
* Kahn et al. (2016) Kahn Y, Safdi BR, Thaler J. 2016. Phys. Rev. Lett. 117:141801
* Kain & Ling (2010) Kain B, Ling HY. 2010. Phys. Rev. D82:064042
* Kaplan & Rattazzi (2016) Kaplan DE, Rattazzi R. 2016. Phys. Rev. D 93:085007
* Kaplinghat et al. (2020) Kaplinghat M, Ren T, Yu HB. 2020. JCAP 06:027
* Kato & Soda (2020) Kato R, Soda J. 2020. JCAP 09:036
* Kaup (1968) Kaup DJ. 1968. Phys. Rev. 172:1331–1342
* Keating et al. (2018) Keating LC, Puchwein E, Haehnelt MG. 2018. Mon. Not. Roy. Astron. Soc. 477:5501–5516
* Khmelnitsky & Rubakov (2014) Khmelnitsky A, Rubakov V. 2014. JCAP 1402:019
* Kim (1979) Kim JE. 1979. Phys. Rev. Lett. 43:103
* Kim & Marsh (2016) Kim JE, Marsh D. 2016. Phys. Rev. D93:025027
* Kobayashi et al. (2017) Kobayashi T, Murgia R, De Simone A, Iršič V, Viel M. 2017. Phys. Rev. D 96:123514
* Kogut et al. (2019) Kogut A, Abitbol M, Chluba J, Delabrouille J, Fixsen D, et al. 2019. arXiv:1907.13195
* Kolb & Long (2020) Kolb EW, Long AJ. 2020. arXiv:2009.03828
* Kolb & Tkachev (1993) Kolb EW, Tkachev II. 1993. Phys. Rev. Lett. 71:3051–3054
* Kolb & Tkachev (1996) Kolb EW, Tkachev II. 1996. Astrophys. J. Lett. 460:L25–L28
* Kolb & Turner (1990) Kolb EW, Turner MS. 1990. The Early Universe. vol. 69
* Konoplya & Zhidenko (2006) Konoplya RA, Zhidenko A. 2006. Phys. Rev. D73:124040
* Kuepper et al. (2010) Kuepper A, Kroupa P, Baumgardt H, Heggie D. 2010. Mon. Not. Roy. Astron. Soc. 401:105
* Kulkarni & Ostriker (2020) Kulkarni M, Ostriker JP. 2020. arXiv:2011.02116
* Lancaster et al. (2020) Lancaster L, Giovanetti C, Mocz P, Kahn Y, Lisanti M, Spergel DN. 2020. JCAP 01:001
* Lentz et al. (2020) Lentz EW, Quinn TR, Rosenberg LJ. 2020. Nucl. Phys. B 952:114937
* Lesgourgues et al. (2002) Lesgourgues J, Arbey A, Salati P. 2002. New Astron. Rev. 46:791–799
* Levkov et al. (2018) Levkov D, Panin A, Tkachev I. 2018. Phys. Rev. Lett. 121:151301
* Li et al. (2019) Li X, Hui L, Bryan GL. 2019. Phys. Rev. D99:063509
* Li et al. (2021) Li X, Hui L, Yavetz TD. 2021. Phys. Rev. D 103:023508
* Li et al. (2020) Li Z, Shen J, Schive HY. 2020. arXiv:2001.00318
* Lidz & Hui (2018) Lidz A, Hui L. 2018. Phys. Rev. D 98:023011
* Lin et al. (2018) Lin SC, Schive HY, Wong SK, Chiueh T. 2018. Phys. Rev. D 97:103523
* Linde (1985) Linde AD. 1985. Phys. Lett. B 158:375–380
* Liu & Ng (2017) Liu GC, Ng KW. 2017. Phys. Dark Univ. 16:22–25
* Liu et al. (2019) Liu H, Elwood BD, Evans M, Thaler J. 2019. Phys. Rev. D100:023548
* Lora et al. (2012) Lora V, Magana J, Bernal A, Sanchez-Salcedo FJ, Grebel EK. 2012. JCAP 1202:011
* Lue et al. (1999) Lue A, Wang LM, Kamionkowski M. 1999. Phys. Rev. Lett. 83:1506–1509
* Lund (1991) Lund F. 1991. Physics Letters A 159:245 – 251
* Luscher (1981) Luscher M. 1981. Nucl. Phys. B180:317–329
* Luu et al. (2020) Luu HN, Tye SHH, Broadhurst T. 2020. Phys. Dark Univ. 30:100636
* Lyth (1990) Lyth DH. 1990. Phys. Lett. B 236:408–410
* Macedo et al. (2013) Macedo CF, Pani P, Cardoso V, Crispino LCB. 2013. Phys. Rev. D 88:064046
* Madelung (1927) Madelung E. 1927. Zeitschrift für Physik 40:322–326
* Mao & Schneider (1998) Mao Sd, Schneider P. 1998. Mon. Not. Roy. Astron. Soc. 295:587–594
* Marsh (2016) Marsh DJE. 2016. Phys. Rept. 643:1–79
* Marsh et al. (2013) Marsh DJE, Grin D, Hlozek R, Ferreira PG. 2013. Phys. Rev. D 87:121701
* Marsh & Niemeyer (2019) Marsh DJE, Niemeyer JC. 2019. Phys. Rev. Lett. 123:051103
* Marsh & Silk (2014) Marsh DJE, Silk J. 2014. Mon. Not. Roy. Astron. Soc. 437:2652–2663
* Martynov & Miao (2020) Martynov D, Miao H. 2020. Phys. Rev. D 101:095034
* Mathur et al. (2020) Mathur A, Rajendran S, Tanin EH. 2020. Phys. Rev. D 102:055015
* May & Springel (2021) May S, Springel V. 2021. arXiv:2101.01828
* Mayle et al. (1988) Mayle R, Wilson JR, Ellis JR, Olive KA, Schramm DN, Steigman G. 1988. Phys. Lett. B 203:188–196
* McDonald et al. (2005a) McDonald P, Seljak U, Cen R, Bode P, Ostriker JP. 2005a. Mon. Not. Roy. Astron. Soc. 360:1471–1482
* McDonald et al. (2005b) McDonald P, et al. 2005b. Astrophys. J. 635:761–783
* McKee et al. (2015) McKee CF, Parravano A, Hollenbach DJ. 2015. ApJ 814:13
* Metcalf & Madau (2001) Metcalf RB, Madau P. 2001. Astrophys. J. 563:9
* Mirizzi et al. (2008) Mirizzi A, Raffelt GG, Serpico PD. 2008. Lect. Notes Phys. 741:115–134
* Mishra-Sharma et al. (2020) Mishra-Sharma S, Van Tilburg K, Weiner N. 2020. Phys. Rev. D 102:023026
* Mocz & Succi (2015) Mocz P, Succi S. 2015. Phys. Rev. E91:053304
* Mocz et al. (2017) Mocz P, Vogelsberger M, Robles VH, Zavala J, Boylan-Kolchin M, et al. 2017. Mon. Not. Roy. Astron. Soc. 471:4559–4570
* Mocz et al. (2019) Mocz P, et al. 2019. Phys. Rev. Lett. 123:141301
* Mondino et al. (2020) Mondino C, Taki AM, Van Tilburg K, Weiner N. 2020
* Nadler et al. (2020) Nadler E, et al. 2020. arXiv:2008.00022
* Nielsen & Olesen (1973) Nielsen HB, Olesen P. 1973. Nucl. Phys. B61:45–61
* Niemeyer (2019) Niemeyer JC. 2019. Prog. Part. Nucl. Phys. :103787
* Nodland & Ralston (1997) Nodland B, Ralston JP. 1997. Phys. Rev. Lett. 78:3043–3046
* Nori & Baldi (2018) Nori M, Baldi M. 2018. Mon. Not. Roy. Astron. Soc. 478:3935–3951
* Nori et al. (2019) Nori M, Murgia R, Iršič V, Baldi M, Viel M. 2019. Mon. Not. Roy. Astron. Soc. 482:3227–3243
* Oñorbe et al. (2019) Oñorbe J, Davies F, Lukić Z, Hennawi J, Sorini D. 2019. Mon. Not. Roy. Astron. Soc. 486:4075–4097
* Oh et al. (2000) Oh KS, Lin D, Richer HB. 2000. ApJ 531:727–738
* Oman et al. (2019) Oman KA, Marasco A, Navarro JF, Frenk CS, Schaye J, Benítez-Llambay A. 2019\. Mon. Not. Roy. Astron. Soc. 482:821–847
* Oman et al. (2015) Oman KA, et al. 2015. Mon. Not. Roy. Astron. Soc. 452:3650–3665
* Onsager (1949) Onsager L. 1949. Il Nuovo Cimento 6:279–287
* Ostriker & Peebles (1973) Ostriker JP, Peebles PJE. 1973. ApJ 186:467–480
* Ouellet et al. (2019) Ouellet JL, et al. 2019. Phys. Rev. Lett. 122:121802
* Padmanabhan (2021) Padmanabhan N. 2021. in preparation
* Palanque-Delabrouille et al. (2013) Palanque-Delabrouille N, et al. 2013. Astron. Astrophys. 559:A85
* Palenzuela et al. (2017) Palenzuela C, Pani P, Bezares M, Cardoso V, Lehner L, Liebling S. 2017. Phys. Rev. D 96:104058
* Peccei & Quinn (1977) Peccei RD, Quinn HR. 1977. Phys. Rev. Lett. 38:1440–1443
* Peebles (2000) Peebles PJE. 2000. Astrophys. J. 534:L127
* Porayko et al. (2018) Porayko NK, et al. 2018. Phys. Rev. D98:102002
* Pozo et al. (2020) Pozo A, Broadhurst T, de Martino I, Chiueh T, Smoot GF, et al. 2020. arXiv:2010.10337
* Preskill et al. (1983) Preskill J, Wise MB, Wilczek F. 1983. Phys. Lett. 120B:127–132
* Press et al. (1990) Press WH, Ryden BS, Spergel DN. 1990. Phys. Rev. Lett. 64:1084
* Press & Schechter (1974) Press WH, Schechter P. 1974. Astrophys. J. 187:425–438
* Press & Teukolsky (1972) Press WH, Teukolsky SA. 1972. Nature 238:211–212
* Prusti et al. (2016) Prusti T, de Bruijne JHJ, Brown AGA, Vallenari A, Babusiaux C, et al. 2016. Astronomy & Astrophysics 595:A1
* Pustelny et al. (2013) Pustelny S, et al. 2013. Annalen Phys. 525:659–670
* Raffelt & Seckel (1988) Raffelt G, Seckel D. 1988. Phys. Rev. Lett. 60:1793
* Raffelt & Stodolsky (1988) Raffelt G, Stodolsky L. 1988. Phys. Rev. D 37:1237
* Raffelt (2008) Raffelt GG. 2008. Lect. Notes Phys. 741:51–71
* Raffelt & Dearborn (1987) Raffelt GG, Dearborn DS. 1987. Phys. Rev. D 36:2211
* Read et al. (2006) Read JI, Goerdt T, Moore B, Pontzen A, Stadel J, Lake G. 2006. Mon. Not. Roy. Astron. Soc. 373:1451–1460
* Rindler-Daller & Shapiro (2012) Rindler-Daller T, Shapiro PR. 2012. Mon. Not. Roy. Astron. Soc. 422:135–161
* Roberts et al. (2017) Roberts BM, Blewitt G, Dailey C, Murphy M, Pospelov M, et al. 2017. Nature Commun. 8:1195
* Rogers & Peiris (2020) Rogers KK, Peiris HV. 2020. arXiv:2007.12705
* Rubin & Ford (1970) Rubin VC, Ford W. Kent J. 1970. ApJ 159:379
* Ruffini & Bonazzola (1969) Ruffini R, Bonazzola S. 1969. Phys. Rev. 187:1767–1783
* Safarzadeh et al. (2018) Safarzadeh M, Scannapieco E, Babul A. 2018. Astrophys. J. Lett. 859:L18
* Safarzadeh & Spergel (2019) Safarzadeh M, Spergel DN. 2019. arXiv:1906.11848
* Sasaki et al. (2018) Sasaki M, Suyama T, Tanaka T, Yokoyama S. 2018. Class. Quant. Grav. 35:063001
* Savalle et al. (2019) Savalle E, Roberts BM, Frank F, Pottie PE, McAllister BT, et al. 2019. arXiv:1902.07192
* Schive et al. (2014a) Schive HY, Chiueh T, Broadhurst T. 2014a. Nature Phys. 10:496–499
* Schive et al. (2020) Schive HY, Chiueh T, Broadhurst T. 2020. Phys. Rev. Lett. 124:201301
* Schive et al. (2016) Schive HY, Chiueh T, Broadhurst T, Huang KW. 2016. Astrophys. J. 818:89
* Schive et al. (2014b) Schive HY, Liao MH, Woo TP, Wong SK, Chiueh T, et al. 2014b. Phys. Rev. Lett. 113:261302
* Schlattl et al. (1999) Schlattl H, Weiss A, Raffelt G. 1999. Astropart. Phys. 10:353–359
* Schmitz & Yanagida (2018) Schmitz K, Yanagida TT. 2018. Phys. Rev. D 98:075003
* Schneider (2018) Schneider A. 2018. Phys. Rev. D 98:063021
* Schutz (2020) Schutz K. 2020. Phys. Rev. D 101:123026
* Schwabe et al. (2020) Schwabe B, Gosenca M, Behrens C, Niemeyer JC, Easther R. 2020. Phys. Rev. D 102:083518
* Schwabe et al. (2016) Schwabe B, Niemeyer JC, Engels JF. 2016. Phys. Rev. D94:043513
* Seckel & Turner (1985) Seckel D, Turner MS. 1985. Phys. Rev. D 32:3178
* Seidel & Suen (1994) Seidel E, Suen WM. 1994. Phys. Rev. Lett. 72:2516–2519
* Sheth & Tormen (1999) Sheth RK, Tormen G. 1999. Mon. Not. Roy. Astron. Soc. 308:119
* Shifman et al. (1980) Shifman MA, Vainshtein AI, Zakharov VI. 1980. Nucl. Phys. B166:493–506
* Sibiryakov et al. (2020) Sibiryakov S, Sørensen P, Yu TT. 2020. JHEP 20:075
* Sikivie (1983) Sikivie P. 1983. Phys. Rev. Lett. 51:1415–1417. [Erratum: Phys. Rev. Lett. 52, 695 (1984)]
* Sikivie (2020) Sikivie P. 2020. arXiv:2003.02206
* Sikivie & Yang (2009) Sikivie P, Yang Q. 2009. Phys. Rev. Lett. 103:111301
* Silverman & Mallett (2002) Silverman MP, Mallett RL. 2002. Gen. Rel. Grav. 34:633–649
* Sin (1994) Sin SJ. 1994. Phys. Rev. D50:3650–3654
* Sivertsson et al. (2018) Sivertsson S, Silverwood H, Read J, Bertone G, Steger P. 2018. Mon. Not. Roy. Astron. Soc. 478:1677–1693
* Smith (1936) Smith S. 1936. Astrophys. J. 83:23–30
* Spergel & Steinhardt (2000) Spergel DN, Steinhardt PJ. 2000. Phys. Rev. Lett. 84:3760–3763
* Starobinskiǐ (1973) Starobinskiǐ AA. 1973. Soviet Journal of Experimental and Theoretical Physics 37:28
* Stott & Marsh (2018) Stott MJ, Marsh DJ. 2018. Phys. Rev. D 98:083006
* Suarez & Matos (2011) Suarez A, Matos T. 2011. Mon. Not. Roy. Astron. Soc. 416:87
* Svrcek & Witten (2006) Svrcek P, Witten E. 2006. JHEP 06:051
* Terrano et al. (2015) Terrano W, Adelberger E, Lee J, Heckel B. 2015. Phys. Rev. Lett. 115:201801
* Terrano et al. (2019) Terrano WA, Adelberger EG, Hagedorn CA, Heckel BR. 2019. Phys. Rev. Lett. 122:231301
* Tremaine & Gunn (1979) Tremaine S, Gunn JE. 1979. Phys. Rev. Lett. 42:407–410
* Tremaine (1976) Tremaine SD. 1976. ApJ 203:345–351
* Turner (1983) Turner MS. 1983. Phys. Rev. D 28:1243
* Turner (1988) Turner MS. 1988. Phys. Rev. Lett. 60:1797
* Turner & Wilczek (1991) Turner MS, Wilczek F. 1991. Phys. Rev. Lett. 66:5–8
* Uhlemann et al. (2014) Uhlemann C, Kopp M, Haugg T. 2014. Phys. Rev. D90:023517
* Uhlemann et al. (2019) Uhlemann C, Rampf C, Gosenca M, Hahn O. 2019. Phys. Rev. D 99:083524
* Ullio et al. (2001) Ullio P, Zhao H, Kamionkowski M. 2001. Phys. Rev. D 64:043504
* Unruh (1976) Unruh WG. 1976. Phys. Rev. D 14:3251–3259
* Veltmaat & Niemeyer (2016) Veltmaat J, Niemeyer JC. 2016. Phys. Rev. D94:123523
* Veltmaat et al. (2018) Veltmaat J, Niemeyer JC, Schwabe B. 2018. Phys. Rev. D98:043509
* Vieira et al. (2014) Vieira HS, Bezerra VB, Muniz CR. 2014. Annals Phys. 350:14–28
* Viel et al. (2013) Viel M, Schaye J, Booth CM. 2013. Mon. Not. Roy. Astron. Soc. 429:1734
* Vlahakis et al. (2015) Vlahakis C, Hunter TR, Hodge JA, Pérez LM, Andreani P, et al. 2015. The Astrophysical Journal 808:L4
* Wagner et al. (2012) Wagner T, Schlamminger S, Gundlach J, Adelberger E. 2012. Class. Quant. Grav. 29:184002
* Walker et al. (2009) Walker MG, Mateo M, Olszewski EW, Penarrubia J, Evans N, Gilmore G. 2009. Astrophys. J. 704:1274–1287. [Erratum: Astrophys.J. 710, 886–890 (2010)]
* Weinberg et al. (2015) Weinberg DH, Bullock JS, Governato F, Kuzio de Naray R, Peter AHG. 2015. Proc. Nat. Acad. Sci. 112:12249–12255
* Weinberg (1978) Weinberg S. 1978. Phys. Rev. Lett. 40:223–226
* Weiner (2019) Weiner N. 2019. Astrophys. Space Sci. Proc. 56:153–159
* Weltman et al. (2020) Weltman A, et al. 2020. Publ. Astron. Soc. Austral. 37:e002
* Widdicombe et al. (2018) Widdicombe JY, Helfer T, Marsh DJ, Lim EA. 2018. JCAP 10:005
* Widrow & Kaiser (1993) Widrow LM, Kaiser N. 1993. Astrophys. J. 416:L71–L74
* Wilczek (1978) Wilczek F. 1978. Phys. Rev. Lett. 40:279–282
* Wong et al. (2019) Wong LK, Davis AC, Gregory R. 2019. Phys. Rev. D 100:024010
* Wu et al. (2019) Wu X, McQuinn M, Kannan R, D’Aloisio A, Bird S, et al. 2019. Mon. Not. Roy. Astron. Soc. 490:3177–3195
* Yoshino & Kodama (2014) Yoshino H, Kodama H. 2014. PTEP 2014:043E02
* Zel’Dovich (1972) Zel’Dovich YB. 1972. Soviet Journal of Experimental and Theoretical Physics 35:1085
* Zhang & Yang (2020) Zhang J, Yang H. 2020. Phys. Rev. D 101:043020
* Zhang & Chiueh (2017) Zhang UH, Chiueh T. 2017. Phys. Rev. D 96:063522
* Zhitnitsky (1980) Zhitnitsky AR. 1980. Sov. J. Nucl. Phys. 31:260. [Yad. Fiz.31,497(1980)]
* Zinner (2011) Zinner NT. 2011. Phys. Res. Int. 2011:734543
* Zwicky (1933) Zwicky F. 1933. Helv. Phys. Acta 6:110–127
|
Further author information: (Send correspondence to S.S.T.)
S.S.T.: E-mail: <EMAIL_ADDRESS>
# The DESI Sky Continuum Monitor System
Suk Sien Tie (Department of Astronomy, The Ohio State University, Columbus, Ohio, USA);
David Kirkby (Department of Physics and Astronomy, University of California, Irvine, California, USA);
Paul Martini (Department of Astronomy, The Ohio State University, Columbus, Ohio, USA; Center for Cosmology and AstroParticle Physics, The Ohio State University, Columbus, Ohio, USA);
Claire Poppett (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Daniel Pappalardo (Department of Astronomy, The Ohio State University, Columbus, Ohio, USA);
David Schlegel (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Jonathan Shover (Department of Astronomy, The Ohio State University, Columbus, Ohio, USA);
Julien Guy (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Kevin Fanning (Center for Cosmology and AstroParticle Physics, The Ohio State University, Columbus, Ohio, USA);
Klaus Honscheid (Center for Cosmology and AstroParticle Physics, The Ohio State University, Columbus, Ohio, USA);
Michael Lampton (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Patrick Jelinsky (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Robert Besuner (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Kai Zhang (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
David Brooks (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Peter Doel (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Yutong Duan (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Enrique Gaztañaga (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Robert Kehoe (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Martin Landriau (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Michael Levi (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Francisco Prada (Lawrence Berkeley National Laboratory, Berkeley, California, USA);
Gregory Tarle (Lawrence Berkeley National Laboratory, Berkeley, California, USA)
###### Abstract
The Dark Energy Spectroscopic Instrument (DESI) is an ongoing spectroscopic
survey to measure the dark energy equation of state to unprecedented
precision. We describe the DESI Sky Continuum Monitor System, which tracks the
night sky brightness as part of a system that dynamically adjusts the
spectroscopic exposure time to produce more uniform data quality and to
maximize observing efficiency. The DESI dynamic exposure time calculator (ETC)
will combine sky brightness measurements from the Sky Monitor with data from
the guider system to calculate the exposure time to achieve uniform signal-to-
noise ratio (SNR) in the spectra under various observing conditions. The DESI
design includes 20 sky fibers, and these are split between two identical Sky
Monitor units to provide redundancy. Each Sky Monitor unit uses an SBIG
STXL-6303e CCD camera and supports an eight-position filter wheel. Both units
have been completed and delivered to the Mayall Telescope at the Kitt Peak
National Observatory. Commissioning results show that the Sky Monitor delivers
the performance required by the ETC.
###### keywords:
sky background, dynamic exposure time calculator
## 1 INTRODUCTION
The Dark Energy Spectroscopic Instrument (DESI) is a Stage-IV experiment by
the US Department of Energy to measure the dark energy equation of state with
high precision[1]. DESI will achieve this by spectroscopically mapping 34
million galaxies and quasars, an order of magnitude increase compared to
previous surveys, over a five-year period.
Previous major spectroscopic surveys such as the Sloan Digital Sky Survey
(SDSS) and the Baryon Oscillation Spectroscopic Survey (BOSS) adopted a fixed
exposure time regardless of observing conditions. To ensure a uniform SNR
depth, these surveys relied on real-time reductions of the actual spectra to
estimate the SNR of each exposure, after which a decision was made whether to
continue observing with additional exposures. DESI will not adopt this
approach, as most DESI targets require only one or two exposures.
Instead, it will utilize real-time observing conditions to estimate the
optimal exposure time that will meet the SNR requirement for a fiducial
target. Besides maximizing survey efficiency, this also ensures a uniform
depth in the spectroscopic data and more uniform redshift completeness, which
are crucial for cosmological surveys.
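The idea of converting observing conditions into an exposure time can be illustrated with a simple signal-to-noise model. The sketch below is an illustrative assumption, not the DESI ETC implementation: it assumes a Poisson-limited signal rate for a fiducial target, a sky background rate, and a fixed read noise, and solves SNR(t) = SNR_target for t, which reduces to a quadratic.

```python
import math

def required_exposure_time(snr_target, signal_rate, sky_rate, read_noise):
    """Estimate the exposure time t (in seconds) at which a fiducial target
    reaches snr_target, assuming Poisson-limited signal and sky counts plus a
    fixed read noise:

        SNR(t) = signal_rate*t / sqrt(signal_rate*t + sky_rate*t + read_noise**2)

    Setting SNR(t) = snr_target gives a quadratic a*t^2 + b*t + c = 0.
    (Illustrative sketch; rates would come from the guider and Sky Monitor.)
    """
    a = signal_rate ** 2
    b = -snr_target ** 2 * (signal_rate + sky_rate)
    c = -snr_target ** 2 * read_noise ** 2
    # The discriminant is always positive here, so take the positive root.
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```

With zero read noise this reduces to t = SNR_target² (S + B) / S², showing directly how a brighter sky (larger B, as reported by the Sky Monitor) lengthens the required exposure.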
The dynamic exposure time calculator (ETC) is responsible for adjusting the
spectroscopic exposure time during each science exposure. It tracks the
SNR/pixel for a fiducial source and closes the spectrograph shutter once a
desired integrated SNR has been achieved. The ETC relies on the DESI guide
cameras [2] for the signal estimate, particularly to extract the atmospheric
seeing and sky transparency. In principle, the noise, i.e. the sky
level, could be estimated from starless regions of the guide cameras. However,
this is not expected to be sufficient, as the sky level is degenerate with the
dark current, which varies enough with temperature that the sky brightness may
not be measurable with the required accuracy. Alternatively, starless regions
of the focus cameras could be used, as each has a dark occulting bar that
allows the dark current and sky brightness to be separated. Here we
present an alternative solution: a dedicated imaging system – the Sky Monitor
– that will be used to monitor the sky flux down the DESI sky fibers.
Information from both the guide cameras and the Sky Monitor will be
continuously fed to the ETC over the course of a science exposure.
In this manuscript, we describe the design of the Sky Monitor in Section 2 and
the commissioning and operation of the Sky Monitor in Section 3. We summarize
the main results and lessons learned in Section 4.
## 2 DESIGN
### 2.1 Mechanical Design
The Sky Monitor will measure the sky background level via seventeen sky fibers
located on the periphery of the DESI focal plane. These sky fibers are split
between two identical units to provide redundancy in the event of a failure of
one unit (the ETC can continue operating with only a subset of the sky
fibers). Figure 1 shows the locations of the sky fibers on the DESI focal
plane and which Sky Monitor unit they are connected to. The DESI sky fibers
from the focal plane terminate at the spectrograph fiber spool boxes. Fiber
spool boxes are used for fiber routing management and are located at each end
of the fiber cable. There are ten spool boxes connecting ten petals to ten
spectrographs. There are two sky fibers from each petal to each spool box.
Rather than being subsequently routed to the spectrographs like the science
fibers, the sky fibers are routed to the Sky Monitors. We used commercial
patch cables constructed with DESI fibers and SMA905 connectors to connect the
spool boxes to the Sky Monitor. The fiber tips of these patch cables have been
AR-coated.
Each Sky Monitor weighs $\sim$ 21 kg and is $\sim$ 525 mm $\times$ 375 mm
$\times$ 300 mm in size. Figure 2 shows the Sky Monitors fully enclosed (left)
and with the top and front enclosures removed (right). Each unit is mounted on
a breadboard and enclosed in commercial black aluminium enclosures to minimize
light leakage. We cut mouse holes on the left and right side panels for the
sky fibers and the power cables, respectively. We also created an opening on
the left side panel to provide an entry point for the camera fan air flow,
which is covered with a dust foam cover.
The Sky Monitor is made up of three main hardware components: an imaging
camera, a mount for the sky fibers, and a back illumination system. Figure 3
shows a top-down view of the system. The imaging components consist of a CCD
camera, an eight-position filter wheel, a commercial lens, and custom
3D-printed baffle tube and air vent. The Sky Monitor reuses the SBIG
STXL-6303e CCD cameras that were used for the DESI Commissioning
Instrument[3]. Their fast readout rate (measured full frame download time of
5.5 seconds with its linux driver) and support for binned readout mode also
make this camera a suitable choice. The SBIG camera mates with the eight-
position FW8S-STXL filter wheel, which supports 50 mm diameter 3 mm thick
filters. The filter wheel takes $\sim$ 2.5 seconds to move to the next slot
and only rotates in one direction.
The filter wheel is currently populated with an SDSS r-band filter (562 $-$
695 nm) and a filter stack for the back illumination (more details below). An
r-band filter was chosen as it covers the wavelength range crucial for the
redshift determination of emission-line galaxies targeted by DESI, whereas the
broadband SDSS filter was used to improve the faint sky flux signal. An
appropriate lens is needed to produce an overall compact system and minimize
lens distortion without vignetting the sky fibers. Our selection criteria led
us to the Nikon 50 mm f/1.2 lens, which was coupled to a 20 mm extension tube
to achieve a magnification of 0.5. A custom 3D-printed black baffle tube is
attached around the lens to minimize stray light. Finally, a 3D-printed air
vent is also attached to the camera fan to direct air flow.
The sky fibers are held in a custom fiber block assembly, as shown in Figure
3, which is then attached to a commercial manual XY translation stage; this
forms the mounting system. The translation stage is positioned such that we
can move the fiber block in the focus and lateral directions.
In addition to collecting light down the sky fibers, the Sky Monitor also back
illuminates these fibers to allow repositioning in the event that a star lands
on a sky fiber. The DESI fiber assignment code allows us to know in advance if
a fiber will fall on a star, and if so, to reposition it as needed. Fiber
positioning requires precise knowledge of the current location of the fibers,
which is achieved by back illuminating the fibers and imaging them with the
Fiber View Camera (FVC)[4] that is located behind the central hole of the
primary mirror of the Mayall telescope. The Sky Monitor back illumination
system consists of a custom printed circuit board that holds eighteen 460 nm
(the wavelength that the FVC detector is most sensitive to) surface mount
LEDs, LED control electronics and power supply, and a diffuser/neutral density
filter stack. We ran lab tests to determine the number of LEDs needed to be
comparable to the brightness of the exposure shutter illuminator[5] that back
illuminates the fiber positioners.
The LED circuit board is mounted in front of the fiber block such that the
LEDs face the filter wheel. The filter stack, made up of a ground-glass
diffuser and a 99.9% reflective metallic neutral density filter, is configured
such that the LED light first hits the diffuser before being reflected by the
neutral density filter. As such, the filter stack reflects diffuse light into
the sky fibers so that the fiber tips can be uniformly illuminated.
Additionally, the LEDs are tightly packed within an allowed region on the
circuit board so that their reflected light uniformly illuminates the
acceptance angle of the fibers.
Figure 1: The DESI focal plane with the sky fibers marked in yellow circles.
The DESI focal plane is divided into ten pie-like modules, or “petals”. With
two sky fibers per petal, there are a total of twenty sky fibers, of which
seventeen are operational. A black cross denotes a broken sky fiber. Ten
fibers are currently fed into Sky Monitor 1 (“SM1”) and seven into Sky Monitor
2 (“SM2”). The red rectangle on each petal denotes either the focus camera or
the guide camera.
Figure 2: The two Sky Monitors, fully enclosed (left) and partially opened up
(right). Figure 3: Internal view of the Sky Monitor, consisting of a CCD
camera, a filter wheel, a lens that is enclosed in a 3D-printed baffle tube
(red box), mounting system for the sky fibers (blue box), and electronics for
the back illumination system. The mounting system is made up of a custom fiber
block that holds the sky fiber, which is mounted on an XY translation stage.
### 2.2 Electrical Design
The electrical components of the Sky Monitor consist of the CCD camera, the
WAGO ethernet fieldbus controller (as part of the back illumination control
electronics), and the LED circuit board. These components require electrical
power and rely on ethernet connection for data transfer and communication. The
power and ethernet connection to both Sky Monitor units are provided by a
Raritan power distribution unit and an ethernet switch.
From the Raritan power distribution unit, one power cable serves the CCD
camera and another serves the 24 VDC power supply for the back illumination
control electronics. The back illumination control electronics subsequently
powers an internal 5V power supply for the LEDs. The CCD camera and the back
illumination control electronics are controlled by the DESI Instrument Control
System (ICS)[6] via an ethernet connection. From the ethernet switch, an
ethernet cable connects to each of these components.
### 2.3 Software Design
The Sky Monitor operations comprise taking exposures, changing the filter
wheel, and turning on/off the back illumination LEDs. These operations are
done through the ETC software and integrated with the DESI ICS exposure
control.
Over the course of a science exposure, we will take Sky Monitor images through
the r-band filter every 60 seconds during (lunar) bright time and every 200
seconds during dark and gray times. These cadences are set such that the Sky
Monitor exposure time is $\sim\frac{1}{5}$ of the shortest science exposure
time under these conditions. The fiber spots in the Sky Monitor images will be
analyzed to measure the sky count and its error. Figure 4 shows the software
design of the ETC, which is integrated with the DESI ICS.
In the event that a sky fiber needs to be repositioned to a blank region of
the sky before the start of a science exposure, the ICS will first rotate the
filter wheel to the position of the back illumination filter stack before
turning on the back illumination LEDs.
Figure 4: Input data and processing flow of the dynamic Exposure Time
Calculator (ETC) in producing a real-time signal-to-noise ratio (SNR)
forecast. The red boxes denote input sources and the black boxes denote the
analyses of these input data. The solid lines refer to the base processing
loop that occurs nightly and the dashed lines refer to day-time or additional
night-time analyses. The guide cameras (‘GFAs’) and the Sky Monitor are
mandatory components of the dynamic ETC. The output of the Sky Monitor (guide
cameras) serves as input parameter to a sky (signal) rate model. Supplementary
information from archived or quick-reduced spectra can be used to improve the
ETC rate models and calibrations.
## 3 COMMISSIONING AND OPERATION
Both units of the Sky Monitor have been installed at the Mayall Telescope,
integrated with the DESI ICS, and have been in operation since late January
2020. The following describes the functional verification and performance
results.
### 3.1 Focusing the fiber tips
The fiber block that secures the sky fibers is mounted on a manual XY
translation stage. We manually adjusted the translation stage and determined
the best focus of the fibers as seen through the r-band filter using ambient
dome light.
We discovered that the fiber tips appear to be at varying best-focus positions
and exhibit non-circular spot profiles, primarily due to comatic aberration.
This arises from a combination of the off-axis placement of the fibers on the
fiber block, imperfect tilt alignment (our stage has no tilt adjustment, so the
alignment was done by eye), and the non-uniform thread depth of the fibers in
the fiber block. We therefore calculated the average best-focus position and
locked the system there. The spot aberrations are mitigated by
measuring the spot profiles and the relative fiber throughput and accounting
for them in the sky flux determination. Figure 5 shows the spot profiles of
the seventeen sky fibers at the systemic best focus position.
Figure 5: Spot profile of the sky fibers at best focus position, taken during
dark time at zenith position. With the exception of a few, the majority of the
spots exhibit a fair amount of comatic aberration. This arises from off-axis
placement and non-uniform depth of the fibers in the fiber block and imperfect
tilt adjustment. Poor coupling of some fibers also results in a spread in the
resulting count rates, as shown in the text of each thumbnail.
### 3.2 Signal-to-noise ratio (SNR) of sky measurements
The Sky Monitor is required to deliver sky images every 60s during (lunar)
bright time and every 200s in dark and gray time. The sky level is expected to
be measured with 4% accuracy. Achieving better than 4% will not be useful as
the ETC is expected to be limited by calibration errors. Therefore, the SNR
per fiber is required to be $>$ 10 so that the ETC can operate using a subset
of the fibers, if necessary.
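The link between the per-fiber SNR requirement and the 4% sky-level accuracy target follows from simple error propagation: averaging $N$ independent fiber measurements, each at SNR $s$, yields a fractional error of $1/(s\sqrt{N})$. The following minimal sketch (not DESI code; the fiber counts come from the text, the function name is ours) shows that SNR $>$ 10 per fiber leaves margin even when only one Sky Monitor unit is operating:

```python
import math

def sky_level_accuracy(snr_per_fiber: float, n_fibers: int) -> float:
    """Fractional error of an unweighted average of n_fibers
    independent fiber measurements, each with the given SNR."""
    return 1.0 / (snr_per_fiber * math.sqrt(n_fibers))

# With all 17 operational fibers at the minimum per-fiber SNR of 10:
full = sky_level_accuracy(10, 17)    # ~2.4%
# With only one unit (7 fibers on SM2) still running:
subset = sky_level_accuracy(10, 7)   # ~3.8%, still under the 4% target
print(f"full: {full:.4f}, subset: {subset:.4f}")
```

This is why the requirement is stated per fiber: it guarantees the accuracy target holds for any reasonable subset of fibers.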
Following the hardware installation of both Sky Monitor units in February
2020, the software needed to operate them has also been successfully
integrated with the DESI ICS. Since then, the Sky Monitor has been used to
obtain commissioning data.
We measured the performance of the Sky Monitor using the commissioning data
collected under various observing conditions. Rather than the entire image of
the Sky Monitor, the ETC only analyses smaller thumbnails centered on each
fiber spot. The mean and background of each thumbnail image are first
calculated to obtain an initial estimate of the variance per pixel, which is
used to mask out hot pixels. After the fiber flux is fit, pixels with large
chi-squares (due e.g. to cosmic rays) are removed. The fiber flux is then re-
fit to obtain the final SNR per fiber. Finally, the weighted average flux and
flux error are calculated to estimate the sky level. The data and analyses
results are summarized in Table 1, which demonstrate that the Sky Monitor
delivers the required performance.
Table 1: Sky Monitor data and results

| Night | Number of exposures | Exposure time (sec) | Conditions | Moon illumination | Median fiber SNR | Median sky level accuracy |
| --- | --- | --- | --- | --- | --- | --- |
| 2020/01/26 | 10 | 60 | Zenith, dark | 5% | 9.4 | 2.4% |
| 2020/02/26 | 10 | 60 | Moon set | 11% | 11.3 | 2.0% |
| 2020/02/28 | 10 | 60 | Moon set, cloudy | 26% | 20.3 | 1.2% |
| | 10 | 60 | Dark, cloudy | | 11.9 | 1.9% |
| 2020/02/29 | 10 | 60 | Moon up, cloudy | 34% | 61.9 | 0.4% |
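The per-fiber analysis described above (variance estimate, hot-pixel/cosmic-ray rejection via chi-square, flux fit, inverse-variance combination) can be sketched as follows. This is an illustrative reconstruction, not the ETC code: it assumes the thumbnail is already background-subtracted and that a normalized spot profile per fiber is known, and all names are ours.

```python
import numpy as np

def fit_fiber_flux(thumb, profile, read_var, chi2_cut=9.0):
    """Fit the flux of one fiber spot in a background-subtracted thumbnail.

    thumb    : 2D array of counts (background already removed)
    profile  : 2D normalized spot profile (sums to 1)
    read_var : per-pixel noise variance estimate
    Returns (flux, flux_var)."""
    mask = np.ones_like(thumb, dtype=bool)
    for _ in range(2):  # fit, reject high-chi-square pixels, re-fit
        p, d = profile[mask], thumb[mask]
        flux = (p * d).sum() / (p * p).sum()   # linear least-squares amplitude
        chi2 = (thumb - flux * profile) ** 2 / read_var
        mask = chi2 < chi2_cut                 # drop cosmics / hot pixels
    flux_var = read_var / (profile[mask] ** 2).sum()
    return flux, flux_var

def combine_fibers(fluxes, variances):
    """Inverse-variance weighted mean sky level and its error."""
    wgt = 1.0 / np.asarray(variances)
    mean = (wgt * np.asarray(fluxes)).sum() / wgt.sum()
    return mean, 1.0 / np.sqrt(wgt.sum())
```

The weighted combination is what lets bright-fiber measurements dominate without discarding the fainter, poorly coupled fibers seen in Figure 5.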
### 3.3 Back illumination
We turned on the back illumination system and imaged the focal plane with the
Fiber View Camera (FVC). The FVC relies on a spot detection code to accurately
measure the center of each illuminated fiber and fiducial in its image. As
such, sufficient and uniform counts over all the illuminated spots are
essential. Figure 6 shows the back-illuminated focal plane with a subset of
the backlit sky fibers, which are on average $\sim 2-3$ times brighter than
the back-illuminated positioners and fiducials.
Figure 6: The back-illuminated focal plane as imaged by the Fiber View Camera,
consisting of a subset of the sky fibers that have been integrated with the
Sky Monitor (green circles), one fully backlit petal (forming a pie-like
structure), and all the fiducials.
### 3.4 Light tightness
Stray light should be minimized to allow for an accurate measurement of the
faint sky flux. The Sky Monitor is located in the Large Coude Room, where all
lights are turned off during normal night time operations. This, along with
the enclosures, helps to reduce external light from leaking into the system.
We fitted a baffle tube around the lens to remove internal stray light from
the LED control electronics and status lights from the camera.
Figure 7 shows raw images from both Sky Monitors taken with 60 seconds
exposures, where stray lights can be seen reflecting off shiny parts of the
circuit board. However, they have minimal impact on the analyses as they do
not land directly on any fiber spot.
Figure 7: Raw images from both Sky Monitors show traces of stray light being
reflected off shiny parts of the circuit board. These stray lights have
minimal impact on the performance as they do not overlap with the analysis
regions, denoted by the black squares, which are centered on the fiber spots.
Note that dark areas represent regions with high flux and vice versa.
## 4 CONCLUSION
The DESI Sky Monitor is an imaging system that tracks the night-sky brightness
as part of the dynamic ETC system. Combined with data from the DESI guide
cameras, it provides a real-time SNR estimate for the observations. This
allows the dynamic ETC to adjust the spectroscopic exposure time on-the-fly to
produce data of uniform depth and to maximize observing efficiency. Both units
have been successfully installed at the Mayall Telescope and integrated with
the DESI ICS. We verified that all components are working appropriately and
deliver the required performance necessary for the dynamic ETC.
To ensure optimal operation of the Sky Monitor, we plan to perform a number of
future upgrades. One potential upgrade is to add a g-band filter (401 $-$ 550
nm) to the current setup. While the r-band filter was selected because it
covers the wavelength range of the emission-line galaxies observed during dark
time, a g-band filter would benefit the low-redshift galaxies observed during
bright time. Following this, we plan to
configure the filters in the filter wheel to minimize the filter move time. As
the filter wheel only rotates in one direction, one needs to rotate through
empty slots in the filter wheel to land on a previous filter, where each move
takes $\sim$ 2.5 seconds. Therefore, we plan to fill up the empty slots and
configure the filters in a repeated ABAB (or ABCABC, if g-band filters are
included) format.
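The benefit of the repeated layout can be seen with a small sketch of the unidirectional wheel (the 2.5 s per-slot move time and eight slots come from the text; the slot layouts and function name are illustrative, and the back-illumination filter stack is omitted for simplicity):

```python
def steps_to(layout, current, target_filter):
    """Number of forward moves from slot `current` to the nearest slot
    holding `target_filter` on a wheel that only rotates one way."""
    n = len(layout)
    for step in range(1, n + 1):
        if layout[(current + step) % n] == target_filter:
            return step
    raise ValueError(f"{target_filter!r} not in wheel")

SEC_PER_STEP = 2.5
sparse = ["r", "g", None, None, None, None, None, None]  # mostly empty slots
abab   = ["r", "g", "r", "g", "r", "g", "r", "g"]        # repeated ABAB layout

# Switching g -> r (slot 1 holds 'g' in both layouts):
print(steps_to(sparse, 1, "r") * SEC_PER_STEP)  # 17.5 s: wrap past empty slots
print(steps_to(abab, 1, "r") * SEC_PER_STEP)    # 2.5 s: next slot
```

With the ABAB arrangement, any filter change is at most one slot away, so the worst-case move time drops from nearly a full revolution to a single 2.5 s step.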
Finally, in designing the back illumination system, we used an appropriate
number of LEDs so as to produce a brightness level comparable to the
illumination level of the exposure shutter illuminator. The current system
does not allow for brightness adjustment other than via the supplied voltage.
To allow for flexibility, and considering that the fiber illuminator has
flashing capability to accommodate a range of FVC exposure times, we consider
adding an ability to turn the LEDs on and off at a certain duty cycle. All
these future upgrades would entail extra commissioning steps, such as ensuring
that the required performance level can be met with the new g-band filter and
measuring the duty cycles required to achieve various back illumination
levels.
###### Acknowledgements.
This research is supported by the Director, Office of Science, Office of High
Energy Physics of the U.S. Department of Energy under Contract No.
DE–AC02–05CH1123, and by the National Energy Research Scientific Computing
Center, a DOE Office of Science User Facility under the same contract;
additional support for DESI is provided by the U.S. National Science
Foundation, Division of Astronomical Sciences under Contract No. AST-0950945
to the National Optical Astronomy Observatory; the Science and Technologies
Facilities Council of the United Kingdom; the Gordon and Betty Moore
Foundation; the Heising-Simons Foundation; the French Alternative Energies and
Atomic Energy Commission (CEA); the National Council of Science and Technology
of Mexico; the Ministry of Economy of Spain, and by the DESI Member
Institutions. The authors are honored to be permitted to conduct astronomical
research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance
to the Tohono O’odham Nation.
## References
* [1] Martini, P., Bailey, S., Besuner, R. W., Brooks, D., Doel, P., Edelstein, J., Eisenstein, D., Flaugher, B., Gutierrez, G., Harris, S. E., Honscheid, K., Jelinsky, P., Joyce, R., Kent, S., Levi, M., Prada, F., Poppett, C., Rabinowitz, D., Rockosi, C., Sas, L. C., Schlegel, D. J., Schubnell, M., Sharples, R., Silber, J. H., Sprayberry, D., and Wechsler, R., “Overview of the dark energy spectroscopic instrument,” Proc. SPIE 10702 (June 2018).
* [2] Jiménez, J., Illa, J. M., de Vicente, J., and Casas, R., “Desi-gfa testbench facilities for ccds characterization,” Proc. SPIE 9908 (August 2016).
* [3] Ross, A. J., Martini, P., Coles, R., Derwent, M., Honscheid, K., O’Brien, T. P., Pappalardo, D., Tie, S. S., Brooks, D., Schubnell, M., and Tarle, G., “The commissioning instrument for the dark energy spectroscopic instrument,” Proc. SPIE 10702 (July 2018).
* [4] Baltay, C., Rabinowitz, D., Besuner, R., Casetti, D., Emmet, W., Fagrelius, P., Girard, T., Heetderks, H., Lampton, M., Lathem, A., Levi, M., Padmanabhan, N., and Silber, J., “The desi fiber view camera system,” PASP 131(1000), 065001 (June 2019).
* [5] Derwent, M. A., O’Brien, T. P., Pappalardo, D. P., Martini, P., Coker, C. T., and Pogge, R. W., “The desi shutter with integrated fiber illumination system,” Proc. SPIE 9908 (August 2016).
* [6] Honscheid, K., Elliott, A. E., Buckley-Geer, E., Abreshi, B., Castander, F., da Costa, L., Kent, S., Kirkby, D., Marshall, R., Neilsen, E., Ogando, R., Rabinowitz, D., Roodman, A., Serrano, S., Brooks, D., Levi, M., and Tarle, G., “The desi instrument control systems: status and early testing,” Proc. SPIE 10707 (July 2018).
# Linearized theory of the fluctuation dynamics in 2D topological lasers
Aurelian Loirette–Pelous Université Paris-Saclay, Institut d’Optique Graduate
School, CNRS, Laboratoire Charles Fabry, 91127, Palaiseau, France INO-CNR BEC
Center and Dipartimento di Fisica, Università di Trento, 38123 Povo, Italy
Ivan Amelio INO-CNR BEC Center and Dipartimento di Fisica, Università di
Trento, 38123 Povo, Italy Matteo Seclì International School for Advanced
Studies (SISSA), Via Bonomea 265, I-34136 Trieste, Italy Iacopo Carusotto
INO-CNR BEC Center and Dipartimento di Fisica, Università di Trento, 38123
Povo, Italy
###### Abstract
We theoretically study the collective excitation modes of a topological laser
device operating in a single-mode steady-state with monochromatic emission. We
consider a model device based on a two-dimensional photonic Harper-Hofstadter
lattice including a broadband gain medium localized on the system edge.
Different regimes are considered as a function of the value of the optical
nonlinearity and of the gain relaxation time. The dispersion of the excitation
modes is calculated via a full two-dimensional Bogoliubov approach and
physically interpreted in terms of an effective one-dimensional theory.
Depending on the system parameters, various possible physical processes
leading to dynamical instabilities are identified and characterized. On this
basis, strategies to enforce a stable single-mode topological laser operation
are finally pointed out.
## I Introduction
One of the most exciting applications of topological photonics is the so-called
topological laser Harari _et al._ (2016, 2018); Pilozzi and Conti
(2016); Solnyshkov _et al._ (2016). Such topolaser devices are based on a
topological photonic system embedding a suitable gain material, so that laser
oscillation is induced to occur in a topologically protected edge mode Ozawa
_et al._ (2019); Ota _et al._ (2020). So far, topolasing operation has been
experimentally demonstrated both in the zero-dimensional edge states of one-
dimensional arrays St-Jean _et al._ (2017); Parto _et al._ (2018); Han _et
al._ (2019); Ota _et al._ (2018) as well as in the one-dimensional edge modes
of two-dimensional lattices Bahari _et al._ (2017); Bandres _et al._ (2018);
Zeng _et al._ (2020). As it was theoretically pointed out Wittek _et al._
(2017); Harari _et al._ (2018), such devices hold a promise for
optoelectronic applications, since the chiral nature of the edge modes
guarantees an efficient phase-locking of the emission over macroscopic
distances as well as enhanced robustness against fabrication disorder Harari
_et al._ (2018); Amelio and Carusotto (2020). This is of crucial importance
whenever one needs to combine high power and long-lasting coherence in a
single device.
While a clean single-mode emission has been achieved in Bahari _et al._
(2017); Bandres _et al._ (2018), several other experimental and theoretical
works have pointed out more complex behaviours. The topological quantum
cascade laser of Zeng _et al._ (2020) displays some secondary spectral peaks.
For a tight-binding topolaser model, the possibility of dynamical
instabilities arising from the interplay of optical nonlinearities and slow
carrier dynamics has been numerically highlighted Longhi _et al._ (2018).
Since such effects may dramatically affect the coherence properties of the
topolaser emission as well as its power efficiency, it is of crucial
importance to fully understand the various processes that may lead to
instabilities.
In this work, we report a numerical and analytical study of the dispersion of
the collective excitations around a monochromatically oscillating steady-
state. Our study is based on the Bogoliubov theory of the collective
excitations on top of dilute Bose-Einstein condensates Pitaevskii and
Stringari (2016), which was then generalized to lasers and non-equilibrium
condensates of exciton-polaritons Wouters and Carusotto (2007). On one hand,
our analysis allows identifying the general features of the excitation modes
and the dynamics of quantum and classical fluctuations of generic topolaser
devices. In particular, it provides microscopic support to the numerical
observations in Seclì _et al._ (2019) and to the study of the long-distance
and long-time correlators of the fluctuations that are involved in the spatio-
temporal coherence properties of the emission Amelio and Carusotto (2020). On
the other hand, our theory recovers the dynamical instabilities anticipated in
Longhi _et al._ (2018) and shines light on the different physical processes
that may destabilize a monochromatic topolaser operation and, eventually, lead
to a chaotic multi-mode emission. A related study of the collective
excitations of topolaser devices has appeared in Zapletal _et al._ (2020),
focusing on the case of a photonic Haldane model but restricting to the
idealized class-A limit of a fast carrier dynamics.
Here we go beyond this approximation and develop a more sophisticated theory
that includes the slow carrier dynamics of realistic semiconductor-based
devices. While the idealized tight-binding model considered in the present
work is likely to only provide qualitative insight on semiconductor laser
arrays Pick _et al._ (2015), we expect it to be quantitatively predictive for
the lattices of micropillars Baboux _et al._ (2018) used in polariton-based
topolaser devices Klembt _et al._ (2018). From a general theoretical
perspective, our work offers a powerful framework of major utility to
characterize instability processes in generic topolaser systems. This will be
of great importance in view of designing devices where instabilities are tamed
and the emission is robustly clean and monochromatic.
The structure of the work is the following. In Sec. II we review the general
concepts of a topological laser device based on including gain into a photonic
topological Harper-Hofstadter model and we introduce the theoretical model. In
Sec. III, we characterize the steady-state of the lasing device. In Sec. IV we
calculate the collective excitation modes in the simplest regime where the
gain medium has a very fast recovery time and no optical nonlinearity is
present beyond gain saturation, finding a stable topolaser behaviour. An
effective analytical 1D theory able to recover the main features of the
numerical 2D calculation is then proposed and quantitatively validated. In
Sec. V, we extend our theory of collective excitations to more complicated
regimes displaying a slow carrier dynamics in the gain medium and/or
significant nonlinearities: this allows us to identify the main processes that
may lead to dynamical instabilities and to propose strategies to tame them.
Conclusions are finally drawn in Sec. VI.
## II The Harper-Hofstadter topological laser
In this Section, we review the general features of a laser device built by
introducing gain on the edge of a topological photonic lattice. Going beyond
our previous works Seclì _et al._ (2019); Amelio and Carusotto (2020), we
consider a wider class of devices where different regimes of operation are
found depending on the timescale of the carrier dynamics in the gain medium
and on the optical nonlinearity of the platform.
Figure 1: Harper-Hofstadter model. Panel (a): energy bands of the
conservative Harper-Hofstadter Hamiltonian Eq. (1) with flux $\theta=1/4$ in a
finite lattice of $n_{y}=399$ sites along $y$ with periodic boundary
conditions along $x$. The blue (green) lines indicate the dispersion of the
edge modes localized on the $y=1$ ($y=n_{y}$) edge. The dark dot indicates the
spatially most localized edge mode within the lower energy gap on the $y=1$
edge. Panel (b): spatial localization function $\Lambda(k_{x})$ (red line) and
curvature of the dispersion (blue line) of the edge states. The left/right
part of the plot refers to the edge mode living on the $y=1$ edge in the
lower/upper energy gap. Panel (c): imaginary part of the Bogoliubov spectrum
of the linearized dynamics around the vacuum solution for a pump strength
right at the lasing threshold. The thick black lines show the prediction of
the full 2D model, while the thin red ones show the prediction of the 1D
effective theory for the excitations living on the $y=1$ edge.
### II.1 The Harper-Hofstadter model
As a specific and most relevant example, we focus on the case of a photonic
lattice implementing the so-called Harper-Hofstadter (HH) model Harper (1955);
Hofstadter (1976); Ozawa _et al._ (2019). In the Landau gauge, the HH
Hamiltonian reads:
$H=-J\sum_{x,y}\Big\{\hat{\psi}^{\dagger}_{x,y}\hat{\psi}_{x,y+1}+e^{-2\pi i\theta y}\hat{\psi}^{\dagger}_{x,y}\hat{\psi}_{x+1,y}+\textrm{h.c.}\Big\}$
(1)
where the zero of energies is set at the bare frequency of the sites, the sum
runs over all the sites of the lattice, $x,y=1,...,n_{x,y}$,
$\hat{\psi}_{x,y}$ is the (bosonic) photon annihilation operator at the site
$(x,y)$ and $J$ is a real-valued hopping amplitude. The topological properties
of this lattice are due to the synthetic magnetic field piercing it. Its
strength is quantified by the flux $\theta$ per plaquette in units of the
magnetic flux quantum. For rational $\theta=p/q$, the bulk eigenstates
distribute in $q$ energy bands with non-trivial topological properties encoded
in their Chern numbers. An example of such band dispersion is shown in Fig.
1(a) for the specific $\theta=1/4$ case on which we are going to focus
throughout this work. This dispersion is obtained by calculating the single-
particle eigenmodes of the full two-dimensional Harper-Hofstadter Hamiltonian
(1) under periodic (resp. open) boundary conditions along the $x$-axis (resp.
$y$-axis). Given the translational invariance along $x$, this reduces to a
one-dimensional diagonalization problem for each value of $k_{x}$.
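This reduction can be sketched numerically: in the Landau gauge of Eq. (1), a plane-wave ansatz $\psi_{x,y}=e^{ik_{x}x}\phi_{y}$ turns the $x$-hopping into an on-site term $-2J\cos(k_{x}-2\pi\theta y)$, leaving a tridiagonal matrix in $y$ to diagonalize for each $k_{x}$. A minimal sketch (the function name and the small lattice size are ours; flux $\theta=1/4$ as in the text):

```python
import numpy as np

def hh_bands(kx, n_y=40, theta=0.25, J=1.0):
    """Eigenvalues of the Harper-Hofstadter Hamiltonian of Eq. (1) at
    fixed kx, with open boundary conditions along y (Landau gauge)."""
    y = np.arange(1, n_y + 1)
    # x-hopping becomes a kx-dependent on-site term; y-hopping stays -J.
    H = np.diag(-2.0 * J * np.cos(kx - 2.0 * np.pi * theta * y))
    H += np.diag(-J * np.ones(n_y - 1), 1) + np.diag(-J * np.ones(n_y - 1), -1)
    return np.linalg.eigvalsh(H)

# Sweeping kx over the Brillouin zone traces the band structure of
# Fig. 1(a), including the chiral edge branches inside the gaps.
energies = np.array([hh_bands(kx) for kx in np.linspace(-np.pi, np.pi, 101)])
```

By the Gershgorin bound, all eigenvalues lie within $|E|\leq 4J$; the in-gap eigenvectors localized near $y=1$ or $y=n_{y}$ are the edge modes discussed below.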
In particular, note the chiral edge states that appear in the energy gaps
between the bands. Their dispersion $\epsilon(k_{x})$ is plotted in Fig. 1(a)
in blue and green lines for the $y=1,n_{y}$ edges, respectively. Two such edge
modes exist within each energy gap: they are localized on the opposite
$y=1,n_{y}$ physical edges of the system and propagate with opposite group
velocities. For instance, the edge mode of the negative energy gap with
positive group velocity is localized on the $y=1$ side, while the one with
negative group velocity on the $y=n_{y}$ side. The opposite holds for the edge
modes in the positive energy gap.
Some crucial properties of the edge modes localized on the $y=1$ edge are
summarized in Fig. 1(b), namely their effective mass
$m_{*}^{-1}=\partial^{2}_{k_{x}}\epsilon(k_{x})$ (related to the curvature of
the energy dispersion, blue line) and their overlap with the edge site $y=1$
(red line). The latter is quantified by the edge localization function,
$\Lambda(k_{x})=|\phi_{y=1}(k_{x})|^{2}\,,$ (2)
where $\phi_{y}(k_{x})$ is the wavefunction of the edge mode of wavevector
$k_{x}$, normalized according to $\sum_{y}|\phi_{y}(k_{x})|^{2}=1$. As
expected, the localization is maximum at $k_{x}$ values for which the edge
mode is located around the centre of the energy gap (black dot in Fig. 1(a)).
### II.2 Gain, loss and nonlinear terms
At the semiclassical level, we can replace the bosonic field operators on each
lattice site with $c$-number amplitudes and recast the field dynamics in terms
of the following equation of motion,
$i\frac{\partial\psi_{x,y}(t)}{\partial t}=\Big[g|\psi_{x,y}|^{2}+g_{R}N_{x,y}+\frac{i}{2}(RN_{x,y}-\gamma)\Big]\psi_{x,y}-J\Big[\psi_{x,y+1}+\psi_{x,y-1}+e^{-2\pi i\theta y}\psi_{x+1,y}+e^{+2\pi i\theta y}\psi_{x-1,y}\Big].$
(3)
Hopping between neighbouring sites occurs along both $x,y$ directions. In the
chosen Landau gauge, the synthetic magnetic field is encapsulated in a
$y$-dependent phase of the hopping along $x$. All lattice sites experience
losses at a rate $\gamma$ and the nonlinear refractive index results in an
intensity-dependent frequency shift proportional to the nonlinearity
coefficient $g$.
The gain is provided by a reservoir of incoherent excitations of density
$N_{x,y}$ obeying the rate equation,
$\frac{\partial N_{x,y}}{\partial t}=P\delta_{y,1}-(\gamma_{R}+R|\psi_{x,y}|^{2})N_{x,y}$ (4)
and describing, e.g., the density of electrons promoted to the conduction band
of a semiconductor gain material. This reservoir is pumped at a site-dependent
rate: as indicated in (4), we concentrate on the case where the pumping is
localized on the $y=1$ edge of the lattice and here has a uniform rate $P$.
The reservoir decays on a characteristic timescale set by $\gamma_{R}$ and
provides stimulated emission into field modes with an efficiency $R$. The
effect of the incoherent excitations on the refractive index, and hence on the
resonance frequency of each site, is included by the photon-reservoir
interaction term $g_{R}N_{x,y}$, which is at the origin of the Henry linewidth
enhancement factor Henry (1982).
An especially important regime is identified when the carrier dynamics is very
fast compared to the other timescales of the device, i.e. for
${\gamma_{R}}/{\gamma}\gg 1$. In this case we can make use of the adiabatic
approximation and set the left-hand side of Eq. (4) to zero. The carrier
density then instantaneously follows the field dynamics according to
$N_{x,y}=\frac{P\delta_{y,1}}{\gamma_{R}+R|\psi_{x,y}|^{2}}\,.$ (5)
In the following, a device satisfying this condition and featuring negligible
nonlinearities $g,g_{R}=0$ is referred to as a class-A laser and is described
by the following equations of motion for the field amplitude,
$i\frac{\partial\psi_{x,y}(t)}{\partial t}=(\mathbf{H}\psi)_{x,y}+\frac{i}{2}\bigg{(}\frac{\beta P\delta_{y,1}}{1+\beta|\psi_{x,y}|^{2}}-\gamma\bigg{)}\psi_{x,y}$ (6)
where the hopping matrix is such that
$(\mathbf{H}\psi)_{x,y}=-J\big{[}\psi_{x,y+1}+\psi_{x,y-1}+e^{-2\pi i\theta y}\psi_{x+1,y}+e^{+2\pi i\theta y}\psi_{x-1,y}\big{]}$ (7)
and the effective saturation parameter is $\beta={R}/{\gamma_{R}}$. This
simplified model was used in our previous works Seclì _et al._ (2019); Amelio
and Carusotto (2020).
The present work goes beyond this regime and extends the investigation to a
more general class of devices, where the reservoir cannot be adiabatically
eliminated and/or significant nonlinearities are present, $g,g_{R}\neq 0$.
Such a non-adiabatic $\gamma_{R}<\gamma$ regime is commonly found both in
polariton topolaser devices Klembt _et al._ (2018) and semiconductor laser
ones Bandres _et al._ (2018); Longhi _et al._ (2018): in the former case,
this is due to the long lifetime of the excitons feeding the
condensate, while in the latter case it is due to the slow dynamics of
carriers in the semiconductor gain medium. On the other hand, optical
nonlinearities due to repulsive polariton-polariton and polariton-reservoir
interactions $g,g_{R}>0$ are especially significant in polariton devices where
the ensuing blue-shifts $gn$ and $g_{R}n_{R}$ may exceed the loss rate
$\gamma$ and even approach the hopping amplitude $J$ Carusotto and Ciuti
(2013); Baboux _et al._ (2018); Bobrovska _et al._ (2018).
## III Steady-state lasing solution
As usual, the first step in the calculation of the fluctuation dynamics of a
laser device consists in characterizing its steady state. As long as losses
overcome the gain, the steady-state of the device is the electromagnetic
vacuum $\psi_{x,y}=0$ and $N_{x,y=1}=P/\gamma_{R}$. A non-trivial steady state
is instead reached when the gain starts exceeding losses.
The transition between the two regimes defines a threshold value
$P_{\textrm{th}}$ for the pump rate $P$, which can be calculated by
linearizing the equation of motion (3) for the field $\delta\psi_{x,y}$
around the vacuum solution. Thanks to the translational symmetry of our system
along the periodic $x$ direction, it is useful to move to Fourier space along
$x$ and, for each $k_{x}$ value, solve the one-dimensional eigenvalue problem:
$\omega\,\delta\psi_{k_{x},y}=\frac{i}{2}\Big{[}(R-2ig_{R})\frac{P}{\gamma_{R}}\delta_{y,1}-\gamma\Big{]}\delta\psi_{k_{x},y}-J\Big{[}\delta\psi_{k_{x},y+1}+\delta\psi_{k_{x},y-1}+2\cos(2\pi\theta y+k_{x})\delta\psi_{k_{x},y}\Big{]}\,.$ (8)
An example of the spectrum of the corresponding $n_{y}\times n_{y}$ matrix is
shown in Fig. 1(c) for parameters very close to the lasing threshold. The
precise position $P_{\textrm{th}}$ of the threshold can be determined as the
point at which the imaginary part of one of the eigenvalues turns positive,
meaning that the vacuum solution is no longer dynamically stable.
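This threshold search can be sketched in a few lines (a minimal illustration with $g_{R}=0$; the flux $\theta=1/4$ and the parameter values are choices of ours made for definiteness, and the function names are not from the text): one builds the $n_{y}\times n_{y}$ matrix of Eq. (8), scans $k_{x}$, and bisects $P$ until the largest imaginary part of the eigenvalues crosses zero.

```python
import numpy as np

def vacuum_spectrum(kx, P, ny=23, theta=0.25, J=1.0,
                    gamma=0.02, gammaR=1.0, R=1.0, gR=0.0):
    """Eigenfrequencies of the linearization around the vacuum, Eq. (8)."""
    y = np.arange(1, ny + 1)
    H = np.diag(-2 * J * np.cos(2 * np.pi * theta * y + kx) + 0j)
    H += np.diag(-J * np.ones(ny - 1), 1) + np.diag(-J * np.ones(ny - 1), -1)
    H[0, 0] += 0.5j * (R - 2j * gR) * P / gammaR   # gain on the y=1 site
    H -= 0.5j * gamma * np.eye(ny)                 # uniform losses
    return np.linalg.eigvals(H)

def max_growth(P, kxs=np.linspace(-np.pi, np.pi, 64, endpoint=False)):
    """Largest Im(omega) over kx: positive means the vacuum is unstable."""
    return max(vacuum_spectrum(kx, P).imag.max() for kx in kxs)

# bisect P between the single-resonator threshold gamma*gammaR/R and 2x it
lo, hi = 0.02, 0.04
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if max_growth(mid) < 0.0:
        lo = mid          # still below threshold
    else:
        hi = mid          # already lasing
P_th = 0.5 * (lo + hi)    # a bit above gamma*gammaR/R, as in the text
```

Since the gap modes have $\Lambda<1$, the resulting $P_{\textrm{th}}$ necessarily lies above the isolated-resonator value $\gamma\gamma_{R}/R$.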
For $P>P_{\textrm{th}}$ the system departs from the unstable vacuum solution
and, under suitable conditions to be discussed in what follows, it can reach a non-trivial, dynamically stable stationary state displaying a periodic
oscillation of the field at some frequency $\omega^{\textrm{Las}}$. Our
numerical study of this dynamics is carried out by solving the evolution
equations (3-4) in real time. This is done using a fourth-order Runge-Kutta
algorithm, starting from a small random complex amplitude on each $x,y$ site
to trigger the instability. In order to characterize the steady-state, we let
the evolution run for long enough times (on the order of $10^{6}$ time steps)
until a clean steady-state is reached. Typical lattice sizes used in our
simulations are $n_{x}=64$ and $n_{y}=23$ with periodic boundary conditions
along $x$. Accurate results are obtained with a typical time step on the order
of $dt=0.005/J$.
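A minimal Python sketch of such a real-time integration of Eqs. (3)-(4) is given below (on a smaller lattice than in the text and with illustrative parameter values of ours; the function names are not from the text). The pump is localized on the $y=1$ row, the lattice is periodic along $x$ and open along $y$, and the instability is seeded by a small random field.

```python
import numpy as np

def rhs(psi, N, P, J=1.0, theta=0.25, gamma=0.2, gammaR=0.5,
        R=1.0, g=0.0, gR=0.0):
    """Right-hand sides of Eqs. (3)-(4); psi and N have shape (nx, ny)."""
    ny = psi.shape[1]
    y = np.arange(1, ny + 1)
    hop = np.zeros_like(psi)
    hop[:, :-1] += psi[:, 1:]                                # psi_{x,y+1}
    hop[:, 1:] += psi[:, :-1]                                # psi_{x,y-1}
    hop += np.exp(-2j * np.pi * theta * y) * np.roll(psi, -1, axis=0)
    hop += np.exp(+2j * np.pi * theta * y) * np.roll(psi, +1, axis=0)
    onsite = g * np.abs(psi) ** 2 + gR * N + 0.5j * (R * N - gamma)
    dpsi = -1j * (onsite * psi - J * hop)
    pump = np.zeros(ny); pump[0] = P                         # pump on y=1 only
    dN = pump - (gammaR + R * np.abs(psi) ** 2) * N
    return dpsi, dN

def rk4_step(psi, N, dt, P):
    """One fourth-order Runge-Kutta step for the coupled fields."""
    a1, b1 = rhs(psi, N, P)
    a2, b2 = rhs(psi + 0.5 * dt * a1, N + 0.5 * dt * b1, P)
    a3, b3 = rhs(psi + 0.5 * dt * a2, N + 0.5 * dt * b2, P)
    a4, b4 = rhs(psi + dt * a3, N + dt * b3, P)
    return (psi + dt / 6 * (a1 + 2 * a2 + 2 * a3 + a4),
            N + dt / 6 * (b1 + 2 * b2 + 2 * b3 + b4))

rng = np.random.default_rng(0)
nx, ny, dt, P = 16, 8, 0.005, 0.4        # P chosen well above threshold
psi = 1e-3 * (rng.standard_normal((nx, ny)) + 1j * rng.standard_normal((nx, ny)))
N = np.zeros((nx, ny))
for _ in range(20000):                   # evolve up to t = 100/J
    psi, N = rk4_step(psi, N, dt, P)
```

After a transient, the intensity on the pumped edge grows out of the seed and saturates, signalling laser operation.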
Besides the numerical study, analytical arguments can be used to understand
the physics of the steady-state. Thanks to the translational invariance along
the $x$-axis, the monochromatically oscillating steady-state can be formally
written as:
$\psi^{\textrm{ss}}_{x,y}(t)=\psi^{0}_{x,y}e^{-i\omega^{\textrm{Las}}t}=\psi^{0}_{y}e^{-i\omega^{\textrm{Las}}t+ik_{x}^{\textrm{Las}}x}$ (9)
$N^{\textrm{ss}}_{x,y}(t)=N^{0}\delta_{y,1}\,,$ (10)
where the lasing frequency $\omega^{\textrm{Las}}$ is self-consistently chosen
by the system dynamics. Unless $P$ is very close to the threshold
$P_{\textrm{th}}$, the lasing wavevector $k_{x}^{\textrm{Las}}$ is randomly
selected within the range of $k_{x}$ modes for which the vacuum state is
dynamically unstable. This selection is triggered by external noise or by the
initial conditions imposed on the field Seclì _et al._ (2019). The global
phase of the oscillating field is randomly selected at each instance of laser
operation, but then it stays stable for macroscopically long times: this is a
characterizing feature of laser emission and is related to the spontaneous
breaking of the global $U(1)$ symmetry of Eq. (3) that occurs above threshold.
In Fig. 1(c), we illustrate how, for our pumping localized on an edge, the
lasing instability is stronger for the chiral edge states localized on the
$y=1$ pumped side than for the bulk modes that experience a reduced overlap
with the edge. Among the chiral edge states, the ones located around the
centre of the energy gap are the most localized in space and thus feel the
largest gain. Given the small but significant penetration of the edge states
into the lossy bulk, the threshold is pushed to a slightly higher pumping
value $P_{\textrm{th}}=1.142\gamma\gamma_{R}/R$ than the one $P_{\rm
th,1}=\gamma\gamma_{R}/R$ of an isolated resonator.
As long as the nonlinearities $g,g_{R}$ can be neglected, the effective gain is
equal for the negative and positive frequency edge modes as shown in Fig.
1(c), so the instability occurs with the same probability in each of the
chiral edge modes. This is perfectly consistent with our previous numerical
study Seclì _et al._ (2019). This symmetry that holds for $g,g_{R}=0$ is due
to the extended chiral $\mathcal{C}\mathcal{P}_{x}\mathcal{T}$ symmetry that
our system inherits from the one of the underlying HH model. Indeed, for the
conservative model, we have three discrete symmetries, and we review here
their action on a generic eigenstate in the form (9). To begin with, the
$\mathcal{P}_{y}\mathcal{T}$ symmetry requires that also
$\left[\mathcal{P}_{y}\mathcal{T}\psi^{\textrm{ss}}\right]_{x,y}(t)=e^{2\pi i\theta(n_{y}+1)x}\left(\psi^{0}_{x,n_{y}-y+1}\right)^{*}e^{-i\omega^{\textrm{Las}}t}$ (11)
is an eigensolution of the same frequency and with momentum
$2\pi\theta(n_{y}+1)-k_{x}$; importantly, if $\psi^{\textrm{ss}}_{x,y}$ were
localized on the $y=1$ side, the transformed state would live on the $y=n_{y}$
edge. Physically, this symmetry corresponds to a reflection plus time-reversal
symmetry of the cyclotron orbits in a Hall bar, connecting edge modes located
in the same energy gap and living on different edges of the system.
Analogously, one would define the $\mathcal{P}_{x}\mathcal{T}$ symmetry as
$\left[\mathcal{P}_{x}\mathcal{T}\psi^{\textrm{ss}}\right]_{x,y}(t)=\left(\psi^{0}_{n_{x}-x+1,y}\right)^{*}e^{-i\omega^{\textrm{Las}}t}\,.$ (12)
However, since $\psi^{0}_{y}$ is (proportional to) a real vector, this mapping corresponds to a simple multiplication by a phase. (Of course, the different nature of the $\mathcal{P}_{x}\mathcal{T}$ and $\mathcal{P}_{y}\mathcal{T}$ transformations in this case originates from the fact that the system is translationally invariant along $x$.) Finally, the chiral symmetry
$\left[\mathcal{C}\psi^{\textrm{ss}}\right]_{x,y}(t)=(-1)^{x+y}\psi^{0}_{x,y}e^{+i\omega^{\textrm{Las}}t},$ (13)
defines an eigenstate of opposite frequency $-\omega^{\textrm{Las}}$ and
shifted wavevector $k_{x}+\pi$ Repellin and Goldman (2019). This
transformation explains the overall symmetric structure of the HH spectrum
with respect to the zero-energy point. Evidently, the $\mathcal{C}$ symmetry
can only be defined on a lattice and does not exist in a continuum geometry
where all Landau levels have positive energy.
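These spectral symmetries of the conservative HH strip are easy to verify numerically (a sketch assuming $\theta=1/4$, $n_{y}=23$ and $J=1$ for illustration; the helper name is ours). The chiral symmetry implies that the spectrum at $k_{x}+\pi$ is the negative of the spectrum at $k_{x}$, while for $\theta=1/4$, $n_{y}=23$ one has $2\pi\theta(n_{y}+1)\equiv 0$ (mod $2\pi$), so the $\mathcal{P}_{y}\mathcal{T}$ symmetry maps the spectrum at $k_{x}$ onto the one at $-k_{x}$.

```python
import numpy as np

def hh_strip(kx, ny=23, theta=0.25, J=1.0):
    """Fixed-k_x Harper-Hofstadter strip (Landau gauge, open along y)."""
    y = np.arange(1, ny + 1)
    H = np.diag(-2 * J * np.cos(2 * np.pi * theta * y + kx))
    return H + np.diag(-J * np.ones(ny - 1), 1) + np.diag(-J * np.ones(ny - 1), -1)

kx = -0.982
e0 = np.linalg.eigvalsh(hh_strip(kx))
# chiral symmetry C: the spectrum at kx + pi is minus the spectrum at kx
e_chiral = np.linalg.eigvalsh(hh_strip(kx + np.pi))
# P_y T symmetry: here 2*pi*theta*(ny+1) = 12*pi = 0 mod 2*pi, so the
# spectrum at -kx coincides with the one at kx
e_pyt = np.linalg.eigvalsh(hh_strip(-kx))
```

The chiral relation follows from the gauge transformation $\psi_{y}\to(-1)^{y}\psi_{y}$, which flips the sign of the hopping while the $\pi$ shift of $k_{x}$ flips the sign of the on-site term.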
Coming back to the topolaser, the presence of gain and losses breaks the above
symmetries; nonetheless, given the steady-state lasing state (9), also
$\left[\mathcal{C}\mathcal{P}_{x}\mathcal{T}\psi^{\textrm{ss}}\right]_{x,y}(t)=(-1)^{x+y}(\psi^{0}_{n_{x}-x+1,y})^{*}e^{+i\omega^{\textrm{Las}}t}$
(14)
is a solution of wavevector $k_{x}+\pi$, frequency $-\omega^{\textrm{Las}}$
and localized on the same edge. Notice that the lasing state $\psi^{0}_{y}$
cannot be taken any more with real entries (see Appendix A), that is why the
action of $\mathcal{P}_{x}\mathcal{T}$ is non-trivial. For non-vanishing
nonlinearities $g,g_{R}\neq 0$, this $\mathcal{C}\mathcal{P}_{x}\mathcal{T}$
symmetry breaks down and the lasing states in the two topological energy gaps
have different properties. For instance, if $g_{R}>0$, ($g_{R}<0$) the lasing
threshold is slightly lower for the positive (negative) energy edge modes than
for the negative (positive) energy ones Longhi _et al._ (2018).
## IV Collective excitations of class-A topological lasers
Figure 2: Dispersion of the collective excitation modes on top of a class-A
topological laser. The left/right panels show the real/imaginary part of the
Bogoliubov dispersion for a system of size $n_{y}=23$ that is lasing on the
maximally localized mode at $k_{x}^{\textrm{Las}}=-0.982$ for which gain is
strongest. Black (red) dots indicate the results of the full 2D model (1D
effective theory). G (resp. A) in the right panel indicate the Goldstone
(resp. Amplitude) branches. System parameters: $\gamma=0.02J$, adiabatic
regime $\gamma_{R}/\gamma=+\infty$, $P/P_{\rm th,1}=2$, $g=g_{R}=0$.
After having identified in the previous Section the steady-state lasing state,
we now proceed with the investigation of its collective excitations, namely of
the linearized dynamics around the steady-state. This study is the microscopic
complement to the statistical study of the coherence properties of the
emission Amelio and Carusotto (2020), and it provides crucial insight into the
dynamical stability of the lasing state. After presenting the results of a
full numerical calculation, we will develop a deeper understanding of the
physical features by means of an effective 1D model for the edge state
dynamics. In doing so, a major focus will be put on the soft Goldstone branch
corresponding to the spontaneously broken $U(1)$ symmetry: its dependence on
the curvature of the edge mode dispersion and on its spatial localization on
the edge of the physical lattice will be highlighted. In this Section, we
start our study from the simplest case of class-A devices displaying a fast
reservoir $\gamma_{R}/\gamma\gg 1$ and no optical nonlinearities $g=g_{R}=0$,
postponing the more general analysis to the next Sec. V.
### IV.1 2D Bogoliubov theory
As usual in the Bogoliubov approach Wouters and Carusotto (2007), the first
step of the calculation of the collective modes is to accurately determine the
steady-state in the form (9). As it was discussed in the previous Section,
this can be done by numerically simulating the evolution equation (6) with a
suitable Runge-Kutta technique. The lasing frequency
$\omega^{\textrm{Las}}$ and wavevector $k_{x}^{\textrm{Las}}$ are obtained
by temporal and spatial Fourier transform of the steady-state field amplitude
on the $y=1$ edge.
Then, one has to linearize the equations of motion around the steady-state
according to the ansatz
$\psi_{x,y}(t)=[\psi^{0}_{y}+\delta\psi_{x,y}(t)]\,e^{-i\omega^{\textrm{Las}}t+ik_{x}^{\textrm{Las}}x}.$
(15)
Thanks to the translational invariance, the collective modes are classified by
the $x$ component of the wavevector $k_{x}$. One can thus switch to Fourier
space along $x$ and, for each value of $k_{x}$, one can write a system of
linear equations for the corresponding components of the field
$\delta\psi_{k_{x},y}$ and $\delta\psi_{k_{x},y}^{*}$. The eigenmodes of this
linearized evolution are obtained from the $2n_{y}\times 2n_{y}$ linear
problem determined by
$\omega^{\rm Bog}\,\delta\psi_{k,y}=([\mathbf{H}-\omega^{\textrm{Las}}\mathbf{I}]\,\delta\psi)_{k,y}+\mathbf{D}_{y}\,\delta\psi_{k,y}+\mathbf{\tilde{D}}_{y}\,\delta\psi_{-k,y}^{*}$ (16)
and the complex conjugate equation. Here, both the wavevector
$k=k_{x}-k_{x}^{\textrm{Las}}$ and the frequency $\omega^{\rm Bog}$ are
measured with respect to the lasing ones, $\mathbf{I}$ is the $n_{y}\times
n_{y}$ identity matrix, and we have defined the short-hands
$\mathbf{D}_{y}=\frac{i}{2}\Big{(}\frac{\beta P\delta_{y,1}}{1+\beta|\psi_{1}^{0}|^{2}}-\frac{\beta^{2}P\delta_{y,1}|\psi_{1}^{0}|^{2}}{(1+\beta|\psi_{1}^{0}|^{2})^{2}}-\gamma\Big{)}$ (17)
$\mathbf{\tilde{D}}_{y}=-\frac{i}{2}\frac{\beta^{2}P\delta_{y,1}(\psi_{1}^{0})^{2}}{(1+\beta|\psi_{1}^{0}|^{2})^{2}}\,,$ (18)
where $\psi_{1}^{0}$ is the component of the steady-state on the edge site
$y=1$ that is endowed with gain. Note that the diagonal block $\mathbf{H}$
couples neighbouring sites along $y$, so that in a given $k$ block all
$\delta\psi_{k,:}$ are linearly coupled to $\delta\psi_{-k,:}^{*}$, where the
symbol $:$ indicates a vector with indices $y=1,...,n_{y}$. The problem of
determining the collective excitation modes then amounts to the numerical
diagonalization of a $2n_{y}\times 2n_{y}$ Bogoliubov matrix for each $k_{x}$.
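The full procedure — relaxing Eq. (6) at fixed $k_{x}$ to obtain $\psi^{0}_{y}$ and $\omega^{\textrm{Las}}$, then assembling and diagonalizing the Bogoliubov matrix — can be sketched as follows for the $k=0$ block ($\theta=1/4$ and the remaining parameter values are illustrative choices of ours, not taken from the figures). A stringent check of such an implementation is that the spontaneously broken $U(1)$ symmetry guarantees an exact zero eigenvalue at $k=0$, with eigenvector $(\psi^{0},-\psi^{0\,*})$.

```python
import numpy as np

ny, theta, J = 23, 0.25, 1.0
gamma, beta, P = 0.2, 1.0, 0.4          # class-A parameters, P above threshold
kL = -0.982

y = np.arange(1, ny + 1)
H = np.diag(-2 * J * np.cos(2 * np.pi * theta * y + kL))
H += np.diag(-J * np.ones(ny - 1), 1) + np.diag(-J * np.ones(ny - 1), -1)

def rhs(psi):
    """Class-A equation (6) restricted to the k_x = kL sector."""
    gain = np.zeros(ny)
    gain[0] = beta * P / (1 + beta * np.abs(psi[0]) ** 2)
    return -1j * (H @ psi + 0.5j * (gain - gamma) * psi)

def step(psi, dt=0.02):
    k1 = rhs(psi); k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2); k4 = rhs(psi + dt * k3)
    return psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# relax to the lasing state, seeding with the most edge-localized mode
eps, phi = np.linalg.eigh(H)
psi = 0.1 * phi[:, np.argmax(np.abs(phi[0]) ** 2)].astype(complex)
for _ in range(30000):
    psi = step(psi)
old, psi = psi, step(psi)
wLas = np.angle(old[0] / psi[0]) / 0.02   # psi(t+dt) = exp(-i w dt) psi(t)

# Bogoliubov blocks of Eqs. (16)-(18) at k = 0
s = 1 + beta * np.abs(psi[0]) ** 2
D = -0.5j * gamma * np.ones(ny, complex)
D[0] += 0.5j * beta * P * (1.0 / s - beta * np.abs(psi[0]) ** 2 / s ** 2)
Dt = np.zeros(ny, complex)
Dt[0] = -0.5j * beta ** 2 * P * psi[0] ** 2 / s ** 2
A = H - wLas * np.eye(ny) + np.diag(D)
M = np.block([[A, np.diag(Dt)],
              [-np.conj(np.diag(Dt)), -np.conj(A)]])
w = np.linalg.eigvals(M)
```

For a well-converged steady state, one eigenvalue sits at zero (the Goldstone mode) and all the others have negative imaginary part, consistent with the dynamical stability of class-A devices discussed below.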
A typical example of collective excitation spectrum is displayed by the black
dots in Fig. 2. As expected, it displays the characteristic features of non-
equilibrium condensates Wouters and Carusotto (2007). In the real part of the
spectrum (left panel), the excitations around the lasing mode (small
$\omega^{\rm Bog}$ and $k$) exhibit a zone of adhesion. In this region, the
Bogoliubov branch follows the HH edge mode, properly shifted according to
$\omega^{\textrm{Las}}$ and $k_{x}^{\textrm{Las}}$. In the imaginary part of
the spectrum (right panel), we recognize instead the typical splitting between
the Goldstone and the amplitude branches, respectively related to phase and
intensity fluctuations. As $k\to 0$ the dispersion $\omega^{\rm Bog}_{+}(k)$
of the Goldstone mode tends to zero in both real and imaginary parts as a
consequence of the spontaneous breaking of the $U(1)$ phase symmetry. The
amplitude mode has instead a finite negative imaginary part
$\mathrm{Im}(\omega^{\rm Bog}_{-}(k\to 0))=-\Gamma$ corresponding to the
relaxation rate of intensity fluctuations
$\Gamma=\gamma(1-P_{\textrm{th}}/P)<\gamma$ (see the 1D model of the next
paragraph for the derivation of this formula): for $P\gg P_{\textrm{th}}$, $\Gamma$ recovers the bare decay rate $\gamma$; closer to the threshold it is smaller than $\gamma$, and it tends to zero right above the threshold $P_{\textrm{th}}$.
One of the peculiarities of our HH laser is the presence of another edge mode
with opposite chirality, living on the same edge in the other energy gap at
opposite wavevector. In the excitation spectrum, this additional edge mode
corresponds to the maximum of the imaginary part around $k\simeq\pm\pi$. Since it is also localized on the same edge, this opposite-chirality mode also benefits from gain and thus displays a slower decay rate $\Gamma$ than the bulk modes. The latter have in fact a negligible overlap with the gain material and thus decay at the bare loss rate $\gamma$.
Altogether, these numerical calculations confirm the dynamical stability of
topolaser devices in the regime where the gain medium adiabatically follows
the field dynamics and no other optical nonlinearity is present besides gain
saturation. Getting analytical insight into the physics underlying these
numerical results will be the subject of the next Subsection.
### IV.2 Effective 1D model
We now proceed to develop an effective 1D model that is able to provide
analytical insight into the collective excitation spectra numerically
calculated in the previous Subsection using the full 2D theory. This method
relies on the assumption that the lasing wavefunction closely follows the
corresponding eigenstate of the underlying conservative HH model. First,
notice that the translational invariance along $x$ allows writing the steady-state as a plane wave of quasi-momentum $k_{x}^{\textrm{Las}}$ also in the nonlinear case. Then, one formulates the ansatz
$\psi^{\textrm{ss}}_{x,y}(t)\simeq\psi^{\textrm{ss}}e^{ik_{x}^{\textrm{Las}}x}\phi_{y}(k_{x}^{\textrm{Las}})\,e^{-i\omega^{\textrm{Las}}t}$
(19)
where $\phi_{y}(k_{x})$ is the transverse wavefunction of the edge mode at
wavevector $k_{x}$. This writing is expected to be accurate in the $\gamma\ll
J$ limit where the band gap is much wider than the frequency scale of the
dynamics. A brief discussion of the first corrections in $\gamma/J$ is given
in the Appendix A.
We will also consider small fluctuations on top of this solution, in the form
$\psi_{x,y}(t)=\int\\!\frac{dk}{2\pi}\,e^{i(k_{x}^{\textrm{Las}}+k)x}\,\tilde{\psi}(k,t)\,\phi_{y}(k+k_{x}^{\textrm{Las}})\,e^{-i\omega^{\textrm{Las}}t}$
(20)
where
$\tilde{\psi}(k,t)=2\pi\,\delta(k)\,\psi^{\textrm{ss}}+\delta\tilde{\psi}(k,t)$,
with $\delta\tilde{\psi}(k,t)$ small. The goal of this subsection is to
determine an equation of motion for the 1D wavefunction
$\psi(x,t)\equiv\int\\!\frac{dk}{2\pi}\,e^{i(k_{x}^{\textrm{Las}}+k)x}\,\tilde{\psi}(k,t)\,e^{-i\omega^{\textrm{Las}}t}.$
(21)
As a first step, we are now going to determine the amplitude
$\psi^{\textrm{ss}}$ of the plane wave wavefunction at the steady-state. To
this purpose, we inject the ansatz (19) into the evolution equation, and then
we overlap the result with $\phi_{y}(k_{x}^{\textrm{Las}})$. (Obviously, in the general case $\gamma\sim J$ the non-uniform gain couples the edge mode to the bulk modes of equal $k_{x}$, so that it is no longer sufficient to overlap with the edge modes only; this coupling becomes negligible for $\gamma\ll J$.)
This gives
$0=\epsilon(k^{\textrm{Las}}_{x})-\omega^{\textrm{Las}}+\frac{i}{2}\left[\frac{\beta P\Lambda(k^{\textrm{Las}}_{x})}{1+\beta\,\Lambda(k^{\textrm{Las}}_{x})\,|\psi^{\textrm{ss}}|^{2}}-\gamma\right]$ (22)
where $\epsilon(k_{x})$ is the edge mode dispersion and
$\Lambda(k_{x})=|\phi_{y=1}(k_{x})|^{2}$ is the edge localization function
that quantifies the overlap of the HH edge states with the $y=1$ edge.
Splitting the real and imaginary parts of this equation, we obtain
$\omega^{\textrm{Las}}=\epsilon(k^{\textrm{Las}}_{x})$ (23)
$|\psi^{\textrm{ss}}|^{2}=\frac{1}{\beta\Lambda(k^{\textrm{Las}}_{x})}\,\left(P/P_{\mathrm{th}}-1\right)\,.$ (24)
The lasing frequency is set by the bare dispersion of the edge state, whereas
the lasing threshold
$P_{\mathrm{th}}=\frac{\gamma}{{\beta\Lambda(k_{x}^{\textrm{Las}})}}$ (25)
depends on the overlap of the mode with the gain region via the
$\Lambda(k_{x})$ localization factor. As one can see from the red line in Fig. 1(b), for the most localized modes this factor can reach values close to
unity.
As the next step, we try to write the equation of motion for $\psi(x,t)$ below
threshold. Since the equation of motion in this regime is linear, the
different Fourier components decouple and one can write
$i\partial_{t}\psi(x,t)=\left[\epsilon(\hat{k}_{x})+\frac{i}{2}\left(\Lambda(\hat{k}_{x})\beta P-\gamma\right)\right]\psi(x,t)\,$ (26)
where $\hat{k}_{x}=-i\partial_{x}$ is the usual momentum operator and with the
only assumption that the coupling to bulk modes is negligible. The
conservative part of the dynamics follows the HH edge dispersion and the
effective gain strength has a $k$-dependence given by the edge localization
function $\Lambda$: the stronger the localization, the stronger the effective
gain. This equation being linear, its collective excitation modes are
trivially given by
$\omega^{\rm Bog}(k)=\epsilon(k_{x})-\omega^{\textrm{Las}}+\frac{i}{2}\Big{(}\Lambda(k_{x})\beta P-\gamma\Big{)}\,,$ (27)
where $k$ is the momentum with respect to the lasing one, $k_{x}\equiv
k_{x}^{\textrm{Las}}+k$. This effective 1D prediction for the dispersion
around the vacuum state is plotted as a red line in Fig. 1(c): as long as one
focuses on the edge modes, it excellently recovers the full 2D calculation
shown by the black lines. In particular, the 1D model provides a reliable
prediction for the mode with the strongest gain, which is going to lase first.
Of course, the full 2D calculation also includes the bulk bands that are not
captured by the 1D model: however, given their smaller overlap with gain, the
imaginary part of their frequency is much larger and negative.
While the linear dynamical equation (26) is exact in the $\gamma/J\to 0$
limit, extending it to the nonlinear regime where gain saturation is important
is made non-trivial by the simultaneous $x$- and $k_{x}$-dependence of the
gain term: gain saturation is in fact a spatially local effect, while the
$k_{x}$ dependence of gain via the edge localization function is a momentum-
space effect. In the spirit of our previous discussion, one can generalize Eq.
(26) and write down
$i\partial_{t}\psi(x)=\left[\epsilon(\hat{k}_{x})+\frac{i}{2}\bigg{(}\Lambda(\hat{k}_{x})\frac{\beta P}{1+\beta|\psi(x)|^{2}}-\gamma\bigg{)}\right]\psi(x).$ (28)
Notice that the edge localization and the saturation terms do not commute and
the order has been chosen to get consistent results with the 2D theory.
While we provide no rigorous derivation of this formula (an attempted derivation would involve the Bogoliubov ansatz $\delta\psi_{x,y}=u_{k}e^{ikx}\phi_{y}(k)+v_{k}e^{-ikx}\phi_{y}(-k)$; the corresponding Bogoliubov problem would not be closed since in general $\phi_{y}(k)\neq\phi_{y}(-k)$), we show that the Bogoliubov edge eigenenergies are accurately recovered and conveniently interpreted within this approach.
Indeed, the Bogoliubov equations for Eq. (28) can be cast in the usual
$2\times 2$ matrix form
$\omega^{\rm Bog}(k)\begin{pmatrix}u_{k}\\ v_{k}\end{pmatrix}=\begin{bmatrix}e(k)+\frac{i\gamma}{2}\left(\lambda(k)-1\right)-\frac{i}{2}\Gamma\lambda(k)&-\frac{i}{2}\Gamma\lambda(k)\\ -\frac{i}{2}\Gamma\lambda(-k)&-e(-k)+\frac{i\gamma}{2}\left(\lambda(-k)-1\right)-\frac{i}{2}\Gamma\lambda(-k)\end{bmatrix}\begin{pmatrix}u_{k}\\ v_{k}\end{pmatrix}$ (29)
where we have defined the shorthands
$e(k)=\epsilon(k_{x}^{\textrm{Las}}+k)-\omega^{\textrm{Las}}$ and
$\lambda(k)=\Lambda(k_{x}^{\textrm{Las}}+k)/\Lambda(k_{x}^{\textrm{Las}})$.
Expanding the HH edge state dispersion
$\epsilon(k)-\omega^{\textrm{Las}}\simeq v_{g}k+\frac{k^{2}}{2m_{*}}$, it is
immediate to see that the group velocity (as well as the higher odd terms of
the dispersion) gives a diagonal term that contributes as a constant to the
Bogoliubov dispersion. The $\lambda(k)$ coefficient is of geometric nature and
accounts for the $k$-dependence of the edge mode localization. The Bogoliubov
spectrum that results from the diagonalization of this matrix consists of two
branches $\omega^{\rm Bog}_{\pm}(k)$ and is plotted as red lines in Fig. 2.
The agreement with the full 2D numerical calculation is excellent: both the
Goldstone and amplitude branches are quantitatively recovered by $\omega^{\rm
Bog}_{+}(k)$ and $\omega^{\rm Bog}_{-}(k)$ respectively, as well as the
dispersion of the edge mode with opposite chirality. Of course, the bulk bands
are not included in the 1D model.
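The structure of Eq. (29) can be made concrete in a few lines (a sketch with $\theta=1/4$ and illustrative values of $\gamma$ and $\Gamma$; the helper names are ours). In particular, at $k=0$ one has $e(0)=0$ and $\lambda(0)=1$, so the matrix reduces to $-\frac{i\Gamma}{2}\big(\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\big)$, whose eigenvalues are exactly $0$ (Goldstone mode) and $-i\Gamma$ (amplitude mode):

```python
import numpy as np

def edge_mode(kx, ny=23, theta=0.25, J=1.0):
    """Energy and edge weight |phi_{y=1}|^2 of the most localized mode."""
    y = np.arange(1, ny + 1)
    H = np.diag(-2 * J * np.cos(2 * np.pi * theta * y + kx))
    H += np.diag(-J * np.ones(ny - 1), 1) + np.diag(-J * np.ones(ny - 1), -1)
    eps, phi = np.linalg.eigh(H)
    j = np.argmax(np.abs(phi[0]) ** 2)
    return eps[j], np.abs(phi[0, j]) ** 2

def bog_1d(k, kL=-0.982, gamma=0.02, Gamma=0.01):
    """Eigenvalues of the 2x2 Bogoliubov matrix of Eq. (29)."""
    eL, lamL = edge_mode(kL)
    ep, lp = edge_mode(kL + k)
    em, lm = edge_mode(kL - k)
    ep, em, lp, lm = ep - eL, em - eL, lp / lamL, lm / lamL
    M = np.array(
        [[ep + 0.5j * gamma * (lp - 1) - 0.5j * Gamma * lp, -0.5j * Gamma * lp],
         [-0.5j * Gamma * lm, -em + 0.5j * gamma * (lm - 1) - 0.5j * Gamma * lm]])
    return np.linalg.eigvals(M)
```

Scanning `bog_1d` over a grid of $k$ values reproduces the red lines of Fig. 2 for the corresponding parameters.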
Figure 3: Scaling of the Goldstone branch at small k. The red, green and blue
lines show the Goldstone and the amplitude branches, calculated with the full
2D model for different hopping strengths and pumping parameters, as detailed
in the legend. The thick black dashed line is a quadratic fit of
$-\mathrm{Im}(\omega^{\rm Bog}(k))$, on top of which all the three dispersion
relations collapse at small $k$. The thin black dotted lines correspond to the
$(k^{2}/2m_{*})^{2}/{\Gamma}$ scaling predicted by the 1D model. The magenta
dash-dotted line indicates the relaxation rate at which Goldstone and
amplitude branches stick for ${P}/P_{\rm th,1}=2$. Other parameters:
$k_{x}^{\textrm{Las}}=-0.982$, adiabatic regime $\gamma_{R}/\gamma=\infty$,
$g=g_{R}=0$.
Among all Bogoliubov modes, the ones with the slowest relaxation rate play a
very important role in determining the long-distance, long-time behaviour of
the spatio-temporal coherence properties of the laser emission (Amelio and
Carusotto, 2020). If the localization of the edge mode was uniform in $k$,
$\lambda(k)=1$ (as assumed, for instance, in Zapletal _et al._ (2020)), Eq.
(29) would predict a quartic $\sim k^{4}$ behaviour of the decay rate of the
Goldstone branch at small $k$, proportional to the curvature of the edge mode
or, equivalently, to the inverse of the effective mass $1/m_{*}$,
$\mathrm{Im}(\omega^{\rm Bog}_{+}(k))\simeq-{(k^{2}/2m_{*})^{2}}/{\Gamma}$.
Upon closer inspection of our theory, however, one notices that the
$k$-dependence of the localization function entails a quadratic behaviour
$\mathrm{Im}(\omega^{\rm Bog}_{+}(k))\simeq-\frac{\gamma}{2}\frac{\lambda^{\prime\prime}(0)}{2}k^{2}$ (30)
for $k\to 0$, which has geometric origin and is independent of $J/\gamma$ and,
thus, of the effective mass $m_{*}$.
All these non-trivial predictions of the 1D model are well confirmed by the
exact 2D dispersion, plotted on a magnified scale in Fig. 3 (though there is
some quantitative discrepancy in the coefficient of $k^{2}$, not shown). For a
few different values of $J/\gamma$ and of the pumping $P$, the three different
dispersions fall on the same curve at very small $k$. At intermediate $k$, the
curvature of the HH edge mode starts playing a crucial role and, for
sufficiently large $J/\gamma$, the imaginary part matches the
$\sim(k^{2}/2m_{*})^{2}/{\Gamma}$ behaviour. For even larger $k$, the
imaginary parts of the Goldstone and the amplitude branches stick at a value
$-\Gamma/2$ determined by the relaxation rate of small wavevector density
fluctuations $\Gamma=\gamma(1-P_{\textrm{th}}/P)$.
These results show how the effective 1D model is able to reproduce the
qualitative features of the edge Bogoliubov modes and, in particular, to
explain the $\sim k^{2}$ behaviour that is crucial for the topological
enhancement of coherence in large lattices Amelio and Carusotto (2020). As a
great advantage, the effective one-dimensional theory introduced here for the
specific case of a HH model can be straightforwardly adapted to topological
lasers built on top of different topological models, e.g. the Haldane model
considered in Zapletal _et al._ (2020), by just plugging in the suitable
forms of the edge mode dispersion $\epsilon(k_{x})$ and of the localization
function $\Lambda(k_{x})$.
## V Dynamical stability of general topological laser devices
In the previous Sections, we have studied the dispersion of the collective
excitations in the idealized case of a fast gain medium and no optical
nonlinearity besides gain saturation. In particular, we have shown that no
dynamical instability occurs in this regime and the only slow dynamics is the
one of the Goldstone mode, intrinsically related to the U(1) symmetry breaking
mechanism of laser operation. In this Section, we extend our study to a wider
class of devices where the carrier dynamics in the gain medium has a slower
timescale than the bare dynamics of the lasing mode and/or the lattice
resonators and/or the gain medium display an intensity-dependent refractive
index. Our investigation extends the pioneering analysis carried out in Longhi
_et al._ (2018) and provides physical insight into the different microscopic
processes underlying the instabilities predicted there.
As in the previous Sections, our analysis will make a combined use of full
two-dimensional calculations, based on a linearized theory that now includes
the reservoir dynamics into the Bogoliubov formalism as summarized in Appendix
B, and of an effective one-dimensional theory that generalizes Eq. (28) to the
more complex configurations under investigation here. This one-dimensional
theory is based on the following pair of evolution equations for the lasing
field and the reservoir density
$i\partial_{t}\psi(x)=\left[\epsilon(\hat{k}_{x})+g|\psi(x)|^{2}+\frac{i}{2}\bigg{(}\Lambda(\hat{k}_{x})(R-2ig_{R})N(x)-\gamma\bigg{)}\right]\psi(x),$ (31)
$\frac{\partial N(x)}{\partial t}=P-(\gamma_{R}+R|\psi(x)|^{2})N(x)\,.$ (32)
Figure 4: Left panel: imaginary part of the elementary excitation spectrum
with a slow reservoir $\gamma_{R}/\gamma=2.5$. Black (resp. red) lines stand
for results computed with the full 2D (resp. 1D effective) model. Right panel:
comparison of the small $k$ scaling of the Goldstone branch for different
values of the reservoir speed, calculated with the 2D theory. Parameters:
$\gamma=0.2J$, $P/P_{\rm th,1}=2$, $g=g_{R}=0$, $k_{x}^{\textrm{Las}}=-0.982$,
$n_{y}=23$.
### V.1 Slow carrier dynamics
In this first subsection, we focus on the effect of a slow carrier dynamics
$\gamma_{R}\lesssim\gamma$ on the dynamical stability of monochromatic laser
operation. For simplicity, we assume that no other nonlinearity is present
besides gain saturation, that is, we set the intensity-dependent refractive
index to zero, $g=g_{R}=0$.
The imaginary part of the dispersion of the collective excitation modes is
shown in Fig. 4(a) for the case of a moderately slow carrier dynamics with
$\gamma_{R}/\gamma$ of order $1$. The black dots show the result of a full 2D
calculation of the collective excitation modes. Quite interestingly, also in
this case the 1D model (red lines) is able to recover the full 2D calculation
in a remarkably accurate way. While for relatively large wavevectors $k$, the
overall shape of the Goldstone and amplitude branches is deeply changed due to
the hybridization between edge and reservoir modes, the dispersion of the
slowest excitation modes at low $k$ maintains the same structure.
This physics is displayed in better detail in Fig. 4(b). Independently of the
value of $\gamma_{R}$, down to the smallest values of $\gamma_{R}/\gamma$, for sufficiently small $k$ the Goldstone branch becomes slow compared to the reservoir dynamics: in this window of small $k$ values, the reservoir can thus be adiabatically eliminated, and the dispersion recovers a $\gamma_{R}$- and $m_{*}$-independent, quadratic $\sim k^{2}$ dependence as discussed in the
previous Section. Of course, the amplitude branch and the higher-$k$ part of
the spectrum depend instead strongly on $\gamma_{R}$.
As a further consequence of the reduced value of $\gamma_{R}/\gamma$, in Fig.
4(a) one can see how the counter-propagating edge mode of wavevector $k=\pi$
with opposite chirality gets closer to the instability threshold. In the
adiabatic limit discussed in the previous Section, we saw that its imaginary
part was (in absolute value) equal to $\Gamma/2$, that is $\Gamma/J=0.0858$
for the parameters of Fig. 4(a). This value is much larger than the
numerically calculated value $-\mathrm{Im}[\omega^{\rm Bog}(\pi)]\simeq 0.002J$.
Physically, this reduced value can be understood in terms of the high
frequency $\approx 2\omega^{\textrm{Las}}$ at which the lasing edge mode (of
frequency $\omega^{\textrm{Las}}$ with respect to the bare frequency of the
sites) beats with the counter-propagating mode (of frequency approximately
$-\omega^{\textrm{Las}}$), far higher than the carrier relaxation rate
$\gamma_{R}$. As a result, the fast oscillating interferences are ineffective
in quenching the effective gain experienced by the counter-propagating mode.
Using the linearized form of the one-dimensional equation of motion, one sees
that the imaginary part of the counter-propagating excitations scales as
$\mathrm{Im}(\omega^{\rm Bog}(\pi))=-\frac{\alpha(1+\alpha)}{2}\left(\frac{\gamma_{R}}{2\omega^{\textrm{Las}}}\right)^{2}\,\gamma$ (33)
with the shorthand $\alpha=P/P_{\textrm{th}}-1$. Since $\omega^{\rm Las}$ is
typically of order $J$, the relaxation rate (33) of the counter-propagating
mode turns out to be much smaller not only than the bare cavity decay
$\gamma$, but also than the carrier relaxation rate $\gamma_{R}$.
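To make this separation of scales concrete, the scaling (33) can be evaluated numerically. The parameter values below follow Fig. 4(a) ($\gamma=0.2J$, $P/P_{\rm th}=2$, $\gamma_{R}=2.5\gamma$), while the lasing frequency $\omega^{\rm Las}=J$ is only an illustrative assumption ("of order $J$") not fixed by the text:

```python
# Illustrative evaluation of the scaling in Eq. (33) for the relaxation
# rate of the counter-propagating mode.  omega_las = 1.0*J is an assumed
# value; the other parameters follow Fig. 4(a).
J = 1.0
gamma = 0.2 * J          # bare cavity decay
gamma_R = 2.5 * gamma    # carrier relaxation rate
alpha = 2.0 - 1.0        # alpha = P/P_th - 1
omega_las = 1.0 * J      # assumed lasing frequency (of order J)

im_omega = -alpha * (1 + alpha) / 2 * (gamma_R / (2 * omega_las))**2 * gamma
print(f"Im[omega_Bog(pi)] = {im_omega:.4f} J")  # -> Im[omega_Bog(pi)] = -0.0125 J

# the rate is much smaller than both the cavity and the carrier decay
assert abs(im_omega) < gamma < gamma_R
```

Even with these rough numbers, $|\mathrm{Im}\,\omega^{\rm Bog}(\pi)|$ falls more than an order of magnitude below both $\gamma$ and $\gamma_{R}$, in line with the hierarchy discussed above.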
Even though from a purely mathematical perspective this extremely slow decay
time is not harmful to the dynamical stability of topolaser operation, in
practice it may be problematic for applications, since it may dramatically
slow down the process of selecting one chiral edge mode over the other. In the
transient, the simultaneous presence of oscillations in both chiral modes
results in a multimode emission or, from a different point of view, a fast
modulation of the laser amplitude at a frequency $2\omega^{\rm Las}$. Beyond
this, in Sec. V.4, we will see how the small imaginary part of the
counter-propagating mode makes it susceptible to becoming dynamically
unstable once nonlinearities are included in the model.
Figure 5: Imaginary part of the elementary excitation spectrum with small
nonlinearities $g|{\psi^{0}_{1}}|^{2},g_{R}N^{0}\ll J$ in the adiabatic
regime, calculated with the 1D effective theory. Black (resp. blue)
corresponds to unstable $g_{\textrm{eff}}/m_{*}<0$ (resp. stable
$g_{\textrm{eff}}/m_{*}>0$) regime. Parameters: $k_{x}^{\textrm{Las}}=-0.954$,
$\gamma=0.2J$, $P/P_{\rm th,1}=2$, $g/\beta=0.05J$, $g_{R}=0$. For these
parameters, $g\Lambda(k^{\textrm{Las}})|\psi^{ss}_{x}|^{2}=0.037J\ll J$.
### V.2 Optical nonlinearity
We now investigate the effect on the dynamical stability of a relatively small
optical nonlinearity such that $g|\psi^{0}_{1}|^{2},g_{R}N^{0}\ll J$. Under
this condition, the transverse profile of the lasing edge mode remains similar
in shape to the eigenstates of the underlying conservative and linear HH
model. Since much of the interesting physics occurs on the slow Goldstone
mode, we focus our investigation on this branch and, starting from the fast
reservoir $\gamma_{R}\gg\gamma$ limit, we adiabatically eliminate the carrier
dynamics.
The dispersions of the Goldstone and amplitude modes in this regime are shown
in Fig. 5. The amplitude mode is always stable, with an imaginary part
$-\Gamma$ at $k=0$. The stability of the Goldstone mode depends instead on the
sign of the nonlinearity. This effect can be understood in terms of the
standard theory of modulational instability in nonlinear optical media Agrawal
(1995) or in dilute Bose-Einstein condensates Pitaevskii and Stringari (2016).
As in Baboux _et al._ (2018), once the carrier dynamics has been
adiabatically eliminated, we can define an effective nonlinearity as
$g_{\textrm{eff}}=g-g_{R}(\gamma/\gamma_{R})(P_{\textrm{th}}/P).$ (34)
If the effective nonlinearity has the same sign as the curvature of the bare
edge mode dispersion (or, equivalently, of the effective mass), the imaginary
part of the collective mode dispersion is negative, and the system is
dynamically stable. Conversely, if the two quantities have opposite signs, the
imaginary part turns positive in a window of wavevectors surrounding $k=0$,
signalling dynamical instability of the spatially uniform solution.
Such behaviour can be understood in the framework of the one-dimensional
theory (28) by including an effective interaction term proportional to
$g_{\textrm{eff}}\,|\psi(x)|^{2}$. Setting for simplicity $\Lambda(k)=1$, one
finds the simple form
$\omega^{\rm Bog}_{+}(k)\simeq v_{g}k-i\frac{g_{\textrm{eff}}\,|\psi^{\rm ss}|^{2}}{m_{*}\Gamma}k^{2}+\mathcal{O}(k^{3})\,$ (35)
from which it is easy to see that the sign of the imaginary part at low $k$ is
determined by the sign of $g_{\textrm{eff}}/m_{*}$. As usual, the observable
consequence of this kind of instability is a slow spatial modulation of the
field amplitude with a wavevector roughly determined by the position of the
maximum of the imaginary part and, eventually, its possible break-up into a
train of solitons.
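The stability criterion of Eqs. (34)-(35) can be sketched in a few lines of code. All parameter values below are illustrative assumptions (not taken from a specific figure), and the function names are hypothetical:

```python
# Minimal sketch of the modulational-stability criterion of Eqs. (34)-(35):
# the small-k Goldstone branch is stable when g_eff and m_star have the
# same sign, unstable otherwise.  All numbers are illustrative.
def effective_g(g, g_R, gamma, gamma_R, P_over_Pth):
    """Effective nonlinearity after adiabatic elimination, Eq. (34)."""
    return g - g_R * (gamma / gamma_R) / P_over_Pth

def im_goldstone(k, geff, n_ss, m_star, Gamma):
    """Imaginary part of the Goldstone branch at small k, Eq. (35)."""
    return -geff * n_ss / (m_star * Gamma) * k**2

ge = effective_g(g=0.05, g_R=0.0, gamma=0.2, gamma_R=2.0, P_over_Pth=2.0)

# positive effective nonlinearity with positive effective mass -> stable
assert im_goldstone(0.1, ge, n_ss=1.0, m_star=+1.0, Gamma=0.1) < 0

# same nonlinearity with negative effective mass -> unstable at small k
assert im_goldstone(0.1, ge, n_ss=1.0, m_star=-1.0, Gamma=0.1) > 0
```

Flipping only the sign of the effective mass flips the sign of the imaginary part, which is exactly the gap-dependent stability discussed next.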
While the sign of the nonlinearity is typically fixed by the material
properties of the device, the effective mass of the HH dispersion has opposite
signs for edge modes in the positive and negative frequency energy gaps,
as illustrated in Fig. 1(b). This has the remarkable consequence that, for a
given sign of the nonlinearity, topological lasing turns out to be unstable in
one of the two frequency gaps and dynamically stable in the other gap. Here,
interestingly, the dynamical stability is reinforced by the nonlinearity as
signalled by the larger value of the $k^{2}$ coefficient of the Goldstone
mode. In mathematical terms, the different behavior of the edge states in the
two topological gaps can be understood as a consequence of the breaking of the
chiral symmetry Eq. (14) by the optical nonlinearity.
Figure 6: Panel (a): imaginary part of the elementary excitation spectrum
with slow reservoir relaxation rate $\gamma_{R}=0.5\gamma$ for sizable
nonlinear interactions $g_{R}/R=-1.5$ between the lasing mode and the carriers
and $g=0$. Panel (b): imaginary part of the elementary excitation spectrum in
the adiabatic regime for a strong repulsive optical nonlinearity
$g/\beta=3.5J$ and $g_{R}=0$ and a pump strength $P/P_{\rm th,1}=1.25$. For
these parameters, the lasing frequency is
$\omega^{\textrm{Las}}-\epsilon(k_{x}^{\textrm{Las}})\approx
g\Lambda(k^{\textrm{Las}})|\psi^{ss}|^{2}\approx 0.21J$. Lasing occurs at
$k_{x}^{\rm Las}=-0.919$ in panel (a) and $k_{x}^{\rm Las}=-0.982$ in panel
(b); the black lines are the result of numerical 2D calculations, while the
red lines are the prediction of the effective 1D model. In both panels
$\gamma=0.2J$. $n_{y}=31$ for panel (a) and $23$ for panel (b).
### V.3 Interplay of nonlinearity and slow carrier dynamics
The pioneering work Longhi _et al._ (2018) has predicted the occurrence of
unstable regimes when a slow carrier relaxation rate
$\gamma_{R}\lesssim\gamma$ is combined with a sizable nonlinear refractive
index due to the carriers in the gain material, a quantity proportional to
$g_{R}n_{R}$ in our model.
The imaginary part of the elementary excitation spectrum in such a regime is
plotted in Fig. 6(a) for the case of lasing into a positive mass edge mode in
the presence of a relatively slow reservoir $\gamma_{R}/\gamma=0.5$ and a
negative carrier-induced nonlinearity $g_{R}<0$, $g=0$. While for very small
$k$ the positive effective nonlinearity (34) conspires with the positive
effective mass to give a stable Bogoliubov mode, a marked instability occurs
at slightly larger $k$ (around $|k|\sim 0.2$ for the parameters in the figure)
due to the hybridization of the laser and the carrier dynamics. Also in this
case, the observable consequence of the instability is the appearance of a
spatial modulation of the field amplitude, with an oscillation wavevector
roughly determined by the position of the maximum of the imaginary part. Since
this physics has a predominantly one-dimensional character, it is well
captured by the one-dimensional theory of Eq. (32), as displayed by the red
lines in Fig. 6(a). On the other hand, the origin of the visible discrepancies
at larger $k$ can be attributed to the distortion of the transverse field
profile from the one of the bare HH modes induced by the optical nonlinearity.
In particular, for this choice of parameters the modes with reverse chirality
turn out to feel a lower gain than predicted by the 1D model, and are
therefore less prone to dynamical instabilities.
Remarkably, very similar behaviours were studied in the context of polariton
condensates Wouters and Carusotto (2007); Bobrovska _et al._ (2018); Baboux
_et al._ (2018) and physically understood in terms of their interaction
with the reservoir of incoherent excitations feeding them: for a positive
effective mass, positive interactions with the reservoir $g_{R}>0$ correspond
to repulsive interactions between the condensate and the slow incoherent
reservoir. Therefore, a local increase of the reservoir density pushes the
condensate particles away creating a local depletion of their density. This
depletion, in turn, reduces the spatial hole-burning effect and leads to a
further increase of the reservoir density. This provides a positive feedback
and causes the initial fluctuation to grow further in time. This process
explains why the lowest-$k$ Bogoliubov modes are unstable in the $m^{*}>0$ and
$g_{R}>0$ case. The opposite behaviour is found in the negative $g_{R}<0$ case
considered in Sec. V.3 or in the negative mass $m^{*}<0$ case considered in
Baboux _et al._ (2018): in both these cases, the interactions with the
carriers tend to further stabilize laser operation against the slowest
Bogoliubov modes. The situation of course changes for modes at increasing $k$,
whose frequency no longer satisfies the adiabaticity condition underlying
(34): for these, one can no longer restrict to the effective interaction
$g_{\rm eff}$ and an instability indeed appears, as displayed in Fig. 6(a).
Quite interestingly, as it was observed in the polariton context Baboux _et
al._ (2018), this finite-wavevector instability is tamed in the presence of a
strong enough wavevector-dependence of the gain away from the highest-gain
condensate mode. In the case of topolasers, this wavevector dependence can be
reinforced with a suitable design of the underlying topological lattice, e.g.
via the edge localization function $\Lambda(k_{x})$.
### V.4 Remarks on the general case
In the previous subsections, we have seen instabilities occurring either in
the neighborhood of the lasing mode at $k\sim 0$ or on modes with opposite
chirality at $k\sim\pi$. This appears to be a general feature and is confirmed
by the calculations for strong optical nonlinearities.
An example of such a calculation is displayed in Fig. 6(b), but the general
trend remains the same for other choices of parameters. Here, the nonlinear
frequency shift $gn$ is positive and comparable to the hopping rate $J$, and
induces a significant distortion of the edge modes. Still, the imaginary part
remains relatively large and negative throughout the Brillouin zone except
for the regions around $k=0,\pi$, where there exist edge modes well localized
on the edge with a strong overlap with the gain medium. All other modes mostly
reside in the bulk of the lattice and thus feel a reduced gain, which ensures
their dynamical stability.
In the $k\sim 0$ region, the positive mass and the positive nonlinear shift
conspire to stabilize the Bogoliubov mode via the same physical mechanism
discussed in Sec. V.2 in terms of the effective one-dimensional theory. In the
$k\sim\pi$ region, instead, the dynamical stability/instability of the
excitation modes depends in a less straightforward way on the system
parameters: in the specific case of Fig. 6(b), for instance, single-mode
lasing is stable, but easily turns unstable as soon as the carrier relaxation
rate is decreased or the pump strength is increased. The experimental
signature of such instability would be the tendency of the device towards a
multi-mode operation with simultaneous emission in the two counter-propagating
edge modes. Further islands of dynamical stability can then be found,
scattered across the wide parameter space of the problem.
In spite of these complications, some useful trends can be identified in view
of ensuring a stable topolaser operation. The arguments in Sec. V.2 can be
used to exploit the nonlinearity to stabilize excitation modes at small $k$
and avoid modulational instabilities. Further stabilization against the
processes discussed in Sec. V.3 can be obtained with a suitable design of the
lattice to further reinforce the $k$-dependence of the edge-mode localization
as discussed in the earlier parts of this work, and/or of the Q-factor of the
edge modes as discussed in Noh _et al._ (2020).
Guaranteeing the stability of the counter-propagating modes at $k\sim\pi$ and
avoiding multi-mode laser operation is instead a more subtle issue, as their
(typically small) imaginary part is strongly affected by the slow carrier
dynamics pointed out in Sec. V.1 and also depends strongly on the
microscopic distortion of the edge mode wavefunction by gain and
nonlinearities. While these features are not easily controlled without a
detailed microscopic modelling of the device, some general trends can anyway
be extracted: counter-propagating modes lying in a different topological gap
are well separated in frequency and therefore can be suppressed through a
relatively weak frequency-dependence of gain Seclì _et al._ (2021). An even
more drastic strategy would be to adopt an underlying topological model that
displays a single topological gap, like the Haldane model considered in
Zapletal _et al._ (2020).
## VI Conclusion
In this work we have presented a general theory of the fluctuation dynamics of
a topological laser device. By extending the Bogoliubov theory of the
collective excitations on top of a dilute condensate to the case of a photonic
topological Harper-Hofstadter model with gain and losses, we calculated the
spectrum of collective excitation modes around the lasing steady-state and
identified different mechanisms for dynamical instability. The numerical
results obtained from the full 2D model were then analytically interpreted in
terms of a simple effective 1D theory which only requires the knowledge of the
edge mode dispersion and of their spatial localization on the edge.
While stable topolaser operation is recovered for the class-A laser models
with no optical nonlinearities considered in Amelio and Carusotto (2020);
Seclì _et al._ (2019); Zapletal _et al._ (2020), more complex phenomena are
found for class-B models featuring a slow carrier dynamics and/or in the
presence of an intensity-dependent refractive index. Several processes
potentially leading to instabilities were identified and explained in physical
terms such as long-wavelength modulational instabilities or multi-mode lasing
into the counter-propagating edge mode with opposite chirality. This provides
physical insight into the instabilities originally predicted with numerical
tools in Longhi _et al._ (2018). Based on our understanding of the various
instability processes, strategies to reinforce the stability of topolaser
operation are finally suggested.
As future perspectives for further work, our study has already provided a
microscopic support to the recently published theory of the long-distance,
late-time coherence properties of the emission of a topological laser device
Amelio and Carusotto (2020). Concerning the recent experimental realizations
of topological lasers, we agree that a quantitative study of actual
semiconductor laser devices Bahari _et al._ (2017); Bandres _et al._ (2018);
Zeng _et al._ (2020) may require a more sophisticated modeling of the gain
medium which goes beyond the scope of this work, but we anticipate that our
method should be quantitatively accurate for the specific case of topological
exciton-polariton condensates which are presently under active study Klembt
_et al._ (2018). From a conceptual perspective, we also expect that our
framework will be a useful starting point for qualitative considerations and
theoretical arguments.
###### Acknowledgements.
We acknowledge stimulating exchanges with Moti Segev and useful discussions
with Stefano Longhi. We acknowledge financial support from the European Union
H2020-FETFLAG-2018-2020 project "PhoQuS" (n.820392) and from the Provincia
Autonoma di Trento. A. L.–P. thanks Ecole Normale Supérieure de Paris-Saclay
for the stipend allocated.
## Appendix A First-order corrections in $\gamma/J$
In this Appendix, we briefly inspect the first order $\gamma/J$ corrections to
the ansatz (20,23-24) for the lasing steady-state $\psi_{y}^{0}$ for vanishing
nonlinearities $g=g_{R}=0$. We focus on the region in the vicinity of the
edge, namely for $y=1,2$. Numerical results for the steady-state suggest that
$\psi^{0}_{1}\simeq A\phi_{1}$ (36)
$\psi^{0}_{2}\simeq A\phi_{2}+iA\frac{\gamma}{2J}\frac{1-|\phi_{1}|^{2}}{\phi_{1}}+O\left[\left(\frac{\gamma}{J}\right)^{2}\right]\,.$ (37)
While the conservative eigenstate $\\{\phi_{y}\\}$ has purely real entries
indicating the absence of particle flux in the transverse direction orthogonal
to the edge, a non-trivial phase appears in the driven-dissipative steady-
state due to the first order correction in $\gamma/J$.
This interesting feature can be understood in terms of particle conservation
at the different sites. The total radiative losses of the lasing mode (per
unit length along $x$) are given by
$\gamma\sum_{y}|\psi^{0}_{y}|^{2}\simeq\gamma A^{2}$ (38)
where the approximate equality holds up to second-order corrections. All gain
is concentrated on the $y=1$ sites, while the contribution of these sites to
the losses
$\gamma|\psi^{0}_{1}|^{2}=\gamma A^{2}|\phi_{1}|^{2}$ (39)
is only partial. Overall balance between gain and losses then requires the
presence of some current flowing out of the edge, namely from $y=1$ to $y=2$.
This current is provided exactly by the first-order correction included in Eq.
(37): even though this term is of order $\gamma/J$, the associated current
contains the tunnelling rate $J$ and is therefore of order $\gamma$. Including
this current, the total rate of particle escape from the $y=1$ sites recovers
that of the whole lattice, $\gamma A^{2}$.
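This balance argument can be checked numerically. The inter-row current formula $j=2J\,\mathrm{Im}(\psi_{1}^{*}\psi_{2})$ is the standard tight-binding expression, assumed here rather than stated in the text, and the numerical values are arbitrary:

```python
# Illustrative check of the particle-balance argument of Appendix A: the
# current carried by the first-order correction in Eq. (37) equals the
# losses gamma*A^2*(1 - phi_1^2) not accounted for by the y = 1 sites.
# The current formula j = 2 J Im(psi_1^* psi_2) is the standard
# tight-binding expression (an assumption); the values are arbitrary.
J, gamma, A, phi1, phi2 = 1.0, 0.2, 0.7, 0.8, 0.5

psi1 = A * phi1                                                     # Eq. (36)
psi2 = A * phi2 + 1j * A * gamma / (2 * J) * (1 - phi1**2) / phi1   # Eq. (37)

current = 2 * J * (psi1.conjugate() * psi2).imag
missing_loss = gamma * A**2 * (1 - phi1**2)
assert abs(current - missing_loss) < 1e-12
```

The real part of $\psi^{0}_{2}$ drops out of the current, so the balance holds identically in $\phi_{2}$, as the argument in the text requires.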
## Appendix B Two-dimensional Bogoliubov theory including the carrier
dynamics
Including carrier dynamics, the Ansatz (15) becomes:
$\left\{\begin{array}{l}\psi_{x,y}(t)=[\psi^{0}_{y}+\delta\psi_{x,y}(t)]\,e^{-i\omega^{\textrm{Las}}t+ik_{x}^{\textrm{Las}}x}\\ N_{x,y}(t)=N_{y}^{0}+\delta N_{x,y}(t)\end{array}\right.$ (40)
In Fourier space along the $x$-axis and in time, the corresponding system of
linear equations for $\delta\psi_{k,y}$ and $\delta N_{k,y}$ reads, with
$k=k_{x}-k_{x}^{\textrm{Las}}$:
$\left\{\begin{aligned}\omega^{\rm Bog}(k)\,\delta\psi_{k,y}&=(\mathbf{H}\,\delta\psi)_{k,y}+g{\psi^{0}_{y}}^{2}\,\delta\psi^{*}_{-k,y}\\ &\quad+[2g|\psi^{0}_{y}|^{2}+g_{R}N_{y}^{0}-\omega^{\textrm{Las}}+i(RN^{0}_{y}-\gamma)]\,\delta\psi_{k,y}\\ &\quad+\psi^{0}_{y}\,(iR+g_{R})\,\delta N_{k,y}\\ \omega^{\rm Bog}(k)\,\delta N_{k,y}&=-i(\gamma_{R}+R|\psi^{0}_{y}|^{2})\,\delta N_{k,y}\\ &\quad-RN^{0}_{y}\,(\psi^{0}_{y}\,\delta\psi^{*}_{-k,y}+\psi^{0*}_{y}\,\delta\psi_{k,y}),\end{aligned}\right.$ (41)
to be supplemented for $\delta\psi_{k,y}^{*}$ by the complex conjugate of the
first equation.
## References
* Harari _et al._ (2016) G. Harari, M. A. Bandres, Y. Lumer, Y. Plotnik, D. N. Christodoulides, and M. Segev, in _Conference on Lasers and Electro-Optics_ (Optical Society of America, Washington, D.C., 2016) p. FM3A.3.
* Harari _et al._ (2018) G. Harari, M. A. Bandres, Y. Lumer, M. C. Rechtsman, Y. D. Chong, M. Khajavikhan, D. N. Christodoulides, and M. Segev, Science 359, eaar4003 (2018).
* Pilozzi and Conti (2016) L. Pilozzi and C. Conti, Physical Review B 93, 195317 (2016).
* Solnyshkov _et al._ (2016) D. D. Solnyshkov, A. V. Nalitov, and G. Malpuech, Physical Review Letters 116, 046402 (2016).
* Ozawa _et al._ (2019) T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. C. Rechtsman, D. Schuster, J. Simon, O. Zilberberg, and I. Carusotto, Rev. Mod. Phys. 91, 015006 (2019).
* Ota _et al._ (2020) Y. Ota, K. Takata, T. Ozawa, A. Amo, Z. Jia, B. Kante, M. Notomi, Y. Arakawa, and S. Iwamoto, Nanophotonics 9, 547 (2020).
* St-Jean _et al._ (2017) P. St-Jean, V. Goblot, E. Galopin, A. Lemaître, T. Ozawa, L. L. Gratiet, I. Sagnes, J. Bloch, and A. Amo, Nature Photonics 11, 651 (2017).
* Parto _et al._ (2018) M. Parto, S. Wittek, H. Hodaei, G. Harari, M. A. Bandres, J. Ren, M. C. Rechtsman, M. Segev, D. N. Christodoulides, and M. Khajavikhan, Physical Review Letters 120, 113901 (2018).
* Han _et al._ (2019) C. Han, M. Lee, S. Callard, C. Seassal, and H. Jeon, Light: Science & Applications 8, 40 (2019).
* Ota _et al._ (2018) Y. Ota, R. Katsumi, K. Watanabe, S. Iwamoto, and Y. Arakawa, Communications Physics 1, 86 (2018).
* Bahari _et al._ (2017) B. Bahari, A. Ndao, F. Vallini, A. El Amili, Y. Fainman, and B. Kanté, Science 358, 636 (2017).
* Bandres _et al._ (2018) M. A. Bandres, S. Wittek, G. Harari, M. Parto, J. Ren, M. Segev, D. N. Christodoulides, and M. Khajavikhan, Science 359, eaar4005 (2018).
* Zeng _et al._ (2020) Y. Zeng, U. Chattopadhyay, B. Zhu, B. Qiang, J. Li, Y. Jin, L. Li, A. G. Davies, E. H. Linfield, B. Zhang, _et al._ , Nature 578, 246 (2020).
* Wittek _et al._ (2017) S. Wittek, G. Harari, M. A. Bandres, H. Hodaei, M. Parto, P. Aleahmad, M. C. Rechtsman, Y. Chong, D. Christodoulides, M. Khajavikhan, and M. Segev, in _Conference on Lasers and Electro-Optics_ (Optical Society of America, Washington, D.C., 2017) p. FTh1D.3.
* Amelio and Carusotto (2020) I. Amelio and I. Carusotto, Phys. Rev. X 10, 041060 (2020).
* Longhi _et al._ (2018) S. Longhi, Y. Kominis, and V. Kovanis, EPL (Europhysics Letters) 122, 14004 (2018).
* Pitaevskii and Stringari (2016) L. P. Pitaevskii and S. Stringari, _Bose Einstein condensation and superfluidity_ (Clarendon Press, Oxford, 2016).
* Wouters and Carusotto (2007) M. Wouters and I. Carusotto, Phys. Rev. Lett. 99, 140402 (2007).
* Seclì _et al._ (2019) M. Seclì, M. Capone, and I. Carusotto, Phys. Rev. Research 1, 033148 (2019).
* Zapletal _et al._ (2020) P. Zapletal, B. Galilo, and A. Nunnenkamp, Optica 7, 1045 (2020).
* Pick _et al._ (2015) A. Pick, A. Cerjan, D. Liu, A. W. Rodriguez, A. D. Stone, Y. D. Chong, and S. G. Johnson, Phys. Rev. A 91, 063806 (2015).
* Baboux _et al._ (2018) F. Baboux, D. D. Bernardis, V. Goblot, V. N. Gladilin, C. Gomez, E. Galopin, L. L. Gratiet, A. Lemaître, I. Sagnes, I. Carusotto, M. Wouters, A. Amo, and J. Bloch, Optica 5, 1163 (2018).
* Klembt _et al._ (2018) S. Klembt, T. Harder, O. Egorov, K. Winkler, R. Ge, M. Bandres, M. Emmerling, L. Worschech, T. Liew, M. Segev, _et al._ , Nature 562, 552 (2018).
* Harper (1955) P. G. Harper, Proceedings of the Physical Society. Section A 68, 874 (1955).
* Hofstadter (1976) D. R. Hofstadter, Physical review B 14, 2239 (1976).
* Henry (1982) C. Henry, IEEE Journal of Quantum Electronics 18, 259 (1982).
* Carusotto and Ciuti (2013) I. Carusotto and C. Ciuti, Rev. Mod. Phys. 85, 299 (2013).
* Bobrovska _et al._ (2018) N. Bobrovska, M. Matuszewski, K. S. Daskalakis, S. A. Maier, and S. Kéna-Cohen, ACS Photonics 5, 111 (2018).
* Note (1) Of course, the different nature of the $\mathcal{P}_{x}\mathcal{T}$ and $\mathcal{P}_{y}\mathcal{T}$ transformation in this case originates from the fact that the system is translationally invariant along x.
* Repellin and Goldman (2019) C. Repellin and N. Goldman, Phys. Rev. Lett. 122, 166801 (2019).
* Note (2) Obviously, in the general case $\gamma\sim J$ the non-uniform gain couples the edge mode to the bulk modes of equal $k_{x}$, so that it is no longer sufficient to overlap with the edge modes only. This coupling becomes negligible for $\gamma\ll J$.
* Note (3) An attempt of derivation would involve the Bogoliubov ansatz $\delta\psi_{x,y}=u_{k}e^{ikx}\phi_{y}(k)+v_{k}e^{-ikx}\phi_{y}(-k)$; the corresponding Bogoliubov problem would not be closed since in general $\phi_{y}(k)\neq\phi_{y}(-k)$.
* Agrawal (1995) G. P. Agrawal, _Nonlinear Fiber Optics: Formerly Quantum Electronics_ (Academic press, 1995).
* Noh _et al._ (2020) W. Noh, H. Nasari, H.-M. Kim, Q. Le-Van, Z. Jia, C.-H. Huang, and B. Kanté, Optics Letters 45, 4108 (2020).
* Seclì _et al._ (2021) M. Seclì, T. Ozawa, M. Capone, and I. Carusotto, APL Photonics 6, 050803 (2021).
# Probabilistic Error Analysis For Sequential Summation of Real Floating Point Numbers
Funding: This work was supported in part by National Science Foundation grant DMS-1760374.
Johnathan Rhyne, Department of Mathematics, North Carolina State University,
Raleigh, NC 27695-8205, USA<EMAIL_ADDRESS>
###### Abstract
We propose probabilistic models to bound the forward error in the numerically
computed sum of a vector with $n$ real elements. To do so, we generate our own
deterministic bound for ease of comparison, and then create a model of our
errors to generate probabilistic bounds of a comparable structure that can
typically be computed alongside the actual computation of the sum.
The errors are represented as bounded, independent, zero-mean random
variables. We have found that our bounds are more accurate for vectors that do
not sum to zero, and are accurate to within one order of magnitude when all
elements have the same sign. We also show that our bounds are informative for
most cases of IEEE half precision numbers and all cases of single precision
numbers for a vector of dimension $n\leq 10^{7}$.
Our numerical experiments confirm that the probabilistic bounds are tighter by
2 to 3 orders of magnitude than their deterministic counterparts for
dimensions of at least 80 with extremely small failure probabilities. The
experiments also confirm that our bounds are much more accurate for vectors
consisting of elements of the same sign.
###### keywords:
roundoff errors, random variables, sums of random variables, martingales,
forward error
Advisor: Ilse C.F. Ipsen, Department of Mathematics, North Carolina State
University, Raleigh, NC 27695-8205, USA<EMAIL_ADDRESS>
65F99, 65G50, 60G42, 60G50
## 1 Introduction
#### Background
We consider the summation of $n$ real numbers in IEEE floating point
arithmetic. First, we derive a deterministic bound for the relative error; we
then introduce two probabilistic inequalities needed to construct our
probabilistic bounds, and confirm that the probabilistic bounds are typically
tighter than their deterministic counterparts by 2 orders of magnitude.
Finally, we note the best cases of our bounds and the conditions under which
they fail, along with future directions for constructing better bounds.
#### Other Work
There have been probabilistic approaches to roundoff analysis applied to
inner products (Ipsen and Zhou [11]), matrix inversion (von Neumann &
Goldstine [15] and Tienari [14]), and LU decomposition and linear system
solutions (Babuška & Söderlind [1], Higham & Mary [7], and Barlow & Bareiss
[2, 3, 4]). Higham & Mary also approach probabilistic analysis for summations
in [8, Section 2]; however, their model takes into account the structure of
the data being summed, so a meaningful comparison is difficult. Although
roundoff errors do not behave as random variables [9, Page 2], [12, Page 17],
our bounds give much more realistic results than deterministic ones, and we
confirm this for $n\leq 10^{7}$ with our numerical experiments.
#### Contributions
I was inspired by the work in [11] on roundoff errors in the computation of
inner products. There, roundoff errors are represented as random variables and
the probabilistic bounds are derived from concentration inequalities. Here, we
construct two models of our roundoff errors and also apply probabilistic
concentration inequalities. Our probabilistic bounds also are expressed in
terms of a chosen failure probability, so we can more easily see how our
failure tolerance affects the accuracy of the bounds.
## 2 Floating Point Arithmetic
Before deriving the bounds, we give an example of floating point arithmetic.
Since a computer has a finite amount of storage, it is necessary to
approximate real numbers like $\frac{1}{5}$ and $\frac{1}{3}$. The IEEE
standard [10] defines conventions for storing real numbers as sums of powers
of 2; Figure 2.1 shows the example of $\frac{5}{32}$. Therefore, numbers like
$\frac{1}{3}$ cannot be represented exactly in floating point arithmetic, and
rounding errors occur when storing numbers that are not exact sums of powers
of two.
Figure 2.1: Artistic rendition of how $\frac{5}{32}$ is stored as a 32-bit
floating point number as defined in [10]. Image courtesy of [16].
The difference between 1 and the next largest floating point number is called
machine epsilon, $\varepsilon$, and the unit roundoff is half of machine
epsilon, $u=\varepsilon/2$ [6, Section 2.1].
Table 2.2 presents the unit roundoff for floating point numbers in single and
half precision.
Table 2.2: Unit roundoff for IEEE [10] single and half precision floating point numbers. Half Precision | Single Precision
---|---
$u=2^{-11}$ | $u=2^{-24}$
In this paper, we consider roundoff errors that occur in addition, and use the
standard model [6, (2.4)]
(1) $\operatorname{\mathrm{fl}}(a+b)=(a+b)(1+\delta)=a+b+\delta(a+b)\qquad\text{where}\quad\left\lvert\delta\right\rvert\leq u.$
Intuitively, this means floating point addition is exact addition with some
small error that is scaled by the numbers that are being added.
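The standard model (1) can be illustrated directly in code. The sketch below uses double precision (so $u=2^{-53}$, a format not listed in Table 2.2) and exact rational arithmetic to recover the $\delta$ of a single addition:

```python
# Recover the relative error delta of one floating point addition and
# check the standard model (1): fl(a+b) = (a+b)(1+delta), |delta| <= u.
# Double precision is used here, so u = 2**-53.
from fractions import Fraction

a, b = 0.1, 0.2                    # already stored exactly as doubles
computed = a + b                   # fl(a + b)
exact = Fraction(a) + Fraction(b)  # exact sum of the stored values
delta = (Fraction(computed) - exact) / exact

u = Fraction(1, 2**53)
assert abs(delta) <= u             # the standard model holds
```

Note that `Fraction(0.1)` converts the stored binary double exactly, so no representation error enters the check, consistent with the assumptions of Section 3.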
## 3 Deterministic Bounds
We derive one deterministic bound based on the actual errors incurred during
floating point addition.
Given $n$ real numbers $x_{k}$, $1\leq k\leq n$, we compute their sum
$\sum_{k=1}^{n}{x_{k}}$ by adding one element after the other. Table 3.1 shows
the order of summation in exact arithmetic, and the roundoff errors in
floating point arithmetic. We denote by
$\operatorname{\mathrm{fl}}\left(\sum_{k=1}^{n}{x_{k}}\right)$ the sum
computed in floating point arithmetic.
#### Assumptions
We assume that the $x_{k}$ are floating point numbers so they can be stored
exactly and do not incur representation errors. The only roundoff errors occur
during the summation, as shown in (1). The roundoff errors in the individual
additions are $\delta_{k}$, where $\left\lvert\delta_{k}\right\rvert\leq u$,
$2\leq k\leq n$, as described in Section 2. We also assume
$\sum_{k=1}^{n}{x_{k}}\neq 0$ so that the relative error is defined. If the
sum is $0$, then we can remove the denominator to obtain absolute bounds.
Table 3.1: Exact and computed sums Exact computation | Floating point arithmetic | Index range
---|---|---
$z_{1}=x_{1}$ | $\hat{z}_{1}=x_{1}$ |
$z_{2}=x_{1}+x_{2}$ | $\hat{z}_{2}=(x_{1}+x_{2})(1+\delta_{2})$ |
$z_{k}=z_{k-1}+x_{k}$ | $\hat{z}_{k}=(\hat{z}_{k-1}+x_{k})(1+\delta_{k})$ | $2\leq k\leq n$
$z_{n}=\sum_{k=1}^{n}{x_{k}}$ | $\hat{z}_{n}=\operatorname{\mathrm{fl}}\left(\sum_{k=1}^{n}{x_{k}}\right)$ |
In Table 3.1, $z_{k}$ represents the exact $k^{th}$ partial sum, while
$\hat{z}_{k}$ represents the $k^{th}$ partial sum computed in floating point
arithmetic.
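The two recursions of Table 3.1 can be run side by side. A stdlib-only sketch, with double precision as the working arithmetic and exact rationals for the $z_{k}$ (the data and its size are illustrative):

```python
from fractions import Fraction
import random

random.seed(123)
x = [random.uniform(0.0, 1.0) for _ in range(1000)]  # each x_k stored exactly

z_hat = 0.0          # zhat_k: floating point partial sums of Table 3.1
z = Fraction(0)      # z_k: exact partial sums
for xk in x:
    z_hat = z_hat + xk        # zhat_k = fl(zhat_{k-1} + x_k)
    z = z + Fraction(xk)      # z_k = z_{k-1} + x_k, computed exactly

rel_err = abs((Fraction(z_hat) - z) / z)
# The error is tiny but generally nonzero; well under 2^-40 for this data.
assert rel_err < Fraction(1, 2**40)
```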
Table 3.2: Variables in the first representation and range of indices

Construction | Index range
---|---
$c_{1}=0$ |
$c_{k}=u\sum_{\ell=1}^{k}{\left\lvert x_{\ell}\right\rvert}=u\|x_{(k)}\|_{1}$ | $2\leq k\leq n$
$Z_{k}=\delta_{k}\sum_{\ell=1}^{k}x_{\ell}$ | $2\leq k\leq n$
$Z=\sum\limits_{j=2}^{n}{Z_{j}}=\operatorname{\mathrm{fl}}\left(\sum_{k=1}^{n}{x_{k}}\right)-\sum_{k=1}^{n}{x_{k}}$ |
In Table 3.2, $Z$ represents the absolute error in the floating point sum, $Z_{k}$ represents all the terms affected by the $k^{th}$ error, and $c_{k}$ is an upper bound for $|Z_{k}|$. Note that each $c_{k}$ is a multiple of $u$, that is, $c_{k}=u\left(|x_{1}|+\cdots+|x_{k}|\right)$.
###### Theorem 3.1.
For $1\leq k\leq n$, let $x_{k}\in\mathbb{R}$ with $\sum_{k=1}^{n}{x_{k}}\neq 0$, let $c_{k}$ be as in Table 3.2, and let $\hat{z}_{n}$ and $z_{n}$ be as in Table 3.1. Then
(2)
$\left\lvert\dfrac{\hat{z}_{n}-z_{n}}{z_{n}}\right\rvert\leq\dfrac{{\sum_{k=2}^{n}{c_{k}}}}{\left\lvert
z_{n}\right\rvert}.$
###### Proof 3.2.
We first show that the $Z_{k}$ values are bounded in magnitude by $c_{k}$ for $2\leq k\leq n$. These bounds follow directly from our assumption on the roundoff errors, $\left\lvert\delta_{k}\right\rvert\leq u$:
$\left\lvert Z_{k}\right\rvert=\left\lvert\delta_{k}\sum_{\ell=1}^{k}x_{\ell}\right\rvert\leq u\sum_{\ell=1}^{k}\left\lvert x_{\ell}\right\rvert=c_{k}.$
Next, we apply these bounds to the final sum, $Z$,
$\left\lvert\hat{z}_{n}-z_{n}\right\rvert={\left\lvert
Z\right\rvert}\leq{\sum_{k=2}^{n}{c_{k}}}.$
Finally, we divide both sides by $\left\lvert z_{n}\right\rvert$ to get our
desired bound of
$\left\lvert\dfrac{\hat{z}_{n}-z_{n}}{z_{n}}\right\rvert\leq\dfrac{{\sum_{k=2}^{n}{c_{k}}}}{\left\lvert z_{n}\right\rvert}.$
By the construction of each $c_{k}$ in Table 3.2, $u$ is multiplied by the one norm of the data; this is how the bound reflects the precision of the floating point arithmetic being used. When $u$ is relatively large compared to $n$, as in the half precision case in Table 2.2, we need a much tighter bound, since we cannot rely on $u$ being small. To achieve this, we must relinquish bounding the actual errors themselves, and instead bound a probabilistic model of these errors.
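The deterministic bound (2) can be checked numerically. The sketch below is our own check, not one of the paper's experiments: it accumulates the $c_{k}$ of Table 3.2 alongside the sum itself, using exact rationals so that the true error is known.

```python
from fractions import Fraction
import random

random.seed(0)
u = Fraction(1, 2**53)   # unit roundoff of the working (double) precision
x = [random.uniform(0.0, 1.0) for _ in range(2000)]

z_hat = 0.0              # floating point running sum
z = Fraction(0)          # exact running sum
abs_prefix = Fraction(0) # |x_1| + ... + |x_k|
bound = Fraction(0)      # sum_{k=2}^{n} c_k, with c_k = u * (|x_1|+...+|x_k|)
for k, xk in enumerate(x, start=1):
    z_hat += xk
    z += Fraction(xk)
    abs_prefix += abs(Fraction(xk))
    if k >= 2:
        bound += u * abs_prefix

rel_err = abs((Fraction(z_hat) - z) / z)
rel_bound = bound / abs(z)
assert rel_err <= rel_bound   # the deterministic bound (2) holds, with much slack
```

For this nonnegative data the bound holds with a wide margin, which foreshadows why a probabilistic analysis can do better.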
## 4 Background Information
Before we move to our probabilistic bounds, we explicitly state Azuma's inequality and the Azuma-Hoeffding martingale inequality, along with any relaxed assumptions for our case. We use $\operatorname{\mathbb{E}}[Z]$ to denote the expected value of a random variable $Z$.
###### Lemma 4.1 (Theorem 5.3 in [5]).
Let $A=A_{1}+\cdots+A_{n}$ be a sum of independent real-valued random
variables, $0\leq a_{k}$ for $1\leq k\leq n$, $0<\delta<1$, and
$\left\lvert A_{k}-\operatorname{\mathbb{E}}[A_{k}]\right\rvert\leq
a_{k}\quad\quad 1\leq k\leq n.$
Then with probability at least $1-\delta$
$\left\lvert
A-\operatorname{\mathbb{E}}[A]\right\rvert\leq\sqrt{2\ln{\frac{2}{\delta}}}\sqrt{\sum\limits_{k=1}^{n}{a_{k}^{2}}}.$
###### Proof 4.2.
We have two cases to consider. The first is when $a_{k}=0$ for $1\leq k\leq n$. In this case the bound holds with right hand side $0$ because, from the assumption $\left\lvert A_{k}-\operatorname{\mathbb{E}}[A_{k}]\right\rvert\leq a_{k}=0$ and the linearity of the expectation,
$\sum_{k=1}^{n}{A_{k}}=\operatorname{\mathbb{E}}\left[\sum_{k=1}^{n}{A_{k}}\right]\implies
A=\operatorname{\mathbb{E}}[A]\implies\left\lvert
A-\operatorname{\mathbb{E}}[A]\right\rvert=0.$
Otherwise, [5, Theorem 5.3] states that
$\text{Pr}[\left\lvert A-\operatorname{\mathbb{E}}[A]\right\rvert\geq t]\leq 2\exp\left(-\dfrac{t^{2}}{2\sum_{k=1}^{n}{a_{k}^{2}}}\right).$
We set $\delta$ equal to the right hand side and solve for $t$ to get
$t=\sqrt{2\ln\frac{2}{\delta}}\sqrt{\sum\limits_{k=1}^{n}{a_{k}^{2}}}.$
Since $\delta$ is the probability that $\left\lvert
A-\operatorname{\mathbb{E}}[A]\right\rvert\geq t$, then $\left\lvert
A-\operatorname{\mathbb{E}}[A]\right\rvert\leq t$ holds with probability at
least $1-\delta$.
This inequality tells us, for any fixed failure probability $\delta$, how much
a sum of random variables differs from its expectation given each summand has
bounded difference from its own expectation.
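A quick empirical check of Lemma 4.1 (our own sketch; the uniform distribution of the summands is an assumption made for illustration): over many trials, the bound should fail with frequency at most $\delta$, and in practice far less often, since the inequality is loose.

```python
import math
import random

random.seed(42)
n, delta, trials = 100, 0.05, 2000
a = [random.uniform(0.5, 1.5) for _ in range(n)]   # the bounds a_k
t = math.sqrt(2 * math.log(2 / delta)) * math.sqrt(sum(ak * ak for ak in a))

violations = 0
for _ in range(trials):
    # A_k uniform on [-a_k, a_k]: zero mean and |A_k - E[A_k]| <= a_k
    A = sum(random.uniform(-ak, ak) for ak in a)
    if abs(A) > t:                                  # here E[A] = 0
        violations += 1

assert violations / trials <= delta                 # failures are rare
```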
Before we can move to our next inequality, we first define a martingale.
###### Definition 4.3 (Theorem 12.1 in [13]).
A collection of random variables $B_{1},B_{2},\dots,B_{n}$ is called a martingale if the following are satisfied
1. 1.
$\operatorname{\mathbb{E}}[\left\lvert B_{k}\right\rvert]$ is finite for $1\leq k\leq n$.
2. 2.
$\operatorname{\mathbb{E}}[B_{k}|B_{1},\dots,B_{k-1}]=B_{k-1}$ for $2\leq k\leq n$.
This is also referred to as being a martingale with respect to itself.
###### Lemma 4.4 (Theorem 12.4 in [13]).
Let $B_{1}=0$, and let $B_{1},B_{2},\,\dots,\,B_{n}$ be a martingale with respect to itself, as in Definition 4.3. Also let $0\leq b_{k}$ for $1\leq k\leq n-1$, and $0<\delta<1$. If
$\left\lvert B_{k}-B_{k-1}\right\rvert\leq b_{k-1}\qquad\text{for }2\leq k\leq
n,$
then with probability at least $1-\delta$
$\left\lvert
B_{n}-B_{1}\right\rvert\leq\sqrt{2\ln{\frac{2}{\delta}}}\sqrt{\sum\limits_{k=1}^{n-1}{b_{k}^{2}}}.$
###### Proof 4.5.
Much like in Lemma 4.1, we first address the case of $b_{k}=0$ for $1\leq k\leq n-1$. If so, the right hand side is $0$, as this would mean that each $B_{k}$ is itself $0$. Otherwise, [13, Theorem 12.4] states that
$\text{Pr}[\left\lvert B_{n}-B_{1}\right\rvert\geq t]\leq 2\exp\left(-\dfrac{t^{2}}{2\sum_{k=1}^{n-1}{b_{k}^{2}}}\right).$
Setting $\delta$ equal to the right hand side and solving for $t$ gives $t=\sqrt{2\ln\frac{2}{\delta}}\sqrt{\sum_{k=1}^{n-1}{b_{k}^{2}}}$.
Since $\delta$ is the probability that $\left\lvert
B_{n}-B_{1}\right\rvert\geq t$, then $\left\lvert B_{n}-B_{1}\right\rvert\leq
t$ holds with probability $1-\delta$.
The modification that we make to the preceding lemma is that the indexing of the $b_{k}$ values ranges over $1\leq k\leq n-1$ instead of $1\leq k\leq n$.
This does not change the result since [13] suggests defining the first element
to be zero, which means we only have $n-1$ non-zero values to be bounded. The
difference between Lemmas 4.1 and 4.4 is that the former tells us the
probability that a sum of random variables differs from its expected value,
while the latter tells us how likely the $n^{th}$ term differs from the first
term with a chosen failure probability $\delta$.
## 5 Probabilistic Bounds
Moving forward, instead of bounding the actual relative errors that are incurred, we model our roundoff errors as zero mean random variables. To avoid extraneous notation, we reuse the notation of Tables 3.1 and 3.2. This change comes with the following assumptions: for $2\leq k\leq n$, $\operatorname{\mathbb{E}}[\delta_{k}]=0$ and the $\delta_{k}$ are independent. Now, we show that this new model, with the same representation as in Tables 3.1 and 3.2, gives much tighter bounds than those in Section 3.
Before we construct our bounds, we show that $Z$, in Table 3.2, satisfies the
requirements to be used in Lemma 4.1. We already have the bounded property
from the proof of Theorem 3.1, so we need only show the zero mean property.
###### Lemma 5.1.
Let $Z$ and $Z_{k}$, $2\leq k\leq n$, be as in Table 3.2. Then
$\operatorname{\mathbb{E}}[Z]=0\quad\text{and}\quad\operatorname{\mathbb{E}}[Z_{k}]=0,\qquad 2\leq k\leq n.$
###### Proof 5.2.
From our construction of $Z_{k}$ for $2\leq k\leq n$, the $x_{k}$ being constants, and the independence of the $\delta_{k}$, we have
$\operatorname{\mathbb{E}}[Z_{k}]=\operatorname{\mathbb{E}}\left[\delta_{k}\sum_{\ell=1}^{k}x_{\ell}\right]=\operatorname{\mathbb{E}}\left[\delta_{k}\right]\sum_{\ell=1}^{k}x_{\ell}=0.$
From the linearity of the expectation
$\operatorname{\mathbb{E}}[Z]=\operatorname{\mathbb{E}}\left[\sum\limits_{j=2}^{n}{Z_{j}}\right]=0.$
Now, we construct the probabilistic counterpart of (2).
###### Theorem 5.3.
Let $Z$ be as in Table 3.2, let $x_{k}\in\mathbb{R}$ for $1\leq k\leq n$ with $\sum_{k=1}^{n}{x_{k}}\neq 0$, and let $0<\delta<1$. Then with probability at least $1-\delta$
(3)
$\left\lvert\dfrac{\hat{z}_{n}-z_{n}}{z_{n}}\right\rvert\leq\dfrac{\sqrt{2\ln{\frac{2}{\delta}}}\sqrt{\sum\limits_{k=1}^{n}{c_{k}^{2}}}}{\left\lvert
z_{n}\right\rvert}.$
###### Proof 5.4.
By applying Lemma 4.1,
$\left\lvert\hat{z}_{n}-z_{n}\right\rvert=\left\lvert
Z\right\rvert\leq\sqrt{2\ln{\frac{2}{\delta}}}\sqrt{\sum\limits_{k=1}^{n}{c_{k}^{2}}}.$
Then, we divide both sides by $\left\lvert z_{n}\right\rvert$ to get our
desired bound
$\left\lvert\dfrac{\hat{z}_{n}-z_{n}}{z_{n}}\right\rvert\leq\dfrac{\sqrt{2\ln{\frac{2}{\delta}}}\sqrt{\sum\limits_{k=1}^{n}{c_{k}^{2}}}}{\left\lvert
z_{n}\right\rvert},$
which holds with probability at least $1-\delta$.
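To see the gain of (3) over (2), one can compare the two numerators directly. A sketch with single precision $u=2^{-24}$, $\delta=10^{-16}$, and illustrative uniform data (the data is our own choice):

```python
import math
import random

random.seed(7)
u, n, delta = 2.0**-24, 10_000, 1e-16
x = [random.uniform(0.0, 1.0) for _ in range(n)]

# c_1 = 0 and c_k = u * (|x_1| + ... + |x_k|) for 2 <= k <= n
c = []
prefix = 0.0
for k, xk in enumerate(x, start=1):
    prefix += abs(xk)
    c.append(0.0 if k == 1 else u * prefix)

det_numerator = sum(c)                                      # numerator of (2)
prob_numerator = math.sqrt(2 * math.log(2 / delta)) \
    * math.sqrt(sum(ck * ck for ck in c))                   # numerator of (3)

# sqrt(sum c_k^2) <= sum c_k, and the ratio of the two norms grows like
# sqrt(n), so for large n the log factor loses and (3) is tighter.
assert prob_numerator < det_numerator
```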
Before we construct our final probabilistic bound, we first consider the
following random variables.
Table 5.1: Errors in the partial sums and their bounds

Construction | Valid range
---|---
$M_{k}=\hat{z}_{k}-z_{k}$ | $1\leq k\leq n$
$M_{n}=\hat{z}_{n}-z_{n}$ |
$m_{k}=\left\lvert x_{1}\right\rvert(1+u)^{k-1}+\sum_{j=2}^{k+1}{\left\lvert x_{j}\right\rvert(1+u)^{k-j+1}}$ | $1\leq k\leq n-1$
Intuitively, each $M_{k}$ represents the difference between the $k^{th}$ floating point and exact partial sums, that is, the error in the $k^{th}$ partial sum; in particular, $M_{1}=\hat{z}_{1}-z_{1}=x_{1}-x_{1}=0$. Each $m_{k}$ is an upper bound for the error in the $(k+1)^{th}$ partial sum before we account for the $(k+1)^{th}$ roundoff error.
Using Table 5.1, we bound a telescoping sum of each $M_{k}$, so we first
provide a recursive characterization of our $M_{k}$, and then bound the
absolute difference of $M_{k}$ and $M_{k-1}$ for $2\leq k\leq n$.
###### Lemma 5.5.
Let $M_{k}$ be as in Table 5.1, $\hat{z}_{k}$ as in Table 3.1, and $x_{k}\in\mathbb{R}$. Then $M_{k}$ satisfies the recursion
$M_{k}=M_{k-1}+\delta_{k}(\hat{z}_{k-1}+x_{k})\qquad 2\leq k\leq n.$
###### Proof 5.6.
We substitute in the recursive definition of $\hat{z}_{k}$ from Table 3.1 and
get
$\displaystyle M_{k}$
$\displaystyle=(\hat{z}_{k-1}+x_{k})(1+\delta_{k})-(z_{k-1}+x_{k})$
$\displaystyle=M_{k-1}+\delta_{k}(\hat{z}_{k-1}+x_{k})\qquad 2\leq k\leq n.$
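The recursion can be verified exactly in rational arithmetic with hand-picked $\delta_{k}$ satisfying $\left\lvert\delta_{k}\right\rvert\leq u$ (the data and the artificially large $u$ are illustrative):

```python
from fractions import Fraction

u = Fraction(1, 16)                         # artificially large "unit roundoff"
x = [Fraction(3), Fraction(-1), Fraction(4), Fraction(2)]
deltas = {2: u, 3: -u, 4: Fraction(1, 32)}  # each |delta_k| <= u

# The recursions of Table 3.1, in exact arithmetic
z, zhat = [x[0]], [x[0]]
for k in range(2, len(x) + 1):
    z.append(z[-1] + x[k - 1])
    zhat.append((zhat[-1] + x[k - 1]) * (1 + deltas[k]))

# M_k = zhat_k - z_k must satisfy M_k = M_{k-1} + delta_k * (zhat_{k-1} + x_k)
M = [zh - ze for zh, ze in zip(zhat, z)]
assert M[0] == 0
for k in range(2, len(x) + 1):
    assert M[k - 1] == M[k - 2] + deltas[k] * (zhat[k - 2] + x[k - 1])
```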
In order to bound the absolute difference between $M_{k}$ and $M_{k-1}$ for $2\leq k\leq n$, we first need to show the bounded nature of each $\hat{z}_{k}$.
###### Lemma 5.7.
The floating point sums $\hat{z}_{k}$ in Table 3.1 are bounded by (where the sum is zero if the lower limit exceeds the upper limit)
$\left\lvert\hat{z}_{k}\right\rvert\leq\left\lvert
x_{1}\right\rvert(1+u)^{k-1}+\sum\limits_{j=2}^{k}\Big{(}{\left\lvert
x_{j}\right\rvert(1+u)^{k-j+1}}\Big{)},\qquad 1\leq k\leq n.$
###### Proof 5.8.
From Table 3.1 follows the explicit representation for the computed partial
sums,
$\hat{z}_{k}=x_{1}\prod\limits_{l=2}^{k}{(1+\delta_{l})}+\sum\limits_{j=2}^{k}\left({x_{j}\prod\limits_{l=j}^{k}{(1+\delta_{l})}}\right),\qquad
1\leq k\leq n.$
Applying the upper bounds $\left\lvert\delta_{k}\right\rvert\leq u$ to the above expression gives
$\left\lvert\hat{z}_{k}\right\rvert\leq\left\lvert x_{1}\right\rvert(1+u)^{k-1}+\sum\limits_{j=2}^{k}\Big{(}{\left\lvert x_{j}\right\rvert(1+u)^{k-j+1}}\Big{)},\qquad 1\leq k\leq n.$
Next, we use Lemma 5.7 to get an upper bound for the absolute difference between $M_{k}$ and $M_{k-1}$ for $2\leq k\leq n$.
###### Lemma 5.9.
Let $M_{k}$ and $m_{k}$ be defined by Table 5.1. Then
$\left\lvert M_{k}-M_{k-1}\right\rvert\leq um_{k-1}\qquad 2\leq k\leq n.$
###### Proof 5.10.
We recall that for $2\leq k\leq n$, $\left\lvert\delta_{k}\right\rvert\leq u$.
$\displaystyle\left\lvert M_{k}-M_{k-1}\right\rvert$
$\displaystyle=\left\lvert\delta_{k}(\hat{z}_{k-1}+x_{k})\right\rvert$
$\displaystyle\leq u\Big{(}\left\lvert
x_{1}\right\rvert(1+u)^{k-2}+\sum\limits_{j=2}^{k-1}{\left\lvert
x_{j}\right\rvert(1+u)^{k-j}}+\left\lvert x_{k}\right\rvert\Big{)}$
$\displaystyle=u\Big{(}\left\lvert
x_{1}\right\rvert(1+u)^{k-2}+\sum\limits_{j=2}^{k}{\left\lvert
x_{j}\right\rvert(1+u)^{k-j}}\Big{)}=um_{k-1}\qquad 2\leq k\leq n.$
Therefore, we have our desired result of
$\left\lvert M_{k}-M_{k-1}\right\rvert\leq um_{k-1},\qquad 2\leq k\leq n.$
We now show that the collection, $M$, of $M_{k}$ for $1\leq k\leq n$ is a martingale by Definition 4.3. The requirement of
$\operatorname{\mathbb{E}}[M_{n}]$ being finite is satisfied since for $1\leq
k\leq n$, $M_{k}$ is a difference between two bounded numbers. We know both
$z_{k}$ and $\hat{z}_{k}$ are bounded since they are sums of $k$ finite
floating point numbers. Finally, we can say that each $\delta_{k}$ is
independent of $M_{1},\dots,M_{k-1}$ for $2\leq k\leq n$ because each $M_{k}$
is written in terms of $\hat{z}_{k}$ and $z_{k}$, where the former is in terms
of constants and $\delta_{l}$ for $1\leq l\leq k$, and the latter is a
constant. Our next task is to prove $M$ satisfies the second requirement of
Definition 4.3.
###### Lemma 5.11.
For $2\leq k\leq n$, let $M_{k}$ be defined by Table 5.1, $\delta_{k}$ be zero
mean, independent random variables, and $M_{1}=0$.
###### Proof 5.12.
We recall the linearity of the expectation, and Lemma 5.5. First we show an
explicit case for $k=2$.
$\displaystyle\operatorname{\mathbb{E}}[M_{2}|M_{1}]$
$\displaystyle=\operatorname{\mathbb{E}}[M_{1}+\delta_{2}(x_{1}+x_{2})|M_{1}]$
$\displaystyle=\operatorname{\mathbb{E}}[M_{1}|M_{1}]+\operatorname{\mathbb{E}}[\delta_{2}|M_{1}](x_{1}+x_{2})$
$\displaystyle=M_{1}+\operatorname{\mathbb{E}}[\delta_{2}](x_{1}+x_{2})=M_{1}.$
Now, let $3\leq k\leq n$
$\displaystyle\operatorname{\mathbb{E}}[M_{k}|M_{1},\dots,M_{k-1}]$
$\displaystyle=\operatorname{\mathbb{E}}[M_{k-1}+\delta_{k}(\hat{z}_{k-1}+x_{k})|M_{1},\dots,M_{k-1}]$
$\displaystyle=\operatorname{\mathbb{E}}[M_{k-1}|M_{1},\dots,M_{k-1}]+\operatorname{\mathbb{E}}[\delta_{k}|M_{1},\dots,M_{k-1}](\hat{z}_{k-1}+x_{k})$
$\displaystyle=M_{k-1}\qquad 3\leq k\leq n.$
Now we know that $M$ forms a martingale, so we can show our second probabilistic bound by applying Lemma 4.4.
###### Theorem 5.13.
Let $x_{k}\in\mathbb{R}$ for $1\leq k\leq n$ where $\sum_{k=1}^{n}{x_{k}}\neq 0$, $m_{k}$ as in Table 5.1, and $0<\delta<1$. Then with probability at least $1-\delta$
(4) $\dfrac{\left\lvert\hat{z}_{n}-z_{n}\right\rvert}{\left\lvert
z_{n}\right\rvert}\leq\dfrac{u\sqrt{2\ln{\frac{2}{\delta}}}\sqrt{\sum_{k=1}^{n-1}{m_{k}^{2}}}}{\left\lvert
z_{n}\right\rvert}.$
###### Proof 5.14.
We first recognize that by construction of $M$, we know that
$M_{1}=x_{1}-x_{1}=0$. Then we apply Lemma 4.4.
$\displaystyle\left\lvert\hat{z}_{n}-z_{n}\right\rvert$
$\displaystyle=\left\lvert M_{n}\right\rvert=\left\lvert
M_{n}-M_{1}\right\rvert$ $\displaystyle\leq
u\sqrt{2\ln{\frac{2}{\delta}}}\sqrt{\sum\limits_{k=1}^{n-1}{m_{k}^{2}}}.$
Then, we divide both sides by $\left\lvert z_{n}\right\rvert$ to get our
desired bound,
$\dfrac{\left\lvert\hat{z}_{n}-z_{n}\right\rvert}{\left\lvert
z_{n}\right\rvert}\leq\dfrac{u\sqrt{2\ln{(\frac{2}{\delta})}}\sqrt{\sum_{k=1}^{n-1}{m_{k}^{2}}}}{\left\lvert
z_{n}\right\rvert},$
which holds with probability at least $1-\delta$.
#### Important note
Due to the definition of $m_{k}$ in Table 5.1, we are unable to compute the bound (4) for large dimensions, above approximately $n=10^{6}$, in an informative time frame. A potential workaround is discussed in Section 7.
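One observation that may help: the definition of $m_{k}$ in Table 5.1 implies the recurrence $m_{1}=|x_{1}|+|x_{2}|$ and $m_{k}=(1+u)\,m_{k-1}+|x_{k+1}|$, so all $m_{k}$ can be produced in a single $O(n)$ pass rather than evaluating each sum from scratch. The sketch below (our own derivation from Table 5.1, not an implementation from the paper) checks the recurrence against the direct formula on small illustrative data:

```python
from fractions import Fraction

u = Fraction(1, 2**11)                 # half precision unit roundoff
x = [Fraction(v) for v in [3, -1, 4, 1, -5, 9, 2, -6, 5, 3, 5]]
n = len(x)

def m_direct(k):
    # m_k from Table 5.1: |x_1|(1+u)^(k-1) + sum_{j=2}^{k+1} |x_j|(1+u)^(k-j+1)
    s = abs(x[0]) * (1 + u) ** (k - 1)
    for j in range(2, k + 2):
        s += abs(x[j - 1]) * (1 + u) ** (k - j + 1)
    return s

m = abs(x[0]) + abs(x[1])              # m_1
assert m == m_direct(1)
for k in range(2, n):                  # m_2, ..., m_{n-1}
    m = (1 + u) * m + abs(x[k])        # x[k] is x_{k+1} in 0-based indexing
    assert m == m_direct(k)
```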
## 6 Numerical Experiments
### 6.1 Hardware
The final results were gathered with `matlab2018a` on the NC State HPC (more information at https://wp.math.ncsu.edu/it/high-powered-computing-cluster/).
### 6.2 Design
Our experiments use vectors of normally distributed random variables with mean zero and variance one, and of uniformly distributed random variables between zero and one. To ensure that RAM capacity is not an issue, we generate each of the $n$ elements inside the function that computes our bounds. To ensure repeatability, we use the Random package to seed before each bound calculation call; the seed used is 123. Specifics for the programs used are in Section 6.3. In our experiments, the unit roundoff, denoted $u$, is as stated in Table 2.2. The algorithms are designed to scale to arbitrary precision; however, we only use 32 bit floating point inputs. To scale to a floating point type with more precision than IEEE 64 bit floating point, assign the true precision in the algorithms to 256 bit precision; we choose 256 bits simply because it is the default for Julia's BigFloat data type.
### 6.3 Algorithms
In this section, we will first list the two algorithms for generating our
bounds. Next, we will describe how we generate a graph, display the graph, and
give an interpretation of the results. Finally, we will make final
interpretations.
#### Naming of Bounds
Within the bounds, we refer to the below equation numbers for the bounds for
ease of reference.
(5) $\displaystyle\left\lvert\dfrac{\hat{z}_{n}-z_{n}}{z_{n}}\right\rvert\leq\dfrac{\sum_{k=2}^{n}{c_{k}}}{\left\lvert z_{n}\right\rvert},$
(6) $\displaystyle\left\lvert\dfrac{\hat{z}_{n}-z_{n}}{z_{n}}\right\rvert\leq\sqrt{2\ln{\frac{2}{\delta}}}\dfrac{\sqrt{\sum_{k=1}^{n}{c_{k}^{2}}}}{\left\lvert z_{n}\right\rvert},$
(7) $\displaystyle\left\lvert\dfrac{\hat{z}_{n}-z_{n}}{z_{n}}\right\rvert\leq u\sqrt{2\ln{\frac{2}{\delta}}}\dfrac{\sqrt{\sum_{k=1}^{n-1}{m_{k}^{2}}}}{\left\lvert z_{n}\right\rvert}.$
#### Common Inputs
By design, all of our algorithms have the same inputs, which we will list here
to save space.
1. 1.
$n$: the dimension of vector we are simulating
2. 2.
$\delta$: the probability of failure as in Lemma 4.1. We use $10^{-16}$ in our
graphs
3. 3.
T: the type of Floating Point (FP) number we want. The following graphs use
32-bit floating point, however, the algorithm works for any FP type as long as
one can determine machine epsilon.
4. 4.
rf: The function that is used to generate the variables. We use Julia’s
Base.randn function for this generation unless otherwise specified.
#### Common Output
By design, we also return the relative error in all of our algorithms.
* •
Relative Error: $\left\lvert\dfrac{\hat{z}_{n}-z_{n}}{z_{n}}\right\rvert$
Algorithm 1 Computation of (5), (6), and the relative error for some
$n\in\mathbb{N}$
Outputs: Equations 5, 6, and relErr
1: if T is 64 bit Floating Point then
2: TTrue $\leftarrow$ 256 bit Floating Point
3: else
4: TTrue $\leftarrow$ 64 bit Floating Point
5: end if
6: xSum $\leftarrow$ 0 of type T
7: xTrueSum, cSquaredSum, cSum $\leftarrow$ 0 of type TTrue
8: for $1\leq k\leq n$ do
9: $\text{x}_{k}\leftarrow$ number generated by rf of type T
10: $\text{xTrue}_{k}\leftarrow$ $\text{x}_{k}$ cast to type TTrue
11: $c_{k}\leftarrow$ as in Table 3.2
12: cSquaredSum $\leftarrow$ cSquaredSum + $c_{k}^{2}$
13: cSum $\leftarrow$ cSum + $c_{k}$
14: xSum $\leftarrow$ xSum + $\text{x}_{k}$
15: xTrueSum $\leftarrow$ xTrueSum + $\text{xTrue}_{k}$
16: end for
17: compute Equation 5
18: compute Equation 6
19: compute relErr
20: return Equations 5, 6, and relErr of type TTrue
Algorithm 2 Computation of (7) for some $n\in\mathbb{N}$
Outputs: Equations 7 and relErr
1: if T is 64 bit Floating Point then
2: TTrue $\leftarrow$ 256 bit Floating Point
3: else
4: TTrue $\leftarrow$ 64 bit Floating Point
5: end if
6: mSquaredSum, xSum $\leftarrow$ 0 of type T
7: xTrueSum $\leftarrow$ 0 of type TTrue
8: for $1\leq k\leq n$ do
9: $\text{x}_{k}\leftarrow$ number generated by rf of type T
10: $\text{xTrue}_{k}\leftarrow$ $\text{x}_{k}$ cast to type TTrue
11: if $k\neq 1$ then
12: $m_{k-1}\leftarrow$ as in Table 3.2
13: mSquaredSum $\leftarrow$ mSquaredSum + $m_{k-1}^{2}$
14: end if
15: xSum $\leftarrow$ xSum + $\text{x}_{k}$
16: xTrueSum $\leftarrow$ xTrueSum + $\text{xTrue}_{k}$
17: end for
18: compute Equation 7
19: compute relErr
20: return Equation 7 and relErr of type TTrue
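A stdlib-only Python sketch of Algorithm 1's structure, with double precision standing in for T and exact rational arithmetic standing in for TTrue (the paper uses 32 bit inputs with a higher precision reference, so this is an analogous, not identical, setup):

```python
import math
import random
from fractions import Fraction

random.seed(123)
n, delta = 10_000, 1e-16
u = Fraction(1, 2**53)                  # unit roundoff of the working precision

z_hat = 0.0                             # xSum, accumulated in working precision
z_true = Fraction(0)                    # xTrueSum, accumulated exactly
abs_prefix = Fraction(0)
c_sum, c_sq_sum = Fraction(0), Fraction(0)
for k in range(1, n + 1):
    xk = random.uniform(0.0, 1.0)       # rf: the generating function
    abs_prefix += abs(Fraction(xk))
    if k >= 2:                          # c_1 = 0
        ck = u * abs_prefix
        c_sum += ck
        c_sq_sum += ck * ck
    z_hat += xk
    z_true += Fraction(xk)

rel_err = float(abs((Fraction(z_hat) - z_true) / z_true))
bound_det = float(c_sum / abs(z_true))                       # as in (2)
bound_prob = math.sqrt(2 * math.log(2 / delta)) \
    * math.sqrt(float(c_sq_sum)) / abs(float(z_true))        # as in (3)

# The probabilistic bound is tighter, and both hold for this data.
assert rel_err <= bound_prob <= bound_det
```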
### 6.4 Graphs
In the following graphs, we start our vector dimensions at the step size since
we are assuming exact representation of our generated numbers and this will
ensure each data point gives us information.
Figure 6.1: Comparing (5), (6), and (7) against the actual relative errors
experienced for $10^{4}\leq n\leq 10^{6}$ in steps of $10^{4}$. Our data is
normally distributed random variables in single precision.
In Figure 6.1, our data is restricted to dimensions lower than $10^{7}$ due to
the computational cost of our bounds in large vector dimensions. The best
bound, (7), is between one and two orders of magnitude of the true relative
error, which is by far our best estimator even when compared to (6). However,
this comes with the trade off of approximately 500 times the computation time
on the HPC hardware444Information from the Systems Administrator is here:
https://wp.math.ncsu.edu/it/high-powered-computing-cluster/ listed in Section
6.1. Even if higher dimensions are needed, (6) shows promise since it is about
two orders of magnitude tighter than (5) along with being less computationally
expensive.
Figure 6.2: Comparing (5), (6), and (7) against the actual relative errors
experienced for $10^{2}\leq n\leq 10^{4}$ in steps of $10^{2}$. Our data is
normally distributed random variables in half precision.
In Figure 6.2, our dimensions shrink because (5) and (6) grow larger than one for larger dimensions, meaning they no longer predict any accuracy. We notice the same pattern for (7) as in Figure 6.1, and the extra tightness becomes exceptionally useful in the half precision case, since even at these dimensions, only (7) estimates the true relative error to be below one. The general trends of (7) and the true relative error appear similar.
Figure 6.3: Comparing (5), (6), and (7) against the actual relative errors
experienced for $10^{4}\leq n\leq 10^{6}$ in steps of $10^{4}$. Our data is
uniformly distributed random variables between 0 and 1 in single precision.
In Figure 6.3, the general behavior of the true relative errors becomes much clearer than in our previous figures. There appears to be a general trend toward one, even though the values remain much below one at our dimensions, so again higher dimensions need to be tested to confirm this claim. The accuracy gains of the probabilistic bounds are highlighted by how close (6) is to (7), and by the fact that both are two orders of magnitude tighter than (5) while being within one order of magnitude of the true relative error.
Figure 6.4: Comparing (5), (6), and (7) against the actual relative errors experienced for $10^{2}\leq n\leq 10^{4}$ in steps of $10^{2}$. Our data is uniformly distributed random variables between 0 and 1 in half precision.
In Figure 6.4, an issue arises: (6) is no longer an upper bound for the true relative errors. This is either an artifact of the low precision or a fundamental error in the bound itself; as previously stated, tests at larger dimensions are needed to investigate this claim. In any case, (5) and (6) are uninformative here, because no accuracy is estimated for $n\gtrapprox 6500$.
## 7 Conclusion and Future Work
We gave roundoff error bounds, along with numerical experiments, for the sequentially computed sum of $n$ random real numbers. Our experiments confirm that, in most circumstances, the probabilistic bounds are between one and two orders of magnitude more accurate in estimating the actual relative error than the known deterministic bounds. Our bounds come in two types: the first, (7), is the most accurate but takes much more time to compute, while the second type, (5) and (6), is much faster to compute but not nearly as accurate. A potential fix for this issue would be to implement the sum in such a way that it can be parallelized, or to construct a better implementation of (7).
#### Issues
In Figure 6.4, (6) is no longer an upper bound for vectors composed of low precision elements of the same sign; alternatively, the observed failure probability may not match our chosen $\delta$ value. A potential solution could be to introduce a bound that depends on the structure of the vector being summed, as described in [7, Section 5.2]. Further testing in single precision at dimensions higher than $10^{7}$ is also needed to see whether Figure 6.4 is an artifact of the lower precision computations or an issue with the bound itself.
## References
* [1] I. Babuška and G. Söderlind, On roundoff error growth in elliptic problems, ACM Trans. Math. Software, 44 (2018), pp. Art. 33, 22.
* [2] E. H. Bareiss and J. L. Barlow, Roundoff error distribution in fixed point multiplication, BIT, 20 (1980), pp. 247–250.
* [3] J. L. Barlow and E. H. Bareiss, On roundoff error distributions in floating point and logarithmic arithmetic, Computing, 34 (1985), pp. 325–347.
* [4] J. L. Barlow and E. H. Bareiss, Probabilistic error analysis of Gaussian elimination in floating point and logarithmic arithmetic, Computing, 34 (1985), pp. 349–364.
* [5] F. Chung and L. Lu, Concentration inequalities and Martingale inequalities: A survey, Internet Math., 3 (2006), pp. 79–127.
* [6] N. J. Higham, Accuracy and stability of numerical algorithms, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second ed., 2002.
* [7] N. J. Higham and T. Mary, A new approach to probabilistic rounding error analysis, SIAM J. Sci. Comput., 41 (2019), pp. A2815–A2835.
* [8] N. J. Higham and T. Mary, Sharper Probabilistic Backward Error Analysis for Basic Linear Algebra Kernels with Random Data. working paper or preprint, Jan. 2020.
* [9] T. E. Hull and J. R. Swenson, Tests of probabilistic models for the propagation of roundoff errors, Communications of the ACM, 9 (1966), pp. 108–113.
* [10] IEEE Computer Society, IEEE standard for floating-point arithmetic, IEEE Std 754–2019, July 2019.
* [11] I. C. F. Ipsen and H. Zhou, Probabilistic error analysis for inner products, SIAM J. Matrix Anal. Appl., to appear (2020).
* [12] W. Kahan, The improbability of probabilistic error analyses for numerical computations, (1996).
* [13] M. Mitzenmacher and E. Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis, Cambridge University Press, New York, 2006.
* [14] M. Tienari, A statistical model of roundoff error for varying length floating-point arithmetic, Nordisk Tidskr. Informationsbehandling (BIT), 10 (1970), pp. 355–365.
* [15] J. von Neumann and H. H. Goldstine, Numerical inverting of matrices of high order, Bull. Amer. Math. Soc., 53 (1947), pp. 1021–1099.
* [16] Wikimedia, File:float example.svg. Wikimedia Upload, Jan. 2008. https://commons.wikimedia.org/wiki/File:Float_example.svg. License: https://creativecommons.org/licenses/by-sa/3.0/.
# Compositionality Through Language Transmission, using Artificial Neural
Networks
Hugh Perkins<EMAIL_ADDRESS>
ASAPP (https://asapp.com)
1 World Trade Center, NY 10007 USA
###### Abstract
We propose an architecture and process for using the Iterated Learning Model ("ILM") with artificial neural networks. We show that ILM does not lead to the same clear compositionality as observed using DCGs, but does lead to a modest improvement in compositionality, as measured by holdout accuracy and topographic similarity. We show that ILM can lead to an anti-correlation between holdout accuracy and topographic rho. We demonstrate that ILM can increase compositionality when using non-symbolic high-dimensional images as input.
## 1 Introduction
Human languages are compositional. For example, if we wish to communicate the idea of a 'red box', we use one word to represent the color 'red', and one to represent the shape 'box'. We can use the same set of colors with other shapes, such as 'sphere'. This contrasts with a non-compositional language, where each combination of color and shape would have its own unique word, such as 'aefabg'. That we use words at all is a characteristic of compositionality. We could alternatively use a unique sequence of letters or phonemes for each possible thought or utterance.
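The vocabulary saving is easy to quantify: with $|C|$ colors and $|S|$ shapes, a compositional language needs $|C|+|S|$ words, while a holistic one needs $|C|\cdot|S|$. A small sketch (the word lists are our own illustrations):

```python
from itertools import product

colors = ["red", "green", "blue", "yellow", "black"]
shapes = ["box", "sphere", "cone", "torus", "disc"]

# Compositional: one word per attribute value, reused across combinations
compositional = {(c, s): f"{c} {s}" for c, s in product(colors, shapes)}
compositional_vocab = set(colors) | set(shapes)

# Holistic: one unrelated word per (color, shape) combination
holistic = {m: f"w{i}" for i, m in enumerate(product(colors, shapes))}

assert len(compositional_vocab) == len(colors) + len(shapes)      # 10 words
assert len(set(holistic.values())) == len(colors) * len(shapes)   # 25 words
```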
Compositionality provides advantages over non-compositional language.
Compositional language allows us to generalize concepts such as colors across
different situations and scenarios. However, it is unclear what concrete mechanism led to human languages being compositional. In laboratory experiments using artificial neural networks, languages emerging between multiple communicating agents show some small signs of compositionality, but do not show the clear compositional behavior that human languages show. Kottur et al. (2017) show that agents do not learn compositionality unless they have to. In the context of referential games, Lazaridou et al. (2018) showed that agent utterances had a topographic rho of 0.16-0.26, on a scale of 0 to 1, even while achieving task accuracy in excess of 98%.
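Topographic rho is typically computed as the Spearman correlation between pairwise distances in meaning space and pairwise edit distances between the corresponding utterances. A stdlib-only sketch on a toy, perfectly compositional language (the language itself is an illustration of ours, not taken from the cited work):

```python
from itertools import combinations

def levenshtein(a, b):
    # Standard dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def spearman(xs, ys):
    # Spearman rho = Pearson correlation of the (tie-averaged) ranks
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            for k in range(i, j + 1):
                r[order[k]] = (i + j) / 2 + 1
            i = j + 1
        return r
    rx, ry = rank(xs), rank(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((p - mx) * (q - my) for p, q in zip(rx, ry))
    sx = sum((p - mx) ** 2 for p in rx) ** 0.5
    sy = sum((q - my) ** 2 for q in ry) ** 0.5
    return cov / (sx * sy)

# A perfectly compositional toy language: one letter per attribute value
lang = {(a, b): "abcde"[a] + "vwxyz"[b] for a in range(5) for b in range(5)}
pairs = list(combinations(lang, 2))
meaning_d = [sum(p != q for p, q in zip(m1, m2)) for m1, m2 in pairs]
utter_d = [levenshtein(lang[m1], lang[m2]) for m1, m2 in pairs]
rho = spearman(meaning_d, utter_d)
assert rho > 0.9   # a fully compositional language scores near 1
```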
In this work, following the ideas of Kirby (2001), we hypothesize that human
languages are compositional because compositional languages are highly
compressible, and can be transmitted across generations most easily. We extend
the ideas of Kirby (2001) to artificial neural networks, and experiment with
using non-symbolic inputs to generate each utterance.
We find that transmitting languages across generations using artificial neural networks does not lead to compositionality as clearly visible as in Kirby (2001). However, our results do not support the null hypothesis that ILM using artificial neural networks leaves compositionality unchanged: we find that objective measures of compositionality do increase over several generations, before reaching a relatively modest plateau.
Our key contributions are:
* •
propose an architecture for using ILM with artificial neural networks,
including with non-symbolic input
* •
show that ILM with artificial neural networks does not lead to the same clear
compositionality as observed using DCGs
* •
show that ILM does lead to a modest increase in compositionality for neural
models
* •
show that two measures of compositionality, i.e. holdout accuracy and topographic similarity, can correlate negatively in the presence of ILM
* •
demonstrate an effect of ILM on compositionality for non-symbolic high-
dimensional inputs
## 2 Iterated Learning Method
Kirby (2001) hypothesized that compositionality in language emerges because
languages need to be easy to learn, in order to be transmitted between
generations. Kirby (2001) showed that using simulated teachers and students
equipped with a context-free grammar, the transmission of a randomly
initialized language across generations caused the emergence of an
increasingly compositional grammar. Kirby et al. (2008) showed evidence for
the same process in humans, who were each tasked with transmitting a language
to another participant, in a chain.
Kirby (2001) termed this approach the "Iterated Learning Method" (ILM).
Learning proceeds in a sequence of generations. In each generation, a teacher
agent transmits a language to a student agent. The student agent then becomes
the teacher agent for the next generation, and a new student agent is created.
A language $G$ is defined as a mapping $G:\mathcal{M}\mapsto\mathcal{U}$ from a space of meanings $\mathcal{M}$ to a space of utterances $\mathcal{U}$. $G$ can be represented as a set of pairs of meanings and utterances $G=\{(m_{1},u_{1}),(m_{2},u_{2}),\dots,(m_{n},u_{n})\}$. Transmission from teacher to student is imperfect, in that only a subset $G_{\text{train}}$ of the full language $G$ is presented to the student. Thus the student agent must generalize from the seen meaning/utterance pairs $\{(m_{i},u_{i})\mid m_{i}\in M_{\text{train},t}\subset\mathcal{M}\}$ to unseen meanings, $\{m_{i}\mid m_{i}\in(\mathcal{M}\setminus M_{\text{train},t})\}$. We represent the mapping from meaning $m_{i}$ to utterance $u_{i}$ by the teacher as $f_{T}(\cdot)$. Similarly, we represent the student agent as $f_{S}(\cdot)$. In ILM, each generation proceeds as follows:
* •
draw subset of meanings $M_{\text{train},t}$ from the full set of meanings
$\mathcal{M}$
* •
invention: use the teacher agent to generate utterances $U_{\text{train},t}=\{u_{i,t}=f_{T}(m_{i})\mid m_{i}\in M_{\text{train},t}\}$
* •
incorporation: the student memorizes the teacher’s mapping from
$M_{\text{train},t}$ to $U_{\text{train},t}$
* •
generalization: the student generalizes from the seen meaning/utterance pairs
$G_{\text{train},t}$ to determine utterances for the unseen meanings
$\mathcal{M}\setminus M_{\text{train},t}$
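The generation loop above can be sketched in plain Python, treating a language as a mapping from meanings to utterances; the helper names (`ilm_generation`, `teacher_speak`, `learn_student`) are ours for illustration, not from the original implementation:

```python
import random

def ilm_generation(teacher_speak, learn_student, meanings, sample_size, seed=0):
    """One ILM generation: sample meanings, have the teacher produce
    utterances for them, then fit a student on those pairs.

    teacher_speak: callable meaning -> utterance (the teacher agent f_T)
    learn_student: callable list[(meaning, utterance)] -> new agent
    """
    rng = random.Random(seed)
    # meaning sampling: draw a training subset M_train,t from the meaning space
    m_train = rng.sample(meanings, sample_size)
    # invention: the teacher generates utterances for the sampled meanings
    g_train = [(m, teacher_speak(m)) for m in m_train]
    # incorporation + generalization: the student fits the observed pairs
    # and must generalize to the remaining, unseen meanings
    return learn_student(g_train)

# Toy agents: a "language" is just a dict; this student simply memorizes
# the presented pairs (a real agent would also generalize).
meanings = [(a, b) for a in range(5) for b in range(5)]
teacher = {m: "u{}{}".format(m[0], m[1]) for m in meanings}
student = ilm_generation(
    teacher_speak=teacher.get,
    learn_student=lambda pairs: dict(pairs),
    meanings=meanings,
    sample_size=10,
)
print(len(student))  # 10 memorized meaning/utterance pairs
```

The student of one generation then serves as the teacher of the next.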
In Kirby (2001), the agents are deterministic sets of DCG rules, e.g. see
Figure 1. For each pair of meaning and utterance $(m_{i},u_{i})\in
G_{\text{train},t}$, if $(m_{i},u_{i})$ is defined by the existing grammar
rules, then no learning takes place. Otherwise, a new grammar rule is added,
that maps from $m_{i}$ to $u_{i}$. Then, in the generalization phase, rules
are merged, where possible, to form a smaller set of rules, consistent with
the set of meaning/utterance pairs seen during training, $G_{\text{train},t}$.
The generalization phase uses a complex set of hand-crafted merging rules.
The initial language at generation $t_{0}$ is randomly initialized, such that
each $u_{t,i}$ is initialized with a random sequence of letters. The meaning
space comprised two attributes, each having 5 or 10 possible values, giving a
total meaning space of $5^{2}=25$ or $10^{2}=100$ possible meanings.
$S:(a_{0},b_{3})\rightarrow\text{abc}$
$S:(x,y)\rightarrow A:y\;B:x$ $A:b_{3}\rightarrow\text{ab}$ $B:a_{0}\rightarrow\text{c}$
Figure 1: Two example sets of DCG rules. Each set will produce the utterance ‘abc’ when presented with the meaning $(a_{0},b_{3})$.
| $a_{0}$ | $a_{1}$ | $a_{2}$ | $a_{3}$ | $a_{4}$
---|---|---|---|---|---
$b_{0}$ | qda | bguda | lda | kda | ixcda
$b_{1}$ | qr | bgur | lr | kr | ixcr
$b_{2}$ | qa | bgua | la | ka | ixca
$b_{3}$ | qu | bguu | lu | ku | ixcu
$b_{4}$ | qp | bgup | lp | kp | ixcp
Table 1: Example language generated by Kirby’s ILM.
Kirby (2001) examined the compositionality of the language after each
generation, by looking for common substrings in the utterances for each
attribute. An example language is shown in Table 1. In this language, there
are two meaning attributes, $a$ and $b$ taking values
$\\{a_{0},\dots,a_{4}\\}$ and $\\{b_{0},\dots,b_{4}\\}$. For example,
attribute $a$ could be color, and $a_{0}$ could represent ‘red’; whilst $b$
could be shape, and $b_{3}$ could represent ‘square’. Then the word for ‘red
square’, in the example language shown, would be ‘qu’. We can see that, in the
example, the attribute value $a_{0}$ is associated with the prefix ‘q’, whilst
the value $b_{3}$ is associated with the suffix ‘u’. The example language thus
shows compositionality.
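This prefix/suffix decomposition can be checked mechanically. The following script (ours, for illustration) verifies that every cell of Table 1 is the concatenation of an $a$-determined prefix and a $b$-determined suffix:

```python
# Prefixes for a_0..a_4 and suffixes for b_0..b_4, read off Table 1.
prefix = {0: "q", 1: "bgu", 2: "l", 3: "k", 4: "ixc"}
suffix = {0: "da", 1: "r", 2: "a", 3: "u", 4: "p"}

# The full table, rows b_0..b_4, columns a_0..a_4.
table = [
    ["qda", "bguda", "lda", "kda", "ixcda"],
    ["qr",  "bgur",  "lr",  "kr",  "ixcr"],
    ["qa",  "bgua",  "la",  "ka",  "ixca"],
    ["qu",  "bguu",  "lu",  "ku",  "ixcu"],
    ["qp",  "bgup",  "lp",  "kp",  "ixcp"],
]

# Every cell decomposes as prefix(a) + suffix(b): the language is compositional.
for b in range(5):
    for a in range(5):
        assert table[b][a] == prefix[a] + suffix[b]
print("language is fully compositional")
```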
Kirby et al. (2008) extended ILM to humans. They observed that ILM with humans
could lead to degenerate grammars, where multiple meanings mapped to identical
utterances. However, they showed that pruning duplicate utterances from the
results of the generation phase, prior to presentation to the student, was
sufficient to prevent the formation of such degenerate grammars.
## 3 ILM using Artificial Neural Networks
Figure 2: Naive ILM using artificial neural networks: a single agent network (the sender) maps each meaning $m$ to an utterance $u$.
We seek to extend ILM to artificial neural networks, for example using RNNs.
Different from the DCG in Kirby (2001), artificial neural networks generalize
over their entire support, for each training example. Learning is in general
lossy and imperfect.
In the case of using ANNs we need to first consider how to represent a single
‘meaning’. Considering the example language depicted in Table 1 above, we can
represent each attribute as a one-hot vector, and represent the set of two
attributes as the concatenation of two one-hot vectors.
More generally, we can represent a meaning as a single real-valued vector,
$\mathbf{m}$. In this work, we will use ‘thought vector’ and ‘meaning vector’
as synonyms for ‘meaning’, in the context of ANNs.
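As a concrete sketch (the function name is ours), a meaning with two attributes of 5 values each becomes a 10-dimensional concatenated one-hot vector:

```python
def meaning_vector(values, sizes):
    """Encode a meaning (one value per attribute) as concatenated one-hot vectors."""
    vec = []
    for v, k in zip(values, sizes):
        one_hot = [0.0] * k
        one_hot[v] = 1.0
        vec.extend(one_hot)
    return vec

# Meaning (a_0, b_3) from a 5x5 meaning space -> a 10-dim vector,
# with ones at positions 0 (for a_0) and 5+3=8 (for b_3).
m = meaning_vector((0, 3), (5, 5))
print(m)
```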
We partition the meaning space $\mathcal{M}$ into $M_{\text{train}}$ and
$M_{\text{holdout}}$, such that $\mathcal{M}=M_{\text{train}}\cup
M_{\text{holdout}}$. We will denote a subset of $M_{\text{train}}$ at
generation $t$ by $M_{\text{train},t}$.
### 3.1 Naive ANN ILM
A naive attempt to extend ILM to artificial neural networks (ANNs) is to
simply replace the DCG in ILM with an RNN, see Figure 2.
In practice we observed that using this formulation leads to a degenerate
grammar, where all meanings map to a single identical utterance. ANNs
generalize naturally, but learning is lossy and imperfect. This contrasts with
a DCG which does not generalize. In the case of a DCG, generalization is
implemented by applying certain hand-crafted rules. With careful crafting of
the generalization rules, the DCG will learn a training set perfectly, and
degenerate grammars are rare. In the case of using an ANN, the lossy teacher-
student training progressively smooths the outputs. In the limit of training
over multiple generations, an ANN produces the same output, independent of the
input: a degenerate grammar. The first two rows of Table 2 show results for
two meaning spaces: 2 attributes each with 33 possible values (depicted as
$33^{2}$), and 5 attributes each with 10 possible values (depicted as
$10^{5}$). The column ‘uniq’ is a measure of the uniqueness of utterances over
the meaning space, where 0 means all utterances are identical, and 1 means all
utterances are distinct. We can see that the uniqueness values are near zero
for both meaning spaces.
We tried the approach of Kirby et al. (2008) of removing duplicate utterances
prior to presentation to the student. Results for ‘nodups’ are shown in the
last two rows of Table 2. The uniqueness improved slightly, but was still near
zero. Thus the approach of Kirby et al. (2008) did not prevent the formation
of a degenerate grammar, in our experiments, when using ANNs.
Meaning space | Nodups | Uniq | $\rho$ | $\text{acc}_{H}$
---|---|---|---|---
$33^{2}$ | - | 0.024 | 0.04 | 0.05
$10^{5}$ | - | 0.024 | 0.08 | 0
$33^{2}$ | yes | 0.039 | 0.1 | 0
$10^{5}$ | yes | 0.05 | 0.1 | 0
Table 2: Results using naive ANN ILM architecture. ‘Nodups’: remove
duplicates; $\rho$: topographic similarity (see later); ‘Uniq’: uniqueness.
Termination criterion for teacher-student training is 98% accuracy.
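The ‘uniq’ column above is not given a formula; a minimal sketch, assuming it is the fraction of distinct utterances rescaled to $[0,1]$ so that 0 means all utterances are identical and 1 means all are distinct, matching the description above:

```python
def uniqueness(utterances):
    """Fraction of distinct utterances, rescaled so that 0 means all
    utterances are identical and 1 means all are distinct.
    (Assumed definition; the exact metric is not spelled out in the text.)"""
    n = len(utterances)
    if n <= 1:
        return 1.0
    return (len(set(utterances)) - 1) / (n - 1)

print(uniqueness(["aa", "aa", "aa"]))  # 0.0: degenerate grammar
print(uniqueness(["aa", "ab", "ba"]))  # 1.0: all utterances distinct
```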
### 3.2 Auto-encoder to enforce uniqueness
To prevent the formation of degenerate grammars, we propose to enforce
uniqueness of utterances by mapping the generated utterances back into meaning
space, and using reconstruction loss on the reconstructed meanings.
Figure 3: Agent sender-receiver architecture: each agent comprises a sender network and a receiver network.
Figure 4: Neural ILM training procedure: (a) the teacher sender generates utterances; (b) the student sender is trained supervised; (c) the student receiver is trained supervised; (d) the student sender and receiver are trained end-to-end.
Using meaning space reconstruction loss requires a way to map from generated
utterances back to meaning space. One way to achieve this could be to back-
propagate from a generated utterance back onto a randomly initialized meaning
vector. However, this requires multiple back-propagation iterations in
general, and we found this approach to be slow. We choose to introduce a
second ANN, which will learn to map from discrete utterances back to meaning
vectors. Our architecture is thus an auto-encoder over meanings, with the
utterance as the latent code. We term the network that maps from a thought
vector into discrete language the ‘sender’, and the network that maps
utterances back to meaning vectors the ‘receiver’. We equip each agent with
both a sender and a receiver network, Figure 3.
### 3.3 Neural ILM Training Procedure
We will denote the teacher sender network as $f_{T,\text{send}}(\cdot)$, the
student receiver network as $f_{S,\text{recv}}(\cdot)$, and the student sender
network as $f_{S,\text{send}}(\cdot)$. The output of $f_{\cdot,\text{send}}(\cdot)$
will be non-normalized logits, representing a sequence of distributions over
discrete tokens. These logits can be converted into discrete tokens by
applying an argmax.
For teacher-student training, we use the sender network of the teacher to
generate a set of meaning-utterance pairs, which represent a subset of the
teacher’s language. We present this language to the student, and train both
the sender and the receiver network of the student, on this new language.
The ILM training procedure is depicted in Figure 4. A single generation
proceeds as follows. For each generation $t$, we do:
* •
meaning sampling we sample a subset of meanings
$M_{\text{train},t}=\\{\mathbf{m}_{t,0}\dots\mathbf{m}_{t,N}\\}\subset
M_{\text{train}}$, where $M_{\text{train}}$ is a subset of the space of all
meanings, i.e. $M_{\text{train}}=\mathcal{M}\setminus M_{\text{holdout}}$
* •
teacher generation: use the teacher sender network to generate the set of
utterances $U_{t}=\\{\mathbf{u}_{t,0},\dots,\mathbf{u}_{t,N}\\}$.
* •
student supervised training: train the student sender and receiver networks
supervised, using $M_{\text{train},t}$ and $U_{t}$
* •
student end-to-end training: train the student sender and receiver network
end-to-end, as an auto-encoder
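These four steps can be written as a framework-agnostic skeleton, with the actual network updates passed in as callables (all names here are ours; this is a sketch of the control flow, not the original implementation):

```python
import random

def neural_ilm_generation(teacher_sender, student, m_train, sample_size,
                          train_supervised, train_end_to_end, seed=0):
    """One generation of neural ILM.

    teacher_sender: callable meaning -> utterance
    student: the student agent (holds sender and receiver networks)
    train_supervised: callable (student, meanings, utterances) -> None
    train_end_to_end: callable (student, meanings) -> None
    """
    rng = random.Random(seed)
    # meaning sampling
    meanings = rng.sample(m_train, sample_size)
    # teacher generation
    utterances = [teacher_sender(m) for m in meanings]
    # student supervised training: transmit the teacher's language
    train_supervised(student, meanings, utterances)
    # student end-to-end training: auto-encode meanings to keep utterances unique
    train_end_to_end(student, m_train)
    return student

# Stub agents that just record the calls, to show the order of operations.
log = []
student = neural_ilm_generation(
    teacher_sender=str,
    student={},
    m_train=list(range(100)),
    sample_size=8,
    train_supervised=lambda s, m, u: log.append(("sup", len(m), len(u))),
    train_end_to_end=lambda s, m: log.append(("e2e", len(m))),
)
print(log)  # [('sup', 8, 8), ('e2e', 100)]
```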
For the teacher generation, each utterance $\mathbf{u}_{t,n}$ is generated as
$f_{T,\text{send}}(\mathbf{m}_{t,n})$.
For the student supervised training, we train the student sender network
$f_{S,\text{send}}(\cdot)$ to generate $U_{t}$, given $M_{\text{train},t}$,
and we train the student receiver network $f_{S,\text{recv}}(\cdot)$ to
recover $M_{\text{train},t}$ given $U_{t}$. Supervised training for each
network terminates after $N_{\text{sup}}$ epochs, or once training accuracy
reaches $\text{acc}_{\text{sup}}$.
The student supervised training serves to transmit the language from the
teacher to the student. The student end-to-end training enforces uniqueness of
utterances, so that the language does not become degenerate.
In the end-to-end step, we iterate over multiple batches, where for each batch
$j$ we do:
* •
sample a set of meanings
$M_{\text{train},t,j}=\\{\mathbf{m}_{t,j,0}\dots\mathbf{m}_{t,j,N_{\text{batch}}}\\}\subset
M_{\text{train}}$
* •
train, using an end-to-end loss function $\mathcal{L}_{\text{e2e}}$ as an
auto-encoder, using meanings $M_{\text{train},t,j}$ as both the input and the
target ground truth.
End-to-end training is run for either $N_{e2e}$ batches, or until end-to-end
training accuracy reaches the threshold $\text{acc}_{e2e}$.
### 3.4 Non-symbolic input
In the general case, the meanings $\mathbf{m}$ can be presented as raw non-
symbolic stimuli $\mathbf{x}$. Each raw stimulus $\mathbf{x}$ can be encoded
by some network into a thought-vector $\mathbf{m}$. We denote such an encoding
network as a ‘perception’ network. As an example of a perception network, an
image could be encoded using a convolutional neural network.
Figure 5: Generalized neural ILM supervised training: (a) encoder and sender supervised training, with cross-entropy loss $\mathcal{L}_{CE}$ on the predicted utterance; (b) receiver supervised training, with the perception network weights held constant (stop-gradient) and an MSE loss $\mathcal{L}_{MSE}$ on the predicted meaning.
This then presents a challenge when training a receiver network. One possible
architecture would be for the receiver network to generate the original input
$\mathbf{x}$. We choose instead to share the perception network between the
sender and receiver networks in each agent. During supervised training of the
sender, using the language generated by the teacher, we train the perception
and sender networks jointly. To train the receiver network, we hold the
perception network weights constant, and train the receiver network to predict
the output of the perception network, given input utterance $\mathbf{u}$ and
target stimulus $\mathbf{x}$. See Figure 5. Note that by setting the
perception network as the identity operator, we recover the earlier supervised
training steps.
Figure 6: End-to-end referential task for non-symbolic inputs, where $x_{s}$ is the input stimulus presented to the sender, $x_{tgt}$ is the target stimulus, and $x_{distr_{1}}$ is a distractor stimulus. The receiver’s prediction $m_{s,pred}$ is compared against the encoded candidate stimuli via a dot product and softmax, giving $p(m_{tgt}\mid m_{s,pred},m_{distr_{1}},\dots)$.
For end-to-end training, with non-symbolic input, we use a referential task,
e.g. as described in Lazaridou et al. (2018). The sender network is presented
the output of the perception network, $\mathbf{m}$, and generates utterance
$\mathbf{u}$. The receiver network chooses a target image from distractors
which matches the image presented to the sender. The target image that the
receiver network perceives could be the original stimulus presented to the
sender, or it could be a stimulus which matches the original image in concept,
but is not the same stimulus. For example, two images could contain the same
shapes, having the same colors, but in different positions. Figure 6 depicts
the architecture, with a single distractor. In practice, multiple distractors
are typically used.
### 3.5 Discrete versus soft utterances
When we train a sender and receiver network end-to-end, we can put a softmax
on the output of the sender network $f_{\cdot,\text{send}}$, to produce a
probability distribution over the vocabulary, for each token. We can feed
these probability distributions directly into the receiver network
$f_{\cdot,\text{recv}}$, and train using cross-entropy loss. We denote this
scenario softmax.
Alternatively, we can sample discrete tokens from categorical distributions
parameterized by the softmax output. We train the resulting end-to-end network
using REINFORCE. We use a moving average baseline, and entropy regularization.
This scenario is denoted rl.
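Concretely, in the rl scenario the sender parameters $\theta$ are updated with a policy-gradient estimator of the standard REINFORCE form (the baseline $b$ is a moving average of the reward; $\lambda$ denotes an entropy-regularization weight, whose value is an implementation detail not specified here):

$\nabla_{\theta}J=\mathbb{E}\left[(R-b)\,\nabla_{\theta}\log\pi_{\theta}(\mathbf{u}\mid\mathbf{m})\right]+\lambda\,\nabla_{\theta}\mathcal{H}\left[\pi_{\theta}(\cdot\mid\mathbf{m})\right]$

where $R$ is the end-to-end task reward, $\pi_{\theta}(\mathbf{u}\mid\mathbf{m})$ is the sender’s distribution over utterances, and $\mathcal{H}$ is its entropy.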
### 3.6 Evaluation of Compositionality
We wish to use objective measures of compositionality. This is necessary
because the compositional signal is empirically relatively weak. We assume
access to the ground truth for the meanings, and use two approaches:
topographic similarity, $\rho$, as defined in Brighton and Kirby (2006) and
Lazaridou et al. (2018); and holdout accuracy $\text{acc}_{H}$.
$\rho$ is the correlation between distance in meaning space, and distance in
utterance space, taken across multiple examples. For the distance metric, we
use the $L_{0}$ distance, for both meanings and utterances. That is, in
meaning space, the distance between ‘red square‘ and ‘yellow square‘ is 1; and
the distance between ‘red square’ and ‘yellow circle’ is 2. In utterance
space, the difference between ‘glaxefw’ and ‘glaxuzg’ is 3. Considered as an
edit distance, we consider substitutions; but neither insertions nor
deletions. For the correlation measure, we use the Spearman’s Rank
Correlation.
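The computation of $\rho$ described above can be sketched in plain Python, with a hand-rolled tie-aware Spearman correlation (function names are ours):

```python
from itertools import combinations

def hamming(x, y):
    """L0 / substitution-only edit distance between equal-length sequences."""
    return sum(a != b for a, b in zip(x, y))

def average_ranks(xs):
    """Ranks with ties assigned the average rank, as in Spearman's rho."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def topographic_similarity(meanings, utterances):
    """Spearman correlation between pairwise meaning and utterance distances."""
    pairs = list(combinations(range(len(meanings)), 2))
    dm = [hamming(meanings[i], meanings[j]) for i, j in pairs]
    du = [hamming(utterances[i], utterances[j]) for i, j in pairs]
    rm, ru = average_ranks(dm), average_ranks(du)
    n = len(pairs)
    mean_m, mean_u = sum(rm) / n, sum(ru) / n
    cov = sum((a - mean_m) * (b - mean_u) for a, b in zip(rm, ru))
    sd_m = sum((a - mean_m) ** 2 for a in rm) ** 0.5
    sd_u = sum((b - mean_u) ** 2 for b in ru) ** 0.5
    return cov / (sd_m * sd_u)

print(hamming("glaxefw", "glaxuzg"))  # 3, as in the example above

# A perfectly compositional language (one character per attribute) gets rho ~ 1.
meanings = [(a, b) for a in range(3) for b in range(3)]
utterances = ["qlk"[a] + "dru"[b] for a, b in meanings]
print(topographic_similarity(meanings, utterances))
```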
$\text{acc}_{H}$ shows the ability of the agents to generalize to combinations
of shapes and colors not seen in the training set. For example, the training
set might contain examples of ‘red square’, ‘yellow square’, and ‘yellow
circle’, but not ‘red circle’. If the utterances were perfectly compositional,
both as generated by the sender, and as interpreted by the receiver, then we
would expect performance on ‘red circle’ to be similar to the performance on
‘yellow circle’. The performance on the holdout set, relative to the
performance on the training set, can thus be interpreted as a measure of
compositionality.
Note that when there is just a single attribute, it is not possible to exclude
any values from training, otherwise the model would never have been exposed to
the value at all. Therefore $\text{acc}_{H}$ is only a useful measure of
compositionality when there are at least 2 attributes.
We observe that one key difference between $\rho$ and $\text{acc}_{H}$ is that
$\rho$ depends only on the compositional behavior of the sender, whereas
$\text{acc}_{H}$ depends also on the compositional behavior of the receiver.
As noted in Lowe et al. (2019), it is possible for utterances generated by a
sender to exhibit a particular behavior or characteristic without the receiver
making use of this behavior or characteristic.
## 4 Related Work
Work on emergent communications was revived recently for example by Lazaridou
et al. (2016) and Foerster et al. (2016). Mordatch and Abbeel (2018) and Leibo
et al. (2017) showed emergent communications in a 2d world. Several works
investigate the compositionality of the emergent language. Kottur et al.
(2017) showed that agents do not generate compositional languages unless they
have to. Lazaridou et al. (2018) used a referential game with high-dimensional
non-symbolic input, and showed the resulting languages contained elements of
compositionality, measured by topographic similarity. Bouchacourt and Baroni
(2018) caution that agents may not be communicating what we think they are
communicating, by using randomized images, and by investigating the effect of
swapping the target image. Andreas et al. (2017) proposed an approach to learn
to translate from an emergent language into a natural language. Obtaining
compositional emergent language can be viewed as disentanglement of the agent
communications. Locatello et al. (2019) prove that unsupervised learning of
disentangled representations is fundamentally impossible without inductive
biases both on the considered learning approaches and the data sets.
Kirby pioneered ILM in Kirby (2001), extending it to humans in Kirby et al.
(2008). Griffiths and Kalish (2007) proved that, for Bayesian agents, the
iterated learning method converges to a distribution over languages that is
determined entirely by the prior, which is somewhat aligned with the result in
Locatello et al. (2019) for disentangled representations. Li and Bowling
(2019), Cogswell et al. (2020), and Ren et al. (2020) extend ILM to artificial
neural networks, using symbolic inputs. Symbolic input vectors are by nature
themselves compositional, typically, the concatenation of one-hot vectors of
attribute values, or of per-attribute embeddings (e.g. Kottur et al. (2017)).
Thus, these works show that given compositional input, agents can generate
compositional output. In our work, we extend ILM to high-dimensional,
non-symbolic inputs. A concurrent work, Dagan et al. (2020), also extends ILM
to image inputs, and takes an additional step in examining the effect of
genetic evolution of the network architecture, in addition to the cultural
evolution of the language that we consider in our own work.
Andreas (2019) provides a very general framework, TRE, for evaluating
compositionality, along with a specific implementation that relates closely to
the language representations used in the current work. It uses a learned
linear projection to rearrange tokens within each utterance; and a relaxation
to enable the use of gradient descent to learn the projection. Due to time
pressure, we did not use TRE in our own work.
Our work on neural ILM relates to distillation (Ba and Caruana, 2014) (Hinton
et al., 2015), in which a large teacher networks distills knowledge into a
smaller student network. More recently, Furlanello et al. (2018) showed that
when the student network has identical size and architecture to the teacher
network, distillation can still give an improvement in validation accuracy on
a vision and a language model. Our work relates also to self-training (He et
al., 2019) in which learning proceeds in iterations, similar to ILM
generations.
## 5 Experiments
$\mathcal{M}$ | $\mathcal{L}$ | E2E Tgt | ILM? | $\text{acc}_{H}$ | $\rho$
---|---|---|---|---|---
$33^{2}$ | softmax | e=100k | | 0.97+/-0.02 | 0.23+/-0.01
$33^{2}$ | softmax | e=100k | yes | 0.984+/-0.002 | 0.30+/-0.02
$33^{2}$ | rl | e=500k | | 0.39+/-0.01 | 0.18+/-0.01
$33^{2}$ | rl | e=500k | yes | 0.52+/-0.04 | 0.238+/-0.008
$10^{5}$ | softmax | e=100k | | 0.97+/-0.01 | 0.22+/-0.02
$10^{5}$ | softmax | e=100k | yes | 0.56+/-0.06 | 0.28+/-0.01
$10^{5}$ | rl | e=500k | | 0.65+/-0.17 | 0.17+/-0.02
$10^{5}$ | rl | e=500k | yes | 0.449+/-0.004 | 0.28+/-0.01
Table 3: Results using auto-encoder architecture on synthetic concepts
dataset. “E2E Tgt”: termination criterion (“target”) for end-to-end training;
“$\rho$”: topographic similarity. Where ILM is used, it is run for 5
generations.
### 5.1 Experiment 1: Symbolic Input
#### 5.1.1 Dataset construction
We conduct experiments first on a synthetic concept dataset, built to resemble
that of Kirby (2001).
We experiment conceptually with meanings with $a$ attributes, where each
attribute can take one of $k$ values. The set of all possible meanings
$\mathcal{M}$ comprises $k^{a}$ unique meanings. We use the notation $k^{a}$
to describe such a meaning space. We reserve a holdout set $\mathcal{H}$ of
128 meanings, which will not be presented during training. This leaves
$(k^{a}-128)$ meanings for training and validation. In addition, we remove
from the training set any meanings having 3 or more attributes in common with
any meanings in the holdout set.
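The split described above can be sketched as follows (function and parameter names are ours; note that with only 2 attributes the 3-or-more-overlap filter never fires, so it only matters for the $10^{5}$ space):

```python
import random
from itertools import product

def split_meanings(num_attrs, num_values, holdout_size, max_overlap=2, seed=0):
    """Split a k^a meaning space into train and holdout sets, removing from
    train any meaning sharing more than max_overlap attribute values with
    any holdout meaning."""
    rng = random.Random(seed)
    all_meanings = list(product(range(num_values), repeat=num_attrs))
    holdout = rng.sample(all_meanings, holdout_size)
    holdout_set = set(holdout)

    def overlap(m, h):
        return sum(a == b for a, b in zip(m, h))

    train = [m for m in all_meanings
             if m not in holdout_set
             and all(overlap(m, h) <= max_overlap for h in holdout)]
    return train, holdout

# Small demonstration space: 4^5 = 1024 meanings, 16 held out.
train, holdout = split_meanings(num_attrs=5, num_values=4, holdout_size=16)
print(len(train), len(holdout))
```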
We choose two meanings spaces: $33^{2}$ and $10^{5}$. $33^{2}$ is constructed
to be similar in nature to Kirby (2001), whilst being large enough to train an
RNN without immediately over-fitting. With 33 possible values per attribute,
the number of possible meanings increases from $10^{2}=100$ to $33^{2}\approx
1,000$. In addition to not over-fitting, this allows us to set aside a
reasonable holdout set of 128 examples. We experiment in addition with a
meaning space of $10^{5}$, which has a total of $100,000$ possible meanings.
We hypothesized that the much larger number of meanings would prevent the
network from simply memorizing each meaning, and thus force the network to
naturally adopt a more compositional representation.
#### 5.1.2 Experimental Setup
The model architecture for the symbolic concept task is that depicted in
Figure 4.
The sender model converts each meaning into a many-hot representation, of
dimension $k\cdot a$, then projects the many-hot representation into an
embedding space.
#### 5.1.3 Results
Table 3 shows the results for the symbolic concept task. We can see that ILM
improves the topographic similarity measure, for both $33^{2}$ and $10^{5}$
meaning spaces. This is true for both softmax and rl.
Interestingly, in the $10^{5}$ meaning space, the increase in compositionality
as measured by $\rho$ is associated with a decrease in $\text{acc}_{H}$, for
both softmax and rl. This could indicate potentially that ILM is inducing the
sender to generate more compositional output, but that the receiver’s
understanding of the utterance becomes less compositional, in this scenario.
It is interesting that $\rho$ and $\text{acc}_{H}$ can be inversely
correlated, in certain scenarios. This aligns somewhat with the findings in
Lowe et al. (2019).
Interestingly, it is not clear that using a $10^{5}$ meaning space leads to
more compositional utterances than the much smaller $33^{2}$ meaning space.
### 5.2 Experiment 2: Images
#### 5.2.1 Dataset
In Experiment One, we conserved the type of stimuli used in prior work on ILM,
e.g. Kirby (2001), using highly structured input. In Experiment Two, we
investigate the extent to which ILM shows a benefit using unstructured high-
dimensional input. We used OpenGL to create scenes containing colored objects,
of various shapes, in different positions.
In the previous task, using symbolic meanings, we required the listener to
reconstruct the symbolic meaning. In the case of images, we use a referential
task, as discussed in Section 3.4. The advantage of using a referential task
is that we do not require the agents to communicate the exact position and
color of each object, just which shapes and colors are present. If the agents
agree on an ordering over shapes, then the number of attributes to be
communicated is exactly equal to the number of objects in the images. The
positions of the objects are randomized to noise the images. We also varied
the colors of the ground plane over each image.
Figure 7: Example referential task images, one example per row. The sender
image and the correct receiver image are the first two images in each row.
Figure 8: Examples of individual ILM runs up to 10 generations, for one, two and three shapes (rows), showing end-to-end accuracy ($acc$), holdout accuracy ($\text{acc}_{H}$), and topographic similarity ($\rho$) per generation (columns).
Example images are shown in Figure 7. Each example comprises 6 images: one
sender image, the target receiver image, and 4 distractor images. Each object
in a scene was a different shape, and we varied the colors and the positions
of each object. Each shape was unique within each image. Two images were
considered to match if the sets of shapes were identical, and if the objects
with the same shapes were identically colored. The positions of the objects
were irrelevant for the purposes of judging if the images matched.
We change only a single color in each distractor, so that we force the sender
and receiver to communicate all object colors, not just one or two. We create
three datasets, for sets of 1, 2 or 3 objects respectively. Each dataset
comprises 4096 training examples, and 512 holdout examples.
In the case of two shapes and three shapes, we create the holdout set by
setting aside combinations of shapes and colors which are never seen in the
training set. That is, the color ‘red’ might have been seen for a cube, but
not for a cylinder. In the case of just one shape, this would mean that the
color had never been seen at all, so for a single shape, we relax this
requirement, and just use unseen geometrical configurations in the holdout
set.
The dataset is constructed using OpenGL and Python. The code is available at
https://github.com/asappresearch/neural-ilm.
#### 5.2.2 Experimental setup
The supervised learning of the student sender and receiver from the teacher
generated language is illustrated in Figure 5. The referential task
architecture is depicted in Figure 6. Owing to time pressure, we experimented
only with using rl. We chose rl over softmax because we felt that rl is more
representative of the discrete nature of natural languages.
#### 5.2.3 Results
Shapes | ILM? | Batches | $\text{acc}_{H}$ | Holdout $\rho$
---|---|---|---|---
1 | | 300k | 0.76+/-0.11 | 0.55+/-0.03
1 | Yes | 300k | 0.95+/-0.03 | 0.69+/-0.04
2 | | 600k | 0.21+/-0.03 | 0.46+/-0.2
2 | Yes | 600k | 0.30+/-0.06 | 0.64+/-0.05
3 | | 600k | 0.18+/-0.01 | 0.04+/-0.02
3 | Yes | 600k | 0.23+/-0.02 | 0.19+/-0.04
Table 4: Results for OpenGL datasets. ‘Shapes’ is number of shapes, and
‘Batches’ is total number of batches. For ILM, batches per generation is total
batches divided by number of ILM generations. For ILM, three generations are
used.
Table 4 shows the results using the OpenGL datasets. We can see that when
training using the rl scenario, ILM shows an improvement across all three
datasets (one, two and three shapes).
The increase in topographic similarity is associated with an improvement in
holdout accuracy, across all scenarios, similar to the $33^{2}$ symbolic
concepts scenario.
Figure 8 shows examples of individual runs. The plots within each row are for
the same dataset, i.e. one shape, two shapes, or three shapes. The first
column shows the end to end accuracy, the second column shows holdout
accuracy, $\text{acc}_{H}$, and the third column shows topographic similarity
$\rho$. We note firstly that the variance across runs is high, which makes
evaluating trends challenging. Results in the table above were reported using
five runs per scenario, and pre-selecting which runs to use prior to running
them.
We can see that end to end training accuracy is good for the one and two
shapes scenario, but that the model struggles to achieve high training
accuracy in the more challenging three shapes dataset. The holdout accuracy
similarly falls dramatically, relative to the training accuracy, as the number
of shapes in the dataset increases. Our original hypothesis was that the more
challenging dataset, i.e. three shapes, would be harder to memorize, and would
thus lead to better compositionality. That the holdout accuracy actually gets
worse, compared to the training accuracy, with more shapes was surprising to
us.
Similarly, the topographic similarity actually becomes worse as we add more
shapes to the dataset. This seems unlikely to be simply because the receiver
struggles to learn anything at all, since the end to end training accuracy
stays relatively high across all three datasets. We note that the ILM effect
is only apparent over the first few generations, reaching a plateau after
around 2-3 generations.
## 6 Conclusion
In this paper, we proposed an architecture to use the iterated learning method
(“ILM”) for neural networks, including for non-symbolic high-dimensional
input. We showed that using ILM with neural networks does not lead to the same
clear compositionality as observed for DCGs. However, we showed that ILM does
lead to a modest increase in compositionality, as measured by both holdout
accuracy and topographic similarity. We showed that holdout accuracy and
topographic similarity $\rho$ can be anti-correlated with each other, in the
presence of ILM. Thus caution is warranted when relying on only a single one
of these measures. Finally, we showed that ILM leads to an increase in
compositionality for non-symbolic high-dimensional input images.
## Acknowledgements
Thank you to Angeliki Lazaridou for many interesting discussions and ideas
that I’ve tried to use in this paper.
## References
* Andreas (2019) Jacob Andreas. 2019. Measuring compositionality in representation learning. arXiv preprint arXiv:1902.07181 .
* Andreas et al. (2017) Jacob Andreas, Anca Dragan, and Dan Klein. 2017. Translating neuralese. arXiv preprint arXiv:1704.06960.
* Ba and Caruana (2014) Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, Curran Associates, Inc., pages 2654–2662. http://papers.nips.cc/paper/5484-do-deep-nets-really-need-to-be-deep.pdf.
* Bouchacourt and Baroni (2018) Diane Bouchacourt and Marco Baroni. 2018. How agents see things: On visual representations in an emergent language game. arXiv preprint arXiv:1808.10696.
* Brighton and Kirby (2006) Henry Brighton and Simon Kirby. 2006. Understanding linguistic evolution by visualizing the emergence of topographic mappings. Artificial Life 12(2):229–242.
* Cogswell et al. (2020) Michael Cogswell, Jiasen Lu, Stefan Lee, Devi Parikh, and Dhruv Batra. 2020. Emergence of compositional language with deep generational transmission.
* Dagan et al. (2020) Gautier Dagan, Dieuwke Hupkes, and Elia Bruni. 2020. Co-evolution of language and agents in referential games. arXiv preprint arXiv:2001.03361.
* Foerster et al. (2016) Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. 2016. Learning to communicate with deep multi-agent reinforcement learning. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems. Curran Associates, Inc., volume 29, pages 2137–2145.
* Furlanello et al. (2018) Tommaso Furlanello, Zachary C. Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born again neural networks. arXiv preprint arXiv:1805.04770.
* Griffiths and Kalish (2007) Thomas L. Griffiths and Michael L. Kalish. 2007. Language evolution by iterated learning with Bayesian agents. Cognitive Science 31(3):441–480.
* He et al. (2019) Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2019. Revisiting self-training for neural sequence generation.
* Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
* Kirby (2001) Simon Kirby. 2001. Spontaneous evolution of linguistic structure: an iterated learning model of the emergence of regularity and irregularity. IEEE Transactions on Evolutionary Computation 5(2):102–110.
* Kirby et al. (2008) Simon Kirby, Hannah Cornish, and Kenny Smith. 2008. Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. Proceedings of the National Academy of Sciences 105(31):10681–10686.
* Kottur et al. (2017) Satwik Kottur, José M. F. Moura, Stefan Lee, and Dhruv Batra. 2017. Natural language does not emerge 'naturally' in multi-agent dialog. arXiv preprint arXiv:1706.08502.
* Lazaridou et al. (2018) Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. 2018. Emergence of linguistic communication from referential games with symbolic and pixel input. arXiv preprint arXiv:1804.03984.
* Lazaridou et al. (2016) Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2016. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182.
* Leibo et al. (2017) Joel Z. Leibo, Vinicius Zambaldi, Marc Lanctot, Janusz Marecki, and Thore Graepel. 2017. Multi-agent reinforcement learning in sequential social dilemmas. arXiv preprint arXiv:1702.03037.
* Li and Bowling (2019) Fushan Li and Michael Bowling. 2019. Ease-of-teaching and language structure from emergent communication. In Advances in Neural Information Processing Systems, pages 15851–15861.
* Locatello et al. (2019) Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. 2019. Challenging common assumptions in the unsupervised learning of disentangled representations. In International Conference on Machine Learning. PMLR, pages 4114–4124.
* Lowe et al. (2019) Ryan Lowe, Jakob Foerster, Y-Lan Boureau, Joelle Pineau, and Yann Dauphin. 2019. On the pitfalls of measuring emergent communication. arXiv preprint arXiv:1903.05168.
* Mordatch and Abbeel (2018) Igor Mordatch and Pieter Abbeel. 2018. Emergence of grounded compositional language in multi-agent populations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
* Ren et al. (2020) Yi Ren, Shangmin Guo, Matthieu Labeau, Shay B. Cohen, and Simon Kirby. 2020. Compositional languages emerge in a neural iterated learning model. arXiv preprint arXiv:2002.01365.
## Appendix: hyper-parameters
For all experiments, results and error bars are reported using five runs per
scenario. We pre-select which runs to use for reporting before running them.
### 6.1 Experiment 1
For experiment 1, we use a batch size of 100 and an embedding size of 50. The
RNNs are GRUs. We query the teacher for utterances for 40% of the training
meaning space each generation. We use an utterance length of 6 and a
vocabulary size of 4.
### 6.2 Experiment 2
For experiment 2, we use the same architecture as Lazaridou et al. (2018),
with the exception that we add a max pooling layer after the convolutional
network layers, with kernel size 8 by 8; and we replace the stride 2
convolutional layers by stride 1 convolutional layers, followed by 2 by 2 max
pooling layers.
We use entropy regularization for both the sender and receiver networks, as
per Lazaridou et al. (2018). At test-time, we take the argmax, instead of
sampling.
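The difference between the two decoding modes (sampling during training, argmax at test time) can be sketched in a few lines of pure Python; `softmax` and `decode` are illustrative helper names, not taken from the paper's code:

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def decode(logits, greedy):
    """Pick a symbol index from logits.

    greedy=True  -> argmax (deterministic; used at test time)
    greedy=False -> sample from the categorical distribution (training)
    """
    probs = softmax(logits)
    if greedy:
        return max(range(len(probs)), key=lambda i: probs[i])
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```

Greedy decoding makes evaluation deterministic, while sampling preserves the exploration needed during training.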
Other hyper-parameters were as follows:
* optimizer: RMSProp
* convolutional layers: 8
* batch size: 32
* no gradient clipping
* utterance length: 6
* utterance vocabulary size: 100
* embedding size: 50
* RNN type: GRU
* number of RNN layers: 1
* dropout: 0.5
* supervised training fraction: 0.4
* number of supervised training steps: 200k
* number of end-to-end training steps: 200k
* sender entropy regularization: 0.01
* receiver entropy regularization: 0.001
ARGONNE NATIONAL LABORATORY
9700 South Cass Avenue
Argonne, Illinois 60439
Convergence Analysis of Fixed Point Chance Constrained Optimal Power Flow
Problems
J. J. Brust, M. Anitescu
Mathematics and Computer Science Division
Preprint ANL/MCS-P9431-0121
August 2021
Footnote 1: This work was supported by the U.S. Department of Energy,
Office of Science, Advanced Scientific Computing Research, under Contract
DE-AC02-06CH11357 at Argonne National Laboratory.
Footnote 2: J. J. Brust is now at Department of Mathematics, University of
California, San Diego, CA.
The submitted manuscript has been created by UChicago Argonne, LLC, Operator
of Argonne National Laboratory (“Argonne”). Argonne, a U.S. Department of
Energy Office of Science laboratory, is operated under Contract No. DE-
AC02-06CH11357. The U.S. Government retains for itself, and others acting on
its behalf, a paid-up nonexclusive, irrevocable worldwide license in said
article to reproduce, prepare derivative works, distribute copies to the
public, and perform publicly and display publicly, by or on behalf of the
Government. The Department of Energy will provide public access to these
results of federally sponsored research in accordance with the DOE Public
Access Plan. http://energy.gov/downloads/doe-public-accessplan
# Convergence Analysis of Fixed Point Chance Constrained Optimal Power Flow
Problems
Johannes J. Brust and Mihai Anitescu, _Member, IEEE_. J. J. Brust and M.
Anitescu were with the Mathematics and Computer Science Division, Argonne
National Laboratory, Lemont, IL, upon completion of this article; e-mails:
jjbrust<EMAIL_ADDRESS>. This work was supported by the U.S.
Department of Energy, Office of Science, Advanced Scientific Computing
Research, under Contract DE-AC02-06CH11357 at Argonne National Laboratory.
###### Abstract
For optimal power flow problems with chance constraints, a particularly
effective method is based on a fixed point iteration applied to a sequence of
deterministic power flow problems. However, the convergence of such an
approach is not guaranteed a priori. This article analyzes convergence
conditions for this fixed point approach and reports numerical experiments,
including on large IEEE networks.
###### Index Terms:
Fixed Point Method, Chance Constraints, Stochastic Optimization, AC Optimal
Power Flow
## I Introduction
Chance constrained optimization problems are often computationally very
challenging. However, when modeling the effects of uncertainty on optimal
power networks, potentially large chance constrained optimization problems
arise [1, 2, 3, 4, 5]. Typically, stochastic optimal power flow models are
developed by reformulating a widely accepted deterministic model. One such
“classical” model is the AC optimal power flow model (AC-OPF) [6]. Because AC-
OPF has multiple degrees of freedom with respect to the problem variables,
various stochastic power flow paradigms can be derived from it. In particular,
one can define different subsets of variables as stochastic variables. For
instance, in [7] power generation is regarded as being stochastic (with
constant demands), resulting in probabilistic objective functions. This
article develops a chance constrained AC optimal power flow model (CC-AC-OPF)
in which the objective function is deterministic, and the demands can be
stochastic. This enables a direct interpretation of the optimal objective
function values, and can be more realistic, since in practice demand is often
a larger source of uncertainty than supply. Yet, however the stochastic power
flow model is formulated, it typically yields a chance constrained
optimization problem. To solve large instances of the resulting CC-AC-OPF
problems, a fixed point iteration over a sequence of modified, related, and
simpler deterministic AC-OPF problems is used.
Iterative methods that solve a sequence of simpler problems have been very
effective on a variety of recent power systems problems [8, 9, 10]. In the
context of chance-constrained optimization, such an approach has been
successfully used in [11, 12, 13], among others. However, prior to this work,
there did not exist analytical criteria for when one can expect this fixed
point iteration to converge. Therefore, this article describes a convergence
analysis of the chance constrained fixed point iteration and tests the method
on a set of standard IEEE networks. The numerical experiments report results
for networks with up to 9000 nodes. To summarize, the contributions of this
work are: (1) The formulation of a new CC-AC-OPF model with a deterministic
objective function, derived from uncertain demands; (2) The application and
analysis of a fixed-point algorithm for an implicit chance constrained
problem. Even though the use of iterative (fixed-point) techniques is
widespread in power systems problems, previously no rigorous analysis had been
undertaken for this formulation. In particular, we disentangle effects of
model parameters and network properties on the convergence; (3) We include
numerical experiments on large test cases. The article is organized as
follows: Sections I-A to I-D are preliminary and review the AC optimal power flow
model and how a chance-constrained model is obtained from it. Section II lists
the fixed-point algorithm. We highlight that the reformulation and iterative
solution of chance-constrained AC-OPF has been proposed in [11]; however, a
rigorous analysis of its convergence has not yet been developed. Therefore,
Section III analyzes convergence properties of the fixed-point algorithm,
while numerical experiments are described in Section IV. Finally, we conclude
with Section V.
### I-A AC Power Flow (Preliminary)
The power network is defined by $N$ nodes (buses) and $N_{\text{line}}$ edges
(lines). Associated with each bus are a voltage magnitude $v_{i}$, a voltage
phase angle $\theta_{i}$, and power generation or consumption at the $i^{\text{th}}$ bus,
for $1\leq i\leq N$. In particular, let $p^{g}_{i}$ be the real power
generated at bus $i$, $q^{g}_{i}$ the corresponding reactive power and
$p^{d}_{i}$, $q^{d}_{i}$ the real and reactive demands, respectively. The
buses are furthermore divided into two groups: generators and loads. The
indexing sets are $G$, and $L$, which are related to the set of all buses by
$B=G\cup L$ (also $N=N_{G}+N_{L}$). Throughout, we will use the indexing sets
$\{G,L\}$ as subscripts to bold font vector variables. Note that in our
setup each bus fits exactly into one of the two categories (though one can
easily set a virtual zero load at a node). It is customary to assume that the
network contains one reference bus (typically at $i=1$). Load buses do not
have active power generation and are thus defined by
$\mathbf{p}^{g}_{L}=\mathbf{q}^{g}_{L}=\mathbf{0}$, where
$\mathbf{p}^{g},\mathbf{q}^{g}$ are vectors that contain the $p^{g}_{i}$’s and
$q^{g}_{i}$’s. The power flow in the system is described by the power flow
equations, which couple the variables (e.g., [6, Sections III-IV]).
Specifically, let $G_{ik},B_{ik}$ denote the entries of the network’s
admittance matrix and define the quantities
$\theta^{ik}\equiv\theta_{i}-\theta_{k}$, $c_{ik}\equiv
G_{ik}\cos(\theta^{ik})+B_{ik}\sin(\theta^{ik})$ and $d_{ik}\equiv
G_{ik}\sin(\theta^{ik})-B_{ik}\cos(\theta^{ik})$. With these definitions, the
$2N$ power flow equations, for $i=1,\ldots,N$, are
$\displaystyle v_{i}\sum_{k=1}^{N}v_{k}c_{ik}-({p}^{g}_{i}-{p}^{d}_{i})=0,$
(1) $\displaystyle
v_{i}\sum_{k=1}^{N}v_{k}d_{ik}-({q}^{g}_{i}-{q}^{d}_{i})=0,$
when grouped by buses. If we let
$\mathbf{d}=\text{vec}(\mathbf{p}^{d},\mathbf{q}^{d})$ be the
$\mathbb{R}^{2N}$ vector containing ${p}^{d}_{i},{q}^{d}_{i}$ then in vector
notation (1) is expressed as
$\mathbf{f}(\mathbf{v},\boldsymbol{\theta},\mathbf{p}^{g},\mathbf{q}^{g};\mathbf{d})=\mathbf{0}$,
where $\mathbf{f}$ represents the nonlinear equations in (1). Note, however,
that $\mathbf{f}$ is linear in $\mathbf{d}$, a fact we will use later.
Moreover, $\mathbf{d}$ is regarded as a parameter and not a variable.
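The residual form of the power flow equations (1) can be sketched in a few lines of numpy; the two-bus admittance values below are illustrative and not taken from the article:

```python
import numpy as np

def power_flow_residual(v, theta, pg, qg, pd, qd, G, B):
    """Residual of the 2N power flow equations (1); zero at a solution.

    v, theta : voltage magnitudes and phase angles (length N)
    pg, qg   : real and reactive generation; pd, qd : real and reactive demand
    G, B     : real and imaginary parts of the N x N admittance matrix
    """
    dth = theta[:, None] - theta[None, :]          # theta^{ik} = theta_i - theta_k
    c = G * np.cos(dth) + B * np.sin(dth)          # c_{ik}
    d = G * np.sin(dth) - B * np.cos(dth)          # d_{ik}
    p_mis = v * (c @ v) - (pg - pd)                # first N equations of (1)
    q_mis = v * (d @ v) - (qg - qd)                # last N equations of (1)
    return np.concatenate([p_mis, q_mis])

# Illustrative two-bus line with series admittance y = 5 - 15j (no shunts):
g, b = 5.0, -15.0
G = np.array([[g, -g], [-g, g]])
B = np.array([[b, -b], [-b, b]])
z = np.zeros(2)
# A flat start (unit voltages, zero angles) with zero injections satisfies (1).
f = power_flow_residual(np.ones(2), np.zeros(2), z, z, z, z, G, B)
```

Note that the demands $p^{d}_{i},q^{d}_{i}$ enter only additively, which reflects the linearity of $\mathbf{f}$ in $\mathbf{d}$ mentioned above.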
### I-B AC Optimal Power Flow (Preliminary)
Optimal power flow determines the best set of variables that satisfy the
network constraints. In addition to the power flow equations from (1), branch
current bounds are typically included in the optimization problem. In
particular, let $LC$ be the set of all line connections, i.e., the set of
index pairs that describe all line connections (e.g., if bus $i$ is connected
to bus $k$, then $(i,k)\in LC$). Subsequently, the so-called branch current
constraints are
$(D^{\text{max}}_{ik})^{2}-(v_{i}\cos(\theta_{i})-v_{k}\cos(\theta_{k}))^{2}-(v_{i}\sin(\theta_{i})-v_{k}\sin(\theta_{k}))^{2}\geq
0$, $\forall(i,k)\in LC$ for constant current limits $D^{\text{max}}_{ik}$. In
vector notation these $N_{\text{line}}$ constraints are represented as
$\mathbf{g}(\mathbf{v},\boldsymbol{\theta})\geq\mathbf{0}$. It is also
desirable to include “hard” bounds on the generation variables, such as
$\mathbf{l}_{p}\leq\mathbf{p}^{g}_{G}\leq\mathbf{u}_{p}$, where
$\mathbf{l}_{p}$, $\mathbf{u}_{p}$ represent constant lower and upper bounds.
The AC optimal power flow (AC-OPF) problem, for a cost function
$C_{0}{\color[rgb]{0,0,0}(\cdot)}$, is thus formulated as
$\displaystyle\underset{\mathbf{v},\boldsymbol{\theta},\mathbf{p}^{g},\mathbf{q}^{g}}{\text{
minimize }}$ $\displaystyle C_{0}(\mathbf{p}^{g}_{G})\quad\text{subject to}$
(2)
$\displaystyle\mathbf{f}(\mathbf{v},\boldsymbol{\theta},\mathbf{p}^{g},\mathbf{q}^{g};\mathbf{d})$
$\displaystyle=\mathbf{0}$ (3)
$\displaystyle\mathbf{g}(\mathbf{v},\boldsymbol{\theta})$
$\displaystyle\geq\mathbf{0}$ $\displaystyle\mathbf{l}_{p}$
$\displaystyle\leq\mathbf{p}^{g}_{G}\leq\mathbf{u}_{p}$
$\displaystyle\mathbf{l}_{q}$
$\displaystyle\leq\mathbf{q}^{g}_{G}\leq\mathbf{u}_{q}$
$\displaystyle\mathbf{l}_{v}$ $\displaystyle\leq\mathbf{v}\leq\mathbf{u}_{v}$
$\displaystyle\mathbf{l}_{\theta}$
$\displaystyle\leq\boldsymbol{\theta}\leq\mathbf{u}_{\theta}$
The cost function is typically a convex quadratic function that only depends
on the real power generation. Specifically,
$C_{0}(\mathbf{p}^{g}_{G})=\sum_{i\in G}q_{ii}({p}^{g}_{i})^{2}+\sum_{i\in
G}q_{i}{p}^{g}_{i}+q_{00}$ for cost data $q_{ii},q_{i}$ and $q_{00}$. A local
solution
$\mathbf{s}^{*}=\text{vec}(\mathbf{v}^{*},\boldsymbol{\theta}^{*},(\mathbf{p}^{g})^{*},(\mathbf{q}^{g})^{*})$
of (2) is called the optimal power flow point, or OPF point.
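The quadratic generation cost can be transcribed directly; the coefficients below are illustrative, and `q2`/`q1` are just hypothetical names for the $q_{ii}$ and $q_{i}$ data:

```python
def quadratic_cost(pg, q2, q1, q00):
    """C0(p^g_G) = sum_i q_ii (p^g_i)^2 + sum_i q_i p^g_i + q00.

    pg : real power generation per generator
    q2, q1, q00 : quadratic, linear, and constant cost data
    """
    return sum(a * p * p for a, p in zip(q2, pg)) + \
           sum(b * p for b, p in zip(q1, pg)) + q00

# Illustrative coefficients for two generators.
cost = quadratic_cost([1.0, 2.0], [0.5, 0.5], [1.0, 1.0], 3.0)
```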
### I-C Chance Constrained AC Optimal Power Flow (Preliminary)
In order to introduce uncertainty related to e.g., renewable energy into the
OPF problem (2) we regard the demand terms in (1) (i.e.,
${p}^{d}_{i},{q}^{d}_{i}$) as forecasted values with possible error.
Specifically, these stochastic quantities are represented as
${p}^{d}_{i}+\omega_{i},\quad{q}^{d}_{i}+\omega_{N+i},$ (4)
where $\omega_{i}$ and $\omega_{N+i}$ represent forecasting errors. For
compactness, the stochastic errors are represented by the $2N$ vector
$\boldsymbol{\omega}$. Here we assume $\boldsymbol{\omega}$ is normally
distributed; relaxing this assumption leads to different approaches. The chance constrained
AC-OPF model in this Section is developed such that the objective function
only depends on deterministic variables. When the objective function
represents cost, deterministic values are meaningful and important. Since the
power flow equations in (1) are underdetermined, i.e.,
$\mathbf{f}:\mathbb{R}^{2N+N_{G}}\to\mathbb{R}^{2N}$, this system has $N_{G}$
degrees of freedom. Subsequently, let $\mathbf{y}=\mathbf{p}_{G}$ represent a
$\mathbb{R}^{N_{G}}$ vector of deterministic variables and
$\mathbf{x}=\mathbf{x}(\boldsymbol{\omega})=\text{vec}(\mathbf{q}_{G}(\boldsymbol{\omega}),\mathbf{v}_{L}(\boldsymbol{\omega}),\boldsymbol{\theta}(\boldsymbol{\omega}))$
a $2N$ vector of stochastic variables. The stochastic power flow equations are
represented as
$\mathbf{f}(\mathbf{x},\mathbf{y};\mathbf{d})+\boldsymbol{\omega}=\mathbf{f}_{\omega}(\mathbf{x}(\boldsymbol{\omega}),\mathbf{y};\mathbf{d}+\boldsymbol{\omega})\equiv\mathbf{f}_{\omega}=\mathbf{0}.$
(5)
If $\boldsymbol{\omega}=\mathbf{0}$ these equations reduce to the power flow
equations in (1). Note that the power flow equations couple the variables and
that the uncertainty in $\mathbf{x}$ depends on the uncertainty in the demands
“$\mathbf{d}$”, since
$\mathbf{x}=\mathbf{x}(\boldsymbol{\omega})=\mathbf{x}(\mathbf{y},\mathbf{d}+\boldsymbol{\omega})$.
We set $C_{0}(\mathbf{p}^{g}_{G})=C_{0}(\mathbf{y})$ to reflect the change in
variables for the stochastic optimal power flow problem and note that the
objective function is deterministic. Let
$\mathbb{P}(\mathbf{z}\geq\mathbf{0})\geq 1-\boldsymbol{\epsilon}$ represent a
vector of probability constraints, in which each element of the left-hand side
is the probability that $z_{i}\geq 0$ and each element of
$\boldsymbol{\epsilon}$ lies in the interval $0<\epsilon<1$. Then the chance
constrained (stochastic) optimal
power flow problem is given by
$\displaystyle\underset{\mathbf{x}(\boldsymbol{\omega}),\mathbf{y}}{\text{
minimize }}$ $\displaystyle C_{0}(\mathbf{y})\quad$ subject to (6)
$\displaystyle\mathbf{f}_{\omega}(\mathbf{x}(\boldsymbol{\omega}),\mathbf{y};\mathbf{d}+\boldsymbol{\omega})$
$\displaystyle=\mathbf{0}\quad$ $\displaystyle\forall\boldsymbol{\omega}$
$\displaystyle\mathbb{P}(\mathbf{g}(\mathbf{x}(\boldsymbol{\omega}),\mathbf{y})\geq\mathbf{0})$
$\displaystyle\geq\mathbf{1}-\boldsymbol{\epsilon}_{g}$ (7)
$\displaystyle\mathbb{P}(\mathbf{l}_{x}\leq\mathbf{x}(\boldsymbol{\omega})\leq\mathbf{u}_{x})$
$\displaystyle\geq\mathbf{1}-\boldsymbol{\epsilon}_{x}$ (8)
$\displaystyle\mathbf{l}_{y}$ $\displaystyle\leq\mathbf{y}\leq\mathbf{u}_{y}.$
Observe that the problem in (6) includes chance constraints (probability
constraints) on the line flow limits (7) and the stochastic variables
$\mathbf{x}(\boldsymbol{\omega})$ (8), while deterministic limits are set on
$\mathbf{y}$. Here $\boldsymbol{\epsilon}_{g}$ and $\boldsymbol{\epsilon}_{x}$
correspond to model parameters for setting probability thresholds.
#### I-C1 Computing Chance Constraints
To practically compute the chance constraints in problem (6), the stochastic
variables are linearized around $\boldsymbol{\omega}=\mathbf{0}$ (the mean of the error).
This is equivalent to assuming that $\boldsymbol{\omega}$ is sufficiently
small, which we proceed to do in the rest of the paper. In particular,
$\mathbf{x}(\mathbf{y},\mathbf{d}+\boldsymbol{\omega})\approx\mathbf{x}(\mathbf{y},\mathbf{d})+\frac{\partial\mathbf{x}(\mathbf{y},\mathbf{d}+\boldsymbol{\omega})}{\partial\boldsymbol{\omega}}\bigg{|}_{\boldsymbol{\omega=\mathbf{0}}}\boldsymbol{\omega}\equiv\mathbf{x}_{\omega}.$
(9)
Note that $\mathbf{x}(\mathbf{y},\mathbf{d})=\mathbf{x}_{0}$ and that
$\partial\mathbf{x}(\mathbf{y},\mathbf{d})\big{/}\partial\boldsymbol{\omega}$
are deterministic. Thus the expectation and variance of $\mathbf{x}_{\omega}$
are
$\text{E}[\mathbf{x}_{\omega}]=\mathbf{x}(\mathbf{y},\mathbf{d})=\mathbf{x}_{0}$,
and
$\text{Var}[\mathbf{x}_{\omega}]=(\partial\mathbf{x}(\mathbf{y},\mathbf{d})\big{/}\partial\boldsymbol{\omega})\text{Var}[\boldsymbol{\omega}](\partial\mathbf{x}(\mathbf{y},\mathbf{d})\big{/}\partial\boldsymbol{\omega})^{\top}$,
respectively. Alternatively, more accurate dynamics of the stochastic
variables may be based on a $2^{\text{nd}}$ order expansion
$\mathbf{x}(\mathbf{y},\mathbf{d}+\boldsymbol{\omega})\approx\mathbf{x}(\mathbf{y},\mathbf{d})+(\partial\mathbf{x}/\partial\boldsymbol{\omega})\boldsymbol{\omega}+\textnormal{``second
order terms''}$. When the covariance of the uncertainty has a particular
structure (e.g., diagonal) then the mean of the expansion can be determined
analytically. The expansion’s covariance is more involved and may need to be
estimated. Another possibility to include nonlinearities might be a quadratic
model of the load:
$\boldsymbol{\omega}_{3}\texttt{.*}(\mathbf{p}^{d})^{\texttt{.2}}+\boldsymbol{\omega}_{2}\texttt{.*}(\mathbf{p}^{d})+\boldsymbol{\omega}_{1}$,
where .* and ${}^{\texttt{.2}}$ are element-wise multiplications and squares.
For computational efficiency we use the probabilities of the linearized random
variables. For instance, the constraints from (8) are written as
$\mathbb{P}(\mathbf{x}_{\omega}\leq\mathbf{u}_{x})\geq\mathbf{1}-\boldsymbol{\epsilon}_{x},\quad\mathbb{P}(\mathbf{l}_{x}\leq\mathbf{x}_{\omega})\geq\mathbf{1}-\boldsymbol{\epsilon}_{x}.$
(10)
To handle the vector of probabilities in (10) one can use a Bonferroni bound
[14], in which each individual variable $(\mathbf{x}_{\omega})_{r}$ for $1\leq
r\leq 2N$ satisfies an individual highly conservative probability constraint.
However, when the variables are independent (or can be treated as such) then
the probabilities can be separated without restrictions. The mean of
$(\mathbf{x}_{\omega})_{r}$ is $(\mathbf{x}_{0})_{r}$, while the variance can
also be explicitly computed. Define
$\partial\mathbf{x}\big{/}\partial\boldsymbol{\omega}\equiv\boldsymbol{\Gamma},$
(11)
and let $\mathbf{e}_{r}$ be the r${}^{\text{th}}$ column of the identity
matrix. Moreover, denote
$\text{Var}[\boldsymbol{\omega}]=\boldsymbol{\Sigma}^{2}$. With this the
variance of $(\mathbf{x}_{\omega})_{r}$ is
$\left\|\mathbf{e}^{\top}_{r}\boldsymbol{\Gamma}\boldsymbol{\Sigma}\right\|^{2}_{2}$.
In turn, when the variables can be treated independently, individual
probability constraints, such as
$F^{\text{Nrm}}((\mathbf{x}_{\omega})_{r}\leq(\mathbf{u}_{x})_{r})\geq
1-(\boldsymbol{\epsilon}_{x})_{r}$ (where $F^{\text{Nrm}}$ is the normal
distribution function) can be represented as
$((\mathbf{u}_{x})_{r}-(\mathbf{x}_{0})_{r})\big{/}\left\|\mathbf{e}^{\top}_{r}\boldsymbol{\Gamma}\boldsymbol{\Sigma}\right\|_{2}\geq(F^{{\color[rgb]{0,0,0}\text{Nrm}}})^{-1}(1-(\boldsymbol{\epsilon}_{x})_{r}),$
where $(F^{{\color[rgb]{0,0,0}\text{Nrm}}})^{-1}(\cdot)$ is the inverse
cumulative distribution function. Defining
$z_{r}=(F^{{\color[rgb]{0,0,0}\text{Nrm}}})^{-1}(1-(\boldsymbol{\epsilon}_{x})_{r})$
the constraints are represented as
$(\mathbf{u}_{x})_{r}-\lambda_{r}\geq(\mathbf{x}_{0})_{r},\quad\lambda_{r}=z_{r}\|\mathbf{e}^{\top}_{r}\boldsymbol{\Gamma}\boldsymbol{\Sigma}\|_{2}.$
(12)
Similarly, for $2N+1\leq r_{1}\leq 2N+N_{L}$ and $\bar{r}_{1}=r_{1}-2N$,
defining
$z_{r_{1}}=(F^{{\color[rgb]{0,0,0}\text{Nrm}}})^{-1}(1-(\boldsymbol{\epsilon}_{g})_{r_{1}})$
the remaining probability constraints are represented as
$\mathbf{g}(\mathbf{x}_{0},\mathbf{y})_{\bar{r}_{1}}-\lambda_{r_{1}}\geq\mathbf{0}$
with
$\lambda_{r_{1}}=z_{r_{1}}\|\mathbf{e}^{\top}_{\bar{r}_{1}}(\partial\mathbf{g}\big{/}\partial\mathbf{x})\boldsymbol{\Gamma}\boldsymbol{\Sigma}\|_{2}.$
(13)
Note that $\lambda_{r},\lambda_{r_{1}}$ depend on $\mathbf{x},\mathbf{y}$ and
$\mathbf{d}$, e.g.,
$\lambda_{r}=\lambda_{r}(\mathbf{x}(\mathbf{y},\mathbf{d}))$, which we will
use later. Moreover the inequality in (12) is deterministic and thus
straightforward to compute once $\boldsymbol{\Gamma}$ is known. Second, when
$(\boldsymbol{\epsilon}_{x})_{r}$ is a small number (which is typically the
case) then $z_{r}>0$ and $\lambda_{r}>0$. Thus the
$\lambda_{r},\lambda_{r_{1}}$ values are regarded as constraint tightenings
and represent the effects of stochasticity in the constraints. Note that when
other distributions are desired, one can substitute the
$(F^{\text{Nrm}})^{-1}(\cdot)$ c.d.f. (above (12) and elsewhere) with another
one. Since investigations about distributional robustness have been previously
conducted by other researchers we refer to [15, Sec. III.A] for in-depth
discussions.
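The tightenings $\lambda_{r}$ of (12) are straightforward to compute once $\boldsymbol{\Gamma}$ and $\boldsymbol{\Sigma}$ are available. A minimal numpy sketch follows, using the Python standard library's `NormalDist` for the inverse normal c.d.f.; the matrices and the 5% violation level are illustrative:

```python
import numpy as np
from statistics import NormalDist  # standard-library inverse normal c.d.f.

def tightening(Gamma, Sigma, eps):
    """Constraint tightenings lambda_r of Eq. (12).

    Gamma : sensitivity matrix dx/domega
    Sigma : matrix square root of Var[omega]
    eps   : per-variable violation probabilities (each in (0, 1))
    Returns lambda_r = z_r * ||e_r^T Gamma Sigma||_2 with
    z_r = (F^Nrm)^{-1}(1 - eps_r).
    """
    z = np.array([NormalDist().inv_cdf(1.0 - e) for e in eps])
    row_norms = np.linalg.norm(Gamma @ Sigma, axis=1)  # ||e_r^T Gamma Sigma||_2
    return z * row_norms

# Illustrative values: identity sensitivities, 10% noise, 5% violation level.
lam = tightening(np.eye(2), 0.1 * np.eye(2), [0.05, 0.05])
# The tightened deterministic bound of (12) then reads x0_r <= u_r - lam_r.
```

For small $\epsilon_{r}$ the tightenings are positive, shrinking the feasible region exactly as described above.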
#### I-C2 Computing $\partial\mathbf{x}\big{/}\partial\boldsymbol{\omega}$
The partial derivatives
$\partial\mathbf{x}\big{/}\partial\boldsymbol{\omega}=\boldsymbol{\Gamma}$ in
(12) are obtained by using the power flow equation
$\frac{\partial}{\boldsymbol{\partial\omega}}\left(\mathbf{f}_{\omega}(\mathbf{x}(\boldsymbol{\omega}),\mathbf{y};\mathbf{d}+\boldsymbol{\omega})\right)=\frac{\partial\mathbf{f}_{\omega}}{\partial\mathbf{x}}\frac{\partial\mathbf{x}}{\partial\boldsymbol{\omega}}+\frac{\partial\mathbf{f}_{\omega}}{\partial\boldsymbol{\omega}}=\mathbf{0}.$
First, note from (5) that
$\partial\mathbf{f}_{\omega}\big{/}\partial\boldsymbol{\omega}=\mathbf{I}$.
Second, the partial derivatives are only needed at
$\boldsymbol{\omega}=\mathbf{0}$, which yields the representation
$\partial\mathbf{x}\big{/}\partial\boldsymbol{\omega}=-(\partial\mathbf{f}_{0}\big{/}\partial\mathbf{x})^{-1}$
with the convention $\mathbf{f}_{0}=\mathbf{f}$. Since
$\mathbf{x}=\text{vec}(\mathbf{q}^{g}_{G},\mathbf{v}_{L},\boldsymbol{\theta})$
the so-called Jacobian matrix of partial derivatives is given by
$\frac{\partial\mathbf{f}}{\partial\mathbf{x}}=\left[\begin{array}[]{ c c c
}\frac{\partial\mathbf{f}}{\partial\mathbf{q}^{g}_{G}}&\frac{\partial\mathbf{f}}{\partial\mathbf{v}_{L}}&\frac{\partial\mathbf{f}}{\partial\boldsymbol{\theta}}\end{array}\right].$
(14)
The elements of this matrix can be computed from (1). Note that only the last
$N$ equations in (1) depend on $\mathbf{q}^{g}$. Subsequently, we define the
indices $1\leq i\leq N$ and $j=N+i$, as well as $1\leq g\leq N_{G}$, $1\leq
l\leq N_{L}$ and $1\leq t\leq N$. With this, the elements of the Jacobian from
(14) are:
$\displaystyle\left(\frac{\partial\mathbf{f}}{\partial\mathbf{q}^{g}_{G}}\right)_{i,g}$
$\displaystyle=0,$ (15)
$\displaystyle\left(\frac{\partial\mathbf{f}}{\partial\mathbf{q}^{g}_{G}}\right)_{j,g}$
$\displaystyle=\begin{cases}-1&\text{ if }i=[G]_{g}\\\ 0&\text{ otherwise
}\end{cases}$
$\displaystyle\left(\frac{\partial\mathbf{f}}{\partial\mathbf{v}_{L}}\right)_{i,l}$
$\displaystyle=\begin{cases}\sum_{k=1}^{N}v_{k}c_{ik}+2c_{ii}v_{i}&\text{ if
}i=[L]_{l}\\\ v_{i}c_{i[L]_{l}}&\text{ otherwise }\end{cases}$
$\displaystyle\left(\frac{\partial\mathbf{f}}{\partial\mathbf{v}_{L}}\right)_{j,l}$
$\displaystyle=\begin{cases}\sum_{k=1}^{N}v_{k}d_{ik}+2d_{ii}v_{i}&\text{ if
}i=[L]_{l}\\\ v_{i}d_{i[L]_{l}}&\text{ otherwise }\end{cases}$
$\displaystyle\left(\frac{\partial\mathbf{f}}{\partial\boldsymbol{\theta}}\right)_{i,t}$
$\displaystyle=\begin{cases}-v_{i}\sum_{k=1}^{N}v_{k}d_{ik}~{}~{}~{}~{}~{}&\text{
if }i=t\\\ v_{i}v_{t}d_{it}~{}~{}~{}~{}~{}&\text{ otherwise }\end{cases}$
$\displaystyle\left(\frac{\partial\mathbf{f}}{\partial\boldsymbol{\theta}}\right)_{j,t}$
$\displaystyle=\begin{cases}v_{i}\sum_{k=1}^{N}v_{k}c_{ik}~{}~{}~{}~{}~{}~{}~{}&\text{
if }i=t\\\ -v_{i}v_{t}c_{it}~{}~{}~{}~{}~{}~{}~{}&\text{ otherwise
}\end{cases}$
The partial derivatives
$\partial\mathbf{x}/\partial\boldsymbol{\omega}=-(\partial\mathbf{f}/\partial\mathbf{x})^{-1}$
are defined by an inverse. This inverse is typically well defined for regular
optimal power flow problems, as described in [7, Section III. B] and [16] (if
numerically the Jacobian matrix becomes (nearly) singular, it may be corrected
by shifting its diagonal elements by adding a multiple of the identity
matrix). Therefore, the smallest singular value of
$\partial\mathbf{f}/\partial\mathbf{x}\equiv\mathbf{J}$ is positive, i.e.,
$\sigma_{\text{min}}(\mathbf{J})>0$. A positive lower bound for the smallest
singular value of a matrix is described in [17, Theorem 1]. Let
$\mathbf{J}_{:,i}$ denote the $i^{\text{th}}$ column of $\mathbf{J}$ and let
$\mathbf{J}_{i,:}$ be the $i^{\text{th}}$ row of $\mathbf{J}$. Then a lower
bound for the smallest singular value is:
$\sigma_{\text{min}}(\mathbf{J})\geq\hat{K}_{\Gamma}>0,$
where
$\hat{K}_{\Gamma}=\left(\frac{\hat{n}-1}{\hat{n}}\right)^{\frac{\hat{n}-1}{2}}|\mathbf{J}|\text{
max}\left(\frac{\text{min}(\|\mathbf{J}_{:,i}\|_{2})}{\prod_{i}\|\mathbf{J}_{:,i}\|_{2}},\frac{\text{min}(\|\mathbf{J}_{i,:}\|_{2})}{\prod_{i}\|\mathbf{J}_{i,:}\|_{2}}\right),$
$\hat{n}\equiv 2N$, and where the determinant is
$|\mathbf{J}|=\text{det}(\mathbf{J})$. This means that
$\|(\partial\mathbf{f}\big{/}\partial\mathbf{x})^{-1}\|_{2}={\color[rgb]{0,0,0}\|\mathbf{J}^{-1}\|_{2}=\frac{1}{\sigma_{\text{min}}(\mathbf{J})}\leq
1\big{/}\hat{K}_{\Gamma}\equiv K_{\Gamma}}$ (16)
where $\hat{K}_{\Gamma},K_{\Gamma}$ are finite constants. To perform
solves with $\mathbf{J}$ (which is part of computing the constraints in (12)
and (13)), an LU factorization is used, based on the decomposition
$\mathbf{J}=\mathbf{P}\mathbf{L}\mathbf{U}$, where $\mathbf{P}$ is a
permutation matrix, $\mathbf{L}$ is a unit lower triangular matrix and
$\mathbf{U}$ is an upper triangular matrix. The determinant and the bounds in
(16) are thus available “without extra expense” based on solves with
$\mathbf{J}$, by multiplying the diagonal elements of $\mathbf{U}$, since
$|\mathbf{J}|=\pm|\mathbf{U}|$ (up to the sign of the permutation $\mathbf{P}$) and $\mathbf{U}$ is upper triangular. Since the
determinant can often become large (even if a matrix is well conditioned) a
possibly preferable approach of computing constant $K_{\Gamma}$ is to exploit
the inequality
$\|\mathbf{J}^{-1}\|\leq\sqrt{\|\mathbf{J}^{-1}\|_{1}\|\mathbf{J}^{-1}\|_{\infty}}\equiv
K_{\Gamma}$. Note that computing $K_{\Gamma}$ can be inexpensive since
$\mathbf{J}^{-1}$ is computed as part of e.g., (12).
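The computation of $\boldsymbol{\Gamma}$ and of the bound $K_{\Gamma}=\sqrt{\|\mathbf{J}^{-1}\|_{1}\|\mathbf{J}^{-1}\|_{\infty}}$ can be sketched as follows. The small matrix below stands in for the power flow Jacobian, and the explicit inverse is formed only for compactness (in practice one would reuse the LU factorization for all solves):

```python
import numpy as np

def gamma_and_bound(J):
    """Gamma = -J^{-1} (Section I-C2) together with the bound
    K_Gamma = sqrt(||J^{-1}||_1 ||J^{-1}||_inf) >= ||J^{-1}||_2."""
    Jinv = np.linalg.solve(J, np.eye(J.shape[0]))
    K = np.sqrt(np.linalg.norm(Jinv, 1) * np.linalg.norm(Jinv, np.inf))
    return -Jinv, K

# Small illustrative matrix standing in for the power flow Jacobian.
J = np.array([[4.0, 1.0], [1.0, 3.0]])
Gamma, K_Gamma = gamma_and_bound(J)
```

The inequality $\|\mathbf{A}\|_{2}\leq\sqrt{\|\mathbf{A}\|_{1}\|\mathbf{A}\|_{\infty}}$ holds for any matrix, so $K_{\Gamma}$ always dominates the spectral norm of $\mathbf{J}^{-1}$.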
### I-D Implicit Chance Constrained Optimal Power Flow
An optimal power flow problem that combines components of the AC-OPF problem
in (2) and of the chance constrained problem in (6) is obtained by using the
constraints from (12) and (13). This reformulated problem incorporates
stochastic effects, while at the same time enables efficient computations. The
corresponding _implicit_ chance-constrained AC-OPF problem is:
$\displaystyle\underset{\mathbf{v},\boldsymbol{\theta},\mathbf{p}^{g},\mathbf{q}^{g}}{\text{
minimize }}$ $\displaystyle C_{0}(\mathbf{p}^{g}_{G})\quad\text{subject to}$
(17)
$\displaystyle\mathbf{f}(\mathbf{v},\boldsymbol{\theta},\mathbf{p}^{g},\mathbf{q}^{g};\mathbf{d})$
$\displaystyle=\mathbf{0}$
$\displaystyle\mathbf{g}(\mathbf{v},\boldsymbol{\theta})$
$\displaystyle\geq\boldsymbol{\lambda}_{g}(\mathbf{p}^{g}_{G})$
$\displaystyle\mathbf{l}_{q}+\boldsymbol{\lambda}_{q}(\mathbf{p}^{g}_{G})$
$\displaystyle\leq\mathbf{q}^{g}_{G}\leq\mathbf{u}_{q}-\boldsymbol{\lambda}_{q}(\mathbf{p}^{g}_{G})$
$\displaystyle\mathbf{l}_{v}+\boldsymbol{\lambda}_{v}(\mathbf{p}^{g}_{G})$
$\displaystyle\leq\mathbf{v}_{L}\leq\mathbf{u}_{v}-\boldsymbol{\lambda}_{v}(\mathbf{p}^{g}_{G})$
$\displaystyle\mathbf{l}_{\theta}+\boldsymbol{\lambda}_{\theta}(\mathbf{p}^{g}_{G})$
$\displaystyle\leq\boldsymbol{\theta}\leq\mathbf{u}_{\theta}-\boldsymbol{\lambda}_{\theta}(\mathbf{p}^{g}_{G})$
$\displaystyle\mathbf{l}_{p}$
$\displaystyle\leq\mathbf{p}^{g}_{G}\leq\mathbf{u}_{p},$
where
$\boldsymbol{\lambda}_{q}(\mathbf{p}^{g}_{G}),\boldsymbol{\lambda}_{v}(\mathbf{p}^{g}_{G}),\boldsymbol{\lambda}_{\theta}(\mathbf{p}^{g}_{G})$
are computed using (12) and $\boldsymbol{\lambda}_{g}(\mathbf{p}^{g}_{G})$ is
computed using (13). The problem can be seen to be implicit with regards to
probability constraints, because the effects of uncertainty are implicitly
included in the tightenings of some constraints by the non-negative
$\boldsymbol{\lambda}$’s.
## II Method
The solution of the potentially large nonlinear optimization problem in (17)
can be computed directly. However, the computation of the
$\boldsymbol{\lambda}$’s adds nonlinearities, because they depend on the
matrix
$\boldsymbol{\Gamma}=\partial\mathbf{x}/\partial\boldsymbol{\omega}=-(\partial\mathbf{f}/\partial\mathbf{x})^{-1}\in\mathbb{R}^{2N\times
2N}$, which involves a matrix inverse. Because of this, the problem in (17) is still
computationally difficult. Instead of solving (17) directly, an iterative
scheme, which computes an approximate solution of (17), by solving a sequence
of simpler problems has been proposed in [11]. Such an algorithm is stated as:
Algorithm 1
1: Inputs: $\mathbf{s}^{(0)}=\text{vec}(\mathbf{v}^{(0)},\boldsymbol{\theta}^{(0)},\mathbf{p}^{(0)},\mathbf{q}^{(0)})$; tolerances $0<\tau_{q}$, $0<\tau_{v}$, $0<\tau_{\theta}$, $0<\tau_{g}$; probabilities $0<\epsilon_{q}\leq 0.5$, $0<\epsilon_{v}\leq 0.5$, $0<\epsilon_{\theta}\leq 0.5$, $0<\epsilon_{g}\leq 0.5$; $0\prec\boldsymbol{\Sigma}$; $\boldsymbol{\lambda}^{(0)}_{q}=\boldsymbol{\lambda}^{(0)}_{v}=\boldsymbol{\lambda}^{(0)}_{\theta}=\boldsymbol{\lambda}^{(0)}_{g}=0$; $k=0$
2: for $k=0,1,\ldots$
3: Solve (17) with $\boldsymbol{\lambda}^{(k)}_{q}$, $\boldsymbol{\lambda}^{(k)}_{v}$, $\boldsymbol{\lambda}^{(k)}_{\theta}$ fixed and obtain $\mathbf{s}^{(k+1)}$
4: Compute $\boldsymbol{\lambda}^{(k+1)}_{q}$, $\boldsymbol{\lambda}^{(k+1)}_{v}$, $\boldsymbol{\lambda}^{(k+1)}_{\theta}$, $\boldsymbol{\lambda}^{(k+1)}_{g}$ using (12) and (13)
5: if $\|\boldsymbol{\lambda}^{(k+1)}_{q}-\boldsymbol{\lambda}^{(k)}_{q}\|_{\infty}\leq\tau_{q}$ and $\|\boldsymbol{\lambda}^{(k+1)}_{v}-\boldsymbol{\lambda}^{(k)}_{v}\|_{\infty}\leq\tau_{v}$ and $\|\boldsymbol{\lambda}^{(k+1)}_{\theta}-\boldsymbol{\lambda}^{(k)}_{\theta}\|_{\infty}\leq\tau_{\theta}$ and $\|\boldsymbol{\lambda}^{(k+1)}_{g}-\boldsymbol{\lambda}^{(k)}_{g}\|_{\infty}\leq\tau_{g}$ then
6: Stop. Return $\mathbf{s}^{(k+1)}$
7: else
8: $k=k+1$
9: end if
Note that Algorithm 1 stops when the changes in the $\boldsymbol{\lambda}$’s
become small. However, no criteria have yet been specified for when the
iteration can be expected to converge. We therefore analyze conditions under
which Algorithm 1 can be expected to converge.
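The loop structure of Algorithm 1 can be sketched as follows (a schematic only: `solve` stands in for the solver call in line 3, `tighten` for the computations (12)–(13), and the toy contraction used here is purely illustrative, chosen so that the loop provably converges):

```python
import numpy as np

def fixed_point_loop(solve, tighten, lam0, tau, maxiter=50):
    """Schematic of Algorithm 1: alternate between solving the tightened
    problem (line 3) and recomputing the tightenings (line 4) until the
    change in the tightenings falls below the tolerance tau (line 5)."""
    lam = lam0
    for k in range(maxiter):
        s = solve(lam)            # line 3: solve (17) with lambda fixed
        lam_next = tighten(s)     # line 4: recompute tightenings via (12)-(13)
        if np.linalg.norm(lam_next - lam, ord=np.inf) <= tau:  # line 5
            return s, lam_next, k + 1
        lam = lam_next
    return s, lam, maxiter

# Toy stand-ins (hypothetical): a contractive tightening map with factor 0.3
# guarantees convergence; the fixed point solves lam = 0.3 * (1 - lam).
solve = lambda lam: 1.0 - lam
tighten = lambda s: 0.3 * s
s_star, lam_star, iters = fixed_point_loop(solve, tighten,
                                           lam0=np.zeros(2), tau=1e-8)
```

In the actual algorithm, `solve` is the (expensive) nonlinear OPF solve and `tighten` the comparatively cheap tightening update, which is why each outer iteration is dominated by line 3.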
## III Analysis
The analysis of Algorithm 1 is based on the insight that this iterative
algorithm is a fixed point iteration. Therefore, in order to derive its
convergence conditions, we investigate the convergence conditions of the fixed
point iteration. Throughout this section, the problem from (17) is
reformulated as
$\displaystyle\underset{\mathbf{s}}{\text{minimize
}}C_{1}(\mathbf{s})\quad\text{subject to}$ (18)
$\displaystyle\mathbf{f}(\mathbf{s})=\mathbf{0},\quad\mathbf{h}(\mathbf{s};\boldsymbol{\lambda})\geq\mathbf{0}$
with
$\mathbf{s}=\text{vec}(\mathbf{v},\boldsymbol{\theta},\mathbf{p}^{g},\mathbf{q}^{g})$,
$C_{1}(\mathbf{s})=C_{0}(\mathbf{p}^{g}_{G})$,
$\boldsymbol{\lambda}=\boldsymbol{\lambda}(\mathbf{s})=\text{vec}(\boldsymbol{\lambda}_{q},\boldsymbol{\lambda}_{v},\boldsymbol{\lambda}_{\theta},\boldsymbol{\lambda}_{g})$,
$\mathbf{h}(\mathbf{s};\boldsymbol{\lambda})\in\mathbb{R}^{m}$
(the $m=N_{L}+4N$ inequality constraints in (17)) and
$\mathbf{f}(\mathbf{s};\mathbf{d})=\mathbf{f}(\mathbf{s})$. Our analysis is
consistent with classical nonlinear programming sensitivity results [18, 19],
but differs because of the fixed-point component of the algorithm. In
classical nonlinear programming, $\boldsymbol{\lambda}$ is taken as an
independent perturbation parameter, with the focus on the sensitivities of
$\mathbf{s}$ with respect to $\boldsymbol{\lambda}$. However, because the
(implicit) chance-constrained problem also contains the dependence
$\boldsymbol{\lambda}=\boldsymbol{\lambda}(\mathbf{s})$, we must also account
for the interplay of $\boldsymbol{\lambda}$ and $\mathbf{s}$. Recall
that a fixed point, say $\boldsymbol{\lambda}^{*}$, of a continuous
function/mapping, say $M(\boldsymbol{\lambda})$, can be
characterized by the two conditions
$\underset{\text{Condition
1}}{M(\boldsymbol{\lambda}^{*})=\boldsymbol{\lambda}^{*}},\quad\underset{\text{Condition
2}}{\|\partial M(\boldsymbol{\lambda}^{*})\big{/}\partial\boldsymbol{\lambda}\|_{2}\in[0,1)}$
(19)
(see for instance [20, Sec. 10.1] and [21, Sec. 6.3] on fixed points). These
two conditions generalize the scalar fixed-point (FP) notion to vector-valued
functions. Condition 1 is the definition of a FP: a point that remains
unchanged under the mapping. Condition 2 is a sufficient condition for the
fixed-point iteration to converge [20, Theorem 10.6]; it implies that repeated
applications of the mapping contract toward a stable point, which is why such
a mapping is sometimes called a contraction [20, Theorem 10.6]. Note that line
5 of Algorithm 1 checks whether the differences in successive constraint
tightenings are small, i.e., it is a numerical check of Condition 1. We further
analyze properties of Condition 2, which may be useful for improving the
convergence behavior of the algorithm by providing insight into re-scaling
certain inputs. To investigate the relation between the variables in Algorithm
1 more closely, denote by $\mathbf{s}_{M}(\boldsymbol{\lambda}^{(k)})$ the
mapping in line 3 that determines $\mathbf{s}^{(k+1)}$ from
$\boldsymbol{\lambda}^{(k)}$ (i.e.,
$\mathbf{s}^{(k+1)}=\mathbf{s}_{M}(\boldsymbol{\lambda}^{(k)})$), and by
$\boldsymbol{\lambda}_{M}(\mathbf{s}^{(k+1)})$ the operation in line 4 that
determines $\boldsymbol{\lambda}^{(k+1)}$ from $\mathbf{s}^{(k+1)}$ (i.e.,
$\boldsymbol{\lambda}^{(k+1)}=\boldsymbol{\lambda}_{M}(\mathbf{s}^{(k+1)})$).
Then the statements in the algorithm recursively define the next iterates,
starting from $k=0$, $\mathbf{s}^{(k)}$, $\boldsymbol{\lambda}^{(k)}$, so
that,
$\displaystyle\mathbf{s}^{(k+1)}$
$\displaystyle=\mathbf{s}_{M}(\boldsymbol{\lambda}^{(k)}),\quad$
$\displaystyle\boldsymbol{\lambda}^{(k+1)}$
$\displaystyle=\boldsymbol{\lambda}_{M}(\mathbf{s}^{(k+1)})$
$\displaystyle=\mathbf{s}_{M}(\boldsymbol{\lambda}_{M}(\mathbf{s}^{(k)})),\quad$
$\displaystyle=\boldsymbol{\lambda}_{M}(\mathbf{s}_{M}(\boldsymbol{\lambda}^{(k)})).$
This means that there is a mapping that generates $\mathbf{s}^{(k+1)}$ from
$\mathbf{s}^{(k)}$ and one that generates $\boldsymbol{\lambda}^{(k+1)}$ from
$\boldsymbol{\lambda}^{(k)}$. In particular, if
$\boldsymbol{\lambda}^{(k)}\to\boldsymbol{\lambda}^{*}$ then
$\mathbf{s}^{(k)}\to\mathbf{s}^{*}$ and vice versa. Our analysis is based on
the observation that if Condition 2 in (19) holds, then the iterates converge
to a fixed point $\boldsymbol{\lambda}^{*}$ (and correspondingly $\mathbf{s}^{*}$).
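As a one-dimensional illustration of the two conditions in (19), consider the standard textbook example $M(\lambda)=\cos(\lambda)$ (not specific to the OPF setting): its fixed point $\lambda^{*}\approx 0.739$ satisfies $|M'(\lambda^{*})|=|\sin\lambda^{*}|<1$, so the iteration converges:

```python
import math

def fixed_point(M, lam, iters=100):
    """Simple scalar fixed-point iteration lam <- M(lam)."""
    for _ in range(iters):
        lam = M(lam)
    return lam

lam_star = fixed_point(math.cos, 0.0)
# Condition 1: M(lam*) = lam*.  Condition 2: |M'(lam*)| = |sin(lam*)| < 1.
assert abs(math.cos(lam_star) - lam_star) < 1e-9
assert abs(math.sin(lam_star)) < 1.0
```

The same mechanism underlies Algorithm 1, with the scalar map replaced by the vector-valued composition $\boldsymbol{\lambda}_{M}(\mathbf{s}_{M}(\cdot))$.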
First, we describe the basic properties of a solution to (18), which enables
the use of results from nonlinear programming later on, and also the
establishing of reasonable assumptions.
### III-A Basic Conditions
A solution to the nonlinear programming problem (18) is characterized by a set
of conditions for the Lagrangian:
$L(\mathbf{s},\boldsymbol{\mu},\boldsymbol{\rho};\boldsymbol{\lambda})=C_{1}(\mathbf{s})+\boldsymbol{\mu}^{\top}\mathbf{f}(\mathbf{s})+\boldsymbol{\rho}^{\top}\mathbf{h}(\mathbf{s};\boldsymbol{\lambda}),$
where $\boldsymbol{\mu}\in\mathbb{R}^{2N}$ and
$\boldsymbol{\rho}\geq\mathbf{0}\in\mathbb{R}^{m}$. Subsequently, the Karush-
Kuhn-Tucker (KKT) [22, 23] optimality conditions are the set of nonlinear
conditions that define a solution of (18):
$\displaystyle\frac{\partial}{\partial\mathbf{s}}L(\mathbf{s},\boldsymbol{\mu},\boldsymbol{\rho};\boldsymbol{\lambda})=\mathbf{0},\quad\mathbf{f}(\mathbf{s})=\mathbf{0},\quad\mathbf{h}(\mathbf{s},\boldsymbol{\lambda})\geq\mathbf{0},$ (20)
$\displaystyle\boldsymbol{\rho}_{i}\mathbf{h}(\mathbf{s},\boldsymbol{\lambda})_{i}=0,\quad i=1,\ldots,m,\qquad\boldsymbol{\rho}\geq\mathbf{0}.$
The set of active inequality constraints is defined as
$\mathcal{A}(\mathbf{s})=\\{i:\mathbf{h}(\mathbf{s},\boldsymbol{\lambda})_{i}=0\\}.$
A solution to (20) can be found when the columns of the constraint
Jacobians are linearly independent, i.e., when
$\left[\quad\frac{\partial}{\partial\mathbf{s}}\mathbf{f}(\mathbf{s}),\quad\frac{\partial}{\partial\mathbf{s}}\mathbf{h}(\mathbf{s},\boldsymbol{\lambda})_{i}\quad\right],\quad
i\in\mathcal{A}(\mathbf{s}),$ (21)
are linearly independent. Moreover, second order conditions (which state that
the Lagrangian Hessian is positive definite in the nullspace of the constraint
derivatives) ensure strict local optimality of a KKT point. Finally, if the
active set is unchanged in a small neighborhood around $\boldsymbol{\lambda}$,
then changes in $\mathbf{s}$ with respect to changes in $\boldsymbol{\lambda}$
are continuous. Summarizing, we make the following three assumptions
(Analysis):
A.1: Linear independence of the constraint Jacobians (21)
A.2: Second order sufficient conditions hold for (18)
A.3: Strict complementary slackness: $\boldsymbol{\rho}_{i}>0,\ i\in\mathcal{A}(\mathbf{s})$.
The first two assumptions state that a local minimum for the problem in (18)
exists. The third ensures continuity of derivatives, such as
$\frac{\partial\mathbf{s}}{\partial\boldsymbol{\lambda}}$ and
$\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}$. We further
assume access to a solver that can find a local minimum when it exists.
Because of the continuity of partial derivatives the active set at a (local)
solution does not change. A numerical investigation of a zig-zag behavior of
Algorithm 1 due to a disconnected feasible set can be found in [15]. Note that
when changes in the constraint tightenings are not abrupt, but smooth,
assuming continuity is reasonable. We note that [15, Sec. V.A]
empirically observed that in cases where the tightenings are small (relative to
other terms) the algorithm converged more frequently on linear programs. The
analysis is divided into three parts, using the notation
$(\boldsymbol{\lambda})_{n}=\lambda_{n}$ (the $n^{\text{th}}$ element):
Part I: Representation of $\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}$
Part II: Sensitivities $\frac{\partial\mathbf{s}}{\partial\lambda_{n}}$ and $\frac{\partial^{2}\mathbf{f}}{\partial\lambda_{n}\partial\mathbf{x}}$
Part III: Bound on $\left\|\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}\right\|_{2}$ and Implications
Since the goal is to deduce properties of the quantity
$\|\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}\|$
(which corresponds to the left-hand side in Condition 2, cf. (19)), a
representation of
$\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}$ is
needed first; it is derived in Part I. Subsequently, the representation of
$\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}$
depends on changes in variables and constraints, which are developed as the
sensitivities $\frac{\partial\mathbf{s}}{\partial\lambda_{n}}$ and
$\frac{\partial^{2}\mathbf{f}}{\partial\lambda_{n}\partial\mathbf{x}}$ in Part
II. Ultimately, Part III uses the previous outcomes to deduce
$\left\|\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}\right\|_{2}$
and implications. To develop a representation for
$\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}$ we
analyze the expression
$(\boldsymbol{\lambda}_{M}(\mathbf{s}(\boldsymbol{\lambda})))_{r}=z_{r}\|\mathbf{e}^{\top}_{r}\boldsymbol{\Gamma}(\mathbf{s}(\boldsymbol{\lambda}))\boldsymbol{\Sigma}\|_{2},$
which is (12) with explicit dependencies on $\mathbf{s}$ and
$\boldsymbol{\lambda}$. (The corresponding conditions for (13) are derived in
a similar way, with an additional constant bounding the derivatives of the
line flow constraints:
$\|\partial\mathbf{g}\big{/}\partial\mathbf{x}\|_{2}\leq K_{g}$.)
### III-B Part I: Representation of
$\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}$
In this section we first make the relation between $\mathbf{s}$ and
$\boldsymbol{\lambda}$ explicit. Thus the $\boldsymbol{\Gamma}$ matrix is
represented as a function of $\mathbf{s}$ and $\boldsymbol{\lambda}$ i.e.,
$\boldsymbol{\Gamma}(\mathbf{s}(\boldsymbol{\lambda}))=-\mathbf{J}(\mathbf{s}(\boldsymbol{\lambda}))^{-1},$
where
$\mathbf{J}(\mathbf{s}(\boldsymbol{\lambda}))=\mathbf{J}=\partial\mathbf{f}\big{/}\partial\mathbf{x}$
(cf. (14)). For notation, we will be using the following definition
$\mathbf{w}_{r}(\mathbf{s}(\boldsymbol{\lambda}))\equiv\boldsymbol{\Sigma}\boldsymbol{\Gamma}(\mathbf{s}(\boldsymbol{\lambda}))^{\top}\mathbf{e}_{r},$
(22)
and write
$(\boldsymbol{\lambda}_{M}(\mathbf{s}(\boldsymbol{\lambda})))_{r}=z_{r}[\mathbf{w}_{r}(\mathbf{s}(\boldsymbol{\lambda}))^{\top}\mathbf{w}_{r}(\mathbf{s}(\boldsymbol{\lambda}))]^{1/2}.$
Because Condition 2 in (19) involves partial derivatives, we take the partial
derivative with respect to $\lambda_{n}$ (the ‘$n$th’ tightening), so that
$\frac{\partial}{\partial\lambda_{n}}(\boldsymbol{\lambda}_{M}(\mathbf{s}(\boldsymbol{\lambda})))_{r}=\frac{z_{r}^{2}}{(\lambda_{M})_{r}}\left[\mathbf{w}_{r}^{\top}\frac{\partial\mathbf{w}_{r}}{\partial\lambda_{n}}\right],$
denoting
$(\boldsymbol{\lambda}_{M}(\mathbf{s}(\boldsymbol{\lambda})))_{r}=(\lambda_{M})_{r}$
on the right-hand side. This expression determines the form of the matrix
of partial derivatives. For indices $1\leq r,n\leq 2N$, the
elements of the matrix of _sensitivities_ with respect to the vector
$\boldsymbol{\lambda}$ are
$\left(\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}\right)_{rn}=\frac{z_{r}^{2}}{(\lambda_{M})_{r}}\left[\mathbf{w}_{r}^{\top}\frac{\partial\mathbf{w}_{r}}{\partial\lambda_{n}}\right].$
(23)
Note that since $\mathbf{w}_{r}^{\top}$ is a row vector and
$\frac{\partial\mathbf{w}_{r}}{\partial\lambda_{n}}$ is a column vector the
entries in (23) can be written as
$\mathbf{w}_{r}^{\top}\frac{\partial\mathbf{w}_{r}}{\partial\lambda_{n}}=\left\|\mathbf{w}_{r}\right\|_{2}\left\|\frac{\partial\mathbf{w}_{r}}{\partial\lambda_{n}}\right\|_{2}\cos(\phi_{rn}),$
(24)
where $\phi_{rn}$ represents the angle between $\mathbf{w}_{r}$ and
$\frac{\partial\mathbf{w}_{r}}{\partial\lambda_{n}}$. Since
$(\lambda_{M})_{r}=z_{r}\left\|\mathbf{w}_{r}\right\|_{2}$, the
elements of the matrix in (23) are
$\frac{z^{2}_{r}}{(\lambda_{M})_{r}}\left[\mathbf{w}_{r}^{\top}\frac{\partial\mathbf{w}_{r}}{\partial\lambda_{n}}\right]=z_{r}\left\|\frac{\partial\mathbf{w}_{r}}{\partial\lambda_{n}}\right\|_{2}\cos(\phi_{rn}).$
From (22) it holds that
$\left\|\frac{\partial\mathbf{w}_{r}}{\partial\lambda_{n}}\right\|_{2}=\left\|\boldsymbol{\Sigma}\frac{\partial\boldsymbol{\Gamma}^{\top}}{\partial\lambda_{n}}\mathbf{e}_{r}\right\|_{2}$,
which yields the subsequent inequalities
$\left(\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}\right)_{rn}\leq|z_{r}|\left\|\boldsymbol{\Sigma}\right\|_{2}\left\|\frac{\partial\boldsymbol{\Gamma}^{\top}}{\partial\lambda_{n}}\mathbf{e}_{r}\right\|_{2}.$
(25)
Note that $z_{r}$ and $\boldsymbol{\Sigma}$ are constants, with upper bounds
$|z_{r}|\leq\underset{1\leq r\leq 2N}{\max}|z_{r}|\equiv K_{1}$ and
$\|\boldsymbol{\Sigma}\|_{2}\leq K_{2}$. Therefore, a bound for
$\left\|\frac{\partial\boldsymbol{\Gamma}^{\top}}{\partial\lambda_{n}}\mathbf{e}_{r}\right\|_{2}$
fully specifies (25), and thus the elements of
$\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}$.
### III-C Part II: Sensitivities
$\frac{\partial\mathbf{s}}{\partial\lambda_{n}}$ and
$\frac{\partial^{2}\mathbf{f}}{\partial\lambda_{n}\partial\mathbf{x}}$
In order to derive a bound on
$\|\frac{\partial\boldsymbol{\Gamma}^{\top}}{\partial\lambda_{n}}\mathbf{e}_{r}\|_{2}=\|\mathbf{e}^{\top}_{r}\frac{\partial\boldsymbol{\Gamma}}{\partial\lambda_{n}}\|_{2}$,
recall from (14) that $\boldsymbol{\Gamma}=-\mathbf{J}^{-1}$ (the sign is immaterial for the norm bounds below).
Therefore, note that we can make use of the identity
$\frac{\partial}{\partial\lambda_{n}}\mathbf{J}^{-1}=-\mathbf{J}^{-1}\left(\frac{\partial}{\partial\lambda_{n}}\mathbf{J}\right)\mathbf{J}^{-1}.$
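This identity can be verified numerically with a finite-difference check on a small random matrix (an illustrative sanity check, not part of the derivation; the matrices are arbitrary well-conditioned examples):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 4, 1e-6
J = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # well-conditioned example
dJ = rng.standard_normal((n, n))                     # direction dJ/dlambda_n

# Analytic derivative of the inverse: d(J^{-1}) = -J^{-1} dJ J^{-1}
Jinv = np.linalg.inv(J)
analytic = -Jinv @ dJ @ Jinv

# Central finite difference of the inverse along the same direction
fd = (np.linalg.inv(J + h * dJ) - np.linalg.inv(J - h * dJ)) / (2 * h)
assert np.allclose(analytic, fd, atol=1e-6)
```
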
Specifically, defining the vector
$\mathbf{j}^{\top}_{r}\equiv\mathbf{e}^{\top}_{r}\mathbf{J}^{-1}$ then
$\left\|\mathbf{e}^{\top}_{r}\frac{\partial\mathbf{J}^{-1}}{\partial\lambda_{n}}\right\|_{2}=\left\|\mathbf{j}^{\top}_{r}\frac{\partial\mathbf{J}}{\partial\lambda_{n}}\mathbf{J}^{-1}\right\|_{2}\leq\left\|\mathbf{J}^{-1}\right\|_{2}\left\|\mathbf{j}^{\top}_{r}\frac{\partial\mathbf{J}}{\partial\lambda_{n}}\right\|_{2}$
(26)
With a bound $\left\|\mathbf{J}^{-1}\right\|_{2}\leq
K_{\Gamma}$ (from (16)), it remains to derive an upper bound on
$\left\|\mathbf{j}^{\top}_{r}\frac{\partial\mathbf{J}}{\partial\lambda_{n}}\right\|_{2}$.
Next we describe the essential components for such a bound, based on the
elements of the $2^{\text{nd}}$ derivative matrices
$\frac{\partial}{\partial\lambda_{n}}\mathbf{J}(\mathbf{s}(\boldsymbol{\lambda}))=\frac{\partial}{\partial\lambda_{n}}\left(\frac{\partial\mathbf{f}}{\partial\mathbf{x}}\right)=\left[\begin{array}{ccc}\frac{\partial^{2}\mathbf{f}}{\partial\lambda_{n}\partial\mathbf{q}^{g}_{G}}&\frac{\partial^{2}\mathbf{f}}{\partial\lambda_{n}\partial\mathbf{v}_{L}}&\frac{\partial^{2}\mathbf{f}}{\partial\lambda_{n}\partial\boldsymbol{\theta}}\end{array}\right].$
The elements of
$\frac{\partial}{\partial\lambda_{n}}\left(\frac{\partial\mathbf{f}}{\partial\mathbf{x}}\right)$
can be computed from (15) once
$\frac{\partial}{\partial\lambda_{n}}v_{i}\equiv\partial_{n}v_{i},\quad\text{and}\quad\frac{\partial}{\partial\lambda_{n}}\theta_{i}\equiv\partial_{n}\theta_{i},$
(27)
for $1\leq n\leq 2N$, $1\leq i\leq N$ are known. This is the case, because
$\frac{\partial}{\partial\lambda_{n}}(v_{i}c_{ki})$ and
$\frac{\partial}{\partial\lambda_{n}}(v_{i}d_{ki})$, for $1\leq k\leq N$, (in
e.g., (15)) can be computed from these quantities. In order to develop
expressions for the derivatives in (27) we use the vector variable
$\mathbf{s}$ (from (18)), which contains the elements $v_{i}$ and $\theta_{i}$.
Subsequently, results from classical nonlinear programming theory, based on
assumptions A.1–A.3, ensure the existence of a parametrized solution to (18)
with dependence on $\boldsymbol{\lambda}$, defined by
$\mathbf{a}(\boldsymbol{\lambda})\equiv\left[\>\>\mathbf{s}(\boldsymbol{\lambda})^{\top}\>\>\boldsymbol{\mu}(\boldsymbol{\lambda})^{\top}\>\>\boldsymbol{\rho}(\boldsymbol{\lambda})^{\top}\>\>\right]^{\top}.$
Selecting a set of rows in $\mathbf{a}(\boldsymbol{\lambda})$ extracts
$\mathbf{s}(\boldsymbol{\lambda})$ and thus also $v_{i}$ and $\theta_{i}$. The
derivative of $\mathbf{a}(\boldsymbol{\lambda})$ w.r.t. $\boldsymbol{\lambda}$
can be computed from the KKT conditions
$\left[\begin{array}[]{c}\frac{\partial}{\partial\mathbf{s}}L(\mathbf{a}(\boldsymbol{\lambda}),\boldsymbol{\lambda})\\\
\mathbf{f}(\mathbf{s}(\boldsymbol{\lambda}))\\\
\boldsymbol{\rho}_{i}\mathbf{h}(\mathbf{s}(\boldsymbol{\lambda}),\boldsymbol{\lambda})_{i}\end{array}\right]\equiv\mathbf{G}(\mathbf{a}(\boldsymbol{\lambda}),\boldsymbol{\lambda})=\mathbf{0},\quad
i\in\mathcal{A}(\mathbf{s}).$ (28)
From (28) it holds that
$\frac{\partial}{\partial\lambda_{n}}\mathbf{G}(\mathbf{a}(\boldsymbol{\lambda}),\boldsymbol{\lambda})=\frac{\partial\mathbf{G}}{\partial\mathbf{a}}\frac{\partial\mathbf{a}}{\partial\lambda_{n}}+\frac{\partial\mathbf{G}}{\partial\lambda_{n}}=\mathbf{0},$
and
$\frac{\partial\mathbf{a}}{\partial\lambda_{n}}=-\big{[}\frac{\partial\mathbf{G}}{\partial\mathbf{a}}\big{]}^{-1}\frac{\partial\mathbf{G}}{\partial\lambda_{n}}$.
Thus $\frac{\partial\mathbf{s}}{\partial\lambda_{n}}$ is obtained by
extracting elements from $\frac{\partial\mathbf{a}}{\partial\lambda_{n}}$. The
existence of the inverse, and hence of the derivatives of $\mathbf{a}$, is
guaranteed by [18, Theorem 3.2]. Because the derivatives are finite, a bound of
the form
$\big{\|}\frac{\partial\mathbf{a}}{\partial\lambda_{n}}\big{\|}_{2}\leq K_{a}$
exists, which implies
$\big{\|}\frac{\partial\mathbf{s}}{\partial\lambda_{n}}\big{\|}_{2}\leq
K_{a}$,
$\big{\|}\frac{\partial\mathbf{x}}{\partial\lambda_{n}}\big{\|}_{2}\leq K_{a}$
and $|\partial_{n}v_{i}|\leq K_{a}$, $|\partial_{n}\theta_{i}|\leq K_{a}$.
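The sensitivity computation $\frac{\partial\mathbf{a}}{\partial\lambda_{n}}=-\big{[}\frac{\partial\mathbf{G}}{\partial\mathbf{a}}\big{]}^{-1}\frac{\partial\mathbf{G}}{\partial\lambda_{n}}$ amounts to one linear solve per tightening. A small generic sketch of this linear-algebra step (the Jacobians here are hypothetical $2\times 2$ stand-ins, not an actual KKT system):

```python
import numpy as np

def sensitivity(dG_da, dG_dlam_n):
    """Solve (dG/da) * (da/dlam_n) = -dG/dlam_n for the parametric
    sensitivity da/dlam_n, which exists under assumptions A.1-A.3."""
    return np.linalg.solve(dG_da, -dG_dlam_n)

# Hypothetical KKT-system Jacobian and perturbation direction; lambda
# enters only one (active) inequality constraint, hence the unit vector.
dG_da = np.array([[2.0, 1.0], [1.0, 3.0]])
dG_dlam_n = np.array([0.0, 1.0])
da = sensitivity(dG_da, dG_dlam_n)   # da = [0.2, -0.4]
```

In practice one factorization of $\partial\mathbf{G}/\partial\mathbf{a}$ can be reused across all $2N$ right-hand sides $\partial\mathbf{G}/\partial\lambda_{n}$.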
Having deduced bounds on the sensitivities $\partial_{n}v_{i}$ and
$\partial_{n}\theta_{i}$ now the partial derivatives of
$\frac{\partial\mathbf{f}}{\partial\mathbf{x}}$ can be analyzed. Specifically,
the partial derivatives of
$\frac{\partial}{\partial\lambda_{n}}\left(\frac{\partial\mathbf{f}}{\partial\mathbf{x}}\right)$
are computed from (15) from which
$\frac{\partial}{\partial\lambda_{n}}\left(\frac{\partial\mathbf{f}}{\partial\mathbf{q}^{g}_{G}}\right)=\mathbf{0}$.
Since the power flow equations hold with equality at a solution, we simplify
the summations in (15), using the definitions $p_{i}^{\text{net}}\equiv
p_{i}^{g}-p_{i}^{d}$ and $q_{i}^{\text{net}}\equiv q_{i}^{g}-q_{i}^{d}$, as, e.g.,
$\sum_{k=1}^{N}v_{k}c_{ik}=\frac{p_{i}^{\text{net}}}{v_{i}}$ (and
correspondingly for the other terms). Then for $1\leq i\leq N$ and $j=N+i$ as
well as $1\leq l\leq N_{L}$ and $1\leq t\leq N$
$\displaystyle\frac{\partial}{\partial\lambda_{n}}\left(\frac{\partial\mathbf{f}}{\partial\mathbf{v}_{L}}\right)_{i,l}$
$\displaystyle=\begin{cases}\partial_{n}(\frac{p_{i}^{\text{net}}}{v_{i}})+2\partial_{n}(c_{ii}v_{i})&\text{
if }i=[L]_{l}\\\ \partial_{n}(v_{i}c_{i[L]_{l}})&\text{ otherwise
}\end{cases}$
$\displaystyle\frac{\partial}{\partial\lambda_{n}}\left(\frac{\partial\mathbf{f}}{\partial\mathbf{v}_{L}}\right)_{j,l}$
$\displaystyle=\begin{cases}\partial_{n}(\frac{q_{i}^{\text{net}}}{v_{i}})+2\partial_{n}(d_{ii}v_{i})&\text{
if }i=[L]_{l}\\\ \partial_{n}(v_{i}d_{i[L]_{l}})&\text{ otherwise
}\end{cases}$
$\displaystyle\frac{\partial}{\partial\lambda_{n}}\left(\frac{\partial\mathbf{f}}{\partial\boldsymbol{\theta}}\right)_{i,t}$
$\displaystyle=\begin{cases}-\partial_{n}q_{i}^{\text{net}}&\text{ if }i=t\\\
v_{k}d_{ik}\partial_{n}v_{i}+v_{i}\partial_{n}(v_{k}d_{ik})&\text{ otherwise
}\end{cases}$
$\displaystyle\frac{\partial}{\partial\lambda_{n}}\left(\frac{\partial\mathbf{f}}{\partial\boldsymbol{\theta}}\right)_{j,t}$
$\displaystyle=\begin{cases}\partial_{n}q_{i}^{\text{net}}&\text{ if }i=t\\\
-(v_{k}c_{ik}\partial_{n}v_{i}+v_{i}\partial_{n}(v_{k}c_{ik}))&\text{
otherwise }\\\ \end{cases}$
These derivatives can be bounded with limits on
$\partial_{n}p^{\text{net}}_{i},\partial_{n}q^{\text{net}}_{i},\partial_{n}v_{i},\partial_{n}\theta_{i},\partial_{n}c_{ik},\partial_{n}d_{ik},\partial_{n}(v_{k}c_{ik})$,
and $\partial_{n}(v_{k}d_{ik})$ (i.e., all elements that appear in the
expression of the derivatives). Since all these individual elements are
bounded by $K_{a}$, and since the $c_{ik},d_{ik}$ terms are bounded, too (they
depend on $v_{i},\theta_{i}$), the maximum element will be bounded by a
constant, as well. Denote this constant by $K_{x}$, then
$\underset{1\leq i,j\leq
2N}{\text{max}}\left|\left(\frac{\partial}{\partial\lambda_{n}}\left(\frac{\partial\mathbf{f}}{\partial\mathbf{x}}\right)\right)_{ij}\right|\equiv
K_{x}.$
With this, since $\|\mathbf{j}_{r}\|_{2}\leq K_{\Gamma}$ and
$\frac{\partial^{2}\mathbf{f}}{\partial\lambda_{n}\partial\mathbf{x}}=\frac{\partial}{\partial\lambda_{n}}\mathbf{J}(\mathbf{s}(\boldsymbol{\lambda}))$,
we obtain the upper bound
$\displaystyle\left\|\mathbf{j}_{r}^{\top}\frac{\partial\mathbf{J}(\mathbf{s}(\boldsymbol{\lambda}))}{\partial\lambda_{n}}\right\|_{2}\leq
2K_{\Gamma}K_{x}N.$ (29)
### III-D Part III: Bound on
$\left\|\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}\right\|_{2}$
and Implications
The analysis from the previous section has implications for the convergence of
Algorithm 1. In particular, since
$\|\mathbf{J}(\mathbf{s}(\boldsymbol{\lambda}))^{-1}\|_{2}\leq K_{\Gamma}$
(cf. Section I-C2) a bound on the magnitude of
$\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}$ can be
found. By combining (25) and (29) one sees that
$\displaystyle\left(\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}\right)_{rn}$
$\displaystyle\leq|z_{r}|~{}\left\|\boldsymbol{\Sigma}\right\|_{2}~{}\left\|\mathbf{e}^{\top}_{r}(\partial\boldsymbol{\Gamma}\big{/}\partial\lambda_{n})\right\|_{2}$
$\displaystyle\leq|z_{r}|~{}\left\|\boldsymbol{\Sigma}\right\|_{2}~{}K_{\Gamma}~{}\left\|\mathbf{j}^{\top}_{r}(\partial\mathbf{J}\big{/}\partial\lambda_{n})\right\|_{2}$
$\displaystyle\leq|z_{r}|~{}\left\|\boldsymbol{\Sigma}\right\|_{2}~{}K_{\Gamma}~{}2~{}K_{\Gamma}~{}K_{x}~{}N,$
where the second inequality is based on (26) and the third on (29). Since
typically only $N_{A}\ll 4N+N_{L}$ inequality constraints are active at the
solution, and since $|z_{r}|\leq K_{1}$ the upper bound is
$\left\|\frac{\partial\boldsymbol{\lambda}_{M}}{\partial\boldsymbol{\lambda}}\right\|_{2}\leq
2\cdot\|\boldsymbol{\Sigma}\|_{2}\cdot K_{1}\cdot K^{2}_{\Gamma}\cdot
K_{x}\cdot N_{A}\cdot N.$ (30)
If the right hand side in (30) is less than one then Condition 2 in (19) of
the fixed point iteration is guaranteed to hold. The upper bound in (30) is
composed of the parts
$\|\boldsymbol{\Sigma}\|_{2},\quad K_{1},\quad K^{2}_{\Gamma}\cdot K_{x}\cdot
N_{A}\cdot N.$
Typically, there is some discretion in the values of $K_{1}$ and
$\|\boldsymbol{\Sigma}\|_{2}$, whereas $K^{2}_{\Gamma}\cdot K_{x}\cdot
N_{A}\cdot N$ is problem (network specification) dependent. For instance,
$K_{1}$ is reduced by relaxing the probability constraints in (6). Concretely,
if all $\epsilon\to 1/2$ then $K_{1}\to 0$. Secondly, _if
$\|\mathbf{\Sigma}\|_{2}$ is small enough (sufficiently small uncertainty)
then from (19) the fixed point iteration is guaranteed to converge_. Although
such bounds are typically conservative, this shows that if the uncertainty is
small enough, convergence is achieved. This insight may be useful when the
uncertainty can be scaled (e.g., using shorter model horizons) so that its
magnitude becomes smaller. These conclusions apply to essentially any variant
of Algorithm 1, as long as $\boldsymbol{\Sigma}$ is used in forming
$\boldsymbol{\lambda}$. Finally, the bound (30) includes the network dimension
$N$. In numerical experiments, we observe that setting
$\|\boldsymbol{\Sigma}\|_{2}\sim\mathcal{O}(1/N^{2})$ enables the solution of
large networks.
### III-E Computing a Bound Estimate
Before applying Algorithm 1 for multiple iterations, the bound in (30) can be
used qualitatively to assess the convergence behavior of the method. In
particular, at $k=0$, use $\mathbf{s}^{(1)}$ to compute parts of the right-hand
side in (30). $K_{\Gamma}$ can be computed from an LU factorization
of $\mathbf{J}$, or (sometimes preferably) from
$K_{\Gamma}=\sqrt{\|\boldsymbol{\Gamma}\|_{1}\|\boldsymbol{\Gamma}\|_{\infty}}$.
The computation of $K_{x}$ can be approximated by first estimating $K_{a}$
from a lower bound on the smallest singular value of
$\partial\mathbf{G}/\partial\mathbf{a}$ using [17] (as for (16)), calling such
an estimate $\hat{K}_{a}$. Because $\boldsymbol{\lambda}$ appears linearly in
the inequalities
$\mathbf{h}(\mathbf{s},\boldsymbol{\lambda})=\text{vec}(\mathbf{g}_{1}(\mathbf{s}),\mathbf{g}_{2}(\mathbf{s})+\boldsymbol{\lambda})$
(for appropriately defined
$\mathbf{g}_{1}(\mathbf{s}),\mathbf{g}_{2}(\mathbf{s})$) and in no other
constraints, the expression
$\partial\mathbf{G}/\partial\lambda_{n}=\text{vec}(\mathbf{0},\mathbf{e}_{n})$
simplifies. Subsequently, $K_{a}\sim 1/\hat{K}_{a}$ and $K_{x}=aK_{a}$, for a
constant $a>0$. Because the estimate for $K_{x}$ is typically small near
a solution, and because computing $K_{x}$ would incur extra computational
expense, we set its value to $K_{x}=1$. If the computed bound, say $B^{(0)}$,
exceeds a fixed threshold (say $\text{threshold}=10$), set
$\boldsymbol{\Sigma}^{\text{Scaled}}=(1/B^{(0)})\boldsymbol{\Sigma}$. This
ensures that
$\|\partial\lambda_{r}/\partial\boldsymbol{\lambda}\|_{2}\leq\|\boldsymbol{\Sigma}\|_{2}$
in subsequent iterations. Moreover, we include a scaling factor $\gamma_{g}$
in computing the line flow constraint tightenings from (13), i.e.,
$\bar{\lambda}_{r_{1}}=\gamma_{g}\lambda_{r_{1}}$; for $\gamma_{g}=1$ no
scaling is applied. An overview of the constants is given in Table I.
Furthermore, the quantity
$K_{P}=\|\boldsymbol{\Sigma}\|K^{2}_{\Gamma}N_{A}$ captures the problem's
sensitivity to relevant factors such as the magnitude of $\boldsymbol{\Gamma}$
and the number of active constraints. This quantity can be computed at the
start of Algorithm 1, is typically inexpensive to obtain, and is expected to
be small for guaranteed convergence.
TABLE I: Components in computing a bound estimate in (30)
Constant | Meaning | Computation
---|---|---
$K_{1}$ | Distribution bound ($z_{r}=F^{-1}(1-\epsilon_{r})$) | $\underset{r}{\textnormal{max}}|z_{r}|$
$K_{\Gamma}$ | Magnitude of inverse power flow Jacobian | $\sqrt{\|\boldsymbol{\Gamma}\|_{1}\|\boldsymbol{\Gamma}\|_{\infty}}$ or eq. (16)
$K_{x}$ | Magnitude of power flow Jacobian derivative w.r.t. $\lambda$ | $1$
$N_{A}$ | Number of active inequality constraints (w/o constraints on $\mathbf{y}$) | $\text{count}(\mathbf{h}(\mathbf{s}^{(1)})=0)$
$K_{P}$ | Problem sensitivity to inverse Jacobian and active constraints | $\|\boldsymbol{\Sigma}\|\cdot K^{2}_{\Gamma}\cdot N_{A}$
(If $\|\boldsymbol{\Sigma}\|$ is not known it can be bounded by $K_{2}=\sqrt{\|\boldsymbol{\Sigma}\|_{1}\|\boldsymbol{\Sigma}\|_{\infty}}$.)
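The estimates $K_{\Gamma}=\sqrt{\|\boldsymbol{\Gamma}\|_{1}\|\boldsymbol{\Gamma}\|_{\infty}}$ and $K_{2}=\sqrt{\|\boldsymbol{\Sigma}\|_{1}\|\boldsymbol{\Sigma}\|_{\infty}}$ rely on the standard inequality $\|A\|_{2}\leq\sqrt{\|A\|_{1}\|A\|_{\infty}}$, which is cheap to evaluate because it avoids a singular value computation. A quick numerical check on random matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    A = rng.standard_normal((6, 6))
    two = np.linalg.norm(A, 2)                       # spectral norm (expensive)
    bound = np.sqrt(np.linalg.norm(A, 1) *           # max abs column sum
                    np.linalg.norm(A, np.inf))       # max abs row sum
    assert two <= bound + 1e-12   # ||A||_2 <= sqrt(||A||_1 ||A||_inf)
```
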
## IV Numerical Experiments
This section describes numerical experiments on a set of standard IEEE test
problems. Typically, solving large chance-constrained optimization problems is
numerically very challenging. Even directly solving the reformulated problem
in (17) with state-of-the-art general-purpose methods quickly becomes
intractable. Therefore, for larger networks we apply Algorithm 1 to
approximately solve (17). Our implementation of Algorithm 1 uses Julia 1.1.0
[24] and the modeling libraries JuMP v0.18.6 [25] and StructJuMP [26]. In
particular, in order to solve the AC-OPF problem from (2) and its
modifications in Line 3 of Algorithm 1 we use the general purpose nonlinear
optimization solver IPOPT [27]. The stopping criteria in Algorithm 1 are set
as $\tau_{q}=1\times 10^{-3}$, $\tau_{v}=1\times 10^{-5}$,
$\tau_{\theta}=1\times 10^{-5}$, $\tau_{g}=1\times 10^{-3}$. Unless otherwise
stated, we set $\boldsymbol{\Sigma}=\sigma\mathbf{I}$, $\sigma=1/N^{2}$ and
$\epsilon=\text{vec}(\epsilon_{q},\epsilon_{v},\epsilon_{\theta},\epsilon_{g})=\text{vec}(0.1,0.1,0.1,0.2)$,
and $\gamma_{g}=1/N_{L}^{2}$. The initial values are computed as the midpoint
between the upper and lower bounds
$\mathbf{s}^{(0)}=1/2(\mathbf{u}+\mathbf{l})$. The maximum number of
iterations for Algorithm 1 is set as $\text{maxiter}=50$. The test cases are
summarized in Table II:
TABLE II: Test Cases. Column 2 contains the total number of nodes in the network, $N$, and their split among generators, $N_{G}$, and loads, $N_{L}$. The largest case contains more than $9000$ nodes.
Problem | $N=N_{G}+N_{L}$ | $N_{\text{line}}$ | Reference
---|---|---|---
IEEE 9 | $9=3+6$ | $9$ | [6]
IEEE 30 | $30=6+24$ | $41$ | [6, 28]
IEEE 118 | $118=54+64$ | $186$ | [6]
IEEE 300 | $300=69+231$ | $411$ | [6]
IEEE 1354pegase | $1354=260+1094$ | $1991$ | [29]
IEEE 2383wp | $2383=327+2056$ | $2896$ | [6]
IEEE 2869pegase | $2869=510+2359$ | $4582$ | [29]
IEEE 9241pegase | $9241=1445+7796$ | $16049$ | [29]
The numerical experiments are divided into three parts. Experiment I compares
solutions to (17) using a direct solver (i.e., IPOPT) or the iterative
approach from Algorithm 1. Experiment II reports results on the test problems
from Table II. Experiment III describes convergence tests according to our
analysis, outcomes for problems with perturbed loads, and an investigation of
joint and disjoint relative frequencies on the IEEE 9 problem.
### IV-A Experiment I
This experiment compares the optimal objective function values of solving (17)
directly (CC-Direct) using IPOPT or using Algorithm 1 with a fixed point
approach (CC-FP). Solving (17) in JuMP becomes computationally intractable
beyond $N=100$, because the inverse,
$\boldsymbol{\Gamma}(\mathbf{s})\in\mathbb{R}^{2N\times 2N}$, and its
derivatives are continuously recomputed. For this reason, we compare CC-Direct
and CC-FP for the 9 bus and 30 bus cases (which are both tractable by the
direct approach). For the purpose of this comparison we switched the line flow
tightening off (i.e., $\boldsymbol{\lambda}_{g}=\mathbf{0}$) while all other
tightenings are applied. In particular, we compare the solved objective
function values after perturbing the problem formulation slightly.
Specifically, we compare the computed objective function values as
$\epsilon_{v}$ varies over $\\{0.05,0.06,\ldots,0.2\\}$, which changes the
probability constraints. (Varying other parameters yields similar results.)
The outcomes for the 9 bus network are shown in Figure 1.
Figure 1: Comparison of optimal objective function values using CC-Direct
(direct approach) and CC-FP (fixed-point). The optimal objective function
values nearly coincide on this IEEE 9 bus case as the parameters
$\boldsymbol{\epsilon}_{v}$ (probability thresholds) vary.
Figure 2 contains the outcomes of applying both approaches on the 30 bus
network.
Figure 2: Comparison of optimal objective function values using CC-Direct
(direct approach) and CC-FP (fixed-point). Also on this IEEE 30 bus case the
optimal objective function values nearly coincide as the parameters
$\boldsymbol{\epsilon}_{v}$ (probability thresholds) vary.
Observe in both figures that the optimal objective function values, i.e.,
$C(\mathbf{s}^{*})$, increase as the values of $1-\epsilon_{v}$ increase. This
is because larger values of $1-\epsilon_{v}$ correspond to stricter chance
constraints. Importantly, observe that the objective function values are
virtually equivalent for these test cases, as the values of the blue and red
curves nearly exactly match up. However, the main advantage of Algorithm 1 is
that it scales to larger cases, too.
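The monotone trend in Figures 1 and 2 can be reproduced qualitatively in a few lines: a stricter threshold (smaller $\epsilon_{v}$, larger $1-\epsilon_{v}$) yields a larger Gaussian quantile, a larger tightening, a smaller effective bound, and hence a higher optimal cost. The bound, standard deviation, and quadratic toy cost below are invented for illustration and are not the paper's model.

```python
import numpy as np
from scipy.stats import norm

u, sigma = 1.1, 0.02                      # nominal upper bound, uncertainty std
eps = np.arange(0.05, 0.201, 0.01)        # thresholds varied as in Figure 1
u_eff = u - norm.ppf(1.0 - eps) * sigma   # tightened deterministic bounds

# Toy dispatch cost: quadratic penalty whenever the bound keeps the
# operating point below an (invented) unconstrained optimum of 1.2.
cost = (1.2 - np.minimum(1.2, u_eff)) ** 2
```

As $\epsilon_{v}$ grows, the tightening shrinks, `u_eff` increases, and `cost` decreases, which is exactly the direction of the curves in the figures.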
### IV-B Experiment II
Experiment II reports results of applying Algorithm 1 (CC-FP) and a direct
solver (CC-Direct) to the problem from (17) on the test problems in Table II.
For CC-FP the number of iterations, time and “optimal” objective values are
reported. For CC-Direct the “optimal” objective values and the time are
reported. Algorithm 1 converged on all of the reported problems. These
problems include large network instances, too. Observe in Table III that for
problems on which both solvers can be used, the optimal objective function
values are close to each other. However, for large problems only Algorithm 1,
based on a fixed point iteration, is practical. For consistency with Figs. 1
and 2 the line flow tightenings are switched off in the first two cases, while
all tightenings are applied on all other problems.
TABLE III: Comparison of Algorithm 1 (CC-FP) and a direct solution approach (CC-Direct) on a set of IEEE power networks. The last 6 rows in the “Objective” column for CC-Direct are labeled N/A, because this approach took computational times in excess of $5$ hours.
IEEE | CC-FP Its. | CC-FP Obj. (Cost/h) | CC-FP Time (s) | CC-Direct Obj. (Cost/h) | CC-Direct Time (s)
---|---|---|---|---|---
$9$ | 4 | 5297.928 | 0.0504 | 5297.928 | 0.3075
$30$ | 4 | 577.6665 | 0.1688 | 577.6592 | 148.36
$118$ | 3 | 129662.0 | 0.4710 | $\texttt{N/A}^{\dagger}$ | >5h
$300$ | 5 | 720090.3 | 4.3123 | $\texttt{N/A}^{\dagger}$ | >5h
$1354$ | 3 | 74069.38 | 75.039 | $\texttt{N/A}^{\dagger}$ | >5h
$2383$ | 3 | 1868551. | 236.27 | $\texttt{N/A}^{\dagger}$ | >5h
$2869$ | 3 | 133999.3 | 316.80 | $\texttt{N/A}^{\dagger}$ | >5h
$9241$ | 3 | 315912.6 | 3255.2 | $\texttt{N/A}^{\dagger}$ | >5h
### IV-C Experiment III
In this experiment we investigate three further important aspects of the
analysis and the algorithm. Before proceeding, we note that in Algorithm 1 the
constraint tightenings can become large enough that an adjusted upper bound
falls below the corresponding lower bound (or vice versa). Since this implies
an inconsistent constraint, we restore consistency by resetting the affected
constraint to its original bounds with a halved feasible interval (to reflect
some tightening). This correction mechanism increases the robustness of the
method, meaning that Algorithm 1 converges more often. Nevertheless, our
analysis indicates that a sufficiently small value of $\|\boldsymbol{\Sigma}\|$
guarantees convergence, although other network-dependent factors, such as
$\|\boldsymbol{\Gamma}\|$ or $N_{A}$ (the number of active constraints), are
also relevant for convergence on specific problems. Using all test problems
from Table II, we verify that a sufficiently small $\|\boldsymbol{\Sigma}\|$
guarantees convergence and that relatively large values of
$\|\boldsymbol{\Gamma}\|$ and $N_{A}$ typically result in non-convergence.
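The correction mechanism described above can be sketched as follows. The exact reset rule is our reading of the text, re-centering the constraint on the original interval's midpoint with half its width; the vectorized form and the variable names are our own.

```python
import numpy as np

def restore_consistency(l, u, lam_l, lam_u):
    """Apply tightenings; repair any bound pair that ends up crossed.

    l, u        : original lower/upper bounds
    lam_l, lam_u: nonnegative lower/upper tightening amounts
    """
    l_t = l + lam_l
    u_t = u - lam_u
    crossed = l_t > u_t                 # tightened bounds are inconsistent
    mid = 0.5 * (l + u)
    half = 0.25 * (u - l)               # half of the original feasible interval
    l_t = np.where(crossed, mid - half, l_t)
    u_t = np.where(crossed, mid + half, u_t)
    return l_t, u_t

# The first pair [0, 1], tightened by 0.6 on each side, crosses and is reset
# to [0.25, 0.75]; the second pair, tightened by 0.1, stays at [0.1, 0.9].
l_t, u_t = restore_consistency(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                               np.array([0.6, 0.1]), np.array([0.6, 0.1]))
# → l_t = [0.25, 0.1], u_t = [0.75, 0.9]
```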
#### IV-C1 Experiment III.A
This experiment varies the default value $\sigma=1/N^{2}$ as
$\sigma=\alpha/N^{2}$ with $\alpha=\\{1,10,10^{4},10^{6}\\}$. For a
sufficiently small value of $\sigma$ (here $\alpha=1$) all problems converge.
Moreover, lower values of $\|\boldsymbol{\Gamma}\|$ and $N_{A}$ correspond to
lower sensitivity of a problem to increases in $\|\boldsymbol{\Sigma}\|$.
Table IV reports the outcomes on all test problems, in accordance with the
theory.
TABLE IV: Convergence of Algorithm 1 when $\sigma=\alpha/N^{2}$ for $\alpha=\\{1,10,10^{4},10^{6}\\}$. Each cell lists $\sigma$ / $K_{P}$ / converged (Y or N). For sufficiently small values of $\sigma$ (the $\alpha=1$ column) all problems converged. Problems IEEE $\\{1354,2869\\}$ have the smallest $K_{P}$ values, indicating their tolerance to larger $\sigma$ values, while problem IEEE $300$ has the largest $K_{P}$ value in the $\alpha=1$ column and is viewed as more sensitive to non-convergence. As $\alpha$ increases, more problems become non-convergent; for $\alpha=10^{6}$ only IEEE $\\{1354,2869\\}$, which have the smallest $K_{P}$ values, still converge.
IEEE | $\alpha=1$ | $\alpha=10$ | $\alpha=10^{4}$ | $\alpha=10^{6}$
---|---|---|---|---
$9$ | 0.012 / 0.063 / Y | 0.12 / 0.63 / Y | 1.2e+02 / 6.3e+02 / Y | 1.2e+04 / 6.3e+04 / N
$30$ | 0.0011 / 0.08 / Y | 0.011 / 0.8 / N | 11 / 8e+02 / N | 1.1e+03 / 8e+04 / N
$118$ | 7.2e-05 / 0.045 / Y | 0.00072 / 0.45 / Y | 0.72 / 4.5e+02 / N | 72 / 4.5e+04 / N
$300$ | 1.1e-05 / 0.19 / Y | 0.00011 / 1.9 / N | 0.11 / 1.9e+03 / N | 11 / 1.9e+05 / N
$1354$ | 5.5e-07 / 0.0013 / Y | 5.5e-06 / 0.013 / Y | 0.0055 / 13 / Y | 0.55 / 1.3e+03 / Y
$2383$ | 1.8e-07 / 0.092 / Y | 1.8e-06 / 0.92 / Y | 0.0018 / 9.2e+02 / Y | 0.18 / 9.2e+04 / N
$2869$ | 1.2e-07 / 0.00078 / Y | 1.2e-06 / 0.0078 / Y | 0.0012 / 7.8 / Y | 0.12 / 7.8e+02 / Y
$9241$ | 1.2e-08 / 0.002 / Y | 1.2e-07 / 0.02 / Y | 0.00012 / 20 / Y | 0.012 / 2e+03 / N
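The constant $K_{P}$ in Table IV plays the role of a contraction modulus: the iteration converges when it is below one, and the ratio of successive iterate differences estimates it empirically. The affine toy map below is not the paper's fixed-point operator; it only illustrates the diagnostic.

```python
import numpy as np

def empirical_contraction(iterates):
    """Max ratio of successive iterate differences (contraction estimate)."""
    d = np.abs(np.diff(np.asarray(iterates, dtype=float)))
    return float(np.max(d[1:] / d[:-1]))

def run_map(K, x0=1.0, n=10):
    """Iterate the affine toy map x -> K*x + 0.3 (fixed point 0.3/(1-K) if K < 1)."""
    xs = [x0]
    for _ in range(n):
        xs.append(K * xs[-1] + 0.3)
    return xs

K_small = empirical_contraction(run_map(0.2))   # contractive: iteration converges
K_large = empirical_contraction(run_map(1.5))   # expansive: iteration diverges
```

In the same spirit, scaling $\sigma$ by $\alpha$ scales the contraction constant, which is why sufficiently large $\alpha$ eventually breaks convergence in Table IV.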
#### IV-C2 Experiment III.B
This experiment investigates outcomes on all test problems when loads are
perturbed (representing stressed system conditions). In particular, the
algorithm is applied to all problems with loads varying between $80\%$ and
$120\%$ of their original values. Figure 3 displays the outcomes, in which
optimal objective values relative to the original optimal value are compared.
Overall, the optimal objective values exhibit a linear relation with changes
in the loads.
Figure 3: Normalized optimal objective values (relative to the original
optimal value) when the nonzero loads at all buses in all systems are scaled
by factors between $0.8$ and $1.2$. Objective values of $0$ indicate that the
problem did not converge. All problems were solvable for load changes of up to
one percent of the original values, and some problems, such as IEEE 9 and
IEEE 118, were solvable with increases of up to $20\%$. Black squares
represent problems in which the corresponding classical ACOPF problem (without
a probability model) did converge on the same perturbed problem; this occurred
in only 4 out of 136 perturbed problems. Overall, the changes in loads affect
the optimal values linearly, and the chance-constrained models have similar
sensitivities to load changes as the classical ACOPF.
#### IV-C3 Experiment III.C
This experiment investigates the relation of joint and individual (disjoint)
probabilities. For the IEEE 9 bus case, we sample 500 multivariate random
vectors $\boldsymbol{\omega}^{p},\boldsymbol{\omega}^{q}$ with zero mean and
positive definite dense covariance. The voltage constraints on the generator
buses (buses 1, 2, and 3 in this 9 bus case) are deterministic: $v_{j}\leq
u_{j}$, $j=1,2,3$. The voltages at the remaining load buses are unbounded. The
problem is re-solved after random loads
$\mathbf{p}^{d}+\boldsymbol{\omega}^{p}$ and
$\mathbf{q}^{d}+\boldsymbol{\omega}^{q}$ are simulated. The joint relative
frequencies of the vector constraint $\mathbf{v}\leq\mathbf{u}$ (the constant
vector $\mathbf{u}$ has elements $u_{i}=1.1$) are displayed on the top of
Figure 4. The individual relative frequencies for each bus are at the bottom
of the figure.
Figure 4: Top: Relative frequencies of the number of elements of vector
$\mathbf{v}$ that satisfy $\mathbf{v}\leq\mathbf{u}$. The event that all 9
voltages simultaneously satisfy the inequality has a probability of 0.044.
Bottom: Relative frequencies with which each individual voltage satisfies its
constraint. The product of the individual satisfaction probabilities,
$\prod_{i=1}^{9}P(v_{i}\leq u_{i})$, is 0.007. In this case there is a factor
of about 6 (i.e., $0.044/0.007$) between enforcing joint and individual
constraints. However, enforcing joint probability constraints as part of an
optimization for large problems is typically not computationally feasible.
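The gap in Figure 4 is easy to replicate for a correlated Gaussian vector: with positive correlation, the joint satisfaction probability exceeds the product of the marginals. The equicorrelated covariance, common threshold, and sample size below are illustrative choices rather than the paper's data (the paper samples 500 vectors with a dense positive definite covariance; we use a larger synthetic sample for a stable estimate).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, rho = 9, 200_000, 0.8
cov = (1.0 - rho) * np.eye(n) + rho * np.ones((n, n))  # equicorrelated, PD
v = rng.multivariate_normal(np.zeros(n), cov, size=m)  # sampled "voltages"
u_bound = 0.25                                         # common upper bound

joint = np.mean(np.all(v <= u_bound, axis=1))   # joint relative frequency
marginals = np.mean(v <= u_bound, axis=0)       # per-component frequencies
product = float(np.prod(marginals))             # independence approximation
ratio = joint / product
```

With strong positive correlation the ratio is large (an order of magnitude here), echoing the factor of about 6 observed on the IEEE 9 case.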
## V Conclusion
This article develops a chance constrained AC optimal power flow model, in
which the objective function only depends on deterministic variables and
therefore enables immediate interpretation of the optimal function values. By
linearizing the stochastic variables, a deterministic nonlinear optimization
problem is described in lieu of the probabilistic one. Because solving the
reformulated optimization problem is computationally challenging, we analyze
the convergence criteria for a fixed point algorithm that solves a sequence of
modified AC optimal power flow problems and enables scaling to larger network
sizes. The analysis connects the model’s variance of the uncertainty and the
constraint probabilities to the convergence properties of the algorithm. In
numerical experiments, we compare the fixed point iteration to directly
solving the reformulated problem and test the method on IEEE problems,
including a network with over 9000 nodes. Admittedly, our bounds are quite
conservative in this version; however, this is the first attempt at proving
convergence of the approach. Since iteratively re-solving a sequence of
tractable optimization problems (obtained by holding specific nonlinear terms
fixed) has been very effective in a variety of power systems applications, our
analysis of such a fixed point method lays the groundwork for the convergence
criteria of future algorithms in this category.
## Acknowledgments
This work was supported by the U.S. Department of Energy, Office of Science,
Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357 at
Argonne National Laboratory and by NSF through award CNS-51545046. We
acknowledge fruitful discussion with Dr. Line Roald, who pointed out that
Algorithm 1 was observed to cycle between different points, and encouraged an
analysis of its convergence criteria. We also thank Eric Grimm, who helped in
carrying out parts of the numerical experiments in Section IV. Careful reading
and helpful suggestions of the editor and three anonymous reviewers markedly
improved the manuscript.
## References
* [1] H. Zhang and P. Li, “Chance constrained programming for optimal power flow under uncertainty,” _IEEE Transactions on Power Systems_ , vol. 26, no. 4, pp. 2417–2424, Nov 2011.
* [2] G. Li and X. P. Zhang, “Stochastic optimal power flow approach considering correlated probabilistic load and wind farm generation,” in _IET Conference on Reliability of Transmission and Distribution Networks (RTDN 2011)_ , Nov 2011, pp. 1–7.
* [3] M. Hojjat and M. H. Javidi, “Chance-constrained programming approach to stochastic congestion management considering system uncertainties,” _IET Generation, Transmission Distribution_ , vol. 9, no. 12, pp. 1421–1429, 2015.
* [4] K. Baker, E. Dall’Anese, and T. Summers, “Distribution-agnostic stochastic optimal power flow for distribution grids,” in _2016 North American Power Symposium (NAPS)_ , Sep. 2016, pp. 1–6.
* [5] M. Vrakopoulou, M. Katsampani, K. Margellos, J. Lygeros, and G. Andersson, “Probabilistic security-constrained ac optimal power flow,” in _2013 IEEE Grenoble Conference_ , June 2013, pp. 1–6.
* [6] R. D. Zimmerman, C. E. Murillo-Sánchez, and R. J. Thomas, “Matpower: Steady-state operations, planning, and analysis tools for power systems research and education,” _IEEE Transactions on Power Systems_ , vol. 26, no. 1, pp. 12–19, Feb 2011.
* [7] M. Lubin, Y. Dvorkin, and L. Roald, “Chance constraints for improving the security of ac optimal power flow,” _IEEE Transactions on Power Systems_ , vol. 34, no. 3, pp. 1908–1917, 2019.
* [8] D. Bienstock, M. Chertkov, and S. Harnett, “Chance-constrained optimal power flow: Risk-aware network control under uncertainty,” _Siam Review_ , vol. 56, no. 3, pp. 461–495, 2014.
* [9] D. Lee, K. Turitsyn, D. K. Molzahn, and L. A. Roald, “Feasible path identification in optimal power flow with sequential convex restriction,” _IEEE Transactions on Power Systems_ , vol. 35, no. 5, pp. 3648–3659, 2020.
* [10] V. Frolov, L. Roald, and M. Chertkov, “Cloud-ac-opf: Model reduction technique for multi-scenario optimal power flow via chance-constrained optimization,” in _2019 IEEE Milan PowerTech_ , 2019, pp. 1–6.
* [11] L. Roald and G. Andersson, “Chance-constrained ac optimal power flow: Reformulations and efficient algorithms,” _IEEE Transactions on Power Systems_ , vol. 33, no. 3, pp. 2906–2918, May 2018.
* [12] J. Schmidli, L. Roald, S. Chatzivasileiadis, and G. Andersson, “Stochastic ac optimal power flow with approximate chance-constraints,” in _2016 IEEE Power and Energy Society General Meeting (PESGM)_ , July 2016, pp. 1–5.
* [13] Y. Xu, M. Korkali, L. Mili, J. Valinejad, T. Chen, and X. Chen, “An iterative response-surface-based approach for chance-constrained ac optimal power flow considering dependent uncertainty,” _IEEE Transactions on Smart Grid_ , vol. 12, no. 3, pp. 2696–2707, 2021.
* [14] R. G. J. Miller, _Simultaneous Statistical Inference_. Springer-Verlag New York, 1981.
* [15] A. Roald, D. Molzahn, and A. F. Tobler, “Power system optimization with uncertainty and ac power flow: Analysis of an iterative algorithm,” in _10th IREP Symp. Bulk Power Syst. Dynamics Control._ , 2017.
* [16] D. Bienstock, _Electrical Transmission System Cascades and Vulnerability_. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2015. [Online]. Available: https://epubs.siam.org/doi/abs/10.1137/1.9781611974164
* [17] Y. Hong and C.-T. Pan, “A lower bound for the smallest singular value,” _Linear Algebra and its Applications_ , vol. 172, pp. 27 – 32, 1992. [Online]. Available: http://www.sciencedirect.com/science/article/pii/0024379592900164
* [18] A. V. Fiacco and J. Kyparisis, “Sensitivity analysis in nonlinear programming under second order assumptions,” in _Systems and Optimization_ , A. Bagchi and H. T. Jongen, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 1985, pp. 74–97.
* [19] A. V. Fiacco, “Nonlinear programming sensitivity analysis results using strong second order assumptions,” in _Numerical Optimization of Dynamic Systems_ , L. C. W. Dixon and e. G. P. Szegö, Eds. North-Holland, Amsterdam: Springer Berlin Heidelberg, 1980, pp. 327–348.
* [20] R. Burden and J. Faires, _Numerical Analysis_. Cengage Learning, 2010. [Online]. Available: https://books.google.com/books?id=zXnSxY9G2JgC
* [21] S. H. Strogatz, _Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering_. Addison-Wesley, 1994.
* [22] W. Karush, “Minima of functions of several variables with inequalities as side conditions,” Master’s thesis, Department of Mathematics, University of Chicago, Illinois, USA, 1939.
* [23] H. W. Kuhn and A. W. Tucker, “Nonlinear programming,” in _Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability_. Berkeley, Calif.: University of California Press, 1951, pp. 481–492. [Online]. Available: https://projecteuclid.org/euclid.bsmsp/1200500249
* [24] J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah, “Julia: A fresh approach to numerical computing,” _SIAM review_ , vol. 59, no. 1, pp. 65–98, 2017. [Online]. Available: https://doi.org/10.1137/141000671
* [25] I. Dunning, J. Huchette, and M. Lubin, “Jump: A modeling language for mathematical optimization,” _SIAM Review_ , vol. 59, no. 2, pp. 295–320, 2017.
* [26] M. Anitescu and G. C. Petra, “StructJuMP a block-structured optimization framework for jump,” https://github.com/StructJuMP/StructJuMP.jl (retrieved Mar. 27th, 2019), 2019.
* [27] A. Wächter and L. T. Biegler, “On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming,” _Math. Program._ , vol. 106, pp. 25–57, 2006.
* [28] O. Alsac and B. Stott, “Optimal load flow with steady-state security,” _IEEE Transactions on Power Apparatus and Systems_ , vol. PAS-93, no. 3, pp. 745–751, May 1974.
* [29] S. Fliscounakis, P. Panciatici, F. Capitanescu, and L. Wehenkel, “Contingency ranking with respect to overloads in very large power systems taking into account uncertainty, preventive, and corrective actions,” _IEEE Transactions on Power Systems_ , vol. 28, no. 4, pp. 4909–4917, Nov 2013.
Johannes Brust was a postdoctoral researcher in the Mathematics and
Computer Science Division at Argonne National Laboratory, IL (now Department
of Mathematics, University of California, San Diego). He received a M.Sc. in
financial engineering from Maastricht University and a Ph.D. in applied
mathematics from the University of California, Merced. His research is on
effective large scale computational methods applied to optimal power flow
problems.
Mihai Anitescu is a senior computational mathematician in the Mathematics
and Computer Science Division at Argonne National Laboratory and a professor
in the Department of Statistics at the University of Chicago. He obtained his
engineer diploma (electrical engineering) from the Polytechnic University of
Bucharest in 1992 and his Ph.D. in applied mathematical and computational
sciences from the University of Iowa in 1997. He specializes in the areas of
numerical optimization, computational science, numerical analysis and
uncertainty quantification in which he has published more than 100 papers in
scholarly journals and book chapters. He is on the editorial board of the SIAM
Journal on Optimization and he is a senior editor for Optimization Methods and
Software, he is a past member of the editorial boards of the Mathematical
Programming A and B, SIAM Journal on Uncertainty Quantification, and SIAM
Journal on Scientific Computing. He has been recognized for his work in
applied mathematics by his selection as a SIAM Fellow in 2019.
The submitted manuscript has been created by UChicago Argonne, LLC, Operator
of Argonne National Laboratory (“Argonne”). Argonne, a U.S. Department of
Energy Office of Science laboratory, is operated under Contract No. DE-
AC02-06CH11357. The U.S. Government retains for itself, and others acting on
its behalf, a paid-up nonexclusive, irrevocable worldwide license in said
article to reproduce, prepare derivative works, distribute copies to the
public, and perform publicly and display publicly, by or on behalf of the
Government. The Department of Energy will provide public access to these
results of federally sponsored research in accordance with the DOE Public
Access Plan. http://energy.gov/downloads/doe-public-accessplan
J. F. Ciprián-Sánchez (corresponding author), Tecnológico de Monterrey, School of Engineering and Science, Av. Lago de Guadalupe KM 3.5, Margarita Maza de Juárez, 52926 Cd López Mateos, Mexico. Email: <EMAIL_ADDRESS>
G. Ochoa-Ruiz and Miguel Gonzalez-Mendoza, Tecnológico de Monterrey, School of Engineering and Sciences, Av. Eugenio Garza Sada 2501, Monterrey, N.L., México 64849, Mexico. Email: <EMAIL_ADDRESS>
L. Rossi, Università di Corsica, Laboratoire Sciences Pour l’Environnement, Campus Grimaldi – BP 52 – 20250, Corti, France. Email: <EMAIL_ADDRESS>
# FIRe-GAN: A novel Deep Learning-based infrared-visible fusion method for wildfire imagery
Acknowledgment: This research is supported in part by Tecnológico de Monterrey and the Mexican National Council of Science and Technology (CONACYT). This research is part of the project 7817-2019 funded by the Jalisco State Council of Science and Technology (COECYTJAL).
J. F. Ciprián-Sánchez G. Ochoa-Ruiz M. Gonzalez-Mendoza L. Rossi
(Received: date / Accepted: date)
###### Abstract
Early wildfire detection is of paramount importance to avoid as much damage as
possible to the environment, properties, and lives. Deep Learning (DL) models
that can leverage both visible and infrared information have the potential to
display state-of-the-art performance, with lower false-positive rates than
existing techniques. However, most DL-based image fusion methods have not been
evaluated in the domain of fire imagery. Additionally, to the best of our
knowledge, no publicly available dataset contains visible-infrared fused fire
images. There is a growing interest in DL-based image fusion techniques due to
their reduced complexity. Motivated by this, we select three state-of-the-art,
DL-based image fusion techniques and evaluate them for the specific task of
fire image fusion, comparing their performance on selected metrics. Finally,
we present an extension to one of these methods, which we call _FIRe-GAN_ ,
that improves the generation of artificial infrared images and fused ones on
the selected metrics.
###### Keywords:
Image fusion · Fire · Wildfires · Deep Learning · Visible · Infrared
## 1 Declarations
### 1.1 Funding
This research is supported in part by Tecnologico de Monterrey and the Mexican
National Council of Science and Technology (CONACYT). This research is part of
the project 7817-2019 funded by the Jalisco State Council of Science and
Technology (COECYTJAL).
### 1.2 Availability of data and material
The image fusion methods by Li et al. Li18 and Ma et al. Ma19_GAN are
publicly available as Github repositories in Li18_github ; Ma19_GAN_github
respectively. The RGB-NIR dataset employed for pre-training the method by Zhao
et al. Zhao20 was developed by Brown et al. Brown11 and is publicly
available at RGB-NIRDataset . The Corsican Fire Database Toulouse17 is
available upon request to the University of Corsica at CorsicanFireDatabase .
### 1.3 Code availability
The code generated by the authors implementing _FIRe-GAN_ , an extended
version of the image fusion method proposed by Zhao et al. Zhao20 , is
available as an open-source Github repository at
https://github.com/JorgeFCS/Image-fusion-GAN.
## 2 Introduction
Wildfires can occur naturally or due to human activities and have the
potential to get out of control and have a significant impact on the
environment, properties, and lives. Recently, there have been several
wildfires of significant proportions worldwide, such as the Australian
wildfires of 2019 and 2020. CNN reported that these fires took the lives of
at least 28 people CNN_Australia . Another more recent example is the ongoing
wildfire season in California in the US. According to the BBC, as of September
17, 2020, 6.7 million acres have burned and more than 30 people have died
BBC_California . Early wildfire detection enabling technologies are thus
crucial in order to avoid as much damage as possible and to help firefighters
in their endeavors.
Vision-based fire detection techniques can be divided into visible or infrared
fire detection systems Yuan15 , according to the spectral range of the cameras
employed. In operative scenarios, visible image-based systems display
significant false alarm rates and missed detections. The latter is due to
constraints present in different situations, such as changing environmental
conditions and illumination Cetin13 . In contrast, infrared cameras can
perform flame detection in weak or no light conditions. Additionally, smoke is
transparent in these kinds of images. As such, it can be practical to employ
infrared-based systems to perform flame detection in both daytime and
nighttime conditions Yuan15 . There has been work on fire detection on the NIR
and LWIR infrared bands; however, it is also not trivial to detect fire on
infrared images, as they present problems such as thermal reflections and IR
blocking Cetin13 .
The fusion of visible and infrared images can be beneficial for improving the
robustness, accuracy, and reliability of fire detection systems Yuan15 .
Although there have been some approaches in this area Arrue2000 ; Martinez-de-
Dios08 , as well as for fire segmentation Nemalidinne18 , to the best of our
knowledge, DL-based visible-infrared fusion methods have not been tested for
the particular task of fire image fusion.
There is a growing interest in DL-based image fusion techniques. The latter is
due to their reduced complexity compared to methods in the multi-scale
transform and representation learning domains Li20 . In order to assess the
applicability of some of the most promising DL-based approaches in IR-visible
fusion to wildfire image fusion, we choose three state-of-the-art methods and
evaluate them on the particular task of fire image fusion. We also implement
extensions on one of them, generating the proposed _FIRe-GAN_ method to
improve its applicability to the Corsican Dataset, introduced in this paper.
The selected methods are as follows: the method proposed by Li et al. Li18 ,
the work by Ma et al. Ma19_GAN , and the architecture proposed by Zhao et al.
Zhao20 . We evaluate and compare these methods on the Corsican Fire Database
Toulouse17 using some of the most important metrics for assessing image
quality in the literature.
These state-of-the-art methods were selected because they present several
desirable features. The method by Li et al. Li18 uses a pre-trained VGG19
Deep Convolutional Neural Network (DCNN) as a part of its process for the
fusion of detail content. Since the authors employ only selected layers of the
network, no further training on new datasets (such as ours) is needed.
The method by Ma et al. Ma19_GAN represents, to the best of our knowledge,
the first approach towards image fusion through the use of Generative
Adversarial Networks (GANs). This technique has the advantage of being end-to-
end, which significantly reduces its implementation and training complexity.
The method by Zhao et al. Zhao20 is a GAN-based approach as well, with the
additional feature of being able to generate approximate infrared images from
visible ones. It is relevant to note that the type of infrared images (near-
infrared (NIR), short-wavelength (SWIR), mid-wavelength (MWIR), or long-
wavelength (LWIR)) that the model learns to generate depends on the training
set. It would learn to generate NIR images if those are the ones contained in
the training set, and so forth.
Finally, it is relevant to note that many visible-infrared fusion methods Li18
; Ma19_GAN ; Zhao20 ; Li20 ; Zhao20_decomposition ; Ma20_adversarial output
grayscale fused images, which means that the color information of the visible
image is lost. In the present paper, we present the _FIRe-GAN_ model, an
extended version of the method proposed by Zhao et al. Zhao20 , which allows
for the processing of higher resolution images and the generation of color
fused images as outputs. The latter is relevant due to color being one of the
most used features in visible image-based fire detection methods Yuan15 .
The main contributions of this article are as follows:
1. 1.
We provide a thorough analysis of existing DL-fusion methods for conventional
imagery.
2. 2.
We provide a quantitative demonstration of the feasibility of applying DL-
based fusion methods for infrared imagery from wildfires.
3. 3.
We introduce a novel artificial IR and fused image generator that has been
tested both in conventional and fire imagery.
We believe that these contributions can potentially boost developments in the
wildfire fighting community that makes use of visible and infrared fire
imagery to perform accurate and reliable wildfire detection and monitoring. It
must be noted that this work is part of a larger endeavour in which the
proposed fusion method plays only a small but vital role; the generation of
infrared images, a component of the system, is to be used in other related
research efforts and could prove useful for the research community at large.
The rest of this paper proceeds as follows. In Section 3 we present the three
compared methods, the evaluation metrics, and the datasets employed. Section 4
describes the experiments and shows the evaluation results and a quantitative
comparison of the selected methods. In Section 5 we provide a discussion on
the obtained results. Finally, Section 6 presents the conclusions and outlines
potential future work.
## 3 Methods and data
### 3.1 Evaluated methods
#### 3.1.1 Infrared and visible image fusion using a deep learning framework
This method, proposed by Li et al. Li18 employs a DL framework for the
generation of a single image that contains all the features present in visible
and infrared images.
First, the original images are decomposed into base parts and detail content.
Then, these base parts are fused through weight-averaging. For the detail
parts, the authors employ a DCNN network to extract multi-layer features
Li18_github . With the extracted features, L1 normalization coupled with a
weighted average strategy is employed in order to generate candidates for the
fused detail content. Afterwards, a max selection strategy is used to obtain
the final fused detail content. Finally, the final fused image is constructed
by combining the fused detail and base contents.
It is worth noting that the authors employ the fixed VGG19 network pre-trained
in ImageNet, for the extraction of the multi-layer features. We will be
referring to this method as the $VGG19$ method.
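The steps above can be illustrated with a toy sketch. This version (ours, purely structural) treats images as 1-D lists, uses a moving average as a stand-in for the paper's base/detail decomposition, and replaces the DCNN-feature extraction and L1-norm weighting with plain max-selection on the detail parts, so it mirrors only the shape of the pipeline, not its actual operators:

```python
# Toy sketch of the VGG19-method pipeline structure on 1-D "images".

def box_filter(signal, radius=1):
    """Extract the 'base part' as a moving average; the 'detail content'
    is the residual signal - base."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def fuse(visible, infrared, radius=1):
    """Decompose both sources, weight-average the bases, max-select the
    details, and recombine into the fused signal."""
    bv, bi = box_filter(visible, radius), box_filter(infrared, radius)
    dv = [v - b for v, b in zip(visible, bv)]
    di = [v - b for v, b in zip(infrared, bi)]
    base = [(a + b) / 2 for a, b in zip(bv, bi)]          # weight-averaging
    detail = [a if abs(a) >= abs(b) else b
              for a, b in zip(dv, di)]                    # max selection
    return [b + d for b, d in zip(base, detail)]
```

By construction, fusing an image with itself reproduces the image, since base plus detail reconstructs each source exactly.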
#### 3.1.2 FusionGAN: A generative adversarial network for infrared and
visible image fusion
This method, proposed by Ma et al. Ma19_GAN , introduces an image fusion
method based on a Generative Adversarial Network (GAN). To the best of our
knowledge, this work is the first to propose a GAN model for the image fusion
task.
The architecture is an end-to-end model that generates the fused image
automatically from the source images with no need to define fusion rules.
The generator attempts to produce a fused image with significant thermal
information, as well as with gradients from the visible image. The
discriminator, in turn, forces the generated image to contain more details
from the visible image. The latter module enables the model to produce images
that retain both thermal and textural information. Finally, the authors
generalize their proposed model so that it can fuse images with different
resolutions, with the final image free of the noise caused by the upsampling
of infrared information. Ma et al. named this model the $FusionGAN$ model.
#### 3.1.3 Fusion of Unmatched Infrared and Visible Images Based on
Generative Adversarial Networks
This method was proposed by Zhao et al. Zhao20 . In this work, the authors
propose a network model based on generative adversarial networks (GANs) to
fuse unmatched infrared and visible images. First, the visible image is given
as an input to the generator $G_{1}$ to create a synthetic infrared image.
Then, the visible image and the synthetic infrared image are concatenated and
input into generator $G_{2}$, generating the fused image as the output. The
discriminator $D_{1}$ distinguishes between the real visible image and the
generated fused image so that the fused image is closer to the visible image,
containing more textural details. Simultaneously, the discriminator $D_{2}$
distinguishes the real infrared image, the generated infrared image, and the
fused image. Through the updating cycle, the generated infrared image becomes
closer to the real infrared image, allowing the fused image to contain more
thermal information. We will be referring to this model as the $UnmatchGAN$
model.
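The two-stage data flow described above can be summarized in a few lines; `g1` and `g2` are placeholder callables standing in for the trained generator networks, and channel concatenation is modeled as list appending on flat 'images':

```python
def unmatchgan_forward(visible, g1, g2):
    """Two-stage UnmatchGAN data flow: G1 synthesizes an approximate
    infrared image from the visible one; G2 fuses the concatenated pair.
    Concatenation is modeled here as list appending on flat 'images'."""
    synthetic_ir = g1(visible)          # visible -> approximate IR
    fused = g2(visible + synthetic_ir)  # concat(visible, IR) -> fused
    return synthetic_ir, fused
```

The discriminators act only during training; at inference time the fused image is obtained from the visible input alone, which is what allows this model to operate without a matched infrared source.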
#### 3.1.4 Summary
The previously mentioned methods have both advantages and disadvantages. The
framework proposed by Li et al. Li18 has the advantage of only needing some
layers of an already pre-trained VGG19 network to perform feature extraction.
As such, there is no need for further training for the particular application
of image fusion. However, it is not an end-to-end method, and the required
intermediate steps increase its implementation complexity.
The model presented by Ma et al. Ma19_GAN has the advantage of being an end-
to-end model, significantly reducing its implementation complexity. However,
this GAN-based method needs to train on visible-infrared image pairs, and in
consequence, its performance depends on the quality of the training process.
It is also relevant to note that GANs have the challenge of training stability
Miyato18 .
The model proposed by Zhao et al. Zhao20 , also a GAN-based model, has the
advantage of being an end-to-end procedure; however, the challenge lies in the
training stability, as well as in the need for a good training dataset. This
method has the additional capability of learning to generate approximate
infrared images based on source visible ones. Since the fusion process
requires perfectly aligned source images Ma19_survey , for the particular
context of fire images, this could prove a significant advantage for the
research community, given the burden of obtaining perfectly matched visible-
infrared fire images in realistic operational scenarios.
Finally, it is also relevant to note that the three methods output grayscale
fused images. In the context of fire imagery, the preservation of color could
prove beneficial.
### 3.2 Data
For the present paper, we employ the visible-infrared image pairs of the
Corsican Fire Dataset, first presented by Toulouse et al. Toulouse17 in the
_Fire Safety Journal_ in 2017. This dataset contains 640 pairs of visible and
infrared fire images, alongside their corresponding ground truths for fire
region segmentation.
We also employ the RGB-NIR dataset, developed by Brown et al. Brown11 and
presented at _CVPR 2011_. This dataset contains 477 non-fire visible-infrared
image pairs. Fig. 1 shows sample images from both datasets. It is relevant to
note that the infrared images are NIR ones.
(a) Fire - visible
(b) Fire - NIR
(c) Non-fire - visible
(d) Non-fire - NIR
Figure 1: Sample images for the RGB-NIR and Corsican Fire Database datasets.
### 3.3 Metrics
The metrics selected for the present paper are the information entropy (EN),
the correlation coefficient (CC), the peak signal-to-noise ratio (PSNR), and
the structural similarity index measure (SSIM); these metrics are by far the
most common in this area, and more details can be found in metrics . In the
following subsections, we succinctly describe these metrics.
#### 3.3.1 Information entropy
EN reflects the average amount of information in an image. It is defined as:
$EN=-\sum_{l=0}^{L-1}p_{l}\log_{2}p_{l},$ (1)
where $L$ stands for the number of gray levels of the image, and $p_{l}$
represents the proportion of pixels with gray level $l$ in the total number of
pixels. The larger EN is, the more information the fused image contains Zhao20 .
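As an illustration, Eq. (1) can be computed directly from a pixel histogram. The following Python sketch (ours, not part of any evaluated implementation; the function name is illustrative) operates on a flat list of integer gray values:

```python
import math

def entropy(pixels, levels=256):
    """Information entropy (EN) of a grayscale image, given as a flat
    list of integer pixel values in [0, levels - 1]."""
    n = len(pixels)
    counts = [0] * levels
    for v in pixels:
        counts[v] += 1
    en = 0.0
    for c in counts:
        if c:  # skip empty histogram bins (p log p -> 0)
            p = c / n
            en -= p * math.log2(p)
    return en
```

A constant image gives EN = 0, while an image using all gray levels equally reaches the maximum of $\log_{2}L$ bits.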
#### 3.3.2 Correlation coefficient
The CC measures the degree of linear correlation between the fused image and
either the visible or infrared image. It is defined as:
$CC(X,Y)=\frac{Cov(X,Y)}{\sqrt{Var(X)Var(Y)}},$ (2)
where $Cov(X,Y)$ is the covariance between the fused image and the reference
image, and $Var(X)$ and $Var(Y)$ represent the variances of the two images.
The larger the value of CC, the higher the correlation between the fused and
the reference images Zhao20 .
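Eq. (2) is the standard Pearson correlation coefficient; a minimal Python version over flat pixel lists (a hypothetical helper, written for this setting) could look like:

```python
import math

def correlation_coefficient(x, y):
    """Pearson correlation coefficient (CC) between two equal-length
    flat lists of pixel values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / math.sqrt(vx * vy)
```

CC is +1 for identical images and -1 for perfectly anti-correlated ones, which is why it is reported separately against the infrared and the visible references in Table 2.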
#### 3.3.3 Peak signal-to-noise ratio
The PSNR assumes that the difference between the fused image and the reference
image is noise. It is defined as:
$PSNR=10\log_{10}(\frac{MAX^{2}}{MSE}),$ (3)
where $MAX$ is the maximum value of the image color, and $MSE$ is the mean
squared error. An accepted benchmark for this metric is 30 dB; a PSNR value
lower than this threshold means that the fused image presents significant
deterioration Zhao20 .
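A direct transcription of Eq. (3), assuming 8-bit images so that MAX = 255, might be:

```python
import math

def psnr(reference, fused, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between a reference and a
    fused image, both given as equal-length flat lists."""
    n = len(reference)
    mse = sum((r - f) ** 2 for r, f in zip(reference, fused)) / n
    if mse == 0:
        return float("inf")  # identical images: noiseless
    return 10.0 * math.log10(max_val ** 2 / mse)
```

With an MSE of 1 on 8-bit images this yields about 48.13 dB, well above the 30 dB benchmark mentioned above.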
#### 3.3.4 Structural similarity index measure
The SSIM is a method for measuring the similarity between two images Wang11 .
It is based on the degradation of structural information Wang04 and is
defined as follows:
$SSIM(X,Y)=\left(\frac{2u_{x}u_{y}+c_{1}}{u_{x}^{2}+u_{y}^{2}+c_{1}}\right)^{\alpha}\left(\frac{2\sigma_{x}\sigma_{y}+c_{2}}{\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2}}\right)^{\beta}\left(\frac{\sigma_{xy}+c_{3}}{\sigma_{x}\sigma_{y}+c_{3}}\right)^{\gamma},$ (4)
where $x$ and $y$ are the reference and fused images, respectively; $u_{x}$
and $u_{y}$ are their mean values, $\sigma_{x}^{2}$ and $\sigma_{y}^{2}$ their
variances, and $\sigma_{xy}$ their covariance. Finally, $c_{1}$, $c_{2}$, and
$c_{3}$ are small constants that avoid division by zero, and $\alpha$,
$\beta$, and $\gamma$ are used to adjust the proportions Zhao20 .
The range of values for SSIM goes from 0 to 1, with 1 being the best possible
one.
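With the common simplification $\alpha=\beta=\gamma=1$ and $c_{3}=c_{2}/2$, Eq. (4) collapses to the familiar single-expression form of SSIM. The sketch below uses global image statistics rather than the usual sliding window, so it only approximates library implementations:

```python
def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Simplified SSIM with alpha = beta = gamma = 1 and c3 = c2 / 2;
    c1 = (0.01 * 255)^2 and c2 = (0.03 * 255)^2 are the usual constants
    for 8-bit images. x and y are equal-length flat pixel lists."""
    n = len(x)
    ux, uy = sum(x) / n, sum(y) / n
    vx = sum((a - ux) ** 2 for a in x) / n
    vy = sum((b - uy) ** 2 for b in y) / n
    cov = sum((a - ux) * (b - uy) for a, b in zip(x, y)) / n
    return ((2 * ux * uy + c1) * (2 * cov + c2)) / \
           ((ux ** 2 + uy ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1, and the score drops toward 0 as the luminance, contrast, or structure of the two images diverge.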
## 4 Results
We evaluate the methods by Li et al. Li18 and Ma et al. Ma19_GAN through
their corresponding open-source implementations. These models are provided by
the authors pre-trained on the ImageNet and the TNO Dataset, respectively.
### 4.1 Architectural adjustment for wildfire imagery fusion
Since the method proposed by Zhao et al. Zhao20 has no available open-source
implementation, we implemented it from the ground up, extending it into our
proposed _FIRe-GAN_ model. We refer to the previous work by Isola et al.
Isola17 for relevant implementation details on G1 and the work by Ma et al.
Ma19_GAN for G2 and both discriminators. We also modified the final layer of
both generators from 1 to 3 filters; the latter allows our architecture to
output 3-channel images. Since fire images are generally large and of high
resolution, we used a U-Net architecture for G1, in contrast to the original
method, which uses a simple encoder-decoder architecture.
Additionally, we integrated the Two Time-Scale Update Rule (TTUR) module
proposed by Heusel et al. Heusel17 and spectral normalization to the
discriminators, as per the work by Miyato et al. Miyato18 to increase the
training stability.
Fig. 2 shows the architecture of G1 with the original encoder-decoder
architecture, Fig. 3 presents G1 with the proposed U-Net architecture, Fig. 4
shows the architecture of G2, and Fig. 5 the architecture of the
discriminators, as per the said considerations.
Figure 2: Implemented architecture for G1 of the method by Zhao et al. Zhao20
with the mentioned considerations. Figure 3: Implemented architecture for G1
with the proposed U-Net architecture. Figure 4: Implemented architecture for
G2 with the mentioned changes. Figure 5: Implemented architecture for both
discriminators with the mentioned changes.
To determine if the proposed U-Net architecture on G1 improves the quality of
the generated infrared images, we pre-train the model with both architectures
on the RGB-NIR dataset and compare the obtained results for the generated
infrared images. We split the RGB-NIR dataset into training and validation
sets. The training set contains 6112 image pairs after performing data
augmentation, and the validation set consists of 96 image pairs. We trained
the model with a batch size of 4, 40 epochs, a learning rate for both
generators of 5e-5 and for both discriminators of 1e-4, and spectral
normalization on both discriminators. Additionally, the discriminators were
updated once every two generator updates.
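The update alternation just described — generators every step at a learning rate of 5e-5, discriminators once every two generator updates at 1e-4, in the spirit of TTUR's distinct time scales — can be sketched as a schedule. The function below (illustrative only) merely enumerates which module would be updated at each training step:

```python
def training_schedule(steps, d_every=2, g_lr=5e-5, d_lr=1e-4):
    """List of (module, learning_rate) updates: generators every step,
    discriminators once every `d_every` generator updates (TTUR-style
    two time scales)."""
    log = []
    for step in range(1, steps + 1):
        log.append(("G", g_lr))
        if step % d_every == 0:
            log.append(("D", d_lr))
    return log
```

Slowing the discriminators relative to the generators, together with spectral normalization, is what gave us stable training on this dataset.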
In Table 1 we present the average results for both architectures in terms of
the selected metrics. In this case, the CC, PSNR and SSIM metrics refer to the
comparison of the source and generated infrared (IR) images. Fig. 6 displays a
sample of the images produced by both architectures. We can observe
improvements on the CC, PSNR, and SSIM metrics for our proposed U-Net
architecture. A visual assessment of the produced images allows us to note an
increased amount of detail as well.
Table 1: Average results for both architectures of G1 on the generated IR images. Model | EN | CC | PSNR | SSIM
---|---|---|---|---
Original | 9.9158 | 0.8593 | 17.9341 | 0.5506
U-Net | 9.9474 | 0.9203 | 19.9473 | 0.7541
(a) RGB images
(b) IR images
(c) Original
(d) U-Net
Figure 6: Sample resulting artificial infrared images from both architectures
for G1.
### 4.2 Method comparison
Given the improvement displayed by the U-Net architecture for G1 on the
proposed _FIRe-GAN_ , we took this extended model and compared it with the
works by Li et al. Li18 and Ma et al. Ma19_GAN . For
consistency and to make the comparison fair with these methods, we pre-trained
our proposed extended model with the RGB-NIR dataset. Then, we tested the
three models on the Corsican Fire Database. In this way, we were able to
assess the generalization capabilities of these models on the new domain of
fire imagery. Fig. 7 displays the results, Table 2 shows the average results
on the first three columns, and Fig. 8 shows sample images produced by the
three methods. We can observe that the _VGG19_ method presents the most
balanced inclusion of information and, on average, higher quantitative
results. Of the GAN-based models, the _FusionGAN_ one heavily favors the
inclusion of thermal data, while the _FIRe-GAN_ model shows balanced results
regarding the inclusion of source information; however, the metric results are
lower on average than those of the _VGG19_ method.
(a) _VGG19_ method.
(b) _FusionGAN_ method.
(c) Proposed _FIRe-GAN_ method.
Figure 7: Results from the fire images on the three evaluated methods.
(a)
(b)
(c)
(d)
(e)
Figure 8: Sample resulting images from the three methods. In column 8(a) are
the RGB images, in 8(b) the IR images, in 8(c) the fused images from the
_VGG19_ method, in 8(d) the fused images from the _FusionGAN_ method, and in
8(e) the fused images from the _FIRe-GAN_ method.
### 4.3 Transfer learning
The _VGG19_ and _FusionGAN_ methods can rely on source infrared images to
perform the fusion process. In contrast, the _UnmatchGAN_ model and, by
extension, the proposed _FIRe-GAN_ model, must generate approximate infrared
ones and fuse them with the source visible ones. Then, it stands to reason
that it could be more problematic for this model to generalize to new domains.
As such, we perform a transfer learning phase with a segment of the fire
images of the Corsican Fire Database and evaluate its performance.
We segment the dataset into training and validation sets. The training set
has 8192 images after data augmentation, and the validation set consists of
128 image pairs. We set the training epochs to 3. Due to the strong thermal
characteristics of fire images, we add a constant term $\gamma$ that
multiplies the element of the loss function of _G2_ that represents the
adversarial loss between _G2_ and _D1_ , thus prioritizing the inclusion of
visible information. The final result is a balanced inclusion of visible and
infrared information for the fire fused images. Experimentally, we found the
best value of $\gamma$ to be 4.5. All other training parameters are the same
as those mentioned in section 4.1, and all other loss functions remain the
same as those of the _UnmatchGAN_ model. The modified loss function for _G2_
is as follows:
$\begin{split}L_{G_{2}}=\gamma[\frac{1}{N}\sum_{n=1}^{N}(D_{1}(I^{n}_{F})-c_{1})^{2}]+\frac{1}{N}\sum_{n=1}^{N}(D_{2}(I^{n}_{F})-c_{2})^{2}+\\\
\frac{\lambda}{HW}(\|I_{F}-I_{R}\|_{F}^{2}+\xi\|\nabla I_{F}-\nabla
I_{V}\|_{F}^{2})\end{split}$ (5)
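A literal, framework-free transcription of Eq. (5) on flat 1-D 'images' might look like the following sketch; `grad` is a simple forward difference standing in for the image gradient operator, the discriminator outputs are passed in as precomputed score lists, and all names are ours:

```python
def g2_loss(d1_scores, d2_scores, fused, infrared, visible,
            gamma=4.5, lam=1.0, xi=1.0, c1=1.0, c2=1.0):
    """Modified G2 loss of Eq. (5): gamma-weighted adversarial term vs.
    D1, adversarial term vs. D2, and content terms (Frobenius distance
    to the IR image plus gradient distance to the visible image)."""
    n = len(d1_scores)
    adv1 = sum((s - c1) ** 2 for s in d1_scores) / n
    adv2 = sum((s - c2) ** 2 for s in d2_scores) / n

    def grad(img):  # forward difference, 1-D stand-in for image gradient
        return [img[i + 1] - img[i] for i in range(len(img) - 1)]

    content = sum((f - r) ** 2 for f, r in zip(fused, infrared))
    grad_term = sum((a - b) ** 2
                    for a, b in zip(grad(fused), grad(visible)))
    hw = len(fused)  # H * W for a flattened image
    return gamma * adv1 + adv2 + (lam / hw) * (content + xi * grad_term)
```

The loss vanishes when the fused image equals the infrared one, matches the visible gradients, and both discriminators output exactly $c_{1}$ and $c_{2}$; raising $\gamma$ above 1 is what shifts the balance toward visible information.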
In Fig. 9 we show the results on the 128 images of the validation set before
and after the transfer learning phase. Table 2 shows the condensed average
results on the last two columns. After only three training epochs, we can
observe a marked improvement in the generated infrared images, as well as more
accurate inclusion of thermal information on the fused ones.
(a) Metric results before transfer learning.
(b) Metric results after transfer learning.
Figure 9: Results for the _FIRe-GAN_ model from the fire images of the
validation set before and after transfer learning.
(a)
(b)
(c)
(d)
(e)
(f)
Figure 10: Sample resulting images before and after transfer learning. In
column 10(a) are the RGB images, in 10(b) the IR images, in 10(c) the
artificial IR images before transfer learning, in 10(d) the artificial IR
images after transfer learning, in 10(e) the fused images before transfer
learning, and in 10(f) the fused images after transfer learning.
### 4.4 Summary
In Table 2 we summarize the average results both for the evaluation of the
three methods on the full Corsican Fire Database, as specified in Section 4.2,
and for the transfer learning results on the validation set of the Corsican
Fire Database, as specified in Section 4.3.
Table 2: Average results for the three evaluated methods on the full Corsican Fire Database as specified on section 4.2 on the first three columns, and for the _FIRe-GAN_ method before and after transfer learning on the validation set of the Corsican Fire Database (section 4.3) on the last two columns. Metric | VGG19 | FusionGAN | FIRe-GAN | Before | After
---|---|---|---|---|---
EN | 6.3415 | 6.0072 | 10 | 10 | 10
CC IR-fused | 0.7976 | 0.9657 | 0.5650 | 0.5774 | 0.7270
CC RGB-fused | 0.7637 | 0.4104 | 0.6651 | 0.6799 | 0.6499
PSNR IR-fused | 19.2014 | 22.5090 | 14.2065 | 14.3196 | 16.4212
PSNR RGB-fused | 19.2255 | 15.3900 | 16.7435 | 16.7687 | 15.6484
SSIM IR-fused | 0.8337 | 0.8389 | 0.6129 | 0.6187 | 0.7064
SSIM RGB-fused | 0.9007 | 0.7783 | 0.8000 | 0.8020 | 0.7935
## 5 Discussion
Of the three evaluated methods, the _VGG19_ one by Li et al. Li18 displayed
the best overall performance for the new domain of fire imagery. This model is
also the one that shows a more balanced behavior in terms of the inclusion of
both visible and thermal information. The latter could be due to the way the
authors leverage the VGG19 DL model. As Li et al. employ only the feature
extraction capabilities of a pre-trained model, they do not need to train it
for the particular task of image fusion. As the model was pre-trained on
ImageNet, it demonstrates significant feature extraction capabilities, which
explains the superior performance.
The _FusionGAN_ model proposed by Ma et al. Ma19_GAN is relevant since it is
the first attempt to use GANs for image fusion. The simplicity of an end-to-
end model is also desirable. However, when applied to the new domain of fire
images, this method tends to incorporate more thermal than visible
information. This can be due to the fact that fire images have more well-
defined thermal information, whereas non-fire images in the training set do
not exhibit that strong of a distinction between thermal and visible images.
Our proposed _FIRe-GAN_ model has the advantages of being able to work with
higher resolution images and to output three-channel fused images. This last
feature allows it to learn to preserve color. Before performing transfer
learning, it shows a balanced approach towards the inclusion of visible and
thermal information; however, the overall performance is lower compared to the
other two methods. Also, the generated IR images are very close to the source
visible images; the model does not compensate for thermal information hidden
behind features like smoke. Upon visual inspection, we can observe that the
fused images preserve colors similar to the visible ones.
When applying transfer learning to our proposed method on a segment of the
Corsican Fire Database, after only three training epochs, the model can
produce artificial IR images that are very close to the original ones, and
fused images containing a balanced combination of thermal and visible
information. This speaks well of the specialization capabilities of the model.
It is also relevant to note that, even though the fused images preserve color
information, the said color is no longer the same as the visible images,
particularly in the fire regions. Since the colors of the source visible and
infrared images differ significantly for fire images, it appears that the
model learns to seek an intermediate color representation between the two.
Finally, it is worth remembering that we trained the model to generate
approximate NIR images. The performance could change if the training set
contains a different type of infrared image (SWIR, MWIR, or LWIR).
## 6 Conclusions and future work
In the present paper, we demonstrate the feasibility of DL-based methods for
the particular task of fire image fusion. The framework proposed by Li et al.
Li18 displays the best overall performance. This method takes advantage of
the feature extraction capabilities of DCNNs and traditional image fusion
techniques for an effective combination.
The evaluated GAN-based methods show promise due to the simplicity of their
implementation and their generalization and specialization capabilities. In
particular, our proposed _FIRe-GAN_ model displays a balanced approach towards
the inclusion of visible and infrared information, with consistent color
preservation. It is also relevant to note that little visible-infrared image
pair data is available, by DL standards, especially in the fire domain; the
generation of more visible-infrared image pairs would improve the performance
of the models and reduce the risk of overfitting. Finally,
further experimentation is needed to assess the significance of color
preservation on the fused images for different fire detection techniques.
###### Acknowledgements.
The authors would like to thank the University of Corsica for providing access
to the Corsican Fire Database.
## Conflict of interest
The authors declare that they have no conflict of interest.
## References
* (1) Codes for fusiongan, a gan model for infrared and visible image fusion. https://github.com/jiayi-ma/FusionGAN. Accessed: 2020-09-13
* (2) Corsican fire database. http://cfdb.univ-corse.fr/. Accessed: 2020-12-17
* (3) Infrared and visible image fusion using a deep learning framework. https://github.com/hli1221/imagefusion_deeplearning. Accessed: 2020-08-22
* (4) Rgb-nir scene dataset. https://ivrlwww.epfl.ch/supplementary_material/cvpr11/index.html. Accessed: 2020-12-17
* (5) Arrue, B.C., Ollero, A., Matinez de Dios, J.R.: An intelligent system for false alarm reduction in infrared forest-fire detection. IEEE Intelligent Systems and their Applications 15(3), 64–73 (2000). DOI 10.1109/5254.846287
* (6) Brown, M., Süsstrunk, S.: Multi-spectral sift for scene category recognition. In: CVPR 2011, pp. 177–184 (2011)
* (7) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S.: Gans trained by a two time-scale update rule converge to a local nash equilibrium. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, p. 6629–6640. Curran Associates Inc., Red Hook, NY, USA (2017)
* (8) Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. CVPR (2017)
* (9) Li, H., Wu, X., Kittler, J.: Infrared and visible image fusion using a deep learning framework. In: 2018 24th International Conference on Pattern Recognition (ICPR), pp. 2705–2710 (2018)
* (10) Li, H., Wu, X.J., Kittler, J.: Mdlatlrr: A novel decomposition method for infrared and visible image fusion. IEEE Transactions on Image Processing 29, 4733–4746 (2020). DOI 10.1109/TIP.2020.2975984
* (11) Liu, Z., Blasch, E., Xue, Z., Zhao, J., Laganiere, R., Wu, W.: Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study. IEEE Transactions on Pattern Analysis and Machine Intelligence 34(1), 94–109 (2012). DOI 10.1109/TPAMI.2011.109
* (12) Ma, J., Liang, P., Yu, W., Chen, C., Guo, X., Wu, J., Jiang, J.: Infrared and visible image fusion via detail preserving adversarial learning. Information Fusion 54, 85 – 98 (2020). DOI https://doi.org/10.1016/j.inffus.2019.07.005. URL http://www.sciencedirect.com/science/article/pii/S1566253519300314
* (13) Ma, J., Ma, Y., Li, C.: Infrared and visible image fusion methods and applications: A survey. Information Fusion 45, 153 – 178 (2019). DOI https://doi.org/10.1016/j.inffus.2018.02.004. URL http://www.sciencedirect.com/science/article/pii/S1566253517307972
* (14) Ma, J., Yu, W., Liang, P., Li, C., Jiang, J.: Fusiongan: A generative adversarial network for infrared and visible image fusion. Information Fusion 48, 11 – 26 (2019). DOI https://doi.org/10.1016/j.inffus.2018.09.004. URL http://www.sciencedirect.com/science/article/pii/S1566253518301143
* (15) Martinez-de Dios, J., Arrue, B., Ollero, A., Merino, L., Gómez-Rodríguez, F.: Computer vision techniques for forest fire perception. Image and Vision Computing 26(4), 550 – 562 (2008). DOI https://doi.org/10.1016/j.imavis.2007.07.002. URL http://www.sciencedirect.com/science/article/pii/S0262885607001096
* (16) Miyato, T., Kataoka, T., Koyama, M., Yoshida, Y.: Spectral normalization for generative adversarial networks (2018)
* (17) Nemalidinne, S.M., Gupta, D.: Nonsubsampled contourlet domain visible and infrared image fusion framework for fire detection using pulse coupled neural network and spatial fuzzy clustering. Fire Safety Journal 101, 84 – 101 (2018). DOI https://doi.org/10.1016/j.firesaf.2018.08.012. URL http://www.sciencedirect.com/science/article/pii/S0379711218301796
* (18) The Visual and Data Journalism Team: California and oregon 2020 wildfires in maps, graphics and images. https://www.bbc.com/news/world-us-canada-54180049. Accessed: 2020-09-30
* (19) Toulouse, T., Rossi, L., Campana, A., Celik, T., Akhloufi, M.A.: Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Safety Journal 92, 188 – 194 (2017). DOI https://doi.org/10.1016/j.firesaf.2017.06.012. URL http://www.sciencedirect.com/science/article/pii/S0379711217302114
* (20) Yeung, J.: Australia’s deadly wildfires are showing no signs of stopping. here’s what you need to know. https://edition.cnn.com/2020/01/01/australia/australia-fires-explainer-intl-hnk-scli/index.html. Accessed: 2020-04-10
* (21) Yuan, C., Zhang, Y., Liu, Z.: A survey on technologies for automatic forest fire monitoring, detection and fighting using uavs and remote sensing techniques. Canadian Journal of Forest Research 45, 150312143318009 (2015). DOI 10.1139/cjfr-2014-0347
* (22) Zhao, Y., Fu, G., Wang, H., Zhang, S.: The fusion of unmatched infrared and visible images based on generative adversarial networks. Mathematical Problems in Engineering 2020, 3739040 (2020). DOI 10.1155/2020/3739040. URL https://doi.org/10.1155/2020/3739040
* (23) Zhao, Z., Xu, S., Feng, R., Zhang, C., Liu, J., Zhang, J.: When image decomposition meets deep learning: A novel infrared and visible image fusion method (2020)
* (24) Zhou Wang, Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: The ssim index for image quality assessment. https://www.cns.nyu.edu/~lcv/ssim/. Accessed: 2020-08-26
* (25) Zhou Wang, Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
* (26) Çetin, A.E., Dimitropoulos, K., Gouverneur, B., Grammalidis, N., Günay, O., Habiboǧlu, Y.H., Töreyin, B.U., Verstockt, S.: Video fire detection – review. Digital Signal Processing 23(6), 1827 – 1843 (2013). DOI https://doi.org/10.1016/j.dsp.2013.07.003. URL http://www.sciencedirect.com/science/article/pii/S1051200413001462
# Design and implementation of the AMIGA embedded system for data acquisition
The Pierre Auger Collaboration et al
###### Abstract
The Auger Muon Infill Ground Array (AMIGA) is part of the AugerPrime upgrade
of the Pierre Auger Observatory. It consists of particle counters buried
$2.3\text{\,}\mathrm{m}$ underground next to the water-Cherenkov stations that
form the $23.5\text{\,}{\mathrm{km}}^{2}$ large infilled array. The reduced
distance between detectors in this denser area allows the lowering of the
energy threshold for primary cosmic ray reconstruction down to about
${10}^{17}\text{\,}\mathrm{e}\mathrm{V}$. At the depth of
$2.3\text{\,}\mathrm{m}$ the electromagnetic component of cosmic ray showers
is almost entirely absorbed, so that the buried scintillators provide an
independent and direct measurement of the muon content of air showers. This
work describes the design and implementation of the AMIGA embedded system,
which
provides centralized control, data acquisition and environment monitoring to
its detectors. The presented system was first tested in the engineering array
phase, which ended in 2017, and later selected as the final design to be
installed in all new detectors of the production phase. The system was proven
to be robust and reliable and has worked in a stable manner since its first
deployment.
## 1 Introduction
The Pierre Auger Observatory [1] was originally designed to study ultra-high-
energy cosmic rays (UHECR) with primary particle energy above
$3\text{\times}{10}^{18}\text{\,}\mathrm{eV}$. It is located in the Southern
Hemisphere near the Andes mountains in the south-west part of the province of
Mendoza, Argentina. It uses a hybrid detection technique employing both a
surface detector array (SD) and a fluorescence detector (FD).
The SD of Auger is composed of an extensive array of about 1600 water
Cherenkov detectors (WCD), separated by a $1500\text{\,}\mathrm{m}$ spacing
and covering an area of $3000\text{\,}{\mathrm{km}}^{2}$. Since above
$5\text{\times}{10}^{19}\text{\,}\mathrm{eV}$ the flux is around 1 particle
per ${\mathrm{km}}^{2}$ per century, the large area of the observatory makes
it possible to observe more than 30 high-energy cosmic rays per year [1]. A
detailed description of the SD of the Auger Observatory can be found in
[2].
Each of the 1600 surface detector stations includes a $3.6\text{\,}\mathrm{m}$
diameter water tank containing a sealed liner with a reflective inner surface.
The liner contains $12\,000$ litres of purified water.
Cherenkov light produced by the passage of charged particles through the water
is collected by three nine-inch photo-multiplier tubes (PMTs) that are
symmetrically distributed at a distance of $1.20\text{\,}\mathrm{m}$ from the
center of the tank and look downwards through windows of clear polyethylene
into the water. The surface detector station is self-contained. A photovoltaic
system provides an average of $10\text{\,}\mathrm{W}$ for the PMTs and the
electronics system consisting of a processor, GPS receiver, radio transceiver
and power controller.
The surface detector electronics records the PMT signals, makes local
triggering decisions, sends time-stamps to the central data acquisition system
for the global trigger building, and stores event data for retrieval when a
global trigger condition is fulfilled. Due to the low available bandwidth of
$150\text{\,}\mathrm{B}\mathrm{y}\mathrm{t}\mathrm{e}\mathrm{s}\mathrm{/}\mathrm{s}$
per station, the stations must operate semi-autonomously, performing
calibrations and
taking action in response to alarm conditions at the station level. The
electronic system was designed 15 years ago using the technology available at
that time.
Figure 1: Map of the Pierre Auger Surface detector (right), located in
Malargüe, in the southern region of the Mendoza province as shown in the lower
right panel, with the AMIGA infill area marked with dotted lines and zoomed in
(left). In the blow-up the positions of the engineering array are named
“Unitary cell” and enclosed by a dotted line. The central communication tower
is located about $7\text{\,}\mathrm{km}$ to the west (distance not to scale).
The denser $433\text{\,}\mathrm{m}$ infill is not shown in this figure for
simplicity, but can be observed in figure 10
AMIGA (Auger Muon Infill for the Ground Array) [3] is an enhancement of the
Pierre Auger Observatory designed to satisfy two different objectives. The
first objective is to lower the minimum detectable cosmic ray energy threshold
to $1\text{\times}{10}^{17}\text{\,}\mathrm{eV}$ by installing 61 SD stations
with $750\text{\,}\mathrm{m}$ spacing in an infill array covering an area of
$23.5\text{\,}{\mathrm{km}}^{2}$, which was completed in 2012. A more recent,
and even denser array with $433\text{\,}\mathrm{m}$ spacing over
$1.9\text{\,}{\mathrm{km}}^{2}$ was also included and completed in 2019.
Figure 1 shows a representation of AMIGA. The second objective is to enhance
the capability of Auger to study cosmic-ray composition by a direct measurement of
the muonic shower component. The muon detection is achieved by
$30\text{\,}{\mathrm{m}}^{2}$ scintillator detectors buried
$2.3\text{\,}\mathrm{m}$ underground alongside all surface stations. The
$540\text{\,}\mathrm{g/cm^{2}}$ of
overburden determined by the local soil density completely shields the
electromagnetic component of the showers and only muons of energy greater than
$1\text{\times}{10}^{9}\text{\,}\mathrm{eV}$ are capable of hitting the
detectors.
The main function for the AMIGA Data Acquisition System is to manage the data
transfer from the muon counters in a modular, flexible, easily configurable
and scalable fashion compatible with the Pierre Auger Observatory data
trigger. These characteristics led to specifications on how to design the
firmware, software, hardware, network structure and protocol, and finally the
monitoring system. For each of these important aspects of the data
acquisition system, we dedicate a section in this paper, where we describe in
more detail the motivation and the chosen solutions.
The AMIGA buried stations work in sync with the SD, receiving the local
triggers by their accompanying WCD. The energy and geometry of the showers are
reconstructed by the SD while extra information on the particle composition is
provided by the muon-related observables such as the timing and size of the
signal provided by buried scintillators.
In this paper we describe the features of the embedded system design for the
AMIGA buried scintillator detectors, dubbed Underground Muon Detector (UMD).
This description includes the electronics, as well as the synchronization with
the associated SD station. As an illustration of the functionality of the
whole acquisition chain, we conclude showing an event measured by the seven
stations that compose the engineering array.
The main contribution of this work to distributed embedded systems in general
is the presentation of a flexible, scalable system of many distributed muon
detectors that can be organized and operated remotely, using existing tools
for a very specific problem: the detection of the muonic component of cosmic
showers. We describe how the detection system is organized around the
pre-existing trigger conditions of the surface detectors of the Pierre Auger
Observatory, and how reliable existing technology can be adapted and applied
as a solution to this problem. The electronics system of the front-end is described
in a companion paper [4] where the details of the front-end electronics of the
UMDs, its design and performance are described. In summary, the UMDs work as
muon counters using two types of detection: a 1-bit digitization for each of
the 64 channels, and a $14\text{\,}\mathrm{bit}$ Analog-to-Digital Converter
with two independent channels that measures the charge of the summed signal
from the 64 channels, for muon counts that exceed 64 muons.
## 2 Underground Muon Detector overview
As the main objective of the UMD is to count the number of muons in a cosmic
shower produced by a cosmic ray, the original design of AMIGA was based on
particle detectors that would be able to count individual muons in a radiation
detector. This detector was implemented as a segmented scintillator, divided
into three modules of 64 channels each, with a total area of detection
determined via simulations [5]. These simulations were also used to obtain the
optimum detector segmentation ($192$ scintillator bars). The total
area needed of $30\text{\,}{\mathrm{m}}^{2}$ was therefore covered by three
modules of $10\text{\,}{\mathrm{m}}^{2}$ each.
All SD stations in the infill array have been installed and operating since
2008 (61 stations) [6]. The SD stations in the infill
$750\text{\,}\mathrm{m}$ array have been installed since 2008 (42 additional
stations) [7], and the complete grid has been operational since 2012. More
recently, the SD $433\text{\,}\mathrm{m}$ array was also completed. The
companion UMD had an engineering
array (EA) phase of seven stations to validate and optimize the detector
design. This prototype phase ended in 2017 with two major changes: the
replacement of the optical device from PMTs to SiPMs and improved electronics,
which is presented in this work. Immediately after the EA phase, the
production phase started and the UMD array completion is foreseen by the end
of 2022 [7].
AMIGA uses a telecommunications system that consists of a point-to-multipoint
802.11n WiFi radio link, with redundant coordinators located at the Coihueco
fluorescence detector building (about $6\text{\,}\mathrm{km}$ to
$10\text{\,}\mathrm{km}$ link distance). That link
provides TCP/IP remote access to the LAN of each station [8]. The system is
also designed to be energy efficient (as power from the photovoltaic system is
limited) and has the capacity to transmit a high amount of data from those
remote areas. All AMIGA stations are expected to work at least 10 years in
remote (and possibly hard to reach) areas. Under these conditions, the systems
need to have a long service life and to allow remote access for diagnostics
and early fault detection for high quality of data.
The electronics were designed as embedded systems to ease eventual hardware
upgrades. They run Linux as an open-source operating system, in order to be
independent of any particular commercial software and to accommodate the
different hardware choices used in the detectors.
### 2.1 Electronics Overview
The electronics for the UMD was designed to provide a reliable data transfer
from the buried scintillator modules to the main data storage server in
Malargüe. In order to do this, each UMD is connected to an SD to use the
Pierre Auger Observatory trigger. In order to provide a safe data transfer and
not to overload the existing SD telecommunications and power system, each
station was equipped with a new telecommunications system [8], a new solar
power system [9], a trigger relay system (the _distributor_ , see section
2.5), three buried modules that comprise the UMD itself, and a synchronization
system that connects the UMD data acquisition system with its corresponding SD
data acquisition system. The motivation for the telecommunications system is
stated in the mentioned reference; in summary, an 802.11n WiFi system running
a TCP/IP network is used. The motivation for the solar power system design is
likewise stated in the corresponding reference. In summary, a
$24\text{\,}\mathrm{V}$ power system provides the power needed via two
deep-discharge $12\text{\,}\mathrm{V}$ batteries connected in series,
satisfying the power requirements of the UMDs and their telecommunications
system: a total of $\sim 46\text{\,}\mathrm{W}$ in the engineering array for
at least one day without solar energy. For production, the power of the
buried electronics of the UMDs was decreased to $\sim
3.6\text{\,}\mathrm{W}$ as measured in the field, reducing the total power
budget to $\sim 19\text{\,}\mathrm{W}$ per station in the main array and
extending the autonomy of the photovoltaic system to at least three days
without solar energy. The trigger distribution and synchronization mentioned
above are motivated by the need to provide the muon counting data stored in
the UMDs each time a trigger request from the main data processing center in
Malargüe is received. Therefore, the UMDs must be synchronized with the SDs,
and the trigger lines must be distributed to each of the three buried muon
counters in each station.
A drawing that illustrates the design of the SD and UMD combined detectors at
a position in the infill is shown in figure 2 with its main components. Three
$10\text{\,}{\mathrm{m}}^{2}$ buried modules are used at each surface detector
station for the final design [5].
Figure 2: General overview of an infill station. The figure shows a counter
and surface detector. Three modules are shown buried $2.3\text{\,}\mathrm{m}$
deep, with a detection area of $10\text{\,}{\mathrm{m}}^{2}$. Module
electronics are accessed for field installation and service via a plastic
tube. In addition, the solar panel and the additional battery box are shown.
Figure 3: Block diagram of an AMIGA station electronics. The Ethernet switch
conforms the center of the star of the Ethernet LAN. The distributor
replicates the fast trigger signal from the SD station and injects power from
the photovoltaic system to the Ethernet cables that connect the buried
modules.
The block diagram of the electronics is shown in figure 3, which describes the
components of one combined station and its interconnection as detailed in the
following subsections.
### 2.2 UMD scintillator module
In this section, we present a summary of the underground scintillator module
and its electronics functionality. A thorough description of the scintillator
module and the buried electronics has already been published in [10], [11] and
[4]. In summary, the motivation for the design of this electronics was to
provide a ring buffer that serves as a temporary storage big enough to save
muon data of a full cosmic shower for the time the Auger Observatory trigger
takes to send a request for that data. In this section we introduce the
scintillator module, the hardware of the underground electronics, the
structure of the stored data and the requirements for its implementation in
the digital back-end of the underground electronics. We also review the
hardware structure of the buried electronics interconnected with the
scintillator bars using optical fibers.
Every module comprises 64 scintillation bars, each of $41\text{\,}\mathrm{mm}$
width, $10\text{\,}\mathrm{mm}$ thickness and $4\text{\,}\mathrm{m}$ length
for a detection area of $10\text{\,}{\mathrm{m}}^{2}$. The light produced in
each scintillator bar is collected and propagated along a wavelength-shifting
(WLS) optical fiber of $1.2\text{\,}\mathrm{mm}$ diameter glued into a
lengthwise groove of the bar. All fibers connect to an optical coupler (also
called “cookie”) located in front of the light sensor. The $64\text{\,}$
scintillators and optical fibers are enclosed within a PVC (Polyvinyl
Chloride) casing and they form together with the electronics the detector
module. A detailed description of the AMIGA detector is found in [10].
Two groups of $32\text{\,}$ scintillator bars are mounted in each module at
opposite sides of a central dome that contains the light sensor and
electronics. For the final design we selected the Hamamatsu S13361 Silicon
Photomultiplier (SiPM) [11]. An integrated electronics acquires the analog
signals from this SiPM, processes them and provides control, monitoring and
communication functionality to the module. The system records event data
synchronized with the associated SD station (at about
$100\text{\,}\mathrm{events/s}$) and stores them for roughly the same time as
the SD station. Each recorded event is composed of $2048$ samples of
$64\text{\,}\mathrm{bits}$ measured at $320\text{\,}\mathrm{Msps}$ and
$1024$ samples of two $14\text{\,}\mathrm{bit}$ ADCs (high and low gain)
measured at $160\text{\,}\mathrm{Msps}$ [4]. The former and the latter are
dubbed the binary and the integrator acquisition
modes respectively. An event with a temporal length of
$6.4\text{\,}\mathrm{\SIUnitSymbolMicro s}$ then requires
$64\text{\,}\mathrm{bits}\times 2048+2\times 16\text{\,}\mathrm{bits}\times
1024=163\text{\,}\mathrm{kbits}$. Notice that the
$14\text{\,}\mathrm{bit}$ traces are sent in two Bytes, hence the
$16\text{\,}\mathrm{bits}$ value in the second term. Added to this trace are
a header of $336\text{\,}\mathrm{bits}$, a trace of $2\times
2048\text{\,}\mathrm{bits}$ that corresponds to the OR gates in the two
CITIROCs of the front-end [4], and $16\text{\,}\mathrm{bits}\times 2048$ that
are the identifiers of each of the bins in the $64\text{\,}\mathrm{bit}$
trace. All this adds up to a total event size of
$163\text{\,}\mathrm{kbits}+336\text{\,}\mathrm{bits}+2\times
2048\text{\,}\mathrm{bits}+16\text{\,}\mathrm{bits}\times
2048=201\text{\,}\mathrm{kbits}$. In order to store the data of candidate
events for about $20\text{\,}\mathrm{s}$ at a rate of
$100\text{\,}\mathrm{events/s}$, the minimum memory needed is
$402\text{\,}\mathrm{Mbits}$.
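The event-size and buffer budget above can be reproduced with a short
back-of-the-envelope calculation (all numbers taken from the text):

```python
# Event-size budget for one UMD module.
binary_bits = 64 * 2048           # 64-bit samples of the binary mode
integrator_bits = 2 * 16 * 1024   # two 14-bit ADCs, sent as 16-bit words
trace_bits = binary_bits + integrator_bits        # ~163 kbits

header_bits = 336
or_gate_bits = 2 * 2048           # OR-gate traces of the two front-end ASICs
bin_id_bits = 16 * 2048           # identifiers of the bins in the 64-bit trace
event_bits = trace_bits + header_bits + or_gate_bits + bin_id_bits  # ~201 kbits

# Ring buffer: hold candidate events for 20 s at ~100 events/s.
buffer_bits = event_bits * 100 * 20
print(trace_bits, event_bits, buffer_bits)  # 163840 201040 402080000
```

The result, about $402\text{\,}\mathrm{Mbits}$, sets the minimum size of the
ring buffer in the back-end memory.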
The electronics for each module consists of the front-end board and the
Acquisition and Control board. The front-end processes the 64 light signals
using two Application-Specific Integrated Circuits (ASICs) [11] and two
$160\text{\,}\mathrm{Msps}$
$14\text{\,}\mathrm{bit}$ ADCs (Analog Devices ADS4246)
digitizing the sum of all SiPM channels (one with a low-gain amplifier and one
with a high-gain amplifier). Each ASIC channel provides a pre-amplifier with
programmable gain and a fast shaper with $15\text{\,}\mathrm{n}\mathrm{s}$
peak time. Finally, a discriminator digitizes the signal into one bit. The
discriminator threshold has a coarse setting via a
$10\text{\,}\mathrm{b}\mathrm{i}\mathrm{t}$ DAC (common for the 32 channels)
and a per-channel fine setting using individual
$4\text{\,}\mathrm{b}\mathrm{i}\mathrm{t}$ DACs. The Acquisition and Control
Board (also called in this paper the _back-end_) records the digital
information of the front-end board and stores the data in a ring buffer when a
trigger condition is found. Details are given in section 3.
### 2.3 Synchronization
In order to comply with the trigger structure of the Pierre Auger Observatory,
we need to establish how the synchronization of the UMD will be carried out
whenever a cosmic shower is detected by the SDs. The detailed description of
the trigger structure of the observatory is already thoroughly described in
[12] and [1]. In this section we outline the main restrictions our Data
Acquisition System has to abide by in order to comply with these trigger
requirements. The specifications described in this section serve as a
reference for the trigger interconnection between the surface and buried
electronics, the memory usage in the ring buffer and the data handling that
has to be performed in the SD in order to successfully transfer muon data from
the UMD to the main server in Malargüe that is effectively synchronized with
the detection of a cosmic shower.
The Auger surface detector data acquisition and storage system comprises a
central storage server (“CDAS”, located in the city of Malargüe) and
several client stations; the muon counter replicates this structure, merging
the muon counter data with the surface detector data in a post-processing step
offline.
The trigger system of the SD is hierarchical: the first-level trigger (T1) is
generated independently by each station based on local analysis of the signals
produced by its PMTs. At calibrated SD stations the threshold is set for an
average T1 trigger rate of about
$100\text{\,}\mathrm{events/s}$.
These events are stored in a circular buffer with a capacity of
$3000\text{\,}\mathrm{events}$.
Events are identified by a 64-bit time-stamp (GTS) obtained from the GPS
receiver with a latency of about $200\text{\,}\mathrm{\SIUnitSymbolMicro s}$
after T1 as measured in the laboratory. The station then applies predefined
quality cuts to generate a second-level trigger T2 with an average rate of
$20\text{\,}\mathrm{Hz}$. A list of T2 time-stamps is sent to CDAS
once every second. CDAS searches through the list of T2 triggers for a
coincidence in vicinity and time. When a coincidence is found, an array
trigger (T3) is sent back to participating stations and its neighbors. Upon
receiving a T3 trigger request, the station replies to the CDAS with the event
data, if it is present in the circular buffer [12], [1].
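The T3 search over T2 time-stamps can be illustrated with a deliberately
simplified sketch. The window and minimum-station parameters below are
illustrative assumptions, and the real CDAS criteria also require spatial
compactness of the participating stations:

```python
# Toy sketch of the CDAS coincidence search: group stations whose T2
# time-stamps fall within a narrow time window (simplified; the real
# algorithm also checks station vicinity and deduplicates candidates).
def find_coincidences(t2_lists, window_us=50, min_stations=3):
    events = sorted((t, s) for s, ts in t2_lists.items() for t in ts)
    hits = []
    for i, (t0, _) in enumerate(events):
        group = {s for t, s in events[i:] if t - t0 <= window_us}
        if len(group) >= min_stations:
            hits.append((t0, group))
    return hits

print(find_coincidences({"A": [100], "B": [110], "C": [120], "D": [5000]}))
```

With the synthetic time-stamps above, stations A, B and C form a candidate
array trigger while the isolated T2 of station D does not.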
UMD buried modules were designed to rely on an external trigger in order to
acquire event data during a possible particle shower. To provide
synchronization with the underground particle counter modules, additional
electronics were installed in the surface station. These complementary
electronics were designed to trigger and label the particle counters data when
the surface detector encounters a T1 trigger condition, and to forward T3 data
requests from CDAS [13] to the buried electronics afterwards.
#### 2.3.1 Trigger output from Auger Surface Detectors
The time that precedes a T1 event strongly depends on the delay between the
issuing of the trigger signal and the arrival of the identifier. To address
this issue, the buffer that stores the T1 data must take this delay into
account. In addition, a faster time-stamp was implemented, capable of
arriving at the buried modules within a few microseconds after the SD station
detects the trigger condition. This time-stamp, called the Local Time-Stamp
(LTS), consists of
a $24\text{\,}\mathrm{b}\mathrm{i}\mathrm{t}$ number generated by the surface
station. Taking into account the average T1 rate, each LTS value is guaranteed
to be unique for at least $26.8\text{\,}\mathrm{s}$, enough time to ensure
that requested events will not have time-stamp collision (and therefore data
collision) with newer events [14]. The time it takes between the detection of
an event condition (T1) and the sending of the signal is about
$760\text{\,}\mathrm{ns}$. However, using this time-stamping mechanism
requires a way of converting between the time-stamp used by the surface
detector and the LTS. This conversion is provided by electronics named Trigger
and Data Request (TDR).
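The quoted $26.8\text{\,}\mathrm{s}$ uniqueness window is consistent with a
24-bit counter; the implied tick period below is our inference from the
quoted numbers, not a value stated in the text:

```python
# Wraparound window of the 24-bit LTS: 26.8 s over 2^24 counts implies a
# counter tick of roughly 1.6 us (inferred, not stated in the text).
LTS_BITS = 24
UNIQUE_WINDOW_S = 26.8

tick_us = UNIQUE_WINDOW_S / 2**LTS_BITS * 1e6
print(round(tick_us, 2))  # 1.6
```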
Figure 4: T1 circuit (LTS) between the surface detector electronics and the
modules. The simplified circuit for a line is shown in order to have a better
understanding. The implemented circuit distributes the signals in the
distributor, after the LVDS receiver.
Both trigger signals (to the buried electronics and the information sent to
the TDR) are sent from an auxiliary board retrofitted in every SD station with
muon counter detectors installed (“Trigger-TX”). The minimum required data
transmission rate is $10\text{\,}\mathrm{Mbps}$ ($100\text{\,}\mathrm{ns}$
per bin). This board has two low-voltage differential signalling (LVDS)
drivers. Each LVDS line uses a point-to-point configuration, with drivers and
receivers from Fairchild Semiconductor (FIN1001 and FIN1002 respectively)
supporting data rates higher than $600\text{\,}\mathrm{Mbps}$.
The typical power consumption of the transmitter is $23\text{\,}\mathrm{mW}$
and of the receiver $13\text{\,}\mathrm{mW}$, thus each data line requires
$36\text{\,}\mathrm{mW}$. This system uses two of the available four pairs of
lines to send signals (data and clock), which gives a power consumption per
link of $72\text{\,}\mathrm{mW}$.
In order to avoid ground loops, the fast trigger signal is isolated before it
reaches the LVDS driver using a digital isolator from Analog Devices
(ADuM1400), as shown in figure 4. The transmission line is terminated by a
$100\text{\,}\mathrm{\SIUnitSymbolOhm}$ resistor placed very close to the
receiver for best impedance match. An Unshielded Twisted Pair (UTP) Cat 5e
cable with RJ45 connectors crimped at both ends (following TIA/EIA norm 586-A)
is used to transmit the signal.
The delays (calculated from timing data in each component’s datasheet) are
shown in table 1.
Component | Delay
---|---
Isolator | $43\text{\,}\mathrm{ns}$
LVDS driver | $5\text{\,}\mathrm{ns}$
UTP cable | $6\text{\,}\mathrm{ns/m}$
LVDS receiver | $5\text{\,}\mathrm{ns}$
Table 1: Summary of delays for the components involved in signal transmission.
Figure 4 shows that the signal crosses two isolators, two LVDS transmitter /
receiver pairs and two sections of UTP cables. With a $6\text{\,}\mathrm{m}$
cable between SD electronics and distributor and a $30\text{\,}\mathrm{m}$
cable to the UMD electronics the latency for signal transmission is
$322\text{\,}\mathrm{ns}$.
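The latency quoted above follows from the component delays in Table 1; a
short check of that sum:

```python
# Signal latency estimate following Table 1: the T1 signal crosses two
# isolators, two LVDS driver/receiver pairs, and 6 m + 30 m of UTP cable.
ISOLATOR_NS, DRIVER_NS, RECEIVER_NS = 43, 5, 5
CABLE_NS_PER_M = 6

latency_ns = (2 * ISOLATOR_NS
              + 2 * (DRIVER_NS + RECEIVER_NS)
              + (6 + 30) * CABLE_NS_PER_M)
print(latency_ns)  # 322
```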
To calculate the total delay, the time needed for the acquisition system to
recognize the trigger condition must be added to the signal latency calculated
above. This time is about $500\text{\,}\mathrm{ns}$; therefore, the trigger
of the AMIGA electronics will arrive about $1600\text{\,}\mathrm{ns}$ after
the trigger signal has been generated in the Auger station electronics.
#### 2.3.2 Trigger and Data Request
At the beginning of the engineering array phase the functionality of the TDR
was realized by a Single-Board Computer (SBC) with a buffered SPI receiver
based on an Altera MAX-II complex programmable logic device (CPLD). Since
the production phase, an AMIGA Acquisition board is used for this purpose.
With the installation of the new SD electronics called “Upgraded Unified
Board” (UUB) [15] as part of the Auger upgrade, the TDR functions are
available in this board and no auxiliary electronics are needed.
The TDR receives the GTS+LTS time-stamp from the Auger surface electronics
via a $10\text{\,}\mathrm{Mbps}$ SPI channel and sends it to the SBC, which
stores these time-stamps in a circular buffer with a depth of $2048$ values.
At the moment, the TDR wiretaps the T3 requests to the station via an RS-232
interface. The UUB has all the hardware
ports, software (T3 trigger to the buried electronics and distributor
commanding/monitoring), and FPGA code needed to replace the TDR. Therefore, on
stations with a UUB installed, the TDR and SPI line are removed, and the
distributor control cable is connected to the UUB directly.
When a T3 is found, the TDR looks for the received GTS in the GTS+LTS table.
If the GTS is found, its corresponding LTS is broadcasted to all the modules
via UDP; otherwise, a “LTS not found” message is sent. The TDR also provides
access to the SD serial console via SSH or TELNET protocols. Note that this
trigger scheme has no dependency on the buried module electronics, and can be
reused by any project designed to trigger with Auger surface detector
stations.
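The GTS-to-LTS translation performed by the TDR amounts to a lookup in a
fixed-depth circular table. A minimal sketch (class and method names are
ours, not from the actual firmware):

```python
from collections import OrderedDict

# Minimal sketch of the TDR time-stamp table: a circular buffer of
# 2048 GTS->LTS pairs, queried when a T3 request arrives.
class TimestampTable:
    def __init__(self, depth=2048):
        self.depth = depth
        self.table = OrderedDict()

    def store(self, gts, lts):
        if len(self.table) >= self.depth:  # overwrite the oldest entry
            self.table.popitem(last=False)
        self.table[gts] = lts

    def lookup(self, gts):
        # None corresponds to the "LTS not found" reply.
        return self.table.get(gts)

tdr = TimestampTable()
tdr.store(gts=0x1122334455667788, lts=0xABCDEF)
print(tdr.lookup(0x1122334455667788))  # 11259375 (0xABCDEF)
```

On a hit, the corresponding LTS would be broadcast to the modules via UDP; on
a miss, the "LTS not found" message is sent instead.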
### 2.4 Communications
The telecommunications system connects the AMIGA stations to the central
storage server; it comprises an Ethernet LAN network (connecting the
UMDs and the SD electronics) and a WLAN telecommunications system for the data
transmission to CDAS. The WLAN is implemented as a point-to-multipoint WiFi
radio link attached to the original Auger inter-FD microwave links and it is
provided by a Mikrotik RB493 router, which includes an Ethernet switch and a
802.11n WiFi radio. These characteristics are mentioned in [8], where we can
also find that the network structure for the UMDs was implemented over the
$2.4\text{\,}\mathrm{GHz}$ WiFi band. The motivation for
this decision was to have a modular system that is highly reliable for
outdoor operation and that can handle the throughput required for the network
of detectors over the $2.4\text{\,}\mathrm{GHz}$ Wi-Fi band,
for which the Pierre Auger Observatory has a licence for exclusive use in the
local region granted by the _Comisión Nacional de Comunicaciones_ of
Argentina. The details of the design, the protocol as well as the tests
performed to measure its reliability are described in the mentioned reference.
In this section we expand the description of the network to the higher layers
and how it is organized in subnetworks within each detector.
#### 2.4.1 Network
The network addressing scheme for communications uses one /24 network (Class
C network, 253 available IP addresses) for each station LAN, with all radios
connected on the same WLAN network (172.16.0.0/16). A station with an
internal LAN network 10.a.b.0/24 will have the IP 172.16.a.b in the WLAN
network. In the internal detector LANs, the IP 10.a.b.1 is reserved for the
gateway (WiFi radio), 10.a.b.2 for the TDR, and 10.a.b.(10 + x) for module x
of that detector (when x $<$ 90).
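The stated convention can be sketched as a small helper; the function name
and the zero-based module index below are our illustrative choices:

```python
import ipaddress

# Sketch of the stated addressing convention: station LAN 10.a.b.0/24 maps
# to WLAN address 172.16.a.b, with .1 the gateway, .2 the TDR, and module x
# at .(10 + x). The zero-based module index here is an assumption.
def station_addresses(a, b, modules=3):
    lan = ipaddress.ip_network(f"10.{a}.{b}.0/24")
    return {
        "lan": str(lan),
        "wlan": f"172.16.{a}.{b}",
        "gateway": f"10.{a}.{b}.1",
        "tdr": f"10.{a}.{b}.2",
        "modules": [f"10.{a}.{b}.{10 + x}" for x in range(modules)],
    }

print(station_addresses(5, 7))
```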
### 2.5 Power and Trigger distribution
This section explains the distribution of the Pierre Auger Observatory
triggers and their interconnection at a physical level with the UMDs. It is
worth mentioning that the trigger and power distribution between an SD and
the three buried UMDs are intimately related, since they share the same
lines; therefore, we also include a brief description of the power and monitoring
system used. As mentioned before, the main piece of hardware that is in charge
of handling the trigger and power lines to the buried UMDs is called the
_distributor_ , and in this section we introduce its functionality.
The power for each station is supplied by independent photovoltaic systems
since all installation sites are far away from any electric power grid [9].
The distributor receives power from the solar panels and distributes it to the
four modules via Passive Power-over-Ethernet (PPoE) lines. It uses the same
cable to distribute the fast trigger signal sent to the buried modules.
The distributor converts the LVDS signals from the Auger surface electronics
to TTL-CMOS logic using LVDS receivers. The TTL fast trigger is isolated in
order to avoid a possible ground loop and is split into four lines. Each line
is independently converted to LVDS signals, available at the distributor
output ports. Each PPoE output uses the pin-out of 802.3af mode B
($24\text{\,}\mathrm{VDC}$ on pins 4 and 5 and GND on
pins 7 and 8) and is connected to the power supply through a relay that allows
switching the power on and off. The relays are commanded by the TDR using
RS-232 messages transmitted via UTP cable, allowing up to four distributors in
a daisy-chain (required when more than four counters are present in a
station).
## 3 Acquisition and Control System
In this section we present the functional structure of the Data Acquisition
and Control (DAQ) system of the UMDs. We describe the implementation at the
hardware level on a Cyclone IV FPGA, the timing issues and memory requirements
for the tasks performed by the DAQ system, including the firmware of each UMD
and the soft microprocessor platform used to implement every function required
by the UMD. We also describe the power system distribution and consumption of
the electronics, the interface required and finally a short description of the
printed circuit board layout and how it was designed. As we will see in this
section, the constraints on speed and memory are significant; we therefore
emphasize the need for a soft microprocessor platform in order to ease
upgrades or future hardware changes that may be required by the availability
of spares. In this way we obtain a flexible and scalable embedded system that
can be used for any kind of particle or radiation detector that works
synchronously. The Acquisition and Control System is composed of two distinct
parts: a soft microprocessor and custom acquisition code (both detailed
below). A soft microprocessor was chosen to simplify the electronics,
reducing the component count and improving the future-proofing of the design.
### 3.1 Hardware
In this section we describe the back-end digital hardware used for data
acquisition on each UMD. We introduce the basic structure of the main system,
comprised of an FPGA and a memory that functions as a ring buffer storing data
according to a trigger provided by the SD. The main CPU controls the interface
and data transfer to the surface, as well as monitoring several key hardware
parameters that indicate its good performance. As the main motivation for the
design, we should mention the need for a compact flexible embedded system that
can easily be reprogrammed remotely, as we will see in the following sections.
As mentioned, the CPU and firmware were implemented using an FPGA connected to
two physically independent LPDDR1 memories. This board has an expansion port
with $150\text{\,}$ pins using a $1.8\text{\,}\mathrm{V}$ single ended voltage
standard for fast speed digital signals. Additionally, one RJ-45 connector
receives the fast trigger (T1) and the clock line with LVDS receivers, another
one uses a LVDS transmitter/receiver pair for serial communication. These LVDS
drivers can be fed by power from the connected cable, or by power from the
Acquisition board, selected by a jumper. All input/output signals from LVDS
transmitters/receivers are electrically isolated. In the following subsections
we describe the hardware used with its specifications in terms of timing,
resources and memory usage, the printed circuit board design and the interface
with the SD and the telecommunications system.
Figure 5 shows a picture of the top and bottom side of the assembled board. It
depicts the layout of the board and indicates the locations of the main
connectors and components.
Figure 5: Acquisition board, showing the main connectors and components of the
top side (left), and the front-end connector located at the bottom (right).
#### 3.1.1 FPGA and memory
The board is built around an Altera Cyclone IV FPGA (model EP4CE40U19I7),
which provides $39\,600$ logic cells and a memory of
$1.16\text{\,}\mathrm{Mbits}$. Table 2
summarizes the usage of these resources for the custom logic (firmware) and
the LEON3 soft-core CPU. While only $18\text{\,}\%$ of the logic cells remain
unused, nearly half of the memory bits ($48\text{\,}\%$) are left for
future improvements.
The chosen industrial package with $329$ I/O pins (of which $257$ are used)
works in a wide temperature range ($-40\,^{\circ}$C to $+100\,^{\circ}$C),
which is adequate for the operating conditions of the electronics (buried
underground). The FPGA series is optimized for low power consumption with a
low core voltage of $1.2\text{\,}\mathrm{V}$. The firmware incorporates the
Altera Remote Update system [16] with a remote recovery option, which ensures
additional safety in case of an upgrade failure.
Component | Logic Cells | Memory bits
---|---|---
Firmware | 12156 (30 %) | 480211 (41 %)
LEON3 CPU | 20371 (51 %) | 133872 (11 %)
Total | 32527 (82 %) | 614083 (52 %)
Available | 39600 (100 %) | 1161216 (100 %)
Table 2: Resource usage of the FPGA image, split into LEON3 and Firmware.
These numbers were obtained from logs generated by the Quartus II compiler.
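The utilization figures in Table 2 can be recomputed directly. The following is a small sketch using the numbers from the table above; the slight rounding differences (e.g. 52 % vs. 53 % for memory bits) come from truncation in the table.

```python
# Recompute Table 2 utilization percentages for the EP4CE40 FPGA.
# Values below are taken from Table 2 of the text.

RESOURCES = {"logic_cells": 39600, "memory_bits": 1161216}

USAGE = {
    "Firmware":  {"logic_cells": 12156, "memory_bits": 480211},
    "LEON3 CPU": {"logic_cells": 20371, "memory_bits": 133872},
}

def utilization(kind: str) -> float:
    """Total usage of one resource kind as a percentage of what is available."""
    used = sum(component[kind] for component in USAGE.values())
    return 100.0 * used / RESOURCES[kind]
```

This confirms that about 82 % of the logic cells and roughly half of the memory bits are consumed by the current image.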
The board has two physically separated memories, one for the soft-core CPU and
one for the acquisition firmware. This design avoids logic elements for the
synchronized access to the two RAMs, thus speeding up the acquisition firmware
and saving resources. Both memories are Micron LPDDR (low-power DDR, model
MT46H32M16LFBF-5), with 1 Gbit of capacity and a working frequency of up to
200 MHz (both memories are operated at 100 MHz).
Figure 6: Close-up of the PCB Layout focusing on the FPGA (“U1”) and LPDDR
memories (“U4” and “U5”).
#### 3.1.2 Power sources
Based on the estimates for the required ratings of the different devices, we
designed a cascaded, high-efficiency power system as shown in figure 7.
Figure 7: Power supply diagram. Indicated are the maximum power and current supported for each component.
Voltage [V] | Current [mA] | Power [W]
---|---|---
24 | 75 | 1.800
3.3 | 210 | 0.693
1.8 | 44 | 0.079
2.5 | 46 | 0.115
1.2 | 236 | 0.283
Table 3: Power consumption for each of the outputs shown in figure 7.
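The reported power values in Table 3 can be sanity-checked against P = V·I. A minimal sketch, using the values from the table:

```python
# Verify Table 3: the reported power should equal voltage times current
# (current is given in mA, so divide by 1000 to obtain watts).

OUTPUTS = [  # (voltage [V], current [mA], reported power [W])
    (24.0,  75, 1.800),
    (3.3,  210, 0.693),
    (1.8,   44, 0.079),
    (2.5,   46, 0.115),
    (1.2,  236, 0.283),
]

def computed_power(volts: float, milliamps: float) -> float:
    """Electrical power in watts from voltage and current."""
    return volts * milliamps / 1000.0

for v, i, p_reported in OUTPUTS:
    # All rows agree with P = V * I to within 1 mW of rounding.
    assert abs(computed_power(v, i) - p_reported) < 0.001, (v, i, p_reported)
```

Each row agrees with P = V·I to within the table's rounding.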
The board uses a 3 W isolated DC/DC converter (RS3-2405S) to avoid ground
loops. The device converts the nominal 24 V input into a 5 V output
(max 0.6 A). This output is converted by the LM46002 DC/DC regulator to a
stable 3.3 V supply (max 2 A). With this 3.3 V as input, the further
step-down DC/DC regulators TPS62150, TPS62230 and TPS62231 generate 1.2 V
(max 1 A), 2.5 V (max 0.5 A) and 1.8 V (max 0.5 A), respectively, with a
typical efficiency of 95 %. The 3.3 V and 1.8 V power supplies are connected
to the FPGA I/O blocks, providing power and reference voltage. Table 3 shows
the nominal values of current and power consumption, as measured in the
laboratory, for each of the outputs in figure 7.
#### 3.1.3 Interface and Communications
The board provides two expansion connectors: one for data/logic signals and
one for power supplies. The data expansion connector is a single high-speed
connector, Samtec QSH-090-01-LDAK-TR, capable of transmitting signals with a
maximum frequency of 9 GHz. It also has a central ground bar that ensures low
resistance for the return currents through the connector. The connector has
180 lines, with a pitch between pins of 0.5 mm.
The power expansion connector provides the 24 V input power (directly from
the PPoE connector) and the 3.3 V power supply. Additionally, two control
lines provide the functionality to switch the connected expansion board on or
off.
The Ethernet PHY driver is a DP83848IV from Texas Instruments, which is
supported by the LEON3 built-in Ethernet MAC and the Linux operating system.
The MAC address of each board is divided into two parts: the most significant
11 nibbles (44 bits) are a firmware compile-time constant, and the lowest
nibble (4 bits) can be set using four switches on the board.
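The composition of the 48-bit MAC address from these two parts can be sketched as follows (the prefix value used in the test is hypothetical; only the 44-bit/4-bit split is taken from the text):

```python
# Build a 48-bit MAC address from a 44-bit compile-time prefix (upper
# 11 nibbles) and a 4-bit value set by the on-board switches (lowest nibble).

def module_mac(prefix_44bit: int, switches: int) -> str:
    """Return the MAC address as a colon-separated hex string."""
    assert 0 <= prefix_44bit < (1 << 44)   # upper 11 nibbles
    assert 0 <= switches < (1 << 4)        # at most 16 modules per station
    mac = (prefix_44bit << 4) | switches
    # Emit six bytes, most significant first.
    return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))
```

Since only the lowest nibble is board-configurable, at most 16 distinct module addresses exist per station LAN, which matches the limit stated later for the boot system.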
For the monitoring of the system voltages an ADS7828 ADC from Texas
Instruments was chosen. It has 8 multiplexed input channels, uses an internal
reference voltage, and supports I2C communications for transferring monitoring
data to the soft-core.
#### 3.1.4 Printed circuit board
Based on the maximum working frequency within the board (~200 MHz), FR4 was
chosen as the appropriate material for this design; it can accommodate working
frequencies of up to 500 MHz. A 12-layer design was made, following the
recommendations of the FPGA and RAM manufacturers. The FPGA recommendations
specify tracks with a minimum width of 5 mils and a characteristic impedance
of 50 Ω. The impedance of the lines depends on the materials (dielectrics and
conductors) and their geometry, that is, the track width, the thickness of the
dielectric, and so on. A standard-thickness (1.6 mm) FR4 PCB with 12 layers
does not allow 50 Ω lines, therefore a thicker (2.4 mm) PCB was used.
Following the application notes of Altera, one LPDDR memory was connected to a
whole FPGA I/O bank. The other memory was connected using part of a bank for
the address pins and part of another for the data bus, as indicated by the
FPGA manual [17]. Once this was defined, the routing of the tracks from the
FPGA to the memories was done.
The tracks between RAM and FPGA must be striplines of controlled impedance and
length, to preserve signal integrity and to meet the timing constraints imposed
by the RAM. The track width and the distance between the signal and reference
layers needed for a specific impedance were determined by specialized programs,
taking into account the properties of the PCB material. The tracks must also
have equalised lengths to ensure that the propagation times of the signals are
within the values recommended by the RAM manufacturer.
### 3.2 Soft microprocessor LEON3
The back-end design went through several iterations before arriving at the
final design presented in this paper. Through this debugging process we
concluded that the flexibility of the hardware was essential, as well as
remote control and configuration. After two prototypes were built, we chose to
use a LEON3 soft microprocessor running at 50 MHz that has an open-source
architecture. It allows the system to be fully programmed remotely from the
Observatory campus in Malargüe, which eased its debugging in the field and
allows for upgrades or changes in the firmware to accommodate future needs
such as changes in the trigger system, calibration, monitoring and disabling
malfunctioning parts of the hardware if necessary. LEON3 is a 32-bit CPU
microprocessor core, based on the Scalable Processor Architecture Version 8
(SPARC-V8) instruction set architecture with an AMBA AHB system bus [18]. It
was originally designed by the European Space Research and Technology Centre
(ESTEC), part of the European Space Agency (ESA), and later by Cobham Gaisler
Research. It is described in synthesizable VHDL.
This soft microprocessor has numerous advantages, particularly for such a
long-running project: the base code (CPU without floating-point support and
most peripherals) is available under a free-software license (GPLv2), it is
FPGA-independent (upstream code supports Altera, Xilinx and Microsemi FPGAs),
comes with a complete library of peripherals, and allows a wide array of
configuration options (both in adding and removing peripherals and selecting
or removing CPU features to prioritize performance or resource usage). The
peripherals used are: SPI Memory controller (used for early boot and remote
Firmware upgrade), I2C (used for environmental sensors), RS-232 serial ports
(used for High Voltage power supply and acquisition control/data transfer),
GPIO ports (used for the front-end power supply control, reboot/power-off and
watchdog), and 10/100 Ethernet MAC (used for communications).
### 3.3 UMD firmware
In this section we introduce the details of the firmware functionality for a
UMD back-end. It handles the data flow and secondary tasks required of the UMD
hardware. We describe each part of the firmware in a block diagram divided
into three parts: the CPU core, the back-end main task of muon data storage
and the interface with the front-end and the peripherals. The current FPGA
implementation is divided into six main blocks, as shown in figure 8. The EMC
block drives 128 MiB of external LPDDR1 memory for temporary event data
storage. This data has a fixed size (2048 anode samples for each SiPM, plus
data from both ADCs and the auxiliary digital outputs from both ASICs,
equivalent to 6.4 µs per event). The trigger block
receives the T1 signal, immediately notifies the incoming event condition to
the data block and forwards the LTS value corresponding to the incoming event
once it has been fully received. The read-out stage processes the 64 digital
inputs at 320 MHz and packs four samples into a single 256-bit word. To reduce
the number of resources needed, particularly interconnects, all firmware
except for the read-out process runs at 80 MHz. The data acquisition block
stores 2048 of these 256-bit words in two alternating circular (internal)
buffers. When a trigger arrives, the current buffer is filled up and
acquisition switches to the alternate buffer; while the alternate buffer
records new data, the completed buffer is saved to the external RAM. The LTS
table process manages a look-up table with 2048 entries. When addressed with
the LTS of an event, the memory address of the event data block is returned,
or an ‘event not found’ error is raised [14].
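The packing performed by the read-out stage can be sketched as follows. This is a minimal illustration only; placing sample 0 in the least-significant 64 bits is an assumption, since the firmware's actual bit ordering is not specified here.

```python
# Pack four consecutive 64-bit samples (one bit per digital input channel)
# into a single 256-bit word, as done by the read-out stage before buffering.

def pack_samples(samples: list) -> int:
    """Combine four 64-bit channel snapshots into one 256-bit word.

    Assumption: sample k occupies bits [64*k, 64*k + 63] of the word.
    """
    assert len(samples) == 4
    word = 0
    for k, s in enumerate(samples):
        assert 0 <= s < (1 << 64)      # each sample covers the 64 channels
        word |= s << (64 * k)
    return word
```

Four 64-bit snapshots fill the 256-bit word exactly, which is why the rest of the firmware can run at a quarter of the 320 MHz sampling rate (80 MHz) while keeping up with the input.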
The ASIC programming block receives the ASIC configuration from the embedded
system and then uses it to program both ASICs. The micro-controller interface
block handles communications with the embedded system. This is implemented
using serial ports. Finally, the coordinator block manages the rest of the
blocks.
Figure 8: Block diagram of the UMD electronics on the acquisition board: CPU
blocks in green, back-end blocks in blue and peripheral blocks in red.
### 3.4 Remote upgrade
In order to have a flexible distributed system, a robust firmware upgrade
mechanism is needed. Therefore the main firmware includes a remote upgrade
that can be performed by an operator and provides a fail-safe in case any
problem occurs during the upgrade. In this way the main functions of the UMD
can be remotely enhanced and extended. Altera Remote Upgrade works as follows:
The hardware stores two FPGA images in a persistent memory, named “Factory”
and “Application” in Altera literature. On FPGA power-on, the Factory image is
programmed, which starts a watchdog and immediately programs the Application
image. If the watchdog is not periodically reset (at least once every 60
seconds), the Factory image is reprogrammed into the FPGA and takes over. Both
firmware images can be identified by software via a GPIO pin. The Application
image contains both the LEON3 and the AMIGA firmware, while the Factory image
contains only the LEON3. The watchdog is controlled by software, and the
routines to reset it
start late in the boot process. If the module boots in Factory mode, the
software tries to re-enter Application mode ten minutes after booting has
completed.
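The fail-safe described above can be modelled as a small state machine. This is a simplified sketch: the 60-second watchdog timeout and the 10-minute retry follow the text, while the class and method names are illustrative (the retry itself is not modelled).

```python
# Simplified model of the Altera Remote Update fail-safe: the Application
# image must reset its watchdog at least once every 60 s, otherwise the
# FPGA falls back to the Factory image.

FACTORY, APPLICATION = "factory", "application"
WATCHDOG_TIMEOUT = 60     # seconds without a reset before fallback
RETRY_DELAY = 10 * 60     # Factory retries Application after 10 min (not modelled)

class UpgradeFailsafe:
    def __init__(self):
        # On power-on the Factory image is programmed, which immediately
        # programs the Application image.
        self.image = APPLICATION
        self.since_reset = 0

    def watchdog_reset(self):
        """Called periodically by healthy Application software."""
        self.since_reset = 0

    def tick(self, seconds: int):
        """Advance time; fall back to Factory if the watchdog expires."""
        if self.image == APPLICATION:
            self.since_reset += seconds
            if self.since_reset > WATCHDOG_TIMEOUT:
                self.image = FACTORY
```

A healthy Application that resets the watchdog stays in control; one that hangs for more than 60 s is replaced by the Factory image, from which a new upgrade attempt can be made.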
Firmware upgrades are done by writing a specially processed FPGA image into
the SPI Flash. To prevent accidental overwriting of deployed software images,
all Flash regions except the one containing the FPGA Application image are
write-protected.
## 4 Software overview
This section describes how the UMD software is implemented, the tasks it
performs and the operating system used. The software is responsible for
sending event data and monitoring data via the network to a remote location.
To simplify and speed up the development phase, this software runs on top of
an operating system which also provides system tools. The system was designed
to minimise the use of persistent storage in the module electronics, improving
resiliency, easing electronics installation and replacement, and reducing the
possibility of “bricking” the electronics with future software updates (the
only persistent data on each module electronics are the LEON3 core, the
firmware, the boot-loader and the Ethernet MAC address). A description of the
software, the boot process and the SDK follows.
### 4.1 Operating System
Following the open source philosophy adopted for the soft microprocessor, the
operating system used is also open source. The underground scintillator
detector modules and the TDRs use Linux [19] as the operating system. It was
chosen for user familiarity, relative ease of use, and because it has full
hardware support for LEON3 (CPU and required peripherals). The motivation for
implementing the software in this manner is to facilitate remote upgrades and
configuration, with complete access to the hardware functionality of the UMD
electronics deployed in the field. The kernel version used at the time of
writing this paper is 4.9, with patches (provided by Cobham Gaisler) for
hardware support forward-ported to 4.9. Additionally, the kernel was patched
to use the built-in GPIO power-off driver, which has been set up to instruct
the FPGA to program the Application image. The system runs in RAM from an
initcpio (embedded in the kernel image
due to bootloader limitations). Loadable module support is disabled (all
modules are compiled into the kernel image) because no out-of-tree kernel
modules are used, the connected hardware is known in advance, and disabling
module support results in smaller kernel and initcpio binaries.
#### 4.1.1 Boot
Upon board power-on/CPU Reset, a boot-loader (uBoot 1.16 with Cobham Gaisler
patches) starts running. This boot-loader brings up the Ethernet port, obtains
an IP address via DHCP, and loads the full operating system image (specified
by the DHCP reply) from a TFTP server located in the Mikrotik RB493 router in
each station. The DHCP server runs on the central data storage server, with
DHCP relays running on each station LAN. The TFTP servers are located in each
station LAN to preserve WLAN bandwidth. This system allows for easy image
upgrades: Simply overwrite the image on the respective TFTP servers and reboot
the affected modules. Modules are assigned a fixed IP and hostname based on
their MAC address. This address has the upper 44 bits set by the software
image (common for all stations) and the lower 4 bits configurable by hardware-
switches, admitting at most 16 module electronics per station. Since each
station LAN is physically isolated and has its own network address, no MAC
address conflicts arise as long as MAC addresses are not repeated within the
same station LAN.
#### 4.1.2 System Applications
All basic system applications (init, shell, etc.) are provided by Busybox
(currently v1.33.1) [20]. The most notable applications are: Telnet daemon,
watchdog user-space application, NTP client, DHCP client and SPI flash. The
DHCP client has been configured to obtain the client hostname published by the
DHCP server (required by AMIGA software to identify the module), the station
LSID (used by the monitoring software to identify the station), and the NTP
server used to synchronize the internal time.
#### 4.1.3 Software Development Kit (SDK)
The tool-chain used for software development (compiler, libc, linker and
related utilities) was custom-built. It consists of a GCC 8.1 C compiler
targeting by default the LEON3 architecture with software floating-point
support, using binutils 2.30 and the uClibc-ng 1.30 C library. The tool-chain
was built for Ubuntu 18.04 running on Intel x86_64-compatible CPUs.
### 4.2 Software
In this section we describe in more detail the software structure and how it
performs the tasks needed for the UMD to store data and transfer it to the
CDAS. We organize this section into three main subsections that describe the
most important routines and their functionality: Module configuration, data
transfer, and calibration. The software was written to control the module and
to manage event and monitoring data acquisition. It consists of five pairs of
client-server applications. With the single exception of the Calibration
software (MdCalib), all clients run automatically after booting in Application
mode on each buried module electronics and all servers run automatically on
boot on a server in CDAS, with the clients keeping a persistent connection to
the server (unless specified otherwise). Additionally (and with the same
exception), all programs use the host-name to identify each individual module,
allowing the system to work behind network address translation (NAT). A brief
description
of each pair of programs, with the names of each program client/server side,
follows.
Figure 9: Software communications diagram. All communications with the central
server go through TCP sockets, maximizing data integrity and minimizing data
loss.
#### 4.2.1 Module configuration - MdCfg/MdCfgd
This software configures the front-end (CITIROCs, ADCs, and HV) and the
firmware parameters, and provides configuration/status information (e.g.
acquisition mode, masked channels, etc.) to other programs using a Unix
socket. On start-up, the client connects to the server and requests the
initial module
configuration (stored in the central server in INI files). Once the
configuration data has been sent and applied, this daemon disconnects from the
server and keeps running in background, handling configuration and status
requests from all other programs (these requests will be specified in each
program’s section).
#### 4.2.2 Data transfer - MdSend/MdRcvd
This pair of programs handles the event data transfer and storage. When idle,
the client periodically consults MdCfg for the current acquisition mode and
the UDP broadcast port for data requests (only in acquisition modes where the
sending of data is requested). Upon a data request, the software instructs the
firmware to search for the specified event, receiving either the event data or
an error code, and sends the result to the server. The server keeps in RAM a
dictionary of events indexed by the 16-bit internal CDAS Event ID. For each
Event ID, the data for all participating counters is stored in a dictionary
indexed by the counter ID. The server has a TTL counter for each event, which
is decreased every 30 seconds and increased every time data for a new module
arrives. Once this counter reaches zero, the event is written to disk and
discarded from RAM (if more data for the same event arrives, it is treated as
a new event and merged as part of the standard offline post-processing of
AMIGA data). The event data is stored in JSON format, using a single file
which is periodically rotated and compressed (in gzip format) once per day.
This format allows easy parsing using libraries available in most programming
languages, and has a very good compression ratio when using gzip [21].
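The server-side bookkeeping described above can be sketched as follows. This is illustrative only: the initial TTL value and the data structures are assumptions, while the indexing by Event ID and counter ID and the 30-second sweep follow the text.

```python
# Server-side event buffer: events are indexed by the 16-bit CDAS Event ID,
# each holding per-counter data and a TTL. A periodic (30 s) sweep decreases
# every TTL; incoming data increases it. At TTL zero the event is flushed.

INITIAL_TTL = 3   # assumption; the actual value is not given in the text

class EventBuffer:
    def __init__(self):
        self.events = {}   # event_id -> {"ttl": int, "counters": {counter_id: data}}

    def add(self, event_id: int, counter_id: int, data) -> None:
        """Store data for one participating counter; fresh data extends the TTL."""
        ev = self.events.setdefault(event_id, {"ttl": INITIAL_TTL, "counters": {}})
        ev["counters"][counter_id] = data
        ev["ttl"] += 1

    def sweep(self) -> list:
        """30 s sweep: age all events, flush those whose TTL reached zero."""
        flushed = []
        for event_id in list(self.events):
            self.events[event_id]["ttl"] -= 1
            if self.events[event_id]["ttl"] <= 0:
                flushed.append((event_id, self.events.pop(event_id)["counters"]))
        return flushed
```

The flush step would correspond to writing the event to the daily JSON file; any data arriving after the flush starts a new entry, to be merged offline as described above.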
#### 4.2.3 Calibration - MdCalib
These programs run the pre-defined Calibration routines on each module as
described in [11]. On the module, MdCalib starts on boot and listens to
calibration requests. Once a calibration request is received, the program
acquires the file-lock for background data, sets the acquisition mode to
‘Calibration’ (all data requests are sent to the server indicating that the
sent data is invalid due to the module being calibrated) and runs the
requested calibration routine, setting the needed configuration parameters via
MdCfg and sending the resulting background data to the requester. Once the
calibration finishes, the module is returned to its previous state and the
file-lock is released. On a remote machine, a dedicated program communicates
with the calibration software running on one or more modules (identified by IP
address) and receives the calibration data. Currently the calibration request
must be issued manually by an operator, but full automation (via scheduled
jobs) is possible.
### 4.3 Monitoring system
To guarantee the validity of the acquired data, the status of the detector, as
well as the status of its measured data, have to be monitored. In this section
we include a description of the monitoring system as implemented in the UMD
and the main server. In summary, the monitoring system is designed to perform
two very important tasks: background signal monitoring, in which a record of
the signal counts of the UMD is constantly taken, and environmental
monitoring, which handles the hardware status (including power-source
voltages, currents and temperature). The monitoring acquisition system was
designed as a series of processes (one for each kind of monitoring) using
client/server architectures, where each client runs on an MC electronics and
establishes a TCP connection to its corresponding server. The resources needed
by the monitoring processes are extremely low, particularly on the client
side: they use 1 MB of RAM and are activated every 5 minutes to take data, of
which about 1 kByte is sent to the server at the CDAS every 30 minutes.
A description of these monitoring processes follows.
#### 4.3.1 Background signal monitoring - backpulse_client/backpulse_server
These programs read and store the background pulse counters from the firmware.
Background pulses are all the current pulses produced by the SiPM (not
necessarily belonging to a possible event). The rate of these pulses for a
given SiPM is expected to be relatively constant throughout the operating
lifetime of the detector; variations on this rate over time may indicate aging
or damage of any of the optical parts. The client periodically orders the
firmware to snapshot the current counter values, which are then sent to the
server and saved in space-separated files (one file per day and module),
formatted with the module time-stamp and the 64 counter values. Since the
background data client shares resources with the calibration daemon, this
program tries to acquire a lock-file in a predetermined location before
reading the counter data, and immediately closes this file after reading.
#### 4.3.2 Environmental monitoring -
monitoring_client/monitoring_server/data_handler
This trio of programs handles the monitoring data read-out and storage. The
client periodically reads the data directly from the monitoring ADCs, and data
from the HV power supply via MdCfg, and sends it to the monitoring server upon
request. Monitoring data is taken every 5 minutes. The server spawns a new
process for each connection request from a client, and this process
periodically requests its monitoring data. Every 30 minutes, up to 432 bytes
of monitoring data plus a header of 35 bytes are sent per station. This data
is then immediately forwarded to the data_handler process via a Unix socket.
The data_handler process stores all the monitoring data it receives from its
Unix socket in a MySQL database and, as a backup, in a .csv file (rotated
every 12 hours) that can be imported manually into the database.
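The implied monitoring traffic per station is small, as a quick calculation shows (a sketch using the per-transmission sizes quoted above):

```python
# Daily monitoring traffic per station: up to 432 bytes of data plus a
# 35-byte header, transmitted every 30 minutes.

PAYLOAD, HEADER = 432, 35   # bytes per transmission
PERIOD_MIN = 30             # minutes between transmissions

def daily_bytes() -> int:
    """Upper bound on monitoring bytes sent per station per day."""
    transmissions_per_day = 24 * 60 // PERIOD_MIN   # 48 transmissions
    return transmissions_per_day * (PAYLOAD + HEADER)
```

At 467 bytes every 30 minutes this amounts to roughly 22 kB per station per day, negligible for the wireless link.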
### 4.4 Online-Monitoring
At the server side, the monitoring data is received and stored in a database.
In this section we describe the monitoring data handling from all the UMDs
deployed in the field and how it is presented to the user in order to
visualize it and make decisions accordingly.
Figure 10: Screen capture of the monitoring system (integrated within the
Auger monitoring system) displaying UMD stations. Green ticks indicate
stations with buried modules in normal operating status. The red hexagon
circumscribes the 750 m infill and the blue hexagon circumscribes the 433 m
infill. Displayed are the stations
deployed at the time of publication of this paper. The station in yellow that
shows a warning is displayed in detail in figure 11.
The monitoring system allows access to the most relevant information of the
different components of the MC via a web-interface. It also enables the
visualisation and analysis of the environmental and trigger monitoring
parameters over time, which can be very useful to find anomalies that could
indicate the presence of failures in some component of the MC.
Figure 11: Screen capture of the monitoring system, displaying installation
and monitoring data for an UMD. As can be observed, module 103 shows a warning
as three of its channels (5, 7 and 27) are not operational.
The main page of the monitoring system shows a Map option with the position of
each SD station. Figure 10 shows an example, where the coloured marker depends
on the alarm state of all the scintillator modules for each SD station. In the
figure, green marks represent stations working in a normal state, and gray
marks represent stations that have yet to be installed.
Selecting an SD station shows the deployment view associated with this station
and the monitoring data from the database for a particular date. An example of
the
monitoring system displaying a typical AMIGA station after deployment is shown
in figure 11. Selecting a module shows the monitoring information and alarm
state for each variable of the chosen module. This view also gives the option
to mask alarms of any monitoring parameters and visualise the full monitoring
history of the corresponding module.
## 5 AMIGA sample event
As an example of the acquisition process, this section presents data collected
by the AMIGA UMDs for an UHECR of $4\times 10^{18}$ eV impinging with a 20°
zenith angle (values taken from the official Pierre Auger Observatory event
reconstruction) that hit the border of the muon counters. This event was
recorded on May 9, 2018 at 14:44:56 UTC. The temperature measured in the SiPMs
of the four modules belonging to station 1764 (which participated in the
event) is shown in figure 12, taken from the monitoring system. Event data for
each AMIGA station is shown in this section to illustrate the reconstruction
procedure, applied to the binary acquisition mode only (at the time of writing
this paper, the reconstruction with the integrator mode is still under
development).
Figure 12: Time plot of the SiPM temperature sensor (in °C) for the four AMIGA
modules of the station with ID 1764. The 12-day time period includes the
moment of the presented event (at the solid vertical line).
### 5.1 Status of the AMIGA Array at the time of the event recording
As seven stations of the 750 m array were part of the prototyping phase (the
so-called Unitary Cell), the first operative hexagon for physics analysis has
stations with higher segmentation (256 bars instead of 192) for the projected
30 m² detection area, as shown in figure 13. Two of these stations have four
buried modules (two 10 m² modules and two 5 m² modules). One of these stations
has eight modules (four 10 m² modules and four 5 m² modules) installed
symmetrically with respect to the Auger Surface Detector; this configuration
is also known as a “Twin” (used for validation purposes and to assess the
resolution of the reconstruction procedure as described in [22]). One station
has six modules (four 10 m² modules and two 5 m² modules), with the southern
side of the station installed like the others, and the northern side with two
10 m² modules. The rest of the stations have one or two 10 m² modules.
Figure 13: Status of the AMIGA Engineering Array as of March 2018. Each
station is identified by a number (“Station ID”). For each station the size
and relative orientation of the AMIGA muon counter is sketched. The size of
the modules represents the detection area (10 m² or 5 m²). Drawing is not to
scale.
### 5.2 UMD module binary data
AMIGA event data for each buried scintillator module is stored as lists of bin
number/anode sample, only for samples where at least one channel has detected
light (all other samples are not stored), and a list of bin number/sampled
value for each ADC. The anode bin number indicates a time-stamp in 3.125 ns
increments. These traces have a length of 2048 bins (6.4 µs). The length of
the pre-trigger/post-trigger window is a configurable parameter of the
electronics. In this example, the pre-trigger and post-trigger windows are
about 4.4 µs and 2 µs long, thus the trigger arrived at bin 1400.
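The quoted timing can be verified with a short calculation (function names are illustrative):

```python
# Trace timing: 2048 bins of 3.125 ns each span 6.4 us, and the pre-trigger
# window length fixes the bin at which the trigger arrives.

BIN_NS = 3.125   # time per anode sample
N_BINS = 2048    # trace length in bins

def trace_length_us() -> float:
    """Total trace duration in microseconds."""
    return N_BINS * BIN_NS / 1000.0

def trigger_bin(pre_trigger_us: float) -> int:
    """Bin index at which the trigger arrives, given the pre-trigger window."""
    return round(pre_trigger_us * 1000.0 / BIN_NS)
```

A pre-trigger window of about 4.4 µs corresponds to bin 1408, consistent with the quoted value of about bin 1400.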
Muon counts are obtained from these raw traces (shown as illustration in top
panels of figure 14) in an offline analysis, searching for patterns in the
signals of each channel using the technique described in [11]. A counting
strategy must be applied to avoid undercounting from pile-up effects [5].
Figure 14: Example of acquisition for two modules in two different stations.
(left) Plots of acquired event data for the presented event on a non-saturated
AMIGA buried muon counter module (1622 south M1) with 29 pulses. (right) Plots
of acquired event data for the presented event on a saturated AMIGA buried
muon counter module (1570 south M1) with 64 pulses. The top graphic shows a
black rectangle on each channel that had signal over threshold in a given time
bin. The bottom graphic shows the sum of all channels with signal over
threshold for a given time bin. The red line indicates the level when all 64
channels are active. A station is considered to be saturated based on the
amount of active channels for a time bin.
As an example of the reconstruction chain applied to extract a number of muons
from the raw data, figure 14 shows the traces collected by two AMIGA modules
at different distances from the shower core for the same event. It is apparent
how the particle density, and therefore the number of triggered raw traces,
decreases as the distance increases. In particular, the figure shows an
extreme case for a very close-to-core detector at 145 m distance. In this
case, the UMD is saturated (in the binary mode), as all 64 channels in three
of its modules are simultaneously triggered. The first step of the
reconstruction procedure [11] is to identify the number of active channels,
i.e. the number of channels with at least four positive samples in the raw
traces within a time window of 18 bins. For station 1570, only one of the
larger (10 m²) modules, M1, is not saturated; it measures 471 particles. On
the other hand, for the 10 m² M1 (south) buried module of detector 1622 (at
about 600 m from the shower core) the number of active channels is 29.
Applying the pile-up correction, this number of active channels translates
into 33.7 reconstructed muons. The correction, applied to the number of active
channels, is what makes this number differ from the reconstructed muons. The
method relies on a probabilistic model to estimate the number of particles,
reporting the mean number of muons for a given number of input signals (or
active channels).
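To illustrate the idea of a pile-up correction, the following is a textbook estimator for segmented counters, assuming muons strike the 64 strips independently and uniformly (Poisson occupancy). This is a simplified illustration only: the AMIGA analysis uses the full window-based probabilistic model of [11], so the numbers below do not exactly reproduce those quoted in the text.

```python
# Simple Poisson pile-up estimator for a 64-strip segmented counter:
# if k of N strips are active, the inferred mean muon count is
# mu = -N * ln(1 - k/N). This is NOT the AMIGA correction, only a
# standard illustration of why active channels < reconstructed muons.
import math

N_CHANNELS = 64

def estimated_muons(active_channels: int) -> float:
    """Mean muon estimate from the number of active channels (k < N)."""
    assert 0 <= active_channels < N_CHANNELS   # k == N (saturation) is undefined
    return -N_CHANNELS * math.log(1.0 - active_channels / N_CHANNELS)
```

Under this toy model, 29 active channels out of 64 correspond to about 39 muons; the difference from the 33.7 muons quoted above reflects the more detailed, window-based counting used in the actual reconstruction. Both models share the key feature that the estimate diverges as all channels become active, which is why a saturated module cannot be counted in binary mode.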
As a final remark, it is important to stress that the detected particles
arrived in a time window of about 200 ns and that there are
no identified particles before or after this window.
The reconstructed number of muons for all AMIGA modules triggered in this
sample event is presented in table 4. These counts were obtained using the
Auger Offline software [23], which applies the counting strategy described
above and the pile-up correction of [5].
Station | Distance to axis (m) | $\mu$ density ($\mu/\mathrm{m}^{2}$) | M1 ($10\text{\,}{\mathrm{m}}^{2}$) | M2 ($10\text{\,}{\mathrm{m}}^{2}$) | M3 ($5\text{\,}{\mathrm{m}}^{2}$) | M4 ($5\text{\,}{\mathrm{m}}^{2}$)
---|---|---|---|---|---|---
1570 S | 145 | 44.86 | 471.1 | - | - | -
1622 S | 603 | 2.69 | 33.7 | 29.4 | 10.1 | 11.3
1622 N | 603 | 2.84 | 34.7 | 25.0 | - | -
688 S | 633 | 3.17 | 41.8 | 24.8 | - | -
1574 S | 669 | 2.10 | 19.7 | 25.0 | 9.2 | 12.2
1764 S | 1114 | 0.41 | 4.0 | 5.0 | 3.0 | 1.0
1764 N | 1114 | 0.44 | 6.1 | 3.0 | 1.0 | 4.0
93 N | 1170 | 0.22 | 1.0 | 2.0 | 1.0 | 3.0
1773 S | 1344 | 0.00 | 0.0 | - | - | -
Table 4: Muon counts calculated for the presented event, separated by station
and analysing twin detectors separately.

Figure 15: MLDF (Muon Lateral Distribution Function) of the studied event,
including a fit (dashed blue line) of the calculated data. This event has data
points at distances to the core both smaller and greater than
$450\text{\,}\mathrm{m}$, which results in a better quality fit.
After the muon densities as a function of distance are reconstructed, the
observations are fitted to obtain the estimator $\rho_{\mu}(450)$, the muon
density at $450\text{\,}\mathrm{m}$ from the shower core. This observable is
sensitive to the chemical composition of the primary cosmic ray entering the
Earth's atmosphere and is the one used for higher-level physics analyses.
Combining this parameter with others from the shower reconstruction (such as
the energy, X${}_{\textrm{max}}$, etc.) gives valuable information regarding
the primary particle mass. Figure 15 shows the fit of the muon lateral
distribution function made for the event studied, following the procedure in
[24].
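The fit can be illustrated with the densities of table 4. The power law below
is a simple stand-in for the actual MLDF of [24], and the zero-density station
at $1344\text{\,}\mathrm{m}$ is excluded, so the resulting $\rho_{\mu}(450)$
is indicative only.

```python
import numpy as np

# distance to axis (m) and muon density (mu/m^2) from table 4,
# excluding the zero-density station at 1344 m
r = np.array([145.0, 603, 603, 633, 669, 1114, 1114, 1170])
rho = np.array([44.86, 2.69, 2.84, 3.17, 2.10, 0.41, 0.44, 0.22])

# fit log(rho) = log(rho450) - beta * log(r / 450) by linear least squares,
# i.e. a power law rho(r) = rho450 * (r / 450)^(-beta) normalised at 450 m
slope, intercept = np.polyfit(np.log(r / 450.0), np.log(rho), 1)
beta = -slope
rho450 = np.exp(intercept)
print(f"rho_mu(450) ~ {rho450:.2f} mu/m^2, slope beta ~ {beta:.2f}")
```

On these points the toy fit gives a $\rho_{\mu}(450)$ of a few muons per
square metre with a slope between 2 and 3, consistent with the trend visible
in the table.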
## 6 Conclusions
The AMIGA front-end electronics was designed as an embedded system around the
Cyclone FPGA family. A soft-core LEON3 processor runs Linux as its operating
system for high flexibility, short development time and other benefits. A
front-end compatible with this board was designed and built for AMIGA buried
muon counters using SiPMs as the sensing device and two CITIROC ASICs for
signal conditioning. The system handles 64 SiPM channels sampled at
$320\text{\,}\mathrm{M}\mathrm{s}\mathrm{p}\mathrm{s}$ and two
$14\text{\,}\mathrm{b}\mathrm{i}\mathrm{t}$ ADCs (high and low gain) sampled
at $160\text{\,}\mathrm{M}\mathrm{s}\mathrm{p}\mathrm{s}$.
The first eight prototypes of this system were installed in October 2016.
Afterwards, 53 electronics units were built and have been acquiring data since
February 2018. After several tests, this design will be produced to complete
the AMIGA muon counters by installing three $10\text{\,}{\mathrm{m}}^{2}$
modules in each of 61 stations.
## Acknowledgments
The successful installation, commissioning, and operation of the Pierre Auger
Observatory would not have been possible without the strong commitment and
effort from the technical and administrative staff in Malargüe. We are very
grateful to the following agencies and organizations for financial support:
Comisión Nacional de Energía Atómica, Agencia Nacional de Promoción Científica
y Tecnológica (ANPCyT), Consejo Nacional de Investigaciones Científicas y
Técnicas (CONICET), Gobierno de la Provincia de Mendoza, Municipalidad de
Malargüe, NDM Holdings and Valle Las Leñas, in gratitude for their continuing
cooperation over land access, Argentina; the Australian Research Council;
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq),
Financiadora de Estudos e Projetos (FINEP), Fundação de Amparo à Pesquisa do
Estado de Rio de Janeiro (FAPERJ), São Paulo Research Foundation (FAPESP)
Grants No. 2010/07359-6 and No. 1999/05404-3, Ministério de Ciência e
Tecnologia (MCT), Brazil; Grant No. MSMT CR LG15014, LO1305 and LM2015038 and
the Czech Science Foundation Grant No. 14-17501S, Czech Republic; Centre de
Calcul IN2P3/CNRS, Centre National de la Recherche Scientifique (CNRS),
Conseil Régional Ile-de-France, Département Physique Nucléaire et
Corpusculaire (PNC-IN2P3/CNRS), Département Sciences de l’Univers (SDU-
INSU/CNRS), Institut Lagrange de Paris (ILP) Grant No. LABEX ANR-10-LABX-63,
within the Investissements d’Avenir Programme Grant No. ANR-11-IDEX-0004-02,
France; Bundesministerium für Bildung und Forschung (BMBF), Deutsche
Forschungsgemeinschaft (DFG), Finanzministerium Baden-Württemberg, Helmholtz
Alliance for Astroparticle Physics (HAP), Helmholtz-Gemeinschaft Deutscher
Forschungszentren (HGF), Ministerium für Wissenschaft und Forschung, Nordrhein
Westfalen, Ministerium für Wissenschaft, Forschung und Kunst, Baden-
Württemberg, Germany; Istituto Nazionale di Fisica Nucleare (INFN), Istituto
Nazionale di Astrofisica (INAF), Ministero dell’Istruzione, dell’Universitá e
della Ricerca (MIUR), Gran Sasso Center for Astroparticle Physics (CFA),
CETEMPS Center of Excellence, Ministero degli Affari Esteri (MAE), Italy;
Consejo Nacional de Ciencia y Tecnología (CONACYT) No. 167733, Mexico;
Universidad Nacional Autónoma de México (UNAM), PAPIIT DGAPA-UNAM, Mexico;
Ministerie van Onderwijs, Cultuur en Wetenschap, Nederlandse Organisatie voor
Wetenschappelijk Onderzoek (NWO), Stichting voor Fundamenteel Onderzoek der
Materie (FOM), Netherlands; National Centre for Research and Development,
Grants No. ERA-NET-ASPERA/01/11 and No. ERA-NET-ASPERA/02/11, National Science
Centre, Grants No. 2013/08/M/ST9/00322, No. 2013/08/M/ST9/00728 and No.
HARMONIA 5 – 2013/10/M/ST9/00062, Poland; Portuguese national funds and FEDER
funds within Programa Operacional Factores de Competitividade through Fundação
para a Ciência e a Tecnologia (COMPETE), Portugal; Romanian Authority for
Scientific Research ANCS, CNDI-UEFISCDI partnership projects Grants No.
20/2012 and No.194/2012 and PN 16 42 01 02; Slovenian Research Agency,
Slovenia; Comunidad de Madrid, Fondo Europeo de Desarrollo Regional (FEDER)
funds, Ministerio de Economía y Competitividad, Xunta de Galicia, European
Community 7th Framework Program, Grant No. FP7-PEOPLE-2012-IEF-328826, Spain;
Science and Technology Facilities Council, United Kingdom; Department of
Energy, Contracts No. DE-AC02-07CH11359, No. DE-FR02-04ER41300, No. DE-
FG02-99ER41107 and No. DE-SC0011689, National Science Foundation, Grant No.
0450696, The Grainger Foundation, USA; NAFOSTED, Vietnam; Marie Curie-
IRSES/EPLANET, European Particle Physics Latin American Network, European
Union 7th Framework Program, Grant No. PIRSES-2009-GA-246806; and UNESCO.
## References
* [1] Pierre Auger collaboration, _The Pierre Auger Cosmic Ray Observatory_ , _Nucl. Instrum. Methods Phys. Res._ 798 (2015) 172.
* [2] I. Allekotte, A. Barbosa, P. Bauleo, C. Bonifazi, B. Civit, C. Escobar et al., _The surface detector system of the Pierre Auger Observatory_ , _Nucl. Instrum. Methods Phys. Res._ 586 (2008) 409.
* [3] A. Etchegoyen, P. Bauleo, X. Bertou, C. Bonifazi, A. Filevich, M. Medina et al., _Muon-track studies in a water Cherenkov detector_ , _Nucl. Instrum. Methods Phys. Res._ 545 (2005) 602.
* [4] Pierre Auger collaboration, _Design, upgrade and characterization of the silicon photomultiplier front-end for the AMIGA detector at the Pierre Auger Observatory_ , _Journal of Instrumentation_ 16 (2021) P01026 [2011.06633].
* [5] A. Supanitsky, A. Etchegoyen, G. Medina-Tanco, I. Allekotte, M. Gómez Berisso and M. Medina, _Underground muon counters as a tool for composition analyses_ , _Astroparticle Physics_ 29 (2008) 461.
* [6] Pierre Auger collaboration, _The AMIGA infill detector of the Pierre Auger Observatory: Performance and first data_ , in _Proceedings of the 32nd International Cosmic Ray Conference_ , vol. 1, pp. 267–270, 1, 2011, DOI.
* [7] Pierre Auger collaboration, _The AMIGA muon detectors of the Pierre Auger Observatory: overview and status_ , in _Proceedings of the 33rd International Cosmic Ray Conference_ , pp. 712–715, 2013, 1307.5059.
* [8] M. Platino, M. R. Hampel, P. Fiszelew, A. Almela, A. Sedoski, G. D. L. Vega et al., _AMIGA at the Auger Observatory: the telecommunications system_ , _Journal of Instrumentation_ 8 (2013) P12014.
* [9] A. Cancio Montbrun, A. Mancilla, J. Maya, B. García, A. Almela, B. Andrada et al., _Photovoltaic monitoring system for Auger Muons and Infill for the Ground Array_ , _Energy Science & Engineering_ 6 (2018) 289.
* [10] Pierre Auger collaboration, _Prototype muon detectors for the AMIGA component of the Pierre Auger Observatory_ , _Journal of Instrumentation_ 11 (2016) P02012 [1605.01625].
* [11] Pierre Auger collaboration, _Muon counting using silicon photomultipliers in the AMIGA detector of the Pierre Auger Observatory_ , _Journal of Instrumentation_ 12 (2017) P03002.
* [12] Pierre Auger collaboration, _Trigger and aperture of the surface detector array of the Pierre Auger Observatory_ , _Nucl. Instrum. Methods Phys. Res. A_ 613 (2010) 29 [1111.6764].
* [13] Pierre Auger collaboration, _The Pierre Auger Observatory upgrade - Preliminary design report_ , 2016.
* [14] O. Wainberg, A. Almela, M. Platino, F. Sanchez, F. Suarez, A. Lucero et al., _Digital electronics for the Pierre Auger Observatory amiga muon counters_ , _Journal of Instrumentation_ 9 (2014) T04003 [1312.7131].
* [15] Pierre Auger collaboration, _New electronics for the surface detectors of the Pierre Auger Observatory_ , _Nucl. Instrum. Methods Phys. Res. A_ 824 (2016) 302.
* [16] Altera Corporation, _Configuration and Remote System Upgrades in Cyclone IV Devices_. https://www.altera.com/en_US/pdfs/literature/hb/cyclone-iv/cyiv-51008.pdf.
* [17] Intel Corporation, _External memory interfaces in Cyclone IV devices_. https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/hb/cyclone-iv/cyiv-51007.pdf.
* [18] Cobham Gaisler, _LEON3 Processor_. https://www.gaisler.com/index.php/products/processors/leon3.
* [19] _Linux_. https://www.kernel.org/.
* [20] _Busybox_. https://busybox.net/.
* [21] ECMA International, _ECMA-404 The JSON data interchange standard. 2nd edition_. http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf, 2017.
* [22] Pierre Auger collaboration, _Direct measurement of the muonic content of extensive air showers between $2\times 10^{17}$ and $2\times 10^{18}$ eV at the Pierre Auger Observatory_, _Eur. Phys. J. C_ 80 (2020) 751.
* [23] S. Argiro, S. L. C. Barroso, J. Gonzalez, L. Nellen, T. C. Paul, T. A. Porter et al., _The offline software framework of the Pierre Auger Observatory_ , _Nucl. Instrum. Meth._ A580 (2007) 1485 [0707.1652].
* [24] D. Ravignani and A. Supanitsky, _A new method for reconstructing the muon lateral distribution with an array of segmented counters_ , _Astroparticle Physics_ 65 (2015) 1.
## The Pierre Auger Collaboration
A. Aab80, P. Abreu72, M. Aglietta52,50, J.M. Albury12, I. Allekotte1, A.
Almela8,11, J. Alvarez-Muñiz79, R. Alves Batista80, G.A. Anastasi61,50, L.
Anchordoqui87, B. Andrada8, S. Andringa72, C. Aramo48, P.R. Araújo Ferreira40,
J. C. Arteaga Velázquez66, H. Asorey8, P. Assis72, G. Avila10, A.M. Badescu75,
A. Bakalova30, A. Balaceanu73, F. Barbato43,44, R.J. Barreira Luz72, K.H.
Becker36, J.A. Bellido12,68, C. Berat34, M.E. Bertaina61,50, X. Bertou1, P.L.
Biermannb, V. Binet6, T. Bister40, J. Biteau35, J. Blazek30, C. Bleve34, M.
Boháčová30, D. Boncioli55,44, C. Bonifazi24, L. Bonneau Arbeletche19, N.
Borodai69, A.M. Botti8, J. Brackd, T. Bretz40, P.G. Brichetto Orchera8, F.L.
Briechle40, P. Buchholz42, A. Bueno78, S. Buitink14, M. Buscemi45, K.S.
Caballero-Mora65, L. Caccianiga57,47, F. Canfora80,82, I. Caracas36, J.M.
Carceller78, R. Caruso56,45, A. Castellina52,50, F. Catalani17, G. Cataldi46,
L. Cazon72, M. Cerda9, J.A. Chinellato20, K. Choi13, J. Chudoba30, L.
Chytka31, R.W. Clay12, A.C. Cobos Cerutti7, R. Colalillo58,48, A. Coleman93,
M.R. Coluccia46, R. Conceição72, A. Condorelli43,44, G. Consolati47,53, F.
Contreras10, F. Convenga54,46, D. Correia dos Santos26, C.E. Covault85, S.
Dasso5,3, K. Daumiller39, B.R. Dawson12, J.A. Day12, R.M. de Almeida26, J. de
Jesús8,39, S.J. de Jong80,82, G. De Mauro80,82, J.R.T. de Mello Neto24,25, I.
De Mitri43,44, J. de Oliveira26, D. de Oliveira Franco20, F. de Palma54,46, V.
de Souza18, E. De Vito54,46, M. del Río10, O. Deligny32, A. Di Matteo50, C.
Dobrigkeit20, J.C. D’Olivo67, R.C. dos Anjos23, M.T. Dova4, J. Ebr30, R.
Engel37,39, I. Epicoco54,46, M. Erdmann40, C.O. Escobara, A. Etchegoyen8,11,
H. Falcke80,83,82, J. Farmer92, G. Farrar90, A.C. Fauth20, N. Fazzinia, F.
Feldbusch38, F. Fenu52,50, B. Fick89, J.M. Figueira8, A. Filipčič77,76, T.
Fodran80, M.M. Freire6, T. Fujii92,e, A. Fuster8,11, C. Galea80, C.
Galelli57,47, B. García7, A.L. Garcia Vegas40, H. Gemmeke38, F. Gesualdi8,39,
A. Gherghel-Lascu73, P.L. Ghia32, U. Giaccari80, M. Giammarchi47, M. Giller70,
J. Glombitza40, F. Gobbi9, F. Gollan8, G. Golup1, M. Gómez Berisso1, P.F.
Gómez Vitale10, J.P. Gongora10, J.M. González1, N. González13, I. Goos1,39, D.
Góra69, A. Gorgi52,50, M. Gottowik36, T.D. Grubb12, F. Guarino58,48, G.P.
Guedes21, E. Guido50,61, S. Hahn39,8, P. Hamal30, M.R. Hampel8, P. Hansen4, D.
Harari1, V.M. Harvey12, A. Haungs39, T. Hebbeker40, D. Heck39, G.C. Hill12, C.
Hojvata, J.R. Hörandel80,82, P. Horvath31, M. Hrabovský31, T. Huege39,14, J.
Hulsman8,39, A. Insolia56,45, P.G. Isar74, P. Janecek30, J.A. Johnsen86, J.
Jurysek30, A. Kääpä36, K.H. Kampert36, B. Keilhauer39, J. Kemp40, H.O.
Klages39, M. Kleifges38, J. Kleinfeller9, M. Köpke37, N. Kunka38, B.L. Lago16,
R.G. Lang18, N. Langner40, M.A. Leigui de Oliveira22, V. Lenok39, A.
Letessier-Selvon33, I. Lhenry-Yvon32, D. Lo Presti56,45, L. Lopes72, R.
López62, L. Lu94, Q. Luce37, J.P. Lundquist76, A. Machado Payeras20, G.
Mancarella54,46, D. Mandat30, B.C. Manning12, J. Manshanden41, P. Mantscha, S.
Marafico32, A.G. Mariazzi4, I.C. Mariş13, G. Marsella59,45, D. Martello54,46,
H. Martinez18, O. Martínez Bravo62, M. Mastrodicasa55,44, H.J. Mathes39, J.
Matthews88, G. Matthiae60,49, E. Mayotte36, P.O. Mazura, G. Medina-Tanco67, D.
Melo8, A. Menshikov38, K.-D. Merenda86, S. Michal31, M.I. Micheletti6, L.
Miramonti57,47, S. Mollerach1, F. Montanet34, C. Morello52,50, M. Mostafá91,
A.L. Müller8, M.A. Muller20, K. Mulrey14, R. Mussa50, M. Muzio90, W.M.
Namasaka36, A. Nasr-Esfahani36, L. Nellen67, M. Niculescu-Oglinzanu73, M.
Niechciol42, D. Nitz89, D. Nosek29, V. Novotny29, L. Nožka31, A Nucita54,46,
L.A. Núñez28, M. Palatka30, J. Pallotta2, P. Papenbreer36, G. Parente79, A.
Parra62, M. Pech30, F. Pedreira79, J. Pȩkala69, R. Pelayo64, J. Peña-
Rodriguez28, E.E. Pereira Martins37,8, J. Perez Armand19, C. Pérez
Bertolli8,39, M. Perlin8,39, L. Perrone54,46, S. Petrera43,44, T. Pierog39, M.
Pimenta72, V. Pirronello56,45, M. Platino8, B. Pont80, M. Pothast82,80, P.
Privitera92, M. Prouza30, A. Puyleart89, S. Querchfeld36, J. Rautenberg36, D.
Ravignani8, M. Reininghaus39,8, J. Ridky30, F. Riehn72, M. Risse42, V.
Rizi55,44, W. Rodrigues de Carvalho19, J. Rodriguez Rojo10, M.J. Roncoroni8,
M. Roth39, E. Roulet1, A.C. Rovero5, P. Ruehl42, S.J. Saffi12, A. Saftoiu73,
F. Salamida55,44, H. Salazar62, G. Salina49, J.D. Sanabria Gomez28, F.
Sánchez8, E.M. Santos19, E. Santos30, F. Sarazin86, R. Sarmento72, C.
Sarmiento-Cano8, R. Sato10, P. Savina54,46,32, C.M. Schäfer39, V. Scherini46,
H. Schieler39, M. Schimassek37,8, M. Schimp36, F. Schlüter39,8, D. Schmidt37,
O. Scholten81,14, P. Schovánek30, F.G. Schröder93,39, S. Schröder36, J.
Schulte40, S.J. Sciutto4, M. Scornavacche8,39, A. Segreto51,45, S. Sehgal36,
R.C. Shellard15, G. Sigl41, G. Silli8,39, O. Sima73,f, R. Šmída92, P.
Sommers91, J.F. Soriano87, J. Souchard34, R. Squartini9, M. Stadelmaier39,8,
D. Stanca73, S. Stanič76, J. Stasielak69, P. Stassi34, A. Streich37,8, M.
Suárez-Durán28, T. Sudholz12, T. Suomijärvi35, A.D. Supanitsky8, J. Šupík31,
Z. Szadkowski71, A. Tapia27, C. Taricco61,50, C. Timmermans82,80, O.
Tkachenko39, P. Tobiska30, C.J. Todero Peixoto17, B. Tomé72, A. Travaini9, P.
Travnicek30, C. Trimarelli55,44, M. Trini76, M. Tueros4, R. Ulrich39, M.
Unger39, L. Vaclavek31, M. Vacula31, J.F. Valdés Galicia67, L. Valore58,48, E.
Varela62, V. Varma K.C.8,39, A. Vásquez-Ramírez28, D. Veberič39, C. Ventura25,
I.D. Vergara Quispe4, V. Verzi49, J. Vicha30, J. Vink84, S. Vorobiov76, H.
Wahlberg4, C. Watanabe24, A.A. Watsonc, M. Weber38, A. Weindl39, L. Wiencke86,
H. Wilczyński69, M. Wirtz40, D. Wittkowski36, B. Wundheiler8, A. Yushkov30, O.
Zapparrata13, E. Zas79, D. Zavrtanik76,77, M. Zavrtanik77,76, L. Zehrer76, A.
Zepeda63 and N. del Castillo8, G. de Innocenti8, L. Ferreyro8, S. Garavano8,
D. Gorbeña8, N. Leal8,9, G. Ríos8,9, M. Paramidani8,11, G. Pierri8,11, C.
Reyes8, A. Riello8, J.M. Salum8,11, A.P.J. Sedoski Croce8, D. Silva8, C.
Varela8
1
Centro Atómico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET), San
Carlos de Bariloche, Argentina
2
Centro de Investigaciones en Láseres y Aplicaciones, CITEDEF and CONICET,
Villa Martelli, Argentina
3
Departamento de Física and Departamento de Ciencias de la Atmósfera y los
Océanos, FCEyN, Universidad de Buenos Aires and CONICET, Buenos Aires,
Argentina
4
IFLP, Universidad Nacional de La Plata and CONICET, La Plata, Argentina
5
Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA), Buenos
Aires, Argentina
6
Instituto de Física de Rosario (IFIR) – CONICET/U.N.R. and Facultad de
Ciencias Bioquímicas y Farmacéuticas U.N.R., Rosario, Argentina
7
Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET,
UNSAM), and Universidad Tecnológica Nacional – Facultad Regional Mendoza
(CONICET/CNEA), Mendoza, Argentina
8
Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET,
UNSAM), Buenos Aires, Argentina
9
Observatorio Pierre Auger, Malargüe, Argentina
10
Observatorio Pierre Auger and Comisión Nacional de Energía Atómica, Malargüe,
Argentina
11
Universidad Tecnológica Nacional – Facultad Regional Buenos Aires, Buenos
Aires, Argentina
12
University of Adelaide, Adelaide, S.A., Australia
13
Université Libre de Bruxelles (ULB), Brussels, Belgium
14
Vrije Universiteit Brussels, Brussels, Belgium
15
Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro, RJ, Brazil
16
Centro Federal de Educação Tecnológica Celso Suckow da Fonseca, Nova Friburgo,
Brazil
17
Universidade de São Paulo, Escola de Engenharia de Lorena, Lorena, SP, Brazil
18
Universidade de São Paulo, Instituto de Física de São Carlos, São Carlos, SP,
Brazil
19
Universidade de São Paulo, Instituto de Física, São Paulo, SP, Brazil
20
Universidade Estadual de Campinas, IFGW, Campinas, SP, Brazil
21
Universidade Estadual de Feira de Santana, Feira de Santana, Brazil
22
Universidade Federal do ABC, Santo André, SP, Brazil
23
Universidade Federal do Paraná, Setor Palotina, Palotina, Brazil
24
Universidade Federal do Rio de Janeiro, Instituto de Física, Rio de Janeiro,
RJ, Brazil
25
Universidade Federal do Rio de Janeiro (UFRJ), Observatório do Valongo, Rio de
Janeiro, RJ, Brazil
26
Universidade Federal Fluminense, EEIMVR, Volta Redonda, RJ, Brazil
27
Universidad de Medellín, Medellín, Colombia
28
Universidad Industrial de Santander, Bucaramanga, Colombia
29
Charles University, Faculty of Mathematics and Physics, Institute of Particle
and Nuclear Physics, Prague, Czech Republic
30
Institute of Physics of the Czech Academy of Sciences, Prague, Czech Republic
31
Palacky University, RCPTM, Olomouc, Czech Republic
32
CNRS/IN2P3, IJCLab, Université Paris-Saclay, Orsay, France
33
Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE), Sorbonne
Université, Université de Paris, CNRS-IN2P3, Paris, France
34
Univ. Grenoble Alpes, CNRS, Grenoble Institute of Engineering Univ. Grenoble
Alpes, LPSC-IN2P3, 38000 Grenoble, France
35
Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France
36
Bergische Universität Wuppertal, Department of Physics, Wuppertal, Germany
37
Karlsruhe Institute of Technology (KIT), Institute for Experimental Particle
Physics, Karlsruhe, Germany
38
Karlsruhe Institute of Technology (KIT), Institut für Prozessdatenverarbeitung
und Elektronik, Karlsruhe, Germany
39
Karlsruhe Institute of Technology (KIT), Institute for Astroparticle Physics,
Karlsruhe, Germany
40
RWTH Aachen University, III. Physikalisches Institut A, Aachen, Germany
41
Universität Hamburg, II. Institut für Theoretische Physik, Hamburg, Germany
42
Universität Siegen, Department Physik – Experimentelle Teilchenphysik, Siegen,
Germany
43
Gran Sasso Science Institute, L’Aquila, Italy
44
INFN Laboratori Nazionali del Gran Sasso, Assergi (L’Aquila), Italy
45
INFN, Sezione di Catania, Catania, Italy
46
INFN, Sezione di Lecce, Lecce, Italy
47
INFN, Sezione di Milano, Milano, Italy
48
INFN, Sezione di Napoli, Napoli, Italy
49
INFN, Sezione di Roma “Tor Vergata”, Roma, Italy
50
INFN, Sezione di Torino, Torino, Italy
51
Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo (INAF), Palermo,
Italy
52
Osservatorio Astrofisico di Torino (INAF), Torino, Italy
53
Politecnico di Milano, Dipartimento di Scienze e Tecnologie Aerospaziali ,
Milano, Italy
54
Università del Salento, Dipartimento di Matematica e Fisica “E. De Giorgi”,
Lecce, Italy
55
Università dell’Aquila, Dipartimento di Scienze Fisiche e Chimiche, L’Aquila,
Italy
56
Università di Catania, Dipartimento di Fisica e Astronomia, Catania, Italy
57
Università di Milano, Dipartimento di Fisica, Milano, Italy
58
Università di Napoli “Federico II”, Dipartimento di Fisica “Ettore Pancini”,
Napoli, Italy
59
Università di Palermo, Dipartimento di Fisica e Chimica ”E. Segrè”, Palermo,
Italy
60
Università di Roma “Tor Vergata”, Dipartimento di Fisica, Roma, Italy
61
Università Torino, Dipartimento di Fisica, Torino, Italy
62
Benemérita Universidad Autónoma de Puebla, Puebla, México
63
Centro de Investigación y de Estudios Avanzados del IPN (CINVESTAV), México,
D.F., México
64
Unidad Profesional Interdisciplinaria en Ingeniería y Tecnologías Avanzadas
del Instituto Politécnico Nacional (UPIITA-IPN), México, D.F., México
65
Universidad Autónoma de Chiapas, Tuxtla Gutiérrez, Chiapas, México
66
Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Michoacán, México
67
Universidad Nacional Autónoma de México, México, D.F., México
68
Universidad Nacional de San Agustin de Arequipa, Facultad de Ciencias
Naturales y Formales, Arequipa, Peru
69
Institute of Nuclear Physics PAN, Krakow, Poland
70
University of Łódź, Faculty of Astrophysics, Łódź, Poland
71
University of Łódź, Faculty of High-Energy Astrophysics, Łódź, Poland
72
Laboratório de Instrumentação e Física Experimental de Partículas – LIP and
Instituto Superior Técnico – IST, Universidade de Lisboa – UL, Lisboa,
Portugal
73
“Horia Hulubei” National Institute for Physics and Nuclear Engineering,
Bucharest-Magurele, Romania
74
Institute of Space Science, Bucharest-Magurele, Romania
75
University Politehnica of Bucharest, Bucharest, Romania
76
Center for Astrophysics and Cosmology (CAC), University of Nova Gorica, Nova
Gorica, Slovenia
77
Experimental Particle Physics Department, J. Stefan Institute, Ljubljana,
Slovenia
78
Universidad de Granada and C.A.F.P.E., Granada, Spain
79
Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de Santiago
de Compostela, Santiago de Compostela, Spain
80
IMAPP, Radboud University Nijmegen, Nijmegen, The Netherlands
81
KVI – Center for Advanced Radiation Technology, University of Groningen,
Groningen, The Netherlands
82
Nationaal Instituut voor Kernfysica en Hoge Energie Fysica (NIKHEF), Science
Park, Amsterdam, The Netherlands
83
Stichting Astronomisch Onderzoek in Nederland (ASTRON), Dwingeloo, The
Netherlands
84
Universiteit van Amsterdam, Faculty of Science, Amsterdam, The Netherlands
85
Case Western Reserve University, Cleveland, OH, USA
86
Colorado School of Mines, Golden, CO, USA
87
Department of Physics and Astronomy, Lehman College, City University of New
York, Bronx, NY, USA
88
Louisiana State University, Baton Rouge, LA, USA
89
Michigan Technological University, Houghton, MI, USA
90
New York University, New York, NY, USA
91
Pennsylvania State University, University Park, PA, USA
92
University of Chicago, Enrico Fermi Institute, Chicago, IL, USA
93
University of Delaware, Department of Physics and Astronomy, Bartol Research
Institute, Newark, DE, USA
94
University of Wisconsin-Madison, Department of Physics and WIPAC, Madison, WI,
USA
—–
a
Fermi National Accelerator Laboratory, Fermilab, Batavia, IL, USA
b
Max-Planck-Institut für Radioastronomie, Bonn, Germany
c
School of Physics and Astronomy, University of Leeds, Leeds, United Kingdom
d
Colorado State University, Fort Collins, CO, USA
e
now at Hakubi Center for Advanced Research and Graduate School of Science,
Kyoto University, Kyoto, Japan
f
also at University of Bucharest, Physics Department, Bucharest, Romania
# Faster Kernel Interpolation for Gaussian Processes
Mohit Yadav Daniel Sheldon Cameron Musco
University of Massachusetts Amherst
{ymohit, sheldon<EMAIL_ADDRESS>
###### Abstract
A key challenge in scaling Gaussian Process (GP) regression to massive
datasets is that exact inference requires computation with a dense $n\times n$
kernel matrix, where $n$ is the number of data points. Significant work
focuses on approximating the kernel matrix via interpolation using a smaller
set of $m$ “inducing points”. _Structured kernel interpolation_ (SKI) is among
the most scalable methods: by placing inducing points on a dense grid and
using structured matrix algebra, SKI achieves per-iteration time of
$\mathcal{O}(n+m\log m)$ for approximate inference. This linear scaling in $n$
enables inference for very large data sets; however the cost is _per-
iteration_ , which remains a limitation for extremely large $n$. We show that
the SKI per-iteration time can be reduced to $\mathcal{O}(m\log m)$ after a
single $\mathcal{O}(n)$ time precomputation step by reframing SKI as solving a
natural Bayesian linear regression problem with a fixed set of $m$ compact
basis functions. With per-iteration complexity _independent of the dataset
size $n$_ for a fixed grid, our method scales to truly massive data sets. We
demonstrate speedups in practice for a wide range of $m$ and $n$ and apply the
method to GP inference on a three-dimensional weather radar dataset with over
100 million points. Our code is available at https://github.com/ymohit/fkigp.
## 1 Introduction
GPs are a widely used and principled class of methods for predictive modeling.
They have a long history in spatial statistics and geostatistics for spatio-
temporal interpolation problems [16, 4]. They were later adopted in ML as
general-purpose predictive models [26], motivated in part by connections to
neural networks [20, 32]. More recently, similar connections have been
identified between GPs and deep networks [15, 17, 8, 21, 3]. GPs can be used
for general-purpose Bayesian regression [35, 25], classification [33], and
many other applications [26, 36].
A well-known limitation of GPs is running-time scalability. The basic
inference and learning tasks require linear algebraic operations (e.g., matrix
inversion, linear solves, computing log-determinants) with an $n\times n$
kernel matrix, where $n$ is the number of data points. Exact computations
require $\Theta(n^{3})$ time — e.g., using the Cholesky decomposition — which
severely limits applicability to large problems. Hence, a large amount of work
has been devoted to improving scalability of GP inference and learning through
approximation. Most of this work is based on the idea of forming an
approximate kernel matrix that includes low-rank structure, e.g., through the
use of inducing points or random features [34, 28, 23, 24, 13]. With rank-$m$
structure, the running time of key tasks can be reduced to
$\Theta(nm^{2}+m^{3})$ [23]. However, this scaling with $n$ and $m$ continues
to limit the size of input data sets that can be handled, the approximation
accuracy, or both.
Structured kernel interpolation (SKI) is a promising approach to further
improve the scalability of GP methods on relatively low-dimensional data [37].
In SKI, $m$ inducing points are placed on a regular grid, which, when combined
with a stationary kernel covariance function, imposes extra structure in the
approximate kernel matrix. Kernel matrix operations on the grid (i.e.,
multiplying with a vector) require only $\mathcal{O}(m\log m)$ time, and
interpolating from the grid requires $\mathcal{O}(n)$ time. By combining
structured kernel operations with iterative methods for numerical linear
algebra, the running time to solve core GP tasks becomes
$\mathcal{O}(k(n+m\log m))$ where $k$ is the number of iterations, and is
usually much less than $m$ or $n$. The modest per-iteration runtime of
$\mathcal{O}(n+m\log m)$ allows the modeler to select a very large number of
inducing points.
We show how to further improve the scalability of SKI with respect to $n$ and
scale to truly massive data sets. We first show that the SKI approximation
corresponds to exact inference in a Bayesian regression problem with a fixed
set of $m$ compact spatial basis functions. This lets us reduce the per-
iteration runtime to $\mathcal{O}(m\log m)$ — completely _independent_ of $n$
— after a one-time preprocessing cost of $\mathcal{O}(n)$ to compute
sufficient statistics of the regression problem. However, naive application of
these ideas introduces undesirable trade-offs: while the per-iteration cost is
better, we must solve linear systems that are computationally less desirable
than the original ones — e.g., they are asymmetric instead of symmetric, or
have worse condition number. To avoid these trade-offs, we contribute novel
“factorized" conjugate gradient and Lanczos methods, which allow us to solve
the _original_ linear systems in $\mathcal{O}(m\log m)$ time per-iteration
instead of $\mathcal{O}(n+m\log m)$.
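The role of the iterative solver can be sketched with a generic conjugate
gradient loop that accesses the system matrix only through matrix-vector
products; in SKI that product would be the structured
$v\mapsto WK_{G}W^{\top}v+\sigma^{2}v$ costing $\mathcal{O}(n+m\log m)$, which
the factorized variants reduce to $\mathcal{O}(m\log m)$. The snippet below is
the standard textbook CG, not the factorized method contributed here, and a
small dense low-rank-plus-noise matrix stands in for the SKI product.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, given only a
    function computing v -> A v (the per-iteration cost is one matvec)."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)  # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate update of direction
        rs = rs_new
    return x

# toy system: low-rank kernel plus noise term, A = G G^T + sigma^2 I
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 5))
A = G @ G.T + 0.1 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(lambda v: A @ v, b)
```

Because CG only needs `matvec`, the same loop runs unchanged whether the
product is a dense multiply, as here, or a structured SKI operation.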
Our techniques accelerate SKI inference and learning across a wide range of
settings. They apply to each of the main sub-tasks of GP inference: computing
the posterior mean, posterior covariance, and log-likelihood. We demonstrate
runtime improvements across different data sets and grid sizes, and the
ability to scale GP inference to datasets well outside the typical range that
can be handled by SKI or other approaches, such as inducing point methods. For
example, we demonstrate the ability to perform GP inference on a three-
dimensional weather radar dataset with $120$ million data points, using a grid
of $128,000$ inducing points.
### 1.1 Related Work
Outside of SKI and its variants [38, 7], a variety of scalable GP methods have
been proposed. Most notable are inducing point methods (sparse GP
approximations), such as the Nyström, SoRs, FITC, and SMGP methods [34, 28,
23, 31]. These methods require either $\Omega(nm^{2})$ time for direct solves,
or $\Omega(nm)$ per-iteration cost if using iterative methods. Our approach
significantly improves the dependence on both $n$ and $m$ to just
$\mathcal{O}(n)$ preprocessing time and $\mathcal{O}(m\log m)$ per-iteration
cost.
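For contrast with SKI, a minimal Nyström-style rank-$m$ approximation (one of
the inducing-point families above) can be sketched as follows; the RBF kernel
and the random inducing locations are illustrative choices, and forming and
applying the $n\times m$ cross-kernel is what drives the $\Omega(nm^{2})$ cost
of direct solves.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (500, 2))   # n = 500 data points
Z = rng.uniform(0, 1, (25, 2))    # m = 25 inducing points

K_nm = rbf(X, Z)                       # n x m cross kernel
K_mm = rbf(Z, Z) + 1e-8 * np.eye(25)   # small jitter for stability
# rank-m Nystrom approximation K ~= K_nm K_mm^{-1} K_nm^T
K_approx = K_nm @ np.linalg.solve(K_mm, K_nm.T)
```

SKI replaces the arbitrary inducing locations `Z` with a regular grid and the
cross-kernel with sparse interpolation weights, which is what admits the
structured fast matvecs.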
While the above methods generally do not leverage structured matrix methods
like SKI, especially in higher dimensions, they may achieve comparable
accuracy with a smaller number of inducing points. Directly comparing SKI to
popular inducing point methods is beyond the scope of this work, however prior
work shows significant performance gains on large, relatively low-dimensional
datasets [37, 5]. We note that very recent work [18] seeks to push the limits
of inducing point methods via careful systems and hardware level
implementations. Like our work, they scale to datasets with over 100 million
points.
Many scalable GP methods have also been proposed in the geostatistics
literature – see [11] for a survey. These include structured methods when
observations lie on a grid [10, 29]. Most closely related to our work is
_fixed rank kriging_ (FRK) [4], which can be viewed as a generalization of our
Bayesian regression interpretation of SKI. Like inducing point methods, FRK
using $m$ basis functions requires $\Omega(nm^{2})$ time. We show that SKI can
be viewed as a special case of FRK with a fixed kernel function and a
particular set of basis functions arising from interpolation. These two
choices allow our faster runtime, through the application of structured matrix
techniques and factorized iterative methods, which in turn significantly
increases the number of basis functions that can be used.
## 2 Background
Notation: Throughout we use bold letters to represent vectors and capitals to
represent matrices. $I\in\mathbb{R}^{n\times n}$ represents the identity
matrix, with dimension apparent from context. For $M\in\mathbb{R}^{p\times
r}$, $mv(M)$ denotes the time required to multiply $M$ by any vector in
$\mathbb{R}^{r}$.
### 2.1 Gaussian Process Regression
In GP regression [26], response values are modeled as noisy measurements of a
random function $f$ on input data points. Let $\mathcal{D}$ be a set of $n$
points $\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\in\mathbb{R}^{d}$ with
corresponding response values $y_{1},\ldots,y_{n}\in\mathbb{R}$. Let
$\mathbf{y}\in\mathbb{R}^{n}$ have its $i^{th}$ entry equal to $y_{i}$ and
$X\in\mathbb{R}^{n\times d}$ have its $i^{th}$ row equal to $\mathbf{x}_{i}$.
A Gaussian process with kernel (covariance) function
$k(\mathbf{x},\mathbf{x}^{\prime}):\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}$
is a random function $f$ such that, for any
$\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\in\mathbb{R}^{d}$:
$\displaystyle\mathbf{f}=[f(\mathbf{x}_{1}),...,f(\mathbf{x}_{n})]\sim\mathcal{N}(0,K_{X}),$
(1)
where
$K_{X}=[k(\mathbf{x}_{i},\mathbf{x}_{j})]_{i,j=1}^{n}\in\mathbb{R}^{n\times
n}$ is the kernel (covariance) matrix on the data points $X$. We assume
without loss of generality that $f$ is zero-mean. The responses $\mathbf{y}$
are modeled as measurements of $\mathbf{f}$ with i.i.d. Gaussian noise,
i.e., $\mathbf{y}\sim\mathcal{N}(\mathbf{f},\sigma^{2}I)$.
The posterior distribution of $f$ given the data $\mathcal{D}=(X,\mathbf{y})$
is itself a Gaussian process. The standard GP inference tasks are to compute
the posterior mean and covariance and the log-likelihood, given in Fact 1.
###### Fact 1 (Exact GP Inference).
The posterior mean, covariance, and log likelihood for Gaussian process
regression are given by: $\displaystyle\textbf{mean:
}\mu_{f|\mathcal{D}}(\mathbf{x})=\mathbf{k}_{\mathbf{x}}^{T}\mathbf{z}$
$\displaystyle\textbf{covariance:
}k_{f|\mathcal{D}}(\mathbf{x},\mathbf{x}^{\prime})=k(\mathbf{x},\mathbf{x}^{\prime})-\mathbf{k}_{\mathbf{x}}^{T}({K}_{X}+\sigma^{2}I)^{-1}\mathbf{k}_{\mathbf{x}^{\prime}}$
$\displaystyle\textbf{log likelihood:
}\log\Pr(\mathbf{y})=-\frac{1}{2}[\log\det({K}_{X}+\sigma^{2}I)+\mathbf{y}^{T}\mathbf{z}+n\log(2\pi)]$
where $\mathbf{k}_{\mathbf{x}}\in\mathbb{R}^{n}$ has $i^{th}$ entry
$k(\mathbf{x},\mathbf{x}_{i})$ and
$\mathbf{z}=({K}_{X}+\sigma^{2}I)^{-1}\mathbf{y}$.
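As a concrete reference point, the equations of Fact 1 translate directly into a few lines of NumPy. The sketch below is purely illustrative (it assumes a squared-exponential kernel with unit lengthscale; it is not the implementation used in our experiments):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def exact_gp_inference(X, y, X_test, sigma=0.1):
    """Posterior mean at X_test and log marginal likelihood, as in Fact 1."""
    n = len(y)
    A = rbf_kernel(X, X) + sigma**2 * np.eye(n)   # K_X + sigma^2 I
    L = np.linalg.cholesky(A)                     # the Theta(n^3) bottleneck
    z = np.linalg.solve(A, y)                     # z = (K_X + sigma^2 I)^{-1} y
    mean = rbf_kernel(X_test, X) @ z              # k_x^T z per test point
    logdet = 2.0 * np.log(np.diag(L)).sum()       # log det via Cholesky factor
    ll = -0.5 * (logdet + y @ z + n * np.log(2 * np.pi))
    return mean, ll
```

The Cholesky factorization is exactly the cubic-time step that the iterative methods discussed next avoid.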
Evaluating the posterior mean $\mu_{f|\mathcal{D}}(\mathbf{x})$, covariance
$k_{f|\mathcal{D}}(\mathbf{x},\mathbf{x}^{\prime})$, and log likelihood
require matrix-vector multiplication (MVM) with $({K}_{X}+\sigma^{2}I)^{-1}$,
which is the major run-time bottleneck for GP inference. Computing this
inverse directly requires $\Theta(n^{3})$ time. The log likelihood requires a
further computation of $\log\det({K}_{X}+\sigma^{2}I)$, which again naively
takes $\Theta(n^{3})$ time using the Cholesky decomposition. Computing its
gradient, which is necessary e.g., in hyper-parameter tuning with gradient
based methods, requires a further trace computation involving
$({K}_{X}+\sigma^{2}I)^{-1}$.
#### Inference Via Iterative Methods.
One way to accelerate GP inference is to avoid full matrix factorization like
Cholesky decomposition and instead use iterative methods. Gardner et al. [6]
detail how to compute or approximate each term in Fact 1 using a modified
version of the conjugate gradient (CG) algorithm.
For example, the vector
$\mathbf{z}=({K}_{X}+\sigma^{2}I)^{-1}\mathbf{y}\in\mathbb{R}^{n}$ is computed
by using CG to solve $({K}_{X}+\sigma^{2}I)\mathbf{z}=\mathbf{y}$, which
yields the posterior mean as
$\mu_{f|\mathcal{D}}(\mathbf{x})=\mathbf{k}_{\mathbf{x}}^{T}\mathbf{z}$, and
is also used in the calculation of the log-likelihood.
We will use two iterative algorithms in our work: (1) conjugate gradient to
solve linear systems $A\mathbf{v}=\mathbf{b}$ for symmetric positive definite
(SPD) $A$, (2) the Lanczos algorithm to (sometimes partially) _tridiagonalize_
an SPD matrix $A\in\mathbb{R}^{p\times p}$ as $A=QTQ^{T}$ where
$Q\in\mathbb{R}^{p\times p}$ is orthonormal and $T\in\mathbb{R}^{p\times p}$
is tridiagonal. Tridiagonalization is used to approximate $\log\det(\cdot)$
terms [5, 30] and to compute a low-rank factorization for approximate
posterior covariance evaluation [22].
Each iteration of CG or Lanczos for GP inference requires matrix-vector
multiplication with $A={K}_{X}+\sigma^{2}I$, which in general takes
$mv(K_{X})+n=\mathcal{O}(n^{2})$ time, but may be faster if $K_{X}$ has
special structure. The number of iterations required to reach a given error
tolerance depends on the eigenspectrum of $A$ and is usually much less than $n$,
often on the order of 50 to 100. It can be even lower with preconditioning
[6]. Recent work thus often informally considers the number of iterations to
be a constant independent of $n$.
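For instance, the posterior mean solve can be carried out with an off-the-shelf CG routine that accesses $K_{X}+\sigma^{2}I$ only through matrix-vector products. The following toy sketch uses a unit-lengthscale squared-exponential kernel on synthetic one-dimensional data; all choices here are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, sigma = 200, 0.1
X = np.sort(rng.uniform(0, 4, n))
y = np.sin(X) + sigma * rng.standard_normal(n)

# expose only matrix-vector products with K_X + sigma^2 I
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
A = LinearOperator((n, n), matvec=lambda v: K @ v + sigma**2 * v,
                   dtype=np.float64)

z, info = cg(A, y)                 # solve (K_X + sigma^2 I) z = y iteratively
x_test = np.linspace(0, 4, 50)
k_test = np.exp(-0.5 * (x_test[:, None] - X[None, :]) ** 2)
mean = k_test @ z                  # posterior mean k_x^T z at test points
```

Each CG iteration costs one multiplication with $K$, which is the $\mathcal{O}(n^{2})$ step that SKI accelerates.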
### 2.2 Structured Kernel Interpolation
For large $n$, the $\Theta(n^{2})$ per-iteration cost of iterative methods is
still prohibitively expensive. Structured kernel interpolation (SKI) is a
method to accelerate MVMs by using an approximate kernel matrix with a special
structure [37]. SKI approximates the kernel
$k(\mathbf{x},\mathbf{x}^{\prime})$ as:
$\tilde{k}(\mathbf{x},\mathbf{x^{\prime}})=\mathbf{w}_{\mathbf{x}}^{T}K_{G}\mathbf{w}_{\mathbf{x}^{\prime}}$
(2)
where $K_{G}\in\mathbb{R}^{m\times m}$ is the kernel matrix for the set of $m$
points on a dense $d$-dimensional grid, and the vector
$\mathbf{w}_{\mathbf{x}}\in\mathbb{R}^{m}$ contains interpolation weights to
interpolate from grid points to arbitrary $\mathbf{x}\in\mathbb{R}^{d}$. That is,
the kernel inner product between any two points is approximated by
interpolating kernel inner products among grid points.
SKI can use any interpolation strategy (e.g., linear or cubic); typically, the
strategy is local, so that $\mathbf{w}_{\mathbf{x}}$ has only a constant
number of non-zero entries corresponding to the grid points closest to
$\mathbf{x}$. E.g., for linear interpolation, $\mathbf{w}_{\mathbf{x}}$ has $2^{d}$
non-zeros. Let $W\in\mathbb{R}^{n\times m}$ have $i^{th}$ row equal to
$\mathbf{w}_{\mathbf{x}_{i}}$. SKI approximates the true kernel matrix $K_{X}$
as $\tilde{K}_{X}=WK_{G}W^{T}$. Plugging this approximation directly into the
GP inference equations of Fact 1 yields the SKI inference scheme in Def. 1.
###### Definition 1 (SKI Inference).
The SKI approximate inference equations are given by:
$\displaystyle\textbf{mean:
}\mu_{f|\mathcal{D}}(\mathbf{x})\approx\mathbf{w}_{\mathbf{x}}^{T}K_{G}W^{T}\mathbf{\tilde{z}}$
$\displaystyle\textbf{covariance:
}k_{f|\mathcal{D}}(\mathbf{x},\mathbf{x}^{\prime})\approx\mathbf{w}_{\mathbf{x}}^{T}K_{G}\mathbf{w}_{\mathbf{x}^{\prime}}-\mathbf{\tilde{k}}_{\mathbf{x}}^{T}(\tilde{K}_{X}+\sigma^{2}I)^{-1}\mathbf{\tilde{k}}_{\mathbf{x}}$
$\displaystyle\textbf{log likelihood:
}\log\Pr(\mathbf{y})\approx-\frac{1}{2}[\log\det(\tilde{K}_{X}+\sigma^{2}I)+\mathbf{y}^{T}\mathbf{\tilde{z}}+n\log(2\pi)]$
where $\tilde{K}_{X}=WK_{G}W^{T}$ and
$\mathbf{\tilde{z}}=\left(\tilde{K}_{X}+\sigma^{2}I\right)^{-1}\mathbf{y}$ and
$\mathbf{\tilde{k}}_{\mathbf{x}}=WK_{G}\mathbf{w}_{\mathbf{x}}$.
#### SKI Running Time and Memory.
The SKI method admits efficient approximate inference due to: (1) the sparsity
of the interpolation weight matrix $W$, and (2) the structure of the on-grid
kernel matrix $K_{G}$. The cost per iteration of CG or Lanczos is
$\mathcal{O}(mv(\tilde{K}_{X}+\sigma^{2}I))=\mathcal{O}(mv(W)+mv(K_{G})+n)$.
This runtime is $\mathcal{O}(n+m\log m)$ per iteration assuming: (1) $W$ has
$\mathcal{O}(1)$ entries per row and so $mv(W)=\mathcal{O}(n)$, and (2)
$K_{G}$ is multilevel Toeplitz, so $mv(K_{G})=\mathcal{O}(m\log m)$ via fast
Fourier transform [14]. The matrix $K_{G}$ is multilevel Toeplitz (also known
as block Toeplitz with Toeplitz blocks or BTTB) whenever $G$ is an equally-
spaced grid and $k(\cdot,\cdot)$ is stationary. The memory footprint is
roughly $\mathtt{nnz}(W)+m+n=\mathcal{O}(n+m)$ to store $W$, $K_{G}$, and
$\mathbf{y}$, respectively, where $\mathtt{nnz}(A)$ denotes the number of non
zeros of $A$.
Overall, the SKI per-iteration runtime of $\mathcal{O}(n+m\log m)$
significantly improves on the naive $\mathcal{O}(n^{2})$ time required to
apply the true kernel matrix $K_{X}$. However, when $n$ is very large, the
$\mathcal{O}(n)$ term (for both runtime and memory) can become a bottleneck.
Our main contribution is to remove this cost, giving methods with
$\mathcal{O}(m\log m)$ per-iteration runtime with $\mathcal{O}(m)$ memory
after $\mathcal{O}(n)$ preprocessing.
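To make this structure concrete, the following one-dimensional sketch combines a sparse linear-interpolation $W$ with an FFT-based Toeplitz multiply for $K_{G}$, giving the $\mathcal{O}(n+m\log m)$ SKI matrix-vector product. The kernel, lengthscale, and grid here are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np
from scipy.sparse import csr_matrix

def toeplitz_mv(c, v):
    """Multiply the symmetric Toeplitz matrix with first column c by v
    in O(m log m) via circulant embedding and the FFT."""
    m = len(c)
    circ = np.concatenate([c, c[-2:0:-1]])        # circulant of size 2m - 2
    pad = np.concatenate([v, np.zeros(m - 2)])
    return np.fft.ifft(np.fft.fft(circ) * np.fft.fft(pad)).real[:m]

m, n, sigma = 64, 500, 0.1
grid = np.linspace(0, 1, m)
c = np.exp(-0.5 * ((grid - grid[0]) / 0.1) ** 2)  # first column of K_G

# sparse linear-interpolation weights W for n off-grid points
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, n)
h = grid[1] - grid[0]
j = np.minimum((x / h).astype(int), m - 2)        # left neighbor index
t = (x - grid[j]) / h                             # fractional offset
W = csr_matrix((np.concatenate([1 - t, t]),
                (np.tile(np.arange(n), 2), np.concatenate([j, j + 1]))),
               shape=(n, m))

def ski_mv(v):
    """(W K_G W^T + sigma^2 I) v in O(n + m log m) time."""
    return W @ toeplitz_mv(c, W.T @ v) + sigma**2 * v
```

The two ingredients are visible directly: `W` has two nonzeros per row, and `toeplitz_mv` replaces the dense $m\times m$ multiply with FFTs.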
## 3 SKI as Bayesian Linear Regression with Fixed Basis Functions
Our first contribution is to reformulate SKI as _exact inference_ in a
Bayesian linear regression problem with compact basis functions associated
with grid points. This lets us use standard conjugate update formulas for
Bayesian linear regression to reduce SKI’s per-iteration runtime to
$\mathcal{O}(m\log m)$, with $\mathcal{O}(n)$ preprocessing.
Figure 1: GSGP illustration. Bottom: basis functions for cubic interpolation
are compact and centered at grid points. Top: a GSGP (thick black curve) is
formed as the sum of scaled basis functions (lighter colored curves) with
random weights at grid points (vertical dashed lines) drawn from the original
GP.
###### Definition 2 (Grid-Structured Gaussian Process; GSGP).
Let $G=\\{\mathbf{g}_{1},\ldots,\mathbf{g}_{m}\\}\subseteq\mathbb{R}^{d}$ be a
set of grid points and
$k(\mathbf{x},\mathbf{x}^{\prime}):\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}$
be a positive-definite kernel function. A _grid-structured Gaussian process_
$f$ is defined by the following generative process:
$\displaystyle\boldsymbol{\theta}$ $\displaystyle\sim\mathcal{N}(0,K_{G}),$
$\displaystyle f(\mathbf{x})$
$\displaystyle=\mathbf{w}_{\mathbf{x}}^{T}\boldsymbol{\theta},\quad\forall\mathbf{x}\in\mathbb{R}^{d}.$
where $\mathbf{w}_{\mathbf{x}}\in\mathbb{R}^{m}$ is a vector of interpolation weights
from $\mathbf{x}$ to the grid $G$.
Notice that a GSGP is a classical Bayesian linear regression model. In
principle, $\mathbf{x}\mapsto\mathbf{w}_{\mathbf{x}}$ can be any mapping from $\mathbb{R}^{d}$ to
$\mathbb{R}^{m}$. However, for computational efficiency and to match the
notion of interpolation, the vector $\mathbf{w}_{\mathbf{x}}$ will be taken to
be the set of weights used by any fixed scheme to interpolate values from grid
points to arbitrary locations $\mathbf{x}\in\mathbb{R}^{d}$.
The generative process is illustrated in Figure 1 for $d=1$ and cubic
interpolation on an integer grid [12]. The basis functions
$\mathbf{w}^{j}_{\mathbf{x}}$ for each grid point $j$ and for all $\mathbf{x}$
are shown in the bottom panel. The $j$th basis function is centered at $j$ and
supported on $[j-2,j+2)$. For any fixed $\mathbf{x}$, the vector
$\mathbf{w}_{\mathbf{x}}$ has at most four nonzero entries corresponding to
the four nearest grid points.
The GSGP $f$ is generated by first drawing random weights
$\boldsymbol{\theta}$ as the values of the _original_ GP — with covariance
function $k(\cdot,\cdot)$ — at grid points. The generated function
$f(\mathbf{x})$ can be interpreted in two ways: (1) a sum of scaled basis
functions
$f(\mathbf{x})=\sum_{j}\boldsymbol{\theta}_{j}\mathbf{w}^{j}_{\mathbf{x}}$,
(2) the result of interpolating the grid values $\boldsymbol{\theta}$ to
$\mathbb{R}^{d}$ using the interpolation scheme.
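The generative process is simple to simulate. The sketch below draws a one-dimensional GSGP sample using linear interpolation weights; the grid, kernel, and lengthscale are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 50
grid = np.linspace(0, 1, m)
K_G = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.1) ** 2)

# theta: values of the original GP at the grid points (small jitter for PD)
theta = np.linalg.cholesky(K_G + 1e-8 * np.eye(m)) @ rng.standard_normal(m)

def w(x):
    """Linear-interpolation weight vector w_x (at most two nonzeros)."""
    h = grid[1] - grid[0]
    j = min(int(x / h), m - 2)
    t = (x - grid[j]) / h
    out = np.zeros(m)
    out[j], out[j + 1] = 1 - t, t
    return out

def f(x):
    """The GSGP sample: f(x) = w_x^T theta, defined for all x in [0, 1]."""
    return w(x) @ theta
```

Because linear interpolation is exact at the nodes, the sample $f$ passes through $\boldsymbol{\theta}$ at every grid point.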
It is straightforward to verify the following (full derivations and proofs
appear in Appendix A):
###### Claim 1.
A GSGP with grid weights $\boldsymbol{\theta}$ drawn from a GP with covariance
function $k(\mathbf{x},\mathbf{x}^{\prime})$ is itself a GP with covariance
function
$\tilde{k}(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{w}_{\mathbf{x}}^{T}K_{G}\mathbf{w}_{\mathbf{x}^{\prime}}$.
In other words: the _exact_ covariance function of the GSGP is the same as the
SKI approximation in Eq. (2). Now, suppose noisy observations
$\mathbf{y}\sim\mathcal{N}(\mathbf{f}_{X},\sigma^{2}I)$ are made of the GSGP at
input locations $X$. It is well known that the posterior distribution of $f$
is a Gaussian process, with mean, covariance and log likelihood given in Fact
2 [19, 2]. From Claim 1 it follows that:
###### Theorem 1 (Equivalence of GSGP Inference and SKI Approximation).
The inference expressions of Fact 2 are identical to the SKI approximations of
Def. 1.
###### Fact 2 (GSGP Inference).
The posterior mean, covariance, and log likelihood functions for the grid-
structured Gaussian process (Def. 2) are given by: $\displaystyle\textbf{mean:
}\mu_{f|\mathcal{D}}(\mathbf{x})=\mathbf{w}_{\mathbf{x}}^{T}\mathbf{\bar{z}}$
$\displaystyle\textbf{covariance:
}k_{f|\mathcal{D}}(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{w}_{\mathbf{x}}^{T}\mathbf{\bar{C}}\mathbf{w}_{\mathbf{x^{\prime}}}$
$\displaystyle\textbf{log likelihood:
}\log\Pr(\mathbf{y})=-\frac{1}{2}\big{[}\log\det(K_{G}W^{T}W+\sigma^{2}I)+\frac{\mathbf{y}^{T}(\mathbf{y}-W\mathbf{\bar{z}})}{\sigma^{2}}+c\big{]},$
where
$\mathbf{\bar{z}}=\mathbb{E}[\boldsymbol{\theta}|\mathbf{y}]=(K_{G}W^{T}W+\sigma^{2}I)^{-1}K_{G}W^{T}\mathbf{y}$
is the posterior mean of $\boldsymbol{\theta}$,
${\mathbf{\bar{C}}}=\operatorname{Var}(\boldsymbol{\theta}|\mathbf{y})=\sigma^{2}\left(K_{G}W^{T}W+\sigma^{2}I\right)^{-1}K_{G}$
is the posterior variance of $\boldsymbol{\theta}$ and
$c=n\log(2\pi)+(n-m)\log\sigma^{2}$.
### 3.1 GSGP Running Time and Memory
By Theorem 1, we can apply the SKI approximation using the GSGP inference
equations of Fact 2; these also involve structured matrices that are well
suited to iterative methods. In particular, they require linear solves and
logdet computation for the $m\times m$ matrix $K_{G}W^{T}W+\sigma^{2}I$ rather
than the $n\times n$ matrix $WK_{G}W^{T}+\sigma^{2}I$. Under the standard SKI
assumptions, this leads to $\mathcal{O}(m\log m)$ per-iteration run time and
$\mathcal{O}(m)$ memory footprint with $\mathcal{O}(n)$ precomputation.
Precomputation: GSGP involves precomputing $W^{T}W\in\mathbb{R}^{m\times m}$,
$W^{T}\mathbf{y}\in\mathbb{R}^{m}$, and
$\mathbf{y}^{T}\mathbf{y}\in\mathbb{R}$, which are the sufficient statistics
of a linear regression problem with feature matrix $W$. Each is a sum over $n$
data points and has fixed size depending only on $m$. Once computed, each
expression in Fact 2 can be computed without referring back to the data $W$
and $\mathbf{y}$.
It is clear that $\mathbf{y}^{T}\mathbf{y}=\sum_{i=1}^{n}y_{i}^{2}$ can be
computed in $\mathcal{O}(n)$ time with a single pass over the data. Assume the
interpolation strategy is local (e.g., linear or cubic interpolation), so that
each $\mathbf{w}_{\mathbf{x}_{i}}$ has $\mathcal{O}(1)$ non-zeros. Then
$W^{T}\mathbf{y}=\sum_{i=1}^{n}\mathbf{w}_{\mathbf{x}_{i}}^{T}\mathbf{y}$ can
also be computed in $\mathcal{O}(n)$ time with one pass over the data, since
each inner product accesses only a constant number of entries of $\mathbf{y}$.
$W^{T}W$ also has desirable computational properties:
###### Claim 2.
Assume that $G=\\{\mathbf{g}_{1},\ldots,\mathbf{g}_{m}\\}$ has spacing $s$,
i.e., $\left\lVert\mathbf{g}_{i}-\mathbf{g}_{j}\right\rVert_{\infty}\geq s$
for any $i\neq j$, and that $\mathbf{w}^{j}_{\mathbf{x}}$ is non-zero only if
$\|\mathbf{g}_{j}-\mathbf{x}\|_{\infty}<r\cdot s$ for some fixed integer $r$.
Then $W^{T}W$ can be computed in $\mathcal{O}(n(2r)^{2d})$ time and has at
most $(4r-1)^{d}$ entries per row. Therefore
$mv(W^{T}W)=\mathcal{O}(m(4r-1)^{d}).$
For example, $r=1$ for linear interpolation and $r=2$ for cubic interpolation.
The upshot is that $W^{T}W$ can be precomputed in $\mathcal{O}(n)$ time, after
which matrix-vector multiplications take $\mathcal{O}(m)$ time, with
dependence on $r$ and $d$ similar to that of $mv(W)=\mathcal{O}(n(2r)^{d})$.
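A minimal sketch of this $\mathcal{O}(n)$ precomputation for one-dimensional linear interpolation ($r=1$, $d=1$), where $W^{T}W$ comes out tridiagonal, matching the $(4r-1)^{d}=3$ nonzeros-per-row bound of Claim 2 (the data and grid are synthetic):

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
n, m = 10_000, 100
grid = np.linspace(0, 1, m)
h = grid[1] - grid[0]
x = rng.uniform(0, 1, n)
y = rng.standard_normal(n)

# sparse linear-interpolation weights (r = 1): two nonzeros per row of W
j = np.minimum((x / h).astype(int), m - 2)
t = (x - grid[j]) / h
W = csr_matrix((np.concatenate([1 - t, t]),
                (np.tile(np.arange(n), 2), np.concatenate([j, j + 1]))),
               shape=(n, m))

# O(n)-time sufficient statistics; the raw data can be discarded afterwards
WtW = (W.T @ W).tocsr()
Wty = W.T @ y
yty = y @ y
```

After this single pass, all inference expressions in Fact 2 depend on the data only through `WtW`, `Wty`, and `yty`.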
Per-Iteration and Memory: As discussed in Section 2.2, due to its grid
structure, $K_{G}$ admits fast matrix-vector multiplication:
$mv(K_{G})=\mathcal{O}(m\log m)$ for stationary kernels. Since $W^{T}W$ is
sparse, $mv(W^{T}W)=\mathcal{O}(m)$. Overall,
$mv(K_{G}W^{T}W+\sigma^{2}I)=\mathcal{O}(m\log m)$, giving per-iteration
runtime of $\mathcal{O}(m\log m)$ for computing the approximate mean,
covariance, and log-likelihood in Fact 2 via iterative methods. Importantly,
this complexity is _independent_ of the number of data points $n$. GSGP uses
$\mathtt{nnz}(W^{T}W)+m+m=\mathcal{O}(m)$ memory to store $W^{T}W$,
$W^{T}\mathbf{y}$ and $K_{G}$.
#### Limitations of GSGP.
Directly replacing the classic SKI method with the GSGP inference equations of
Fact 2 reduces per-iteration cost but has some undesirable trade-offs. In
particular, unlike $WK_{G}W^{T}+\sigma^{2}I$, the matrix
$K_{G}W^{T}W+\sigma^{2}I$ is _asymmetric_. Thus, conjugate gradient and
Lanczos—which are designed for _symmetric_ positive semidefinite matrices—are
not applicable. Asymmetric solvers like GMRES [27] can be used, and seem to
work well in practice for posterior mean estimation, but do not enjoy the same
theoretical convergence guarantees, nor do they as readily provide the
approximate tridiagonalization for low-rank approximation for predictive
covariance [22] or log-likelihood estimation [5]. It is possible to
algebraically manipulate the GSGP expressions to yield _symmetric_ $m\times m$
systems, but these lose the desirable ‘regularized form’ $A+\sigma^{2}I$ and
have worse conditioning, leading to more iterations being required in practice
(see Appendix A).
## 4 Efficient SKI via Factorized Iterative Methods
In this section we show how to achieve the best of both SKI and the GSGP
reformulation: we design ‘factorized’ versions of the CG and Lanczos methods
used in SKI with just $\mathcal{O}(m\log m)$ per-iteration complexity. These
methods are mathematically equivalent to the full methods, and so enjoy
identical convergence rates, avoiding the complications of the asymmetric
solves required by GSGP inference.
### 4.1 The Factorized Approach
Our approach centers on a simple observation relating matrix-vector
multiplication with the SKI kernel approximation
$\tilde{K}_{X}=WK_{G}W^{T}+\sigma^{2}I$ and the GSGP operator
$K_{G}W^{T}W+\sigma^{2}I$. To apply the SKI equations of Def. 1 via an
iterative method, a key step at each iteration is to multiply some iterate
$\mathbf{z}_{i}\in\mathbb{R}^{n}$ by $\tilde{K}_{X}$, requiring
$\mathcal{O}(n+m\log m)$ time. We avoid this by maintaining a compressed
representation of any iterate as
$\mathbf{z}_{i}=W\mathbf{\hat{z}}_{i}+c_{i}\mathbf{z}_{0}$, where
$\mathbf{\hat{z}}_{i}\in\mathbb{R}^{m}$, $c_{i}\in\mathbb{R}$ is a scalar
coefficient, and $\mathbf{z}_{0}$ is an initial value. At initialization,
$\mathbf{\hat{z}}_{0}=\mathbf{0}$ and $c_{0}=1$. Critically, this compressed
representation can be updated with multiplication by $\tilde{K}_{X}$ in just
$\mathcal{O}(m\log m)$ time using the following claim:
###### Claim 3 (Factorized Matrix-Vector Multiplication).
For any $\mathbf{z}_{i}\in\mathbb{R}^{n}$ with
$\mathbf{z}_{i}=W\mathbf{\hat{z}}_{i}+c_{i}\mathbf{z}_{0}$,
$\displaystyle(WK_{G}W^{T}+\sigma^{2}I)\mathbf{z}_{i}=W\mathbf{\hat{z}}_{i+1}+c_{i+1}\mathbf{z}_{0},$
where
$\mathbf{\hat{z}}_{i+1}=(K_{G}W^{T}W+\sigma^{2}I)\mathbf{\hat{z}}_{i}+c_{i}K_{G}W^{T}\mathbf{z}_{0}$
and $c_{i+1}=\sigma^{2}\cdot c_{i}$. Call this operation a _factorized update_
and denote it as
$(\mathbf{\hat{z}}_{i+1},c_{i+1})=\mathcal{A}(\mathbf{\hat{z}}_{i},c_{i})$. If
the vector $K_{G}W^{T}\mathbf{z}_{0}$ is precomputed in $\mathcal{O}(n+m\log
m)$ time, each subsequent factorized update takes $\mathcal{O}(m\log m)$ time.
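The algebra behind Claim 3 is a one-line expansion, and the update is easy to verify numerically on a small dense instance (random matrices here are stand-ins for the structured $W$ and $K_{G}$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 30, 8, 0.5
W = rng.random((n, m))
M = np.eye(m) + 0.1 * rng.random((m, m))
K_G = M @ M.T                      # SPD stand-in for the on-grid kernel matrix
z0 = rng.standard_normal(n)

KWz0 = K_G @ (W.T @ z0)            # precomputed once

def factorized_update(z_hat, c):
    """Compressed multiply: returns (z_hat', c') such that
    (W K_G W^T + sigma^2 I)(W z_hat + c z0) = W z_hat' + c' z0."""
    z_next = K_G @ (W.T @ (W @ z_hat)) + sigma**2 * z_hat + c * KWz0
    return z_next, sigma**2 * c
```

In the structured setting, every operation inside `factorized_update` involves only $m$-dimensional vectors, which is what removes the $\mathcal{O}(n)$ per-iteration term.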
For algorithms such as CG we also need to support additions and inner products
in the compressed representation. Additions are simple via linearity. Inner
products can be computed efficiently as well:
###### Claim 4 (Factorized Inner Products).
For any $\mathbf{z}_{i},\mathbf{y}_{i}\in\mathbb{R}^{n}$ with
$\mathbf{z}_{i}=W\mathbf{\hat{z}}_{i}+c_{i}\mathbf{z}_{0}$ and
$\mathbf{y}_{i}=W\mathbf{\hat{y}}_{i}+d_{i}\mathbf{y}_{0}$,
$\displaystyle\mathbf{z}_{i}^{T}\mathbf{y}_{i}$
$\displaystyle=\mathbf{\hat{z}}_{i}^{T}W^{T}W\mathbf{\hat{y}}_{i}+d_{i}\mathbf{\hat{z}}_{i}^{T}W^{T}\mathbf{y}_{0}$
$\displaystyle+c_{i}\mathbf{\hat{y}}_{i}^{T}W^{T}\mathbf{z}_{0}+c_{i}d_{i}\mathbf{y}_{0}^{T}\mathbf{z}_{0}.$
We denote the above operation by
$\langle(\mathbf{\hat{z}}_{i},c_{i}),(\mathbf{\hat{y}}_{i},d_{i})\rangle$. If
${W}^{T}\mathbf{z}_{0}$, ${W}^{T}\mathbf{y}_{0}$, and
$\mathbf{y}_{0}^{T}\mathbf{z}_{0}$ are precomputed in $\mathcal{O}(n)$ time,
then $\langle(\mathbf{\hat{z}}_{i},c_{i}),(\mathbf{\hat{y}}_{i},d_{i})\rangle$
can be computed using just one matrix-vector multiplication with $W^{T}W$ and
$\mathcal{O}(m)$ additional time.
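Claim 4 likewise admits a short numerical check; after the $\mathcal{O}(n)$ precomputation, the inner product of two compressed $n$-vectors never materializes either vector (random stand-ins again):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 8
W = rng.random((n, m))
z0, y0 = rng.standard_normal(n), rng.standard_normal(n)

# O(n) precomputation, after which each inner product is O(m)-dimensional work
WtW, Wtz0, Wty0, y0z0 = W.T @ W, W.T @ z0, W.T @ y0, y0 @ z0

def factorized_inner(z_hat, c, y_hat, d):
    """<W z_hat + c z0, W y_hat + d y0> without forming the n-vectors."""
    return (z_hat @ WtW @ y_hat + d * (z_hat @ Wty0)
            + c * (y_hat @ Wtz0) + c * d * y0z0)
```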
### 4.2 Factorized Conjugate Gradient
We now give an example of this approach by deriving a “factorized conjugate
gradient” algorithm. Factorized CG has lower per-iteration complexity than
standard CG for computing the posterior mean, covariance, and log likelihood
in the SKI approximations of Def. 1. In Appendix B we apply the same approach
to the Lanczos method, which can be used in approximating the logdet term in
the log likelihood, and for computing a low-rank approximation of the
posterior covariance.
Figure 2 shows a side by side of the classic CG method and our factorized
variant. CG maintains three iterates, the current solution estimate
$\mathbf{x}_{k}$, the residual $\mathbf{r}_{k}$, and a search direction
$\mathbf{p}_{k}$. We maintain each in a compressed form with
$\mathbf{x}_{k}=W\mathbf{\hat{x}}_{k}+c_{k}^{x}\mathbf{r}_{0}+\mathbf{x}_{0}$,
$\mathbf{r}_{k}=W\mathbf{\hat{r}}_{k}+c_{k}^{r}\mathbf{r}_{0}$, and
$\mathbf{p}_{k}=W\mathbf{\hat{p}}_{k}+c_{k}^{p}\mathbf{r}_{0}$. Note that the
initialization term $\mathbf{r}_{0}$ is shared across all iterates with a
different coefficient, and that $\mathbf{x}_{k}$ has an additional fixed
component $\mathbf{x}_{0}$. This is an initial solution guess for the system
solve, frequently zero. With these invariants, simply applying Claims 3 and 4
gives the factorized algorithm.
Algorithm 1 Conjugate gradient
1: procedure CG($K_{G},W,\mathbf{b},\sigma,\mathbf{x}_{0},\epsilon$)
2: $\mathbf{r}_{0}=\mathbf{b}-\tilde{K}\mathbf{x}_{0}$
3: $\mathbf{p}_{0}=\mathbf{r}_{0}$
4: for $k=0$ to maxiter do
5: $\alpha_{k}={\frac{\mathbf{r}_{k}^{T}\mathbf{r}_{k}}{\mathbf{p}_{k}^{T}\tilde{K}\mathbf{p}_{k}}}$
6: $\mathbf{x}_{k+1}=\mathbf{x}_{k}+\alpha_{k}\cdot\mathbf{p}_{k}$
7: $\mathbf{r}_{k+1}=\mathbf{r}_{k}-\alpha_{k}\cdot\tilde{K}\mathbf{p}_{k}$
8: if $\mathbf{r}_{k+1}^{T}\mathbf{r}_{k+1}\leq\epsilon$ exit loop
9: $\beta_{k}={\frac{\mathbf{r}_{k+1}^{T}\mathbf{r}_{k+1}}{\mathbf{r}_{k}^{T}\mathbf{r}_{k}}}$
10: $\mathbf{p}_{k+1}=\mathbf{r}_{k+1}+\beta_{k}\mathbf{p}_{k}$
11: return $\mathbf{x}_{k+1}$
Algorithm 2 Factorized conjugate gradient (FCG)
1: procedure FCG($K_{G},W,\mathbf{b},\sigma,\mathbf{x}_{0},\epsilon$)
2: $\mathbf{r}_{0}=\mathbf{b}-\tilde{K}\mathbf{x}_{0},\,\mathbf{\hat{r}}_{0}=\mathbf{0},\,c_{0}^{r}=1$
3: $\mathbf{\hat{p}}_{0}=\mathbf{0},\,c_{0}^{p}=1,\,\mathbf{\hat{x}}_{0}=\mathbf{0},\,c_{0}^{x}=0$
4: for $k=0$ to maxiter do
5: $\alpha_{k}=\frac{\langle(\mathbf{\hat{r}}_{k},c_{k}^{r}),(\mathbf{\hat{r}}_{k},c_{k}^{r})\rangle}{\langle(\mathbf{\hat{p}}_{k},c_{k}^{p}),\mathcal{A}(\mathbf{\hat{p}}_{k},c_{k}^{p})\rangle}$
6: $(\mathbf{\hat{x}}_{k+1},c_{k+1}^{x})=(\mathbf{\hat{x}}_{k},c_{k}^{x})+\alpha_{k}\cdot(\mathbf{\hat{p}}_{k},c_{k}^{p})$
7: $(\mathbf{\hat{r}}_{k+1},c_{k+1}^{r})=(\mathbf{\hat{r}}_{k},c_{k}^{r})-\alpha_{k}\cdot\mathcal{A}(\mathbf{\hat{p}}_{k},c_{k}^{p})$
8: if $\langle(\mathbf{\hat{r}}_{k+1},c_{k+1}^{r}),(\mathbf{\hat{r}}_{k+1},c_{k+1}^{r})\rangle\leq\epsilon$ exit loop
9: $\beta_{k}=\frac{\langle(\mathbf{\hat{r}}_{k+1},c_{k+1}^{r}),(\mathbf{\hat{r}}_{k+1},c_{k+1}^{r})\rangle}{\langle(\mathbf{\hat{r}}_{k},c_{k}^{r}),(\mathbf{\hat{r}}_{k},c_{k}^{r})\rangle}$
10: $(\mathbf{\hat{p}}_{k+1},c_{k+1}^{p})=(\mathbf{\hat{r}}_{k+1},c_{k+1}^{r})+\beta_{k}\cdot(\mathbf{\hat{p}}_{k},c_{k}^{p})$
11: return $\mathbf{x}_{k+1}=W\mathbf{\hat{x}}_{k+1}+c_{k+1}^{x}\cdot\mathbf{r}_{0}+\mathbf{x}_{0}$
Figure 2: Above $\tilde{K}=WK_{G}W^{T}+\sigma^{2}I$. $\mathcal{A}(\cdot)$ and
$\langle\cdot,\cdot\rangle$ denote the factorized matrix-vector multiplication
and inner product updates of Claims 3 and 4. The vector
$\mathbf{x}_{0}\in\mathbb{R}^{n}$ is an initial solution, the scalar
$\epsilon>0$ is a tolerance parameter, and maxiter is the maximum number of
iterations.
###### Proposition 1 (Factorized CG Equivalence and Runtime).
The outputs of Algs. 1 and 2 on the same inputs are identical. Alg. 2 performs
two matrix-vector multiplications with $K_{G}$ and three with $W$ initially.
In each iteration, it performs a constant number of multiplications with
$K_{G}$ and $W^{T}W$ plus $\mathcal{O}(m)$ additional work. If $W^{T}W$ is
sparse and $K_{G}$ has multilevel Toeplitz structure, its per iteration
runtime is $\mathcal{O}(m\log m)$.
Appendix B presents a further optimization to only require one matrix-vector
multiplication with $K_{G}$ and one with $W^{T}W$ per iteration. A similar
optimization applies to the factorized Lanczos method.
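For reference, Algorithm 1 translates almost line for line into code. The following sketch implements standard CG against a generic `matvec` callable (a hypothetical helper, mirroring the stopping rule $\mathbf{r}^{T}\mathbf{r}\leq\epsilon$); the factorized variant follows the same skeleton with the compressed operations substituted in:

```python
import numpy as np

def cg(matvec, b, x0, eps=1e-10, maxiter=1000):
    """Conjugate gradient for SPD systems, mirroring Algorithm 1."""
    x = x0.copy()
    r = b - matvec(x)                 # line 2: initial residual
    p = r.copy()                      # line 3: initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)         # line 5
        x = x + alpha * p             # line 6
        r = r - alpha * Ap            # line 7
        rs_new = r @ r
        if rs_new <= eps:             # line 8: ||r||^2 <= epsilon
            break
        p = r + (rs_new / rs) * p     # lines 9-10
        rs = rs_new
    return x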
## 5 Experiments
We conduct experiments to evaluate our “GSGP approach” of using factorized
algorithms to solve SKI inference tasks. We use: (1) factorized CG to solve
the linear systems for the SKI posterior mean and covariance expressions of
Def. 1 and for approximate tridiagonalization within stochastic log-likelihood
approximations [6], and (2) factorized Lanczos for low-rank approximations of
the SKI predictive covariance [22].
Our goals are to: (1) evaluate the running-time improvements of GSGP, (2)
examine new speed-accuracy tradeoffs enabled by GSGP, and (3) demonstrate the
ability to push the frontier of GP inference for massive data sets. We use a
synthetic data set and three real data sets from prior work [5, 37, 1],
summarized in Table 1. We focus on large, relatively low-dimensional datasets
– the regime targeted by structured kernel interpolation methods. The Radar
dataset is a subset of a larger 120M-point dataset. While SKI cannot scale to
this data size without significant computational resources, at the end of the
section we demonstrate GSGP's ability to scale to this regime with modest
runtime and memory usage.
Figure 3: Per-iteration time taken by the SKI and GSGP methods on a synthetic
dataset for posterior mean approximation. We see significant speedups when the
grid size $m$ is relatively small compared to $n$. Even when $m$ is larger,
e.g., $m=\Theta(n)$, GSGP performs no worse than SKI and sometimes improves on
its runtime, e.g., when $m=n/16$.
All linear solves use a tolerance of 0.01, and all kernels are squared
exponential. We utilize cubic interpolation for all experiments and provide
details on hardware and hyperparameters in Appendix C.1. In all cases, error
bars show 95% confidence intervals of mean running time over independent
trials.
Dataset | $n$ | $d$ | $m$ | Time | Memory
---|---|---|---|---|---
Sound | 59.3K | 1 | 8K | 0.433 | 0.247
Sound | 59.3K | 1 | 60K | 0.941 | 0.505
Radar | 10.5M | 3 | 51.2K | 0.014 | 0.007
Radar | 10.5M | 3 | 6.4M | 0.584 | 0.425
Precipitation | 528K | 3 | 128K | 0.326 | 0.366
Precipitation | 528K | 3 | 528K | 0.491 | 0.628
Precipitation | 528K | 3 | 1.2M | 0.806 | 2.941
Table 1: Ratios of GSGP to SKI per-iteration time and memory usage for
posterior mean approximation for different values of $m$ and $n$. GSGP shows
large improvements in a range of settings, even with very large grid size $m$.
Per-iteration resource usage. We first compare the per-iteration runtime for
posterior mean calculation using CG and Factorized CG on synthetic data of
varying sizes. The function $f(x)$ is a sine wave with two periods in the
interval $[0,1]$. Random $x$ locations are sampled in the interval and
$y=f(x)+\epsilon$ with $\epsilon\sim\mathcal{N}(0,0.25)$; grid points are
equally spaced on $[0,1]$. Figure 3 shows the average per-iteration inference
time over all iterations of 8 independent trials for increasing $n$ and three
different settings of grid size: $m=n$, $m=n/16$ and $m=\sqrt{n}$. GSGP is
substantially faster when $m<n$ (note log scale) and no slower when $m=n$.
Memory usage is another important consideration. Table 1 compares both per-
iteration running time and memory usage for posterior mean inference on our
real data sets.
Per-iteration time is averaged over one run of CG and FCG for each setting;
memory usage is calculated as $\mathtt{nnz}(W)+m+n$ for SKI and
$\mathtt{nnz}(W^{T}W)+2m$ for GSGP, where $\mathtt{nnz}(A)$ is the number of
nonzero entries of $A$. The gains are significant, especially when $m\ll n$,
and gains in time and/or memory are possible even when $m$ equals or exceeds
$n$ (e.g., precipitation, $m\in\\{528\mathrm{K},1.2\mathrm{M}\\}$); for very
large $m$ it is more resource efficient to run the original algorithm (or use
FCG without precomputing $W^{T}W$).
Figure 4: Error vs. runtime for approximate inference tasks on the sound
dataset with varying grid size $m$. GSGP gives much faster runtimes for fixed
$m$, allowing one to use a larger grid and achieve better runtime-accuracy
tradeoffs than SKI. Times are averaged over 20 trials and include pre-
processing. See text for description of error metrics.
Inference accuracy vs. time. A significant advantage of GSGP is the ability to
realize speed-accuracy tradeoffs that were not previously possible. Figure 4
illustrates this for the sound data set ($n=59.3\mathrm{K}$) by comparing
error vs. running time for four different GP inference tasks for grid sizes
$m\in\\{1\mathrm{K},2\mathrm{K},5\mathrm{K},6\mathrm{K},8\mathrm{K},10\mathrm{K},30\mathrm{K},60\mathrm{K}\\}$.
For mean estimation we compute SMAE (mean absolute error normalized by the
mean of observations) on a held-out test set of 691 points. For other tasks
(log-likelihood, covariance) we compute error relative to a reference value
computed with SKI for the highest $m$ using absolute difference for log-
likelihood and Frobenius norm from the reference value for covariance
matrices. For log-likelihood, we use 30 samples and $tol=0.01$ for stochastic
logdet approximation [5]. For covariance, we compute the $691\times 691$
posterior covariance matrix for test points, first using the exact SKI
expressions (which requires 691 linear solves) and then using a rank-$k$
approximation [22], that, once computed, yields $\mathcal{O}(k)$ time
approximations of posterior covariances, for $k=\min\\{m,10000\\}$. For each
task, GSGP is faster when $m<n$, sometimes substantially so, and achieves the
same accuracy, leading to strictly better time-accuracy tradeoffs.
Figure 5: Runtime vs. grid size $m$ for GP inference tasks on the radar data
set with $10.5$M data points. We observe significantly faster runtimes for
GSGP across all tasks and a wide range of grid sizes. From top to bottom: pre-
processing time, mean inference runtime, and log-determinant runtime for
$tol=0.1$ and $tol=0.01$ respectively. 30 random vectors are used in both log-
determinant computations.
Very large $n$. Figure 5 shows running time vs. $m$ for GP inference tasks on
a data set of $n=10.5\mathrm{M}$ radar reflectivity measurements in three
dimensions from 13 radar stations in the northeast US [1]. This is a situation
where $m\ll n$ is highly relevant: even the _smallest_ grid size of
$51.2\mathrm{K}=80\times 80\times 8$ is of scientific value for summarizing
broad-scale weather and biological activity. GSGP is much faster, e.g.,
roughly 150x and 15x faster for $m=51.2K$ on mean inference and log-likelihood
estimation respectively after one-time pre-processing (first panel). Pre-
processing is up to 3x slower for GSGP due to the need to compute $W^{T}W$. To
perform only one mean inference, the overall time of GSGP and SKI _including_
pre-processing is similar, which is consistent with the observation that
typical solves use only tens of iterations, and some of the per-iteration gain
is offset by pre-processing.
However, scientific modeling is highly iterative, and tasks other than mean
inference perform _many_ more iterations of linear solvers; the total time for
GSGP in these cases is much smaller than SKI. In realistic applications with
massive data sets, we expect $W^{T}W$ to be computed once and saved to disk.
GSGP also has the significant advantage that its memory footprint is
$\mathcal{O}(m)$, while SKI is $\mathcal{O}(m+n)$. The data above is a
_subset_ from a national radar network, which was the limit on which SKI could
run without exceeding 10GB of memory. To demonstrate scalability of GSGP, we
ran on data from the _entire_ national radar network with $n=120\mathrm{M}$ for
$m=128\mathrm{K}$, on which SKI far exceeds this memory limit. On this problem, GSGP
takes $4861.60\pm 233.42$ seconds for pre-processing, and then $9.33\pm 0.31$
seconds for mean inference (averaged over 10 trials).
## 6 Conclusions and Future Work
Our work shows that the SKI method for approximate Gaussian process regression
in fact performs exact inference for a natural grid-structured Gaussian
process. We leverage this observation to give an implementation for the method
with per-iteration complexity _independent of the dataset size $n$_. This
leads to significantly improved performance on a range of problems, including
the ability to scale GP inference to radar datasets with over $100$ million
data points – a regime far beyond what can typically be handled by SKI or
other approximation methods, such as inducing point and random feature
approaches.
Our work leaves open a number of questions. Algorithmically, it would be
interesting to explore if SKI can be efficiently implemented using direct,
rather than iterative, solvers that take advantage of the Toeplitz and band-
like structures of $K_{G}$ and $W^{T}W$ respectively. Theoretically, it would
be interesting to further explore the grid-structured Gaussian process for
which SKI performs exact inference. Intuitively, by interpolating data points
to a grid, this method seems to suppress ‘high-frequency’ components of the
kernel covariance function. Can this be analyzed to lead to formal
approximation guarantees or practical guidance in how to choose the grid size?
## References
* [1] R. Angell and D. R. Sheldon. Inferring latent velocities from weather radar data using Gaussian processes. In Advances in Neural Information Processing Systems 31 (NeurIPS), pages 8984–8993, 2018.
* [2] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
* [3] Z. Cheng, M. Gadelha, S. Maji, and D. Sheldon. A Bayesian perspective on the deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5443–5451, 2019.
* [4] N. Cressie and G. Johannesson. Fixed rank kriging for very large spatial data sets. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):209–226, 2008.
* [5] K. Dong, D. Eriksson, H. Nickisch, D. Bindel, and A. G. Wilson. Scalable log determinants for Gaussian process kernel learning. In Advances in Neural Information Processing Systems 30 (NeurIPS), pages 6327–6337, 2017.
* [6] J. R. Gardner, G. Pleiss, D. Bindel, K. Q. Weinberger, and A. G. Wilson. GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration. In Advances in Neural Information Processing Systems 31 (NeurIPS), pages 7576–7586, 2018.
* [7] J. R. Gardner, G. Pleiss, R. Wu, K. Q. Weinberger, and A. G. Wilson. Product kernel interpolation for scalable Gaussian processes. In Proceedings of the 21 International Conference on Artificial Intelligence and Statistics (AISTATS), 2018.
* [8] A. Garriga-Alonso, L. Aitchison, and C. E. Rasmussen. Deep Convolutional Networks as Shallow Gaussian Processes. International Conference on Learning Representations (ICLR), 2019.
* [9] J. A. Gubner. Block matrix formulas, 2015. Accessed at: https://gubner.ece.wisc.edu/notes/BlockMatrixFormulas.pdf.
* [10] J. Guinness and M. Fuentes. Circulant embedding of approximate covariances for inference from Gaussian data on large lattices. Journal of Computational and Graphical Statistics, 26(1):88–97, 2017.
* [11] M. J. Heaton, A. Datta, A. O. Finley, R. Furrer, J. Guinness, R. Guhaniyogi, F. Gerber, R. B. Gramacy, D. Hammerling, M. Katzfuss, et al. A case study competition among methods for analyzing large spatial data. Journal of Agricultural, Biological and Environmental Statistics, 24(3):398–425, 2019.
* [12] R. G. Keys. Cubic convolution interpolation for digital image processing. IEEE Transactions on Acoustics, Speech, and Signal Processing, 29(6):1153–1160, 1981.
* [13] Q. Le, T. Sarlós, and A. Smola. Fastfood: approximating kernel expansions in loglinear time. In Proceedings of the 30 International Conference on Machine Learning (ICML), volume 85, 2013.
* [14] D. Lee. Fast multiplication of a recursive block Toeplitz matrix by a vector and its application. Journal of Complexity, 2(4):295–305, 1986.
* [15] J. Lee, Y. Bahri, R. Novak, S. Schoenholz, J. Pennington, and J. Sohl-Dickstein. Deep Neural Networks as Gaussian Processes. International Conference on Learning Representations (ICLR), 2018.
* [16] G. Matheron. The Intrinsic Random Functions and Their Applications. Advances in Applied Probability, 5(3):439–468, 1973.
* [17] A. G. d. G. Matthews, M. Rowland, J. Hron, R. E. Turner, and Z. Ghahramani. Gaussian process behaviour in wide deep neural networks. International Conference on Learning Representations (ICLR), 2018.
* [18] G. Meanti, L. Carratino, L. Rosasco, and A. Rudi. Kernel methods through the roof: handling billions of points efficiently. arXiv:2006.10350, 2020.
* [19] R. Neal. Lecture notes for STA 414: Statistical methods for machine learning and data mining, 2011. Accessed at: http://www.utstat.utoronto.ca/~radford/sta414.S11/week4a.pdf.
* [20] R. M. Neal. Bayesian Learning for Neural Networks. PhD thesis, University of Toronto, 1995.
* [21] R. Novak, L. Xiao, Y. Bahri, J. Lee, G. Yang, J. Hron, D. A. Abolafia, J. Pennington, and J. Sohl-Dickstein. Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes. In Proceedings of the 7 International Conference on Learning Representations (ICLR), 2019.
* [22] G. Pleiss, J. R. Gardner, K. Q. Weinberger, and A. G. Wilson. Constant-time predictive distributions for Gaussian processes. In Proceedings of the 35 International Conference on Machine Learning (ICML), pages 4114–4123, 2018.
* [23] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6(Dec):1939–1959, 2005.
* [24] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems 20 (NeurIPS), pages 1177–1184, 2007.
* [25] C. E. Rasmussen. Evaluation of Gaussian Processes and Other Methods for Non-linear Regression. University of Toronto, 1999.
* [26] C. E. Rasmussen. Gaussian Processes in Machine Learning. In Advanced Lectures on Machine Learning, pages 63–71. Springer, 2004.
* [27] Y. Saad and M. H. Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM Journal on Scientific and Statistical Computing, 7(3):856–869, 1986.
* [28] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems 18 (NeurIPS), pages 1257–1264, 2005.
* [29] J. R. Stroud, M. L. Stein, and S. Lysen. Bayesian and maximum likelihood estimation for Gaussian processes on an incomplete lattice. Journal of Computational and Graphical Statistics, 26(1):108–120, 2017.
* [30] S. Ubaru, J. Chen, and Y. Saad. Fast estimation of $\mathrm{tr}(f(A))$ via stochastic Lanczos quadrature. SIAM Journal on Matrix Analysis and Applications, 38(4):1075–1099, 2017.
* [31] C. Walder, K. I. Kim, and B. Schölkopf. Sparse multiscale Gaussian process regression. In Proceedings of the 25 International Conference on Machine Learning (ICML), pages 1112–1119, 2008.
* [32] C. K. Williams. Computing with infinite networks. In Advances in Neural Information Processing Systems 10 (NeurIPS), pages 295–301, 1997.
* [33] C. K. Williams and D. Barber. Bayesian Classification with Gaussian Processes. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 20(12):1342–1351, 1998.
* [34] C. K. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 14 (NeurIPS), pages 682–688, 2001.
* [35] C. K. I. Williams and C. E. Rasmussen. Gaussian Processes for Regression. In Advances in Neural Information Processing Systems 9 (NeurIPS), pages 514–520, 1996.
* [36] A. Wilson and R. Adams. Gaussian process kernels for pattern discovery and extrapolation. In Proceedings of the 30 International Conference on Machine Learning (ICML), pages 1067–1075, 2013.
* [37] A. Wilson and H. Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In Proceedings of the 32 International Conference on Machine Learning (ICML), pages 1775–1784, 2015.
* [38] A. G. Wilson, C. Dann, and H. Nickisch. Thoughts on massively scalable Gaussian processes. arXiv:1511.01870, 2015.
Supplementary Appendices
## Appendix A Reformulation of SKI as Bayesian Linear Regression – Omitted
Details
### A.1 Equivalence between GSGP and SKI Approximation
###### Claim 1.
A GSGP with grid weights $\boldsymbol{\theta}$ drawn from a GP with covariance
function $k(\mathbf{x},\mathbf{x}^{\prime})$ is itself a GP with covariance
function
$\tilde{k}(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{w}_{\mathbf{x}}^{T}K_{G}\mathbf{w}_{\mathbf{x}^{\prime}}$.
###### Proof.
For any finite number of input points $\mathbf{x}_{1},\ldots,\mathbf{x}_{n}$,
the joint distribution of $(f(\mathbf{x}_{1}),\ldots,f(\mathbf{x}_{n}))$ is
zero-mean Gaussian, since each
$f(\mathbf{x}_{i})=\mathbf{w}_{\mathbf{x}_{i}}^{T}\boldsymbol{\theta}$ is a
linear transformation of the same zero-mean Gaussian random variable
$\boldsymbol{\theta}$. Therefore, $f(\mathbf{x})$ is a zero-mean GP. For a
particular pair of function values $f(\mathbf{x})$ and
$f(\mathbf{x}^{\prime})$, the covariance is given by
$\operatorname{Cov}(f(\mathbf{x}),f(\mathbf{x}^{\prime}))=\operatorname{Cov}(\mathbf{w}_{\mathbf{x}}^{T}\boldsymbol{\theta},\mathbf{w}_{\mathbf{x}^{\prime}}^{T}\boldsymbol{\theta})=\mathbf{w}_{\mathbf{x}}^{T}\operatorname{Cov}(\boldsymbol{\theta},\boldsymbol{\theta})\mathbf{w}_{\mathbf{x}^{\prime}}=\mathbf{w}_{\mathbf{x}}^{T}K_{G}\mathbf{w}_{\mathbf{x}^{\prime}}=\tilde{k}(\mathbf{x},\mathbf{x}^{\prime}).$
(3)
Therefore $f$ has the claimed covariance function. ∎
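As a quick numerical illustration of Claim 1, the Monte Carlo covariance of $f(\mathbf{x})=\mathbf{w}_{\mathbf{x}}^{T}\boldsymbol{\theta}$ can be compared against $\mathbf{w}_{\mathbf{x}}^{T}K_{G}\mathbf{w}_{\mathbf{x}^{\prime}}$. The sketch below uses a toy 1-D grid, an RBF kernel, and dense random weight vectors, all purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D grid and RBF grid kernel K_G (toy values, not from the paper).
grid = np.linspace(0.0, 1.0, 8)
K_G = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / 0.1 ** 2)
K_G += 1e-8 * np.eye(8)  # jitter so the Cholesky factorization succeeds

# Two arbitrary interpolation weight vectors w_x and w_x' (dense for simplicity).
w_x = rng.standard_normal(8)
w_xp = rng.standard_normal(8)

# Monte Carlo: draw theta ~ N(0, K_G) and form f(x) = w_x^T theta.
L = np.linalg.cholesky(K_G)
theta = L @ rng.standard_normal((8, 500_000))
f_x, f_xp = w_x @ theta, w_xp @ theta

empirical = np.mean(f_x * f_xp)  # sample estimate of Cov(f(x), f(x'))
analytic = w_x @ K_G @ w_xp      # claimed covariance w_x^T K_G w_x'
assert abs(empirical - analytic) < 0.3
```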
### A.2 Exact Inference for GSGP
For completeness we derive the exact GSGP inference equations for Fact 2.
###### Proof of Fact 2.
GSGP is a standard linear basis function model and it is well known (see e.g.,
[19]) that the posterior mean and covariance of $\boldsymbol{\theta}$ given
$\mathbf{y}$ are given by
$\displaystyle\mathbb{E}[\boldsymbol{\theta}|\mathbf{y}]=\frac{1}{\sigma^{2}}\cdot\left[K_{G}^{-1}+\frac{1}{\sigma^{2}}W^{T}W\right]^{-1}\cdot
W^{T}\mathbf{y}.$ (4)
and
$\displaystyle\operatorname{Var}(\boldsymbol{\theta}|\mathbf{y})=\left[K_{G}^{-1}+\frac{1}{\sigma^{2}}W^{T}W\right]^{-1}.$
(5)
Using that for invertible $A,B$, $(AB)^{-1}=B^{-1}A^{-1}$ we can write:
$\displaystyle\frac{1}{\sigma^{2}}\cdot\left[K_{G}^{-1}+\frac{1}{\sigma^{2}}W^{T}W\right]^{-1}=\frac{1}{\sigma^{2}}\cdot\left[I+\frac{1}{\sigma^{2}}K_{G}W^{T}W\right]^{-1}K_{G}=(K_{G}W^{T}W+\sigma^{2}I)^{-1}K_{G}.$
Plugging back into the posterior mean equation (4) gives
$\displaystyle\mathbb{E}[\boldsymbol{\theta}|\mathbf{y}]=(K_{G}W^{T}W+\sigma^{2}I)^{-1}K_{G}W^{T}\mathbf{y}=\mathbf{\bar{z}}.$
By linearity since
$f(\mathbf{x})=\mathbf{w}_{\mathbf{x}}^{T}\boldsymbol{\theta}$, this gives the
GSGP posterior mean equation,
$\mu_{f|\mathcal{D}}(\mathbf{x})=\mathbf{w}_{\mathbf{x}}^{T}\mathbf{\bar{z}}$.
Similarly, plugging back into the posterior variance equation (5) gives:
$\displaystyle\operatorname{Var}(\boldsymbol{\theta}|\mathbf{y})=\sigma^{2}(K_{G}W^{T}W+\sigma^{2}I)^{-1}K_{G}=\mathbf{\bar{C}}.$
Again by linearity this gives the GSGP posterior variance equation
$k_{f|\mathcal{D}}(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{w}_{\mathbf{x}}^{T}\mathbf{\bar{C}}\mathbf{w}_{\mathbf{x^{\prime}}}$.
Finally, it is well known [19] that the log likelihood of $\mathbf{y}$ under
the GSGP linear basis function model is given by:
$\displaystyle\log\Pr(\mathbf{y})=-\frac{1}{2}\left[n\log(2\pi)+n\log(\sigma^{2})+\log\det
K_{G}-\log\det(\mathbf{\bar{C}})+\frac{1}{\sigma^{2}}\mathbf{y}^{T}\mathbf{y}-\mathbf{\bar{z}}^{T}\mathbf{\bar{C}}^{-1}\mathbf{\bar{z}}\right]$
We can split
$\log\det(\mathbf{\bar{C}})=\log\det(\sigma^{2}(K_{G}W^{T}W+\sigma^{2}I)^{-1}K_{G})=m\log\sigma^{2}-\log\det(K_{G}W^{T}W+\sigma^{2}I)+\log\det(K_{G})$,
and canceling this gives:
$\displaystyle\log\Pr(\mathbf{y})=-\frac{1}{2}\left[n\log(2\pi)+(n-m)\log\sigma^{2}+\log\det(K_{G}W^{T}W+\sigma^{2}I)+\frac{1}{\sigma^{2}}\mathbf{y}^{T}\mathbf{y}-\mathbf{\bar{z}}^{T}\mathbf{\bar{C}}^{-1}\mathbf{\bar{z}}\right]$
(6)
Additionally we can write:
$\displaystyle\mathbf{\bar{z}}^{T}\mathbf{\bar{C}}^{-1}\mathbf{\bar{z}}$
$\displaystyle=\frac{1}{\sigma^{2}}\mathbf{\bar{z}}^{T}K_{G}^{-1}(K_{G}W^{T}W+\sigma^{2}I)(K_{G}W^{T}W+\sigma^{2}I)^{-1}K_{G}W^{T}\mathbf{y}=\frac{1}{\sigma^{2}}\mathbf{\bar{z}}^{T}W^{T}\mathbf{y}.$
Thus we have
$\frac{1}{\sigma^{2}}\mathbf{y}^{T}\mathbf{y}-\mathbf{\bar{z}}^{T}\mathbf{\bar{C}}^{-1}\mathbf{\bar{z}}=\frac{\mathbf{y}^{T}(\mathbf{y}-W\mathbf{\bar{z}})}{\sigma^{2}}$.
Plugging back into (6) yields the log likelihood formula of Fact 2, completing
the proof. ∎
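The rearranged mean and covariance expressions can be sanity-checked numerically against the standard weight-space formulas (4) and (5). The sketch below uses a random PSD $K_{G}$ and a dense $W$, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, sigma2 = 6, 40, 0.25

# Illustrative PSD grid kernel K_G and (dense, for simplicity) weight matrix W.
A = rng.standard_normal((m, m))
K_G = A @ A.T + 1e-6 * np.eye(m)
W = rng.standard_normal((n, m))
y = rng.standard_normal(n)

# Weight-space posterior mean, Eq. (4): (1/s2) [K_G^{-1} + (1/s2) W^T W]^{-1} W^T y
mean_ws = (1 / sigma2) * np.linalg.solve(
    np.linalg.inv(K_G) + (1 / sigma2) * W.T @ W, W.T @ y)

# Rearranged form used by GSGP: z_bar = (K_G W^T W + s2 I)^{-1} K_G W^T y
z_bar = np.linalg.solve(K_G @ W.T @ W + sigma2 * np.eye(m), K_G @ W.T @ y)
assert np.allclose(mean_ws, z_bar)

# Posterior covariance, Eq. (5), equals C_bar = s2 (K_G W^T W + s2 I)^{-1} K_G
var_ws = np.linalg.inv(np.linalg.inv(K_G) + (1 / sigma2) * W.T @ W)
C_bar = sigma2 * np.linalg.solve(K_G @ W.T @ W + sigma2 * np.eye(m), K_G)
assert np.allclose(var_ws, C_bar)
```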
### A.3 Equivalence of GSGP Exact Inference and SKI
Theorem 1, which states that the GSGP and SKI inference equations are
identical, follows directly from Claim 1, as GSGP is a Gaussian process whose
kernel is exactly the approximate kernel used by SKI. For illustrative
purposes, we also give a purely linear algebraic proof of Theorem 1 below:
###### Theorem 1.
The inference expressions of Fact 2 are identical to the SKI approximations of
Def. 1.
###### Linear Algebraic Proof of Theorem 1.
The equivalence between the expressions for the posterior mean and posterior
variance is a standard manipulation to convert between the “weight space” and
“function space” view of a GP with an explicit feature expansion, e.g., Eqs.
(2.11), (2.12) of [26]. We provide details with our notation for completeness.
The correspondence between the two expressions for the log-likelihood is due
to the same correspondence between “weight space” and “function space” views,
though we are not aware of a specific reference that provides the formula in
Fact 2.
First, recall these definitions from Def. 1 and Fact 2:
$\displaystyle\tilde{K}_{X}$ $\displaystyle=WK_{G}W^{T}$
$\displaystyle\mathbf{\tilde{z}}$
$\displaystyle=\left(WK_{G}W^{T}+\sigma^{2}I\right)^{-1}\mathbf{y}$
$\displaystyle\mathbf{\bar{z}}$
$\displaystyle=(K_{G}W^{T}W+\sigma^{2}I)^{-1}K_{G}W^{T}\mathbf{y}$
Mean: The two mean expressions are
$\mathbf{w}_{\mathbf{x}}^{T}K_{G}W^{T}\mathbf{\tilde{z}}$ (Def. 1) and
$\mathbf{w}_{\mathbf{x}}^{T}\mathbf{\bar{z}}$ (Fact 2). Expanding these, it
suffices to show that
$\displaystyle
K_{G}W^{T}\left(WK_{G}W^{T}+\sigma^{2}I\right)^{-1}=\left(K_{G}W^{T}W+\sigma^{2}I\right)^{-1}K_{G}W^{T}.$
(7)
This follows from the following claim:
###### Claim 5.
Let $A\in\mathbb{R}^{m\times n}$ and $B\in\mathbb{R}^{n\times m}$. Then
$A(BA+\sigma^{2}I_{n})^{-1}=(AB+\sigma^{2}I_{m})^{-1}A$
as long as $BA+\sigma^{2}I_{n}$ and $AB+\sigma^{2}I_{m}$ are both invertible.
###### Proof.
Observe that $(AB+\sigma^{2}I_{m})A=A(BA+\sigma^{2}I_{n})=ABA+\sigma^{2}A.$ If
the matrices $AB+\sigma^{2}I_{m}$ and $BA+\sigma^{2}I_{n}$ are invertible the
result follows by left- and right-multiplying by the inverses. ∎
We obtain (7) from Claim 5, applied with $A=K_{G}W^{T}$ and $B=W$. As required
by the claim, we have that both $K_{G}W^{T}W+\sigma^{2}I$ and
$WK_{G}W^{T}+\sigma^{2}I$ are invertible. For $\sigma>0$,
$WK_{G}W^{T}+\sigma^{2}I$ is positive definite, which implies invertibility.
$K_{G}W^{T}W$ is similar to the positive semidefinite matrix
$K_{G}^{1/2}W^{T}WK_{G}^{1/2}$, and thus has all non-negative eigenvalues.
Thus, for $\sigma>0$, $K_{G}W^{T}W+\sigma^{2}I$ has all positive eigenvalues
(in particular, it has no zero eigenvalue) and so is invertible.
Covariance: The two expressions for covariance are
$\mathbf{w}_{\mathbf{x}}^{T}\mathbf{\tilde{C}}\mathbf{w}_{\mathbf{x}^{\prime}}$ (Def. 1) and
$\mathbf{w}_{\mathbf{x}}^{T}\mathbf{\bar{C}}\mathbf{w}_{\mathbf{x}^{\prime}}$ (Fact 2) with
$\displaystyle\mathbf{\tilde{C}}$
$\displaystyle=K_{G}-K_{G}W^{T}(WK_{G}W^{T}+\sigma^{2}I)^{-1}WK_{G}$
$\displaystyle\mathbf{\bar{C}}$
$\displaystyle=\sigma^{2}\left(K_{G}W^{T}W+\sigma^{2}I\right)^{-1}K_{G}$
So it suffices to show that $\mathbf{\tilde{C}}=\mathbf{\bar{C}}$. Using Eq.
7, we have that
$\mathbf{\tilde{C}}=K_{G}-(K_{G}W^{T}W+\sigma^{2}I)^{-1}K_{G}W^{T}WK_{G}.$
Now, factor out $K_{G}$ and simplify to get:
$\displaystyle\mathbf{\tilde{C}}$
$\displaystyle=[I-(K_{G}W^{T}W+\sigma^{2}I)^{-1}K_{G}W^{T}W]K_{G}$
$\displaystyle=[(K_{G}W^{T}W+\sigma^{2}I)^{-1}\cdot(K_{G}W^{T}W+\sigma^{2}I-K_{G}W^{T}W)]K_{G}$
$\displaystyle=\sigma^{2}(K_{G}W^{T}W+\sigma^{2}I)^{-1}K_{G}=\mathbf{\bar{C}}$
Log likelihood: By matching terms in the two expressions and using the
definition of $\tilde{K}_{X}$, it suffices to show both of the following:
$\displaystyle\mathbf{y}^{T}\mathbf{\tilde{z}}$
$\displaystyle=\frac{\mathbf{y}^{T}(\mathbf{y}-W\mathbf{\bar{z}})}{\sigma^{2}},$
(8) $\displaystyle\log\det(WK_{G}W^{T}+\sigma^{2}I)$
$\displaystyle=\log\det(K_{G}W^{T}W+\sigma^{2}I)+(n-m)\log\sigma^{2}.$ (9)
For Eq. (8), we first observe that by our argument above that
$\mathbf{\bar{z}}=K_{G}W^{T}\mathbf{\tilde{z}}$, we have
$\displaystyle\frac{\mathbf{y}^{T}(\mathbf{y}-W\mathbf{\bar{z}})}{\sigma^{2}}=\frac{\mathbf{y}^{T}(\mathbf{y}-WK_{G}W^{T}\mathbf{\tilde{z}})}{\sigma^{2}}=\frac{\mathbf{y}^{T}(I-WK_{G}W^{T}(WK_{G}W^{T}+\sigma^{2}I)^{-1})\mathbf{y}}{\sigma^{2}}.$
(10)
We have:
$\displaystyle(I-WK_{G}W^{T}(WK_{G}W^{T}+\sigma^{2}I)^{-1})$
$\displaystyle=[WK_{G}W^{T}+\sigma^{2}I-WK_{G}W^{T}](WK_{G}W^{T}+\sigma^{2}I)^{-1}$
$\displaystyle=\sigma^{2}(WK_{G}W^{T}+\sigma^{2}I)^{-1}.$
Thus we can simplify (10) to:
$\displaystyle\frac{\mathbf{y}^{T}(\mathbf{y}-W\mathbf{\bar{z}})}{\sigma^{2}}=\mathbf{y}^{T}(WK_{G}W^{T}+\sigma^{2}I)^{-1}\mathbf{y}=\mathbf{y}^{T}\mathbf{\tilde{z}}.$
(11)
It remains to prove Eq. 9. This follows from the general claim:
###### Claim 6.
Let $A\in\mathbb{R}^{m\times n}$ and $B\in\mathbb{R}^{n\times m}$. Then:
$\det(BA+\sigma^{2}I_{n})=(\sigma^{2})^{n-m}\det(AB+\sigma^{2}I_{m})$
###### Proof.
The claim generalizes the Weinstein-Aronszajn identity, which states that
$\det(BA+I_{n})=\det(AB+I_{m})$. It can be proven using the block determinant
formulas in [9]. Let $C=\begin{bmatrix}\sigma I_{m}&-A\\ B&\sigma I_{n}\end{bmatrix}.$ We have:
$\displaystyle\det(C)=\det(\sigma I_{m})\cdot\det(B(\sigma I_{m})^{-1}A+\sigma
I_{n})$ $\displaystyle=\det(\sigma
I_{m})\cdot\det\left(\frac{1}{\sigma}(BA+\sigma^{2}I_{n})\right)$
$\displaystyle=\det(\sigma
I_{m})\cdot\det\left(\frac{1}{\sigma}I_{n}\right)\cdot\det(BA+\sigma^{2}I_{n})$
and similarly
$\displaystyle\det(C)=\det(\sigma I_{n})\cdot\det(A(\sigma I_{n})^{-1}B+\sigma
I_{m})$ $\displaystyle=\det(\sigma
I_{n})\cdot\det\left(\frac{1}{\sigma}(AB+\sigma^{2}I_{m})\right)$
$\displaystyle=\det(\sigma
I_{n})\cdot\det\left(\frac{1}{\sigma}I_{m}\right)\cdot\det(AB+\sigma^{2}I_{m}).$
Thus,
$\displaystyle\det(\sigma
I_{m})\cdot\det\left(\frac{1}{\sigma}I_{n}\right)\cdot\det(BA+\sigma^{2}I_{n})$
$\displaystyle=\det(\sigma
I_{n})\cdot\det\left(\frac{1}{\sigma}I_{m}\right)\cdot\det(AB+\sigma^{2}I_{m})$
$\displaystyle\det(BA+\sigma^{2}I_{n})$
$\displaystyle=\sigma^{2(n-m)}\cdot\det(AB+\sigma^{2}I_{m}),$
giving the claim. ∎
Applying Claim 6 with $A=K_{G}W^{T}$ and $B=W$ gives that
$\det(WK_{G}W^{T}+\sigma^{2}I)=(\sigma^{2})^{n-m}\det(K_{G}W^{T}W+\sigma^{2}I)$
which in turn gives that
$\log\det(WK_{G}W^{T}+\sigma^{2}I)=\log\det(K_{G}W^{T}W+\sigma^{2}I)+(n-m)\log\sigma^{2}$
and so completes Eq. (9) and the theorem. ∎
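The generalized Weinstein-Aronszajn identity of Claim 6 can also be spot-checked numerically on small random matrices (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, s2 = 3, 5, 2.0
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

# det(BA + s2 I_n) = (s2)^{n-m} det(AB + s2 I_m)
lhs = np.linalg.det(B @ A + s2 * np.eye(n))
rhs = s2 ** (n - m) * np.linalg.det(A @ B + s2 * np.eye(m))
assert np.isclose(lhs, rhs)
```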
### A.4 Efficiency of Computing and Multiplying $W^{T}W$
###### Claim 2.
Assume that $G=\{\mathbf{g}_{1},\ldots,\mathbf{g}_{m}\}$ has spacing $s$,
i.e., $\left\lVert\mathbf{g}_{i}-\mathbf{g}_{j}\right\rVert_{\infty}\geq s$
for any distinct $i,j\in[m]$. Also assume that $\mathbf{w}^{j}_{\mathbf{x}}$ is non-zero
only if $\|\mathbf{g}_{j}-\mathbf{x}\|_{\infty}<r\cdot s$ for some fixed
integer $r$. Then $W^{T}W$ can be computed in $\mathcal{O}(n(2r)^{2d})$ time
and has at most $(4r-1)^{d}$ entries per row. Therefore
$mv(W^{T}W)=\mathcal{O}(m(4r-1)^{d}).$
###### Proof.
First, write $W^{T}W$ as a sum over data points:
$W^{T}W=\sum_{i=1}^{n}\mathbf{w}_{\mathbf{x}_{i}}\mathbf{w}_{\mathbf{x}_{i}}^{T}.$
The vector $\mathbf{w}_{\mathbf{x}_{i}}$ has nonzeros only for grid points
$\mathbf{g}_{j}$ such that $\|\mathbf{g}_{j}-\mathbf{x}_{i}\|_{\infty}\leq
r\cdot s$. Since the grid points have spacing $s$, there are at most
$(2r)^{d}$ such grid points. Therefore the outer product
$\mathbf{w}_{\mathbf{x}_{i}}\mathbf{w}_{\mathbf{x}_{i}}^{T}$ has at most
$(2r)^{2d}$ non-zeros, and the sum can be computed in
$\mathcal{O}(n(2r)^{2d})$ time.
The sum is also sparse. The entry $(W^{T}W)_{jk}$ is non-zero only if there is
_some_ data point $\mathbf{x}_{i}$ within distance $r\cdot s$ of both
$\mathbf{g}_{j}$ and $\mathbf{g}_{k}$ in each dimension. This is true only if
$\|\mathbf{g}_{j}-\mathbf{g}_{k}\|_{\infty}<2r\cdot s$. For a given grid point
$\mathbf{g}_{j}$, there are at most $(4r-1)^{d}$ grid points $\mathbf{g}_{k}$
satisfying $\|\mathbf{g}_{j}-\mathbf{g}_{k}\|_{\infty}<2r\cdot s$ (e.g., in 1
dimension there are $2r-1$ neighbors to the left, $2r-1$ neighbors to the
right, plus the case $k=j$). Therefore the $j$th row of $W^{T}W$ has at most
$(4r-1)^{d}$ nonzeros, as claimed. ∎
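The construction in the proof of Claim 2 can be sketched directly: accumulate $W^{T}W$ one sparse outer product at a time, touching only the $(2r)^{2d}$ entries each data point contributes. The toy setup below uses $d=1$, $r=2$, and an illustrative random $W$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 100, 12  # toy sizes; the real W is sparse with (2r)^d nonzeros per row

# Illustrative W: each data point touches 4 consecutive grid points (r=2, d=1).
W = np.zeros((n, m))
for i in range(n):
    j = rng.integers(0, m - 3)      # leftmost grid index touched by point i
    W[i, j:j + 4] = rng.random(4)   # its (2r)^d = 4 interpolation weights

# Accumulate W^T W = sum_i w_{x_i} w_{x_i}^T, touching only (2r)^{2d} = 16
# entries per data point, i.e. O(n (2r)^{2d}) total work.
WtW = np.zeros((m, m))
for i in range(n):
    nz = np.nonzero(W[i])[0]
    WtW[np.ix_(nz, nz)] += np.outer(W[i, nz], W[i, nz])

assert np.allclose(WtW, W.T @ W)
# Each row has at most (4r - 1)^d = 7 nonzeros (band of half-width 2r - 1 = 3).
assert max(np.count_nonzero(row) for row in WtW) <= 7
```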
### A.5 Symmetric Reformulation of GSGP
Recall that directly replacing the SKI approximate inference equations of
Definition 1 with the GSGP inference equations of Fact 2 reduces per-iteration
cost from $\mathcal{O}(n+m\log m)$ to $\mathcal{O}(m\log m)$. However, the
matrix $K_{G}W^{T}W+\sigma^{2}I$ is _asymmetric_ , which prevents the
application of symmetric system solvers like the conjugate gradient method
with strong guarantees.
Here we describe a symmetric reformulation of GSGP that does not compromise the
per-iteration cost. We write the matrix $(K_{G}W^{T}W+\sigma^{2}I)^{-1}$,
whose application is required in both the posterior mean and covariance
computations, as:
$\displaystyle(K_{G}W^{T}W+\sigma^{2}I)^{-1}$
$\displaystyle=(K_{G}W^{T}WK_{G}K_{G}^{-1}+\sigma^{2}K_{G}K_{G}^{-1})^{-1}$
$\displaystyle=K_{G}(K_{G}W^{T}WK_{G}+\sigma^{2}K_{G})^{-1}.$
Applying the above matrix requires a symmetric solve in
$(K_{G}W^{T}WK_{G}+\sigma^{2}K_{G})^{-1}$, along with a single
$\mathcal{O}(m\log m)$ time MVM with $K_{G}$. Our per-iteration complexity
thus remains at $\mathcal{O}(m\log m)$ – the matrix vector multiplication time
for $K_{G}W^{T}WK_{G}+\sigma^{2}K_{G}$. However, this matrix is no longer of
the ‘regularized form’ $A+\sigma^{2}I$, and may have worse condition number
than $WK_{G}W^{T}+\sigma^{2}I$, possibly leading to slower convergence of
iterative solvers like CG as compared to SKI.
We can similarly symmetrize the logdet computation in the likelihood
expression by writing
$\log\det(K_{G}W^{T}W+\sigma^{2}I)=\log\det(K_{G}W^{T}WK_{G}+\sigma^{2}K_{G})-\log\det(K_{G}).$
Again, however, it is unclear how this might affect the convergence of
iterative methods for logdet approximation.
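The symmetrized system can be checked numerically: applying $K_{G}$ after a solve in $K_{G}W^{T}WK_{G}+\sigma^{2}K_{G}$ recovers the inverse of the asymmetric GSGP matrix. A small dense sketch (sizes and matrices illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, s2 = 5, 20, 0.5
A = rng.standard_normal((m, m))
K_G = A @ A.T + 1e-6 * np.eye(m)  # illustrative PSD grid kernel
W = rng.standard_normal((n, m))

asym = np.linalg.inv(K_G @ W.T @ W + s2 * np.eye(m))
sym_inner = np.linalg.inv(K_G @ W.T @ W @ K_G + s2 * K_G)  # symmetric solve
assert np.allclose(asym, K_G @ sym_inner)

# The symmetrized matrix really is symmetric, unlike K_G W^T W + s2 I.
S = K_G @ W.T @ W @ K_G + s2 * K_G
assert np.allclose(S, S.T)
```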
## Appendix B Factorized Iterative Methods – Omitted Details
### B.1 Proofs for Factorized Update Steps
We give proofs of our fundamental factorized update steps described in Claims
3 and 4.
###### Claim 3.
For any $\mathbf{z}_{i}\in\mathbb{R}^{n}$ with
$\mathbf{z}_{i}=W\mathbf{\hat{z}}_{i}+c_{i}\mathbf{z}_{0}$,
$\displaystyle(WK_{G}W^{T}+\sigma^{2}I)\mathbf{z}_{i}=W\mathbf{\hat{z}}_{i+1}+c_{i+1}\mathbf{z}_{0},$
where
$\mathbf{\hat{z}}_{i+1}=(K_{G}W^{T}W+\sigma^{2}I)\mathbf{\hat{z}}_{i}+c_{i}K_{G}W^{T}\mathbf{z}_{0}$
and $c_{i+1}=\sigma^{2}\cdot c_{i}$. Call this operation a _factorized update_
and denote it as
$(\mathbf{\hat{z}}_{i+1},c_{i+1})=\mathcal{A}(\mathbf{\hat{z}}_{i},c_{i})$. If
the vector $K_{G}W^{T}\mathbf{z}_{0}$ is precomputed in $\mathcal{O}(n+m\log
m)$ time, each subsequent factorized update takes $\mathcal{O}(m\log m)$ time.
###### Proof.
We have:
$\displaystyle(WK_{G}W^{T}+\sigma^{2}I)\mathbf{z}_{i}$
$\displaystyle=(WK_{G}W^{T}+\sigma^{2}I)W\mathbf{\hat{z}}_{i}+c_{i}(WK_{G}W^{T}+\sigma^{2}I)\mathbf{z}_{0}$
$\displaystyle=W(K_{G}W^{T}W+\sigma^{2}I)\mathbf{\hat{z}}_{i}+c_{i}W(K_{G}W^{T}\mathbf{z}_{0})+c_{i}\cdot\sigma^{2}\cdot\mathbf{z}_{0}$
$\displaystyle=W\left[(K_{G}W^{T}W+\sigma^{2}I)\mathbf{\hat{z}}_{i}+c_{i}K_{G}W^{T}\mathbf{z}_{0}\right]+c_{i}\cdot\sigma^{2}\cdot\mathbf{z}_{0}.$
which completes the derivation of the update.
As discussed in Sec. 3.1, it takes $\mathcal{O}(n+m\log m)$ time to precompute
$K_{G}W^{T}\mathbf{z}_{0}$, after which: it takes $\mathcal{O}(m\log m)$ time
to compute $(K_{G}W^{T}W+\sigma^{2}I)\mathbf{\hat{z}}_{i}$, it takes
$\mathcal{O}(m)$ time to add in $c_{i}K_{G}W^{T}\mathbf{z}_{0}$, and it takes
$\mathcal{O}(1)$ time to update $c_{i}$, for a total of $\mathcal{O}(m\log m)$
time for each update.
∎
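The factorized update of Claim 3 can be verified by applying the SKI operator $WK_{G}W^{T}+\sigma^{2}I$ both directly and in compressed form, starting from $\mathbf{z}_{0}$. A small dense sketch (sizes and matrices illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, s2 = 5, 15, 0.3
A = rng.standard_normal((m, m))
K_G = A @ A.T                    # illustrative PSD grid kernel
W = rng.standard_normal((n, m))
z0 = rng.standard_normal(n)

Kt = W @ K_G @ W.T + s2 * np.eye(n)  # the SKI operator W K_G W^T + s2 I
KWt_z0 = K_G @ W.T @ z0              # precomputed once, as in Claim 3

# Start from z = W*0 + 1*z0 and apply the operator twice both ways.
z_hat, c = np.zeros(m), 1.0
z_direct = z0.copy()
for _ in range(2):
    # Factorized update A(z_hat, c) from Claim 3 -- only m-dimensional work.
    z_hat = (K_G @ (W.T @ W @ z_hat) + s2 * z_hat) + c * KWt_z0
    c = s2 * c
    z_direct = Kt @ z_direct  # direct n-dimensional update, for reference
    assert np.allclose(W @ z_hat + c * z0, z_direct)
```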
###### Claim 4.
For any $\mathbf{z}_{i},\mathbf{y}_{i}\in\mathbb{R}^{n}$ with
$\mathbf{z}_{i}=W\mathbf{\hat{z}}_{i}+c_{i}\mathbf{z}_{0}$ and
$\mathbf{y}_{i}=W\mathbf{\hat{y}}_{i}+d_{i}\mathbf{y}_{0}$,
$\displaystyle\mathbf{z}_{i}^{T}\mathbf{y}_{i}=\mathbf{\hat{z}}_{i}^{T}W^{T}W\mathbf{\hat{y}}_{i}+d_{i}\mathbf{\hat{z}}_{i}^{T}W^{T}\mathbf{y}_{0}+c_{i}\mathbf{\hat{y}}_{i}^{T}W^{T}\mathbf{z}_{0}+c_{i}d_{i}\mathbf{y}_{0}^{T}\mathbf{z}_{0}.$
We denote the above operation by
$\langle(\mathbf{\hat{z}}_{i},c_{i}),(\mathbf{\hat{y}}_{i},d_{i})\rangle$.
###### Proof.
We have:
$\displaystyle\mathbf{z}_{i}^{T}\mathbf{y}_{i}=(W\mathbf{\hat{z}}_{i}+c_{i}\mathbf{z}_{0})^{T}(W\mathbf{\hat{y}}_{i}+d_{i}\mathbf{y}_{0})=\mathbf{\hat{z}}_{i}^{T}W^{T}W\mathbf{\hat{y}}_{i}+d_{i}\mathbf{\hat{z}}_{i}^{T}W^{T}\mathbf{y}_{0}+c_{i}\mathbf{\hat{y}}_{i}^{T}W^{T}\mathbf{z}_{0}+c_{i}d_{i}\mathbf{y}_{0}^{T}\mathbf{z}_{0},$
giving the claim. ∎
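Similarly, the factorized inner product of Claim 4 can be verified on random compressed vectors (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 4, 11
W = rng.standard_normal((n, m))
z_hat, y_hat = rng.standard_normal(m), rng.standard_normal(m)
z0, y0 = rng.standard_normal(n), rng.standard_normal(n)
c, d = 0.7, -1.3

z = W @ z_hat + c * z0  # compressed vectors, expanded here only to check
y = W @ y_hat + d * y0

# Factorized inner product from Claim 4 -- never forms the n-vectors.
ip = (z_hat @ (W.T @ W @ y_hat) + d * z_hat @ (W.T @ y0)
      + c * y_hat @ (W.T @ z0) + c * d * (y0 @ z0))
assert np.isclose(ip, z @ y)
```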
Algorithm 3 Efficiently Factorized Conjugate Gradient (EFCG)
1:procedure EFCG($K_{G},W,\mathbf{b},\sigma,\mathbf{x}_{0},\epsilon$)
2: New Iterates:
3:
$\mathbf{\hat{v}}_{k}=\left[K_{G}W^{T}W+\sigma^{2}I\right]\mathbf{\hat{r}}_{k}$,$\mathbf{\hat{u}}_{k}=W^{T}W\mathbf{\hat{r}}_{k}$
4:
$\mathbf{\hat{z}}_{k}=\left[K_{G}W^{T}W+\sigma^{2}I\right]\mathbf{\hat{p}}_{k}$,
$\mathbf{\hat{s}}_{k}=W^{T}W\mathbf{\hat{p}}_{k}$
5:
$\mathbf{r_{0}}=\mathbf{b}-\tilde{K}\mathbf{x}_{0},\,\mathbf{\hat{r}_{0}}=\mathbf{0},\,c_{0}^{r}=1$
6: $\mathbf{\hat{p}}_{0}=\mathbf{0},\,c_{0}^{p}=1$,
$\,\mathbf{\hat{x}}_{0}=\mathbf{0},\,c_{0}^{x}=0$
7: $\mathbf{\hat{v}}_{0}=\mathbf{0},\mathbf{\hat{u}}_{0}=\mathbf{0}$
8: $\mathbf{\hat{z}}_{0}=\mathbf{0},\mathbf{\hat{s}}_{0}=\mathbf{0}$
9: for $k=0$ to maxiter do
10:
$\alpha_{k}=\frac{\mathbf{\hat{u}}_{k}^{T}\mathbf{\hat{r}}_{k}+c_{k}^{r}c_{k}^{r}\left\lVert\mathbf{r}_{0}\right\rVert^{2}+2c_{k}^{r}(\mathbf{\hat{r}}_{k}^{T}W^{T}\mathbf{r}_{0})}{\mathbf{\hat{s}}_{k}^{T}\mathbf{\hat{z}}_{k}+c_{k}^{z}c_{k}^{p}\left\lVert\mathbf{r}_{0}\right\rVert^{2}+c_{k}^{p}(\mathbf{\hat{z}}_{k}^{T}W^{T}\mathbf{r}_{0})+c_{k}^{z}(\mathbf{\hat{r}}_{k}^{T}W^{T}\mathbf{r}_{0})}$
11:
$(\mathbf{\hat{x}}_{k+1},c_{k+1}^{x})=(\mathbf{\hat{x}}_{k},c_{k}^{x})+\alpha_{k}\cdot(\mathbf{\hat{p}}_{k},c_{k}^{p})$
12:
$(\mathbf{\hat{r}}_{k+1},c_{k+1}^{r})=(\mathbf{\hat{r}}_{k},c_{k}^{r})-\alpha_{k}\cdot(\mathbf{\hat{z}}_{k}+c_{k}^{p}\cdot(K_{G}W^{T}\mathbf{r}_{0}),\sigma^{2}c_{k}^{p})$
13:
$[\mathbf{\hat{v}}_{k+1},\mathbf{\hat{u}}_{k+1}]=\mathcal{B}(\mathbf{\hat{r}}_{k+1})$
14: if
$\mathbf{\hat{u}}_{k+1}^{T}\mathbf{\hat{r}}_{k+1}+c_{k+1}^{r}c_{k+1}^{r}\left\lVert\mathbf{r}_{0}\right\rVert^{2}+2c_{k+1}^{r}(\mathbf{\hat{r}}_{k+1}^{T}W^{T}\mathbf{r}_{0})\leq\epsilon$
exit loop
15:
$\beta_{k}=\frac{\mathbf{\hat{u}}_{k+1}^{T}\mathbf{\hat{r}}_{k+1}+c_{k+1}^{r}c_{k+1}^{r}\left\lVert\mathbf{r}_{0}\right\rVert^{2}+2c_{k+1}^{r}(\mathbf{\hat{r}}_{k+1}^{T}W^{T}\mathbf{r}_{0})}{\mathbf{\hat{u}}_{k}^{T}\mathbf{\hat{r}}_{k}+c_{k}^{r}c_{k}^{r}\left\lVert\mathbf{r}_{0}\right\rVert^{2}+2c_{k}^{r}(\mathbf{\hat{r}}_{k}^{T}W^{T}\mathbf{r}_{0})}$
16:
$(\mathbf{\hat{p}}_{k+1},c_{k+1}^{p})=(\mathbf{\hat{r}}_{k+1},c_{k+1}^{r})+\beta_{k}\cdot(\mathbf{\hat{p}}_{k},c_{k}^{p})$
17:
$\mathbf{\hat{s}}_{k+1}=\mathbf{\hat{u}}_{k+1}+\beta_{k}\cdot\mathbf{\hat{s}}_{k}$
18:
$\mathbf{\hat{z}}_{k+1}=\mathbf{\hat{v}}_{k+1}+\beta_{k}\cdot\mathbf{\hat{z}}_{k}$
19: return
$\mathbf{x}_{k+1}=W\mathbf{\hat{x}}_{k+1}+c_{k+1}^{x}\cdot\mathbf{r}_{0}+\mathbf{x}_{0}$
### B.2 Efficiently Factorized Conjugate Gradient Algorithms
We now present Efficiently factorized conjugate gradient (EFCG) (Algorithm 3),
which improves on our basic Factorized CG algorithm (Algorithm 2) by avoiding
any extra multiplication with $W$ and $W^{T}W$. The central idea of EFCG is to
exploit the fact that each time we perform a matrix-vector multiplication with
the GSGP operator $K_{G}W^{T}W+\sigma^{2}I$, we also must perform one with
$W^{T}W$. We can save the result of this multiplication to avoid repeated
work. In particular, this lets us avoid extra MVM costs associated with
$W^{T}W$ present in the factorized inner product steps of Algorithm 2. Like
Algorithm 2, Algorithm 3 maintains the iterates of CG (Algorithm 1) exactly, in
the same compressed form
$\mathbf{x}_{k}=W\mathbf{\hat{x}}_{k}+c_{k}^{x}\mathbf{r}_{0}+\mathbf{x}_{0}$,
$\mathbf{r}_{k}=W\mathbf{\hat{r}}_{k}+c_{k}^{r}\mathbf{r}_{0}$, and
$\mathbf{p}_{k}=W\mathbf{\hat{p}}_{k}+c_{k}^{p}\mathbf{r}_{0}$.
We let $[\mathbf{v},\mathbf{u}]=\mathcal{B}(\mathbf{x})$ denote the operation
that returns $\mathbf{v}=\left[K_{G}W^{T}W+\sigma^{2}I\right]\mathbf{x}$ and
$\mathbf{u}=W^{T}W\mathbf{x}$. Since $\mathbf{u}$ must be computed as an
intermediate step in computing $\mathbf{v}$, this operation has the same cost
as a standard matrix-vector multiplication with
$\left[K_{G}W^{T}W+\sigma^{2}I\right]$. Notice that Algorithm 3 performs just
one $\mathcal{B}(\mathbf{x})$ operation per iteration, requiring a single
matrix vector multiplication with each of $K_{G}$ and $W^{T}W$ per iteration.
Both $K_{G}W^{T}\mathbf{r}_{0}$ and $W^{T}\mathbf{r}_{0}$ are precomputed.
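A minimal sketch of the $\mathcal{B}(\mathbf{x})$ operation (the function name `B_op` and the dense toy matrices are illustrative; in the real implementation the $K_{G}$ MVM uses FFTs on Toeplitz structure and $W^{T}W$ is a sparse banded matrix):

```python
import numpy as np

def B_op(K_G_mvm, WtW, x, sigma2):
    """Return v = (K_G W^T W + sigma^2 I) x and the intermediate u = W^T W x.
    Since u falls out while computing v, caching it costs nothing extra."""
    u = WtW @ x                  # sparse banded MVM in the real setting
    v = K_G_mvm(u) + sigma2 * x  # the K_G MVM would use FFTs in SKI
    return v, u

rng = np.random.default_rng(8)
m, s2 = 6, 0.4
A = rng.standard_normal((m, m))
K_G = A @ A.T                    # illustrative PSD grid kernel
W = rng.standard_normal((10, m))
x = rng.standard_normal(m)

v, u = B_op(lambda t: K_G @ t, W.T @ W, x, s2)
assert np.allclose(u, W.T @ W @ x)
assert np.allclose(v, (K_G @ (W.T @ W) + s2 * np.eye(m)) @ x)
```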
In addition to the $\mathcal{B}(\mathbf{x})$ operation, the superior efficiency of
EFCG (Algorithm 3) over CG and Factorized CG can mainly be attributed to the
following facts:
* •
It uses four new iterates:
$\mathbf{\hat{v}}_{k}=\left[K_{G}W^{T}W+\sigma^{2}I\right]\mathbf{\hat{r}}_{k}$,
$\mathbf{\hat{u}}_{k}=W^{T}W\mathbf{\hat{r}}_{k}$,
$\mathbf{\hat{z}}_{k}=\left[K_{G}W^{T}W+\sigma^{2}I\right]\mathbf{\hat{p}}_{k}$
and $\mathbf{\hat{s}}_{k}=W^{T}W\mathbf{\hat{p}}_{k}$. Given these iterates
all factorized inner products can be computed without any extra multiplication
with $W^{T}W$.
* •
If the initial solution is $\mathbf{x}_{0}=\mathbf{0}$, which is the most
common choice in practice for CG, the $\tilde{K}\mathbf{x}_{0}$ multiplication can
be avoided. Also, observe that in the SKI mean and covariance approximations
(Definition 1), we only need $W^{T}\mathbf{x}_{k+1}$, which is equal to
$W^{T}W\mathbf{\hat{x}}_{k+1}+c_{k+1}^{x}W^{T}\mathbf{r}_{0}+W^{T}\mathbf{x}_{0}$. Since
$W^{T}\mathbf{r}_{0}$ is pre-computed and $W^{T}\mathbf{x}_{0}=\mathbf{0}$, no extra
multiplication with $W$ or $W^{T}W$ is required beyond computing
$K_{G}W^{T}\mathbf{r}_{0}$ and $W^{T}\mathbf{r}_{0}$.
In Algorithm 4, we present a further simplified variant of EFCG, for the case
when the initial residual $\mathbf{r}_{0}$ is in the span of $W$, that directly
computes $W^{T}\mathbf{x}_{k+1}$, where $\mathbf{x}_{k+1}$ is the final solution
returned by the EFCG algorithm. Observe from the SKI mean and covariance
expressions (Definition 1) that we always need to post-process
$\mathbf{x}_{k+1}$ as $W^{T}\mathbf{x}_{k+1}$ to estimate them. Unlike EFCG,
simplified EFCG (i.e., Algorithm 4) maintains a compressed form for
$\mathbf{r}_{k}$ using $\mathbf{\hat{r}}_{k}$ (as
$\mathbf{r}_{k}=W\mathbf{\hat{r}}_{k}$) and does not maintain $\mathbf{p}_{k}$.
In addition, Algorithm 4 maintains another iterate
$\mathbf{\hat{x}}^{d}_{k+1}$ such that
$W^{T}\mathbf{x}_{k+1}=\mathbf{\hat{x}}^{d}_{k+1}+W^{T}\mathbf{x}_{0}$.
Algorithm 4 Simplified EFCG – Initial residual (i.e., $\mathbf{r}_{0}$) is in
span of $W$
1:procedure EFCG($K_{G},W,\mathbf{\hat{r}}_{0},\sigma,\epsilon$)
2: New Iterates:
3:
$\mathbf{\hat{v}}_{k}=\left[K_{G}W^{T}W+\sigma^{2}I\right]\mathbf{\hat{r}}_{k}$,$\mathbf{\hat{u}}_{k}=W^{T}W\mathbf{\hat{r}}_{k}$
4:
$\mathbf{\hat{z}}_{k}=\left[K_{G}W^{T}W+\sigma^{2}I\right]\mathbf{\hat{p}}_{k}$,
$\mathbf{\hat{s}}_{k}=W^{T}W\mathbf{\hat{p}}_{k}$
5: $\mathbf{\hat{x}}^{d}_{0}=\mathbf{0}$
6:
$\mathbf{\hat{v}}_{0},\mathbf{\hat{u}}_{0}=\mathcal{B}(\mathbf{\hat{r}}_{0})$
7:
$\mathbf{\hat{z}}_{0}=\mathbf{\hat{v}}_{0},\mathbf{\hat{s}}_{0}=\mathbf{\hat{u}}_{0}$
8: for $k=0$ to $maxiter$ do
9:
$\alpha_{k}={\frac{\mathbf{\hat{u}}_{k}^{T}\mathbf{\hat{r}}_{k}}{\mathbf{\hat{s}}_{k}^{T}\mathbf{\hat{z}}_{k}}}$
10:
$\mathbf{\hat{x}}^{d}_{k+1}=\mathbf{\hat{x}}^{d}_{k}+\alpha_{k}\cdot\mathbf{\hat{s}}_{k}$
11:
$\mathbf{\hat{r}}_{k+1}=\mathbf{\hat{r}}_{k}-\alpha_{k}\cdot\mathbf{\hat{z}}_{k}$
12:
$\mathbf{\hat{v}}_{k+1},\mathbf{\hat{u}}_{k+1}=\mathcal{B}(\mathbf{\hat{r}}_{k+1})$
13: if $\mathbf{\hat{u}}_{k+1}^{T}\mathbf{\hat{r}}_{k+1}\leq\epsilon$ then exit loop
14:
$\beta_{k}=\frac{\mathbf{\hat{u}}_{k+1}^{T}\mathbf{\hat{r}}_{k+1}}{\mathbf{\hat{u}}_{k}^{T}\mathbf{\hat{r}}_{k}}$
15:
$\mathbf{\hat{s}}_{k+1}=\mathbf{\hat{u}}_{k+1}+\beta_{k}\cdot\mathbf{\hat{s}}_{k}$
16:
$\mathbf{\hat{z}}_{k+1}=\mathbf{\hat{v}}_{k+1}+\beta_{k}\cdot\mathbf{\hat{z}}_{k}$
17: return $\mathbf{\hat{x}}^{d}_{k+1}$
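For concreteness, here is a runnable NumPy sketch of Algorithm 4 (variable names are ours; the termination test uses a tolerance on $\lVert\mathbf{r}_{k}\rVert^{2}$ relative to $\lVert\mathbf{r}_{0}\rVert^{2}$ rather than the absolute $\epsilon$ of line 13, and $K_{G}$ is shown dense although in practice it admits fast structured MVMs):

```python
import numpy as np

def B_op(K_G, WtW, sigma, x):
    """[v, u] = B(x): v = [K_G W^T W + sigma^2 I] x and u = W^T W x."""
    u = WtW @ x
    return K_G @ u + sigma**2 * x, u

def simplified_efcg(K_G, WtW, r_hat0, sigma, eps=1e-20, maxiter=200):
    """Sketch of Algorithm 4: returns x_hat_d such that
    W^T x_{k+1} = x_hat_d + W^T x_0."""
    x_hat_d = np.zeros_like(r_hat0)
    r_hat = r_hat0.copy()
    v_hat, u_hat = B_op(K_G, WtW, sigma, r_hat)      # line 6
    z_hat, s_hat = v_hat.copy(), u_hat.copy()        # line 7 (p_hat_0 = r_hat_0)
    rho = u_hat @ r_hat                              # = ||r_0||^2
    rho0 = rho
    for _ in range(maxiter):
        alpha = rho / (s_hat @ z_hat)                # line 9
        x_hat_d = x_hat_d + alpha * s_hat            # line 10
        r_hat = r_hat - alpha * z_hat                # line 11
        v_hat, u_hat = B_op(K_G, WtW, sigma, r_hat)  # line 12
        rho_new = u_hat @ r_hat                      # = ||r_{k+1}||^2
        if rho_new <= eps * rho0:                    # line 13 (relative tolerance)
            break
        beta = rho_new / rho                         # line 14
        rho = rho_new
        s_hat = u_hat + beta * s_hat                 # line 15
        z_hat = v_hat + beta * z_hat                 # line 16
    return x_hat_d
```

With the mean-inference initialization $\mathbf{x}_{0}=\frac{1}{\sigma^{2}}\mathbf{y}$, the recovered $\mathbf{\hat{x}}^{d}_{k+1}+\frac{1}{\sigma^{2}}W^{T}\mathbf{y}$ matches $W^{T}\left(WK_{G}W^{T}+\sigma^{2}I\right)^{-1}\mathbf{y}$ from a direct solve on small random instances.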
The requirement that the initial residual $\mathbf{r}_{0}$ be in the span of $W$
can be met in two ways. First, for SKI posterior mean inference, we set
$\mathbf{{x}}_{0}=\frac{1}{\sigma^{2}}\cdot\mathbf{y}$, implying
$\mathbf{r}_{0}=\mathbf{y}-\left[WK_{G}W^{T}+\sigma^{2}I\right]\frac{1}{\sigma^{2}}\cdot\mathbf{y}=-\frac{1}{\sigma^{2}}\cdot
WK_{G}W^{T}\mathbf{y}$, which lies in the span of $W$. Consequently, we
initiate Algorithm 4 with $\mathbf{\hat{r}}_{0}=-\frac{1}{\sigma^{2}}\cdot
K_{G}W^{T}\mathbf{y}\in\mathbb{R}^{m\times 1}$ and, following the invariant of the
simplified EFCG algorithm, compute the transformed solution as
$W^{T}\mathbf{x}_{k+1}=\mathbf{\hat{x}}^{d}_{k+1}+\frac{1}{\sigma^{2}}\cdot
W^{T}\mathbf{y}$. Notice that simplified EFCG (unlike EFCG) does not require
pre-computation of the terms $K_{G}W^{T}\mathbf{r}_{0}$ and $W^{T}\mathbf{r}_{0}$. In
fact, simplified EFCG requires only one multiplication with $W^{T}$, i.e., to
obtain $W^{T}\mathbf{y}$, which is sufficient both for the initialization of
$\mathbf{\hat{r}}_{0}$ and for the transformed final solution $W^{T}\mathbf{x}_{k+1}$
(which is equal to
$W^{T}\left(\tilde{K}_{X}+\sigma^{2}I\right)^{-1}\mathbf{y}$). Hence,
simplified EFCG can compute the SKI posterior mean using only the sufficient
statistics ($W^{T}W$ and $W^{T}\mathbf{y}$).
A second way to meet the requirement of the initial residual $\mathbf{r}_{0}$
being in the span of $W$ is to set $\mathbf{{x}}_{0}=\mathbf{0}$ when
$\mathbf{\hat{y}}$ is provided such that $\mathbf{y}=W\mathbf{\hat{y}}$. This
is the case, e.g., in posterior covariance approximation. The final
transformed solution $W^{T}\mathbf{x}_{k+1}$ in this setting reduces to
$\mathbf{\hat{x}}^{d}_{k+1}$. Notice that in this setting too, we need only
$W^{T}W$ and do not require the matrix $W$.
Consequently, simplified EFCG can compute both the SKI posterior mean and
covariance function using only the sufficient statistics ($W^{T}W$ and
$W^{T}\mathbf{y}$) and without even realizing the $W$ matrix in memory.
### B.3 Factorized Lanczos Algorithms
The Lanczos algorithm can be utilized to factorize a symmetric matrix
$A\in\mathbb{R}^{n\times n}$ as $QTQ^{T}$ such that $T\in\mathbb{R}^{n\times
n}$ is a symmetric tridiagonal matrix and $Q\in\mathbb{R}^{n\times n}$ is an
orthonormal matrix. Previously, it has been run for $k$ iterations (i.e.,
Algorithm 5) to compute fast low-rank approximations of the SKI covariance
matrix and of the log-likelihood of the data. For further details, we refer readers
to [22, 5].
Algorithm 5 Lanczos Algorithm (LA)
1:procedure LA($K_{G},W,\mathbf{b},\sigma,k$)
2: $\mathbf{q}_{0}=\mathbf{0},\mathbf{q}_{1}=\mathbf{b},\beta_{1}=0$
3: $Q_{:,1}=\mathbf{q}_{1}$
4: for $i=1$ to $k$ do
5: $\mathbf{q}_{i+1}=\tilde{K}\mathbf{q}_{i}-\beta_{i}\cdot\mathbf{q}_{i-1}$
6: $\alpha_{i}=\mathbf{q}_{i}^{T}\mathbf{q}_{i+1}$
7: $T_{i,i}=\alpha_{i}$
8: if $i==k$ then exit loop
9: $\mathbf{q}_{i+1}=\mathbf{q}_{i+1}-\alpha_{i}\cdot\mathbf{q}_{i}$
10:
$\mathbf{q}_{i+1}=\mathbf{q}_{i+1}-\left[Q_{:,1},...,Q_{:,i}\right]\left(\left[Q_{:,1},...,Q_{:,i}\right]^{T}\mathbf{q}_{i+1}\right)$
11: $\beta_{i+1}=\left\lVert\mathbf{q}_{i+1}\right\rVert$
12: $T_{i,i+1}=T_{i+1,i}=\beta_{i+1}$
13: $\mathbf{q}_{i+1}=\frac{1}{\beta_{i+1}}\cdot\mathbf{q}_{i+1}$
14: $Q_{:,i+1}=\mathbf{q}_{i+1}$
15: return $Q$, $T$
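A runnable NumPy sketch of Algorithm 5 follows (variable names are ours; we normalize $\mathbf{b}$ defensively, since the pseudocode assumes a unit-norm start vector, and line 10's full reorthogonalization is kept):

```python
import numpy as np

def lanczos(A_mv, b, k):
    """Sketch of Algorithm 5: k-step Lanczos with full reorthogonalization.
    A_mv applies the symmetric matrix A; returns Q (n x k) and tridiagonal T."""
    n = b.shape[0]
    Q = np.zeros((n, k))
    T = np.zeros((k, k))
    q_prev = np.zeros(n)
    q = b / np.linalg.norm(b)                # assumes ||b|| = 1 in the paper
    Q[:, 0] = q
    beta = 0.0
    for i in range(k):
        w = A_mv(q) - beta * q_prev          # line 5
        alpha = q @ w                        # line 6
        T[i, i] = alpha                      # line 7
        if i == k - 1:                       # line 8
            break
        w = w - alpha * q                    # line 9
        w = w - Q[:, :i + 1] @ (Q[:, :i + 1].T @ w)  # line 10: reorthogonalize
        beta = np.linalg.norm(w)             # line 11
        T[i, i + 1] = T[i + 1, i] = beta     # line 12 (T symmetric tridiagonal)
        q_prev, q = q, w / beta              # lines 13-14
        Q[:, i + 1] = q
    return Q, T
```

Running the full $k=n$ steps on a small symmetric matrix recovers $Q^{T}AQ\approx T$ with orthonormal $Q$.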
Algorithm 6 Factorized Lanczos Algorithm (FLA)
1:procedure FLA($K_{G},W,\mathbf{b},\sigma,k$)
2:
$\mathbf{\hat{q}}_{0}=\mathbf{0},c_{0}^{q}=0,\mathbf{\hat{q}}_{1}=\mathbf{0},c_{1}^{q}=1,\beta_{1}=0$
3:
$\hat{Q}_{:,1}=\mathbf{\hat{q}}_{1},\mathbf{\Lambda}=\mathbf{0}\in\mathbb{R}^{k\times
1},\mathbf{d}=\mathbf{0}\in\mathbb{R}^{k\times 1}$
4: for $i=1$ to $k$ do
5:
$(\mathbf{\hat{q}}_{i+1},c_{i+1}^{q})=(\mathbf{\hat{q}}_{i},c_{i}^{q})-\beta_{i}\cdot\mathcal{A}(\mathbf{\hat{q}}_{i-1},c_{i-1}^{q})$
6:
$\alpha_{i}=\langle(\mathbf{\hat{q}}_{i},c_{i}^{q}),(\mathbf{\hat{q}}_{i+1},c_{i+1}^{q})\rangle$
7: $T_{i,i}=\alpha_{i};$ $\mathbf{d}_{i}=c_{i}^{q}$
8: if $i==k$ then exit loop
9:
$(\mathbf{\hat{q}}_{i+1},c_{i+1}^{q})=(\mathbf{\hat{q}}_{i+1},c_{i+1}^{q})-\alpha_{i}\cdot(\mathbf{\hat{q}}_{i},c_{i}^{q})$
10:
$\mathbf{\Lambda}_{j}=\langle(\hat{Q}_{:,j},c_{j}^{q}),(\mathbf{\hat{q}}_{i+1},c_{i+1}^{q})\rangle;$
$\forall j\in\\{1,...,i\\}$
11:
$\mathbf{\hat{q}}_{i+1}=\mathbf{\hat{q}}_{i+1}-\hat{Q}_{:,1:i}\mathbf{\Lambda}_{1:i};$
$c_{i+1}^{q}=c_{i+1}^{q}-\mathbf{d}_{1:i}^{T}\mathbf{\Lambda}_{1:i}$
12:
$\beta_{i+1}=\sqrt{\langle(\mathbf{\hat{q}}_{i+1},c_{i+1}^{q}),(\mathbf{\hat{q}}_{i+1},c_{i+1}^{q})\rangle}$
13: $T_{i,i+1}=T_{i+1,i}=\beta_{i+1}$
14: $\mathbf{\hat{q}}_{i+1}=\frac{1}{\beta_{i+1}}\cdot\mathbf{\hat{q}}_{i+1};$
$c_{i+1}^{q}=\frac{c_{i+1}^{q}}{\beta_{i+1}}$
15: $\hat{Q}_{:,i+1}=\mathbf{\hat{q}}_{i+1}$
16: return $\hat{Q}$, $T$, $\mathbf{d}$
Algorithm 7 Efficiently Factorized Lanczos algorithm (EFLA)
1:procedure EFLA($K_{G},W,\mathbf{b},\sigma,k$)
2: New Iterates:
3: $\hat{P}_{:,i}=W^{T}W\hat{Q}_{:,i}$ and
$\mathbf{\hat{s}}_{i}=\left[K_{G}W^{T}W+\sigma^{2}I\right]\mathbf{\hat{q}}_{i}$
4:
$\mathbf{\hat{q}}_{0}=\mathbf{0},c_{0}^{q}=0,\mathbf{\hat{q}}_{1}=\mathbf{0},c_{1}^{q}=1,\beta_{1}=0$
5: $\hat{Q}_{:,1:k}=[\mathbf{0},...,\mathbf{0}]\in\mathbb{R}^{m\times
k},\hat{P}_{:,1:k}=[\mathbf{0},...,\mathbf{0}]\in\mathbb{R}^{m\times k}$
6:
$\hat{Q}_{:,1}=\mathbf{\hat{q}}_{1},\mathbf{\Lambda}=\mathbf{0}\in\mathbb{R}^{k\times
1},\mathbf{\hat{s}}_{i}=\mathbf{0}\in\mathbb{R}^{m\times
1},\mathbf{d}=\mathbf{0}\in\mathbb{R}^{k\times 1}$
7: for $i=1$ to $k$ do
8: $\mathbf{\hat{q}}_{i+1}=\mathbf{\hat{s}}_{i}+c_{i}^{q}\cdot
K_{G}W^{T}\mathbf{b}-\beta_{i}\cdot\mathbf{\hat{q}}_{i-1}$
9: $c_{i+1}^{q}=\sigma^{2}c_{i}^{q}-\beta_{i}c_{i-1}^{q}$
10:
$\alpha_{i}=\hat{P}_{:,i}^{T}\mathbf{\hat{q}}_{i+1}+c_{i}^{q}c_{i+1}^{q}+\left(c_{i}^{q}\cdot\mathbf{\hat{q}}_{i+1}+c_{i+1}^{q}\cdot\mathbf{\hat{q}}_{i}\right)^{T}W^{T}\mathbf{b}$
11: $T_{i,i}=\alpha_{i};$ $\mathbf{d}_{i}=c_{i}^{q}$
12: if $i==k$ then exit loop
13:
$(\mathbf{\hat{q}}_{i+1},c_{i+1}^{q})=(\mathbf{\hat{q}}_{i+1},c_{i+1}^{q})-\alpha_{i}\cdot(\mathbf{\hat{q}}_{i},c_{i}^{q})$
14:
$\mathbf{\Lambda}_{1:i}=\hat{P}_{:,1:i}^{T}\mathbf{\hat{q}}_{i+1}+\left(c_{i+1}^{q}+\mathbf{\hat{q}}_{i+1}^{T}W^{T}\mathbf{b}\right)\cdot\mathbf{c}_{1:i}+c_{i+1}^{q}\cdot\hat{Q}_{:,1:i}^{T}W^{T}\mathbf{b}$
15:
$\mathbf{\hat{q}}_{i+1}=\mathbf{\hat{q}}_{i+1}-\hat{Q}_{:,1:i}\mathbf{\Lambda}_{1:i};$
$c_{i+1}^{q}=c_{i+1}^{q}-\mathbf{d}_{1:i}^{T}\mathbf{\Lambda}_{1:i}$
16:
$\mathbf{\hat{s}}_{i+1},\hat{P}_{:,i+1}=\mathcal{B}(\mathbf{\hat{q}}_{i+1})$
17:
$\beta_{i+1}=\sqrt{\hat{P}_{:,i+1}^{T}\mathbf{\hat{q}}_{i+1}+c_{i+1}^{q}c_{i+1}^{q}+2c_{i+1}^{q}\mathbf{\hat{q}}_{i+1}^{T}W^{T}\mathbf{b}}$
18: $T_{i,i+1}=T_{i+1,i}=\beta_{i+1}$
19: $\mathbf{\hat{q}}_{i+1}=\frac{1}{\beta_{i+1}}\cdot\mathbf{\hat{q}}_{i+1};$
$c_{i+1}^{q}=\frac{c_{i+1}^{q}}{\beta_{i+1}}$
20: $\mathbf{\hat{s}}_{i+1}=\frac{1}{\beta_{i+1}}\cdot\mathbf{\hat{s}}_{i+1};$
$\hat{P}_{:,i+1}=\frac{1}{\beta_{i+1}}\cdot\hat{P}_{:,i+1}$
21: $\hat{Q}_{:,i+1}=\mathbf{\hat{q}}_{i+1}$
22: return $\hat{Q}$, $T$, $\mathbf{d}$
Similar to Factorized CG, we derive the factorized Lanczos algorithm (FLA) using
factorized inner products and matrix-vector multiplications, as described in
Algorithm 6. We maintain all iterates in $\mathbb{R}^{m\times 1}$ as in
Factorized CG; in particular, the ${Q}_{:,i}\in\mathbb{R}^{n\times 1}$ vectors are
maintained in compressed form such that
$Q_{:,i}=W\hat{Q}_{:,i}+d_{i}\cdot\mathbf{b}$. The $T\in\mathbb{R}^{k\times
k}$ matrix of the Lanczos algorithm is retained as-is in FLA. Furthermore, in a
manner similar to EFCG, we derive the efficiently factorized Lanczos algorithm
(EFLA), as shown in Algorithm 7. Specifically, EFLA relies on two new iterates,
$\hat{P}_{:,i}=W^{T}W\hat{Q}_{:,i}$ and
$\mathbf{\hat{s}}_{i}=\left[K_{G}W^{T}W+\sigma^{2}I\right]\mathbf{\hat{q}}_{i}$,
and maintains the iterates of the Lanczos algorithm as
$Q_{:,i}=W\hat{Q}_{:,i}+d_{i}\cdot\mathbf{b}$, similar to FLA. Notice that
EFLA requires only one $\mathcal{B}(\mathbf{x})$ operation per loop iteration, thereby
avoiding any extra MVMs with $W$ and $W^{T}W$, except for the one-time
pre-computation of $K_{G}W^{T}\mathbf{b}$ and $W^{T}\mathbf{b}$.
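The identities underlying the factorized representation can be checked numerically. In the compressed form $\mathbf{v}=W\mathbf{\hat{v}}+c\cdot\mathbf{b}$ (with $\lVert\mathbf{b}\rVert=1$), applying $\tilde{K}+\sigma^{2}I=WK_{G}W^{T}+\sigma^{2}I$ and computing inner products reduce to $m$-dimensional operations; a sketch with names of our choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, sigma = 10, 3, 0.5
W = rng.normal(size=(n, m))
M = rng.normal(size=(m, m))
K_G = M @ M.T + np.eye(m)                 # SPD grid kernel stand-in
b = rng.normal(size=n)
b = b / np.linalg.norm(b)                 # Lanczos start vector, ||b|| = 1
WtW = W.T @ W                             # one-time pre-computations
Wtb = W.T @ b
KWtb = K_G @ Wtb                          # K_G W^T b

def expand(v_hat, c):
    """A pair (v_hat, c) represents the n-vector v = W v_hat + c * b."""
    return W @ v_hat + c * b

def apply_A(v_hat, c):
    """Apply A = W K_G W^T + sigma^2 I entirely in compressed form:
    A (W v_hat + c b) = W (K_G W^T W v_hat + sigma^2 v_hat + c K_G W^T b)
                        + (sigma^2 c) b."""
    return K_G @ (WtW @ v_hat) + sigma**2 * v_hat + c * KWtb, sigma**2 * c

def inner(v_hat, cv, u_hat, cu):
    """Factorized inner product <v, u> using only m-dimensional quantities."""
    return v_hat @ (WtW @ u_hat) + cv * cu + cv * (Wtb @ u_hat) + cu * (Wtb @ v_hat)
```

Both operations agree with the explicit $n$-dimensional computation, which is what lets FLA/EFLA avoid forming $W$ beyond the pre-computed $W^{T}W$ and $W^{T}\mathbf{b}$.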
## Appendix C Experiments – Omitted Details and Additional Results
### C.1 Hardware and hyper-parameters details
We run all of our experiments on an Intel Xeon Gold 6240 CPU @ 2.60GHz with 10 GB
of RAM. In all experimental settings, our kernels are squared exponential
kernels wrapped within a scale kernel [37, 5]. Therefore, our hyper-parameters
are $\sigma$, the length-scales and the output-scale, as presented in Table 2.
Length-scales are specific to each dimension for multi-dimensional datasets.
For the sine and sound datasets, we have utilized GPyTorch to optimize
hyper-parameters. For the precipitation and radar datasets, we have used the
previously optimized parameters from [5] and [1], respectively.
Dataset | $\sigma$ | Length-scale | Output-scale
---|---|---|---
Sine | 0.074 | 0.312 | 1.439
Sound | 0.009 | 10.895 | 0.002
Radar | 50.000 | [0.250, 0.250, 200] | 3.500
Precipitation | 3.990 | [3.094, 2.030, 0.189] | 2.786
Table 2: Hyper-parameters used for all datasets. Length-scale is of size $d$
of each dataset.
### C.2 Results: Synthetic sine dataset
Figure 6 depicts the number of iterations and the pre-processing time taken by GSGP
and SKI on the synthetic sine dataset with respect to the number of samples, for the
setting on which Figure 3 reports results. The number of iterations for GSGP and SKI
are always close and possibly differ only due to finite precision.
Figure 6: Number of iterations taken by SKI and GSGP on synthetic dataset.
Results are averaged over 8 trials.
### C.3 Results: Precipitation dataset
Figure 7: Inference time vs. $m$ for SKI inference tasks on precipitation data
set. From left to right: pre-processing, mean inference, log-determinant for
$tol=0.1$ and for $tol=0.01$ and using 30 random vectors.
Figure 7 shows running time vs. $m$ for GP inference tasks on the precipitation
data set of $n=528\mathrm{K}$ [5]. We consider
$m\in\\{12\mathrm{K},96\mathrm{K},128\mathrm{K},528\mathrm{K},640\mathrm{K}\\}$.
This is a situation where, even for $m>n$, GSGP is faster than SKI.
Pre-processing is up to 6x slower for GSGP due to the need to compute $W^{T}W$. To
perform only one mean inference, the overall time of GSGP and SKI _including_
pre-processing is similar, as some of the per-iteration gains are offset by
pre-processing. However, for the log-determinant computation task (as part of
the log-likelihood computation), _several_ more iterations of the linear solvers
are required, as demonstrated by the log-det computation in Figure 7. It is
worth noting that the pre-processing of GSGP is required only once: it can be
performed initially for the log-det computation and later be reused for the
posterior mean and covariance inference. Therefore, overall, GSGP is more
effective than SKI across all inference tasks.
# PROTODA: EFFICIENT TRANSFER LEARNING FOR FEW-SHOT INTENT CLASSIFICATION
###### Abstract
Practical sequence classification tasks in natural language processing often
suffer from low training data availability for target classes. Recent works
towards mitigating this problem have focused on transfer learning using
embeddings pre-trained on often unrelated tasks, for instance, language
modeling. We adopt an alternative approach by transfer learning on an ensemble
of related tasks using prototypical networks under the meta-learning paradigm.
Using intent classification as a case study, we demonstrate that increasing
variability in training tasks can significantly improve classification
performance. Further, we apply data augmentation in conjunction with
meta-learning to reduce sampling bias. We make use of a conditional generator for
data augmentation that is trained directly using the meta-learning objective
and simultaneously with prototypical networks, hence ensuring that data
augmentation is customized to the task. We explore augmentation in the
sentence embedding space as well as the prototypical embedding space. Combining
meta-learning with augmentation provides up to 6.49% and 8.53% relative
F1-score improvements over the best performing systems in the 5-shot and
10-shot settings, respectively.
Index Terms— meta learning, prototypical networks, data hallucination
## 1 Introduction
Intent classification (IC) is an important natural language processing task for
voice-controlled intelligent agents such as Amazon Alexa, Google Home, and
Apple Siri. One of the first steps in such applications after converting
speech to text is intent classification, where user queries are tagged with a
sequence-level label identifying the underlying intent. To increase the
capabilities of such agents, new intents are frequently added to the existing
collection. Often, the development of a new intent starts with a few examples
since labeled training data is scarce and expensive to obtain. Intent
classification, in this case, resembles a few-shot learning setting where the
goal is to generalize from a handful of training samples.
A natural resource available during new intent development is the collection
of intents in-use by the voice controlled agent. These intents are often drawn
from multiple domains such as music, reservations, dining, etc. and represent
significant content variability between them. While transfer learning from
intents in-use tries to borrow high level feature representations during new
intent development, it is prone to overfitting in the few-shot setting case.
On the contrary, learning from a diverse set of intents falls under the
purview of meta-learning [1, 2], which learns across a collection of tasks as
opposed to traditional supervised learning which learns across samples.
Further, meta-learning has shown success in few-shot learning in computer
vision [2, 3] and NLP [4, 5, 6], which can be useful for learning from a few
annotated samples.
Another issue associated with few-shot learning is its susceptibility
to sampling bias, i.e., estimated class distributions may not resemble the
true population distribution due to limited sample availability. A popular
approach towards mitigating this bias is _learning_ to artificially synthesize
new samples, known as data augmentation (DA). DA has been an active research
topic in NLP over the years. A number of DA techniques have been explored,
ranging from random perturbation [7] to deep-learning based approaches such as
variability mode transfer between classes [8, 9]. However, most of them
perform augmentation independent of the task, i.e., the distribution of
generated samples is independent of the task loss.
In this work, we optimize the data augmentation model using a task-specific
objective, and combine it with meta-learning to improve intent classification
performance in the few-shot setting. In particular, we propose _ProtoDA_, which
jointly trains a conditional generator network [10] with prototypical networks
[11, 12] (ProtoNets) to generate task-specific samples. Combining data
augmentation and ProtoNets is particularly well suited, since in the few-shot
setting prototypes are computed using a very limited number of examples, which
incurs a sampling bias. Instead, computing prototypes using both real and
synthetic embedded examples allows us to better estimate the class-specific
population mean of the training set distribution. We show the
effectiveness of our method on the intent classification task using open source
datasets and a production-scale corpus. Our primary contributions are: 1)
combining meta-learning and data augmentation as an alternative to
conventional transfer learning, specifically for low-resource IC, and 2)
introducing data augmentation in the ProtoNet embedding space for improving
task performance in NLP.
## 2 Related work
### 2.1 Meta-learning using Prototypical Networks
Meta-learning, or learning-to-learn, is a learning paradigm that learns at
two-levels: within a task; and across multiple tasks while leveraging common
knowledge among them. The accumulated knowledge from an ensemble of tasks is
used to improve few-shot accuracy on the target task, often on unseen classes.
Various meta-learning approaches have been proposed mainly in the field of
computer vision [2, 3].
ProtoNets [11] were first proposed for few-shot image classification with a
relatively simple inductive bias when compared to other metric-learning
methods such as matching networks [13] and relation networks [14]. ProtoNets
are trained in an episodic manner using multiple tasks, with both tasks and
train-test splits sampled within each episode (an episode refers to a single
backpropagation step during the training process). Few approaches exist which
apply ProtoNets in NLP. In [4], ProtoNets were trained using a weighted sum of
metrics to handle diverse tasks. In [5], the authors use an attention
mechanism to weigh both features and samples during distance computation in
the embedding space. In this work, we use the original formulation of
ProtoNets and propose a joint data augmentation for the few-shot learning
task.
### 2.2 Data Augmentation with Meta-Learning
DA techniques in NLP have been explored on the lexical space including synonym
replacement [15], back-translation [16] and sentence-level augmentation by
replacing words with outputs from a language model [17]. Feature-space
augmentation techniques on the other hand, generate new samples at the
embedding space which are added to real samples during model training.
Applications for DA include natural language generation [18], visual question
answering [19], relation classification [20] and machine translation [21].
Most of the previous works focus on first training a data augmentation model,
followed by ad hoc data generation to augment the training set for the final
task. Recently, [22] proposed an end-to-end data _hallucination_ method that
is trained using the classification (task) loss. The hallucinator, a conditional
generator, takes as input an original sample from the low-resource task and a
noise sample, and generates a perturbed version of the sample, which is
added to the original training set. Inspired by this work, we explore
joint training of generator and classifier models in a meta-learning setup.
The classifier (protonet in our case) is trained on the augmented set,
including real as well as hallucinated samples. On the other hand, gradients
from the classifier are propagated to the hallucinator for weight update. The
feature extractor weights are typically pretrained and frozen while training
the hallucinator. This setup encourages the hallucinator to generate samples
that improve task performance and does not necessarily prioritize generating
realistic-looking data samples.
## 3 Modeling Setup
Our modeling setup involves learning latent representations of text (e.g.
sentence embeddings) using an encoder network, followed by classification
using ProtoNets. We consider adding the hallucinator at two separate points
during training. We describe the setup of these components and the modeling
architecture below.
### 3.1 Encoder Network
Following [15, 23, 24], we use a neural network encoder in all our experiments
to extract sentence-level embeddings from text. The encoder takes in a
combination of character-level and word-level representations at the input.
Character representations are learnt using a 2-D convolution neural network.
One-hot character encodings (of dimension 32) are passed through 2
convolutional layers (with a kernel size of 5) with max pooling followed by a
temporal pooling layer to form a word-level representation. A dropout with a
probability of 0.2 is used in both layers for regularization. The CNN output
is concatenated with pre-trained word-level embeddings (GloVe [25];
100-dimensional). The resulting word-level representation is then passed
through a Bi-LSTM with a 128-dimensional hidden state to obtain contextualized
word embeddings. A statistics-pooling layer computes the minimum, maximum and
mean values of these embeddings to obtain sentence-level embeddings.
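As a small illustration of the final pooling step (a sketch; the function name is ours), statistics pooling maps the sequence of 256-dimensional contextualized word embeddings from the Bi-LSTM to the fixed 768-dimensional sentence embedding:

```python
import numpy as np

def statistics_pooling(H):
    """H: (T, 256) contextualized word embeddings (Bi-LSTM, 2 x 128 units).
    Concatenating per-dimension min, max and mean yields 3 x 256 = 768 dims,
    independent of the sentence length T."""
    return np.concatenate([H.min(axis=0), H.max(axis=0), H.mean(axis=0)])
```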
Fig. 1: Data augmentation in ProtoNets. The encoder is pre-trained using
the prototypical loss and its weights are frozen during hallucination. The
hallucinator (G, a conditional generator) is trained with the ProtoNet's (P)
loss function.
### 3.2 Prototypical Networks
ProtoNets learn a non-linear transformation under which each class is reduced to a
single point, specifically the centroid (prototype) of the examples from that
class. During inference, a test sample is assigned to the class of the nearest
centroid. In the following, we illustrate a single episode of ProtoNet training, then
extend it to multiple training tasks.
#### 3.2.1 Episodic training
Given a task $t$, consider a set of labeled training embeddings
$D_{t}=(\mathbf{X_{train}},\mathbf{Y_{train}})=\\{(\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\ldots,(\mathbf{x}_{N_{samples}},y_{N_{samples}})\\}$,
where $\mathbf{x}_{i}\in\mathbb{R}^{M}$ and $y_{i}\in\\{1,2,..,C\\}$. $D_{t}$
is sampled to form two sets: supports ($S_{t}$) are used for prototype
computation, while queries ($Q_{t}$) are used for estimating class posteriors
and computing the loss. $S_{t}$ and $Q_{t}$ are not necessarily mutually
exclusive. ProtoNets learn a mapping
$f_{\theta}:\mathbb{R}^{M}\rightarrow\mathbb{R}^{P}$, where the prototype of
each class is computed as follows:
$\mathbf{v}_{c}=\frac{1}{|S_{t,c}|}\sum_{(\mathbf{x}_{i},y_{i})\in
S_{t,c}}f_{\theta}(\mathbf{x}_{i})$ (1)
$S_{t,c}$ is the set of all examples in $S_{t}$ belonging to class $c$. For
every test sample $\mathbf{x}\in Q_{t}$, the posterior probability of class
$c$ is as follows:
$p(y=c|\mathbf{x})=\frac{\exp\left(-d\left(f_{\theta}(\mathbf{x}),\mathbf{v}_{c}\right)\right)}{\sum_{c^{\prime}\in
C}\exp\left(-d\left(f_{\theta}(\mathbf{x}),\mathbf{v}_{c^{\prime}}\right)\right)}$
(2)
$d$ represents the distance function. Euclidean distance was chosen based on
empirical results in the original ProtoNet implementation [11]. Learning
proceeds by minimizing the negative log probability for the true class using
gradient descent. Loss for the episode is computed as follows:
$L=-\frac{1}{|Q_{t}|}\sum_{(\mathbf{x_{i}},y_{i})\in
Q_{t}}\log(p_{\theta}(y_{i}=c\mid\mathbf{x_{i}}))$ (3)
In general, a different task is chosen for each episode. ProtoNets, and
meta-learning in general, benefit from a large number of training tasks. In this
work, we treat each training corpus as a task, and select all classes from the
task within an episode. Pseudocode for ProtoNet training is provided in
Algorithm 1.
Algorithm 1 Extending episodic learning to multiple tasks in the training
corpus. SAMPLE$(S,K)$ denotes selecting $K$ samples uniformly at random from
set $S$ with replacement.
Input: $T$: set of tasks, $N_{tasks}$: number of episodes
1:for i $\in\\{1...N_{tasks}\\}$ do
2: $t\leftarrow$SAMPLE$(T,1)$. $\triangleright$ Sample a task
3: for c in $\\{1...C\\}$ do
4: $D_{t,c}\leftarrow$ Embeddings $\in$ class $c$ in task $t$
5: $S_{t,c}\leftarrow$SAMPLE$(D_{t,c},k)$ $\triangleright$ k supports
6: $Q_{t,c}\leftarrow$SAMPLE$(D_{t,c},q)$ $\triangleright$ q queries
7: Perform Episodic Training: Equations (1-3)
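The sampling loop of Algorithm 1 can be sketched as follows (a minimal sketch; the data structures and names are our assumptions):

```python
import random

def sample_episode(tasks, k, q, rng=None):
    """One episode of Algorithm 1. tasks maps a task name to
    {class label: list of examples}; SAMPLE is uniform with replacement,
    as in the pseudocode."""
    rng = rng or random.Random(0)
    t = rng.choice(sorted(tasks))                 # SAMPLE(T, 1): pick a task
    supports, queries = {}, {}
    for c, D_tc in tasks[t].items():              # D_{t,c}
        supports[c] = rng.choices(D_tc, k=k)      # k supports per class
        queries[c] = rng.choices(D_tc, k=q)       # q queries per class
    return t, supports, queries
```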
The ProtoNet model architecture in this work consists of two feed-forward
layers with 128 units in each layer. Hence, the model takes as input
768-dimensional embeddings from the sentence encoder and outputs
128-dimensional ProtoNet embeddings. Similar to the sentence encoder, dropout
with a probability of 0.2 is used for regularization in both layers.
At each episode, we sample $k$ supports (the value of $k$ is experimented with
5 and 10) and 10 queries per class. The number of classes $C$ varies according
to the task (i.e training corpus). The entire network is trained with Adam
optimizer (lr=0.001, $\beta_{1}$=0.9, $\beta_{2}$=0.99) using the PyTorch
toolkit.
### 3.3 Hallucination
During meta-training, the learning setting (N-way, k-shot: N classes, k
samples/class) is carefully controlled to resemble the testing scenario. In
[11] for instance, the authors matched the k-shot setting during both meta-
training and meta-testing. While additional examples can be sampled from the
class to reduce sampling bias during meta-training, data augmentation can
introduce additional variations that are otherwise not present in the original
data. In this work, we tie the augmentation process directly with the task
objective (i.e intent classification). At every episode, gradients computed
using the task loss are used to update not just the ProtoNet, but also the
conditional generators used for data augmentation (hallucination).
Let $S_{c,orig}$ represent supports from class $c$ during an episode of meta-
training. Let $S_{c,new}$ represent a subset of $S_{c,orig}$ chosen for
augmentation. Each sample in $S_{c,new}$ is passed as input to a generator
network ($G$) along with a noise vector to produce an augmented sample. The new
prototype for class $c$ is computed as the centroid of $S_{c,aug}=S_{c,orig}\cup
S_{c,new}$:
$v_{c,aug}=\frac{1}{|S_{c,aug}|}\left(\sum_{\mathbf{e}_{i}\in
S_{c,orig}}\mkern-18.0muf_{\theta}(\mathbf{e}_{i})+\sum_{\mathbf{e}_{j}\in
S_{c,new}}\mkern-18.0muf_{\theta}(G(\mathbf{e}_{j},\mathbf{z}))\right)$ (4)
where $\mathbf{e}=E(\mathbf{x})$, $E$ represents the sentence encoder
described in Section 3.1 and $\mathbf{z}$ is a noise vector with same
dimensionality as $\mathbf{e}$. The generator training does not have a
separate objective of its own, but updates according to the episodic loss in
Equation (3). While the method described above augments samples in the
sentence embedding space (Figure 1b), an alternative approach is to augment
samples in the ProtoNet embedding space (Figure 1c), i.e.,
$v_{c,aug}=\frac{1}{|S_{c,aug}|}\left(\sum_{\mathbf{e}_{i}\in
S_{c,orig}}\mkern-18.0muf_{\theta}(\mathbf{e}_{i})+\sum_{\mathbf{e}_{j}\in
S_{c,new}}\mkern-18.0muG(f_{\theta}(\mathbf{e}_{j}),\mathbf{z})\right)$ (5)
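Equation (4) can be sketched as follows, with $f$, $G$ and $\mathbf{z}$ as stand-in callables (hypothetical here; in the paper they are the ProtoNet mapping, the conditional generator and the noise vector):

```python
import numpy as np

def augmented_prototype(f, G, S_orig, S_new, z):
    """Eq. (4): centroid over real supports f(e_i) and hallucinated
    supports f(G(e_j, z)). f and G are caller-supplied callables."""
    mapped = [f(e) for e in S_orig] + [f(G(e, z)) for e in S_new]
    return np.mean(mapped, axis=0)
```

The ProtoNet-space variant of Equation (5) follows by replacing the term `f(G(e, z))` with `G(f(e), z)`.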
We meta-train the hallucinator network as follows: First, the sentence encoder
and ProtoNet are pre-trained for 20000 episodes. Next, the sentence encoder
weights are frozen, while the generator network (two feed-forward layers with
128 units in each layer, dropout with a probability of 0.2) and ProtoNet are
trained together for another 20000 episodes. Following [22], the hallucinator
weights are initialized with block diagonal identity matrices. At each
episode, 20% of the original samples are randomly selected for hallucination,
hence $|S_{c,aug}|=1.2\times|S_{c,orig}|$. The generator weights are frozen
during meta-testing, and used to augment the supports for prototype
computation.
## 4 Datasets
We use two source corpora: the task-oriented dialog corpus from Facebook (FB)
[26] containing crowd-sourced annotations for queries from the navigation and
event management domains, and the Air Travel Information System (ATIS) corpus
[27] consisting of spoken queries from the air travel domain. Both corpora
contain natural language queries (as opposed to written form) and are more
suitable to our target domain, i.e., voice-controlled agents. We remove intents
with fewer than 20 utterances from both corpora and utterances with multiple
root intents from FB. This results in a total of 45,489 utterances from 25
intents.
For evaluation purpose, we use the SNIPS corpus [28] which has served as a
benchmark for recent sequence classification tasks in NLP. SNIPS contains
crowd-sourced queries from seven intents with a balanced sample distribution
across classes: $\approx$ 2000 samples for training and 100 samples for
validation for each intent. We divide the seven intents in SNIPS into train
(BookRestaurant, AddToPlaylist, RateBook, SearchScreeningEvent) and test
(PlayMusic, GetWeather, SearchCreativeWork) to evaluate within different
experimental configurations (see Section 5.1)
Similar to previous meta-learning works in computer vision [2, 11] which have
used hundreds of classes during meta-training, we curate an Alexa corpus to
aid the unseen intent configuration (Section 5.1). The Alexa corpus consists
of user queries directed at the devices supported by the smart agent. Voice
queries were manually transcribed and labeled for intents. The queries span 68
Alexa third-party _skills_ 111https://developer.amazon.com/en-US/alexa/alexa-
skills-kit and $\approx$1100 intents. The number of intents per skill ranges
between $2$ to $30$. Treating each Alexa skill as a task (Section 3.2.1), the
augmented training corpus containing FB, ATIS, SNIPS and Alexa contains $71$
training tasks.
Table 1: Training intents used in each experimental setup. SNIPS-4 refers to the BookRestaurant, AddToPlaylist, RateBook, SearchScreeningEvent classes.
 | Seen Intents | Unseen Intents
---|---|---
Single Task | SNIPS (All) | SNIPS-4
Multi Task | FB, ATIS + SNIPS (All) | FB, ATIS + SNIPS-4
Table 4: Micro-F1 scores (%) using different augmentation methods (None,
Noise: standard normal, Hall: Hallucination) and embeddings (Sent: Sentence,
Proto: ProtoNet) for transfer learning on seen and unseen intents. $\pm$
indicates the 95% confidence interval. The None baseline involves no
augmentation, so its score does not depend on the embedding space.
 | Seen (5-shot) | Seen (10-shot) | Unseen (5-shot) | Unseen (10-shot)
---|---|---|---|---
Augmentation | Sent | Proto | Sent | Proto | Sent | Proto | Sent | Proto
None | 75.60 $\pm$ 4.27 | 86.40 $\pm$ 1.91 | 79.85 $\pm$ 1.43 | 89.02 $\pm$ 1.24
Noise | 75.72 $\pm$ 3.63 | 77.47 $\pm$ 3.66 | 86.85 $\pm$ 1.95 | 86.93 $\pm$ 1.93 | 80.57 $\pm$ 1.74 | 82.17 $\pm$ 1.42 | 89.87 $\pm$ 1.00 | 89.93 $\pm$ 0.79
Hall | 76.30 $\pm$ 3.10 | 76.62 $\pm$ 3.52 | 87.11 $\pm$ 1.88 | 88.08 $\pm$ 1.88 | 81.33 $\pm$ 1.87 | 83.67 $\pm$ 1.65 | 90.38 $\pm$ 0.87 | 91.18 $\pm$ 0.73
## 5 Experiments
### 5.1 Protonets for Transfer Learning
In the first set of experiments, we evaluate protonets for transfer learning
under two conditions: seen intents and unseen intents. For seen intents, we
make use of the train partitions from the test intents for model backpropagation. This may not always be possible, for instance when on-device computation for model adaptation is restricted or infeasible. Nevertheless, these experiments analyze the value of including related tasks during ProtoNet training. We then repeat the experiments with the test intents removed during training, which more accurately represents the scenario of evaluating pre-trained models on newly introduced skills (intents) for a voice-controlled assistant.
For the case of seen intents, we develop a competitive conventional transfer learning method (Conv TL) to compare against ProtoNets. We use the sentence encoder described in Section 3.1 and add two feed-forward layers (128 units per layer, dropout probability 0.2), mirroring the ProtoNet architecture. The model is trained to minimize cross-entropy loss on FB, ATIS and SNIPS (train intents). The training partitions from the test intents are then used to fine-tune the network after replacing the final softmax layer.
For both seen and unseen intents, we experiment with two different setups by varying the number of tasks available during meta-training. Under the single-task setup, we use only the SNIPS corpus during meta-training; here, transfer learning occurs only in the unseen-intents case. Under the multi-task setup we make use of 3 tasks: the FB, ATIS and SNIPS corpora. Increasing the number of tasks is expected to improve the learning capability of ProtoNets. Table 1 summarizes the proposed experimental setups.
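At inference time, the prototypical-network evaluation used throughout these setups reduces to nearest-prototype classification in the learned embedding space. A minimal numpy sketch of that step, with the encoder abstracted away as precomputed embeddings (all names are ours, not from the paper's codebase):

```python
import numpy as np

def compute_prototypes(support_emb, support_labels):
    """Average the k-shot support embeddings of each class into one prototype."""
    classes = np.unique(support_labels)
    protos = np.stack([support_emb[support_labels == c].mean(axis=0) for c in classes])
    return protos, classes

def classify(query_emb, protos, classes):
    """Assign each query to the class of its nearest (Euclidean) prototype."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

# toy example: two well-separated classes, 5-shot support, 8-dim embeddings
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0, 0.1, (5, 8)), rng.normal(3, 0.1, (5, 8))])
labels = np.array([0] * 5 + [1] * 5)
protos, classes = compute_prototypes(support, labels)
queries = np.concatenate([rng.normal(0, 0.1, (3, 8)), rng.normal(3, 0.1, (3, 8))])
pred = classify(queries, protos, classes)
print(pred)  # well-separated toy classes are recovered: [0 0 0 1 1 1]
```

During meta-training the same prototype computation sits inside each episode; only the encoder weights are learned.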
### 5.2 Data Augmentation
In the next set of experiments, we select the best-performing configurations from the seen and unseen intents cases and perform data hallucination (Hall) during training. As mentioned in Section 3.3, we train the hallucinator in one of two spaces: sentence embeddings or ProtoNet embeddings. In each space, we compare hallucination with random-perturbation data augmentation (Noise), which has been shown to be a competitive baseline [29] in few-shot data augmentation experiments. Moreover, we control the amount of random perturbation per class to match that of hallucination (20%, i.e., we introduce one synthetic embedding for every five real embeddings), thereby creating a fair comparison with hallucination. In this method, we augment the embeddings with additive and multiplicative noise generated using a normal distribution with zero mean and variance equal to 10% of the batch variance in every dimension.
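The Noise baseline above can be sketched in a few lines of numpy. We read "10% batch variance" as the per-dimension noise level; the exact scaling, and all variable names, are our assumptions:

```python
import numpy as np

def noise_augment(emb, rng, rate=0.2, scale=0.1):
    """Add additive + multiplicative noisy copies of a subset of embeddings.

    rate=0.2 gives one synthetic embedding per five real ones; the per-dimension
    noise level is `scale` times the batch variance (our reading of the 10% figure).
    """
    n, d = emb.shape
    n_new = max(1, int(rate * n))            # one synthetic per five real
    idx = rng.choice(n, size=n_new, replace=False)
    level = scale * emb.var(axis=0)          # per-dimension noise level
    additive = rng.normal(0.0, 1.0, (n_new, d)) * level
    multiplicative = 1.0 + rng.normal(0.0, 1.0, (n_new, d)) * level
    synthetic = emb[idx] * multiplicative + additive
    return np.concatenate([emb, synthetic])

rng = np.random.default_rng(1)
batch = rng.normal(0, 1, (25, 16))
augmented = noise_augment(batch, rng)
print(augmented.shape)  # (30, 16): 25 real + 5 synthetic embeddings
```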
During evaluation, all experiments, including the baseline, are repeated for 20 trials by randomly selecting $k$ ($=5,10$) labeled examples from the train partitions for prototype computation. The validation partitions from the test intents are used for evaluation. We report the averaged micro-F1 scores along with 95% confidence intervals for all experiments.
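For single-label intent classification, micro-F1 equals accuracy, and a 95% interval over the trials follows from a normal approximation. A sketch of this reporting procedure (our reading, not the authors' code):

```python
import numpy as np

def micro_f1(y_true, y_pred):
    """Micro-averaged F1; for single-label multi-class data this equals accuracy."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def mean_ci95(scores):
    """Mean and half-width of a normal-approximation 95% confidence interval."""
    scores = np.asarray(scores, dtype=float)
    half = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
    return scores.mean(), half

# e.g. one micro-F1 score per random k-shot trial (illustrative values)
trial_scores = [0.86, 0.84, 0.88, 0.85, 0.87]
m, h = mean_ci95(trial_scores)
print(f"{100 * m:.2f} +/- {100 * h:.2f}")
```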
Table 2: Micro-F1 scores (%) for TL with seen intents.
Method | 5-shot | 10-shot
---|---|---
Conv TL | 74.98 $\pm$ 3.46 | 82.02 $\pm$ 3.94
Single Task | 70.48 $\pm$ 4.20 | 82.48 $\pm$ 3.27
Multi Task | 75.60 $\pm$ 4.27 | 86.40 $\pm$ 1.91

Table 3: Micro-F1 scores (%) for TL with unseen intents.
Method | 5-shot | 10-shot
---|---|---
Single Task | 49.95 $\pm$ 4.79 | 57.45 $\pm$ 3.33
Multi Task | 70.82 $\pm$ 0.97 | 73.18 $\pm$ 0.54
Across-Domain + Alexa | 79.85 $\pm$ 1.43 | 89.02 $\pm$ 1.24
## 6 Results and Discussion
Tables 2 and 3 show the performance on seen intents and unseen intents respectively. In the former, we observe that single-task transfer does not provide significant gains over Conv TL, even failing to outperform it in the 5-shot case. ProtoNets benefit from increased task variability in the multi-task setup, where transfer learning happens across corpora. While unseen intents generally yield lower classification performance, owing to the unavailability of the test classes during training, the gains from increased task variability (in the multi-task setup) are substantial: the 5-shot and 10-shot settings show 20.87% and 15.73% absolute improvements over the single-task setup, compared with 5.12% and 3.92% for seen intents. When the number of training tasks is greatly increased using the Alexa corpus, the resulting classification performance surpasses the best-performing models on seen intents, including Conv TL. These results demonstrate the importance of variability in training tasks for meta-learning.
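The absolute improvements quoted here follow directly from the micro-F1 scores in Tables 2 and 3; as a quick arithmetic check:

```python
# (single-task, multi-task) micro-F1 scores copied from Tables 2 and 3
unseen = {"5-shot": (49.95, 70.82), "10-shot": (57.45, 73.18)}
seen = {"5-shot": (70.48, 75.60), "10-shot": (82.48, 86.40)}

for setting, (single, multi) in unseen.items():
    print(f"unseen {setting}: +{multi - single:.2f}")   # +20.87, +15.73
for setting, (single, multi) in seen.items():
    print(f"seen {setting}: +{multi - single:.2f}")     # +5.12, +3.92
```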
In Table 4, we see that both augmentation methods improve performance across the different settings. In most cases, hallucination improves over random perturbation, as it can learn to de-bias prototypes computed from a very small set of examples. Within each TL method and $k$-shot setting, ProtoNet embeddings prove to be a better choice for DA than sentence embeddings. We believe that the smaller dimensionality of the ProtoNet space and its proximity to the training objective (the ProtoNet loss) make hallucination more effective in the ProtoNet embedding space.
## 7 Conclusion
Conventional TL approaches for low-resource NLU applications still depend on a small number of labeled samples from unseen intents during model training. In this work, we propose an alternative approach that combines meta-learning with data hallucination. Given sufficient variability in the training set (represented as tasks), we show that ProtoNets outperform models trained with standard cross-entropy objectives. Data augmentation further assists generalization by reducing sampling bias during prototype computation. While augmenting samples with additive and multiplicative noise is beneficial, we obtain better improvements by optimizing the hallucinator directly with the task loss. In the future, we would like to extend this approach to downstream NLU tasks for voice-controlled agents, such as named entity recognition, which entails sequence labeling.
## References
* [1] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas, “Learning to learn by gradient descent by gradient descent,” in Advances in neural information processing systems, 2016, pp. 3981–3989.
* [2] Sachin Ravi and Hugo Larochelle, “Optimization as a model for few-shot learning,” in 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
* [3] Chelsea Finn, Pieter Abbeel, and Sergey Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017, pp. 1126–1135.
* [4] Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou, “Diverse few-shot text classification with multiple metrics,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, 2018, pp. 1206–1215.
* [5] Tianyu Gao, Xu Han, Zhiyuan Liu, and Maosong Sun, “Hybrid attention-based prototypical networks for noisy few-shot relation classification,” in The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI, Jan 2019, pp. 6407–6414.
* [6] Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen-tau Yih, and Xiaodong He, “Natural language to structured query generation via meta-learning,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana, June 2018, pp. 732–738, Association for Computational Linguistics.
* [7] Terrance DeVries and Graham W Taylor, “Dataset augmentation in feature space,” arXiv preprint arXiv:1702.05538, 2017.
* [8] Bharath Hariharan and Ross Girshick, “Low-shot visual recognition by shrinking and hallucinating features,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 3018–3027.
* [9] Eli Schwartz, Leonid Karlinsky, Joseph Shtok, Sivan Harary, Mattias Marder, Abhishek Kumar, Rogerio Feris, Raja Giryes, and Alex Bronstein, “Delta-encoder: an effective sample synthesis method for few-shot object recognition,” in Advances in Neural Information Processing Systems, 2018, pp. 2845–2855.
* [10] Mehdi Mirza and Simon Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784, 2014.
* [11] Jake Snell, Kevin Swersky, and Richard Zemel, “Prototypical networks for few-shot learning,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017, pp. 4080–4090.
* [12] Sai Kumar Dwivedi, Vikram Gupta, Rahul Mitra, Shuaib Ahmed, and Arjun Jain, “Protogan: Towards few shot learning for action recognition,” in The IEEE International Conference on Computer Vision (ICCV) Workshops, Oct 2019.
* [13] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al., “Matching networks for one shot learning,” in Advances in neural information processing systems, 2016, pp. 3630–3638.
* [14] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales, “Learning to compare: Relation network for few-shot learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1199–1208.
* [15] Xiang Zhang, Junbo Zhao, and Yann LeCun, “Character-level convolutional networks for text classification,” in Advances in neural information processing systems, 2015, pp. 649–657.
* [16] Rico Sennrich, Barry Haddow, and Alexandra Birch, “Improving neural machine translation models with monolingual data,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Aug. 2016, pp. 86–96, Association for Computational Linguistics.
* [17] Sosuke Kobayashi, “Contextual augmentation: Data augmentation by words with paradigmatic relations,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), 2018, pp. 452–457.
* [18] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing, “Toward controlled generation of text,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017, pp. 1587–1596.
* [19] Kushal Kafle, Mohammed Yousefhussien, and Christopher Kanan, “Data augmentation for visual question answering,” in Proceedings of the 10th International Conference on Natural Language Generation, 2017, pp. 198–202.
* [20] Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, and Zhi Jin, “Improved relation classification by deep recurrent neural networks with data augmentation,” arXiv preprint arXiv:1601.03651, 2016.
* [21] Marzieh Fadaee, Arianna Bisazza, and Christof Monz, “Data augmentation for low-resource neural machine translation,” arXiv preprint arXiv:1705.00440, 2017.
* [22] Yu-Xiong Wang, Ross Girshick, Martial Hebert, and Bharath Hariharan, “Low-shot learning from imaginary data,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7278–7286.
* [23] Jason PC Chiu and Eric Nichols, “Named entity recognition with bidirectional lstm-cnns,” Transactions of the Association for Computational Linguistics, vol. 4, pp. 357–370, 2016.
* [24] Dongyun Liang, Weiran Xu, and Yinge Zhao, “Combining word-level and character-level representations for relation classification of informal text,” in Proceedings of the 2nd Workshop on Representation Learning for NLP, 2017, pp. 43–47.
* [25] Jeffrey Pennington, Richard Socher, and Christopher D Manning, “Glove: Global vectors for word representation,” in Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 2014, pp. 1532–1543.
* [26] Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis, “Semantic parsing for task oriented dialog using hierarchical representations,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 2018, pp. 2787–2792.
* [27] Charles Hemphill, John Godfrey, and George Doddington, “The ATIS spoken language systems pilot corpus,” in Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, 1990.
* [28] Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al., “Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces,” arXiv preprint arXiv:1805.10190, 2018.
* [29] Varun Kumar, Hadrien Glaude, Cyprien de Lichy, and Wlliam Campbell, “A closer look at feature space data augmentation for few-shot intent classification,” in Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), Hong Kong, China, Nov. 2019, pp. 1–10, Association for Computational Linguistics.
Multi-dimensional Weyl almost periodic type functions and applications

Vladimir E. Fedorov
Chelyabinsk State University, Kashirin Brothers St. 129, Chelyabinsk, 454001 Russia

Marko Kostić
Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21125 Novi Sad, Serbia

2010 Mathematics Subject Classification: 42A75, 43A60, 47D99.
Key words and phrases: Multi-dimensional Weyl almost periodic functions, Lebesgue spaces with variable exponents, abstract Volterra integro-differential equations.

Vladimir E. Fedorov is partially supported by the Russian Foundation for Basic Research, grant 19-01-00244. Marko Kostić is partially supported by grant 451-03-68/2020/14/200156 of the Ministry of Science and Technological Development, Republic of Serbia.
In this paper, we analyze multi-dimensional Weyl almost periodic type functions in Lebesgue spaces with variable exponents.
The introduced classes seem to be new and not considered elsewhere even in the constant coefficient case. We provide
certain applications to
the abstract Volterra integro-differential equations in Banach spaces.
§ INTRODUCTION AND PRELIMINARIES
The class of almost periodic functions was introduced by the Danish mathematician H. Bohr around 1924-1926 and later reconsidered by many others. Suppose that $\Lambda$ is either ${\mathbb R}$ or $[0,\infty)$ and that $f :\Lambda \rightarrow X$ is a given continuous function, where $X$ is a complex Banach space equipped with the norm $\| \cdot \|$. Given a real number $\varepsilon>0,$ we say that a positive real number $\tau>0$ is an $\varepsilon$-period for $f(\cdot)$ if and only if
\begin{align*}
\| f(t+\tau)-f(t) \| \leq \varepsilon,\quad t\in \Lambda.
\end{align*}
The set of all $\varepsilon$-periods for $f(\cdot)$ is denoted by $\vartheta(f,\varepsilon).$ We say that the function $f(\cdot)$ is almost periodic if and only if for each $\varepsilon>0$ the set $\vartheta(f,\varepsilon)$ is relatively dense in $[0,\infty),$ which means that there exists a finite real number $l>0$ such that any subinterval of $[0,\infty)$ of length $l$ meets $\vartheta(f,\varepsilon)$. For further information about almost periodic functions and their applications, we refer the reader to [6, 13, 22, 23, 24, 26, 38, 40, 44].
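Bohr's standard example $f(t)=\sin t+\sin(\sqrt{2}\,t)$ is almost periodic but not periodic, and its $\varepsilon$-periods can be located numerically by scanning a grid. A purely illustrative sketch (the grid check is a numerical proxy for the supremum, not a proof):

```python
import numpy as np

def is_eps_period(f, tau, eps, T=100.0, step=0.05):
    """Numerically check sup_{t in [0, T]} |f(t + tau) - f(t)| <= eps on a grid."""
    t = np.arange(0.0, T, step)
    return float(np.max(np.abs(f(t + tau) - f(t)))) <= eps

# almost periodic but not periodic: two incommensurate frequencies
f = lambda t: np.sin(t) + np.sin(np.sqrt(2.0) * t)

eps = 0.5
candidates = np.arange(0.5, 40.0, 0.1)
periods = [tau for tau in candidates if is_eps_period(f, tau, eps)]
print(len(periods), periods[:3])
```

Values of $\tau$ near $2\pi k$ with $\sqrt{2}\,\tau$ also close to a multiple of $2\pi$ (e.g. near $10\pi$, via the convergent $7/5$ of $\sqrt{2}$) show up as $\varepsilon$-periods, and relative density predicts they recur along the whole half-line.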
In [9], we have investigated various classes of almost periodic functions of the form $F : \Lambda \times X\rightarrow Y,$ where $(Y,\|\cdot \|_{Y})$ is a complex Banach space and $\emptyset \neq \Lambda \subseteq {\mathbb R}^{n}$ (the region $\Lambda$ does not generally satisfy the semigroup property $\Lambda+ \Lambda\subseteq \Lambda$ or contain the zero vector). The main encouragement for writing our recent research article [10] (a joint work with A. Chávez, K. Khalil and M. Pinto), which concerns the multi-dimensional Stepanov almost periodic type functions of the form $F : \Lambda \times X\rightarrow Y,$ and this research article, which concerns the multi-dimensional Weyl almost periodic type functions of the same form, was our inability to locate any relevant reference in the existing literature concerning these classes of almost periodic functions (here, we would like to mention two recent papers, [37] by D. Lenz, T. Spindeler, N. Strungaru and [41] by T. Spindeler, in which the authors have analyzed the Stepanov and Weyl almost periodic functions on locally compact Abelian groups).
This paper aims, therefore, to continue the research studies [9]-[10] by developing the basic theory of multi-dimensional Weyl almost periodic type functions in Lebesgue spaces with variable exponents. As mentioned in the abstract, the introduced classes of functions seem to be not considered elsewhere even in the constant coefficient case (for one-dimensional Weyl almost periodic type functions and their applications, we refer the reader to [1, 3, 5, 6, 7, 8, 11, 21, 24, 25, 26, 27, 28, 29, 30, 38, 43], as well as the survey article [2] by J. Andres, A. M. Bersani, R. F. Grande, the pioneering papers by A. S. Kovanko [34]-[36] and the master thesis of J. Stryja [42]).
The organization and main ideas of this paper can be briefly described as follows. In Subsection <ref>, we collect the basic definitions and results from the theory of Lebesgue spaces with variable exponents. In Definition <ref>-Definition <ref> [Definition <ref>-Definition <ref>],
we continue our recent analysis of Weyl almost periodic functions [30] by introducing the classes
$e-W^{(p({\bf u}),\phi,{\mathbb F})}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ and $e-W^{(p({\bf u}),\phi,{\mathbb F})_{i}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ [$e-W^{[p({\bf u}),\phi,{\mathbb F}]}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ and $e-W^{[p({\bf u}),\phi,{\mathbb F}]_{i}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$] of Weyl almost periodic functions, where $i=1,2.$
We further analyze these classes in Section <ref>. The main result of this section is Theorem <ref> (see also Theorem <ref>), in which we investigate the convolution invariance of the space $(e-)W^{(p_{1}({\bf u}),\phi,{\mathbb F}_{1})}_{\Omega,\Lambda',{\mathcal B}}({\mathbb R}^{n}\times X :Y);$ this is a crucial result for our applications to the multi-dimensional heat equation. With the exception of this result, all other structural results of ours are given in Section <ref>, in which we investigate the usual concept of (equi-)Weyl-$p$-almost periodicity and the corresponding class of functions $(e-)W_{ap,\Lambda',{\mathcal B}}^{p}(\Lambda \times X : Y),$
with the constant exponent $p({\bf u})\equiv p\in [1,\infty).$ In Subsection <ref>, we investigate the Weyl $p$-distance and Weyl $p$-boundedness, while in Subsection <ref> we investigate the Weyl $p$-normality and Weyl approximations by trigonometric polynomials. The main results of this subsection are Theorem <ref>, Proposition <ref>-Proposition <ref>, Proposition <ref> and Proposition <ref>. In Subsection <ref>, we analyze the basic results about the existence of Bohr-Fourier coefficients for multi-dimensional Weyl almost periodic functions.
Section <ref> is reserved for giving some applications of our abstract theoretical results to the abstract Volterra integro-differential equations in Banach spaces. The paper does not intend to be exhaustively complete and we present several useful conclusions, remarks and intriguing topics not discussed here in Section <ref>. We also propose some open problems.
Before explaining the notation used in the paper,
the authors would like to express their sincere thanks to Prof. A. Chávez, M. T. Khalladi, M. Pinto, A. Rahmani and D. Velinov for many useful comments and observations. Special thanks go to Prof. Kamal Khalil, who proposed the use of kernel $K(t,s,\cdot,\cdot)$ in the third point of Section <ref>.
We assume henceforth
that $(X,\| \cdot \|)$ and $(Y, \|\cdot\|_Y)$ are complex Banach spaces. By
$L(X,Y)$ we denote the Banach algebra of all bounded linear operators from $X$ into
$Y$ with $L(X,X)$ being denoted $L(X)$. If $A: D(A) \subseteq X \mapsto X$ is a closed linear operator,
then its nullspace (or kernel) and range will be denoted respectively by
$N(A)$ and $R(A)$.
The convolution product $\ast$ of measurable functions $f: {\mathbb R}^{n} \rightarrow {\mathbb C}$ and $g: {\mathbb R}^{n} \rightarrow X$ is defined by $(f\ast g)({\bf t}):=\int_{{\mathbb R}^{n}}f({\bf t}-{\bf s})g({\bf s})\,
d{\bf s},$ ${\bf t}\in {\mathbb R}^{n},$ whenever the integral is well defined; $\langle \cdot, \cdot \rangle$ denotes the usual inner product in ${\mathbb R}^{n}.$ If ${\sc X},\ {\sc Y} \neq \emptyset,$ then we set ${\sc Y}^{\sc X}:=\{ f \, | \, f: {\sc X} \rightarrow {\sc Y}\};$
$\chi_{A}(\cdot)$ denotes the characteristic function of a set $A\subseteq {\mathbb R}^{n}.$
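The convolution product above can be approximated on a grid in dimension $n=1$; a small illustrative sketch, truncating the integral to a finite window:

```python
import numpy as np

def conv_at(f, g, t, S=20.0, ds=0.01):
    """Grid approximation of (f * g)(t) = ∫ f(t - s) g(s) ds, truncated to [-S, S]."""
    s = np.arange(-S, S, ds)
    return float(np.sum(f(t - s) * g(s)) * ds)

# test pair: convolving two standard Gaussian densities gives the N(0, 2)
# density, whose value at t = 0 is 1 / sqrt(4 * pi)
gauss = lambda t: np.exp(-t**2 / 2.0) / np.sqrt(2.0 * np.pi)
print(conv_at(gauss, gauss, 0.0), 1.0 / np.sqrt(4.0 * np.pi))
```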
§.§ Lebesgue spaces with variable exponents
Let $\emptyset \neq \Omega \subseteq {\mathbb R}^{n}$ be a Lebesgue measurable set and let
$M(\Omega : X)$ denote the collection of all measurable functions $f: \Omega \rightarrow X;$ $M(\Omega):=M(\Omega : {\mathbb R}).$ Further on, ${\mathcal P}(\Omega)$ denotes the vector space of all Lebesgue measurable functions $p : \Omega \rightarrow [1,\infty].$
For any $p\in {\mathcal P}(\Omega)$ and $f\in M(\Omega : X),$ we define
\begin{align*}
\varphi_{p(x)}(t):=\left\{
\begin{array}{ll}
t^{p(x)}, & t\geq 0,\ 1\leq p(x)<\infty,\\
0, & 0\leq t\leq 1,\ p(x)=\infty,\\
\infty, & t>1,\ p(x)=\infty,
\end{array}
\right.
\end{align*}
and
\begin{align*}
\rho(f):=\int_{\Omega}\varphi_{p(x)}(\|f(x)\|)\, dx .
\end{align*}
We define the Lebesgue space
$L^{p(x)}(\Omega : X)$ with variable exponent by
\begin{align*}
L^{p(x)}(\Omega : X):=\Bigl\{f\in M(\Omega : X): \lim_{\lambda \rightarrow 0+}\rho(\lambda f)=0\Bigr\}.
\end{align*}
Equivalently,
\begin{align*}
L^{p(x)}(\Omega : X)=\Bigl\{f\in M(\Omega : X): \mbox{ there exists }\lambda>0\mbox{ such that }\rho(\lambda f)<\infty\Bigr\};
\end{align*}
see, e.g., <cit.>.
For every $u\in L^{p(x)}(\Omega : X),$ we introduce the Luxemburg norm of $u(\cdot)$ by
\begin{align*}
\|u\|_{p(x)}:=\|u\|_{L^{p(x)}(\Omega :X)}:=\inf\Bigl\{ \lambda>0 : \rho(u/\lambda) \leq 1\Bigr\}.
\end{align*}
Equipped with the above norm, the space $
L^{p(x)}(\Omega : X)$ becomes a Banach space (see e.g. <cit.> for the scalar-valued case), coinciding with the usual Lebesgue space $L^{p}(\Omega : X)$ in the case that $p(x)=p\geq 1$ is a constant function.
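Since $\lambda \mapsto \rho(u/\lambda)$ is non-increasing, the Luxemburg norm can be approximated numerically by bisection. A sketch for finite exponents on a discretized $\Omega=[0,1]$ (illustrative only; the grid sum stands in for the Lebesgue integral):

```python
import numpy as np

def rho(u, p, dx):
    """Modular rho(u) = ∫_Omega |u(x)|^{p(x)} dx on a uniform grid (finite p)."""
    return float(np.sum(np.abs(u) ** p) * dx)

def luxemburg_norm(u, p, dx, lo=1e-8, hi=1e8, iters=200):
    """Approximate inf{lambda > 0 : rho(u / lambda) <= 1} by bisection,
    using that lambda -> rho(u / lambda) is non-increasing."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if rho(u / mid, p, dx) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

# sanity check: for the constant exponent p(x) = 2 on Omega = [0, 1], the
# Luxemburg norm reduces to the classical L^2 norm, so u = 3 has norm 3
x = np.linspace(0.0, 1.0, 10001)
dx = 1.0 / 10000.0
norm_const = luxemburg_norm(np.full_like(x, 3.0), np.full_like(x, 2.0), dx)
print(norm_const)

# a genuinely variable exponent, e.g. p(x) = 2 + x, is handled identically
norm_var = luxemburg_norm(np.full_like(x, 3.0), 2.0 + x, dx)
print(norm_var)
```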
Further on, for any $p\in M(\Omega),$ we define
\begin{align*}
p^{-}:=\mathop{\rm ess\,inf}_{x\in \Omega}p(x) \ \ \mbox{ and } \ \ p^{+}:=\mathop{\rm ess\,sup}_{x\in \Omega}p(x),
\end{align*}
as well as
\begin{align*}
D_{+}(\Omega ):=\bigl\{ p\in M(\Omega): 1 \leq p^{-}\leq p(x) \leq p^{+} <\infty \mbox{ for a.e. }x\in \Omega \bigr \}.
\end{align*}
For $p\in D_{+}(\Omega),$ the space $L^{p(x)}(\Omega : X)$ behaves nicely, with almost all fundamental properties of the Lebesgue space with constant exponent $L^{p}(\Omega : X)$ being retained; in this case, we know that
\begin{align*}
L^{p(x)}(\Omega : X)=\Bigl\{f\in M(\Omega : X) : \mbox{ for all }\lambda>0\mbox{ we have }\rho(\lambda f)<\infty\Bigr\}.
\end{align*}
We will use the following lemma (cf. [18] for the scalar-valued case):
(i) (The Hölder inequality) Let $p,\ q,\ r \in {\mathcal P}(\Omega)$ be such that
\begin{align*}
\frac{1}{q(x)}=\frac{1}{p(x)}+\frac{1}{r(x)},\quad x\in \Omega .
\end{align*}
Then, for every $u\in L^{p(x)}(\Omega : X)$ and $v\in L^{r(x)}(\Omega),$ we have $uv\in L^{q(x)}(\Omega : X)$ and
\begin{align*}
\|uv\|_{q(x)}\leq 2 \|u\|_{p(x)}\|v\|_{r(x)}.
\end{align*}
(ii) Let $\Omega $ have finite Lebesgue measure and let $p,\ q \in {\mathcal P}(\Omega)$ be such that $q\leq p$ a.e. on $\Omega.$ Then
$L^{p(x)}(\Omega : X)$ is continuously embedded in $L^{q(x)}(\Omega : X),$ with the constant of embedding less than or equal to $2(1+m(\Omega)).$
(iii) Let $f\in L^{p(x)}(\Omega : X),$ $g\in M(\Omega : X)$ and $0\leq \|g\| \leq \|f\|$ a.e. on $\Omega .$ Then $g\in L^{p(x)}(\Omega : X)$ and $\|g\|_{p(x)}\leq \|f\|_{p(x)}.$
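As a quick numerical illustration of part (i) with the constant exponents $p=r=2$, $q=1$, where the claim $\|uv\|_{1}\le 2\|u\|_{2}\|v\|_{2}$ is a relaxation (by the factor $2$) of the classical Cauchy-Schwarz bound:

```python
import numpy as np

# uniform grid over Omega = [0, 1] and two random sampled functions
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 5001)
dx = 1.0 / 5000.0
u = rng.normal(size=x.size)
v = rng.normal(size=x.size)

lhs = float(np.sum(np.abs(u * v)) * dx)                               # ||uv||_1
rhs = float(np.sqrt(np.sum(u**2) * dx) * np.sqrt(np.sum(v**2) * dx))  # ||u||_2 ||v||_2
print(lhs, 2 * rhs)
```

In the constant-exponent case the sharper bound $\|uv\|_1\le\|u\|_2\|v\|_2$ already holds; the factor $2$ is the price of the variable-exponent generality.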
We will use the following simple lemma, whose proof can be omitted:
Suppose that $f\in L^{p(x)}(\Omega : X)$ and $A\in L(X,Y).$
Then $Af \in L^{p(x)}(\Omega : Y)$ and
$\|Af\|_{L^{p(x)}(\Omega : Y)}\leq \|A\| \cdot \|f\|_{L^{p(x)}(\Omega : X)}.$
For further information concerning the Lebesgue spaces with variable exponents
$L^{p(x)},$ we refer the reader to [18], [20] and [39]; basic source of information on generalized almost periodic functions in
Lebesgue spaces with variable exponents can be obtained by consulting [10, 14, 15, 16, 17, 30, 31, 32] and the forthcoming monograph [27].
§ MULTI-DIMENSIONAL WEYL ALMOST PERIODIC TYPE FUNCTIONS
In this paper, we will always assume that ${\mathcal B}$ is a non-empty collection of certain subsets of $X$ such that for each $x\in X$ there exists $B\in {\mathcal B}$ such that $x\in B.$ In the first concept, we assume that the following condition holds:
$\emptyset \neq \Lambda \subseteq {\mathbb R}^{n},$ $\emptyset \neq \Lambda' \subseteq {\mathbb R}^{n},$
$\emptyset \neq \Omega \subseteq {\mathbb R}^{n}$ is a Lebesgue measurable set such that $m(\Omega)>0,$ $p\in {\mathcal P}(\Lambda),$
$\Lambda' +\Lambda+ l\Omega \subseteq \Lambda,$ $\Lambda+ l\Omega \subseteq \Lambda$ for all $l>0,$
$\phi : [0,\infty) \rightarrow [0,\infty)$ and ${\mathbb F}: (0,\infty) \times \Lambda \rightarrow (0,\infty).$
We introduce the following classes of multi-dimensional Weyl almost periodic functions (the notion can be further generalized following the approach of Definition <ref>; all established results can be slightly generalized in this framework):
(i) By $e-W^{(p({\bf u}),\phi,{\mathbb F})}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exist two finite real numbers $l>0$ and $L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L)\cap \Lambda'$ such that
\begin{align}\label{whatusup}
\sup_{x\in B}\sup_{{\bf t}\in \Lambda}{\mathbb F}(l,{\bf t})\phi\Bigl( \bigl\| F({\bf \tau}+{\bf u};x)-F({\bf u};x) \bigr\|_{Y}\Bigr)_{L^{p({\bf u})}({\bf t}+l\Omega)} <\epsilon.
\end{align}
(ii) By $W^{(p({\bf u}),\phi,{\mathbb F})}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exists a finite real number $L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L) \cap \Lambda'$ such that
\begin{align*}
\limsup_{l\rightarrow +\infty}\sup_{x\in B}\sup_{{\bf t}\in \Lambda}{\mathbb F}(l,{\bf t})\phi\Bigl( \bigl\| F({\bf \tau}+{\bf u};x)-F({\bf u};x) \bigr\|_{Y}\Bigr)_{L^{p({\bf u})}({\bf t}+l\Omega)} <\epsilon .
\end{align*}
(i) By $e-W^{(p({\bf u}),\phi,{\mathbb F})_{1}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exist two finite real numbers $l>0$ and $L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L)\cap \Lambda'$ such that
\begin{align*}
\sup_{x\in B}\sup_{{\bf t}\in \Lambda}{\mathbb F}(l,{\bf t})\phi\Bigl( \bigl\| F({\bf \tau}+{\bf u};x)-F({\bf u};x) \bigr\|_{L^{p({\bf u})}({\bf t}+l\Omega :Y)} \Bigr) <\epsilon .
\end{align*}
(ii) By $W^{(p({\bf u}),\phi,{\mathbb F})_{1}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exists a finite real number $L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L) \cap \Lambda'$ such that
\begin{align*}
\limsup_{l\rightarrow +\infty}\sup_{x\in B}\sup_{{\bf t}\in \Lambda}{\mathbb F}(l,{\bf t})\phi\Bigl( \bigl\| F({\bf \tau}+{\bf u};x)-F({\bf u};x) \bigr\|_{L^{p({\bf u})}({\bf t}+l\Omega:Y)} \Bigr) <\epsilon .
\end{align*}
(i) By $e-W^{(p({\bf u}),\phi,{\mathbb F})_{2}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exist two finite real numbers $l>0$ and $L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L)\cap \Lambda'$ such that
\begin{align*}
\sup_{x\in B}\sup_{{\bf t}\in \Lambda}\phi\Bigl( {\mathbb F}(l,{\bf t}) \bigl\| F({\bf \tau}+{\bf u};x)-F({\bf u};x) \bigr\|_{L^{p({\bf u})}({\bf t}+l\Omega:Y)} \Bigr) <\epsilon .
\end{align*}
(ii) By $W^{(p({\bf u}),\phi,{\mathbb F})_{2}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exists a finite real number $L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L) \cap \Lambda'$ such that
\begin{align*}
\limsup_{l\rightarrow +\infty}\sup_{x\in B}\sup_{{\bf t}\in \Lambda}\phi\Bigl( {\mathbb F}(l,{\bf t})\bigl\| F({\bf \tau}+{\bf u};x)-F({\bf u};x) \bigr\|_{L^{p({\bf u})}({\bf t}+l\Omega:Y)} \Bigr) <\epsilon .
\end{align*}
In the second concept, we aim to ensure the translation invariance of multi-dimensional Weyl almost periodic functions. We will assume now
that the following condition holds:
$\emptyset \neq \Lambda \subseteq {\mathbb R}^{n},$ $\emptyset \neq \Lambda' \subseteq {\mathbb R}^{n},$
$\emptyset \neq \Omega \subseteq {\mathbb R}^{n}$ is a Lebesgue measurable set such that $m(\Omega)>0,$ $p\in {\mathcal P}(\Omega),$
$\Lambda' +\Lambda+ l\Omega\subseteq \Lambda,$ $\Lambda+ l\Omega \subseteq \Lambda$ for all $l>0,$
$\phi : [0,\infty) \rightarrow [0,\infty)$ and ${\mathbb F}: (0,\infty) \times \Lambda \rightarrow (0,\infty).$
We introduce the following classes of functions:
(i) By $e-W^{[p({\bf u}),\phi,{\mathbb F}]}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exist two finite real numbers $l>0$ and $L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L)\cap \Lambda'$ such that
\begin{align*}
\sup_{x\in B}\sup_{{\bf t}\in \Lambda}l^{n}{\mathbb F}(l,{\bf t})\phi\Bigl( \bigl\| F({\bf t}+{\bf \tau}+l{\bf u};x)-F({\bf t}+l{\bf u};x) \bigr\|_{Y}\Bigr)_{L^{p({\bf u})}(\Omega)} <\epsilon.
\end{align*}
(ii) By $W^{[p({\bf u}),\phi,{\mathbb F}]}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exists a finite real number $L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L) \cap \Lambda'$ such that
\begin{align*}
\limsup_{l\rightarrow +\infty}\sup_{x\in B}\sup_{{\bf t}\in \Lambda}l^{n}{\mathbb F}(l,{\bf t})\phi\Bigl( \bigl\| F({\bf t}+{\bf \tau}+l{\bf u};x)-F({\bf t}+l{\bf u};x) \bigr\|_{Y}\Bigr)_{L^{p({\bf u})}(\Omega)} <\epsilon .
\end{align*}
(i) By $e-W^{[p({\bf u}),\phi,{\mathbb F}]_{1}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exist two finite real numbers $l>0$ and
$L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L)\cap \Lambda'$ such that
\begin{align*}
\sup_{x\in B}\sup_{{\bf t}\in \Lambda}l^{n}{\mathbb F}(l,{\bf t})\phi\Bigl( \bigl\| F({\bf t}+{\bf \tau}+l{\bf u};x)-F({\bf t}+l{\bf u};x) \bigr\|_{L^{p({\bf u})}(\Omega:Y)} \Bigr) <\epsilon.
\end{align*}
(ii) By $W^{[p({\bf u}),\phi,{\mathbb F}]_{1}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exists a finite real number
$L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L) \cap \Lambda'$ such that
\begin{align*}
\limsup_{l\rightarrow +\infty}\sup_{x\in B}\sup_{{\bf t}\in \Lambda}l^{n}{\mathbb F}(l,{\bf t})\phi\Bigl( \bigl\| F({\bf t}+{\bf \tau}+l{\bf u};x)-F({\bf t}+l{\bf u};x) \bigr\|_{L^{p({\bf u})}(\Omega:Y)} \Bigr) <\epsilon.
\end{align*}
(i) By $e-W^{[p({\bf u}),\phi,{\mathbb F}]_{2}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exist two finite real numbers $l>0$ and
$L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L)\cap \Lambda'$ such that
\begin{align*}
\sup_{x\in B}\sup_{{\bf t}\in \Lambda}\phi\Bigl( l^{n}{\mathbb F}(l,{\bf t}) \bigl\| F({\bf t}+{\bf \tau}+l{\bf u};x)-F({\bf t}+l{\bf u};x) \bigr\|_{L^{p({\bf u})}(\Omega:Y)} \Bigr) <\epsilon.
\end{align*}
(ii) By $W^{[p({\bf u}),\phi,{\mathbb F}]_{2}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exists a finite real number
$L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L) \cap \Lambda'$ such that
\begin{align*}
\limsup_{l\rightarrow +\infty}\sup_{x\in B}\sup_{{\bf t}\in \Lambda}\phi\Bigl( l^{n}{\mathbb F}(l,{\bf t})\bigl\| F({\bf t}+{\bf \tau}+l{\bf u};x)-F({\bf t}+l{\bf u};x) \bigr\|_{L^{p({\bf u})}(\Omega:Y)} \Bigr) <\epsilon.
\end{align*}
It is clear that both concepts are equivalent in the constant coefficient case.
Further on, the notion introduced in Definition <ref>-Definition <ref> generalizes the notion introduced in <cit.>, provided that $\Lambda'=\Lambda=I,$ $\Omega=[0,1]$ and $I$ is equal to $[0,\infty)$ or ${\mathbb R},$ whilst the notion introduced in Definition <ref>-Definition <ref> generalizes the notion introduced in <cit.> in the above-mentioned case. Let us also note that, if a function $F : \Lambda \times X \rightarrow Y$ is Stepanov $(\Omega,p({\bf u}))$-$({\mathcal B},\Lambda')$-almost periodic in the sense of <cit.>, then $F\in e-W^{[p({\bf u}),x,{\mathbb F}]}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ for any function ${\mathbb F}(\cdot; \cdot)$ satisfying ${\mathbb F}(1,{\bf t})=1$ for all ${\bf t}\in \Lambda.$ If $X=\{0\}$ and ${\mathcal B}=\{X\},$ then we omit the term “${\mathcal B}$” from the notation.
We continue by providing two illustrative examples:
Let us recall that J. Stryja has proved, in [42], that the function $f(t):=\chi_{[0,1/2]}(t),$ $t\in {\mathbb R}$ is equi-Weyl-$p$-almost periodic for any exponent $p\in [1,\infty)$ but it is not Stepanov $p$-almost periodic for any exponent $p\in [1,\infty)$ (see e.g., <cit.> for the notion); in <cit.>, we have recently extended this result by showing that for each $p\in [1,\infty)$ the function
$f(\cdot)$ belongs to the space $e-W^{(p,x,l^{-\sigma})}_{[0,1],{\mathbb R}}({\mathbb R} :{\mathbb C})$ if and only if $\sigma>0$ (in actual fact, this holds for any $p\in {\mathcal P}({\mathbb R}),$ as can easily be verified). A similar consideration shows that for each compact set $K\subseteq {\mathbb R}^{n}$ with positive Lebesgue measure and for each $p\in {\mathcal P}({\mathbb R}^{n})$ the function $F(\cdot):=\chi_{K}(\cdot)$ belongs to the space $e-W^{(p({\bf u}),x,l^{-\sigma})}_{[0,1]^{n},{\mathbb R}^{n}}({\mathbb R}^{n} : {\mathbb C})$ if and only if $\sigma>0.$
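This decay can be illustrated numerically. The following sketch (an informal illustration only, not part of the formal argument; it assumes $n=1,$ $p=2,$ $\sigma=1/2,$ a fixed shift $\tau=0.3$ and a crude Riemann-sum discretization, with all grid parameters chosen ad hoc) approximates the quantity $l^{-\sigma}\sup_{t}\| f(\tau+\cdot)-f(\cdot)\|_{L^{p}(t+l[0,1])}$ for $f=\chi_{[0,1/2]}$:

```python
import numpy as np

def f(u):
    # characteristic function of [0, 1/2]
    return ((u >= 0.0) & (u <= 0.5)).astype(float)

def scaled_quantity(tau, l, p=2.0, sigma=0.5, n_t=40, n_u=4000):
    # approximate  l^{-sigma} * sup_t || f(tau + .) - f(.) ||_{L^p(t, t+l)}
    best = 0.0
    for t in np.linspace(-2.0, 2.0, n_t):
        u = t + l * np.arange(n_u) / n_u          # grid on the window [t, t+l]
        du = l / n_u
        norm = (np.sum(np.abs(f(u + tau) - f(u)) ** p) * du) ** (1.0 / p)
        best = max(best, norm)
    return l ** (-sigma) * best

vals = [scaled_quantity(tau=0.3, l=l) for l in (1.0, 10.0, 100.0)]
```

Since $\|f(\cdot+\tau)-f(\cdot)\|_{L^{p}}^{p}$ over any window is bounded by the measure of the set where the two shifts disagree, the windowed norm stays bounded in $l$, so any $\sigma>0$ forces the scaled quantity to $0$; the three computed values decrease accordingly.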
Let $p\in [1,\infty).$
In [42], it has been proved that the Heaviside function $f(t):=\chi_{[0,\infty)}(t),$ $t\in {\mathbb R}$ is both Weyl-$p$-normal (i.e., Weyl-$({\mathrm R},{\mathcal B},p)$-normal with $\Lambda=\Lambda'={\mathbb R},$
$X=\{0\},$ ${\mathcal B}=\{X\},$ $Y={\mathbb C}$ and ${\mathrm R}$ being the collection of all sequences in ${\mathbb R};$ see Definition <ref> below) and Weyl-$p$-almost periodic as well as that $f(\cdot)$ is not equi-Weyl-$p$-almost periodic. In <cit.>, we have proved that $f(\cdot)$ belongs to the space
$W^{(p,x,l^{-\sigma})}_{[0,1],{\mathbb R}}({\mathbb R} : {\mathbb C})$ if and only if $\sigma>0$
as well as that the function $f(\cdot)$ cannot belong to the space $W^{(p,x,[\psi(l)]^{-1/p})}_{[0,1],{\mathbb R}}({\mathbb R} : {\mathbb C}),$ for any function $\psi : (0,\infty) \rightarrow (0,\infty)$
such that $\limsup_{l\rightarrow +\infty}[\psi(l)]^{-1}>0$ (see also <cit.>).
Suppose now that $F({\bf t}):=\chi_{[0,\infty)^{n}}({\bf t}),$ ${\bf t} \in {\mathbb R}^{n}$ as well as that $\Lambda :=\Lambda' :={\mathbb R}^{n}$ and $\phi(x)\equiv x.$ Then, for every ${\bf t},\ \tau \in {\mathbb R}^{n}$ and $l>0,$ we have
\begin{align*}
&\int_{{\bf t}+l\Omega}|F(\tau +{\bf u})-F({\bf u})|^{p}\, d{\bf u}
\\&=\int_{({\bf t}+l\Omega) \setminus [0,\infty)^{n}}|F(\tau +{\bf u})|^{p}\, d{\bf u}+\int_{({\bf t}+l\Omega) \cap [0,\infty)^{n}}|F({\bf u})|^{p}\, d{\bf u}
\\& =\int_{\tau + [({\bf t}+l\Omega) \setminus [0,\infty)^{n}]}|F({\bf u})|^{p}\, d{\bf u}+\int_{\tau + [({\bf t}+l\Omega) \cap [0,\infty)^{n}]}|F({\bf u})-1|^{p}\, d{\bf u}
\\& \leq m\Bigl( \bigl(\tau + [({\bf t}+l\Omega) \setminus [0,\infty)^{n}]\bigr) \cap [0,\infty)^{n} \Bigr)+m\Bigl( \bigl(\tau + [({\bf t}+l\Omega) \cap [0,\infty)^{n}]\bigr) \setminus [0,\infty)^{n} \Bigr).
\end{align*}
If $l>|\tau|,$ then it is not difficult to prove that the latter does not exceed $2^{n}l^{n-1}|\tau|,$ which implies that $F\in W^{(p,x,l^{-\sigma})}_{[0,1]^{n},{\mathbb R}^{n}}({\mathbb R}^{n} : {\mathbb C})$ if $\sigma>(n-1)/p;$ this is also the best value of $\sigma$ we can obtain here. On the other hand, there is no $\sigma>0$ such that $F\in e-W^{(p,x,l^{-\sigma})}_{[0,1]^{n},{\mathbb R}^{n}}({\mathbb R}^{n} : {\mathbb C}).$
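In the one-dimensional case, the measure-theoretic bound above reduces to $\int_{t}^{t+l}|F(\tau+u)-F(u)|^{p}\, du\leq |\tau|$ because the integrand equals $1$ precisely on an interval of length $|\tau|$. The following sketch (an informal numerical check only; the window positions and grid step are chosen ad hoc) verifies this for the Heaviside function:

```python
import numpy as np

def F(u):
    # Heaviside function chi_{[0, infinity)}
    return (u >= 0.0).astype(float)

def window_integral(t, l, tau, n_u=200000):
    # Riemann-sum approximation of  int_t^{t+l} |F(tau + u) - F(u)| du
    u = t + l * np.arange(n_u) / n_u
    du = l / n_u
    return np.sum(np.abs(F(u + tau) - F(u))) * du

tau, l = 0.4, 50.0
vals = [window_integral(t, l, tau) for t in (-10.0, -1.0, 0.0, 5.0)]
```

Every window integral is at most $|\tau|,$ uniformly in $t$ and $l,$ in accordance with the bound $2^{n}l^{n-1}|\tau|$ for $n=1$; hence the Weyl quantity tends to $0$ for any $\sigma>0$ when $n=1,$ and the threshold $\sigma>(n-1)/p$ only becomes restrictive for $n\geq 2.$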
Denote by ${\mathrm A}_{X,Y}$ any of the classes of functions introduced above.
Then we have the following:
(i) Suppose that $c\in {\mathbb C}$ and $F(\cdot ; \cdot)$ belongs to ${\mathrm A}_{X,Y}.$
Then $cF(\cdot ; \cdot)$ belongs to ${\mathrm A}_{X,Y},$ provided that there exists a function $\varphi : [0,\infty) \rightarrow [0,\infty)$ satisfying that $\phi(xy)\leq \varphi(y)\phi(x),$ $x,\ y \geq 0.$
(ii) Suppose that $F\in {\mathrm A}_{X,Y},$
$A\in L(Y,Z),$
$\phi(\cdot)$ is a monotonically increasing function and there exists a function $\varphi : [0,\infty) \rightarrow [0,\infty)$ satisfying that $\phi(xy)\leq \varphi(y)\phi(x),$ $x,\ y \geq 0.$ Using Lemma <ref>(iii), Lemma <ref> and a simple argument, it follows that $AF\in {\mathrm A}_{X,Z}.$
(iii) (a) Suppose that $c_{2}\in {\mathbb C}\setminus \{0\}$ and $F(\cdot ; \cdot)$ belongs to ${\mathrm A}_{X,Y}.$
Then $F(\cdot ; c_{2}\cdot)$ belongs to ${\mathrm A}_{X,Y}$ with the collection ${\mathcal B}$ replaced therein by ${\mathcal B}_{c_{2}}\equiv \{c_{2}^{-1}B : B\in {\mathcal B}\}.$
(b) Suppose that $c_{1}\in {\mathbb C}\setminus \{0\},$ $c_{2}\in {\mathbb C}\setminus \{0\},$ and $F(\cdot ; \cdot)$ belongs to ${\mathrm A}_{X,Y}.$ Define the function $F_{c_{1},c_{2}}: \Lambda/c_{1} \times X \rightarrow Y$ by
$F_{c_{1},c_{2}}({\bf t},x):=F(c_{1}{\bf t} ; c_{2}x),$ ${\bf t}\in \Lambda/c_{1},$ $x\in X.$ If we assume that
$\phi(\cdot)$ is a monotonically increasing function and there exists a function $\varphi : [0,\infty) \rightarrow [0,\infty)$ satisfying that $\phi(xy)\leq \varphi(y)\phi(x),$ $x,\ y \geq 0,$ then $F\in (e-)W^{(p({\bf u}),\phi,{\mathbb F})}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ [$F\in (e-)W^{[p({\bf u}),\phi,{\mathbb F}]}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$] implies
$F_{c_{1},c_{2}}\in (e-)W^{(p_{c_{1}}({\bf u}),\phi,{\mathbb F}_{c_{1}})}_{\Omega/c_{1},\Lambda'/c_{1},{\mathcal B}_{c_{2}}}((\Lambda /c_{1})\times X :Y)$
[$F_{c_{1},c_{2}}\in (e-)W^{[p_{c_{1}}({\bf u}),\phi,{\mathbb F}_{c_{1}}]}_{\Omega/c_{1},\Lambda'/c_{1},{\mathcal B}_{c_{2}}}((\Lambda /c_{1})\times X :Y)$], where
$p_{c_{1}}({\bf u}):=p(c_{1}{\bf u}),$ ${\bf u}\in \Lambda/c_{1}$ and ${\mathbb F}_{c_{1}}(x,{\bf t}):={\mathbb F}(x,c_{1}{\bf t}),$
$x\geq 0,$
${\bf t} \in \Lambda/c_{1}.$
For the class $e-W^{(p({\bf u}),\phi,{\mathbb F})}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$, this follows from the inequality
\begin{align*}
\Bigl[\phi &\Bigl( \bigl\| F_{c_{1},c_{2}}({\bf \tau}+{\bf u};x)-F_{c_{1},c_{2}}({\bf u};x) \bigr\|\Bigr)\Bigr]_{L^{p_{c_{1}}({\bf u})}({\bf t}/c_{1}+l\Omega/c_{1}:Y)}
\\&\leq \Bigl(1+|c_{1}|^{-n} \Bigr)
\Bigl[\phi\Bigl( \bigl\| F({\bf \tau}+{\bf u};x)-F({\bf u};x) \bigr\|\Bigr)\Bigr]_{L^{p({\bf u})}({\bf t}+l\Omega:Y)} ,\quad {\bf t}\in \Lambda,
\end{align*}
which follows from a straightforward computation involving a change of variables, the elementary definitions and the inequality $\varphi_{p({\bf u})}(c\cdot)\leq |c|\varphi_{p({\bf u})}(\cdot)$ for $|c|\leq 1.$ Similarly, if we assume that there exists a function $\varphi : [0,\infty) \rightarrow [0,\infty)$ satisfying that $\phi(xy)\leq \varphi(y)\phi(x),$ $x,\ y \geq 0$ and $F\in (e-)W^{(p({\bf u}),\phi,{\mathbb F})_{i}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$ [$F\in (e-)W^{[p({\bf u}),\phi,{\mathbb F}]_{i}}_{\Omega,\Lambda',{\mathcal B}}(\Lambda\times X :Y)$] for $i=1$ or $i=2$, then
$F_{c_{1},c_{2}}\in (e-)W^{(p_{c_{1}}({\bf u}),\phi,{\mathbb F}_{c_{1}})_{i}}_{\Omega/c_{1},\Lambda'/c_{1},{\mathcal B}_{c_{2}}}((\Lambda /c_{1})\times X :Y)$
[$F_{c_{1},c_{2}}\in (e-)W^{[p_{c_{1}}({\bf u}),\phi,{\mathbb F}_{c_{1}}]_{i}}_{\Omega/c_{1},\Lambda'/c_{1},{\mathcal B}_{c_{2}}}((\Lambda /c_{1})\times X :Y)$].
(iv) The use of Jensen integral inequality in general measure spaces <cit.> may be useful to state some inclusions about the introduced classes of functions. The consideration is similar to that established in the one-dimensional case [30] and therefore omitted.
Regarding the convolution invariance of the spaces $(e-)W^{(p({\bf u}),\phi,{\mathbb F})}_{\Omega,\Lambda',{\mathcal B}}({\mathbb R}^{n}\times X :Y)$ and
$(e-)W^{[p({\bf u}),\phi,{\mathbb F}]}_{\Omega,\Lambda',{\mathcal B}}({\mathbb R}^{n}\times X :Y),$ we will state the following results (the corresponding proofs are very similar to the proof of <cit.>, given in the one-dimensional case, and we will only present the main details of the proof for Theorem <ref>; the results on the invariance of various kinds of (equi-)Weyl almost periodicity under the actions of convolution products, established in <cit.>, cannot simply be transferred to the multi-dimensional setting and we will not reconsider these results here):
Suppose that
$\varphi :[0,\infty) \rightarrow [0,\infty) ,$
$\phi :[0,\infty) \rightarrow [0,\infty) $ is a convex monotonically increasing function satisfying $\phi (xy)\leq \varphi(x)\phi(y)$ for all $x, \ y\geq 0,$
$h\in L^{1}({\mathbb R}^{n}),$ $\Omega=[0,1]^{n} $,
$F\in (e-)W^{(p({\bf u}),\phi,{\mathbb F})}_{\Omega,\Lambda',{\mathcal B}}({\mathbb R}^{n}\times X :Y),$ $1/p({\bf u})+1/q({\bf u})=1,$ and for each $x\in X$ we have
$\sup_{{\bf t}\in {\mathbb R}^{n}}\| F({\bf t};x)\|_{Y}<\infty.$ If ${\mathbb F}_{1} : (0,\infty) \times {\mathbb R}^{n} \rightarrow (0,\infty),$ $p_{1}\in {\mathcal P}({\mathbb R}^{n})$ and if, for every ${\bf t}\in {\mathbb R}^{n}$ and $l>0,$ there exists a sequence $(a_{k})_{k\in l{\mathbb Z}^{n}}$
of positive real numbers such that $\sum_{k\in l{\mathbb Z}^{n}}a_{k}=1$ and
\begin{align}\label{razaq}
\int_{{\bf t}+l\Omega}\varphi_{p_{1}({\bf u})}\Biggl( 2\sum_{k\in l{\mathbb Z}^{n}}a_{k}l^{-n}\Bigl[\varphi\bigl( a_{k}^{-1}l^{n}h({\bf u}-{\bf v}) \bigr)\Bigr]_{L^{q({\bf v})}({\bf u}-k+l\Omega)}{\mathbb F}_{1}(l,{\bf t})
\bigl[{\mathbb F}(l,{\bf u}-k)\bigr]^{-1}
\Biggr)\, d{\bf u} \leq 1,
\end{align}
then $h\ast F\in (e-)W^{(p_{1}({\bf u}),\phi,{\mathbb F}_{1})}_{\Omega,\Lambda',{\mathcal B}}({\mathbb R}^{n}\times X :Y).$
Since $\sup_{{\bf t}\in {\mathbb R}^{n}}\| F({\bf t};x)\|_{Y}<\infty,$ $x\in X$, it is clear that the value $(h\ast F)({\bf t};x)$ is well defined for all ${\bf t} \in {\mathbb R}^{n}$ and $x\in X.$ Furthermore, since we have assumed that the function $\phi(\cdot)$ is monotonically increasing, we have (${\bf t}\in {\mathbb R}^{n},$ $l>0;$ $x\in X$ fixed):
\begin{align*}
\phi &\Bigl( \bigl\| (h\ast F)({\bf \tau}+{\bf u};x)-(h\ast F)({\bf u};x) \bigr\|_{Y}\Bigr)_{L^{p_{1}({\bf u})}({\bf t}+l\Omega)}
\\& =\phi \Biggl( \Bigl\| \int_{{\mathbb R}^{n}}h({\bf s}) \Bigl[F({\bf \tau}+{\bf u}-{\bf s};x)-F({\bf u}-{\bf s};x) \Bigr] d{\bf s} \Bigr\|_{Y}\Biggr)_{L^{p_{1}({\bf u})}({\bf t}+l\Omega)}
\\& \leq \phi \Biggl( \int_{{\mathbb R}^{n}}|h({\bf s})| \cdot \Bigl\|F({\bf \tau}+{\bf u}-{\bf s};x)-F({\bf u}-{\bf s};x) \Bigr\|_{Y} d{\bf s}\Biggr)_{L^{p_{1}({\bf u})}({\bf t}+l\Omega)}
\\& =\inf\Biggl\{ \lambda>0: \int_{{\bf t}+l\Omega}\varphi_{p_{1}({\bf u})}\Biggl( \frac{\phi \bigl( \int_{{\mathbb R}^{n}}|h({\bf s})| \cdot \bigl\|F({\bf \tau}+{\bf u}-{\bf s};x)-F({\bf u}-{\bf s};x) \bigr\|_{Y} d{\bf s} \bigr)}{\lambda}\Biggr)\, d{\bf u}\leq 1 \Biggr\}.
\end{align*}
But, since we have assumed that
$\phi(\cdot)$ is convex and $\sum_{k\in l{\mathbb Z}^{n}}a_{k}=1,$ we have
\begin{align}\label{infinitever}
\phi \Biggl( \sum_{k\in l{\mathbb Z}^{n}}a_{k}x_{k} \Biggr) \leq \sum_{k\in l{\mathbb Z}^{n}}a_{k}\phi \bigl(x_{k}\bigr),
\end{align}
for any sequence $(x_{k})$ of non-negative real numbers. Using (<ref>), the fact that the function $\varphi_{p_{1}({\bf u})}(\cdot)$ is monotonically increasing, the above computation, as well as the Jensen integral inequality and the Hölder inequality (see Lemma <ref>(i)), we get:
\begin{align*}
&\int_{{\bf t}+l\Omega}\varphi_{p_{1}({\bf u})}\Biggl( \frac{\phi \bigl( \int_{{\mathbb R}^{n}}|h({\bf s})| \cdot \bigl\|F({\bf \tau}+{\bf u}-{\bf s};x)-F({\bf u}-{\bf s};x) \bigr\|_{Y} d{\bf s} \bigr)}{\lambda}\Biggr)\, d{\bf u}
\\& \leq \int_{{\bf t}+l\Omega}\varphi_{p_{1}({\bf u})}\Biggl( \frac{\sum_{k\in l{\mathbb Z}^{n}}a_{k}\phi \bigl( \int_{k-l\Omega}a_{k}^{-1}|h({\bf s})| \cdot \bigl\|F({\bf \tau}+{\bf u}-{\bf s};x)-F({\bf u}-{\bf s};x) \bigr\|_{Y} d{\bf s} \bigr)}{\lambda}\Biggr)\, d{\bf u}
\\& \leq \int_{{\bf t}+l\Omega}\varphi_{p_{1}({\bf u})}\Biggl( \frac{\sum_{k\in l{\mathbb Z}^{n}}a_{k}l^{-n}\int_{k-l\Omega}\phi \bigl( a_{k}^{-1}l^{n}|h({\bf s})| \cdot \bigl\|F({\bf \tau}+{\bf u}-{\bf s};x)-F({\bf u}-{\bf s};x) \bigr\|_{Y} \bigr) d{\bf s}}{\lambda}\Biggr)\, d{\bf u}
\\& =\int_{{\bf t}+l\Omega}\varphi_{p_{1}({\bf u})}\Biggl( \frac{\sum_{k\in l{\mathbb Z}^{n}}a_{k}l^{-n}\int_{k-l\Omega}\phi \bigl( a_{k}^{-1}l^{n}|h({\bf u}-{\bf v})| \cdot \bigl\|F({\bf \tau}+{\bf v};x)-F({\bf v};x) \bigr\|_{Y} \bigr) d{\bf v}}{\lambda}\Biggr)\, d{\bf u}
\\& \leq \int_{{\bf t}+l\Omega}\varphi_{p_{1}({\bf u})}\Biggl( \frac{\sum_{k\in l{\mathbb Z}^{n}}a_{k}l^{-n}\int_{{\bf u}-k+l\Omega}\varphi \bigl( a_{k}^{-1}l^{n}|h({\bf u}-{\bf v})| \bigr) \phi\bigl( \bigl\|F({\bf \tau}+{\bf v};x)-F({\bf v};x) \bigr\|_{Y} \bigr) d{\bf v} }{\lambda}\Biggr) \, d{\bf u}
\\& \leq \int_{{\bf t}+l\Omega}\varphi_{p_{1}({\bf u})}\Biggl( \frac{\sum_{k\in l{\mathbb Z}^{n}}2a_{k}l^{-n}\Bigl[\varphi\bigl( a_{k}^{-1}l^{n}h({\bf u}-{\bf v}) \bigr)\Bigr]_{L^{q({\bf v})}({\bf u}-k+l\Omega)}}{\lambda}
\\ &\times \bigl[\phi\bigl( \bigl\|F({\bf \tau}+{\bf v};x)-F({\bf v};x) \bigr\|_{Y} \bigr)\bigr]_{L^{p({\bf v})}({\bf u}-k+l\Omega)} \Biggr) \, d{\bf u}
\\& \leq \int_{{\bf t}+l\Omega}\varphi_{p_{1}({\bf u})}\Biggl( \frac{\sum_{k\in l{\mathbb Z}^{n}}2a_{k}l^{-n}\Bigl[\varphi\bigl( a_{k}^{-1}l^{n}h({\bf u}-{\bf v}) \bigr)\Bigr]_{L^{q({\bf v})}({\bf u}-k+l\Omega)}}{\lambda \cdot {\mathbb F}(l,{\bf u}-k)}\Biggr)\, d{\bf u}.
\end{align*}
An application of (<ref>) now completes the proof.
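The convexity estimate used in the proof above, $\phi ( \sum_{k}a_{k}x_{k} ) \leq \sum_{k}a_{k}\phi (x_{k}),$ is easy to sanity-check numerically; the following sketch (an informal check only) picks $\phi(x)=x^{2}$ as a concrete convex, monotonically increasing function on $[0,\infty)$ and random convex weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # a convex, monotonically increasing function on [0, infinity)
    return x ** 2

a = rng.random(10)
a /= a.sum()                 # convex weights: a_k > 0, sum a_k = 1
x = 5.0 * rng.random(10)     # non-negative reals x_k

lhs = phi(np.dot(a, x))      # phi( sum a_k x_k )
rhs = np.dot(a, phi(x))      # sum a_k phi(x_k)
```

Jensen's inequality guarantees that lhs never exceeds rhs for any convex $\phi$ and any choice of convex weights; equality would require an affine $\phi$ or all $x_{k}$ equal.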
Suppose that
$\varphi :[0,\infty) \rightarrow [0,\infty) ,$
$\phi :[0,\infty) \rightarrow [0,\infty) $ is a convex monotonically increasing function satisfying $\phi (xy)\leq \varphi(x)\phi(y)$ for all $x, \ y\geq 0,$
$h\in L^{1}({\mathbb R}^{n}),$ $\Omega=[0,1]^{n} $,
$F\in (e-)W^{[p({\bf u}),\phi,{\mathbb F}]}_{\Omega,\Lambda',{\mathcal B}}({\mathbb R}^{n}\times X :Y),$ $1/p({\bf u})+1/q({\bf u})=1,$ and for each $x\in X$ we have
$\sup_{{\bf t}\in {\mathbb R}^{n}}\| F({\bf t};x)\|_{Y}<\infty.$ If ${\mathbb F}_{1} : (0,\infty) \times {\mathbb R}^{n} \rightarrow (0,\infty),$ $p_{1}\in {\mathcal P}({\mathbb R}^{n})$ and if, for every ${\bf t}\in {\mathbb R}^{n}$ and $l>0,$ there exists a sequence $(a_{k})_{k\in l{\mathbb Z}^{n}}$
of positive real numbers such that $\sum_{k\in l{\mathbb Z}^{n}}a_{k}=1$ and
\begin{align*}
\int_{\Omega}\varphi_{p_{1}({\bf u})}\Biggl( 2\sum_{k\in l{\mathbb Z}^{n}}a_{k}l^{-n}\Bigl[\varphi\bigl( a_{k}^{-1}l^{n}h(k-l{\bf v}) \bigr)\Bigr]_{L^{q({\bf v})}(\Omega)}{\mathbb F}_{1}(l,{\bf t})
\bigl[{\mathbb F}(l,{\bf t}+l{\bf u}-k)\bigr]^{-1}
\Biggr)\, d{\bf u} \leq 1,
\end{align*}
then $h\ast F\in (e-)W^{[p_{1}({\bf u}),\phi,{\mathbb F}_{1}]}_{\Omega,\Lambda',{\mathcal B}}({\mathbb R}^{n}\times X :Y).$
The interested reader may try to formulate the corresponding statements about the convolution invariance of Weyl almost periodicity for the remaining four classes of functions introduced following our considerations from <cit.>.
Concerning the functions $\phi(\cdot)$ and ${\mathbb F}(\cdot,\cdot),$ the most important case is that one in which $\phi(x)\equiv x,$ ${\mathbb F}(l,{\bf t})\equiv m(l\Omega)^{-1}\|1\|_{L^{q({\bf u})}(l\Omega)},$ where $1/p({\bf u})+1/q({\bf u})=1,$ when we obtain the usual concept of (equi-)Weyl-$p({\bf u})$-almost periodicity; if this is the case, the spaces $(e-)W^{(p({\bf u}),\phi,{\mathbb F})}_{\Omega,\Lambda',{\mathcal B}}$, $(e-)W^{(p({\bf u}),\phi,{\mathbb F})_{1}}_{\Omega,\Lambda',{\mathcal B}}$ and $(e-)W^{(p({\bf u}),\phi,{\mathbb F})_{2}}_{\Omega,\Lambda',{\mathcal B}},$ resp. the spaces $(e-)W^{[p({\bf u}),\phi,{\mathbb F}]}_{\Omega,\Lambda',{\mathcal B}}$, $(e-)W^{[p({\bf u}),\phi,{\mathbb F}]_{1}}_{\Omega,\Lambda',{\mathcal B}}$ and $(e-)W^{[p({\bf u}),\phi,{\mathbb F}]_{2}}_{\Omega,\Lambda',{\mathcal B}},$ coincide. Furthermore, the use of the Hölder inequality enables one to see that these spaces are contained in the corresponding spaces of functions with $p({\bf u})\equiv 1.$
§ THE CONSTANT COEFFICIENT CASE
In this section, we will always assume that $\Omega=[0,1]^{n},$
$\Lambda$ is
a general non-empty subset of
${\mathbb R}^{n}$ satisfying $\Lambda' +\Lambda+ l\Omega\subseteq \Lambda$ and $\Lambda +l\Omega \subseteq \Lambda $
for all $l>0,$
$\phi(x)\equiv x$ and
$p({\bf t})\equiv p\in [1,\infty),$ when the usual concept of (equi-)Weyl-$p$-almost periodicity is obtained by plugging ${\mathbb F}(l,{\bf t})\equiv l^{-n/p}.$ The corresponding class of functions is denoted by $(e-)W_{ap,\Lambda',{\mathcal B}}^{p}(\Lambda \times X : Y).$
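For the reader's convenience, let us explicitly record what membership in the class $e-W_{ap,\Lambda',{\mathcal B}}^{p}(\Lambda \times X : Y)$ amounts to after these specializations (this is nothing more than the definition rewritten with $\phi(x)\equiv x,$ $p({\bf u})\equiv p$ and ${\mathbb F}(l,{\bf t})\equiv l^{-n/p}$): for every $\epsilon>0$ and $B\in {\mathcal B},$ there exist two finite real numbers $l>0$ and $L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L)\cap \Lambda'$ such that
\begin{align*}
\sup_{x\in B}\sup_{{\bf t}\in \Lambda}\Biggl[ l^{-n/p}\Bigl( \int_{{\bf t}+l\Omega}\bigl\| F({\bf u}+\tau;x)-F({\bf u};x)\bigr\|_{Y}^{p}\, d{\bf u}\Bigr)^{1/p}\Biggr]<\epsilon;
\end{align*}
in the non-equi case, the requirement is instead that the limit superior of this quantity as $l\rightarrow +\infty$ is smaller than $\epsilon.$ This is precisely the classical notion of (equi-)Weyl-$p$-almost periodicity.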
Before we switch to Subsection <ref>, we would like to present the following illustrative example:
(see also <cit.>)
Suppose that
the complex-valued mapping $t\mapsto g_{j}(t),$ $t\in {\mathbb R}$ is essentially bounded and
(equi-)Weyl-$p$-almost periodic
($1\leq j \leq n$). Define
\begin{align*}
F\bigl(t_{1},\cdots,t_{2n}\bigr):=\prod_{j=1}^{n}\Bigl[g_{j}\bigl(t_{j+n}\bigr)-g_{j}\bigl(t_{j}\bigr)\Bigr],\ \mbox{ where } t_{j}\in {\mathbb R} \mbox{ for }\ 1\leq j\leq 2n,
\end{align*}
and $\Lambda':=\{({\bf \tau},{\bf \tau}) : {\bf \tau} \in {\mathbb R^{n}} \}.$
Then the argumentation from <cit.> shows that
there exists a finite constant $M>0$ such that
\begin{align*}
\Bigl\| & F\bigl(t_{1}+\tau_{1},\cdots,t_{2n}+\tau_{2n}\bigr)-F\bigl(t_{1},\cdots,t_{2n}\bigr)\Bigr\|_{Y}
\\& \leq M\Biggl\{ \Bigl|g_{1}\bigl( t_{n+1}+\tau_{1} \bigr)-g_{1}\bigl( t_{n+1} \bigr)\Bigr|+\Bigl|g_{1}\bigl( t_{1}+\tau_{1} \bigr)-g_{1}\bigl( t_{1} \bigr)\Bigr| +\cdots
\\&+\Bigl|g_{n}\bigl( t_{2n}+\tau_{n} \bigr)-g_{n}\bigl( t_{2n} \bigr)\Bigr|+\Bigl|g_{n}\bigl( t_{n}+\tau_{n} \bigr)-g_{n}\bigl( t_{n} \bigr)\Bigr|\Biggr\},
\end{align*}
for any $(t_{1},\cdots,t_{2n})\in {\mathbb R}^{2n}$ and $(\tau_{1},\cdots,\tau_{2n})\in \Lambda'.$
Using the corresponding definitions, the Fubini theorem and an elementary argumentation, it follows that the function $F(\cdot)$
belongs to the class $(e-)W_{ap,\Lambda'}^{p}({\mathbb R}^{2n} : Y).$ Furthermore, in the equi-Weyl case, since
any direct product of finitely many equi-Weyl-$p$-almost periodic functions is again equi-Weyl-$p$-almost periodic, we can show that the function $F(\cdot)$
belongs to the class $e-W_{ap,\Lambda''}^{p}({\mathbb R}^{2n} : Y),$ where
$\Lambda'':=\{(a,a,\cdots, a) \in {\mathbb R}^{2n} : a\in {\mathbb R}\}.$
§.§ Weyl $p$-distance and Weyl $p$-boundedness
In this subsection, we will say a few words about
the Weyl $p$-distance and the Weyl $p$-boundedness. Let us recall the following notion from [10]: Suppose that the function $F : \Lambda \times X \rightarrow Y$ satisfies that for each ${\bf t}\in \Lambda$ and $x\in X$, the function $F({\bf t}+\cdot ;x)$ belongs to the space $L^{p}(\Omega : Y)$.
Then we say that $F(\cdot;\cdot)$ is Stepanov $p$-bounded on ${\mathcal B}$ if and only if for each $B\in {\mathcal B}$ we have
\begin{align*}
\sup_{{\bf t}\in \Lambda;x\in B}\Bigl\| F({\bf t}+\cdot ; x)\Bigr\|_{L^{p}(\Omega : Y)}<\infty.
\end{align*}
If $X=\{0\}$, then we say that the function $F(\cdot)$ is Stepanov $(\Omega, p)$-bounded if and only if
\|F\|_{S^{\Omega,p}}:=\sup_{{\bf t}\in \Lambda}\| F({\bf t}+{\bf u})\|_{L^{p}(\Omega : Y)}<\infty.
Suppose now that $F : \Lambda \times X \rightarrow Y$ and $G : \Lambda \times X \rightarrow Y$ are two
functions satisfying that $F({\bf t}+\cdot ;x)-G({\bf t}+\cdot ;x) \in L^{p}(l\Omega : Y) $ for all ${\bf t}\in \Lambda ,$ $x\in X$ and $l>0.$
The Stepanov distance $D_{S_{l\Omega}}^{p}(F(\cdot;x),G(\cdot ;x))$ of the functions $F(\cdot;x)$ and $G(\cdot;x)$ is defined by
D_{S_{l\Omega}}^{p}(F(\cdot;x),G(\cdot ;x)):=\sup_{{\bf t}\in \Lambda}\Bigl[ l^{-(n/p)} \bigl\| F({\bf t}+\cdot ;x)-G({\bf t}+\cdot ;x) \bigr\|_{L^{p}(l\Omega :Y) } \Bigr],
for any $x\in X$ and $l>0$; furthermore, we set
D_{S_{l\Omega},B}^{p}(F,G):=\sup_{x\in B}
D_{S_{l\Omega}}^{p}(F(\cdot;x),G(\cdot ;x)) \ \ (l>0, \ B\in {\mathcal B}).
It is clear that the assumptions $\tau \in {\mathbb R}^{n}$ and $\tau +\Lambda \subseteq \Lambda,$ resp. $\tau +\Lambda = \Lambda,$ imply
\begin{align}\label{jebiga}
D_{S_{l\Omega},B}^{p}(F(\cdot+\tau;\cdot),G(\cdot +\tau ; \cdot))\leq D_{S_{l\Omega},B}^{p}(F,G),\quad l>0, \ B\in {\mathcal B},
\end{align}
\begin{align}\label{jebiga1}
D_{S_{l\Omega},B}^{p}(F(\cdot+\tau;\cdot),G(\cdot +\tau ; \cdot))= D_{S_{l\Omega},B}^{p}(F,G),\quad l>0, \ B\in {\mathcal B}.
\end{align}
Arguing as in [10], we may conclude the following:
1. We have
D_{S_{l_{1}\Omega},B}^{p}(F,G)\leq \Bigl[ \frac{l_{2}}{l_{1}}\Bigr]^{n/p} D_{S_{l_{2}\Omega},B}^{p}(F,G),
provided that $l_{2}>l_{1}>0$ and $B\in {\mathcal B}.$
2. If $l_{2}>l_{1}>0$ and $l_{2}=kl_{1}+\theta l_{1}$ for some $k\in {\mathbb N}$ and $\theta \in [0,1),$ then
\begin{align*}
D_{S_{l_{2}\Omega},B}^{p}(F,G)\leq \Bigl(\frac{k+1}{k}\Bigr)^{n/p} \cdot D_{S_{l_{1}\Omega},B}^{p}(F,G),
\end{align*}
provided that $B\in {\mathcal B}.$
Hence, 1.-2. imply that for each $B\in {\mathcal B}$ we have
\limsup_{l\rightarrow \infty}D_{S_{l\Omega},B}^{p}(F,G)\leq D_{S_{l_{1}\Omega},B}^{p}(F,G),\quad l_{1}>0;
taking the limit inferior as $l_{1}\rightarrow \infty$, we get that
\limsup_{l\rightarrow \infty}D_{S_{l\Omega},B}^{p}(F,G)\leq \liminf_{l\rightarrow \infty}D_{S_{l\Omega},B}^{p}(F,G).
Hence, the limit
D_{W,B}^{p}(F,G):=\lim_{l\rightarrow \infty}D_{S_{l\Omega},B}^{p}(F,G)
exists, and for each $l>0$ we have
\begin{align}\label{mekomte}
D_{W,B}^{p}(F,G)\leq D_{S_{l\Omega},B}^{p}(F,G),\quad B\in {\mathcal B}.
\end{align}
We call this limit the Weyl $p$-distance of functions
$F(\cdot)$ and $G(\cdot)$ on $B$; the Weyl $p$-norm of function $F(\cdot)$ on $B$, denoted by $\|F\|_{W,B}^{p},$ is defined by $\|F\|_{W,B}^{p}:=D_{W,B}^{p}(F,0).$
Moreover, if $X\in {\mathcal B},$ then the Weyl $p$-norm $\|F\|_{W,B}^{p}$ of $F(\cdot)$ on $B$ is also said to be the Weyl $p$-norm of function $F(\cdot)$ and it is denoted by $\|F\|_{W}^{p}.$
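The scaling estimates between the Stepanov distances at different window sizes, as well as the existence of the Weyl $p$-distance as a limit, can be illustrated numerically. The following sketch (an informal illustration only; it assumes $n=1,$ $p=2,$ $F(t)=\sin t,$ $G=0,$ and approximates the supremum over one period by a finite grid) evaluates $D_{S_{l\Omega}}^{p}(F,G)$ at the scales $l_{1}=2$ and $l_{2}=5$:

```python
import numpy as np

def stepanov_dist(f, g, l, p=2.0, n_t=60, n_u=2000):
    # D_{S_{l Omega}}^p(f, g) = sup_t [ l^{-1/p} || f - g ||_{L^p(t, t+l)} ]  (n = 1);
    # for periodic data the sup over t may be taken over one period
    best = 0.0
    for t in np.linspace(0.0, 2.0 * np.pi, n_t):
        u = t + l * np.arange(n_u) / n_u
        du = l / n_u
        norm = (np.sum(np.abs(f(u) - g(u)) ** p) * du) ** (1.0 / p)
        best = max(best, l ** (-1.0 / p) * norm)
    return best

f, g, p = np.sin, (lambda u: np.zeros_like(u)), 2.0
l1, l2 = 2.0, 5.0
d1 = stepanov_dist(f, g, l1, p)
d2 = stepanov_dist(f, g, l2, p)
```

Both scaling inequalities, $d_{1}\leq (l_{2}/l_{1})^{1/p}d_{2}$ and, writing $l_{2}=kl_{1}+\theta l_{1}$ with $k=2$ and $\theta=1/2,$ $d_{2}\leq ((k+1)/k)^{1/p}d_{1},$ hold with room to spare; as $l\rightarrow \infty$ both quantities approach the Weyl distance between $\sin$ and $0,$ which equals $1/\sqrt{2}.$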
Due to (<ref>)-(<ref>), we have that the assumptions $\tau \in {\mathbb R}^{n}$ and $\tau +\Lambda \subseteq \Lambda,$ resp. $\tau +\Lambda = \Lambda,$ imply
\begin{align*}
D_{W,B}^{p}(F(\cdot+\tau;\cdot),G(\cdot +\tau ; \cdot))\leq D_{W,B}^{p}(F,G),\quad B\in {\mathcal B},
\end{align*}
\begin{align*}
D_{W,B}^{p}(F(\cdot+\tau;\cdot),G(\cdot +\tau ; \cdot))= D_{W,B}^{p}(F,G),\quad B\in {\mathcal B}.
\end{align*}
We will occasionally use the following condition:
(L): The function $F : \Lambda \times X \rightarrow Y$ satisfies that $F({\bf t}+\cdot ;x) \in L^{p}(l\Omega : Y) $ for all ${\bf t}\in \Lambda ,$ $x\in X$ and $l>0.$
Suppose that (L) holds.
Then we say that $F(\cdot;\cdot)$ is Weyl $p$-bounded on ${\mathcal B}$ if and only if for each $B\in {\mathcal B}$ we have $\|F\|_{W,B}^{p}<\infty;$ moreover, if $X\in {\mathcal B},$ then we say that $F(\cdot;\cdot)$ is Weyl $p$-bounded.
As is well known, the space of Weyl $p$-bounded functions is not complete with respect to the Weyl norm $\|\cdot\|_{W}^{p}$ in the case that $X\in {\mathcal B}.$ Further on, if (L) holds,
set ${\mathrm B}_{W,B}^{p}:=\{ F : \Lambda \times X \rightarrow Y \, ; \, \|F\|_{W,B}^{p}<+\infty\}$
($B\in {\mathcal B}$).
Let us recall that the terms “Weyl $p$-distance” and “Weyl $p$-norm” are slightly misleading because $D_{W,B}^{p}(\cdot,\cdot)$ is actually only a pseudometric on ${\mathrm B}_{W,B}^{p}$
(for example, the function $F:=\chi_{[0,1/2)}(\cdot)$ used before is a non-zero function and $\|F\|_{W}^{p}=0$ for all $p\geq 1$).
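This degeneracy is easy to observe numerically. The following sketch (an informal illustration only; it assumes $n=1,$ $p=1,$ and checks the supremum only at a few window positions, which suffices here because the supremum is attained as soon as the window $[t,t+l]$ covers the support of $f=\chi_{[0,1/2)}$) computes the Stepanov quantities $D_{S_{l\Omega}}^{p}(f,0)$ for growing $l$:

```python
import numpy as np

def f(u):
    # characteristic function of [0, 1/2)
    return ((u >= 0.0) & (u < 0.5)).astype(float)

def stepanov_norm(l, p=1.0, n_u=20000):
    # sup_t [ l^{-1/p} || f ||_{L^p(t, t+l)} ]  for n = 1
    best = 0.0
    for t in (-1.0, -0.25, 0.0):
        u = t + l * np.arange(n_u) / n_u
        du = l / n_u
        best = max(best, l ** (-1.0 / p) * (np.sum(f(u) ** p) * du) ** (1.0 / p))
    return best

norms = [stepanov_norm(l) for l in (1.0, 10.0, 100.0, 1000.0)]
```

For $p=1$ and $l\geq 1$ the computed quantity equals $(1/2)/l,$ so its limit, the Weyl $1$-norm of the non-zero function $f,$ is $0$: the pseudometric $D_{W,B}^{p}(\cdot,\cdot)$ separates functions only up to Weyl-null perturbations.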
The above analysis enables one to prove the following extension of a well-known statement from the one-dimensional framework:
Suppose that (L) holds. Then the function $F(\cdot;\cdot)$ is Weyl $p$-bounded on ${\mathcal B}$ if and only if $F(\cdot;\cdot)$ is Stepanov $p$-bounded on ${\mathcal B}.$
Clearly, if $F(\cdot;\cdot)$ is Stepanov $p$-bounded on ${\mathcal B},$ then $F(\cdot;\cdot)$ is Weyl $p$-bounded on ${\mathcal B}$ due to (<ref>). Suppose now that the function $F(\cdot;\cdot)$ is Weyl $p$-bounded on ${\mathcal B}.$ Let the set $B\in {\mathcal B}$ be fixed. Then there exist two finite real constants $M>0$ and $l\geq 1$ such that
$D_{S_{l\Omega},B}^{p}(F,0)\leq M,$ which implies that for each ${\bf t}\in \Lambda$ and $x\in B$ we have
\Bigl\| F({\bf t}+\cdot; x)\Bigr\|_{L^{p}(\Omega : Y)}\leq \Bigl\| F({\bf t}+\cdot; x)\Bigr\|_{L^{p}(l\Omega : Y)}\leq l^{n/p}D_{S_{l\Omega},B}^{p}(F,0)\leq l^{n/p}M.
This completes the proof.
Under the previous assumptions, the quantity
D_{W,B,1}^{p}(F,G):=\sup_{x\in B}D_{W}^{p}(F(\cdot;x),G(\cdot ;x))=\sup_{x\in B}\lim_{l\rightarrow +\infty}D_{S_{l\Omega}}^{p}(F(\cdot;x),G(\cdot ;x))
also exists and we clearly have
D_{W,B,1}^{p}(F,G)\leq D_{W,B}^{p}(F,G).
Finding some sufficient conditions ensuring that
D_{W,B,1}^{p}(F,G)\geq D_{W,B}^{p}(F,G)
could be an interesting problem; for simplicity, we will not consider the quantity $D_{W,B,1}^{p}(F,G)$ henceforth.
Suppose now that $F : \Lambda \times X \rightarrow Y,$ $G : \Lambda \times X \rightarrow Y$ and $H : \Lambda \times X \rightarrow Y$ satisfy
that $F({\bf t}+\cdot ;x)-G({\bf t}+\cdot ;x) \in L^{p}(l\Omega : Y) $ and $G({\bf t}+\cdot ;x)-H({\bf t}+\cdot ;x) \in L^{p}(l\Omega : Y) $ for all ${\bf t}\in \Lambda ,$ $x\in X$ and $l>0.$ Then
D_{S_{l\Omega},B}^{p}(F,G)\leq D_{S_{l\Omega},B}^{p}(F,H)+D_{S_{l\Omega},B}^{p}(H,G),\quad l>0,\ B\in {\mathcal B}
and therefore
\begin{align}\label{joj-boze}
D_{W,B}^{p}(F,G)\leq D_{W,B}^{p}(F,H)+D_{W,B}^{p}(H,G),\quad B\in {\mathcal B}.
\end{align}
Before we switch to Subsection <ref>, we will prove the following extension of <cit.> (cf. also <cit.> and <cit.>):
Suppose that any of the functions $F_{k} : \Lambda \times X \rightarrow Y$ ($k\in {\mathbb N}$) and $F : \Lambda \times X \rightarrow Y$ satisfies condition (L). If for each set $B\in {\mathcal B}$ we have $\lim_{k\rightarrow +\infty}\|F_{k}-F\|_{W,B}^{p}=0$ and
$F_{k}\in e-W_{ap,\Lambda',{\mathcal B}}^{p}(\Lambda \times X : Y)$ for all $k\in {\mathbb N}$, then $F\in e-W_{ap,\Lambda',{\mathcal B}}^{p}(\Lambda \times X : Y) .$
Let $\epsilon>0$ and $B\in {\mathcal B}$ be fixed. Then there exists $K\in {\mathbb N}$ such that $\|F_{K}-F\|_{W,B}^{p}<\epsilon/3;$ hence, there exists $l_{1}>0$ such that
\begin{align}\label{weyl188}
\sup_{{\bf t}\in \Lambda,x\in B}\Biggl[ l^{-n/p}\Bigl\| F_{K}(\cdot ;x ) -F(\cdot;x)\Bigr\|_{L^{p}({\bf t}+l\Omega : Y)} \Biggr]<\epsilon/3,\quad l\geq l_{1}.
\end{align}
On the other hand, since $F_{K}\in e-W_{ap,\Lambda',{\mathcal B}}^{p}(\Lambda \times X : Y) ,$ we have the existence of two real numbers $l_{2}>0$ and $L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L) \cap \Lambda'$ such that
\begin{align}\label{weyl188188}
\sup_{{\bf t}\in \Lambda,x\in B}\Biggl[ l_{2}^{-n/p}\Bigl\| F_{K}(\cdot +\tau;x ) -F_{K}(\cdot;x)\Bigr\|_{L^{p}({\bf t}+l_{2}\Omega : Y)} \Biggr]<2^{-n/p}\epsilon/3.
\end{align}
Set $l:=\max(l_{1},l_{2})$ and fix ${\bf t}\in \Lambda$ and $x\in B$. Then there exist an integer $k\in {\mathbb N}$ and a number $\theta \in [0,1)$ such that $l=kl_{2}+\theta l_{2}.$ Due to (<ref>), we have:
\begin{align*}
&\Biggl[ l^{-n}\int_{{\bf t}+l\Omega}\Bigl\| F_{K}({\bf u}+\tau;x)-F_{K}({\bf u};x)\Bigr\|_{Y}^{p}\, d{\bf u} \Biggr]^{1/p}
\\& \leq \Biggl[ \bigl(kl_{2}\bigr)^{-n}\int_{{\bf t}+(k+1)l_{2}\Omega}\Bigl\| F_{K}({\bf u}+\tau;x)-F_{K}({\bf u};x)\Bigr\|_{Y}^{p}\, d{\bf u} \Biggr]^{1/p}
\\& \leq \Biggl[ \bigl(kl_{2}\bigr)^{-n}2^{-n}(k+1)^{n}\epsilon^{p}3^{-p}l_{2}^{n} \Biggr]^{1/p}= 2^{-n/p}\frac{(k+1)^{n/p}}{k^{n/p}}\frac{\epsilon}{3}\leq \epsilon/3.
\end{align*}
Using this estimate and (<ref>), we get:
\begin{align*}
& l^{-n/p}\Bigl\| F(\cdot+\tau;x)-F(\cdot;x)\Bigr\|_{L^{p}({\bf t}+l\Omega : Y)}
\\& \leq l^{-n/p}\Biggl[\Bigl\| F(\cdot+\tau;x)-F_{K}(\cdot+\tau;x)\Bigr\|_{L^{p}({\bf t}+l\Omega : Y)}
\\&+\Bigl\| F_{K}(\cdot+\tau;x)-F_{K}(\cdot;x)\Bigr\|_{L^{p}({\bf t}+l\Omega : Y)}+\Bigl\| F_{K}(\cdot;x)-F(\cdot;x)\Bigr\|_{L^{p}({\bf t}+l\Omega : Y)}\Biggr]
\\& \leq 3 \cdot \frac{\epsilon}{3}=\epsilon,
\end{align*}
which completes the proof.
§.§ Weyl $p$-normality and Weyl approximations by trigonometric polynomials
We will first introduce the following notion (see also <cit.> and <cit.>):
Suppose that (L) holds, ${\mathrm R}$ is a non-empty collection of sequences in ${\mathbb R}^{n}$ and the following holds:
\begin{align}\label{lepolepo}
\mbox{if}\ \ {\bf t}\in \Lambda,\ {\bf b}\in {\mathrm R}\ \mbox{ and }\ m\in {\mathbb N},\ \mbox{ then we have }\ {\bf t}+{\bf b}(m)\in \Lambda .
\end{align}
Then we say that the function $F(\cdot;\cdot)$ is
Weyl-$({\mathrm R},{\mathcal B},p)$-normal if and only if
for every $B\in {\mathcal B}$ and $({\bf b}_{k}=(b_{k}^{1},b_{k}^{2},\cdots ,b_{k}^{n}))\in {\mathrm R}$ there exists a subsequence $({\bf b}_{k_{m}}=(b_{k_{m}}^{1},b_{k_{m}}^{2},\cdots , b_{k_{m}}^{n}))$ of $({\bf b}_{k})$ such that $(F(\cdot+(b_{k_{m}}^{1},\cdots, b_{k_{m}}^{n});\cdot))_{m\in {\mathbb N}}$
is a Cauchy sequence with respect to the pseudometric $D_{W,B}^{p}(\cdot,\cdot).$
If ${\mathrm R}_{X}$ is a non-empty collection of sequences in ${\mathbb R}^{n} \times X$ satisfying certain conditions,
then the notion of Weyl-$({\mathrm R}_{X},{\mathcal B},p)$-normality can also be introduced following the approach used to introduce the corresponding notion in <cit.>.
By a trigonometric polynomial $P : \Lambda \times X \rightarrow Y$ we mean any linear combination of functions of the form
\begin{align*}
e^{i[\lambda_{1}t_{1}+\lambda_{2}t_{2}+\cdot \cdot \cdot +\lambda_{n}t_{n}]}c(x),
\end{align*}
where $\lambda_{i}$ are real numbers ($1\leq i \leq n$) and $c: X \rightarrow Y$ is a continuous mapping.
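For readers who wish to experiment numerically, the evaluation of such a trigonometric polynomial can be sketched as follows (a minimal illustration, assuming $n=2$, $Y={\mathbb C}^{2}$, and frequencies $\lambda_{j}$ and coefficient maps $c_{j}$ chosen purely for demonstration):

```python
import numpy as np

def trig_poly(t, x, freqs, coeffs):
    """Evaluate P(t; x) = sum_j exp(i <lambda_j, t>) * c_j(x).

    t      : point in R^n
    x      : parameter passed to each coefficient map c_j
    freqs  : list of frequency vectors lambda_j in R^n
    coeffs : list of continuous maps c_j : X -> Y (here Y = C^2)
    """
    return sum(np.exp(1j * np.dot(lam, t)) * c(x)
               for lam, c in zip(freqs, coeffs))

# Illustrative data (n = 2, Y = C^2): two frequencies, two coefficient maps.
freqs = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
coeffs = [lambda x: np.array([x, 1.0]), lambda x: np.array([1.0, -x])]

t = np.array([0.3, -0.7])
shift = np.array([2 * np.pi, np.pi])  # a common period of both exponentials
p1 = trig_poly(t, 0.5, freqs, coeffs)
p2 = trig_poly(t + shift, 0.5, freqs, coeffs)
print(np.allclose(p1, p2))
```

Since every exponential $e^{i[\lambda_{1}t_{1}+\cdot \cdot \cdot +\lambda_{n}t_{n}]}$ is periodic in each coordinate, any common period of the chosen frequencies is recovered exactly; this elementary observation underlies the almost periodicity of trigonometric polynomials used below.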
Now we are in a position to introduce the following generalization of the notion considered in <cit.>:
\index{space!$e-{\mathcal B}-W^{p}(\Lambda \times X : Y)$}
Suppose that (L) holds.
Then we say that the function $F(\cdot;\cdot)$ belongs to the space
$e-{\mathcal B}-W^{p}(\Lambda \times X : Y)$ if and only if
for every $B\in {\mathcal B}$ and for every $\epsilon>0$ there exist a real number $l_{0}>0$ and a trigonometric polynomial $P(\cdot;\cdot)$ such that
\begin{align}\label{agape-laf}
\sup_{x\in B,{\bf t}\in \Lambda}\Biggl[ l^{-n/p} \Bigl\| P({\bf t} +\cdot ;x)-F({\bf t}+\cdot ;x) \Bigr\|_{L^{p}(l\Omega : Y)}\Biggr]<\epsilon,\quad l\geq l_{0}.
\end{align}
If $X\in {\mathcal B},$ then we also say that $F(\cdot)$ belongs to the space
$e-W^{p}(\Lambda \times X : Y).$
In other words, if (L) holds, then $F\in e-{\mathcal B}-W^{p}(\Lambda \times X : Y)$ if and only if for every $B\in {\mathcal B}$ there exists a sequence of trigonometric polynomials $P_{m}(\cdot;\cdot)$ such that $\lim_{m\rightarrow +\infty}D_{W,B}^{p}(F,P_{m})=0.$ Now we will state the following extension of <cit.>:
Suppose that (L) holds and
$F\in e-{\mathcal B}-W^{p}(\Lambda \times X : Y).$ Let ${\mathrm R}$ be the collection of all sequences in ${\mathbb R}^{n}$ for which (<ref>) holds, and let ${\mathcal B}$ be any collection of compact subsets of $X.$ Then the function $F(\cdot;\cdot)$ is
Weyl-$({\mathrm R},{\mathcal B},p)$-normal.
Let $B\in {\mathcal B}$ be fixed, let $({\bf b}_{k}=(b_{k}^{1},b_{k}^{2},\cdot \cdot\cdot ,b_{k}^{n}))\in {\mathrm R},$ and let $(P_{Q}(\cdot;\cdot))_{Q\in {\mathbb N}}$ be a sequence of trigonometric polynomials such that $\lim_{Q\rightarrow +\infty}D_{W,B}^{p}(F,P_{Q})=0.$ Using <cit.>,
for every $Q\in {\mathbb N},$ we can always find a sequence $((b_{k_{m;Q}}^{1},\cdot \cdot \cdot, b_{k_{m;Q}}^{n} ))_{m\in {\mathbb N}}$ and a function $F_{Q}: {\mathbb R}^{n} \times X \rightarrow Y$
such that
\begin{align}\label{slobeks}
\lim_{m\rightarrow +\infty}P_{Q}\Bigl({\bf t}+\bigl(b_{k_{m;Q}}^{1},\cdot \cdot \cdot, b_{k_{m;Q}}^{n} \bigr);x\Bigr)=F_{Q}({\bf t};x),
\end{align}
uniformly for ${\bf t}\in {\mathbb R}^{n} $ and $x\in B;$ furthermore, we may assume that the sequence $((b_{k_{m;Q}}^{1},\cdot \cdot \cdot, b_{k_{m;Q}}^{n} ))_{m\in {\mathbb N}}$ is a subsequence of all sequences $((b_{k_{m;Q'}}^{1},\cdot \cdot \cdot, b_{k_{m;Q'}}^{n} ))_{m\in {\mathbb N}}$ for $1\leq Q'\leq Q$ and the initial sequence $({\bf b}_{k}=(b_{k}^{1},b_{k}^{2},\cdot \cdot\cdot ,b_{k}^{n}))$
as well as that
$(k_{m;m})$ is a strictly increasing sequence of positive integers. Then
a subsequence $({\bf b}_{k_{m}}=(b_{k_{m;m}}^{1},b_{k_{m;m}}^{2},\cdot \cdot\cdot , b_{k_{m;m}}^{n}))$ of $({\bf b}_{k})$ satisfies that $(F(\cdot+(b_{k_{m;m}}^{1},b_{k_{m;m}}^{2},\cdot \cdot\cdot , b_{k_{m;m}}^{n});\cdot))_{m\in {\mathbb N}}$
is a Cauchy sequence with respect to the metric $D_{W,B}^{p}(\cdot,\cdot).$ Indeed, let $\epsilon>0$ be given. Then there exists $s\in {\mathbb N}$ such that $D_{W,B}^{p}(P_{s},F)<\epsilon/3$ and we have, due to (<ref>),
\begin{align*}
& D_{W,B}^{p}\Bigl(F\bigl(\cdot +(b_{k_{m;m}}^{1},b_{k_{m;m}}^{2},\cdot \cdot\cdot , b_{k_{m;m}}^{n});x\bigr) , F\bigl(\cdot +(b_{k_{m';m'}}^{1},b_{k_{m';m'}}^{2},\cdot \cdot\cdot , b_{k_{m';m'}}^{n});x\bigr) \Bigr)
\\& \leq D_{W,B}^{p}\Bigl(F\bigl(\cdot +(b_{k_{m;m}}^{1},b_{k_{m;m}}^{2},\cdot \cdot\cdot , b_{k_{m;m}}^{n});x\bigr), P_{s}\bigl(\cdot +(b_{k_{m;m}}^{1},b_{k_{m;m}}^{2},\cdot \cdot\cdot , b_{k_{m;m}}^{n});x\bigr) \Bigr)
\\& +D_{W,B}^{p}\Bigl(P_{s}\bigl(\cdot +(b_{k_{m;m}}^{1},b_{k_{m;m}}^{2},\cdot \cdot\cdot , b_{k_{m;m}}^{n});x\bigr), P_{s}\bigl(\cdot +(b_{k_{m';m'}}^{1},b_{k_{m';m'}}^{2},\cdot \cdot\cdot , b_{k_{m';m'}}^{n});x\bigr) \Bigr)
\\& +D_{W,B}^{p}\Bigl(P_{s}\bigl(\cdot +(b_{k_{m';m'}}^{1},b_{k_{m';m'}}^{2},\cdot \cdot\cdot , b_{k_{m';m'}}^{n});x\bigr), F\bigl(\cdot +(b_{k_{m';m'}}^{1},b_{k_{m';m'}}^{2},\cdot \cdot\cdot , b_{k_{m';m'}}^{n});x\bigr) \Bigr)
\\& \leq 2D_{W,B}^{p}\bigl( F,P_{s} \bigr)
\\& +D_{W,B}^{p}\Bigl(P_{s}\bigl(\cdot +(b_{k_{m;m}}^{1},b_{k_{m;m}}^{2},\cdot \cdot\cdot , b_{k_{m;m}}^{n});x\bigr), P_{s}\bigl(\cdot +(b_{k_{m';m'}}^{1},b_{k_{m';m'}}^{2},\cdot \cdot\cdot , b_{k_{m';m'}}^{n});x\bigr) \Bigr)
\\& \leq 2\epsilon/3
+D_{W,B}^{p}\Bigl(P_{s}\bigl(\cdot +(b_{k_{m;m}}^{1},b_{k_{m;m}}^{2},\cdot \cdot\cdot , b_{k_{m;m}}^{n});x\bigr), P_{s}\bigl(\cdot +(b_{k_{m';m'}}^{1},b_{k_{m';m'}}^{2},\cdot \cdot\cdot , b_{k_{m';m'}}^{n});x\bigr) \Bigr)
\\& \leq 2\epsilon/3
\\& +\sup_{y\in B}\sup_{{\bf t}\in \Lambda}\Bigl\| P_{s}\bigl({\bf t} +(b_{k_{m;m}}^{1},b_{k_{m;m}}^{2},\cdot \cdot\cdot , b_{k_{m;m}}^{n});y\bigr)- P_{s}\bigl({\bf t} +(b_{k_{m';m'}}^{1},b_{k_{m';m'}}^{2},\cdot \cdot\cdot , b_{k_{m';m'}}^{n});y\bigr) \Bigr\|_{Y},
\end{align*}
for every $m,\ m'\in {\mathbb N}$ and $x\in B.$ Since $((b_{k_{m;m}}^{1},\cdot \cdot \cdot, b_{k_{m;m}}^{n} ))_{m\in {\mathbb N}}$ is a subsequence of the sequence $((b_{k_{m;s}}^{1},\cdot \cdot \cdot, b_{k_{m;s}}^{n} ))_{m\in {\mathbb N}}$ for $s\leq m,$ this simply implies the required statement by applying (<ref>) with $Q=s.$
Our next structural result generalizes <cit.>:
Suppose that (L) holds, ${\mathcal B}$ is any collection of bounded subsets of $X$ and $F\in e-{\mathcal B}-W^{p}(\Lambda \times X : Y).$ Then $F\in e-W^{p}_{ap,\Lambda,{\mathcal B}}(\Lambda \times X : Y).$
Let a bounded set $B\in {\mathcal B}$ and a real number $\epsilon>0$
be given.
By definition, there exist a real number $l_{0}>0$ and a trigonometric polynomial $P(\cdot;\cdot)$ such that (<ref>) holds with $\epsilon/3$ instead of $\epsilon$. Write
\begin{align*}
P({\bf t};x):=\sum_{j=1}^{k}e^{i[\lambda_{1,j}t_{1}+\lambda_{2,j}t_{2}+\cdot \cdot \cdot +\lambda_{n,j}t_{n}]}c_{j}(x),\quad {\bf t}=\bigl(t_{1},t_{2},\cdot \cdot \cdot,t_{n}\bigr)\in {\mathbb R}^{n},\ x\in X,
\end{align*}
for some integer $k\in {\mathbb N}.$ Since the functions $c_{j}(\cdot)$ are continuous ($1\leq j\leq k$), there exists a finite real constant $M>1$ such that
\begin{align*}
\sup_{x\in B}\sup_{1\leq j\leq k}\bigl\| c_{j}(x) \bigr\|_{Y}\leq M.
\end{align*}
Since every trigonometric polynomial is almost periodic in ${\mathbb R}^{n}$ (cf. [9]), the existence of such a constant $M$ and the Bochner criterion
applied to the functions $e^{i[\lambda_{1,j}t_{1}+\lambda_{2,j}t_{2}+\cdot \cdot \cdot +\lambda_{n,j}t_{n}]}$
for $1\leq j\leq k$, together imply the existence of a finite real number $L>0$ such that
for each point ${\bf t}_{0}\in {\mathbb R}^{n}$ there exists $\tau \in B({\bf t}_{0},L)$ which satisfies $\| P({\bf t}+\tau;x)-P({\bf t};x)\|_{Y}\leq (\epsilon/3)$ for all ${\bf t}\in {\mathbb R}^{n}$ and $x\in B.$ Suppose now that ${\bf t}_{0}\in \Lambda$ and $\tau \in B({\bf t}_{0},L)$ is chosen as above. This yields
\begin{align*}
\Bigl\|F(\tau &+\cdot ;x)-F(\cdot;x)\Bigr\|_{L^{p}({\bf t}+l\Omega)}\leq \Bigl\|F(\tau +\cdot ;x)-P(\tau +\cdot ;x)\Bigr\|_{L^{p}({\bf t}+l\Omega)}
\\& +\Bigl\|P(\tau +\cdot ;x)-P(\cdot;x)\Bigr\|_{L^{p}({\bf t}+l\Omega)} +\Bigl\|P(\cdot ;x)-F(\cdot;x)\Bigr\|_{L^{p}({\bf t}+l\Omega)}
\\& \leq \frac{2\epsilon}{3}l^{n/p}+\Bigl\|P(\tau +\cdot ;x)-P(\cdot;x)\Bigr\|_{L^{p}({\bf t}+l\Omega)}\leq \frac{2\epsilon}{3}l^{n/p}+\frac{\epsilon}{3}l^{n/p}=\epsilon l^{n/p},
\end{align*}
which completes the proof.
Now we will extend the statement of <cit.> in the following way:
Suppose that $F\in e-W^{p}_{ap,\Lambda',{\mathcal B}}(\Lambda \times X : Y)$ and there exists a finite real number $M>0$ such that, for every ${\bf t}\in \Lambda ,$ there exists ${\bf t}_{0}
\in \Lambda'$ such that $|{\bf t}+{\bf t}_{0}|\leq M.$ Then for each $B\in {\mathcal B}$ there exist real numbers $l>0$ and $M'>0$ such that
\begin{align*}
\sup_{{\bf t}\in \Lambda, x\in B}\Bigl[ l^{-(n/p)} \bigl\| F(\cdot ;x) \bigr\|_{L^{p}({\bf t}+l\Omega :Y) } \Bigr] \leq M'.
\end{align*}
Let the set $B\in {\mathcal B}$ be fixed and let $\epsilon=1.$ Then there exist real numbers $l>0$ and $L>0$
such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L)\cap \Lambda'$ such that (<ref>) holds.
Fix now a point ${\bf t}\in \Lambda.$ Due to our assumption, there exists ${\bf t}_{0}\in \Lambda'$ such that $|{\bf t}+{\bf t}_{0}|\leq M.$ Choose $\tau$ as above for this ${\bf t}_{0}.$ Then $|{\bf t}+\tau|\leq |{\bf t}+{\bf t}_{0}|+|{\bf t}_{0}-\tau|\leq M+L,$ so that
\begin{align*}
\bigl\| F(\cdot ;x) \bigr\|_{L^{p}({\bf t}+l\Omega :Y) } &\leq \bigl\| F(\cdot ;x)- F(\tau+\cdot ;x)\bigr\|_{L^{p}({\bf t}+l\Omega :Y) }+\bigl\| F(\tau +\cdot ;x) \bigr\|_{L^{p}({\bf t}+l\Omega :Y)}
\\& \leq l^{n/p}+\bigl\| F(\cdot ;x) \bigr\|_{L^{p}({\bf t}+\tau+l\Omega :Y) }
\\& \leq l^{n/p}+\sup_{|{\bf v}|\leq M+L}\bigl\| F(\cdot ;x) \bigr\|_{L^{p}({\bf v}+l\Omega :Y) },
\end{align*}
which simply completes the proof.
Similarly we can prove the following extension of <cit.>:
Suppose that $F\in e-W^{p}_{ap,\Lambda'}(\Lambda : Y)$ and there exists a finite real number $M>0$ such that, for every ${\bf t}\in \Lambda ,$ there exists ${\bf t}_{0}
\in \Lambda'$ such that $|{\bf t}+{\bf t}_{0}|\leq M.$ Then $F(\cdot)$ is equi-$W^{p}$-uniformly continuous, i.e., for each $\epsilon>0$ there exist real numbers $l>0$ and $\delta>0$ such that, for every ${\bf v}\in \Lambda$ with $|{\bf v}|\leq \delta,$ we have
\begin{align*}
\sup_{{\bf t}\in {\Lambda}}\Biggl[ l^{-n/p}\Bigl\| F({\bf t}+\cdot +{\bf v})-F({\bf t}+\cdot ) \Bigr\|_{L^{p}(l\Omega)} \Biggr]<\epsilon.
\end{align*}
Now we are able to prove the following generalization of <cit.>:
Suppose that (L) holds with $X=\{0\}$ and ${\mathcal B}=\{X\}$. Then $F\in e-W^{p}_{ap,{\mathbb R}^{n}}({\mathbb R}^{n} : Y)$ if and only if $F\in e-W^{p}({\mathbb R}^{n} : Y).$
\index{van Hove sequence}
Clearly, if $F\in e-W^{p}({\mathbb R}^{n} : Y),$ then $F\in e-W^{p}_{ap,{\mathbb R}^{n}}({\mathbb R}^{n} : Y)$ due to Proposition <ref>. In order to prove that the assumption $F\in e-W^{p}_{ap,{\mathbb R}^{n}}({\mathbb R}^{n} : Y)$ implies
$F\in e-W^{p}({\mathbb R}^{n} : Y),$ we basically follow the approach used in the proof of <cit.> in the abstract framework developed by T. Spindeler [41] for the scalar-valued equi-Weyl-$p$-almost periodic functions defined on the locally compact Abelian group $G={\mathbb R}^{n},$ with a slight abuse of notation. First of all, we note that
the sequence $(A_{l}\equiv l\Omega)_{l\in {\mathbb N}}$ is a van Hove
sequence (see also Example <ref> and the proof of Theorem <ref> below) in the sense of <cit.> as well as that
Proposition <ref> implies that $F(\cdot)$ is equi-$W^{p}$-uniformly continuous, so that <cit.> continue to hold in the vector-valued case. It can be simply shown that the construction of the kernel $K : {\mathbb R}^{n} \rightarrow [0,\infty)$ carries over to the vector-valued functions,
so that <cit.> continue to hold in the vector-valued case, as well. Further on, for a real number $\epsilon>0$ given in advance, the function
\begin{align*}
\Theta({\bf t}):=\liminf_{l\rightarrow +\infty}l^{-n}\int_{l\Omega}F({\bf t}+{\bf s})K({\bf s})\, d{\bf s}=\lim_{l\rightarrow +\infty}l^{-n}\int_{l\Omega}F({\bf t}+{\bf s})K({\bf s})\, d{\bf s},\quad {\bf t} \in {\mathbb R}^{n},
\end{align*}
is almost periodic and satisfies $D_{W,B}^{p}(F,\Theta)<\epsilon$ by the same argumentation as in the proof of implication (2) $\Rightarrow$ (1) of <cit.>. The remainder of the proof is trivial and therefore omitted.
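The van Hove property of the sequence $(l\Omega)_{l\in {\mathbb N}}$ invoked above can be checked by an elementary volume computation. The following sketch (assuming the collar-volume formulation of the van Hove condition, with purely illustrative parameters) confirms that the relative volume of a $K$-collar of the cube $[0,l]^{n}$ vanishes as $l\rightarrow +\infty$:

```python
def van_hove_ratio(l, K, n):
    """Relative volume of the K-collar of the cube [0, l]^n:
    vol([-K, l+K]^n \\ [K, l-K]^n) / vol([0, l]^n)."""
    return ((l + 2 * K) ** n - max(l - 2 * K, 0) ** n) / l ** n

# The ratios decrease toward 0 as l grows, which is the van Hove property.
print([round(van_hove_ratio(l, K=1, n=3), 4) for l in (10, 100, 1000)])
```

The ratio behaves like $c\,K/l$ for large $l$, so the cubes $l\Omega$ average out their boundary, exactly what is needed for the mean-value arguments of this proof.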
Now we would like to introduce the following notion:
Suppose that $\emptyset \neq \Lambda \subseteq {\mathbb R}^{n},$ $\Lambda+\Lambda+ l\Omega\subseteq \Lambda$ and $\Lambda +l\Omega \subseteq \Lambda $
for all $l>0.$ Then we say that $\Lambda$ is admissible with respect to the (equi-)Weyl-$p$-almost periodic extensions if and only if for any complex Banach space $Y$ and for any function $F \in (e-)W^{p}_{ap,\Lambda}(\Lambda : Y)$
there exists a function $\tilde{F}\in (e-)W^{p}_{ap,{\mathbb R}^{n}}({\mathbb R}^{n} : Y)$ such that $\tilde{F}({\bf t})=F({\bf t})$ for all ${\bf t}\in \Lambda.$
\index{set!admissible with respect to the (equi-)Weyl-$p$-almost periodic extensions}
Now we are able to state the following extensions of <cit.>, whose proofs are immediate consequences of Theorem <ref>, the fact that $e-W^{p}({\mathbb R}^{n} : Y)$ is a vector space and the notion introduced in Definition <ref>:
Suppose that $\emptyset \neq \Lambda \subseteq {\mathbb R}^{n},$ $\Lambda+\Lambda+ l\Omega\subseteq \Lambda$ and $\Lambda +l\Omega \subseteq \Lambda $
for all $l>0.$ If $\Lambda$ is admissible with respect to the equi-Weyl-$p$-almost periodic extensions, then $e-W^{p}_{ap,\Lambda}(\Lambda : Y)$ is a vector space.
Suppose that $\emptyset \neq \Lambda \subseteq {\mathbb R}^{n},$ $\Lambda+\Lambda+ l\Omega\subseteq \Lambda$ and $\Lambda +l\Omega \subseteq \Lambda $
for all $l>0.$ Suppose, further, that $p,\ q,\ r\in [1,\infty),$ $1/p+1/r=1/q,$ $\Lambda$ is admissible with respect to the equi-Weyl-$p$-almost periodic extensions, $f\in e-W^{p}_{ap,\Lambda}(\Lambda : {\mathbb C})$ and $F\in e-W^{r}_{ap,\Lambda}(\Lambda : Y).$
Define $F_{1}({\bf t}):=f({\bf t})F({\bf t}),$ ${\bf t}\in \Lambda.$ Then
$F_{1} \in e-W^{q}_{ap,\Lambda}(\Lambda : Y).$
Before proceeding further, let us note that Theorem <ref> can be illustrated by many elaborate
examples. For instance,
we know that there exists a bounded scalar-valued infinitely differentiable Weyl-$p$-almost periodic function $f: {\mathbb R} \rightarrow {\mathbb R}$ for all $p\in [1,\infty)$ such that the regular distribution determined by this function is not almost periodic (cf. [4], <cit.> and [26] for the notion and more details). Define now
\begin{align*}
F\bigl(t_{1},t_{2},\cdot \cdot \cdot,t_{n}\bigr):=f(t_{1})f(t_{2})\cdot \cdot \cdot f(t_{n}),\quad {\bf t}=\bigl(t_{1},t_{2},\cdot \cdot \cdot,t_{n}\bigr)\in {\mathbb R}^{n}.
\end{align*}
Then Theorem <ref> inductively implies that $F \in e-W^{p}_{ap,{\mathbb R}^{n}}({\mathbb R}^{n} : Y)$ for all $p\in [1,\infty)$ (even for all $p\in D_{+}(\Omega)$).
It is clear that the notion introduced in Definition <ref> is not trivial as well as that some known results for the usual classes of multi-dimensional Bohr and Stepanov
almost periodic type functions cannot be easily transferred to the corresponding Weyl classes. In connection with this problem, we would like to ask the following question, which does not seem to have been proposed elsewhere, even in the one-dimensional setting:
Problem. Suppose that $\Lambda$ is a convex polyhedral cone in ${\mathbb R}^{n}$, i.e., there exists
a basis
$({\bf v}_{1},\cdot \cdot \cdot ,{\bf v}_{n})$ of ${\mathbb R}^{n}$ such that
\begin{align*}
\Lambda=\bigl\{ \alpha_{1} {\bf v}_{1} +\cdot \cdot \cdot +\alpha_{n}{\bf v}_{n} : \alpha_{i} \geq 0\mbox{ for all }i\in {\mathbb N}_{n} \bigr\}.
\end{align*}
Is it true that $\Lambda$ is admissible with respect to the (equi-)Weyl-$p$-almost periodic extensions?
In the remainder of this section, we assume that condition (L) holds. If
$\tau \in {\mathbb R}^{n}$ satisfies $\tau +\Lambda \subseteq \Lambda$ and
$F\in {\mathrm B}_{W,B}^{p}$ for all $B\in {\mathcal B},$ then $F(\cdot +\tau;\cdot)\in {\mathrm B}_{W,B}^{p}$ for all $B\in {\mathcal B}.$
Therefore, the following notion is meaningful:
Suppose that $F : \Lambda \times X \rightarrow Y$ is such that (L) holds. If $\Lambda_{0} \subseteq \{ \tau \in {\mathbb R}^{n} : \tau +\Lambda \subseteq \Lambda\},$ then we say that the function $F(\cdot;\cdot)$ is $({\mathcal B},\Lambda_{0})$-normal if and only if for each $B\in {\mathcal B}$ the set ${\mathrm F}_{\Lambda_{0}}\equiv \{ F(\cdot +\tau;\cdot) : \tau \in \Lambda_{0}\}$ is totally bounded in the pseudometric space $({\mathrm B}_{W,B}^{p},D_{W,B}^{p}),$ which means that for any $\epsilon>0$ and $B\in {\mathcal B}$ the set ${\mathrm F}_{\Lambda_{0}}$ admits a
cover by finitely many open balls of radius $\epsilon$ in $({\mathrm B}_{W,B}^{p},D_{W,B}^{p}).$
Consider now the following condition:
\begin{itemize}
\item[(WM3):]
$\emptyset \neq \Lambda \subseteq {\mathbb R}^{n},$ $\emptyset \neq \Lambda' \subseteq {\mathbb R}^{n},$ $\emptyset \neq \Lambda'' \subseteq {\mathbb R}^{n},$
$\Omega=[0,1]^{n},$ $p({\bf u})\equiv p\in [1,\infty),$
$\Lambda''+\Lambda+ l\Omega \subseteq \Lambda,$ $\Lambda+ l\Omega \subseteq \Lambda$ for all $l>0,$
$\phi (x)\equiv x$ and ${\mathbb F}(l,{\bf t})\equiv l^{-n/p}.$
\end{itemize}
The following notion plays an important role in our further investigations of the notion introduced in Definition \ref{jebaigaman}:
\begin{defn}
\label{marinavisconti}
Suppose that (WM3) holds.
\begin{itemize}
\item[(i)]
By $e-W^{p}_{\Omega,\Lambda',\Lambda'',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exist two finite real numbers $l>0$ and
$L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L)\cap \Lambda''$ such that
\begin{align*}
\sup_{x\in B}\sup_{{\bf t}\in \Lambda}{\mathbb F}(l,{\bf t})\phi\Bigl( \bigl\| F({\bf \tau}+{\bf u};x)-F({\bf u};x) \bigr\|_{Y}\Bigr)_{L^{p({\bf u})}({\bf t}+l\Omega)} <\epsilon.
\end{align*}
\index{space!$e-W^{p}_{\Omega,\Lambda',\Lambda'',{\mathcal B}}(\Lambda\times X :Y)$}
\item[(ii)] By $W^{p}_{\Omega,\Lambda',\Lambda'',{\mathcal B}}(\Lambda\times X :Y)$ we denote the set consisting of all functions $F : \Lambda \times X \rightarrow Y$ such that, for every $\epsilon>0$ and $B\in {\mathcal B},$ there exists a finite real number
$L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau \in B({\bf t}_{0},L) \cap \Lambda''$ such that
\begin{align*}
\limsup_{l\rightarrow +\infty}\sup_{x\in B}\sup_{{\bf t}\in \Lambda}{\mathbb F}(l,{\bf t})\phi\Bigl( \bigl\| F({\bf \tau}+{\bf u};x)-F({\bf u};x) \bigr\|_{Y}\Bigr)_{L^{p({\bf u})}({\bf t}+l\Omega)}<\epsilon .
\end{align*}
\index{space!$W^{p}_{\Omega,\Lambda',\Lambda'',{\mathcal B}}(\Lambda\times X :Y)$}
\end{itemize}
\end{defn}
Now we are able to state the following result (see also \cite[Corollary 4.24]{deda} and the proof of sufficiency in \cite[Theorem 4.12]{deda}):
\begin{prop}\label{marencew}
Suppose that
$F : \Lambda \times X \rightarrow Y$ is such that \emph{(L)} holds, $\Lambda_{0} \subseteq \{ \tau \in {\mathbb R}^{n} : \tau +\Lambda \subseteq \Lambda\},$
$F(\cdot;\cdot)$ is $({\mathcal B},\Lambda_{0})$-normal, $\tau +\Lambda=\Lambda$ for all $\tau \in \Lambda_{0},$ and condition
\emph{(WM3)} holds with $\Lambda':=-
\Lambda_{0},$ $\Lambda'':=\Lambda_{0}-\Lambda_{0}.$ Then $F\in W^{p}_{\Omega,\Lambda',\Lambda'',{\mathcal B}}(\Lambda\times X :Y).$
\end{prop}
\begin{proof}
Let $\epsilon>0$ and $B\in {\mathcal B}$ be fixed.
Due to the $({\mathcal B},\Lambda_{0})$-normality of the function $F(\cdot;\cdot)$, there exist a positive integer $m\in {\mathbb N}$ and a finite subset $\{\tau_{1},\tau_{2},\cdot \cdot \cdot, \tau_{m}\}$ of $\Lambda_{0}$ such that for each ${\bf t}_{0}=-\tau\in
-\Lambda_{0}$ there exist $j\in {\mathbb N}_{m}$ and $l_{0}>0$ such that, for every $l\geq l_{0}$ and $x\in B,$ we have
\begin{align*}
\sup_{{\bf t}\in \Lambda,x\in B}\Biggl[ l^{-n/p}\Bigl\| F({\bf t}+\tau+\cdot;x)-F({\bf t}+\tau_{j}+\cdot;x)\Bigr\|_{L^{p}(l\Omega : Y)} \Biggr]<\epsilon.
\end{align*}
Substituting $T={\bf t}+\tau$ and using the assumption that $\tau +\Lambda=\Lambda$ for all $\tau \in \Lambda_{0},$ the above implies
\begin{align*}
\sup_{{\bf t}\in \Lambda,x\in B}\Biggl[ l^{-n/p}\Bigl\| F({\bf t}+\cdot;x)-F({\bf t}+(\tau_{j}-\tau)+\cdot;x)\Bigr\|_{L^{p}(l\Omega : Y)} \Biggr]<\epsilon.
\end{align*}
Set $L:=\max\{ |\tau_{j}| : j\in {\mathbb N}_{m} \}.$ Then $\tau_{j}-\tau\in \Lambda_{0}-\Lambda_{0}$ and $\tau_{j}-\tau\in B({\bf t}_{0},L),$ which simply implies the required statement.
\end{proof}
It is worth noting that Proposition \ref{marencew} can be applied even in the case that the assumption $\Lambda=\Lambda_{0}={\mathbb R}^{n}$ is not satisfied. For example, we can take $\Lambda:=\{(x_{1},\cdot \cdot \cdot,x_{n-1},x_{n}) \in {\mathbb R}^{n} : x_{i}\geq 0\mbox{ for all }i\in {\mathbb N}_{n-1}\}$ and $\Lambda_{0}:=\{(0,0,\cdot \cdot \cdot,0,x_{n}) : x_{n}\in {\mathbb R}\};$
furthermore, the case in which $-\Lambda_{0}\neq \Lambda_{0}-\Lambda_{0}$ can also happen since we can take $\Lambda:={\mathbb R}^{n}$ and $\Lambda_{0}:=a+W,$ where $a\notin W$ and $W$ is a non-trivial subspace of ${\mathbb R}^{n}$ (then $\Lambda_{0}-\Lambda_{0}=W\neq -\Lambda_{0}$).
\begin{example}\label{prckoIII} ([42])
Let $\Lambda=\Lambda'={\mathbb R},$
$X=\{0\},$ ${\mathcal B}=\{X\},$ $Y={\mathbb C}$ and let ${\mathrm R}$ be the collection of all sequences in ${\mathbb R}$. Define the function $f : {\mathbb R} \rightarrow {\mathbb R} $ by $f(x):=0$ for $x \leq 0,$ $f(x):=\sqrt{n/2}$ for $x\in (n-2,n-1]$ ($n\in 2{\mathbb N}$) and $f(x):=-\sqrt{n/2}$ for $x\in (n-1,n]$ ($n\in 2{\mathbb N}$). Then $f(\cdot)$ is Weyl-$1$-almost periodic and Weyl $1$-unbounded, but neither equi-Weyl-$1$-almost periodic nor Weyl-$1$-normal, so that the converse of Proposition \ref{marencew} does not hold in general. Although it may be interesting, we will not consider here the general case $p>1$ or some more complicated relatives of Example \ref{prckoI}-Example \ref{prckoII} with locally integrable functions $F : {\mathbb R}^{n}\rightarrow {\mathbb R}$ whose range is at most countable.
\end{example}
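The Weyl $1$-unboundedness of $f(\cdot)$ from the previous example can be verified by a direct computation: on each block $(2k-2,2k]$ the function has constant modulus $\sqrt{k}$, so the average of $|f|$ over $[0,2m]$ equals $m^{-1}\sum_{k=1}^{m}\sqrt{k}\sim \frac{2}{3}\sqrt{m}.$ A short numerical sketch of this growth:

```python
import math

def mean_abs_f(m):
    """Average of |f| over [0, 2m]: on each block (2k-2, 2k] (even n = 2k)
    the function has constant modulus sqrt(k), so the integral of |f|
    over [0, 2m] equals 2 * sum_{k=1}^{m} sqrt(k)."""
    return sum(math.sqrt(k) for k in range(1, m + 1)) / m

# The averages grow like (2/3) * sqrt(m), so f cannot be Weyl 1-bounded.
print(mean_abs_f(100), mean_abs_f(10000))
```

Since these averages are unbounded in $m$, no uniform bound of the type appearing in the preceding boundedness results can hold for this function.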
Therefore, one needs to impose some extra conditions ensuring that the inclusion\\ $F\in W^{p}_{\Omega,-\Lambda_{0},\Lambda_{0}-\Lambda_{0},{\mathcal B}}(\Lambda\times X :Y)$ implies that $F(\cdot;\cdot)$ is $({\mathcal B},\Lambda_{0})$-normal. In the following result, the assumption $\Lambda=\Lambda_{0}={\mathbb R}^{n}$ is almost inevitable (see also [34], \cite[Theorem 4.22, Theorem 4.23]{deda} and the proof of necessity in \cite[Theorem 4.12]{deda}; the compactness criteria for the sets in the spaces of (equi-)Weyl-$p$-almost periodic functions have been analyzed in [35]-[36] with the help of Lusternik type theorems, and we will not reconsider these results in the multi-dimensional framework):
\begin{prop}\label{kovanko-supere}
Suppose that $F : {\mathbb R}^{n} \times X \rightarrow Y$ is such that \emph{(L)} holds, $\Lambda_{0} ={\mathbb R}^{n}$ and\\ $F\in W^{p}_{\Omega,{\mathbb R}^{n},{\mathbb R}^{n},{\mathcal B}}({\mathbb R}^{n}\times X :Y).$ If for each $\epsilon>0$ and $B\in {\mathcal B}$ there exists $\delta>0$ such that $D_{W,B}^{p}(F(\cdot;\cdot),F(\cdot +{\bf v} ;\cdot))<\epsilon$ for every ${\bf v} \in {\mathbb R}^{n}$ with $|{\bf v}|\leq \delta,$
then $F(\cdot;\cdot)$ is $({\mathcal B},{\mathbb R}^{n})$-normal.
\end{prop}
\begin{proof}
Let $\epsilon>0$ and $B\in {\mathcal B}$ be given.
Due to our assumption, we have the existence of a finite real number $l>0$ such that, for every ${\bf t}_{0}\in {\mathbb R}^{n},$ there exists $\eta \in B({\bf t}_{0},l)$ such that
$D_{W,B}^{p}( F(\cdot;\cdot),F(\cdot +\eta;\cdot))<\epsilon/2.$ Furthermore, there exists $\delta>0$ such that $D_{W,B}^{p}(F(\cdot;\cdot),F(\cdot +{\bf v} ;\cdot))<\epsilon/2$ for every ${\bf v} \in {\mathbb R}^{n}$ with $|{\bf v}|\leq \delta.$ Let $m\in {\mathbb N}$ be such that $m\delta>l,$ and let $S_{\delta}$ denote the set consisting of all points of form $(a_{1}\delta,\cdot \cdot \cdot, a_{n}\delta) \in B(0,m\delta),$ where $a_{j}\in {\mathbb Z}$ for all $j\in {\mathbb N}_{n}.$ With the same notation as above, we have $-{\bf t}_{0}+\eta\in B(0,l),$ and therefore, there exists $\zeta\in S_{\delta}$ such that
$|{\bf v}|=|-{\bf t}_{0}+\eta-\zeta|<\delta.$ This implies $D_{W,B}^{p}(F(\cdot;\cdot), F(\cdot +[-{\bf t}_{0}+\eta-\zeta]; \cdot))=D_{W,B}^{p}(F(\cdot +\zeta; \cdot), F(\cdot -{\bf t}_{0}+\eta; \cdot))<\epsilon/2.$ But, then we have{\small
\begin{align*}
& D_{W,B}^{p}\bigl(F(\cdot -{\bf t}_{0}; \cdot), F(\cdot +\zeta; \cdot)\bigr)
\\& \leq D_{W,B}^{p}\bigl(F(\cdot +\zeta; \cdot), F(\cdot -{\bf t}_{0}+\eta; \cdot)\bigr)+D_{W,B}^{p}\bigl(F(\cdot -{\bf t}_{0}+\eta; \cdot), F(\cdot -{\bf t}_{0}; \cdot)\bigr) \leq 2\cdot \frac{\epsilon}{2}=\epsilon,
\end{align*}}
which completes the proof.
\end{proof}
\subsection{The existence of Bohr-Fourier coefficients for multi-dimensional Weyl almost periodic functions}\label{prckojam}
At the very beginning of this subsection, we feel it is our duty to emphasize that some relations presented in \cite[Table 2, p. 56]{deda} seem to be stated incorrectly.
The main mistake made is that the authors have interchanged at some places the class of equi-Weyl-$p$-almost periodic functions and the class of Weyl-$p$-almost periodic functions, which can be simply justified by taking a closer look at the references quoted: in the research articles [7] and [8], as well as in the research monographs [6], [24] and its English translation published by Pergamon Press, Oxford in 1966, the class of Weyl-$p$-almost periodic functions in the sense of Kovanko's approach has not been considered at all (the authors of [6], [7], [8] and [24] have called an equi-Weyl-$p$-almost periodic function simply a Weyl-$p$-almost periodic function therein). Therefore, there is no reasonable information yet which could tell us whether the class of Weyl-$p$-almost periodic functions is contained in the class of Besicovitch $p$-almost periodic functions or not, as well as whether a Weyl-$p$-almost periodic function $f : {\mathbb R} \rightarrow {\mathbb C}$ has the mean value ($1\leq p<\infty$). We would like to propose these questions to our readers.
Based on the evidence given in the proof of the subsequent result, it is our strong belief that we must deal with the class of equi-Weyl-$p$-almost periodic functions in order to ensure the existence of the mean value and the Bohr-Fourier coefficients for a function $F : \Lambda \times X \rightarrow Y.$ The assumptions $X=\{0\}$ and $p=1$ (due to the obvious embedding) are reasonable to be made; we have the following:
\begin{thm}\label{nie}
Suppose that $\lambda \in {\mathbb R}^{n},$ $[0,\infty)^{n}= \Lambda' \subseteq \Lambda,$ $\Omega=[0,1]^{n},$
$F : \Lambda \rightarrow Y$ is Stepanov $(\Omega,1)$-bounded and satisfies that the function ${\bf t}\mapsto F_{\lambda}({\bf t}):=e^{-i\langle \lambda, {\bf t}\rangle }F({\bf t}),$ ${\bf t}\in \Lambda$
belongs to the space
$e-W_{ap,\Lambda}^{1} (\Lambda : Y).$ Then the Bohr-Fourier coefficient
$P_{\lambda}(F)$ of $F(\cdot),$ defined by
\begin{align}\label{cell}
P_{\lambda}(F):=\lim_{T\rightarrow +\infty}\frac{1}{T^{n}}\int_{{\bf s}+[0,T]^{n}}e^{-i\langle \lambda, {\bf {\bf t}}\rangle }F({\bf t})\, d{\bf t},
\end{align}
exists and does not depend on the choice of a tuple ${\bf s} \in [0,\infty)^{n}.$ Moreover, for every $\epsilon>0,$ there exists a real number $T_{0}(\epsilon)>0$ such that, for every $T\geq T_{0}(\epsilon)$ and ${\bf s} \in [0,\infty)^{n},$ we have
\begin{align}\label{zajebano}
\Biggl\| \frac{1}{T^{n}}\int_{[0,T]^{n}}e^{-i\langle \lambda, {\bf {\bf t}}\rangle }F({\bf t})\, d{\bf t}-\frac{1}{T^{n}}\int_{{\bf s}+[0,T]^{n}}e^{-i\langle \lambda, {\bf {\bf t}}\rangle }F({\bf t})\, d{\bf t} \Biggr\|_{Y}<\epsilon.
\end{align}
\end{thm}
\begin{proof}
We slightly modify the arguments contained in the proof of the corresponding statement given in the one-dimensional case (see e.g., \cite[Theorem 1.3.1-Theorem 1.3.2, pp. 32-35]{188}).
Fix the numbers $\epsilon>0$ and $\lambda \in {\mathbb R}^{n}.$ We know that there exist two finite real numbers $l>0$ and
$L>0$ such that for each ${\bf t}_{0}\in [0,\infty)^{n}$ there exists $\tau \in B({\bf t}_{0},L)\cap [0,\infty)^{n}$ such that
\begin{align}\label{sajko}
\sup_{{\bf t}\in \Lambda}\bigl\| F({\bf \tau}+\cdot)-F(\cdot) \bigr\|_{L^{1}({\bf t}+l\Omega:Y)}
<\epsilon \cdot l^{n}.
\end{align}
Let $T>l$ be an arbitrary real number and let $k\in {\mathbb N}.$ Denote by $A_{T,k}=\{{\bf s}_{1},\cdot \cdot \cdot ,{\bf s}_{k^{n}}\}$ the collection of all points ${\bf s}\in T\cdot {\mathbb N}_{0}^{n}$ such that ${\bf s}+[0,T]^{n} \subseteq [0,kT]^{n}.$ Further on, let $B_{T,k}=\{{\bf \tau}_{1},\cdot \cdot \cdot ,{\bf \tau}_{k^{n}}\}$ be a collection of points in $[0,\infty)^{n}$ such that
$|{\bf s}_{j}-{\bf \tau}_{j}|\leq L$
for all $j\in {\mathbb N}_{k^{n}}$ as well as that \eqref{sajko} holds with the number $\tau$ replaced therein with the number $\tau_{j}$ ($j\in {\mathbb N}_{k^{n}}$). Due to the computation following the equation \eqref{weyl188188}, we have that \eqref{sajko} implies
\begin{align*}
\sup_{{\bf t}\in \Lambda}\| F({\bf \tau}+\cdot)-F(\cdot) \|_{L^{1}({\bf t}+T\Omega:Y)}
<\epsilon \cdot 2^{n}T^{n};
\end{align*}
in particular,
\begin{align}\label{sajko1}
\bigl\| F({\bf \tau}+\cdot)-F(\cdot) \bigr\|_{L^{1}(T\Omega:Y)}
<\epsilon \cdot 2^{n}T^{n}.
\end{align}
Keeping in mind \eqref{sajko1}, we get:
\begin{align*}
\Biggl\|&\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t})\, d{\bf t}-\frac{1}{(kT)^{n}}\int_{[0,kT]^{n}}F_{\lambda}({\bf t})\, d{\bf t}\Biggr\|_{Y}
\\& \leq \frac{\sum_{j=1}^{k^{n}}\Bigl\|\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t})\, d{\bf t}-\frac{1}{T^{n}}\int_{{\bf s}_{j}+[0,T]^{n}}F_{\lambda}({\bf t})\, d{\bf t}\Bigr\|_{Y}}{k^{n}}
\\& =\frac{\sum_{j=1}^{k^{n}}\Bigl\|\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t})\, d{\bf t}-\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf s}_{j}+{\bf t})\, d{\bf t}\Bigr\|_{Y}}{k^{n}}
\\& \leq \frac{\sum_{j=1}^{k^{n}}\Bigl\|\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t})\, d{\bf t}-\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf \tau}_{j}+{\bf t})\, d{\bf t}\Bigr\|_{Y}}{k^{n}}
\\& +\frac{\sum_{j=1}^{k^{n}}\Bigl\|\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf \tau}_{j}+{\bf t})\, d{\bf t}-\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf s}_{j}+{\bf t})\, d{\bf t}\Bigr\|_{Y}}{k^{n}}
\\& \leq \epsilon 2^{n}+\frac{\sum_{j=1}^{k^{n}}\Bigl\|\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf \tau}_{j}+{\bf t})\, d{\bf t}-\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf s}_{j}+{\bf t})\, d{\bf t}\Bigr\|_{Y}}{k^{n}}
\\& =\epsilon 2^{n}+\frac{\sum_{j=1}^{k^{n}}\Bigl\|\frac{1}{T^{n}}\int_{({\bf \tau}_{j}+[0,T]^{n}) \setminus ({\bf s}_{j}+[0,T]^{n})}F_{\lambda}({\bf t})\, d{\bf t}\Bigr\|_{Y}}{k^{n}} .
\end{align*}
Since $|{\bf s}_{j}-{\bf \tau}_{j}|\leq L$
for all $j\in {\mathbb N}_{k^{n}}$, an elementary geometrical argument shows that there exists a finite constant $c_{n}\in {\mathbb N}$ such that the set $({\bf \tau}_{j}+[0,T]^{n}) \setminus ({\bf s}_{j}+[0,T]^{n})$ can be covered by at most $c_{n}\lceil L T^{n-1}\rceil $ translations of the cell $[0,1]^{n},$ so that the Stepanov $(\Omega,1)$-boundedness of $F(\cdot)$ implies that there exists a finite real number $T(\epsilon)>0$ such that
\begin{align}
\notag\Biggl\|&\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t})\, d{\bf t}-\frac{1}{(kT)^{n}}\int_{[0,kT]^{n}}F_{\lambda}({\bf t})\, d{\bf t}\Biggr\|_{Y}
\\\label{ledzendo}&\leq
\epsilon 2^{n}+c_{n}\|F\|_{S^{\Omega,1}}\frac{\lceil L T^{n-1}\rceil }{T^{n}}\leq \epsilon \bigl( 2^{n}+1\bigr),\quad T\geq T(\epsilon).
\end{align}
After that, we can repeat verbatim the argumentation contained in the proof of \cite[Theorem 1.3.1, p. 33]{188} in order to see that the limit
\begin{align*}
\lim_{T\rightarrow +\infty}\frac{1}{T^{n}}\int_{[0,T]^{n}}e^{-i\langle \lambda, {\bf t}\rangle }F({\bf t})\, d{\bf t}
\end{align*}
exists on account of the Cauchy criterion of convergence. The above geometrical argument with ${\bf s}_{j}={\bf 0}$ and ${\bf \tau}_{j}={\bf s}$
implies that
\begin{align*}
\lim_{T\rightarrow +\infty}\frac{1}{T^{n}}\int_{[0,T]^{n}}e^{-i\langle \lambda, {\bf {\bf t}}\rangle }F({\bf t})\, d{\bf t}=\lim_{T\rightarrow +\infty}\frac{1}{T^{n}}\int_{{\bf s}+[0,T]^{n}}e^{-i\langle \lambda, {\bf {\bf t}}\rangle }F({\bf t})\, d{\bf t}
\end{align*}
for all ${\bf s}\in [0,\infty)^{n},$ which completes the first part of the proof. For the second part of the proof, observe that for each ${\bf s}\in [0,\infty)^{n}$ the function ${\bf t}\mapsto F_{\lambda}({\bf t}+{\bf s}),$ ${\bf t}\in \Lambda$ belongs to the class $e-W_{ap,\Lambda}^{1} (\Lambda : Y)$ as well as that the numbers $l>0$ and $L>0$ in the corresponding definition can be chosen independently of ${\bf s}.$ Letting $k\rightarrow +\infty$ in \eqref{ledzendo}, we get:
\begin{align}\label{nije da nije}
\Biggl\|\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t})\, d{\bf t}-\lim_{T\rightarrow +\infty}\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t})\, d{\bf t}\Biggr\|_{Y}\leq
\epsilon 2^{n}+c_{n}\|F\|_{S^{\Omega,1}}\frac{\lceil L T^{n-1}\rceil }{T^{n}}.
\end{align}
By the foregoing, the same estimate holds for the function ${\bf t}\mapsto F_{\lambda}({\bf t}+{\bf s}),$ ${\bf t}\in \Lambda$, so that
\begin{align}
\notag
\Biggl\| & \frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t}+{\bf s})\, d{\bf t}-\lim_{T\rightarrow +\infty}\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t}+{\bf s})\, d{\bf t}\Biggr\|_{Y}
\\& \label{nije da nije1} \leq
\epsilon 2^{n}+\|F\|_{S^{\Omega,1}}\frac{\lceil L T^{n-1}\rceil }{T},\quad {\bf s}\in [0,\infty)^{n}.
\end{align}
After a simple substitution, the first part of the proof shows that, for every ${\bf s}\in [0,\infty)^{n},$ we have:
\begin{align*}
\lim_{T\rightarrow +\infty}\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t})\, d{\bf t}=\lim_{T\rightarrow +\infty}\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t}+{\bf s})\, d{\bf t}.
\end{align*}
Hence, in view of \eqref{nije da nije} and \eqref{nije da nije1}, we get
\begin{align*}
\Biggl\|\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t})\, d{\bf t}-\frac{1}{T^{n}}\int_{[0,T]^{n}}F_{\lambda}({\bf t}+{\bf s})\, d{\bf t}\Biggr\|_{Y}\leq
\epsilon 2^{n+1}+2\|F\|_{S^{\Omega,1}}\frac{\lceil L T^{n-1}\rceil }{T},
\end{align*}
which completes the proof of the theorem.
\end{proof}
\begin{rem}\label{fghjkl}
If we assume $\Lambda'=\Lambda={\mathbb R}^{n}$ and accept all remaining requirements of Theorem \ref{nie}, then we get into a classical situation in which the corresponding class is contained in the class of Besicovitch $p$-almost periodic functions in ${\mathbb R}^{n}$ (see \cite[pp. 12--13]{pankov}; we can use the set $\Omega =[-1,1]^{n}$ here, producing the same results). In this case, the function $F_{\lambda}$ belongs to the class $e-W_{ap,\Lambda}^{1} ({\mathbb R}^{n} : Y)$ if and only if $F \in e-W_{ap,\Lambda}^{1} ({\mathbb R}^{n} : Y)$ for each (some) $\lambda \in {\mathbb R}^{n};$ cf. also Theorem \ref{nie-normalan}. Further on, the argumentation contained in the proof of Theorem \ref{nie} shows that the limit
\begin{align*}
\lim_{T\rightarrow +\infty}\frac{1}{(2T)^{n}}\int_{{\bf s}+[-T,T]^{n}}e^{-i\langle \lambda, {\bf t}\rangle }F({\bf t})\, d{\bf t}
\end{align*}
exists and does not depend on the choice of a tuple ${\bf s} \in {\mathbb R}^{n}$ as well as that, for every $\epsilon>0,$ there exists a real number $T_{0}(\epsilon)>0$ such that, for every $T\geq T_{0}(\epsilon)$ and ${\bf s} \in {\mathbb R}^{n},$ we have
\begin{align*}
\Biggl\| \frac{1}{(2T)^{n}}\int_{[-T,T]^{n}}e^{-i\langle \lambda, {\bf t}\rangle }F({\bf t})\, d{\bf t}-\frac{1}{(2T)^{n}}\int_{{\bf s}+[-T,T]^{n}}e^{-i\langle \lambda, {\bf t}\rangle }F({\bf t})\, d{\bf t} \Biggr\|_{Y}<\epsilon.
\end{align*}
But the restriction of the function $F(\cdot)$ to $[0,\infty)^{n}$ satisfies the requirements of Theorem \ref{nie} with $\Lambda'=\Lambda=[0,\infty)^{n}$ and we similarly obtain that \eqref{cell} holds for all ${\bf s}\in {\mathbb R}^{n}$ as well as that \eqref{zajebano}
holds for all ${\bf s} \in {\mathbb R}^{n};$ plugging ${\bf s}=(-T/2,\ldots,-T/2)$ in this estimate, we particularly get that
\begin{align*}
\lim_{T\rightarrow +\infty}\frac{1}{T^{n}}\int_{{\bf s}+[0,T]^{n}}e^{-i\langle \lambda, {\bf t}\rangle }F({\bf t})\, d{\bf t}=\lim_{T\rightarrow +\infty}\frac{1}{(2T)^{n}}\int_{{\bf s}+[-T,T]^{n}}e^{-i\langle \lambda, {\bf t}\rangle }F({\bf t})\, d{\bf t},
\end{align*}
as well as that the above limits exist and do not depend on the choice of a tuple ${\bf s} \in {\mathbb R}^{n}.$ It should also be noted that there are at most countably many values of $\lambda \in {\mathbb R}^{n}$ for which $P_{\lambda}(F) \neq 0,$ since $F(\cdot)$ can be uniformly approximated in the Weyl norm by trigonometric polynomials and each of them has a finite Bohr--Fourier spectrum (i.e., the set $\{\lambda \in {\mathbb R}^{n} : P_{\lambda}(F)\neq 0\}$); see also \cite[Proposition 5.2]{TIMO}. On the other hand, the function $\chi_{[0,1/2)}(\cdot)$ is equi-Weyl-$p$-almost periodic for every $p\geq 1$ and its Bohr--Fourier spectrum is empty, so that we cannot expect the validity of the Parseval equality in our framework.
\end{rem}
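The final observation of Remark \ref{fghjkl} can be illustrated numerically: every Bohr--Fourier coefficient of $\chi_{[0,1/2)}(\cdot)$ vanishes, since the averaged integral reduces to $(1/T)\int_{0}^{1/2}e^{-i\lambda t}\, dt\rightarrow 0$ as $T\rightarrow +\infty.$ The following sketch is purely illustrative; the function name and discretization parameters are our own choices, not part of the text.

```python
import numpy as np

def bohr_fourier_coefficient(lam, T, dt=0.01):
    """Approximate P_lambda(f) = (1/T) * int_0^T exp(-i*lam*t) f(t) dt
    for f = chi_{[0, 1/2)}, via a left Riemann sum on [0, T)."""
    t = np.arange(0.0, T, dt)
    f = (t < 0.5).astype(float)  # indicator of [0, 1/2)
    return np.sum(np.exp(-1j * lam * t) * f) * dt / T

# The averaged integral is O(1/T) for every lam, so all coefficients
# vanish in the limit, i.e., the Bohr-Fourier spectrum is empty.
coeffs = [abs(bohr_fourier_coefficient(lam, T=2000.0)) for lam in (0.0, 1.0, np.pi)]
```

Doubling $T$ roughly halves each coefficient, in line with the $O(1/T)$ decay.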
\section{Applications to the abstract Volterra integro-differential equations and inclusions}\label{manuel}
In this section, we apply our results in the analysis of existence and uniqueness of the multi-dimensional Weyl almost periodic type solutions for various classes of abstract Volterra integro-differential equations.
1. In the first example, we continue our analysis of the famous d'Alembert formula from \cite[Example 1.2]{multi-ce}.
Let $a>0;$ then we know that the regular solution of the wave equation $u_{tt}=a^{2}u_{xx}$ in the domain $\{(x,t) : x\in {\mathbb R},\ t>0\},$ equipped with the initial conditions $u(x,0)=f(x)\in C^{2}({\mathbb R})$ and $u_{t}(x,0)=g(x)\in C^{1}({\mathbb R}),$ is given by the d'Alembert formula
\begin{align*}
u(x,t)=\frac{1}{2}\bigl[ f(x-at) +f(x+at) \bigr]+\frac{1}{2a}\int^{x+at}_{x-at}g(s)\, ds,\quad x\in {\mathbb R}, \ t>0.
\end{align*}
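As a quick sanity check, not part of the original argument, the d'Alembert formula can be verified against the wave equation with finite differences; the concrete choices $a=2,$ $f=\sin,$ $g=\cos$ below are illustrative only.

```python
import numpy as np

a = 2.0        # wave speed (illustrative choice)
f = np.sin     # u(x, 0) = f(x), a C^2 function
g = np.cos     # u_t(x, 0) = g(x), a C^1 function
G = np.sin     # antiderivative of g with G(0) = 0

def u(x, t):
    """d'Alembert formula; the integral of g equals G(x + a*t) - G(x - a*t)."""
    return 0.5 * (f(x - a * t) + f(x + a * t)) \
        + (G(x + a * t) - G(x - a * t)) / (2.0 * a)

# central differences: u_tt = a^2 u_xx at a sample point, plus initial data
x0, t0, h = 0.7, 1.3, 1e-4
u_tt = (u(x0, t0 + h) - 2.0 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2.0 * u(x0, t0) + u(x0 - h, t0)) / h**2
```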
Let us suppose that the function $x\mapsto (f(x),g^{[1]}(x)),$ $x\in {\mathbb R}$
belongs to the class $e-W^{(1,x,{\mathbb F})}_{[0,1],{\mathbb R}}({\mathbb R} : {\mathbb C}),$ where
$g^{[1]}(\cdot) \equiv \int^{\cdot}_{0}g(s)\, ds.$ Then the solution
$u(x,t)$ can be extended to the whole real line in the time variable and this solution
belongs to the class $e-W^{(1,x,{\mathbb F}_{1})}_{[0,1]^{2},{\mathbb R}^{2}}({\mathbb R}^{2} : {\mathbb C}),$
provided that
\begin{align*}
\sup_{l>0}\sup_{(t_{1},t_{2})\in {\mathbb R}^{2}}\Biggl[ \int^{t_{1}+(l/a)}_{t_{1}}\frac{{\mathbb F}_{1}(l,{\bf t})}{{\mathbb F}(l,x-at_{2}-l)}\, dx + \int^{t_{1}+(l/a)}_{t_{1}}\frac{{\mathbb F}_{1}(l,{\bf t})}{{\mathbb F}(l,x+at_{2})}\, dx\Biggr]<+\infty.
\end{align*}
To verify this, fix a real number $\epsilon>0.$ Then
there exist two finite real numbers $l>0$ and
$L>0$ such that for each ${\bf t}_{0}\in {\mathbb R}$ there exists $\tau\in B({\bf t}_{0},L)$ such that
\begin{align}\label{whatusupprimer}
\sup_{t\in {\mathbb R}}{\mathbb F}(l,t)\bigl\| f(\tau+\cdot)-f(\cdot) \bigr\|_{L^{1}(t+l[0,1]: {\mathbb C})} <\epsilon
\end{align}
as well as that \eqref{whatusupprimer} holds with the function $f(\cdot)$ replaced therein with the function $g^{[1]}(\cdot).$ For our purposes, we choose
a real number
$L'>L$ sufficiently large (see also the final part of the above-mentioned example).
We have ($x,\ t,\ \tau_{1},\ \tau_{2}\in {\mathbb R}$):
\begin{align}\label{jarakb}
\begin{split}
\Bigl|u\bigl(x&+\tau_{1},t+\tau_{2}\bigr)-u(x,t)\Bigr|
\\& \leq\frac{1}{2}\Bigl| f\bigl( (x-at)+(\tau_{1}-a\tau_{2}) \bigr)-f(x-at)\Bigr|
\\&+ \frac{1}{2}\Bigl| f\bigl( (x+at)+(\tau_{1}+a\tau_{2}) \bigr)-f(x+at )\Bigr|
\\& +\frac{1}{2a}\Bigl|g^{[1]}\bigl( (x-at)+(\tau_{1}-a\tau_{2}) \bigr)-g^{[1]}(x-at)\Bigr|
\\& +\frac{1}{2a}\Bigl| g^{[1]}\bigl( (x+at)-(\tau_{1}-a\tau_{2}) \bigr)-g^{[1]}(x+at)\Bigr|,
\end{split}
\end{align}
so that the final conclusion simply follows from the condition imposed, the estimate \eqref{jarakb}, the computation
\begin{align*}
&\int_{t_{1}}^{t_{1}+(l/a)}\int_{t_{2}}^{t_{2}+(l/a)}\frac{1}{2}\Bigl| f\bigl( (x-at)+(\tau_{1}-a\tau_{2}) \bigr)-f(x-at)\Bigr|\, dx\, dt
\\& \leq \frac{1}{2}\int_{t_{1}}^{t_{1}+(l/a)}\int_{x-at_{2}-l}^{x-at_{2}}\Bigl| f\bigl( z+(\tau_{1}-a\tau_{2}) \bigr)-f(z)\Bigr|\, dz\, dx
\\& \leq \frac{1}{2}\int_{t_{1}}^{t_{1}+(l/a)}\frac{\epsilon}{{\mathbb F}(l,x-at_{2}-l)}\, dx,
\end{align*}
a similar computation for the corresponding term $f( (x+at)+(\tau_{1}+a\tau_{2}))-f(x+at)$ and the corresponding terms with the function $g^{[1]}(\cdot).$\vspace{0.1cm}
We continue with the following illustrative application to the Gaussian semigroup in ${\mathbb R}^{n}$:\vspace{0.1cm}
2. Let $Y$ be one of the spaces $L^{p}({\mathbb R}^{n}),$ $C_{0}({\mathbb R}^{n})$ or $BUC({\mathbb R}^{n}),$ where $1\leq p<\infty.$ It is well known that the Gaussian semigroup\index{Gaussian semigroup}
\begin{align*}
(G(t)F)(x):=\bigl( 4\pi t \bigr)^{-(n/2)}\int_{{\mathbb R}^{n}}F(x-y)e^{-\frac{|y|^{2}}{4t}}\, dy,\quad t>0,\ F\in Y,\ x\in {\mathbb R}^{n},
\end{align*}
can be extended to a bounded analytic $C_{0}$-semigroup of angle $\pi/2,$ generated by the Laplacian $\Delta_{Y}$ acting with its maximal distributional domain in $Y.$ Suppose now that $1\leq p <\infty,$ $1/p+1/q=1,$
$\emptyset \neq \Lambda'\subseteq \Lambda= {\mathbb R}^{n},$ $h\in L^{1}({\mathbb R}^{n}),$ $\Omega=[0,1]^{n},$ $F\in (e-)W^{(p(u),\phi,{\mathbb F})}_{\Omega,\Lambda'}({\mathbb R}^{n} :{\mathbb C}),$ $1/p(u)+1/q(u)=1,$
and $\sup_{{\bf t}\in {\mathbb R}^{n}}|F({\bf t})|<\infty.$
Suppose, further, that the functions ${\mathbb F}: (0,\infty) \times {\mathbb R}^{n} \rightarrow (0,\infty)$
and ${\mathbb F}_{1} : (0,\infty) \times {\mathbb R}^{n} \rightarrow (0,\infty)$
do not depend on ${\bf t}$, as well as that $p_{1}(u)\equiv 1.$
If $\phi(x)=\varphi(x)=x,$ $x\geq 0$ and for each $l>0$ we have
\begin{align*}
2l^{-n/p}\bigl(4\pi t_{0}\bigr)^{-n/2}\sum_{k\in l{\mathbb Z}^{n}}e^{-\frac{(|k|-3l\sqrt{n})^{2}}{4t_{0}}}\frac{{\mathbb F}_{1}(l)}{{\mathbb F}(l)} \leq 1,
\end{align*}
then Proposition \ref{shokiran} can be applied and gives that the function ${\mathbb R}^{n}\ni x\mapsto u(x,t_{0})\equiv (G(t_{0})F)(x) \in {\mathbb C}$ belongs to the class $(e-)W^{(1,\phi,{\mathbb F}_{1})}_{\Omega,\Lambda'}({\mathbb R}^{n} : {\mathbb C}).$
It is worth noting that this proposition can be applied even in the case that $\phi(x)=\varphi(x)=x^{\alpha},$ $x\geq 0$ for some constant $\alpha>1,$ but then we must allow that the function ${\mathbb F}_{1}(l)$ rapidly decays to zero as $l\rightarrow +\infty$ (notice only that the assumptions ${\bf u}\in {\bf t}+l\Omega$ and ${\bf v}\in {\bf u}-k+l\Omega$ for some ${\bf t}\in {\mathbb R}^{n}$ and $k\in l{\mathbb Z}^{n}$ imply ${\bf u}-{\bf v}\in k+l\Omega-l\Omega-l\Omega$ and therefore $|{\bf u}-{\bf v}|\geq |k|-3l\sqrt{n}$); Proposition \ref{shokiran1} can also be applied here.
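Since the convolution of two Gaussians is again a Gaussian, the action of the Gaussian semigroup can be checked against a closed form: for $n=1$ and $F(x)=e^{-x^{2}/(4s)}$ one has $(G(t)F)(x)=\sqrt{s/(s+t)}\, e^{-x^{2}/(4(s+t))}.$ The following numerical sketch is illustrative only; the quadrature grid and test values are our own choices.

```python
import numpy as np

def gaussian_semigroup_1d(F, t, x, y):
    """(G(t)F)(x) = (4*pi*t)^(-1/2) * int_R F(x - y) exp(-y^2/(4t)) dy,
    approximated by a Riemann sum over the grid y (case n = 1)."""
    dy = y[1] - y[0]
    return (4.0 * np.pi * t) ** -0.5 * np.sum(F(x - y) * np.exp(-y**2 / (4.0 * t))) * dy

s, t, x0 = 0.5, 1.0, 0.3
F = lambda z: np.exp(-z**2 / (4.0 * s))
grid = np.linspace(-30.0, 30.0, 20001)

numeric = gaussian_semigroup_1d(F, t, x0, grid)
closed_form = np.sqrt(s / (s + t)) * np.exp(-x0**2 / (4.0 * (s + t)))
```

The agreement reflects that convolving centered Gaussians of variances $2s$ and $2t$ yields one of variance $2(s+t).$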
Here, we would like to stress that our recent analyses from \cite[Example 0.1]{marko-manuel-ap} and the fifth point of the application section of [10] can be used for certain applications of the multi-dimensional Weyl almost periodic functions. Suppose, for example, that $A$ generates a strongly continuous semigroup $(T(t))_{t\geq 0}$ on a Banach space $X$ whose elements are certain complex-valued functions defined on ${\mathbb R}^{n}.$ Under some assumptions,
the function
\begin{align*}
u(t,x)=\bigl(T(t)u_{0}\bigr)(x)+\int^{t}_{0}[T(t-s)f(s)](x)\, ds,\quad t\geq 0,\ x\in {\mathbb R}^{n}
\end{align*}
presents a unique classical solution of the abstract Cauchy problem
\begin{align*}
u_{t}(t,x)=Au(t,x)+F(t,x),\ t\geq 0,\ x\in {\mathbb R}^{n}; \ u(0,x)=u_{0}(x),
\end{align*}
where $F(t,x):=[f(t)](x),$ $t\geq 0,$ $x\in {\mathbb R}^{n}.$ In many cases (for example, this holds for the Gaussian semigroup on ${\mathbb R}^{n}$), there exists a kernel $(t,y)\mapsto E(t,y),$ $t> 0,$ $y\in {\mathbb R}^{n}$ which is integrable on any set $[0,T]\times {\mathbb R}^{n}$ ($T>0$) and satisfies that
\begin{align*}
[T(t)f(s)](x)=\int_{{\mathbb R}^{n}}F(s,x-y)E(t,y)\, dy,\quad t>0,\ s\geq 0,\ x\in {\mathbb R}^{n}.
\end{align*}
Let it be the case, and let
$t_{0}>0.$ Suppose, for example, that the function $F(t,x)$
belongs to the space $(e-)W^{[1,x,{\mathbb F}]}_{\Omega,\Lambda'}({\mathbb R}^{n} : {\mathbb C})$ with respect to the variable $x\in {\mathbb R}^{n},$ uniformly in the variable $t$ on compact subsets of $[0,\infty),$ with the meaning clear. Then we have (${\bf t},\ \tau\in {\mathbb R}^{n};$ ${\bf u}\in \Omega,$ $l>0$):
\begin{align*}
\Bigl| & u_{t_{0}}({\bf t}+\tau +l{\bf u})- u_{t_{0}}({\bf t}+l{\bf u}) \Bigr|
\\& \leq \int^{t_{0}}_{0} \int_{{\mathbb R}^{n}}
| F(s,{\bf t}+\tau-y+l{\bf u})-F(s,{\bf t}-y+l{\bf u})| \cdot \bigl|E\bigl(t_{0},y\bigr)\bigr|\, dy\, ds.
\end{align*}
Suppose also that the function ${\mathbb F}(l,{\bf t})$ does not depend on the variable ${\bf t}.$
Integrating the above estimate over $\Omega$ and using the Fubini theorem, we obtain (${\bf t},\ \tau\in {\mathbb R}^{n},$ $l>0$):
\begin{align*}
\int_{\Omega}&\Bigl| u_{t_{0}}({\bf t}+\tau +l{\bf u})- u_{t_{0}}({\bf t}+l{\bf u}) \Bigr|\, d{\bf u}
\\& \leq \int^{t_{0}}_{0} \int_{{\mathbb R}^{n}}\Biggl[\int_{\Omega}| F(s,{\bf t}+\tau-y+l{\bf u})-F(s,{\bf t}-y+l{\bf u})|\, d{\bf u}\Biggr] \cdot \bigl|E\bigl(t_{0},y\bigr)\bigr|\, dy\, ds
\\& \leq \epsilon l^{-n}\bigl[{\mathbb F}(l)\bigr]^{-1} \int^{t_{0}}_{0} \int_{{\mathbb R}^{n}}\bigl|E\bigl(t_{0},y\bigr)\bigr|\, dy\, ds,
\end{align*}
which implies that the function $u_{t_{0}}(\cdot)$ belongs to the class $(e-)W^{[1,x,{\mathbb F}]}_{\Omega,\Lambda'}({\mathbb R}^{n} :{\mathbb C}).$
3. Suppose now that $Y:=L^{r}({\mathbb R}^{n})$ for some $r\in [1,\infty)$ and $A(t):= \Delta+a(t)I,$ $t\geq 0,$ where $\Delta$ is the Dirichlet Laplacian on $L^{r}({\mathbb R}^{n}),$ $I$ is the identity operator on $L^{r}({\mathbb R}^{n})$ and $a\in L^{\infty}([0,\infty)).$ Then it is well known
that the evolution system $(U(t,s))_{t\geq s\geq 0}\subseteq L(Y)$ generated by the family $(A(t))_{t\geq 0}$ exists and is given by $U(t,t):=I$ for all $t\geq 0$ and
\begin{align*}
[U(t,s)F]({\bf u}):=\int_{{\mathbb R}^{n}} K(t,s,{\bf u},{\bf v})F({\bf v}) \, d{\bf v}, \quad F\in L^{r}({\mathbb R}^{n}),\quad t> s\geq 0,
\end{align*}
where the kernel $K(t,s,{\bf u},{\bf v})$ is given by
\begin{align*}
K(t,s,{\bf u},{\bf v}):= (4\pi (t-s))^{-\frac{n}{2}} e^{\int_{s}^{t} a(\tau)\, d\tau }\exp \Biggl(-\frac{| {\bf u}-{\bf v}|^{2}}{4(t-s)}\Biggr),\quad
t>s,\ {\bf u},\ {\bf v} \in \mathbb{R}^{n} ;
\end{align*}
see [12] for more details.
Hence, for every $\tau\in {\mathbb R}^{n},$ we have
\begin{align*}
K(t,s,{\bf u}+\tau,{\bf v}+\tau)=K(t,s,{\bf u},{\bf v}),\quad t>s\geq 0,\ {\bf u},\ {\bf v} \in \mathbb{R}^{n} .
\end{align*}
It is well known that, under certain assumptions, a unique mild solution of the abstract Cauchy problem
\begin{align*}
(\partial/\partial t)u(t,x) = A(t)u(t,x),\quad t > 0; \qquad u(0,x) = F(x)
\end{align*}
is given by
$u(t,x):=[U(t,0)F](x),$ $t\geq 0,$ $x\in {\mathbb R}^{n}.$
Suppose now that $F\in L^{r}({\mathbb R}^{n}) \cap (e-)W^{(p,x,{\mathbb F})}_{[0,1]^{n},\Lambda'}({\mathbb R}^{n} : {\mathbb C}),$
where $1\leq p<\infty,$ $\emptyset \neq \Lambda' \subseteq {\mathbb R}^{n}$ and the function ${\mathbb F}(l,{\bf t})\equiv {\mathbb F}(l)$ does not depend on ${\bf t}$ (at this place, it is worth noting that, in the usual Bohr or Stepanov concept, this immediately yields $F\equiv 0$). Let $1/p+1/q=1$ and let $\epsilon>0$ be given.
Then there exist two finite real numbers $l>0$ and
$L>0$ such that for each ${\bf t}_{0}\in \Lambda'$ there exists $\tau\in B({\bf t}_{0},L)\cap \Lambda'$ such that
\begin{align*}
\sup_{{\bf t}\in {\mathbb R}^{n}}{\mathbb F}(l) \bigl\| F(\tau+{\bf u})-F({\bf u}) \bigr\|_{L^{p}({\bf t}+l[0,1]^{n})} <\epsilon.
\end{align*}
Therefore, for every $t>0,$ $l>0$ and ${\bf u},\ \tau\in {\mathbb R}^{n},$ there exists a finite real constant $c_{t}>0$ such that:
\begin{align*}
& |u(t,{\bf u}+\tau)-u(t,{\bf u})|=\Biggl| \int_{{\mathbb R}^{n}}\bigl[ K(t,0,{\bf u}+\tau,{\bf v}) -K(t,0,{\bf u},{\bf v})\bigr] F({\bf v})\, d{\bf v} \Biggr|
\\& =\Biggl| \int_{{\mathbb R}^{n}}K(t,0,{\bf u}+\tau,{\bf v}+\tau) F({\bf v}+\tau)\, d{\bf v} -\int_{{\mathbb R}^{n}}K(t,0,{\bf u},{\bf v}) F({\bf v})\, d{\bf v} \Biggr|
\\& =\Biggl| \int_{{\mathbb R}^{n}}K(t,0,{\bf u},{\bf v}) \bigl[F({\bf v}+\tau)\, d{\bf v} - F({\bf v})\bigr]\, d{\bf v} \Biggr|
\\& \leq c_{t}\int_{{\mathbb R}^{n}}e^{-\frac{|{\bf u}-{\bf v}|^{2}}{4t}}|F({\bf v}+\tau)-F({\bf v})|\, d{\bf v}=c_{t}\sum_{k\in l{\mathbb Z}^{n}}\int_{k+l[0,1]^{n}}e^{-\frac{|{\bf u}-{\bf v}|^{2}}{4t}}|F({\bf v}+\tau)-F({\bf v})|\, d{\bf v}
\\& \leq c_{t}\sum_{k\in l{\mathbb Z}^{n}}\Bigl\| e^{-\frac{|{\bf u}-\cdot|^{2}}{4t}}\Bigr\|_{L^{q}(k+l[0,1]^{n})}\Bigl\| F(\cdot+\tau)-F(\cdot)\Bigr\|_{L^{p}(k+l[0,1]^{n})}
\\& \leq c_{t}\frac{\epsilon}{{\mathbb F}(l)}\sum_{k\in l{\mathbb Z}^{n}}\Bigl\| e^{-\frac{|{\bf u}-\cdot|^{2}}{4t}}\Bigr\|_{L^{q}(k+l[0,1]^{n})}=:c_{t}\frac{\epsilon}{{\mathbb F}(l)}G(l,{\bf u}).
\end{align*}
The convergence of the series defining $G(l,{\bf u})$ can be simply justified by the fact that for each $k\in l{\mathbb Z}^{n}$ with a sufficiently large absolute value we have $|{\bf u}-k-{\bf v}|\geq |k|-l-|{\bf u}|$ for all ${\bf v}\in l[0,1]^{n}.$ Now we will fix a number $t>0$ and a new exponent $p'\in [1,\infty).$ Since the function ${\bf u}\mapsto G(l,{\bf u}),$ ${\bf u}\in {\mathbb R}^{n}$ is continuous and positive for every fixed $l>0,$ we can define the function ${\mathbb F}_{1}(\cdot; \cdot)$ by
\begin{align*}
{\mathbb F}_{1}(l,{\bf t}):=\frac{{\mathbb F}(l)}{\Bigl( \int_{{\bf t}+l[0,1]^{n}}G(l,{\bf u})^{p'}\, d{\bf u} \Bigr)^{1/p'}},\quad l>0.
\end{align*}
By the above given argumentation, we immediately get from the corresponding definition that the mapping $x\mapsto u(t,x),$ $x\in {\mathbb R}^{n}$ belongs to the class $(e-)W^{(p',x,{\mathbb F}_{1})}_{[0,1]^{n},\Lambda'}({\mathbb R}^{n} : {\mathbb C}).$
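The rapid convergence of the series defining $G(l,{\bf u})$ can be observed directly; the following sketch (with $n=1,$ $q=2,$ $t=1,$ all values illustrative and not part of the text) truncates the sum and confirms that enlarging the truncation range does not change the value.

```python
import numpy as np

def G_truncated(l, u, t=1.0, q=2.0, k_max=40, pts=400):
    """Truncation of G(l, u) = sum over k in l*Z of the L^q norm of
    v -> exp(-|u - v|^2 / (4t)) over the cell k + l*[0, 1]  (case n = 1).
    Each cell norm is computed by a left Riemann sum."""
    total = 0.0
    dv = l / pts
    for m in range(-k_max, k_max + 1):
        v = m * l + dv * np.arange(pts)
        cell = np.sum(np.exp(-np.abs(u - v) ** 2 / (4.0 * t)) ** q) * dv
        total += cell ** (1.0 / q)
    return total

# the Gaussian factor kills the far cells, so the truncated sums agree
g_small = G_truncated(2.0, 0.7, k_max=20)
g_large = G_truncated(2.0, 0.7, k_max=40)
```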
\section{Conclusions and final remarks}\label{prinuda}
This paper investigates various classes of multi-dimensional Weyl almost periodic type functions in Lebesgue spaces with variable exponents.
We pay special attention to the analysis of the constant coefficient case, also providing
some applications to
abstract Volterra integro-differential equations.
Let us mention, finally, a few intriguing topics which have not been discussed here. Composition theorems for Weyl almost periodic type functions were considered by F. Bedouhene, Y. Ibaouene, O. Mellah, P. Raynaud de Fitte [5] and M. Kosti\'c [28] in the one-dimensional setting; we have not analyzed the multi-dimensional analogues of the results established in these research studies (although the Weyl almost periodic type functions considered here depend on two parameters, ${\bf t}\in {\mathbb R}^{n}$ and $x\in X,$ the applications to semilinear Cauchy equations and inclusions are not examined here, either).
On the other hand, in \cite[Section 6]{deda}, the authors have presented several results and examples about the relationship between one-dimensional Weyl almost periodic type functions and one-dimensional Besicovitch almost periodic type functions (concerning Besicovitch almost periodic functions on ${\mathbb R}^{n}$ and general topological groups, the reader may consult the important research monograph [40] by A. A. Pankov; this monograph contains many intriguing applications of multi-dimensional Besicovitch almost periodic functions to evolution variational inequalities, positive boundary value problems for symmetric hyperbolic systems and
nonlinear Schr\"odinger equations). For the sake of brevity and better exposition, we skip all details concerning this theme in the multi-dimensional framework. Also, many crucial properties and important counterexamples in the theory of one-dimensional Stepanov, Weyl and Besicovitch almost periodic type functions have been established by
H. Bohr and E. F{\o}lner in their landmark paper [8]; for example, for any real number $P>1,$ the authors of this paper have constructed a locally integrable function $f: {\mathbb R}\rightarrow {\mathbb R}$ which is Stepanov $p$-almost periodic for any exponent $p\in [1,P)$ but not equi-Weyl-$P$-almost periodic (see \cite[Main example 3, pp. 83--91]{bohr-folner}). We have not been able to reconsider such exotic examples here in the multi-dimensional setting (it is also worth noting that L. I. Danilov [11] and H. D. Ursell [43] have established two interesting characterizations of equi-Weyl-$p$-almost periodic functions, as well as that the notion of Weyl almost periodicity has been investigated by A. Iwanik [25] within the field of topological dynamics, as emphasized earlier in [26]).
\begin{thebibliography}{99}
[1]
S. Abbas,
A note on Weyl pseudo almost automorphic
functions and their properties,
Math. Sci. (Springer) {\bf 6}:29 (2012), 5 pp, doi:10.1186/2251-7456-6-29.
[2]
J. Andres, A. M. Bersani, R. F. Grande,
Hierarchy of almost-periodic function spaces,
Rend. Mat. Appl. (7) {\bf 26} (2006), 121--188.
[3]
J. Andres, A. M. Bersani, K. Le\' sniak,
On some almost-periodicity problems in various
metrics, Acta Appl. Math. \textbf{65} (2001), 35--57.
[4]
B. Basit, H. G\"uenzler,
Generalized vector valued almost periodic
and ergodic distributions,
J. Math. Anal. Appl. {\bf 314} (2006), 363--381.
[5]
F. Bedouhene, Y. Ibaouene, O. Mellah, P. Raynaud de Fitte,
Weyl almost periodic solutions to abstract linear and
semilinear equations with Weyl almost periodic coefficients,
Math. Method. Appl. Sci. {\bf 41} (2018), 9546--9566.
[6]
A. S. Besicovitch,
Almost Periodic Functions,
Dover Publ, New York, 1954.
[7]
A. S. Besicovitch, H. Bohr,
Almost periodicity and general trigonometric
series, Acta Math. {\bf 57} (1931), 203--292.
[8]
H. Bohr, E. F{\o}lner,
On some types of functional spaces: A contribution to the theory of almost periodic functions,
Acta Math. {\bf 76} (1944) 31--155.
[9]
A. Ch\'avez, K. Khalil, M. Kosti\'c, M. Pinto,
$({\mathrm R},{\mathcal B})$-Multi-almost periodic type functions and applications, preprint. arXiv:2012.00543.
[10]
A. Ch\'avez, K. Khalil, M. Kosti\'c, M. Pinto,
Stepanov $({\mathrm R},{\mathcal B})$-multi-almost periodic type functions and applications, preprint. hal-03035195.
[11]
L. I. Danilov,
On Weyl almost periodic selections of
multivalued maps,
J. Math. Anal. Appl. {\bf 316} (2016), 110--127.
[12]
E. B. Davies,
Heat Kernels and Spectral Theory,
Cambridge University Press, Cambridge, 1989.
[13]
T. Diagana,
Almost Automorphic Type and Almost Periodic Type Functions in Abstract Spaces,
Springer-Verlag, New York, 2013.
[14]
T. Diagana, M. Kosti\' c,
Generalized almost periodic and generalized asymptotically almost periodic type functions in Lebesgue spaces with variable exponents,
Filomat {\bf 34} (2020), 1629--1644.
[15]
T. Diagana, M. Kosti\' c,
Generalized almost automorphic and generalized asymptotically almost automorphic type functions in Lebesgue spaces with variable exponents $L^{p(x)},$
Book chapter in: Recent Studies in Differential Equations, Nova Science Publishers, New York, in press.
[16]
T. Diagana, M. Zitane,
Weighted Stepanov-like pseudo-almost periodic functions in Lebesgue space with variable exponents $L^{p(x)},$
Afr. Diaspora J. Math. {\bf 15} (2013), 56--75.
[17]
T. Diagana, M. Zitane,
Stepanov-like pseudo-almost automorphic functions in Lebesgue spaces with variable exponents $L^{p(x)},$
Electron. J. Differential Equations 2013, no. {\bf 188}, 20 pp.
[18]
L. Diening, P. Harjulehto, P. H\"ast\"o, M. R\r{u}\v{z}i\v{c}ka,
Lebesgue and Sobolev Spaces with Variable Exponents,
Lecture Notes in Mathematics, vol. 2017, Springer, Heidelberg, 2011.
[19]
S. S. Dragomir, M. A. Khan, A. Abathun,
Refinement of the Jensen integral inequality,
Open Math. {\bf 14} (2016), 221--228.
[20]
X. L. Fan, D. Zhao,
On the spaces $L^{p(x)}(O)$ and $W^{m,p(x)}(O),$
J. Math. Anal. Appl. {\bf 263}
(2001), 424--446.
[21]
V. Fedorov, M. Kosti\' c,
A note on (asymptotically) Weyl-almost periodic
properties of convolution products,
Chelyabinsk Phys. Math. J. {\bf 4} (2019),
[22]
A. M. Fink,
Almost Periodic Differential Equations,
Springer-Verlag, Berlin, 1974.
[23]
G. M. N'Gu\' er\' ekata,
Almost Automorphic and Almost Periodic Functions
in Abstract Spaces,
Kluwer Acad. Publ, Dordrecht, 2001.
[24]
R. S. Guter, L. D. Kudryavtsev, B. M. Levitan,
Elements of the Theory
of Functions, Fizmatlit, Moscow, 1963 (in Russian).
[25]
A. Iwanik,
Weyl almost periodic points in topological dynamics,
Colloq. Math. {\bf 56}
(1988), 107--119.
[26]
M. Kosti\'c,
Almost Periodic and Almost Automorphic Type Solutions to Integro-Differential Equations,
W. de Gruyter, Berlin, 2019.
[27]
M. Kosti\'c,
Selected Topics in Almost Periodicity,
Book Manuscript, 2020.
[28]
M. Kosti\'c,
Composition principles for generalized almost periodic functions,
Bull. Cl. Sci. Math. Nat. Sci. Math.
{\bf 43} (2018), 65--80.
[29]
M. Kosti\'c,
Weyl-almost periodic solutions and asymptotically Weyl-almost periodic solutions of abstract Volterra integro-differential equations,
Banach J. Math. Anal. {\bf 13} (2019), 64--90.
[30]
M. Kosti\'c,
Asymptotically Weyl almost periodic functions in Lebesgue spaces with variable exponents,
J. Math. Anal. Appl. {\bf 498} (2021), 124961, in press, https://doi.org/10.1016/j.jmaa.2021.124961.
[31]
M. Kosti\'c, W.-S. Du,
Generalized almost periodicity in Lebesgue spaces with variable exponents, in:
Fixed Point Theory and Dynamical Systems with Applications, special issue of Mathematics,
Mathematics {\bf 8} (2020), 928; doi:10.3390/math8060928.
[32]
M. Kosti\'c, W.-S. Du,
Generalized almost periodicity in Lebesgue spaces with variable exponents, Part II,
in: Fixed Point Theory and Dynamical Systems with Applications, special issue of Mathematics,
Mathematics {\bf 8(7)} (2020), 1052; https://doi.org/10.3390/math8071052.
[33]
M. Kosti\'c,
Multi-dimensional $c$-almost periodic type functions and applications,
preprint. arXiv:2012.15735.
[34]
A. S. Kovanko,
Sur la compacit\'e des syst\`emes de fonctions presque p\'eriodiques
g\'en\'eralis\'ees de H. Weyl, C.R. (Doklady) Ac. Sc. URSS {\bf 43} (1944), 275--276.
[35]
A. S. Kovanko,
On convergence of sequences of functions in the sense of Weyl's
metric $D_{W_{\omega}}$,
Ukrainian Math. J. {\bf 3} (1951), 465--476 (in Russian).
[36]
A. S. Kovanko,
On compactness of systems of generalized almost-periodic functions
of Weyl,
Ukrainian Math. J. {\bf 5} (1953), 185--195 (in Russian).
[37]
D. Lenz, T. Spindeler, N. Strungaru,
Pure point diffraction and mean, Besicovitch and Weyl almost periodicity,
preprint. arXiv:2006.10821.
[38]
B. M. Levitan,
Almost Periodic Functions,
G.I.T.T.L., Moscow, 1953 (in Russian).
[39]
P. Q. H. Nguyen,
On variable Lebesgue spaces,
Thesis Ph.D., Kansas State University. Pro-
Quest LLC, Ann Arbor, MI, 2011. 63 pp.
[40]
A. A. Pankov,
Bounded and Almost Periodic Solutions of Nonlinear Operator
Differential Equations, Kluwer Acad. Publ., Dordrecht, 1990.
[41]
T. Spindeler,
Stepanov and Weyl almost periodicity in locally compact
Abelian groups,
preprint. arXiv:2006.07266v1.
[42]
J. Stryja,
Analysis of Almost-Periodic Functions,
Mgr. Thesis, Palack\'y University,
Olomouc, 2001 (in Czech).
[43]
H. D. Ursell,
Parseval's theorem for almost-periodic functions,
Proc. London Math. Soc. {\bf 2} (1931),
[44]
S. Zaidman,
Almost-Periodic Functions in Abstract Spaces,
Pitman Research Notes in
Math, Vol. \textbf{126}, Pitman, Boston, 1985.
\end{thebibliography}
\end{document}
# A note on tight projective $2$-designs
Joseph W. Iverson (Department of Mathematics, Iowa State University, Ames, IA),
Emily J. King (Department of Mathematics, Colorado State University, Fort Collins, CO),
Dustin G. Mixon (Department of Mathematics, The Ohio State University, Columbus, OH; Translational Data Analytics Institute, The Ohio State University, Columbus, OH)
###### Abstract
We study tight projective $2$-designs in three different settings. In the
complex setting, Zauner’s conjecture predicts the existence of a tight
projective $2$-design in every dimension. Pandey, Paulsen, Prakash, and
Rahaman recently proposed an approach to make quantitative progress on this
conjecture in terms of the entanglement breaking rank of a certain quantum
channel. We show that this quantity is equal to the size of the smallest
weighted projective $2$-design. Next, in the finite field setting, we
introduce a notion of projective $2$-designs, we characterize when such
projective $2$-designs are tight, and we provide a construction of such
objects. Finally, in the quaternionic setting, we show that every tight
projective $2$-design for $\mathbb{H}^{d}$ determines an equi-isoclinic tight
fusion frame of $d(2d-1)$ subspaces of $\mathbb{R}^{d(2d+1)}$ of dimension
$3$.
## 1 Introduction
Let $S(\mathbb{C}^{d})$ denote the sphere of $x\in\mathbb{C}^{d}$ with
$\|x\|_{2}^{2}=1$, and consider its uniform probability measure $\sigma$. We
let $\operatorname{Hom}_{d}(t)$ denote the complex vector space spanned by all
monomial functions $\mathbb{C}^{d}\to\mathbb{C}$ that map
$z=(z_{1},\ldots,z_{d})$ to $z_{1}^{\alpha_{1}}\cdots
z_{d}^{\alpha_{d}}\overline{z}_{1}^{\beta_{1}}\cdots\overline{z}_{d}^{\beta_{d}}$
with $t=\sum_{j}\alpha_{j}=\sum_{j}\beta_{j}$. Next, we take $\Pi_{d}^{(t)}$
to denote orthogonal projection onto the symmetric subspace
$(\mathbb{C}^{d})_{\operatorname{sym}}^{\otimes t}$ of
$(\mathbb{C}^{d})^{\otimes t}$. Finally, put $[n]:=\{1,\ldots,n\}$. Having
established the necessary notation, a projective $t$-design for
$\mathbb{C}^{d}$ is defined to be any $\{x_{k}\}_{k\in[n]}$ in
$S(\mathbb{C}^{d})$ that satisfies the following equivalent properties:
###### Proposition 1 (see [45]).
Given $\{x_{k}\}_{k\in[n]}$ in $S(\mathbb{C}^{d})$ and $t\in\mathbb{N}$, the
following are equivalent:
* (a)
$\frac{1}{n}\sum_{k\in[n]}p(x_{k})=\int_{S(\mathbb{C}^{d})}p(x)d\sigma(x)$ for
every $p\in\operatorname{Hom}_{d}(t)$.
* (b)
$\frac{1}{n}\sum_{k\in[n]}(x_{k}^{\otimes t})(x_{k}^{\otimes
t})^{*}=\binom{d+t-1}{t}^{-1}\cdot\Pi_{d}^{(t)}$.
* (c)
$\frac{1}{n^{2}}\sum_{k\in[n]}\sum_{\ell\in[n]}|\langle
x_{k},x_{\ell}\rangle|^{2t}=\binom{d+t-1}{t}^{-1}$.
In words, the cubature rule Proposition 1(a) says that, for the purposes of
integration over the sphere $S(\mathbb{C}^{d})$, a projective $t$-design
“fools” every $p\in\operatorname{Hom}_{d}(t)$ by mimicking the entire sphere.
Note that one may “trace out” one of the $t$ subsystems in Proposition 1(b) to
show that every projective $t$-design is also a projective $(t-1)$-design.
Proposition 1(b) says that $\{x_{k}^{\otimes t}\}_{k\in[n]}$ forms what
frame theorists would call a unit norm tight frame [9] for the
$\binom{d+t-1}{t}$-dimensional complex Hilbert space
$(\mathbb{C}^{d})^{\otimes t}_{\operatorname{sym}}$. With this perspective,
Proposition 1(c) corresponds to the frame potential [5] of $\{x_{k}^{\otimes
t}\}_{k\in[n]}$. One may generalize Proposition 1(c) to obtain notions of
projective $t$-designs for real, quaternionic, and octonionic spaces [36]. The
complex case with $t=2$ is particularly relevant in quantum state tomography
[41], where it is desirable to take $n$ as small as possible. This motivates
the following result, which refers to $\\{x_{k}\\}_{k\in[n]}$ in
$S(\mathbb{C}^{d})$ as equiangular if
$|\{|\langle x_{k},x_{\ell}\rangle|^{2}:k,\ell\in[n],k\neq\ell\}|=1.$
###### Proposition 2 (special case of Proposition 1.1 in [3]).
Consider $X=\{x_{k}\}_{k\in[n]}$ in $S(\mathbb{C}^{d})$.
* (a)
If $X$ is a projective $2$-design, then $n\geq d^{2}$ with equality precisely
when $X$ is equiangular.
* (b)
If $X$ is equiangular, then $n\leq d^{2}$ with equality precisely when $X$ is
a projective $2$-design.
In the case of equality $n=d^{2}$, $\{x_{k}\}_{k\in[n]}$ is known as a tight
projective $2$-design for $\mathbb{C}^{d}$, which corresponds to an object in
quantum physics known as a symmetric, informationally complete positive
operator–valued measure [10]. In his Ph.D. thesis [50], Zauner conjectured
that for every $d>1$, there exists a tight projective $2$-design for
$\mathbb{C}^{d}$ of a particular form. To date, there are only finitely many
$d\in\mathbb{N}$ for which a tight projective $2$-design is known to exist
[20, 22, 19], and the conjecture is apparently related to the Stark
conjectures in algebraic number theory [32]. A solution to Zauner’s conjecture
will be rewarded with a $2021$ EUR prize from the National Quantum Information
Centre in Poland [31].
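For $d=2$, Zauner's conjecture is settled by an explicit example: the Weyl–Heisenberg orbit of the standard qubit fiducial vector. The following numerical check of Propositions 1(c) and 2 is a sketch; the fiducial vector and the shift/clock matrices are standard choices in the SIC literature, not taken from this paper.

```python
import numpy as np
from itertools import product

# standard qubit SIC fiducial and its Weyl-Heisenberg (shift/clock) orbit
v = np.array([np.sqrt((3 + np.sqrt(3)) / 6),
              np.exp(1j * np.pi / 4) * np.sqrt((3 - np.sqrt(3)) / 6)])
X = np.array([[0, 1], [1, 0]], dtype=complex)   # shift
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # clock
vectors = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ v
           for a, b in product(range(2), repeat=2)]

d, n = 2, len(vectors)                           # n = d^2 = 4
gram2 = np.array([[abs(np.vdot(x, y)) ** 2 for y in vectors] for x in vectors])
off_diagonal = gram2[~np.eye(n, dtype=bool)]
frame_potential = (gram2 ** 2).sum() / n**2      # Proposition 1(c) with t = 2
```

The off-diagonal entries all equal $1/(d+1)=1/3$ (equiangularity), and the frame potential equals $\binom{d+1}{2}^{-1}=1/3$, so the four vectors form a tight projective $2$-design.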
Pandey, Paulsen, Prakash, and Rahaman [38] recently proposed a new approach to
Zauner’s conjecture. They identify an explicit quantum channel
$\mathfrak{Z}_{d}\colon\mathbb{C}^{d\times d}\to\mathbb{C}^{d\times d}$ whose
so-called entanglement breaking rank is
$\operatorname{ebr}(\mathfrak{Z}_{d})\geq d^{2}$, and furthermore, equality
holds if and only if there exists a tight projective $2$-design for
$\mathbb{C}^{d}$. As such, any new upper bound on this entanglement breaking
rank represents quantitative progress towards Zauner’s conjecture. In
addition, Pandey et al. consider various analytic approaches to obtain such
bounds in small dimensions.
As a consequence of Proposition 2, every tight projective $2$-design for
$\mathbb{C}^{d}$ is necessarily an equiangular tight frame (ETF), that is, a
unit norm tight frame that is also equiangular. ETFs correspond to optimal
codes in projective space that achieve equality in the Welch bound [48] (also
known as the simplex bound [12]). By virtue of this optimality, ETFs find
applications in wireless communication [43], compressed sensing [2], and
digital fingerprinting [35]. Motivated by these applications, many ETFs were
recently constructed using various mixtures of algebra and combinatorics [43,
49, 14, 18, 16, 7, 27, 26, 28, 29]; see [17] for a survey. Despite this flurry
of work, several problems involving ETFs (such as Zauner’s conjecture) remain
open, and a finite field model was recently proposed to help study these
remaining problems [23, 24].
Notice that if $\{x_{k}\}_{k\in[n]}$ is a tight projective $2$-design for
$\mathbb{C}^{d}$, then Propositions 1 and 2 together imply that
$\{x_{k}^{\otimes 2}\}_{k\in[n]}$ is an ETF for $(\mathbb{C}^{d})^{\otimes
2}_{\operatorname{sym}}$. This suggests another approach to Zauner’s
conjecture [1] in which one seeks ETFs of $d^{2}$ vectors in the
$\binom{d+1}{2}$-dimensional complex Hilbert space $(\mathbb{C}^{d})^{\otimes
2}_{\operatorname{sym}}$. There are several known constructions of such ETFs
[17, 7, 27, 26], but in order to correspond to a tight projective $2$-design,
the ETF must consist of rank-$1$ symmetric tensors.
We note that one may leverage linear programming bounds to obtain analogous
results to Proposition 2 that relate projective $t$-designs over different
spaces to different sized angle sets [3]. In the real case, tight projective
$2$-designs have size $\binom{d+1}{2}$ and are only known to exist for
$d\in\{2,3,7,23\}$, with $d=119$ being the smallest dimension for which
existence is currently unknown; see [33, 34, 4, 37, 21]. In the quaternionic
case, tight projective $2$-designs have size $d(2d-1)$; they are only known to
exist for $d\in\{2,3\}$ [11, 15], and there is numerical evidence that they
do not exist for $d\in\{4,5\}$ [11]. The octonions are only capable of
supporting a projective space for $d\in\{2,3\}$, and tight projective
$2$-designs exist in both cases [11].
In this paper, we study tight projective $2$-designs in three different
settings. In Section 2, we consider the complex setting, specifically, the new
quantitative approach of Pandey et al. [38]. Here, we show that
$\operatorname{ebr}(\mathfrak{Z}_{d})$ is precisely the size of the smallest
weighted projective $2$-design for $\mathbb{C}^{d}$. This identification
allows us to find new upper bounds on $\operatorname{ebr}(\mathfrak{Z}_{d})$.
Next, in Section 3, we use Proposition 1(b) to find an analog of projective
$2$-designs in a finite field setting. This continues the line of inquiry from
[23, 24] of tackling hard problems from frame theory in a finite field model.
In this setting, we obtain an analog of Proposition 2, and then we construct a
family of tight projective $2$-designs. Finally in Section 4, we consider the
quaternionic setting, where we take inspiration from the fact that a tight
projective $2$-design for $\mathbb{C}^{d}$ can be used to produce an ETF of
$d^{2}$ vectors in $(\mathbb{C}^{d})^{\otimes 2}_{\operatorname{sym}}$. In
particular, we show how a tight projective $2$-design for $\mathbb{H}^{d}$ can
be used to produce an equi-isoclinic tight fusion frame of $2d(d-1)$ different
$3$-dimensional subspaces of the $d(2d+1)$-dimensional real Hilbert space of
$d\times d$ quaternionic anti-Hermitian matrices.
## 2 The complex setting
A linear map $\Phi\colon\mathbb{C}^{d\times d}\to\mathbb{C}^{m\times m}$ is
said to be entanglement breaking if it admits an entanglement breaking
decomposition:
$\Phi(X)=\sum_{k\in[n]}R_{k}XR_{k}^{*},\qquad\sum_{k\in[n]}R_{k}^{*}R_{k}=I_{d},\qquad\operatorname{rank}R_{k}=1~{}~{}\text{for
every}~{}~{}k\in[n].$ (1)
The entanglement breaking rank of $\Phi$, denoted by
$\operatorname{ebr}(\Phi)$, is the smallest $n$ for which there exists
$\\{R_{k}\\}_{k\in[n]}$ in $\mathbb{C}^{m\times d}$ satisfying (1). Let
$\\{e_{i}\\}_{i\in[d]}$ denote the standard basis in $\mathbb{C}^{d}$. The
Choi matrix of $\Phi$ is given by
$C_{\Phi}:=\sum_{i\in[d]}\sum_{j\in[d]}e_{i}e_{j}^{*}\otimes\Phi(e_{i}e_{j}^{*})\in\mathbb{C}^{d\times
d}\otimes\mathbb{C}^{m\times m}.$
In words, $C_{\Phi}$ is a $d\times d$ block array whose $(i,j)$th block is
$\Phi(e_{i}e_{j}^{*})\in\mathbb{C}^{m\times m}$. One may use $C_{\Phi}$ to
discern useful properties about $\Phi$. For example, $\Phi$ is completely
positive when $C_{\Phi}$ is positive semidefinite; see Theorem 2.22 in [47].
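The Choi construction above is easy to spot-check numerically. The following minimal sketch (the helper name `choi_matrix` is ours, not from the text) assembles $C_{\Phi}$ as the $d\times d$ block array with $(i,j)$th block $\Phi(e_{i}e_{j}^{*})$, and confirms that the manifestly completely positive map $X\mapsto RXR^{*}$ has a positive semidefinite Choi matrix.

```python
import numpy as np

def choi_matrix(phi, d):
    # d x d block array whose (i, j) block is phi(e_i e_j^*)
    E = np.eye(d)
    return np.block([[phi(np.outer(E[i], E[j])) for j in range(d)]
                     for i in range(d)])

d = 3
rng = np.random.default_rng(0)
R = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# X -> R X R^* is completely positive, so C_Phi should be PSD
C = choi_matrix(lambda X: R @ X @ R.conj().T, d)
assert np.allclose(C, C.conj().T)
assert np.linalg.eigvalsh(C).min() > -1e-9
```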
We are interested in the quantum depolarizing channel
$\mathfrak{Z}_{d}\colon\mathbb{C}^{d\times d}\to\mathbb{C}^{d\times d}$
defined by
$\mathfrak{Z}_{d}(X):=\frac{1}{d+1}\Big{(}X+\operatorname{tr}X\cdot
I_{d}\Big{)}.$
One may verify that $\mathfrak{Z}_{d}$ is entanglement breaking as a
consequence of its scaled Choi matrix $\frac{1}{d}C_{\mathfrak{Z}_{d}}$ being
a separable bipartite state (specifically, the isotropic state with
$\lambda=1/d$ from Example 7.25 in [47]). Pandey, Paulsen, Prakash, and
Rahaman [38] pointed to this quantum channel as an opportunity for
quantitative progress on Zauner’s conjecture:
###### Proposition 3 (cf. Corollary III.3 and Theorem V.3 in [38]).
* (a)
$\operatorname{ebr}(\mathfrak{Z}_{d})\geq d^{2}$, with equality if and only if
there exists a tight projective $2$-design for $\mathbb{C}^{d}$.
* (b)
$\operatorname{ebr}(\mathfrak{Z}_{d})\leq d^{2}+d$ whenever $d$ is a prime
power.
We say unit vectors $\\{x_{k}\\}_{k\in[n]}$ in $\mathbb{C}^{d}$ form a
weighted projective $t$-design if there exist weights $\\{w_{k}\\}_{k\in[n]}$
such that
$\sum_{k\in[n]}w_{k}(x_{k}^{\otimes t})(x_{k}^{\otimes
t})^{*}=\tbinom{d+t-1}{t}^{-1}\cdot\Pi_{d}^{(t)},\qquad\sum_{k\in[n]}w_{k}=1,\qquad
w_{k}\geq 0,\qquad k\in[n].$
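As a concrete instance of this definition, the tetrahedron SIC in $\mathbb{C}^{2}$ (four unit vectors with $|\langle x_{j},x_{k}\rangle|^{2}=1/3$) is a tight projective $2$-design. The sketch below (NumPy, illustrative only) verifies the displayed identity with $t=2$, $d=2$, and uniform weights $w_{k}=1/4$, using $\Pi_{2}^{(2)}=\frac{1}{2}(I+S)$ for the swap operator $S$.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
# tetrahedron SIC in C^2: four unit vectors with |<x_j, x_k>|^2 = 1/3
X = np.array([[1, 0],
              [1 / np.sqrt(3), np.sqrt(2 / 3)],
              [1 / np.sqrt(3), np.sqrt(2 / 3) * w],
              [1 / np.sqrt(3), np.sqrt(2 / 3) * w**2]])
d, n = 2, 4

# orthogonal projection onto symmetric tensors: (I + swap) / 2
S = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        S[i * d + j, j * d + i] = 1
Pi = (np.eye(d * d) + S) / 2

lhs = sum((1 / n) * np.outer(np.kron(x, x), np.kron(x, x).conj()) for x in X)
assert np.allclose(lhs, Pi / 3)  # binom(d+t-1, t) = binom(3, 2) = 3
```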
Notice that every projective $t$-design is a weighted projective $t$-design
with weights $w_{k}=1/n$. In addition, it is known that every weighted
projective $2$-design for $\mathbb{C}^{d}$ has size at least $d^{2}$, and if
equality holds, then the weights are all $1/n$; see Theorem 4 in [41]. What
follows is the main result of this section, of which Proposition 3 is a
corollary:
###### Theorem 4.
The smallest weighted projective $2$-design for $\mathbb{C}^{d}$ has size
$\operatorname{ebr}(\mathfrak{Z}_{d})$.
In fact, one may use Theorem 4 to improve upon Proposition 3(b) by collecting
various weighted $2$-designs from the literature. Specifically, Theorem 4.1,
Proposition 4.2, and Corollary 4.2 in [40], and Corollaries 4.4 and 4.6 in [6]
give the following:
###### Corollary 5.
* (a)
$\operatorname{ebr}(\mathfrak{Z}_{d})\leq kd^{2}+2d$ whenever $kd+1$ is a
prime power with $k\in\mathbb{N}$.
* (b)
$\operatorname{ebr}(\mathfrak{Z}_{d})\leq d^{2}+(p+1)d$ whenever $d+1=p^{k}$
with $p$ prime and $k\in\mathbb{N}$.
* (c)
$\operatorname{ebr}(\mathfrak{Z}_{d})\leq d^{2}+1$ whenever $d-1$ is a prime
power.
* (d)
$\operatorname{ebr}(\mathfrak{Z}_{d})\leq d^{2}+d-1$ whenever $d$ is a prime
power.
Since $\mathfrak{Z}_{d}$ is entanglement breaking, Theorem 4 also implies the
existence of weighted projective $2$-designs; we note that this also follows
from the main result in [42].
###### Corollary 6.
For each $d\in\mathbb{N}$, there is a weighted projective $2$-design for
$\mathbb{C}^{d}$ of size $\binom{d+1}{2}^{2}$.
###### Proof.
Since $\mathfrak{Z}_{d}$ is entanglement breaking, Theorem 4 promises a
weighted projective $2$-design $\\{x_{k}\\}_{k\in[n]}$ for $\mathbb{C}^{d}$.
Then $\Pi_{d}^{(2)}$ resides in the conic hull of $\\{(x_{k}^{\otimes
2})(x_{k}^{\otimes 2})^{*}\\}_{k\in[n]}$, which in turn is contained in the
$\binom{d+1}{2}^{2}$-dimensional real vector space of Hermitian operators over
$(\mathbb{C}^{d})^{\otimes 2}_{\operatorname{sym}}$. By Carathéodory’s
theorem, there exists $S\subseteq[n]$ with $|S|\leq\binom{d+1}{2}^{2}$ and
weights $w_{k}\geq 0$ for $k\in S$ such that
$\sum_{k\in S}w_{k}(x_{k}^{\otimes 2})(x_{k}^{\otimes
2})^{*}=\tbinom{d+1}{2}^{-1}\cdot\Pi_{d}^{(2)}.$
Furthermore, taking the trace of both sides reveals that $\sum_{k\in
S}w_{k}=1$. As such, $\\{x_{k}\\}_{k\in S}$ is a weighted projective
$2$-design. ∎
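The Carathéodory step in this proof is constructive: while more points remain than the ambient dimension, find a linear dependency among them and move along it until some weight vanishes. A minimal NumPy sketch of this conic pruning (illustrative, for vectors in $\mathbb{R}^{N}$; the function name is ours) follows.

```python
import numpy as np

def conic_caratheodory(V, w, tol=1e-9):
    """Prune a conic combination sum_k w_k V[:, k] (all w_k >= 0) to at
    most N points in R^N while preserving the weighted sum."""
    V = np.asarray(V, dtype=float).copy()
    w = np.asarray(w, dtype=float).copy()
    while V.shape[1] > V.shape[0]:
        c = np.linalg.svd(V)[2][-1]  # a dependency: V @ c ~ 0
        if c.max() <= tol:
            c = -c
        t = np.min(w[c > tol] / c[c > tol])  # largest step keeping w >= 0
        w -= t * c                           # some weight hits zero
        keep = w > tol
        V, w = V[:, keep], w[keep]
    return V, w

rng = np.random.default_rng(0)
V = rng.standard_normal((3, 10))
w = rng.random(10)
V2, w2 = conic_caratheodory(V, w)
assert V2.shape[1] <= 3 and w2.min() >= 0
assert np.allclose(V @ w, V2 @ w2)
```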
The remainder of this section proves Theorem 4. We first collect a few helpful
lemmas. Let $T\colon\mathbb{C}^{d\times d}\to\mathbb{C}^{d\times d}$ denote
the transposition operator defined by $T(X):=X^{\top}$.
###### Lemma 7 (cf. Proposition III.5 in [38]).
It holds that $\operatorname{ebr}(\Phi)=\operatorname{ebr}(T\circ\Phi)$ with
$\Phi(X)=\sum_{k\in[n]}x_{k}y_{k}^{\top}X(x_{k}y_{k}^{\top})^{*}\qquad\Longleftrightarrow\qquad(T\circ\Phi)(X)=\sum_{k\in[n]}\overline{x}_{k}y_{k}^{\top}X(\overline{x}_{k}y_{k}^{\top})^{*}.$
###### Proof.
The claim follows from the following manipulation:
$\bigg{(}\sum_{k\in[n]}x_{k}y_{k}^{\top}X\overline{y}_{k}\overline{x}_{k}^{\top}\bigg{)}^{\top}=\sum_{k\in[n]}\overline{x}_{k}(\overline{y}_{k}^{\top}X^{\top}y_{k})x_{k}^{\top}=\sum_{k\in[n]}\overline{x}_{k}(\overline{y}_{k}^{\top}X^{\top}y_{k})^{\top}x_{k}^{\top}=\sum_{k\in[n]}\overline{x}_{k}y_{k}^{\top}X\overline{y}_{k}x_{k}^{\top},$
where the second step takes the transpose of a scalar. Indeed, we have both
$(x_{k}y_{k}^{\top})^{*}=\overline{y}_{k}\overline{x}_{k}^{\top}$ and
$(\overline{x}_{k}y_{k}^{\top})^{*}=\overline{y}_{k}x_{k}^{\top}$. ∎
Both of the following lemmas were implicitly used in the proof of Corollary
III.7 in [38]. To prove them, we will repeatedly use the following identity,
which is valid for any linear $\Phi\colon\mathbb{C}^{d\times
d}\to\mathbb{C}^{m\times m}$, and any $w,y\in\mathbb{C}^{d}$ and
$x,z\in\mathbb{C}^{m}$:
$\displaystyle(w\otimes x)^{\top}C_{\Phi}(y\otimes z)$
$\displaystyle=\sum_{i\in[d]}\sum_{j\in[d]}w_{i}y_{j}\cdot
x^{\top}\Phi(e_{i}e_{j}^{*})z$
$\displaystyle=x^{\top}\Phi\bigg{(}\sum_{i\in[d]}\sum_{j\in[d]}w_{i}y_{j}\cdot
e_{i}e_{j}^{*}\bigg{)}z=x^{\top}\Phi(wy^{\top})z.$ (2)
###### Lemma 8.
An entanglement breaking map $\Phi$ has entanglement breaking decomposition
$\Phi(X)=\sum_{k\in[n]}a_{k}b_{k}^{\top}X(a_{k}b_{k}^{\top})^{*}.$
if and only if $\Phi$ has Choi matrix
$C_{\Phi}=\sum_{k\in[n]}(b_{k}\otimes a_{k})(b_{k}\otimes a_{k})^{*}.$
###### Proof.
($\Rightarrow$) For any $w,x,y,z$, we may apply (2) to get
$\displaystyle(w\otimes x)^{\top}C_{\Phi}(y\otimes z)$
$\displaystyle=x^{\top}\bigg{(}\sum_{k\in[n]}a_{k}b_{k}^{\top}(wy^{\top})\overline{b}_{k}\overline{a}_{k}^{\top}\bigg{)}z$
$\displaystyle=\sum_{k\in[n]}(x^{\top}a_{k}b_{k}^{\top}w)(y^{\top}\overline{b}_{k}\overline{a}_{k}^{\top}z)$
$\displaystyle=\sum_{k\in[n]}(w\otimes x)^{\top}(b_{k}\otimes
a_{k})(\overline{b}_{k}\otimes\overline{a}_{k})^{\top}(y\otimes z)$
$\displaystyle=(w\otimes x)^{\top}\bigg{(}\sum_{k\in[n]}(b_{k}\otimes
a_{k})(b_{k}\otimes a_{k})^{*}\bigg{)}(y\otimes z).$
Since $w,x,y,z$ are arbitrary, the result follows.
($\Leftarrow$) For any $w,x,y,z$, we may similarly apply (2) to get
$\displaystyle x^{\top}\Phi(wy^{\top})z$ $\displaystyle=(w\otimes
x)^{\top}C_{\Phi}(y\otimes z)$ $\displaystyle=(w\otimes
x)^{\top}\bigg{(}\sum_{k\in[n]}(b_{k}\otimes a_{k})(b_{k}\otimes
a_{k})^{*}\bigg{)}(y\otimes z)$
$\displaystyle=x^{\top}\bigg{(}\sum_{k\in[n]}a_{k}b_{k}^{\top}(wy^{\top})\overline{b}_{k}\overline{a}_{k}^{\top}\bigg{)}z.$
Since $w,x,y,z$ are arbitrary, the result follows. ∎
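Lemma 8 is also easy to sanity-check numerically: build $\Phi$ from random rank-one operators $R_{k}=a_{k}b_{k}^{\top}$ and compare its Choi matrix against the rank-one sum. The sketch below (NumPy, illustrative) does exactly this.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, n = 3, 2, 4
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))  # a_k
B = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))  # b_k

def phi(X):
    # Phi(X) = sum_k (a_k b_k^T) X (a_k b_k^T)^*
    return sum(np.outer(a, b) @ X @ np.outer(a, b).conj().T
               for a, b in zip(A, B))

E = np.eye(d)
choi = np.block([[phi(np.outer(E[i], E[j])) for j in range(d)]
                 for i in range(d)])
target = sum(np.outer(np.kron(b, a), np.kron(b, a).conj())
             for a, b in zip(A, B))
assert np.allclose(choi, target)
```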
###### Lemma 9.
$C_{T\circ\mathfrak{Z}_{d}}=\frac{2}{d+1}\Pi_{d}^{(2)}$.
###### Proof.
For any $w,x,y,z$, we may apply (2) to get
$\displaystyle(w\otimes x)^{\top}C_{T\circ\mathfrak{Z}_{d}}(y\otimes z)$
$\displaystyle=x^{\top}\Big{[}(T\circ\mathfrak{Z}_{d})(wy^{\top})\Big{]}z$
$\displaystyle=x^{\top}\frac{1}{d+1}\Big{(}yw^{\top}+\operatorname{tr}wy^{\top}\cdot
I_{d}\Big{)}z$ $\displaystyle=\frac{1}{d+1}\Big{(}w^{\top}z\cdot
x^{\top}y+w^{\top}y\cdot x^{\top}z\Big{)}=(w\otimes
x)^{\top}\frac{1}{d+1}\Big{(}z\otimes y+y\otimes z\Big{)}.$
Since $w,x$ are arbitrary, it follows that
$C_{T\circ\mathfrak{Z}_{d}}(y\otimes z)=\frac{2}{d+1}\cdot\frac{1}{2}(z\otimes
y+y\otimes z)=\frac{2}{d+1}\Pi_{d}^{(2)}(y\otimes z).$
Since $y,z$ are arbitrary, the result follows. ∎
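A quick numerical check of Lemma 9 (NumPy sketch, illustrative): assemble the Choi matrix of $T\circ\mathfrak{Z}_{d}$ block by block and compare it against $\frac{2}{d+1}\Pi_{d}^{(2)}$, writing the symmetric projection as $\Pi_{d}^{(2)}=\frac{1}{2}(I+S)$ for the swap operator $S$.

```python
import numpy as np

d = 4
E = np.eye(d)

def TZ(X):
    # (T o Z_d)(X) = Z_d(X)^T for the depolarizing channel Z_d
    return ((X + np.trace(X) * np.eye(d)) / (d + 1)).T

choi = np.block([[TZ(np.outer(E[i], E[j])) for j in range(d)]
                 for i in range(d)])

S = np.zeros((d * d, d * d))  # swap: S (y tensor z) = z tensor y
for i in range(d):
    for j in range(d):
        S[i * d + j, j * d + i] = 1
Pi = (np.eye(d * d) + S) / 2

assert np.allclose(choi, 2 / (d + 1) * Pi)
```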
We are now ready to prove the main result of this section.
###### Proof of Theorem 4.
Suppose $\\{x_{k}\\}_{k\in[n]}$ is a weighted projective $2$-design for
$\mathbb{C}^{d}$ with weights $\\{w_{k}\\}_{k\in[n]}$. We claim that
$\operatorname{ebr}(\mathfrak{Z}_{d})\leq n$. To see this, first recall that
$\sum_{k\in[n]}w_{k}(x_{k}\otimes x_{k})(x_{k}\otimes
x_{k})^{*}=\frac{2}{d(d+1)}\cdot\Pi_{d}^{(2)}.$
Taking $a_{k}:=x_{k}$ and $b_{k}:=\sqrt{dw_{k}}x_{k}$ then gives
$\sum_{k\in[n]}(b_{k}\otimes a_{k})(b_{k}\otimes
a_{k})^{*}=d\sum_{k\in[n]}w_{k}(x_{k}\otimes x_{k})(x_{k}\otimes
x_{k})^{*}=\frac{2}{d+1}\cdot\Pi_{d}^{(2)}=C_{T\circ\mathfrak{Z}_{d}},$
where the last step applies Lemma 9. Lemmas 7 and 8 then imply that
$\operatorname{ebr}(\mathfrak{Z}_{d})=\operatorname{ebr}(T\circ\mathfrak{Z}_{d})\leq
n.$
Next, suppose
$n=\operatorname{ebr}(\mathfrak{Z}_{d})=\operatorname{ebr}(T\circ\mathfrak{Z}_{d})$
and consider an entanglement breaking decomposition
$(T\circ\mathfrak{Z}_{d})(X)=\sum_{k\in[n]}a_{k}b_{k}^{\top}X(a_{k}b_{k}^{\top})^{*}.$
Then Lemmas 8 and 9 together imply
$\sum_{k\in[n]}(b_{k}\otimes a_{k})(b_{k}\otimes
a_{k})^{*}=\frac{2}{d+1}\cdot\Pi_{d}^{(2)}.$
Notice that for every antisymmetric $x\in(\mathbb{C}^{d})^{\otimes 2}$, it
holds that
$\sum_{k\in[n]}|\langle b_{k}\otimes
a_{k},x\rangle|^{2}=x^{*}\bigg{(}\sum_{k\in[n]}(b_{k}\otimes
a_{k})(b_{k}\otimes a_{k})^{*}\bigg{)}x=x^{*}\frac{2}{d+1}\Pi_{d}^{(2)}x=0.$
It follows that each $b_{k}\otimes a_{k}$ is necessarily symmetric, and
therefore takes the form $\sqrt{dw_{k}}x_{k}\otimes x_{k}$ for some unit
vector $x_{k}$ and scalar $w_{k}\geq 0$. Then
$\sum_{k\in[n]}w_{k}(x_{k}\otimes x_{k})(x_{k}\otimes
x_{k})^{*}=\frac{2}{d(d+1)}\cdot\Pi_{d}^{(2)}.$
The fact that $\sum_{k\in[n]}w_{k}=1$ follows from taking the trace of both
sides. Overall, there exists a weighted projective $2$-design for
$\mathbb{C}^{d}$ of size $n=\operatorname{ebr}(\mathfrak{Z}_{d})$, as claimed.
∎
## 3 The finite field setting
In this section, we introduce a notion of projective $2$-designs in a finite
field setting. Here, we will find an analog to Proposition 2 in which tight
projective $2$-designs are identified as maximal systems of equiangular lines,
and then we will provide several examples. We start by reviewing some
preliminaries; the reader is encouraged to see [23] for more information.
Let $q$ be a prime power. Given $a\in\mathbb{F}_{q^{2}}$, we abbreviate
$\overline{a}=a^{q}$ for its image under the Frobenius automorphism fixing
$\mathbb{F}_{q}\leq\mathbb{F}_{q^{2}}$. The conjugate transpose of a matrix
$A$ is denoted by $A^{*}$. We consider $\mathbb{F}_{q^{2}}^{d}$ under the
nondegenerate Hermitian form $\langle x,y\rangle=x^{*}y$, which is notably
conjugate-linear in the first variable. A subspace
$V\leq\mathbb{F}_{q^{2}}^{d}$ is called nondegenerate if $V\cap
V^{\perp}=\\{0\\}$, where
$V^{\perp}:=\\{x\in\mathbb{F}_{q^{2}}^{d}:\langle x,y\rangle=0\text{ for every
}y\in V\\}.$
In that case, every $x\in\mathbb{F}_{q^{2}}^{d}$ can be written uniquely as
$x=Px+Qx$ with $Px\in V$ and $Qx\in V^{\perp}$, where
$P\colon\mathbb{F}_{q^{2}}^{d}\to\mathbb{F}_{q^{2}}^{d}$ is orthogonal
projection onto $V$.
###### Definition 10.
Let $V\leq\mathbb{F}_{q^{2}}^{d}$ be nondegenerate. We say
$\\{x_{k}\\}_{k\in[n]}$ in $V$ is a $c$-tight frame for $V$ with constant
$c\in\mathbb{F}_{q}$ if
* (i)
$\operatorname{span}\\{x_{k}\\}_{k\in[n]}=V$, and
* (ii)
$\sum_{k\in[n]}\langle x_{k},y\rangle x_{k}=cy$ for every $y\in V$.
For $a\in\mathbb{F}_{q}$, a $c$-tight frame is an equal-norm tight frame, or
$(a,c)$-NTF, if
* (iii)
$\langle x_{k},x_{k}\rangle=a$ for every $k\in[n]$.
For $b\in\mathbb{F}_{q}$, an $(a,c)$-NTF is an equiangular tight frame, or
$(a,b,c)$-ETF, if
* (iv)
$\langle x_{k},x_{\ell}\rangle\langle x_{\ell},x_{k}\rangle=b$ for every
$k,\ell\in[n]$ with $k\neq\ell$.
Meanwhile, an $(a,b)$-equiangular system in $V$ satisfies (iii) and (iv), but
not necessarily (i) or (ii).
Notice that (ii) implies (i) if $c\neq 0$. Furthermore, if $P$ is orthogonal
projection onto $V$, then (ii) is equivalent to
* (ii′)
$\sum_{k\in[n]}x_{k}x_{k}^{*}=cP$.
We will repeatedly make use of the following basic results from [23]:
###### Proposition 11 (Corollary 3.8 in [23]).
If $V\leq\mathbb{F}_{q^{2}}^{d}$ is nondegenerate and $\\{x_{k}\\}_{k\in[n]}$
is a tight frame for $V$ with constant $c=0$, then $n\geq 2\dim V$.
###### Proposition 12 (Equation (3.2) and Proposition 4.7 in [23]).
* (a)
If $\\{x_{k}\\}_{k\in[n]}$ is an $(a,c)$-NTF for $V$, then $na=c\dim V$.
* (b)
If $\\{x_{k}\\}_{k\in[n]}$ is an $(a,b,c)$-ETF for $V$, then $a(c-a)=(n-1)b$.
###### Proposition 13 (Gerzon’s bound, see Theorem 4.1 in [23] and its
proof).
If $\\{x_{k}\\}_{k\in[n]}$ is an $(a,b)$-equiangular system in
$\mathbb{F}_{q^{2}}^{d}$ and $a^{2}\neq b$, then $n\leq d^{2}$. If equality
holds and $a\neq 0$, then $\\{x_{k}x_{k}^{*}\\}_{k\in[n]}$ is a basis for the
$\mathbb{F}_{q}$-linear space
$\\{X\in\mathbb{F}_{q^{2}}^{d\times d}:X=X^{*}\\}.$
If equality holds and $a=0$, then the $\mathbb{F}_{q}$-span of
$\\{x_{k}x_{k}^{*}\\}_{k\in[n]}$ is the subspace
$\\{X\in\mathbb{F}_{q^{2}}^{d\times d}:X=X^{*},~{}\operatorname{tr}X=0\\},$
and $\sum_{k\in[n]}x_{k}x_{k}^{*}=0$ is the unique $\mathbb{F}_{q}$-linear
dependency of $\\{x_{k}x_{k}^{*}\\}_{k\in[n]}$ up to a scalar.
### 3.1 Projective 2-designs
Throughout this subsection, we assume $q$ is odd. Let
$e_{1},\dotsc,e_{d}\in\mathbb{F}_{q^{2}}^{d}$ denote the standard basis. Then
$\\{e_{i}\otimes e_{j}\\}_{i,j\in[d]}$ is a basis for
$(\mathbb{F}_{q^{2}}^{d})^{\otimes 2}$. We write
$(\mathbb{F}_{q^{2}}^{d})^{\otimes
2}_{\operatorname{sym}}:=\bigg{\\{}\sum_{i\in[d]}\sum_{j\in[d]}c_{ij}(e_{i}\otimes
e_{j}):c_{ij}=c_{ji}\text{ for every }i,j\in[d]\bigg{\\}}$
for the subspace of symmetric tensors, and we define
$\Pi_{d}^{(2)}:=\frac{1}{2}\sum_{i\in[d]}\sum_{j\in[d]}\Big{(}e_{i}e_{i}^{*}\otimes
e_{j}e_{j}^{*}+e_{i}e_{j}^{*}\otimes
e_{j}e_{i}^{*}\Big{)}\in(\mathbb{F}_{q^{2}}^{d\times d})^{\otimes 2}.$
###### Lemma 14.
$(\mathbb{F}_{q^{2}}^{d})^{\otimes
2}_{\operatorname{sym}}\leq(\mathbb{F}_{q^{2}}^{d})^{\otimes 2}$ is
nondegenerate, and $\Pi_{d}^{(2)}$ is its orthogonal projection.
###### Proof.
For nondegeneracy, let $y=\sum_{i\in[d]}\sum_{j\in[d]}a_{ij}(e_{i}\otimes
e_{j})\in(\mathbb{F}_{q^{2}}^{d})^{\otimes 2}_{\operatorname{sym}}$ be
nonzero. Then there exist $k,\ell\in[d]$ such that $a_{k\ell}=a_{\ell k}\neq
0$. It follows that
$\displaystyle\langle y,e_{k}\otimes e_{\ell}+e_{\ell}\otimes e_{k}\rangle$
$\displaystyle=\sum_{i\in[d]}\sum_{j\in[d]}\overline{a_{ij}}\langle
e_{i}\otimes e_{j},e_{k}\otimes
e_{\ell}\rangle+\sum_{i\in[d]}\sum_{j\in[d]}\overline{a_{ij}}\langle
e_{i}\otimes e_{j},e_{\ell}\otimes e_{k}\rangle=2\overline{a_{k\ell}},$
which is nonzero by assumption. Thus, $y$ is not orthogonal to
$(\mathbb{F}_{q^{2}}^{d})^{\otimes 2}_{\operatorname{sym}}$.
Next, we show that $\Pi_{d}^{(2)}$ projects orthogonally onto
$(\mathbb{F}_{q^{2}}^{d})^{\otimes 2}_{\operatorname{sym}}$. To this end,
choose any vector $x=\sum_{k\in[d]}\sum_{\ell\in[d]}c_{k\ell}(e_{k}\otimes
e_{\ell})$ and compute
$\displaystyle\Pi_{d}^{(2)}x$
$\displaystyle=\frac{1}{2}\sum_{i\in[d]}\sum_{j\in[d]}\sum_{k\in[d]}\sum_{\ell\in[d]}c_{k\ell}\Big{(}(e_{i}e_{i}^{*}\otimes
e_{j}e_{j}^{*})(e_{k}\otimes e_{\ell})+(e_{i}e_{j}^{*}\otimes
e_{j}e_{i}^{*})(e_{k}\otimes e_{\ell})\Big{)}$
$\displaystyle=\frac{1}{2}\sum_{i\in[d]}\sum_{j\in[d]}\sum_{k\in[d]}\sum_{\ell\in[d]}c_{k\ell}\Big{(}(e_{i}e_{i}^{*})e_{k}\otimes(e_{j}e_{j}^{*})e_{\ell}+(e_{i}e_{j}^{*})e_{k}\otimes(e_{j}e_{i}^{*})e_{\ell}\Big{)}$
$\displaystyle=\frac{1}{2}\sum_{k\in[d]}\sum_{\ell\in[d]}c_{k\ell}\bigg{(}\sum_{i\in[d]}\sum_{j\in[d]}(e_{i}e_{i}^{*})e_{k}\otimes(e_{j}e_{j}^{*})e_{\ell}+\sum_{i\in[d]}\sum_{j\in[d]}(e_{i}e_{j}^{*})e_{k}\otimes(e_{j}e_{i}^{*})e_{\ell}\bigg{)}$
$\displaystyle=\frac{1}{2}\sum_{k\in[d]}\sum_{\ell\in[d]}c_{k\ell}(e_{k}\otimes
e_{\ell}+e_{\ell}\otimes e_{k}),$
which belongs to $(\mathbb{F}_{q^{2}}^{d})^{\otimes 2}_{\operatorname{sym}}$.
Then
$x-\Pi_{d}^{(2)}x=\frac{1}{2}\sum_{k\in[d]}\sum_{\ell\in[d]}c_{k\ell}(e_{k}\otimes
e_{\ell}-e_{\ell}\otimes e_{k}),$
and it is straightforward to check this is orthogonal to every
$y\in(\mathbb{F}_{q^{2}}^{d})^{\otimes 2}_{\operatorname{sym}}$. As such,
$x=(\Pi_{d}^{(2)}x)+(x-\Pi_{d}^{(2)}x)$ gives the desired decomposition of
$x$. ∎
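Since every entry of $\Pi_{d}^{(2)}$ lies in the prime field, Lemma 14 can be spot-checked with plain integer arithmetic modulo an odd prime $p$. The sketch below (illustrative) verifies that $\Pi_{d}^{(2)}$ is idempotent and fixes a symmetric tensor over $\mathbb{F}_{3}$.

```python
import numpy as np

p, d = 3, 3            # odd prime, so 2 is invertible mod p
half = pow(2, -1, p)   # inverse of 2 mod p

E = np.eye(d, dtype=int)
Pi = np.zeros((d * d, d * d), dtype=int)
for i in range(d):
    for j in range(d):
        Pi += np.kron(np.outer(E[i], E[i]), np.outer(E[j], E[j]))
        Pi += np.kron(np.outer(E[i], E[j]), np.outer(E[j], E[i]))
Pi = (half * Pi) % p

assert np.array_equal((Pi @ Pi) % p, Pi)             # Pi is a projection
x = (np.kron(E[0], E[1]) + np.kron(E[1], E[0])) % p  # a symmetric tensor
assert np.array_equal((Pi @ x) % p, x)
```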
As a consequence of Lemma 14, we may consider tight frames over
$(\mathbb{F}_{q^{2}}^{d})^{\otimes 2}_{\operatorname{sym}}$. This allows us to
define the following analog of Proposition 1(b):
###### Definition 15.
$\\{x_{k}\\}_{k\in[n]}$ in $\mathbb{F}_{q^{2}}^{d}$ is an
$(a,c_{1},c_{2})$-projective $2$-design if $a,c_{1},c_{2}\in\mathbb{F}_{q}$
and
* (i)
$\langle x_{k},x_{k}\rangle=a$ for every $k\in[n]$,
* (ii)
$\\{x_{k}\\}_{k\in[n]}$ is a $c_{1}$-tight frame for $\mathbb{F}_{q^{2}}^{d}$,
and
* (iii)
$\\{x_{k}^{\otimes 2}\\}_{k\in[n]}$ is a $c_{2}$-tight frame for
$(\mathbb{F}_{q^{2}}^{d})^{\otimes 2}_{\operatorname{sym}}$.
With this definition, the finite field setting enjoys an analogy to
Proposition 2:
###### Theorem 16.
If $\\{x_{k}\\}_{k\in[n]}$ in $\mathbb{F}_{q^{2}}^{d}$ is a projective
$2$-design, then $n\geq d^{2}$.
To prove this theorem, we will repeatedly make use of the following:
###### Lemma 17.
If $\\{x_{k}\\}_{k\in[n]}$ in $\mathbb{F}_{q^{2}}^{d}$ is an
$(a,c_{1},c_{2})$-projective $2$-design with $c_{2}\neq 0$, then for every
$A\in\mathbb{F}_{q^{2}}^{d\times d}$, it holds that
$A=\frac{2}{c_{2}}\sum_{k\in[n]}x_{k}x_{k}^{*}Ax_{k}x_{k}^{*}-\operatorname{tr}A\cdot
I_{d}.$
###### Proof.
To prove the result, we multiply both sides of the identity
$\sum_{k\in[n]}(x_{k}^{\otimes 2})(x_{k}^{\otimes
2})^{*}=c_{2}\cdot\Pi_{d}^{(2)}$ by $A\otimes I_{d}$ and then “trace out” the
first subsystem. Explicitly, we define the partial trace
$\operatorname{tr}_{1}\colon\mathbb{F}_{q^{2}}^{d\times
d}\otimes\mathbb{F}_{q^{2}}^{d\times d}\to\mathbb{F}_{q^{2}}^{d\times d}$ by
taking
$\operatorname{tr}_{1}(A\otimes B):=\operatorname{tr}(A)\cdot B$
and extending linearly. Since $\\{x_{k}\\}_{k\in[n]}$ is a projective
$2$-design, we have
$\operatorname{tr}_{1}\bigg{(}\sum_{k\in[n]}(x_{k}^{\otimes 2})(x_{k}^{\otimes
2})^{*}(A\otimes
I_{d})\bigg{)}=\operatorname{tr}_{1}\Big{(}c_{2}\cdot\Pi_{d}^{(2)}\cdot(A\otimes
I_{d})\Big{)}.$ (3)
We cycle the trace to simplify the left-hand side of (3):
$\displaystyle\operatorname{tr}_{1}\bigg{(}\sum_{k\in[n]}(x_{k}^{\otimes
2})(x_{k}^{\otimes 2})^{*}(A\otimes I_{d})\bigg{)}$
$\displaystyle=\operatorname{tr}_{1}\bigg{(}\sum_{k\in[n]}(x_{k}x_{k}^{*})^{\otimes
2}(A\otimes I_{d})\bigg{)}$
$\displaystyle=\operatorname{tr}_{1}\bigg{(}\sum_{k\in[n]}x_{k}x_{k}^{*}A\otimes
x_{k}x_{k}^{*}\bigg{)}$
$\displaystyle=\sum_{k\in[n]}\operatorname{tr}(x_{k}x_{k}^{*}A)\cdot
x_{k}x_{k}^{*}=\sum_{k\in[n]}x_{k}(x_{k}^{*}Ax_{k})x_{k}^{*}.$ (4)
For the right-hand side of (3), we apply the definition of $\Pi_{d}^{(2)}$:
$\displaystyle\operatorname{tr}_{1}\Big{(}c_{2}\cdot\Pi_{d}^{(2)}\cdot(A\otimes
I_{d})\Big{)}$
$\displaystyle=\operatorname{tr}_{1}\bigg{(}\frac{c_{2}}{2}\sum_{i\in[d]}\sum_{j\in[d]}\Big{(}e_{i}e_{i}^{*}\otimes
e_{j}e_{j}^{*}+e_{i}e_{j}^{*}\otimes e_{j}e_{i}^{*}\Big{)}(A\otimes
I_{d})\bigg{)}$
$\displaystyle=\frac{c_{2}}{2}\sum_{i\in[d]}\sum_{j\in[d]}\Big{(}\operatorname{tr}(e_{i}e_{i}^{*}A)\cdot
e_{j}e_{j}^{*}+\operatorname{tr}(e_{i}e_{j}^{*}A)\cdot e_{j}e_{i}^{*}\Big{)}$
$\displaystyle=\frac{c_{2}}{2}\Big{(}\operatorname{tr}A\cdot I_{d}+A\Big{)}.$
(5)
The result follows by equating (4) to (5) and rearranging. ∎
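Although Lemma 17 is stated over $\mathbb{F}_{q^{2}}$, the partial-trace computation in its proof is equally valid over $\mathbb{C}$. The following sketch (NumPy, illustrative) checks the reconstruction formula using the tetrahedron SIC in $\mathbb{C}^{2}$, for which $\sum_{k}(x_{k}^{\otimes 2})(x_{k}^{\otimes 2})^{*}=c_{2}\cdot\Pi_{2}^{(2)}$ with $c_{2}=4/3$.

```python
import numpy as np

w3 = np.exp(2j * np.pi / 3)
# tetrahedron SIC in C^2: a tight projective 2-design with n = d^2 = 4
X = np.array([[1, 0],
              [1 / np.sqrt(3), np.sqrt(2 / 3)],
              [1 / np.sqrt(3), np.sqrt(2 / 3) * w3],
              [1 / np.sqrt(3), np.sqrt(2 / 3) * w3**2]])
d, n = 2, 4
c2 = n / 3  # sum_k (x_k^(x)2)(x_k^(x)2)^* = c2 * Pi, binom(d+1, 2) = 3

rng = np.random.default_rng(3)
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# A = (2/c2) sum_k x_k x_k^* A x_k x_k^* - tr(A) I_d
recon = (2 / c2) * sum(np.outer(x, x.conj()) @ A @ np.outer(x, x.conj())
                       for x in X) - np.trace(A) * np.eye(d)
assert np.allclose(recon, A)
```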
###### Proof of Theorem 16.
Suppose $\\{x_{k}\\}_{k\in[n]}$ in $\mathbb{F}_{q^{2}}^{d}$ is an
$(a,c_{1},c_{2})$-projective $2$-design.
Case I: $c_{2}=0$. Then $\\{x_{k}^{\otimes 2}\\}_{k\in[n]}$ is an
$(a^{2},0)$-NTF for $(\mathbb{F}_{q^{2}}^{d})_{\operatorname{sym}}^{\otimes
2}$. By
Proposition 11,
$n\geq
2\cdot\operatorname{dim}((\mathbb{F}_{q^{2}}^{d})_{\operatorname{sym}}^{\otimes
2})=d^{2}+d>d^{2}.$
Case II: $c_{1}\neq 0$ and $c_{2}\neq 0$. Apply Lemma 17 and the identity
$\sum_{k\in[n]}x_{k}x_{k}^{*}=c_{1}\cdot I_{d}$:
$\displaystyle A$
$\displaystyle=\frac{2}{c_{2}}\sum_{k\in[n]}x_{k}x_{k}^{*}Ax_{k}x_{k}^{*}-\operatorname{tr}A\cdot
I_{d}$
$\displaystyle=\frac{2}{c_{2}}\sum_{k\in[n]}x_{k}x_{k}^{*}Ax_{k}x_{k}^{*}-\operatorname{tr}A\cdot\frac{1}{c_{1}}\sum_{k\in[n]}x_{k}x_{k}^{*}=\sum_{k\in[n]}\bigg{(}\frac{2}{c_{2}}x_{k}^{*}Ax_{k}-\frac{1}{c_{1}}\operatorname{tr}A\bigg{)}x_{k}x_{k}^{*}$
for every $A\in\mathbb{F}_{q^{2}}^{d\times d}$. It follows that the
$\mathbb{F}_{q^{2}}$-span of $\\{x_{k}x_{k}^{*}\\}_{k\in[n]}$ is
$\mathbb{F}_{q^{2}}^{d\times d}$, and so $n\geq d^{2}$.
Case III: $c_{1}=0$ and $c_{2}\neq 0$. Lemma 17 implies that the
$\mathbb{F}_{q^{2}}$-span of $\\{x_{k}x_{k}^{*}\\}_{k\in[n]}\cup\\{I_{d}\\}$
is $\mathbb{F}_{q^{2}}^{d\times d}$, while the identity
$\sum_{k\in[n]}x_{k}x_{k}^{*}=c_{1}\cdot I_{d}=0$ implies that
$\\{x_{k}x_{k}^{*}\\}_{k\in[n]}$ is linearly dependent. Since
$\\{x_{k}x_{k}^{*}\\}_{k\in[n]}\cup\\{I_{d}\\}$ is a linearly dependent
spanning set of size $n+1$, it follows that $n+1\geq d^{2}+1$, i.e., $n\geq
d^{2}$. ∎
###### Theorem 18.
Any two of the following statements together imply the third statement:
* (a)
$\\{x_{k}\\}_{k\in[n]}$ in $\mathbb{F}_{q^{2}}^{d}$ is a projective
$2$-design.
* (b)
$n=d^{2}$.
* (c)
There exist $a,b,c_{1}\in\mathbb{F}_{q}$ such that
* (i)
$a^{2}\neq b$,
* (ii)
$a^{2}-b=\frac{bc_{1}}{a}$ if $a\neq 0$,
* (iii)
$d\equiv-1\bmod p$ if $a=0$, and
* (iv)
$\\{x_{k}\\}_{k\in[n]}$ in $\mathbb{F}_{q^{2}}^{d}$ is an
$(a,b,c_{1})$-equiangular tight frame.
When (a), (b), and (c) hold, $\\{x_{k}\\}_{k\in[n]}$ is an
$(a,c_{1},c_{2})$-projective $2$-design with $c_{2}=2(a^{2}-b)$.
To prove Theorem 18, we need a method of demonstrating that a collection of
vectors forms a projective $2$-design. For this, we will apply the following:
###### Lemma 19.
Take $\mathbb{F}$ to be $\mathbb{C}$ or $\mathbb{F}_{q^{2}}$. Given
$\\{x_{k}\\}_{k\in[n]}$ in $\mathbb{F}^{d}$, define
$\Psi\colon\mathbb{F}^{d\times d}\to\mathbb{F}^{d\times d}$ by
$\Psi(A):=\sum_{k\in[n]}x_{k}x_{k}^{*}A^{*}x_{k}x_{k}^{*}.$
Then
$\sum_{k\in[n]}(x_{k}^{\otimes 2})(x_{k}^{\otimes
2})^{*}=\sum_{i\in[d]}\sum_{j\in[d]}e_{i}e_{j}^{*}\otimes\Psi(e_{i}e_{j}^{*}).$
To prove Lemma 19, we will use lemmas from the previous section, with the
appropriate interpretation of conjugation in the case
$\mathbb{F}=\mathbb{F}_{q^{2}}$; indeed, the proofs of these results are valid
under this interpretation.
###### Proof of Lemma 19.
Consider the linear map $\overline{\Psi}\colon\mathbb{F}^{d\times
d}\to\mathbb{F}^{d\times d}$ defined by
$\overline{\Psi}(A):=\overline{\Psi(A)}$. Then
$(T\circ\overline{\Psi})(A)=\Psi(A)^{*}=\sum_{k\in[n]}x_{k}x_{k}^{*}Ax_{k}x_{k}^{*}=\sum_{k\in[n]}x_{k}\overline{x}_{k}^{\top}A(x_{k}\overline{x}_{k}^{\top})^{*}.$
Lemma 7 then gives
$\overline{\Psi}(A)=\sum_{k\in[n]}\overline{x}_{k}\overline{x}_{k}^{\top}A(\overline{x}_{k}\overline{x}_{k}^{\top})^{*}.$
Finally, we apply Lemma 8 to get
$\sum_{k\in[n]}(\overline{x}_{k}^{\otimes 2})(\overline{x}_{k}^{\otimes
2})^{*}=\sum_{i\in[d]}\sum_{j\in[d]}e_{i}e_{j}^{*}\otimes\overline{\Psi}(e_{i}e_{j}^{*}),$
and we take conjugates of both sides to obtain the result. ∎
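Lemma 19 can likewise be verified numerically over $\mathbb{C}$ with arbitrary vectors (NumPy sketch, illustrative): both sides are $d\times d$ block arrays, and the $(i,j)$th block of each equals $\sum_{k}(x_{k})_{i}\overline{(x_{k})_{j}}\,x_{k}x_{k}^{*}$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 5
X = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))

def Psi(A):
    # Psi(A) = sum_k x_k x_k^* A^* x_k x_k^*
    return sum(np.outer(x, x.conj()) @ A.conj().T @ np.outer(x, x.conj())
               for x in X)

E = np.eye(d)
lhs = sum(np.outer(np.kron(x, x), np.kron(x, x).conj()) for x in X)
rhs = sum(np.kron(np.outer(E[i], E[j]), Psi(np.outer(E[i], E[j])))
          for i in range(d) for j in range(d))
assert np.allclose(lhs, rhs)
```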
###### Proof of Theorem 18.
First, (a)$\wedge$(c)$\Rightarrow$(b) follows from Theorem 16 and Proposition
13.
Next, we demonstrate (a)$\wedge$(b)$\Rightarrow$(c) by considering each case
in the proof of Theorem 16.
Case I: $c_{2}=0$. This case does not occur since $n\geq d^{2}+d$ implies
$n\neq d^{2}$.
For the remaining cases, we have $c_{2}\neq 0$. For these cases, we will use
the fact that $a=0$ if and only if $c_{1}=0$. To see this, apply Lemma 17 with
$A=I_{d}$ and the identity $\sum_{k\in[n]}x_{k}x_{k}^{*}=c_{1}\cdot I_{d}$ to
get
$I_{d}=\frac{2}{c_{2}}\sum_{k\in[n]}x_{k}x_{k}^{*}I_{d}x_{k}x_{k}^{*}-\operatorname{tr}I_{d}\cdot
I_{d}=\frac{2a}{c_{2}}\sum_{k\in[n]}x_{k}x_{k}^{*}-d\cdot
I_{d}=\bigg{(}\frac{2ac_{1}}{c_{2}}-d\bigg{)}\cdot I_{d}.$
If $0\in\\{a,c_{1}\\}$, then the above identity implies $d\equiv-1\bmod p$ and
$n=d^{2}\equiv 1\bmod p$. Furthermore, since $\\{x_{k}\\}_{k\in[n]}$ is an
$(a,c_{1})$-NTF, Proposition 12(a) gives that $na=dc_{1}$. If
$0\in\\{a,c_{1}\\}$, then squaring both sides gives
$a^{2}=n^{2}a^{2}=d^{2}c_{1}^{2}=c_{1}^{2}$. It follows that $a=0$ if and only
if $c_{1}=0$, as claimed. Furthermore, $d\equiv-1\bmod p$ if $a=0$.
Case II: $c_{1}\neq 0$ and $c_{2}\neq 0$. For every
$A\in\mathbb{F}_{q^{2}}^{d\times d}$, Lemma 17 and the identity
$\sum_{k\in[n]}x_{k}x_{k}^{*}=c_{1}\cdot I_{d}$ together imply
$A=\sum_{k\in[n]}\bigg{(}\frac{2}{c_{2}}x_{k}^{*}Ax_{k}-\frac{1}{c_{1}}\operatorname{tr}A\bigg{)}x_{k}x_{k}^{*}.$
(6)
Since $\\{x_{k}x_{k}^{*}\\}_{k\in[n]}$ is a spanning set of
$\mathbb{F}_{q^{2}}^{d\times d}$ of size
$n=d^{2}=\operatorname{dim}(\mathbb{F}_{q^{2}}^{d\times d})$, it is also a
basis. Then the decomposition (6) is unique. For $A:=x_{\ell}x_{\ell}^{*}$,
this implies
$\frac{2}{c_{2}}\langle
x_{k},x_{\ell}\rangle^{q+1}-\frac{a}{c_{1}}=\frac{2}{c_{2}}x_{k}^{*}Ax_{k}-\frac{1}{c_{1}}\operatorname{tr}A=\left\\{\begin{array}[]{cl}1&\text{if
}k=\ell\\\ 0&\text{if }k\neq\ell.\end{array}\right.$ (7)
It follows that
$\langle x_{k},x_{\ell}\rangle^{q+1}=\frac{ac_{2}}{2c_{1}}=:b$
whenever $k\neq\ell$, i.e., $\\{x_{k}\\}_{k\in[n]}$ is an $(a,b,c_{1})$-ETF.
Then (7) gives
$\displaystyle\frac{2}{c_{2}}a^{2}-\frac{a}{c_{1}}$ $\displaystyle=1,$ (8)
$\displaystyle\frac{2}{c_{2}}b-\frac{a}{c_{1}}$ $\displaystyle=0.$ (9)
Subtract (9) from (8) and rearrange to get
$a^{2}-b=\frac{c_{2}}{2}\neq 0.$ (10)
Finally, since $a\neq 0$, we may rearrange (9) to get
$\frac{c_{2}}{2}=\frac{bc_{1}}{a}$, which combined with (10) gives
$a^{2}-b=\frac{bc_{1}}{a}$, as claimed.
Case III: $c_{1}=0$ and $c_{2}\neq 0$. We claim that the
$\mathbb{F}_{q^{2}}$-span of $\\{x_{k}x_{k}^{*}\\}_{k\in[n]}$ equals the
$(d^{2}-1)$-dimensional subspace $\\{X\in\mathbb{F}_{q^{2}}^{d\times
d}:\operatorname{tr}X=0\\}$. Indeed, the inclusion $\subseteq$ follows from
the fact that $\operatorname{tr}(x_{k}x_{k}^{*})=a=0$ for every $k\in[n]$. The
reverse inclusion follows from a dimension count, since Lemma 17 implies that
the $\mathbb{F}_{q^{2}}$-span of
$\\{x_{k}x_{k}^{*}\\}_{k\in[n]}\cup\\{I_{d}\\}$ is all of
$\mathbb{F}_{q^{2}}^{d\times d}$. Next, since $n=(d^{2}-1)+1$, it follows that
the identity $\sum_{k\in[n]}x_{k}x_{k}^{*}=c_{1}\cdot I_{d}$ gives the only
linear dependency of $\\{x_{k}x_{k}^{*}\\}_{k\in[n]}$ up to scalar
multiplication, namely, $\sum_{k\in[n]}x_{k}x_{k}^{*}=0$. Taking
$A:=x_{\ell}x_{\ell}^{*}$ in Lemma 17 gives
$x_{\ell}x_{\ell}^{*}=\frac{2}{c_{2}}\sum_{k\in[n]}x_{k}x_{k}^{*}x_{\ell}x_{\ell}^{*}x_{k}x_{k}^{*}-\operatorname{tr}x_{\ell}x_{\ell}^{*}\cdot
I_{d}=\frac{2}{c_{2}}\sum_{k\in[n]}\langle
x_{k},x_{\ell}\rangle^{q+1}x_{k}x_{k}^{*},$
and rearranging gives
$\sum_{k\in[n]}z_{k\ell}x_{k}x_{k}^{*}=0,$ (11)
where
$z_{k\ell}:=\left\\{\begin{array}[]{ll}\frac{2}{c_{2}}\langle
x_{k},x_{\ell}\rangle^{q+1}&\text{if }k\neq\ell\\\ \frac{2}{c_{2}}\langle
x_{\ell},x_{\ell}\rangle^{q+1}-1&\text{if }k=\ell\\\
\end{array}\right\\}=\left\\{\begin{array}[]{cl}\frac{2}{c_{2}}\langle
x_{k},x_{\ell}\rangle^{q+1}&\text{if }k\neq\ell\\\ -1&\text{if }k=\ell.\\\
\end{array}\right.$
Since $\sum_{k\in[n]}x_{k}x_{k}^{*}=0$ is the unique dependency up to scaling,
the dependency (11) requires $z_{k\ell}=-1$ for every $k\neq\ell$, i.e.,
$\langle x_{k},x_{\ell}\rangle^{q+1}=-\frac{c_{2}}{2}=:b.$
As such, $\\{x_{k}\\}_{k\in[n]}$ is an $(a,b,c_{1})$-ETF. Since $c_{2}\neq 0$,
we have $b\neq 0=a^{2}$, as claimed.
Finally, we demonstrate (b)$\wedge$(c)$\Rightarrow$(a) with the help of Lemma
19. To this end, we consider the linear map $\Psi^{*}\colon
A\mapsto\Psi(A)^{*}$. Since $\\{x_{k}\\}_{k\in[n]}$ is an $(a,b,c_{1})$-ETF,
then
$\displaystyle\Psi^{*}(x_{\ell}x_{\ell}^{*})=\sum_{k\in[n]}x_{k}x_{k}^{*}x_{\ell}x_{\ell}^{*}x_{k}x_{k}^{*}$
$\displaystyle=\sum_{k\in[n]}\langle
x_{k},x_{\ell}\rangle^{q+1}x_{k}x_{k}^{*}$
$\displaystyle=a^{2}x_{\ell}x_{\ell}^{*}+\sum_{\begin{subarray}{c}k\in[n]\\\
k\neq\ell\end{subarray}}bx_{k}x_{k}^{*}=(a^{2}-b)x_{\ell}x_{\ell}^{*}+bc_{1}\cdot
I_{d}.$ (12)
This expression obfuscates the linearity of $\Psi^{*}$, which we elucidate in
two separate cases.
Case I: $a\neq 0$. Since $\operatorname{tr}(x_{\ell}x_{\ell}^{*})=a$, we may
continue (12):
$\Psi^{*}(x_{\ell}x_{\ell}^{*})=(a^{2}-b)x_{\ell}x_{\ell}^{*}+\frac{bc_{1}}{a}\cdot\operatorname{tr}(x_{\ell}x_{\ell}^{*})\cdot
I_{d}.$
Since $0\neq a^{2}\neq b$, it follows from Proposition 13 that the
$\mathbb{F}_{q^{2}}$-span of $\\{x_{k}x_{k}^{*}\\}_{k\in[n]}$ has dimension
$\operatorname{dim}_{\mathbb{F}_{q}}\\{X\in\mathbb{F}_{q^{2}}^{d\times
d}:X=X^{*}\\}=d^{2}$, and so it equals $\mathbb{F}_{q^{2}}^{d\times d}$. Thus,
we may linearly extend the above identity to get
$\Psi^{*}(A)=(a^{2}-b)A+\frac{bc_{1}}{a}\cdot\operatorname{tr}A\cdot I_{d}$
for every $A\in\mathbb{F}_{q^{2}}^{d\times d}$. In particular, we have
$\displaystyle\Psi(e_{i}e_{j}^{*})=\Big{(}\Psi^{*}(e_{i}e_{j}^{*})\Big{)}^{*}$
$\displaystyle=\Big{(}(a^{2}-b)e_{i}e_{j}^{*}+\frac{bc_{1}}{a}\cdot\operatorname{tr}(e_{i}e_{j}^{*})\cdot
I_{d}\Big{)}^{*}$
$\displaystyle=(a^{2}-b)e_{j}e_{i}^{*}+\frac{bc_{1}}{a}\cdot\delta_{ij}\cdot
I_{d}=(a^{2}-b)\cdot\Big{(}e_{j}e_{i}^{*}+\delta_{ij}\cdot I_{d}\Big{)},$
where the last step applies our assumption that $a^{2}-b=\frac{bc_{1}}{a}$.
Then Lemma 19 gives
$\displaystyle\sum_{k\in[n]}(x_{k}^{\otimes 2})(x_{k}^{\otimes 2})^{*}$
$\displaystyle=\sum_{i\in[d]}\sum_{j\in[d]}e_{i}e_{j}^{*}\otimes\Psi(e_{i}e_{j}^{*})$
$\displaystyle=2(a^{2}-b)\sum_{i\in[d]}\sum_{j\in[d]}e_{i}e_{j}^{*}\otimes\frac{1}{2}\Big{(}e_{j}e_{i}^{*}+\delta_{ij}\cdot
I_{d}\Big{)}=2(a^{2}-b)\cdot\Pi_{d}^{(2)}.$
Since $c_{2}:=2(a^{2}-b)\neq 0$ by assumption, it follows that
$\\{x_{k}\\}_{k\in[n]}$ is an $(a,c_{1},c_{2})$-projective $2$-design, as
desired.
Case II: $a=0$. Then $d\equiv-1\bmod p$ by assumption. Furthermore, since
$\\{x_{k}\\}_{k\in[n]}$ is an $(a,c_{1})$-NTF, Proposition 13(a) gives that
$0=na=dc_{1}=-c_{1}$, i.e., $c_{1}=0$. With this, we continue (12):
$\Psi^{*}(x_{\ell}x_{\ell}^{*})=(a^{2}-b)x_{\ell}x_{\ell}^{*}+bc_{1}\cdot
I_{d}=-bx_{\ell}x_{\ell}^{*}.$ (13)
By equality in Gerzon’s bound, Proposition 13 implies that the
$\mathbb{F}_{q^{2}}$-span of $\\{x_{k}x_{k}^{*}\\}_{k\in[n]}$ has dimension
$\operatorname{dim}_{\mathbb{F}_{q}}\\{X\in\mathbb{F}_{q^{2}}^{d\times
d}:X=X^{*},~{}\operatorname{tr}X=0\\}=d^{2}-1$, and therefore equals
$\\{X\in\mathbb{F}_{q^{2}}^{d\times d}:\operatorname{tr}X=0\\}$. By linearity,
(13) implies $\Psi^{*}(A)=-bA$ for every $A\in\mathbb{F}_{q^{2}}^{d\times d}$
with $\operatorname{tr}A=0$. In order to determine $\Psi^{*}(A)$ for all $A$,
we also consider
$\Psi^{*}(I_{d})=\sum_{k\in[n]}x_{k}x_{k}^{*}I_{d}x_{k}x_{k}^{*}=a\sum_{k\in[n]}x_{k}x_{k}^{*}=0.$
Since $\\{x_{k}x_{k}^{*}\\}_{k\in[n]}\cup\\{I_{d}\\}$ spans
$\mathbb{F}_{q^{2}}^{d\times d}$, we may obtain a formula for $\Psi^{*}(A)$ by
extending linearly. To this end, denote
$\hat{A}:=A+\operatorname{tr}A\cdot I_{d}=A-\frac{\operatorname{tr}A}{d}\cdot
I_{d}\in\\{X\in\mathbb{F}_{q^{2}}^{d\times d}:\operatorname{tr}X=0\\}.$
Then
$\Psi^{*}(A)=\Psi^{*}(\hat{A}-\operatorname{tr}A\cdot
I_{d})=\Psi^{*}(\hat{A})-\operatorname{tr}A\cdot\Psi^{*}(I_{d})=-b\hat{A}=-b(A+\operatorname{tr}A\cdot
I_{d}).$
In particular, we have
$\Psi(e_{i}e_{j}^{*})=\Big{(}\Psi^{*}(e_{i}e_{j}^{*})\Big{)}^{*}=\Big{(}-b(e_{i}e_{j}^{*}+\operatorname{tr}(e_{i}e_{j}^{*})\cdot
I_{d})\Big{)}^{*}=-b(e_{j}e_{i}^{*}+\delta_{ij}\cdot I_{d}).$
Then Lemma 19 gives
$\displaystyle\sum_{k\in[n]}(x_{k}^{\otimes 2})(x_{k}^{\otimes 2})^{*}$
$\displaystyle=\sum_{i\in[d]}\sum_{j\in[d]}e_{i}e_{j}^{*}\otimes\Psi(e_{i}e_{j}^{*})$
$\displaystyle=-2b\sum_{i\in[d]}\sum_{j\in[d]}e_{i}e_{j}^{*}\otimes\frac{1}{2}\Big{(}e_{j}e_{i}^{*}+\delta_{ij}\cdot
I_{d}\Big{)}=-2b\cdot\Pi_{d}^{(2)}.$
Since $c_{2}:=-2b\neq-2a^{2}=0$ by assumption, it follows that
$\\{x_{k}\\}_{k\in[n]}$ is an $(a,c_{1},c_{2})$-projective $2$-design, as
desired. ∎
### 3.2 A construction for Gerzon equality
Theorem 18 allows one to easily identify projective $2$-designs over
$\mathbb{F}_{q^{2}}^{d}$. For example, [23] constructs a $(0,1,0)$-ETF of
$d^{2}$ vectors in $\mathbb{F}_{3^{2}}^{d}$ for every $d=2^{2\ell+1}$ with
$\ell\in\mathbb{N}$. Since $2^{2\ell+1}\equiv-1\bmod 3$, Theorem 18 implies
that each of these systems of vectors forms a $(0,0,1)$-projective $2$-design
for $\mathbb{F}_{3^{2}}^{d}$. The following result constructs additional
examples:
###### Theorem 20.
Select any prime $p$, positive integer $k$, and prime power $r$ such that $p$
divides $r-1$ and $r^{2}+r+1$ divides $p^{k}+1$. Put $q:=p^{k}$ and
$d:=r^{2}+r+1$. Let $D\subseteq\mathbb{Z}/d\mathbb{Z}$ denote the Singer
difference set, select a primitive element
$\alpha\in\mathbb{F}_{q^{2}}^{\times}$, put $\omega:=\alpha^{(q^{2}-1)/d}$,
and define translation and modulation operators by
$(Tf)(x):=f(x-1),\qquad(Mf)(x):=\omega^{x}\cdot f(x),\qquad
f\colon\mathbb{Z}/d\mathbb{Z}\to\mathbb{F}_{q^{2}}.$
Then $\\{M^{s}T^{t}\mathbf{1}_{D}\\}_{s,t\in\mathbb{Z}/d\mathbb{Z}}$ is a
$(2,1,2d)$-equiangular tight frame of $d^{2}$ vectors in
$\mathbb{F}_{q^{2}}^{d}$.
The ETF construction in Theorem 20 is a finite field analog of a biangular
Gabor frame that was suggested in [8, 25]. In the finite field setting, one
might view this as a Steiner ETF [18] in which a harmonic ETF [44, 43, 49, 14]
plays the role of a “flat” simplex. Empirically, there are many
$(p,k,r)\in\mathbb{N}^{3}$ that satisfy the constraints that $p$ is prime, $r$
is a prime power, $p$ divides $r-1$, and $r^{2}+r+1$ divides $p^{k}+1$. In
fact, there are infinitely many, conditioned on the Lenstra–Pomerance–Wagstaff
conjecture of the infinitude of Mersenne primes: whenever there exists a
Mersenne prime $r=2^{m}-1$, we may take $p=2$ and $k=3m$. We claim that the
ETF construction in Theorem 20 forms a projective $2$-design for
$\mathbb{F}_{q^{2}}^{d}$ whenever $p>3$. First, we have $a^{2}=4\neq 1=b$ and
$a=2\neq 0$, and so by Theorem 18, it suffices to verify that
$a^{2}-b=\frac{bc_{1}}{a}$. Indeed, $a^{2}-b=4-1=3$ and $\frac{bc_{1}}{a}=d$,
and since $p$ divides $r-1$ by assumption, we have $d=r^{2}+r+1\equiv
1+1+1=3\bmod p$, as desired. The following table lists the smallest dimensions
for which Theorem 20 offers a construction, with gray columns indicating
projective $2$-designs.
$d$ | 13 | 57 | 73 | 307 | 757 | 993 | 1723 | 1723 | 2257 | 2257 | 2451 | 3541 | 3541 | 5113
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
$p$ | 2 | 2 | 7 | 2 | 2 | 2 | 2 | 5 | 2 | 23 | 2 | 2 | 29 | 2
$k$ | 6 | 9 | 12 | 51 | 378 | 15 | 287 | 287 | 90 | 30 | 63 | 118 | 590 | 213
$r$ | 3 | 7 | 8 | 17 | 27 | 31 | 41 | 41 | 47 | 47 | 49 | 59 | 59 | 71
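As a quick sanity check (our illustration, with the first few $(p,k,r)$ triples transcribed from the table above), one can verify the divisibility hypotheses of Theorem 20 programmatically; the additional requirements that $p$ be prime and $r$ a prime power are checked by inspection:

```python
# (p, k, r) triples from the first few columns of the table above.
triples = [(2, 6, 3), (2, 9, 7), (7, 12, 8), (2, 51, 17), (2, 15, 31)]

for p, k, r in triples:
    d = r * r + r + 1
    assert (r - 1) % p == 0            # p divides r - 1
    assert (pow(p, k) + 1) % d == 0    # r^2 + r + 1 divides p^k + 1
```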
###### Proof of Theorem 20.
Proposition 6.1 in [23] gives that
$\\{M^{s}T^{t}\mathbf{1}_{D}\\}_{s,t\in\mathbb{Z}/d\mathbb{Z}}$ is an
$(a,da)$-NTF with
$a=\langle\mathbf{1}_{D},\mathbf{1}_{D}\rangle=|D|\equiv 2\bmod p,$
where the last step applies the fact that $|D|=r+1\equiv 2\bmod p$. It remains
to verify equiangularity with parameter $b=1$. To this end, we compute
$\langle M^{s}T^{t}\mathbf{1}_{D},M^{u}T^{v}\mathbf{1}_{D}\rangle^{q+1}$
for every $(s,t)\neq(u,v)$ in two cases.
Case I: $t=v$. Consider the discrete Fourier transform matrix
$\mathcal{F}=[\omega^{ij}]_{i,j\in\mathbb{Z}/d\mathbb{Z}}\in\mathbb{F}_{q^{2}}^{d\times
d}$, and observe that
$\\{M^{s}T^{t}\mathbf{1}_{D}\\}_{s\in\mathbb{Z}/d\mathbb{Z}}$ equals the
columns of $\operatorname{diag}(\mathbf{1}_{D+t})\mathcal{F}$, which in turn
is a zero-padded version of the $|D|\times d$ submatrix $\mathcal{F}_{D+t}$.
Since $D+t$ is a difference set with parameter $\lambda=1$, (the proof of)
Theorem 5.7 in [23] gives that $\mathcal{F}_{D+t}$ is a
$(|D|,|D|-\lambda,d)$-ETF. It follows that
$\langle
M^{s}T^{t}\mathbf{1}_{D},M^{u}T^{t}\mathbf{1}_{D}\rangle^{q+1}=|D|-\lambda\equiv
1\bmod p$
whenever $s\neq u$.
Case II: $t\neq v$. We exploit the well-known fact that the so-called
development
$\\{D+z:z\in G\\}$
of a difference set $D\subseteq G$ with parameter $\lambda$ gives a symmetric
block design in which every pair of blocks intersects in exactly $\lambda$
points (see Theorem 18.6 in [30]). Since the Singer difference set has
parameter $\lambda=1$, we have
$\displaystyle\langle
M^{s}T^{t}\mathbf{1}_{D},M^{u}T^{v}\mathbf{1}_{D}\rangle$
$\displaystyle=\sum_{x\in\mathbb{Z}/d\mathbb{Z}}(M^{s}T^{t}\mathbf{1}_{D})(x)^{q}(M^{u}T^{v}\mathbf{1}_{D})(x)$
$\displaystyle=\sum_{x\in\mathbb{Z}/d\mathbb{Z}}(\omega^{sx}\mathbf{1}_{D}(x-t))^{q}(\omega^{ux}\mathbf{1}_{D}(x-v))$
$\displaystyle=\sum_{x\in\mathbb{Z}/d\mathbb{Z}}\omega^{(qs+u)x}\cdot\mathbf{1}_{D}(x-t)\mathbf{1}_{D}(x-v)=\omega^{(qs+u)x_{0}},$
where $x_{0}$ is the unique member of the block intersection $(D+t)\cap(D+v)$.
Then
$\langle
M^{s}T^{t}\mathbf{1}_{D},M^{u}T^{v}\mathbf{1}_{D}\rangle^{q+1}=(\omega^{(qs+u)x_{0}})^{q+1}=1,$
as desired. ∎
## 4 The quaternionic setting
Consider the following generalization of Proposition 1(c) for
$\mathbb{F}\in\\{\mathbb{R},\mathbb{C},\mathbb{H}\\}$; see [36, 46]. Put
$m:=\frac{1}{2}\cdot[\mathbb{F}:\mathbb{R}]$ and $N=md$, and given
$A,B\in\mathbb{F}^{d\times d}$, define $\langle
A,B\rangle:=\operatorname{Re}\operatorname{tr}(A^{*}B)$. We say unit vectors
$\\{x_{k}\\}_{k\in[n]}$ in $\mathbb{F}^{d}$ form a projective $2$-design if
$\frac{1}{n^{2}}\sum_{k\in[n]}\sum_{\ell\in[n]}\langle
x_{k}x_{k}^{*},x_{\ell}x_{\ell}^{*}\rangle=\frac{m}{N},\qquad\frac{1}{n^{2}}\sum_{k\in[n]}\sum_{\ell\in[n]}\langle
x_{k}x_{k}^{*},x_{\ell}x_{\ell}^{*}\rangle^{2}=\frac{m(m+1)}{N(N+1)}.$
We say a projective $2$-design $\\{x_{k}\\}_{k\in[n]}$ for $\mathbb{F}^{d}$ is
tight if it is also equiangular, which occurs precisely when equality is
achieved in the lower bound $n\geq d+m(d^{2}-d)$ [3]. In this section, we use
tight projective $2$-designs in quaternionic spaces to form nice arrangements
of real subspaces. Given $r$-dimensional subspaces $\\{S_{k}\\}_{k\in[n]}$ of
a real vector space $V$, select an orthonormal basis $\\{x_{ki}\\}_{i\in[r]}$
for each $S_{k}$ and compute the cross-Gramians $G_{k\ell}:=[\langle
x_{ki},x_{\ell j}\rangle]_{i,j\in[r]}$. We say $\\{S_{k}\\}_{k\in[n]}$ is
equi-isoclinic if there exists $\alpha\geq 0$ such that
$G_{k\ell}^{*}G_{k\ell}=\alpha I_{r}$ for every $k,\ell\in[n]$ with
$k\neq\ell$. We say $\\{S_{k}\\}_{k\in[n]}$ forms a tight fusion frame if
$\frac{1}{n^{2}}\sum_{k\in[n]}\sum_{\ell\in[n]}\|G_{k\ell}\|_{F}^{2}=\frac{r^{2}}{\operatorname{dim}V}.$
An equi-isoclinic tight fusion frame is an optimal code in the Grassmannian
$\operatorname{Gr}(r,V)$ under the chordal distance, as it achieves equality
in the simplex bound [12]. It is also an optimal packing in terms of the
spectral distance, where we view each $S_{k}$ as a subset of projective space
and seek to maximize the minimum distance between these subsets of this metric
space [13]. One may rightly view such objects as analogs of equiangular tight
frames.
Since quaternion multiplication is noncommutative (e.g.,
$\mathrm{i}\mathrm{j}=-\mathrm{j}\mathrm{i}$), we start by carefully verifying
a few simple facts that may be unfamiliar to the reader:
###### Lemma 21.
$\mathbb{H}^{d\times d}$ is a real Hilbert space with inner product $\langle
A,B\rangle=\operatorname{Re}\operatorname{tr}(A^{*}B)$.
###### Proof.
First, given $u,v\in\mathbb{H}$, we observe that
$\operatorname{Re}(\overline{u}v)$ equals the dot product between the
corresponding real coordinate vectors:
$\displaystyle\operatorname{Re}((a-b\mathrm{i}-c\mathrm{j}-d\mathrm{k})(e+f\mathrm{i}+g\mathrm{j}+h\mathrm{k}))=ae+bf+cg+dh=(a,b,c,d)\cdot(e,f,g,h).$
Next, given $A\in\mathbb{H}^{d\times d}$, let
$\operatorname{vec}(A)\in\mathbb{R}^{4d^{2}}$ denote the vector of real
coordinates of the entries of $A$. Then the above observation gives
$\operatorname{Re}\operatorname{tr}(A^{*}B)=\operatorname{Re}\sum_{j\in[d]}(A^{*}B)_{jj}=\operatorname{Re}\sum_{j\in[d]}\sum_{i\in[d]}(A^{*})_{ji}B_{ij}=\sum_{i\in[d]}\sum_{j\in[d]}\operatorname{Re}\overline{A_{ij}}B_{ij}=\operatorname{vec}(A)\cdot\operatorname{vec}(B).$
The result follows. ∎
###### Lemma 22.
Suppose $A\in\mathbb{H}^{m\times n}$ and $B\in\mathbb{H}^{n\times m}$. Then
$\operatorname{Re}\operatorname{tr}(AB)=\operatorname{Re}\operatorname{tr}(BA)$.
###### Proof.
Consider the algebra homomorphism $f\colon\mathbb{H}\to\mathbb{C}^{2\times 2}$
defined by
$f(a+b\mathrm{i}+c\mathrm{j}+d\mathrm{k})=\left[\begin{array}[]{rr}a+b\mathrm{i}&c+d\mathrm{i}\\\
-c+d\mathrm{i}&a-b\mathrm{i}\end{array}\right].$
Observe that $\operatorname{tr}f(z)=2\operatorname{Re}z$. Given
$M\in\mathbb{H}^{n\times n}$, one may apply $f$ entrywise to obtain
$f(M)\in\mathbb{C}^{2n\times 2n}$, and then
$\operatorname{Re}\operatorname{tr}M=\frac{1}{2}\operatorname{tr}f(M)$.
Applying this to $AB$ and $BA$ gives
$\operatorname{Re}\operatorname{tr}(AB)=\tfrac{1}{2}\operatorname{tr}(f(AB))=\tfrac{1}{2}\operatorname{tr}(f(A)f(B))=\tfrac{1}{2}\operatorname{tr}(f(B)f(A))=\tfrac{1}{2}\operatorname{tr}(f(BA))=\operatorname{Re}\operatorname{tr}(BA),$
as claimed. ∎
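The representation $f$ above is easy to check numerically. The following sketch (our illustration, not part of the proof) verifies that $f$ is multiplicative and that $\operatorname{tr}f(z)=2\operatorname{Re}z$ on random quaternions, each stored as a tuple $(a,b,c,d)$ for $a+b\mathrm{i}+c\mathrm{j}+d\mathrm{k}$:

```python
import numpy as np

def f(q):
    """The 2x2 complex representation of a quaternion used in Lemma 22."""
    a, b, c, d = q
    return np.array([[a + 1j * b,  c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

def qmul(p, q):
    """Hamilton product of two quaternions (a, b, c, d) = a + bi + cj + dk."""
    a, b, c, d = p
    e, g, h, s = q
    return (a * e - b * g - c * h - d * s,
            a * g + b * e + c * s - d * h,
            a * h - b * s + c * e + d * g,
            a * s + b * h - c * g + d * e)

rng = np.random.default_rng(1)
p, q = tuple(rng.normal(size=4)), tuple(rng.normal(size=4))
assert np.allclose(f(qmul(p, q)), f(p) @ f(q))    # f is an algebra homomorphism
assert np.isclose(np.trace(f(q)).real, 2 * q[0])  # tr f(z) = 2 Re z
assert np.isclose(np.trace(f(q)).imag, 0.0)
```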
###### Lemma 23.
Suppose $A,B\in\mathbb{H}^{d\times d}$ satisfy $A^{*}=A$ and $B^{*}=-B$. Then
$\langle A,B\rangle=0$.
###### Proof.
Symmetry of the real inner product gives
$\langle A,B\rangle=\langle
B,A\rangle=\operatorname{Re}\operatorname{tr}(B^{*}A)=\operatorname{Re}\operatorname{tr}(-BA^{*})=-\operatorname{Re}\operatorname{tr}(A^{*}B)=-\langle
A,B\rangle.$
Rearrange to get the result. ∎
In fact, the real subspace of anti-Hermitian matrices is the orthogonal
complement of the real subspace of Hermitian matrices. This can be seen by
dimension counting: Hermitian matrices have dimension $d+4\binom{d}{2}$, while
anti-Hermitian matrices have dimension $3d+4\binom{d}{2}$, and the sum of
these is $4d^{2}$, i.e., the dimension of $\mathbb{H}^{d\times d}$.
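This dimension count can be checked mechanically; the following sketch (our illustration) confirms it for small $d$:

```python
from math import comb

# Real dimensions of the Hermitian and anti-Hermitian d x d quaternionic
# matrices: the diagonal contributes d (real) and 3d (imaginary) degrees of
# freedom respectively, and each of the C(d, 2) upper off-diagonal entries
# is a free quaternion (4 real parameters) in both cases.
for d in range(1, 10):
    herm = d + 4 * comb(d, 2)
    anti = 3 * d + 4 * comb(d, 2)
    assert herm + anti == 4 * d * d  # total real dimension of H^{d x d}
```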
Given $x\in\mathbb{H}^{d}\setminus\\{0\\}$, let $S(x)$ denote the
$3$-dimensional real subspace of $\mathbb{H}^{d\times d}$ defined by
$S(x):=\\{xzx^{*}:z\in\mathbb{H},~{}\operatorname{Re}z=0\\}.$
For each $A\in S(x)$, it holds that
$A^{*}=(xzx^{*})^{*}=x\overline{z}x^{*}=-xzx^{*}=-A.$
As such, $S(x)$ is actually a subspace of the anti-Hermitian matrices. Observe
that for each $u,v\in\\{\mathrm{i},\mathrm{j},\mathrm{k}\\}$, it holds that
$\displaystyle\langle xux^{*},xvx^{*}\rangle$
$\displaystyle=\operatorname{Re}\operatorname{tr}((xux^{*})^{*}xvx^{*})$
$\displaystyle=\operatorname{Re}\operatorname{tr}(x\overline{u}x^{*}xvx^{*})$
$\displaystyle=\operatorname{Re}\operatorname{tr}(\overline{u}x^{*}xvx^{*}x)=\|x\|^{4}\cdot\operatorname{Re}(\overline{u}v)=\left\\{\begin{array}[]{cl}\|x\|^{4}&\text{if
}u=v\\\ 0&\text{otherwise.}\end{array}\right.$
Thus, $\\{x\mathrm{i}x^{*},x\mathrm{j}x^{*},x\mathrm{k}x^{*}\\}$ is an
orthogonal basis for $S(x)$. With this, we prove the following:
###### Theorem 24.
Consider unit vectors $\\{x_{k}\\}_{k\in[n]}$ in $\mathbb{H}^{d}$.
* (a)
If $\\{x_{k}\\}_{k\in[n]}$ is equiangular, then $\\{S(x_{k})\\}_{k\in[n]}$ is
equi-isoclinic.
* (b)
If $\\{x_{k}\\}_{k\in[n]}$ is a projective $2$-design, then
$\\{S(x_{k})\\}_{k\in[n]}$ is tight in anti-Hermitian space.
###### Proof.
Given unit vectors $x,y\in\mathbb{H}^{d}$, we are interested in the cross-
Gramian $G_{xy}$ between
$\\{x\mathrm{i}x^{*},x\mathrm{j}x^{*},x\mathrm{k}x^{*}\\}$ and
$\\{y\mathrm{i}y^{*},y\mathrm{j}y^{*},y\mathrm{k}y^{*}\\}$. The entry indexed
by $(u,v)\in\\{\mathrm{i},\mathrm{j},\mathrm{k}\\}^{2}$ is given by
$\displaystyle\langle
xux^{*},yvy^{*}\rangle=\operatorname{Re}\operatorname{tr}((xux^{*})^{*}yvy^{*})$
$\displaystyle=\operatorname{Re}\operatorname{tr}(x\overline{u}x^{*}yvy^{*})$
$\displaystyle=\operatorname{Re}\operatorname{tr}(\overline{u}x^{*}yvy^{*}x)=\operatorname{Re}(\overline{u}x^{*}yvy^{*}x).$
(14)
If $x^{*}y=0$, then $G_{xy}=0$. Otherwise, define $z:=\frac{x^{*}y}{|x^{*}y|}$
and continue (14):
$\langle
xux^{*},yvy^{*}\rangle=\operatorname{Re}(\overline{u}x^{*}yvy^{*}x)=|x^{*}y|^{2}\cdot\operatorname{Re}(\overline{u}zvz^{-1}).$
It follows that $G_{xy}$ equals $|x^{*}y|^{2}$ times a matrix representation
of the special orthogonal map $q\mapsto zqz^{-1}$ over imaginary
$q\in\mathbb{H}$. This implies (a).
For (b), we apply the facts that $G_{xy}=|x^{*}y|^{2}\cdot Q$ for some
$Q\in\operatorname{SO}(3)$ and
$\displaystyle|x^{*}y|^{2}=\overline{x^{*}y}x^{*}y=y^{*}xx^{*}y=\operatorname{Re}\operatorname{tr}(y^{*}xx^{*}y)=\operatorname{Re}\operatorname{tr}(xx^{*}yy^{*})=\langle
xx^{*},yy^{*}\rangle$
in order to compute the frame potential of $\\{S(x_{k})\\}_{k\in[n]}$:
$\displaystyle\frac{1}{n^{2}}\sum_{k\in[n]}\sum_{\ell\in[n]}\|G_{x_{k}x_{\ell}}\|_{F}^{2}$
$\displaystyle=\frac{1}{n^{2}}\sum_{k\in[n]}\sum_{\ell\in[n]}3|x_{k}^{*}x_{\ell}|^{4}$
$\displaystyle=\frac{3}{n^{2}}\sum_{k\in[n]}\sum_{\ell\in[n]}\langle
x_{k}x_{k}^{*},x_{\ell}x_{\ell}^{*}\rangle^{2}=3\cdot\frac{m(m+1)}{N(N+1)}=\frac{9}{d(2d+1)}.$
It follows that $\\{S(x_{k})\\}_{k\in[n]}$ forms a tight fusion frame in the
$d(2d+1)$-dimensional real vector space of $d\times d$ quaternionic anti-
Hermitian matrices. ∎
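The final arithmetic step, $3\cdot\frac{m(m+1)}{N(N+1)}=\frac{9}{d(2d+1)}$ with $m=2$ and $N=2d$, can be confirmed exactly (our illustration):

```python
from fractions import Fraction

m = 2  # m = [H : R] / 2 in the quaternionic setting
for d in range(1, 20):
    N = m * d
    potential = 3 * Fraction(m * (m + 1), N * (N + 1))
    assert potential == Fraction(9, d * (2 * d + 1))
```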
Theorem 24 implies that every tight projective $2$-design for $\mathbb{H}^{d}$
corresponds to an equi-isoclinic tight fusion frame of $d(2d-1)$ subspaces of
$\mathbb{R}^{d(2d+1)}$ of dimension $3$. To date, such designs are only known
to exist for $d\in\\{2,3\\}$. First, $\mathbb{HP}^{1}$ is isometric to
$S^{4}$, and so the six vertices of a $5$-dimensional regular simplex easily
deliver a tight projective $2$-design for $\mathbb{H}^{2}$; this in turn
determines $6$ subspaces of $\mathbb{R}^{10}$ of dimension $3$. The $d=3$ case
is resolved by Theorem 4.12 in [11], which uses a variant of the
Newton–Kantorovich theorem to obtain a computer-assisted proof of the
existence of a $15$-point simplex in $\mathbb{HP}^{2}$; this determines $15$
subspaces of $\mathbb{R}^{21}$ of dimension $3$. The authors are not aware of
any other constructions of equi-isoclinic tight fusion frames with these
parameters. Also, it is open whether tight projective $2$-designs exist for
$\mathbb{H}^{d}$ with $d>3$.
## Acknowledgments
The first part of this paper was inspired by a beautiful talk [39] given by
Vern Paulsen at the Codes and Expansions online seminar. Much of this work was
conducted during the SOFT 2020: Summer of Frame Theory virtual workshop. DGM
was partially supported by AFOSR FA9550-18-1-0107 and NSF DMS 1829955.
## References
* [1] M. Appleby, I. Bengtsson, S. Flammia, D. Goyeneche, Tight frames, Hadamard matrices and Zauner’s conjecture, J. Phys. A 52 (2019) 295301.
* [2] A. S. Bandeira, M. Fickus, D. G. Mixon, P. Wong, The road to deterministic matrices with the restricted isometry property, J. Fourier Anal. Appl. 19 (2013) 1123–1149.
* [3] E. Bannai, S. G. Hoggar, On tight $t$-designs in compact symmetric spaces of rank one, Proc. Japan Acad., 61, Ser. A (1985) 78–82.
* [4] E. Bannai, A. Munemasa, B. Venkov, The nonexistence of certain tight spherical designs, St. Petersburg Math. J. 16 (2005) 609–625.
* [5] J. J. Benedetto, M. Fickus, Finite normalized tight frames, Adv. Comput. Math. 18 (2003) 357–385.
* [6] B. G. Bodmann, J. Haas, Achieving the orthoplex bound and constructing weighted complex projective $2$-designs with Singer sets, Linear Algebra Appl. 511 (2016) 54–71.
* [7] B. G. Bodmann, E. J. King, Optimal arrangements of classical and quantum states with limited purity, J. London Math. Soc. 101 (2020) 393–431.
* [8] I. Bojarovska, V. Paternostro, Gabor fusion frames generated by difference sets, Wavelets and Sparsity XVI 9597 (2015) 95970D.
* [9] P. G. Casazza, G. Kutyniok, eds., Finite frames: Theory and applications, Springer, 2012.
* [10] C. M. Caves, Symmetric informationally complete POVMs, UNM Information Physics Group Internal Report, 1999, http://info.phys.unm.edu/~caves/reports/infopovm.ps.
* [11] H. Cohn, A. Kumar, G. Minton, Optimal simplices and codes in projective spaces, Geom. Topol. 20 (2016) 1289–1357.
* [12] J. H. Conway, R. H. Hardin, N. J. A. Sloane, Packing lines, planes, etc.: Packings in Grassmannian spaces, Exp. Math. 5 (1996) 139–159.
* [13] I. S. Dhillon, R. W. Heath, T. Strohmer, J. A. Tropp, Constructing packings in Grassmannian manifolds via alternating projection, Exp. Math. 17 (2008) 9–35.
* [14] C. Ding, T. Feng, A generic construction of complex codebooks meeting the Welch bound, IEEE Trans. Inform. Theory 53 (2007) 4245–4250.
* [15] B. Et-Taoui, Quaternionic equiangular lines, Adv. Geom. 20 (2020) 273–284.
* [16] M. Fickus, J. Jasper, D. G. Mixon, J. Peterson, Tremain equiangular tight frames, J. Combin. Theory A 153 (2018) 54–66.
* [17] M. Fickus, D. G. Mixon, Tables of the existence of equiangular tight frames, arXiv:1504.00253.
* [18] M. Fickus, D. G. Mixon, J. C. Tremain, Steiner equiangular tight frames, Linear Algebra Appl. 436 (2012) 1014–1027.
* [19] S. T. Flammia, Exact SIC fiducial vectors, http://www.physics.usyd.edu.au/~sflammia/SIC/.
* [20] C. A. Fuchs, M. C. Hoang, B. C. Stacey, The SIC question: History and state of play, Axioms 6 (2017) 21.
* [21] N. I. Gillespie, Equiangular lines, incoherent sets and quasi-symmetric designs, arXiv:1809.05739.
* [22] M. Grassl, SIC-POVMs, http://sicpovm.markus-grassl.de/.
* [23] G. R. W. Greaves, J. W. Iverson, J. Jasper, D. G. Mixon, Frames over finite fields: Basic theory and equiangular lines in unitary geometry, arXiv:2012.12977.
* [24] G. R. W. Greaves, J. W. Iverson, J. Jasper, D. G. Mixon, Frames over finite fields: Equiangular lines in orthogonal geometry, arXiv:2012.13642.
* [25] J. I. Haas, J. Cahill, J. Tremain, P. G. Casazza, Constructions of biangular tight frames and their relationships with equiangular tight frames, arXiv:1703.01786.
* [26] J. W. Iverson, J. Jasper, D. G. Mixon, Optimal line packings from finite group actions, Forum Math. Sigma 8 (2020).
* [27] J. W. Iverson, J. Jasper, D. G. Mixon, Optimal line packings from nonabelian groups, Discrete Comput. Geom. 63 (2020) 731–763.
* [28] J. W. Iverson, D. G. Mixon, Doubly transitive lines I: Higman pairs and roux, arXiv:1806.09037.
* [29] J. W. Iverson, D. G. Mixon, Doubly transitive lines II: Almost simple symmetries, arXiv:1905.06859.
* [30] D. Jungnickel, A. Pott, K. W. Smith, Difference Sets, Handbook of Combinatorial Designs, Chapman and Hall/CRC, 2006, pp. 445–461.
* [31] KCIK Award on Quantum Information of Polish National Quantum Information Centre (KCIK), https://kcik.ug.edu.pl/post.php?id=1981.
* [32] G. S. Kopp, SIC-POVMs and the Stark conjectures, Int. Math. Res. Not. (2019).
* [33] P. W. H. Lemmens, J. J. Seidel, Equiangular lines, J. Algebra 24 (1973) 494–512.
* [34] A. A. Makhnev, On the nonexistence of strongly regular graphs with the parameters $(486,165,36,66)$, Ukr. Math. J. 54 (2002) 1137–1146.
* [35] D. G. Mixon, C. J. Quinn, N. Kiyavash, M. Fickus, Fingerprinting with equiangular tight frames, IEEE Trans. Inform. Theory 59 (2013) 1855–1865.
* [36] A. Munemasa, Spherical designs, Handbook of Combinatorial Designs (2007) 637–643.
* [37] G. Nebe, B. Venkov, On tight spherical designs, St. Petersburg Math. J. 24 (2013) 485–491.
* [38] S. K. Pandey, V. I. Paulsen, J. Prakash, M. Rahaman, Entanglement breaking rank and the existence of SIC POVMs, J. Math. Phys. 61 (2020) 042203.
* [39] V. Paulsen, Entanglement Breaking Maps and Zauner’s Conjecture, Codes and Expansions online seminar, https://www.youtube.com/watch?v=VpVwb_i7s0I.
* [40] A. Roy, A. J. Scott, Weighted complex projective $2$-designs from bases: Optimal state determination by orthogonal measurements, J. Math. Phys. 48 (2007) 072110.
* [41] A. J. Scott, Tight informationally complete quantum measurements, J. Phys. A 39 (2006) 13507.
* [42] P. D. Seymour, T. Zaslavsky, Averaging sets: A generalization of mean values and spherical designs, Adv. Math. 52 (1984) 213–240.
* [43] T. Strohmer, R. W. Heath, Grassmannian frames with applications to coding and communication, Appl. Comput. Harmon. Anal. 14 (2003) 257–275.
* [44] R. J. Turyn, Character sums and difference sets, Pacific J. Math. 15 (1965) 319–346.
* [45] S. Waldron, A Sharpening of the Welch Bounds and the Existence of Real and Complex Spherical $t$-Designs, IEEE Trans. Inform. Theory 63 (2017) 6849–6857.
* [46] S. Waldron, A variational characterisation of projective spherical designs over the quaternions, arXiv:2011.08439.
* [47] J. Watrous, The Theory of Quantum Information, Cambridge University Press, doi:10.1017/9781316848142.
* [48] L. Welch, Lower bounds on the maximum cross correlation of signals, IEEE Trans. Inform. Theory 20 (1974) 397–399.
* [49] P. Xia, S. Zhou, G. B. Giannakis, Achieving the Welch bound with difference sets, IEEE Trans. Inform. Theory 51 (2005) 1900–1907.
* [50] G. Zauner, Quantum designs—Foundations of a non-commutative theory of designs, Ph.D. thesis, U. Vienna, 1999.
# An Overview of Machine Learning Techniques for Radiowave Propagation
Modeling
Aristeidis Seretis, Costas D. Sarris
(December 2019)
###### Abstract
We give an overview of recent developments in the modeling of radiowave
propagation, based on machine learning algorithms. We identify the input and
output specification and the architecture of the model as the main challenges
associated with machine learning-driven propagation models. Relevant papers
are discussed and categorized based on their approach to each of these
challenges. Emphasis is placed on the prospects and open problems in
this promising and rapidly evolving area.
###### Index Terms:
Artificial Intelligence, Machine learning, Neural Networks, Radiowave
Propagation, Propagation Losses
## I Introduction
For the intelligent planning and efficient management of any wireless
communication system, channel propagation models are indispensable [1]. As a
growing number of wireless services with high performance demands is offered,
the need for new propagation models becomes more urgent. Safety criticality, high throughput, and low latency are just some of the characteristics required of current and future wireless systems.
Over the years, various empirical propagation models, such as Okumura-Hata or
Walfish-Bertoni among others, have been created [2, 3]. Empirical models are
measurement-driven, formulated after fitting measurements taken at a specific
site. These models are computationally efficient, yet they fail to capture the
full spectrum of complex wave effects that often determine the performance of
a wireless link.
On the other hand, deterministic models have been increasingly used in recent years. Such models include methods based on solving Maxwell’s equations, such
as integral equation [4] and finite difference [5] methods. Moreover,
approximate methods, such as ray-tracing (RT) for indoor and urban scenarios
[6], and the vector parabolic equation (VPE) method for terrestrial
propagation and propagation in tunnels [7], have been popular deterministic
techniques. Deterministic models are site-specific. Therefore, they can
provide reliable predictions for a given environment. Nevertheless, despite
recent advances in processing power, their computational demands are still
considered high.
Machine learning (ML)-driven propagation models are promising tools for
resolving the standard dichotomy between accuracy and efficiency of
propagation models. They can be trained offline, by either measured or
synthetic (simulated) data. Moreover, their highly non-linear nature makes
them promising candidates for predicting propagation parameters such as
multipath fading. Finally, they can be made either site-specific or general-
purpose, something that gives them great flexibility.
Given the growing interest in ML techniques for propagation modeling, we
present an overview of various relevant papers. We discuss what an ML-driven
propagation model is and how it can be created. We also focus on explaining
how a propagation scenario can influence various decisions regarding the ML
propagation model.
The paper is organized as follows. First, in Section II, the three main
building blocks of any ML radio propagation model are introduced. These are
the input to the ML model, the model itself, and its output. Then, the
challenges associated with each one of them, as dealt with by various ML-based
propagation modeling papers, are discussed. The key ideas drawn from these
papers are presented in the next three sections. Section III identifies
several ways to specify the input to the ML model. Section IV highlights key
points regarding the various ML models that have been used for propagation
modeling, while Section V presents the types of output data that have been
derived through these models. Section VI presents the main conclusions of the
paper.
## II ML propagation models
### II-A ML propagation models: goal
Figure 1: Flowchart of an ML-driven propagation model, along with its main challenges.
Generally, ML problems can be classified into supervised and unsupervised.
Supervised problems are associated with data pairs of ($\boldsymbol{x}$,
$\boldsymbol{y}$), where the input $\boldsymbol{x}$ to the ML model is mapped
to a specific output $\boldsymbol{y}$. On the other hand, in unsupervised
problems, only the input $\boldsymbol{x}$ is known. For supervised problems,
the main goal of the ML model is to learn an unknown function $f$, mapping an
input space $V_{x}$ to a target space $V_{y}$:
$f:V_{x}\xrightarrow{}V_{y}$ (1)
where $V_{x}$ is the set of all possible input vectors $\boldsymbol{x}$ and
$V_{y}$ is the set of all possible output vectors $\boldsymbol{y}$,
accordingly. The ML model, however, does not have access to the whole set of
$V_{x}$ and $V_{y}$, but attempts to approximate $f$ with a function $g$ that
is computed from the given data pairs of $(\boldsymbol{x},\boldsymbol{y})$:
$g:\boldsymbol{x}\xrightarrow{}\boldsymbol{y}$ (2)
The iterative process of computing $g$ is called training. It revolves around
minimizing the in-sample error between the ML model’s predictions
$g(\boldsymbol{x})$ and the true target values $\boldsymbol{y}$, given a cost
function $L$, over the parameters of the ML model $\boldsymbol{\psi}$:
$\underset{\boldsymbol{\psi}}{\text{min}}\,L(\mathcal{E}_{\text{in}})$ (3)
At the end of the training procedure, the model has learned parameters
$\hat{\boldsymbol{\psi}}$, so that the final output of the model is:
$\hat{\boldsymbol{y}}=g(\boldsymbol{x}|\hat{\boldsymbol{\psi}})$ (4)
For the unsupervised problems, the purpose of the ML model is to find
underlying patterns in the input data, e.g. grouping them into classes. Given
these definitions, computing various propagation parameters, such as the path
loss (PL) or the received signal strength (RSS) at a location, from measured
or synthetic data, is a supervised problem.
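As a minimal, hypothetical illustration of this supervised setting (not taken from any of the surveyed papers), one can fit a log-distance path-loss model to noisy synthetic data by least squares; the learned coefficients play the role of $\hat{\boldsymbol{\psi}}$ and the fitted line the role of $g$:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "measurements": PL(d) = PL0 + 10*n*log10(d) + noise, with an
# assumed ground truth of PL0 = 40 dB and PLE n = 3.2 (illustrative values).
dist = rng.uniform(1.0, 100.0, size=200)          # Tx-Rx distances
y = 40.0 + 10 * 3.2 * np.log10(dist) + rng.normal(0.0, 2.0, size=dist.size)

# Least-squares fit of g(x | psi) = psi_0 + psi_1 * 10*log10(d).
X = np.column_stack([np.ones_like(dist), 10 * np.log10(dist)])
pl0_hat, n_hat = np.linalg.lstsq(X, y, rcond=None)[0]
```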
One of the most important attributes of any ML model is its ability to
generalize to similar problems, rather than using its parameters to memorize a
specific one. An ML model is expected to not only exhibit low in-sample error,
but small out-of sample error as well, computed on data not used during its
training. In the context of PL prediction, an ML propagation model should be
sufficiently accurate when tested on data collected at different environments
or taken at different positions than the ones used to train it. Since $f$ is
unknown, $g$ is only an approximation of $f$ computed on a dataset that may
also contain noisy samples. Thus, it can be proved that [8]:
$\mathcal{E}_{\text{out}}(g)\leq\mathcal{E}_{\text{in}}(g)+\Omega(g)$ (5)
where $\Omega(g)$ is a complexity penalty term associated with the final model
$g$ that has been chosen as part of the training process. Eq. (5) implies that
for the ML model to have good generalization abilities, the in-sample error
has to be made as small as possible, while also keeping the model complexity
to a minimum. Simple models may not be capable of achieving a small enough
$\mathcal{E}_{\text{in}}$, underfitting the available data. Overly complex
models can potentially achieve a zero $\mathcal{E}_{\text{in}}$ at the cost of
a large complexity penalty term, overfitting the data. This is known as the
bias-variance trade-off [8]. Finding a balance between minimizing
$\mathcal{E}_{\text{in}}$ and restricting $\Omega(g)$ is accomplished through
the evaluation of different models on a dataset that is not used during
training. Consequently, it is common practice to create three different
datasets for a specific task: the training, the validation, and the test sets
[8]. The training set is used to train the learning parameters
$\boldsymbol{\psi}$ of a network, such as the weights $\boldsymbol{w}$ and
biases $\boldsymbol{b}$ in an artificial neural network (ANN). The
validation set is used for model selection as well as for ensuring that the
network is not overfitting during training. Finally, the test set is used to
evaluate the chosen ML model, after it is trained.
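The three-way split can be sketched as follows (the split proportions are illustrative assumptions, not prescribed by the papers surveyed):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000
idx = rng.permutation(n_samples)  # shuffle before splitting

n_train, n_val = int(0.7 * n_samples), int(0.15 * n_samples)
train_idx = idx[:n_train]                  # learn weights w and biases b
val_idx = idx[n_train:n_train + n_val]     # model selection / overfitting check
test_idx = idx[n_train + n_val:]           # final evaluation only
```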
### II-B ML propagation models: challenges
The general flowchart of an ML-based propagation model can be seen in Fig. 1.
By inspection of the diagram, the three building blocks of any ML propagation
model, along with their main challenges, can be identified.
#### II-B1 Input
The input to the ML model should contain features that are relevant to the
propagation problem and representative of the relation between
$\boldsymbol{x}$ and $\boldsymbol{y}$. It also has to be compact to avoid long training times. The input data can be measured, creating a direct connection between the trained model and the observed reality, or synthetic, generated by site-specific or empirical models (RT, VPE, PL exponent (PLE) models, etc.).
#### II-B2 ML model
The choice of the ML model defines the type of function $g$ that we seek to
learn. The function is often non-linear, since that gives the model additional
degrees of freedom to fit the data. There are many available ML models, from
deep ANNs [9] to powerful implementations of regression decision trees and
support vector machines (SVMs) [10]. The hyperparameters of the ML model have
to also be carefully chosen [11]. Those include parameters that strongly
affect the model’s performance, although they are pre-fixed rather than
trained.
#### II-B3 Output
Finally, the output specification of the ML model has to contain useful
information about the propagation characteristics of the communication
channel. The output can be a scalar quantity $y$, such as the PLE of the
communication channel, or a vector $\boldsymbol{y}$ consisting of complex
electromagnetic field values, at one or more receiver points. The output can
also represent a probabilistic PL model. Finally, the predictions of the ML
model have to exhibit small $\mathcal{E}_{\text{out}}$.
In the following, we group the papers under review into three categories based
on their approach to the three challenges we identified (input, ML model and
output). Each category is further divided into sub-categories to better reflect
the diversity of the work that has been conducted in the area. We also present
a brief, yet representative case study of how to create an ML propagation
model in the Appendix.
## III Input specification for ML models
The general flowchart for generating the input to the ML model can be seen in
Fig. 2. Input features specify the geometry (topographic information of an
area or indoor floorplan, position of antennas) and electromagnetic properties
of a communication channel (pattern/polarization of antennas,
permittivity/conductivity of various surfaces, frequency of operation). The
input features have to be pre-processed before they are used by the ML model.
That processing step may include feature scaling and normalization or
dimensionality reduction techniques, among others [10]. After processing the
input data, input vectors $\boldsymbol{x}$ are generated. Depending on how the
target values $\boldsymbol{y}$ are created, i.e. via measurements or through a
model, the input vectors may be different, hence the difference in notation
($\boldsymbol{x_{1}}$ and $\boldsymbol{x_{2}}$, respectively). Either way,
measured or generated $\boldsymbol{y}$ vectors will be used together with
$\boldsymbol{x_{1}}$ or $\boldsymbol{x_{2}}$ to form the training pairs. After
training, the ML model is able to output its prediction $\boldsymbol{\hat{y}}$
for any new input vector $\boldsymbol{x}$.
Figure 2: The flowchart of the input specification of an ML model.
### III-A Modeling environment
Over the past years, many papers have used an ML approach for determining
various large-scale propagation characteristics, such as path gain (PG) or
PLE, for a variety of diverse communication environments. There have been
papers focusing on urban environments, such as [12, 13], rural, such as [14,
15], or even a mix of different outdoor environments, such as [16]. Special
environments such as roads, mines and subway tunnels have also been considered
[17, 18, 19].
#### III-A1 Environmental and topographical features
The propagation environment plays an important role and can influence many
design choices for the ML propagation model, such as what input features to
use. As an example, parameters such as the number and width of the streets,
the height of the buildings, the building separation and the orientation of
the streets are often used in urban environments [12, 20]. In forested areas,
some input features may relate to the vegetation and canopy in the environment
[21]. For propagation over irregular terrain, the path profile can be used as
an input to the ML model [22]. A path profile is created by tracing the line
connecting the receiver and the transmitter, sampling the elevation of the
ground at fixed intervals. This is done to account for the morphological
variations of the ground. It can also be used as an input in urban scenarios,
where there may be numerous obstacles obstructing line of sight (LOS) [23].
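The path-profile construction described above can be sketched as follows. The `elevation` callback and the sampling count are hypothetical stand-ins for real terrain data.

```python
import numpy as np

def path_profile(elevation, tx, rx, n_samples=64):
    """Sample terrain elevation at fixed intervals along the straight
    line from transmitter to receiver (a simple path profile).
    `elevation` maps (x, y) -> ground height; tx, rx are (x, y) points.
    All names here are illustrative assumptions."""
    tx, rx = np.asarray(tx, float), np.asarray(rx, float)
    t = np.linspace(0.0, 1.0, n_samples)           # fractions along the path
    points = tx[None, :] + t[:, None] * (rx - tx)  # (n_samples, 2) coordinates
    return np.array([elevation(px, py) for px, py in points])
```

The resulting vector of sampled heights is what would be fed (possibly after dimensionality reduction) as an input feature vector to the ML model.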
#### III-A2 Propagation features
The input to the ML model can also take into account the diverse propagation
mechanisms present in an environment. A common trend among papers modeling
propagation in urban areas is to differentiate between LOS and non-LOS (NLOS)
cases [12, 20]. For NLOS cases, the authors often use an expanded input set to
account for their higher complexity. This differentiation has been shown to
improve the accuracy of the model [20]. Moreover, several papers on urban
propagation also include input features that account for diffraction losses
[24, 25, 26]. Diffraction is more pronounced in such environments. Hence, its
inclusion generally improves the accuracy of the ML model, irrespective of its
type. In [25], the authors investigated the influence diffraction losses can
have on the accuracy of their ML propagation model (an ANN). They found that the
ANN that accounted for diffraction losses in the city of Paris was more
accurate than one that did not. Similar findings were also reported in [20].
### III-B Input features
#### III-B1 Type of input features
For most propagation modeling cases, input features take continuous, real
values, such as the frequency of operation or the distance between transmitter
and receiver. Input features can also take discrete or even binary values. For
example, the $j$-th input feature $x^{j}$ of input vector $\boldsymbol{x}$ may
be binary, where $x^{j}\in\\{0,1\\}$, denoting the presence or absence of an LOS
component. Additionally, there may be an input feature for classifying the
type of environment in the vicinity of the receiver [12], [27], or the terrain
complexity [21]. For example, we can have $x^{j}\in\\{0,1,...,M-1\\}$, where $M$
represents the number of different types/classes of the feature. The input to
the ML model can also be visual, as in [28, 29], where the authors utilized
satellite images as part of their training data.
When no correlation between different samples is assumed, training samples can
be used as individual inputs to the ML model. When there is dependence among
different samples, they can also be passed on to the ML model as sequences of
input data. The length of the sequence is a hyperparameter that has to be
tuned accordingly. If it is set too small for a given problem, the network may
not fully exploit the correlation between different samples. On the contrary,
if the length is set too high, distant samples within a sequence may be uncorrelated.
That can lead to unnecessary computations or to a sub-optimal ML model. For
example, in [30], the authors found that using a sequence of 200 RSS samples
collected at different timestamps gave better results than using a larger or
smaller number of them. Similar findings were reported in [31, 32, 33].
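A minimal sketch of how such input sequences are typically assembled from a sample stream; the helper name and the sliding-window scheme are assumptions, not taken from the cited papers.

```python
import numpy as np

def make_sequences(samples, seq_len):
    """Turn a stream of consecutive samples (e.g. RSS readings over time)
    into overlapping input sequences of length `seq_len`, the form in
    which dependent samples can be fed to a sequence model."""
    samples = np.asarray(samples)
    if len(samples) < seq_len:
        return np.empty((0, seq_len))
    # One sequence starting at every valid offset (sliding window).
    return np.stack([samples[i:i + seq_len]
                     for i in range(len(samples) - seq_len + 1)])
```

The sequence length `seq_len` is exactly the hyperparameter discussed above (e.g. the 200 RSS samples of [30]); it would be tuned on the validation set.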
#### III-B2 Number of input features
Expanding the input information given to the ML model by increasing the number
of input features, when uncorrelated, generally improves the model
predictions. In [34], the authors found that their ML model predictions
improved, when the number of inputs was increased. In [35], the authors used
an RT solver to simulate propagation in an urban environment. They provided
the network with global information with regards to the height of the building
at the center of each cell, as well as the transmitter and receiver
coordinates. They also provided local information to the ANN, i.e. the same
type of input data as the global one, but using only 8 building heights in the
proximity of the receiver. The model that used global and local information
was more accurate than the one that used only local information.
#### III-B3 Dimensionality reduction
When the number of input parameters is increased disproportionately with
respect to the underlying complexity of the problem, the computational
performance of the network is compromised. For these cases, dimensionality
reduction techniques can be helpful [31]. For example, in [23], the authors
discretized the path between the transmitter and the receiver. Each one of the
discrete segments was assigned input variables describing the main obstacle
present there. Two dimensionality reduction techniques were used to reduce the
input space: principal component analysis (PCA) and a nonlinear PCA (nPCA).
The authors found that reducing the input representation helped considerably.
In cases where a path profile is generated, PCA is readily used to condense
the high-dimensional input space into a lower-dimensional one. A similar procedure
was followed in a number of other papers, such as [27], where the authors used
PCA to convert 9 input features describing the surrounding environment into 4
principal components, and [36], where the authors mapped 4 input features to
one. Dimensionality reduction techniques are also good candidates for reducing
correlation among the input parameters. Even though some ML models, like ANNs,
are highly immune to redundant information, using more inputs than needed
affects the computational performance of the model. For example, [27] and [36]
showed that dimensionality reduction accounted for up to 30% savings in
training time, for comparable model accuracy. Reducing input dimensionality
can also be achieved without explicitly using PCA techniques. In [28], the
authors converted the colored input images of a propagation area into grey-
scale, thus decreasing the number of color channels from 3 to 1.
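The 9-to-4 feature reduction mentioned above can be sketched with scikit-learn's PCA. The data here are synthetic and only illustrate the mechanics, not any cited dataset.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 200 samples with 9 correlated environment features
# that really live on a 4-dimensional subspace (mirroring the 9-to-4
# reduction cited above; all numbers are illustrative).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 4))            # true low-dimensional factors
mixing = rng.normal(size=(4, 9))              # mixes them into 9 features
X = latent @ mixing + 0.01 * rng.normal(size=(200, 9))

pca = PCA(n_components=4)
X_reduced = pca.fit_transform(X)              # condensed input, shape (200, 4)
```

Because the 9 raw features are linear mixtures of 4 factors, the 4 principal components retain almost all of the variance, which is what makes the reduced input a near-lossless substitute for training.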
#### III-B4 Impact of input features
It is often necessary to choose the input parameters by trying different
options. In [37], the authors tried different numbers of input features. They
saw bigger improvements in the accuracy of their radial basis function
(RBF)-ANN, by adding street orientation as an input feature, compared to
adding the difference in height between the base station and the buildings,
for urban and suburban cases alike. In [27], the authors saw a noticeable
difference in the accuracy of their ANN, when they included environmental
features as part of their input. These environmental features corresponded to
different elements of a suburban terrain, such as roads, tunnels and
buildings. The length of the straight line connecting the receiver and the
transmitter within each of these was used as an input feature. In [38], the
authors found that even though global input information about an RT-generated
urban grid was more important than local one, they could replace it with a
reduced input representation that led to more accurate predictions. That input
representation consisted of the path profile between the receiver and the
transmitter, as well as local information about the receiver. Uncorrelated
input features have a bigger impact on the performance of the ML model. In
[26], the authors achieved no substantial improvement in their ANN model of an
urban scenario, when they added the signal strength computed with a knife-edge
diffraction model. That extra input feature though was highly correlated with
one of the initial inputs to the ANN, that of the diffraction loss computed by
the same method.
### III-C Training data
#### III-C1 Size of training dataset
Increasing the volume of training data is always helpful, as long as the ML
model is not in the overfitting regime. That is because the model can explore
a bigger space of $V_{x}$ as part of its training. Hence, its predictions can
be more reliable. This has been observed in several papers [26, 39]. The new
training data have to be as representative as possible of the propagation
problem. For example, in [40], the authors used two different sized training
sets. The smaller dataset contained a small number of cases where the signal
reached the receiver after reflecting off from buildings’ walls. The bigger
training set gave more accurate predictions, not only because it contained
more training data in general, but also because it included a wider collection
of multipath propagation cases. We should note that the ML model has to be
complex enough to accommodate the increased training set, otherwise,
underfitting will occur. The same also applies when increasing the number of
input features.
In classification problems, extra consideration has to be given when
constructing the training set. The number of samples collected from each class
distribution should be comparable to avoid bias [41]. As an example, in [42],
the authors investigated how a balanced training set could impact the accuracy
of an ML classifier, predicting the building and floor a user is located at.
Balancing the initial dataset improved the localization accuracy by about 1%.
#### III-C2 Dataset augmentation
Increasing the volume of measured training data is an expensive and time-
consuming process. In these cases, data augmentation techniques can be used.
When there is a visual input to the ML model in the form of images, various
image transformations can be applied to create new images that can be used as
new training data. For example, in [43], every input image was rotated by 1°,
achieving an increase of the training set by a factor of 180. In [28], apart
from rotation, the authors also used shearing for the training images. The new
images were indeed correlated to the old ones, however, they could still be
useful since the data demands of any ML model are considerable.
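A hedged sketch of rotation-based augmentation, assuming `scipy.ndimage.rotate` for the image transform. The specific angles are illustrative rather than the 1-degree steps of the cited paper.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_by_rotation(images, angles):
    """Create one rotated copy of every input image per angle in `angles`
    (hypothetical helper; the cited paper used 1-degree steps, i.e.
    180 copies per image)."""
    out = []
    for img in images:
        for angle in angles:
            # reshape=False keeps the original image dimensions
            out.append(rotate(img, angle=angle, reshape=False, order=1))
    return np.stack(out)
```

The rotated copies are correlated with the originals, as noted above, but they still enlarge the training set at essentially zero measurement cost.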
Simulated data may also be used to increase the training set. In [44], the
authors used training samples coming from a log-distance model to improve the
ML model predictions in a newly employed frequency, in an airplane cabin. To
train their model, they only used those coordinate points inside the aircraft
where the log-distance model showed good agreement with the actual measured
values. When they tested their model on a new frequency, the authors found
that the accuracy improved, and was in fact higher than by just using an ML
model trained without these points. However, when the authors included a small
part of measurements at the new frequency band, the accuracy improved even
more. Nevertheless, even using seemingly lower-quality data (data coming from
an empirical model), can be helpful during the training of the ML model. That
was also validated when the authors used a fusion of measured and empirical
data in their final experiment, improving the accuracy of their ML model even
further.
Fusing training data from various sources may be necessary to construct large
training datasets. It has also been shown to improve the performance of the ML
model. As an example, in [31], the authors used training data coming from UAV,
Wi-Fi and cellular base stations. In [45], the authors used a fusion of Wi-Fi
RSS and geomagnetic field (GMF) data. The authors also found that by using
only RSS or GMF data, the accuracy of their ML model deteriorated. Likewise,
in [46], the authors trained their model using a fusion of measured (Wi-Fi,
Bluetooth, GM data) as well as synthetic data (RT).
Finally, generative models can also be used to produce additional training
data. In [47], the authors used a generative adversarial network (GAN) to
boost their measured dataset by generating additional channel state
information (CSI, [2]) amplitude maps of an indoor environment. They found
that the classification accuracy of their network significantly improved.
Likewise, in [48], the authors used GANs for data recovery. Their goal was
user tracking, done by measuring successive locations of a moving user. In
cases where no measurements were collected (e.g. not reachable from an access
point (AP)), the GAN was used to estimate the user’s location.
#### III-C3 Impact of training data
Just like uncorrelated input features may improve the accuracy of an ML model
more than correlated ones, uncorrelated training samples are also more useful
than correlated ones. As an example, in [44], the authors decided to use 20%
of their measured data to train their ML models for a new frequency deployment
in the airplane cabin previously mentioned. Their experiments showed that
using measurements taken uniformly across the cabin rows led to a more
accurate model, than using measurements taken only at the front or back rows
of the airplane. The geometry at the front of the plane differed from that at
the back; therefore, taking the majority of measurements in either of these
areas led to a dataset that was not representative of the airplane’s geometry. Finding
similarities among the training data is helpful and can lead to a more
balanced and representative training set. Clustering algorithms can be used to
group the training data and present the ML model with evenly distributed
training samples among the clusters. Such a procedure was followed in [39] to
cluster the coordinates (longitude, latitude) and the altitude of the training
samples, using the well-known $k$-means clustering algorithm. However, this
procedure added another hyperparameter to the ML propagation modeling process,
that of determining the number of clusters $k$.
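The clustering-based balancing described above might be sketched as follows, assuming scikit-learn's k-means; the helper and its parameters are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def balanced_subsample(features, n_clusters, per_cluster, seed=0):
    """Group training samples (e.g. longitude/latitude/altitude triples)
    with k-means and draw the same number of samples from every cluster,
    so the training set covers the modeled area more evenly.
    Returns the indices of the chosen samples (illustrative helper)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(features)
    rng = np.random.default_rng(seed)
    chosen = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        take = min(per_cluster, len(members))
        chosen.extend(rng.choice(members, size=take, replace=False))
    return np.sort(np.array(chosen))
```

As the text notes, the number of clusters `n_clusters` becomes an extra hyperparameter of the modeling pipeline.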
#### III-C4 Type of training data
As already discussed, ML models can be very demanding in terms of training
data, often making the task of collecting large sets of measured data
infeasible. Instead of measurements, synthetic data generated by
electromagnetic solvers may also be used as training data for ML propagation
models. For these cases, there is an additional choice to be made, that of the
solver. Some of them may be computationally more efficient than others.
Moreover, it may be easier to construct the simulation environment in some
solvers than others. One of the most popular ones for outdoor environments is
RT [35, 49, 50, 51, 29, 52]. Another deterministic method used in propagation
modeling simulations is the VPE method. The method assumes a paraxial
approximation with respect to the direction of propagation of the wave.
Therefore, it is often used in simulating enclosed environments that have
waveguiding characteristics [19], or terrestrial propagation scenarios [15].
The use of physics-based solvers for generating the training/test data also
leads to input features that are usually different from the ones used in
measurement-based training of ML models. These input features are solver-
specific. For example, in [50], the authors included parameters such as the
number of reflected and diffracted rays that reach the receiver according to
their RT. Grid-based methods, such as VPE, allow for the assignment of input
features on individual “cells”. Likewise, RT employs reflecting surfaces,
whose specification introduces input features for the model. To that end,
extra input parameters may be used to convey information relating to each cell
of the grid. In [49], the authors designed a grid representing an urban
environment. In [51], a public square in front of a station was implemented
into RT. In both cases, cell-specific information was provided to the ML model
as input. In [49], it was a parameter indicating whether the cell was indoor
or outdoor. In [51], it was the maximum obstacle height within the cell.
Figure 3: Diagram of the main ML models used for radio propagation.
## IV ML model
As already mentioned, one important decision for the accuracy of the
propagation model is what type of ML method to use. This section separates the
propagation papers into three main groups. The number of propagation modeling
papers that have used ANNs is significant. We expect this trend to continue
due to the growing research activity on ANNs. Thus, the first two groups are
defined by whether the ML model used is ANN or non-ANN based. The last group
describes hybrid models that combine more than one ML method. A schematic
diagram of the main ML models that this paper considers can be seen in Fig. 3.
Finally, we discuss the constraints in the range of the input features, and
how these can lead to relatively constrained or more general ML propagation
models.
There are many different ML models. More complex models may perform better
than others, when given large datasets or a large number of input features.
Simpler models may be faster for small-scale tasks. For any ML model, its
hyperparameters play an important role. These can pertain to the architecture
of the ML model or can be directly connected to its training. The number of
hidden layers and nodes for an ANN is an example of the first category, while
hyperparameters such as the learning algorithm, the value of the learning
rate or the type of kernel functions for SVMs are part of the second
category. The values of the hyperparameters can be set via several heuristic
techniques, such as grid search, random search and others [11].
### IV-A ANN-based models
This subsection covers papers that utilize both the standard multi-layer
perceptron (MLP) ANNs as well as their many variations, such as RBF-ANNs,
convolutional neural networks (CNNs), recurrent neural networks (RNNs) and
GANs.
Generally, adding depth to an ANN by increasing the number of layers, or
increasing the number of neurons/nodes, improves its accuracy. This has been
reported in various papers [14, 27, 43, 42], where the authors experimented
with different numbers of layers and found that deeper ANNs gave more accurate
predictions. Similar findings were reported in [13] and [26], where the
authors observed that increasing the number of neurons while keeping the
number of hidden layers constant, improved the accuracy of the RBF-ANN. Bigger
ANNs (and generally more complex models) are more prone to overfitting,
especially if only limited numbers of training data are available. Hence,
regularization techniques have to be used, such as early stopping or L1 and L2
regularization [8, 9]. The former method stops network training when the
validation error follows an increasing trend, a sign of overfitting. The
latter penalize big network weights by incorporating the L1 and L2 weight
norm, respectively, into the cost function (see Eq. (7) in the Appendix).
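A minimal sketch of both regularization techniques using scikit-learn's `MLPRegressor`; the network size, L2 penalty weight and synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative regression data (not from any cited paper).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=400)

# alpha is the L2 penalty weight added to the cost function;
# early_stopping holds out validation_fraction of the training data and
# stops training when the validation score no longer improves.
model = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-3,
                     early_stopping=True, validation_fraction=0.15,
                     n_iter_no_change=20, max_iter=2000,
                     random_state=0).fit(X, y)
```

Both mechanisms act as the complexity penalty $\Omega(g)$ discussed earlier: the L2 term shrinks the weights directly, while early stopping halts training before the weights grow enough to fit the noise.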
Many variations of MLP-ANNs have been developed over the years, with some of
them utilized for propagation modeling. One variation of the standard MLP-ANN,
widely employed in propagation modeling, is the RBF-ANN [13, 37]. For these
models, the choice of radial basis functions is just as important as choosing
an efficient activation function for the MLP-ANNs. In [13], the authors used
spline functions with good results. Other types of ANNs have also been used
such as the wavelet neural networks (WNNs) [53] that use a wavelet as an
activation function.
A variation of ANNs, highly popular for classification tasks and widely used
for image recognition, is the CNN [9]. The input to a CNN is typically a
3-dimensional tensor, instead of input vectors used in the standard ANN. For
example, for image classification tasks, each input layer of the data contains
pixel values for each of the three RGB color channels in the form of a
2-dimensional matrix. Stacking these three matrices together will form the
3-dimensional input tensor. The same procedure is followed when grey-scale
images are used as input [28, 29]. In the context of propagation modeling, the
input to the CNN does not have to be visual, i.e. instead of color
intensities, each pixel can represent other useful information. Moreover, a
varying number of input layers/channels can be used. For example, in [43], the
authors used two input channels. One encoded the normalized height of each
building in an RT-generated city grid. The other accounted for the normalized
difference in height between the transmitter and the ground level at each
point of the grid. In [51], the authors created a square grid representing an
urban square that was modeled in RT. They used three channels containing
information regarding the distance from the transmitter, the distance from the
receiver, as well as the height of each cell on the grid, respectively. Three
layers of input features were also used in [54]. Furthermore, the input is not
required to contain spatial information. For example, in [42], the authors
used images of RSS coming from multiple APs for a given indoor location.
Instead of using an input vector containing these values, they converted them
into a 2-dimensional format by zero-padding. Some of the most popular CNN
implementations in computer vision have also been used for propagation
modeling. Among them are VGG-16 [55] as well as residual network (ResNet)
implementations [56]. CNNs use filters (feature detectors) to learn mappings
between the input and output space. A parameter sharing scheme exists among
filters, i.e. the same filter is applied at different parts of the input image
[9]. That trait as well as sparsity of connections make CNNs more efficient
than standard MLPs [9].
Another class of ANNs that has recently grown in popularity and has been
utilized in propagation modeling is the RNN. Contrary to the previously discussed ANN
types, these networks exhibit a dynamic behaviour, whereby their output
depends not only on the current input, but on previous inputs as well. Hence,
RNNs process sequences of input data in a recurrent fashion. Many different
variations of RNNs have been used for propagation modeling purposes, such as
the echo state networks (ESNs), [57], Elman RNNs [53], standard RNNs [58, 32]
and gated RNNs that use a gated recurrent unit (GRU) instead of a standard
recurrent unit cell [31, 32]. The most important and popular type of RNN is
the long short-term memory (LSTM) RNN that can capture longer dependencies in
the input data, compared to the other types of RNNs [30, 33, 32, 45, 59]. In
propagation modeling problems, the input sequence to the RNN is usually
spatial [32] or temporal [31, 58]. The ability to learn temporal sequences
also makes RNNs ideal for modeling time variations in a communication channel.
Finally, another recent and very powerful class of ML models involves GANs
[60]. These models consist of two neural networks, usually CNNs [61], the
generator (G) and the discriminator (D). G tries to model the distribution of
the real (target) data, e.g. an image of a human face, by producing fake
images, while D tries to discriminate between these and the real images that
are provided to it. Based on this adversarial game, a trained G can produce
artificial/synthetic data that are almost indistinguishable from the real
ones. Whereas the inception of GANs envisaged them in an unsupervised setting,
they have been adapted accordingly for semi-supervised learning by providing D
with labeled data. Moreover, a useful type of GANs for propagation modeling is
conditional GANs (cGANs) [62]. These can include constraints on the data
produced by G. GANs’ generative ability is highly utilized in propagation
modeling for dataset augmentation [48, 47] or to improve the reliability of ML
models when the number of labeled data is small [63]. Finally, apart from
GANs, other generative networks can also be used, such as deep belief networks
(DBN) [46].
### IV-B Non-ANN-based models
Apart from ANNs, many other ML models can be used for regression tasks, such
as computing PL in a given communication link. Even though they differ from
ANNs in their tuning parameters, architecture or learning algorithm, the main
challenge remains the same; namely, what network parameters to choose for an
efficient configuration.
One popular choice outside ANNs is SVMs and more specifically their regression
(SVR) version [23, 64, 57, 44, 39]. SVM kernels are equivalent to the
activation functions of the ANNs. The type of kernel is important for the
accuracy of the model. In [64], the authors experimented with three different
kernels, namely a polynomial, a Laplacian and a Gaussian kernel. Results on
the test set showed that the Laplacian kernel gave more accurate predictions
than the other two. Other kernels, such as RBF kernels, have also been used
for propagation modeling [39].
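A sketch of how such a kernel comparison might look with scikit-learn's `SVR`, fitted on synthetic log-distance path-loss data; the kernel set and the `C` value are illustrative, not the cited papers' setups.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical path-loss data following a noisy log-distance law.
rng = np.random.default_rng(0)
d = rng.uniform(10, 500, size=(300, 1))                    # Tx-Rx distance (m)
pl = 40 + 30 * np.log10(d[:, 0]) + rng.normal(0, 2, 300)   # path loss (dB)

# Fit one SVR per kernel and compare their coefficients of determination.
models = {k: SVR(kernel=k, C=100).fit(d, pl) for k in ("rbf", "linear")}
scores = {k: m.score(d, pl) for k, m in models.items()}    # R^2 per kernel
```

In practice the comparison would be done on a held-out validation set rather than on the training data used here for brevity.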
Another alternative is a genetic algorithm (GA), used to derive a closed form
expression of the received PL [65]. Other well known regression algorithms,
such as the $k$-nearest neighbors, are also popular for radio propagation
modeling [21, 39]. Recently, ensemble methods have been used with promising
results [66]. Their function is based on the simple notion that a combination
of learners can be more powerful than each one separately. Random forests (RF)
[67], which operate by constructing a variety of decision trees, are such an
ensemble method, used for radio propagation modeling [66, 21, 44, 39]. Another
form of ensemble learning is boosting. One of its more popular
implementations, the Adaboost algorithm [67], has also been applied for
propagation modeling [66, 21, 44, 39]. Any one of the models previously
mentioned, such as the SVR or the $k$-NN algorithms, can be used as a learner
for the ensemble method. Since ensemble methods require the participation of
many learners for the generation of the final output, a weighting scheme
between them has to be chosen. Finally, it should be noted that training
ensemble methods may take considerable time, compared to separately training
the base learner.
### IV-C Hybrid models
Instead of creating an ML model to directly predict various propagation
parameters, one can implement a correction mechanism for an existing
propagation model. The main goal of the ML model is to enhance the knowledge
provided by a baseline propagation model, by either “learning” its errors or
by using its predictions as part of the ML model’s input. This combination of
a baseline and an ML model can be considered as a hybrid approach. The
architecture of one such error correction hybrid model can be seen in Fig. 4.
The input to the baseline and the ML model can be generally different.
Considering one sample point, the prediction of the baseline model $y_{1}$ is
used to compute the error $e$ with respect to the real target value $y$. Then,
this error is used as the target value for the ML model. Therefore, the
prediction of the ML model $\hat{y}$ will correspond to that learned error.
Consequently, the final prediction of the hybrid model would consist of the
sum of $y_{1}$ and $\hat{y}$. The initial prediction of the baseline model can
be thought of as a starting point, from which the ML model can improve on the
baseline model’s predictions. Hence, the training of the hybrid model is
expected to be more efficient.
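The error-correction scheme of Fig. 4 can be sketched as follows; the baseline model, network size and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic "measured" path loss: a log-distance trend plus a structured
# deviation that a plain empirical model misses (values are illustrative).
rng = np.random.default_rng(0)
d = rng.uniform(10, 500, size=(500, 1))
y = 40 + 30 * np.log10(d[:, 0]) + 5 * np.sin(d[:, 0] / 80)

# Baseline (empirical) prediction y1 and its error e = y - y1.
y1 = 40 + 30 * np.log10(d[:, 0])
e = y - y1

# The ML model is trained on the error, not on y itself.
ml = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                  random_state=0).fit(d / 100.0, e)   # scaled input

# Final hybrid prediction: baseline plus learned correction.
y_hat = y1 + ml.predict(d / 100.0)
```

The hybrid prediction starts from the baseline model's output, so the network only has to learn the (smaller, more structured) residual rather than the full path-loss curve.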
Various papers over the past years have used ML models as error correction
tools in predicting various propagation parameters of a communication channel.
The baseline models that have been used in the literature are mostly empirical
ones. For example, in [37, 34, 68], the COST-Walfisch-Ikegami (CWI) model was
used as a baseline model. In [69], a log-distance model was used to drive the
training of the ML model, while in [70], Hata’s model provided the starting
knowledge. On the other hand, the baseline model can be an ML model itself.
For example, in [71], the base model consisted of an Adaline (adaptive linear
network) NN, a very basic network computing only the linear weighted sum of
its input. The input to the ML model can be the same as the one of the
baseline model [37, 71]. It can also include input parameters that are not
modeled by the baseline model itself [34]. Finally, the ML model can also
improve on the predictions of a lower fidelity model, to match those of a
higher one. For example, in [52], an ANN was used to improve on the RSS
predictions of an RT using 4 reflections. The authors found its accuracy was
close to the case of using 7 reflections, while its runtime was smaller.
Hybridization can also refer to the interconnection of several ML models to
create a more complex one. Such a procedure was followed in [28]. The authors
used two ANNs and a single CNN to synthesize a more complex ML model. The task
of the CNN was to learn latent features from digital images, while the first
ANN was tasked with learning mappings from input features pertaining to the
transmitter and receiver, as well as their distance. Finally, the second ANN
was used to concatenate the outputs of the other two models and produce the
final output. Likewise, in [54], the authors first used a CNN to extract
latent features from RT-generated maps of urban environments. Those features
along with the inputs of the CNN were then used as input to a 2-hidden-layer
ANN, which in turn generated the output (PL) of the cascaded network. Another
cascaded-layered network was created in [58]. The authors combined two RNNs,
with the first classifying the building where the user may be located, and the
second the corresponding room. Finally, in [33], the authors used a hybrid of
a ResNet and an LSTM network. ResNet learned the spatial correlation between
RSS samples at a given timestamp in the form of black-and-white radio maps.
Sequences of 5 RSS maps were then stacked together and passed on to the LSTM
to extract the temporal correlations between them.
GANs can be thought of as the interconnection of two baseline models (e.g.
CNNs). However, some papers hybridized GANs with other models to perform even
more complex functions. For example, in [47], the authors interconnected a
DCGAN with a CNN for classifying the user’s location. In [63], the authors
used a DCGAN employing a CNN classifier that shared weights with the
discriminator (D). In that case, both D and the classifier learned
concurrently as a dual model, with the classifier predicting user location in
an indoor environment and D discriminating between real and fake samples. In
[48], the authors used a cGAN
in combination with an LSTM network. The RSS maps of the modeling environment
created by the cGAN were used by the LSTM for user tracking. For the same
purpose, the authors in [46] combined an unsupervised DBN with a supervised
classifier.
Figure 4: The architecture of an error-correction hybrid model.
As can be seen from Fig. 4, the operation of the baseline and the ML model are
linked together. However, they still exist as two separate models. One
interesting alternative is to merge the two models and integrate the baseline
model into the ML model itself, as was the case in [70]. One of the ANN nodes
was used to implement the Hata model. The output of that node was connected to
the total output of the network through a unit weight, to ensure that the
final output would be always influenced by the baseline model’s predictions.
### IV-D Input-constrained models
A propagation model can consist of separate ML-based models, each derived for
a subset of the frequency range of interest, for specific receiver positions
(e.g. LOS or NLOS locations), or for other subsets of the input feature space.
For example, several authors have created frequency- [39],
environment- [40], route- [26, 40], or distance-specific [38] ML models.
Nevertheless, more general ML propagation models can be created
too. One such example of designing a multi-frequency, multi-environment
propagation model can be found in [16]. To that end, the authors collected
measurements at a variety of different frequency bands and at areas ranging
from urban to rural. More examples of multi-frequency networks can be found in
[15, 68, 44]. A similar procedure was followed in [21], where the forested
areas measured had different features, such as canopy density and terrain
complexity. Their ML model accounted for all these diverse environments.
Relaxing the constraints on the range of the input features requires the
training of more complex ML models that demand bigger volumes of training data
and higher computing power. On the other hand, since they are general, they
are more flexible.
## V Output
In this section, we present different types of outputs that have been
considered in ML-based propagation models. We also discuss the accuracy of
these models and connect some output errors to specific input features. As a
note, for regression problems, various error metrics can be used to evaluate
the accuracy of the ML model on the test set. The mean absolute error (MAE),
the mean squared error (MSE) or the correlation factor (CF) between the model
predictions and the target values are just some of these. For classification
tasks, the accuracy of the model corresponds to the probability of correctly
predicting the output class.
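As an illustration, the three regression metrics mentioned above can be computed as follows; the RSS values used are purely illustrative.

```python
import math

def mae(pred, target):
    """Mean absolute error."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(target)

def mse(pred, target):
    """Mean squared error."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)

def correlation_factor(pred, target):
    """Pearson correlation between predictions and targets."""
    n = len(target)
    mp, mt = sum(pred) / n, sum(target) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, target))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in target))
    return cov / (sp * st)

pred   = [-62.0, -70.5, -81.0]   # predicted RSS in dBm (illustrative)
target = [-60.0, -72.0, -80.0]   # measured RSS in dBm (illustrative)
print(mae(pred, target), mse(pred, target), correlation_factor(pred, target))
```

MSE penalizes large deviations (e.g. missed fast-fading dips) more heavily than MAE, which is why the two metrics can rank models differently.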
### V-A Type of output
Even though most ML-based propagation scenarios correspond to regression
tasks, such as those previously described in the paper, classification
problems have also been considered. As an example, in [72], the authors used
high-level information at the receiver, such as PLE, RMS delay, RMS angular
spread and others, about various measured as well as simulated communication
scenarios. Then, they used these as inputs to classify each environment into
urban or rural, each with an additional LOS or NLOS label. In some
classification problems, choosing the number of different classes may not be
completely straightforward and the labeling may also be subject to human
errors. In [72], an environment could have both urban and rural features,
therefore, its classification as urban or rural may be misleading. In such
cases, unsupervised algorithms may be used to cluster the data. In [72], the
authors experimented with 2 supervised algorithms, $k$-NN and SVM, as well as
with 2 unsupervised, the $k$-means [10] and a probabilistic inference model,
the Gaussian mixture model (GMM). It is interesting to note that the authors
found that the optimal number of clusters for the $k$-means algorithm was
equal to the number of classifying classes.
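A minimal sketch of how $k$-means could cluster a scalar propagation feature into two classes is given below; the path-loss-exponent values are hypothetical, and this is not the implementation used in [72].

```python
def kmeans_1d(values, iters=20):
    """Minimal 1-D k-means with k=2: cluster scalar features
    (e.g. per-route path loss exponents) into two groups."""
    centroids = [min(values), max(values)]  # spread the initial centroids
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            # assign each value to its nearest centroid
            idx = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[idx].append(v)
        # recompute centroids as cluster means
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Illustrative PLEs: low values ~ open/LOS-like, high values ~ urban/NLOS-like.
ple = [2.0, 2.1, 2.2, 3.8, 4.0, 4.1]
centroids, clusters = kmeans_1d(ple)
print(sorted(round(c, 2) for c in centroids))  # the two cluster centres
```

In an unsupervised setting like this, the cluster labels ("rural-like", "urban-like") are assigned after the fact, which sidesteps the ambiguous manual labeling discussed above.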
As already mentioned, the output of an ML model can also be probabilistic. In
[73], the authors trained an ANN using channel statistics (moments and
covariances of channel transfer functions) over many simulated realisations of
two stochastic propagation models: the path graph (PGr) and the Saleh-
Valenzuela (SV) model. The trained ANN was able to generate valid parameters
for the two probabilistic models, so that both showed a similar power density
profile to the actual one. Likewise, in [29], the authors used a CNN-based
approach for PL distribution estimation in a variety of different urban and
suburban environments. Satellite images of the environments were first
converted to 3D models that were imported into an RT solver. Then, the
simulated PL was used to generate the PL probability density function in each
environment. In such cases, the number of bins that comprise the distributions
should be carefully chosen.
Similar to the sequential input of an RNN, its output can be sequential too.
In [32], the authors investigated whether their multiple-input multiple-output
(MIMO) LSTM was more accurate than a multiple-input single-output (MISO) one.
They found that the MIMO model was more accurate. Similar findings were
reported in [48], where a MIMO model was more accurate than a single-input
single-output (SISO) LSTM.
An ML propagation model that learns the RSS inside a specific geometry can be
used to compute other parameters and influence network-level decisions. As an
example, in [74], the authors created an ANN that outputs the conductivity and
electric permittivity of the ground. Moreover, large-scale fading outputs,
such as the PLE of a communication channel, are also helpful for a higher-
level analysis of a given environment [43]. ML propagation models that learn
RSS maps can also be used for localization and tracking, especially in indoor
environments. Examples include models that learn to estimate user location,
both as a regression [42, 58] as well as a classification problem [31, 30].
The former is done by computing the coordinates of the user, while the latter
by classifying the subspace the user is located in.
### V-B Understanding output errors
Some output errors of ML-based propagation models can be attributed to certain
input features. For example, the more complex the multipath mechanisms are in
a given environment, the more error-prone becomes an ML model of the channel
impulse response of a communication system in that environment. Similar
connections are made in the following.
#### V-B1 Type of environment
The type of environment, as well as the presence or absence of an LOS
component, are two factors that influence the error of ML models. ML
propagation models in urban scenarios exhibit lower accuracy than in rural
[16] or semi-urban [37, 29] environments. A similar finding was reported in
[43]. The authors noted a slight increase in error when they inserted more
buildings in their RT-generated urban grid, i.e. when they made the urban
environment even more complex. Moreover, ML models for NLOS scenarios are also
less accurate than LOS ones [20, 51, 40, 30]. Similar results were reported in
[72], where the authors trained and tested their network using two different
routes in an urban environment and ascertained that the route that maintained
better LOS conditions exhibited smaller prediction errors. Also in [38], the
largest errors corresponded to receiving points for which LOS was obstructed
by several tall buildings. For such cases, the authors found that local
(around the receiver) information could improve accuracy. In addition to that,
increasing the height of the transmitter had the same effect, since by doing
so, the signal strength at the same receiving points increased (some NLOS
cases also changed into LOS ones). Similar findings with respect to the
transmitter antenna height were reported in [29].
#### V-B2 Distance and frequency
Another factor affecting the accuracy of predictions is the simulated
distance. Generally, errors are higher when the receiver is in the vicinity of
the transmitter. For example, in [16], the authors concluded that for
distances smaller than 500 m, the ANN presented higher errors than for larger
distances. Similar findings were reported in [43]. A workaround is to build
distance-specific ML models. In [38], the authors created three distance-
specific ANNs, for propagation distances less than 350 m, between 350 m and
700 m, and larger than 700 m. Interestingly, the ANN trained on small
distances was slightly more accurate than the one trained on medium distances,
due to overtraining of the first network on learning the more pronounced fast
fading characteristics of smaller distances. When that network was tested on
medium or large distances, it was found less accurate than the network trained
on medium distances. For a distance-agnostic ML model, the higher errors
observed closer to the transmitter are related to the break-point of the
communication channel [75] and the near field of the transmitting antenna,
both of which depend on the frequency of operation. Generally, as frequency
increases, the small-scale fading characteristics of the channel become more
pronounced. That leads to increasing errors for the ML model [68, 36, 19, 29].
For example, in [44], the authors observed increased errors at the frequency
band of 3.52 GHz and 5.8 GHz compared to the one of 2.4 GHz. This is also
demonstrated in the case study of the Appendix. On the other hand, in [16],
the authors concluded that their ANN’s errors did not follow a specific
pattern across the frequency range used (450 MHz - 2600 MHz). However, they
averaged their measurements over a 40-wavelength distance, reducing the
effects of fast fading.
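The break-point distance mentioned above is often approximated via the two-ray ground-reflection model as $d_{bp} = 4h_{t}h_{r}/\lambda$. The short sketch below evaluates this approximation for illustrative antenna heights, showing that the break-point moves farther from the transmitter as frequency increases.

```python
def breakpoint_distance(h_tx, h_rx, f_hz, c=3e8):
    """Two-ray model break-point distance: d_bp = 4 * h_tx * h_rx / lambda."""
    wavelength = c / f_hz
    return 4.0 * h_tx * h_rx / wavelength

# Illustrative heights: a 10 m transmitter and a 1.5 m receiver antenna.
for f in (900e6, 2.4e9):
    print(f / 1e6, "MHz ->", round(breakpoint_distance(10.0, 1.5, f), 1), "m")
```

With these heights, the break-point sits at 180 m for 900 MHz but at 480 m for 2.4 GHz, so a distance-agnostic model trained at a higher frequency sees near-field-like behaviour over a larger portion of its range.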
### V-C Accuracy of ML models
#### V-C1 Comparison to empirical models
In early ML propagation papers, empirical models were mainly used as a
reference for the accuracy of the ML models. Many papers have showcased the
higher accuracy of ML propagation models over various empirical ones. In [20],
the authors showed that the ANNs they created, both for LOS and NLOS cases,
outperformed three reference empirical models used, namely the Walfish-Bertoni
(WB) model, the single-slope model and the CWI model. In [68], the MLP-ANN
implementation was more accurate than the ARMA forecasting model at both
frequencies of 800 and 1800 MHz, while in [14], the ANN was found more
accurate than Recommendation ITU-R P.1546 for rural areas and the Hata model.
Finally, in [26], the ANNs outperformed empirical models that also included
diffraction losses, such as the Cascade Knife Edge (CKE) and Delta-Bullington
model. Similar observations were made for other ML models. As a matter of
fact, in [24], the RBF-ANN implementation was found more accurate than Meeks
[76] and Maximum Shadowing (simplified Meeks) empirical models. In [37], the
same type of network was found more accurate than the single-slope, WB and CWI
models. Similar results were also observed in papers using hybrid ML models,
such as in [34, 37, 28]. In [70], the authors also showed that a combined
Hata-ANN model was more accurate than each one of its constituent models.
#### V-C2 Comparison to other ML models
Other papers compared different ML models to identify the most accurate for
their application. In [64], the authors found that results of the MLP-ANN were
very close to that of an SVR. Similar findings were reported in [37, 23],
where the accuracy of the RBF compared to the MLP-ANN was only marginally
higher. In [15], the authors concluded that for their setup, the WNN
implementation was more accurate than the RBF-ANN. In [57], the authors found
that the ESN was more accurate than an SVR implementation, but also more time
consuming during training. In [22], the CNN outperformed a standard ANN for
predicting PG over irregular terrain. In [42], the authors also found that
their CNN was more accurate than an MLP-ANN classifier and a stacked auto-
encoder (SAE). RNNs have also been found more accurate and faster than MLP-
ANNs in the modeling of time-varying communication channels [31, 32, 30].
Among the RNN models, the authors in [31] found that GRU outperformed their
LSTM implementation. On the other hand, LSTMs were found more accurate than
both standard RNNs [32, 59], as well as GRUs [32].
Regarding ensemble implementations, in [21], the RF performed more
accurately than the MLP-ANN, $k$-NN and AdaBoost implementations. Similar
results were observed in [66], where the author’s RF implementation
outperformed the $k$-NN, SVR, Adaboost and the gradient tree boosting (GTB)
methods. However, it should be noted that all 5 ML models were close,
accuracy-wise. Finally, a voting regressor (VR) ensemble model constructed
from these 5 learners was slightly more accurate. One more paper that found RF
to be the most accurate model is [44], where the authors compared it against
an empirical model (log-distance) and 3 other ML models: a single hidden-layer
ANN, SVR and AdaBoost. RF was the most accurate model across all three
frequencies the authors used (2.4, 3.52 and 5.8 GHz). In [72], the RF
implementation was again found more accurate than the MLP-ANN and the SVR.
Finally, in [39], the authors compared AdaBoost, RF, $k$-NN, SVR and ANN
regression algorithms against Lasso regression [77] (a type of linear
restricted regression) and the kriging algorithm (often used in
geostatistics). The goal was to predict the RSS of a digital terrestrial
television (DTT) system at any point inside a coverage area and project it on
Google maps. They found that all ML models were noticeably more accurate than
kriging and Lasso. Among the ML models, the most accurate were AdaBoost and
the RF regression algorithm, exhibiting similar performance.
### V-D Generalizability and test set
As discussed in Section II, the ML model should have good generalization
abilities. This is determined by how accurate the ML model is when evaluated
on the test set.
#### V-D1 Composition of the test set
For any reliable conclusion about the generalizability of an ML model, the
test set should contain data samples that have not been used during training.
The kind of samples is implementation-specific and connected to the goal of
the ML model. In [12], the authors tested their ANN using a transmitter
location that was different from the one used to train it. In [16] and [26],
the authors used different routes to test and train their ANN. Likewise, for
cases where synthetic data were used, as in [49], the authors trained their
network on a grid of uniformly placed streets and buildings, but tested it on
a non-uniform grid. The test samples can represent an interpolation problem,
i.e. when the values of the test features fall within the range of the
training inputs, or they can be extrapolation samples. For example, in [68],
the generalization abilities of the ML model were tested for receiver
distances higher than those used during the training, simulating a time-series
forecasting problem. Moreover, in [44], the authors tested their network on
frequencies outside the ones used to train it. Another extrapolation problem
can be found in [53], where the authors used an Elman RNN to calculate the
propagation factor in VPE simulations.
When an ML model is trained only on synthetic data, its accuracy is bounded by
the accuracy of the solver that generates the data. Therefore, the solver
itself has to be accurate. In order to enhance the reliability and thus the
generalizability of such an ML model, measurements can also be used as test
cases. We presented such an example in [19], where we used a fusion of
synthetic as well as measured data to evaluate an ML model. More specifically,
an ANN was trained on RT-generated data simulating various arch-shaped tunnels
of different cross-sections. Afterwards, the ANN was tested not only on RT-
generated test tunnel configurations, but also on sections of the London
Underground.
#### V-D2 Test set distribution
As explained in Section III, the training set has to be representative of the
test set. Ideally, both datasets should be generated by the same distribution.
Otherwise, large errors may be observed. As an example, in [26], the authors
created two different ANNs for two measured routes. Both ANNs were highly
accurate when tested on the routes that were used to train them. However, when
measured data from one route was used to generate predictions for the other,
the performance of the ML model deteriorated. In fact, accuracy was lower when
using data from route 1 to generate predictions for route 2, than the other
way around. The reason for that was that route 1 crossed an urban area, while
route 2 crossed a mix of urban, rural and semi-rural terrain. Hence, it
included a wider range of propagation effects than route 1. As another
example, the authors in [49] trained their ML model on RT-generated data
inside a uniform urban grid. When they tested their model on data generated in
a non-uniform grid, they noticed a considerable increase in the prediction
error.
## VI Conclusion
In this paper, we gave an overview of ML-driven propagation models. We
described what an ML propagation model is and how it can be constructed. We
analyzed its three building blocks: its input, the ML model used and its
output. Moreover, we presented the challenges associated with each one of
these blocks and how they were tackled by several relevant papers in the
literature. This was done in a systematic way, by focusing on the methods,
conclusions and higher-level decisions of each paper. More specifically, we
can conclude that:
* •
Input features should convey useful information about the propagation problem
at hand, while also having small correlation between them.
* •
Dimensionality reduction techniques can help identify the dominant
propagation-related input features by removing redundant ones.
* •
Increasing the number of training data by presenting the ML model with more
propagation scenarios improves its accuracy.
* •
Synthetic data generated by high-fidelity solvers, such as RT or VPE, or
empirical propagation models, can be used to increase the size of the training
set and refine the accuracy of ML-based models. Data augmentation techniques
can also be used for that purpose.
* •
Regarding the accuracy of the ML models, RF was found to be the most accurate
by a number of papers. Generally though, the differences in accuracy between
the various ML models are implementation-dependent and were not large for the
ML models we reviewed.
* •
More general ML propagation models, covering a wide range of frequencies and
propagation environments, require more training data than simpler ones. The
same applies for models that correspond to more complex propagation scenarios,
such as in urban environments.
* •
ML models can be connected to create hybrid ones that can be employed in more
complex propagation problems.
* •
The evaluation of an ML model for a given propagation problem requires a test
set modeling all present propagation mechanisms. Its samples should come from
the same distribution as that of the training samples.
There are still open problems or questions to be further investigated in this
area. Examples of such problems are:
* •
The connection of the physical propagation mechanisms present in a channel
with the architecture and the volume of training data required for the
development of an ML-based channel model.
* •
The potential of ML methods to simplify the input required for the development
of PL models in complex environments (e.g. replacing elaborate CAD models with
a sequence of images of the environment to be processed by the model).
* •
Further research on GANs and how they can improve the accuracy of an ML
propagation model, especially in the regime of low-volume training data.
* •
Reinforcement learning (RL) [78], being the third class of ML problems apart
from supervised and unsupervised learning, has been used for wireless channel
modeling, tackling tasks such as interference mitigation [79] or resource and
channel allocation [80]. The investigation of RL techniques for
electromagnetic wave propagation modeling seems highly promising.
We do believe though that advances in the ML field will keep influencing and
advancing the area of radio propagation modeling.
## Appendix - a case study of an ml propagation model
Figure 5: The cross-section of a circular tunnel, showing the positions of the
transmitter $\text{T}_{\text{x}}$ and the receiver $\text{R}_{\text{x}}$.
We will illustrate the key steps in formulating an ML propagation model, using
the example of an ANN-based model that predicts the RSS inside a straight
circular tunnel. The cross-section of
such a tunnel can be seen in Fig. 5. The position of the transmitter
$\text{T}_{\text{x}}$ remains fixed at the center of the tunnel, with
coordinates $(x_{c},y_{c})=(0,r)$, where $r$ is the radius of the circular
tunnel. The receiver $\text{R}_{\text{x}}$ is moving along the tunnel in the
$z_{c}$ direction, with its position on the $x$-$y$ plane fixed at
$(x_{c},y_{c})=(0,3r/2)$. Thus, the receiver’s trajectory is emulating an
antenna, mounted on a moving train or wagon. The length of the tunnel is 500
m. The transmitter radiates a 20 dBm single Gaussian beam [81], while the
receiver uses a half-wave dipole antenna. Both antenna gains are 1 dBi.
Table I lists all the input features used, as well as their respective range
of values. Two different frequency bands are used, centered at 900 MHz and 2.4
GHz respectively, each with a bandwidth of 200 MHz. Since these two frequency
bands exhibit different propagation characteristics, separate ANNs are created
for each band. Our particular selection of input features is based on our
knowledge of parameters that influence the RSS. Both the geometry of the
tunnel, the axial distance between the receiver and the transmitter, as well
as the frequency of operation are such features. Thus, each input vector
$\boldsymbol{x}$ is 3-dimensional.
TABLE I: Circular tunnel input features
Parameter | Symbol | Min. Value | Max. Value | Increment | # conf.
---|---|---|---|---|---
Radius | $r$ | 2 m | 4 m | 0.1 m | 21
1st Freq. Band | $f_{1}$ | 800 MHz | 1 GHz | 25 MHz | 9
2nd Freq. Band | $f_{2}$ | 2.3 GHz | 2.5 GHz | 25 MHz | 9
Axial Distance | $z_{c}$ | 0 m | 500 m | 1 m | 501
The three datasets are generated as follows. The $i$-th input vector
$\boldsymbol{x}_{i}$ is associated with a scalar target value $y_{i}$,
corresponding to the received power (in dBm) at the specified point. The
target values are generated using an in-house VPE solver that has been
extensively validated [81]. We generate 189 training/validation tunnel
configurations for each of the two frequency bands. Each tunnel configuration
corresponds to 501 different input vectors $\boldsymbol{x}_{i}$, one for each
receiving point along $z_{c}$. From these 189 tunnel configurations, 20 are
randomly chosen to form the validation set. Another 24, different than those
comprising the training and the validation set, are randomly generated and
used as test cases.
The input data are then pre-processed. In our case, they are normalized to
have zero mean and unit standard deviation. To achieve this, the following
transformation is applied to the $j$-th input feature:
$x_{norm}^{j}=\frac{x^{j}-\mu^{j}}{\sigma^{j}}$ (6)
where $\mu^{j}$ and $\sigma^{j}$ are the $j$-th input feature’s mean and
standard deviation, respectively, computed only over the training samples.
That helps the ANN speed up training, as all the input features have
comparable ranges. We choose a cost function that minimizes the squared error
between the network’s predictions and the target values and also apply
$\text{L}_{2}$ regularization in order to penalize the complexity term of Eq.
(5). Thus, the in-sample mean squared error (MSE) is given by:
$\mathcal{E}_{\text{in}}(\boldsymbol{w})=\frac{1}{N}{||g(\boldsymbol{x})-\boldsymbol{y}||}_{2}^{2}+\frac{\lambda}{2}{||\boldsymbol{w}||}_{2}^{2}$
(7)
where $\lambda$ is a regularization parameter, $N$ is the number of input
samples, also called the batch size, and $\boldsymbol{w}$ is a vector
containing all the weights of the network. We use the identity linear function
for the output node and tanh activations for the rest. Finally, we use the
ADAM optimizer [82], a more advanced version of the gradient descent
algorithm, as our learning algorithm for minimizing the cost function.
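A minimal sketch of the pre-processing of Eq. (6) and the regularized cost of Eq. (7) is shown below. The scaler statistics are computed over the training samples only; the input vectors are illustrative, not our actual dataset.

```python
import math

def fit_scaler(train_features):
    """Per-feature mean and std (Eq. 6), computed on the training set only."""
    n = len(train_features)
    dims = len(train_features[0])
    mu = [sum(x[j] for x in train_features) / n for j in range(dims)]
    sigma = [math.sqrt(sum((x[j] - mu[j]) ** 2 for x in train_features) / n)
             for j in range(dims)]
    return mu, sigma

def normalize(x, mu, sigma):
    """Apply Eq. (6): zero mean, unit standard deviation per feature."""
    return [(x[j] - mu[j]) / sigma[j] for j in range(len(x))]

def regularized_mse(preds, targets, weights, lam):
    """In-sample cost of Eq. (7): MSE plus an L2 penalty on the weights."""
    n = len(targets)
    mse = sum((p - t) ** 2 for p, t in zip(preds, targets)) / n
    return mse + 0.5 * lam * sum(w ** 2 for w in weights)

# Toy input vectors (radius, frequency, axial distance) -- values illustrative.
train = [[2.0, 800e6, 0.0], [3.0, 900e6, 250.0], [4.0, 1000e6, 500.0]]
mu, sigma = fit_scaler(train)
print(normalize(train[1], mu, sigma))  # the mean sample maps to all zeros
```

Validation and test samples are normalized with the same $\mu^{j}$ and $\sigma^{j}$, so that no information from the test set leaks into training.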
Figure 6: Hidden neuron selection in a 3-hidden-layer ANN.
Before training the ANN, we choose the number of hidden layers and neurons per
layer, evaluating different combinations of them with respect to their
validation error. This is done by first training on the training set and then
computing the MAE over all the validation samples for each different
architecture. Then, we choose the architecture that gives the smallest MAE as
our final model architecture. Fig. 6 shows the validation MAE for a 3-hidden-
layer ANN of various configurations of nodes per layer, for the 900 MHz case.
Fig. 7 illustrates the same for a 4-hidden-layer ANN. Based on these figures,
a 300-200-75 node configuration seems optimal for the 3-hidden-layer ANN,
since the validation error increases thereafter. Meanwhile, the 4-hidden-layer
case exhibits a slightly smaller MAE for the node configurations checked. It
also follows a decreasing trend as the total number of nodes is increased. As
a result, we choose the 4-hidden-layer architecture. Using fewer than 3 hidden
layers makes it more difficult for the network to capture some of the
oscillations of the signal. On the other hand, using more than 4 or further
increasing the number of neurons per layer, increases training time without
substantial improvement in accuracy. The same procedure is also followed to
determine all other hyperparameters, such as the amount of regularization and
the batch size of Eq. (7).
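The validation-driven selection procedure described above reduces to picking the candidate with the smallest validation MAE. In the sketch below, the per-configuration MAEs are hypothetical stand-ins for the results of actual training runs.

```python
def select_architecture(candidates, train_and_eval):
    """Evaluate each candidate configuration on the validation set and
    return the one with the smallest validation MAE."""
    scores = {cfg: train_and_eval(cfg) for cfg in candidates}
    best = min(scores, key=scores.get)
    return best, scores

# Hypothetical validation MAEs (in dB) per node configuration; in practice
# train_and_eval would train the ANN and compute the MAE on the validation set.
fake_maes = {(300, 200, 75): 0.95, (400, 250, 100): 1.02, (200, 150, 50): 1.10}
best, scores = select_architecture(fake_maes, lambda cfg: fake_maes[cfg])
print(best)  # configuration with the smallest validation MAE
```

The same loop can be reused for any other hyperparameter, such as the regularization strength $\lambda$ or the batch size.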
Figure 7: Hidden neuron selection in a 4-hidden-layer ANN.
We then train the two ANNs. We set the same number of total iterations for
both, and record the values of $\mathcal{E}_{\text{in}}$ on the training set.
After a sufficiently large number of training iterations, the training error
starts to converge while the validation error remains almost constant,
allowing us to stop training. Then, we evaluate the trained model on the test
set. The final training and test errors recorded for the two ANNs can be seen
in Table II. It is noted that the 4-hidden-layer network is more accurate than
its 3-hidden-layer counterpart. That is the reason we only trained the
4-hidden-layer ANN for the 2.4 GHz band. Using the same set of hyperparameters
and training time, we can see that all three errors increase, compared to the
900 MHz case. That can be easily explained if we look closely at Figs. 8 and
9. The plots show the predicted RSS versus the actual one for two arbitrarily
selected tunnels at the two different frequency bands. It is apparent that the
ANN is very accurate at 2.4 GHz too. The difference in the test MAE is mainly
caused by the fast fading characteristics of the received signal. That is also
validated by the large increase in the training MSE compared to the 900 MHz
case, since this type of error is much more sensitive to the oscillations of
the received signal than MAE.
Figure 8: Predicted versus actual received signal strength of two arbitrarily
selected tunnel configurations for the 900 MHz case.

Figure 9: Predicted versus actual received signal strength of the same two
arbitrarily selected tunnel configurations for the 2.4 GHz case.

TABLE II: Error metrics for the case
study
Frequency | Layers | Train. MSE (dB) | Val. MAE (dB) | Test MAE (dB)
---|---|---|---|---
900 MHz | 3 | 2.677 | 0.946 | 0.827
900 MHz | 4 | 1.677 | 0.785 | 0.725
2.4 GHz | 4 | 15.03 | 2.579 | 2.746
## Acknowledgements
This work has been supported by the Huawei Innovation Research Program (HIRP)
and the Natural Sciences and Engineering Research Council of Canada’s (NSERC)
Discovery Grant.
## References
* [1] M. Catedra and J. Perez, Cell planning for wireless communications. Artech House, Inc., 1999.
* [2] A. Molisch, Wireless Communications. Wiley-IEEE Press, 2010.
* [3] T. S. Rappaport, Wireless communications: principles and practice. Prentice Hall, New Jersey, 1996.
* [4] G. K. Theofilogiannakos, T. D. Xenos, and T. V. Yioultsis, “A hybrid parabolic equation-integral equation technique for wave propagation modeling of indoor communications,” IEEE Trans. Magn., vol. 45, no. 3, pp. 1112–1115, 2009.
* [5] Y. Wang, S. Safavi-Naeini, and S. K. Chaudhuri, “A hybrid technique based on combining ray tracing and FDTD methods for site-specific modeling of indoor radio wave propagation,” IEEE Trans. Antennas Propag., vol. 48, no. 5, pp. 743–754, 2000.
* [6] K. R. Schaubach, N. J. Davis, and T. S. Rappaport, “A ray tracing method for predicting path loss and delay spread in microcellular environments,” in 42nd Veh. Technol. Conf., pp. 932–935 vol. 2, 1992.
* [7] X. Zhang, N. Sood, J. K. Siu, and C. D. Sarris, “A hybrid ray-tracing/vector parabolic equation method for propagation modeling in train communication channels,” IEEE Trans. Antennas Propag., vol. 64, no. 5, pp. 1840–1849, 2016.
* [8] Y. Abu-Mostafa, M. Magdon-Ismail, and H. Lin, Learning From Data. AMLBook, 2012.
* [9] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
* [10] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition. Springer, 2016.
* [11] M. Claesen and B. De Moor, “Hyperparameter search in machine learning,” arXiv:1502.02127, 2015.
* [12] A. Neskovic, N. Neskovic, and D. Paunovic, “ANN microcell electric field level prediction model,” in Int. Conf. Trends Commun. (EUROCON), vol. 1, pp. 128–131, 2001.
* [13] P.-R. Chang and W.-H. Yang, “Environment-adaptation mobile radio propagation prediction using radial basis function neural networks,” IEEE Trans. Veh. Technol., vol. 46, no. 1, pp. 155–160, 1997.
* [14] E. Ostlin, H.-J. Zepernick, and H. Suzuki, “Macrocell path-loss prediction using artificial neural networks,” IEEE Trans. Veh. Technol., vol. 59, no. 6, pp. 2735–2747, 2010.
* [15] F. Cheng and H. Shen, “Field strength prediction based on wavelet neural network,” in 2nd Int. Conf. Educ. Technol. and Comput., vol. 2, pp. 255–258, 2010.
* [16] M. Ayadi, A. Ben Zineb, and S. Tabbane, “A UHF path loss model using learning machine for heterogeneous networks,” IEEE Trans. Antennas Propag., vol. 65, no. 7, pp. 3675–3683, 2017.
* [17] N. Zaarour, S. Affes, N. Kandil, and N. Hakem, “Comparative study on a 60 GHz path loss channel modeling in a mine environment using neural networks,” in IEEE Int. Conf. Ubiquitous Wireless Broadband (ICUWB), pp. 1–4, 2015.
* [18] D. Wu, G. Zhu, and B. Ai, “Application of artificial neural networks for path loss prediction in railway environments,” in 5th Int. ICST Conf. Commun. and Netw. in China, pp. 1–5, 2010.
* [19] A. Seretis, X. Zhang, K. Zeng, and C. D. Sarris, “Artificial neural network models for radiowave propagation in tunnels,” IET Microw., Antennas Propag., vol. 14, no. 11, pp. 1198–1208, 2020.
* [20] I. Popescu, I. Nafomita, P. Constantinou, A. Kanatas, and N. Moraitis, “Neural networks applications for the prediction of propagation path loss in urban environments,” in IEEE VTS 53rd Veh. Technol. Conf., vol. 1, pp. 387–391, 2001.
* [21] C. A. Oroza, Z. Zhang, T. Watteyne, and S. D. Glaser, “A machine-learning-based connectivity model for complex terrain large-scale low-power wireless deployments,” IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 4, pp. 576–584, 2017.
* [22] M. Ribero, R. W. Heath, H. Vikalo, D. Chizhik, and R. A. Valenzuela, “Deep learning propagation models over irregular terrain,” in IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), pp. 4519–4523, 2019.
* [23] M. Piacentini and F. Rinaldi, “Path loss prediction in urban environment using learning machines and dimensionality reduction techniques,” Comput. Manag. Sci., vol. 8, no. 4, pp. 371–385, 2011.
* [24] R. Fraile, L. Rubio, and N. Cardona, “Application of RBF neural networks to the prediction of propagation loss over irregular terrain,” in IEEE 52nd Veh. Technol. Conf. (VTS), vol. 2, pp. 878–884, 2000.
* [25] T. Balandier, A. Caminada, V. Lemoine, and F. Alexandre, “170 MHz field strength prediction in urban environment using neural nets,” in Proc. 6th Int. Symp. Pers., Indoor Mobile Radio Commun., vol. 1, pp. 120–124, 1995.
* [26] G. P. Ferreira, L. J. Matos, and J. M. M. Silva, “Improvement of outdoor signal strength prediction in UHF band by artificial neural network,” IEEE Trans. Antennas Propag., vol. 64, no. 12, pp. 5404–5410, 2016.
* [27] L. Wu, D. He, K. Guan, B. Ai, C. Briso-Rodríguez, T. Shui, C. Liu, L. Zhu, and X. Shen, “Received power prediction for suburban environment based on neural network,” in Int. Conf. Inf. Netw. (ICOIN), pp. 35–39, 2020.
* [28] J. Thrane, D. Zibar, and H. L. Christiansen, “Model-aided deep learning method for path loss prediction in mobile communication systems at 2.6 GHz,” IEEE Access, vol. 8, pp. 7925–7936, 2020.
* [29] O. Ahmadien, H. F. Ates, T. Baykas, and B. K. Gunturk, “Predicting path loss distribution of an area from satellite images using deep learning,” IEEE Access, vol. 8, pp. 64982–64991, 2020.
* [30] B. Xu, X. Zhu, and H. Zhu, “An efficient indoor localization method based on the long short-term memory recurrent neuron network,” IEEE Access, vol. 7, pp. 123912–123921, 2019.
* [31] A. B. Adege, H. Lin, and L. Wang, “Mobility predictions for iot devices using gated recurrent unit network,” IEEE Internet Things J., vol. 7, no. 1, pp. 505–517, 2020.
* [32] M. T. Hoang, B. Yuen, X. Dong, T. Lu, R. Westendorp, and K. Reddy, “Recurrent neural networks for accurate RSSI indoor localization,” IEEE Internet Things J., vol. 6, no. 6, pp. 10639–10651, 2019.
* [33] R. Wang, H. Luo, Q. Wang, Z. Li, F. Zhao, and J. Huang, “A spatial–temporal positioning algorithm using residual network and lstm,” IEEE Trans. Instrum. Meas., vol. 69, no. 11, pp. 9251–9261, 2020.
* [34] B. Gschwendtner and F. Landstorfer, “Adaptive propagation modelling based on neural network techniques,” in Proc. Veh. Technol. Conf. (VTC), vol. 2, pp. 623–626, 1996.
* [35] S. P. Sotiroudis, S. K. Goudos, K. A. Gotsis, K. Siakavara, and J. N. Sahalos, “Modeling by optimal artificial neural networks the prediction of propagation path loss in urban environments,” in IEEE-APS Topical Conf. Antennas Propag. Wireless Commun. (APWC), pp. 599–602, 2013.
* [36] H.-S. Jo, C. Park, E. Lee, H. Choi, and J. Park, “Path loss prediction based on machine learning techniques: Principal component analysis, artificial neural network, and Gaussian process,” Sensors, vol. 20, 2020.
* [37] I. Popescu, A. Kanatas, E. Angelou, L. Nafornita, and P. Constantinou, “Applications of generalized RBF-NN for path loss prediction,” in 13th IEEE Int. Symp. Pers., Indoor Mobile Radio Commun., vol. 1, pp. 484–488, 2002.
* [38] S. Sotiroudis and K. Siakavara, “Mobile radio propagation path loss prediction using artificial neural networks with optimal input information for urban environments,” Int. J. Electron. Commun., vol. 69, 2015.
* [39] C. E. G. Moreta, M. R. C. Acosta, and I. Koo, “Prediction of digital terrestrial television coverage using machine learning regression,” IEEE Trans. Broadcast., vol. 65, no. 4, pp. 702–712, 2019.
* [40] I. Fernández Anitzine, J. A. Romo Argota, F. P. Fontán, and C. B. Rodríguez, “Influence of training set selection in artificial neural network-based propagation path loss predictions,” Int. J. Antennas Propag., vol. 2012, 2012.
* [41] B. Krawczyk, “Learning from imbalanced data: open challenges and future directions,” Progress Artificial Intell., vol. 5, pp. 221–232, 2016.
* [42] J. Jang and S. Hong, “Indoor localization with wifi fingerprinting using convolutional neural network,” in 10th Int. Conf. Ubiquitous and Future Netw. (ICUFN), pp. 753–758, 2018.
* [43] J. Lee, M. Y. Kang, and S. Kim, “Path loss exponent prediction for outdoor millimeter wave channels through deep learning,” in IEEE Wireless Commun. and Netw. Conf. (WCNC), pp. 1–5, 2019.
* [44] J. Wen, Y. Zhang, G. Yang, Z. He, and W. Zhang, “Path loss prediction based on machine learning methods for aircraft cabin environments,” IEEE Access, vol. 7, pp. 159251–159261, 2019.
* [45] R. Kumar Yadav, B. Bhattarai, L. Jiao, M. Goodwin, and O. C. Granmo, “Indoor space classification using cascaded lstm,” in 15th IEEE Conf. Industrial Electron. and Applications (ICIEA), pp. 1110–1114, 2020.
* [46] X. Gan, B. Yu, L. Huang, and Y. Li, “Deep learning for weights training and indoor positioning using multi-sensor fingerprint,” in Int. Conf. Indoor Positioning and Indoor Navig. (IPIN), pp. 1–7, 2017.
* [47] Q. Li, H. Qu, Z. Liu, N. Zhou, W. Sun, S. Sigg, and J. Li, “Af-dcgan: Amplitude feature deep convolutional gan for fingerprint construction in indoor localization systems,” IEEE Trans. Emerg. Topics Comput. Intell., pp. 1–13, 2019.
* [48] A. Belmonte-Hernández, G. Hernández-Peñaloza, D. Martín Gutiérrez, and F. Álvarez, “Recurrent model for wireless indoor tracking and positioning recovering using generative networks,” IEEE Sensors J., vol. 20, no. 6, pp. 3356–3365, 2020.
* [49] S. Sotiroudis, K. Siakavara, and J. Sahalos, “A neural network approach to the prediction of the propagation path-loss for mobile communications systems in urban environments,” Piers Online, vol. 3, pp. 1175–1179, 2007.
* [50] G. Cerri, M. Cinalli, F. Michetti, and P. Russo, “Feed forward neural networks for path loss prediction in urban environment,” IEEE Trans. Antennas Propag., vol. 52, no. 11, pp. 3137–3139, 2004.
* [51] N. Kuno, W. Yamada, M. Sasaki, and Y. Takatori, “Convolutional neural network for prediction method of path loss characteristics considering diffraction and reflection in an open-square environment,” in URSI Asia-Pacific Radio Sci. Conf. (AP-RASC), pp. 1–3, 2019.
* [52] L. Azpilicueta, M. Rawat, K. Rawat, F. M. Ghannouchi, and F. Falcone, “A ray launching-neural network approach for radio wave propagation analysis in complex indoor environments,” IEEE Trans. Antennas Propag., vol. 62, no. 5, pp. 2777–2786, 2014.
* [53] F. Cheng and H. Shen, “An improved recurrent neural network for radio propagation loss prediction,” in Int. Conf. Intell. Comput. Technol. and Autom., vol. 1, pp. 579–582, 2010.
* [54] T. Imai, K. Kitao, and M. Inomata, “Radio propagation prediction model using convolutional neural networks by deep learning,” in 13th European Conf. Antennas and Propag. (EuCAP), pp. 1–5, 2019.
* [55] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014.
* [56] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” arXiv:1512.03385, 2016.
* [57] K. Gideon, C. Nyirenda, and C. Temaneh-Nyah, “Echo state network-based radio signal strength prediction for wireless communication in northern Namibia,” IET Commun., vol. 11, no. 12, pp. 1920–1926, 2017.
* [58] H. Turabieh and A. Sheta, “Cascaded layered recurrent neural network for indoor localization in wireless sensor networks,” in 2nd Int. Conf. new Trends in Comput. Sci. (ICTCS), pp. 1–6, 2019.
* [59] H. Hsieh, S. W. Prakosa, and J. Leu, “Towards the implementation of recurrent neural network schemes for wifi fingerprint-based indoor positioning,” in IEEE 88th Veh. Technol. Conf., pp. 1–5, 2018.
* [60] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” arXiv:1406.2661, 2014.
* [61] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv:1511.06434, 2015.
* [62] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv:1411.1784, 2014.
* [63] K. Chen and R. Chang, “Semi-supervised learning with gans for device-free fingerprinting indoor localization,” arXiv:2008.07111, 2020.
* [64] R. Timoteo, D. Cunha, and G. Cavalcanti, “A proposal for path loss prediction in urban environments using support vector regression,” Adv. Int. Conf. Telecommun., (AICT), vol. 2014, pp. 119–124, 2014.
* [65] L. Fernandes and A. Soares, “A hybrid model for path loss calculation in urban environment,” in 17th Conf. Comput. Electromagn. Fields (COMPUMAG), 2009.
* [66] D. Karra, S. K. Goudos, G. V. Tsoulos, and G. Athanasiadou, “Prediction of received signal power in mobile communications using different machine learning algorithms:a comparative study,” in Panhellenic Conf. Electron. Telecommun. (PACET), pp. 1–4, 2019.
* [67] Z.-H. Zhou, Ensemble Methods: Foundations and Algorithms. Chapman and Hall, 2012.
* [68] S. Cheerla, V. Ratnam, and H. Borra, “Neural network-based path loss model for cellular mobile networks at 800 and 1800 MHz bands,” AEU - Int. J. of Electron. and Commun., vol. 94, pp. 179–186, 2018.
* [69] J. Isabona and V. M. Srivastava, “A neural network based model for signal coverage propagation loss prediction in urban radio communication environment,” Int. J. Appl. Eng. Res., vol. 11, pp. 11002–11008, 2016.
* [70] G. Panda, R. K. Mishra, and S. S. Palai, “A novel site adaptive propagation model,” IEEE Antennas Wireless Propag. Lett., vol. 4, pp. 447–448, 2005.
* [71] V. Ebhota, J. Isabona, and V. M. Srivastava, “Environment-adaptation based hybrid neural network predictor for signal propagation loss prediction in cluttered and open urban microcells,” Wireless Pers. Commun., 2018.
* [72] Y. Zhang, J. Wen, G. Yang, Z. He, and J. Wang, “Path loss prediction based on machine learning: Principle, method, and data expansion,” Appl. Sci., vol. 9, p. 1908, 2019.
* [73] R. Adeogun, “Calibration of stochastic radio propagation models using machine learning,” IEEE Antennas Wireless Propag. Lett., vol. 18, no. 12, pp. 2538–2542, 2019.
* [74] B. Monteiro, G. P. S. Cavalcante, H. S. Gomes, D. M. Rosario, F. F. Lima, and H. A. Junior, “Evaluation of radio propagation parameters for field strength prediction using neural network,” in SBMO/IEEE MTT-S Int. Microw. and Optoelectron. Conf., pp. 888–892, 2007.
* [75] H. L. Bertoni, Radio Propagation for Modern Wireless Systems. Prentice Hall, New Jersey, 2000.
* [76] M. Meeks, “VHF propagation over hilly, forested terrain,” IEEE Trans. Antennas Propag., vol. 31, no. 3, pp. 483–489, 1983.
* [77] R. Tibshirani, “Regression shrinkage selection via the Lasso,” J. of the Royal Statistical Society Series B, vol. 73, pp. 273–282, 06 2011.
* [78] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction, Second Edition. Bradford Books, 2018.
* [79] L. X. et al., “Reinforcement learning-based downlink interference control for ultra-dense small cells,” IEEE Wireless Commun., vol. 19, no. 1, pp. 423–434, 2020.
* [80] C. He, Y. Hu, Y. Chen, and B. Zeng, “Joint power allocation and channel assignment for noma with deep reinforcement learning,” IEEE J. Sel. Areas Commun., vol. 37, no. 10, pp. 2200–2210, 2019.
* [81] X. Zhang and C. D. Sarris, “A Gaussian beam approximation approach for embedding antennas into vector parabolic equation-based wireless channel propagation models,” IEEE Trans. Antennas Propag., vol. 65, no. 3, pp. 1301–1310, 2017.
* [82] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv:1412.6980, 2014.
* [83] O. Perrault, J. . Rossi, and T. Balandier, “Predicting field strength with a neural ray-tracing model,” in IEEE Global Telecommun. Conf., vol. 2, pp. 1167–1171, 1996.
* [84] J. Zhang, L. Liu, Y. Fan, L. Zhuang, T. Zhou, and Z. Piao, “Wireless channel propagation scenarios identification: A perspective of machine learning,” IEEE Access, vol. 8, pp. 47797–47806, 2020.
* [85] W. Hou, D. Shi, Y. Gao, and C. Yao, “A new method for radio wave propagation prediction based on finite integral method and machine learning,” in IEEE 5th Int. Symp. Electromagn. Compat., pp. 1–4, 2017.
TABLE III: Type of input, ML model and output of papers under review.
Paper | Type of env. (U, sR, R, I)* | ANN-based model | Non-ANN based model | Hybrid model | Type of training data (Ms, Sn)** | Output - Application
---|---|---|---|---|---|---
[12, 37, 40, 74] | U | MLP | - | - | Meas. | The first paper is an RSS microcell prediction model at 900 MHz, while the second one corresponds to a PL prediction model at 1890 MHz. The third one is a PL prediction model at 900 and 1800 MHz, while the last paper predicts the electric permittivity and conductivity of the ground.
[25, 83] | U | MLP | - | ✓ | Meas. | RSS prediction model at 170 MHz, RSS error correction and acceleration of an RT solver, respectively.
[34, 68, 69] | U | MLP | - | ✓ | Meas. | PL correction of CWI (first two papers) and the log-distance model (third paper). The second paper operates at 800 and 1800 MHz. Comparison with empirical models.
[26, 70, 71] | U | MLP | - | ✓ | Meas. | The first paper corresponds to a PL prediction model driven by the CKE empirical model at 1140 MHz. The second paper implements PL correction of Hata model. The third one is a PL prediction model driven by an Adaline network at 1900 MHz.
[35, 38, 49, 50] | U | MLP | - | - | Synth. (RT) | PL prediction models. In the last paper, the PL equation is divided into free space and building attenuation. The latter is computed by the ML model.
[23] | U | MLP | SVR | - | Meas. | PL prediction model. Experiments with PCA/nPCA.
[39] | U | MLP | Various | - | Meas. | RSS prediction model for DTT systems. Comparison between MLP, $k$-NN, SVR, RF, AdaBoost, LASSO and kriging.
[43, 51] | U | CNN | - | - | Synth. (RT) | PLE prediction models; the first one at 28 GHz.
[29] | U, sR | CNN | - | - | Synth. (RT) | PL distribution estimation at 900 MHz and 3.5 GHz. The environments were constructed in RT from satellite images.
[42] | I | CNN | - | - | Meas. | User localization in an indoor environment by learning RSS maps.
[54] | U | MLP, CNN | - | ✓ | Synth. (RT) | Hybrid PL prediction model consisting of a CNN and an MLP-ANN.
[64] | U | - | SVR | - | Meas. | PL prediction model at 853 MHz. Comparison with empirical models.
[65] | U | - | GA | - | Meas. | PL prediction model.
[72] | U | MLP | SVR, RF | - | Meas. | PL prediction model at 2021 MHz.
[36] | sR | MLP | - | - | Meas. | PL and shadowing prediction model at 450, 1450 and 2300 MHz.
[14, 27] | R | MLP | - | - | Meas. | The first two papers correspond to macrocell PL prediction models at 881 MHz. They include comparisons with empirical models. The third paper is an RSS prediction model, using also PCA.
[21] | R | MLP | RF, $k$-NN, AdaBoost | - | Meas. | PL prediction model for sensor network connectivity at 2.4 GHz. Comparison between different ML models.
[22] | R | MLP, CNN | - | - | Meas. | PG prediction model at 1.8 GHz over irregular terrain.
[15] | R | WNN | - | - | Synth. (VPE) | RSS prediction model for various frequencies (from 900 MHz up to 2.2 GHz).
[28] | U, R | MLP, CNN | - | ✓ | Meas. | PL prediction model at 811 and 2630 MHz, trained by measurements and satellite images.
[84] | U, R | - | Various | - | Both (5G simulations) | Environment classification for outdoor railway environments. Comparison between $k$-NN, SVR, $k$-means and GMM models.
[20] | U, sR | RBF | - | ✓ | Meas. | PL error correction of CWI model. Comparison with empirical models.
[66] | U, sR | - | Various | - | Meas. | PL prediction model trained by UAV-taken images. Comparison between $k$-NN, SVR, RF, AdaBoost, GTB and VR.
[16] | All | MLP | - | - | Meas. | Multiband (450-2600 MHz), multi-environment PL prediction model. Comparison with empirical models.
[57] | All | ESN | SVR | - | Meas. | RSS prediction model at 900, 1800 and 2100 MHz at various terrain types.
[17] | - | MLP, RBF | - | - | Meas. | Wideband PL prediction model at 60 GHz in mines.
[18] | - | MLP | - | - | Meas. | PL prediction model for railway environments at 930 MHz.
[19] | - | MLP | - | - | Synth. (VPE) | PG prediction model for tunnel environments at 0.9 and 2.4 GHz.
[44] | - | MLP | SVR, RF, AdaBoost | - | Meas. | PL prediction model for aircraft cabin at 2.4, 3.52 and 5.8 GHz.
[73, 52] | I | MLP | - | - | Synth. (PGr, SV / RT) | Statistical propagation parameters prediction for indoor model calibration at 60 GHz (first paper). RT acceleration ML model at 2.4 GHz (second paper).
[85] | I | - | RF | - | Synth. (RT) | RSS prediction acceleration of RT for indoor environments.
[31, 32, 45] | U, I | RNN | - | - | Meas. | The first paper is about user trajectory prediction at 860 MHz based on measured RSS in urban environments. The other two concern fingerprinting localization in indoor environments. The third paper used an LSTM RNN.
[58, 59] | I | RNN | - | ✓ | Meas. | User localization in indoor environments. The first paper used two RNNs for classifying building and floor, respectively, while the second paper used two LSTMs for calculating the coordinates and floor, respectively, of sensors.
[30, 53] | I / - | RNN | - | - | Synth. | The first paper is about tracking in indoor environments. Their LSTM is trained by simulations. In the second one, an Elman RNN is used to calculate the propagation factor over flat earth.
[33] | I | RNN, CNN | - | ✓ | Meas. | User location estimation in an indoor environment by a hybrid model (LSTM and ResNet).
[63, 47] | I | GAN, CNN | - | ✓ | Both | CSI indoor localization. The first paper used a hybrid of a DCGAN sharing weights with a CNN classifier, while the second one a DCGAN with an MLP.
[48] | I | GAN, RNN | - | ✓ | Both | User tracking in an indoor environment. A cGAN was used for data augmentation, while an LSTM computes the location of the user.
[46] | I | DBN | - | ✓ | Both | User tracking in an indoor environment.
[*] U: urban, sR: semi-rural, R: rural, I: indoor. [**] Meas.: measured, Synth.: synthetic.
# On the Performance of Large-Scale Wireless Networks in the Finite Blocklength Regime
Nourhan Hesham and Anas Chaaban
This publication is based upon work supported by King Abdullah University of Science and Technology (KAUST) under Award No. OSR-2018-CRG7-3734.
School of Engineering, University of British Columbia, Kelowna, BC V1V1V7, Canada
Email<EMAIL_ADDRESS>
###### Abstract
Ultra-Reliable Low-Latency Communications have stringent delay constraints,
and hence use codes with small block length (short codewords). In these cases,
classical models that provide good approximations to systems with infinitely
long codewords become imprecise. To remedy this, in this paper, an average
coding rate expression is derived for a large-scale network with short
codewords using stochastic geometry and the theory of coding in the finite
blocklength regime. The average coding rate and upper and lower bounds on the
outage probability of the large-scale network are derived, and a tight
approximation of the outage probability is presented. Then, simulations are
presented to study the effect of network parameters on the average coding rate
and the outage probability of the network, which demonstrate that results in
the literature derived for the infinite blocklength regime overestimate the
network performance, whereas the results in this paper provide a more
realistic performance evaluation.
###### Index Terms:
Stochastic Geometry; Large-Scale Network; Capacity; Outage Probability; Finite
Blocklength; URLLC.
## I Introduction
The density of cellular networks has increased significantly from 2G up to 5G,
and continues to increase in order to serve a larger number of users/devices,
and provide wider coverage and higher data speeds. Additionally, current and
future networks are expected to support a multitude of connectivity
requirements, including the Internet-of-Things (IoT) and Machine-Type
communications (MTC). Many such applications have stringent delay
requirements, which necessitate using different approaches in studying
performance [1]. Recent works on Ultra-Reliable Low-Latency Communications
(URLLC) focused on this topic, investigating low-latency communications from
different perspectives [2, 3, 4, 5, 6].
Large-scale networks can be studied using stochastic geometry (SG) tools [7].
SG is the study of random spatial patterns. It is a strong tool used for
interference modeling in large-scale networks, and has been used to study
several network performance metrics in the literature [8, 9, 10, 11]. In [8],
the wireless network is modelled using SG as a Poisson point process (PPP).
This led to results on the connectivity, the capacity, the outage probability,
and other fundamental limits of wireless networks. In [9], the coverage
probability of cellular networks in urban areas modelled as a PPP is provided.
The results are compared to the hexagonal grid model, showing that the SG model provides a more accurate upper bound on the coverage probability. In [10, 11], large-scale networks using non-orthogonal multiple access were analyzed using SG. Note that works in this area commonly use Shannon’s channel capacity expression [12] to study performance, which is not suitable for delay-limited applications.
Aggregate interference modeling and performance characterization were active
research topics for decades. Due to the difficulty in modeling aggregate
interference, many papers provided approximations to be able to reach a tight
approximation to the capacity, symbol error probability, outage probability,
etc. A common method is to approximate the aggregate interference as a
Gaussian random variable as in [13, 7, 14]. In [13], the aggregate
interference was approximated as a sum of Gaussian random variables with
random scaling. In [7], the authors modified the work in [13] to approximate
the aggregate interference as a single Gaussian random variable with random
scaling giving the same results as in [13]. Moreover in [14], the authors
provide the kurtosis of the interference distribution for different values of
exclusion regions which shows that the interference tends to be Gaussian for
large exclusion regions. As a conclusion from different papers, the Gaussian
approximation is a valid approximation for dense networks or large exclusion
regions.
Since delay-constrained applications require the use of short codes, studying the achievable information rate in such applications using Shannon’s capacity expression is imprecise, as this expression is derived for infinitely long codes and vanishing error probability. Instead, the coding rate under a codelength limitation has to be used for such studies. In [15], Polyanskiy et al. proposed tight bounds on the maximal channel coding rate achievable at a given blocklength (short codewords) for different types of channels, such as the binary symmetric channel (BSC), the binary erasure channel (BEC), and the additive white Gaussian noise (AWGN) channel. Moreover, in [16], Polyanskiy et al. extended [15] to the maximal achievable coding rate over block-fading channels in the finite blocklength regime. These works were extended to studying the coding rate in the finite blocklength regime in other scenarios, including relaying [17], MTC [18], and multiaccess communication [19]. However, to the best of our knowledge, there are no works apart from [5] in the literature that derive the decoding error for large-scale networks in the finite blocklength regime. In this paper, we derive the average coding rate of large-scale networks in the finite blocklength regime. We also formulate the outage probability of the network in the finite blocklength regime, and derive bounds on it along with a fairly tight approximation. Then, we investigate the effect of network parameters on performance, and demonstrate clearly how Shannon’s capacity expression for the infinite blocklength regime overestimates performance. These results are applicable to studying IoT networks, MTC, etc.
The rest of the paper is organized as follows. In Sec. II, the system model is
presented. In Sec. III, the average capacity of a large-scale network in the
finite blocklength regime is derived, and the outage probability is
formulated. Finally, the effect of network parameters on the capacity is
investigated in Sec. IV, and the paper is concluded in Sec. V.
## II System Model
We consider a downlink scenario where a serving base station (BS) transmits to
a user located within its coverage area, interfered by other non-serving BSs
as shown in Fig. 1. The channel is modeled as a block-fading Rayleigh channel.
The received signal is given by
$y=h_{0}\sqrt{P}r_{0}^{-\eta/2}s_{0}+I_{gg}+w,$ (1)
where the channel gain $(h_{0})$ is circularly symmetric complex Gaussian
($\mathcal{CN}(0,1)$), $P$ is the transmit power, $r_{0}$ is the distance
between the BS and the user, $\eta$ is the path loss exponent, $s_{0}$ is a
codeword symbol with unit power, $I_{gg}$ is the interference from other BSs,
and $w$ is $\mathcal{CN}(0,N_{0})$. The channel follows a block-fading model in which the channel coefficient $h_{0}$ remains constant for a block of $L$ consecutive symbols and changes to an independent realization in the next block. The interference term $I_{gg}$ is the sum of the interference signals received from all non-serving BSs and is given by
$I_{gg}=\sum_{i=1}^{\infty}\sqrt{P}h_{i}r_{i}^{-\eta/2}s_{i},$ (2)
where $P$ is the transmit power (assumed equal across BSs), $h_{i}$ is the
block-fading channel gain from non-serving $\mathrm{BS}_{i}$ to the user,
$r_{i}$ is the distance between non-serving $\mathrm{BS}_{i}$ and the user,
and $s_{i}$ is the codeword symbol transmitted by non-serving
$\mathrm{BS}_{i}$.
It is worth noting that $r_{0}$ is random in general, but for simplicity it is
assumed to be fixed in this work to study the performance for different system
parameters. We assume that there is no interfering (non-serving) BS within a
circle of radius $r_{0}$ about the user, which is known as the interference
exclusion region [7]. To study the average performance of such a network over
different geometries, the BS locations are often modeled by a repulsive point
process (PP) [7]. For tractability, a PPP is commonly considered as an
accurate approximation for several types of intractable repulsive PP, and the
reader is referred to [7] for more details. Hence, in this work, we assume
that the BS locations follow a PPP with intensity $\lambda$ $\mathrm{BS/km^{2}}$ and an interference exclusion region of radius $r_{0}$.
Figure 1: A realization of a cellular network with an exclusion region of radius $r_{0}$ between the user of interest and the serving BS, where interfering BSs are randomly distributed at distances $r_{i}>r_{0}$ from the user.
Studying the network performance (Capacity, BER, etc.) under this model is
rather difficult. To remedy this, an approximation is commonly considered in
the literature, wherein the interference is modeled as conditionally Gaussian,
conditioned on geometry ($r_{0}$ and $r_{i}$) [7]. Thus, the simplified
interference representation is a randomly scaled Gaussian given by:
$\displaystyle I_{eq}=\sqrt{\mathcal{B}}{G},$ (3)
where ${G}$ is $\mathcal{CN}(0,1)$, and $\mathcal{B}>0$ is the interference power, a random variable that is independent of ${G}$ but dependent on the network geometry, with the following Laplace transform (LT)
$\displaystyle\mathcal{L}_{\mathcal{B}}(z)=\exp\left\\{\sum_{k=1}^{\infty}a_{k}z^{k}\right\\},$
(4)
where the coefficients $a_{k}$ are given by
$\displaystyle a_{k}=(-1)^{k}2\pi\lambda
r_{0}^{2}\left(\frac{P}{r_{0}^{\eta}}\right)^{k}\frac{\mathbb{E}\big{\\{}|s_{0}|^{2k}\big{\\}}}{(\eta
k-2)k!}.$ (5)
For $\eta=4$, the LT expression provided in (4) can be simplified to
$\displaystyle\mathcal{L}_{\mathcal{B}}(z)=\exp\left(-\pi\lambda\sqrt{zP}\arctan\left(\frac{\sqrt{zP}}{r_{0}^{2}}\right)\right).$
(6)
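As a sanity check of (6) (our own sketch, not from the paper): for complex Gaussian codeword symbols, $\mathbb{E}\{|s_{0}|^{2k}\}=k!$, and the series in (4) then sums in closed form for $\eta=4$. The snippet below compares a truncation of (4) against (6); the parameter values are assumed, chosen so that $zP/r_{0}^{\eta}\leq 1$, where the truncated series converges:

```python
import numpy as np

def lt_series(z, lam, r0, P, eta=4.0, K=60):
    """Truncation of (4), with E{|s0|^{2k}} = k! (Gaussian symbols);
    the k! moment cancels the k! in the denominator of a_k."""
    x = z * P / r0**eta                       # series converges for x <= 1
    ks = np.arange(1, K + 1)
    terms = (-1.0)**ks * 2*np.pi*lam * r0**2 * x**ks / (eta*ks - 2)
    return np.exp(terms.sum())

def lt_closed(z, lam, r0, P):
    """Closed form (6) for eta = 4."""
    return np.exp(-np.pi*lam*np.sqrt(z*P)*np.arctan(np.sqrt(z*P)/r0**2))

# Illustrative (assumed) values: lam = 0.5, r0 = 1, P = 1, z = 0.5 (so x = 0.5).
print(lt_series(0.5, 0.5, 1.0, 1.0), lt_closed(0.5, 0.5, 1.0, 1.0))
```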
Using the approximation in (3), the simplified model becomes
$\displaystyle y=h_{0}\sqrt{P}r_{0}^{-\eta/2}s_{0}+I_{eq}+w.$ (7)
We assume that the serving BS sends information to the user using codewords of
length $n$ symbols, and that the codewords have to be decoded correctly with
probability $(1-\epsilon)$ where $\epsilon$ is the frame error probability.
The goal of this work is to study the average coding rate (in bits per
transmission) and outage probability of this network under these
considerations. The average coding rate is discussed in the next section.
## III Average Coding Rate in the finite blocklength regime
In this section, the average coding rate in the finite blocklength regime is
derived. We denote the average coding rate of the network given blocklength
$n$ and frame error rate $\epsilon$ by $\bar{R}_{n,\epsilon}$. Assuming
channel knowledge is available at the BSs, the signal to interference and
noise ratio $(\gamma)$ is defined as
$\displaystyle\gamma$
$\displaystyle=\frac{Pr_{0}^{-\eta}|h_{0}|^{2}}{N_{0}+\mathcal{B}}=\frac{|h_{0}|^{2}}{\frac{N_{0}}{Pr_{0}^{-\eta}}+\frac{\mathcal{B}}{Pr_{0}^{-\eta}}}$
$\displaystyle=\frac{|h_{0}|^{2}}{\gamma_{o}+\zeta}$ (8)
where $\gamma_{o}=\frac{Pr_{0}^{-\eta}}{N_{0}}$ and
$\zeta=\frac{\mathcal{B}}{Pr_{0}^{-\eta}}$. The average capacity of the large-
scale network in the infinite blocklength regime is given by [7]
$\displaystyle\bar{C}_{\infty,0}(\gamma_{o})$
$\displaystyle=\mathbb{E}_{h_{0},\zeta},\Big{\\{}\log_{2}(1+\frac{|h_{0}|^{2}}{\gamma_{o}+\zeta})\Big{\\}},$
To derive the average coding rate in the finite blocklength regime, i.e.,
where $n<\infty$ and $\epsilon>0$, we rely on a result in [16] concerning the
maximum coding rate of a block fading channel in the finite blocklength
regime, which is introduced in the following lemma.
###### Lemma 1.
([15]) For a block fading channel with a signal-to-noise ratio
$\alpha=\frac{P}{N_{0}}$, blocklength $n$, a target frame error rate
$\epsilon$ satisfying $0<\epsilon<0.5$, and channel gain coefficient $H$, the
maximum coding rate is approximated as
$R_{n,\epsilon}(H,\alpha)\approx
C_{\infty,0}(H,\alpha)-\frac{\sqrt{V(H,\alpha)}Q^{-1}(\epsilon)}{\sqrt{n}}+o(1/\sqrt{n}),$
(9)
where $C_{\infty,0}(H,\alpha)=\log_{2}(1+|H|^{2}\alpha)$ is the AWGN capacity
in the infinite blocklength regime, $V(H,\alpha)$ is the channel dispersion
given by
$\displaystyle
V(H,\alpha)=\frac{|H|^{2}\alpha}{2}\frac{|H|^{2}\alpha+2}{(|H|^{2}\alpha+1)^{2}}\log_{2}^{2}(e),$
(10)
and $Q(\cdot)$ is the Q-function.
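To make this approximation concrete, the following sketch (our own illustrative helper, not from [16]) evaluates (9) without the $o(1/\sqrt{n})$ term for a fixed channel realization; `statistics.NormalDist` supplies $Q^{-1}(\epsilon)=\Phi^{-1}(1-\epsilon)$.

```python
import math
from statistics import NormalDist

def normal_approx_rate(h2_alpha, n, eps):
    """Normal approximation (9), dropping the o(1/sqrt(n)) term:
    R ~ C - sqrt(V/n) * Q^{-1}(eps), for a fixed |H|^2 * alpha."""
    cap = math.log2(1 + h2_alpha)                      # AWGN capacity
    disp = (h2_alpha / 2) * (h2_alpha + 2) / (h2_alpha + 1) ** 2 \
        * math.log2(math.e) ** 2                       # channel dispersion (10)
    q_inv = NormalDist().inv_cdf(1 - eps)              # Q^{-1}(eps)
    return cap - math.sqrt(disp / n) * q_inv
```

As expected, the rate backoff shrinks like $1/\sqrt{n}$, so the approximation approaches the AWGN capacity for long blocklengths.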
In what follows, we treat this approximation as the true maximum coding rate,
since this approximation is accurate enough for practical values of $n$ as
demonstrated in [16]. To extend this to the average coding rate expression of
the large-scale network in the finite blocklength regime, $|H|^{2}\alpha$
should be replaced by $\gamma$ defined in (8), followed by averaging with
respect to $\gamma$, to obtain the average capacity of the large-scale network
in the infinite blocklength regime $(\bar{C}_{\infty,0}(\gamma_{o}))$ and the
average channel dispersion $(\bar{V}(\gamma_{o}))$ which are discussed in the
next subsections. This extension is valid since the Gaussian approximation is
considered for the aggregate interference.
### III-A Average Capacity in the Infinite Blocklength Regime for a Large-Scale Network
The average capacity of the large-scale network in the infinite blocklength
regime with channel state information available at the BSs is given by the
following lemma [7].
###### Lemma 2.
For a large-scale network topology with an average power constraint block-
fading Rayleigh channel, signal to noise ratio $\gamma_{o}$ and interference
power $\zeta$, the average capacity is given by:
$\displaystyle\bar{C}_{\infty,0}(\gamma_{o})=\int_{0}^{\infty}\exp\Big{(}-\frac{2^{c}-1}{\gamma_{o}}\Big{)}\mathcal{L}_{\zeta}\Big{\\{}(2^{c}-1)\Big{\\}}dc$
(11)
This result was derived in [7], to which the reader is referred for the proof.
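As an illustrative sanity check of (11) (not part of the derivation), the integral can be compared against a direct Monte Carlo estimate of $\mathbb{E}\{\log_{2}(1+\gamma)\}$ in the interference-free special case $\mathcal{L}_{\zeta}\equiv 1$; the function names and the truncation of the integral are our own assumptions.

```python
import math
import random

def cbar_integral(gamma_o, upper=40.0, steps=200_000):
    """Midpoint-rule evaluation of (11) with L_zeta = 1 (no interference):
    Cbar = int_0^inf exp(-(2^c - 1)/gamma_o) dc, truncated at `upper`."""
    h = upper / steps
    return sum(math.exp(-(2 ** ((k + 0.5) * h) - 1) / gamma_o) * h
               for k in range(steps))

def cbar_monte_carlo(gamma_o, samples=200_000, seed=0):
    """Direct estimate of E{log2(1 + gamma_o |h_0|^2)}, |h_0|^2 ~ Exp(1)."""
    rng = random.Random(seed)
    return sum(math.log2(1 + gamma_o * rng.expovariate(1.0))
               for _ in range(samples)) / samples
```

The two estimates agree closely, and the average capacity grows with $\gamma_{o}$ as expected.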
Next, we derive the average channel dispersion.
### III-B Average Channel Dispersion for a Large-Scale Network
The average channel dispersion for the large-scale network is given by
$\displaystyle\bar{V}(\gamma_{o})=\mathbb{E}\left\\{V(\gamma)\right\\}$
$\displaystyle=\mathbb{E}\left\\{\frac{\gamma}{2}\frac{\gamma+2}{(\gamma+1)^{2}}\log_{2}^{2}(e)\right\\}$
$\displaystyle=\mathbb{E}\left\\{\frac{\log_{2}^{2}(e)}{2}\left(1-\frac{1}{(\gamma+1)^{2}}\right)\right\\}$
$\displaystyle=\int_{0}^{\infty}(1-\mathbb{F}_{V}({v}))d{v}$ (12)
where $\mathbb{F}_{V}(v)$ is the cumulative density function (CDF) of the
channel dispersion $V(\gamma)$, and the last step follows as an application of
Fubini’s theorem [20]. The following lemma expresses $\bar{V}(\gamma_{o})$.
###### Lemma 3.
The average channel dispersion $\mathbb{E}\\{V(\gamma)\\}$ is given by
$\displaystyle\bar{V}(\gamma_{o})=\int_{0}^{\frac{1}{2}\log_{2}^{2}(e)}\exp\left(-\frac{1}{\gamma_{o}}\left(\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right)\right)$
$\displaystyle\times\mathcal{L}_{\zeta}\left\\{\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right\\}d{v},$
(13)
where $\mathcal{L}_{\zeta}\\{\cdot\\}$ is given in (4) and
$\zeta=\frac{\mathcal{B}}{Pr_{0}^{-\eta}}$.
###### Proof.
We start by expressing the CDF of $V(\gamma)$ for a given $\zeta$ as follows:
$\displaystyle\mathbb{F}_{V}({v}|\zeta)=\mathbb{P}(V(\gamma)<{v}|\zeta)$
$\displaystyle=\mathbb{P}\left(\frac{\log_{2}^{2}(e)}{2}\left(1-\frac{1}{(\gamma+1)^{2}}\right)<{v}\right)$
$\displaystyle=\mathbb{P}\left(\gamma<\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right)$
$\displaystyle=\mathbb{P}\left(\frac{|h_{0}|^{2}}{\frac{1}{\gamma_{o}}+\zeta}<\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right)$
$\displaystyle=\mathbb{P}\left(|h_{0}|^{2}<\left(\frac{1}{\gamma_{o}}+\zeta\right)\left(\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right)\right)$
$\displaystyle=1-\exp\left(-\left(\frac{1}{\gamma_{o}}+\zeta\right)\left(\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right)\right),$
(14)
where the last step follows from the Rayleigh distribution of $h_{0}$.
Averaging with respect to the interference term $\zeta$ yields
$\displaystyle\mathbb{E}_{\zeta}\\{\mathbb{F}_{V}({v})\\}$
$\displaystyle=\mathbb{E}_{\zeta}\left\\{1-\exp\left(-\left(\frac{1}{\gamma_{o}}+\zeta\right)\left(\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right)\right)\right\\}$
$\displaystyle=1-\exp\left(-\frac{1}{\gamma_{o}}\left(\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right)\right)$
$\displaystyle\hskip
99.58464pt\times\mathcal{L}_{\zeta}\left\\{\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right\\}.$
(15)
By substituting (15) in (12), we obtain
$\displaystyle\bar{V}(\gamma_{o})=\int_{0}^{\infty}(1-\mathbb{F}_{V}({v}))d{v}$
$\displaystyle=\int_{0}^{\frac{1}{2}\log_{2}^{2}(e)}\exp\left(-\frac{1}{\gamma_{o}}\left(\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right)\right)$
$\displaystyle\times\mathcal{L}_{\zeta}\left\\{\sqrt{\frac{1}{1-\frac{2{v}}{\log_{2}^{2}(e)}}}-1\right\\}d{v}.$
(16)
This proves the statement of the lemma. ∎
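Lemma 3 can likewise be checked numerically: in the interference-free case ($\mathcal{L}_{\zeta}\equiv 1$), the CDF-based integral (13) should match a direct Monte Carlo estimate of $\mathbb{E}\{V(\gamma)\}$ with $\gamma=\gamma_{o}|h_{0}|^{2}$. The sketch below is illustrative only; the helper names are ours.

```python
import math
import random

LOG2E2 = math.log2(math.e) ** 2   # log2(e)^2, the constant in V(gamma)

def vbar_integral(gamma_o, steps=200_000):
    """Midpoint-rule evaluation of (13) with L_zeta = 1:
    E{V} = int_0^{log2(e)^2/2} exp(-g(v)/gamma_o) dv,
    where g(v) = sqrt(1/(1 - 2v/log2(e)^2)) - 1."""
    upper = LOG2E2 / 2
    h = upper / steps
    total = 0.0
    for k in range(steps):
        v = (k + 0.5) * h
        g = math.sqrt(1 / (1 - 2 * v / LOG2E2)) - 1
        total += math.exp(-g / gamma_o) * h
    return total

def vbar_monte_carlo(gamma_o, samples=200_000, seed=0):
    """Direct estimate of E{V(gamma)} for Rayleigh fading, |h_0|^2 ~ Exp(1)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        gamma = gamma_o * rng.expovariate(1.0)
        acc += (LOG2E2 / 2) * (1 - 1 / (1 + gamma) ** 2)
    return acc / samples
```

Both estimates stay below the saturation value $\frac{1}{2}\log_{2}^{2}(e)$, which $V(\gamma)$ approaches for large $\gamma$.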
Using Lemmas 1, 2, and 3, we obtain the following theorem, which expresses the average
coding rate of the large-scale network in the finite blocklength regime.
###### Theorem 1.
The average coding rate of the large-scale network modeled by (1) with
blocklength $n$, target frame error rate $\epsilon$, and signal-to-noise ratio
$\gamma_{o}=\frac{r_{0}^{-\eta}P}{N_{0}}$ is given by
$\displaystyle\bar{R}_{n,\epsilon}(\gamma_{o})=\bar{C}_{\infty,0}(\gamma_{o})-\frac{\sqrt{\bar{V}(\gamma_{o})}Q^{-1}(\epsilon)}{\sqrt{n}}+o(1/\sqrt{n}),$
(17)
where $\bar{C}_{\infty,0}(\gamma_{o})$ and $\bar{V}(\gamma_{o})$ are as
defined in (11) and (13), respectively.
###### Proof:
The result follows by averaging (9) with respect to $h_{0}$ and $\mathcal{B}$,
and using (11) and (13). ∎
To achieve this rate, the transmitter uses a code of length $n$, and adapts
the rate for each transmission block depending on the channel state $h_{0}$,
which is assumed to be known at the transmitter. Note that (17) recovers the result in the infinite blocklength regime, since the last two terms vanish as $n\to\infty$. Next, we discuss the outage probability in the finite blocklength regime.
## IV Outage Probability
Outage is defined as the event where the channel capacity is lower than a rate
threshold corresponding to the target coding rate. In the infinite blocklength
regime, this rate threshold can be converted to an $\mathrm{SINR}$ threshold.
The outage probability of a large-scale network in the infinite blocklength
regime is given in [7] as
$\displaystyle\mathcal{O}(r_{0},T)=1-\exp\left(-\frac{TN_{0}r_{0}^{\eta}}{P}\right)\mathcal{L}_{\zeta}(T)$
(18)
where $T$ is the $\mathrm{SINR}$ threshold on $\gamma$, i.e., an outage occurs
when $\gamma$ is less than $T$. The reader is referred to [7] for complete
proof.
In the infinite blocklength regime, when outage occurs, the channel is not
guaranteed to support reliable communication at the target rate, where
reliability is defined in the sense of a vanishingly small error probability.
To extend this to the finite blocklength regime, we define outage in the
finite blocklength regime as follows. We say that the channel is in outage
when it is not guaranteed to support transmission at the target rate at the
desired frame error rate $\epsilon$ and blocklength $n$. This occurs when the maximum coding rate in the finite blocklength regime drops below the rate threshold.
For a large-scale network in the finite blocklength regime, the outage probability can be calculated using (9) as follows. Let $R=\log_{2}(1+T)$ be the target rate. Then the outage probability is given by
$\displaystyle\mathcal{O}(r_{0},T,n,\epsilon)$
$\displaystyle=\mathbb{P}(R_{n,\epsilon}(\gamma)<R)$
$\displaystyle=\mathbb{P}\left(C_{\infty,0}(\gamma)-\sqrt{\frac{V(\gamma)}{n}}Q^{-1}(\epsilon)+o\left(\frac{1}{\sqrt{n}}\right)<R\right).$
(19)
Let
$\displaystyle a$
$\displaystyle=\sqrt{\frac{\log_{2}^{2}(e)}{2n}}Q^{-1}(\epsilon)\text{ \ and \
}b=o\left(\frac{1}{\sqrt{n}}\right).$ (20)
Then, we can write
$\displaystyle\mathcal{O}(r_{0},T,n,\epsilon)$
$\displaystyle=\mathbb{P}\left(\log_{2}(1+\gamma)-a\sqrt{\left(1-\frac{1}{(1+\gamma)^{2}}\right)}+b<R\right).$
(21)
Noting that $\sqrt{\left(1-\frac{1}{(1+\gamma)^{2}}\right)}$ is in $[0,1]$, we
conclude that
$\displaystyle\mathcal{O}_{l}(r_{0},T,n,\epsilon)\leq\mathcal{O}(r_{0},T,n,\epsilon)\leq\mathcal{O}_{u}(r_{0},T,n,\epsilon),$
(22)
where the lower bound is given by
$\displaystyle\mathcal{O}_{l}(r_{0},T,n,\epsilon)$
$\displaystyle=\mathbb{P}\left(\log_{2}(1+\gamma)+b<R\right)$
$\displaystyle=1-\exp\left(-\frac{(2^{R-b}-1)}{\gamma_{o}}\right)\mathcal{L}_{\zeta}(2^{R-b}-1),$
(23)
and the upper bound is given by
$\displaystyle\mathcal{O}_{u}(r_{0},T,n,\epsilon)$
$\displaystyle=\mathbb{P}\left(\log_{2}(1+\gamma)-a+b<R\right)$
$\displaystyle=1-\exp\left(-\frac{(2^{R+a-b}-1)}{\gamma_{o}}\right)\mathcal{L}_{\zeta}(2^{R+a-b}-1).$
(24)
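The bounds (23) and (24) are easy to evaluate numerically. The sketch below neglects $b=o(1/\sqrt{n})$ and, by default, assumes no interference ($\mathcal{L}_{\zeta}\equiv 1$); the `laplace` argument is a hypothetical stand-in for $\mathcal{L}_{\zeta}$.

```python
import math
from statistics import NormalDist

def outage_bounds(gamma_o, T, n, eps, laplace=lambda s: 1.0):
    """Bounds (23)-(24) with the o(1/sqrt(n)) term b neglected.
    `laplace` plays the role of L_zeta (identity: no interference)."""
    R = math.log2(1 + T)                              # target rate
    a = math.sqrt(math.log2(math.e) ** 2 / (2 * n)) \
        * NormalDist().inv_cdf(1 - eps)               # a in (20)
    lo_arg = 2 ** R - 1                               # equals T
    hi_arg = 2 ** (R + a) - 1
    lo = 1 - math.exp(-lo_arg / gamma_o) * laplace(lo_arg)
    hi = 1 - math.exp(-hi_arg / gamma_o) * laplace(hi_arg)
    return lo, hi
```

With $b$ neglected, the lower bound reproduces the infinite blocklength outage, and the two bounds coincide as $n\to\infty$.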
Neglecting $b=o\left(\frac{1}{\sqrt{n}}\right)$ in (23), which is small for practical values of $n$ (e.g., $n>100$), we can see that the outage probability lower bound coincides with the outage probability in the infinite blocklength regime (18), confirming that the outage probability in the finite blocklength regime is larger than that in the infinite blocklength regime. On the other
hand, by observing the term $\sqrt{\left(1-\frac{1}{(1+\gamma)^{2}}\right)}$,
we can see that this term quickly approaches one as $\gamma$ grows. This
indicates that the upper bound $\mathcal{O}_{u}(r_{0},T,n,\epsilon)$ provides
a good approximation for the outage probability when $\gamma$ is reasonably
large, as stated next.
###### Theorem 2.
The outage probability of the large-scale network modeled by (1) with
blocklength $n$, target frame error rate $\epsilon$, and signal-to-noise ratio
$\gamma_{o}=\frac{r_{0}^{-\eta}P}{N_{0}}$ can be approximated as
$\displaystyle\mathcal{O}(r_{0},T,n,\epsilon)\approx\mathcal{O}_{u}(r_{0},T,n,\epsilon).$
(25)
###### Proof:
This follows by approximating $\sqrt{\left(1-\frac{1}{(1+\gamma)^{2}}\right)}$
by $1$ which is a good approximation for moderate/large values of $\gamma$. ∎
Next, the average coding rate and the outage probability expressions are
evaluated for different system parameters.
## V Numerical Results
Four main parameters affect the average coding rate of the large-scale network: the distance between the user and the serving BS ($r_{0}$), which also determines the interference exclusion region; the density of BSs per square kilometer ($\lambda$); the blocklength $n$; and the target frame error probability $\epsilon$. The effect of these parameters on the average coding rate and the outage probability is discussed next.
The blocklength $n$ depends on the application scenario. Recent IoT
applications use short packets in the range of 512 bits up to 4096 bits
according to [21]. Moreover, URLLC systems require low frame error
probabilities ($\epsilon$). The average coding rate of the network is
investigated for a range of $n$ and $\epsilon$ in Fig. 2, where $n$ takes
values from the set $\\{128,\ 2048\\}$ and $\epsilon$ takes values from the
set $\\{10^{-2},\ 10^{-8}\\}$. As shown in Fig. 2, at a fixed target frame error probability, the average coding rate increases with $n$, approaching the average capacity in the infinite blocklength regime. Conversely, the average coding rate decreases as the target frame error probability decreases. Moreover, the impact of $n$ becomes more significant when the target frame error probability is small, which is precisely the URLLC regime; this makes the derived expression particularly relevant in practice.
Fig. 3 shows the outage probability versus $r_{0}$ at (a) $n=128$ and (b) $n=2048$, for $\lambda=1$ and $9\ \mathrm{BS/km^{2}}$, with $\epsilon=10^{-2}$ and $10^{-8}$. The figure shows the Monte Carlo simulated outage probability, the approximation in (25), and the outage probability in the infinite blocklength regime (18). It shows that the approximate outage probability expression in (25) is fairly accurate, providing a convenient approximation for a broad range of $n$, $\lambda$, and $\epsilon$. The figure also shows that the outage probability increases as the target frame error probability decreases, because the term $\sqrt{\frac{V}{n}}Q^{-1}(\epsilon)$ grows as $\epsilon$ decreases; hence, the average coding rate decreases and the outage probability increases. However, at a fixed $\epsilon$, as the blocklength increases, the outage probability decreases towards the outage probability of the infinite blocklength regime.
Both Figs. 2 and 3 show that results based on the capacity expression for the infinite blocklength regime overestimate performance, whereas the average coding rate and outage probability results in this paper provide an accurate estimate.
Figure 2: The average coding rate of the large-scale network vs. $\gamma_{o}$
for blocklengths $n$ where $n=\ 128,\ 2048$ at frame error probabilities
$\epsilon=10^{-2},\ 10^{-8}$ with $\lambda=1\ \mathrm{BS/km^{2}}$ and
$r_{0}=250\ m$.
(a) $n=128$
(b) $n=2048$
Figure 3: The outage probability of the large-scale network vs. $r_{0}$ at
$\lambda=1,9\ \mathrm{BS/km^{2}}$, $\epsilon=10^{-2},10^{-8}$, and (a) $n=128$
(b) $n=2048$
The effect of $\lambda$ and $r_{0}$ on the average coding rate is shown in Fig. 4, which plots the average coding rate for BS densities $\lambda=1,\ 3,\ 9\ \mathrm{BS/km^{2}}$ at $n=128$ and $\epsilon=10^{-5}$. The average coding rate decreases as $\lambda$ increases, since increasing $\lambda$ increases the interference. Moreover, at a fixed value of $\lambda$, the average coding rate also decreases as $r_{0}$ increases, since the strength of the desired signal decreases.
Figure 4: The average coding rate of the large-scale network vs. $\gamma_{o}$
for $r_{0}=10,\ 250,\ 500$ and $\lambda=1,\ 3,\ 9\ \mathrm{BS/km^{2}}$ with
$n=128$ and $\epsilon=10^{-5}$.
## VI Conclusion
We have studied the average coding rate in the finite blocklength regime for a
large-scale network using stochastic geometry, and provided a valid
approximation for the outage probability of the system. Moreover, we evaluated
the performance metrics as a function of the network parameters. The results show that analyses relying on the capacity expression in the infinite blocklength regime overestimate performance, especially when the blocklength or the desired frame error rate is small, both of which are important requirements of URLLC, IoT, and MTC. The derived expressions can be used to study large-scale networks subject to stringent delay or reliability constraints, such as IoT and MTC networks. Future work includes extending these results to specific communication technologies such as OMA and NOMA.
## References
* [1] M. Bennis, M. Debbah, H. V. Poor, “Ultrareliable and low-latency wireless communication: Tail, risk, and scale”, Proceedings of the IEEE 106 (10) (2018) 1834–1853.
* [2] M. Shehab, E. Dosti, H. Alves, M. Latva-aho, “On the effective energy efficiency of ultra-reliable networks in the finite blocklength regime”, in: 2017 International Symposium on Wireless Communication Systems (ISWCS), 2017, pp. 275–280.
* [3] Z. Hou, C. She, Y. Li, B. Vucetic, “Ultra-reliable and low-latency communications: Prediction and communication co-design”, in: ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019, pp. 1–7.
* [4] A. Z. Hindi, S. Elayoubi, T. Chahed, “Performance evaluation of ultra-reliable low-latency communication over unlicensed spectrum”, in: ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019, pp. 1–7.
* [5] J. Park, “Rate analysis of ultra-reliable low-latency communications in random wireless networks”, arXiv:1910.13868 (2019) 1–4.
* [6] A. Chaaban, A. Sezgin, “Multi-Hop relaying: An end-to-end delay analysis”, IEEE Transactions on Wireless Communications 15 (4) (2016) 2552–2561.
* [7] H. ElSawy, A. Sultan-Salem, M.-S. Alouini, M. Z. Win, “Modeling and analysis of cellular networks using stochastic geometry: A tutorial”, IEEE Communications Surveys & Tutorials 19 (1) (2016) 167–203.
* [8] M. Haenggi, J. G. Andrews, F. Baccelli, O. Dousse, M. Franceschetti, “Stochastic geometry and random graphs for the analysis and design of wireless networks”, IEEE Journal on Selected Areas in Communications 27 (7) (2009) 1029–1046.
* [9] C.-H. Lee, C.-Y. Shih, Y.-S. Chen, “Stochastic geometry based models for modeling cellular networks in urban areas”, Wireless networks 19 (6) (2013) 1063–1072.
* [10] K. S. Ali, M. Haenggi, H. ElSawy, A. Chaaban, M.-S. Alouini, “Downlink non-orthogonal multiple access (NOMA) in poisson networks”, IEEE Transactions on Communications 67 (2) (2018) 1613–1628.
* [11] K. S. Ali, H. Elsawy, A. Chaaban, M.-S. Alouini, “Non-orthogonal multiple access for large-scale 5G networks: Interference aware design”, IEEE Access 5 (2017) 21204–21216.
* [12] C. E. Shannon, “Coding theorems for a discrete source with a fidelity criterion”, IRE Nat. Conv. Rec 4 (142-163) (1959) 1.
* [13] M. D. Renzo, W. Lu, “The equivalent-in-distribution (EiD)-based approach: On the analysis of cellular networks using stochastic geometry”, IEEE Communications Letters 18 (5) (2014) 761–764.
* [14] S. Srinivasa, M. Haenggi, “Modeling interference in finite uniformly random networks”, in: International Workshop on Information Theory for Sensor Networks (WITS’07), 2007, pp. 1–12.
* [15] Y. Polyanskiy, H. V. Poor, S. Verdú, “Channel coding rate in the finite blocklength regime”, IEEE Transactions on Information Theory 56 (5) (2010) 2307–2359.
* [16] W. Yang, G. Durisi, T. Koch, Y. Polyanskiy, “Block-fading channels at finite blocklength”, in: ISWCS 2013; The Tenth International Symposium on Wireless Communication Systems, 2013, pp. 1–4.
* [17] Y. Hu, J. Gross, A. Schmeink, “On the capacity of relaying with finite blocklength”, IEEE Transactions on Vehicular Technology 65 (3) (2015) 1790–1794.
* [18] M. Shehab, E. Dosti, H. Alves, M. Latva-aho, “On the effective capacity of MTC networks in the finite blocklength regime”, in: 2017 European Conference on Networks and Communications (EuCNC), IEEE, 2017, pp. 1–5.
* [19] E. MolavianJazi, J. N. Laneman, “Multiaccess communication in the finite blocklength regime”, in: Information Theory and Applications Workshop (ITA), 2012\.
* [20] G. Fubini, “Sugli integrali multipli”, Rend. Acc. Naz. Lincei 16 (1907) 608–614.
* [21] A. J. Pinheiro, J. [de M. Bezerra], C. A. Burgardt, D. R. Campelo, “Identifying IoT devices and events based on packet length from encrypted traffic”, Computer Communications 144 (2019) 8 – 17. doi:https://doi.org/10.1016/j.comcom.2019.05.012.
Primal-dual algorithm for quasi-static contact problem
with Coulomb’s friction
Yoshihiro Kanno
Mathematics and Informatics Center, The University of Tokyo, Hongo 7-3-1, Tokyo 113-8656, Japan. E-mail<EMAIL_ADDRESS>
###### Abstract
This paper presents a fast first-order method for solving the quasi-static
contact problem with the Coulomb friction. It is known that this problem can
be formulated as a second-order cone linear complementarity problem, for which
regularized or semi-smooth Newton methods are widely used. As an alternative
approach, this paper develops a method based on an accelerated primal-dual
algorithm. The proposed method is easy to implement, as most of the computation
consists of additions and multiplications of vectors and matrices. Numerical
experiments demonstrate that this method outperforms a regularized and
smoothed Newton method for second-order cone complementarity problems.
> Keywords
>
> Optimization; complementarity problem; second-order cone; primal-dual
> algorithm; contact mechanics; Coulomb friction.
## 1 Introduction
Frictional contact is ubiquitous in engineering applications, and its
computational aspect has been studied extensively [44, 2, 9, 32]. Consider two
bodies in the three-dimensional space. When the signed distance between their
boundaries, called the gap, is positive, there exists no interaction in terms
of forces. Alternatively, if the gap is equal to zero, then the contact
pressure force, called the normal reaction, can be present at interface. This
relation, which evidently possesses disjunction nature, is called the normal
contact law. When the gap is equal to zero, the tangential reaction can also
be present due to friction. The most fundamental model of friction is
Coulomb’s law. The relative tangential velocity of the two bodies at contact
interface is whether equal to zero (said to be stick) or not (said to be slip
or slide). The slip can occur when the magnitude of the tangential reaction is
equal to a threshold value (which is proportional to the magnitude of the
normal reaction), where the tangential reaction is in parallel with the
relative tangential velocity in the opposite direction. In contrast, only
stick is admissible when the magnitude of the tangential reaction is smaller
than the threshold. Thus, the tangential contact law also possesses
disjunction nature.
The disjunction nature in frictional contact can be treated within the
framework of complementarity problem. To date, diverse formulations, as well
as diverse numerical methods, have been proposed; see, e.g., Acary and
Brogliato [2] and Wriggers [44]. This reflects the fact that, as Acary et al.
[1] concluded through comprehensive numerical experiments, “there is no
universal solver” for the frictional contact problem. Indeed, to the best of
the author’s knowledge, there is no algorithm that has a guarantee of
convergence for the three-dimensional quasi-static incremental contact problem
with the Coulomb friction.
In this paper, we address the three-dimensional quasi-static incremental
problem of elastic bodies subjected to the unilateral contact with the Coulomb
friction, under the assumption of small deformation and linear elasticity. For
solving this problem, this paper presents an algorithm based on the primal-
dual algorithm (a.k.a. the primal-dual hybrid gradient algorithm) [11, 13, 25,
35]. The primal-dual algorithm has received considerable attention in
applications to large-scale optimization problems arising in image processing
[11, 12, 8, 19, 14].
If we approximate the friction cone (see (8) in section 2.2 for definition of
the friction cone) as a polyhedral cone, the problem considered in this paper
is reduced to a linear complementarity problem [33]. In contrast, without use
of this approximation, the problem is formulated as a nonlinear
complementarity problem. Regularized or semi-smooth Newton methods are mainly
used to solve such formulations [4, 16, 17, 41, 6]. Kanno et al. [31] showed
that the problem can be recast as a second-order cone linear complementarity
problem. To this formulation, various methods developed in the field of
mathematical optimization may be applicable (although existing methods do not
have a proof of convergence for this formulation, due to the lack of
monotonicity; see the formulation in Kanno et al. [31, section 4]). See
Yoshise [45] and Chen and Pan [15] for survey on the second-order cone
complementarity problem and its numerical solutions. For example, a
regularized smoothing Newton method proposed by Hayashi et al. [24] was
adopted in Kanno et al. [31].
Recently, accelerated gradient methods (a.k.a. optimal first-order methods)
have been successfully developed for solving problems, especially large-scale
ones, in applied and computational mechanics. Namely, accelerated proximal
gradient methods have been proposed for solving the incremental problems in
elastoplasticity with various yield criteria [27, 28, 42, 43], as well as the
bi-modulus elasticity problem [30]. It is worth noting that the problems dealt
with in the literature above are formulated as convex optimization problems.
Specifically, the problem in [27] can be recast as a quadratic programming
(QP) problem, the one in [42] can be recast as a second-order cone programming
(SOCP) problem, and the ones in [43, 30] can be recast as semidefinite
programming (SDP) problems. The numerical experiments demonstrate that these
accelerated proximal gradient methods outperform standard solvers implementing
primal-dual interior-point methods for QP, SOCP, and SDP. As other types of
accelerated gradient methods in mechanics, the reader may refer to an
accelerated Uzawa method for the frictionless contact problem [29] and an
accelerated steepest descent method for the elasticity problem of trusses with
material nonlinearity [21].
In the field of computer graphics, Mazhar et al. [38] proposed an accelerated
projected gradient method to (approximately) solve a problem stemming from
time-discretization of a rigid multi-body dynamical system involving
frictional contact. Through numerical experiments, Melanz et al. [39]
concluded that this method is most efficient compared with conventional first-
order methods (the projected Jacobi method and the projected Gauss–Seidel
method) in this research area and second-order optimization methods (primal-
dual interior-point methods). It should be clear that the method of Mazhar et
al. [38] does not solve the problem to be solved in dynamic simulation, but is
a method to solve a modified problem which is easier than the original one.
More concretely, the original problem is a nonlinear complementarity problem
that does not correspond to the optimality condition of an optimization
problem, while Mazhar et al. [38] solves a convex optimization problem that is
obtained by adding modification to the original problem. Such artificial
modification certainly alters physical phenomena, and causes artifacts in
simulation results as illustrated in Mazhar et al. [38, section 2.3]. It is
worth noting that there exist some numerical methods for solving the original
problem; e.g., nonsmooth Newton methods [10, 7], a fixed-point method that
sequentially solves convex optimization problems [3], etc. These methods are
not first-order methods. In contrast, in this paper we consider the quasi-
static incremental problem of elastic bodies, and attempt to solve the problem
itself, i.e., without adding any modification, via an accelerated first-order
method.
The paper is organized as follows. Section 2 summarizes the frictional contact
problem considered in this paper. As a main contribution, section 3 presents
an algorithm for solving this problem, based on an accelerated primal-dual
algorithm. Section 4 performs numerical experiments. Section 5 presents some
conclusions.
In our notation, ⊤ denotes the transpose of a vector or a matrix. For
$\boldsymbol{x}\in\mathbb{R}^{n}$, we use $\|\boldsymbol{x}\|$ to denote its
Euclidean norm, i.e.,
$\|\boldsymbol{x}\|=\sqrt{\boldsymbol{x}^{\top}\boldsymbol{x}}$. For
$\boldsymbol{x}$, $\boldsymbol{y}\in\mathbb{R}^{n}$, we write
$\boldsymbol{x}\perp\boldsymbol{y}$ if
$\boldsymbol{x}^{\top}\boldsymbol{y}=0$. We use
$\langle\boldsymbol{x},\boldsymbol{y}\rangle$ to denote
$\boldsymbol{x}^{\top}\boldsymbol{y}$. For a closed convex function
$f:\mathbb{R}^{n}\to\mathbb{R}\cup\\{+\infty\\}$, its proximal mapping is
defined by
$\displaystyle\mathop{\boldsymbol{\mathsf{prox}}}\nolimits_{f}(\boldsymbol{x})=\operatornamewithlimits{\mathrm{arg\,min}}_{\boldsymbol{z}\in\mathbb{R}^{n}}\Bigl{\\{}f(\boldsymbol{z})+\frac{1}{2}\|\boldsymbol{z}-\boldsymbol{x}\|^{2}\Bigr{\\}}.$
(1)
For $S\subseteq\mathbb{R}^{n}$, we use
$\delta_{S}:\mathbb{R}^{n}\to\mathbb{R}\cup\\{+\infty\\}$ to denote its
indicator function, i.e.,
$\displaystyle\delta_{S}(\boldsymbol{x})=\begin{dcases*}0&if
$\boldsymbol{x}\in S$,\\\ +\infty&otherwise.\end{dcases*}$
We use $\Pi_{S}(\boldsymbol{x})\in\mathbb{R}^{n}$ to denote the projection of
$\boldsymbol{x}\in\mathbb{R}^{n}$ onto $S$, i.e.,
$\displaystyle\Pi_{S}(\boldsymbol{x})=\operatornamewithlimits{\mathrm{arg\,min}}_{\boldsymbol{z}\in
S}\\{\|\boldsymbol{z}-\boldsymbol{x}\|\\}.$
For a nonempty convex cone $C\subseteq\mathbb{R}^{n}$, define the dual cone by
$\displaystyle
C^{*}=\\{\boldsymbol{s}\in\mathbb{R}^{n}\mid\langle\boldsymbol{s},\boldsymbol{x}\rangle\geq
0\ (\forall\boldsymbol{x}\in C)\\}.$
We readily see that
$\displaystyle\inf_{\boldsymbol{x}\in\mathbb{R}^{n}}\\{\langle\boldsymbol{s},\boldsymbol{x}\rangle+\delta_{C}(\boldsymbol{x})\\}=-\delta_{C^{*}}(\boldsymbol{s})$
(2)
holds.
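For later reference, the projection $\Pi_{K}$ onto a scaled second-order cone $K=\{(t,\boldsymbol{v})\in\mathbb{R}\times\mathbb{R}^{2}:\mu t\geq\|\boldsymbol{v}\|\}$ (the shape of the friction cone appearing in section 2.2) admits a well-known closed form with three cases: the point is inside $K$, inside the polar cone $-K^{*}$, or projected onto the boundary. The following sketch is illustrative; the function name is ours.

```python
import math

def project_soc(x0, x1, mu=1.0):
    """Closed-form projection of (x0, x1) onto the scaled second-order
    cone {(t, v) : mu * t >= ||v||}."""
    r = math.sqrt(sum(v * v for v in x1))
    if mu * x0 >= r:                    # case 1: already inside the cone
        return x0, list(x1)
    if x0 <= -mu * r:                   # case 2: inside the polar cone
        return 0.0, [0.0] * len(x1)
    t = (x0 + mu * r) / (1 + mu * mu)   # case 3: project onto the boundary
    lam = mu * t / r                    # chosen so that mu * t = ||lam * x1||
    return t, [lam * v for v in x1]
```

With $\mu=1$, this is the standard Lorentz-cone projection; note also that the proximal mapping (1) of the indicator $\delta_{K}$ equals this projection.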
## 2 Fundamentals: Quasi-static incremental problem
This section summarizes the quasi-static incremental analysis of an elastic
solid, with the Coulomb friction for the unilateral contact. For fundamentals
of contact mechanics, see, e.g., Duvaut and Lions [18] and Wriggers [44].
### 2.1 Contact kinematics
Consider an elastic body in the three-dimensional space. The body is
discretized according to the conventional finite element method. Let $d$
denote the number of degrees of freedom of the nodal displacements. We use
$\boldsymbol{u}\in\mathbb{R}^{d}$ to denote the nodal displacement vector.
To investigate the time evolution in the specified time interval $[0,T]$,
suppose that the time interval is subdivided into finitely many intervals. In
the quasi-static analysis, we assume that the inertia term in the equation of
motion is negligible. This assumption is applicable if the external force
applied to the elastic body changes sufficiently slowly. For a specific time
subinterval, denoted by $[t^{l},t^{l+1}]$, let $\boldsymbol{u}^{l}$ and
$\boldsymbol{u}^{l+1}$ denote the nodal displacements at time $t^{l}$ and
$t^{l+1}$, respectively. The incremental problem solved for time $t^{l+1}$ is
to find $\boldsymbol{u}^{l+1}$, or, equivalently, to find the incremental
displacement [34]
$\displaystyle\Delta\boldsymbol{u}=\boldsymbol{u}^{l+1}-\boldsymbol{u}^{l}.$
Figure 1: Contact candidate node and obstacle.
For simplicity, we restrict ourselves to the case that the boundary of an
elastic body can possibly touch the surface of a fixed rigid obstacle;
extension of the proposed method to the case that two elastic bodies can touch
each other is straightforward. We assume that a set of contact candidate nodes
(i.e., nodes that can possibly contact with the obstacle at time $t=t^{l+1}$)
is specified a priori. Figure 1 depicts one of the contact candidate nodes. It
should be clear that we do not know in advance whether each contact candidate
node contacts with the obstacle or not at $t=t^{l+1}$ (because
$\Delta\boldsymbol{u}$ is unknown).
Let $c$ denote the number of contact candidate nodes. For contact candidate
node $j$ $(j=1,\dots,c)$, let $g_{j}$ denote the initial gap (i.e., the
distance between node $j$ and the obstacle at time $t=t^{l}$), which is a
nonnegative constant; see Figure 1. The gap at time $t=t^{l+1}$, denoted by
$\hat{g}_{j}(\Delta\boldsymbol{u})$, is given in the form
$\displaystyle\hat{g}_{j}(\Delta\boldsymbol{u})=g_{j}-\boldsymbol{t}_{\mathrm{n}j}^{\top}\,\Delta\boldsymbol{u}$
(3)
with a constant vector $\boldsymbol{t}_{\mathrm{n}j}\in\mathbb{R}^{d}$,
because we assume small deformation. The incremental displacement of node $j$
can be decomposed additively into two components, which are normal and
tangential to the obstacle surface. The tangential component, denoted by
$\Delta\boldsymbol{u}_{\mathrm{t}j}$, is given in the form
$\displaystyle\Delta\boldsymbol{u}_{\mathrm{t}j}=T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u},$
(4)
where $T_{\mathrm{t}j}\in\mathbb{R}^{d\times 2}$ is a constant matrix; see
Klarbring [34] for more details. For notational simplicity, define
$T_{\mathrm{n}}\in\mathbb{R}^{d\times c}$ and
$T_{\mathrm{t}}\in\mathbb{R}^{d\times 2c}$ by
$\displaystyle
T_{\mathrm{n}}=\begin{bmatrix}\boldsymbol{t}_{\mathrm{n}1}&\cdots&\boldsymbol{t}_{\mathrm{n}c}\\\
\end{bmatrix},\quad
T_{\mathrm{t}}=\begin{bmatrix}T_{\mathrm{t}1}&\cdots&T_{\mathrm{t}c}\\\
\end{bmatrix}.$
If the gap $\hat{g}_{j}(\Delta\boldsymbol{u})$ is equal to zero, then the
contact reaction force can be present at node $j$. The reaction also has two
components, denoted by $r_{\mathrm{n}j}\in\mathbb{R}$ and
$\boldsymbol{r}_{\mathrm{t}j}\in\mathbb{R}^{2}$, which are normal and
tangential to the obstacle surface, respectively. It is worth noting that
$r_{\mathrm{n}j}$ stems from a kinematic constraint that node $j$ cannot
penetrate the obstacle, while $\boldsymbol{r}_{\mathrm{t}j}$ is due to
friction. For notational convenience, define
$\boldsymbol{r}_{j}\in\mathbb{R}^{3}$, $\boldsymbol{r}\in\mathbb{R}^{3c}$,
$\boldsymbol{r}_{\mathrm{n}}\in\mathbb{R}^{c}$, and
$\boldsymbol{r}_{\mathrm{t}}\in\mathbb{R}^{2c}$ by
$\displaystyle\boldsymbol{r}_{j}=\begin{bmatrix}r_{\mathrm{n}j}\\\
\boldsymbol{r}_{\mathrm{t}j}\\\
\end{bmatrix},\quad\boldsymbol{r}=\begin{bmatrix}\boldsymbol{r}_{1}\\\
\vdots\\\ \boldsymbol{r}_{c}\\\
\end{bmatrix},\quad\boldsymbol{r}_{\mathrm{n}}=\begin{bmatrix}r_{\mathrm{n}1}\\\
\vdots\\\ r_{\mathrm{n}c}\\\
\end{bmatrix},\quad\boldsymbol{r}_{\mathrm{t}}=\begin{bmatrix}\boldsymbol{r}_{\mathrm{t}1}\\\
\vdots\\\ \boldsymbol{r}_{\mathrm{t}c}\\\ \end{bmatrix}.$
### 2.2 Coulomb’s friction law under unilateral contact
Let $\mu$ denote the coefficient of friction, which is a positive constant.
For the incremental problem, the Coulomb friction law for unilateral contact
is given by [18, 44]
$\displaystyle{-}\mu r_{\mathrm{n}j}\geq\|\boldsymbol{r}_{\mathrm{t}j}\|,$
(5a) $\displaystyle{-}\mu r_{\mathrm{n}j}>\|\boldsymbol{r}_{\mathrm{t}j}\|$
$\displaystyle\quad\Rightarrow\quad\Delta\boldsymbol{u}_{\mathrm{t}j}=\boldsymbol{0},$
(5b) $\displaystyle{-}\mu r_{\mathrm{n}j}=\|\boldsymbol{r}_{\mathrm{t}j}\|>0$
$\displaystyle\quad\Rightarrow\quad\exists\alpha\geq 0:\
\Delta\boldsymbol{u}_{\mathrm{t}j}=-\alpha\boldsymbol{r}_{\mathrm{t}j}.$ (5c)
Here, (5a) states that the reaction lies in the friction cone, which implies
$r_{\mathrm{n}j}\leq 0$, i.e., no adhesion occurs. The disjunction of the
sticking and slipping states is described by (5b) and (5c).
Besides the non-adhesion condition (i.e., $r_{\mathrm{n}j}\leq 0$), the
unilateral contact condition consists of [18, 44]
$\displaystyle\hat{g}_{j}(\Delta\boldsymbol{u})\geq 0,$ (6a)
$\displaystyle\hat{g}_{j}(\Delta\boldsymbol{u})>0$
$\displaystyle\quad\Rightarrow\quad r_{\mathrm{n}j}=0,$ (6b) $\displaystyle
r_{\mathrm{n}j}<0$
$\displaystyle\quad\Rightarrow\quad\hat{g}_{j}(\Delta\boldsymbol{u})=0.$ (6c)
Here, (6a) is the non-penetration condition. We say that node $j$ is in the
free state and in contact (with nonzero reaction), respectively, if (6b) and
(6c) hold.
It is known that (5) and (6) can be rewritten equivalently as [37, 36]
$\displaystyle\hat{g}_{j}(\Delta\boldsymbol{u})\geq 0,$ (7a)
$\displaystyle{-}\mu r_{\mathrm{n}j}\geq\|\boldsymbol{r}_{\mathrm{t}j}\|,$
(7b)
$\displaystyle\langle\boldsymbol{r}_{\mathrm{t}j},\Delta\boldsymbol{u}_{\mathrm{t}j}\rangle-\langle
r_{\mathrm{n}j},\hat{g}_{j}(\Delta\boldsymbol{u})+\mu\|\Delta\boldsymbol{u}_{\mathrm{t}j}\|\rangle=0.$
(7c)
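Conditions (7a)-(7c) can be checked mechanically for a candidate state of a single node. The following Python sketch is our own illustration (the function name, tolerance, and test data are assumptions, not from the paper); it returns whether a state satisfies the reformulated contact law.

```python
import numpy as np

def satisfies_contact_law(g_hat, r_n, r_t, du_t, mu, tol=1e-9):
    """Check (7a)-(7c) for one contact candidate node.
    g_hat: gap, r_n: normal reaction, r_t: tangential reaction (2-vector),
    du_t: incremental tangential displacement (2-vector)."""
    cond_a = g_hat >= -tol                                  # (7a) non-penetration
    cond_b = -mu * r_n >= np.linalg.norm(r_t) - tol         # (7b) friction cone
    compl = r_t @ du_t - r_n * (g_hat + mu * np.linalg.norm(du_t))
    cond_c = abs(compl) <= tol                              # (7c) complementarity
    return cond_a and cond_b and cond_c
```

For instance, a sticking node (zero gap, zero tangential slip, reaction strictly inside the cone), a free node (positive gap, zero reaction), and a slipping node ($-\mu r_{\mathrm{n}}=\|\boldsymbol{r}_{\mathrm{t}}\|$ with $\Delta\boldsymbol{u}_{\mathrm{t}}=-\alpha\boldsymbol{r}_{\mathrm{t}}$) all pass, whereas a positive gap with a nonzero normal reaction violates (7c).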
Let $F\subset\mathbb{R}^{3}$ denote the friction cone, i.e.,
$\displaystyle
F=\\{(r_{\mathrm{n}},\boldsymbol{r}_{\mathrm{t}})\in\mathbb{R}\times\mathbb{R}^{2}\mid-\mu
r_{\mathrm{n}}\geq\|\boldsymbol{r}_{\mathrm{t}}\|\\}.$ (8)
The dual cone of $F$ is
$\displaystyle
F^{*}=\\{(v_{\mathrm{n}},\boldsymbol{v}_{\mathrm{t}})\in\mathbb{R}\times\mathbb{R}^{2}\mid-
v_{\mathrm{n}}\geq\mu\|\boldsymbol{v}_{\mathrm{t}}\|\\}.$
Since (7a) holds if and only if
$-\hat{g}_{j}(\Delta\boldsymbol{u})-\mu\|\Delta\boldsymbol{u}_{\mathrm{t}j}\|\in
F^{*}$, we see that (7) can be recast as the following cone complementarity
condition [26, section 10.3.4.1]:
$\displaystyle
F^{*}\ni\begin{bmatrix}-\hat{g}_{j}(\Delta\boldsymbol{u})-\mu\|\Delta\boldsymbol{u}_{\mathrm{t}j}\|\\\
\Delta\boldsymbol{u}_{\mathrm{t}j}\\\
\end{bmatrix}\perp\begin{bmatrix}r_{\mathrm{n}j}\\\
\boldsymbol{r}_{\mathrm{t}j}\\\ \end{bmatrix}\in F.$ (9)
### 2.3 Equilibrium equation
Let $K\in\mathbb{R}^{d\times d}$ denote the stiffness matrix of the elastic
body, which is symmetric and positive definite. We use
$\boldsymbol{p}^{l}\in\mathbb{R}^{d}$ to denote the nodal external force
applied to the elastic body at time $t^{l}$.
At an equilibrium state, the internal force, the external force, and the
reaction are balanced. At time $t^{l}$, this balance law, called the
equilibrium equation, is given by [34, 44]
$\displaystyle
K\boldsymbol{u}^{l}-\boldsymbol{p}^{l}=T_{\mathrm{n}}\boldsymbol{r}_{\mathrm{n}}^{l}+T_{\mathrm{t}}\boldsymbol{r}_{\mathrm{t}}^{l}.$
(10)
In the incremental problem for time $t^{l+1}$, we have already known
$\boldsymbol{u}^{l}$, $\boldsymbol{r}_{\mathrm{n}}^{l}$, and
$\boldsymbol{r}_{\mathrm{t}}^{l}$ satisfying (10), and attempt to find
$\boldsymbol{u}^{l+1}$, $\boldsymbol{r}_{\mathrm{n}}^{l+1}$, and
$\boldsymbol{r}_{\mathrm{t}}^{l+1}$ satisfying the equilibrium equation at
time $t^{l+1}$, i.e.,
$\displaystyle
K(\boldsymbol{u}^{l}+\Delta\boldsymbol{u})-\boldsymbol{p}^{l+1}=T_{\mathrm{n}}\boldsymbol{r}_{\mathrm{n}}^{l+1}+T_{\mathrm{t}}\boldsymbol{r}_{\mathrm{t}}^{l+1}.$
(11)
For notational simplicity, we use $\boldsymbol{p}\in\mathbb{R}^{d}$,
$\boldsymbol{r}_{\mathrm{n}}\in\mathbb{R}^{c}$, and
$\boldsymbol{r}_{\mathrm{t}}\in\mathbb{R}^{2c}$ to denote
$\displaystyle\boldsymbol{p}=\boldsymbol{p}^{l+1}-K\boldsymbol{u}^{l},\quad\boldsymbol{r}_{\mathrm{n}}=\boldsymbol{r}_{\mathrm{n}}^{l+1},\quad\boldsymbol{r}_{\mathrm{t}}=\boldsymbol{r}_{\mathrm{t}}^{l+1}.$
From (10) and (11), we have
$\displaystyle
K\,\Delta\boldsymbol{u}-\boldsymbol{p}=T_{\mathrm{n}}\boldsymbol{r}_{\mathrm{n}}+T_{\mathrm{t}}\boldsymbol{r}_{\mathrm{t}}.$
(12)
### 2.4 Incremental problem as nonlinear cone complementarity problem
We have seen that the Coulomb friction law and the unilateral contact
condition are written as (9), where $\hat{g}_{j}(\Delta\boldsymbol{u})$ and
$\Delta\boldsymbol{u}_{\mathrm{t}j}$ are related to $\Delta\boldsymbol{u}$ by
(3) and (4). Also, the equilibrium equation is written as (12).
Consequently, the incremental problem can be formulated as follows:
$\displaystyle
K\,\Delta\boldsymbol{u}-\boldsymbol{p}=\sum_{j=1}^{c}\boldsymbol{t}_{\mathrm{n}j}r_{\mathrm{n}j}+\sum_{j=1}^{c}T_{\mathrm{t}j}\boldsymbol{r}_{\mathrm{t}j},$
(13a) $\displaystyle
F^{*}\ni\begin{bmatrix}-g_{j}-\mu\|T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u}\|+\boldsymbol{t}_{\mathrm{n}j}^{\top}\,\Delta\boldsymbol{u}\\\
T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u}\\\
\end{bmatrix}\perp\begin{bmatrix}r_{\mathrm{n}j}\\\
\boldsymbol{r}_{\mathrm{t}j}\\\ \end{bmatrix}\in F,\quad j=1,\dots,c.$ (13b)
Here, $\Delta\boldsymbol{u}$, $r_{\mathrm{n}j}$, and
$\boldsymbol{r}_{\mathrm{t}j}$ $(j=1,\dots,c)$ are unknown variables.
It is worth noting that (13) is a nonlinear cone complementarity problem.
Also, it is known that there exists no optimization problem whose optimality
condition corresponds to (13) (because the Coulomb friction law does not admit
a potential [2, section 3.9.2]).
## 3 Accelerated primal-dual algorithm
In this section, based on the primal-dual algorithm [13] we develop an
algorithm for solving problem (13).
### 3.1 Optimization problem associated with (13)
Since directly solving problem (13) is not easy, we first consider an
optimization problem similar to the one found in [3]. This optimization
problem is well suited for the application of a primal-dual algorithm.
Let $\tilde{g}_{j}\in\mathbb{R}$ $(j=1,\dots,c)$ be constants. Replace
$g_{j}+\mu\|T_{\mathrm{t}j}^{\top}\Delta\boldsymbol{u}\|$ in (13) with
$\tilde{g}_{j}$ to obtain the following complementarity problem:
$\displaystyle
K\,\Delta\boldsymbol{u}-\boldsymbol{p}=\sum_{j=1}^{c}\boldsymbol{t}_{\mathrm{n}j}r_{\mathrm{n}j}+\sum_{j=1}^{c}T_{\mathrm{t}j}\boldsymbol{r}_{\mathrm{t}j},$
(14a) $\displaystyle
F^{*}\ni\begin{bmatrix}-\tilde{g}_{j}+\boldsymbol{t}_{\mathrm{n}j}^{\top}\,\Delta\boldsymbol{u}\\\
T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u}\\\
\end{bmatrix}\perp\begin{bmatrix}r_{\mathrm{n}j}\\\
\boldsymbol{r}_{\mathrm{t}j}\\\ \end{bmatrix}\in F,\quad j=1,\dots,c.$ (14b)
For notational simplicity, define $\pi:\mathbb{R}^{d}\to\mathbb{R}$ by
$\displaystyle\pi(\boldsymbol{u})=\frac{1}{2}\boldsymbol{u}^{\top}K\boldsymbol{u}-\boldsymbol{p}^{\top}\boldsymbol{u}.$
We readily see that (14) corresponds to the optimality condition for the
following convex optimization problem (see appendix B for details):
$\displaystyle\mathop{\mathrm{Minimize}}_{\Delta\boldsymbol{u}}\quad\pi(\Delta\boldsymbol{u})+\sum_{j=1}^{c}\delta_{F^{*}}(-\tilde{g}_{j}+\boldsymbol{t}_{\mathrm{n}j}^{\top}\,\Delta\boldsymbol{u},T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u}).$
(15)
Here, for notational simplicity, for $v_{\mathrm{n}}\in\mathbb{R}$,
$\boldsymbol{v}_{\mathrm{t}}\in\mathbb{R}^{2}$, and
$\delta_{F^{*}}:\mathbb{R}^{3}\to\mathbb{R}\cup\\{+\infty\\}$ we write
$\delta_{F^{*}}(v_{\mathrm{n}},\boldsymbol{v}_{\mathrm{t}})$ instead of
$\delta_{F^{*}}\bigl{(}(v_{\mathrm{n}},\boldsymbol{v}_{\mathrm{t}}^{\top})^{\top}\bigr{)}$.
Problem (15) is a minimization problem of a convex quadratic function under
second-order cone constraints.
It follows from (2) that we have
$\displaystyle\delta_{F^{*}}(\boldsymbol{v})$
$\displaystyle=-\inf_{\boldsymbol{r}_{j}\in\mathbb{R}^{3}}\\{\langle\boldsymbol{v},\boldsymbol{r}_{j}\rangle+\delta_{F}(\boldsymbol{r}_{j})\\}$
$\displaystyle=\sup_{\boldsymbol{r}_{j}\in\mathbb{R}^{3}}\\{-\langle\boldsymbol{v},\boldsymbol{r}_{j}\rangle-\delta_{F}(\boldsymbol{r}_{j})\\}.$
(16)
Application of (16) reduces (15) to the following form:
$\displaystyle\mathop{\mathrm{Minimize}}_{\Delta\boldsymbol{u}\in\mathbb{R}^{d}}\quad\pi(\Delta\boldsymbol{u})+\sup_{\boldsymbol{r}\in\mathbb{R}^{3c}}\left\\{-\sum_{j=1}^{c}\left\langle\begin{bmatrix}r_{\mathrm{n}j}\\\
\boldsymbol{r}_{\mathrm{t}j}\\\
\end{bmatrix},\begin{bmatrix}-\tilde{g}_{j}+\boldsymbol{t}_{\mathrm{n}j}^{\top}\,\Delta\boldsymbol{u}\\\
T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u}\end{bmatrix}\right\rangle-\sum_{j=1}^{c}\delta_{F}(r_{\mathrm{n}j},\boldsymbol{r}_{\mathrm{t}j})\right\\}.$
(17)
It should be clear that problems (14), (15), and (17) in this section are
equivalent to each other, but they are different from (13).
### 3.2 Primal-dual algorithm for optimization problem (17)
For ease of comprehension of the algorithm presented in section 3.3, in this
section we apply a primal-dual algorithm to problem (17). Specifically, we
apply Chambolle and Pock [13, Algorithm 1] (see also Chambolle and Pock [12,
Algorithm 7]).
For notational simplicity, define $T\in\mathbb{R}^{d\times 3c}$,
$f^{*}:\mathbb{R}^{3c}\to\mathbb{R}\cup\\{+\infty\\}$, and
$h:\mathbb{R}^{3c}\to\mathbb{R}$ by
$\displaystyle T$
$\displaystyle=\begin{bmatrix}\boldsymbol{t}_{\mathrm{n}1}&T_{\mathrm{t}1}&\cdots&\boldsymbol{t}_{\mathrm{n}c}&T_{\mathrm{t}c}\\\
\end{bmatrix},$ $\displaystyle f^{*}(\boldsymbol{r})$
$\displaystyle=\sum_{j=1}^{c}\delta_{F}(r_{\mathrm{n}j},\boldsymbol{r}_{\mathrm{t}j}),$
$\displaystyle h(\boldsymbol{r})$
$\displaystyle=\tilde{\boldsymbol{g}}^{\top}\boldsymbol{r}_{\mathrm{n}}.$
Problem (17) is concisely written as follows:
$\displaystyle\min_{\Delta\boldsymbol{u}}\max_{\boldsymbol{r}}\Bigl{\\{}-\langle
T^{\top}\,\Delta\boldsymbol{u},\boldsymbol{r}\rangle+\pi(\Delta\boldsymbol{u})+h(\boldsymbol{r})-f^{*}(\boldsymbol{r})\Bigr{\\}}.$
(18)
The primal-dual algorithm solving this problem updates the incumbent solution,
denoted by $\Delta\boldsymbol{u}^{(k)}$ and $\boldsymbol{r}^{(k)}$, as
$\displaystyle\boldsymbol{r}^{(k+1)}$
$\displaystyle:=\mathop{\boldsymbol{\mathsf{prox}}}\nolimits_{\alpha
f^{*}}(\boldsymbol{r}^{(k)}+\alpha(\nabla
h(\boldsymbol{r}^{(k)})-T^{\top}\,\Delta\hat{\boldsymbol{u}}^{(k)})),$ (19)
$\displaystyle\Delta\boldsymbol{u}^{(k+1)}$
$\displaystyle:=\mathop{\boldsymbol{\mathsf{prox}}}\nolimits_{\beta\pi}(\Delta\boldsymbol{u}^{(k)}+\beta
T\boldsymbol{r}^{(k+1)}),$ (20)
$\displaystyle\Delta\hat{\boldsymbol{u}}^{(k+1)}$
$\displaystyle:=\Delta\boldsymbol{u}^{(k+1)}+\theta(\Delta\boldsymbol{u}^{(k+1)}-\Delta\boldsymbol{u}^{(k)}).$
(21)
Here, $\alpha>0$ and $\beta>0$ are step lengths, and $\theta\in[0,1]$ is a
constant.
A direct calculation using definition (1) of the proximal mapping yields
$\displaystyle\mathop{\boldsymbol{\mathsf{prox}}}\nolimits_{\alpha
f^{*}}(\boldsymbol{r})$
$\displaystyle=\begin{bmatrix}\Pi_{F}(\boldsymbol{r}_{1})\\\ \vdots\\\
\Pi_{F}(\boldsymbol{r}_{c})\\\ \end{bmatrix},$
$\displaystyle\mathop{\boldsymbol{\mathsf{prox}}}\nolimits_{\beta\pi}(\boldsymbol{u})$
$\displaystyle=(\beta K+I)^{-1}(\boldsymbol{u}+\beta\boldsymbol{p}).$
Therefore, the updates in (19), (20), and (21) can be described explicitly as
Algorithm 1.
Algorithm 1 Primal-dual algorithm for optimization problem (14).
1:$\Delta\boldsymbol{u}^{(0)}\in\mathbb{R}^{d}$,
$\boldsymbol{r}^{(0)}\in\mathbb{R}^{3c}$, $\alpha>0$, $\beta>0$,
$\theta\in[0,1]$.
2:$\Delta\hat{\boldsymbol{u}}^{(0)}\leftarrow\Delta\boldsymbol{u}^{(0)}$.
3:for $k=0,1,2,\dots$ do
4:
$\boldsymbol{s}_{\mathrm{n}}\leftarrow\boldsymbol{r}_{\mathrm{n}}^{(k)}+\alpha(\tilde{\boldsymbol{g}}-T_{\mathrm{n}}^{\top}\,\Delta\hat{\boldsymbol{u}}^{(k)})$.
5:
$\boldsymbol{s}_{\mathrm{t}}\leftarrow\boldsymbol{r}_{\mathrm{t}}^{(k)}-\alpha
T_{\mathrm{t}}^{\top}\,\Delta\hat{\boldsymbol{u}}^{(k)}$.
6: $\boldsymbol{r}_{j}^{(k+1)}\leftarrow\Pi_{F}(\boldsymbol{s}_{j})$
$(j=1,\dots,c)$.
7:
$\boldsymbol{b}\leftarrow\Delta\boldsymbol{u}^{(k)}+\beta(T\boldsymbol{r}^{(k+1)}+\boldsymbol{p})$.
8: Solve $(\beta K+I)\,\Delta\boldsymbol{u}^{(k+1)}=\boldsymbol{b}$ to obtain
$\Delta\boldsymbol{u}^{(k+1)}$.
9:
$\Delta\hat{\boldsymbol{u}}^{(k+1)}\leftarrow\Delta\boldsymbol{u}^{(k+1)}+\theta(\Delta\boldsymbol{u}^{(k+1)}-\Delta\boldsymbol{u}^{(k)})$.
10:end for
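The closed form of $\mathop{\boldsymbol{\mathsf{prox}}}_{\beta\pi}$ used in Algorithm 1 can be verified numerically against the defining minimization. The following Python/NumPy sketch is illustrative only: the matrix $K$, load $\boldsymbol{p}$, and point $\boldsymbol{u}$ below are arbitrary data of our own choosing, and the brute-force check uses a general-purpose minimizer.

```python
import numpy as np
from scipy.optimize import minimize

def prox_pi(u, K, p, beta):
    # closed form: prox_{beta*pi}(u) = (beta*K + I)^{-1} (u + beta*p)
    return np.linalg.solve(beta * K + np.eye(len(u)), u + beta * p)

# small SPD stiffness-like matrix and load (arbitrary illustrative data)
K = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
p = np.array([0.3, -0.2, 0.1])
u = np.array([1.0, 2.0, -1.0])
beta = 0.7

x_closed = prox_pi(u, K, p, beta)

# brute-force check: minimize pi(x) + ||x - u||^2 / (2*beta) directly
obj = lambda x: 0.5 * x @ K @ x - p @ x + 0.5 / beta * np.sum((x - u) ** 2)
x_num = minimize(obj, np.zeros(3), method="BFGS").x
```

The two results agree to within the minimizer's tolerance, confirming the closed form.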
###### Remark 1.
Problem (15) is regarded as a particular case of the following convex
optimization problem in variable $\boldsymbol{x}\in\mathbb{R}^{n}$:
$\displaystyle\mathop{\mathrm{Minimize}}$
$\displaystyle\hat{f}(\boldsymbol{x}):=\frac{1}{2}\boldsymbol{x}^{\top}Q\boldsymbol{x}+\boldsymbol{h}^{\top}\boldsymbol{x}$
(22a) $\displaystyle\mathop{\mathrm{subject~{}to}}$ $\displaystyle
A_{i}\boldsymbol{x}+\boldsymbol{b}_{i}\in L^{n_{i}},\quad i=1,\dots,m,$ (22b)
$\displaystyle C\boldsymbol{x}+\boldsymbol{d}=\boldsymbol{0}.$ (22c)
Here, $Q\in\mathbb{R}^{n\times n}$ is symmetric and positive semidefinite, and
$L^{n_{i}}$ denotes the $n_{i}$-dimensional second-order cone, i.e.,
$\displaystyle
L^{n_{i}}=\\{(x_{0},\boldsymbol{x}_{1})\in\mathbb{R}\times\mathbb{R}^{n_{i}-1}\mid
x_{0}\geq\|\boldsymbol{x}_{1}\|\\}.$
Problem (22) is minimization of a convex quadratic function under second-order
cone constraints. It follows from the self-duality of the second-order cone
that problem (22) is equivalently rewritten as follows:
$\displaystyle\mathop{\mathrm{Minimize}}_{\boldsymbol{x}}\quad\sup_{\boldsymbol{s}_{1}\in
L^{n_{1}},\dots,\boldsymbol{s}_{m}\in
L^{n_{m}},\boldsymbol{y}}\Bigl{\\{}\frac{1}{2}\boldsymbol{x}^{\top}Q\boldsymbol{x}+\boldsymbol{h}^{\top}\boldsymbol{x}-\sum_{i=1}^{m}\langle\boldsymbol{s}_{i},A_{i}\boldsymbol{x}+\boldsymbol{b}_{i}\rangle-\boldsymbol{y}^{\top}(C\boldsymbol{x}+\boldsymbol{d})\Bigr{\\}}.$
(23)
The primal-dual algorithm for solving this problem updates the dual variables
as
$\displaystyle\boldsymbol{s}_{i}^{(k+1)}$
$\displaystyle:=\Pi_{L^{n_{i}}}(\boldsymbol{s}_{i}^{(k)}-\alpha(A_{i}\boldsymbol{x}^{(k)}+\boldsymbol{b}_{i})),\quad
i=1,\dots,m,$ (24) $\displaystyle\boldsymbol{y}^{(k+1)}$
$\displaystyle:=\boldsymbol{y}^{(k)}-\alpha(C\boldsymbol{x}^{(k)}+\boldsymbol{d}),$
(25)
and then updates the primal variable as
$\displaystyle\boldsymbol{x}^{(k+1)}$
$\displaystyle:=\mathop{\boldsymbol{\mathsf{prox}}}\nolimits_{\beta\hat{f}}\Bigl{(}\boldsymbol{x}^{(k)}+\beta\Bigl{(}\sum_{i=1}^{m}A_{i}^{\top}\boldsymbol{s}_{i}^{(k+1)}+C^{\top}\boldsymbol{y}^{(k+1)}\Bigr{)}\Bigr{)},$
where
$\displaystyle\mathop{\boldsymbol{\mathsf{prox}}}\nolimits_{\beta\hat{f}}(\boldsymbol{x})=(\beta
Q+I)^{-1}(\boldsymbol{x}-\beta\boldsymbol{h}).$
This algorithm converges to a saddle point of problem (23) [13]. An advantage
of the primal-dual algorithm lies in the fact that the updates of the dual
variables in (24) and (25) can be performed very easily (in particular, an
explicit formula for the projection in (24) is available [22]), compared with
projecting the primal variable $\boldsymbol{x}$ onto the feasible set of
problem (22). Algorithm 1 presented in this section has the same advantage
over directly handling the primal formulation in (15). $\blacksquare$
### 3.3 Algorithm for frictional contact problem
We have seen in section 3.2 that problem (14) can be solved with Algorithm 1.
To deal with the frictional contact problem in (13), we make two alterations
to Algorithm 1 as follows.
One alteration is to implement an acceleration scheme. Since $\pi$ is a
strongly convex function, the acceleration scheme in Chambolle and Pock [13,
Algorithm 4] (see also Chambolle and Pock [12, Algorithm 8]) is expected to
work efficiently.
The other is to update $\tilde{g}_{j}$ $(j=1,\dots,c)$ at each iteration.
Since we have obtained problem (14) by replacing
$g_{j}+\mu\|T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u}\|$ in (13) with
$\tilde{g}_{j}$, a natural update may be
$\displaystyle\tilde{g}_{j}:=g_{j}+\mu\|T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u}^{(k)}\|,\quad
j=1,\dots,c.$
It is worth noting that, with this update, there is no guarantee of
convergence to a solution of problem (13). In the numerical experiments
reported in section 4, the proposed algorithm converges to a solution of every
problem instance.
Application of the two alterations above to Algorithm 1 yields Algorithm 2.
Here, $\sigma_{T}$ and $\mu_{\pi}$ are the maximum singular value of $T$ and
the minimum eigenvalue of $\nabla^{2}\pi(\boldsymbol{x})=K$, respectively.
In line 10 of Algorithm 2, we solve a system of linear equations to obtain
$\Delta\boldsymbol{u}^{(k+1)}$. We adopt a preconditioned conjugate gradient
method with $\Delta\boldsymbol{u}^{(k)}$ as the initial point, because the
change from $\Delta\boldsymbol{u}^{(k)}$ to $\Delta\boldsymbol{u}^{(k+1)}$ is
expected to be small. The computation in line 8 is
performed for each contact candidate node $j=1,\dots,c$ independently, and
hence can be highly parallelized. In section 3.4, we present a concrete
procedure for computing $\Pi_{F}(\boldsymbol{s}_{j})$. All the other
computations in Algorithm 2 are additions and multiplications, which are
computationally cheap.
Algorithm 2 Primal-dual algorithm for frictional contact problem (13).
1:$\Delta\boldsymbol{u}^{(0)}$, $\boldsymbol{r}^{(0)}$, $\alpha_{0}>0$.
2:$\Delta\hat{\boldsymbol{u}}^{(0)}\leftarrow\Delta\boldsymbol{u}^{(0)}$.
3:$\beta_{0}\leftarrow 1/(\alpha_{0}\sigma_{T}^{2})$.
4:for $k=0,1,2,\dots$ do
5: $\tilde{g}_{j}^{(k)}\leftarrow
g_{j}+\mu\|T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u}^{(k)}\|$
$(j=1,\dots,c)$.
6:
$\boldsymbol{s}_{\mathrm{n}}\leftarrow\boldsymbol{r}_{\mathrm{n}}^{(k)}+\alpha_{k}(\tilde{\boldsymbol{g}}^{(k)}-T_{\mathrm{n}}^{\top}\,\Delta\hat{\boldsymbol{u}}^{(k)})$.
7:
$\boldsymbol{s}_{\mathrm{t}}\leftarrow\boldsymbol{r}_{\mathrm{t}}^{(k)}-\alpha_{k}T_{\mathrm{t}}^{\top}\,\Delta\hat{\boldsymbol{u}}^{(k)}$.
8: $\boldsymbol{r}_{j}^{(k+1)}\leftarrow\Pi_{F}(\boldsymbol{s}_{j})$
$(j=1,\dots,c)$.
9:
$\boldsymbol{b}\leftarrow\Delta\boldsymbol{u}^{(k)}+\beta_{k}(T\boldsymbol{r}^{(k+1)}+\boldsymbol{p})$.
10: Solve $(\beta_{k}K+I)\,\Delta\boldsymbol{u}^{(k+1)}=\boldsymbol{b}$ to
obtain $\Delta\boldsymbol{u}^{(k+1)}$.
11: $\displaystyle\theta_{k}\leftarrow\frac{1}{\sqrt{1+\mu_{\pi}\beta_{k}}}$.
12: $\displaystyle\alpha_{k+1}\leftarrow\frac{\alpha_{k}}{\theta_{k}}$,
$\beta_{k+1}\leftarrow\theta_{k}\beta_{k}$.
13:
$\Delta\hat{\boldsymbol{u}}^{(k+1)}\leftarrow\Delta\boldsymbol{u}^{(k+1)}+\theta_{k}(\Delta\boldsymbol{u}^{(k+1)}-\Delta\boldsymbol{u}^{(k)})$.
14:end for
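To make the iteration concrete, the following Python/NumPy sketch runs Algorithm 2 on a contrived single-node instance ($K=I_{3}$, one contact candidate node, a flat obstacle with $\boldsymbol{t}_{\mathrm{n}}=(0,0,-1)$). All data are illustrative and chosen by us, not taken from the paper's examples; the tiny system in line 10 is solved directly rather than by pcg. For this instance the solution can be computed by hand: the node sticks with $\Delta u_{z}=-g=-0.01$, $r_{\mathrm{n}}=-0.04$, and $\boldsymbol{r}_{\mathrm{t}}=(-0.01,0)$.

```python
import numpy as np

mu = 0.5                                   # friction coefficient
K = np.eye(3)                              # toy stiffness matrix
p = np.array([0.01, 0.0, -0.05])           # pushes the node down and sideways
g = 0.01                                   # initial gap
t_n = np.array([0.0, 0.0, -1.0])           # gap = g - t_n @ du
T_t = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
T = np.column_stack([t_n, T_t])            # d x 3c with c = 1

def proj_F(s):
    """Projection onto the friction cone (Algorithm 3)."""
    nst = np.linalg.norm(s[1:])
    if -mu * s[0] - nst >= 0:              # s already lies in F
        return s.copy()
    lam2 = -s[0] + mu * nst
    if lam2 <= 0:                          # polar cone: project to the apex
        return np.zeros(3)
    d_t = s[1:] / nst                      # nst > 0 holds in this branch
    return lam2 / (1 + mu**2) * np.concatenate(([-1.0], mu * d_t))

sigma_T = np.linalg.svd(T, compute_uv=False)[0]   # max singular value of T
mu_pi = np.linalg.eigvalsh(K)[0]                  # min eigenvalue of K
alpha = 0.1
beta = 1.0 / (alpha * sigma_T**2)
du = np.zeros(3); r = np.zeros(3); du_hat = du.copy()
for k in range(20000):
    g_tld = g + mu * np.linalg.norm(T_t.T @ du)        # line 5
    s = np.empty(3)
    s[0] = r[0] + alpha * (g_tld - t_n @ du_hat)       # line 6
    s[1:] = r[1:] - alpha * (T_t.T @ du_hat)           # line 7
    r = proj_F(s)                                      # line 8
    b = du + beta * (T @ r + p)                        # line 9
    du_new = np.linalg.solve(beta * K + np.eye(3), b)  # line 10 (direct solve)
    theta = 1.0 / np.sqrt(1 + mu_pi * beta)            # line 11
    alpha, beta = alpha / theta, theta * beta          # line 12
    du_hat = du_new + theta * (du_new - du)            # line 13
    du = du_new
```

In our runs of this sketch the iterates approach the hand-computed solution, with the gap closing to zero and the equilibrium residual becoming small.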
### 3.4 Projection onto Coulomb’s friction cone
Figure 2: Projection onto the Coulomb friction cone.
In line 8 of Algorithm 2, we compute the projection of
$\boldsymbol{s}_{j}\in\mathbb{R}^{3}$ onto the Coulomb friction cone $F$. This
can be easily performed as follows.
For notational simplicity, we consider computation of
$\Pi_{F}(s_{\mathrm{n}},\boldsymbol{s}_{\mathrm{t}})$, i.e., we omit subscript
$j$. Define $\boldsymbol{\xi}_{1}$, $\boldsymbol{\xi}_{2}\in\mathbb{R}^{3}$ by
$\displaystyle\boldsymbol{\xi}_{1}=-\frac{1}{1+\mu^{2}}\begin{bmatrix}\mu\\\\[4.30554pt]
\displaystyle\frac{\boldsymbol{s}_{\mathrm{t}}}{\|\boldsymbol{s}_{\mathrm{t}}\|}\\\
\end{bmatrix},\quad\boldsymbol{\xi}_{2}=\frac{1}{1+\mu^{2}}\begin{bmatrix}-1\\\\[4.30554pt]
\displaystyle\mu\frac{\boldsymbol{s}_{\mathrm{t}}}{\|\boldsymbol{s}_{\mathrm{t}}\|}\\\
\end{bmatrix};$
see Figure 2. (When $\boldsymbol{s}_{\mathrm{t}}=\boldsymbol{0}$, any unit
vector can be used instead of
$\boldsymbol{s}_{\mathrm{t}}/\|\boldsymbol{s}_{\mathrm{t}}\|$; for
implementation this case need not be treated separately; see Algorithm 3.)
Also, define $\lambda_{1}$, $\lambda_{2}\in\mathbb{R}$ by
$\displaystyle\lambda_{1}=-\mu
s_{\mathrm{n}}-\|\boldsymbol{s}_{\mathrm{t}}\|,\quad\lambda_{2}=-s_{\mathrm{n}}+\mu\|\boldsymbol{s}_{\mathrm{t}}\|.$
Then we have
$\displaystyle\Pi_{F}(s_{\mathrm{n}},\boldsymbol{s}_{\mathrm{t}})=\max\\{0,\lambda_{1}\\}\boldsymbol{\xi}_{1}+\max\\{0,\lambda_{2}\\}\boldsymbol{\xi}_{2}.$
(26)
It is worth noting that, when $\mu=1$, (26) coincides with the formula for the
projection onto the second-order cone [22] based on the spectral factorization
in the associated Jordan algebra.
The computational procedure of the projection is described in Algorithm 3.
Algorithm 3 Projection onto the Coulomb friction cone,
$\Pi_{F}(s_{\mathrm{n}},\boldsymbol{s}_{\mathrm{t}})$.
1:$(s_{\mathrm{n}},\boldsymbol{s}_{\mathrm{t}})\in\mathbb{R}\times\mathbb{R}^{2}$.
2:$\lambda_{1}\leftarrow-\mu s_{\mathrm{n}}-\|\boldsymbol{s}_{\mathrm{t}}\|$.
3:if $\lambda_{1}\geq 0$ then
4:
$\Pi_{F}(s_{\mathrm{n}},\boldsymbol{s}_{\mathrm{t}})\leftarrow(s_{\mathrm{n}},\boldsymbol{s}_{\mathrm{t}})$.
5:else
6: $\lambda_{2}\leftarrow-s_{\mathrm{n}}+\mu\|\boldsymbol{s}_{\mathrm{t}}\|$.
7: if $\lambda_{2}\leq 0$ then
8:
$\Pi_{F}(s_{\mathrm{n}},\boldsymbol{s}_{\mathrm{t}})\leftarrow(0,\boldsymbol{0})$.
9: else
10:
$\displaystyle\Pi_{F}(s_{\mathrm{n}},\boldsymbol{s}_{\mathrm{t}})\leftarrow\frac{\lambda_{2}}{1+\mu^{2}}\Bigl{(}-1,\mu\frac{\boldsymbol{s}_{\mathrm{t}}}{\|\boldsymbol{s}_{\mathrm{t}}\|}\Bigr{)}$.
11: end if
12:end if
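A direct Python translation of Algorithm 3 may read as follows (a hypothetical sketch; the paper's implementation is a compiled Matlab MEX-file, and the value of $\mu$ here is illustrative).

```python
import numpy as np

mu = 0.5  # friction coefficient (illustrative value)

def proj_F(s_n, s_t):
    """Projection onto the friction cone F, following Algorithm 3."""
    lam1 = -mu * s_n - np.linalg.norm(s_t)
    if lam1 >= 0:                       # s already lies in F
        return s_n, s_t.copy()
    lam2 = -s_n + mu * np.linalg.norm(s_t)
    if lam2 <= 0:                       # s lies in the polar cone: apex
        return 0.0, np.zeros(2)
    # remaining case: project onto the cone boundary; ||s_t|| > 0 here
    coef = lam2 / (1.0 + mu**2)
    return -coef, coef * mu * s_t / np.linalg.norm(s_t)
```

For instance, projecting $(0,(1,0))$ with $\mu=0.5$ gives $(-0.4,(0.2,0))$, which lies on the boundary $-\mu r_{\mathrm{n}}=\|\boldsymbol{r}_{\mathrm{t}}\|$ of the cone.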
## 4 Numerical experiments
In this section, we demonstrate the efficiency of the proposed method through
numerical experiments. Section 4.1 describes details of the implementation.
Sections 4.2 and 4.3 report on two numerical examples. Computation was carried
out on a 2.6 GHz Intel Core i7-9750H processor with 32 GB RAM. In the
following, we omit units of physical quantities for simplicity. Young’s
modulus and Poisson’s ratio of the elastic bodies are $1$ and $0.3$,
respectively.
### 4.1 Implementation
Algorithm 2 was implemented in Matlab ver. 9.8.0. The initial values were set
to $\Delta\boldsymbol{u}^{(0)}=\boldsymbol{0}$,
$\boldsymbol{r}^{(0)}=\boldsymbol{0}$, and $\alpha_{0}=10^{-1}$. The stopping
criterion was
$\|\Delta\boldsymbol{u}^{(k+1)}-\Delta\boldsymbol{u}^{(k)}\|\leq\epsilon$ with
$\epsilon=10^{-12}$.
The projection onto the friction cone in line 8 is computed with Algorithm 3.
The Matlab implementation of Algorithm 3 was compiled into a MEX-file using
Matlab Coder, where the loop over $j=1,\dots,c$ was implemented with the
Matlab built-in function parfor. For this parallel loop computation, Matlab
was allowed to use up to 6 cores.
As explained in section 3.3, a preconditioned conjugate gradient method was
used to solve the system of linear equations in line 10 of Algorithm 2. The
Matlab built-in function pcg was used, where the maximum number of iterations
and the tolerance for termination were set to $10^{4}$ and $10^{-10}$,
respectively. The minimum eigenvalue of $K$, i.e., $\mu_{\pi}$, was computed
by eigs($\,\cdot\,$,1,’smallestabs’). The maximum singular value of $T$, i.e.,
$\sigma_{T}$, was computed by svds($\,\cdot\,$,1,’largest’), setting the
maximum number of iterations to $10^{8}$.
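These ingredients have direct analogues in SciPy. The following sketch is our own illustration with random data, not the paper's Matlab code; it mirrors pcg (warm-started conjugate gradient), eigs($\,\cdot\,$,1,'smallestabs'), and svds($\,\cdot\,$,1,'largest').

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
d = 40
A = sp.random(d, d, density=0.2, random_state=0)
K = (A @ A.T + d * sp.identity(d)).tocsc()     # SPD stiffness-like matrix
T = rng.standard_normal((d, 9))

# minimum eigenvalue of K (cf. eigs(.,1,'smallestabs')), via shift-invert
mu_pi = spla.eigsh(K, k=1, sigma=0, return_eigenvectors=False)[0]
# maximum singular value of T (cf. svds(.,1,'largest'))
sigma_T = spla.svds(T, k=1, return_singular_vectors=False)[0]

# warm-started conjugate gradient for (beta*K + I) du = b (cf. pcg)
beta = 0.5
b = rng.standard_normal(d)
du_sol, info = spla.cg(sp.identity(d) + beta * K, b, x0=np.zeros(d),
                       maxiter=10**4)
```

The sparse routines agree with the dense NumPy computations on this small instance, and info == 0 indicates that the conjugate gradient iteration converged.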
For comparison, the same problem instances were solved with a regularized and
smoothed Newton method [24], which is implemented in ReSNA [23]. Specifically,
we solved problem (31) in appendix A, which is a second-order cone linear
complementarity problem. The parameter tole of ReSNA was set to $10^{-7}$.
Both for ReSNA and Algorithm 2, the stiffness matrix $K$ was stored in Matlab
sparse format.
Besides computation time, we compare the accuracy of the obtained solutions.
Referring to problem (13), we consider the following three residuals. First,
the residual of (13a) is
$\displaystyle\|K\,\Delta\boldsymbol{u}^{(k)}-\boldsymbol{p}-T_{\mathrm{n}}\boldsymbol{r}_{\mathrm{n}}^{(k)}-T_{\mathrm{t}}\boldsymbol{r}_{\mathrm{t}}^{(k)}\|.$
(27)
Second, as the residual of the complementarity conditions in (13b), we
consider
$\displaystyle\left|\sum_{j=1}^{c}\left\langle\begin{bmatrix}-g_{j}-\mu\|T_{\mathrm{t}j}^{\top}\Delta\boldsymbol{u}^{(k)}\|+\boldsymbol{t}_{\mathrm{n}j}^{\top}\Delta\boldsymbol{u}^{(k)}\\\
T_{\mathrm{t}j}^{\top}\Delta\boldsymbol{u}^{(k)}\\\
\end{bmatrix},\begin{bmatrix}r_{\mathrm{n}j}^{(k)}\\\
\boldsymbol{r}_{\mathrm{t}j}^{(k)}\\\
\end{bmatrix}\right\rangle\right|=|\langle\Delta\boldsymbol{u}^{(k)},T\boldsymbol{r}^{(k)}\rangle-\langle\tilde{\boldsymbol{g}}^{(k)},\boldsymbol{r}_{\mathrm{n}}^{(k)}\rangle|.$
(28)
Finally, for the inequality constraints
$g_{j}-\boldsymbol{t}_{\mathrm{n}j}^{\top}\Delta\boldsymbol{u}\geq 0$
$(j=1,\dots,c)$ in (13b), the residual is
$\displaystyle\|\min\\{\boldsymbol{g}-T_{\mathrm{n}}^{\top}\Delta\boldsymbol{u}^{(k)},\boldsymbol{0}\\}\|.$
(29)
It is worth noting that $\boldsymbol{r}_{j}^{(k)}\in F$ $(j=1,\dots,c)$ is
satisfied at every iteration of Algorithm 2, due to the projection in line 8.
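The three residuals can be packaged into a single routine. The following Python/NumPy sketch is our own illustration (the function and variable names are assumptions); the test data below are a hand-computed exact solution of a single-node instance, for which all three residuals vanish.

```python
import numpy as np

def residuals(du, r_n, r_t, K, p, Tn, Tt, g, mu):
    """Residuals (27)-(29) for a candidate solution of problem (13).
    Tn is d x c, Tt is d x 2c; r_t stacks the 2-vectors r_tj."""
    c = len(r_n)
    res_eq = np.linalg.norm(K @ du - p - Tn @ r_n - Tt @ r_t)        # (27)
    compl = 0.0
    for j in range(c):
        du_tj = Tt[:, 2 * j:2 * j + 2].T @ du
        compl += (-g[j] - mu * np.linalg.norm(du_tj) + Tn[:, j] @ du) * r_n[j]
        compl += du_tj @ r_t[2 * j:2 * j + 2]
    res_compl = abs(compl)                                           # (28)
    res_gap = np.linalg.norm(np.minimum(g - Tn.T @ du, 0.0))         # (29)
    return res_eq, res_compl, res_gap
```

A candidate that satisfies the equilibrium equation, the complementarity conditions, and the non-penetration constraints exactly yields three zero residuals.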
### 4.2 Example (I): two-dimensional problem
Figure 3: Problem setting of example (I).
Figure 4: Computational results of example (I). Solid line: the proposed
method; dotted line: ReSNA. (a) Computation time; (b) number of iterations of
the proposed method; (c) number of systems of linear equations solved in
ReSNA; (d) residual in (27); (e) residual in (28); and (f) residual in (29).
Consider the problem outlined in Figure 3, where an elastic body is in the
plane-stress state. The body is discretized uniformly into $N_{X}\times N_{Y}$
four-node quadrilateral (Q4) elements, where $N_{X}=2.5N_{Y}$ and the value of
$N_{Y}$ is varied to change the size of the problem instance. For the
implementation of the finite element method, we use a Matlab code due to
Andreassen et al. [5]. Uniformly distributed vertical traction of $0.01$ is
applied at the top edge. The bottom nodes can possibly come into contact with
a flat rigid obstacle. Hence, the number of contact candidate nodes is
$c=N_{X}$. The initial gaps are set to $g_{j}=0.01$ $(j=1,\dots,c)$. The
coefficient of friction is $\mu=0.5$.
Figure 4 collects the computational results for $N_{Y}=26$, 40, 54, 68, 80,
92, 104, 114, 124, 134, and 144. Here, “#DOF” means the number of degrees of
freedom of the nodal displacements, i.e., $d$. Figure 4a shows the computation
time. The dashed lines correspond to $O(d)$, $O(d^{2})$, and $O(d^{3})$. The
proposed method clearly outperforms ReSNA in terms of computation time. For
example, $382.6\,\mathrm{s}$ and $98222.0\,\mathrm{s}$ were required by the
proposed method and ReSNA, respectively, to solve the instance with $d=54600$
and $c=260$. At the equilibrium state, about 40% of the contact candidate
nodes are free, 10% are in sliding contact, and 50% are in sticking contact.
Figure 4b reports the number of iterations required by the proposed method.
ReSNA solves a system of linear equations at step 2.1 of Algorithm 2 in [24]
to obtain a search direction. Figure 4c reports the number of systems of
linear equations solved at step 2.1. The increase of this number in Figure 4c
is moderate compared with that in Figure 4b. Therefore, the growth of the
computation time of ReSNA observed in Figure 4a is likely due to the
increasing cost of numerically solving a system of linear equations at each
iteration.
Figure 4d, Figure 4e, and Figure 4f compare the residuals defined by (27),
(28), and (29), respectively. Concerning the residuals in (27) and (28),
Figure 4d and Figure 4e show that the solution obtained by the proposed method
has smaller residuals for every problem instance. In contrast, in terms of the
residual in (29), the solution obtained by ReSNA has a smaller residual,
although the residual of the solution obtained by the proposed method is also
sufficiently small, as observed in Figure 4f.
### 4.3 Example (II): three-dimensional problem
Figure 5: Problem setting of example (II).
Figure 6: Computational results of example (II) with $\mu=0.5$. Solid line:
the proposed method; dotted line: ReSNA. (a) Computation time; (b) number of
iterations of the proposed method; (c) number of systems of linear equations
solved in ReSNA; (d) residual in (27); (e) residual in (28); and (f) residual
in (29).
Figure 7: Iteration history of the proposed method for example (II) with
$\mu=0.5$, $N_{X}=56$, $N_{Y}=N_{Z}=28$, $d=141288$, and $c=1624$. (a) Number
of iterations of the conjugate gradient method; (b) residual in (27); (c)
residual in (28); and (d) residual in (29).
Figure 8: Computational results of the proposed method for example (II) with
three different values of the friction coefficient. Solid line: $\mu=0.5$;
dashed line: $\mu=1.0$; dash-dotted line: $\mu=1.5$. (a) Computation time; (b)
residual in (27); (c) residual in (28); and (d) residual in (29).
Consider a three-dimensional version of example (I), outlined in Figure 5. The
elastic body is discretized uniformly into $N_{X}\times N_{Y}\times N_{Z}$
8-node hexahedron elements, where $N_{X}=2N_{Y}=2N_{Z}$. A Matlab code due to
Ferrari and Sigmund [20] is used for the implementation of the finite element
method. Uniformly distributed vertical traction of $5\times 10^{-3}$ is
applied at all the top nodes. The bottom nodes are considered as contact
candidate nodes, the number of which is $c=N_{X}(N_{Y}+1)$. The initial gaps
are $g_{j}=0.005$ $(j=1,\dots,c)$. At the equilibrium state, about 33% of the
contact candidate nodes are free, 11% are in sliding contact, and 56% are in
sticking contact.
Figure 6 reports the computational results with $\mu=0.5$, where $N_{Y}=10$,
12, 14, 16, 18, 20, 22, 24, 26, and 28. It is observed in Figure 6a that the
proposed method outperforms ReSNA from the viewpoint of computation time.
Concerning the number of iterations, Figure 6b and Figure 6c show trends
similar to the ones observed in example (I). From Figure 6d, Figure 6e, and
Figure 6f we see that the accuracy of the solutions obtained by the proposed
method is comparable with that of the solutions obtained by ReSNA.
Figure 7 shows the iteration history of the proposed method. The number of
iterations required by pcg for solving a system of linear equations in line 10
of Algorithm 2 decreases as the algorithm approaches termination. The residual
in (27) decreases monotonically. In contrast, the residuals in (28) and (29)
are equal to zero for about the first 90 iterations. This is because the
contact candidate nodes are free and carry no reactions during these
iterations, since we set $\Delta\boldsymbol{u}^{(0)}=\boldsymbol{0}$ and
$\boldsymbol{r}^{(0)}=\boldsymbol{0}$. The residuals become positive when the
displacement violates the non-penetration condition, and then decrease
monotonically.
Figure 8 compares the performance of the proposed method for $\mu=0.5$, $1$,
and $1.5$. We observe that the performance is essentially independent of the
value of the friction coefficient.
## 5 Conclusions
This paper has developed a fast first-order optimization-based method for a
quasi-static contact problem with Coulomb's friction. The method is designed
based on an accelerated primal-dual algorithm applied to a convex optimization
problem that approximates the contact problem.
In the numerical experiments on problem instances with up to 140 thousand
degrees of freedom of the nodal displacements, it has been observed that the
proposed method successfully converges to a solution for every problem
instance, although the method has no guarantee of convergence. It has also
been demonstrated that the proposed method outperforms a regularized and
smoothed Newton method for the second-order cone complementarity problem.
Furthermore, the proposed method is easy to implement.
#### Acknowledgments
This work is supported by JSPS KAKENHI 17K06633, 21K04351, and JST CREST Grant
No. JPMJCR1911, Japan.
## References
* Acary et al. [2018] V. Acary, M. Brémond, and O. Huber: On solving contact problems with Coulomb friction: formulations and numerical comparisons. In R. I. Leine, V. Acary, O. Brüls (eds.): Advanced Topics in Nonsmooth Dynamics (Springer International Publishing, Cham, 2018), 375–457.
* Acary and Brogliato [2008] V. Acary and B. Brogliato: Numerical Methods for Nonsmooth Dynamical Systems (Springer-Verlag, Berlin, 2008).
* Acary et al. [2011] V. Acary, F. Cadoux, C. Lemaréchal, and J. Malick: A formulation of the linear discrete Coulomb friction problem via convex optimization. ZAMM, 91 (2011), 155–175.
* Alart and Curnier [1991] P. Alart and A. Curnier: A mixed formulation for frictional contact problems prone to Newton like solution methods. Computer Methods in Applied Mechanics and Engineering, 92 (1991), 353–375.
* Andreassen et al. [2011] E. Andreassen, A. Clausen, M. Schevenels, B. S. Lazarov, and O. Sigmund: Efficient topology optimization in MATLAB using 88 lines of code. Structural and Multidisciplinary Optimization, 43 (2011), 1–16.
* Areias et al. [2014] P. Areias, A. Pinto da Costa, T. Rabczuk, F. J. M. Queirós de Melo, D. Dias-da-Costa, and M. Bezzeghoud: An alternative formulation for quasi-static frictional and cohesive contact problems. Computational Mechanics, 53 (2014), 807–824.
* Bertails-Descoubes et al. [2011] F. Bertails-Descoubes, F. Cadoux, G. Daviet, and V. Acary: A nonsmooth Newton solver for capturing exact Coulomb friction in fiber assemblies. ACM Transactions on Graphics, 30 (2011), Article No. 6.
* Bonettini and Ruggiero [2012] S. Bonettini and V. Ruggiero: On the convergence of primal–dual hybrid gradient algorithms for total variation image restoration. Journal of Mathematical Imaging and Vision, 44 (2012), 236–253.
* Brogliato [2016] B. Brogliato: Nonsmooth Mechanics: Models, Dynamics and Control (3rd ed.) (Springer International Publishing, Cham, 2016).
* Cadoux [2009] F. Cadoux: An optimization-based algorithm for Coulomb’s frictional contact. ESAIM: Proceedings and Surveys, 27 (2009), 54–69.
* Chambolle and Pock [2011] A. Chambolle and T. Pock: A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40 (2011), 120–145.
* Chambolle and Pock [2016a] A. Chambolle and T. Pock: An introduction to continuous optimization for imaging. Acta Numerica, 25 (2016), 161–319.
* Chambolle and Pock [2016b] A. Chambolle and T. Pock: On the ergodic convergence rates of a first-order primal–dual algorithm. Mathematical Programming, 159 (2016), 253–287.
* Chen et al. [2014] Y. Chen, G. Lan, and Y. Ouyang: Optimal primal-dual methods for a class of saddle point problems. SIAM Journal on Optimization, 24 (2014), 1779–1814.
* Chen and Pan [2012] J.-S. Chen and S. Pan: A survey on SOC complementarity functions and solution methods for SOCPs and SOCCPs. Pacific Journal of Optimization, 8 (2012), 33–74.
* Christensen [2002] P. W. Christensen: A semi-smooth Newton method for elasto-plastic contact problems. International Journal of Solids and Structures, 39 (2002), 2323–2341.
* Christensen et al. [1998] P. W. Christensen, A. Klarbring, J.-S. Pang, and N. Strömberg: Formulation and comparison of algorithms for frictional contact problems. International Journal for Numerical Methods in Engineering, 42 (1998), 145–173.
* Duvaut and Lions [1976] G. Duvaut and J. L. Lions: Inequalities in Mechanics and Physics (Springer-Verlag, Berlin, 1976).
* Esser et al. [2010] E. Esser, X. Zhang, and T. F. Chan: A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM Journal on Imaging Sciences, 3 (2010), 1015–1046.
* Ferrari and Sigmund [2020] F. Ferrari and O. Sigmund: A new generation 99 line Matlab code for compliance topology optimization and its extension to 3D. Structural and Multidisciplinary Optimization, 62 (2020), 2211–2228.
* Fujita and Kanno [2019] S. Fujita and Y. Kanno: Application of accelerated gradient method to equilibrium analysis of trusses with nonlinear elastic materials (in Japanese). Journal of Structural and Construction Engineering (Transactions of AIJ), 84 (2019), 1223–1230.
* Fukushima et al. [2001] M. Fukushima, Z.-Q. Luo, and P. Tseng: Smoothing functions for second-order-cone complementarity problems. SIAM Journal on Optimization, 12 (2001), 436–460.
* Hayashi [2020] S. Hayashi: Website of ReSNA (Regularized Smoothing Newton Algorithm). http://optima.ws.hosei.ac.jp/hayashi/ReSNA/ (Accessed December 2020).
* Hayashi et al. [2005] S. Hayashi, N. Yamashita, and M. Fukushima: A combined smoothing and regularization method for monotone second-order cone complementarity problems. SIAM Journal on Optimization, 15 (2005), 593–615.
* He et al. [2014] B. He, Y. You, and X. Yuan: On the convergence of primal-dual hybrid gradient algorithm. SIAM Journal on Imaging Sciences, 7 (2014), 2526–2537.
* Kanno [2011] Y. Kanno: Nonsmooth Mechanics and Convex Optimization (CRC Press, Boca Raton, 2011).
* Kanno [2016] Y. Kanno: A fast first-order optimization approach to elastoplastic analysis of skeletal structures. Optimization and Engineering, 17 (2016), 861–896.
* Kanno [2020a] Y. Kanno: A note on a family of proximal gradient methods for quasi-static incremental problems in elastoplastic analysis. Theoretical and Applied Mechanics Letters, 10 (2020), 315–320.
* Kanno [2020b] Y. Kanno: An accelerated Uzawa method for application to frictionless contact problem. Optimization Letters, 14 (2020), 1845–1854.
* Kanno [to appear] Y. Kanno: Accelerated proximal gradient method for bi-modulus static elasticity. Optimization and Engineering, to appear. DOI:10.1007/s11081-021-09595-2
* Kanno et al. [2006] Y. Kanno, J. A. C. Martins, and A. Pinto da Costa: Three-dimensional quasi-static frictional contact by using second-order cone linear complementarity problem. International Journal for Numerical Methods in Engineering, 65 (2006), 62–83.
* Kikuchi and Oden [1988] N. Kikuchi and J. T. Oden: Contact Problems in Elasticity (SIAM, Philadelphia, 1988).
* Klarbring [1986] A. Klarbring: A mathematical programming approach to three-dimensional contact problems with friction. Computer Methods in Applied Mechanics and Engineering, 58 (1986), 175–200.
* Klarbring [1999] A. Klarbring: Contact, friction, discrete mechanical structures and mathematical programming. In P. Wriggers, P. Panagiotopoulos (eds.): New Developments in Contact Problems (Springer-Verlag, Wien, 1999), 55–100.
* Malitsky and Pock [2018] Y. Malitsky and T. Pock: A first-order primal-dual algorithm with linesearch. SIAM Journal on Optimization, 28 (2018), 411–432.
* Martins and Pinto da Costa [2000] J. M. C. Martins and A. Pinto da Costa: Stability of finite-dimensional nonlinear elastic systems with unilateral contact and friction. International Journal of Solids and Structures, 37 (2000), 2519–2564.
* Martins et al. [2002] J. A. C. Martins, A. Pinto da Costa, and F. M. F. Simões: Some notes on friction and instabilities. In J. A. C. Martins, M. Raous (eds.): Friction and Instabilities (Springer-Verlag, Wien, 2002), 65–136.
* Mazhar et al. [2015] H. Mazhar, T. Heyn, D. Negrut, and A. Tasora: Using Nesterov’s method to accelerate multibody dynamics with friction and contact. ACM Transactions on Graphics, 34 (2015), Article No. 32.
* Melanz et al. [2017] D. Melanz, L. Fang, P. Jayakumar, and D. Negrut: A comparison of numerical methods for solving multibody dynamics problems with frictional contact modeled via differential variational inequalities. Computer Methods in Applied Mechanics and Engineering, 320 (2017), 668–693.
* Pinto da Costa et al. [2004] A. Pinto da Costa, J. A. C. Martins, I. N. Figueiredo, and J. J. Júdice: The directional instability problem in systems with frictional contacts. Computer Methods in Applied Mechanics and Engineering, 193 (2004), 357–384.
* Renard [2013] Y. Renard: Generalized Newton’s methods for the approximation and resolution of frictional contact problems in elasticity. Computer Methods in Applied Mechanics and Engineering, 256 (2013), 38–55.
* Shimizu and Kanno [2018] W. Shimizu and Y. Kanno: Accelerated proximal gradient method for elastoplastic analysis with von Mises yield criterion. Japan Journal of Industrial and Applied Mathematics, 35 (2018), 1–32.
* Shimizu and Kanno [2020] W. Shimizu and Y. Kanno: A note on accelerated proximal gradient method for elastoplastic analysis with Tresca yield criterion. Journal of the Operations Research Society of Japan, 63 (2020), 78–92.
* Wriggers [2006] P. Wriggers: Computational Contact Mechanics (2nd ed.) (Springer-Verlag, Berlin, 2006).
* Yoshise [2012] A. Yoshise: Complementarity problems over symmetric cones: a survey of recent developments in several aspects. In M. F. Anjos, J. B. Lasserre (eds.): Handbook on Semidefinite, Conic and Polynomial Optimization (Springer, New York, 2012), 339–375.
## Appendix A Formulation as second-order cone linear complementarity problem
ReSNA [23] is a Matlab software package implementing a regularized smoothing
Newton method proposed by Hayashi et al. [24]. It solves a second-order cone
linear complementarity problem (SOCLCP) in the following form:
$\displaystyle\mathcal{K}\ni\boldsymbol{x}\perp\boldsymbol{y}\in\mathcal{K},$ (30a)
$\displaystyle\boldsymbol{y}=M_{11}\boldsymbol{x}+M_{12}\boldsymbol{v}+\boldsymbol{w}_{1},$ (30b)
$\displaystyle M_{21}\boldsymbol{x}+M_{22}\boldsymbol{v}+\boldsymbol{w}_{2}=\boldsymbol{0}.$ (30c)
Here, $\boldsymbol{x}$, $\boldsymbol{y}$, and $\boldsymbol{v}$ are unknown
variables, and $\mathcal{K}$ is a Cartesian product of some second-order
cones. In the numerical experiments reported in section 4, we use ReSNA for
comparison.
The method proposed in this paper attempts to solve problem (13) in section 2.
Kanno et al. [31] showed that this problem can be recast as the following
SOCLCP:
$\displaystyle K\Delta\boldsymbol{u}=\boldsymbol{p}+T_{\mathrm{n}}\boldsymbol{r}_{\mathrm{n}}+T_{\mathrm{t}}\boldsymbol{r}_{\mathrm{t}},$ (31a)
$\displaystyle\mathbb{R}_{+}\ni(g_{j}-\boldsymbol{t}_{\mathrm{n}j}^{\top}\Delta\boldsymbol{u})\perp(-r_{\mathrm{n}j})\in\mathbb{R}_{+},\quad j=1,\dots,c,$ (31b)
$\displaystyle L^{3}\ni\begin{bmatrix}\lambda_{j}\\ T_{\mathrm{t}j}^{\top}\Delta\boldsymbol{u}\end{bmatrix}\perp\begin{bmatrix}-\mu r_{\mathrm{n}j}\\ \boldsymbol{r}_{\mathrm{t}j}\end{bmatrix}\in L^{3},\quad j=1,\dots,c.$ (31c)
Here, $\mathbb{R}_{+}^{n}$ and $L^{n}$ are the nonnegative orthant and the
second-order cone, respectively, i.e.,
$\displaystyle\mathbb{R}_{+}^{n}=\{\boldsymbol{x}\in\mathbb{R}^{n}\mid\boldsymbol{x}\geq\boldsymbol{0}\},\qquad L^{n}=\{(x_{0},\boldsymbol{x}_{1})\in\mathbb{R}\times\mathbb{R}^{n-1}\mid x_{0}\geq\|\boldsymbol{x}_{1}\|\},$
and $\lambda_{j}\in\mathbb{R}$ $(j=1,\dots,c)$ are additional unknown
variables.
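First-order methods of the kind developed in this paper repeatedly project onto such cones. As an illustration of the cone $L^{n}$ only (this is the standard closed-form projection, not a step taken verbatim from Algorithm 2):

```python
import numpy as np

# Euclidean projection onto the second-order cone
# L^n = {(x0, x1) in R x R^{n-1} : x0 >= ||x1||}.
def proj_soc(x):
    x0, x1 = x[0], x[1:]
    nrm = np.linalg.norm(x1)
    if nrm <= x0:            # x is already in the cone
        return x.astype(float).copy()
    if nrm <= -x0:           # x lies in the polar cone: project to the origin
        return np.zeros(len(x))
    alpha = 0.5 * (x0 + nrm) # otherwise project onto the cone's boundary
    return np.concatenate(([alpha], (alpha / nrm) * x1))

print(proj_soc(np.array([0.0, 2.0, 0.0])))  # -> [1. 1. 0.]
```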
Problem (31) can be embedded into the form of (30) as follows. Put
$\displaystyle\boldsymbol{x}=\begin{bmatrix}\boldsymbol{g}-T_{\mathrm{n}}^{\top}\Delta\boldsymbol{u}\\ \lambda_{1}\\ T_{\mathrm{t}1}^{\top}\Delta\boldsymbol{u}\\ \vdots\\ \lambda_{c}\\ T_{\mathrm{t}c}^{\top}\Delta\boldsymbol{u}\end{bmatrix},\quad\boldsymbol{y}=\begin{bmatrix}-\boldsymbol{r}_{\mathrm{n}}\\ -\mu r_{\mathrm{n}1}\\ \boldsymbol{r}_{\mathrm{t}1}\\ \vdots\\ -\mu r_{\mathrm{n}c}\\ \boldsymbol{r}_{\mathrm{t}c}\end{bmatrix},\quad\boldsymbol{v}=\begin{bmatrix}\Delta\boldsymbol{u}\\ \boldsymbol{\lambda}\\ \boldsymbol{r}_{\mathrm{n}}\\ \boldsymbol{r}_{\mathrm{t}}\end{bmatrix}$ (32)
to see that (31b) and (31c) are equivalently rewritten as (30a) with
$\mathcal{K}\subset\mathbb{R}^{4c}$ defined by
$\displaystyle\mathcal{K}=\mathbb{R}_{+}^{c}\times L^{3}\times\dots\times L^{3}$ ($c$ copies of $L^{3}$).
Define $\boldsymbol{e}_{1}\in\mathbb{R}^{3}$ and $E_{2}\in\mathbb{R}^{3\times 2}$ by
$\displaystyle\boldsymbol{e}_{1}=\begin{bmatrix}1\\ 0\\ 0\end{bmatrix},\quad E_{2}=\begin{bmatrix}0&0\\ 1&0\\ 0&1\end{bmatrix}.$
We see that the relation between $\boldsymbol{y}$ and $\boldsymbol{v}$ in (32)
can be written as
$\displaystyle\boldsymbol{y}$ $\displaystyle=M_{12}\boldsymbol{v}$
with $M_{12}\in\mathbb{R}^{(4c)\times(d+4c)}$ defined by
$\displaystyle M_{12}=\begin{bmatrix}O_{c,d}&O_{c,c}&-I_{c}&O_{c,2c}\\ O_{3c,d}&O_{3c,c}&-\mu I_{c}\otimes\boldsymbol{e}_{1}&I_{c}\otimes E_{2}\end{bmatrix},$ (33)
where we use $\otimes$ to denote the Kronecker product. Similarly, define
$M_{22}^{(1)}\in\mathbb{R}^{(4c)\times(d+4c)}$ and
$\boldsymbol{w}_{2}^{(1)}\in\mathbb{R}^{4c}$ by
$\displaystyle M_{22}^{(1)}=\begin{bmatrix}T_{\mathrm{n}}^{\top}&O_{c,c}&O_{c,c}&O_{c,2c}\\ -(I_{c}\otimes E_{2})T_{\mathrm{t}}^{\top}&-I_{c}\otimes\boldsymbol{e}_{1}&O_{3c,c}&O_{3c,2c}\end{bmatrix},\quad\boldsymbol{w}_{2}^{(1)}=\begin{bmatrix}-\boldsymbol{g}\\ O_{3c,1}\end{bmatrix}$
to see that the relation between $\boldsymbol{x}$ and $\boldsymbol{v}$ is
reduced to
$\displaystyle\boldsymbol{x}+M_{22}^{(1)}\boldsymbol{v}+\boldsymbol{w}_{2}^{(1)}=\boldsymbol{0}.$
Furthermore, define $M_{22}^{(2)}\in\mathbb{R}^{d\times(d+4c)}$ and
$\boldsymbol{w}_{2}^{(2)}\in\mathbb{R}^{d}$ by
$\displaystyle M_{22}^{(2)}=\begin{bmatrix}K&O_{d,c}&-T_{\mathrm{n}}&-T_{\mathrm{t}}\end{bmatrix},\quad\boldsymbol{w}_{2}^{(2)}=-\boldsymbol{p}.$
Then we see that (31a) is reduced to
$\displaystyle
M_{22}^{(2)}\boldsymbol{v}+\boldsymbol{w}_{2}^{(2)}=\boldsymbol{0}.$
Consequently, problem (31) can be transformed into the form of (30) with
$\displaystyle M_{11}=O_{4c,4c},\quad\boldsymbol{w}_{1}=O_{4c,1},\quad M_{21}=\begin{bmatrix}I_{4c}\\ O_{d,4c}\end{bmatrix},\quad M_{22}=\begin{bmatrix}M_{22}^{(1)}\\ M_{22}^{(2)}\end{bmatrix},\quad\boldsymbol{w}_{2}=\begin{bmatrix}\boldsymbol{w}_{2}^{(1)}\\ \boldsymbol{w}_{2}^{(2)}\end{bmatrix},$
and $M_{12}$ in (33).
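The block structure of $M_{12}$ in (33) can be assembled directly with Kronecker products; a minimal sketch with arbitrary illustrative values of $\mu$, $c$, and $d$ (the helper `build_M12` is not part of the paper's code):

```python
import numpy as np

# Dimension check for the block matrix M12 of (33): with c contact candidate
# nodes and d displacement dofs, M12 must have shape (4c) x (d + 4c).
def build_M12(mu, c, d):
    e1 = np.array([[1.0], [0.0], [0.0]])                 # e_1 in R^3
    E2 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # E_2 in R^{3x2}
    top = np.hstack([np.zeros((c, d)), np.zeros((c, c)),
                     -np.eye(c), np.zeros((c, 2 * c))])
    bot = np.hstack([np.zeros((3 * c, d)), np.zeros((3 * c, c)),
                     np.kron(-mu * np.eye(c), e1),       # -mu I_c (x) e_1
                     np.kron(np.eye(c), E2)])            # I_c (x) E_2
    return np.vstack([top, bot])

print(build_M12(0.5, c=4, d=9).shape)  # -> (16, 25)
```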
## Appendix B Optimality condition of problem (15)
We can confirm the equivalence of (14) and (15) as follows.
Observe that problem (15) is written in an explicit manner as follows:
$\displaystyle\mathop{\mathrm{Minimize}}_{\Delta\boldsymbol{u}\in\mathbb{R}^{d}}\quad\frac{1}{2}\Delta\boldsymbol{u}^{\top}\,K\,\Delta\boldsymbol{u}-\boldsymbol{p}^{\top}\,\Delta\boldsymbol{u}$ (34a)
$\displaystyle\mathop{\mathrm{subject~{}to}}\quad\begin{bmatrix}-\tilde{g}_{j}+\boldsymbol{t}_{\mathrm{n}j}^{\top}\,\Delta\boldsymbol{u}\\ T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u}\end{bmatrix}\in F^{*},\quad j=1,\dots,c.$ (34b)
Since $3c<d$, problem (34) has an interior feasible solution. The Lagrangian
of problem (34) is defined by
$\displaystyle L(\Delta\boldsymbol{u};\boldsymbol{r})=\begin{dcases*}\frac{1}{2}\Delta\boldsymbol{u}^{\top}\,K\,\Delta\boldsymbol{u}-\boldsymbol{p}^{\top}\,\Delta\boldsymbol{u}-\sum_{j=1}^{c}\left\langle\begin{bmatrix}r_{\mathrm{n}j}\\ \boldsymbol{r}_{\mathrm{t}j}\end{bmatrix},\begin{bmatrix}-\tilde{g}_{j}+\boldsymbol{t}_{\mathrm{n}j}^{\top}\,\Delta\boldsymbol{u}\\ T_{\mathrm{t}j}^{\top}\,\Delta\boldsymbol{u}\end{bmatrix}\right\rangle & if $\begin{bmatrix}r_{\mathrm{n}j}\\ \boldsymbol{r}_{\mathrm{t}j}\end{bmatrix}\in F$ $(j=1,\dots,c)$,\\ -\infty & otherwise.\end{dcases*}$
Indeed, from the duality between $F$ and $F^{*}$ we see that problem (34) is
equivalent to the following one:
$\displaystyle\mathop{\mathrm{Minimize}}_{\Delta\boldsymbol{u}\in\mathbb{R}^{d}}\quad\sup_{\boldsymbol{r}\in\mathbb{R}^{3c}}L(\Delta\boldsymbol{u};\boldsymbol{r}).$
Here,
$\displaystyle\sup_{\boldsymbol{r}\in\mathbb{R}^{3c}}L(\Delta\boldsymbol{u};\boldsymbol{r})$
is finite if and only if (14b) is satisfied. Also, the stationarity condition
of $L(\Delta\boldsymbol{u};\boldsymbol{r})$ with respect to
$\Delta\boldsymbol{u}$ is (14a). Thus, (14) is a necessary and sufficient
condition for optimality of problem (15).
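For concreteness, the stationarity computation can be spelled out. Writing $\boldsymbol{t}_{\mathrm{n}j}$ and $T_{\mathrm{t}j}$ for the blocks assembled into $T_{\mathrm{n}}$ and $T_{\mathrm{t}}$ as in Appendix A (and assuming, consistently with (31a), that (14a) is the equilibrium equation):

```latex
\nabla_{\Delta\boldsymbol{u}}\,L(\Delta\boldsymbol{u};\boldsymbol{r})
  = K\,\Delta\boldsymbol{u} - \boldsymbol{p}
    - \sum_{j=1}^{c}\bigl( r_{\mathrm{n}j}\,\boldsymbol{t}_{\mathrm{n}j}
      + T_{\mathrm{t}j}\,\boldsymbol{r}_{\mathrm{t}j} \bigr)
  = \boldsymbol{0}
  \;\Longleftrightarrow\;
  K\,\Delta\boldsymbol{u}
  = \boldsymbol{p} + T_{\mathrm{n}}\boldsymbol{r}_{\mathrm{n}}
    + T_{\mathrm{t}}\boldsymbol{r}_{\mathrm{t}}.
```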
Amitsur subgroup and noncommutative motives
Saša Novaković
January 2021
_For Iliana._
Abstract. This paper addresses the problem of calculating the Amitsur subgroup
of a proper $k$-scheme. Under mild hypotheses, we calculate this subgroup for
proper $k$-varieties $X$ with $\mathrm{Pic}(X)\simeq\mathbb{Z}^{\oplus m}$,
using a classification of so-called absolutely split vector bundles
($AS$-bundles for short). We also show that the Brauer group of $X$ is
isomorphic to $\mathrm{Br}(k)$ modulo the Amitsur subgroup, provided $X$ is
geometrically rational. Our results also enable us to classify $AS$-bundles on
twisted flags. Moreover, we give an alternative proof of a result due to
Merkurjev and Tignol, stating that the Amitsur subgroup of a twisted flag is
generated by a certain subset of the set of classes of Tits algebras of the
corresponding algebraic group. This result of Merkurjev and Tignol is in fact
a corollary of a more general theorem that we prove. The obtained results also
have consequences for the noncommutative motives of the twisted flags under
consideration. In particular, we show that a certain noncommutative motive of
a twisted flag is a birational invariant, generalizing in this way a result of
Tabuada. We extend this result to $X$ admitting a certain type of
semiorthogonal decomposition.
## 1. Introduction
Let $f\colon X\rightarrow S$ be a scheme that is separated and of finite type
over a Noetherian scheme $S$, and assume
$\mathcal{O}_{S}\xrightarrow{\simeq}f_{*}\mathcal{O}_{X}$. Then, for each
$S$-scheme $T$ there is an exact sequence
$\displaystyle
0\longrightarrow\mathrm{Pic}(T)\longrightarrow\mathrm{Pic}(X_{T})\longrightarrow\mathrm{Pic}_{(X/S)(\mathrm{fppf})}(T)\stackrel{{\scriptstyle\delta}}{{\longrightarrow}}\mathrm{Br}^{\prime}(T)\longrightarrow\mathrm{Br}^{\prime}(X_{T}).$
Here $\mathrm{Pic}_{(X/S)}$ denotes the Picard functor and
$\mathrm{Pic}_{(X/S)(\mathrm{fppf})}$ the associated sheaf in the fppf
topology. Specializing the above sequence to the case $X$ a proper variety
over a field $k$ and $T=\mathrm{Spec}(k)$, Liedtke [18] called the group
$\delta(\mathrm{Pic}_{(X/S)(\mathrm{fppf})}(k))$ the _Amitsur subgroup_ of $X$
in $\mathrm{Br}(k)$. This subgroup is denoted by $\mathrm{Am}(X)$. If $X$ is
smooth and proper over $k$, it follows that
$\mathrm{Am}(X)=\mathrm{ker}(\mathrm{Br}(k)\rightarrow\mathrm{Br}(k(X)))$. The
group $\mathrm{ker}(\mathrm{Br}(k)\rightarrow\mathrm{Br}(k(X)))$ is also
denoted by $\mathrm{Br}(k(X)/k)$ and was studied for instance in [20] and [8].
If $X$ is a Brauer–Severi variety corresponding to a central simple algebra $A$, it is a
classical result due to Châtelet that $\mathrm{Am}(X)=\langle[A]\rangle$. In
this case $\mathrm{Br}(X)\simeq\mathrm{Br}(k)/\mathrm{Am}(X)$. It is shown in
[18] that $\mathrm{Am}(X)$ is a birational invariant for $X$ smooth and proper
over $k$. Note that if $X$ admits a $k$-rational point, then
$\mathrm{Am}(X)=0$. On the other hand, there are proper varieties with trivial
Amitsur subgroup but without rational points (see [18], Proposition 5.4). Special
varieties for which $\mathrm{Am}(X)$ has been calculated can be found in [8], [18]
and [20]. In this paper, we calculate $\mathrm{Am}(X)$ and
$\mathrm{Br}(X)$ for a certain class of proper $k$-schemes $X$. Furthermore,
we explain the consequences of our results for noncommutative motives.
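A concrete instance of Châtelet's result quoted above (a standard example, not drawn from [18]): if $Q$ is a quaternion division algebra over $k$ and $C$ its Brauer–Severi conic, then, since $[Q]$ has order $2$ in $\mathrm{Br}(k)$ and $C$ has no $k$-rational point,

```latex
\mathrm{Am}(C) = \langle [Q] \rangle = \{0,\,[Q]\} \simeq \mathbb{Z}/2,
\qquad
\mathrm{Br}(C) \simeq \mathrm{Br}(k)/\langle [Q]\rangle.
```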
To state our results, we fix some notations and recall some facts. Let $X$ be
a proper and geometrically integral $k$-scheme and denote by $X_{s}$ the base
change $X\otimes_{k}k^{s}$ to the separable closure. If
$\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$, then $\mathrm{Pic}(X)\simeq
r_{1}\mathbb{Z}\oplus\cdots\oplus r_{m}\mathbb{Z}$. Let us fix a basis
$\mathcal{L}_{1},...,\mathcal{L}_{m}$ of $\mathrm{Pic}(X_{s})$. Using a basis
of $\mathrm{Pic}(X)$ one can show that there are line bundles
$\mathcal{J}_{i}\in\mathrm{Pic}(X)$ satisfying
$\mathcal{J}_{i}\otimes_{k}k^{s}\simeq\mathcal{L}_{i}^{\otimes c_{i}}$ for
some integers $c_{i}\geq 1$. Now choose the line bundles $\mathcal{J}_{i}$
such that the $c_{i}$ are minimal. According to [23], Proposition 3.4, these
$\mathcal{J}_{i}$ are unique up to isomorphism. Assume there are pure vector
bundles $\mathcal{M}_{i}$ of type $\mathcal{L}_{i}\in\mathrm{Pic}(X_{s})$. A
vector bundle $\mathcal{E}$ on a proper $k$-scheme $X$ is called _pure of
type_ $\mathcal{W}$ if there is an indecomposable vector bundle $\mathcal{W}$
on $X\otimes_{k}\bar{k}$ such that
$\mathcal{E}\otimes_{k}{\bar{k}}\simeq\mathcal{W}^{\oplus m}$. Being pure of
type $\mathcal{L}_{i}$ is equivalent to
$\mathcal{L}_{i}\in\mathrm{Pic}_{\Gamma}(X_{s})$, where $\Gamma$ denotes the
absolute Galois group (see [23], Theorem 4.5). We know from [23], Proposition
3.5 that the bundle $\mathcal{M}_{i}$ is unique up to isomorphism. We set
$\mathcal{M}_{\mathcal{L}_{i}}:=\mathcal{M}_{i}$. It is easy to see that for
any line bundle $\mathcal{L}_{i}^{\otimes a}\in\mathrm{Pic}(X_{s})$ there is
an indecomposable pure bundle of type $\mathcal{L}_{i}^{\otimes a}$. Indeed,
let $s_{i}=\mathrm{rk}(\mathcal{M}_{\mathcal{L}_{i}})$ and consider
$(\mathcal{L}_{i}^{\oplus s_{i}})^{\otimes a}\simeq(\mathcal{L}_{i}^{\otimes
a})^{\oplus s_{i}^{a}}$. Then we get $\mathcal{M}_{\mathcal{L}_{i}}^{\otimes
a}\otimes_{k}k^{s}\simeq(\mathcal{L}_{i}^{\oplus s_{i}})^{\otimes
a}\simeq(\mathcal{L}_{i}^{\otimes a})^{\oplus s_{i}^{a}}$.
Considering the Krull–Schmidt decomposition of
$\mathcal{M}_{\mathcal{L}_{i}}^{\otimes a}$ and taking into account that all
indecomposable direct summands are isomorphic (see [23], proof of Proposition
3.6 and Remark 3.7), we get an, up to isomorphism, unique indecomposable
vector bundle $\mathcal{M}_{\mathcal{L}_{i}^{\otimes a}}$ such that
$\mathcal{M}_{\mathcal{L}_{i}^{\otimes
a}}\otimes_{k}k^{s}\simeq(\mathcal{L}_{i}^{\otimes a})^{\oplus s_{i}(a)}$,
where $s_{i}(a)=\mathrm{rank}(\mathcal{M}_{\mathcal{L}_{i}^{\otimes a}})$.
Using Krull–Schmidt decomposition again, we can use our indecomposable vector
bundles $\mathcal{M}_{\mathcal{L}_{i}^{\otimes a}}$ and take the tensor
product of these to get an indecomposable vector bundle
$\mathcal{M}_{(a_{1},...,a_{m})}$ of type $\mathcal{L}_{1}^{\otimes
a_{1}}\otimes\cdots\otimes\mathcal{L}_{m}^{\otimes a_{m}}$. Again, the bundles
$\mathcal{M}_{(a_{1},...,a_{m})}$ are unique up to isomorphism. Recall that a
vector bundle $\mathcal{E}$ on a $k$-scheme $X$ is called _absolutely split_
if it splits after base change as a direct sum of line bundles on
$X\otimes_{k}\bar{k}$. For an absolutely split vector bundle we write
_AS-bundle_ for short. Over an algebraically closed field, a classical result of
Grothendieck classifies all $AS$-bundles on $\mathbb{P}^{1}$. Note that on
$\mathbb{P}^{1}$ the result of Grothendieck shows that actually all vector
bundles are $AS$-bundles. In [22] the author classifies vector bundles on
twisted forms of $\mathbb{P}^{1}$. The twisted forms of $\mathbb{P}^{1}$ are
Brauer–Severi curves (or smooth non-degenerate quadrics without rational
point). Generalizing these results further, in [23], the author clssifies
$AS$-bundles on proper $k$-schemes. Moreover, for $X$ with cyclic Picard group
the indecomposable $AS$-bundles are determined explicitely. In the present
paper we generalize this result to the case
$\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$.
###### Theorem (Theorem 4.5).
Let $X$ be a proper and geometrically integral $k$-scheme with
$\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$ and let
$\mathcal{L}_{1},...,\mathcal{L}_{m}\in\mathrm{Pic}(X_{s})$ be the basis from
above. Let $\mathcal{J}_{i}\in\mathrm{Pic}(X)\simeq\mathbb{Z}^{\oplus m}$ be
the up to isomorphism unique line bundles satisfying
$\mathcal{J}_{i}\otimes_{k}k^{s}\simeq\mathcal{L}_{i}^{\otimes c_{i}}$ with
$c_{i}$ being minimal. Assume there are indecomposable pure bundles
$\mathcal{M}_{\mathcal{L}_{i}}$ of type $\mathcal{L}_{i}$. Then all
indecomposable $AS$-bundles $\mathcal{E}$ are of the form
$\displaystyle\mathcal{J}_{1}^{\otimes
b_{1}}\otimes\cdots\otimes\mathcal{J}_{m}^{\otimes
b_{m}}\otimes\mathcal{M}_{(a_{1},...,a_{m})}$
with unique $b_{i}\in\mathbb{Z}$ and $0\leq a_{j}\leq c_{j}-1$.
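For orientation, here is how the theorem reads in the simplest twisted case, a non-split Brauer–Severi curve $C$ of a quaternion division algebra (so $m=1$); this merely specializes [23], Theorem 5.1 and the classification in [22]. One has $\mathcal{L}=\mathcal{O}_{\mathbb{P}^{1}}(1)$ with $c_{1}=2$,

```latex
\mathcal{J}\otimes_{k}k^{s}\simeq\mathcal{O}_{\mathbb{P}^{1}}(2),\qquad
\mathcal{M}_{\mathcal{L}}\otimes_{k}k^{s}\simeq\mathcal{O}_{\mathbb{P}^{1}}(1)^{\oplus 2},
```

so the indecomposable $AS$-bundles are exactly $\mathcal{J}^{\otimes b}$ and $\mathcal{J}^{\otimes b}\otimes\mathcal{M}_{\mathcal{L}}$ with $b\in\mathbb{Z}$.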
Notice that Theorem 4.5 is a generalization of [23], Theorem 5.1. Using
Theorem 4.5 we will prove the following result about the Amitsur subgroup and
the Brauer group $\mathrm{Br}(X)$.
###### Theorem 1.1.
Let $X$ be proper and geometrically integral $k$-scheme with
$\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$. With the notation and
assumption from above, the Amitsur subgroup $\mathrm{Am}(X)$ is generated by
the classes of central simple algebras $\mathrm{End}(\mathcal{M}_{e_{i}})$,
where $i\in\\{1,...,m\\}$ for which $c_{i}\geq 2$. In particular, if
$\mathrm{Pic}(X_{s})\simeq\mathbb{Z}$ is generated by an ample line bundle
$\mathcal{L}$ and if there exists a pure vector bundle
$\mathcal{M}_{\mathcal{L}}$ of type $\mathcal{L}$, then $\mathrm{Am}(X)$ is
cyclic and generated by $[\mathrm{End}(\mathcal{M}_{\mathcal{L}})]$. Moreover,
if $X$ is geometrically rational, then
$\mathrm{Br}(X)\simeq\mathrm{Br}(k)/\mathrm{Am}(X)$.
Schemes satisfying the assumptions of Theorem 1.1 include twisted flag
varieties. In order to apply Theorem 1.1 to twisted flags and thereby obtain
an alternative proof of a result of Merkurjev and Tignol ([20],
Theorem B), we recall the definition and some facts on twisted flags and refer
to [19] for details.
Let $G$ be a semisimple algebraic group over a field $k$ and
$G_{s}=G\otimes_{k}k^{s}$. For a parabolic subgroup $P$ of $G_{s}$, one has a
homogeneous variety $G_{s}/P$. A _twisted flag_ is a variety $X$ such that
$X\otimes_{k}k^{s}$ is $G_{s}$-isomorphic to $G_{s}/P$ for some $G$ and some
parabolic $P$ in $G_{s}$. Any twisted flag is smooth, absolutely irreducible
and reduced. An algebraic group $G^{\prime}$ is called a twisted form of $G$ if and
only if $G^{\prime}_{s}\simeq G_{s}$, or equivalently $G^{\prime}={{}_{\gamma}}G$ for some
$\gamma\in Z^{1}(k,\mathrm{Aut}(G_{s}))$. The group $G^{\prime}$ is called an
_inner form_ of $G$ if there is a $\delta\in Z^{1}(k,\bar{G}(k^{s}))$ with
$G^{\prime}={{}_{\delta}}G$. Here $\bar{G}=G/Z(G)$ where $Z(G)$ denotes the
center. For an arbitrary semisimple $G$ over $k$, there is a unique (up to
isomorphism) split semisimple group $G^{d}$ such that $G_{s}\simeq G_{s}^{d}$.
If $G$ is an inner form of $G^{d}$, then $G$ is said to be of _inner type_.
For instance, let $A$ be a central simple algebra over $k$ of degree $n$ and
$G=\mathrm{PGL}_{1}(A)$, then $G_{s}\simeq\mathrm{PGL}_{n}$ over $k^{s}$.
Hence $G$ is an inner form of $\mathrm{PGL}_{n}$. Since $\mathrm{PGL}_{n}$ is
split, $G=\mathrm{PGL}_{1}(A)$ is of inner type. For a classification of
simple groups of classical type we refer to [14], pp. 366–373.
Now let $G$ be a semisimple (so connected) simply connected algebraic group
over $k$ and $P$ a parabolic subgroup. Let $G/P$ be a flag variety and note
that $G/P=\bar{G}/\bar{P}$. Let $\gamma\colon\mathrm{Gal}(k^{s}|k)\rightarrow
G(k^{s})$ be a 1-cocycle. We denote by $X:={{}_{\gamma}}(G/P)$ the twisted
form of $G/P$ corresponding to $\gamma$. Notice that
${{}_{\gamma}}(G/P)\otimes_{k}k^{s}\simeq G_{s}/P_{s}$ for a suitable
parabolic subgroup $P_{s}$ of $G_{s}$. The next corollary is essentially [20],
Theorem B. Below $\mathrm{Ch}(P_{s})$ denotes the character group and
$\mathrm{Ch}(P_{s})^{\Gamma}$ the character group of Galois invariant
characters.
###### Corollary 1.2.
Let $G$ be a semisimple (so connected) simply connected algebraic group over
$k$ and $P$ a parabolic subgroup. Denote by $X={{}_{\gamma}}(G/P)$ a twisted
flag. Then $\mathrm{Am}(X)$ is generated by the Brauer classes of Tits
algebras of $G$ corresponding to the elements of a basis of
$\mathrm{Ch}(P_{s})^{\Gamma}$. Moreover,
$\mathrm{Br}(X)\simeq\mathrm{Br}(k)/\mathrm{Am}(X)$.
The proof of Theorem 1.1 actually uses Theorem 4.5 which enables us to
classify all indecomposable $AS$-bundles on $X$. It is a non-trivial fact that
these bundles are in one-to-one correspondence with the closed points of the
Picard scheme $\mathrm{Pic}_{(X/k)(\mathrm{fppf})}$ (see Theorem 4.4). Notice
that Theorem 4.5 generalizes [23], Theorem 5.1.
###### Corollary 1.3.
Let $X_{i}$ be a twisted form of $G_{i}/P_{i}$ with $G_{i}$ and $P_{i}$ as in
Corollary 1.2, and let $X=X_{1}\times\cdots\times X_{n}$. Let $D_{i}$ be the set
of generators of $\mathrm{Am}(X_{i})$ obtained from Corollary 1.2. Then
$\mathrm{Am}(X)$ is generated by $\cup D_{i}$. Moreover,
$\mathrm{Br}(X)\simeq\mathrm{Br}(k)/\mathrm{Am}(X)$.
Theorem 1.1 also has a motivic consequence. Noncommutative motives
are by construction closely related to semiorthogonal decompositions. In the
last decades, the bounded derived category $D^{b}(X)$ of coherent sheaves on a
smooth projective variety $X$ has been recognized as an interesting invariant,
encoding a lot of geometric information. For instance, there are links between
the semiorthogonal decomposition of $D^{b}(X)$ and the birational geometry of
$X$ (see for instance [16], [2], [3], [24], [26] and references therein). From
a motivic point of view, it is quite natural to ask how birational geometry of
a given variety $X$ is detected by its noncommutative motive. And indeed,
there are results in this direction for (generalized) Brauer–Severi varieties
[30] and [31]. In the present paper we consider twisted flags and shed
some light on the case of arbitrary proper $k$-schemes admitting a certain
type of semiorthogonal decomposition. Our main results are Theorems 1.4 and
1.7.
Recall from the book [29] that the category dgcat of small dg categories with
dg functors carries a Quillen model structure whose weak equivalences are
Morita equivalences. Denote by $\mathrm{Hmo}$ the homotopy category obtained
from the Quillen model structure and by $\mathrm{Hmo}_{0}$ its additivization.
To any small dg category $\mathcal{A}$ one can associate functorially its
_noncommutative motive_ $U(\mathcal{A})$ which takes values in
$\mathrm{Hmo}_{0}$. This functor
$U\colon\textbf{dgcat}\rightarrow\mathrm{Hmo}_{0}$ is a _universal additive
invariant_. Recall that a universal additive invariant is any functor
$E\colon\textbf{dgcat}\rightarrow D$ taking values in an additive category $D$
such that
* (i)
it sends derived Morita equivalences to isomorphisms,
* (ii)
for any pre-triangulated dg category $\mathcal{A}$ admitting full pre-
triangulated dg subcategories $\mathcal{B}$ and $\mathcal{C}$ such that
$H^{0}(\mathcal{A})=\langle H^{0}(\mathcal{B}),H^{0}(\mathcal{C})\rangle$ is a
semiorthogonal decomposition, the morphism $E(\mathcal{B})\oplus
E(\mathcal{C})\rightarrow E(\mathcal{A})$ induced by the inclusions is an
isomorphism.
A source of examples for dg categories is provided by schemes since the
derived category of perfect complexes $\mathrm{perf}(X)$ of any quasi-
projective scheme $X$ admits a canonical (unique) dg enhancement
$\mathrm{perf}_{dg}(X)$. In [30] it is proved that if two Brauer–Severi
varieties $X$ and $Y$ (see Section 2 for a definition) are birational, then
$U(\mathrm{perf}_{dg}(X))=U(\mathrm{perf}_{dg}(Y))$. In view of the Amitsur
conjecture for central simple algebras (two Brauer–Severi varieties $X$ and
$Y$ are birational if and only if the corresponding central simple algebras
$A$ and $B$ generate the same subgroup in $\mathrm{Br}(k)$), it is conjectured
in _loc. cit._ that $U$ is actually a complete birational invariant for
Brauer–Severi varieties. As a Brauer–Severi variety is a special case of a
twisted flag, Theorem 1.4 below is a generalization.
We fix some notation: Let $X$ be a twisted flag as in Corollary 1.2 and denote
by $A_{g}$ the central simple division algebra corresponding to
$g\in\mathrm{Am}(X)$. Analogously, let $B_{h}$ denote the central simple
division algebra corresponding to $h\in\mathrm{Am}(Y)$. Set
$M_{X}:=\bigoplus_{g\in\mathrm{Am}(X)}U(A_{g})$ and
$M_{Y}:=\bigoplus_{h\in\mathrm{Am}(Y)}U(B_{h})$. Note that these sums are
finite according to Theorem 1.1. Furthermore, let $\mathrm{Sep}(k)$ be the
full subcategory of the category of noncommutative Chow motives (see [30] for
a definition) consisting of objects $U(F)$ with $F$ a separable $k$-algebra.
Now let $\mathrm{CSA}(k)$ be the full subcategory of $\mathrm{Sep}(k)$
consisting of objects $U(A)$ with $A$ a central simple $k$-algebra (see
Section 2 for a definition of central simple algebra) and denote by
$\mathrm{CSA}(k)^{\oplus}$ its closure under finite sums. It is an additive
symmetric monoidal subcategory. We write shortly $U(X)$ for
$U(\mathrm{perf}_{dg}(X))$.
Let $G$, $P$ and $\gamma$ be as above and let $\rho_{1},...,\rho_{n}$ be a
$\mathrm{Ch}$-homogeneous basis of $R(P)$ over $R(G)$ (see [27],§2), where
$R(P)$ and $R(G)$ denote the corresponding representation rings. Let
$A_{\chi(i),\gamma}$ be the Tits central simple algebras associated to
$\rho_{i}$ (see Section 3 for a definition) and let $\mathrm{Ti}(X):=\langle
A_{\chi(1),\gamma},...,A_{\chi(n),\gamma}\rangle$ be the subgroup of
$\mathrm{Br}(k)$ generated by these Tits algebras. Set
$\displaystyle M\mathrm{Ti}(X):=\bigoplus_{f\in\mathrm{Ti}(X)}U(A_{f}),$
where $A_{f}$ are the central simple division algebras corresponding to $f$.
###### Theorem 1.4.
Let $X$ be a twisted flag as in Corollary 1.2. Then there are direct summands
$N,N^{\prime}\in\mathrm{CSA}(k)^{\oplus}$ of $M\mathrm{Ti}(X)$ such that
$M_{X}\oplus N=U(X)\oplus N^{\prime}$.
###### Remark 1.5.
In the case of a Brauer–Severi variety $X$ corresponding to a central simple
algebra of period $m$, one has $M_{X}=U(k)\oplus U(A)\oplus\cdots\oplus
U(A^{\otimes m-1})$. Note that the period $m$ divides the degree $n$. So if
$m\cdot r=n$, we get $M_{X}^{\oplus r}=U(\mathrm{perf}(X))$ (see Example 6.1
for details). In this case we have $N=M_{X}^{\oplus(r-1)}$ and $N^{\prime}=0$.
With the help of Corollary 1.6 we recover [30], Proposition 3.15 (see p.14
for a detailed explanation).
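As a sanity check (our illustration, using only the formulas stated in Remark 1.5), one may specialize to a quaternion division algebra:

```latex
% Hedged illustration: let A be a quaternion division algebra over k and
% X = BS(1,A) the associated conic. Then deg(A) = n = 2 and per(A) = m = 2,
% so r = n/m = 1 and the formulas of Remark 1.5 read
M_X \;=\; U(k)\oplus U(A),
\qquad
M_X^{\oplus r} \;=\; M_X \;=\; U(\mathrm{perf}(X)),
% so that in Theorem 1.4 one may take N = M_X^{\oplus(r-1)} = 0 and N' = 0.
```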
###### Corollary 1.6.
Let $X$ and $Y$ be twisted flags as in Corollary 1.2 and let $N,N^{\prime}$ be
the direct summands of $M\mathrm{Ti}(X)$ and $Q,Q^{\prime}$ of
$M\mathrm{Ti}(Y)$ obtained from Theorem 1.4. If $X$ and $Y$ are birational,
then $M_{X}\simeq M_{Y}$ and $U(X)\oplus N^{\prime}\oplus Q\simeq U(Y)\oplus
Q^{\prime}\oplus N$.
Using the theory of semiorthogonal decompositions one can try to generalize
Corollary 1.6 for arbitrary proper and geometrically integral $k$-schemes. For
the definition of w-exceptional objects and semiorthogonal decompositions we
refer to p.15. Below, $D^{b}(X)$ denotes the bounded derived category of
coherent sheaves on $X$ and
$D^{b}(X)=\langle\mathcal{D}_{1},...,\mathcal{D}_{m}\rangle$ a semiorthogonal
decomposition. We call a smooth, proper and geometrically integral
$k$-scheme $X$ a scheme of _pure weak exceptional type_ if it satisfies the
following conditions:
* (i)
$D^{b}(X)=\langle\mathcal{E}_{1},...,\mathcal{E}_{m}\rangle$ is a
semiorthogonal decomposition induced by a full w-exceptional collection,
* (ii)
The bundles $\mathcal{E}_{i}$ are pure of type $\mathcal{K}_{i}$. Furthermore,
$\mathrm{End}(\mathcal{K}_{i})\simeq k^{s}$ and some of these
$\mathcal{K}_{i}$ form a basis of $\mathrm{Pic}(X_{s})$.
Note that if $\mathcal{E}_{i}$ is pure of type $\mathcal{K}_{i}$ with
$\mathrm{End}(\mathcal{K}_{i})\simeq k^{s}$, the base change of the
semiorthogonal decomposition (i) actually implies that
$D^{b}(X_{s})=\langle\mathcal{K}_{1},...,\mathcal{K}_{m}\rangle$ is a full
exceptional collection (see p.15 for a definition) and hence
$\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$. Indeed, there are schemes
satisfying these two conditions, for instance (generalized) Brauer–Severi
varieties or certain involution varieties. Conjecturally, all twisted flags
are of pure weak exceptional type. It is interesting to investigate whether
twisted flags are characterized by (i) and (ii). For a result in this
direction see [25], Theorem 1.2. One can show that schemes $X$ of pure weak
exceptional type satisfy the assumptions of Theorem 4.5 (see proof of Theorem
1.7). Therefore, we can classify indecomposable $AS$-bundles on such $X$.
Now let $D_{X}$ denote the set of indecomposable $AS$-bundles on a scheme of
pure weak exceptional type. Note that for any $AS$-bundle $\mathcal{E}$ on $X$
satisfying the assumptions of Theorem 4.5 one has
$[\mathrm{End}(\mathcal{E})]\in\mathrm{Br}(k)$ (see Proposition 4.7). From
Theorem 4.5 and Proposition 4.7 we conclude that there are only finitely many
Brauer-classes $[\mathrm{End}(\mathcal{E})]\in\mathrm{Br}(k)$ of
indecomposable $AS$-bundles. Now let $C_{X}\subset\mathrm{Br}(k)$ be the
subgroup generated by these finitely many Brauer-classes. Denote by $A(g)$ the
central simple division algebra in $C_{X}$ corresponding to $g$. Set
$\displaystyle M_{T(X)}:=\bigoplus_{g\in C_{X}}U(A(g)).$
###### Theorem 1.7.
Let $X$ and $Y$ be schemes of pure weak exceptional type. If $X$ and $Y$ are
birational, then there are direct summands
$N,N^{\prime}\in\mathrm{CSA}^{\oplus}$ of $M_{T(X)}$ and
$Q,Q^{\prime}\in\mathrm{CSA}^{\oplus}$ of $M_{T(Y)}$ such that $U(X)\oplus
N^{\prime}\oplus Q\simeq U(Y)\oplus Q^{\prime}\oplus N$.
###### Corollary 1.8.
Let $X$ and $Y$ be as in Theorem 1.7. Assume $X$ and $Y$ are birational and
let $A_{i},1\leq i\leq n$ and $B_{j},1\leq j\leq m$ be the central simple
algebras occurring in $U(X)\oplus N^{\prime}\oplus Q$ and $U(Y)\oplus
Q^{\prime}\oplus N$ respectively. Then
$\langle[A_{i}]\rangle=\langle[B_{j}]\rangle$ in $\mathrm{Br}(k)$.
Notations. Throughout the paper $k$ is an arbitrary field and $k^{s}$ a
separable closure. For a variety/algebraic group over $k$, we write $X_{s}$
and $G_{s}$ for the base changes $X\otimes_{k}k^{s}$ and $G\otimes_{k}k^{s}$
respectively.
## 2\. Examples of inner twisted flags
Recall that a finite-dimensional $k$-algebra $A$ is called _central simple_ if
it is an associative $k$-algebra that has no two-sided ideals other than $0$
and $A$ and if its center equals $k$. If the algebra $A$ is a division
algebra, it is called a _central division algebra_. Note that $A$ is a central simple
$k$-algebra if and only if there is a finite field extension $k\subset L$,
such that $A\otimes_{k}L\simeq M_{n}(L)$. This is also equivalent to
$A\otimes_{k}\bar{k}\simeq M_{n}(\bar{k})$. An extension $k\subset L$ such
that $A\otimes_{k}L\simeq M_{n}(L)$ is called a _splitting field_ for $A$. The
_degree_ of a central simple algebra $A$ is defined to be
$\mathrm{deg}(A):=\sqrt{\mathrm{dim}_{k}A}$. According to the _Wedderburn
Theorem_, for any central simple $k$-algebra $A$ there is a unique integer
$n>0$ and a division $k$-algebra $D$ such that $A\simeq M_{n}(D)$. The
division algebra $D$ is also central and unique up to isomorphism. The degree
of the unique central division algebra $D$ is called the _index_ of $A$ and is
denoted by $\mathrm{ind}(A)$. Two central simple algebras $A$ and $B$ are said
to be _Brauer-equivalent_ if there are positive integers $r,s$ such that
$M_{r}(A)\simeq M_{s}(B)$.
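A standard example (added here for orientation; it is not used later) illustrating these notions:

```latex
% The Hamilton quaternions H over R form a central division algebra:
\dim_{\mathbb{R}}\mathbb{H}=4,\qquad
\mathrm{deg}(\mathbb{H})=\sqrt{4}=2,\qquad
\mathrm{ind}(\mathbb{H})=2,
% with splitting field L = C, since
\mathbb{H}\otimes_{\mathbb{R}}\mathbb{C}\simeq M_{2}(\mathbb{C}).
% By the Wedderburn Theorem, the algebras M_s(H), s >= 1, are exactly the
% central simple R-algebras Brauer-equivalent to H.
```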
For a central simple $k$-algebra $A$, the inner twisted forms arising from
$G=\mathrm{PGL}_{1}(A)$ can be described very explicitly. This will be done in
the sequel. One of these inner twisted forms is the _generalized Brauer–Severi
variety_. So let $A$ be of degree $n$ and $m\leq n$. The generalized Brauer–Severi variety
$\mathrm{BS}(m,A)$ is defined to be the subset of $\mathrm{Grass}_{k}(mn,A)$
consisting of those subspaces of $A$ which are right ideals of dimension
$m\cdot n$ (see [14] or [7]). Recall that $\mathrm{Grass}_{k}(mn,A)$ is given
the structure of a projective variety via the Plücker embedding
$\mathrm{Grass}_{k}(mn,A)\rightarrow\mathbb{P}(\wedge^{mn}(A))$. This gives an
embedding $\mathrm{BS}(m,A)\rightarrow\mathbb{P}(\wedge^{mn}(A))$ and a very
ample line bundle $\mathcal{M}$ on $\mathrm{BS}(m,A)$. Note that for any
$\mathrm{BS}(m,A)$ there exists a finite Galois field extension $E$ of $k$
such that
$\mathrm{BS}(m,A)\otimes_{k}E\simeq\mathrm{Grass}_{E}(mn,n^{2})\simeq\mathrm{Grass}_{E}(m,n)$.
The Picard group $\mathrm{Pic}(\mathrm{Grass}_{E}(m,n))$ is isomorphic to
$\mathbb{Z}$ and has ample generator
$\mathcal{O}(1)\simeq\mathrm{det}(\mathcal{Q})$ with $\mathcal{Q}$ being the
universal quotient bundle on $\mathrm{Grass}_{E}(m,n)$. Recall that
$\mathrm{Pic}(\mathrm{BS}(m,A))\simeq\mathbb{Z}$ and that it has a positive
generator $\mathcal{L}$ such that
$\mathcal{L}\otimes_{k}E\simeq\mathcal{O}(r)$ for a suitable $r>0$. Since
$\mathrm{Pic}(\mathrm{BS}(m,A))$ is cyclic, we have $\mathcal{L}^{\otimes
s}\simeq\mathcal{M}$ for a suitable $s>0$. Therefore, $\mathcal{L}$ is ample.
From the definition of $\mathrm{BS}(m,A)$ it is clear that $\mathcal{L}$ is
also very ample. If $m=1$, $\mathrm{BS}(1,A)$ is called a _Brauer–Severi variety_.
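For orientation (a standard fact, consistent with the base-change isomorphism recalled above): in the split case $A=M_{n}(k)$ the right ideals of dimension $mn$ correspond to $m$-dimensional subspaces of $k^{n}$, so

```latex
\mathrm{BS}(m,M_{n}(k))\;\simeq\;\mathrm{Grass}_{k}(m,n),
\qquad\text{in particular}\qquad
\mathrm{BS}(1,M_{n}(k))\;\simeq\;\mathbb{P}^{n-1}_{k}.
```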
We also recall the basics of generalized Brauer–Severi schemes (see [17]). Let
$X$ be a noetherian $k$-scheme and $\mathcal{A}$ a sheaf of Azumaya algebras
of rank $n^{2}$ over $X$ (see [11], [12] for details on Azumaya algebras). For
an integer $1\leq m_{1}<n$ the generalized Brauer–Severi scheme
$p:\mathrm{BS}(m_{1},\mathcal{A})\rightarrow X$ is defined as the scheme
representing the functor $F:\mathrm{Sch}/X\rightarrow\mathrm{Sets}$, where
$(\psi:Y\rightarrow X)$ is mapped to the set of left ideals $\mathcal{J}$ of
$\psi^{*}\mathcal{A}$ such that $\psi^{*}\mathcal{A}/\mathcal{J}$ is locally
free of rank $n(n-m_{1})$. By definition, there is an étale covering
$U\rightarrow X$ and a locally free sheaf $\mathcal{E}$ of rank $n$ with the
following trivializing diagram:
$\displaystyle\begin{array}{ccc}\mathrm{Grass}(m_{1},\mathcal{E})&\stackrel{\pi}{\longrightarrow}&\mathrm{BS}(m_{1},\mathcal{A})\\ \downarrow{\scriptstyle q}&&\downarrow{\scriptstyle p}\\ U&\stackrel{g}{\longrightarrow}&X\end{array}$
In the same way one defines the twisted relative flag
$\mathrm{BS}(m_{1},...,m_{r},\mathcal{A})$ as the scheme representing the
functor $F:\mathrm{Sch}/X\rightarrow\mathrm{Sets}$, where $(\psi:Y\rightarrow
X)$ is mapped to the set of left ideals
$\mathcal{J}_{1}\subset...\subset\mathcal{J}_{r}$ of $\psi^{*}\mathcal{A}$
such that $\psi^{*}\mathcal{A}/\mathcal{J}_{i}$ is locally free of rank
$n(n-m_{i})$. As for the generalized Brauer–Severi schemes, there is an étale
covering $U\rightarrow X$ and a locally free sheaf $\mathcal{E}$ of rank $n$
with diagram
$\displaystyle\begin{array}{ccc}\mathrm{Flag}_{U}(m_{1},...,m_{r},\mathcal{E})&\stackrel{\pi}{\longrightarrow}&\mathrm{BS}(m_{1},...,m_{r},\mathcal{A})\\ \downarrow{\scriptstyle q}&&\downarrow{\scriptstyle p}\\ U&\stackrel{g}{\longrightarrow}&X\end{array}$
Note that the usual Brauer–Severi schemes are obtained from the generalized
one by setting $m_{1}=1$. In this case one has a well known one-to-one
correspondence between sheaves of Azumaya algebras of rank $n^{2}$ on $X$ and
Brauer–Severi schemes of relative dimension $n-1$ via
$\check{H}^{1}(X_{et},\mathrm{PGL}_{n})$ (see [11]). Note that if the base
scheme $X$ is a point a sheaf of Azumaya algebras on $X$ is a central simple
$k$-algebra and the generalized Brauer–Severi schemes are the generalized
Brauer–Severi varieties from above. Consider a twisted flag
$X=\mathrm{BS}(m_{1},...,m_{r},A)\rightarrow\mathrm{Spec}(k)$. Such an $X$ is
an _inner form_ of a partial flag variety. That is, there is a cartesian
square of the form
$\displaystyle\begin{array}{ccc}\mathrm{Grass}_{L}(m_{1},...,m_{r},V)&\stackrel{\pi}{\longrightarrow}&\mathrm{BS}(m_{1},...,m_{r},A)\\ \downarrow{\scriptstyle q}&&\downarrow{\scriptstyle p}\\ \mathrm{Spec}(L)&\stackrel{\pi}{\longrightarrow}&\mathrm{Spec}(k)\end{array}$
where $L/k$ is a Galois extension and the 1-cocycle
$\displaystyle\mathrm{Gal}(L/k)\longrightarrow\mathrm{Aut}(\mathrm{Grass}_{L}(m_{1},...,m_{r},V))$
factors through $\mathrm{PGL}(V)$.
## 3\. Tits algebras
We refer to [27], Section 3.1 for details (see also [14], p.376-379). Now let
$G$ be a semisimple algebraic group over the field $k$ and
$P$ a parabolic subgroup. We denote by $\widetilde{G}$ and $\widetilde{P}$
their universal covers. For the center $\widetilde{Z}\subset\widetilde{G}$ let
$\mathrm{Ch}:=\mathrm{Hom}(\widetilde{Z},\mathbb{G}_{m})$ be the character
group. Furthermore, let $R(\widetilde{G})$ and $R(\widetilde{P})$ be the
associated representation rings. Recall from [27], §2 that there exists a finite
free $\mathrm{Ch}$-homogeneous basis of $R(\widetilde{P})$ over
$R(\widetilde{G})$. Furthermore, let $\chi\in\mathrm{Ch}$ and denote by
$\mathrm{Rep}_{k}^{\chi}(\tilde{G})$ the full subcategory of
$\mathrm{Rep}_{k}(\tilde{G})$ consisting of those $V$ such that $\tilde{Z}$
acts on $V$ by $\chi$. Now for a Galois-invariant $\chi\in\mathrm{Ch}$, choose
a non-trivial representation $V_{\chi}\in\mathrm{Rep}_{k}^{\chi}(\tilde{G})$.
Put $A_{\chi}=\mathrm{End}(V_{\chi})$. Then $A_{\chi}$ is a $k$-algebra
equipped with a $G$-action by $k$-algebra automorphisms. Using a 1-cocycle
$\gamma\colon\mathrm{Gal}(k^{s}|k)\rightarrow G(k^{s})$ one gets a new
$\mathrm{Gal}(k^{s}|k)$-action on $A_{\chi}\otimes_{k}k^{s}$ and hence a
twisted form $A_{\chi,\gamma}$. In this way, one obtains the _Tits map_ (see
[27], §3 or [14], p.377)
$\beta_{\gamma}\colon\mathrm{Ch}^{\Gamma}\rightarrow\mathrm{Br}(k)$ which is a
group homomorphism and assigns to each character $\chi\in\mathrm{Ch}^{\Gamma}$
a central simple algebra $A_{\chi,\gamma}\in\mathrm{Br}(k)$, called _Tits
algebra_.
###### Example 3.1 (Type $A_{n}$).
Let $G=\mathrm{SL}_{1}(A)$ where $A$ is a central simple algebra of degree
$n+1$. Then $\bar{G}=\mathrm{PGL}_{1}(A)$ and
$\mathrm{Ch}(Z)=\mathbb{Z}/(n+1)\mathbb{Z}$ with trivial
$\mathrm{Gal}(k^{s}|k)$-action. For any $i=0,1,...,n$, consider the
representation $p_{i}\colon\bar{G}\rightarrow\mathrm{GL}_{1}(\lambda^{i}A)$,
where $\lambda^{i}A$ are the exterior powers of $A$. In the split case, the $i$-th
exterior power representations are known to be minimal representations (see
[14]). Hence the $A^{\otimes i}$ are the Tits algebras for $G$.
###### Example 3.2 (Type $C_{n}$).
Let $G=\mathrm{Sp}(A,\sigma)$ where $A$ is a central simple algebra of degree
$2n$ with symplectic involution $\sigma$. Then
$\bar{G}=\mathrm{PGSp}(A,\sigma)$ and
$\mathrm{Ch}(Z)=\mathbb{Z}/2\mathbb{Z}=\\{0,\chi\\}$. The embedding
$\bar{G}\rightarrow\mathrm{GL}_{1}(A)$ is in the split case a minimal
representation. Hence $A$ is the Tits algebra.
A complete list of the (minimal) Tits algebras for the simple $k$-split
algebraic groups of classical type can be found for instance in [14], p.
378-379.
## 4\. AS-bundles on proper $k$-schemes
###### Definition 4.1.
A vector bundle $\mathcal{E}$ on a proper $k$-scheme $X$ is called _pure of
type_ $\mathcal{W}$ if there is an indecomposable vector bundle $\mathcal{W}$
on $X\otimes_{k}\bar{k}$ such that
$\mathcal{E}\otimes_{k}{\bar{k}}\simeq\mathcal{W}^{\oplus m}$.
Recall from [23] the following definition.
###### Definition 4.2.
Let $X$ be a $k$-scheme. A vector bundle $\mathcal{E}$ on $X$ is called
_absolutely split_ (_separably split_) if it splits after base change as a
direct sum of invertible sheaves on $X\otimes_{k}\bar{k}$ (resp.
$X\otimes_{k}k^{sep}$). For an absolutely split vector bundle we shortly write
_AS-bundle_.
###### Proposition 4.3 ([23], Proposition 4.2).
Let $X$ be a proper $k$-scheme and $\mathcal{E}$ a vector bundle on $X$. Then
$\mathcal{E}$ is absolutely split if and only if it is separably split.
###### Theorem 4.4 ([23], Theorem 4.6).
Let $X$ be a proper $k$-scheme with $H^{0}(X,\mathcal{O}_{X})=k$. Then the
closed points of the Picard scheme $\mathrm{Pic}_{X/k}$ are in one-to-one
correspondence with isomorphism classes of indecomposable $AS$-bundles on $X$.
In [23] the author classified all indecomposable $AS$-bundles on a proper
$k$-scheme $X$ with cyclic Picard group. We want to generalize this result for
the case $\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$. Theorem 4.5 below
is interesting in its own right and can be used for instance to classify
indecomposable $AS$-bundles on twisted flags. In the proof of Theorem 1.1 it
is implicitly shown that twisted flags of classical type satisfy the
assumption of Theorem 4.5. Note that the classification of indecomposable
$AS$-bundles on the twisted flags under consideration is a vast generalization
of the main theorem of [6].
Let us repeat the notations and facts from the introduction. $X$ is still a
proper and geometrically integral $k$-scheme. From [23], Proposition 3.4 it
follows that $\mathrm{Pic}(X)$ is a subgroup of $\mathrm{Pic}(X_{s})$. In
particular, if $\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$, then
$\mathrm{Pic}(X)\simeq r_{1}\mathbb{Z}\oplus\cdots\oplus r_{m}\mathbb{Z}$. Let
us fix a basis $\mathcal{L}_{1},...,\mathcal{L}_{m}$ of
$\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$. Using a basis of
$\mathrm{Pic}(X)$ and an easy computation from linear algebra involving
matrices over the integers, one can show that there are line bundles
$\mathcal{J}_{i}\in\mathrm{Pic}(X)$ satisfying
$\mathcal{J}_{i}\otimes_{k}k^{s}\simeq\mathcal{L}_{i}^{\otimes c_{i}}$ for
some integers $c_{i}\geq 1$. Now we choose the $\mathcal{J}_{i}$ such that the
$c_{i}$ are minimal. According to [23], Proposition 3.4 these line bundles
$\mathcal{J}_{i}$ are unique up to isomorphism. Assume there are pure vector
bundles $\mathcal{M}_{i}$ of type $\mathcal{L}_{i}\in\mathrm{Pic}(X_{s})$. We
know from [23], Proposition 3.5 that the bundle $\mathcal{M}_{i}$ is unique up
to isomorphism. We set $\mathcal{M}_{\mathcal{L}_{i}}:=\mathcal{M}_{i}$. It is
easy to see that for any line bundle $\mathcal{L}_{i}^{\otimes
a}\in\mathrm{Pic}(X_{s})$ there is an indecomposable pure bundle of type
$\mathcal{L}_{i}^{\otimes a}$. Indeed, let
$s_{i}=\mathrm{rk}(\mathcal{M}_{\mathcal{L}_{i}})$ and consider
$(\mathcal{L}_{i}^{\oplus s_{i}})^{\otimes a}\simeq(\mathcal{L}_{i}^{\otimes
a})^{\oplus s_{i}^{a}}$. Then we get $\mathcal{M}_{\mathcal{L}_{i}}^{\otimes
a}\otimes_{k}k^{s}\simeq(\mathcal{L}_{i}^{\oplus s_{i}})^{\otimes
a}\simeq(\mathcal{L}_{i}^{\otimes a})^{\oplus s_{i}^{a}}$.
Considering the Krull–Schmidt decomposition of
$\mathcal{M}_{\mathcal{L}_{i}}^{\otimes a}$ and taking into account that all
indecomposable direct summands are isomorphic (see [23], proof of Proposition
3.6 and Remark 3.7), we get an, up to isomorphism, unique indecomposable
vector bundle $\mathcal{M}_{\mathcal{L}_{i}^{\otimes a}}$ such that
$\mathcal{M}_{\mathcal{L}_{i}^{\otimes
a}}\otimes_{k}k^{s}\simeq(\mathcal{L}_{i}^{\otimes a})^{\oplus s_{i}(a)}$,
where $s_{i}(a)=\mathrm{rank}(\mathcal{M}_{\mathcal{L}_{i}^{\otimes a}})$.
Using Krull–Schmidt decomposition again, we can use our indecomposable vector
bundles $\mathcal{M}_{\mathcal{L}_{i}^{\otimes a}}$ and take the tensor
product of these to get an indecomposable vector bundle
$\mathcal{M}_{(a_{1},...,a_{m})}$ of type $\mathcal{L}_{1}^{\otimes
a_{1}}\otimes\cdots\otimes\mathcal{L}_{m}^{\otimes a_{m}}$. Again, the bundles
$\mathcal{M}_{(a_{1},...,a_{m})}$ are unique up to isomorphism.
###### Theorem 4.5.
Let $X$ be a proper, geometrically integral $k$-scheme with
$\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$ and let
$\mathcal{L}_{1},...,\mathcal{L}_{m}\in\mathrm{Pic}(X_{s})$ be the basis from
above. Let $\mathcal{J}_{i}\in\mathrm{Pic}(X)\simeq\mathbb{Z}^{\oplus m}$ be
up to isomorphism unique line bundles satisfying
$\mathcal{J}_{i}\otimes_{k}k^{s}\simeq\mathcal{L}_{i}^{\otimes c_{i}}$ with
$c_{i}$ being minimal. Assume there are indecomposable pure bundles
$\mathcal{M}_{\mathcal{L}_{i}}$ of type $\mathcal{L}_{i}$. Then all
indecomposable $AS$-bundles $\mathcal{E}$ are of the form
$\displaystyle\mathcal{J}_{1}^{\otimes
b_{1}}\otimes\cdots\otimes\mathcal{J}_{m}^{\otimes
b_{m}}\otimes\mathcal{M}_{(a_{1},...,a_{m})}$
with unique $b_{i}\in\mathbb{Z}$ and $0\leq a_{j}\leq c_{j}-1$.
###### Proof.
Let $\mathcal{E}$ be an arbitrary, not necessarily indecomposable, $AS$-bundle
and let $\pi:X\otimes_{k}k^{s}\rightarrow X$ be the projection. By assumption,
there are indecomposable pure vector bundles $\mathcal{M}_{\mathcal{L}_{i}}$
of type $\mathcal{L}_{i}$. Above we showed that there exist (up to
isomorphism) unique indecomposable pure vector bundles of type
$\mathcal{L}_{i}^{\otimes a}$ for all $a\in\mathbb{Z}$. Let
$d=\mathrm{lcm}\big(\mathrm{rk}(\mathcal{M}_{(a_{1},...,a_{m})})\,:\,0\leq
a_{j}\leq c_{j}-1\big)$ be the least common multiple of these ranks and consider the vector
bundle $\pi^{*}(\mathcal{E}^{\oplus d})$. Since $\mathcal{E}$ is an
$AS$-bundle, the vector bundle $\mathcal{E}^{\oplus d}$ is an $AS$-bundle,
too. Therefore $\pi^{*}(\mathcal{E}^{\oplus d})$ decomposes into a direct sum
of invertible sheaves. Below we give the proof for $m=2$ to simplify the
notation. So after reordering the line bundle summands $(\mathrm{mod}\ c_{1},\mathrm{mod}\ c_{2})$ in
lexicographical order, we find that $\pi^{*}(\mathcal{E}^{\oplus d})$ is
isomorphic to the bundle
$\displaystyle\left(\bigoplus\left(\mathcal{L}_{1}^{\otimes(s_{i_{0}}^{(1)}\cdot c_{1}+0)}\otimes\mathcal{L}_{2}^{\otimes(t_{i_{0}}^{(1)}\cdot c_{2}+0)}\right)^{\oplus d}\right)\oplus\left(\bigoplus\left(\mathcal{L}_{1}^{\otimes(s_{i_{1}}^{(1)}\cdot c_{1}+0)}\otimes\mathcal{L}_{2}^{\otimes(t_{i_{1}}^{(1)}\cdot c_{2}+1)}\right)^{\oplus d}\right)\oplus\cdots$
$\displaystyle\oplus\left(\bigoplus\left(\mathcal{L}_{1}^{\otimes(s_{i_{(c_{2}-1)}}^{(1)}\cdot c_{1}+0)}\otimes\mathcal{L}_{2}^{\otimes(t_{i_{(c_{2}-1)}}^{(1)}\cdot c_{2}+(c_{2}-1))}\right)^{\oplus d}\right)\oplus\cdots$
$\displaystyle\oplus\left(\bigoplus\left(\mathcal{L}_{1}^{\otimes(s_{i_{(c_{2}-1)}}^{(c_{1}-1)}\cdot c_{1}+(c_{1}-1))}\otimes\mathcal{L}_{2}^{\otimes(t_{i_{(c_{2}-1)}}^{(c_{1}-1)}\cdot c_{2}+(c_{2}-1))}\right)^{\oplus d}\right)$
By definition of $d$, there are $h_{(p,q)}$ such that
$h_{(p,q)}\cdot\mathrm{rk}(\mathcal{M}_{(p,q)})=d$ for $0\leq p\leq c_{1}-1$
and $0\leq q\leq c_{2}-1$. Furthermore, the sheaves $\mathcal{M}_{(p,q)}$
satisfy
$\displaystyle\pi^{*}\mathcal{M}_{(p,q)}\simeq(\mathcal{L}_{1}^{\otimes
p}\otimes\mathcal{L}_{2}^{\otimes q})^{\oplus d_{(p,q)}},$
where $d_{(p,q)}=\mathrm{rk}(\mathcal{M}_{(p,q)})$. Now for the direct
summands
$\displaystyle\left(\mathcal{L}_{1}^{\otimes(s_{i_{m}}^{(l)}\cdot c_{1}+p)}\otimes\mathcal{L}_{2}^{\otimes(t_{i_{m}}^{(l)}\cdot c_{2}+q)}\right)^{\oplus d}$
where $0\leq p\leq c_{1}-1$ and $0\leq q\leq c_{2}-1$, we have
$\displaystyle\left(\left(\mathcal{L}_{1}^{\otimes(s_{i_{m}}^{(l)}\cdot c_{1}+p)}\otimes\mathcal{L}_{2}^{\otimes(t_{i_{m}}^{(l)}\cdot c_{2}+q)}\right)^{\oplus d_{(p,q)}}\right)^{\oplus h_{(p,q)}}.$
Considering the vector bundle
$(\mathcal{J}_{1}^{\otimes{s_{i_{m}}^{(l)}}}\otimes\mathcal{J}_{2}^{\otimes{t_{i_{m}}^{(l)}}}\otimes\mathcal{M}_{(p,q)})^{\oplus
h_{(p,q)}}$ on $X$, we find
$\displaystyle\pi^{*}\left(\mathcal{J}_{1}^{\otimes s_{i_{m}}^{(l)}}\otimes\mathcal{J}_{2}^{\otimes t_{i_{m}}^{(l)}}\otimes\mathcal{M}_{(p,q)}\right)^{\oplus h_{(p,q)}}\simeq\left(\mathcal{L}_{1}^{\otimes(s_{i_{m}}^{(l)}\cdot c_{1}+p)}\otimes\mathcal{L}_{2}^{\otimes(t_{i_{m}}^{(l)}\cdot c_{2}+q)}\right)^{\oplus d}.$
Now consider the vector bundle
$\displaystyle\left(\bigoplus\left(\mathcal{J}_{1}^{\otimes s_{i_{0}}^{(1)}}\otimes\mathcal{J}_{2}^{\otimes t_{i_{0}}^{(1)}}\otimes\mathcal{M}_{(0,0)}\right)^{\oplus h_{(0,0)}}\right)\oplus\left(\bigoplus\left(\mathcal{J}_{1}^{\otimes s_{i_{1}}^{(1)}}\otimes\mathcal{J}_{2}^{\otimes t_{i_{1}}^{(1)}}\otimes\mathcal{M}_{(0,1)}\right)^{\oplus h_{(0,1)}}\right)\oplus\cdots$
$\displaystyle\oplus\left(\bigoplus\left(\mathcal{J}_{1}^{\otimes s_{i_{(c_{2}-1)}}^{(1)}}\otimes\mathcal{J}_{2}^{\otimes t_{i_{(c_{2}-1)}}^{(1)}}\otimes\mathcal{M}_{(0,c_{2}-1)}\right)^{\oplus h_{(0,c_{2}-1)}}\right)\oplus\cdots$
$\displaystyle\oplus\left(\bigoplus\left(\mathcal{J}_{1}^{\otimes s_{i_{(c_{2}-1)}}^{(c_{1}-1)}}\otimes\mathcal{J}_{2}^{\otimes t_{i_{(c_{2}-1)}}^{(c_{1}-1)}}\otimes\mathcal{M}_{(c_{1}-1,c_{2}-1)}\right)^{\oplus h_{(c_{1}-1,c_{2}-1)}}\right)$
which is denoted by $\mathcal{R}$. We immediately see that
$\pi^{*}\mathcal{R}\simeq\pi^{*}(\mathcal{E}^{\oplus d})$. Now [23],
Proposition 3.4 implies that $\mathcal{E}^{\oplus d}$ is isomorphic to
$\mathcal{R}$. Because the Krull–Schmidt theorem holds for vector bundles on $X$,
we conclude that $\mathcal{E}$ is isomorphic to the direct sum of vector
bundles of the form
$\displaystyle\mathcal{J}_{1}^{\otimes b_{1}}\otimes\mathcal{J}_{2}^{\otimes
b_{2}}\otimes\mathcal{M}_{(a_{1},a_{2})}$
with unique $b_{i}\in\mathbb{Z}$ and $0\leq a_{j}\leq c_{j}-1$. Furthermore,
since all these bundles are indecomposable by definition, we finally get that
all the indecomposable $AS$-bundles have the desired form. This completes the
proof. ∎
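For $m=1$ (our specialization; compare the cyclic Picard group case of [23] recalled before Theorem 4.5), the classification reads as follows: writing $\mathrm{Pic}(X_{s})=\mathbb{Z}\cdot\mathcal{L}$ and $\mathcal{J}\otimes_{k}k^{s}\simeq\mathcal{L}^{\otimes c}$ with $c$ minimal, every indecomposable $AS$-bundle is of the form

```latex
\mathcal{E}\;\simeq\;\mathcal{J}^{\otimes b}\otimes\mathcal{M}_{\mathcal{L}^{\otimes a}},
\qquad b\in\mathbb{Z}\ \text{unique},\quad 0\leq a\leq c-1,
```

where $\mathcal{M}_{\mathcal{L}^{\otimes a}}$ denotes the (up to isomorphism) unique indecomposable pure bundle of type $\mathcal{L}^{\otimes a}$.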
###### Lemma 4.6.
Let $X$ be a proper and geometrically integral $k$-scheme and
$\mathcal{L},\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ line bundles.
* (i)
If $\mathcal{M}$ is pure of type $\mathcal{L}$, then
$\mathrm{End}(\mathcal{M})$ is a central simple $k$-algebra.
* (ii)
There is an (up to isomorphism) unique indecomposable pure vector bundle
$\mathcal{M}_{\mathcal{L}}$ of type $\mathcal{L}$.
* (iii)
Let $\mathcal{M}_{\mathcal{L}_{1}}$ and $\mathcal{M}_{\mathcal{L}_{2}}$ be
pure vector bundles of type $\mathcal{L}_{1}$ resp. $\mathcal{L}_{2}$. Then
$\mathrm{End}(\mathcal{M}_{\mathcal{L}_{1}})\otimes\mathrm{End}(\mathcal{M}_{\mathcal{L}_{2}})$
is Brauer-equivalent to
$\mathrm{End}(\mathcal{M}_{\mathcal{L}_{1}\otimes\mathcal{L}_{2}})$.
###### Proof.
Since $X$ is geometrically integral, we have
$H^{0}(X_{s},\mathcal{O}_{X_{s}})\simeq k^{s}$ (see [23], Proposition 4.2).
This implies $\mathcal{M}\otimes_{k}k^{s}\simeq\mathcal{L}^{\oplus r}$ and
therefore
$\displaystyle\mathrm{End}(\mathcal{M})\otimes_{k}k^{s}\simeq\mathrm{Mat}_{r}(k^{s}).$
Then [14], Theorem (1.1) shows that $\mathrm{End}(\mathcal{M})$ must be
central simple over $k$. This shows (i). Assertion (ii) follows directly from
[13], Lemma 8. It remains to show (iii). For this, let $\Delta\colon
X\rightarrow X\times X$ be the diagonal and $\pi_{i}\colon X\times
X\rightarrow X$, $i=1,2$, the two projections. For
$\mathcal{L}:=\pi_{1}^{*}\mathcal{L}_{1}\otimes\pi_{2}^{*}\mathcal{L}_{2}$ one
has $\Delta^{*}\mathcal{L}\simeq\mathcal{L}_{1}\otimes\mathcal{L}_{2}$. From
[13], Corollary 11 it follows
$\mathcal{M}_{\Delta^{*}\mathcal{L}}\simeq\Delta^{*}\mathcal{M}_{\mathcal{L}}$.
Moreover, the proof of Corollary 11 in _loc. cit._ shows
$\mathrm{End}(\mathcal{M}_{\mathcal{L}})\simeq\mathrm{End}(\Delta^{*}\mathcal{M}_{\mathcal{L}})\simeq\mathrm{End}(\mathcal{M}_{\mathcal{L}_{1}\otimes\mathcal{L}_{2}})$.
Now [13], p.14 explains that
$\mathrm{End}(\mathcal{L}_{1})\otimes\mathrm{End}(\mathcal{L}_{2})$ is Brauer-
equivalent to
$\mathrm{End}(\mathcal{M}_{\mathcal{L}_{1}\otimes\mathcal{L}_{2}})$. This
completes the proof. ∎
###### Proposition 4.7.
Let $\mathcal{E}=\mathcal{J}_{1}^{\otimes
b_{1}}\otimes\cdots\otimes\mathcal{J}_{m}^{\otimes
b_{m}}\otimes\mathcal{M}_{(a_{1},...,a_{m})}$ be an indecomposable $AS$-bundle
as in Theorem 4.5. Then $\mathrm{End}(\mathcal{E})$ is a central simple
$k$-algebra.
###### Proof.
Since
$\mathrm{End}(\mathcal{E})\simeq\mathrm{End}(\mathcal{M}_{(a_{1},...,a_{m})})$,
the assertion follows from Lemma 4.6 (i). ∎
###### Proposition 4.8.
Let $\mathcal{E}=\mathcal{J}_{1}^{\otimes
b_{1}}\otimes\cdots\otimes\mathcal{J}_{m}^{\otimes
b_{m}}\otimes\mathcal{M}_{(a_{1},...,a_{m})}$ and
$\mathcal{E}^{\prime}=\mathcal{J}_{1}^{\otimes
b^{\prime}_{1}}\otimes\cdots\otimes\mathcal{J}_{m}^{\otimes
b^{\prime}_{m}}\otimes\mathcal{M}_{(a^{\prime}_{1},...,a^{\prime}_{m})}$ be
two indecomposable $AS$-bundles as in Theorem 4.5. Then there is a unique
positive integer $s$ such that
$\displaystyle(\mathcal{E}\otimes\mathcal{E}^{\prime})\simeq\mathcal{J}_{1}^{\otimes(b_{1}+b_{1}^{\prime})}\otimes\cdots\otimes\mathcal{J}_{m}^{\otimes(b_{m}+b_{m}^{\prime})}\otimes\mathcal{M}_{((a_{1}+a_{1}^{\prime}),...,(a_{m}+a_{m}^{\prime}))}^{\oplus
s}.$
Moreover, $\mathrm{End}(\mathcal{E}\otimes\mathcal{E}^{\prime})$ is Brauer-
equivalent to
$\mathrm{End}(\mathcal{E})\otimes\mathrm{End}(\mathcal{E}^{\prime})$ in
$\mathrm{Br}(k)$.
###### Proof.
Note that the $AS$-bundle
$\mathcal{M}_{(a_{1},...,a_{m})}\otimes\mathcal{M}_{(a^{\prime}_{1},...,a^{\prime}_{m})}$
is a pure vector bundle of type
$\mathcal{L}_{1}^{\otimes(a_{1}+a_{1}^{\prime})}\otimes\cdots\otimes\mathcal{L}_{m}^{\otimes(a_{m}+a_{m}^{\prime})}$.
The first assertion follows from Lemma 4.6 (ii). Now we want to prove that
$\mathrm{End}(\mathcal{E}\otimes\mathcal{E}^{\prime})$ is Brauer-equivalent to
$\mathrm{End}(\mathcal{E})\otimes\mathrm{End}(\mathcal{E}^{\prime})$. As
mentioned in the proof of Proposition 4.7, we have
$\mathrm{End}(\mathcal{E})\simeq\mathrm{End}(\mathcal{M}_{(a_{1},...,a_{m})})$
and
$\mathrm{End}(\mathcal{E}^{\prime})\simeq\mathrm{End}(\mathcal{M}_{(a^{\prime}_{1},...,a^{\prime}_{m})})$.
We conclude with Lemma 4.6 (iii). ∎
## 5\. Proof of Theorem 1.1
###### Proof.
According to Theorem 4.4, the closed points of $\mathrm{Pic}_{(X/k)(et)}$ are
in one-to-one correspondence with isomorphism classes of indecomposable
$AS$-bundles. Since $X$ is proper over $k$, we have
$\mathrm{Pic}_{(X/k)(\mathrm{et})}\simeq\mathrm{Pic}_{(X/k)(\mathrm{fppf})}$ as
abelian groups. And because $\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$,
we can use Theorem 4.5 to obtain a classification of all indecomposable
$AS$-bundles on $X$. In particular, the $k$-rational points of
$\mathrm{Pic}_{(X/k)(et)}$ correspond to indecomposable $AS$-bundles. As
mentioned in the introduction, being a pure vector bundle of type
$\mathcal{L}\in\mathrm{Pic}(X_{s})$ is equivalent to
$\mathcal{L}\in\mathrm{Pic}_{\Gamma}(X_{s})$. By assumption, there are pure
vector bundles of type $\mathcal{L}_{i}$. This actually implies that the basis
$\mathcal{L}_{1},...,\mathcal{L}_{m}$ of $\mathrm{Pic}(X_{s})$ consists of
Galois invariant line bundles. Therefore, the indecomposable $AS$-bundles are
in one-to-one correspondence with the $k$-rational points of
$\mathrm{Pic}_{(X/k)(et)}$. Now consider the exact sequence from the
introduction and specialize it to the case $T=S=\mathrm{Spec}(k)$. We get the
following exact sequence
$\displaystyle
0\longrightarrow\mathrm{Pic}(X)\longrightarrow\mathrm{Pic}_{(X/S)(\mathrm{fppf})}(k)\stackrel{{\scriptstyle\delta}}{{\longrightarrow}}\mathrm{Br}(k)\longrightarrow\mathrm{Br}^{\prime}(X)$
where $\delta(\mathcal{E})=[\mathrm{End}(\mathcal{E})]\in\mathrm{Br}(k)$ for
an indecomposable $AS$-bundle $\mathcal{E}$. Finally, use Theorem 4.5, Lemma
4.6, Lemma 4.8, and [27], Lemma 3.4 to conclude that $\mathrm{Am}(X)$ is
indeed generated by the set $D$ which consists of the classes
$\mathrm{End}(\mathcal{M}_{e_{i}})$, where $i\in\\{1,...,m\\}$ satisfies
$c_{i}\geq 2$. The above arguments also show
$\\#D\leq\mathrm{rank}(\mathrm{Pic}(X))$. It remains to show that if $X$ is
geometrically rational, one has
$\mathrm{Br}(X)\simeq\mathrm{Br}(k)/\mathrm{Am}(X)$. For this, we consider the
Hochschild–Serre spectral sequence
$H^{p}(k,H^{q}(X_{s},\mathbb{G}_{m}))\Rightarrow H^{p+q}(X,\mathbb{G}_{m})$.
Since $X$ is geometrically rational, we have $\mathrm{Br}(X_{s})=0$. The
spectral sequence then yields
$\displaystyle\mathrm{Pic}_{\Gamma}(X_{s})\longrightarrow\mathrm{Br}(k)\longrightarrow\mathrm{Br}(X)\longrightarrow
H^{1}(k,\mathrm{Pic}(X_{s}))$
where $\Gamma$ denotes the absolute Galois group. By assumption, there are
pure vector bundles of type $\mathcal{L}_{i}$. This actually implies that the
basis $\mathcal{L}_{1},...,\mathcal{L}_{m}$ of $\mathrm{Pic}(X_{s})$ consists
of Galois invariant line bundles. Therefore $\mathrm{Pic}(X_{s})$ is a
permutation $\Gamma$-module and hence $H^{1}(k,\mathrm{Pic}(X_{s}))=0$. The
above exact sequence yields
$\mathrm{Br}(X)\simeq\mathrm{Br}(k)/\mathrm{Am}(X)$. This completes the proof.
∎
(proof of Corollary 1.2):
Recall that for any flag $G/P$ of classical type associated to some semisimple
simply connected and $k$-split $G$, one has
$\mathrm{Pic}(G/P)\simeq\mathbb{Z}^{\oplus m}$. Now, let $G$ be a semisimple
simply connected algebraic group over $k$. Consider a twisted flag
$X:={{}_{\gamma}}(G/P)$. Note that after base change to a separable closure
$k^{s}$ we have $X_{s}\simeq G_{s}/P_{s}$. Denote by
$\mathcal{A}_{1},...,\mathcal{A}_{m}$ the basis of $\mathrm{Pic}(X_{s})$ given
in [20], p.55f. From [20], p.37 we conclude that
$\mathrm{End}(\mathcal{B}_{\mathcal{A}_{i}})$ is a Tits algebra. Using these
$\mathcal{A}_{i}$, we can construct a basis
$\mathcal{L}_{1},...,\mathcal{L}_{m}$ of
$\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$ consisting of Galois-
invariant line bundles. Let $\mathcal{J}_{1},...,\mathcal{J}_{m}$ be the line
bundles in $\mathrm{Pic}(X)$ satisfying
$\mathcal{J}_{j}\otimes_{k}k^{s}\simeq\mathcal{L}_{j}^{\otimes c_{j}}$ with
minimal $c_{j}$. Notice that $H^{0}(X_{s},\mathcal{O}_{X_{s}})\simeq k^{s}$.
According to Theorem 4.4, the closed points of $\mathrm{Pic}_{(X/k)(et)}$ are
in one-to-one correspondence with isomorphism classes of indecomposable
$AS$-bundles. Since $X$ is proper over $k$, we have
$\mathrm{Pic}_{(X/k)(\mathrm{et})}\simeq\mathrm{Pic}_{(X/k)(\mathrm{fppf})}$
as abelian groups. And because
$\mathrm{Pic}(G_{s}/P_{s})\simeq\mathbb{Z}^{\oplus m}$ and since we have a
basis $\mathcal{L}_{1},...,\mathcal{L}_{m}$ of $\mathrm{Pic}(X_{s})$
consisting of Galois-invariant line bundles, it follows from [23], Proposition
4.3 that there are (unique) pure indecomposable vector bundles
$\mathcal{W}_{\mathcal{L}_{i}}$ of type $\mathcal{L}_{i}$. Moreover, we have
that the indecomposable $AS$-bundles are in one-to-one correspondence with the
$k$-rational points of $\mathrm{Pic}_{(X/k)(et)}$. Now we can use Theorem 4.5
from above to obtain a classification of all indecomposable $AS$-bundles on
the twisted flags of classical type. By construction, and by applying Lemma
4.6 (iii) we conclude that there exist integers $a_{1},...,a_{m}$ such that
$\mathrm{End}(\mathcal{W}_{\mathcal{L}_{i}})$ is Brauer equivalent to
$\mathrm{End}(\mathcal{B}_{\mathcal{A}_{1}})^{\otimes
a_{1}}\otimes\cdots\otimes\mathrm{End}(\mathcal{B}_{\mathcal{A}_{m}})^{\otimes
a_{m}}$.
Now consider the exact sequence from the introduction and specialize it to the
case $T=S=\mathrm{Spec}(k)$. We get the following exact sequence
$\displaystyle
0\longrightarrow\mathrm{Pic}(X)\longrightarrow\mathrm{Pic}_{(X/S)(\mathrm{fppf})}(k)\stackrel{{\scriptstyle\delta}}{{\longrightarrow}}\mathrm{Br}(k)\longrightarrow\mathrm{Br}^{\prime}(X)$
where $\delta(\mathcal{E})=[\mathrm{End}(\mathcal{E})]\in\mathrm{Br}(k)$ for
an indecomposable $AS$-bundle $\mathcal{E}$. Finally, use Theorem 4.5, Lemma
4.6, Lemma 4.8 and [27], Lemma 3.4 to conclude that $\mathrm{Am}(X)$ is indeed
generated by the subset $D$ which consists of classes of Tits algebras coming
from a basis of $\mathrm{Ch}(P_{s})^{\Gamma}$ which is given by some of the
$\mathcal{L}_{i}$. For these $\mathcal{L}_{i}$ one has $c_{i}\geq 2$. The
above arguments also show $\\#D\leq\mathrm{rank}(\mathrm{Pic}(X_{s}))$. This
completes the proof.
(proof of Corollary 1.3):
We sketch the proof for $X=X_{1}\times X_{2}$. Let
$X_{1}={{}_{\gamma_{1}}}(G_{1}/P_{1})$ and
$X_{2}={{}_{\gamma_{2}}}(G_{2}/P_{2})$ and notice that
$\mathrm{Pic}(X_{s})\simeq\mathrm{Pic}((X_{1})_{s})\times\mathrm{Pic}((X_{2})_{s})$
since $(X_{i})_{s}$ are rational over $k^{s}$. Therefore
$\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m_{1}}\oplus\mathbb{Z}^{\oplus
m_{2}}$. This implies $\mathrm{Pic}(X)\simeq\mathbb{Z}^{\oplus m_{1}+m_{2}}$.
Now let $\mathcal{L}_{1},...,\mathcal{L}_{m_{1}}$ be the generators of
$\mathrm{Pic}((X_{1})_{s})$ and $\mathcal{K}_{1},...,\mathcal{K}_{m_{2}}$ the
generators of $\mathrm{Pic}((X_{2})_{s})$. Now proceed as in the proof of
Theorem 1.1 and use [19], Corollary 2.3 to obtain the desired assertion.
###### Remark 5.1.
We wonder whether the Amitsur subgroup is a complete birational invariant for
twisted flags of classical type. In the special case of Brauer–Severi
varieties, this problem is known as the _Amitsur conjecture for central simple
algebras_ [1]. This conjecture is still open in general. For details and
results in this direction we refer to [10] and [15] and references therein.
###### Example 5.2.
Let $X$ be a Brauer–Severi variety corresponding to the central simple algebra
$A$. Then $X_{s}\simeq\mathbb{P}^{n}$ and $\mathrm{Pic}(X_{s})=\mathbb{Z}$ is
generated by the ample line bundle $\mathcal{O}_{\mathbb{P}^{n}}(1)$. In
Section 4 we showed that there is a (up to isomorphism) unique $AS$-bundle
$\mathcal{M}_{1}$ of type $\mathcal{O}_{\mathbb{P}^{n}}(1)$. The proof of
Theorem 1.1 actually shows that $[\mathrm{End}(\mathcal{M}_{1})]$ generates
$\mathrm{Am}(X)$. It is well known that
$[\mathrm{End}(\mathcal{M}_{1})]=[A]^{-1}$ (see for instance [27], p.571).
Therefore, $\mathrm{Am}(X)=\langle[A]\rangle$. This gives back the classical
result due to Châtelet, which is mentioned in the introduction.
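Explicitly, in its standard formulation (see e.g. [10]), Châtelet's theorem states that the kernel of the restriction map is the cyclic subgroup generated by the class of $A$:

$\displaystyle\ker\big(\mathrm{Br}(k)\longrightarrow\mathrm{Br}(X)\big)=\langle[A]\rangle.$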
###### Example 5.3.
Let $X=\mathrm{BS}(d,A)$ be a generalized Brauer–Severi variety corresponding to a
central simple algebra $A$ of degree $n$. We have
$X_{s}\simeq\mathrm{Grass}(d,n)$ and $\mathrm{Pic}(X_{s})=\mathbb{Z}$ is
generated by the ample line bundle $\mathcal{O}(1)=\mathrm{det}(\mathcal{Q})$
where $\mathcal{Q}$ is the universal quotient bundle on $\mathrm{Grass}(d,n)$.
As explained in [23], p.16 there is a (up to isomorphism) unique $AS$-bundle
$\mathcal{N}$ of type $\mathcal{O}(1)$. Moreover, it is well known that
$\mathrm{End}(\mathcal{N})$ is Brauer-equivalent to $A^{\otimes-d}$ (see for
instance [27], p.572). Hence $\mathrm{Am}(X)=\langle[A^{\otimes d}]\rangle$.
This gives back [7], Theorem 7.
## 6\. Application to noncommutative motives
(proof of Theorem 1.4 and Corollary 1.6)
If $X$ and $Y$ are birational, we conclude from [18], Proposition 2.10 that
$\mathrm{Am}(X)=\mathrm{Am}(Y)$ in $\mathrm{Br}(k)$. According to Theorem 1.1,
both Amitsur subgroups $\mathrm{Am}(X)$ and $\mathrm{Am}(Y)$ are generated by
certain Tits algebras of the algebraic groups involved. Since the Amitsur
subgroup is a finitely generated torsion abelian subgroup of $\mathrm{Br}(k)$,
we conclude with the fundamental theorem of finitely generated abelian groups
that
$\displaystyle\mathrm{Am}(X)\simeq\mathbb{Z}/p_{1}^{r_{1}}\mathbb{Z}\times\cdots\times\mathbb{Z}/p_{s}^{r_{s}}\mathbb{Z},\
\mathrm{Am}(Y)\simeq\mathbb{Z}/q_{1}^{v_{1}}\mathbb{Z}\times\cdots\times\mathbb{Z}/q_{t}^{v_{t}}\mathbb{Z}$
with uniquely determined $p_{i}^{r_{i}}$ and $q_{j}^{v_{j}}$ where $p_{i}$ and
$q_{j}$ are prime numbers. Since $\mathrm{Am}(X)=\mathrm{Am}(Y)$, we have
$s=t$ and isomorphic factors up to permutation. Without loss of generality we
assume $p_{i}=q_{i}$ and therefore $r_{i}=v_{i}$. Let $a_{i}$ be a generator
of $\mathbb{Z}/p_{i}^{r_{i}}\mathbb{Z}$ and $b_{i}$ a generator of
$\mathbb{Z}/q_{i}^{v_{i}}\mathbb{Z}$. Denote by $e_{i}=(0,...,a_{i},0,...,0)$,
$i=1,...,s$, a set of generators of
$\mathbb{Z}/p_{1}^{r_{1}}\mathbb{Z}\times\cdots\times\mathbb{Z}/p_{s}^{r_{s}}\mathbb{Z}$
and by $f_{i}=(0,...,b_{i},0,...,0)$, $i=1,...,t$, a set of generators of
$\mathbb{Z}/q_{1}^{v_{1}}\mathbb{Z}\times\cdots\times\mathbb{Z}/q_{t}^{v_{t}}\mathbb{Z}$.
The corresponding central simple algebras are denoted by $A_{e_{i}}$ and
$B_{f_{i}}$ respectively. By definition, we have
$[A_{e_{i}}]=[B_{f_{i}}^{\otimes n_{i}}]$ for a unique positive integer
$n_{i}$. Note that from [30], (2.18) it follows
$\displaystyle\bigoplus_{i=1}^{s}U(A_{e_{i}})\simeq\bigoplus_{i=1}^{t}U(B_{f_{i}}^{\otimes
n_{i}}).$
Now let $A_{g}$ be the central simple division algebra corresponding to
$g\in\mathrm{Am}(X)$. Analogously, let $B_{h}$ be the central simple division
algebra corresponding to $h\in\mathrm{Am}(Y)$. Now [30], Theorem 2.19
implies
$\displaystyle M_{X}:=\bigoplus_{g\in\mathrm{Am}(X)}U(A_{g})\
\simeq\bigoplus_{h\in\mathrm{Am}(Y)}U(B_{h})=:M_{Y}.$
Claim: Let $G$, $P$ and $\gamma$ be as in Corollary 1.2 and let
$\rho_{1},...,\rho_{n}$ be a $\mathrm{Ch}$-homogeneous basis of $R(P)$ over
$R(G)$, where $R(P)$ and $R(G)$ denote the corresponding representation rings.
Let $A_{\chi(i),\gamma}$ be the Tits central simple algebras associated to
$\rho_{i}$ and let $\mathrm{Ti}_{\rho_{1},...,\rho_{n}}(X):=\langle
A_{\chi(1),\gamma},...,A_{\chi(n),\gamma}\rangle$ be the subgroup generated by
these Tits algebras. Denote by
$\displaystyle
M\mathrm{Ti}_{\rho_{1},...,\rho_{n}}(X):=\bigoplus_{f\in\mathrm{Ti}_{\rho_{1},...,\rho_{n}}(X)}U(A_{f}),$
where $A_{f}$ are the central simple algebras corresponding to $f$. Then
$U(X)\oplus N^{\prime}=M\mathrm{Ti}_{\rho_{1},...,\rho_{n}}(X)$ with
$N^{\prime}\in\mathrm{CSA}(k)^{\oplus}$.
###### Proof.
By [28], Theorem 2.1 we conclude that
$U(X)=U(A_{\chi(1),\gamma})\oplus\cdots\oplus U(A_{\chi(n),\gamma})$.
Obviously, $U(X)$ is a direct summand of
$M\mathrm{Ti}_{\rho_{1},...,\rho_{n}}(X)$ and
$N^{\prime}\in\mathrm{CSA}(k)^{\oplus}$ by construction. ∎
Note that $\mathrm{Am}(X)$ is a subgroup of
$\mathrm{Ti}_{\rho_{1},...,\rho_{n}}(X)$. It is clear that there exists an
$N\in\mathrm{CSA}(k)^{\oplus}$ such that $M_{X}\oplus
N=M\mathrm{Ti}_{\rho_{1},...,\rho_{n}}(X)$. Now use the claim to conclude that
$U(X)$ is a direct summand of $M\mathrm{Ti}_{\rho_{1},...,\rho_{n}}(X)$. Hence
$M_{X}\oplus N=U(X)\oplus N^{\prime}$ with
$N^{\prime}\in\mathrm{CSA}(k)^{\oplus}$. By the same argument we obtain
$M_{Y}\oplus Q=U(Y)\oplus Q^{\prime}$. This completes the proof of Theorem 1.4
and Corollary 1.6.
###### Example 6.1.
Let $X$ and $Y$ be Brauer–Severi varieties corresponding to central simple
algebras $A$ and $B$. Then $\mathrm{Am}(X)=\langle A\rangle$ and
$\mathrm{Am}(Y)=\langle B\rangle$. Now if $X$ and $Y$ are birational, then
$\langle A\rangle=\langle B\rangle$ according to a theorem of Amitsur [1].
Since $\mathrm{Am}(X)$ and $\mathrm{Am}(Y)$ are cyclic of order
$\mathrm{per}(A)=\mathrm{per}(B):=m$, we conclude from [30], Theorem 3.20
$\displaystyle M_{X}=U(k)\oplus U(A)\oplus\cdots\oplus U(A^{\otimes
m-1})\simeq U(k)\oplus U(B)\oplus\cdots\oplus U(B^{\otimes m-1})=M_{Y}$
Since $m\cdot r=\mathrm{deg}(A)=\mathrm{deg}(B)$, we can use [30], Theorem
2.19 to conclude that $M_{X}^{\oplus r}\simeq U(X)$ and $M_{Y}^{\oplus
r}\simeq U(Y)$. So in the case of Brauer–Severi varieties we have
$N=M_{X}^{\oplus(r-1)}$, $N^{\prime}=0$, $Q=M_{Y}^{\oplus(r-1)}$ and
$Q^{\prime}=0$.
###### Example 6.2.
Let $X=\mathrm{BS}(d,A)$ and $Y=\mathrm{BS}(d,B)$ be generalized Brauer–Severi
varieties corresponding to central simple algebras $A$ and $B$. Then
$\mathrm{Am}(X)=\langle A^{\otimes d}\rangle$ and $\mathrm{Am}(Y)=\langle
B^{\otimes d}\rangle$ (see [7], Theorem 7). If $X$ is birational to $Y$, then
$\mathrm{Am}(X)=\mathrm{Am}(Y)$. Notice that $\mathrm{Am}(X)$ and
$\mathrm{Am}(Y)$ are cyclic of order
$m=\mathrm{per}(A)/\mathrm{gcd}(d,\mathrm{per}(A))$. According to [30],
Theorem 3.20 one has $M_{X}\simeq M_{Y}$, where
$\displaystyle M_{X}=U(k)\oplus U(A^{\otimes d})\oplus\cdots\oplus
U(A^{\otimes dm-d}),$ $\displaystyle M_{Y}=U(k)\oplus U(B^{\otimes
d})\oplus\cdots\oplus U(B^{\otimes dm-d}).$
One can use [30], Theorem 2.19 and 3.18 to conclude that $M_{X}$ is a direct
summand of $U(X)$ and $M_{Y}$ a direct summand of $U(Y)$. Again, we have
$N^{\prime}=0=Q^{\prime}$.
In particular, for birational (generalized) Brauer–Severi varieties $X$ and
$Y$ one has $U(X)\oplus Q\simeq U(Y)\oplus N$. Note that if $X$ and $Y$ are
Brauer–Severi, then $N\simeq Q$ and [31], Proposition 4.5 implies $U(X)\simeq
U(Y)$. In this way we get back [30], Proposition 3.15. Using the theory of
semiorthogonal decompositions one can try to generalize Corollary 1.6 to
arbitrary proper and geometrically integral $k$-schemes. We recall the
definition of exceptional object and semiorthogonal decomposition.
Let $\mathcal{D}$ be a triangulated category and $\mathcal{C}$ a triangulated
subcategory. The subcategory $\mathcal{C}$ is called _thick_ if it is closed
under isomorphisms and direct summands. For a subset $A$ of objects of
$\mathcal{D}$ we denote by $\langle A\rangle$ the smallest full thick
subcategory of $\mathcal{D}$ containing the elements of $A$. Furthermore, we
define $A^{\perp}$ to be the subcategory of $\mathcal{D}$ consisting of all
objects $M$ such that $\mathrm{Hom}_{\mathcal{D}}(E[i],M)=0$ for all
$i\in\mathbb{Z}$ and all elements $E$ of $A$. We say that $A$ _generates_
$\mathcal{D}$ if $A^{\perp}=0$. Now assume $\mathcal{D}$ admits arbitrary
direct sums. An object $B$ is called _compact_ if
$\mathrm{Hom}_{\mathcal{D}}(B,-)$ commutes with direct sums. Denoting by
$\mathcal{D}^{c}$ the subcategory of compact objects we say that $\mathcal{D}$
is _compactly generated_ if the objects of $\mathcal{D}^{c}$ generate
$\mathcal{D}$. One has the following important theorem (see [9], Theorem
2.1.2).
###### Theorem 6.3.
Let $\mathcal{D}$ be a compactly generated triangulated category. Then a set
of objects $A\subset\mathcal{D}^{c}$ generates $\mathcal{D}$ if and only if
$\langle A\rangle=\mathcal{D}^{c}$.
For a smooth projective scheme $X$ over $k$, we denote by
$D(\mathrm{Qcoh}(X))$ the derived category of quasicoherent sheaves on $X$.
The bounded derived category of coherent sheaves is denoted by $D^{b}(X)$.
Note that $D(\mathrm{Qcoh}(X))$ is compactly generated with compact objects
being all of $D^{b}(X)$. For details on generating see [9].
###### Definition 6.4.
Let $A$ be a division algebra over $k$, not necessarily central. An object
$\mathcal{E}\in D^{b}(X)$ is called _w-exceptional_ if
$\mathrm{End}(\mathcal{E})=A$ and $\mathrm{Hom}(\mathcal{E},\mathcal{E}[r])=0$
for $r\neq 0$. If $A=k$ the object is called _exceptional_. If $A$ is a
separable $k$-algebra, the object $\mathcal{E}$ is called _separable-
exceptional_.
###### Definition 6.5.
A totally ordered set $\\{\mathcal{E}_{1},...,\mathcal{E}_{n}\\}$ of
w-exceptional (resp. separable-exceptional) objects on $X$ is called a _w-
exceptional collection_ (resp. _separable-exceptional collection_) if
$\mathrm{Hom}(\mathcal{E}_{i},\mathcal{E}_{j}[r])=0$ for all integers $r$
whenever $i>j$. A w-exceptional (resp. separable-exceptional) collection is
_full_ if $\langle\\{\mathcal{E}_{1},...,\mathcal{E}_{n}\\}\rangle=D^{b}(X)$
and _strong_ if $\mathrm{Hom}(\mathcal{E}_{i},\mathcal{E}_{j}[r])=0$ whenever
$r\neq 0$. If the set $\\{\mathcal{E}_{1},...,\mathcal{E}_{n}\\}$ consists of
exceptional objects, it is called an _exceptional collection_.
###### Example 6.6.
Let $\mathbb{P}^{n}$ be the projective space and consider the ordered
collection of invertible sheaves
$\\{\mathcal{O}_{\mathbb{P}^{n}},\mathcal{O}_{\mathbb{P}^{n}}(1),...,\mathcal{O}_{\mathbb{P}^{n}}(n)\\}$.
In [5] Beilinson showed that this is a full strong exceptional collection.
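For completeness (this standard verification is not spelled out here), the required vanishing follows from the identification of the graded Hom-spaces with twisted cohomology:

$\displaystyle\mathrm{Hom}\big(\mathcal{O}_{\mathbb{P}^{n}}(i),\mathcal{O}_{\mathbb{P}^{n}}(j)[r]\big)\simeq H^{r}\big(\mathbb{P}^{n},\mathcal{O}_{\mathbb{P}^{n}}(j-i)\big).$

For $-n\leq j-i<0$ all of these cohomology groups vanish, while for $j-i\geq 0$ only $H^{0}$ is nonzero; this yields both the exceptionality condition for $i>j$ and the strongness condition for $r\neq 0$.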
A generalization of the notion of a full w-exceptional collection is that of a
semiorthogonal decomposition of $D^{b}(X)$. Recall that a full triangulated
subcategory $\mathcal{D}$ of $D^{b}(X)$ is called _admissible_ if the
inclusion $\mathcal{D}\hookrightarrow D^{b}(X)$ has a left and right adjoint
functor.
###### Definition 6.7.
Let $X$ be a smooth projective variety over $k$. A sequence
$\mathcal{D}_{1},...,\mathcal{D}_{n}$ of full triangulated subcategories of
$D^{b}(X)$ is called _semiorthogonal_ if all $\mathcal{D}_{i}\subset D^{b}(X)$
are admissible and
$\mathcal{D}_{j}\subset\mathcal{D}_{i}^{\perp}=\\{\mathcal{F}\in
D^{b}(X)\mid\mathrm{Hom}(\mathcal{G},\mathcal{F})=0$, $\forall$
$\mathcal{G}\in\mathcal{D}_{i}\\}$ for $i>j$. Such a sequence defines a
_semiorthogonal decomposition_ of $D^{b}(X)$ if the smallest full thick
subcategory containing all $\mathcal{D}_{i}$ equals $D^{b}(X)$.
For a semiorthogonal decomposition we write
$D^{b}(X)=\langle\mathcal{D}_{1},...,\mathcal{D}_{n}\rangle$.
###### Example 6.8.
Let $\mathcal{E}_{1},...,\mathcal{E}_{n}$ be a full w-exceptional collection
on $X$. It is easy to verify that by setting
$\mathcal{D}_{i}=\langle\mathcal{E}_{i}\rangle$ one gets a semiorthogonal
decomposition $D^{b}(X)=\langle\mathcal{D}_{1},...,\mathcal{D}_{n}\rangle$.
The noncommutative motives $M_{T(X)}$ and $M_{T(Y)}$ are defined in the
introduction.
###### Theorem (Theorem 1.7).
Let $X$ and $Y$ be schemes of pure weak exceptional type. If $X$ and $Y$ are
birational, then there are direct summands
$N,N^{\prime}\in\mathrm{CSA}^{\oplus}$ of $M_{T(X)}$ and
$Q,Q^{\prime}\in\mathrm{CSA}^{\oplus}$ of $M_{T(Y)}$ such that $U(X)\oplus
N^{\prime}\oplus Q\simeq U(Y)\oplus Q^{\prime}\oplus N$.
###### Proof.
The semiorthogonal decompositions
$\displaystyle D^{b}(X)=\langle\mathcal{E}_{1},...,\mathcal{E}_{m}\rangle$
and
$\displaystyle D^{b}(Y)=\langle\mathcal{F}_{1},...,\mathcal{F}_{n}\rangle$
imply $\mathrm{Pic}(X)\simeq\mathbb{Z}^{\oplus m}$ and
$\mathrm{Pic}(Y)\simeq\mathbb{Z}^{\oplus n}$. Since $\mathcal{E}_{i}$ is pure
of type $\mathcal{K}_{i}$, we conclude that
$\langle\mathcal{K}_{1},...,\mathcal{K}_{m}\rangle$ is a semiorthogonal
decomposition of $D^{b}(X_{s})$ which is induced from a full exceptional
collection. Therefore $\mathrm{Pic}(X_{s})\simeq\mathbb{Z}^{\oplus m}$. In the
same way it follows $\mathrm{Pic}(Y_{s})\simeq\mathbb{Z}^{\oplus n}$. Some of
the $\mathcal{K}_{i}$ form a basis of $\mathrm{Pic}(X_{s})$. Without loss of
generality, let $\mathcal{K}_{1},...,\mathcal{K}_{r}$ be a basis. Then, by
assumption, there are pure vector bundles
$\mathcal{E}_{1},...,\mathcal{E}_{r}$ of type
$\mathcal{K}_{1},...,\mathcal{K}_{r}$. Therefore, the assumptions of Theorem
4.5 are fulfilled. Let $D_{X}$ (resp. $D_{Y}$) denote the set of
indecomposable $AS$-bundles on $X$ (resp. $Y$). Since $X$ satisfies the
assumptions of Theorem 4.5, the proof of Theorem 1.1 shows that for any
indecomposable $AS$-bundle $\mathcal{E}$ on $X$ one has
$[\mathrm{End}(\mathcal{E})]\in\mathrm{Br}(k)$. From Theorem 4.5 and
Proposition 4.7 we conclude that there are only finitely many Brauer-classes
$[\mathrm{End}(\mathcal{E})]\in\mathrm{Br}(k)$ of indecomposable $AS$-bundles.
Now let $C_{X}\subset\mathrm{Br}(k)$ be the subgroup generated by these
finitely many Brauer-classes. Analogously, we define
$C_{Y}\subset\mathrm{Br}(k)$. Denote by $A(g)$ the central simple division
algebra corresponding to $g\in C_{X}$ and by $A(h)$ the central simple
division algebra corresponding to $h\in C_{Y}$. As in the introduction, we set
$\displaystyle M_{T(X)}:=\bigoplus_{g\in
C_{X}}U(A(g))\quad\textnormal{and}\quad M_{T(Y)}:=\bigoplus_{h\in
C_{Y}}U(A(h)).$
Furthermore, we put
$\displaystyle
M_{X}:=\bigoplus_{p\in\mathrm{Am}(X)}U(A_{p})\quad\textnormal{and}\quad
M_{Y}:=\bigoplus_{q\in\mathrm{Am}(Y)}U(A_{q})$
where $A_{p}$ is the central simple division algebra corresponding to $p$ and
$A_{q}$ the central simple division algebra corresponding to $q$. Consider the
semiorthogonal decompositions
$\displaystyle D^{b}(X)=\langle\mathcal{E}_{1},...,\mathcal{E}_{m}\rangle$
and
$\displaystyle D^{b}(Y)=\langle\mathcal{F}_{1},...,\mathcal{F}_{n}\rangle.$
By assumption, the vector bundles $\mathcal{E}_{1},...,\mathcal{E}_{m}$ and
$\mathcal{F}_{1},...,\mathcal{F}_{n}$ are pure having endomorphism algebras
being isomorphic to $k^{s}$. This implies that $\mathrm{End}(\mathcal{E}_{i})$
and $\mathrm{End}(\mathcal{F}_{j})$ are central simple algebras (see proof of
[23], Proposition 3.3). From the construction of noncommutative motives we
have
$\displaystyle
U(X):=\bigoplus^{m}_{i=1}U(\mathrm{End}(\mathcal{E}_{i}))\quad\textnormal{and}\quad
U(Y):=\bigoplus^{n}_{j=1}U(\mathrm{End}(\mathcal{F}_{j})).$
Obviously, $U(X)$ is a direct summand of $M_{T(X)}$ and therefore there exists
$N^{\prime}\in\mathrm{CSA}(k)^{\oplus}$ such that $U(X)\oplus N^{\prime}\simeq
M_{T(X)}$. In the same way one shows that there is a
$Q^{\prime}\in\mathrm{CSA}(k)^{\oplus}$ such that $U(Y)\oplus Q^{\prime}\simeq
M_{T(Y)}$. Note that $\mathrm{Am}(X)$ is a subgroup of $C_{X}$. Hence there
exists an $N$ such that $M_{T(X)}\simeq M_{X}\oplus N$. The same holds for
$M_{Y}$. This gives us $U(X)\oplus N^{\prime}=M_{X}\oplus N$ and $U(Y)\oplus
Q^{\prime}=M_{Y}\oplus Q$, implying the equalities $M_{X}\oplus N\oplus
Q=U(X)\oplus N^{\prime}\oplus Q$ and $M_{Y}\oplus Q\oplus N=U(Y)\oplus
Q^{\prime}\oplus N$. Now if $X$ and $Y$ are birational, it follows
$\mathrm{Am}(X)=\mathrm{Am}(Y)$ and therefore $M_{X}\simeq M_{Y}$. Hence
$U(X)\oplus N^{\prime}\oplus Q\simeq U(Y)\oplus Q^{\prime}\oplus N$. This
completes the proof. ∎
###### Corollary (Corollary 1.8).
Let $X$ and $Y$ be as in Theorem 1.7. Assume $X$ and $Y$ are birational and
let $A_{i}$, $1\leq i\leq n$, and $B_{j}$, $1\leq j\leq m$, be the central simple
algebras occurring in $U(X)\oplus N^{\prime}\oplus Q$ and $U(Y)\oplus
Q^{\prime}\oplus N$, respectively. Then
$\langle[A_{i}]\rangle=\langle[B_{j}]\rangle$ in $\mathrm{Br}(k)$.
###### Proof.
This follows from [31], Corollary 4.8. ∎
## References
* [1] S.A. Amitsur: Generic splitting fields of central simple algebras. Ann. of Math. 62 (1955), 8-43.
* [2] A. Auel and M. Bernardara: Cycles, derived categories, and rationality, in Surveys on Recent Developments in Algebraic Geometry, Proceedings of Symposia in Pure Mathematics 95 (2017), 199-266.
* [3] A. Auel and M. Bernardara: Semiorthogonal decompositions and birational geometry of del Pezzo surfaces over arbitrary fields. Proc. London Math. Soc. 117 (2018) 1-64.
* [4] M. Artin: Brauer-Severi varieties. Brauer groups in ring theory and algebraic geometry, Lecture Notes in Math. 917, Notes by A. Verschoren, Berlin, New York: Springer-Verlag (1982), 194-210
* [5] A.A. Beilinson: Coherent sheaves on $\mathbb{P}^{n}$ and problems in linear algebra. Funktsional. Anal. i Prilozhen. Vol. 12 (1978), 68-69.
* [6] I. Biswas and D. Nagaraj: Vector bundles over a nondegenerate conic. J. Aust. Math. Soc. 86 (2009), 145-154.
* [7] A. Blanchet: Function fields of generalized Brauer–Severi varieties. Comm. Algebra. Vol. 19 (1991), 97-118.
* [8] J.-L. Colliot-Thélène, N. A. Karpenko, A. S. Merkurjev: Rational surfaces and the canonical dimension of the group $\mathrm{PGL}_{6}$, Algebra i Analiz 19 (2007), 159–178, translation in St. Petersburg Math. J. 19 (2008), 793–804.
* [9] A.I. Bondal and M. Van den Bergh: Generators and representability of functors in commutative and noncommutative geometry. Mosc. Math. J. Vol. 3 (2003), 1-36.
* [10] P. Gille and T. Szamuely: Central simple algebras and Galois cohomology. Cambridge Studies in advanced Mathematics. 101. Cambridge University Press. (2006)
* [11] A. Grothendieck: Le groupe de Brauer I: Algèbres d'Azumaya et interprétations diverses, Séminaire Bourbaki. No. 290 (1964).
* [12] A. Grothendieck: Le groupe de Brauer II: Théorie cohomologique, Séminaire Bourbaki. No. 297 (1965).
* [13] J. Kollár: Severi–Brauer varieties; A geometric treatment, arXiv:1606.04368 [math.AG] (2016).
* [14] M-A. Knus, A. Merkurjev, M. Rost and J-P. Tignol: The Book of Involutions. AMS Coll. Publ. 44, AMS, Providence, RI (1998).
* [15] D. Krashen: Birational maps between generalized Brauer–Severi varieties. J. Pure Appl. Algebra. 212, (2008), 689-703.
* [16] A.G. Kuznetsov: Semiorthogonal decomposition in algebraic geometry. Proceedings of ICM (2014).
* [17] M. Levine, V. Srinivas and J. Weyman: K-Theory of Twisted Grassmannians. K-Theory Vol. 3 (1989), 99-121.
* [18] Ch. Liedtke: Morphisms to Brauer–Severi varieties, with applications to del Pezzo surfaces. Geometry over Nonclosed Fields, (2017), 157-196, Springer.
* [19] A.S. Merkurjev, I.A. Panin and A.R. Wadsworth: Index reduction formulas for twisted flag varieties, I. K-Theory 10, (1996), 517-596.
* [20] A.S. Merkurjev and J.-P. Tignol: The multipliers of similitudes and the Brauer group of homogeneous varieties. J. reine angew. Math. 461 (1995), 13-47.
* [21] A.S. Merkurjev: Equivariant K-theory. J. Handbook of K-theory. Vol. 1, 2, Springer, Berlin (2005), 925-954.
* [22] S. Novaković: Absolutely split locally free sheaves on Brauer–Severi varieties of index two. Bulletin des Sciences Mathématiques 136 (2012), 413–422.
* [23] S. Novaković: Absolutely split locally free sheaves on proper $k$-schemes and Brauer–Severi varieties, to appear in: Bulletin des Sciences Mathématiques, arXiv:1501.00859.
* [24] S. Novaković: Non-existence of exceptional collections on twisted flags and categorical representability via noncommutative motives. arXiv:1607.01043v1 [math.AG] (2016).
* [25] S. Novaković: No phantoms in the derived category of curves over arbitrary fields, and derived characterizations of Brauer-Severi varieties, arXiv:1701.03020, to appear in: Journal of Commutative Algebra.
* [26] S. Novaković: On non-existence of full exceptional collections on some relative flags. To appear in: Rocky Mountain J. Math. Also available on arXiv:1607.04834v1 [math.AG] (2016).
* [27] I.A. Panin: On the algebraic K-theory of twisted flag varieties. K-Theory Vol. 8 (1994), 541-585.
* [28] G. Tabuada: Additive invariants of toric and twisted projective homogeneous varieties via noncommutative motives. J. Algebra Vol. 417 (2014), 15-38.
* [29] G. Tabuada: Noncommutative motives. University Lecture Series, Vol. 63, AMS, 2015.
* [30] G. Tabuada and M. Van den Bergh: Noncommutative motives of separable algebras. Adv. Math. 303, (2016), 1122-1161.
* [31] G. Tabuada: Jacques Tits’ motivic measure. arXiv:1604.06407 [math.AG] (2016).
* [32] D.A. Timashev: Homogeneous spaces and equivariant embeddings. Encyclopedia of Mathematical Sciences, Vol. 22, Springer, Dordrecht (2011).
HOCHSCHULE FRESENIUS UNIVERSITY OF APPLIED SCIENCES 40476 DÜSSELDORF, GERMANY.
E-mail address<EMAIL_ADDRESS>
MATHEMATISCHES INSTITUT, HEINRICH–HEINE–UNIVERSITÄT 40225 DÜSSELDORF, GERMANY.
E-mail address<EMAIL_ADDRESS>
# Performance Analysis and Window Design for Channel Estimation of OTFS
Modulation
Zhiqiang Wei, Weijie Yuan, Shuangyang Li, Jinhong Yuan, and Derrick Wing Kwan Ng

Zhiqiang Wei, Weijie Yuan, Shuangyang Li, Jinhong Yuan, and Derrick Wing Kwan Ng are with the School of Electrical Engineering and Telecommunications, the University of New South Wales, Australia (email: zhiqiang.wei; weijie.yuan; shuangyang.li; j.yuan; w.k.ng@unsw.edu.au).
###### Abstract
In this paper, we investigate the impacts of transmitter and receiver windows
on orthogonal time-frequency space (OTFS) modulation and propose a window
design to improve the OTFS channel estimation performance. Assuming ideal
pulse shaping filters at the transceiver, we first identify the role of the
window in the effective channel and the reduced channel sparsity under the
conventional rectangular window. Then, we characterize the impacts of windowing on the
effective channel estimation performance for OTFS modulation. Based on the
revealed insights, we propose to apply a Dolph-Chebyshev (DC) window at either
the transmitter or the receiver to effectively enhance the sparsity of the
effective channel. As such, the channel spread due to the fractional Doppler
is significantly reduced, which leads to a lower error floor in channel
estimation compared with that of the rectangular window. Simulation results
verify the accuracy of the obtained analytical results and confirm the
superiority of the proposed window designs in improving the channel estimation
performance over the conventional rectangular or Sine windows.
## I Introduction
Future wireless networks are expected to provide high-speed and ultra-reliable
communications for a wide range of emerging mobile applications[1, 2, 3, 4, 5,
6, 7], including online video gaming, unmanned aerial vehicles (UAV)[8],
vehicle-to-everything (V2X), high-speed railway systems, etc. In high-mobility
channels, the multipath propagation and the temporal channel variations give
rise to the frequency-selective fading (time dispersion) and time-selective
fading (frequency dispersion), respectively, resulting in the so called
doubly-selective or doubly-dispersive channels[9]. To cope with the channel
dynamics, a new two-dimensional (2D) modulation scheme referred to as the
orthogonal time-frequency space (OTFS) modulation was recently proposed in [1]
and has received an increasing amount of attention in both academia and
industry, e.g. [1, 10, 11, 12, 13].
In OTFS modulation, data symbols are multiplexed in the delay-Doppler (DD)
domain rather than in the time-frequency (TF) domain, in contrast to
traditional orthogonal frequency division multiplexing (OFDM) modulation[11,
14, 15]. In practice, OTFS modulation effectively transforms the TF domain
time-variant channel into an effective two-dimensional (2D) time-invariant
channel in the DD domain, which exhibits both sparse and stable properties[1,
11]. More importantly, the 2D transformation from the DD domain to the TF
domain employed by an OTFS modulator allows the possibility of each
information symbol to experience the whole TF domain channel over an OTFS
frame. Thus, OTFS enjoys the joint time-frequency diversity[12] (the so-called
full diversity in [1]), which is desirable to provide reliable communications
over doubly dispersive channels. In particular, it has been demonstrated that
OTFS is resilient to severe delay-Doppler shifts and outperforms OFDM
significantly for both uncoded [11] and coded [16, 17] systems.
However, reliable communications with OTFS modulation highly rely on accurate
channel estimation[5], particularly in the presence of fractional Doppler[11],
i.e., the exact Doppler frequency straddles a pair of finite-resolution bins
rather than falls exactly into a bin in the Doppler domain. Yet, most of
existing works only considered integer Doppler for simplicity, e.g. [18, 19,
20]. In fact, ensuring integer Doppler requires a large speed separation among
transceiver and all moving scatters to create a high Doppler resolution, which
is not always possible in practical systems. Although channel acquisition in
the DD domain may be deemed more convenient than that in the TF domain in high
mobility scenarios[21, 20], the effective channel is spread across all the
Doppler bins due to fractional Doppler, which sacrifices the DD domain channel
sparsity. Moreover, the channel estimation performance of OTFS systems is
mainly limited by inter-Doppler interference (IDI), and a guard space is
usually required to avoid IDI between data and pilot symbols, whether a
single pilot symbol [21] or a pilot sequence [20] is employed. Even worse, the
IDI between data and pilot symbols caused by fractional Doppler becomes more
severe, leading to an error floor in the effective channel estimation. As a
remedy, to lower this error floor, a much larger guard space inserted between
the data and pilot symbols is required, which incurs a higher
signaling overhead. Therefore, a pragmatic approach for reducing the channel
spreading caused by fractional Doppler is desired.
As mentioned in [18, 22], windowing in the TF domain has the potential to
combat the effective channel spreading in the DD domain. Yet, the authors
in [18, 22] did not propose any method for window design. Moreover, the role
of windowing in OTFS modulation and its impact on the performance of OTFS
channel estimation are not yet well understood. In the literature, to the best
of our knowledge, there is no existing work studying window design for
OTFS, which motivates this work.
In this paper, we study the window design for OTFS modulation to improve the
channel estimation performance with the consideration of practical fractional
Doppler. Firstly, the roles of windowing on OTFS systems, such as the
effective channel and the corresponding sparsity, are identified. Secondly, we
analyze the impact of windowing on the effective channel estimation
performance. Thirdly, we propose to employ a Dolph-Chebyshev (DC) window in
the TF domain to facilitate the channel estimation in the DD domain. The
employed DC window is optimal in the sense that it can obtain a predefined
channel sparsity while suppressing the channel spreading caused by the
fractional Doppler to the largest degree. Due to the enhanced channel
sparsity, applying the proposed DC window at either the transmitter or the
receiver can achieve a much lower channel estimation error floor compared with
the conventional rectangular window. Extensive simulations are conducted to
verify the analytical results and to demonstrate the substantial performance
gain of the proposed window design over the conventional rectangular and Sine
windows.
Notations: $\mathbb{Z}^{+}$ denotes the set of all non-negative integers;
$\mathbb{C}^{M\times N}$ denotes the set of all $M\times N$ matrices with
complex entries; $\lvert\cdot\rvert$ denotes the absolute value of a complex
scalar; $E\\{\cdot\\}$ denotes the expectation; $\left(\cdot\right)^{*}$
denotes the conjugate operation; $\left(\cdot\right)_{N}$ denotes the modulus
operation with respect to $N$; $\Re\\{\cdot\\}$ returns the real part of the
input complex number; $\lfloor\cdot\rfloor$ is the floor function, which
returns the largest integer not greater than the input value; the circularly
symmetric complex Gaussian distribution with mean $\bm{\mu}$ and covariance
matrix $\bm{\Sigma}$ is denoted by ${\cal CN}(\bm{\mu},\bm{\Sigma})$; $\sim$
stands for “distributed as”.
Figure 1: The block diagram of the OTFS transceiver[11].
## II System Model
### II-A OTFS Transmitter
A practical implementation of the OTFS transceiver is shown in Fig. 1. Without
loss of generality, we assume that one OTFS frame occupies a bandwidth of
$B_{\mathrm{OTFS}}$ and a time duration of $T_{\mathrm{OTFS}}$. The total
available bandwidth $B_{\mathrm{OTFS}}$ is divided into $M$ subcarriers with
an equal spacing of $\Delta f=\frac{B_{\mathrm{OTFS}}}{M}$. The total time
duration $T_{\mathrm{OTFS}}$ is divided into $N$ time slots with an equal-
length slot duration of $T=\frac{T_{\mathrm{OTFS}}}{N}$. As a result, a grid
of $N\times M$ can be constructed in the TF domain. Note that the delay
resolution is determined by the reciprocal of the system bandwidth, i.e.,
$\frac{1}{M\Delta f}$, while the Doppler resolution is determined by the OTFS
frame duration, i.e., $\frac{1}{NT}$ [11]. Correspondingly, in the DD domain,
$N$ denotes the number of Doppler indices with a Doppler resolution of
$\frac{1}{NT}$ and $M$ denotes the number of delay indices with a delay
resolution of $\frac{1}{M\Delta f}$. Consider a baseband modulated symbol in
the DD domain:
$x\left[{k,l}\right]\in\mathbb{A}=\\{a_{1},\ldots,a_{Q}\\},$ (1)
where $k\in\\{0,\ldots,N-1\\}$ represents the Doppler index,
$l\in\\{0,\ldots,M-1\\}$ represents the delay index, and $\mathbb{A}$ denotes
the constellation set with a size of $Q$. We assume that a normalized
constellation is adopted, i.e.,
$E\left\\{{{{\left|{x\left[{k,l}\right]}\right|}^{2}}}\right\\}=1$, and a
proper scrambler is applied to scramble the output of the encoder such that it
is reasonable to assume the symbols are uncorrelated, i.e.,
$E\left\\{x\left[{k,l}\right]x^{*}\left[{k^{\prime},l^{\prime}}\right]\right\\}=0$,
$\forall\left(k,l\right)\neq\left(k^{\prime},l^{\prime}\right)$. The OTFS modulator
performs a 2D transformation that maps the data symbols $x\left[{k,l}\right]$
in the DD domain to $X\left[{n,m}\right]$ in the TF domain. In particular,
such mapping can be realized by the inverse symplectic finite Fourier
transform (ISFFT)[1]:
$X\left[{n,m}\right]=\frac{1}{{\sqrt{NM}}}\sum\nolimits_{k=0}^{N-1}{\sum\nolimits_{l=0}^{M-1}{x\left[{k,l}\right]{e^{j2\pi\left({\frac{{nk}}{N}-\frac{{ml}}{M}}\right)}}}},$
(2)
where $n\in\\{0,\ldots,N-1\\}$ is the time slot index and
$m\in\\{0,\ldots,M-1\\}$ is the subcarrier index.
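The ISFFT in (2) factors into an inverse DFT along the Doppler axis and a forward DFT along the delay axis, so it can be computed with standard FFT routines. Below is a minimal numpy sketch (illustrative, not from the paper) verifying the FFT-based evaluation against the direct double sum; the $\sqrt{N/M}$ factor compensates numpy's $1/N$ normalization of `ifft` so that the overall scaling matches the $1/\sqrt{NM}$ in (2).

```python
import numpy as np

def isfft(x):
    """ISFFT of (2): x[k, l] (Doppler x delay) -> X[n, m] (time x frequency)."""
    N, M = x.shape
    # ifft over the Doppler axis gives (1/N) * sum_k x e^{+j2pi nk/N};
    # fft over the delay axis gives sum_l x e^{-j2pi ml/M}.
    return np.sqrt(N / M) * np.fft.fft(np.fft.ifft(x, axis=0), axis=1)

def isfft_direct(x):
    """Direct evaluation of the double sum in (2), for verification."""
    N, M = x.shape
    n = np.arange(N)[:, None]
    m = np.arange(M)[None, :]
    X = np.zeros((N, M), dtype=complex)
    for k in range(N):
        for l in range(M):
            X += x[k, l] * np.exp(2j * np.pi * (n * k / N - m * l / M))
    return X / np.sqrt(N * M)
```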
A TF domain transmitter (TX) window $U\left[{n,m}\right]$ is imposed through a
point-wise multiplication with the TF domain signal $X\left[{n,m}\right]$:
$\widetilde{X}\left[{n,m}\right]=U\left[{n,m}\right]X\left[{n,m}\right],$ (3)
where $U\left[{n,m}\right]\in\mathbb{C}$ denotes the complex-valued TX window
weighted on the point of $\left[{n,m}\right]$ in the TF domain grid. Then, a
multicarrier modulator is adopted to transform the TF domain signal
$\widetilde{X}\left[{n,m}\right]$ to a time-domain signal $s\left(t\right)$,
given by
$s\left(t\right)=\sum\limits_{n=0}^{N-1}{\sum\limits_{m=0}^{M-1}{\widetilde{X}\left[{n,m}\right]{{g_{{\rm{tx}}}}\left({t-nT}\right){e^{j2\pi
m\Delta f\left({t-nT}\right)}}}}},$ (4)
which is referred to as the Heisenberg transform in[1], where $t$ denotes the
continuous time variable. The time domain function
${{g_{{\rm{tx}}}}\left({t}\right)}$ is the pulse-shaping filter of the
multicarrier modulator for the windowed TF domain symbol.
### II-B DD Domain Channel Response
For a linear time-variant channel, the received signal in the time domain is
given by[11]
$r\left(t\right)=\int{\int{h\left({\tau,\nu}\right)}}{{e^{j2\pi\nu\left({t-\tau}\right)}}}s\left({t-\tau}\right)d\tau
d\nu+w\left(t\right),$ (5)
where $w\left(t\right)$ denotes the noise signal in the time domain following
a stationary Gaussian random process and we have
$w\left(t\right)\sim\mathcal{CN}\left(0,N_{0}\right)$ with $N_{0}$ denoting
the noise variance. In practice, only few reflectors are moving within one
OTFS frame duration and thus only a small number of channel taps are
associated with Doppler shift[1, 11]. Therefore, the resulting channel
response in the DD domain is sparse compared with the whole DD domain grid
spanned by one OTFS frame. In particular, considering a channel consisting of
$P$ independent distinguishable paths, the channel response in the DD domain
can be modeled by
${h\left({\tau,\nu}\right)}=\sum\nolimits_{i=1}^{P}{h_{i}}\delta(\tau-\tau_{i})\delta(\nu-\nu_{i}),$
(6)
where $h_{i}\in\mathbb{C}$, $\tau_{i}$, and $\nu_{i}$ denote the channel
coefficient, delay, and Doppler shift associated with the $i$-th path,
respectively. The variables $\tau_{i}$ and $\nu_{i}$ are defined as
$\tau_{i}=l_{\tau_{i}}\frac{1}{M\Delta f}$ and
$\nu_{i}=\left(k_{\nu_{i}}+\kappa_{\nu_{i}}\right)\frac{1}{NT}$, respectively,
where $l_{\tau_{i}}\in\\{0,\ldots,l_{\mathrm{max}}\\}$,
${k_{{\nu_{i}}}}\in\\{-k_{\mathrm{max}},\ldots,k_{\mathrm{max}}\\}$, and
$-\frac{1}{2}<\kappa_{\nu_{i}}<\frac{1}{2}$ denote the integer delay, integer
Doppler, and fractional Doppler indices, respectively. Variables
$k_{\max}\in\mathbb{Z}^{+}$ and $l_{\max}\in\mathbb{Z}^{+}$ denote the maximum
Doppler and delay indices, respectively.
### II-C OTFS Receiver
At the receiver side, we first perform a multicarrier demodulation for the
received signal $r\left(t\right)$ with a receiving filter to obtain the TF
domain signal $\widetilde{Y}\left[n,m\right]$, given by:
$\widetilde{Y}\left[n,m\right]=\int{r\left(t\right)g_{{\rm{rx}}}^{*}\left({t-nT}\right){e^{-j2\pi
m\Delta f\left({t-nT}\right)}}dt},$ (7)
which is referred to as the Wigner transform in [1]. A time domain function
${{g_{{\rm{rx}}}}\left({t}\right)}$ serving as a receiving filter for the
multicarrier demodulator is adopted to sample the discrete symbol
$\widetilde{Y}\left[{n,m}\right]$ from the received waveform
$r\left(t\right)$. Substituting (4), (5), and (6) into (7), and assuming ideal
transceiver pulse shaping filters satisfying the bi-orthogonal condition[11],
we have
$\widetilde{Y}\left[n,m\right]={\widetilde{X}\left[{n,m}\right]}\widetilde{H}\left[n,m\right]+\widetilde{Z}\left[n,m\right],$
(8)
where the TF domain effective channel is given by
$\widetilde{H}\left[n,m\right]=\sum\limits_{i=1}^{P}{h_{i}{e^{-j2\pi\frac{\left(k_{\nu_{i}}+\kappa_{\nu_{i}}\right)l_{\tau_{i}}}{NM}}}{e^{j2\pi\left({\frac{{n\left(k_{\nu_{i}}+\kappa_{\nu_{i}}\right)}}{N}-\frac{{ml_{\tau_{i}}}}{M}}\right)}}}.$
(9)
Corresponding to the TX window, we can insert a receiver (RX) window
$V\left[{n,m}\right]$ to the received signal in the TF domain:
${Y}\left[{n,m}\right]=V\left[{n,m}\right]\widetilde{Y}\left[{n,m}\right],$
(10)
where $V\left[{n,m}\right]\in\mathbb{C}$. Then, an OTFS demodulator transforms
the TF domain signals ${Y}\left[{n,m}\right]$ to the DD domain signals
$y\left[{k,l}\right]$ through a symplectic finite Fourier transform (SFFT)
[1]:
$y\left[{k,l}\right]=\frac{1}{{\sqrt{NM}}}\sum\nolimits_{n=0}^{N-1}{\sum\nolimits_{m=0}^{M-1}{Y\left[{n,m}\right]{e^{-j2\pi\left({\frac{{kn}}{N}-\frac{{lm}}{M}}\right)}}}}.$
(11)
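The SFFT in (11) is the inverse of the ISFFT in (2); both are unitary and can be realized with standard FFTs. A minimal numpy sketch (illustrative, not from the paper) checking that the pair forms an exact round trip:

```python
import numpy as np

def isfft(x):
    """ISFFT of (2): x[k, l] (Doppler x delay) -> X[n, m] (time x frequency)."""
    N, M = x.shape
    return np.sqrt(N / M) * np.fft.fft(np.fft.ifft(x, axis=0), axis=1)

def sfft(Y):
    """SFFT of (11): Y[n, m] (time x frequency) -> y[k, l] (Doppler x delay)."""
    N, M = Y.shape
    # fft over the time axis: sum_n Y e^{-j2pi kn/N};
    # ifft over the frequency axis: (1/M) * sum_m Y e^{+j2pi lm/M}.
    return np.sqrt(M / N) * np.fft.ifft(np.fft.fft(Y, axis=0), axis=1)
```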
## III The Impact of Windowing on Effective Channel
In this section, we analyze the impacts of windowing on the effective channel
for OTFS modulation.
### III-A Effective Channel in the DD Domain
According to the OTFS transceiver structure introduced above, the output of
the OTFS demodulator in the DD domain is given by[11]
$\displaystyle y\left[{k,l}\right]$
$\displaystyle=\sum\nolimits_{k^{\prime}=0}^{N-1}{\sum\nolimits_{l^{\prime}=0}^{M-1}{x\left[{k^{\prime},l^{\prime}}\right]}}{h_{w}}\left[{k-k^{\prime},l-l^{\prime}}\right]$
$\displaystyle+\sum\nolimits_{k^{\prime}=0}^{N-1}{\sum\nolimits_{l^{\prime}=0}^{M-1}{z\left[{k^{\prime},l^{\prime}}\right]}}{v_{z}}\left[{k-k^{\prime},l-l^{\prime}}\right].$
(12)
In (12), ${h_{w}}\left[{k,l}\right]$ denotes the effective channel in the
DD domain capturing the windows’ effect and it is given by
${h_{w}}\left[{k,l}\right]=\sum\limits_{i=1}^{P}{h_{i}}w(k-k_{\nu_{i}}-\kappa_{\nu_{i}},l-l_{\tau_{i}}){e^{-j2\pi\frac{\left(k_{\nu_{i}}+\kappa_{\nu_{i}}\right)l_{\tau_{i}}}{NM}}},$
(13)
where $w(k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}},l-{l_{{\tau_{i}}}})$ is an
equivalent DD domain filter designed by the TX-RX window which is given by[11]
$\displaystyle w(k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}},l-{l_{{\tau_{i}}}})$
$\displaystyle=\frac{1}{NM}\sum\limits_{n=0}^{N-1}\sum\limits_{m=0}^{M-1}V\left[{n,m}\right]U\left[{n,m}\right]$
$\displaystyle\times{e^{-j2\pi
n\frac{\left({k-k_{\nu_{i}}-\kappa_{\nu_{i}}}\right)}{N}}}{e^{j2\pi
m\frac{\left(l-l_{\tau_{i}}\right)}{M}}}.$ (14)
Also in (12), ${v_{z}}\left[{k,l}\right]$ is a DD domain filter induced
only by the RX window and is given by
${v_{z}}\left[{k,l}\right]=\frac{1}{NM}\sum\nolimits_{n=0}^{N-1}\sum\nolimits_{m=0}^{M-1}V\left[{n,m}\right]{e^{-j2\pi\frac{nk}{N}}}{e^{j2\pi\frac{ml}{M}}}.$
(15)
We can observe that different from the original DD domain channel response in
(6), the effective channel in (13) has a circular structure due to
${h_{w}}\left[{\left(k\right)_{N},\left(l\right)_{M}}\right]={h_{w}}\left[{k,l}\right]$.
As such, from (12), we can observe that the received signal
$y\left[{k,l}\right]$ is a 2D circular convolution between the data symbols,
${x\left[{k,l}\right]}$, and the effective channel,
${h_{w}}\left[{k,l}\right]$, in the DD domain. Furthermore, as the data and
training symbols are multiplexed in the DD domain [21], the channel estimation
performance and the data detection complexity depend on the effective channel
${h_{w}}\left[{k,l}\right]$ instead of the original channel response
$h\left({\tau,\nu}\right)$. As shown in (13), the effective channel
${h_{w}}\left[{k,l}\right]$ is a summation of the channel spread of each path
where $w(k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}},l-{l_{{\tau_{i}}}})\neq 0$,
$\forall i$, and the spreading pattern can be manipulated by the design of the
DD domain filter $w(k,l)$. In other words, the channel sparsity of the
effective channels can be controlled by the TX and RX windows. In (III-A), we
can observe that imposing a TX window $U\left[{n,m}\right]$ or a RX window
$V\left[{n,m}\right]$ in the TF domain has the same effect on the design of
the DD domain filter $w(k,l)$. In contrast, only the RX window
$V\left[{n,m}\right]$ affects the DD domain filter,
${v_{z}}\left[{k,l}\right]$, which alters the properties of the noise at the
receiver side.
### III-B Effective Channel Sparsity with Rectangular Window
Since the rectangular window, i.e.,
$V\left[{n,m}\right]=U\left[{n,m}\right]=1$, $\forall n,m$, is the most
straightforward one to be considered[11, 21], we investigate the effective
channel sparsity with rectangular window for both cases of integer and
fractional Doppler111Different from [11, 21], we focus on discussing the
reduced effective channel sparsity due to the existence of fractional Doppler,
which motivates our analysis and design in the following sections..
Employing the rectangular window, the DD domain filters are given by [11]
$\displaystyle w(k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}},l-{l_{{\tau_{i}}}})$
$\displaystyle=\mathcal{G}^{\mathrm{Rect}}_{N}\left({k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}}}\right)\mathcal{F}^{\mathrm{Rect}}_{M}\left({l-{l_{{\tau_{i}}}}}\right)$
$\displaystyle\text{and}\;{v_{z}}\left[{k,l}\right]$
$\displaystyle=\mathcal{G}^{\mathrm{Rect}}_{N}\left(k\right)\mathcal{F}^{\mathrm{Rect}}_{M}\left(l\right),$
(16)
respectively. Functions $\mathcal{G}^{\mathrm{Rect}}_{N}\left(k\right)$ and
$\mathcal{F}^{\mathrm{Rect}}_{M}\left(l\right)$ represent the filters in the
delay and Doppler domains, respectively, and they are given by
$\mathcal{G}^{\mathrm{Rect}}_{N}\left(k\right)=\frac{1}{N}\left({e^{-j\left({N-1}\right)\frac{\pi
k}{N}}}\frac{\sin\left(\pi k\right)}{{\sin\left({\frac{{\pi
k}}{N}}\right)}}\right)$ and
$\mathcal{F}^{\mathrm{Rect}}_{M}\left(l\right)=\frac{1}{M}\left({{e^{-j\left({M-1}\right)\frac{\pi
l}{M}}}\frac{{\sin\left({\pi l}\right)}}{{\sin\left({\frac{{\pi
l}}{M}}\right)}}}\right)$, respectively.
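The closed-form Dirichlet kernels above can be checked numerically. A short sketch (illustrative) comparing $\mathcal{G}^{\mathrm{Rect}}_{N}$ evaluated by its defining sum against the closed form, and confirming that it samples perfectly at integer offsets:

```python
import numpy as np

def g_rect_sum(k, N):
    """Rectangular-window Doppler filter by its defining sum: (1/N) sum_n e^{-j2pi nk/N}."""
    n = np.arange(N)
    return np.exp(-2j * np.pi * n * k / N).sum() / N

def g_rect_closed(k, N):
    """Closed-form Dirichlet kernel G^Rect_N(k), valid for non-integer k."""
    return (np.exp(-1j * (N - 1) * np.pi * k / N)
            * np.sin(np.pi * k) / (N * np.sin(np.pi * k / N)))
```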
For the case of integer Doppler, i.e., $\kappa_{\nu_{i}}=0$, the DD domain
filter is simplified as
$\displaystyle w\left[k-{k_{{\nu_{i}}}},l-{l_{{\tau_{i}}}}\right]$
$\displaystyle=\delta\left[k-{k_{{\nu_{i}}}}\right]\delta\left[l-{l_{{\tau_{i}}}}\right]$
(17)
$\displaystyle=\left\\{{\begin{array}[]{*{20}{c}}{1}&{\left(k-{k_{{\nu_{i}}}}\right)_{N}=0,\left(l-{l_{{\tau_{i}}}}\right)_{M}=0}\\\
0&\mathrm{otherwise}\end{array}}\right.,$ (20)
and the effective channel in the DD domain is given by
${h_{w}}\left[{k,l}\right]=\sum\nolimits_{i=1}^{P}{h_{i}}\delta\left[k-{k_{{\nu_{i}}}}\right]\delta\left[l-{l_{{\tau_{i}}}}\right]{e^{-j2\pi\frac{k_{\nu_{i}}l_{\tau_{i}}}{NM}}}.$
(21)
We can observe that the effective channel ${h_{w}}\left[{k,l}\right]$ in the
DD domain has a response if and only if $k={k_{{\nu_{i}}}}$ and
$l={l_{{\tau_{i}}}}$, i.e., the effective channel shares the same channel
sparsity with the original DD domain channel response in (6). Moreover, the
effective channel ${h_{w}}\left[{k,l}\right]$ is a phase-rotated version of
the original channel response in (6), where the delay and Doppler shift of the
$i$-th path rotates the original channel response ${h_{i}}$ with a phase of
$2\pi k_{\nu_{i}}l_{\tau_{i}}/MN$.
Figure 2: The channel spreading in the Doppler domain with a rectangular
window with/without fractional Doppler, where
$\mathrm{SL}_{w}\approx\frac{1}{N}$ denotes the sidelobe level of the adopted
rectangular window.
For the case of fractional Doppler, i.e., $\kappa_{\nu_{i}}\neq 0$, the
effective channel is given by
${h_{w}}\left[{k,l}\right]=\sum\limits_{i=1}^{P}{h_{i}}\mathcal{G}^{\mathrm{Rect}}_{N}\left({k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}}}\right)\delta\left[l-{l_{{\tau_{i}}}}\right]{e^{-j2\pi\frac{\left(k_{\nu_{i}}+\kappa_{\nu_{i}}\right)l_{\tau_{i}}}{NM}}}.$
(22)
From (22), we can observe that the effective channel in the DD domain
${h_{w}}\left[{k,l}\right]$ contains more “paths” (non-zero entries) than the
original channel response in (6), since the Doppler domain filter
$\mathcal{G}^{\mathrm{Rect}}_{N}\left({k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}}}\right)\neq
0$, $\forall k,{k_{{\nu_{i}}}}$, and $\forall\kappa_{\nu_{i}}\neq 0$. In fact,
for each path with a Doppler shift of ${k_{{\nu_{i}}}}+\kappa_{\nu_{i}}$, the
channel coefficient ${h_{i}}$ is spread to all the Doppler indices $k$ in the
Doppler domain. To visualize the channel spreading, we ignore the delay domain
at the moment and plot the Doppler domain filter response
$\left|\mathcal{G}^{\mathrm{Rect}}_{N}\left({k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}}}\right)\right|$
to illustrate the impact of fractional Doppler in Fig. 2. It can be seen that
without the fractional Doppler, the filter
$\left|\mathcal{G}^{\mathrm{Rect}}_{N}\left({k-{k_{{\nu_{i}}}}}\right)\right|$
is a perfect sampling function $\delta\left[k-{k_{{\nu_{i}}}}\right]$, i.e.,
no channel spread. However, the existence of the fractional Doppler shift
$\kappa_{\nu_{i}}$ not only reduces signal power at the sampling point
$k={k_{{\nu_{i}}}}$, but also introduces non-negligible power leakage from the
Doppler shift ${k_{{\nu_{i}}}}$ to $k\neq{k_{{\nu_{i}}}}$. In other words,
with the application of the rectangular window, fractional Doppler sacrifices
the sparsity of the effective channel in the DD domain, which could degrade
the channel estimation performance and increase the complexity of data
detection. Therefore, it is desired to design a window which can null/suppress
the power leakage and improve the effective channel sparsity.
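The loss of sparsity can be quantified directly. The sketch below (illustrative parameters, not from the paper) considers a single path at integer Doppler $0$ plus the worst-case fractional shift $\kappa=0.5$ under a rectangular window, and measures the fraction of channel power leaking outside the two Doppler bins that straddle the true shift:

```python
import numpy as np

N, kappa = 20, 0.5                  # Doppler bins and worst-case fractional shift
n = np.arange(N)
k = np.arange(N)
# G^Rect_N(k - kappa) sampled on the circular Doppler grid (path at k_nu = 0).
G = np.exp(-2j * np.pi * np.outer(n, k - kappa) / N).sum(axis=0) / N
power = np.abs(G) ** 2
main = power[0] + power[1]          # the two bins straddling kappa
leak = 1.0 - main                   # power spread to all other Doppler bins
```

For $\kappa=0.5$ roughly one fifth of the path power leaks beyond the two nearest bins, illustrating why the rectangular window sacrifices sparsity.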
$\displaystyle E\left\\{{{{\left|{I\left[{k,l}\right]}\right|}^{2}}}\right\\}$
$\displaystyle=\sum\limits_{k^{\prime}\notin\mathcal{K}}{\sum\limits_{l^{\prime}=l-l_{\mathrm{max}}}^{{l}}{E\left\\{{{{\left|{x\left[{k^{\prime},l^{\prime}}\right]}\right|}^{2}}}\right\\}E\left\\{{{{\left|{{h_{w}}\left[{{{\left({k-k^{\prime}}\right)}_{N}},{{\left({l-l^{\prime}}\right)}_{M}}}\right]}\right|}^{2}}}\right\\}}}$
$\displaystyle\mathop{=}\limits^{(a)}\sum\limits_{k^{\prime}\notin\mathcal{K}}\sum\limits_{l-l^{\prime}\in[0,l_{\mathrm{max}}],{{\left({l-l^{\prime}}\right)}_{M}}=l_{\tau_{i}}}{{E\left\\{{{{\left|{\sum\limits_{i=1}^{P}{{h_{i}}}w\left({{{\left({k-k^{\prime}}\right)}_{N}}-{k_{{\nu_{i}}}}-{\kappa_{{\nu_{i}}}},0}\right){e^{-j2\pi\frac{{\left({{k_{{\nu_{i}}}}+{\kappa_{{\nu_{i}}}}}\right){l_{{\tau_{i}}}}}}{{NM}}}}}\right|}^{2}}}\right\\}}}$
(24) $\displaystyle\mathrm{MSE}$
$\displaystyle=\sum_{k={k_{p}}-{k_{\max}}-\hat{k}}^{{k_{p}}+{k_{\max}}+\hat{k}}\sum_{l={l_{p}}}^{{l_{p}}+{l_{\mathrm{max}}}}E\left\\{{{{\left|{h_{w}}\left[{\left(k-k_{p}\right)_{N},\left(l-l_{p}\right)_{M}}\right]-{\hat{h}_{w}}\left[{\left(k-k_{p}\right)_{N},\left(l-l_{p}\right)_{M}}\right]\right|}^{2}}}\right\\}$
(27) $\displaystyle\lim\limits_{N_{0}\to 0}\mathrm{MSE}$
$\displaystyle=\sum_{k={k_{p}}-{k_{\max}}-\hat{k}}^{{k_{p}}+{k_{\max}}+\hat{k}}\sum_{l={l_{p}}}^{{l_{p}}+{l_{\mathrm{max}}}}\frac{E\left\\{{{{\left|{I\left[{k,l}\right]}\right|}^{2}}}\right\\}}{\left|x_{p}\right|^{2}}\approx\left(N-4{k_{\max}}-4\hat{k}-1\right)\left(2{k_{\max}}+2\hat{k}+1\right)\left(l_{\mathrm{max}}+1\right)\mathrm{SL}_{w}^{2}$
(28)
## IV Effective Channel Estimation Performance with Arbitrary Window
In this work, we adopt the channel estimation scheme proposed in [21], where a
single pilot symbol is embedded in the DD domain and a guard space is inserted
between the pilot symbol and data symbols. In fact, to the best of our
knowledge, the channel estimation scheme in [21] is the first DD domain
channel estimation method proposed for OTFS in the literature, which is simple
and practical. In this section, we investigate the impact of windowing on
channel estimation performance based on the scheme in [21]. Let us assume that
a single pilot symbol $x_{p}$ is inserted at the $\left[k_{p},l_{p}\right]$-th
DD grid point and the data symbols ${x_{d}}\left[{k,l}\right]$ are arranged as
follows [21]:
$x\left[{k,l}\right]=\left\\{{\begin{array}[]{*{20}{c}}{{x_{p}}}&{k={k_{p}},l={l_{p}}},\\\
0&\begin{array}[]{l}k\in\mathcal{K},k\neq{k_{p}},l\in\mathcal{L},l\neq{l_{p}},\end{array}\\\
{{x_{d}}\left[{k,l}\right]}&{{\rm{otherwise}}},\end{array}}\right.$ (20)
where
$\mathcal{K}=\\{{k_{p}}-2{k_{\max}}-2\hat{k},\ldots,{k_{p}}+2{k_{\max}}+2\hat{k}\\}$
and
$\mathcal{L}=\\{{l_{p}}-{l_{\mathrm{max}}},\ldots,{l_{p}}+{l_{\mathrm{max}}}\\}$
denote the index sets of the guard space in the Doppler and delay domains,
respectively. Variable $\hat{k}\in\mathbb{Z}^{+}$ denotes the additional guard
to mitigate the spread due to fractional Doppler and
$\hat{k}\in\left\\{0,\ldots,\lfloor\frac{N-4k_{\max}-1}{4}\rfloor\right\\}$.
Increasing $\hat{k}$ can potentially improve the channel estimation
performance while reducing the spectral efficiency, as the signaling overhead
increases with $\hat{k}$, i.e., the total signaling overhead is
$\left(2{l_{\mathrm{max}}}+1\right)\left(4{k_{\max}}+4\hat{k}+1\right)$.
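The overhead bookkeeping above can be expressed as a small helper (a sketch; the parameter values in the test are illustrative):

```python
def overhead(k_max, k_hat, l_max):
    """Pilot-plus-guard signaling overhead (2*l_max + 1)(4*k_max + 4*k_hat + 1)."""
    return (2 * l_max + 1) * (4 * k_max + 4 * k_hat + 1)

def overhead_full_guard(N, l_max):
    """Overhead when the guard spans all N Doppler bins."""
    return (2 * l_max + 1) * N
```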
The estimation of the effective channel is based on the received signals in
the DD domain, which are given by
$\displaystyle y\left[k,l\right]$
$\displaystyle=x_{p}{h_{w}}\left[{\left(k-k_{p}\right)_{N},\left(l-l_{p}\right)_{M}}\right]+I\left[k,l\right]$
$\displaystyle+\sum\nolimits_{k^{\prime}=0}^{N-1}{\sum\nolimits_{l^{\prime}=0}^{M-1}{z\left[{k^{\prime},l^{\prime}}\right]}}{v_{z}}\left[{k-k^{\prime},l-l^{\prime}}\right],$
(21)
where ${k_{p}}-{k_{\max}}-\hat{k}\leq k\leq{k_{p}}+{k_{\max}}+\hat{k}$ and
${l_{p}}\leq l\leq{l_{p}}+{l_{\mathrm{max}}}$. According to [21], the
effective DD domain channel can be estimated by
${\hat{h}_{w}}\left[{\left(k-k_{p}\right)_{N},\left(l-l_{p}\right)_{M}}\right]=\frac{y\left[k,l\right]}{x_{p}},\;\text{if}\left|y\left[k,l\right]\right|\geq
3\sqrt{N_{0}}.$ (22)
In (21), $I\left[k,l\right]$ denotes the interference spread from data symbols
due to the existence of fractional Doppler, which is given by
$I\left[k,l\right]=\sum_{k^{\prime}\notin\mathcal{K}}\sum_{l^{\prime}=0}^{l_{\mathrm{max}}}x\left[{k^{\prime},\left(l-l^{\prime}\right)_{M}}\right]{h_{w}}\left[{\left(k-k^{\prime}\right)_{N},l^{\prime}}\right].$
(23)
We can observe that in the delay domain, only $l_{\mathrm{max}}+1$ symbols
before $l$ affect the received symbol on $l$. On the other hand, in the
Doppler domain, all the data symbols outside the guard space
$k^{\prime}\notin\mathcal{K}$ affect the received symbol on $k$. Due to the
existence of the interference term $I\left[k,l\right]$, the channel estimation
in (22) suffers from an error floor even when increasing the system signal-to-noise
ratio (SNR). Note that when applying the full guard space [21], i.e.,
$4{k_{\max}}+4\hat{k}+1=N$, the interference term in (23) vanishes and
there is no error floor in the effective channel estimation. However, it
requires a higher signaling overhead of $\left(2{l_{\mathrm{max}}}+1\right)N$
compared with that of the scheme in (20).
In the following, we derive the interference power to investigate the impact
of windowing on the effective channel estimation performance. Since the
transmitted data symbols are independent, the interference power can be
calculated as (24) at the top of the next page, where the equality $(a)$ is
obtained since only the data symbols on
${{\left({l-l^{\prime}}\right)}_{M}}=l_{\tau_{i}}$ in the summation over
$l^{\prime}$ affect the received symbol on $l$ with adopting a rectangular
window in the delay domain, i.e.,
$V\left[{n,m}\right]=V\left[{n,m^{\prime}}\right]=U\left[{n,m}\right]=U\left[{n,m^{\prime}}\right]$,
$\forall n,m,m^{\prime}$, and
$E\left\\{{{{\left|{x\left[{k^{\prime},l^{\prime}}\right]}\right|}^{2}}}\right\\}=1$.
Assuming independent channel coefficients, i.e.,
$E\left\\{{h_{i}}{h^{*}_{j}}\right\\}=0$, $\forall i\neq j$, (24) becomes
$E\left\\{{{{\left|{I\left[{k,l}\right]}\right|}^{2}}}\right\\}=\sum\limits_{k^{\prime}\notin\mathcal{K}}\sum\limits_{i=1}^{P}{E\left\\{{{{\left|{{h_{i}}}\right|}^{2}}}\right\\}}{{\left|{w\left({{{\left({k-k^{\prime}}\right)}_{N}}-{k_{{\nu_{i}}}}-{\kappa_{{\nu_{i}}}},0}\right)}\right|}^{2}}.$
(25)
It can be observed that the interference power is determined by the window
response at
${{\left({k-k^{\prime}}\right)}_{N}}-{k_{{\nu_{i}}}}-{\kappa_{{\nu_{i}}}}$.
Thanks to the guard space, the window response
$\left|{w\left({{{\left({k-k^{\prime}}\right)}_{N}}-{k_{{\nu_{i}}}}-{\kappa_{{\nu_{i}}}},0}\right)}\right|$
lies in its sidelobe and becomes almost a constant, as shown in Fig. 2.
Therefore, we assume
$\left|{w\left({{{\left({k-k^{\prime}}\right)}_{N}}-{k_{{\nu_{i}}}}-{\kappa_{{\nu_{i}}}},0}\right)}\right|\approx\mathrm{SL}_{w}$
for ${k^{\prime}\notin\mathcal{K}}$ and ${k_{p}}-{k_{\max}}-\hat{k}\leq
k\leq{k_{p}}+{k_{\max}}+\hat{k}$. Considering a normalized channel power gain,
i.e.,
$\sum\nolimits_{i=1}^{P}{E\left\\{{{{\left|{{h_{i}}}\right|}^{2}}}\right\\}}=1$,
the average interference power can be approximated by
$E\left\\{{{{\left|{I\left[{k,l}\right]}\right|}^{2}}}\right\\}\approx\left(N-4{k_{\max}}-4\hat{k}-1\right)\mathrm{SL}_{w}^{2},$
(26)
where $\mathrm{SL}_{w}$ denotes the sidelobe level of the adopted window. For
instance, as shown in Fig. 2, we have $\mathrm{SL}_{w}\approx\frac{1}{N}$ for
the case of rectangular window. Define the mean squared error (MSE) of the
effective channel estimation in the guard space as (27) at the top of this
page. According to (21), in the high SNR regime, i.e., $N_{0}\to 0$, the MSE of
the effective channel estimation is given by (28) at the top of this page,
which indicates the effective channel estimation error floor. Note that the
analytical result in (28) is applicable to an arbitrary window.
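Equation (26) is straightforward to evaluate. A sketch (illustrative values; the rectangular window's sidelobe level is approximated as $1/N$, cf. Fig. 2):

```python
def interference_power(N, k_max, k_hat, sl_w):
    """Approximate average interference power of (26)."""
    return (N - 4 * k_max - 4 * k_hat - 1) * sl_w ** 2

N, k_max = 20, 3
sl_rect = 1.0 / N                                # rectangular-window sidelobe level
p0 = interference_power(N, k_max, 0, sl_rect)    # no additional guard
p1 = interference_power(N, k_max, 1, sl_rect)    # one extra guard index
```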
Now, we can observe that in the high SNR regime, the error floor level in the
effective channel estimation in (28) depends on the additional guard $\hat{k}$
and the sidelobe level $\mathrm{SL}_{w}$ of the designed window response. It
can be seen that the MSE of the effective channel estimation is a quadratic
function with respect to $\hat{k}$. After some mathematical manipulations, it
can be seen that when $\frac{N-8{k_{\max}-3}}{4}\leq 0$, increasing $\hat{k}$
in the range of
$\left\\{0,\ldots,\lfloor\frac{N-4k_{\max}-1}{4}\rfloor\right\\}$ always
results in a lower error floor level at the expense of more signaling
overhead. On the other hand, when $\frac{N-8{k_{\max}-3}}{4}\geq 1$,
increasing $\hat{k}$ first increases and then decreases the MSE of the effective
channel estimation. This is because for large $N$, increasing the additional guard
$\hat{k}$ introduces more entries to be estimated within the guard space,
thereby potentially increasing the channel estimation error. Note that further
increasing $\hat{k}$ reduces the IDI caused by the data symbols and thus
reduces the effective channel estimation MSE, but it also consumes more
signaling overhead. More importantly, as shown in (28), a proper design of
window response can achieve a low sidelobe level at the first place, which can
effectively decrease the error floor level with a relatively small $\hat{k}$.
In fact, a window response with a low sidelobe level can enhance the effective
channel sparsity, which can improve the channel estimation performance.
Moreover, as the TX and RX windows have the same impact on the effective
channel in the DD domain in (13), imposing a window at either the transmitter
or the receiver will result in the same channel estimation error floor.
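The non-monotonic behavior of the error floor (28) in $\hat{k}$ can be checked numerically. A sketch with illustrative parameters ($N$ chosen large enough that the quadratic's maximum falls inside the feasible range of $\hat{k}$):

```python
def mse_floor(N, k_max, k_hat, l_max, sl_w):
    """High-SNR channel-estimation MSE floor of (28)."""
    return ((N - 4 * k_max - 4 * k_hat - 1)
            * (2 * k_max + 2 * k_hat + 1)
            * (l_max + 1) * sl_w ** 2)

N, k_max, l_max = 64, 3, 4
k_hat_range = range((N - 4 * k_max - 1) // 4 + 1)   # feasible additional guards
floors = [mse_floor(N, k_max, kh, l_max, 1.0 / N) for kh in k_hat_range]
```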
## V Window Designs for OTFS Channel Estimation
Based on the above discussed properties, we first discuss the ideal window
response, which is not realizable but provides insightful guidelines for
practical window designs. Then, we propose to apply the DC window to enhance
the effective channel sparsity, which will significantly improve the
performance of both channel estimation and data detection.
### V-A Ideal Window
To facilitate the window design, we consider a separable TF domain window as
follows:
$V\left[{n,m}\right]=V_{\nu}\left[{n}\right]V_{\tau}\left[{m}\right]\;\text{and}\;U\left[{n,m}\right]=U_{\nu}\left[{n}\right]U_{\tau}\left[{m}\right],$
(29)
where $V_{\nu}\left[{n}\right]$ and $U_{\nu}\left[{n}\right]$ denote the RX
and TX windows in the Doppler domain, respectively, and
$V_{\tau}\left[{m}\right]$ and $U_{\tau}\left[{m}\right]$ denote the RX and TX
windows in the delay domain, respectively. As a result, the window response in
the DD domain in (14) can be decomposed as
$w(k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}},l-{l_{{\tau_{i}}}})=\mathcal{G}_{N}\left({k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}}}\right)\mathcal{F}_{M}\left({l-{l_{{\tau_{i}}}}}\right),$
(30)
where
$\displaystyle\mathcal{G}_{N}\left({k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}}}\right)$
$\displaystyle=\frac{1}{N}\sum\nolimits_{n=0}^{N-1}V_{\nu}\left[{n}\right]U_{\nu}\left[{n}\right]{e^{-j2\pi
n\frac{\left({k-k_{\nu_{i}}-\kappa_{\nu_{i}}}\right)}{N}}}$
$\displaystyle\text{and}\;\mathcal{F}_{M}\left({l-{l_{{\tau_{i}}}}}\right)$
$\displaystyle=\frac{1}{M}\sum\nolimits_{m=0}^{M-1}V_{\tau}\left[{m}\right]U_{\tau}\left[{m}\right]{e^{j2\pi
m\frac{\left(l-l_{\tau_{i}}\right)}{M}}}.$ (31)
Combining (13) and (30), the effective channel in the DD domain can be
rewritten as
${h_{w}}\left[{k,l}\right]=\sum\nolimits_{i=1}^{P}{h_{i}}\mathcal{G}_{N}\left({k-{k_{{\nu_{i}}}}-\kappa_{\nu_{i}}}\right)\mathcal{F}_{M}\left({l-{l_{{\tau_{i}}}}}\right){e^{-j2\pi\frac{\left(k_{\nu_{i}}+\kappa_{\nu_{i}}\right)l_{\tau_{i}}}{NM}}}.$
(32)
Since the delay resolution is usually sufficient and there is only negligible
channel spread in the delay domain [11], the optimal window in the delay
domain should be maintained as the rectangular window, i.e.,
$\mathcal{F}^{\mathrm{Ideal}}_{M}\left({l}\right)=\mathcal{F}^{\mathrm{Rect}}_{M}\left({l}\right)$.
Besides, with the existence of the fractional Doppler, as
$-\frac{1}{2}<\kappa_{\nu_{i}}<\frac{1}{2}$, the ideal window in the Doppler
domain is given by:
${{\cal
G}^{\mathrm{Ideal}}_{N}}\left(k\right)=\left\\{{\begin{array}[]{*{20}{c}}{1,}&{-0.5\leq
k\leq 0.5,}\\\\[-1.42262pt] {0,}&{{\rm{otherwise.}}}\end{array}}\right.$ (33)
In Fig. 3, we illustrate that the ideal window can tolerate the fractional
Doppler shift without sacrificing any channel gain and causing any channel
spread. However, to implement such an ideal window response, an infinite
length of window in the time domain is needed, i.e., $N\to\infty$. Recall that
the fractional Doppler is caused by the finite $N$. Therefore, the ideal
window in (33) is not realizable in practice.
### V-B Dolph-Chebyshev Window
In what follows, we propose to apply the Dolph-Chebyshev (DC) window at the
transmitter or the receiver to improve the channel sparsity when channel state
information (CSI) is not available. In fact, it has been proved that the DC
window is optimal [23] in the sense that: 1) given the specified sidelobe
level, the width of the mainlobe in the window response is the narrowest; or
2) given the fixed mainlobe width, the sidelobe level is minimized. Note that
the effective channel only has a considerably large entry when it is located
in the mainlobe of the window response function in (31). Therefore, given any
channel sparsity requirement, the channel spread to other Doppler indices is
reduced to the largest degree by using a DC window.
Particularly, if the required mainlobe width of the window response in the
Doppler domain is $k_{\mathrm{main}}>1$, the number of non-negligible
effective channel entries spread by each path in the Doppler domain is no more than
$k_{\mathrm{main}}$, i.e., the effective channel sparsity is improved. In this
case, the lowest sidelobe level achieved by the DC window is [24]
$\mathrm{SL}_{w}[\mathrm{dB}]=-20\log_{10}{{\cosh\left({\frac{N}{2}{{\cosh}^{-1}}\left(\frac{3-\cos\left(\frac{k_{\mathrm{main}}}{2}\right)}{1+\cos\left(\frac{k_{\mathrm{main}}}{2}\right)}\right)}\right)}}.$
(34)
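To make Eq. (34) concrete, the sketch below evaluates the sidelobe level and generates the corresponding time-domain DC window with SciPy's `chebwin`. Two assumptions on our part: $k_{\mathrm{main}}$ is treated as a mainlobe width in normalized radian frequency when forming the cosine argument, and `scipy.signal.windows.chebwin` is used as a stand-in for the window construction of (16) in [25].

```python
import numpy as np
from scipy.signal.windows import chebwin

def dc_sidelobe_level_db(N, mainlobe_width):
    """Lowest sidelobe level (dB) of a length-N Dolph-Chebyshev window for a
    given mainlobe width, per Eq. (34); the width is assumed to be expressed
    in normalized radian frequency (our reading of the formula)."""
    c = np.cos(mainlobe_width / 2)
    return -20 * np.log10(np.cosh(N / 2 * np.arccosh((3 - c) / (1 + c))))

N = 20                                   # Doppler-domain window length
width = 2 * np.pi * 3 / N                # mainlobe spanning ~3 Doppler bins
sl_db = dc_sidelobe_level_db(N, width)   # predicted sidelobe level in dB

# Time-domain DC window for a 40 dB sidelobe attenuation target.
w = chebwin(N, at=40)
```

With these parameters the formula predicts a sidelobe level in the same ballpark as the $-40$ dB design target quoted in the text, and the generated window is symmetric with unit peak normalization.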
In this paper, to reveal the insights of employing TX/RX windows, we only
adopt the DC window at the transmitter or the receiver side, with the other
side adopting a rectangular window. The TX window $U_{\nu}\left[{n}\right]$ or
the RX window $V_{\nu}\left[{n}\right]$ can be obtained from (16) in [25]
according to the selected $\mathrm{SL}_{w}$. In Fig. 3, for the same setting
as in Fig. 2, we employ a DC window at the transmitter side with
$\mathrm{SL}_{w}=-40$ dB and $k_{\mathrm{main}}\approx 3$. We can observe that
the resulting effective channel only has approximately $3$ entries with
considerable gains, and the channel spreading to all the other Doppler indices
has been significantly suppressed due to the $40$ dB sidelobe attenuation
introduced by the designed DC window. By substituting
$\mathrm{SL}_{w}=-40$ dB into (28), we can find that the proposed DC window
effectively suppresses the MSE of effective channel estimation compared with
the rectangular window.
Figure 3: The effective channel in the Doppler domain with different types of
windows and fractional Dopplers.
## VI Numerical Results
In this section, we verify the accuracy of the derived analytical results and
the effectiveness of the proposed designs via simulations. For each OTFS
frame, we set $N=20$, $M=30$, carrier frequency $f_{c}=3$ GHz, and subcarrier
spacing $\Delta f=5$ kHz. Without loss of generality, we set the maximum delay
index as $l_{\mathrm{max}}=4$ and the maximum Doppler index as
$k_{\mathrm{max}}=3$, corresponding to a relative speed between the
transmitter and receiver of $270$ km/h. The number of paths in the DD domain is $P=5$ and
the additional guard space is $\hat{k}=[0,1]$. For each channel realization,
we randomly select the delay and Doppler indices such that we have
$-k_{\mathrm{max}}\leq{k_{{\nu_{i}}}}\leq k_{\mathrm{max}}$ and $0\leq
l_{\tau_{i}}\leq l_{\mathrm{max}}$. The channel coefficients $h_{i}$ are
generated according to the distribution
$h_{i}\sim\mathcal{CN}(0,q^{l_{\tau_{i}}})$, where $q^{l_{\tau_{i}}}$ follows
a normalized exponential power delay profile
$q^{l_{\tau_{i}}}=\frac{\exp(-0.1l_{\tau_{i}})}{\sum_{i}\exp(-0.1l_{\tau_{i}})}$.
The system SNR is defined as $\mathrm{SNR}=\frac{1}{N_{0}}$ and the pilot
power is $\left|x_{p}\right|^{2}=[10,30]$ dBW [21]. The DC window is designed
with $\mathrm{SL}_{w}=-40$ dB such that $k_{\mathrm{main}}\approx 3$. All
simulation results are averaged over more than $10^{4}$ OTFS frames.
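The channel-generation recipe above can be sketched as follows; the RNG seed and the uniform draw for the fractional Doppler $\kappa_{\nu_i}$ are our own choices (the paper only states $-\frac{1}{2}<\kappa_{\nu_{i}}<\frac{1}{2}$), and `h` realizes $h_{i}\sim\mathcal{CN}(0,q^{l_{\tau_{i}}})$ with the normalized exponential power delay profile:

```python
import numpy as np

rng = np.random.default_rng(0)

P, l_max, k_max = 5, 4, 3                # paths, max delay / Doppler index

# Integer delay and Doppler indices drawn uniformly within the stated ranges.
l_tau = rng.integers(0, l_max + 1, size=P)
k_nu = rng.integers(-k_max, k_max + 1, size=P)
kappa = rng.uniform(-0.5, 0.5, size=P)   # fractional Doppler per path

# Normalized exponential power delay profile, then Rayleigh-fading gains.
q = np.exp(-0.1 * l_tau)
q /= q.sum()
h = np.sqrt(q / 2) * (rng.standard_normal(P) + 1j * rng.standard_normal(P))
```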
Figure 4: The MSE of effective channel estimation with employing a rectangular
window at the transceiver.
Fig. 4 shows the MSE of the effective channel estimation when employing a
rectangular window at the transceiver. We can observe that the
effective channel estimation suffers from an error floor in all the considered
cases. This is due to the interference spread from data symbols to the guard
space caused by the existence of the fractional Doppler. Moreover, our derived
error floor level in (28) matches closely with the simulation results in the
high SNR regime. Note that a better effective channel estimation performance
can be achieved with a higher pilot power. Besides, as expected, the more
additional guard space $\hat{k}$ inserted, the lower the MSE of channel
estimation will be, at the expense of a higher amount of overhead.
Fig. 5 illustrates the MSE of effective channel estimation when employing the
designed DC window or a sine window at the transmitter and a rectangular
window at the receiver. Note that the sine window is generated via
$U_{\nu}\left[{n}\right]=\sin\left(\frac{\pi n}{N-1}\right)$. We can observe
that the derived error floor in (28) is also consistent with the simulation
results in the high SNR regime. Comparing Fig. 4 and Fig. 5, it can be seen
that employing the designed DC window at the transmitter achieves a
significantly lower MSE in the effective channel estimation than the
rectangular and sine windows. This demonstrates the effectiveness of the
designed DC window in enhancing the effective channel sparsity and improving
the effective channel estimation performance. We also evaluate the MSE when
employing the designed DC window at the receiver and a rectangular window at
the transmitter. The results are identical to those in Fig. 5. This verifies
that employing a DC window at either the transmitter or the receiver
can achieve the same error floor for the effective channel estimation in the
high SNR regime, as predicted by our analysis.
Figure 5: The MSE of effective channel estimation with adopting a DC window at
the transmitter and a rectangular window at the receiver.
## VII Conclusions
In this paper, we investigated the impacts of transmitter and receiver windows
and proposed a window design for OTFS channel estimation. We analyzed the
impacts of windowing on the effective channel, the channel sparsity, and the
corresponding estimation performance. In particular, we showed that the
existence of fractional Doppler leads to potential effective channel spread,
which causes an error floor in the effective channel estimation. We found that
adopting a window at the transmitter or the receiver yields identical
performance in the effective channel estimation. Besides, we proposed to apply
a DC window to enhance the channel sparsity, which improves the channel
estimation performance. We verified the accuracy of the obtained analytical
results and insights and demonstrated the substantial performance gain of the
proposed window designs.
## References
* [1] R. Hadani, S. Rakib, M. Tsatsanis, A. Monk, A. J. Goldsmith, A. F. Molisch, and R. Calderbank, “Orthogonal time frequency space modulation,” in _Proc. IEEE Wireless Commun. and Networking Conf._ , 2017, pp. 1–6.
* [2] X. Chen, D. W. K. Ng, W. Yu, E. G. Larsson, N. Al-Dhahir, and R. Schober, “Massive access for 5G and beyond,” _IEEE J. Select. Areas Commun._ , early access, 2020.
* [3] J. Zhang, E. Björnson, M. Matthaiou, D. W. K. Ng, H. Yang, and D. J. Love, “Prospective multiple antenna technologies for beyond 5G,” _IEEE J. Select. Areas Commun._ , vol. 38, no. 8, pp. 1637–1660, 2020.
* [4] C. Liu, J. Wang, X. Liu, and Y.-C. Liang, “Deep CM-CNN for spectrum sensing in cognitive radio,” _IEEE J. Sel. Areas Commun._ , vol. 37, no. 10, pp. 2306–2321, Oct. 2019.
* [5] C. Liu, X. Liu, D. W. K. Ng, and J. Yuan, “Deep residual learning for channel estimation in intelligent reflecting surface-assisted multi-user communications,” _arXiv preprint arXiv: 2009.01423_ , 2020, [Online]. Available: https://arxiv.org/abs/2009.01423.
* [6] C. Liu, J. Wang, X. Liu, and Y.-C. Liang, “Maximum eigenvalue-based goodness-of-fit detection for spectrum sensing in cognitive radio,” _IEEE Trans. Veh. Technol._ , vol. 68, no. 8, pp. 7747–7760, Aug. 2019.
* [7] H. Zhang, Y. Duan, K. Long, and V. C. M. Leung, “Energy efficient resource allocation in terahertz downlink NOMA systems,” _IEEE Trans. Commun._ , early access, 2020.
* [8] H. Zhang, J. Zhang, and K. Long, “Energy efficiency optimization for NOMA UAV network with imperfect CSI,” _IEEE J. Select. Areas Commun._ , vol. 38, no. 12, pp. 2798–2809, 2020.
* [9] X. Ma and G. B. Giannakis, “Maximum-diversity transmissions over doubly selective wireless channels,” _IEEE Trans. Inf. Theory_ , vol. 49, no. 7, pp. 1832–1840, Jul. 2003.
* [10] Z. Wei, W. Yuan, S. Li, J. Yuan, G. Bharatula, R. Hadani, and L. Hanzo, “Orthogonal time-frequency space modulation: A full-diversity next generation waveform,” _arXiv preprint arXiv:2010.03344_ , 2020.
* [11] P. Raviteja, K. T. Phan, Y. Hong, and E. Viterbo, “Interference cancellation and iterative detection for orthogonal time frequency space modulation,” _IEEE Trans. Wireless Commun._ , vol. 17, no. 10, pp. 6501–6515, Oct. 2018.
* [12] G. D. Surabhi, R. M. Augustine, and A. Chockalingam, “On the diversity of uncoded OTFS modulation in doubly-dispersive channels,” _IEEE Trans. Wireless Commun._ , vol. 18, no. 6, pp. 3049–3063, Jun. 2019.
* [13] Z. Wei, W. Yuan, S. Li, J. Yuan, and D. W. K. Ng, “Off-grid channel estimation with sparse bayesian learning for OTFS systems,” _arXiv preprint arXiv:2101.05629_ , 2021.
* [14] Z. Wei, W. Yuan, S. Li, J. Yuan, and D. W. K. Ng, “Transmitter and receiver window designs for orthogonal time frequency space modulation,” _IEEE Trans. Commun._ , early access, 2021.
* [15] S. Li, W. Yuan, Z. Wei, and J. Yuan, “Cross domain iterative detection for orthogonal time frequency space modulation,” _arXiv preprint arXiv:2101.03822_ , 2021.
* [16] T. Zemen, M. Hofer, D. Löschenbrand, and C. Pacher, “Iterative detection for orthogonal precoding in doubly selective channels,” in _Proc. IEEE Personal, Indoor and Mobile Radio Commun. Sympos._ , Sep. 2018, pp. 1–7.
* [17] S. Li, J. Yuan, W. Yuan, Z. Wei, B. Bai, and D. W. K. Ng, “Performance analysis of coded OTFS systems and code design,” _IEEE Trans. Wireless Commun._ , submitted, 2020.
* [18] A. Farhang, A. RezazadehReyhani, L. E. Doyle, and B. Farhang-Boroujeny, “Low complexity modem structure for OFDM-Based orthogonal time frequency space modulation,” _IEEE Commun. Lett._ , vol. 7, no. 3, pp. 344–347, Jun. 2018.
* [19] P. Raviteja, Y. Hong, E. Viterbo, and E. Biglieri, “Practical pulse-shaping waveforms for reduced-cyclic-prefix OTFS,” _IEEE Trans. Veh. Technol._ , vol. 68, no. 1, pp. 957–961, Jan. 2019.
* [20] W. Shen, L. Dai, J. An, P. Fan, and R. W. Heath, “Channel estimation for orthogonal time frequency space (OTFS) massive MIMO,” _IEEE Trans. Signal Process._ , vol. 67, no. 16, pp. 4204–4217, Aug. 2019.
* [21] P. Raviteja, K. T. Phan, and Y. Hong, “Embedded pilot-aided channel estimation for OTFS in delay-doppler channels,” _IEEE Trans. Veh. Technol._ , vol. 68, no. 5, pp. 4906–4917, May 2019.
* [22] A. RezazadehReyhani, A. Farhang, M. Ji, R. R. Chen, and B. Farhang-Boroujeny, “Analysis of discrete-time MIMO OFDM-Based orthogonal time frequency space modulation,” in _Proc. IEEE Intern. Commun. Conf._ , May 2018, pp. 1–6.
* [23] C. L. Dolph, “A current distribution for broadside arrays which optimizes the relationship between beam width and side-lobe level,” _Proceedings of the IRE_ , vol. 34, no. 6, pp. 335–348, Jun. 1946.
* [24] Z. Wei, D. W. K. Ng, and J. Yuan, “NOMA for hybrid mmWave communication systems with beamwidth control,” _IEEE J. Select. Topics Signal Process._ , vol. 13, no. 3, pp. 567–583, Jun. 2019.
* [25] R. H. Duhamel, “Optimum patterns for endfire arrays,” _Proceedings of the IRE_ , vol. 41, no. 5, pp. 652–659, May 1953.
# Resolvent analysis of stratification effects on wall-bounded shear flows
M. A. Ahmed<EMAIL_ADDRESS>Graduate Aerospace Laboratories, California Institute of Technology, Pasadena, CA 91125, USA
H. J. Bae<EMAIL_ADDRESS>Harvard University, Cambridge, MA 02139, USA; Graduate Aerospace Laboratories, California Institute of Technology, Pasadena, CA 91125, USA
A. F. Thompson<EMAIL_ADDRESS>Geophysical and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125, USA
B. J. McKeon<EMAIL_ADDRESS>Graduate Aerospace Laboratories, California Institute of Technology, Pasadena, CA 91125, USA
###### Abstract
The interaction between shear driven turbulence and stratification is a key
process in a wide array of geophysical flows with spatio-temporal scales that
span many orders of magnitude. A quick numerical model prediction based on
external parameters of stratified boundary layers could greatly benefit the
understanding of the interaction between velocity and scalar flux at varying
scales. For these reasons, here, we use the resolvent framework [1] to
investigate the effects of an active scalar on incompressible wall-bounded
turbulence. We obtain the state of the flow system by applying the linear
resolvent operator to the nonlinear terms in the governing Navier-Stokes
equations with the Boussinesq approximation. This extends the formulation to
include the scalar advection equation with the scalar component acting in the
wall-normal direction in the momentum equations [2]. We use the mean velocity
profiles from a direct numerical simulation (DNS) of a stably-stratified
turbulent channel flow at varying friction Richardson number $Ri_{\tau}$. The
results obtained from the resolvent analysis are compared to the premultiplied
energy spectra, auto-correlation coefficient, and the energy budget terms
obtained from the DNS. It is shown that, despite considering only a very
limited range of representative scales, computation of the leading resolvent
modes reproduces the balance of energy budget terms, provides meaningful
predictions of coherent structures in the flow, and is more cost-effective
than performing full-scale simulations. This quick model can provide further understanding of
stratified flows with only information about the mean profile and prior
knowledge of energetic scales of motion in the neutrally-buoyant boundary
layers.
## I Introduction
Stable boundary layers can be generated by the advection of warm air over a
colder surface. Stably-stratified atmospheric boundary layers are observed
during clear nights as a result of radiative cooling of the ground surface [3,
4]. Oceans, unlike the lower atmosphere, are heated from above and are usually
stably stratified [5, 6]. In both the atmosphere and oceans, stratification
has a significant effect on turbulence production, propagation, and decay. The
interaction between shear-driven turbulence and stratification is a key
process in a wide array of relevant geophysical flows for which the spatio-
temporal scales span many orders of magnitudes.
Classical understanding of stably-stratified boundary layers is well described
in a number of textbooks [7, 8, 9, 10] and reviews [11, 12, 13]. However,
fundamental features of the stably-stratified turbulent boundary layer still
remain elusive from a modeling standpoint. The strong intermittency observed
in stable boundary layers causes the upper portion of the boundary layer to
decouple from the near-wall region due to the inhibition in vertical mixing
[9, 14, 15]. Strong stable stratification also significantly changes the flow
structures prevalent in a boundary layer with additional features becoming
prominent such as large-scale intermittency, gravity waves and Kelvin-
Helmholtz instabilities [14], and the near parallel downstream tilting of flow
structures [16, 17, 18].
One way to study the stably-stratified turbulent boundary layer is through on-
site experiments. Researchers in the past decades have conducted field
experiments in the stably-stratified atmospheric boundary layer to study
turbulent energy budgets [19], heat and momentum transfer [20], regime
characterization [21, 14], flow structures [16], and the complexities of
atmospheric stable boundary layers [22]. Turbulence quantities in the ocean
near the bottom boundary are difficult to measure, and as such the literature
is sparse. Smedman _et al._ [23], using data from a marine coastal
experiment over the Baltic sea, found that the near-wall turbulence was
virtually independent of forcing from large-scale structures embedded in the
flow. Experiments performed in the northern bay of San Francisco [24] found
that active turbulence is confined near the wall. Additionally, tidal channel
experiments [25] demonstrated that the production of turbulent kinetic energy
is generally greatest near the bottom boundary while the buoyancy flux is
weakest in this region. Still, real-world atmospheric and oceanic boundary
layers are complicated by non-turbulent motions occurring simultaneously on a
variety of scales, the possible importance of radiative flux divergence of the
air within the boundary layer, surface condensation, and variable cloudiness
[26, 11, 13]. In order to isolate instances where the secondary effects are
minimized, restrictions on nonstationarity or conditions on the minimum
allowed value of turbulence energy may be applied to the data collected.
Nonetheless, certain assumptions that are applied for analyses of these real-
world stratified boundary layers are not always valid. As such, researchers
supplement their work with laboratory experiments as well as simulations.
Laboratory experiments of stratified wall-bounded flows show that buoyancy
effects play an important role in the transfer of heat and momentum in both
the inner and outer layers of the boundary layer [27, 28, 29, 30, 31]. In
general, the experiments show that with increasing stratification, the
turbulence shear production rate is strongly affected by buoyancy and greatly
reduced far from the wall. One measure of stratification strength is the local
gradient Richardson number, $Ri_{g}$. Since shear originates at the wall, the
local gradient Richardson number, which is inversely proportional to the
shear, is generally smaller in the near-wall region as the shear term
overpowers the buoyancy term. The stabilizing effect of stratification has a
greater impact farther from the wall. Indeed, the works listed here
demonstrated that velocity fluctuations become weaker away from the wall and,
in some cases,
turbulence intensity is reduced as the buoyancy frequency in the system is
increased. Linear inviscid stability analysis [32] showed that there exists a
critical value for the gradient Richardson number, $Ri_{g}\geq 0.25$, that
serves as a sufficient condition for stability. Additionally, the experiments
of Komori _et al._ [30] show that the correlation coefficients associated with
the Reynolds shear stress approach zero at values of $Ri_{g}\simeq 0.2-0.3$.
There have been many large-eddy simulations (LES) [33, 34, 35, 36] and direct
numerical simulations (DNS) [37, 38, 39, 40, 41] of density stratified channel
flows. The results support the experimental observations: strengthening the
stratification leads to the reduction (or even suppression) of turbulent
velocity fluctuations further from the wall. Garg _et al._ [33] showed in
their work that the mean velocity profiles of the stratified channel were
similar in the near-wall region but differed in the logarithmic region. The
difference is characterized by a reduction in the value of both the slope of
the log-law of the mean velocity and the gradient of the mean velocity
profile. It should be noted that the authors used the friction Richardson
number to categorize the stratification strengths investigated in their
simulations and concluded that the friction Richardson number is superior to
the local gradient Richardson number in characterizing flow regimes as it is a
global flow property.
Performing experiments (both on-site and in laboratories) of stratified wall-
bounded turbulence can be challenging for reasons such as topography or
secondary effects and simulations suffer from computational constraints.
Moreover, laboratory experiments and simulations can attain only a limited
range of Reynolds and Richardson numbers that are often orders of magnitude
smaller than real-world geophysical phenomena. A quick numerical model
prediction of key features of stratified boundary layers could greatly benefit
the understanding of the interaction between velocity and scalar flux at
varying scales. For these reasons, in this paper, we aim to explore the
interaction between velocity and scalar fluctuations using the resolvent model
[1].
The resolvent model provides an optimal basis, in an energy sense, that allows
an in-depth comparison of the underlying mechanisms in the flow. Moreover, the
model is computationally efficient, as only the leading singular value and
singular vectors of the resolvent operator are required to obtain the
leading-order model.
Resolvent analysis has been widely applied to a range of flow configurations
to identify dominant flow structures and the underlying forcing, e.g. Ref. [1,
42, 43, 44, 45, 46], and has been reviewed in detail in Ref. [47] and Ref.
[48]. We use the model to provide analysis of the flow using only mean
quantities, which are easy to obtain even in field experiments, along with
knowledge from the energetics of the unstratified case, which is better
documented than the stably-stratified case. The predictions from the resolvent
model are then compared to the flow statistics from a DNS of a stably-
stratified turbulent channel flow. The Reynolds number under consideration in
the current study is considerably lower than those observed in geophysical
flows, which is dictated by the available DNS data for comparison, rather than
by the resolvent model. Resolvent analysis of unstratified wall-bounded flows
shows that the results of the model are still relevant for moderate Reynolds
numbers [49] with the resolvent modes in the logarithmic layer showing self-
similar behavior. We expect the capability of the model in stably-stratified
regimes to extend to higher Reynolds numbers as well.
The paper is organized as follows. In §II, we introduce the resolvent
framework with the inclusion of the scalar advection-diffusion equation and
discuss the relevant energy norm, boundary conditions, and computational
methods. In §III.1, we examine the sensitivity of the low-rank properties of
the resolvent operator to the stable stratification strength and compare these
properties with the most energetic scales in each flow. In §III.2, we analyze
the characteristics of the forcing and response modes of both velocity and
scalar. We compare the mode shapes with correlations obtained from DNS data.
In §III.3, we study the turbulent kinetic energy budget in the resolvent
formulation and compare the results with the energy budget obtained from the
DNS data. Finally, our conclusions on the application of the resolvent
framework to a stably-stratified boundary layer are given in §IV.
## II Modeling active scalar dynamics in the Navier-Stokes equations
### II.1 Navier-Stokes equation with active scalar
We consider a density-stratified turbulent channel flow where the density acts
in the direction of gravitational acceleration. We use a Cartesian co-ordinate
system $\bm{x}=(x,y,z)$ such that the force of gravity acts in the $-y$
direction, with $x$, $y$ and $z$ being the streamwise, wall-normal and
spanwise directions, respectively. The governing equations are given by the
non-dimensional Navier-Stokes equation under the Boussinesq approximation,
$\displaystyle\frac{\partial\widetilde{\bm{u}}}{\partial
t}+(\widetilde{\bm{u}}\cdot\nabla)\widetilde{\bm{u}}$
$\displaystyle=-\nabla\widetilde{p}+\frac{\nabla^{2}\widetilde{\bm{u}}}{Re_{\tau}}-Ri_{\tau}\widetilde{\rho}\bm{e}_{y},$
(1a) $\displaystyle\frac{\partial\widetilde{\rho}}{\partial
t}+(\widetilde{\bm{u}}\cdot\nabla)\widetilde{\rho}$
$\displaystyle=\frac{\nabla^{2}\widetilde{\rho}}{Re_{\tau}Pr},$ (1b)
$\displaystyle\nabla\cdot\widetilde{\bm{u}}$ $\displaystyle=0.$ (1c)
Here, $\widetilde{\bm{u}}=(\tilde{u},\tilde{v},\tilde{w})$ is the
instantaneous velocity vector in the reference system $(x,y,z)$, $t$ is time,
$\widetilde{p}$ is the kinematic pressure field that remains after removing
the part that is in hydrostatic balance with the mean density field,
$\widetilde{\rho}$ is the density deviation from the reference density
$\rho_{0}$ ($\widetilde{\rho}\ll\rho_{0}$), and $\bm{e}_{y}$ is the unit
vector acting in the $y$-direction. The velocity and length scales are non-
dimensionalized using the friction velocity $u_{\tau}$ and channel half-height
$\delta$, respectively, and the density is non-dimensionalized using
$\Delta\rho$, the difference in density between the two channel walls. We
define the walls to be located at $y=0$ and $y=2$. The non-dimensional
quantities are given by the Reynolds, Prandtl and Richardson numbers, defined
as
$Re_{\tau}=\frac{u_{\tau}\delta}{\nu},\qquad Pr=\frac{\nu}{\gamma},\qquad
Ri_{\tau}=\frac{g\Delta\rho\delta}{\rho_{0}u_{\tau}^{2}},$ (2)
where $\nu$ is the kinematic viscosity, $\gamma$ is the molecular diffusivity
of density, and $g$ is the acceleration due to gravity.
### II.2 Resolvent framework with an active scalar
The total fields $\widetilde{\bm{u}}$, $\widetilde{p}$ and $\widetilde{\rho}$
can be split into mean and fluctuating parts as
$\displaystyle\widetilde{\bm{u}}(\bm{x},t)$
$\displaystyle={\bm{\overline{u}}}(y)+\bm{u}(\bm{x},t),$ (3a)
$\displaystyle\widetilde{p}(\bm{x},t)$
$\displaystyle=\overline{p}(y)+p(\bm{x},t),$ (3b)
$\displaystyle\widetilde{\rho}(\bm{x},t)$
$\displaystyle=\overline{\rho}(y)+\rho(\bm{x},t),$ (3c)
where the mean is taken in the homogeneous directions, $x$ and $z$, and time.
Note that ${\bm{\overline{u}}}=(\bar{u},\bar{v},\bar{w})$ and
$\bar{v}=\bar{w}=0$. We substitute the decomposed variables into Eq. (1) to
obtain the fluctuation equations
$\displaystyle\partial_{t}\bm{u}+(\overline{u}\cdot\nabla)\bm{u}+(\bm{u}\cdot\nabla)\overline{u}$
$\displaystyle=-\nabla
p+\frac{\nabla^{2}\bm{u}}{Re_{\tau}}-Ri_{\tau}\rho\bm{e}_{y}+\bm{f}_{\bm{u}}$
(4a)
$\displaystyle\partial_{t}\rho+(\overline{u}\cdot\nabla)\rho+(\bm{u}\cdot\nabla)\overline{\rho}$
$\displaystyle=\frac{\nabla^{2}\rho}{Re_{\tau}Pr}+f_{\rho},$ (4b)
$\displaystyle\nabla\cdot\bm{u}$ $\displaystyle=0,$ (4c)
where $\bm{f}_{\bm{u}}=-\bm{u}\cdot\nabla\bm{u}$ and
$f_{\rho}=-\bm{u}\cdot\nabla\rho$ are the nonlinear terms.
Taking the Fourier transform of the fluctuation equations above in homogeneous
directions and time, the variables can be expressed as
$\begin{bmatrix}\bm{u}(x,y,z,t)\\\ p(x,y,z,t)\\\
\rho(x,y,z,t)\end{bmatrix}=\iiint^{\infty}_{-\infty}\begin{bmatrix}\hat{\bm{u}}(y;k_{x},k_{z},\omega)\\\
\hat{p}(y;k_{x},k_{z},\omega)\\\
\hat{\rho}(y;k_{x},k_{z},\omega)\end{bmatrix}e^{\text{i}(k_{x}x+k_{z}z-\omega
t)}dk_{x}dk_{z}d\omega,$ (5)
for $\bm{k}=(k_{x},k_{z},\omega)\neq(0,0,0)$, where $(\hat{\cdot})$ denotes
the Fourier transformed variables. Here, the streamwise and spanwise
wavenumbers are $k_{x}$ and $k_{z}$, respectively, and $\omega$ is the
temporal frequency defined as $\omega=ck_{x}$, where $c$ is the wavespeed. The
streamwise and spanwise wavelengths are defined as $\lambda_{x}=2\pi/k_{x}$
and $\lambda_{z}=2\pi/k_{z}$, respectively. Critical layers can be identified
where the wavespeed $c$ equals the local mean velocity, i.e., $y_{c}$ is the
critical-layer location for wavespeed $c=\overline{u}(y_{c})$. Assuming the
mean velocity and density profiles are known, the fluctuation equations are
expressed compactly in a linear equation as
$-\text{i}\omega\hat{\bm{q}}-\mathcal{A}\hat{\bm{q}}=\hat{\bm{f}},$ (6)
where we define
$\hat{\bm{q}}=[\hat{u}\;\hat{v}\;\hat{w}\;\hat{p}\;\hat{\rho}]^{T}$ as the
state vector and
$\hat{\bm{f}}=[\hat{f}_{u}\;\hat{f}_{v}\;\hat{f}_{w}\;0\;\hat{f}_{\rho}]^{T}$
as the forcing vector. The linear operator is given by
$\mathcal{A}=\begin{pmatrix}A&-\partial\overline{u}/\partial
y&0&-\text{i}k_{x}&0\\\ 0&A&0&-D_{y}&-Ri_{\tau}\\\ 0&0&A&-\text{i}k_{z}&0\\\
-\text{i}k_{x}&-D_{y}&-\text{i}k_{z}&0&0\\\
0&-\partial\overline{\rho}/\partial y&0&0&A_{\rho}\end{pmatrix},$ (7)
where
$\displaystyle A$
$\displaystyle=-\text{i}k_{x}\overline{u}+\frac{\hat{\Delta}}{Re_{\tau}},$
(8a) $\displaystyle A_{\rho}$
$\displaystyle=-\text{i}k_{x}\overline{u}+\frac{\hat{\Delta}}{Re_{\tau}Pr},$
(8b)
$D_{y}$ is the wall-normal derivative operator and $\hat{\Delta}\equiv
D_{yy}-k_{\perp}^{2}$ is the Laplacian with
$k_{\perp}^{2}=k_{x}^{2}+k_{z}^{2}$. The block matrix $\mathcal{A}$ describes
the linear dynamics of the system. Equation (6) can be rearranged to yield
$\hat{\bm{q}}\;=\;\mathcal{H}(\bm{k})\;\hat{\bm{f}},$ (9)
where $\mathcal{H}(\bm{k})=(-\text{i}\omega I-\mathcal{A})^{-1}$ is the
resolvent of the linear operator and $I$ is the identity matrix. A related
analysis has been performed in Ref. [50].
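A minimal numpy sketch of assembling the block operator $\mathcal{A}$ of Eq. (7) on a uniform grid with second-order finite differences, and forming the resolvent $\mathcal{H}=(-\mathrm{i}\omega I-\mathcal{A})^{-1}$ of Eq. (9). We follow Eq. (6) literally with an identity mass matrix, do not enforce boundary conditions, and use toy mean profiles of our own, so this is illustrative only, not a production discretization:

```python
import numpy as np

def resolvent_operator(U, dUdy, drho_dy, y, kx, kz, omega, Re, Pr, Ri):
    """Assemble the block operator A of Eq. (7) on a uniform grid and return
    H = (-i*omega*I - A)^{-1}.  Boundary conditions are not enforced."""
    n = len(y)
    h = y[1] - y[0]
    I = np.eye(n)
    Dy = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
    Dyy = (np.diag(np.ones(n - 1), 1) - 2 * I + np.diag(np.ones(n - 1), -1)) / h**2
    Lap = Dyy - (kx**2 + kz**2) * I      # Laplacian D_yy - k_perp^2
    A_blk = -1j * kx * np.diag(U) + Lap / Re
    A_rho = -1j * kx * np.diag(U) + Lap / (Re * Pr)
    Z = np.zeros((n, n))
    A = np.block([
        [A_blk, -np.diag(dUdy), Z, -1j * kx * I, Z],
        [Z, A_blk, Z, -Dy, -Ri * I],
        [Z, Z, A_blk, -1j * kz * I, Z],
        [-1j * kx * I, -Dy, -1j * kz * I, Z, Z],
        [Z, -np.diag(drho_dy), Z, Z, A_rho],
    ])
    return np.linalg.inv(-1j * omega * np.eye(5 * n) - A)

# Toy mean profiles: parabolic shear and a linear mean density gradient.
y = np.linspace(0, 2, 41)
U = y * (2 - y)
H = resolvent_operator(U, 2 - 2 * y, np.full_like(y, 0.5), y,
                       kx=1.0, kz=2.0, omega=0.4, Re=180.0, Pr=0.7, Ri=18.0)
```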
From Eq. (9), we wish to find a decomposition of the resolvent operator that
enables us to identify high gain input and output modes with respect to the
linear operator. For resolvent analysis, this is given by the Schmidt
decomposition. However, this decomposition must be accompanied by a choice of
inner product and the corresponding norm. The natural and physically
meaningful norm is given by the non-dimensionalized energy norm, which is the
sum of kinetic and potential energies [51, 52]
$\frac{1}{2}\|\bm{q}\|^{2}_{E}=\frac{1}{2}(\bm{q},\bm{q})_{E}=\frac{1}{2}\int_{0}^{2}\left(u^{*}u+v^{*}v+w^{*}w+Ri_{\tau}(\rho^{*}\rho)\right)dy,$
(10)
where $(\cdot)^{*}$ denotes the conjugate transpose.
We perform the Schmidt decomposition of the resolvent operator $\mathcal{H}$
to generate a basis based on the most highly amplified forcing and response
directions such that
$\mathcal{H}(\bm{k})=\sum_{j=1}^{\infty}\sigma_{j}(\bm{k})\bm{\hat{\psi}}_{j}(y;\bm{k})\bm{\hat{\phi}}^{*}_{j}(y;\bm{k}),$
(11)
where the right and left Schmidt bases (or singular vectors in the discrete
case) are given by $\bm{\hat{\phi}}_{j}$ and $\bm{\hat{\psi}}_{j}$ along with
their corresponding gains $\sigma_{j}$. The singular values are in descending
order such that $\sigma_{1}\geq\sigma_{2}\geq\cdots\geq 0$. The forcing and
resolvent modes are orthonormal such that
$(\bm{\hat{\phi}}_{j},\bm{\hat{\phi}}_{k})_{E}=(\bm{\hat{\psi}}_{j},\bm{\hat{\psi}}_{k})_{E}=\delta_{jk},$
(12)
where $\delta_{jk}$ denotes the Kronecker delta. The basis pair defined above
is used to decompose the nonlinear forcing and response field at a specified
wavenumber triplet as
$\displaystyle\hat{\bm{f}}(y;\bm{k})$
$\displaystyle=\sum_{j=1}^{\infty}\bm{\hat{\phi}}_{j}(y;\bm{k})\chi_{j}(\bm{k}),$
(13a) $\displaystyle\hat{\bm{q}}(y;\bm{k})$
$\displaystyle=\sum_{j=1}^{\infty}\chi_{j}(\bm{k})\sigma_{j}(\bm{k})\bm{\hat{\psi}}_{j}(y;\bm{k}).$
(13b)
Here, $\chi_{j}$ is a projection coefficient obtained by projecting the
nonlinear forcing onto the forcing modes, which is subsequently used to weight the
response modes. Note that the largest energy is obtained when the forcing is
aligned with the leading singular vector, i.e. when $\chi_{j}=\delta_{j1}$.
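The weighted SVD implied by the energy norm (10) can be sketched as follows: with a diagonal weight $W$ (quadrature weights on $u,v,w$ and $Ri_{\tau}$-scaled weights on $\rho$; pressure carries no energy and is excluded), one decomposes $W^{1/2}\mathcal{H}W^{-1/2}$ and undoes the weighting so the modes are orthonormal in the energy inner product. A random matrix stands in for the discrete resolvent here, which is our simplification:

```python
import numpy as np

rng = np.random.default_rng(1)
n, Ri = 40, 18.0
y = np.linspace(0, 2, n)
wq = np.full(n, y[1] - y[0])             # quadrature weights on [0, 2]
wq[0] = wq[-1] = wq[0] / 2               # trapezoid end corrections

# Energy-norm weight of Eq. (10) for a state stacked as (u, v, w, rho).
W = np.concatenate([wq, wq, wq, Ri * wq]) / 2
sqW = np.sqrt(W)

H = rng.standard_normal((4 * n, 4 * n)) + 1j * rng.standard_normal((4 * n, 4 * n))

# SVD of the weighted operator, then undo the weighting.
F = (sqW[:, None] * H) / sqW[None, :]
Uv, s, Vh = np.linalg.svd(F)             # s is in descending order
psi = Uv / sqW[:, None]                  # response modes, orthonormal in E
phi = Vh.conj().T / sqW[:, None]         # forcing modes, orthonormal in E
```

The dyadic sum of Eq. (11) is recovered as $\mathcal{H}=\sum_j\sigma_j\psi_j\phi_j^{*}$, where the adjoint is taken in the energy inner product (i.e., $\phi_j^{*}$ acts through $\mathrm{diag}(W)$).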
### II.3 Computational approach
#### II.3.1 Mean velocity and density profiles
Table 1: Comparison of our DNS and the results of García-Villalba & del Álamo [41], denoted under columns titled GV11, both at $Re_{\tau}=180$. $Re_{B}$ is the bulk Reynolds number defined as $u_{B}\delta/\nu$ where the bulk velocity is $u_{B}=\int_{0}^{2}\overline{u}dy/2$. $Ri_{B}$ is the bulk Richardson number, defined as $Ri_{B}=Ri_{\tau}(u_{\tau}/u_{B})^{2}/2$. $Nu$ is the Nusselt number, defined as $Nu=2\delta q_{w}/(\gamma\Delta\rho)$. For laminar flow $Nu=1$.

| $Ri_{\tau}$ | $Re_{B}$ (GV11) | $Re_{B}$ (DNS) | $Ri_{B}$ (GV11) | $Ri_{B}$ (DNS) | $Nu$ (GV11) | $Nu$ (DNS) |
|---|---|---|---|---|---|---|
| 0 | 2820 | 2823 | 0.000 | 0.000 | 6.03 | 6.08 |
| 10 | - | 2970 | - | 0.018 | - | 4.78 |
| 18 | 3043 | 3060 | 0.031 | 0.031 | 4.02 | 4.15 |
| 60 | 3436 | 3473 | 0.082 | 0.081 | 2.80 | 2.82 |
| 100 | - | 3850 | - | 0.109 | - | 2.37 |
Figure 1: Mean (a) streamwise velocity and (b) density profiles and root-mean-
square (r.m.s.) (c) streamwise velocity and (d) density profiles from the
current DNS for $Ri_{\tau}=0,10,18,60,100$ (solid lines darker to lighter),
compared to the mean profiles of Ref. [41] for $Ri_{\tau}=0,18,120$ (dashed
lines darker to lighter). The friction density is defined as
$\rho_{\tau}=q_{w}/u_{\tau}$, where $q_{w}$ is the density flux at the wall.
Mean velocity and density profiles are required to close the resolvent model.
We obtain the one-dimensional mean velocity and density profiles from a DNS of
a stratified turbulent channel at $Re_{\tau}=180$ for a wide range of
$Ri_{\tau}$. The simulations are performed by discretizing the incompressible
Navier-Stokes equations with a staggered, second-order accurate, central
finite-difference method in space [53], and an explicit third-order accurate
Runge-Kutta method for time advancement [54]. The system of equations is
solved via an operator-splitting approach [55]. The code has been verified for
neutrally-buoyant cases in Refs. [56, 57].
Periodic boundary conditions are imposed in the streamwise and spanwise
directions, the no-slip and no-penetration condition with $\tilde{\rho}=0$ is
applied at the bottom boundary, and a no-slip and no-penetration condition
with $\tilde{\rho}=1$ is applied at the top boundary. The streamwise, wall-
normal, and spanwise domain sizes are $4\pi$, $2$, and $2\pi$ respectively.
The grid spacings in the streamwise and spanwise directions are uniform with
$\Delta x^{+}=8.8$ and $\Delta z^{+}=4.4$; non-uniform meshes are used in the
wall-normal direction, with the grid stretched toward the wall according to a
hyperbolic tangent distribution with $\min(\Delta y^{+})=0.31$ and
$\max(\Delta y^{+})=5.19$, where the superscript $+$ indicates length scales
in wall units normalized by $\nu/u_{\tau}$ rather than $\delta$. A constant
pressure gradient is applied to drive the flow. Each simulation was run for
$100$ eddy-turnover times, defined as $\delta/u_{\tau}$, after initial transients.
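The hyperbolic-tangent grid stretching described above can be sketched as follows; the stretching factor `gamma` and the exact mapping are assumptions for illustration, not the solver's actual grid routine.

```python
import numpy as np

# Wall-normal grid on [0, 2] clustered toward both walls with a hyperbolic-tangent
# distribution.  The stretching factor gamma and the exact mapping are assumed
# for illustration; they are not taken from the solver.
def tanh_grid(n, gamma=2.4):
    """Return n + 1 points on [0, 2], clustered at y = 0 and y = 2."""
    eta = np.linspace(-1.0, 1.0, n + 1)                 # uniform computational coordinate
    return 1.0 + np.tanh(gamma * eta) / np.tanh(gamma)  # maps [-1, 1] -> [0, 2]

y = tanh_grid(128)
dy = np.diff(y)   # smallest at the walls, largest at the channel centre
```

Larger values of `gamma` cluster points more strongly at the walls; in practice `gamma` would be tuned to hit the target $\min(\Delta y^{+})$ and $\max(\Delta y^{+})$.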
The work of García-Villalba & del Álamo [41] at $Re_{\tau}=180$ is used to
validate the results. A comparison of several key quantities is shown in Table
1, which indicates good agreement for all Richardson numbers. The mean and
root-mean-square streamwise velocity and density profiles are shown in Fig. 1
for all current cases and for select cases from Ref. [41]. The profiles show
good agreement across all statistics.
#### II.3.2 Resolvent mode computation
The Schmidt decomposition of the resolvent operator outlined in §II.2 is
numerically implemented as the singular value decomposition (SVD). We solve
the discrete equations using a spectral collocation method with the number of
points in the wall-normal direction given by $N_{y}$, thus limiting the number
of singular values to $5N_{y}$ because the state vector
$\hat{\bm{q}}\in\mathbb{C}^{5N_{y}\times 1}$. In this study, after conducting
a grid convergence study examining the singular values, we selected a wall-
normal grid resolution of $N_{y}=400$. Thus, the computational cost of the
resolvent mode computation is at most $O(N_{y}^{3})$ (less if randomized
algorithms are employed [49, 58]), often requiring only a leading-order
singular value decomposition (see §III.1 for more information), and can be
performed in seconds on a personal computer.
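A minimal sketch of this computation is given below, with a random matrix standing in for the discretized resolvent operator and a reduced grid size; it contrasts the full $O((5N_{y})^{3})$ SVD with a cheaper partial solve for the leading mode only.

```python
import numpy as np
from scipy.sparse.linalg import svds

# The Schmidt decomposition of Sec. II.2 becomes an SVD of the discretized
# resolvent, a dense 5*Ny x 5*Ny complex matrix.  A random matrix stands in
# for the actual operator here; Ny is reduced from the paper's 400.
rng = np.random.default_rng(0)
Ny = 40
H = rng.standard_normal((5 * Ny, 5 * Ny)) + 1j * rng.standard_normal((5 * Ny, 5 * Ny))

# Full SVD, O((5*Ny)^3): response modes, gains, forcing modes (conjugate-transposed).
psi, sigma, phi_h = np.linalg.svd(H)

# If only the leading mode is needed (rank-one model), a partial solver is cheaper.
psi1, sigma1, phi1_h = svds(H, k=1)
```

The partial solve returns the same leading gain $\sigma_{1}$ as the full decomposition while avoiding the cubic cost, which is the regime exploited by randomized methods [49, 58].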
The discretized linear operator is constructed using Chebyshev differentiation
matrices and is shifted to integrate between $y\in[0,2]$ rather than
$y\in[-1,1]$. The mean velocity and density profiles obtained from DNS as well
as their wall-normal derivatives are interpolated to the Chebyshev grid points
to form the resolvent operator as in Eq. (7). The no-slip and no-penetration
boundary conditions for the fluctuating velocities and density, i.e.
$u,v,w,\rho=0$, are applied at the walls.
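The construction described above can be sketched with the standard Chebyshev differentiation matrix (Trefethen's `cheb`), affinely shifted from $[-1,1]$ to $y\in[0,2]$; the grid size below is illustrative rather than the $N_{y}=400$ used in the study.

```python
import numpy as np

# Standard Chebyshev differentiation matrix (Trefethen's "cheb"), affinely
# shifted from x in [-1, 1] to the channel coordinate y in [0, 2].
def cheb(n):
    """Return the differentiation matrix D and the n + 1 Chebyshev points on [-1, 1]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))               # negative-sum trick for the diagonal
    return D, x

D, x = cheb(32)
y = 1.0 - x          # x in [-1, 1]  ->  y in [0, 2], with y = 0 at the bottom wall
Dy = -D              # chain rule: d/dy = (dx/dy) d/dx = -d/dx
```

Because the shift is affine, only the sign of the matrix changes; differentiation of polynomials on the shifted grid remains spectrally exact.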
In the case of a turbulent channel, due to the symmetry in the geometry, the
resolvent modes appear in pairs that can be linearly combined to produce
symmetric and antisymmetric modes. Depending on the support of these modes,
the singular values may be identical or similar in magnitude. For the results
in the following sections, only results in the bottom half-channel will be
shown, but the corresponding upper half-channel results are analogous in all
cases.
## III Results
In this section, we explore how the resolvent analysis provides insight into
changes in flow characteristics with increasing stratification using only a
limited range of representative scales. We compare (i) the resolvent energy
spectra, obtained from the ratio of the energy in the leading resolvent
response modes to the total response,
$(\sigma_{1}^{2}+\sigma_{2}^{2})/\sum_{j}\sigma_{j}^{2}$, to the premultiplied
energy spectra of the DNS, (ii) the structure identified by the leading
resolvent mode to the correlation computed from the DNS, and (iii) the energy
budgets of the resolvent modes to those of the DNS.
For a full representation of the system, a wide range of scales, as well as
information from all subsequent modes in addition to the leading resolvent
modes, is necessary [1, 47]. However, the goal here is to provide a quick
model for characterising the flow. The simplest and quickest such model is a
rank-one approximation, in which only the leading resolvent mode is computed.
Thus, our focus will be on the representation given by the leading resolvent
mode for a limited number of scales.
### III.1 Resolvent energy spectra
The resolvent norm, $\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}$ in this case, is the
principal singular value of the resolvent operator $\mathcal{H}$ and quantifies
the system’s sensitivity to temporal forcing
[59]. The energetic contribution from broadband forcing is quantified as the
square of the resolvent norm. The resolvent operator $\mathcal{H}$ can be
described as low-rank if the majority of its response to broadband forcing in
the wall-normal direction is captured by the first few response modes.
Theoretically, there are an infinite number of singular values and
corresponding modes because the wall-normal coordinate is continuous. However,
not all of the singular vectors are energetically significant. As described in
§II.2, a self-sustaining representation of the flow will correspond to a
weighted assembly of forcing modes rather than broadband forcing [46];
however, past studies have shown that broadband forcing is successful in
identifying the important components of the flow, e.g. Refs. [1, 44]. McKeon &
Sharma [1] demonstrated that the characteristics of the leading response modes
for a range of wavenumber-frequency combinations agree with experimental
observations in pipe flow and with scaling concepts in wall-bounded
turbulence. Moarref _et al._ [49] showed that the first two resolvent modes
account for more than 80% of the total response in a channel. Bae _et al._
[44] investigated the low-rank nature of a compressible turbulent boundary
layer and highlighted the similarities in the region where the low-rank
approximation is valid for the incompressible regime.
Assuming the resolvent operator is low-rank
($\sigma_{1}\simeq\sigma_{2}\gg\sigma_{3}$) allows us to approximate the
operator as
$\mathcal{H}(\bm{k})\;\approx\;\sigma_{1}\;\bm{\hat{\psi}}_{1}\;\bm{\hat{\phi}}^{*}_{1}+\sigma_{2}\;\bm{\hat{\psi}}_{2}\;\bm{\hat{\phi}}^{*}_{2},$
(14)
for each $\bm{k}$, since most of the energy in the system is captured by the
leading singular values. The low-rank behavior of $\mathcal{H}$ is typically
representative of there being a dynamically significant physical, spatio-
temporal structure at the scale dictated by $\bm{k}$.
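The energy retained by the truncation in Eq. (14) can be checked numerically: for any operator, the relative squared Frobenius error of a rank-two truncation equals $1-(\sigma_{1}^{2}+\sigma_{2}^{2})/\sum_{j}\sigma_{j}^{2}$. The operator below is synthetic, built with orthogonal factors and an assumed $\sigma_{j}=j^{-2}$ spectrum.

```python
import numpy as np

# Energy retained by the rank-two truncation of Eq. (14).  The operator is
# synthetic, with orthogonal factors and an assumed sigma_j = j^{-2} spectrum.
rng = np.random.default_rng(1)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 1.0 / np.arange(1, n + 1) ** 2
H = U @ np.diag(sigma) @ V.T

# Keep only the two leading dyads, as in Eq. (14).
H2 = sigma[0] * np.outer(U[:, 0], V[:, 0]) + sigma[1] * np.outer(U[:, 1], V[:, 1])

energy_fraction = (sigma[0] ** 2 + sigma[1] ** 2) / np.sum(sigma ** 2)
# The relative squared Frobenius error of the truncation is 1 - energy_fraction.
residual = np.linalg.norm(H - H2, "fro") ** 2 / np.linalg.norm(H, "fro") ** 2
```

For this assumed spectrum the two leading dyads retain over 98% of the energy, mirroring the $>80\%$ threshold used to call $\mathcal{H}$ low-rank.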
Figure 2: Contour plots depicting the energy contained in the leading response
modes relative to the total response,
$(\sigma_{1}^{2}+\sigma_{2}^{2})/\sum_{j}\sigma_{j}^{2}$, for different
streamwise and spanwise wavelengths at (a) $c=\overline{u}(y^{+}=15)$, (b)
$c=\overline{u}(y^{+}=30)$ and (c) $c=\overline{u}(y^{+}=100)$ for
$Ri_{\tau}=$ 0, 10, 18, 60, 100 (top to bottom). Green dashed lines are (a)
$\lambda_{x}=15\lambda_{z}$, (b) $\lambda_{x}=10\lambda_{z}$ and (c)
$\lambda_{x}=5\lambda_{z}$.

Figure 3: Contour plots depicting the premultiplied streamwise kinetic energy
spectra as functions of the streamwise and spanwise wavelengths obtained from
DNS at (a) $y^{+}=15$, (b) $y^{+}=30$, and (c) $y^{+}=100$ for $Ri_{\tau}=0$
(solid line), $Ri_{\tau}=60$ (dashed line) and $Ri_{\tau}=100$ (dotted line).
The shaded contours are from the $Re_{\tau}=180$ neutral channel [60]. The
levels plotted are $0.1,0.3,0.5$ times the maximum value of the corresponding
spectrum.
To study the variation in the low-rank behavior for different magnitudes of
stratification, we plot the energetic contribution of the principal response
modes to the total response for a given $\bm{k}$, quantified by
$(\sigma_{1}^{2}+\sigma_{2}^{2})/\sum_{j}\sigma_{j}^{2}$, for a range of
wall-parallel wavelengths (Fig. 2). The leading response modes account for
more than $80\%$ of the total response over a large range of wavelengths in
the homogeneous directions for the three wavespeeds selected.
The range of wavenumbers for which the resolvent operator is low-rank changes
significantly with stratification. In the neutrally-buoyant case
($Ri_{\tau}=0$), we see that $\mathcal{H}$ is low-rank in a range of moderate-
to-large streamwise wavelengths. For the neutrally-buoyant case, it is known
that the low-rank region coincides with the most energetic wavenumbers from
the premultiplied energy spectra of a turbulent channel [49]. As the friction
Richardson number first increases, the low-rank behavior shifts to only a
small range of streamwise wavelengths. We see a similar phenomenon in the
premultiplied streamwise energy spectra from the DNS (Fig. 3), where with
increasing $Ri_{\tau}$, the larger streamwise wavelength content is
suppressed. This was also observed in the premultiplied energy spectra of
García-Villalba & del Álamo [41] for a wider range of $Re_{\tau}$ and
$Ri_{\tau}$.
However, after $Ri_{\tau}=18$, the low-rank behavior of the principal
resolvent modes intensifies along a vertical band $\lambda_{x}/\delta\geq 1$
until the system becomes low-rank at large spanwise wavelengths with almost no
low-rank behavior below the green dashed line in Fig. 2
($\lambda_{x}=15\lambda_{z}$, $10\lambda_{z}$ and $5\lambda_{z}$ for
$y^{+}=15$, $30$ and $100$, respectively). This seems to indicate low-rank
behavior in structures that are characteristic of quasi-two-dimensional flow
where $\lambda_{z}\gg\lambda_{x}$. Hopfinger [61] details the emergence of
two-dimensional modes for a variety of flows with strong stratification.
Moreover, Mahrt [13] alludes to the emergence of two-dimensional modes (often
referred to as pancake modes) owing to the conversion of vertical kinetic
energy to potential energy in the presence of strong stable stratification.
The premultiplied energy spectra for higher $Ri_{\tau}$ indicate high energy
in the vertical band as well [41].
### III.2 Mode shapes
Table 2: Representative wavenumber combinations that we will explore in §III.2.

Mode name | $k_{x}$ | $k_{z}$ | $c$
---|---|---|---
E1: most energetic mode for $y^{+}=15$ | $\pi/2$ | $4\pi$ | $\overline{u}(y^{+}=15)$
E2: most energetic mode for $y^{+}=30$ | $\pi/2$ | $3\pi$ | $\overline{u}(y^{+}=30)$
E3: most energetic mode for $y^{+}=100$ | $\pi/2$ | $2\pi$ | $\overline{u}(y^{+}=100)$
Figure 4: Amplitudes of the leading resolvent response modes for the (a)
streamwise velocity and (b) density for $Ri_{\tau}=0,10,18,60,100$ (darker to
lighter) at $c=\bar{u}(y^{+}=15)$ (dashed line), $\bar{u}(y^{+}=30)$ (dot-
dashed line) and $\bar{u}(y^{+}=100)$ (dotted line) for wave-parameters
corresponding to E1, E2 and E3, respectively. The subscripts $u$ and $\rho$
indicate the corresponding components of the resolvent response mode.
In order to study the flow structures, we compute the resolvent response modes
for a set of wave parameters. Although including the scalar advection-diffusion
equation in the governing equations changes the wavelengths at which the
resolvent operator is low-rank (Fig. 2), the most energetic scales at each
wall-normal height for the various $Ri_{\tau}$ under consideration still
coincide with those of the neutrally-buoyant case (Fig. 3) and fall within the
low-rank region. In this section, we study the resolvent response mode shapes
for these wavenumber and wavespeed combinations, which are listed in Table 2.
In particular, mode E1 is the most energetic mode for $y^{+}=15$, E2 for
$y^{+}=30$ and E3 for $y^{+}=100$.
The predictive capabilities of the resolvent modes are first shown through the
amplitudes of the leading resolvent response modes (Fig. 4) of the streamwise
velocity and density. The resolvent modes compare well to the streamwise and
density turbulence intensities in Fig. 1(c,d). The streamwise root-mean-
(r.m.s.) quantities and resolvent amplitudes show no variation among different
Richardson numbers closer to the wall and increase slightly with $Ri_{\tau}$
farther away from the wall. On the other hand, the density r.m.s. and
resolvent amplitudes decrease significantly with Richardson number at all
wall-normal heights. Despite only using the leading resolvent mode, the
relative magnitude at each corresponding wall-normal height is well captured
for the range of Richardson numbers considered here.
Figure 5: Two-dimensional response mode shapes for
$(k_{x},k_{z})=(\pi/2,4\pi)$ at a critical-layer location of $y^{+}=15$ for
(a) $Ri_{\tau}=0$, (b) $18$, and (c) $100$. Red and blue contours represent
positive and negative fluctuations, respectively. The contour levels are
scaled by the maximum of each mode component. The dashed black line in each
sub-plot is the location of the critical-layer where
$c=\overline{u}(y^{+}=15)$.
Figure 6: Autocorrelation coefficients $C_{uu}$, $C_{vv}$, $C_{ww}$ and
$C_{\rho\rho}$ of the DNS at $y^{+}=15$ for (a) $Ri_{\tau}=0$ and (b) $100$.
Red and blue contours represent positive (0.4, 0.6, 0.8) and negative (-0.2)
correlation, respectively, with each contour level signifying 0.2 increments.
The horizontal dashed line is $y^{+}=30$ and the vertical dotted line is
$\Delta x=0$.
Additionally, we examine the response mode shapes in two dimensions for the
different regions and compare the structures observed in the resolvent modes
with the autocorrelation coefficient from the DNS data. We first define the
streamwise auto-covariance as
$\hat{R}_{qq}(k_{x},y,y^{\prime},k_{z})=\langle\hat{q}(k_{x},y,k_{z})\hat{q}^{*}(k_{x},y^{\prime},k_{z})\rangle,$
(15)
where $q$ is a generic variable of zero mean and $\langle\cdot\rangle$ is the
expected value. The auto-covariance in physical space, $R_{qq}(\Delta
x,y,y^{\prime},\Delta z)$, is obtained as the inverse Fourier transform of
$\hat{R}$, where $\Delta x=x-x^{\prime}$ and $\Delta z=z-z^{\prime}$ are the
distances between the two points in the homogeneous directions. The
autocorrelation coefficient,
$C_{qq}(\bm{x},\bm{x}^{\prime})=\frac{R_{qq}(\bm{x},\bm{x}^{\prime})}{\varsigma_{q}(\bm{x})\varsigma_{q}(\bm{x}^{\prime})},$
(16)
is obtained by normalising the covariance with the product of the standard
deviations, $\varsigma$, at the two points involved in the measurements, which
is the normalization adopted by most researchers [62, 63, 64, 65, 66, 67].
The two-dimensional structures of mode E1, whose size coincides with that of
the near-wall structures observed previously in experiments and
simulations [68, 69], are plotted in Fig. 5. The autocorrelations of the
streamwise, wall-normal and spanwise velocity fields, as well as the density
field, are shown in Fig. 6 for a two-dimensional slice at $\Delta z=0$. The
reference location is $y^{\prime+}=15$.
The LES of Armenio _et al._ [34] and the DNS of García-Villalba & del Álamo
[41] demonstrated that structures in the near-wall region are largely
unaffected by stable stratification. As expected, both the resolvent response
modes and the correlations do not change significantly for the range of
$Ri_{\tau}$ considered. For the velocities, the main difference is a reduction
in the autocorrelation coefficient in the stratified case. The largest
difference occurs for the density, as the phase of the resolvent response
modes in the wall-normal direction is shifted, creating structures
that are more detached from the wall. Similarly, the density correlations are
wall-attached for $Ri_{\tau}=0$ whereas they are more detached for
$Ri_{\tau}=100$.
Figure 7: Two-dimensional response mode shapes for
$(k_{x},k_{z})=(\pi/2,3\pi)$ at a critical-layer location of $y^{+}=30$ for
(a) $Ri_{\tau}=0$, (b) $18$, and (c) $100$. Red and blue contours represent
positive and negative fluctuations, respectively. The contour levels are
scaled by the maximum of each mode component. The dashed black line in each
sub-plot is the location of the critical-layer where
$c=\overline{u}(y^{+}=30)$.
Figure 8: Autocorrelation coefficients $C_{uu}$, $C_{vv}$, $C_{ww}$ and
$C_{\rho\rho}$ of the DNS at $y^{+}=30$ for (a) $Ri_{\tau}=0$ and (b) $100$.
Red and blue contours represent positive (0.4, 0.6, 0.8) and negative (-0.2)
correlation, respectively, with each contour level signifying 0.2 increments.
The horizontal dashed line is $y^{+}=30$ and the vertical dotted line is
$\Delta x=0$.
We plot the resolvent response modes for the wavenumbers and wavespeed
corresponding to E2 (Fig. 7) and the correlations for $y^{\prime+}=30$ at
$\Delta z=0$ (Fig. 8). The results are similar to those for E1: the
velocity response modes do not vary across $Ri_{\tau}$, but a difference is
observed in the density modes as a phase change along $y$. The density
correlations are wall-detached in both the $Ri_{\tau}=0$ and
$Ri_{\tau}=100$ cases, although the centre of the density correlation for the
$Ri_{\tau}=100$ case lies farther from the wall.
Figure 9: Two-dimensional response mode shapes for
$(k_{x},k_{z})=(\pi/2,2\pi)$ at a critical-layer location of $y^{+}=100$ for
(a) $Ri_{\tau}=0$, (b) $18$, and (c) $100$. Red and blue contours represent
positive and negative fluctuations, respectively. The contour levels are
scaled by the maximum of each mode component. The dashed black line in each
sub-plot is the location of the critical-layer where
$c=\overline{u}(y^{+}=100)$.
Figure 10: Autocorrelation coefficients $C_{uu}$, $C_{vv}$, $C_{ww}$ and
$C_{\rho\rho}$ of the DNS at $y^{+}=100$ for (a) $Ri_{\tau}=0$ and (b) $100$.
Red and blue contours represent positive (0.4, 0.6, 0.8) and negative (-0.2)
correlation, respectively, with each contour level signifying 0.2 increments.
The horizontal dashed line is $y^{+}=30$ and the vertical dotted line is
$\Delta x=0$.
The biggest difference in the resolvent response modes for the different
Richardson numbers can be seen for the wavenumber and wavespeed corresponding
to E3. We plot the resolvent response modes for the wavenumbers and wavespeed
corresponding to E3 (Fig. 9) and the correlations for $y^{\prime+}=100$ at
$\Delta z=0$ (Fig. 10).
Here, all resolvent modes show significant differences in the stratified case
compared to the unstratified case. In particular, the backwards tilting of the
wall-normal velocity modes, the forward tilting of the density modes, as well
as the phase difference across $y$ of the density mode are pronounced. These
phenomena occur in the correlations as well. There is noticeable backwards
tilting in the $C_{vv}$ term and forwards tilting in the $C_{\rho\rho}$ term
for $Ri_{\tau}=100$ compared to the neutrally stratified case. The biggest
differences arise in the wall-normal velocity and density modes because
they are coupled through the Richardson number in the stratified Navier-Stokes
equations.
### III.3 Energy balance at selected scales
Finally, we study the energy budget terms of the stratified channel. We define
the production, transport, buoyancy flux, and viscous dissipation budget terms
in the resolvent formulation [70, 50] as
$\displaystyle\mathcal{P}_{\text{tot}}(y)=\mathbb{R}\left[-\frac{\partial\overline{u}}{\partial y}\sum_{j}\int_{-\infty}^{\infty}\sigma_{j}^{2}\chi_{j}^{2}\Big{(}\bm{\hat{\psi}}^{*}_{j,u}\bm{\hat{\psi}}_{j,v}\Big{)}d\bm{k}\right],$
(17a)
$\displaystyle\mathcal{T}_{\text{tot}}(y)=\mathbb{R}\left[\sum_{j}\sum_{i}\int_{-\infty}^{\infty}\sigma_{j}\chi_{j}\chi_{i}D_{y}\Big{(}\bm{\hat{\phi}}^{*}_{i,u}\bm{\hat{\psi}}_{j,v}+\bm{\hat{\phi}}^{*}_{i,v}\bm{\hat{\psi}}_{j,v}+\bm{\hat{\phi}}^{*}_{i,w}\bm{\hat{\psi}}_{j,v}\Big{)}d\bm{k}\right],$
(17b)
$\displaystyle\mathcal{B}_{\text{tot}}(y)=\mathbb{R}\left[-Ri_{\tau}\sum_{j}\int_{-\infty}^{\infty}\sigma_{j}^{2}\chi_{j}^{2}\Big{(}\bm{\hat{\psi}}^{*}_{j,v}\bm{\hat{\psi}}_{j,\rho}\Big{)}d\bm{k}\right],$
(17c)
$\displaystyle\mathcal{V}_{\text{tot}}(y)=\mathbb{R}\left[\frac{1}{Re_{\tau}}\sum_{j}\int_{-\infty}^{\infty}\sigma_{j}^{2}\chi_{j}^{2}\Big{(}\bm{\hat{\psi}}^{*}_{j,u}\hat{\Delta}\bm{\hat{\psi}}_{j,u}+\bm{\hat{\psi}}^{*}_{j,v}\hat{\Delta}\bm{\hat{\psi}}_{j,v}+\bm{\hat{\psi}}^{*}_{j,w}\hat{\Delta}\bm{\hat{\psi}}_{j,w}\Big{)}d\bm{k}\right],$
(17d)
where $\chi_{j}$, $\sigma_{j}$, $\bm{\hat{\psi}}_{j}$ and
$\bm{\hat{\phi}}_{j}$ are functions of $\bm{k}$ and the subscript $u,v,w,\rho$
indicate the corresponding components of the response or forcing mode. To get
a global sense of the energy balance, the equations above are integrated over
all wavenumber triplets. Here, we will examine only the principal resolvent
mode contribution to the local components of the total budgets for particular
$\bm{k}$, defined as
$\displaystyle\mathcal{P}(y,\bm{k})=\mathbb{R}\left[-\frac{\partial\overline{u}}{\partial y}\sigma_{1}^{2}\Big{(}\bm{\hat{\psi}}^{*}_{1,u}\bm{\hat{\psi}}_{1,v}\Big{)}\right],$
(18a)
$\displaystyle\mathcal{T}(y,\bm{k})=\mathbb{R}\left[\sigma_{1}D_{y}\Big{(}\bm{\hat{\phi}}^{*}_{1,u}\bm{\hat{\psi}}_{1,v}+\bm{\hat{\phi}}^{*}_{1,v}\bm{\hat{\psi}}_{1,v}+\bm{\hat{\phi}}^{*}_{1,w}\bm{\hat{\psi}}_{1,v}\Big{)}\right],$
(18b)
$\displaystyle\mathcal{B}(y,\bm{k})=\mathbb{R}\left[-Ri_{\tau}\sigma_{1}^{2}\Big{(}\bm{\hat{\psi}}^{*}_{1,v}\bm{\hat{\psi}}_{1,\rho}\Big{)}\right],$
(18c)
$\displaystyle\mathcal{V}(y,\bm{k})=\mathbb{R}\left[\frac{1}{Re_{\tau}}\sigma_{1}^{2}\Big{(}\bm{\hat{\psi}}^{*}_{1,u}\hat{\Delta}\bm{\hat{\psi}}_{1,u}+\bm{\hat{\psi}}^{*}_{1,v}\hat{\Delta}\bm{\hat{\psi}}_{1,v}+\bm{\hat{\psi}}^{*}_{1,w}\hat{\Delta}\bm{\hat{\psi}}_{1,w}\Big{)}\right].$
(18d)
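A sketch of the rank-one contractions in Eqs. (18a) and (18c) is given below. The mode shape, mean shear, and $\sigma_{1}$ are synthetic placeholders (with an assumed $(u,v,w,\rho)$ component ordering), not computed resolvent output.

```python
import numpy as np

# Rank-one production and buoyancy-flux terms, Eqs. (18a) and (18c).  The mode
# shape psi1, the mean shear dUdy, and sigma1 are synthetic placeholders; the
# component ordering (u, v, w, rho) is an assumption for illustration.
ny = 64
y = np.linspace(0.0, 2.0, ny)
dUdy = 1.0 - y                               # assumed mean shear (zero at the centreline)
Ri_tau, sigma1 = 18.0, 2.5

rng = np.random.default_rng(3)
psi1 = rng.standard_normal((4, ny)) + 1j * rng.standard_normal((4, ny))
psi1 /= np.linalg.norm(psi1)                 # unit-norm stand-in for the response mode
u1, v1, rho1 = psi1[0], psi1[1], psi1[3]

P = np.real(-dUdy * sigma1 ** 2 * u1.conj() * v1)      # Eq. (18a)
B = np.real(-Ri_tau * sigma1 ** 2 * v1.conj() * rho1)  # Eq. (18c)
```

With actual singular vectors in place of the random stand-ins, these two lines produce the wall-normal profiles plotted in Fig. 11.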
The results for wavenumber combinations E1, E2 and E3 are shown in Fig. 11.
Since the wavenumber combinations E1, E2 and E3 are the most energetic at each
wavespeed, we predict that the local components of the budget term should
indicate the overall trend of the total budget term at the corresponding wall-
normal height. These quantities are compared to the energy budget computed
from the DNS, shown in Fig. 12.
Figure 11: Energy budget terms computed from resolvent modes, Eqs. (18a-18d),
for wavenumbers given by (a) E1 at $c=\overline{u}(y^{+}=15)$, (b) E2 at
$c=\overline{u}(y^{+}=30)$, and (c) E3 at $c=\overline{u}(y^{+}=100)$ for
$Ri_{\tau}=0,10,18,60,100$ (darker to lighter).

Figure 12: Energy budget terms computed from the DNS for
$Ri_{\tau}=0,10,18,60,100$ (darker to lighter).
The trends observed in the energy budget computed from the DNS are also
recovered in the resolvent budgets. The production is mostly balanced by
viscous dissipation and has a larger magnitude than the transport
(approximately 10% of the production term) or buoyancy flux (approximately
0.1-1% of the production term, depending on $Ri_{\tau}$) terms. Comparing the
quantities at the wall-normal heights of interest, we see that at $y^{+}=15$,
there is little variation in the production and viscous dissipation terms in
both DNS and resolvent modes. The difference in relative magnitude over the
various values of $Ri_{\tau}$ increases farther away from the wall, and at
$y^{+}=100$, the production (and viscous diffusion) of the $Ri_{\tau}=100$
case is double the production (and viscous diffusion) of the neutrally-buoyant
case in both the DNS and resolvent.
Direct comparison of the integrated magnitudes is more difficult for the
transport and buoyancy flux terms as they are not uniformly positive or
negative. However, this indicates that, locally, the buoyancy flux acts as an
energy transfer term, much like the turbulent transport, as the term adds
energy at one wall-normal location and removes it from another. Because the
DNS energy budget is integrated over all spatio-temporal scales, it is
impossible to deduce from Fig. 12, which shows a net negative energy balance
from $\mathcal{B}$ at all wall-normal locations, that the buoyancy flux term
acts as a local energy transfer term. In contrast, the resolvent
buoyancy flux term indicates a non-monotonic distribution of energy in the
wall-normal direction. Similar results could be obtained through spatio-
temporal deconstruction of the DNS energy budget term as in Ref. [71], but
this would require a time-resolved dataset for a longer time domain. The
resolvent turbulent transport stays relatively similar among different
$Ri_{\tau}$, as does the turbulent transport term from DNS. The buoyancy flux
is much more dependent on $Ri_{\tau}$, with variations becoming greater
farther away from the wall in both the DNS and resolvent results.
These results can be better quantified by plotting the values at each wall-
normal location normalized by the peak production at $y^{+}=15$ for each case,
as shown in Fig. 13. This shows that the overall trend of the budget terms is
well captured by the resolvent budget terms, with the exception of the
transport term close to the wall. This discrepancy may be attenuated by
integrating over more wavespeeds.
Figure 13: (a) Resolvent energy budget terms for wavenumber combinations E1,
E2 and E3 evaluated at $y^{+}=15,30,100$, respectively, for
$Ri_{\tau}=0,10,18,60,100$ (darker to lighter) normalized with
$\mathcal{P}(y^{+}=15)$ for $Ri_{\tau}=0$ and wavenumber combination E1. (b)
DNS energy budget terms at $y^{+}=15,30,100$ for $Ri_{\tau}=0,10,18,60,100$
(darker to lighter) normalized with $\mathcal{P}_{\text{DNS}}(y^{+}=15)$ for
$Ri_{\tau}=0$. Symbols are $\mathcal{P}$, circles; $\mathcal{T}$, triangles;
$\mathcal{V}$, crosses; and $\mathcal{B}$, asterisks.
Note that the results are not expected to match those of the DNS for all
scales, as the energy captured in the wall-parallel resolvent modes is known
to be overpredicted while the energy captured in the Reynolds stress and wall-
normal resolvent modes is underpredicted. This is a known issue for resolvent
analysis in the primitive variables due to the competing mechanisms of the
Squire modes with the Orr-Sommerfeld modes [72, 73]. Additionally, the
underprediction of energy captured in the Reynolds stress and wall-normal
resolvent modes could explain the underprediction of the transport term close
to the wall. Crucially, though, the most energetic scale can reproduce the
integrated effect of all scales, which enables a quick predictive model of
stratified boundary layers.
## IV Conclusions
The resolvent framework for the Navier-Stokes equations with the Boussinesq
approximation was applied to a stratified turbulent boundary layer.
Computation of the leading resolvent modes is more cost-effective than
performing a full-scale simulation or experiment, while being able to provide
information on the flow. This quick model can provide meaningful insight into
stratified flows with only information about the mean profile and prior
knowledge of energetic scales of motion in the neutrally-buoyant boundary
layers.
The results show that despite using only a very limited range of
representative scales, the resolvent model was able to reproduce the relative
magnitude of turbulence intensities and the balance of the energy budget as
well as provide meaningful analysis of structures in the flow. We studied the
amplitude of the resolvent response modes and their two-dimensional mode
shapes of the rank-one approximation, which were then compared to the
turbulence intensities and the two-dimensional auto-correlation of the
velocity and density fields of the DNS, respectively. The resolvent response
modes were able to predict the relative variation in turbulence intensities as
a function of wall-normal distance and Richardson number for the $Ri_{\tau}$
under consideration in this study. The two-dimensional mode shapes also
provided insight into how the auto-correlation coefficient might shift as a
function of $Ri_{\tau}$. Finally, the energy budget terms for the turbulent
kinetic energy of the system were computed both using the rank-one
approximation of the resolvent analysis and the DNS data. Again, the resolvent
energy budget predicts well the relative distribution of energy between
production, dissipation, transport, and buoyancy flux as a function of wall-
normal distance and Richardson number.
In the current study, the resolvent model was closed using mean velocity and
density profiles obtained from DNS. The computational cost of calculating the
forcing and response modes at certain scales was on the order of seconds on a
laptop. Therefore, by obtaining only mean velocity and scalar profiles we
could generate the salient modal structure for a given stratified wall-bounded
flow. The next steps involve using in-situ data to generate modes that are
representative of flow phenomena observed in nature.
## Acknowledgements
The support of a Vannevar Bush Faculty Fellowship administered under the U.S.
Office of Naval Research, grant #N00014-17-1-3022, is gratefully acknowledged.
Additionally, the authors would like to thank Dr. Angeliki Laskari for
insightful discussions.
## References
* McKeon and Sharma [2010] B. J. McKeon and A. S. Sharma, A critical-layer framework for turbulent pipe flow, J. Fluid Mech. 658, 336 (2010).
* Dawson _et al._ [2018] S. T. Dawson, T. Saxton-Fox, and B. J. McKeon, Modeling passive scalar dynamics in wall-bounded turbulence using resolvent analysis, in _2018 AIAA Fluid Dynamics Conference_ (2018) p. 4042.
* Nieuwstadt [1984] F. T. M. Nieuwstadt, The turbulent structure of the stable, nocturnal boundary layer, J. Atm. Sci. 41, 2202 (1984).
* Stull [2000] R. B. Stull, _Meteorology for scientists and engineers_ (Brooks/Cole, 2000).
* Wunsch and Ferrari [2004] C. Wunsch and R. Ferrari, Vertical mixing, energy, and the general circulation of the oceans, Annu. Rev. Fluid Mech. 36, 281 (2004).
* Thorpe [2005] S. A. Thorpe, _The Turbulent Ocean_ (Cambridge University Press, 2005).
* Panofsky and Dutton [1984] H. A. Panofsky and J. A. Dutton, _Atmospheric Turbulence: Models and methods for engineering applications_ (Wiley, 1984).
* Sorbjan [1989] Z. Sorbjan, _Structure of the atmospheric boundary layer_ (Prentice Hall, 1989).
* Stull [1988] R. B. Stull, _An Introduction to Boundary Layer Meteorology_ , Vol. 13 (Springer Science & Business Media, 1988).
* Wyngaard [2010] J. C. Wyngaard, _Turbulence in the Atmosphere_ (Cambridge University Press, 2010).
* Garratt [1994] J. R. Garratt, The atmospheric boundary layer, Earth-Sci. Rev. 37, 89 (1994).
* Ivey _et al._ [2008] G. N. Ivey, K. B. Winters, and J. R. Koseff, Density stratification, turbulence, but how much mixing?, Annu. Rev. Fluid Mech. 40, 169 (2008).
* Mahrt [2014] L. Mahrt, Stably stratified atmospheric boundary layers, Annu. Rev. Fluid Mech. 46, 23 (2014).
* Mahrt [1999] L. Mahrt, Stratified atmospheric boundary layers, Bound.-Layer Meteorol. 90, 375 (1999).
* Williams _et al._ [2017] O. Williams, T. Hohman, T. Van Buren, E. Bou-Zeid, and A. J. Smits, The effect of stable thermal stratification on turbulent boundary layer statistics, J. Fluid Mech. 812, 1039 (2017).
* Chauhan _et al._ [2013] K. Chauhan, N. Hutchins, J. Monty, and I. Marusic, Structure inclination angles in the convective atmospheric surface layer, Bound.-Layer Meteorol. 147, 41 (2013).
* Salesky and Anderson [2018] S. T. Salesky and W. Anderson, Buoyancy effects on large-scale motions in convective atmospheric boundary layers: implications for modulation of near-wall processes, J. Fluid Mech. 856, 135 (2018).
* Salesky and Anderson [2020] S. T. Salesky and W. Anderson, Revisiting inclination of large-scale motions in unstably stratified channel flow, J. Fluid Mech. 884 (2020).
* Wyngaard and Coté [1971] J. C. Wyngaard and O. R. Coté, The budgets of turbulent kinetic energy and temperature variance in the atmospheric surface layer, J. Atm. Sci. 28, 190 (1971).
* Kondo _et al._ [1978] J. Kondo, O. Kanechika, and N. Yasuda, Heat and momentum transfers under strong stability in the atmospheric surface layer, J. Atm. Sci. 35, 1012 (1978).
* Mahrt [1998] L. Mahrt, Nocturnal boundary-layer regimes, Bound.-Layer Meteorol. 88, 255 (1998).
* Fernando and Weil [2010] H. J. S. Fernando and J. C. Weil, Whither the stable boundary layer? A shift in the research agenda, Bull. Am. Meteorol. Soc. 91, 1475 (2010).
* Smedman _et al._ [1994] A.-S. Smedman, M. Tjernström, and U. Högström, The near-neutral marine atmospheric boundary layer with no surface shearing stress: A case study, J. Atm. Sci. 51, 3399 (1994).
* Stacey _et al._ [1999] M. T. Stacey, S. G. Monismith, and J. R. Burau, Observations of turbulence in a partially stratified estuary, J. Phys. Oceanogr. 29, 1950 (1999).
* Lu _et al._ [2000] Y. Lu, R. G. Lueck, and D. Huang, Turbulence characteristics in a tidal channel, J. Phys. Oceanogr. 30, 855 (2000).
* Large _et al._ [1994] W. G. Large, J. C. McWilliams, and S. C. Doney, Oceanic vertical mixing: A review and a model with a nonlocal boundary layer parameterization, Rev. Geophys. 32, 363 (1994).
* Arya [1975] S. P. S. Arya, Buoyancy effects in a horizontal flat-plate boundary layer, J. Fluid Mech. 68, 321 (1975).
* Britter [1974] R. E. Britter, _An experiment on turbulence in a density stratified fluid_ , Ph.D. thesis, Monash University, Australia (1974).
* Piat and Hopfinger [1981] J. F. Piat and E. J. Hopfinger, A boundary layer topped by a density interface, J. Fluid Mech. 113, 411 (1981).
* Komori _et al._ [1983] S. Komori, H. Ueda, F. Ogino, and T. Mizushina, Turbulence structure in stably stratified open-channel flow, J. Fluid Mech. 130, 13 (1983).
* Fukui _et al._ [1983] K. Fukui, M. Nakajima, and H. Ueda, A laboratory experiment on momentum and heat transfer in the stratified surface layer, Q. J. R. Meteorol. Soc. 109, 661 (1983).
* Miles [1961] J. W. Miles, On the stability of heterogeneous shear flows, J. Fluid Mech. 10, 496 (1961).
* Garg _et al._ [2000] R. P. Garg, J. H. Ferziger, S. G. Monismith, and J. R. Koseff, Stably stratified turbulent channel flows. I. Stratification regimes and turbulence suppression mechanism, Phys. Fluids 12, 2569 (2000).
* Armenio and Sarkar [2002] V. Armenio and S. Sarkar, An investigation of stably stratified turbulent channel flow using large-eddy simulation, J. Fluid Mech. 459, 1 (2002).
* Basu and Porté-Agel [2006] S. Basu and F. Porté-Agel, Large-eddy simulation of stably stratified atmospheric boundary layer turbulence: a scale-dependent dynamic modeling approach, J. Atmos. Sci. 63, 2074 (2006).
* Stoll and Porté-Agel [2008] R. Stoll and F. Porté-Agel, Large-eddy simulation of the stable atmospheric boundary layer using dynamic models with different averaging schemes, Bound.-Layer Meteorol. 126, 1 (2008).
* Iida _et al._ [2002] O. Iida, N. Kasagi, and Y. Nagano, Direct numerical simulation of turbulent channel flow under stable density stratification, Int. J. Heat Mass Transf. 45, 1693 (2002).
* Nieuwstadt [2005] F. T. M. Nieuwstadt, Direct numerical simulation of stable channel flow at large stability, Bound.-Layer Meteorol. 116, 277 (2005).
* Brethouwer _et al._ [2007] G. Brethouwer, P. Billant, E. Lindborg, and J.-M. Chomaz, Scaling analysis and simulation of strongly stratified turbulent flows, J. Fluid Mech. 585, 343–368 (2007).
* Flores and Riley [2011] O. Flores and J. J. Riley, Analysis of turbulence collapse in the stably stratified surface layer using direct numerical simulation, Bound.-Layer Meteorol. 139, 241 (2011).
* García-Villalba and Del Alamo [2011] M. García-Villalba and J. C. Del Alamo, Turbulence modification by stable stratification in channel flow, Phys. Fluids 23, 045104 (2011).
* Yeh and Taira [2018] C.-A. Yeh and K. Taira, Resolvent-analysis-based design of airfoil separation control, J. Fluid Mech. 867, 572 (2018).
* Towne _et al._ [2018] A. Towne, O. T. Schmidt, and T. Colonius, Spectral proper orthogonal decomposition and its relationship to dynamic mode decomposition and resolvent analysis, J. Fluid Mech. 847, 821 (2018).
* Bae _et al._ [2020] H. J. Bae, S. T. Dawson, and B. J. McKeon, Resolvent-based study of compressibility effects on supersonic turbulent boundary layers, J. Fluid Mech. 883 (2020).
* McMullen _et al._ [2020] R. M. McMullen, K. Rosenberg, and B. J. McKeon, Interaction of forced Orr-Sommerfeld and Squire modes in a low-order representation of turbulent channel flow, Phys. Rev. Fluids 5, 084607 (2020).
* Nogueira _et al._ [2021] P. A. S. Nogueira, P. Morra, E. Martini, A. V. G. Cavalieri, and D. S. Henningson, Forcing statistics in resolvent analysis: application in minimal turbulent couette flow, J. Fluid Mech. 908, A32 (2021).
* McKeon [2017] B. J. McKeon, The engine behind (wall) turbulence: perspectives on scale interactions, J. Fluid Mech. 817 (2017).
* Jovanović [2020] M. R. Jovanović, From bypass transition to flow control and data-driven turbulence modeling: An input-output viewpoint, Annu. Rev. Fluid Mech. 53, null (2020).
* Moarref _et al._ [2013] R. Moarref, A. S. Sharma, J. A. Tropp, and B. J. McKeon, Model-based scaling of the streamwise energy density in high-Reynolds-number turbulent channels, J. Fluid Mech. 734, 275 (2013).
* Madhusudanan [2020] A. Madhusudanan, _Coherent structures from the linearized Navier-Stokes equations for wall-bounded turbulent flows_ , Ph.D. thesis, University of Melbourne (2020).
* Lorenz [1955] E. N. Lorenz, Available potential energy and the maintenance of the general circulation, Tellus 7, 157 (1955).
* Turner [1979] J. S. Turner, _Buoyancy Effects in Fluids_ (Cambridge University Press, 1979).
* Orlandi [2000] P. Orlandi, _Fluid Flow Phenomena: A Numerical Toolkit_ (Springer, 2000).
* Wray [1990] A. A. Wray, _Minimal-storage time advancement schemes for spectral methods_ , Tech. Rep. (NASA Ames Research Center, 1990).
* Chorin [1968] A. J. Chorin, Numerical solution of the Navier-Stokes equations, Math. Comput. 22, 745 (1968).
* Bae _et al._ [2019] H. J. Bae, A. Lozano-Durán, S. T. Bose, and P. Moin, Dynamic slip wall model for large-eddy simulation, J. Fluid Mech. 859, 400 (2019).
* Lozano-Durán and Bae [2019] A. Lozano-Durán and H. J. Bae, Characteristic scales of Townsend’s wall-attached eddies, J. Fluid Mech. 868, 698 (2019).
* Ribeiro _et al._ [2020] J. H. M. Ribeiro, C.-A. Yeh, and K. Taira, Randomized resolvent analysis, Phys. Rev. Fluids 5, 033902 (2020).
* Symon _et al._ [2018] S. Symon, K. Rosenberg, S. T. M. Dawson, and B. J. McKeon, Non-normality and classification of amplification mechanisms in stability and resolvent analysis, Phys. Rev. Fluids 3, 053902 (2018).
* del Alamo _et al._ [2004] J. C. del Alamo, J. Jiménez, P. Zandonade, and R. D. Moser, Scaling of the energy spectra of turbulent channels, J. Fluid Mech. 500, 135 (2004).
* Hopfinger [1987] E. J. Hopfinger, Turbulence in stratified fluids: A review, J. Geophys. Research 92, 5287 (1987).
* Tritton [1967] D. J. Tritton, Some new correlation measurements in a turbulent boundary layer, J. Fluid Mech. 28, 439 (1967).
* Liu _et al._ [2001] Z. Liu, R. J. Adrian, and T. J. Hanratty, Large-scale modes of turbulent channel flow: transport and structure, J. Fluid Mech. 448, 53 (2001).
* Ganapathisubramani _et al._ [2005] B. Ganapathisubramani, N. Hutchins, W. T. Hambleton, E. K. Longmire, and I. Marusic, Investigation of large-scale coherence in a turbulent boundary layer using two-point correlations, J. Fluid Mech. 524, 57 (2005).
* Lee and Sung [2011] J. H. Lee and H. J. Sung, Very-large-scale motions in a turbulent boundary layer, J. Fluid Mech. 673, 80 (2011).
* Pirozzoli and Bernardini [2011] S. Pirozzoli and M. Bernardini, Turbulence in supersonic boundary layers at moderate Reynolds number, J. Fluid Mech. 688, 120 (2011).
* Sillero _et al._ [2014] J. A. Sillero, J. Jiménez, and R. D. Moser, Two-point statistics for turbulent boundary layers and channels at Reynolds numbers up to $\delta^{+}\approx 2000$, Phys. Fluids 26, 105109 (2014).
* Kline _et al._ [1967] S. J. Kline, W. C. Reynolds, F. A. Schraub, and P. W. Runstadler, The structure of turbulent boundary layers, J. Fluid Mech. 30, 741 (1967).
* Smith and Metzler [1983] C. R. Smith and S. P. Metzler, The characteristics of low-speed streaks in the near-wall region of a turbulent boundary layer, J. Fluid Mech. 129, 27 (1983).
* Symon _et al._ [2020] S. Symon, S. J. Illingworth, and I. Marusic, Energy transfer in turbulent channel flows and implications for resolvent modelling, J. Fluid Mech. (to appear), arXiv:2004.13266 (2020).
* Mizuno [2016] Y. Mizuno, Spectra of energy transport in turbulent channel flows for moderate Reynolds numbers, J. Fluid Mech. 805, 171 (2016).
* Moarref _et al._ [2014] R. Moarref, M. R. Jovanović, J. A. Tropp, A. S. Sharma, and B. J. McKeon, A low-order decomposition of turbulent channel flow via resolvent analysis and convex optimization, Phys. Fluids 26, 051701 (2014).
* Rosenberg and McKeon [2019] K. Rosenberg and B. J. McKeon, Efficient representation of exact coherent states of the Navier-Stokes equations using resolvent analysis, Fluid Dyn. Res. 51 (2019).
11institutetext: Key Laboratory of Orogenic Belts and Crustal Evolution,
School of Earth and Space Sciences, Peking University, 100871 Beijing, China;
<EMAIL_ADDRESS>
22institutetext: Earth Dynamics Research Group, School of Earth and Planetary
Sciences, Curtin University, 6102 WA, Australia;
33institutetext: State Key Laboratory of Remote Sensing Science, Aerospace
Information Research Institute, Chinese Academy of Sciences, 100101 Beijing,
China; <EMAIL_ADDRESS>
Received 20xx month day; accepted 20xx month day
# Lunar Cratering Asymmetries with High Orbital Obliquity and Inclination of
the Moon
Huacheng Li 11 Nan Zhang 1122 Zongyu Yue 33 Yizhuo Zhang 11
###### Abstract
Accurate estimation of cratering asymmetry on the Moon is crucial for
understanding lunar evolution history. Early studies of cratering asymmetry
have omitted the contributions of high lunar obliquity and inclination. Here,
we include lunar obliquity and inclination as new controlling variables and
derive the spatial variation of the cratering rate as a function of longitude
and latitude. After examining the influence of lunar obliquity and inclination
on the population of asteroids encountered by the Moon, we derive general
formulas for the spatial variation of the cratering rate based on the crater
scaling law. Our formulas, which incorporate lunar obliquity and inclination,
reproduce the lunar cratering rate asymmetry at the current Earth-Moon
distance and predict the apex/ant-apex ratio and the pole/equator ratio of the
cratering rate to be 1.36 and 0.87, respectively. The apex/ant-apex ratio
decreases as the obliquity and inclination increase. Combined with the
evolution of lunar obliquity and inclination, our model shows that the
apex/ant-apex ratio does not decrease monotonically with Earth-Moon distance,
and hence the influences of obliquity and inclination on the evolution of the
apex/ant-apex ratio are not negligible. This model is generalizable to other
planets and moons, especially for different spin-orbit resonances.
###### keywords:
Moon, meteorites, meteors, meteoroids, planets and satellites: surfaces
## 1 Introduction
Cratering asymmetry on the lunar surface has been recognized in many studies
(Le Feuvre & Wieczorek 2011; Wang & Zhou 2016). Understanding such asymmetry
alters the basis of lunar cratering chronology (Hiesinger et al. 2000; Fassett
et al. 2012), which has assumed that the cratering rate is spatially uniform
over the whole Moon (McGill 1977), and it eventually influences our
fundamental understanding of lunar evolution. Quantifying the asymmetry can
rectify deviations in counting the lunar craters sampled by the Apollo and
Luna missions (Hartmann 1970; Neukum et al. 1975; Neukum 1984). Cratering
asymmetry has also been generalized to surface datings of other planets and
moons (Horedt & Neukum 1984; Neukum et al. 2001a, b; Hartmann & Neukum 2001;
Zahnle et al. 2001; Korycansky & Zahnle 2005). Various factors affecting the
cratering asymmetry on the Moon have been intensively investigated (Hartmann
1970; Neukum et al. 1975; Neukum 1984; Le Feuvre & Wieczorek 2011; Wang & Zhou
2016); the key factors include (1) the speed and inclination of asteroids
encountering the Moon (Le Feuvre & Wieczorek 2011) and (2) the distance
between the Earth and the Moon (Zahnle et al. 2001; Le Feuvre & Wieczorek
2011; Wang & Zhou 2016).
Three types of cratering asymmetry, i.e., the leading/trailing asymmetry, the
pole/equator asymmetry, and the near/far-side asymmetry, have been recognized
(e.g., Le Feuvre & Wieczorek 2011; Wang & Zhou 2016). The leading/trailing
asymmetry has been explained by both theoretical derivations (Horedt & Neukum
1984; Le Feuvre & Wieczorek 2011; Wang & Zhou 2016) and numerical simulations
(Gallant et al. 2009; Wang & Zhou 2016). It has been confirmed that, owing to
the synchronous rotation, the leading side receives a larger impactor flux and
higher impact speeds than the trailing side, and that this difference declines
as the Earth-Moon distance increases (Le Feuvre & Wieczorek 2011; Wang & Zhou
2016). The pole/equator asymmetry has also been modelled numerically (Gallant
et al. 2009; Wang & Zhou 2016): low latitudes of the Moon receive a larger
impactor flux because of the concentration of low-inclination asteroids (Le
Feuvre & Wieczorek 2008, 2011). In addition, the pole/equator asymmetry is
found to vary by less than 1% when the Earth-Moon distance is between 20 and
60 Earth radii (Le Feuvre & Wieczorek 2011). The mechanism of the
near/far-side asymmetry has not reached a consensus (Wiesel 1971; Bandermann &
Singer 1973). In previous studies, two factors affecting impact asymmetry,
namely the orbital obliquity and inclination of the Moon (relative to the
ecliptic), have usually been neglected (Le Feuvre & Wieczorek 2011; Wang &
Zhou 2016). However, these two factors might be important within the first 35
Earth radii of Earth-Moon distance, when the Moon was quickly receding from
the Earth (Ćuk et al. 2016; Ward 1975). Therefore, it is necessary to
investigate the influences of these two factors on the lunar cratering
asymmetries.
In this study, we derive the dependence of the impact asymmetry on the orbital
obliquity and inclination of the Moon by improving previous empirical models
of the leading/trailing and pole/equator asymmetries (Le Feuvre & Wieczorek
2011) and extending two-dimensional analytic formulas (Wang & Zhou 2016) to
complete formulas based on three-dimensional geometry. Le Feuvre & Wieczorek
(2011) assumed the orbital obliquity of the Moon to be constant when the
Earth-Moon distance is larger than 20 Earth radii. Wang & Zhou (2016)
calculated the cratering asymmetries in a planar model that excludes the
influences of the orbital obliquity and inclination of the Moon. Our
analytical formulation including obliquity and inclination can reveal more
features of the lunar leading/trailing asymmetry (Le Feuvre & Wieczorek 2011)
and adds an explicit term for the pole/equator asymmetry (Wang & Zhou 2016).
In Section 2, we derive formulas for the distribution of impact flux, normal
speed, and cratering rate on the Moon using the concentration of asteroids
encountering the Moon and scaling laws that convert asteroid velocities and
diameters into crater diameters (Holsapple & Housen 2007). Section 3 shows the
resulting distributions of impact flux, normal speed, and cratering rate based
on the formulas in Section 2; it also estimates the evolution of the
apex/ant-apex ratio of the cratering rate according to the evolution of the
orbital obliquity and inclination at different Earth-Moon distances (Ćuk et
al. 2016). In Section 4, we verify the formulas of Section 2 by comparison
with previous results and explain how the orbital obliquity and inclination
influence the lunar cratering rate asymmetry. Additionally, the influences of
the orbital obliquity and inclination of the Moon on the concentration of
asteroids encountering the Moon are detailed in the appendix.
## 2 Method
This section shows how we calculate the distributions of asteroid impact flux,
impact speed, and cratering rate using the variables in Table 1. Section 2.1
introduces the assumptions and coordinate systems with which we derive
expressions for an asteroid's velocity $\vec{v}_{p}$ and the normal vector
$\vec{n}$ at the impact site; $\vec{v}_{p}$ and $\vec{n}$ are used throughout
the following calculations. Section 2.2 uses equations from Wang & Zhou (2016)
and Le Feuvre & Wieczorek (2011) to estimate the impact flux at different
impact sites; these equations are rewritten as functions of $\vec{v}_{p}$ and
$\vec{n}$. Section 2.3 calculates the cratering rate variation using the
scaling law of Holsapple & Housen (2007). Obtaining the cratering rate
variation requires the variations of impact flux and of normal impact speed;
the former is calculated in Section 2.2, and the latter follows with minor
changes to the impact flux calculation.
Table 1: Variables or Parameters Used in the Method.

Description | Notation | Range
---|---|---
inclination of asteroids’ encounter velocity | $\phi_{p}$ | $[-\frac{\pi}{2},\frac{\pi}{2}]$
azimuth of asteroids’ encounter velocity | $\lambda_{p}$ | $[0,2\pi]$
encounter speed of asteroids | $v_{p}$ | $[19km/s,\sim 20km/s]$
lunar orbit inclination | $i_{1}$ | $[0,\frac{\pi}{2}]$
lunar obliquity relative to the ecliptic | $i_{2}$ | $[0,\frac{\pi}{2}]$
azimuth of lunar orbit normal | $\omega_{1}$ | $[0,2\pi]$
azimuth of lunar spin axis | $\omega_{2}$ | $[0,2\pi]$
lunar true anomaly | $f_{m}$ | $[0,2\pi]$
lunar eccentric anomaly | $E$ | $[0,2\pi]$
lunar mean anomaly | $M$ | $[0,2\pi]$
lunar argument of perihelion | $\omega_{3}$ | $[0,2\pi]$
longitude of impact sites | $\lambda$ | $[0,2\pi]$
latitude of impact sites | $\phi$ | $[-\frac{\pi}{2},\frac{\pi}{2}]$
semi-major axis of lunar orbit | $a_{m}$ | $[25R_{e},60R_{e}]$
eccentricity of lunar orbit | $e$ | [0,1)
### 2.1 Asteroids Velocity and Normal Vector at Impact Site
This model assumes the orbit of the Moon is an ellipse with the Earth at a
focus. In the geocentric ecliptic coordinate system (the Z-axis is parallel to
the ecliptic normal and the X-axis points towards the mean equinox of the
J2000 epoch), the position and velocity of the Moon are $\vec{r}_{m}$ and
$\vec{v}_{m}$. We note that the influence of variations of $i_{1}$,
$\omega_{1}$, or $\omega_{3}$ on the lunar velocity can be estimated using
Eqs. (2-5) of Ćuk & Burns (2004) and is $<1\%$ compared to the influence of
the variation of $f_{m}$; we hence ignore the variations of $i_{1}$,
$\omega_{1}$, and $\omega_{3}$ in deriving the lunar velocity. In Eq. (2), $G$
and $M_{e}$ are the gravitational constant and the mass of the Earth,
respectively.
$$\vec{r}_{m}=R_{z}\!\left(\tfrac{\pi}{2}+\omega_{1}\right)R_{x}(i_{1})\,R_{z}(\omega_{3})\,\vec{r},\qquad
\vec{r}=\left[\frac{a_{m}(1-e^{2})}{1+e\cos f_{m}}\cos f_{m},\ \frac{a_{m}(1-e^{2})}{1+e\cos f_{m}}\sin f_{m},\ 0\right]^{T} \tag{1}$$

$$\vec{v}_{m}=R_{z}\!\left(\tfrac{\pi}{2}+\omega_{1}\right)R_{x}(i_{1})\,R_{z}(\omega_{3})\,\vec{v},\qquad
\vec{v}=\left[-\sqrt{\frac{GM_{e}}{a_{m}(1-e^{2})}}\sin f_{m},\ \sqrt{\frac{GM_{e}}{a_{m}(1-e^{2})}}\,(e+\cos f_{m}),\ 0\right]^{T} \tag{2}$$

$$R_{x}(\theta)=\begin{bmatrix}1&0&0\\ 0&\cos\theta&-\sin\theta\\ 0&\sin\theta&\cos\theta\end{bmatrix},\qquad
R_{z}(\theta)=\begin{bmatrix}\cos\theta&-\sin\theta&0\\ \sin\theta&\cos\theta&0\\ 0&0&1\end{bmatrix}$$
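As a quick numerical sanity check of Eqs. (1)-(2), the sketch below (an illustrative aid, not the authors' code; the value of $GM_{e}$ and the helper names are our own) builds the rotation matrices and the Keplerian position and velocity, and verifies that for a circular orbit the lunar speed reduces to $\sqrt{GM_{e}/a_{m}}$.

```python
import numpy as np

GM_E = 3.986004418e14  # standard gravitational parameter of the Earth [m^3/s^2]
R_E = 6.371e6          # Earth radius [m]

def R_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def R_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def lunar_state(a_m, e, f_m, i1, w1, w3):
    """Position r_m and velocity v_m of the Moon, Eqs. (1)-(2)."""
    rot = R_z(np.pi / 2 + w1) @ R_x(i1) @ R_z(w3)
    p = a_m * (1 - e**2)                       # semi-latus rectum
    r = np.array([p / (1 + e * np.cos(f_m)) * np.cos(f_m),
                  p / (1 + e * np.cos(f_m)) * np.sin(f_m), 0.0])
    vs = np.sqrt(GM_E / p)                     # velocity scale of Eq. (2)
    v = np.array([-vs * np.sin(f_m), vs * (e + np.cos(f_m)), 0.0])
    return rot @ r, rot @ v
```

Because $R_{x}$ and $R_{z}$ are proper rotations, they preserve vector norms, so $|\vec{r}_{m}|$ and $|\vec{v}_{m}|$ can be checked directly against their unrotated Keplerian values.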
The population concentration $C_{0}$ of asteroids encountering the Moon is
defined as the distribution of the relative number of asteroids that encounter
the Moon within a unit time; it is determined by their velocities (e.g.,
Figure 5 of Le Feuvre & Wieczorek 2008). In Eqs. (3) and (4), $\vec{v}_{p}$
and $\vec{e}_{z}$ are an asteroid's encounter velocity in the geocentric
ecliptic coordinate system and the unit vector parallel to the positive
Z-axis, respectively. $v_{p}$ is the average encounter speed of the asteroids
relative to the Earth.

$$C_{0}=p(\vec{v}_{p})=p(\lambda_{p},\phi_{p}) \tag{3}$$

$$\vec{v}_{p}=v_{p}\,R_{z}\!\left(\tfrac{\pi}{2}+\lambda_{p}\right)R_{x}\!\left(\tfrac{\pi}{2}-\phi_{p}\right)\vec{e}_{z} \tag{4}$$
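The parametrization of the encounter velocity above can likewise be checked: with $\phi_{p}$ acting as an ecliptic latitude, the $z$-component of $\vec{v}_{p}$ must equal $v_{p}\sin\phi_{p}$, and $\phi_{p}=\pi/2$ must give a velocity along the Z-axis. A minimal sketch (illustrative; the function names are our own):

```python
import numpy as np

def R_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def R_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def encounter_velocity(v_p, lam_p, phi_p):
    """Asteroid encounter velocity vector (speed v_p, azimuth lam_p, inclination phi_p)."""
    e_z = np.array([0.0, 0.0, 1.0])
    return v_p * (R_z(np.pi / 2 + lam_p) @ R_x(np.pi / 2 - phi_p) @ e_z)
```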
In this model, $\vec{v}_{p}$ is determined by $\lambda_{p}$, $\phi_{p}$, and
$v_{p}$. The concentration of asteroids encountering the Moon is assumed to be
unaffected by the orbital obliquity and inclination of the Moon (see Appendix
A). Eq. (A.20) indicates that the concentration of asteroids encountering the
Moon can be estimated from the concentration encountering the Earth, so
$C_{0}$ should be a function of $(v_{p},\lambda_{p},\phi_{p})$. The spectrum
of $v_{p}$ is not considered in this study; it is set to the average encounter
speed (Horedt & Neukum 1984; Zahnle et al. 2001). $C_{0}$ is then independent
of $v_{p}$ and a function of $(\lambda_{p},\phi_{p})$ only. Because of the
precession of the lunar orbit, the asteroids' azimuth distribution does not
affect the cratering asymmetries; therefore only the marginal distribution of
Eq. (3), $\int_{-\pi}^{\pi}C_{0}\,d\lambda_{p}$, is required in the
calculation of cratering asymmetries. This marginal distribution is taken from
Le Feuvre & Wieczorek (2008) and is shown in their Figure 6.
Figure 1: The coordinate systems used in calculation. The geocentric ecliptic
coordinate system is $OX_{1}Y_{1}Z_{1}$. The lunar fixed coordinate system is
$OX_{2}Y_{2}Z_{2}$ and its origin is translated to the Earth. The gray plane
$C_{1}$ is the ecliptic plane. The plane $C_{2}$ is the lunar equatorial
plane. $\overline{OA}$ is the intersection of $C_{1}$ and $C_{2}$.
The Moon is assumed to rotate synchronously with a constant angular velocity,
and the prime meridian is defined by the mean sub-Earth point (GSFC 2008). In
the lunar fixed coordinate system, whose X-axis is the intersection of the
lunar equator plane and the prime meridian plane and whose Z-axis is the lunar
spin axis, the normal at the lunar surface is $\vec{n}(\lambda,\phi)$, and the
transformation matrix $T$ from this lunar fixed coordinate system to the
geocentric ecliptic coordinate system is determined by $\omega_{2}$, $i_{2}$,
and $M$. The relationship between the coordinate systems used in this section
is illustrated in Figure 1.

$$\vec{n}(\lambda,\phi)=R_{z}\!\left(\tfrac{\pi}{2}+\lambda\right)R_{x}\!\left(\tfrac{\pi}{2}-\phi\right)\vec{e}_{z} \tag{5}$$

$$T=R_{z}\!\left(\tfrac{\pi}{2}+\omega_{2}\right)R_{x}(i_{2})\,R_{z}(M+M_{0}) \tag{6}$$
In Eq. (6), $M_{0}$ is a parameter related to the position of the mean
sub-Earth point; it is determined by Eqs. (7-9). When the Moon is at perigee,
the center of the Earth passes through the lunar prime meridian plane:

$$\frac{\vec{r}_{m}}{|\vec{r}_{m}|}\bigg|_{f_{m}=0}+T\cdot\vec{n}(0,\phi)=0 \tag{7}$$

Solving Eq. (7) gives

$$\cos M_{0}=\frac{\cos i_{1}\sin(\omega_{1}-\omega_{2})\sin\omega_{3}-\cos(\omega_{1}-\omega_{2})\cos\omega_{3}}{\sqrt{1-\left(\sin i_{2}\sin(\omega_{1}-\omega_{2})\cos\omega_{3}-\cos i_{1}\sin i_{2}\cos(\omega_{1}-\omega_{2})\sin\omega_{3}-\sin i_{1}\cos i_{2}\sin\omega_{3}\right)^{2}}} \tag{8}$$

$$\sin M_{0}=\frac{-\cos i_{1}\cos i_{2}\cos(\omega_{1}-\omega_{2})\sin\omega_{3}-\cos i_{2}\sin(\omega_{1}-\omega_{2})\cos\omega_{3}+\sin i_{1}\sin i_{2}\sin\omega_{3}}{\sqrt{1-\left(\sin i_{2}\sin(\omega_{1}-\omega_{2})\cos\omega_{3}-\cos i_{1}\sin i_{2}\cos(\omega_{1}-\omega_{2})\sin\omega_{3}-\sin i_{1}\cos i_{2}\sin\omega_{3}\right)^{2}}} \tag{9}$$
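The closed-form expressions for $M_{0}$ above can be spot-checked numerically. The sketch below (illustrative only; it takes $\phi=0$ in the perigee condition and restricts the assertions to planar configurations with $i_{1}=i_{2}=0$, which we verified by hand) confirms that the sub-Earth direction closes the condition and that the transformation is a proper rotation.

```python
import numpy as np

def R_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def R_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def n_hat(lam, phi):
    """Surface normal in the lunar fixed frame."""
    return R_z(np.pi / 2 + lam) @ R_x(np.pi / 2 - phi) @ np.array([0.0, 0.0, 1.0])

def closure(i1, i2, w1, w2, w3):
    """Residual of the perigee condition at phi = 0, using the M0 formulas above."""
    d = np.sqrt(1 - (np.sin(i2) * np.sin(w1 - w2) * np.cos(w3)
                     - np.cos(i1) * np.sin(i2) * np.cos(w1 - w2) * np.sin(w3)
                     - np.sin(i1) * np.cos(i2) * np.sin(w3)) ** 2)
    cM0 = (np.cos(i1) * np.sin(w1 - w2) * np.sin(w3)
           - np.cos(w1 - w2) * np.cos(w3)) / d
    sM0 = (-np.cos(i1) * np.cos(i2) * np.cos(w1 - w2) * np.sin(w3)
           - np.cos(i2) * np.sin(w1 - w2) * np.cos(w3)
           + np.sin(i1) * np.sin(i2) * np.sin(w3)) / d
    M0 = np.arctan2(sM0, cM0)
    # Sub-Earth direction at perigee (f_m = 0) and the frame transformation at M = 0.
    r_hat = R_z(np.pi / 2 + w1) @ R_x(i1) @ R_z(w3) @ np.array([1.0, 0.0, 0.0])
    T = R_z(np.pi / 2 + w2) @ R_x(i2) @ R_z(M0)
    return np.linalg.norm(r_hat + T @ n_hat(0.0, 0.0))
```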
### 2.2 Distribution of Impact Flux
For given $\vec{v}_{p}$, $\vec{r}_{m}$, and $\vec{v}_{m}$, the velocity of an
asteroid relative to the Moon is $\vec{v}_{p}-\vec{v}_{m}$. Define

$$\hat{x}=-\frac{\vec{v}_{p}-\vec{v}_{m}}{|\vec{v}_{p}-\vec{v}_{m}|}\cdot\left(T\,\vec{n}(\lambda,\phi)\right).$$

The impact flux $\delta F$ is defined as the distribution of the number of
asteroids that impact the lunar surface within a unit area and a unit time.
According to Eq. (26) of Wang & Zhou (2016) and Eq. (A.47) of Le Feuvre &
Wieczorek (2011), the impact flux $\delta F$ is a function of $\hat{x}$. In
Eq. (13), $M_{m}$ and $R_{m}$ are the mass and radius of the Moon,
respectively.

$$\delta F=C_{0}\,|\vec{v}_{p}-\vec{v}_{m}|\,f(\hat{x}) \tag{10}$$

$$f_{1}(\hat{x})=\begin{cases}\hat{x}, & \hat{x}\geq 0\\ 0, & \hat{x}<0\end{cases} \tag{11}$$

$$f_{2}(\hat{x})=\begin{cases}\frac{1}{4}\,(1+\Gamma)^{-1}(1+\mu^{-1})\left(\Gamma+(1+\mu)\hat{x}\right), & \hat{x}\geq\frac{-\Gamma}{2+\Gamma}\\ 0, & \hat{x}<\frac{-\Gamma}{2+\Gamma}\end{cases} \tag{12}$$

$$\mu=\sqrt{1+\frac{2\Gamma}{1+\hat{x}}},\qquad \Gamma=\frac{2GM_{m}}{|\vec{v}_{p}-\vec{v}_{m}|^{2}R_{m}} \tag{13}$$
where $f_{1}(\hat{x})$ and $f_{2}(\hat{x})$ are two forms of $f(\hat{x})$.
$f_{1}(\hat{x})$ is from Wang & Zhou (2016), which assumes the trajectories of
asteroids are straight lines in the direction of their common encounter
velocity, while $f_{2}(\hat{x})$ is from Le Feuvre & Wieczorek (2011), in
which the trajectories of asteroids are treated as hyperbolic curves with a
focus at the center of the Moon. Because $\Gamma<0.02$, we can expand Eq. (12)
around $\Gamma=0$ with a Taylor series:

$$f_{2}(\hat{x})=\hat{x}+\left(\tfrac{1}{2}-\hat{x}\right)\Gamma+\frac{4\hat{x}^{3}+6\hat{x}^{2}-3}{4(1+\hat{x})^{2}}\,\Gamma^{2}+o(\Gamma^{3}),\qquad \hat{x}>\frac{-\Gamma}{2+\Gamma} \tag{14}$$

Evidently, $f_{1}(\hat{x})$ is the first-order approximation of
$f_{2}(\hat{x})$; the absolute relative difference between $f_{1}$ and $f_{2}$
is less than $3.5\%$. For simplicity, we use $f(\hat{x})=f_{1}(\hat{x})$ in
the following calculations. The average flux within one period of the lunar
orbit is then

$$F=\frac{1}{2\pi}\int_{0}^{2\pi}\delta F\,dM=\frac{1}{2\pi}\int_{0}^{2\pi}C_{0}\,|\vec{v}_{p}-\vec{v}_{m}|\,f(\hat{x})\,dM \tag{15}$$
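The two flux kernels $f_{1}$ and $f_{2}$ can be compared numerically. The sketch below (illustrative; the array sampling is our choice) evaluates both for $\Gamma=0.02$, the upper bound quoted above, and confirms that the absolute difference stays of order $\Gamma$ and vanishes as $\Gamma\to 0$.

```python
import numpy as np

def f1(x):
    """Straight-line kernel of Wang & Zhou (2016)."""
    return np.where(x >= 0, x, 0.0)

def f2(x, gamma):
    """Gravitational-focusing kernel of Le Feuvre & Wieczorek (2011)."""
    mu = np.sqrt(1 + 2 * gamma / (1 + x))
    val = 0.25 / (1 + gamma) * (1 + 1 / mu) * (gamma + (1 + mu) * x)
    return np.where(x >= -gamma / (2 + gamma), val, 0.0)

gamma = 0.02                      # upper bound on Gamma for the Moon
x = np.linspace(0.0, 1.0, 201)    # hat-x over the impacted hemisphere
diff = np.abs(f2(x, gamma) - f1(x))
```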
It is known that today $\omega_{3}$ changes with a period of 8.85 years and
$\omega_{2}$ changes with a period of 18.61 years. The secular average flux,
independent of both, is

$$\overline{F}(\lambda,\phi;a_{m},e,i_{1},i_{2},v_{p})=\int_{0}^{2\pi}\frac{d\omega_{3}}{2\pi}\int_{0}^{2\pi}\frac{d\omega_{2}}{2\pi}\int_{0}^{2\pi}\frac{d\lambda_{p}}{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}F\,\frac{d\phi_{p}}{\pi} \tag{16}$$
### 2.3 Distribution of Normal Impact Speed and Cratering Rate
In this model, the impact angle of an asteroid at the lunar surface is
$\theta$ (Eq. (A.54) of Le Feuvre & Wieczorek 2011) and the normal impact
speed is $V_{\perp}$ (Eqs. (A.50-A.51) of Le Feuvre & Wieczorek 2011):

$$V_{\perp}=|\vec{v}_{p}-\vec{v}_{m}|\,\sqrt{1+\Gamma}\,\sin\theta=|\vec{v}_{p}-\vec{v}_{m}|\,g(\hat{x}) \tag{17}$$

$g(\hat{x})$ can also be written in two different forms: $g_{1}(\hat{x})$ from
Wang & Zhou (2016) and $g_{2}(\hat{x})$ from Le Feuvre & Wieczorek (2011).

$$g_{1}(\hat{x})=\hat{x}/\sqrt{1+\Gamma} \tag{18}$$

$$g_{2}(\hat{x})=\sqrt{1+\Gamma-\left(\frac{1+\mu}{2}\right)^{2}(1-\hat{x}^{2})}=|\hat{x}|+\frac{1}{2}\,\mathrm{sgn}(\hat{x})\,\Gamma-\frac{\mathrm{sgn}(\hat{x})}{4(1+\hat{x})}\,\Gamma^{2}+o(\Gamma^{3}) \tag{19}$$

Similarly to $f(\hat{x})$, we expand $g_{2}(\hat{x})$ around $\Gamma=0$;
$g_{1}(\hat{x})$ is the first-order approximation of $g_{2}(\hat{x})$.
Substituting $g_{1}(\hat{x})$ into Eq. (17), we obtain the average normal
speed

$$\overline{V}_{\perp}(\lambda,\phi;a_{m},e,i_{1},i_{2},v_{p})=\frac{1}{\overline{F}(\lambda,\phi;a_{m},e,i_{1},i_{2},v_{p})}\int_{0}^{2\pi}\frac{d\omega_{3}}{2\pi}\int_{0}^{2\pi}\frac{d\omega_{2}}{2\pi}\int_{0}^{2\pi}\frac{d\lambda_{p}}{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{d\phi_{p}}{\pi}\int_{0}^{2\pi}\delta F\,V_{\perp}\,\frac{dM}{2\pi} \tag{20}$$
Combining Eq. (16) and Eq. (20), we finally obtain the cratering rate
expression. Following Eq. (56) of Wang & Zhou (2016) and applying the scaling
law for crater diameters (e.g., Holsapple & Housen 2007), the cratering rate
in our model is

$$N_{c}(\lambda,\phi;a_{m},e,i_{1},i_{2},v_{p})\propto\left(\overline{V}_{\perp}(\lambda,\phi;a_{m},e,i_{1},i_{2},v_{p})\right)^{\gamma_{p}\alpha_{p}}\,\overline{F}(\lambda,\phi;a_{m},e,i_{1},i_{2},v_{p}) \tag{21}$$

Here the cratering rate calculation takes into account only the near-Earth
objects: $\gamma_{p}\alpha_{p}=0.987$ (Bottke et al. 2002; Holsapple & Housen
2007). $\alpha_{p}$ is an exponent in the cumulative size distribution of
near-Earth object diameters (Bottke et al. 2002), and $\gamma_{p}$ is a
parameter in the scaling law (Holsapple & Housen 2007; Le Feuvre & Wieczorek
2011; Wang & Zhou 2016).
## 3 Results
In this section, we describe the cratering rate asymmetry produced by Eq.
(21). Section 3.1 demonstrates the spatial variations of impact flux, normal
speed, and cratering rate. Section 3.2 reveals the influences of the orbital
obliquity and inclination of the Moon on the lunar cratering rate asymmetry.
Section 3.3 provides the evolution of the apex/ant-apex ratio with the orbital
obliquity and inclination of the Moon.
### 3.1 Spatial Variations of Impact Flux, Normal Speed, and Cratering Rate
First, our derived formula is used to calculate the spatial variation of the
cratering rate at the current values of the Earth-Moon system, since such
variation can be compared with previous predictions by Le Feuvre & Wieczorek
(2011) and Wang & Zhou (2016). Figure 2 shows the relative spatial variations
of impact flux, normal speed, and cratering rate on the Moon with parameters
set to the current values of the Earth-Moon system. The parameters involved in
Eq. (21) are set as
$(a_{m},e,i_{1},i_{2},v_{p})=(60R_{e},0.0549,5.145^{\circ},1.535^{\circ},19km/s)$
($R_{e}$ is the radius of the Earth). In Figure 2, the relative cratering rate
is symmetric about $0^{\circ}$N; this symmetry arises from the symmetry of the
asteroids' concentration $C_{0}$. The maximum of the impact flux occurs at
$(90^{\circ}W,0^{\circ}N)$ and the minimum at $(90^{\circ}E,\pm
65^{\circ}N)$; the maximum/minimum ratio of the impact flux is 1.24. The
maximum of the normal speed occurs at $(90^{\circ}W,0^{\circ}N)$ and the
minimum at $(90^{\circ}E,\pm 47^{\circ}N)$; the maximum normal speed is 13.7
$km/s$ and the minimum is 12.1 $km/s$. The maximum of the cratering rate
occurs at $(90^{\circ}W,0^{\circ}N)$ and the minimum at $(90^{\circ}E,\pm
53^{\circ}N)$; the maximum/minimum cratering rate ratio is 1.40. The
apex/ant-apex ratio (the cratering rate ratio between
$(90^{\circ}W,0^{\circ}N)$ and $(90^{\circ}E,0^{\circ}N)$) is 1.36, a measure
of the longitudinal variation, and the pole/equator ratio is 0.87, a measure
of the latitudinal variation. The impact flux, normal speed, and cratering
rate with $e=0$ (other parameters the same as in Figure 2) were also
calculated; the relative difference of the cratering rate ($e=0$) from Figure
2 is less than 0.2%.
Figure 2: Distribution of impact flux (a), normal speed (b), and cratering
rate (c) on the Moon for the current lunar orbital obliquity, inclination and
Earth-Moon distance. The maximum is set to 1.00.
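As a rough consistency check of the quoted extrema (rough only, because the flux, normal-speed, and cratering-rate extrema occur at slightly different latitudes), the crater scaling $N_{c}\propto\overline{V}_{\perp}^{\,0.987}\,\overline{F}$ applied to the maximum/minimum ratios above approximately reproduces the maximum/minimum cratering rate ratio of 1.40:

```python
flux_ratio = 1.24          # max/min impact flux (Section 3.1)
v_max, v_min = 13.7, 12.1  # max/min normal speed [km/s] (Section 3.1)
gamma_alpha = 0.987        # exponent for near-Earth objects

crater_ratio = flux_ratio * (v_max / v_min) ** gamma_alpha
```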
### 3.2 Influences of Orbital Inclination and Obliquity of the Moon
We next investigate the specific effects of lunar orbital inclination and
obliquity on apex/ant-apex and pole/equator ratios. Figure 3 shows the
apex/ant-apex ratio $r_{1}$ and pole/equator ratio $r_{2}$ with different
lunar inclination and obliquity. $i_{2}$ is not same with the lunar obliquity
to its orbit normal. For Cassini state 2 ($\omega_{1}=\omega_{2}+\pi$), lunar
obliquity relative to the lunar orbit normal is $i_{1}+i_{2}$, while for
Cassini state 1 ($\omega_{1}=\omega_{2}$), lunar obliquity relative to the
lunar orbit normal is $|i_{1}-i_{2}|$ (Ward 1975). Other parameters in Eq.
(21) are set as $(a_{m},e,v_{p})=(60R_{e},0.0549,19km/s)$. $r_{1}$ decreases
as both $i_{1}$ and $i_{2}$ increase, while $r_{2}$ increases with $i_{2}$ and
appears to be independent of $i_{1}$. Based on the calculated dependence on
$i_{1}$ and $i_{2}$, we find that $r_{1}$ and $r_{2}$ are well fitted by
linear regressions in $\cos{(i_{1}+i_{2})}$ and $\cos{(2i_{2})}$:
$\displaystyle r_{1}$
$\displaystyle=a_{11}+a_{12}\cos{(i_{1}+i_{2})}+a_{13}\cos{(2i_{2})}$ (33)
$\displaystyle r_{2}$
$\displaystyle=a_{21}+a_{22}\cos{(i_{1}+i_{2})}+a_{23}\cos{(2i_{2})}$ (34)
When $a_{m}=60R_{e}$, the fitting result is
$\displaystyle\left(\begin{matrix}a_{11}&a_{12}&a_{13}\\\
a_{21}&a_{22}&a_{23}\end{matrix}\right)=\left(\begin{matrix}1.12676&0.2469790&0.0089461\\\
0.97137&-0.0029778&-0.0930123\end{matrix}\right)$ (35)
When $i_{1}$ and $i_{2}$ are between $0^{\circ}$ and $45^{\circ}$, the
relative error between fitting result and Figure 3 is less than 2.4% for
$r_{1}$ and 0.15% for $r_{2}$.
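For reference, the fitted regressions of Eqs. (33)-(35) can be evaluated directly. A minimal Python sketch (the function and variable names are ours, not from the paper):

```python
import math

# Coefficient matrix from Eq. (35), valid for a_m = 60 R_e (Cassini state 2).
A = [
    [1.12676, 0.2469790, 0.0089461],    # row for r1 (apex/ant-apex)
    [0.97137, -0.0029778, -0.0930123],  # row for r2 (pole/equator)
]

def fitted_ratios(i1_deg, i2_deg):
    """Evaluate Eqs. (33)-(34): r_k = a_k1 + a_k2 cos(i1+i2) + a_k3 cos(2 i2)."""
    c1 = math.cos(math.radians(i1_deg + i2_deg))
    c2 = math.cos(math.radians(2.0 * i2_deg))
    return tuple(a0 + a1 * c1 + a2 * c2 for a0, a1, a2 in A)

# Current lunar values (i1, i2) = (5.145 deg, 1.535 deg).
r1, r2 = fitted_ratios(5.145, 1.535)
```

At the current $(i_{1},i_{2})$ this yields $r_{1}\approx 1.38$ and $r_{2}\approx 0.88$, close to the directly computed values of 1.36 and 0.87.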
Figure 3: The apex/ant-apex (a) and pole/equator (b) ratios with orbital
obliquity and inclination of the Moon. Only the Cassini state 2
($\omega_{1}=\omega_{2}+\pi$) is calculated.
### 3.3 Evolution of the Apex/ant-apex Ratio
In the past, the lunar obliquity was very high and the inclination also
differed from its current value (Ward 1975; Ćuk et al. 2016). We obtain the
evolution of the lunar orbital obliquity and inclination with the Earth-Moon
distance by reproducing the semi-analytical method for the lunar orbital
evolution from Ćuk et al. (2016). This method includes solving the
differential equations of lunar synchronous orbit controlled by Earth and Moon
tidal dissipation, as well as coupling them with the equation to satisfy the
Cassini state (Ward 1975). The solutions show that the lunar inclination damps
from the initial high value to its present low value $5.1^{\circ}$ due to
tidal dissipation, and the lunar obliquity first increases and then decreases
to its current value $1.5^{\circ}$, with a jump between 29.7$R_{e}$ and 35$R_{e}$
due to the transition from Cassini state 1 to Cassini state 2, which is
similar to the extended data Figure 1 in Ćuk et al. (2016). We next apply this
evolution in our model to estimate the evolution of apex/ant-apex ratio
(Figure 4). According to Ćuk et al. (2016), the Moon is in non-synchronous
rotation from 29.7$R_{e}$ to about 35$R_{e}$ (gray box in Figure 4). When the
Moon is at Cassini state 1 (the Earth-Moon distance $<29.7R_{e}$), the
apex/ant-apex ratio decreases with $a_{m}$. When the Moon is at Cassini state
2 ($>35R_{e}$), this ratio reaches a maximum between 40$R_{e}$ and 45$R_{e}$.
Figure 4: Evolution of the apex/ant-apex ratio with Earth-Moon distance. The
X-axis represents the Earth-Moon distance and Y-axis represents the apex/ant-
apex ratio. The black solid line is $1.12e^{-0.0529a_{m}/R_{e}}+1.32$ from Le
Feuvre & Wieczorek (2011). The value of $\alpha_{p}\gamma_{p}$ in Le Feuvre &
Wieczorek (2011) ranges from $0.907$ to $1.25$. The other two dashed black
curves represent Eq. (124) of Wang & Zhou (2016) with
$\alpha_{p}\gamma_{p}=0.987$. The red and blue triangles represent results
from Eq. (21), which uses constant obliquity and inclination equal to the
current values $(i_{1},i_{2})=(5.145^{\circ},1.535^{\circ})$. The red and blue
lines are calculated based on the evolution of the orbital obliquity and
inclination of the Moon from Ćuk et al. (2016) with lunar tidal dissipation number
$Q_{M}=38$.
### 3.4 Cratering Rate Distribution of 3:2 Resonance
Our formulas can also predict cratering rate distributions for various spin-orbit
resonances. When the resonance is 3:2 (applicable to Mercury; Colombo
1965), we have a different transformation matrix in Eq. (6) as
$\displaystyle T^{{}^{\prime}}$
$\displaystyle=R_{z}(\frac{\pi}{2}+\omega_{2})R_{x}(i_{2})R_{z}(\frac{3}{2}M+M_{0})$
(36)
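The composition in Eq. (36) uses only elementary rotations about the $x$- and $z$-axes. A sketch in Python; the rotation-sign convention (passive, as shown here) is our assumption and should be matched to that of Eq. (6):

```python
import numpy as np

def Rx(t):
    """Elementary rotation about the x-axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def Rz(t):
    """Elementary rotation about the z-axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def T_32(omega2, i2, M, M0):
    """Transformation matrix T' of Eq. (36) for a 3:2 spin-orbit resonance."""
    return Rz(np.pi / 2 + omega2) @ Rx(i2) @ Rz(1.5 * M + M0)
```

As a product of rotations, $T^{\prime}$ is orthogonal with unit determinant; for synchronous (1:1) rotation the last factor reverts to $R_{z}(M+M_{0})$.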
Because the resonance is 3:2, the full integration interval for Eqs. (16),
(20), and (21) is extended to two periods, $(0,4\pi]$. Setting the
parameters involved in Eq. (21) to those for Mercury,
$(\sqrt{GM_{e}/a_{m}},e,i_{1},i_{2},v_{p})=(48.0km/s,0.205,7.0^{\circ},7.0^{\circ},42.2km/s)$,
substituting $T$ in Eqs. (7-21) with $T^{\prime}$, and using the asteroid
inclination distribution from Le Feuvre & Wieczorek (2008), the maximum and
minimum of the cratering rate with 3:2 resonance are at $(\pm
90^{\circ}E,0^{\circ}N)$ and $\pm 90^{\circ}N$, respectively. The
maximum/minimum cratering rate ratio is 3.64. When the orbital eccentricity is
reduced to 0.0 (Le Feuvre & Wieczorek 2008; Wang & Zhou 2016), the maximum
and minimum are at $0^{\circ}N$ and $\pm 90^{\circ}N$, respectively. The
maximum/minimum cratering rate ratio is 2.91.
## 4 Discussion
### 4.1 Comparison with Previous Results
In Figure 2(c), this study gives a similar current cratering rate spatial
variation ($a_{m}=60R_{e}$) as Le Feuvre & Wieczorek (2011) in which the
maximum and minimum appear at $(90^{\circ}W,0^{\circ}N)$ and $(90^{\circ}E,\pm
65^{\circ}N)$ respectively. The difference in the location of the minimum
between our result and Le Feuvre & Wieczorek (2011) may arise from the $f_{1}$
and $g_{1}$ used in our calculations. The apex/ant-apex ratio for the current
Moon from Le Feuvre & Wieczorek (2011) is 1.37. The apex/ant-apex ratios for the
current Moon from Wang & Zhou (2016) (with $v_{p}=19km/s,\alpha_{p}\gamma_{p}=0.987$)
and this study are 1.32 and 1.36 respectively. As an extension based on Le
Feuvre & Wieczorek (2011) and Wang & Zhou (2016), this study gives a value
between them. The larger relative difference between this study and Wang &
Zhou (2016) is probably caused by either the asteroid inclination distribution
or the orbital obliquity and inclination of the Moon. The pole/equator ratio in
Figure 2(c) is 0.87. This value is higher than 0.80 in Le Feuvre & Wieczorek
(2011). This difference may also arise from the $f_{1}$ and $g_{1}$ used in our
calculations. Besides the current cratering rate, this study also gives the
evolution of the apex/ant-apex ratio in Figure 4. The results from Le Feuvre &
Wieczorek (2011) and Wang & Zhou (2016) are also included in Figure 4. The
value of $\alpha_{p}\gamma_{p}$ adopted in Le Feuvre & Wieczorek (2011) is
about $0.907\sim 1.25$, because they used different parameters in the crater
scaling law (in the non-porous gravity scaling regime $\gamma_{p}=0.564$, while
in the porous regime $\gamma_{p}=0.410$) and a 10th-order polynomial to fit the
size distribution of asteroids ($\alpha_{p}\approx 2.22$). The influence of
$\alpha_{p}\gamma_{p}$ is shown in Figure 5. When $\alpha_{p}\gamma_{p}$ is
between $0.907\sim 1.25$ and $v_{p}$ is between $19\sim 20km/s$, the apex/ant-
apex ratio for current Moon calculated by this model is about $1.33\sim 1.41$.
Although the value of $\alpha_{p}\gamma_{p}$ or $v_{p}$ in this study differs
from Le Feuvre & Wieczorek (2011), if the orbital obliquity and inclination of
the Moon are assumed constant and equal to the current values
$(i_{1},i_{2})=(5.145^{\circ},1.535^{\circ})$, this study reproduces the
results predicted by Le Feuvre & Wieczorek (2011) (red and blue triangles in
Figure 4).
If we consider the variation of obliquity and inclination, when the Earth-Moon
distance is more than $\sim 42R_{e}$, this model gives a value consistent with
Le Feuvre & Wieczorek (2011) and a trend similar to Wang & Zhou (2016).
However, when Earth-Moon distance is between $35R_{e}$ and $\sim 42R_{e}$,
this study gives an opposite trend to previous results. According to Ćuk et
al. (2016), the orbital obliquity and inclination of the Moon decrease in this
interval. This evolution trend of apex/ant-apex ratio can be explained by the
influences of orbital obliquity and inclination of the Moon. When Earth-Moon
distance is between $29.7R_{e}$ and $35R_{e}$, the Moon is in non-synchronous
rotation and the apex/ant-apex ratio will be diminished by non-synchronous
rotation. When Earth-Moon distance is less than $29.7R_{e}$, the apex/ant-apex
ratio is calculated under the assumption that the inclination distribution of
asteroid encounter velocities is the same as the current distribution. Although
the obliquity and inclination are very high, the apex/ant-apex ratio is
consistent with Le
Feuvre & Wieczorek (2011). We note that the population of asteroids is
dominated by main-belt asteroids during the late heavy bombardment and by
near-Earth objects since $3.8-3.7$ Ga according to Strom et al. (2015). The
population of near-Earth objects has been in steady state for the past $\sim
3$ Ga (Bottke et al. 2002). The evolution of apex/ant-apex ratio for the
Earth-Moon distance $<29.7R_{e}$ may be quite different from that shown in
Figure 4. This confirms that the influences of the orbital obliquity and
inclination of the Moon are not negligible in analysing the lunar cratering
asymmetry.
Figure 5: Apex/ant-apex ratio of cratering rate with different
$\alpha_{p}\gamma_{p}$ and $v_{p}$. Other parameters involved in Eq. (21) are
set as $(a_{m},e,i_{1},i_{2})=(60R_{e},0.0549,5.145^{\circ},1.535^{\circ})$.
### 4.2 Explanation for the Influences of Orbital Obliquity and Inclination
of the Moon
Figure 6: Sketch for our model in the lunar fixed coordinate system
$OX_{2}Y_{2}Z_{2}$. The origin $O$ is at the center of the Moon. The
$Z_{2}$-axis is parallel to the lunar spin axis $\vec{s}$ and the $X_{2}$-axis
points to the mean sub-earth point. $\vec{k}$ is the ecliptic normal.
$\vec{n}$ is the lunar orbit normal. Points A, B, and C lie on the intersection
of the lunar surface and the plane $OY_{2}Z_{2}$. $\overline{OB}$ is parallel
to $\vec{v_{m}}$ and perpendicular to $\vec{n}$. Point A is on the lunar
equator plane $C_{2}$ and it is the ant-apex point $(90^{\circ}E,0^{\circ}N)$.
Point C is on the ecliptic plane. Point D is the mean sub-earth point.
When the lunar orbital eccentricity $e=0.0$, our model is sketched in Figure 6. In
a lunar rotation period, the relative position between
$\\{\vec{n},\vec{k},\vec{s}\\}$ and the coordinate system $OX_{2}Y_{2}Z_{2}$
is not fixed. The apex or ant-apex point is on the gray circle $C_{2}$. When
$i_{1}=i_{2}=0$, the gray circle $C_{2}$, red circle, and yellow circle
coincide and $r_{1}$ reaches its maximum. The influence of asteroid
inclination is related to the length of
$\stackrel{{\scriptstyle\frown}}{{AC}}$. The influence of lunar velocity is
related to the length of $\stackrel{{\scriptstyle\frown}}{{AB}}$. The farther
the ant-apex is from $B$ or $C$, the smaller the leading/trailing asymmetry.
In our model, the angular distance between A (ant-apex) and B is in
$[0,max\\{|i_{1}+i_{2}|,|i_{1}-i_{2}|\\}]$ and the angular distance between A
and C is in $[0,i_{2}]$. For Cassini state 2,
$max\\{|i_{1}+i_{2}|,|i_{1}-i_{2}|\\}=|i_{1}+i_{2}|$. This is consistent with
the fitting result: $r_{1}$ is proportional to $\cos{(i_{1}+i_{2})}$ and
$\cos{(2i_{2})}$. As for $r_{2}$, it is related to the angular distance between
point D and the red or yellow circle. When $i_{1}=i_{2}=0$, $r_{2}$ reaches its
minimum. The angular distance between point D and the red circle is in
$[0,i_{2}]$ and the angular distance between point D and the yellow circle is
in $[0,max\\{|i_{1}+i_{2}|,|i_{1}-i_{2}|\\}]$. The pole/equator asymmetry
decreases as those two angular distances increase. This is also
consistent with the fitting result: $r_{2}$ is proportional to
$\cos{(2i_{2})}$ and $\cos{(i_{1}+i_{2})}$.
### 4.3 Generalization of this Model
The orbital obliquity and inclination of the Moon, the Earth-Moon distance, the
lunar orbital eccentricity, and the lunar rotation speed have been included in
this model.
In addition to the Moon, this model can be applied to other planets and moons,
especially those in other types of spin-orbit resonance. For example, Mercury is
tidally locked with the Sun in a 3:2 resonance. The cratering rate
distribution for the 3:2 resonance is detailed in Figure 7. Figure 7(a) shows
the distribution of the cratering rate for the 3:2 resonance. This cratering rate
asymmetry has been reported in Wieczorek et al. (2012). They predict the
cratering asymmetry maximizes at $(0^{\circ}E,0^{\circ}N)$ and
$(180^{\circ}E,0^{\circ}N)$ and minimizes at $(\pm 90^{\circ}E,0^{\circ}N)$.
Both of this study and Wieczorek et al. (2012) predict the distance between
maxima of cratering asymmetry is $180^{\circ}$. The difference between this
study and Wieczorek et al. (2012) may arise from neglecting the non-uniformity
in the azimuth of asteroid velocities in this study and from different
definitions of the prime meridian between the two studies.
In Figure 7(b), the cratering rate with orbital eccentricity $e=0$ shows a
different distribution from that with $e=0.205$ for the 3:2 resonance. The
longitudinal variation of the cratering rate is diminished by the rotation of
planets and moons when $e=0$. However, for the cratering rate on the Moon, the
difference caused by eccentricity is less than 0.2%. The influence of
eccentricity is probably related to the type of spin-orbit resonance and will
be investigated in a future work.
Figure 7: The relative cratering rate for 3:2 resonance. The maximum is set to
1.00. Longitudes $0^{\circ}$ and $180^{\circ}$ are subsolar points when $M=0$.
In subfigure (a), the orbital eccentricity is 0.205. In subfigure (b), the
orbital eccentricity is 0.0.
## 5 Conclusion
In this study, we have presented an extension of Wang & Zhou (2016) and Le
Feuvre & Wieczorek (2011) to calculate the lunar cratering asymmetry with high
obliquity and inclination. Unlike previous models, this model is also
able to calculate the cratering asymmetry for different Cassini states and
rotation speeds. This model gives results consistent with previous studies
at low obliquity and inclination. When the obliquity, inclination and Earth-Moon
distance are at their current values, this model gives a cratering asymmetry
maximizing at $(90^{\circ}W,0^{\circ}N)$ and minimizing at $(90^{\circ}E,\pm
53^{\circ}N)$ using the encountering velocity inclination distribution
calculated in Le Feuvre & Wieczorek (2008). The apex/ant-apex ratio of this
asymmetry is 1.36 and the pole/equator ratio is 0.87. In order to calculate
the cratering rate with high obliquity and inclination, we have assumed that the
orbital obliquity and inclination of the Moon do not affect the population of
asteroids encountering the Moon. Increasing the orbital obliquity and
inclination of the Moon reduces the apex/ant-apex ratio. According to the
evolution of orbital obliquity and inclination of the Moon, this model gives
an increasing trend in apex/ant-apex ratio with the Earth-Moon distance
between $[35R_{e},42R_{e}]$. In previous studies, this ratio was predicted to
decrease with increasing Earth-Moon distance. Besides the cratering
rate, this model also gives the spatial variation of impact flux and impact
normal speed. Our results provide quantitative information for evaluating
and refining the lunar cratering chronology.
###### Acknowledgements.
We thank M. A. Wieczorek and W. Fa for instructive discussion at the early
stage of this study. We also thank J.-L. Zhou for constructive and insightful
suggestions on our research focus. We thank careful reviews by two anonymous
reviewers. Computations were conducted on the High-performance Computing
Platform of Peking University and the Pawsey Supercomputing Centre with
funding from the Australian Government and the Government of Western
Australia. Z.Y. is supported by the B-type Strategic Priority Program of the
Chinese Academy of Sciences, Grant No. XDB41000000 and NSFC 41972321. N.Z. is
grateful for NSFC 41674098, CNSA D020205 and the B-type Strategic Priority
Program of the Chinese Academy of Sciences, Grant No. XDB18010104. This
research has made use of data and/or services provided by the International
Astronomical Union’s Minor Planet Center.
## References
* Bandermann & Singer (1973) Bandermann, L. W., & Singer, S. F. 1973, Icarus, 19, 108
* Bottke et al. (2002) Bottke, W. F., Morbidelli, A., Jedicke, R., et al. 2002, Icarus, 156, 399
* Colombo (1965) Colombo, G. 1965, Nature, 208, 575
* Fassett et al. (2012) Fassett, C. I., Head, J. W., Kadish, S. J., et al. 2012, Journal of Geophysical Research: Planets, 117
* Gallant et al. (2009) Gallant, J., Gladman, B., & Ćuk, M. 2009, Icarus, 202, 371
* Greenberg (1982) Greenberg, R. 1982, The Astronomical Journal, 87, 184
* GSFC (2008) GSFC. 2008, LRO Project White Paper Version, 4, 1
* Hartmann (1970) Hartmann, W. K. 1970, Icarus, 13, 299
* Hartmann & Neukum (2001) Hartmann, W. K., & Neukum, G. 2001, Space Science Reviews, 96, 165
* Hiesinger et al. (2000) Hiesinger, H., Jaumann, R., Neukum, G., & Head, J. W. 2000, Journal of Geophysical Research, 105, 29239
* Holsapple & Housen (2007) Holsapple, K. A., & Housen, K. R. 2007, Icarus, 191, 586
* Horedt & Neukum (1984) Horedt, G. P., & Neukum, G. 1984, Icarus, 60, 710
* Korycansky & Zahnle (2005) Korycansky, D. G., & Zahnle, K. J. 2005, Planetary and Space Science, 53, 695
* Le Feuvre & Wieczorek (2008) Le Feuvre, M., & Wieczorek, M. A. 2008, Icarus, 197, 291
* Le Feuvre & Wieczorek (2011) Le Feuvre, M., & Wieczorek, M. A. 2011, Icarus, 214, 1
* McGill (1977) McGill, G. E. 1977, Geological Society of America Bulletin, 88, 1102
* Neukum (1984) Neukum, G. 1984, Meteorite bombardment and dating of planetary surfaces, PhD thesis, National Aeronautics and Space Administration, Washington, DC.
* Neukum et al. (2001a) Neukum, G., Ivanov, B. A., & Hartmann, W. K. 2001a, Space Science Reviews, 96, 55
* Neukum et al. (1975) Neukum, G., König, B., & Arkani-Hamed, J. 1975, The moon, 12, 201
* Neukum et al. (2001b) Neukum, G., Oberst, J., Hoffmann, H., Wagner, R., & Ivanov, B. A. 2001b, Planetary and Space Science, 49, 1507
* Opik (1951) Opik, E. J. 1951, Proc. R. Irish Acad. Sect. A, 54, 165
* Strom et al. (2015) Strom, R. G., Malhotra, R., Xiao, Z.-Y., et al. 2015, Research in Astronomy and Astrophysics, 15, 407
* Wang & Zhou (2016) Wang, N., & Zhou, J. L. 2016, Astronomy & Astrophysics, 594
* Ward (1975) Ward, W. R. 1975, Science, 189, 377
* Wetherill (1967) Wetherill, G. W. 1967, Journal of Geophysical Research (1896-1977), 72, 2429
* Wieczorek et al. (2012) Wieczorek, M. A., Correia, A. C. M., Le Feuvre, M., Laskar, J., & Rambaux, N. 2012, Nature Geoscience, 5, 18
* Wiesel (1971) Wiesel, W. 1971, Icarus, 15, 373
* Zahnle et al. (2001) Zahnle, K., Schenk, P., Sobieszczyk, S., Dones, L., & Levison, H. F. 2001, Icarus, 153, 111
* Ćuk & Burns (2004) Ćuk, M., & Burns, J. A. 2004, The Astronomical Journal, 128, 2518
* Ćuk et al. (2016) Ćuk, M., Hamilton, D. P., Lock, S. J., & Stewart, S. T. 2016, Nature, 539, 402
## Appendix A Asteroids encountering with the Moon with high obliquity and
inclination
In Section 2.1, we assumed that the concentration of asteroids encountering
the Moon is unaffected by the orbital obliquity and inclination of the Moon.
The probability of asteroids encountering the Moon has been estimated
previously (Opik 1951; Wetherill 1967; Greenberg 1982), and Figure 6 of Le
Feuvre & Wieczorek (2008) demonstrates that this probability is similar to
that for the Earth when the lunar inclination and obliquity are about 0.
However, when the inclination and obliquity are high, the validity of our
assumption is uncertain. In this section we introduce a different framework to
justify this assumption. Consider an asteroid with semi-major axis,
eccentricity, inclination, longitude of ascending node, argument of
perihelion, true anomaly, mean anomaly and eccentric anomaly
$(a,e,i,\Omega,\omega,f,M,E)$. Here we use subscript $e$ to represent the
orbit of the Earth and $m$ to represent the Moon. In the heliocentric ecliptic
coordinate system, the position of this asteroid is
$\displaystyle\vec{r}$
$\displaystyle=rR_{z}(\Omega)R_{x}(i)R_{z}(\omega+f)[1,0,0]^{T}$ (37)
$\displaystyle r$ $\displaystyle=a(1-e^{2})/(1+e\cos{f})=a(1-e\cos{E})$ (38)
$\displaystyle r_{max}=a(1+e)\ ,\ r_{min}=a(1-e)$
For elliptic trajectory,
$\displaystyle M$ $\displaystyle=E-e\sin{E}$ (39) $\displaystyle\cos{f}$
$\displaystyle=\frac{\cos{E}-e}{1-e\cos{E}}$ (40) $\displaystyle\sin{f}$
$\displaystyle=\frac{\sqrt{1-e^{2}}\sin{E}}{1-e\cos{E}}$ (41)
Using the common assumption of uniform precession of $\Omega$ and $\omega$ (Opik
1951; Wetherill 1967; Greenberg 1982), for given $(a,e,i)$ the joint
probability density is
$\displaystyle
P_{\Omega,\omega,M}(\Omega,\omega,M|a,e,i)=\left(\frac{1}{2\pi}\right)^{3}$
(42) $\displaystyle P_{\Omega,\omega,E}(\Omega,\omega,E|a,e,i)$
$\displaystyle=P_{\Omega,\omega,M}(\Omega,\omega,M|a,e,i)|\frac{\partial
M}{\partial E}|=\left(\frac{1}{2\pi}\right)^{3}(1-e\cos{E})$ (43)
$\displaystyle\Omega\in[-\pi,\pi],\ \omega\in[-\pi,\pi],\ M\in[-\pi,\pi],\
E\in[-\pi,\pi]$ (44)
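Since $M=E-e\sin{E}$ (Eq. 39), the Jacobian factor is $|\partial M/\partial E|=1-e\cos{E}$, so a uniform distribution in $M$ induces a non-uniform distribution in $E$. This can be checked numerically: for $M$ uniform, $\mathbb{E}[\cos{E}]=\frac{1}{2\pi}\int_{-\pi}^{\pi}\cos{E}\,(1-e\cos{E})\,dE=-e/2$. A sketch in Python:

```python
import numpy as np

e = 0.2
# Uniform grid of mean anomaly over one period (periodic, endpoint excluded).
M = np.linspace(-np.pi, np.pi, 200000, endpoint=False)

# Solve Kepler's equation M = E - e*sin(E) by vectorized Newton iteration.
E = M.copy()
for _ in range(30):
    E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))

# For M uniform, the mean of cos(E) should approach -e/2.
mean_cosE = np.cos(E).mean()
```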
The position of this asteroid can also be expressed as $\vec{r}=[x,y,z]^{T}$,
then we obtain a transformation: $(\Omega,\omega,E)\mapsto(x,y,z)$.
$\displaystyle\left\\{\begin{array}[]{cll}x&=&r(\cos{\Omega}\cos{(\omega+f)}-\sin{\Omega}\sin{(\omega+f)}\cos{i})\\\
y&=&r(\sin{\Omega}\cos{(\omega+f)}+\cos{\Omega}\sin{(\omega+f)}\cos{i})\\\
z&=&r\sin{(\omega+f)}\sin{i}\end{array}\right.$ (48)
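Eqs. (38)-(41) together with this expansion give the forward map from orbital elements to position. A self-contained Python sketch (the Newton solver for Kepler's equation is our addition):

```python
import numpy as np

def position_from_elements(a, e, i, Omega, omega, M):
    """Heliocentric position via Eqs. (38)-(41) and the expansion of Eq. (37)."""
    # Solve Kepler's equation M = E - e*sin(E) (Eq. 39) by Newton iteration.
    E = M
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if abs(dE) < 1e-13:
            break
    r = a * (1.0 - e * np.cos(E))                                    # Eq. (38)
    cosf = (np.cos(E) - e) / (1.0 - e * np.cos(E))                   # Eq. (40)
    sinf = np.sqrt(1.0 - e * e) * np.sin(E) / (1.0 - e * np.cos(E))  # Eq. (41)
    u = omega + np.arctan2(sinf, cosf)                               # omega + f
    x = r * (np.cos(Omega) * np.cos(u) - np.sin(Omega) * np.sin(u) * np.cos(i))
    y = r * (np.sin(Omega) * np.cos(u) + np.cos(Omega) * np.sin(u) * np.cos(i))
    z = r * np.sin(u) * np.sin(i)
    return np.array([x, y, z])
```

Sanity checks: $|\vec{r}|$ always lies in $[a(1-e),a(1+e)]$, and at perihelion ($M=0$) $|\vec{r}|=a(1-e)$.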
Eq. (A.9) has 4 solutions: $(\Omega_{k},\omega_{k},E_{k}),k=1,2,3,4$.
$\displaystyle\left\\{\begin{array}[]{cll}\cos{f_{k}}&=&\cfrac{a(1-e^{2})}{e\sqrt{x^{2}+y^{2}+z^{2}}}-\cfrac{1}{e}\\\
\sin{(\omega_{k}+f_{k})}&=&\cfrac{z}{\sqrt{x^{2}+y^{2}+z^{2}}\sin{i}}\\\
\Omega_{k}&=&atan2\Big{(}x\cos{(\omega_{k}+f_{k})}+y\sin{(\omega_{k}+f_{k})}\cos{i},\\\
&&y\cos{(\omega_{k}+f_{k})}-x\sin{(\omega_{k}+f_{k})}\cos{i}\Big{)}\\\
E_{k}&=&atan2\Big{(}\sqrt{x^{2}+y^{2}+z^{2}}\cos{f_{k}}+ae,\\\
&&\sqrt{x^{2}+y^{2}+z^{2}}\sin{f_{k}}\frac{1}{\sqrt{1-e^{2}}}\Big{)}\end{array}\right.$
(54)
Using Eqs. (A.6-A.10), the joint probability density is
$\displaystyle
P_{x,y,z}(x,y,z|a,e,i)=\sum_{k=1}^{4}\left|\left|\cfrac{\partial(\Omega_{k},\omega_{k},E_{k})}{\partial(x,y,z)}\right|\right|P_{\Omega,\omega,E}(\Omega_{k},\omega_{k},E_{k}|a,e,i)$
$\displaystyle=\frac{1}{2a\pi^{3}}\frac{1}{\sqrt{r^{2}\sin^{2}{i}-z^{2}}}\frac{1}{\sqrt{(r-r_{min})(r_{max}-r)}}\
,\ \ r=\sqrt{x^{2}+y^{2}+z^{2}}$ (56)
In Eq. (A.11),
$\cfrac{\partial(\Omega_{k},\omega_{k},E_{k})}{\partial(x,y,z)}$ is the Jacobi
matrix. Eq. (A.11) is valid when $(x,y,z)\in\mathbf{D}=\\{|z|\leq|\sin{i}|r\
and\ r_{min}\leq r\leq r_{max}\\}$. When $(x,y,z)\notin\mathbf{D}$,
$P_{x,y,z}(x,y,z|a,e,i)=0$. This asteroid encountering a fixed point
$\vec{r_{0}}=[x_{0},y_{0},z_{0}]^{T}$ is defined as
$|\vec{r}-\vec{r_{0}}|\leq\tau$ ($\tau$ is different for the Moon and the
Earth), with $\tau\ll min\\{|\vec{r}|,|\vec{r_{0}}|\\}$. Then we obtain the
probability $P_{1}$ of encountering a fixed point and its error $\delta
P_{1}$.
$\displaystyle P_{1}$
$\displaystyle=\iiint_{|\vec{r}-\vec{r_{0}}|\leq\tau}P_{x,y,z}(x,y,z|a,e,i)dxdydz$
$\displaystyle\approx\frac{4}{3}\pi\tau^{3}P_{x,y,z}(x_{0},y_{0},z_{0}|a,e,i)$
(57) $\displaystyle\delta P_{1}$
$\displaystyle\approx\iiint_{|\vec{r}-\vec{r_{0}}|\leq\tau}|\vec{r}-\vec{r_{0}}||\nabla
P_{x,y,z}(x_{0},y_{0},z_{0}|a,e,i)|dxdydz$ $\displaystyle=\pi\tau^{4}|\nabla
P_{x,y,z}(x_{0},y_{0},z_{0}|a,e,i)|$ (58)
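The density of Eq. (A.11), with its domain $\mathbf{D}$, and the leading-order point estimate of Eq. (A.12) can be sketched as follows (Python; the function names are ours):

```python
import numpy as np

def density(x, y, z, a, e, i):
    """Spatial density of Eq. (A.11); returns 0 outside the domain D."""
    r = np.sqrt(x * x + y * y + z * z)
    r_min, r_max = a * (1.0 - e), a * (1.0 + e)
    s2 = (r * np.sin(i)) ** 2 - z * z        # r^2 sin^2(i) - z^2
    radial = (r - r_min) * (r_max - r)       # (r - r_min)(r_max - r)
    if s2 <= 0.0 or radial <= 0.0:
        return 0.0                           # (x, y, z) outside D
    return 1.0 / (2.0 * a * np.pi ** 3 * np.sqrt(s2) * np.sqrt(radial))

def encounter_probability(x0, y0, z0, a, e, i, tau):
    """Leading-order estimate of Eq. (A.12): P1 ~ (4/3) pi tau^3 * density."""
    return (4.0 / 3.0) * np.pi * tau ** 3 * density(x0, y0, z0, a, e, i)
```

This estimate is only meaningful away from the boundary of $\mathbf{D}$, where the density diverges; hence the $\varepsilon a$ validity condition stated in the text.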
Because $P_{x,y,z}$ is unbounded where
$(r^{2}\sin^{2}{i}-z^{2})(r-r_{max})(r-r_{min})=0$, Eqs. (A.12) and (A.13)
are valid only when $min\\{|r\sin{i}\pm
z|,|r-r_{max}|,|r-r_{min}|\\}\geq\varepsilon a\ (\varepsilon>0)$. When
$min\\{|r\sin{i}\pm z|,|r-r_{max}|,|r-r_{min}|\\}<\varepsilon a$, the supremum
of $P_{1}$ can be estimated in spherical coordinates,
$\displaystyle P_{1}\leq\iiint_{E}P_{x,y,z}r^{2}|\sin{\theta}|drd\theta
d\varphi=\frac{1}{2a\pi^{3}}\left(\int_{\varphi}d\varphi\right)$
$\displaystyle\cdot
Re\left(\int_{\theta}\frac{\sin{\theta}d\theta}{\sqrt{\sin^{2}{i}-\cos^{2}{\theta}}}\right)Re\left(\int_{r}\frac{rdr}{\sqrt{(r-r_{min})(r_{max}-r)}}\right)$
(59) $\displaystyle
E=\Big{\\{}|r-r_{0}|\leq\tau,|\theta-\theta_{0}|\leq\arctan{\tau/r_{0}},|\varphi-\varphi_{0}|\leq\arctan{\frac{\tau}{r_{0}\cos{\theta_{0}}}}\Big{\\}}$
(60)
This section is only a qualitative explanation. For simplicity, the following
derivations are under the condition $min\\{|r\sin{i}\pm
z|,|r-r_{max}|,|r-r_{min}|\\}\geq\varepsilon a$. When $\vec{r_{0}}$ is not
fixed, for the Earth
$\vec{r_{0}}=\vec{r_{e}}=r_{e}R_{z}(\Omega_{e})R_{x}(i_{e})R_{z}(\omega_{e}+f_{e})[1,0,0]^{T}$,
this asteroid encounters the Earth with probability $P_{2}$.
$\displaystyle
P_{2}=\iiint_{(\Omega_{e},\omega_{e},M_{e})}P(\Omega_{e},\omega_{e},M_{e}|a_{e},e_{e},i_{e})P_{1}d\Omega_{e}d\omega_{e}dM_{e}$
$\displaystyle=2\pi\iint_{(\omega_{e},M_{e})}P(\Omega_{e}=0,\omega_{e},M_{e}|a_{e},e_{e},i_{e})P_{1}d\omega_{e}dM_{e}$
(61) $\displaystyle\delta
P_{2}=2\pi\iint_{(\omega_{e},M_{e})}P(\Omega_{e}=0,\omega_{e},M_{e}|a_{e},e_{e},i_{e})\delta
P_{1}d\omega_{e}dM_{e}$ (62)
Eq. (A.16) and Eq. (A.17) use the rotational symmetry about the z axis of
$P_{x,y,z}$. For the Moon
$\vec{r_{0}}=\vec{r_{e}}+\vec{r_{m}}=r_{e}R_{z}(\Omega_{e})R_{x}(i_{e})R_{z}(\omega_{e}+f_{e})[1,0,0]^{T}+r_{m}R_{z}(\Omega_{m})R_{x}(i_{m})R_{z}(\omega_{m}+f_{m})[1,0,0]^{T}$
, this asteroid encounters the Moon with probability $P_{3}$.
$\displaystyle
P_{3}=2\pi\iint_{(\omega_{e},M_{e})}P(\Omega_{e}=0,\omega_{e},M_{e}|a_{e},e_{e},i_{e})$
$\displaystyle\iiint_{(\Omega_{m},\omega_{m},M_{m})}P(\Omega_{m},\omega_{m},M_{m}|a_{m},e_{m},i_{m})P_{1}d\omega_{e}dM_{e}d\Omega_{m}d\omega_{m}dM_{m}$
(63) $\displaystyle\delta
P_{3}=2\pi\iint_{(\omega_{e},M_{e})}P(\Omega_{e}=0,\omega_{e},M_{e}|a_{e},e_{e},i_{e})$
$\displaystyle\iiint_{(\Omega_{m},\omega_{m},M_{m})}P(\Omega_{m},\omega_{m},M_{m}|a_{m},e_{m},i_{m})\delta
P_{1}d\omega_{e}dM_{e}d\Omega_{m}d\omega_{m}dM_{m}$ (64)
The difference between $P_{2}$ and $P_{3}$ can be estimated by
$\displaystyle|P_{2}\tau_{e}^{-3}-P_{3}\tau_{m}^{-3}|\leq
2\pi\iint_{(\omega_{e},M_{e})}P(\Omega_{e}=0,\omega_{e},M_{e}|a_{e},e_{e},i_{e})\iiint_{(\Omega_{m},\omega_{m},M_{m})}P(\Omega_{m},\omega_{m},M_{m}|a_{m},e_{m},i_{m})$
$\displaystyle\times\frac{4\pi}{3}|P_{x,y,z}(x_{m}+x_{e},y_{m}+y_{e},z_{m}+z_{e}|a,e,i)-P_{x,y,z}(x_{e},y_{e},z_{e}|a,e,i)|d\omega_{e}dM_{e}d\Omega_{m}d\omega_{m}dM_{m}$
(65)
From Eq. (A.20), $P_{3}$ can be estimated by
$P_{2}\frac{\tau_{m}^{3}}{\tau_{e}^{3}}$. Since
$P_{2}\frac{\tau_{m}^{3}}{\tau_{e}^{3}}$ is independent of the lunar inclination
and obliquity, we can use the concentration of asteroids
encountering the Moon at low inclination and obliquity in place of the
concentration at high inclination and obliquity. We note that when
$(a_{e},i_{e},e_{e})=(1,0,0)$, for $87\%$ of the near-Earth orbits (the
dataset of near-Earth orbits is taken from the International Astronomical
Union’s website), the relative error between
$P_{2}\frac{\tau_{m}^{3}}{\tau_{e}^{3}}$ and $P_{3}$ calculated by Eq. (A.20)
is less than 5%.
# How the Design of YouTube Influences User Sense of Agency
Kai Lukoff University of Washington<EMAIL_ADDRESS>, Ulrik Lyngs University of
Oxford<EMAIL_ADDRESS>, Himanshu Zade University of Washington
<EMAIL_ADDRESS>, J. Vera Liao University of Washington<EMAIL_ADDRESS>, James
Choi University of Washington<EMAIL_ADDRESS>, Kaiyue Fan University of
Washington<EMAIL_ADDRESS>, Sean A. Munson University of Washington
<EMAIL_ADDRESS>and Alexis Hiniker University of Washington<EMAIL_ADDRESS>
(2021)
###### Abstract.
In the attention economy, video apps employ design mechanisms like autoplay
that exploit psychological vulnerabilities to maximize watch time.
Consequently, many people feel a lack of agency over their app use, which is
linked to negative life effects such as loss of sleep. Prior design research
has innovated external mechanisms that police multiple apps, such as lockout
timers. In this work, we shift the focus to how the internal mechanisms of an
app can support user agency, taking the popular YouTube mobile app as a test
case. From a survey of 120 U.S. users, we find that autoplay and
recommendations primarily undermine sense of agency, while search and
playlists support it. From 13 co-design sessions, we find that when users have
a specific intention for how they want to use YouTube they prefer interfaces
that support greater agency. We discuss implications for how designers can
help users reclaim a sense of agency over their media use.
digital wellbeing, sense of agency, social media, YouTube
††journalyear: 2021††copyright: acmlicensed††conference: CHI Conference on
Human Factors in Computing Systems; May 8–13, 2021; Yokohama,
Japan††booktitle: CHI Conference on Human Factors in Computing Systems (CHI
’21), May 8–13, 2021, Yokohama, Japan††price: 15.00††doi:
10.1145/3411764.3445467††isbn: 978-1-4503-8096-6/21/05††ccs: Human-centered
computing Empirical studies in HCI
## 1\. Introduction
“At Netflix, we are competing for our customers’ time, so our competitors
include Snapchat, YouTube, sleep, etc.”
- Reed Hastings, Netflix CEO (Williams, 2018, p.50)
In the attention economy, social media apps employ a variety of design
mechanisms–such as eye-catching notification icons, tempting clickbait, and
never-ending autoplay–to maximize their share of the user’s time. In this
pursuit, designers and tech industry insiders warn that many of these
mechanisms exploit psychological vulnerabilities and harm the interests of the
user (Lewis, 2017; Burr et al., 2018).
It is no accident then that social media use is often associated with a loss
of sense of agency (Baumer et al., 2018). People self-report that their desire
to consume media frequently conflicts with their plans or goals and that they
fail to resist about three-quarters of the time (Delaney and Lades, 2017). And
loss of control is a key component of many measures of problematic technology
use (Cheng et al., 2019).
In response, digital wellbeing researchers have innovated what we term
external mechanisms that help users manage or monitor their app use, such as
lockout timers (Kim et al., 2019a) and productivity dashboards (Kim et al.,
2016). While these mechanisms apply universally to many different apps, they
do not change the internal mechanisms within an app, such as autoplay, that
might lead it to be problematic in the first place.
One promising approach is to redesign these mechanisms for a greater sense of
agency, i.e., an individual’s experience of being the initiator of their
actions in the world (Synofzik et al., 2008). Low sense of agency over
technology use is associated with negative life impacts such as a loss of
social opportunities, productivity, and sleep (Caplan, 2010) that often
motivate digital wellbeing efforts to begin with. Moreover, a lack of sense of
agency itself can be understood as a driver of the dissatisfaction that people
often feel with their social media use (Marino et al., 2018).
In this work, we take the mobile app for YouTube, the most widely used social
media service in the United States (Perrin and Anderson, 2019), as a test case
to understand and redesign how internal mechanisms influence sense of agency.
The design of YouTube must balance the interests of many different
stakeholders. For example, policymakers may wish to exert control over
extremist content. Advertisers may wish to control how much time users spend
on ads. Designers may wish to control how much time users spend in the app.
Content creators may wish to control how much time users spend on their
channel. All of these stakeholders merit consideration, however, in this work
we focus specifically on users and how design influences the control they feel
over the time they spend in the mobile app.
We investigate two research questions in two studies that build upon each
other:
* •
RQ1: What existing mechanisms in the YouTube mobile app influence sense of
agency?
In a survey, we asked 120 YouTube users which mechanisms make them feel most
and least in control of how they spend their time in the YouTube mobile app.
* •
RQ2: What changes to these mechanisms might increase sense of agency?
Based on the responses to the survey, we redesigned four internal mechanisms
to change user sense of agency in the YouTube app: recommendations, playlists,
search, and autoplay. In co-design sessions, we then asked 13 YouTube users to
sketch changes of their own and evaluate our mockups. We also asked how much
control they would prefer to have in different situations.
The two contributions of this work are:
1. We identify the internal design mechanisms that influence users’ sense of
agency over how they spend time in the YouTube mobile app and how they might
be changed. While some of these mechanisms are expected (e.g., autoplay),
others are less so (e.g., playlists) and suggest promising directions for
digital wellbeing (e.g., designing to support ‘microplans’ that guide behavior
within a single session of use).
2. We distinguish when designing for a sense of agency is desirable from when it
might actually go against what users want. Participants in our co-design
sessions preferred greater control when they had a specific intention for
using the app (e.g., to cook a recipe) than when they had a non-specific
intention (e.g., to relax), in which case they wanted to let the app take
control. We propose ways for designers to navigate this mixed preference for
different levels of control at different times.
## 2. Background and Motivation
### 2.1. Designing to Undermine Sense of Agency
Design practitioners have raised concerns about dark patterns, interfaces that
are designed to manipulate a user into behavior that goes against their best
interests (Gray et al., 2018; Lukoff et al., 2021). Brignull’s original types
of dark patterns focus on financial and privacy harms to the user (Brignull
and Darlington, [n.d.]). However, given that people routinely report using
technology in ways that are a waste of their time and that they later regret
(Ko et al., 2015; Hiniker et al., 2016a; Lukoff et al., 2018; Ames, 2013),
there is a need for research to examine which design patterns prompt such
attentional harms for the user. We might term these attention capture dark
patterns, designs that manipulate the user into spending time and attention in
an app against their best interests.
Tech industry insiders, like the ex-President of Facebook, warn that social
media apps are especially likely to innovate and employ design patterns that
”consume as much of your time and conscious attention as possible” (Pandey,
2017). For social games, one such a proposed pattern is $``$playing by
appointment,$"$ wherein a player must return to play on a schedule defined by
the game, or else lose their precious resources (Zagal et al., 2013). For
social media, a common suggestion in popular self-help guides is to take back
control by turning off notifications (noa, [n.d.]; Kamenetz, 2018). However,
it is not yet established that these mechanisms are the ones that lead users
to feel a loss of control. For example, some users report that notifications
actually reduce their checking habits, since they know they will be alerted
when their desired content is ready (Oulasvirta et al., 2012).
YouTube is an important case for better understanding the design mechanisms of
attention capture. YouTube has over two billion monthly users worldwide
(YouTube, [n.d.]) and is extremely popular in the U.S., where about three-
quarters of adults report using YouTube on their smartphone, with 32% using
it several times a day, 19% about once per day, and 49% less often
(Perrin and Anderson, 2019). It is also frequently reported as a source of
distraction (Aagaard, 2015), suggesting that it is a good site for the
investigation of attention capture dark patterns. In particular, YouTube’s
algorithmic recommendations merit special consideration as they drive more
than 70% of watchtime (Solsman, 2018).
### 2.2. Designing to Support Sense of Agency
Reducing screentime in certain apps is a common measure of success in digital
wellbeing tools. The two most popular mobile operating systems, Android and
iOS, both come pre-installed with tools for the user to track and limit their
time in mobile apps. Within the YouTube app itself, there are also features to
manage time spent: ‘Time watched statistics,’ which shows how much time a user
has spent on YouTube in each of the last 7 days, and the ‘Take a break
reminder,’ which periodically prompts the user to take a rest. A strength of
addressing digital wellbeing via such screentime tools is that time spent is
easy to track and easy to understand.
However, a weakness of this approach is that reducing screentime is often a
poor proxy for what users actually want. Instead, user intentions are often
highly specific, such as wanting to reduce the time spent on targeted features
of an app (e.g., on the Facebook newsfeed, but not in Facebook groups) or in
certain contexts (e.g., when with family, but not when commuting on the bus)
(Lukoff et al., 2018; Lyngs et al., 2020; Hiniker et al., 2016a).
Within YouTube, there are two digital wellbeing features that do move beyond
time spent controls and offer more granular control. The ‘Notifications
digest’ lets a user bundle push notifications together into a single
notification each day, which may reduce the triggers that lead to non-
conscious, habitual use (Lyngs et al., 2018). ‘Autoplay toggle’ lets a user
decide to stop the next video from playing automatically; this may preserve
the natural stopping point that comes at the end of the video, a mechanism
that has been shown to help users set more deliberate boundaries around use
(Hiniker et al., 2018). While the notification digest and the autoplay toggle
clearly do more than just track and limit time, it is not immediately clear by
what measure of success they might be evaluated.
One promising alternative to the screentime paradigm is to design for sense of
agency, the focus of this paper. Sense of agency is a construct that refers to
an individual’s experience of being the initiator of their actions in the
world (Synofzik et al., 2008). Sense of agency can be broken down into
feelings of agency, that is, the in-the-moment perception of control, and
judgments of agency, that is, the post hoc, explicit attribution of an action
to the self or other (Synofzik et al., 2008). In the present paper, we focus
on the latter, judgments of agency.
Sense of agency matters for digital wellbeing in at least three ways. First,
supporting user control is a common principle in HCI design guidelines (Coyle
et al., 2012; Nielsen, 1994; Shneiderman and Plaisant, 2004). Designing for an
“internal locus of control” is one of Shneiderman and Plaisant’s Eight
Golden Rules of Interface Design, arising from the observation that users want
“the sense that they are in charge of an interface and that the interface
responds to their actions” (Shneiderman and Plaisant, 2004). Second, a low
sense of control over technology use predicts greater negative life effects,
e.g., internet use leading to missed social activities (Caplan, 2010) and
smartphone use leading to the loss of a career opportunity or significant
relationship (Jeong et al., 2016). Scales of problematic technology use
generally measure both (a) lack of control and (b) negative life impacts,
suggesting that ‘the problem’ is a combination of these two factors (Cheng et
al., 2019; Cash et al., 2012). Third, and perhaps most importantly, sense of
agency matters in its own right. Feeling in control of one’s actions is
integral to autonomy, one of the three basic human needs outlined in self-
determination theory (Ryan and Deci, 2006). More specific to technology use,
it is also central to user (dis)satisfaction with smartphones (Davis et al.,
2019; Harmon and Mazmanian, 2013) and Facebook use (Cheng et al., 2019; Marino
et al., 2018).
Prior work has investigated different ways that interfaces can support sense
of agency. First, some input modalities seem to support a greater sense of
agency than others (e.g., keyboard input versus voice commands) (Limerick et
al., 2015). Second, a system’s feedback should match a user’s predicted
feedback (Limerick et al., 2014). Third, a study of flight navigation systems
found that increasing the level of automation reduced sense of agency
(Berberian et al., 2012). These lessons might be revisited in the domain of
digital wellbeing, as how an interface modulates sense of agency may vary with
context (Limerick et al., 2014).
### 2.3. Design Mechanisms for Digital Wellbeing
The mechanisms of digital wellbeing interventions can be placed along a
spectrum (see Figure 1). (We use the term “mechanism” to describe one
component of a larger design, although some digital wellbeing designs do
consist of a single mechanism.) At one end are external mechanisms that
monitor or police apps, such as screentime statistics and lockout timers. A
hallmark of
an external mechanism is that it functions identically across multiple apps,
as in a timer that locks the user out of social media, gaming, and video apps.
However, external mechanisms do not significantly change the experience within
individual apps.
Figure 1. Mechanisms that influence how people spend their time in apps can be
placed along a spectrum, as in these examples. External mechanisms monitor or
police apps, while internal mechanisms redesign or rebuild the experience
within a problematic app. Internal mechanisms offer designers a more targeted
way of supporting user agency.
At the other end of the spectrum, internal mechanisms contribute to the
redesign or rebuild of an experience. For example, Focus Mode in Microsoft
Word redesigns the writing process by hiding all formatting options (Baab-
Muguira, 2017). Going a step further, the standalone app Flowstate not only
offers a minimal interface, but also deletes all text on the page if the user
stops writing for longer than seven seconds (Statt, 2016). Internal mechanisms
fundamentally change the experience within a problematic app, or rebuild it
into a new experience entirely.
At present, design researchers have innovated many tools on the external side
of the spectrum, that monitor and police multiple apps in the same way (Kim et
al., 2019a, b; Collins et al., 2014; Monge Roffarello and De Russis, 2019;
Okeke et al., 2018). Likewise, industry designers have built tools that apply
the same time lockout mechanism to all apps, such as the screentime tools that
come pre-installed on Android and iOS.
In contrast to external mechanisms, the space of internal mechanisms is
relatively underexplored (see (Lottridge et al., 2012; Harambam et al., 2019)
for notable exceptions), but holds particular promise for increasing user
agency in two ways. First, designers can craft more targeted interventions
with internal mechanisms than with external ones. External mechanisms, such as
locking the user out of a device, often require sacrifices that users are
reluctant to accept (Tran et al., 2019; Kim et al., 2019a). Whereas an
external mechanism might block the Facebook app after time is up, a more
internal one could reconfigure the newsfeed to show only content from close
personal friends. A redesign of internal mechanisms may be able to remove
problematic aspects from an app, while still retaining its benefits.
Second, internal mechanisms shift the focus from fighting distractions to
aligning interests. External mechanisms often respond to the temptations of
problematic apps with microboundaries (Cox et al., 2016) or restraints on
interactions (Park et al., 2018). However, this sets up an arms race in which
the designers of digital wellbeing tools are always in a defensive position.
An alternative is for designers to reenvision the internal mechanisms that
lead to compulsive use in the first place (Tran et al., 2019). Looking at the
mechanisms inside of specific apps may encourage designers to not just block
existing mechanisms but to innovate better ones, such as Flowstate’s seven
seconds rule for writing. This paper presents an examination of how such internal
mechanisms can be redesigned to support sense of agency.
## 3. Study 1: Survey of 120 YouTube users
Study 1 examines how existing mechanisms in the YouTube mobile app support or
undermine sense of agency (RQ1). We decided to start by investigating users’
experiences in the current app before proceeding to design and evaluate
potential changes in Study 2 (RQ2). Both studies were approved by the
University of Washington’s Institutional Review Board.
### 3.1. Participants
#### 3.1.1. Recruitment.
To obtain a general sample of users of the YouTube mobile app, we recruited
from Amazon Mechanical Turk workers in the United States. Participants were
invited to “Help us understand how people spend their time on the YouTube
mobile app.” They were required to meet four inclusion criteria:
1. A task approval rating greater than 98% for their prior work on Mechanical
Turk, indicating a history of high-quality responses.
2. Own a smartphone. Three members of our research team tested the YouTube mobile
app on both Android and iPhone and found that the app has nearly identical
features and only minor stylistic differences, so we accepted users of both
types of devices as participants (80 Android, 40 iPhone users).
3. Spend a minimum of 3 hours on YouTube in the past week (across all devices),
according to their time watched statistics in the YouTube app. In the survey,
participants saw instructions with screenshots that showed where to find this
statistic in the app, confirmed that they had found it, and then entered it
into the survey. To see time watched statistics, users must be signed into the
app.
4. Of the time they spend on YouTube, 20% or more is on their smartphone
(self-estimated).
#### 3.1.2. Demographics.
A total of 120 participants met the inclusion criteria and completed the
survey (see demographics in Table 1). We excluded responses from an additional
7 participants who started but did not complete the survey. We oversampled
men, Asians, and young people relative to the 2019 estimates of the United
States Census Bureau (United States Census Bureau, [n.d.]). Other participant
samples may use the YouTube mobile app differently, e.g., users in emerging
countries for whom a smartphone is often their only device for watching videos
(Silver et al., 2019). Further research is required to determine whether our
results apply to other populations.
| Gender identity | Man (63%), Woman (36%), Non-binary (0%), Prefer not to say (1%) |
|---|---|
| Age range | 18-24 (8%), 25-34 (41%), 35-44 (40%), 45-54 (11%), 55+ (1%) |
| Education | High school (22%), Associate degree (22%), Bachelor’s degree (46%), Advanced degree (11%) |
| Household income (US) | <25K (14%), 25-50K (23%), 50-75K (30%), 75-125K (20%), >125K (11%), prefer not to say (2%) |
| Race (choose one or more) | White (69%), Asian (17%), Black (9%), Hispanic/Latino (4%), Native American (2%) |
Table 1. Demographics of the 120 survey participants
#### 3.1.3. YouTube use.
Participants spent a median of 101 minutes per day (interquartile range:
57-156) on YouTube across all devices in the week prior to the survey. Of this
time, participants estimated they spent a median of 50% (interquartile
range: 30-75%) in the mobile app. For comparison, the YouTube press page
states that mobile accounts for over 70% of watchtime (YouTube, [n.d.]).
Upon multiplying these two responses together for each participant, we found
that participants spent an average of 70 minutes per day in the YouTube mobile
app. This is similar to the average for all YouTube users: in 2017, YouTube
shared that signed-in users spend an average of more than 60 minutes per day
in the mobile app (Matney, 2017). We neglected to ask whether participants
were using the paid YouTube premium service, which removes ads and can play
videos offline and in the background; however, Google reports that only 1%
of YouTube’s monthly visitors subscribe to this service (Spangler, [n.d.]).
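The per-participant estimate described above (minutes per day in the mobile app, obtained by multiplying total watch time by the self-estimated mobile share) can be sketched as follows. The three response rows below are hypothetical illustrations, not the study's raw data:

```python
from statistics import median

# Hypothetical survey responses (NOT the study's raw data):
# (minutes/day on YouTube across all devices, self-estimated mobile share).
responses = [
    (101, 0.50),
    (57, 0.75),
    (156, 0.30),
]

# Minutes/day in the mobile app = total minutes x mobile share, per participant.
mobile_minutes = [total * share for total, share in responses]

print([round(m, 1) for m in mobile_minutes])                # per-participant estimates
print(round(median(mobile_minutes), 1))                     # median across participants
print(round(sum(mobile_minutes) / len(mobile_minutes), 1))  # mean across participants
```

The same multiply-then-aggregate computation, applied to all 120 participants' actual responses, yields the 70-minute average reported above.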
### 3.2. Procedure
Participants answered questions in an online survey. The initial questions
asked about our four inclusion criteria. Eligible participants continued on to
background questions about their demographics and YouTube use. The complete
survey wording, along with all of the other appendices for this study can be
found at: https://osf.io/w3hmd
To investigate RQ1, one question table asked about things that made
participants feel most in control of how they spend their time on YouTube (See
Table 2). A second question table asked about things that made them feel less
in control. The order of these two question tables was randomized. In terms of
wording, we chose to ask about feeling “in control,” as this is how sense of
agency has been measured in previous studies of sense of agency in HCI (e.g.,
(Metcalfe and Greene, 2007)) and on a self-report scale (Tapal et al., 2017).
We used the informal term “things” because, in piloting the survey, we
found that testers were unsure about whether certain things (e.g.,
recommendations and ads) counted as “mechanisms” of the app and we did not
want to provide examples that would bias responses.
was required to submit 6 responses for things that influenced their sense of
agency on YouTube (3 for most in control, 3 for least in control).
|  | Thing Question: What are 3 things about the mobile app that lead you to feel most in control over how you spend your time on YouTube? | Explain Question: How does this thing make you feel more in control of how you spend your time on YouTube? |
|---|---|---|
| Thing 1 | “I am able to quickly access my subscribed channels.” | “I don’t spend uncontrolled amounts of time browsing through videos that may or may not be related to what I want to watch.” |
| Thing 2 | “I am able to get notifications of certain channels or videos getting posted.” | “I will know exactly when a new video goes up that I may be interested in watching. This way I am not randomly checking for uploads and spending extra time searching and browsing.” |
| Thing 3 | “Screen/watch time.” | “I can follow trends and tell when I am spending more time than usual on the app.” |
Table 2. The wording and format of the “more in control” question in the
survey. The example responses here come from a single study participant. All
participants also completed a second version of this question table, with the
text modified from “most” to “least” in the Thing Question and from “more” to
“less” in the Explain Question.
Participants were compensated $6.00 for answering all questions, an amount
that exceeds the U.S. minimum wage ($7.25 per hour). The survey took a
median of 21 minutes to complete (interquartile range: 15-29).
### 3.3. Coding reliability thematic analysis
We conducted a coding reliability thematic analysis (Boyatzis, 1998; Braun et
al., 2018), in which we first established reliable codes for design mechanisms
and then used them to generate themes that captured shared meanings. We
started by iteratively coding the 720 responses (6 per participant). Each
$``$thing$"$ was analyzed as a single response, combining answers to the Thing
Question and the Explain Question (i.e., one row in Table 2). In our first
pass, two researchers individually reviewed all responses and met to develop
initial codes. At this stage, we eliminated 112 responses without any
substantive content, e.g., “I can’t think of anything else.” Of the 112
responses without substance, 55 came from “less in control” and 57 from
“more.”
We further limited coding to responses that specified a mechanism within the
interface of the YouTube mobile app, i.e., something the app’s designers could
directly change. This included responses such as, “Recommended videos -
Being shown recommended videos is like a moth to a light for me,” which was
coded as ‘recommendations’. It excluded responses about situational factors
that are largely outside of the control of the designer such as, “I make my
own decisions - I am a conscious person who can make decisions on what I
do.” This eliminated 141 more responses (59 from “less in control” and 82
from “more in control”). Interestingly, “more in control” included 28
responses that we coded as willpower, e.g., “I make my own decisions,”
with only 1 such response for “less”. This suggests a potential self-serving
bias (Forsyth, 2008) wherein in-control behavior is attributed to one’s own
willpower whereas out-of-control behavior is attributed to external factors.
The other responses that we removed were about characteristics of mobile
phones (e.g., “The app is easy to access and tempt me on my phone…”) and
usability issues (e.g., “it crashes on me every other day or so” and “it
consumes a lot of battery life”) that are not specific to the interface of the
YouTube mobile app. After excluding these responses, we continued with coding
the 467 responses that referenced a specific design mechanism.
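The filtering steps above can be tallied in a few lines; all counts come directly from the text:

```python
# Sanity-check of the response-filtering counts reported in the text.
total = 120 * 6          # 120 participants x 6 responses each
no_substance = 55 + 57   # removed: no substantive content ("less" + "more")
not_interface = 59 + 82  # removed: not a specific interface mechanism
coded = total - no_substance - not_interface
print(total, no_substance, not_interface, coded)  # 720 112 141 467
```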
In our second pass, we applied the initial codes to 120 randomly selected
responses and met to discuss. Since one mechanism (recommendations) came up
more often than all others, we developed three subcodes for how
recommendations affected participant experiences on YouTube. After merging
similar codes, our codebook consisted of 21 design mechanisms, such as
autoplay, playlists, and multiple device sync. In our third pass, we each
independently coded the same 50 randomly selected responses. Interrater
reliability was assessed using Cohen’s kappa, with κ = 0.73 indicating
substantial agreement (Landis and Koch, 1977). In our fourth pass, we each
coded half of the remaining responses, discussed the final counts, and
selected several representative quotes for each code. The first author then
wrote up a draft of the coding results and reviewed it together with the other
authors. We mapped codes (design mechanisms) to potential themes, generating
three higher-level themes that structured our final writeup. In our analysis
and writeup, we noted cases where responses for an individual code were split
with regards to a theme, e.g., ‘notifications’ sometimes supported and
sometimes undermined ‘planning ahead’.
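The interrater agreement statistic used above, Cohen's kappa, compares observed agreement between two coders against the agreement expected by chance. A minimal sketch follows; the example code assignments are hypothetical, not our actual coding data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters applying categorical codes.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the chance agreement implied by each rater's marginal code frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[code] * counts_b.get(code, 0) for code in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical example: two raters coding six responses with mechanism codes.
a = ["autoplay", "ads", "playlists", "search", "ads", "autoplay"]
b = ["autoplay", "ads", "playlists", "ads", "ads", "autoplay"]
print(round(cohens_kappa(a, b), 2))  # 0.76
```

Values above roughly 0.6 are conventionally read as substantial agreement (Landis and Koch, 1977), which is the interpretation applied to our κ = 0.73.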
### 3.4. Results and Analysis
#### 3.4.1. Design Mechanisms.
467 responses referenced a specific design mechanism (246 for less in control,
221 for more in control). Nine mechanisms were described as influencing sense
of agency 15 or more times and are the focus of our analysis; together these
nine covered 392 of the 467 responses (84%) that referenced a design
mechanism. Mechanisms mentioned fewer than 15 times included content
moderation (12 responses), playing videos in the background (12), syncing
across multiple devices (9), comments (9), ratings (8), and YouTube’s ‘Take a
break reminders’ (5). The 6 remaining mechanisms were mentioned fewer than 5
times each. Figure 2 provides a glanceable view of how
many times each of these nine mechanisms was mentioned as leading to more or
less control. Table 3 shows the same data with a description and example
response for each mechanism. Appendix I contains annotated screenshots that
show the exact implementation of these nine mechanisms in the YouTube mobile
app as they appeared when participants provided their feedback.
In summary, recommendations were the most frequently mentioned mechanism,
accounting for 27% of all responses. Recommendations, ads, and autoplay
primarily made respondents feel less in control. Playlists, search,
subscriptions, play controls, and watch history & stats primarily made
respondents feel more in control. Notifications were divided with about half
of responses in each direction.
Figure 2. This diverging bar chart shows how many times these nine design
mechanisms led participants to feel more control or less control.
Recommendations, ads, and autoplay primarily made respondents feel less in
control. Playlists, search, subscriptions, play controls, and watch history
& stats primarily made respondents feel more in control. Notifications were
sometimes mentioned as leading to more control and sometimes to less.
| Design Mechanism | Description | Count of responses | Less in control (% of responses) | Representative quote(s) (2 quotes if minority opinion on direction of control >20% of responses) |
|---|---|---|---|---|
| Recommendations… (see 3 subcodes below) | Recommended videos on the home, explore, & video player screens. | 128 | 77% | See subcodes in the 3 rows below. |
| / Irrelevant recommendations | Repetitive, dull, or generic recommendations that the user is not interested in. | 42 (of 128) | 100% | “The related videos are sometimes videos I’ve seen before, over and over.” |
| / Relevant recommendations | Engaging or catchy recommendations that the user is interested in. | 45 (of 128) | 53% | “YouTube has very good algorithms that know what I like, when I want it.” —VS.— “I have a hard time not looking at the suggested videos that the algorithm picks for me… I almost always justify watching just one more video.” |
| / Customization settings | Settings to customize location, quantity, or content of recommendations. | 41 (of 128) | 81% | “Not having control over the trending list. I feel like I’m force-fed content.” |
| Ads | Ads that appear before, during, and after videos in the player. | 55 | 98% | “I feel as if I am forced to watch ads, which can suck up a lot of time.” |
| Playlists (includes Watch Later) | Creating, saving, and playing a list of videos. Watch Later is a default playlist for all users. Playlists autoplay all videos on the list. | 39 | 0% | “I can create playlists or queue videos in advance to limit what I watch to a specific list instead of endlessly searching around for what I want.” |
| Search | Searching for videos. | 36 | 33% | “Very efficient and relevant searches.” —VS.— “Countless videos have nothing to do with my latest search request.” |
| Subscriptions | Follow specific video creators. | 35 | 0% | “I can choose the content creators I want to follow so that I can limit my time to specific creators I enjoy the most.” |
| Autoplay | Automatically plays a new video after the current one. Can be toggled on/off. | 32 | 87% | “I feel like I have little control whenever YouTube takes it upon itself to just play whatever it feels like playing.” |
| Watch history & stats | A chronological record of videos watched and time watched stats in YouTube. | 28 | 7% | “I am able to view EVERYTHING I do in the app. I can keep an eye if I need to change behavior, what type of videos I watch, everything.” |
| Play controls | Controls to play/pause, seek forward/back, etc. | 24 | 12% | “I can start, pause and stop content streaming easily, at any time.” |
| Notifications | System and in-app alerts with new subscription content, recommendations, etc. | 15 | 53% | “If I especially like a channel I can know about everything they upload as soon as they do.” —VS.— “Notifications draw me to YouTube and create my schedule for 20-30 minutes. This creates an addiction.” |
Table 3. This table shows nine design mechanisms that were mentioned 15 or
more times in response to the survey question: “What are 3 things about the
mobile app that lead you to feel [most | least] in control over how you
spend your time on YouTube?” Design mechanisms are shown in the order of
frequency of mention. The most frequently mentioned mechanism,
recommendations, is shown with 3 subcodes. The representative quote(s) column
shows one typical response for each design mechanism; both a “more in
control” and a “less in control” quote are shown if the minority
opinion on the direction of control was more than 20% of total responses.
How Existing Mechanisms Influence Sense of Agency
The design mechanisms we identified in the YouTube mobile app informed three
higher-level themes. First, users experience actions in the app along a
spectrum of consent. Second, mechanisms for planning ahead help them feel more
in control. Third, the accuracy of YouTube algorithms has mixed consequences
for control. The writeup for each theme draws upon examples from our coding of
the design mechanisms.
#### 3.4.2. The spectrum of consent.
Participants’ sense of agency depended on whether it felt like they had
‘agreed’ to the actions of the app. Participants gave their active consent
through actions such as tapping on a play control: “I’m watching a video
that’s taken too long of my time, so I can just pause it and come back to it.
I feel control there.” Participants could also issue ongoing consent for the
app, e.g., by subscribing to a creator: “My subscriptions show me what I
asked to see and I can choose what and when I wish to watch each video.” At
the other end of the spectrum were mechanisms like autoplay that acted without
consent: “It feels weird for the app to start acting before I’ve told it to
do anything.”
Non-consent was often felt as a result of (perceived) deception. For example,
users disliked ads, but also expected them and indicated their reluctant
consent. However, they seemed more upset when the app was unpredictable or
violated expectations, as in: “I understand the reason for the ads, but I
don’t get why some are 5 seconds and you can skip them while others are 60
seconds and you can’t.” Other cases where participants felt manipulated
included when a “small accidental click” triggered an ad, when video
creators were “not upfront” about the products they promoted, and when
autoplay “automatically” turned on. Participants disliked when the app
openly acted against their interests, but expressed stronger sentiments when
they felt that the app also misled them about it.
#### 3.4.3. Planning ahead.
Participants felt more in control when they planned their consumption in
advance. Playlists helped participants plan how much to watch (e.g., “I can
create playlists or queue videos in advance to limit what I watch to a
specific list instead of endlessly searching around for what I want”).
Participants described the end of a playlist as a “good place to stop”,
in contrast to browsing recommendations, which they described as
“endless.” Watch Later, a default playlist on YouTube, also let
participants control when and where to watch. A guitar teacher described how
Watch Later empowered them to save videos on-the-go and watch them later in
their music studio. Watch history & stats also supported planning by
providing an awareness that participants could use to adjust their behavior:
“I can look at my watch history and see how many videos I have watched
today. That puts it into perspective if I should spend time doing something
else if I am spending too much time on YouTube.” Several participants
described using this awareness in conjunction with the Watch Later playlist:
“I am able to put a video in my Watch Later playlist if I think I have
spent too much time on YouTube for the day.”
By contrast, sense of agency was diminished by mechanisms that prompted and
pressured participants with suggestions that were hard to decline. Autoplay
and recommendations frequently led to this, as in $``$I often spend more time
than I meant to because there is a good related video that seems worth
watching so ya know, ‘Just one more’ which becomes a couple hours.$"$ The
Watch Later playlist again served as a safety valve in ‘just one more’
situations: $``$Watch Later means I don’t feel pressured into watching a
recommended video from autoplay right when I see it.$"$
Notifications sometimes supported planning and sometimes not. For example,
they put participants on the spot: $``$Based on my viewing history, the app
will push me new content and I may not have the fortitude to not click to
view.$"$ However, notifications also helped participants plan when to check
the app by reducing their fear of missing out: $``$With notifications I will
know exactly when a new video goes up that I may be interested in watching.
This way I am not randomly checking for uploads and spending extra time
searching and browsing.$"$ This may explain why notifications were split
between $``$more in control$"$ and $``$less in control$"$ responses (47$\%$
vs. 53$\%$).
#### 3.4.4. The accuracy of algorithms has mixed consequences for control.
Irrelevant recommendations, i.e., those that were repetitive or unrelated to
personal interests, universally undermined sense of agency: $``$Seeing
’recommended’ videos that have nothing to do with my viewing history leads to
unwanted scrolling and possibly unwanted content.$"$ Similarly, irrelevant
search results undermined control because they forced participants to keep
scrolling for what they wanted, e.g., $``$I use specific search terms, but I
still have to scan past a lot of vaguely or even unrelated stuff to find what
I want.$"$
For relevant recommendations, participants’ control responses were divided
nearly 50-50. In contrast to irrelevant recommendations, relevant ones
supported control with their personalization (e.g., $``$It has some very good
algorithms that know what I like, when I want it$"$ ) or with suggestions that
reached just beyond the users’ comfort zone (e.g., $``$I can expand my tastes
based on my own preference$"$ ). However, relevant recommendations sometimes
undermined control by being too engaging, i.e., recommending videos that users
watch, but that are unplanned and later regretted. This was captured in
participants’ use of terms like the $``$wormhole$"$ (two mentions) and
$``$rabbit hole$"$ (five mentions), as in $``$The way that videos get promoted
to my home page and have appealing thumbnails–I end up clicking on them and
wonder how I got to this place and why I am watching this video. I ended up
going down the rabbit hole and watching the video and then others like it and
so on.$"$ Some of these recommendations were described as $``$clickbait$"$
(six mentions) that misled with content that did not meet expectations and
sometimes also violated participants’ consent (e.g., by showing
$``$inappropriate content$"$). More often though, participants seemed to like
the content, but felt that it was too much (e.g., $``$At times there is no
escape when I become interested in documentary after documentary$"$) or not
the right time (e.g., $``$Some of the church videos are addicting and I keep
watching them at night$"$).
Given their mixed experiences with recommendations, participants expressed
frustration with the customization settings at their disposal (or lack
thereof). Participants lacked the ability to customize the location, quantity,
and content of recommendations. Having recommendations on almost every screen
led to a loss of control: $``$It seems like there are video recommendations
everywhere. They are obviously in my home feed; they are in the explore menu;
and they are under and beside and within other videos. It often takes me down
the rabbit hole.$"$ Up next recommendations that appear below the current
video (and autoplay after it finishes) were specifically mentioned seven
times. The $``$endless$"$ quantity of recommendations also made it hard to
stop watching. Finally, participants also wanted to control what content is
recommended, particularly when recommended content did not match their
aspirations: $``$There are cases in a particular day where I just want to
watch cat videos. But I do not want my entire screen to recommend cat
videos.$"$ Participants wanted to customize the content of recommendations
more directly than just by generating a watch history: $``$The only thing you
can do to control the algorithm is to watch videos. But you get no say how
it’ll recommend new ones.$"$
A minority of responses described recommendation settings that do support
sense of agency. For instance, three participants appreciated how the settings
menu (⋮)
allows them to mark $``$Not interested$"$ on specific videos, e.g., $``$When
I’m tempted but know a video is not educational I can hide it.$"$ In this
case, the user is in fact interested in the sense that the video
recommendation arouses their curiosity and attention. However, they must
paradoxically mark it as “Not interested” in order to tell the interface to
stop showing videos of this kind because they conflict with their longer-term
goals. YouTube’s settings also allow participants to delete videos from their
watch history–which stops them from being used in personalized
recommendations–but only one participant mentioned this feature. The vast
majority of participants seemed either unaware of YouTube’s existing
customization settings for recommendations or found them inadequate.
## 4. Study 2: Co-design with YouTube users
Study 1 identified existing mechanisms in the YouTube mobile app that
influence user sense of agency (RQ1). In Study 2, we sought to understand how
changes to these design mechanisms might influence sense of agency (RQ2). We
conducted 13 study sessions with individual YouTube users that included two
co-design activities: 1) sketching participant-generated changes; and 2)
evaluating researcher-generated changes that were based on the results of
Study 1. Consistent with a research-through-design approach (Zimmerman and
Forlizzi, 2014), the aim of these activities was not to converge upon a single
solution but rather to generate knowledge, i.e., what to design for a sense of
agency.
### 4.1. Preparatory Design Work
In preparation for the evaluation co-design activity, five of the authors (KL,
HZ, JVL, JC, KF), all advanced-degree students in a technology design program,
created mockups of changes to mechanisms in the YouTube mobile app that we
expected to impact sense of agency. To generate a wide range of possible
changes, we started with a design brainstorm that generated 67 different
ideas, e.g., creating a ‘How-to mode’ for viewing only educational content,
reducing video playback speed to 50$\%$ after a daily time limit is exceeded,
or making Watch Later the default action for recommendations. Ideas were
reviewed as a group, and favorites could be ‘claimed’ by one author, who
further refined them. This generated a total of 33 different sketches. We
presented,
discussed, and then scored these sketches according to three criteria:
expected impact on sense of agency (based on the results of Study 1), novelty
relative to existing digital wellbeing tools, and feasibility of
implementation. (Feasibility was a criterion to focus on designs that a
third-party mobile developer could build using public APIs, an intention we
have for our future work.) Expected effect on sense of agency was weighted
twice in our scoring.
We created mockups for the seven sketches with the highest average scores. We
wanted participants to evaluate a variety of potential changes to each
mechanism, so we created three versions of each mockup: low, medium, and high-
control. For example, the recommendations mechanism in the YouTube app was
redesigned to change the number of recommendations shown on the homepage, with
the low-control version showing unlimited recommendations, the medium-control
version showing only three recommendations with a button to $``$show more,$"$
and the high-control version not showing any recommendations (see images in
Table 4). To focus on RQ2, our results and analysis here address only the four
mockups (see Table 5) that directly change one of the existing internal
mechanisms in YouTube that we identified in Study 1. The other three mockups
we tested—activity-goal setting, time-goal setting, and a timer—are more
external mechanisms that might apply equally well to other apps. However, we
decided to focus this paper on the unique potential of internal mechanisms.
We note that although our research focuses at the level of ‘design
mechanisms,’ the details of these designs matter. For instance, although the
recommendations in the current version of YouTube seemed to reduce sense of
agency in most of the Study 1 responses, a different implementation of
‘recommendations’ might produce different effects. This is true of our mockups
too: in our search redesign we showed a task-oriented example query ($``$How
to cook a turkey$"$), whereas a leisure-oriented example query (e.g.,
$``$Funny cat videos$"$) could have led to different results. We include
descriptions of the most relevant details of each of these design mechanisms
in the body of the paper, screenshots of their current implementation in the
YouTube mobile app in Appendix I, and images of all of our mockups in Appendix
II.
| Low-control version | Medium-control version | High-control version |
|---|---|---|
| Unlimited recommendations | Click-to-show-more recommendations | No recommendations |
Table 4. Mockups of the redesign of the recommendations mechanism. We created
three versions of the mockup that we expected to offer different levels of
control. These 3 versions of each redesign were evaluated by participants in
the co-design evaluation activity.
| Redesigned mechanism | Dimension of change | Low-control version | Medium-control version | High-control version | Related experience for users (as described by Study 1 participants) | Comparison to current version of YouTube mobile app |
|---|---|---|---|---|---|---|
| Recommendations | Number of video recommendations on home screen | Unlimited recommendations | Shows 3 recommendations, then a click-to-show-more button | No recommendations | Endless recommendations often undermine sense of agency | Similar to low-control version |
| Playlists | Prominence of button to save a video to the Watch Later playlist | No Watch Later button | Small Watch Later button | Large Watch Later button | Watch Later playlist lets users plan ahead, reduces pressure to watch now | Similar to medium-control version |
| Search | The degree to which search prioritizes fun vs. relevant results (see Appendix II for more details) | Prioritize “fun” results (intended to be too engaging) | User can toggle between “fun” & “relevant” results | Prioritize “relevant” results | Recommendations and search results that are too engaging sometimes undermine sense of agency | Similar to medium-control version |
| Autoplay | The degree of user consent required to play the next video recommendation | Autoplay the next recommendation | Show the next recommendation | No next recommendation | Autoplaying videos without consent undermines sense of agency | Similar to low-control version |
Table 5. This table describes our redesigns of 4 existing mechanisms in the
YouTube app. We created three versions of each mockup that we expected to
provide different levels of control to the user: low, medium, and high.
Appendix II describes more details about the search redesign and the three
additional mockups we created, which we do not report on here.
### 4.2. Participants
#### 4.2.1. Recruitment.
We recruited YouTube users in Seattle via email lists and social media
channels to $``$Help us understand how people spend their time in the YouTube
mobile app.$"$ We did not initially set inclusion criteria for participation
(beyond adult YouTube users) as we viewed our co-design activities as
exploratory. However, after our initial sessions proved insightful for our
team of design researchers, we sent a follow-up survey to participants that
asked about demographics and YouTube use. Participants were compensated with a
$\$$30 voucher.
#### 4.2.2. Demographics and YouTube use.
Thirteen YouTube users (7 women, 6 men) participated in our sessions. The median age
was 29 (range: 18-36). Participants reported using YouTube a median of 52
minutes per day (range: 27-70), again based on checking their time watched
statistics in the YouTube mobile app. For reference, this amount of time is
slightly lower than the average of signed-in YouTube users (60 minutes)
(Matney, 2017) and considerably lower than the median of participants in Study
1 (101 minutes).
### 4.3. Procedures
Each session included an initial think-aloud demonstration of the
participant’s current YouTube use, followed by the sketching and evaluation
co-design activities. The median length of a session was 73 minutes (range:
57-105 minutes).
#### 4.3.1. Think-aloud Demonstrations with YouTube App.
In a modified version of a think-aloud-protocol (Jääskeläinen, 2010), the
participant opened YouTube on their smartphone and talked us through a typical
engagement cycle (how they start and stop use) (Tran et al., 2019). Next, they
showed and talked us through the mechanisms that made them feel most and least
in control of how they spend their time on YouTube.
#### 4.3.2. Co-design Activity 1: Sketching.
To elicit participant-generated ideas, we asked participants to sketch over
paper mockups of three key screens: home, search, and video player (see Figure
3). Each screen represented a minimal version of a video app without
recommendations, rather than a direct copy of the current YouTube interface.
We chose this minimal version to encourage participants to generate new ideas,
rather than to evaluate the existing interface (which we did in Study 1).
Participants were handed a pen and a copy of one mockup (e.g., the home
screen) and were asked, $``$What would you change on this page to feel more in
control of how you spend your time on YouTube?$"$ They then received a second
copy of the same mockup and were asked to sketch changes that would make them
feel $``$less in control.$"$ Each participant created a total of six sketches
(two versions of three different screens). As they sketched, participants were
asked to explain their thinking (Schrage, 1996).
#### 4.3.3. Co-design Activity 2: Evaluation.
To receive feedback on our changes from YouTube users, we asked participants
to evaluate our mockups of the redesigned mechanisms in the YouTube mobile app
(see Table 5). For each mockup, the three versions were placed in front of
the participant in a random order; the participant reviewed them for about
one minute and then asked any questions they had. We did not tell participants
which one was the low, medium, or high-control version. The participant was
then asked to rank the three versions in order from the one they would least
prefer to use to the one they would most prefer, and explain why.
Figure 3. Sketches of the home screen of the YouTube mobile app. The
participant (P11) explained that in the $``$more in control$"$ version,
recommendations are based on topics chosen by the user. In the $``$less in
control$"$ version, the user needs to scroll through recommendations to see
the search bar at the bottom of the screen.
### 4.4. Codebook Thematic Analysis
We used codebook thematic analysis to analyze the data (Braun et al., 2018;
Braun and Clarke, 2006), wherein we generated themes that are more
interpretive than just a summary of all of the data, but less interpretive
than in reflexive thematic analysis where the researcher’s subject position
plays a central role in the analysis (Braun and Clarke, 2019). After each co-
design session, the researcher leading the session completed a debriefing form
with their top three takeaways and shared participant sketches with the rest
of the research team. We held weekly meetings to discuss these data and
discuss initial ideas. After finishing data collection, all co-design sessions
were transcribed. To further familiarize ourselves with the data, three of the
authors read the transcripts and again reviewed the sketches. We next
independently coded the data using a web app for collaborative coding
(Sillito, [n.d.]) to generate our set of initial codes. After reviewing this
first pass of coding together, we refined and consolidated codes and generated
initial themes. Our final set of codes included: user freedom of choice,
situational features affecting control, design mechanisms for control, setting
clear expectations for the user, and triggers to stop, each of which had
further subcodes. We applied our codes to all transcripts and sketches and
reviewed the results to create our final themes. For each theme, we extracted
vivid exhibits (Bannon et al., 1994), which we used to write analytical memos.
### 4.5. Results and Analysis
We generated two themes about how participants expected changes to the design
mechanisms of YouTube would affect their sense of agency. First, participants
wanted design mechanisms that provided more control when they had an intention
in mind as opposed to when they just wanted to explore. Second, participants
envisioned and wanted mechanisms for active and informed choices to increase
control.
#### 4.5.1. Specific intentions call for more control.
When individual participants reviewed the different versions of their own
sketches and our mockups, they were often conflicted about how much control
they preferred. It depended upon the situation. When they had a specific
intention or goal for their YouTube visit (e.g., to cook a recipe), they
wanted design mechanisms that provided greater control. When they had a non-
specific intention such as relaxing, they preferred design mechanisms that
turned control over to YouTube.
For participants, specific intentions varied from watching a video of a
favorite dance, to the latest basketball highlight, to a tutorial on solving a
Rubik’s Cube. When they had such a specific intention in mind, they wanted
greater control than YouTube currently gives them. P4 removed recommendations
from their sketch, explaining: $``$If I have a specific goal, I know what I
want, I don’t need recommendations to guide my search, I just want to be in
control of my search.$"$ P2 evaluated our redesign of the search mechanism
that emphasized results with higher entertainment value by saying, $``$I’m
probably going to click on it because it’s cute and I’m just going to waste so
much time. So it’s going to make me feel totally out of control of what I
actually wanted to come here for.$"$ In these cases, participants wanted
stronger control mechanisms so that the app would not hijack their specific
intention.
Sometimes participants held intentions with a moderate level of specificity,
in which case participants wanted to retain some control but also delegate
some to YouTube. Often these intentions were topical, as in when P11 wanted to
be able to use the app in an $``$active way$"$ to search and browse videos
about programming, but not in a $``$passive way$"$ to follow just any
recommendation. Sometimes, these intentions were temporal, such as when
working or studying, participants preferred a version of YouTube that helps
them watch a moderate number of videos without making them $``$fall down a
rabbit hole of similar related stuff$"$ (P13). To address these cases,
participants sketched both changes to internal mechanisms that were specific
to YouTube (e.g., limits on the number of recommended videos) and also more
external mechanisms that might apply across a variety of social media apps
(e.g., time reminders).
By contrast, when participants had only a non-specific intention (e.g., to
unwind or explore), they wanted YouTube to lead the way. Our redesigns of the
recommendations mechanism showed either unlimited, limited, or no video
recommendations, to which P2 responded $``$If I came here for a specific
reason, like my goal is to learn how-to do something, then I prefer this one
without recommendations. However, if I just want to watch something that gets
my mind off things, I prefer the one where I can choose to show more
recommendations.$"$ At times when participants just wanted to be entertained,
designing for greater control could actually get in the way. P13 shared,
$``$If you’re not giving me recommendations, and if you’re making me search,
then I’m not in control. Or, I’m in control, but the problem is I’m spending
more time. There’s no point.$"$
#### 4.5.2. Active and informed choices.
The Study 1 theme $``$Spectrum of consent$"$ addressed whether the user had
‘agreed’ to an action taken by the app (e.g., autoplaying the next video). To
support control, Study 2 participants envisioned more active choices, where
the user felt like they were the one to initiate the action. As a step in this
direction, P1 described a home screen that presented, $``$Six categories we
think you’re most interested in, and then you’re at least making the active
choice, ‘I want to watch some interviews right now.’$"$ In this design, the
app’s algorithm would recommend a set of personalized topics, but the user
would be the one to choose between them. A still more active choice was when
the user was the one to generate the set of choices in the first place, as in
P7’s sketch: $``$There aren’t a billion recommendations on the home screen.
It’s just a search bar. You go straight to what you want to watch, you watch
it, and then you’re done.$"$ Participants described search as a paragon of
user-led choice, and many foregrounded the search option in their sketches to
increase control and hid it in ones to decrease control (see Figure 3).
Many sketches also supported more informed choices. These designs made it
easier for users to know what to expect from a video by surfacing metadata
like view count, user ratings, and descriptions. Five participants proposed
novel metadata, such as an ‘activity time’ filter that would sort how-to
videos by the time it takes to perform the activity they teach, e.g., cook a
recipe (P12). Another suggested expert ratings as an indicator of quality
(P5). Conversely, in sketches to undermine control, it was common to remove
video metadata. P12 likened this to the experience at Costco, a supermarket
chain that deliberately shows no signs in its stores (NPR, 2015): $``$If you
want to go find cookies, they won’t actually show you where the cookies are so
you literally have to go through every single aisle. You have to go find
it.$"$
More choice alone did not lead to more control. In sketches of designs to
undermine control, participants covered every corner of the home screen with
video recommendations that scrolled infinitely (P11) and in every direction
(P5). P13 described, $``$If they didn’t have [recommended videos], it would be
a lot harder to follow these different rabbit holes. I imagine that I would
have to intentionally seek out another video, so I wouldn’t feel sucked in as
much.$"$ Recommendations prompted a passive form of choice, in which users
reacted to the app’s infinite scroll of suggestions, rather than making active
choices on their own terms.
## 5. Overall Discussion
Together, our two studies identify design mechanisms that influence sense of
agency in the YouTube mobile app and how they might be changed to increase it.
In Study 1, participants reported that, in the current app, recommendations,
ads, and autoplay mostly led them to feel less in control, whereas playlists,
search, subscriptions, play controls, and watch history & stats mostly made
them feel more in control. Across all existing mechanisms, participants felt
less in control when the app took actions of its own without their consent
(e.g., autoplaying a new video recommendation). Recommendations were of
special concern and participants expressed frustration at their inability to
customize their location, quantity, and content. In contrast, by helping
participants plan ahead for even just a short while, existing mechanisms like
playlists and watch stats made participants feel more in control.
When participants envisioned and evaluated changes in Study 2, they wanted
more opportunities to make active choices, rather than respond to a set of
choices proposed by the app. This preference was stronger when they had a
specific intention in mind (e.g., to watch a certain video or topic), whereas
when their intention was more general (e.g., to pass the time) they favored
turning control over to YouTube.
We expect that our findings on how design mechanisms influence sense of agency
on YouTube are most likely to generalize to other social media and media apps
where users (a) report feeling out of control at times (e.g., Facebook (Marino
et al., 2018)); and (b) use the app for both specific and non-specific
intentions (e.g., Pinterest (Cheng et al., 2019)). We first discuss our
findings mostly with respect to our test case of YouTube, before considering
implications for digital wellbeing more broadly.
### 5.1. Rethinking What ‘Relevance’ Means for Recommendations
Recommendations were mentioned by participants as undermining sense of agency
far more times than any other design mechanism in the YouTube mobile app,
suggesting that recommender systems (Resnick and Varian, 1997) should be of
central concern to digital wellbeing designers. However, they led to a reduced
sense of agency via two very different routes: irrelevance and relevance.
First, recommendations were sometimes irrelevant, showing videos that
participants were simply not interested in. However, due to rapid advances in
artificial intelligence and recommender systems like YouTube specifically
(e.g., (Covington et al., 2016)), one might expect recommendations in social
media apps to become more and more relevant in the coming years.
Second, recommendations were sometimes too ‘relevant,’ which presents a more
vexing problem from a digital wellbeing perspective. For example, participants
reported that they sometimes saw too many interesting recommendations (e.g.,
for documentaries or for church videos late at night), which made them feel a
loss of control. In this case, YouTube’s algorithm is arguably too good at a
local optimization problem: Out of millions of videos, which one is the user
most likely to watch? But it misses a more global optimization problem: Out of
many possible actions, which one does the user most want to take? In these
cases, recommendations appealed to a users’ impulse or short-term desire to
watch more videos, but conflicted with their long-term goals, creating a self-
control dilemma for the user (Lyngs et al., 2019; Duckworth et al., 2016).
Our findings call for rethinking what ‘relevance’ means for recommendations in
the context of digital wellbeing. Prior research on recommender systems has
argued that “being accurate is not enough,” as a fixation on accuracy can lead
designers to ignore important facets of user experience like serendipity
(McNee et al., 2006, p.1). For participants in our study, sense of agency was
clearly a neglected facet of user experience, as YouTube’s recommendations led
them to actions (i.e., watching more videos) they did not feel in control of.
To be clear, this does not mean that Google or others should try to create an
‘algorithm for life’ that recommends between watching another video, writing a
term paper, and going to sleep.
However, it does suggest that recommender systems could first start with the
global problem of when to show recommendations, before moving on to the local
problem of which items to recommend. For example, a decision not to show
recommendations might be informed by the time of day (e.g., 2am is too late),
screentime preferences (e.g., when the user has already exceeded their goal of
30-minutes per day on entertainment apps), or explicit user preferences (e.g.,
only show three recommendations unless I click-to-show-more). In HCI research,
sometimes the implication of a user needs assessment is not to design
technology, as a new technology might not be appropriate in the context of the
larger situation (Baumer and Silberman, 2011). Similarly, for recommender
systems, our findings suggest that sometimes the implication is not to
recommend. Prior work has addressed how a system can display the level of
confidence it has in its recommendations to the user (McNee et al., 2003), but
this should be preceded by a more fundamental question of whether or not to
show recommendations in the first place.
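The $``$when to show$"$ decision described above can be sketched as a simple
gating policy that runs before any ranking model. This is an illustrative
sketch only, not YouTube's actual logic; the function names, thresholds, and
preference fields are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UserPrefs:
    # Hypothetical user-stated preferences (not a real YouTube API).
    daily_limit_minutes: int = 30   # screentime goal for entertainment apps
    quiet_hours: tuple = (1, 6)     # no recommendations between 1am and 6am
    default_visible: int = 3        # show at most N items unless "show more"

def recommendations_to_show(hour: int, minutes_watched: int,
                            prefs: UserPrefs, show_more: bool = False) -> int:
    """Decide *whether* (and how many) recommendations to surface,
    before any ranking model decides *which* items to recommend."""
    start, end = prefs.quiet_hours
    if start <= hour < end:          # e.g., 2am is too late
        return 0
    if minutes_watched >= prefs.daily_limit_minutes:  # goal already exceeded
        return 0
    if show_more:                    # user explicitly asked for more
        return 20
    return prefs.default_visible     # limited by default

prefs = UserPrefs()
print(recommendations_to_show(hour=2, minutes_watched=10, prefs=prefs))   # 0
print(recommendations_to_show(hour=14, minutes_watched=45, prefs=prefs))  # 0
print(recommendations_to_show(hour=14, minutes_watched=10, prefs=prefs))  # 3
```

The point of the sketch is the ordering: the global question (show anything at
all?) is answered before the local question (which items?).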
Whereas both of the studies in this work elicit user preferences (“what users
say”), the dominant paradigm of recommender systems today, including YouTube,
is behaviorism: recommendations largely neglect explicit preferences and
instead rely on behavior traces (“what users do”) (Ekstrand and Willemsen,
2016). The present bias effect (O’Donoghue and Rabin, 2015) predicts that
actual behavior will favor the choice that offers immediate rewards at the
expense of long-term goals. In this way, recommender systems reinforce the
sometimes problematic behavior of the current self rather than helping people
realize their ‘aspirational self’ that reflects long-term goals (Ekstrand and
Willemsen, 2016; Lyngs et al., 2018).
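The present-bias prediction can be made concrete with the standard
quasi-hyperbolic (β-δ) discounting model from behavioral economics, in which
all future rewards are scaled by an extra factor β. The payoffs below are
illustrative numbers, not data from our study.

```python
def beta_delta_value(reward: float, delay: int,
                     beta: float = 0.5, delta: float = 1.0) -> float:
    """Quasi-hyperbolic discounted value: immediate rewards are undiscounted,
    all future rewards are scaled by beta (present bias) and delta**delay."""
    return reward if delay == 0 else beta * (delta ** delay) * reward

# Smaller-sooner option A (watch another video now) vs.
# larger-later option B (stop now, benefit tomorrow).
A, B = 3.0, 5.0

# Evaluated a day in advance, both rewards are in the future: B looks better.
assert beta_delta_value(A, delay=1) < beta_delta_value(B, delay=2)  # 1.5 < 2.5

# Evaluated in the moment, A is immediate: the choice reverses toward A.
assert beta_delta_value(A, delay=0) > beta_delta_value(B, delay=1)  # 3.0 > 2.5
```

The reversal in the last two lines is the mechanism by which behavior traces
can diverge from stated long-term preferences.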
Participants also wanted to customize the content of recommendations, e.g.,
$``$I do not want my entire screen to recommend cat videos.$"$ Designers
might address
this by making it easier for users to (a) explicitly state preferences for
topics they would like to see or not see; (b) explicitly rate recommendations
(e.g., show me more like this one); (c) edit their viewing history to
influence future recommendations (e.g., delete all cat videos); or (d) select
an algorithmic personae to curate their recommendations (e.g., $``$The
Diplomat,$"$ who brings news videos from the other side) (Harambam et al.,
2019, p.72). The current YouTube app offers limited support for these first
three features (e.g., users can select from among topics for recommendations
on the home page of the app), but participants in our study seemed mostly
either unaware of these customization settings or found them to be inadequate.
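Options (a) and (c) above can be sketched as an explicit-preference filter
that runs over a ranked candidate list before behavioral signals are applied.
The function, field names, and sample data are hypothetical illustrations,
not an actual recommender implementation.

```python
def apply_stated_preferences(candidates, blocked_topics, history):
    """Filter a ranked candidate list using explicit user preferences
    ('what users say') before behavioral ranking ('what users do')."""
    # (a) drop topics the user explicitly does not want to see
    visible = [v for v in candidates if v["topic"] not in blocked_topics]
    # (c) respect edits to watch history: deleted videos no longer count as
    #     interest signals, so demote candidates that share their topics
    deleted = {v["topic"] for v in history if v.get("deleted")}
    visible.sort(key=lambda v: v["topic"] in deleted)  # deleted topics last
    return visible

candidates = [{"title": "Cat video 1", "topic": "cats"},
              {"title": "Guitar lesson", "topic": "music"},
              {"title": "Cat video 2", "topic": "cats"}]
history = [{"title": "Old cat video", "topic": "cats", "deleted": True}]

# "I do not want my entire screen to recommend cat videos."
print(apply_stated_preferences(candidates, blocked_topics={"cats"},
                               history=history))  # keeps only the guitar lesson
```

Option (b), explicit ratings, would add another signal to the sort key; option
(d), algorithmic personae, would swap the whole filter for a curated one.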
To summarize, we encourage designers of recommender systems to think beyond
just optimizing for the item that is most likely to be clicked, watched, or
liked. This includes considering when to show recommendations in the first
place. It also means exploring how recommendations can support user
aspirations rather than just reinforce current behaviors, which requires
better measures of long-term preferences. Designers and researchers should
continue to explore how to personalize recommendations to satisfy these
broader user needs, or provide customization options that put users in
control, at least to the extent they want.
### 5.2. Designing to Support Microplanning
Behavior change researchers have long known that plans can help bridge the gap
between intentions and behavior. In this work, plans are usually crafted in
advance through careful deliberation and guide behavior for some time into the
future (Agapie, 2020). For example, a screentime tool in this mold might ask
the user to review and reflect upon their past usage data and develop a plan
for their use over the next month. Participants in our study also ‘planned’,
but they did so in a more ad hoc manner. For example, they queued videos in
advance to limit what they watched during a single session or glanced at their
Time watched statistics to know whether to watch another video or add it to
their Watch Later playlist.
These types of actions might be called ‘microplanning,’ making lightweight
plans that guide behavior for a short time, usually just a single session of
use. Our naming takes inspiration from Cox et al.’s coining of the term
‘microboundary’ to describe “a small obstacle prior to an interaction that
prevents us rushing from one context to another,” which serves as a ‘micro’
version of a commitment device that prevents the user from “acting hastily
and regretting it later” (Cox et al., 2016). ‘Microboundary’ has helped
center an important concept from behavioral economics, commitment devices that
restrict future choices to reflect long-term goals (Bryan et al., 2010;
Schelling, 1984), in the research and development of digital wellbeing tools,
e.g., (Kim et al., 2019b, a; Lyngs et al., 2019; Pinder et al., 2018).
Similarly, we hope that the concept of ‘microplans’ encourages the use of
behavior planning knowledge in the design of digital wellbeing tools. For
example, this literature finds that plans are more likely to succeed if they
specify where, when, and how a behavior will be enacted (Gollwitzer and
Sheeran, 2006). A microplan might incorporate just the where part, and be
supported by a video playlist that is tied to a specific location, e.g., song
tutorials for my guitar studio. Triggers are also a key component of effective
plans (Fogg, 2009), so in this case the playlist might be the primary
recommendation in the app anytime the user is within 50 meters of the studio.
In another example, Hiniker et al. adapted an evidence-based Plan-Do-Review
sequence (Felner et al., 1988) for an app that asked children to plan out
their video-watching, finding that it helped them transition to their next
activity with ease (Hiniker et al., 2017). In the domain of impulse buying
(Moser et al., 2019), an e-commerce site (or third-party extension) might
foreground ‘shopping list’ tools to support intentional buying.
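The location-triggered microplan described above reduces to a simple geofence check. The following is a minimal, illustrative sketch, not any platform's actual implementation: the coordinates, the 50-meter radius, and the `primary_recommendation` helper are all hypothetical, and the distance check uses the standard haversine formula.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def primary_recommendation(user_pos, microplans, default="home feed"):
    """Return the playlist tied to the first microplan whose location the
    user is currently within; otherwise fall back to the default feed."""
    for plan in microplans:
        if haversine_m(*user_pos, *plan["location"]) <= plan["radius_m"]:
            return plan["playlist"]
    return default

# Hypothetical microplan: guitar tutorials tied to the studio, 50 m radius.
plans = [{"location": (47.6062, -122.3321), "radius_m": 50,
          "playlist": "song tutorials for my guitar studio"}]
print(primary_recommendation((47.6063, -122.3321), plans))
```

In practice the trigger would come from the device's location services rather than raw coordinates, but the decision logic, a plan that binds a playlist to a place, is this small.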
### 5.3. Different Levels of Control for Ritualized and Instrumental Use
In Study 2, participants suggested ways that the YouTube mobile app might be
redesigned to increase sense of agency (e.g., by reducing the number of
recommendations it displays). However, such changes might lead to adverse
effects as there were also times when participants preferred low-control
features. Although HCI design guidelines advise supporting user sense of
agency (Nielsen, 1994; Shneiderman and Plaisant, 2004), we should not assume
that a greater sense of agency is always desirable.
Specifically, participants preferred higher-control mechanisms when they had a
specific intention in mind and lower-control ones when they had a non-specific
intention. This finding broadly aligns with two types of viewing that have
been identified in uses and gratifications research on television use (Rubin,
1984): (1) ritualized use, open-ended use to gratify diversionary needs; and
(2) instrumental use, goal-directed use to gratify informational needs. On
this view, the current version of the YouTube app appears to offer good
support for ritualized use, but poor support for instrumental use, as
participants often felt that their specific intentions were hijacked by its
autoplay and endless recommendations.
How might a single app support sense of agency for both ritualized and
instrumental use? One approach is a customizable interface that lets the user
switch between low and high levels of control. This can be done at the app-
level, e.g., switching between an Explore Mode and a Focus Mode. Or it can be
done at a mechanism-level, e.g., YouTube currently offers an on/off toggle for
autoplay, but does not provide any way to toggle recommendations, which were
the mechanism most frequently mentioned as leading to a loss of control in
Study 1. This approach may be particularly suitable for power users, as prior
research indicates that power users prefer interfaces that are customizable
(user-tailored) by a toggle, whereas non-power users prefer ones that are
personalized (system-tailored) for them (Sundar and Marathe, 2010).
A second approach then is an interface that is personalized for the user based
on a prediction model. Recent work has found that classifiers can be trained
to predict these types of media use with high confidence, e.g., for Pinterest
(Cheng et al., 2017) and smartphone use (Hiniker et al., 2016b). For example,
if YouTube expects that the user is visiting for ritualistic use, it could
remain as is, or even go further to take control as in its Leanback mode for
“effortless viewing” that autoplays a never-ending stream of high-definition
recommendations (Google, [n.d.]). Both our own findings on autoplay
and previous work suggest that such a high level of automation would reduce
sense of agency (Berberian et al., 2012), but it may still be the interface
that the user prefers in this situation. Conversely, if YouTube has high
confidence that the user is visiting for instrumental use, it could present a
search-only interface and hide all recommendations. Finally, if it has low
confidence in its prediction, it could present a middle-ground interface that
shows limited recommendations, or it might err on the side of caution and lead
with a search-first interface in case the user has an intention to express.
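The confidence-based personalization sketched above reduces to a simple decision rule. The snippet below is purely illustrative: the `choose_interface` function, the 0.8 threshold, and the interface labels are assumptions standing in for whatever classifier and UI variants a real system would use.

```python
def choose_interface(p_ritualized, threshold=0.8):
    """Pick an interface variant from a classifier's probability that the
    current visit is ritualized (open-ended) rather than instrumental
    (goal-directed). The threshold is illustrative, not empirically tuned."""
    if p_ritualized >= threshold:
        # High confidence in ritualized use: support open-ended exploration.
        return "full recommendations"
    if p_ritualized <= 1 - threshold:
        # High confidence in instrumental use: hide recommendations entirely.
        return "search-only"
    # Low confidence: err on the side of letting the user express an intention.
    return "search-first with limited recommendations"
```

The asymmetry argued for in the text is captured in the fall-through branch: when the prediction is uncertain, the interface leads with search rather than with recommendations.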
### 5.4. Towards a Language of Attention Capture Dark Patterns
Our findings address what and when to design to increase sense of agency.
However, in the attention economy, what might motivate key stakeholders to
support such designs? One step is for the design community to develop a common
language of attention capture dark patterns that recognizes designs that lead
to attentional harms.
Such a lingua franca of attention capture design patterns could be
integrated into design education (Gray et al., 2018) and could influence
designers’ thinking and reputations, as is done by the name-and-shame campaign
of the darkpatterns.org website (Brignull and Darlington, [n.d.]). At the company
level, it could help inspire products that are mindful of the user’s sense of
agency. For example, in spite of the incentives of the attention economy,
Apple is now working to make privacy a selling point (Hall, 2019), e.g., by
preventing developers from tracking users across multiple apps without their
active consent (Apple Inc, [n.d.]). At the regulatory level, a recent review
of dark patterns by Narayanan et al. notes that if the design community does
not self-regulate by setting standards for itself, it may be regulated by more
onerous standards set by others (Narayanan et al., 2020). The U.S. Senate is
currently considering how to regulate social media, with one bill that would
make it illegal to “manipulate a user interface with the purpose or
substantial effect of obscuring, subverting, or impairing user autonomy”
(McKay, 2019) and another that would ban autoplay and infinite scroll (Chen,
[n.d.]). For designers, the language of dark patterns is an important way to
contribute to a broader critical discussion of design practices in the
technology industry (Gray et al., 2018).
We caution that the message of attention capture dark patterns should not be
“never X,” but rather “be careful when X.” Participants in both of
our studies reported mixed experiences with many design mechanisms, including
autoplay and recommendations. An outright ban on these mechanisms is likely to
reduce sense of agency in a substantial number of situations where the user
just wants to explore. Instead, a nuanced guide to dark patterns might present
examples of the problem, followed by counterexamples where such a pattern is
appropriate. While this creates a murky gray middle, it also better describes
the effects of the design mechanisms that we identified in our studies.
### 5.5. Limitations
In addition to the previously stated limitations of our participant sampling
and focus on design mechanisms as a unit of analysis, our work also has at
least four conceptual limitations that could be explored in future work.
First, both of our studies asked participants to share their preferences,
however present bias (O’Donoghue and Rabin, 2015) predicts that actual
behavior will favor the choice that offers immediate rewards at the expense of
long-term goals. An in-situ study of how people respond to redesigns intended
to influence sense of agency would yield results on “what users do,” which
might need to be reconciled with the present results on “what users say.”
Second, time and attention are not the only factors that influence sense of
agency. By asking participants in both studies to reflect on “…in control
of how you spend your time on YouTube,” we discouraged participants from
considering other factors such as privacy (Sundar and Marathe, 2010). In Study
2, this may have primed participants to focus on sense of agency over other
factors when evaluating which version of the mockup they preferred. Third,
self-reported agency can be quite different from the facts of agency (Coyle et
al., 2012; Moore, 2016). For example, many people continue to press ‘placebo
buttons’ like the ‘close door button’ in their apartment’s elevator, even when
doing so has no effect (Paumgarten, 2014). There is therefore a concern that
designs to increase sense of agency may be disconnected from actual ability to
influence the world. Fourth, users are not the only stakeholders on YouTube,
and it would be a mistake to optimize for their sense of agency alone. Google,
creators, advertisers, and even society itself all have a stake in what
happens on YouTube. For instance, radicalizing political videos can make
viewers feel as if they have uncovered powerful conspiracies that were
previously hidden from them (Roose, 2019); to support sense of agency in this
use case would be dangerous. User sense of agency needs to be integrated into
larger design frameworks as one important consideration among many for social
media apps.
## 6. Conclusion
Whereas a common approach to digital wellbeing is designing to reduce
screentime, this work takes an alternative approach of designing to increase
sense of agency. In two studies, we identify mechanisms within the YouTube
mobile app that participants report influence their sense of agency and how
they want to change them. We find that participants generally prefer
mechanisms like autoplay and recommendations to be redesigned for a greater
sense of agency than the YouTube mobile app currently provides. For digital
wellbeing designers, we highlight a need for recommender systems that better
reflect user aspirations rather than just reinforce their current behavior. We
also propose mechanisms that support ‘microplanning,’ making lightweight plans
to guide a single session of use, to increase user sense of agency. Finally,
we propose language that the design community might adopt to recognize design
patterns that impose attentional harms upon the user.
###### Acknowledgements.
This work was funded in part by National Science Foundation award #1849955. We
thank Xuecong Xu, Ming Yao Zheng, Kevin Kuo, Tejus Krishnan, Laura Meng, Linda
Lai, and Stefania Druga for helping to conceptualize this study and design the
mockups.
## References
* noa ([n.d.]) [n.d.]. Take Control. https://www.humanetech.com/take-control Accessed: 2020-8-3.
* Aagaard (2015) Jesper Aagaard. 2015\. Drawn to distraction: A qualitative study of off-task use of educational technology. _Computers & education_ 87 (Sept. 2015), 90–97. https://doi.org/10.1016/j.compedu.2015.03.010
* Agapie (2020) Elena Agapie. 2020\. _Designing for Human Supported Evidence-Based Planning_. Ph.D. Dissertation. https://digital.lib.washington.edu/researchworks/handle/1773/45709
* Ames (2013) Morgan G Ames. 2013\. Managing Mobile Multitasking: The Culture of iPhones on Stanford Campus. In _Proceedings of the 2013 Conference on Computer Supported Cooperative Work_ (San Antonio, Texas, USA) _(CSCW ’13)_. ACM, New York, NY, USA, 1487–1498. https://doi.org/10.1145/2441776.2441945
* Apple Inc ([n.d.]) Apple Inc. [n.d.]. Details for app privacy questions now available - News - Apple Developer. https://developer.apple.com/news/?id=hx9s63c5 Accessed: 2020-9-13.
* Baab-Muguira (2017) Catherine Baab-Muguira. 2017\. The Stupidly Simple Productivity Hack Hiding In Microsoft Word. https://www.fastcompany.com/3068825/the-stupidly-simple-productivity-hack-hiding-in-microsoft-word Accessed: 2020-9-11.
* Bannon et al. (1994) Liam Bannon, John Bowers, Peter Carstensen, John A Hughes, K Kuutti, James Pycock, Tom Rodden, Kjeld Schmidt, Dan Shapiro, Wes Sharrock, and Others. 1994. _Informing CSCW system requirements_. Lancaster University. https://www.forskningsdatabasen.dk/en/catalog/2185760093
* Baumer et al. (2013) Eric P S Baumer, Phil Adams, Vera D Khovanskaya, Tony C Liao, Madeline E Smith, Victoria Schwanda Sosik, and Kaiton Williams. 2013\. Limiting, Leaving, and (Re)Lapsing: An Exploration of Facebook Non-use Practices and Experiences. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Paris, France) _(CHI ’13)_. ACM, New York, NY, USA, 3257–3266. https://doi.org/10.1145/2470654.2466446
* Baumer and Silberman (2011) Eric P S Baumer and M Six Silberman. 2011. When the implication is not to design (technology). In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_. ACM, 2271–2274. https://doi.org/10.1145/1978942.1979275
* Baumer et al. (2018) Eric P S Baumer, Rui Sun, and Peter Schaedler. 2018. Departing and Returning: Sense of Agency As an Organizing Concept for Understanding Social Media Non/Use Transitions. _Proc. ACM Hum. -Comput. Interact._ 2, CSCW (Nov. 2018), 23:1–23:19. https://doi.org/10.1145/3274292
* Berberian et al. (2012) Bruno Berberian, Jean-Christophe Sarrazin, Patrick Le Blaye, and Patrick Haggard. 2012\. Automation technology and sense of control: a window on human agency. _PloS one_ 7, 3 (March 2012), e34075. https://doi.org/10.1371/journal.pone.0034075
* Boyatzis (1998) Richard E Boyatzis. 1998\. _Transforming Qualitative Information: Thematic Analysis and Code Development_. SAGE. https://play.google.com/store/books/details?id=_rfClWRhIKAC
* Braun and Clarke (2006) Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. _Qualitative research in psychology_ 3, 2 (Jan. 2006), 77–101. https://doi.org/10.1191/1478088706qp063oa
* Braun and Clarke (2019) Virginia Braun and Victoria Clarke. 2019. Reflecting on reflexive thematic analysis. _Qualitative Research in Sport, Exercise and Health_ 11, 4 (Aug. 2019), 589–597. https://doi.org/10.1080/2159676X.2019.1628806
* Braun et al. (2018) Virginia Braun, Victoria Clarke, Nikki Hayfield, and Gareth Terry. 2018. Thematic Analysis. In _Handbook of Research Methods in Health Social Sciences_ , Pranee Liamputtong (Ed.). Springer Singapore, Singapore, 1–18. https://doi.org/10.1007/978-981-10-2779-6_103-1
* Brignull and Darlington ([n.d.]) Harry Brignull and Alexander Darlington. [n.d.]. What are dark patterns? https://www.darkpatterns.org/ Accessed: 2019-9-28.
* Bryan et al. (2010) Gharad Bryan, Dean Karlan, and Scott Nelson. 2010\. Commitment Devices. _Annual review of economics_ 2, 1 (Sept. 2010), 671–698. https://doi.org/10.1146/annurev.economics.102308.124324
* Burr et al. (2018) Christopher Burr, Nello Cristianini, and James Ladyman. 2018\. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. _Minds and Machines_ 28, 4 (Sept. 2018), 735–774. https://doi.org/10.1007/s11023-018-9479-0
* calkuta ([n.d.]) calkuta. [n.d.]. DF Tube (Distraction Free for YouTube). https://chrome.google.com/webstore/detail/df-tube-distraction-free/mjdepdfccjgcndkmemponafgioodelna?hl=en Accessed: 2020-8-3.
* Caplan (2010) Scott E Caplan. 2010\. Theory and measurement of generalized problematic Internet use: A two-step approach. _Computers in human behavior_ 26, 5 (Sept. 2010), 1089–1097. https://doi.org/10.1016/j.chb.2010.03.012
* Cash et al. (2012) Hilarie Cash, Cosette D Rae, Ann H Steel, and Alexander Winkler. 2012. Internet Addiction: A Brief Summary of Research and Practice. _Current psychiatry reviews_ 8, 4 (Nov. 2012), 292–298. https://doi.org/10.2174/157340012803520513
* Cecchinato et al. (2017) Marta E Cecchinato, Anna L Cox, and Jon Bird. 2017. Always On(line)?: User Experience of Smartwatches and their Role within Multi-Device Ecologies. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems_. ACM, 3557–3568. https://doi.org/10.1145/3025453.3025538
* Chen ([n.d.]) Angela Chen. [n.d.]. A new bill would ban making social media too addictive. _MIT Technology Review_ ([n. d.]). https://www.technologyreview.com/2019/07/30/133976/josh-hawley-social-media-addictive-design-legislation-smart-act-bill/
* Cheng et al. (2019) Justin Cheng, Moira Burke, and Elena Goetz Davis. 2019\. Understanding Perceptions of Problematic Facebook Use: When People Experience Negative Life Impact and a Lack of Control. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_. ACM, 199\. https://doi.org/10.1145/3290605.3300429
* Cheng et al. (2017) Justin Cheng, Caroline Lo, and Jure Leskovec. 2017. Predicting Intent Using Activity Logs: How Goal Specificity and Temporal Range Affect User Behavior. In _Proceedings of the 26th International Conference on World Wide Web Companion_. International World Wide Web Conferences Steering Committee, 593–601. https://doi.org/10.1145/3041021.3054198
* Cohen (2012) Julie E Cohen. 2012\. What privacy is for. _Harvard law review_ 126 (2012), 1904. https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/hlr126
* Collins et al. (2014) Emily I M Collins, Anna L Cox, Jon Bird, and Cassie Cornish-Tresstail. 2014. Barriers to Engagement with a Personal Informatics Productivity Tool. In _Proceedings of the 26th Australian Computer-Human Interaction Conference on Designing Futures: The Future of Design_ (Sydney, New South Wales, Australia) _(OzCHI ’14)_. ACM, New York, NY, USA, 370–379. https://doi.org/10.1145/2686612.2686668
* Covington et al. (2016) Paul Covington, Jay Adams, and Emre Sargin. 2016\. Deep Neural Networks for YouTube Recommendations. In _Proceedings of the 10th ACM Conference on Recommender Systems_ (Boston, Massachusetts, USA) _(RecSys ’16)_. Association for Computing Machinery, New York, NY, USA, 191–198. https://doi.org/10.1145/2959100.2959190
* Cox et al. (2016) Anna L Cox, Sandy J J Gould, Marta E Cecchinato, Ioanna Iacovides, and Ian Renfree. 2016\. Design Frictions for Mindful Interactions: The Case for Microboundaries. In _Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems_ (San Jose, California, USA) _(CHI EA ’16)_. ACM, New York, NY, USA, 1389–1397. https://doi.org/10.1145/2851581.2892410
* Coyle et al. (2012) David Coyle, James Moore, Per Ola Kristensson, Paul Fletcher, and Alan Blackwell. 2012. I did that! Measuring users’ experience of agency in their own actions. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Austin, Texas, USA) _(CHI ’12)_. Association for Computing Machinery, New York, NY, USA, 2025–2034. https://doi.org/10.1145/2207676.2208350
* Davis et al. (2019) Katie Davis, Anja Dinhopl, and Alexis Hiniker. 2019\. “Everything’s the Phone”: Understanding the Phone’s Supercharged Role in Parent-Teen Relationships. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_. dl.acm.org, 227\. https://dl.acm.org/citation.cfm?id=3300457
* Dearden and Finlay (2006) Andy Dearden and Janet Finlay. 2006. Pattern Languages in HCI: A Critical Review. _Human–Computer Interaction_ 21, 1 (March 2006), 49–102. https://doi.org/10.1207/s15327051hci2101_3
* Delaney and Lades (2017) Liam Delaney and Leonhard K Lades. 2017. Present bias and everyday self-control failures: a day reconstruction study. _Journal of behavioral decision making_ 30, 5 (2017), 1157–1167. https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.2031?casa_token=8axLfIY-YlEAAAAA:Z65pChy95G1cvF2v600EYc6oqnxIGFaC5DQJteKnCuK5AQ4Nqkj_YnMbnrJB9KxqhtTJHN0NUtiLOOsI
* Digital Wellness Warriors (2018) Digital Wellness Warriors. 2018\. Apple: let developers help iPhone users with mental wellbeing. https://www.change.org/p/apple-allow-digital-wellness-developers-to-help-ios-users Accessed: 2020-8-27.
* Dixon (2019) Colin Dixon. 2019\. Why shutter YouTube Leanback when there are many potential users? https://nscreenmedia.com/why-shutter-youtube-leanback-browser-experience-now/ Accessed: 2020-9-7.
* Duckworth et al. (2016) Angela L Duckworth, Rachel E White, Alyssa J Matteucci, Annie Shearer, and James J Gross. 2016\. A Stitch in Time: Strategic Self-Control in High School and College Students. _Journal of educational psychology_ 108, 3 (April 2016), 329–341. https://doi.org/10.1037/edu0000062
* Ekstrand and Willemsen (2016) Michael D Ekstrand and Martijn C Willemsen. 2016. Behaviorism is Not Enough: Better Recommendations Through Listening to Users. In _Proceedings of the 10th ACM Conference on Recommender Systems_ (Boston, Massachusetts, USA) _(RecSys ’16)_. ACM, New York, NY, USA, 221–224. https://doi.org/10.1145/2959100.2959179
* Felner et al. (1988) Robert Felner, Angela Adan, Richard Price, E L Cowen, R P Lorion, and J Ramos-McKay. 1988\. 14 Ounces of prevention: A casebook for practitioners. (1988).
* Fogg (2009) B J Fogg. 2009. Creating persuasive technologies: an eight-step design process. In _Proceedings of the 4th International Conference on Persuasive Technology_ (Claremont, California, USA) _(Persuasive ’09, Article 44)_. Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/1541948.1542005
* Forsyth (2008) Donelson R Forsyth. 2008\. Self-serving bias. (2008).
* Gollwitzer and Sheeran (2006) Peter M Gollwitzer and Paschal Sheeran. 2006. Implementation Intentions and Goal Achievement: A Meta‐analysis of Effects and Processes. In _Advances in Experimental Social Psychology_. Vol. 38. Academic Press, 69–119. https://doi.org/10.1016/S0065-2601(06)38002-1
* Google ([n.d.]) Google. [n.d.]. YouTube Leanback offers effortless viewing. https://youtube.googleblog.com/2010/07/youtube-leanback-offers-effortless.html Accessed: 2020-9-12.
* Gray et al. (2018) Colin M Gray, Yubo Kou, Bryan Battles, Joseph Hoggatt, and Austin L Toombs. 2018. The Dark (Patterns) Side of UX Design. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ (Montreal QC, Canada) _(CHI ’18)_. ACM, New York, NY, USA, 534:1–534:14. https://doi.org/10.1145/3173574.3174108
* Hall (2019) Zac Hall. 2019\. Apple makes privacy extremely relatable in fun new iPhone ad - 9to5Mac. https://9to5mac.com/2019/03/14/iphone-privacy-ad/ Accessed: 2020-9-13.
* Harambam et al. (2019) Jaron Harambam, Dimitrios Bountouridis, Mykola Makhortykh, and Joris van Hoboken. 2019. Designing for the Better by Taking Users into Account: A Qualitative Evaluation of User Control Mechanisms in (News) Recommender Systems. In _Proceedings of the 13th ACM Conference on Recommender Systems_ (Copenhagen, Denmark) _(RecSys ’19)_. ACM, New York, NY, USA, 69–77. https://doi.org/10.1145/3298689.3347014
* Harmon and Mazmanian (2013) Ellie Harmon and Melissa Mazmanian. 2013. Stories of the Smartphone in Everyday Discourse: Conflict, Tension & Instability. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Paris, France) _(CHI ’13)_. ACM, New York, NY, USA, 1051–1060. https://doi.org/10.1145/2470654.2466134
* Hill et al. (2020) Joshua Hill, Kelly Widdicks, and Mike Hazas. 2020\. Mapping the Scope of Software Interventions for Moderate Internet Use on Mobile Devices. In _Proceedings of the 7th International Conference on ICT for Sustainability_ (Bristol, United Kingdom) _(ICT4S2020)_. Association for Computing Machinery, New York, NY, USA, 204–212. https://doi.org/10.1145/3401335.3401361
* Hiniker et al. (2018) Alexis Hiniker, Sharon S Heung, Sungsoo (ray) Hong, and Julie A Kientz. 2018. Coco’s Videos: An Empirical Investigation of Video-Player Design Features and Children’s Media Use. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ (Montreal QC, Canada) _(CHI ’18, Paper 254)_. Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3173574.3173828
* Hiniker et al. (2016a) Alexis Hiniker, Sungsoo (ray) Hong, Tadayoshi Kohno, and Julie A Kientz. 2016a. MyTime: Designing and Evaluating an Intervention for Smartphone Non-Use. In _Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems_. ACM, 4746–4757. https://doi.org/10.1145/2858036.2858403
* Hiniker et al. (2017) Alexis Hiniker, Bongshin Lee, Kiley Sobel, and Eun Kyoung Choe. 2017. Plan & Play: Supporting Intentional Media Use in Early Childhood. In _Proceedings of the 2017 Conference on Interaction Design and Children_ (Stanford, California, USA) _(IDC ’17)_. ACM, New York, NY, USA, 85–95. https://doi.org/10.1145/3078072.3079752
* Hiniker et al. (2016b) Alexis Hiniker, Shwetak N Patel, Tadayoshi Kohno, and Julie A Kientz. 2016b. Why would you do that? predicting the uses and gratifications behind smartphone-usage behaviors. In _Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing_. dl.acm.org, 634–645. http://dl.acm.org/citation.cfm?id=2971762
* Hofmann et al. (2012) Wilhelm Hofmann, Roy F Baumeister, Georg Förster, and Kathleen D Vohs. 2012. Everyday temptations: an experience sampling study of desire, conflict, and self-control. _Journal of personality and social psychology_ 102, 6 (June 2012), 1318–1335. https://doi.org/10.1037/a0026545
* Jääskeläinen (2010) Riitta Jääskeläinen. 2010\. Think-aloud protocol. _Handbook of translation studies_ 1 (2010), 371–374. https://books.google.com/books?hl=en&lr=&id=sBVGAYCh_9AC&oi=fnd&pg=PA371&dq=Riitta+J%C3%A4%C3%A4skel%C3%A4inen+2010+Think-aloud+protocol.&ots=Qn8NYacfXD&sig=9gAnKpZ6vdVbGHapKTfo74b0MtE
* Jeong et al. (2016) Se-Hoon Jeong, Hyoungjee Kim, Jung-Yoon Yum, and Yoori Hwang. 2016\. What type of content are smartphone users addicted to?: SNS vs. games. _Computers in human behavior_ 54 (Jan. 2016), 10–17. https://doi.org/10.1016/j.chb.2015.07.035
* Kamenetz (2018) Anya Kamenetz. 2018\. _The art of screen time: How your family can balance digital media and real life_. Hachette UK.
* Kim et al. (2019a) Jaejeung Kim, Hayoung Jung, Minsam Ko, and Uichin Lee. 2019a. GoalKeeper: Exploring Interaction Lockout Mechanisms for Regulating Smartphone Use. _Proc. ACM Interact. Mob. Wearable Ubiquitous Technol._ 3, 1 (March 2019), 29. https://doi.org/10.1145/3314403
* Kim et al. (2019b) Jaejeung Kim, Joonyoung Park, Hyunsoo Lee, Minsam Ko, and Uichin Lee. 2019b. LocknType: Lockout Task Intervention for Discouraging Smartphone App Use. In _ACM CHI_. https://doi.org/10.1145/3290605.3300927
* Kim et al. (2016) Young-Ho Kim, Jae Ho Jeon, Eun Kyoung Choe, Bongshin Lee, Kwonhyun Kim, and Jinwook Seo. 2016\. TimeAware: Leveraging Framing Effects to Enhance Personal Productivity. In _Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems_ (San Jose, California, USA) _(CHI ’16)_. ACM, New York, NY, USA, 272–283. https://doi.org/10.1145/2858036.2858428
* Ko et al. (2015) Minsam Ko, Subin Yang, Joonwon Lee, Christian Heizmann, Jinyoung Jeong, Uichin Lee, Daehee Shin, Koji Yatani, Junehwa Song, and Kyong-Mee Chung. 2015\. NUGU: A Group-based Intervention App for Improving Self-Regulation of Limiting Smartphone Use. In _Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing_ (Vancouver, BC, Canada) _(CSCW ’15)_. ACM, New York, NY, USA, 1235–1245. https://doi.org/10.1145/2675133.2675244
* Kovacs et al. (2019) Geza Kovacs, Drew Mylander Gregory, Zilin Ma, Zhengxuan Wu, Golrokh Emami, Jacob Ray, and Michael S Bernstein. 2019. Conservation of Procrastination: Do Productivity Interventions Save Time Or Just Redistribute It?. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. ACM, New York, NY, USA, 330:1–330:12. https://doi.org/10.1145/3290605.3300560
* Landis and Koch (1977) J R Landis and G G Koch. 1977. The measurement of observer agreement for categorical data. _Biometrics_ 33, 1 (March 1977), 159–174. https://www.ncbi.nlm.nih.gov/pubmed/843571
* Latham and Locke (1991) Gary P Latham and Edwin A Locke. 1991. Self-regulation through goal setting. _Organizational behavior and human decision processes_ 50, 2 (1991), 212–247. https://www.researchgate.net/profile/Gary_Latham2/publication/232501090_A_Theory_of_Goal_Setting_Task_Performance/links/57d0e85108ae5f03b489170d/A-Theory-of-Goal-Setting-Task-Performance.pdf
* Lewis (2017) Paul Lewis. 2017\. ’Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia. _The Guardian_ 6, 10 (2017), 2017.
* Limerick et al. (2014) Hannah Limerick, David Coyle, and James W Moore. 2014\. The experience of agency in human-computer interactions: a review. _Frontiers in human neuroscience_ 8 (Aug. 2014), 643\. https://doi.org/10.3389/fnhum.2014.00643
* Limerick et al. (2015) Hannah Limerick, James W Moore, and David Coyle. 2015\. Empirical Evidence for a Diminished Sense of Agency in Speech Interfaces. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_ (Seoul, Republic of Korea) _(CHI ’15)_. Association for Computing Machinery, New York, NY, USA, 3967–3970. https://doi.org/10.1145/2702123.2702379
* Lottridge et al. (2012) Danielle Lottridge, Eli Marschner, Ellen Wang, Maria Romanovsky, and Clifford Nass. 2012. Browser Design Impacts Multitasking. _Proceedings of the Human Factors and Ergonomics Society … Annual Meeting Human Factors and Ergonomics Society. Meeting_ 56, 1 (Sept. 2012), 1957–1961. https://doi.org/10.1177/1071181312561289
* Lukoff et al. (2021) Kai Lukoff, Alexis Hiniker, Colin M Gray, Arunesh Mathur, and Shruthi Chivukula. 2021. What Can CHI Do About Dark Patterns?. In _Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems_ (Yokohama, Japan). ACM, New York, NY, USA. https://doi.org/10.1145/3411763.3441360
* Lukoff et al. (2018) Kai Lukoff, Cissy Yu, Julie Kientz, and Alexis Hiniker. 2018\. What Makes Smartphone Use Meaningful or Meaningless? _Proc. ACM Interact. Mob. Wearable Ubiquitous Technol._ 2, 1 (March 2018), 22:1–22:26. https://doi.org/10.1145/3191754
* Lyngs et al. (2018) U Lyngs, R Binns, M Van Kleek, and others. 2018\. So, Tell Me What Users Want, What They Really, Really Want! _Extended Abstracts of the_ (2018). https://dl.acm.org/citation.cfm?id=3188397
* Lyngs et al. (2019) Ulrik Lyngs, Kai Lukoff, Petr Slovak, Reuben Binns, Adam Slack, Michael Inzlicht, Max Van Kleek, and Nigel Shadbolt. 2019\. Self-Control in Cyberspace: Applying Dual Systems Theory to a Review of Digital Self-Control Tools. _CHI 2019_ (May 2019). https://doi.org/10.1145/3290605.3300361
* Lyngs et al. (2020) Ulrik Lyngs, Kai Lukoff, Petr Slovak, William Seymour, Helena Webb, Marina Jirotka, Max Van Kleek, and Nigel Shadbolt. 2020\. ’I Just Want to Hack Myself to Not Get Distracted’: Evaluating Design Interventions for Self-Control on Facebook. (Jan. 2020). arXiv:2001.04180 [cs.HC] http://arxiv.org/abs/2001.04180
* Marino et al. (2018) Claudia Marino, Gianluca Gini, Alessio Vieno, and Marcantonio M Spada. 2018. A comprehensive meta-analysis on Problematic Facebook Use. _Computers in human behavior_ 83 (June 2018), 262–277. https://doi.org/10.1016/j.chb.2018.02.009
* Marotta and Acquisti ([n.d.]) Veronica Marotta and Alessandro Acquisti. [n.d.]. Online Distractions, Website Blockers, and Economic Productivity: A Randomized Field Experiment. ([n. d.]).
* Matney (2017) Lucas Matney. 2017\. YouTube has 1.5 billion logged-in monthly users watching a ton of mobile video. _TechCrunch_ (June 2017). http://techcrunch.com/2017/06/22/youtube-has-1-5-billion-logged-in-monthly-users-watching-a-ton-of-mobile-video/
* McKay (2019) Tom McKay. 2019\. Senators Introduce Bill to Stop ’Dark Patterns’ Huge Platforms Use to Trick Users. https://gizmodo.com/senators-introduce-bill-to-stop-dark-patterns-huge-plat-1833929276. https://gizmodo.com/senators-introduce-bill-to-stop-dark-patterns-huge-plat-1833929276 Accessed: 2020-8-27.
* McNee et al. (2003) Sean M McNee, Shyong K Lam, Catherine Guetzlaff, Joseph A Konstan, and John Riedl. 2003\. Confidence displays and training in recommender systems. In _Proc. INTERACT_ , Vol. 3. books.google.com, 176–183. https://books.google.com/books?hl=en&lr=&id=PTg0fVYqgCcC&oi=fnd&pg=PA176&dq=low+confidence+%22recommender+system%22&ots=ObJIxzmCAZ&sig=0uf4-fGfAJyBGbfmCDJE0zpPsGU
* McNee et al. (2006) Sean M McNee, John Riedl, and Joseph A Konstan. 2006. Being accurate is not enough: how accuracy metrics have hurt recommender systems. In _CHI ’06 Extended Abstracts on Human Factors in Computing Systems_ (Montréal, Québec, Canada) _(CHI EA ’06)_. Association for Computing Machinery, New York, NY, USA, 1097–1101. https://doi.org/10.1145/1125451.1125659
* Metcalfe and Greene (2007) Janet Metcalfe and Matthew Jason Greene. 2007. Metacognition of agency. _Journal of experimental psychology. General_ 136, 2 (May 2007), 184–199. https://doi.org/10.1037/0096-3445.136.2.184
* Monge Roffarello and De Russis (2019) Alberto Monge Roffarello and Luigi De Russis. 2019. The Race Towards Digital Wellbeing: Issues and Opportunities. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. ACM, New York, NY, USA, 386:1–386:14. https://doi.org/10.1145/3290605.3300616
* Moore (2016) James W Moore. 2016\. What Is the Sense of Agency and Why Does it Matter? _Frontiers in psychology_ 7 (Aug. 2016), 1272\. https://doi.org/10.3389/fpsyg.2016.01272
* Moser et al. (2019) C Moser, S Y Schoenebeck, and P Resnick. 2019. Impulse Buying: Design Practices and Consumer Needs. _of the 2019 CHI Conference on …_ (2019). https://dl.acm.org/doi/abs/10.1145/3290605.3300472
* Narayanan et al. (2020) Arvind Narayanan, Arunesh Mathur, Marshini Chetty, and Mihir Kshirsagar. 2020. Dark Patterns: Past, Present, and Future: The evolution of tricky user interfaces. _Queueing Systems. Theory and Applications_ 18, 2 (April 2020), 67–92. https://doi.org/10.1145/3400899.3400901
* Nielsen (1994) Jakob Nielsen. 1994\. 10 Heuristics for User Interface Design: Article by Jakob Nielsen. https://www.nngroup.com/articles/ten-usability-heuristics/. https://www.nngroup.com/articles/ten-usability-heuristics/ Accessed: 2020-2-7.
* NPR (2015) NPR. 2015. Episode 653: The Anti-Store. _NPR_ (Sept. 2015). https://www.npr.org/sections/money/2015/09/25/443519599/episode-653-the-anti-store
* O’Donoghue and Rabin (2015) Ted O’Donoghue and Matthew Rabin. 2015. Present Bias: Lessons Learned and to Be Learned. _The American economic review_ 105, 5 (2015), 273–279. https://ideas.repec.org/a/aea/aecrev/v105y2015i5p273-79.html
* Okeke et al. (2018) Fabian Okeke, Michael Sobolev, Nicola Dell, and Deborah Estrin. 2018. Good vibrations: can a digital nudge reduce digital overload?. In _Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services_. ACM, 4. https://doi.org/10.1145/3229434.3229463
* Oulasvirta et al. (2012) Antti Oulasvirta, Tye Rattenbury, Lingyi Ma, and Eeva Raita. 2012. Habits make smartphone use more pervasive. _Personal and Ubiquitous Computing_ 16, 1 (Jan. 2012), 105–114. https://doi.org/10.1007/s00779-011-0412-2
* Pandey (2017) Erica Pandey. 2017\. Sean Parker: Facebook was designed to exploit human “vulnerability”. https://www.axios.com/sean-parker-facebook-exploits-a-vulnerability-in-humans-2507917325.html. https://www.axios.com/sean-parker-facebook-exploits-a-vulnerability-in-humans-2507917325.html Accessed: 2020-9-15.
* Park et al. (2018) Joonyoung Park, Jin Yong Sim, Jaejeung Kim, Mun Yong Yi, and Uichin Lee. 2018. Interaction Restraint: Enforcing Adaptive Cognitive Tasks to Restrain Problematic User Interaction. In _Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems_ (Montreal QC, Canada) _(CHI EA ’18)_. ACM, New York, NY, USA, LBW559:1–LBW559:6. https://doi.org/10.1145/3170427.3188613
* Paumgarten (2014) Nick Paumgarten. 2014\. Up And Then Down. _The New Yorker_ (July 2014). https://www.newyorker.com/magazine/2008/04/21/up-and-then-down
* Perrin and Anderson (2019) Andrew Perrin and Monica Anderson. 2019. Share of U.S. adults using social media, including Facebook, is mostly unchanged since 2018. https://www.pewresearch.org/fact-tank/2019/04/10/share-of-u-s-adults-using-social-media-including-facebook-is-mostly-unchanged-since-2018/. https://www.pewresearch.org/fact-tank/2019/04/10/share-of-u-s-adults-using-social-media-including-facebook-is-mostly-unchanged-since-2018/ Accessed: 2020-9-14.
* Pinder et al. (2018) C Pinder, J Vermeulen, B R Cowan, and others. 2018\. Digital Behaviour Change Interventions to Break and Form Habits. _ACM Transactions on_ (2018). https://dl.acm.org/citation.cfm?id=3196830
* Resnick and Varian (1997) Paul Resnick and Hal R Varian. 1997. Recommender systems. _Commun. ACM_ 40, 3 (1997), 56–58. http://dl.acm.org/citation.cfm?id=245121
* Roose (2019) Kevin Roose. 2019\. The making of a YouTube radical. _The New York times_ (2019). https://static01.nyt.com/images/2019/06/09/nytfrontpage/scan.pdf
* Rubin (1984) Alan M Rubin. 1984\. Ritualized and Instrumental Television Viewing. _The Journal of communication_ 34, 3 (Sept. 1984), 67–77. https://doi.org/10.1111/j.1460-2466.1984.tb02174.x
* Ryan and Deci (2006) Richard M Ryan and Edward L Deci. 2006. Self-regulation and the problem of human autonomy: Does psychology need choice, self-determination, and will? _Journal of personality_ 74, 6 (2006), 1557–1586. https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-6494.2006.00420.x?casa_token=MZuzC4Br_U4AAAAA:ErU7WbByAWUFcoh2N_5TIRqe7jhVXe6V8Z0-pWB8gbb-ZZ3I8xz_qrdAtePASmiTFBWb2COF6sX4BQ
* Schelling (1984) Thomas C Schelling. 1984\. Self-Command in Practice, in Policy, and in a Theory of Rational Choice. _The American economic review_ 74, 2 (1984), 1–11. http://www.jstor.org/stable/1816322
* Schlosser (2019) Markus Schlosser. 2019\. _Agency_ (winter 2019 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2019/entries/agency/
* Schrage (1996) Michael Schrage. 1996\. Cultures of prototyping. _Bringing design to software_ 4, 1 (1996), 1–11. https://pdfs.semanticscholar.org/c399/6e2738e52ea22c83ef662ab118f68b82eba2.pdf
* Schüll (2012) N D Schüll. 2012\. _Addiction by Design: Machine Gambling in Las Vegas_. Princeton University Press. https://books.google.com/books?id=_Vsk6EXc1_4C
* Seaver (2018) Nick Seaver. 2018\. Captivating algorithms: Recommender systems as traps. _Journal of Material Culture_ (Dec. 2018), 1359183518820366\. https://doi.org/10.1177/1359183518820366
* Shneiderman (1992) Ben Shneiderman. 1992\. _Designing the user interface (2nd ed.): strategies for effective human-computer interaction_. Addison-Wesley Longman Publishing Co., Inc., USA. https://dl.acm.org/citation.cfm?id=129385
* Shneiderman and Plaisant (2004) Ben Shneiderman and Catherine Plaisant. 2004. _Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th Edition)_. Pearson Addison Wesley.
* Sillito ([n.d.]) Jonathan Sillito. [n.d.]. Saturate App: Simple Collaborative Analysis. http://www.saturateapp.com/. http://www.saturateapp.com/ Accessed: 2020-2-NA.
* Silver et al. (2019) L Silver, A Smith, C Johnson, K Taylor, J Jiang, A Monica, and L Rainie. 2019\. Use of smartphones and social media is common across most emerging economies. https://www.pewresearch.org/internet/2019/03/07/use-of-smartphones-and-social-media-is-common-across-most-emerging-economies/. https://www.pewresearch.org/internet/2019/03/07/use-of-smartphones-and-social-media-is-common-across-most-emerging-economies/ Accessed: 2019-2-NA.
* Smith et al. (2018) Aaron Smith, Skye Toor, and Patrick Van Kessel. 2018. Many Turn to YouTube for Children’s Content, News, How-To Lessons. https://www.pewresearch.org/internet/2018/11/07/many-turn-to-youtube-for-childrens-content-news-how-to-lessons/. https://www.pewresearch.org/internet/2018/11/07/many-turn-to-youtube-for-childrens-content-news-how-to-lessons/ Accessed: 2020-3-3.
* Sniehotta et al. (2005) Falko F Sniehotta, Urte Scholz, and Ralf Schwarzer. 2005\. Bridging the intention–behaviour gap: Planning, self-efficacy, and action control in the adoption and maintenance of physical exercise. _Psychology & Health_ 20, 2 (April 2005), 143–160. https://doi.org/10.1080/08870440512331317670
* Solsman (2018) Joan E Solsman. 2018\. Ever get caught in an unexpected hourlong YouTube binge? Thank YouTube AI for that. https://www.cnet.com/news/youtube-ces-2018-neal-mohan/. https://www.cnet.com/news/youtube-ces-2018-neal-mohan/ Accessed: 2020-5-1.
* Spangler ([n.d.]) Todd Spangler. [n.d.]. YouTube Tops 20 Million Paying Subscribers, YouTube TV Has Over 2 Million Customers. https://variety.com/2020/digital/news/youtube-tops-20-million-paying-subscribers-youtube-tv-has-over-2-million-customers-1203491228/. https://variety.com/2020/digital/news/youtube-tops-20-million-paying-subscribers-youtube-tv-has-over-2-million-customers-1203491228/ Accessed: 2020-8-26.
* Stanphill (2019) Maggie Stanphill. 2019\. Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms. U.S. Senate Committee Hearing.
* Statt (2016) Nick Statt. 2016\. Flowstate is a writing app that will delete everything if you stop typing. https://www.theverge.com/2016/1/28/10853534/flowstate-writing-app-mac-ios-delete-everything. https://www.theverge.com/2016/1/28/10853534/flowstate-writing-app-mac-ios-delete-everything Accessed: 2020-8-10.
* Sundar and Marathe (2010) S Shyam Sundar and Sampada S Marathe. 2010. Personalization versus Customization: the Importance of Agency, Privacy, and Power Usage. _Human communication research_ 36, 3 (July 2010), 298–322. https://doi.org/10.1111/j.1468-2958.2010.01377.x
* Synofzik et al. (2008) Matthis Synofzik, Gottfried Vosgerau, and Albert Newen. 2008\. Beyond the comparator model: a multifactorial two-step account of agency. _Consciousness and cognition_ 17, 1 (March 2008), 219–239. https://doi.org/10.1016/j.concog.2007.03.010
* Tapal et al. (2017) Adam Tapal, Ela Oren, Reuven Dar, and Baruch Eitam. 2017\. The Sense of Agency Scale: A Measure of Consciously Perceived Control over One’s Mind, Body, and the Immediate Environment. _Frontiers in psychology_ 8 (Sept. 2017), 1552\. https://doi.org/10.3389/fpsyg.2017.01552
* Tran et al. (2019) Jonathan A Tran, Katherine S Yang, Katie Davis, and Alexis Hiniker. 2019. Modeling the Engagement-Disengagement Cycle of Compulsive Phone Use. In _CHI ’19_ (Glasgow). https://doi.org/10.1145/3290605.3300542
* United States Census Bureau ([n.d.]) United States Census Bureau. [n.d.]. QuickFacts: United States. https://www.census.gov/quickfacts/fact/table/US/PST045219
* Van Kessel et al. (2019) Patrick Van Kessel, Skye Toor, and Aaron Smith. 2019. A Week in the Life of Popular YouTube Channels. https://www.pewresearch.org/internet/2019/07/25/a-week-in-the-life-of-popular-youtube-channels/. https://www.pewresearch.org/internet/2019/07/25/a-week-in-the-life-of-popular-youtube-channels/ Accessed: 2020-4-1.
* Williams (2018) James Williams. 2018\. _Stand Out of Our Light: Freedom and Resistance in the Attention Economy_. Cambridge University Press. https://play.google.com/store/books/details?id=88FWDwAAQBAJ
* Wu et al. (2019) Eva Yiwei Wu, Emily Pedersen, and Niloufar Salehi. 2019\. Agent, Gatekeeper, Drug Dealer: How Content Creators Craft Algorithmic Personas. _Proc. ACM Hum. -Comput. Interact._ 3, CSCW (Nov. 2019), 219:1–219:27. https://doi.org/10.1145/3359321
* YouTube ([n.d.]) YouTube. [n.d.]. YouTube for Press. https://www.youtube.com/about/press/. https://www.youtube.com/about/press/ Accessed: 2020-8-14.
* Zagal et al. (2013) José P Zagal, Staffan Björk, and Chris Lewis. 2013\. Dark Patterns in the Design of Games. In _Foundations of Digital Games 2013_. diva-portal.org. http://www.diva-portal.org/smash/record.jsf?pid=diva2:1043332
* Zimmerman and Forlizzi (2014) John Zimmerman and Jodi Forlizzi. 2014. Research Through Design in HCI. In _Ways of Knowing in HCI_ , Judith S Olson and Wendy A Kellogg (Eds.). Springer New York, New York, NY, 167–189. https://doi.org/10.1007/978-1-4939-0378-8_8
|
# A characterization of $p$-minimal surfaces in the Heisenberg group $H_{1}$
Hung-Lin Chiu and Hsiao-Fan Liu
Department of Mathematics, National Tsing Hua University, Hsinchu, Taiwan 300, R.O.C. <EMAIL_ADDRESS>
Department of Mathematics, TamKang University, New Taipei City 25137, Taiwan, R.O.C.
<EMAIL_ADDRESS>
###### Abstract.
In Euclidean $3$-space, it is well known that the Sine-Gordon equation was
considered in the nineteenth century in the course of investigations of
surfaces of constant Gaussian curvature $K=-1$. Such a surface can be
constructed from a solution to the Sine-Gordon equation, and vice versa. With
this as motivation, by means of the fundamental theorem of surfaces in the
Heisenberg group $H_{1}$, we show in this paper that the existence of a
constant $p$-mean curvature surface (without singular points) is equivalent to
the existence of a solution to a nonlinear second-order ODE (1.2), which is a
kind of Liénard equation. We therefore turn to investigate this equation.
Somewhat surprisingly, we obtain a complete set of solutions to (1.2) (or
(1.5)) in the $p$-minimal case, and we use the types of the solutions to
divide $p$-minimal surfaces into several classes. As a result, we obtain a
representation of $p$-minimal surfaces and go on to classify all $p$-minimal
surfaces. In Section 9, we provide an approach to constructing $p$-minimal
surfaces. It turns out that, in some sense, generic $p$-minimal surfaces can
be constructed via this approach. Finally, as an application, we recover the
Bernstein-type theorem which was first shown in [3] (or see [6, 7]).
###### Key words and phrases:
Heisenberg group, Pansu sphere, $p$-minimal surface, Liénard equation, Bernstein theorem
###### 1991 Mathematics Subject Classification:
Primary: 53A10, 53C42, 53C22, 34A26.
###### Contents
1. 1 Introduction and main results
2. 2 The Fundamental Theorem for surfaces in $H_{1}$
3. 3 Constant $p$-mean curvature surfaces
4. 4 Solutions to the Liénard equation
5. 5 The classification of $p$-minimal surfaces
6. 6 A representation of $p$-minimal surfaces
7. 7 Examples of $p$-minimal surfaces
8. 8 Structures of singular sets of $p$-minimal surfaces
9. 9 An approach to construct $p$-minimal surfaces
## 1\. Introduction and main results
In the literature, the Heisenberg group and its sub-Laplacian appear in many
fields, including analysis and sub-Riemannian geometry, control theory, and
the semiclassical analysis of quantum mechanics (cf. [17, 18, 19, 20, 21]).
They also have applications in signal analysis and geometric optics [30, 31, 32].
Sub-Riemannian geometry and its analytic consequences, in particular
geodesics, have been studied widely and extensively in the past ten years (cf.
[18, 22, 23, 24, 25, 26, 28, 27, 29]). In this paper, the Heisenberg group is
studied as a pseudo-hermitian manifold. Like Euclidean geometry, it is a branch
of Klein geometries, and the corresponding Cartan geometry is pseudo-hermitian
geometry.
Recall that the Heisenberg group $H_{1}$ is the space $\mathbb{R}^{3}$
associated with the group multiplication
$(x_{1},y_{1},z_{1})\circ(x_{2},y_{2},z_{2})=(x_{1}+x_{2},y_{1}+y_{2},z_{1}+z_{2}+y_{1}x_{2}-x_{1}y_{2}),$
which is a $3$-dimensional Lie group. The space of all left invariant vector
fields is spanned by the following three vector fields:
$\mathring{e}_{1}=\frac{\partial}{\partial x}+y\frac{\partial}{\partial z},\quad\mathring{e}_{2}=\frac{\partial}{\partial y}-x\frac{\partial}{\partial z}\quad\text{and}\quad T=\frac{\partial}{\partial z}.$
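As an independent sanity check (not part of the paper's argument), the left invariance of $\mathring{e}_{1},\mathring{e}_{2},T$ under the group law above can be verified symbolically. The following SymPy sketch, with symbol names of our choosing, pushes each field forward by a left translation and compares with its value at the translated point.

```python
import sympy as sp

x, y, z, a1, a2, a3 = sp.symbols('x y z a1 a2 a3')

# Left translation L_a(p) = a o p for a = (a1, a2, a3), p = (x, y, z),
# using the group law of H_1.
La = sp.Matrix([a1 + x, a2 + y, a3 + z + a2*x - a1*y])
Jac = La.jacobian([x, y, z])  # differential of L_a

# The frame fields in coordinates (x, y, z).
e1 = sp.Matrix([1, 0, y])    # d/dx + y d/dz
e2 = sp.Matrix([0, 1, -x])   # d/dy - x d/dz
T  = sp.Matrix([0, 0, 1])    # d/dz

# Left invariance: (L_a)_* V|_p = V|_{L_a(p)} for each field V.
at_Lap = {x: La[0], y: La[1], z: La[2]}
for V in (e1, e2, T):
    assert (Jac*V - V.subs(at_Lap, simultaneous=True)).expand() == sp.zeros(3, 1)
print("e1, e2 and T are left invariant")
```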
The standard contact bundle on $H_{1}$ is the subbundle $\xi$ of the tangent
bundle $TH_{1}$ which is spanned by $\mathring{e}_{1}$ and $\mathring{e}_{2}$.
It is also defined to be the kernel of the contact form
$\Theta=dz+xdy-ydx.$
The CR structure on $H_{1}$ is the endomorphism $J:\xi\to\xi$ defined by
$J(\mathring{e}_{1})=\mathring{e}_{2}\quad\text{and}\quad J(\mathring{e}_{2})=-\mathring{e}_{1}.$
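A quick symbolic check (ours, not the paper's): $\mathring{e}_{1}$ and $\mathring{e}_{2}$ indeed lie in the kernel of $\Theta=dz+xdy-ydx$, and $J$ squares to $-\mathrm{id}$ on $\xi$. The sketch below uses SymPy.

```python
import sympy as sp

x, y = sp.symbols('x y')

def Theta(v):
    """The contact form dz + x dy - y dx evaluated on v = (vx, vy, vz)."""
    vx, vy, vz = v
    return vz + x*vy - y*vx

e1 = (1, 0, y)    # d/dx + y d/dz
e2 = (0, 1, -x)   # d/dy - x d/dz

# e1 and e2 lie in (and span) the kernel of Theta, i.e. the contact bundle xi.
assert sp.simplify(Theta(e1)) == 0
assert sp.simplify(Theta(e2)) == 0

# In the frame (e1, e2), J acts as a rotation by 90 degrees, so J^2 = -id on xi.
Jmat = sp.Matrix([[0, -1], [1, 0]])  # J e1 = e2, J e2 = -e1
assert Jmat**2 == -sp.eye(2)
```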
One can view $H_{1}$ as a pseudohermitian manifold with $(J,\Theta)$ as the
standard pseudohermitian structure. There is a natural associated connection
$\nabla$ if we regard all these left invariant vector fields
$\mathring{e}_{1},\mathring{e}_{2}$ and $T$ as parallel vector fields. A
natural associated metric on $H_{1}$ is the adapted metric $g_{\Theta}$, which
is defined by $g_{\Theta}=d\Theta(\cdot,J\cdot)+\Theta^{2}$. It is equivalent
to define the metric by regarding $\mathring{e}_{1},\mathring{e}_{2}$ and $T$
as an orthonormal frame field. We sometimes use $\langle\cdot,\cdot\rangle$ to denote the
adapted metric. In this paper, we use the adapted metric to measure the
lengths and angles of vectors, and so on.
A pseudohermitian transformation (or a Heisenberg rigid motion) in $H_{1}$ is
a diffeomorphism in $H_{1}$ which preserves the standard pseudohermitian
structure $(J,\Theta)$. We let $PSH(1)$ be the group of Heisenberg rigid
motions, that is, the group of all pseudohermitian transformations in $H_{1}$.
For details of this group, we refer readers to [5], which is the first
published paper where the fundamental theorem in the Heisenberg groups has
been studied. We say that two surfaces are congruent if they differ by an
action of a Heisenberg rigid motion.
The concept of minimal surfaces or constant mean curvature surfaces plays an
important role in differential geometry in the study of basic properties of
manifolds. There is an analogous concept in pseudo-hermitian manifolds, namely
that of $p$-minimal surfaces. In this paper, we focus on studying such
surfaces in the Heisenberg group $H_{1}$. Throughout this article, all
objects we discuss are assumed to be $C^{\infty}$ smooth, unless otherwise specified.
Suppose $\Sigma$ is a surface in the Heisenberg group $H_{1}$. There is a
metric $I$ on $\Sigma$ induced from the adapted metric $g_{\Theta}$.
This induced metric is defined on the whole surface $\Sigma$ and is called the
first fundamental form of $\Sigma$. The intersection $T\Sigma\cap\xi$
integrates to a singular foliation on $\Sigma$, called the characteristic
foliation; each leaf is called a characteristic curve. A point $p\in\Sigma$ is
called a singular point if the tangent plane $T_{p}\Sigma$ coincides
with the contact plane $\xi_{p}$; otherwise, $p$ is called a regular (or non-
singular) point. Generically, a point $p\in\Sigma$ is a regular point, and the
set of all regular points is called the regular part of $\Sigma$. On the
regular part, we are able to choose a unit vector field $e_{1}$ such that
$e_{1}$ defines the characteristic foliation. The vector $e_{1}$ is determined
up to a sign. Let $e_{2}=Je_{1}$. Then $\\{e_{1},e_{2}\\}$ forms an
orthonormal frame field of the contact bundle $\xi$. We usually call the
vector field $e_{2}$ a horizontal vector field. Then the $p$-mean curvature
$H$ of the surface $\Sigma$ is defined by
(1.1) $\nabla_{e_{1}}e_{2}=-He_{1}.$
The $p$-mean curvature $H$ is only defined on the regular part of $\Sigma$.
There are two more equivalent ways to define the $p$-mean curvature from the
point of view of variation and a level surface (see [3, 33]). We remark that
this notion of mean curvature was proposed by J.-H. Cheng, J.-F. Hwang, A.
Malchiodi and P. Yang from the geometric point of view to generalize the one
introduced first by S. Pauls in $H_{1}$ for graphs over the $xy$-plane [34].
Also, in [35], M. Ritoré and C. Rosales gave another method for computing the
mean curvature of a hypersurface. If $H=0$ on the whole regular part, we call
the surface a $p$-minimal surface. The $p$-mean curvature is actually the
curvature of a characteristic curve, and hence the characteristic curves of a
$p$-minimal surface are straight lines (for details, see [3]). There also exists a function
$\alpha$ defined on the regular part such that $\alpha e_{2}+T$ is tangent to
the surface $\Sigma$. We call this function the $\alpha$-function of $\Sigma$.
It is uniquely determined up to a sign, which depends on the choice of the
characteristic direction $e_{1}$. Define $\hat{e}_{1}=e_{1}$ and
$\hat{e}_{2}=\frac{\alpha e_{2}+T}{\sqrt{1+\alpha^{2}}}$, then
$\\{\hat{e}_{1},\hat{e}_{2}\\}$ forms an orthonormal frame field of the
tangent bundle $T\Sigma$. Notice that $\hat{e}_{2}$ is uniquely determined and
independent of the choice of the characteristic direction $e_{1}$. In [5], it
was shown that these four invariants,
$I,e_{1},\alpha,H,$
form a complete set of invariants for surfaces in $H_{1}$. We remark that all
the results provided in [5] still hold in the $C^{2}$-category. For each
regular point $p$, we can choose a suitable coordinates around $p$ to study
the local geometry of such surfaces. Actually, there always exists a
coordinate system $(x,y)$ of $p$ such that
$e_{1}=\frac{\partial}{\partial x}.$
We call such coordinates $(x,y)$ a compatible coordinate system (see Figure
1.1). It is determined up to a transformation of the form (2.19). Notice that
compatible coordinate systems depend on the choice of the characteristic direction
$e_{1}$.
Figure 1.1. A compatible coordinate system with $\frac{\partial}{\partial x}=e_{1}$
Let $\Sigma\subset H_{1}$ be a constant $p$-mean curvature surface with $H=c$.
Then in terms of a compatible coordinate system $(U;x,y)$, the
$\alpha$-function satisfies the following equation
(1.2) $\alpha_{xx}+6\alpha\alpha_{x}+4\alpha^{3}+c^{2}\alpha=0,$
which first appeared as a Codazzi-like equation $(1.12)$ in [4] with
$D=-1/{\alpha}$.
It is a nonlinear ordinary differential equation and in fact belongs to the
class of so-called Liénard equations, named after the French physicist
Alfred-Marie Liénard. Liénard equations have been intensely studied because
they can be used to model oscillating circuits. Conversely, in this paper, we show that if there
exists a solution $\alpha(x,y)$ to the Liénard equation (1.2), we are able to
construct a constant $p$-mean curvature surface with $H=c$ and this given
solution $\alpha$ as its $\alpha$-function. This result is summarized as
Theorem 1.1. One motivation of this theorem comes from the famous Sine-Gordon
Equation (SGE),
$u_{tt}-u_{xx}=\sin(u)\cos(u),$
which is considerably older than the Korteweg de Vries Equation (KdV). It was
discovered in the nineteenth century in the study of pseudospherical surfaces,
that is, surfaces of Gaussian curvature $K=-1$ immersed in
$\operatorname{\mathbb{R}}^{3}$, and it was intensively studied for this
reason. It arises from the Gauss-Codazzi equations for pseudospherical
surfaces in $\operatorname{\mathbb{R}}^{3}$ and is known as an integrable
equation [12]. In addition, it can also be viewed as a continuum limit [13].
Its solutions and solitons have been widely discussed by the Inverse
Scattering Transform and other approaches.
There is a bijective correspondence between solutions $u$ of the SGE whose
image is contained in $(0,\frac{\pi}{2})$ and classes of pseudospherical
surfaces in $\operatorname{\mathbb{R}}^{3}$ up to rigid motion. If
$u:\operatorname{\mathbb{R}}^{2}\rightarrow\operatorname{\mathbb{R}}$ is a
solution such that $\sin u\cos u$ vanishes at some point, then the
immersed pseudospherical surface has cusp singularities. For example, the
pseudospherical surfaces corresponding to the 1-soliton solutions of the SGE are
the so-called Dini surfaces and have a helix of singularities.
The study of line congruences gives rise to the concept of Bäcklund
transformations. A line congruence $L:M\rightarrow M^{*}$ is called a Bäcklund
transformation if the angle between the normal to $M$ at $p$ and the normal to
$M^{*}$ at $p^{*}=L(p)$ is a constant $\theta$, and the distance between $p$
and $p^{*}$ is $\sin\theta$, for all $p\in M$. The classical Bäcklund
transformation for the Sine-Gordon equation was constructed in the nineteenth
century by the Swedish differential geometer Albert Bäcklund by means of a
geometric construction [14, 15, 16]. We are thus motivated to seek analogous
theorems for the Heisenberg group.
###### Theorem 1.1.
The existence of a constant $p$-mean curvature surface $($without singular
points$)$ in $H_{1}$ is equivalent to the existence of a solution to the
equation (1.2).
In this article, we sometimes refer to the Liénard equation (1.2) as the
Codazzi-like equation, from the geometrical point of view [4, 5]. We also
point out that a graph $(x,y,u(x,y))$ is $p$-minimal if it satisfies the
$p$-minimal equation (see [3])
(1.3) $(u_{y}+x)^{2}u_{xx}-2(u_{y}+x)(u_{x}-y)u_{xy}+(u_{x}-y)^{2}u_{yy}=0.$
This is a degenerate hyperbolic and elliptic partial differential equation.
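As a small illustration (ours, not the paper's), equation (1.3) can be checked symbolically on simple graphs: every plane $u=ax+by+c$ satisfies it, as does the graph $u=xy$. The sketch below uses SymPy.

```python
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c')

def p_minimal_lhs(u):
    """Left-hand side of the p-minimal graph equation (1.3)."""
    ux, uy = sp.diff(u, x), sp.diff(u, y)
    uxx = sp.diff(u, x, 2)
    uxy = sp.diff(u, x, y)
    uyy = sp.diff(u, y, 2)
    return (uy + x)**2*uxx - 2*(uy + x)*(ux - y)*uxy + (ux - y)**2*uyy

# Every plane u = a*x + b*y + c satisfies (1.3) ...
assert sp.simplify(p_minimal_lhs(a*x + b*y + c)) == 0
# ... and so does the graph u = x*y.
assert sp.simplify(p_minimal_lhs(x*y)) == 0
```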
Theorem 1.1 follows from Theorem 1.2, which is another version of the
fundamental theorem for surfaces in $H_{1}$, obtained after a detailed
investigation of the original version of the integrability conditions (2.1);
it is therefore, in some sense, more useful than the previous one (for the
original version, we refer readers to [5] or Theorem 2.1 of this paper). The
following theorem also appeared as Theorem H in [4], with a slightly different
formulation, as the authors of [4] did not prescribe the metric.
###### Theorem 1.2.
Let $\alpha(x,y)$ and $H(x,y)$ be two arbitrary smooth functions on a
coordinate neighborhood $(U;x,y)\subset\operatorname{\mathbb{R}}^{2}$. If they
satisfy the following integrability condition
(1.4) $\left(h(y)-\int H\alpha\left(e^{\int 2\alpha
dx}\right)dx\right)H_{x}+e^{k(y)}H_{y}=\left(e^{\int 2\alpha
dx}\right)(\alpha_{xx}+6\alpha\alpha_{x}+4\alpha^{3}+\alpha H^{2}),$
for some functions $k(y)$ and $h(y)$ in the variable $y$, then there exists an
embedding $X:U\rightarrow H_{1}$ $($provided that $U$ is small enough$)$ such
that the surface $\Sigma=X(U)$ has $H$ and $\alpha$ as its $p$-mean curvature
and $\alpha$-function, respectively, and $\hat{e}_{1}=\frac{\partial}{\partial
x}$, $\hat{e}_{2}=a(x,y)\frac{\partial}{\partial
x}+b(x,y)\frac{\partial}{\partial y}$ with $a$ and $b$ defined as (2.15) and
(2.18). In addition, such embeddings are unique, up to a Heisenberg rigid
motion.
In particular, if $H=c$ for some constant $c$, then (1.4) reduces to (1.2), for
each pair of functions $k(y)$ and $h(y)$. We hence have the following
fundamental theorem for constant $p$-mean curvature surfaces in $H_{1}$, which
implies Theorem 1.1.
###### Theorem 1.3.
Let $\alpha(x,y)$ be an arbitrary smooth function on a coordinate neighborhood
$(U,x,y)\subset\operatorname{\mathbb{R}}^{2}$. If $\alpha(x,y)$ satisfies the
Codazzi-like equation (1.2), then there exists an embedding $X:U\rightarrow
H_{1}$ $($provided that $U$ is small enough$)$ such that the surface
$\Sigma=X(U)$ is a constant $p$-mean curvature surface with $H=c$ and the
given function $\alpha(x,y)$ as its $\alpha$-function, and
$\hat{e}_{1}=\frac{\partial}{\partial x}$. In addition, such a surface depends
upon two functions $k(y)$ and $h(y)$ of $y$, which, together with $c,\alpha$,
describe the induced metric with $\hat{e}_{2}=a(x,y)\frac{\partial}{\partial
x}+b(x,y)\frac{\partial}{\partial y}$. Here $a$ and $b$ are specified as
(2.15) and (2.18) with $H=c$.
Theorem 1.2 follows from our detailed study on the integrability condition
(see (2.1)) of the fundamental theorem (Theorem 2.1) for surfaces in $H_{1}$.
Actually, if we let $\hat{\omega}_{1}{}^{2}$ be the Levi-Civita connection
associated to the induced metric with respect to the orthonormal frame field
$\\{\hat{e}_{1},\hat{e}_{2}\\}$, as specified in Theorem 1.2, then (1.4) means
that $\hat{\omega}_{1}{}^{2},\alpha$, and $H$ satisfy the integrability
condition (2.1). This is equivalent to saying that $a,b,\alpha$ and $H$
satisfy the integrability condition (2.13) (see Subsection 2.2), which is
another version of (2.1). We therefore have Theorem 1.2.
Given a function $\alpha(x,y)$ in a coordinate neighborhood
$(U;x,y)\subset\operatorname{\mathbb{R}}^{2}$ which satisfies the Codazzi-like
equation (1.2), we are able to construct a family of constant $p$-mean
curvature surfaces. This suggests a good strategy for investigating constant
$p$-mean curvature surfaces, in particular $p$-minimal surfaces, by means of
the Codazzi-like equation (1.2). In this paper, we will focus on
the theory of $p$-minimal surfaces. Strategically, we first study the equation
(1.2) with $c=0$, that is,
(1.5) $\alpha_{xx}+6\alpha\alpha_{x}+4\alpha^{3}=0.$
For nonlinear ordinary differential equations, it is rarely possible to find
explicit solutions in closed form, or even in power series. Fortunately, we
obtain a complete set of solutions to (1.5) in a simple form (see Section 4).
This is the content of the following theorem.
###### Theorem 1.4.
Besides the following three special solutions to (1.5),
$\alpha(x)=0,\ \frac{1}{x+c_{1}},\ \frac{1}{2(x+c_{1})},$
we have the general solution to (1.5) of the form
(1.6) $\alpha(x)=\frac{x+c_{1}}{(x+c_{1})^{2}+c_{2}},$
which depends on two constants $c_{1}$ and $c_{2}$ with $c_{2}\neq 0$.
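All four solution families in Theorem 1.4 can be verified directly by substitution into (1.5). The following SymPy sketch (an independent check, not part of the proof) does exactly that.

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

def lienard_lhs(alpha):
    """Left-hand side of (1.5): alpha_xx + 6 alpha alpha_x + 4 alpha^3."""
    return sp.diff(alpha, x, 2) + 6*alpha*sp.diff(alpha, x) + 4*alpha**3

candidates = [
    sp.Integer(0),                    # alpha = 0
    1/(x + c1),                       # first special solution
    1/(2*(x + c1)),                   # second special solution
    (x + c1)/((x + c1)**2 + c2),      # the general solution (1.6)
]
for alpha in candidates:
    assert sp.simplify(lienard_lhs(alpha)) == 0
print("all four solution families satisfy (1.5)")
```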
In Subsection 5.1, we use the types of the solutions in Theorem 1.4 to divide
$p$-minimal surfaces into several classes: vertical, special type I, special
type II, and general type (see Definitions 5.1 and 5.2). Each of these types
of $p$-minimal surfaces is open and contains no singular points. Generically,
each $p$-minimal surface is a union of surfaces of these types, and the type
can be shown to be invariant under the action of a Heisenberg rigid motion.
For each type, whether special or general, if a function $\alpha$ is given,
then Propositions 5.3, 5.4 and 5.5 give formulae for the induced metric $a,b$
(see (5.5), (5.6) and (5.7)), which is a representation of $I$, on the
$p$-minimal surfaces with this given $\alpha$ as $\alpha$-function. From
these formulae, we see that the $p$-minimal surfaces so constructed depend
upon two functions $k(y)$ and $h(y)$ for each given $\alpha$. In Section 6,
we proceed to normalize these invariants to the following normal forms in
terms of an orthogonal coordinate system $(x,y)$, that is, a coordinate
system in which $a=0$. Such a coordinate system is determined up to a
translation in $(x,y)$; we therefore call it a normal coordinate system.
###### Theorem 1.5.
Let $\Sigma\subset H_{1}$ be a $p$-minimal surface. Then, in terms of a normal
coordinate system $(x,y)$, we can normalize the $\alpha$-function and the
induced metric $a,b$ to be the following normal forms:
1. (1)
$\alpha=\frac{1}{x+\zeta_{1}(y)}$, and
$a=0,b=\frac{\alpha^{2}}{\sqrt{1+\alpha^{2}}}$ if $\Sigma$ is of special type
I,
2. (2)
$\alpha=\frac{1}{2x+\zeta_{1}(y)}$, and
$a=0,b=\frac{|\alpha|}{\sqrt{1+\alpha^{2}}}$ if $\Sigma$ is of special type
II,
3. (3)
$\alpha=\frac{x+\zeta_{1}(y)}{(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)}$, and
$a=0,b=\frac{|\alpha|}{|x+\zeta_{1}(y)|\sqrt{1+\alpha^{2}}}$ if $\Sigma$ is of
general type,
for some functions $\zeta_{1}(y)$ and $\zeta_{2}(y)$, where $\zeta_{2}(y)$
is unique up to a translation in $y$, and $\zeta_{1}(y)$ is unique up to a
translation in $y$ as well as in its image.
Therefore, $\zeta_{1}(y)$ constitutes a complete set of invariants for
$p$-minimal surfaces of special type I (or of special type II), and
$\zeta_{1}(y)$ together with $\zeta_{2}(y)$ constitutes a complete set of
invariants for $p$-minimal surfaces of general type. We hence obtain a
representation of $p$-minimal surfaces (see Section 6). From Theorem 1.5,
together with Theorem 1.2 and Theorem 1.3, the following version of the
fundamental theorem for $p$-minimal surfaces in $H_{1}$ follows immediately.
###### Theorem 1.6.
Given two arbitrary functions $\zeta_{1}(y)$ and $\zeta_{2}(y)$ defined on
$(c,d)\subset\operatorname{\mathbb{R}}$, with $\zeta_{2}(y)\neq 0$ for all
$y\in(c,d)$ (note that $(c,d)$ may be the whole line
$\operatorname{\mathbb{R}}$), we have:
1. (1)
there exists an open set
$U\subset(e,f)\times(c,d)\subset(\operatorname{\mathbb{R}}^{2};x,y)$, for some
$(e,f)\subset\operatorname{\mathbb{R}}$, and an embedding $X:U\rightarrow
H_{1}$ such that $\Sigma=X(U)$ is a $p$-minimal surface of special type I
$($or of special type II$)$ with $(x,y)$ as a normal coordinate system and
$\zeta_{1}(y)$ as its $\zeta_{1}$-invariant;
2. (2)
there exists an open set
$U\subset(e,f)\times(c,d)\subset(\operatorname{\mathbb{R}}^{2};x,y)$, for some
$(e,f)\subset\operatorname{\mathbb{R}}$, and an embedding $X:U\rightarrow
H_{1}$ such that $\Sigma=X(U)$ is a $p$-minimal surface of general type with
$(x,y)$ as a normal coordinate system and $\zeta_{1}(y)$ and $\zeta_{2}(y)$ as
its $\zeta_{1}$\- and $\zeta_{2}$-invariants.
Moreover, such embeddings in $(1)$ and $(2)$ are unique, up to a Heisenberg
rigid motion.
Due to Theorem 1.6, for each pair of functions $\zeta_{1}(y)$ and
$\zeta_{2}(y)$, we define in Subsection 6.3 eight maximal $p$-minimal surfaces
in the sense specified in Theorem 1.7. Roughly speaking, it says that any
connected $p$-minimal surface of a given type is a part of one of these eight
classes of $p$-minimal surfaces. Notice that, generically, a $p$-minimal
surface is a union of $p$-minimal surfaces with a type.
###### Theorem 1.7.
Given two arbitrary functions $\zeta_{1}(y)$ and $\zeta_{2}(y)$ defined on
$(c,d)\subset\operatorname{\mathbb{R}}$, with $\zeta_{2}(y)\neq 0$ for all
$y\in(c,d)$ $($note that $(c,d)$ may be the whole line
$\operatorname{\mathbb{R}}$$)$, all eight $p$-minimal surfaces
(1.7) $\begin{split}&S_{I}^{-}(\zeta_{1}),\ S_{I}^{+}(\zeta_{1}),\
S_{II}^{-}(\zeta_{1}),\ S_{II}^{+}(\zeta_{1});\ \textrm{and}\\\
&\Sigma_{I}(\zeta_{1},\zeta_{2}),\ \Sigma_{II}^{-}(\zeta_{1},\zeta_{2}),\
\Sigma_{II}^{+}(\zeta_{1},\zeta_{2})\ \textrm{and}\
\Sigma_{III}(\zeta_{1},\zeta_{2})\end{split}$
are immersed; in addition, they are maximal in the following sense:
* •
Any connected $p$-minimal surface of special type I with $\zeta_{1}(y)$ as the
$\zeta_{1}$-invariant is a part of either $S_{I}^{-}(\zeta_{1})$ or
$S_{I}^{+}(\zeta_{1})$.
* •
Any connected $p$-minimal surface of special type II with $\zeta_{1}(y)$ as
the $\zeta_{1}$-invariant is a part of either $S_{II}^{-}(\zeta_{1})$ or
$S_{II}^{+}(\zeta_{1})$.
* •
Any connected $p$-minimal surface of type I with $\zeta_{1}(y)$ and
$\zeta_{2}(y)$ as the $\zeta_{1}$\- and $\zeta_{2}$-invariants is a part of
$\Sigma_{I}(\zeta_{1},\zeta_{2})$.
* •
Any connected $p$-minimal surface of type II with $\zeta_{1}(y)$ and
$\zeta_{2}(y)$ as the $\zeta_{1}$\- and $\zeta_{2}$-invariants is a part of
either $\Sigma_{II}^{-}(\zeta_{1},\zeta_{2})$ or
$\Sigma_{II}^{+}(\zeta_{1},\zeta_{2})$.
* •
Any connected $p$-minimal surface of type III with $\zeta_{1}(y)$ and
$\zeta_{2}(y)$ as the $\zeta_{1}$\- and $\zeta_{2}$-invariants is a part of
$\Sigma_{III}(\zeta_{1},\zeta_{2})$.
As applications of this theory, in Section 8, we give a complete description
of the structure of the singular sets of $p$-minimal surfaces in the
Heisenberg group $H_{1}$.
###### Theorem 1.8.
The singular set of a $p$-minimal surface is either
1. (1)
an isolated point; or
2. (2)
a $C^{1}$ smooth curve.
In addition, an isolated singular point occurs only on surfaces of special
type I with $\zeta_{1}=\textrm{const.}$, that is, on a part of the graph
$u=0$ containing the origin as the isolated singular point.
Actually, the result in Theorem 1.8 is just a special case of Theorem 3.3 in
[3]. However, we give a computable proof of this result for $p$-minimal
surfaces. We also describe how a characteristic leaf goes through a singular
curve, which is called a "go through" theorem in [3].
###### Theorem 1.9.
Let $\Sigma\subset H_{1}$ be a $p$-minimal surface. Then the characteristic
foliation is smooth around the singular curve, in the sense that each leaf
can be extended smoothly to a point on the singular curve.
Due to Theorem 1.9, we have the following result.
###### Theorem 1.10.
Let $\Sigma$ be a $p$-minimal surface of type II $($III$)$. If it can be
smoothly extended through the singular curve, then the other side of the
singular curve is of type III $($II$)$.
Theorem 1.10 is the key that enables us to recover the Bernstein-type theorem
(see Section 8), which was first shown in the original paper [3] (or see
[1, 6, 7]), and says that
$u(x,y)=Ax+By+C,$
for some constants $A,B,C\in\operatorname{\mathbb{R}}$, and
$u(x,y)=-ABx^{2}+(A^{2}-B^{2})xy+ABy^{2}+g(-Bx+Ay),$
where $A,B$ are constants such that $A^{2}+B^{2}=1$ and $g\in
C^{\infty}(\operatorname{\mathbb{R}})$, are the only two classes of entire
smooth solutions to the $p$-minimal graph equation (1.3). In addition, in
Section 7, we present some basic examples which, in particular, help to
illustrate the Bernstein-type theorem.
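The two Bernstein families can be verified symbolically. The sketch below assumes that (1.3) is the standard nondegenerate form of the $p$-minimal graph equation, $(u_{y}+x)^{2}u_{xx}-2(u_{x}-y)(u_{y}+x)u_{xy}+(u_{x}-y)^{2}u_{yy}=0$ (this explicit form is an assumption, since (1.3) itself is not reproduced in this section); the values $A=3/5$, $B=4/5$ are illustrative choices with $A^{2}+B^{2}=1$.

```python
import sympy as sp

x, y = sp.symbols('x y')
g = sp.Function('g')

def p_minimal_graph_eq(u):
    # Assumed nondegenerate form of the p-minimal graph equation (1.3):
    # (u_y + x)^2 u_xx - 2(u_x - y)(u_y + x) u_xy + (u_x - y)^2 u_yy.
    ux, uy = u.diff(x), u.diff(y)
    return ((uy + x)**2*u.diff(x, 2)
            - 2*(ux - y)*(uy + x)*u.diff(x, y)
            + (ux - y)**2*u.diff(y, 2))

# First family: affine solutions u = Ax + By + C.
A, B, C = sp.symbols('A B C')
res1 = sp.simplify(p_minimal_graph_eq(A*x + B*y + C))

# Second family with A^2 + B^2 = 1 (illustrative: A = 3/5, B = 4/5)
# and an arbitrary smooth function g.
A0, B0 = sp.Rational(3, 5), sp.Rational(4, 5)
u2 = -A0*B0*x**2 + (A0**2 - B0**2)*x*y + A0*B0*y**2 + g(-B0*x + A0*y)
res2 = sp.simplify(sp.expand(p_minimal_graph_eq(u2)))
```

Both residues vanish identically, for any choice of the arbitrary function $g$.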
Finally, in Section 9, depending on a parametrized curve
$\mathcal{C}(\theta)=(x(\theta),y(\theta),z(\theta))$ for
$\theta\in\operatorname{\mathbb{R}}$, we deform the graph $u=0$ in some way to
construct $p$-minimal surfaces with parametrization
(1.8)
$Y(r,\theta)=(x(\theta)+r\cos{\theta},y(\theta)+r\sin{\theta},z(\theta)+ry(\theta)\cos{\theta}-rx(\theta)\sin{\theta}),$
for $r\in\operatorname{\mathbb{R}}$. It is easy to see that $Y$ is an
immersion if and only if either
$\Theta(\mathcal{C}^{\prime}(\theta))-(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2}\neq
0\ \textrm{or}\
r+(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})\neq 0$ for
all $\theta$ (see Remark 9.2). In particular, we have
###### Theorem 1.11.
The surface $Y$ defines a $p$-minimal surface of special type I if the curve
$\mathcal{C}$ satisfies
(1.9)
$z^{\prime}(\theta)+x(\theta)y^{\prime}(\theta)-y(\theta)x^{\prime}(\theta)-(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2}=0,$
for all $\theta$, or equivalently
(1.10)
$z(\theta)=\int\big{[}(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2}+y(\theta)x^{\prime}(\theta)-x(\theta)y^{\prime}(\theta)\big{]}d\theta.$
In addition, the corresponding $\zeta_{1}$-invariant reads
(1.11)
$\zeta_{1}(\theta)=y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta}-\int\big{[}x^{\prime}(\theta)\cos{\theta}+y^{\prime}(\theta)\sin{\theta}\big{]}d\theta,$
where
$\int\big{[}x^{\prime}(\theta)\cos{\theta}+y^{\prime}(\theta)\sin{\theta}\big{]}d\theta$
is an anti-derivative of the function
$x^{\prime}(\theta)\cos{\theta}+y^{\prime}(\theta)\sin{\theta}$.
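The equivalence of (1.9) and (1.10) amounts to differentiating the integral in (1.10); this can be machine-checked for a generic curve. A minimal sympy sketch:

```python
import sympy as sp

th = sp.symbols('theta')
x = sp.Function('x')(th)
y = sp.Function('y')(th)

# The integrand in (1.10): (y' cos(th) - x' sin(th))^2 + y x' - x y'.
integrand = ((y.diff(th)*sp.cos(th) - x.diff(th)*sp.sin(th))**2
             + y*x.diff(th) - x*y.diff(th))
z = sp.Integral(integrand, th)  # z(theta) defined as in (1.10)

# Left-hand side of (1.9); it vanishes identically.
lhs = (z.diff(th) + x*y.diff(th) - y*x.diff(th)
       - (y.diff(th)*sp.cos(th) - x.diff(th)*sp.sin(th))**2)
result = sp.simplify(lhs)
```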
On the other hand, we also have
###### Theorem 1.12.
The surface $Y$ defines a $p$-minimal surface of general type if the curve
$\mathcal{C}$ satisfies
(1.12)
$z^{\prime}(\theta)+x(\theta)y^{\prime}(\theta)-y(\theta)x^{\prime}(\theta)-(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2}\neq
0,$
for all $\theta$. In addition, the corresponding $\zeta_{1}$\- and
$\zeta_{2}$-invariant read
(1.13)
$\left\\{\begin{split}\zeta_{1}(\theta)&=y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta}-\int\big{[}x^{\prime}(\theta)\cos{\theta}+y^{\prime}(\theta)\sin{\theta}\big{]}d\theta\\\
\zeta_{2}(\theta)&=z^{\prime}(\theta)+x(\theta)y^{\prime}(\theta)-y(\theta)x^{\prime}(\theta)-(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2}.\end{split}\right.$
From this construction, together with Theorem 7.8, which gives
parametrizations for $p$-minimal surfaces of special type II, we conclude
that we have generically provided a parametrization for any given $p$-minimal
surface (see the argument in Section 9).
Acknowledgments. The first author’s research was supported in part by NCTS and
in part by MOST 109-2115-M-007-004 -MY3. The second author’s research was
supported in part by MOST 108-2115-M-032-008-MY2.
## 2\. The Fundamental Theorem for surfaces in $H_{1}$
In this section, we first review the fundamental theorem for surfaces in the
Heisenberg group $H_{1}$ (Theorem 2.1). For details, we refer the reader to
[5]. Next we give another version (Theorem 1.2) of this theorem in terms of
compatible coordinate systems.
### 2.1. The Fundamental Theorem for surfaces in $H_{1}$
Recall that there are four invariants induced on a surface $\Sigma$ from the
Heisenberg group $H_{1}$:
1. (1)
The first fundamental form (or the induced metric) $I$, which is the adapted
metric $g_{\Theta}$ restricted to $\Sigma$. This metric is actually defined on
the whole surface $\Sigma$.
2. (2)
The directed characteristic foliation $e_{1}$, which is a unit vector field
$\in T\Sigma\cap\xi$. This vector field is only defined on the regular part of
$\Sigma$.
3. (3)
The $\alpha$-function $\alpha$, which is a function defined on the regular
part such that $\alpha e_{2}+T\in T\Sigma$, where $e_{2}=Je_{1}$.
4. (4)
The $p$-mean curvature $H$, which is a function on the regular part defined by
$\nabla_{e_{1}}e_{2}=-He_{1}$.
These four invariants constitute a complete set of invariants for surfaces in
$H_{1}$. That is, if $\phi:\Sigma_{1}\rightarrow\Sigma_{2}$ is a
diffeomorphism between these two surfaces which preserves these four
invariants, then $\phi$ is the restriction of a Heisenberg rigid motion
$\Phi$. We have the integrability condition
(2.1)
$\begin{split}\hat{\omega}_{1}{}^{2}(\hat{e}_{1})&=\frac{H\alpha}{(1+\alpha^{2})^{1/2}}\\\
\hat{\omega}_{1}{}^{2}(\hat{e}_{2})&=2\alpha+\frac{\alpha(\hat{e}_{1}\alpha)}{1+\alpha^{2}}\\\
\hat{e}_{2}H&=\frac{\hat{e}_{1}\hat{e}_{1}\alpha+6\alpha(\hat{e}_{1}\alpha)+4\alpha^{3}+\alpha
H^{2}}{(1+\alpha^{2})^{1/2}},\end{split}$
where $\hat{e}_{2}=\frac{\alpha e_{2}+T}{\sqrt{1+\alpha^{2}}}$, which has
nothing to do with the orientation of $\Sigma$; it is a vector field
determined only by the contact form $\Theta$. Moreover, $\hat{e}_{1}=e_{1}$,
which is the characteristic direction and is determined up to a sign (if we
choose $\hat{e}_{1}$ such that $\hat{e}_{1}\wedge\hat{e}_{2}$ is compatible
with the orientation of $\Sigma$, then $\hat{e}_{1}$ is unique). The form
$\hat{\omega}_{1}{}^{2}$ is the Levi-Civita connection form with respect to
the frame $\\{\hat{e}_{1},\hat{e}_{2}\\}$. We are ready to write down the
following fundamental theorem (see [5]).
###### Theorem 2.1 (The Fundamental theorem for surfaces in $H_{1}$).
Let $(\Sigma,g)$ be a Riemannian $2$-manifold, and let $\hat{\alpha},\hat{H}$
be two real-valued functions on $\Sigma$. Assume that $g$, together with
$\hat{\alpha},\hat{H}$, satisfies the integrability condition (2.1), with
$\alpha,H$ replaced by $\hat{\alpha},\hat{H}$, respectively. Then for every
point $p\in\Sigma$ there exist an open neighborhood $U$ containing $p$, and an
embedding $X:U\rightarrow H_{1}$ such that
$g=X^{\ast}(I),\hat{\alpha}=X^{\ast}\alpha$ and $\hat{H}=X^{\ast}H$. And
$X_{\ast}(\hat{e}_{1})$ defines the foliation on $X(U)$ induced from $H_{1}$.
Moreover, $X$ is unique up to a Heisenberg rigid motion.
### 2.2. The new version of the integrability condition
The goal of this subsection is to express the integrability condition (2.1) in
terms of a compatible coordinate system $(x,y)$. We write
(2.2) $\hat{e}_{2}=a(x,y)\frac{\partial}{\partial
x}+b(x,y)\frac{\partial}{\partial y},$
for some functions $a$ and $b\neq 0$. We can assume, without loss of
generality, that $b>0$, that is, both $\frac{\partial}{\partial
x}\wedge\frac{\partial}{\partial y}$ and $\hat{e}_{1}\wedge\hat{e}_{2}$ define
the same orientation on $\Sigma$. The two functions $a$ and $b$ are a
representation of the first fundamental form $I$. The dual co-frame
$\\{\hat{\omega}^{1},\hat{\omega}^{2}\\}$ of $\\{\hat{e}_{1},\hat{e}_{2}\\}$
is
(2.3) $\begin{split}\hat{\omega}^{1}&=dx-\frac{a}{b}dy,\\\
\hat{\omega}^{2}&=\frac{1}{b}dy.\end{split}$
Then the Levi-Civita connection forms are uniquely determined by the following
Riemannian structure equations
(2.4)
$\begin{split}d\hat{\omega}^{1}&=\hat{\omega}^{2}\wedge\hat{\omega}_{2}{}^{1},\\\
d\hat{\omega}^{2}&=\hat{\omega}^{1}\wedge\hat{\omega}_{1}{}^{2},\end{split}$
with the normalized condition
(2.5) $\hat{\omega}_{1}{}^{2}+\hat{\omega}_{2}{}^{1}=0.$
A computation shows that
(2.6) $\begin{split}d\hat{\omega}^{1}&=-d\left(\frac{a}{b}\right)\wedge
dy=-\left(\frac{bda-adb}{b^{2}}\right)\wedge dy\\\
&=\frac{dy}{b}\wedge\frac{bda-adb}{b}=\hat{\omega}^{2}\wedge\frac{bda-
adb}{b}.\end{split}$
By comparing with the first equation of the Riemannian structure equations
(2.4), we have
(2.7) $\hat{\omega}_{2}{}^{1}=\frac{bda-adb}{b}+a_{11}\hat{\omega}^{2}$
for some function $a_{11}$. On one hand, the second equation of (2.4) and the
normalized condition (2.5) imply
(2.8)
$\begin{split}d\hat{\omega}^{2}&=\hat{\omega}^{1}\wedge\hat{\omega}_{1}{}^{2}\\\
&=-(dx-\frac{a}{b}dy)\wedge\left(\frac{bda-
adb}{b}+a_{11}\hat{\omega}^{2}\right)\\\
&=\left(-a_{y}+\frac{a}{b}b_{y}-\frac{a_{11}}{b}-\frac{a}{b}a_{x}+\frac{a^{2}}{b^{2}}b_{x}\right)dx\wedge
dy.\end{split}$
On the other hand,
(2.9) $d\hat{\omega}^{2}=\left(d\frac{1}{b}\right)\wedge
dy=-\frac{b_{x}}{b^{2}}dx\wedge dy.$
Equations (2.8) and (2.9) yield
(2.10) $a_{11}=\frac{b_{x}}{b}-ba_{y}+ab_{y}-aa_{x}+\frac{a^{2}}{b}b_{x}.$
Therefore,
(2.11) $\begin{split}\hat{\omega}_{2}{}^{1}&=\frac{bda-
adb}{b}+a_{11}\hat{\omega}^{2}\\\
&=(ba_{x}-ab_{x})\frac{dx}{b}+(ba_{y}-ab_{y})\frac{dy}{b}+\left(\frac{b_{x}}{b}-ba_{y}+ab_{y}-aa_{x}+\frac{a^{2}}{b}b_{x}\right)\frac{dy}{b}\\\
&=\left(\frac{ba_{x}-ab_{x}}{b}\right)dx+\left(\frac{b_{x}}{b^{2}}-\frac{aa_{x}}{b}+\frac{a^{2}b_{x}}{b^{2}}\right)dy.\end{split}$
From the connection form formula (2.11), we have
(2.12)
$\begin{split}\hat{\omega}_{1}{}^{2}(\hat{e}_{1})&=\frac{ab_{x}-ba_{x}}{b}=-a_{x}+a\frac{b_{x}}{b},\\\
\hat{\omega}_{1}{}^{2}(\hat{e}_{2})&=\frac{a(ab_{x}-ba_{x})}{b}+b\left(\frac{aa_{x}}{b}-\frac{b_{x}}{b^{2}}-\frac{a^{2}b_{x}}{b^{2}}\right)=-\frac{b_{x}}{b}.\end{split}$
Therefore, in terms of a compatible coordinate system $(U;x,y)$, the
integrability condition (2.1) is equivalent to
(2.13)
$\begin{split}-a_{x}+a\frac{b_{x}}{b}&=\frac{H\alpha}{(1+\alpha^{2})^{1/2}},\\\
-\frac{b_{x}}{b}&=2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}},\\\
aH_{x}+bH_{y}&=\frac{\alpha_{xx}+6\alpha\alpha_{x}+4\alpha^{3}+\alpha
H^{2}}{(1+\alpha^{2})^{1/2}}.\end{split}$
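The passage from the connection form (2.11) to (2.12) is pure algebra and can be machine-checked; below, $P$ and $Q$ denote the $dx$- and $dy$-components of $\hat{\omega}_{2}{}^{1}$ in (2.11). A sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
a = sp.Function('a')(x, y)
b = sp.Function('b')(x, y)

# dx- and dy-components of omega_2^1 as in (2.11).
P = (b*a.diff(x) - a*b.diff(x))/b
Q = b.diff(x)/b**2 - a*a.diff(x)/b + a**2*b.diff(x)/b**2

# omega_1^2 = -omega_2^1, evaluated on e1hat = d/dx and
# e2hat = a d/dx + b d/dy, as in (2.12).
w12_e1 = -P
w12_e2 = -(a*P + b*Q)

check1 = sp.simplify(w12_e1 - (-a.diff(x) + a*b.diff(x)/b))
check2 = sp.simplify(w12_e2 - (-b.diff(x)/b))
```

Both residues vanish, recovering the two formulas in (2.12).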
### 2.3. The computation of the first fundamental form $I$
We would like to solve the first two equations of (2.13), which are part of
the integrability condition. From the second equation of (2.13), it is easy to
see that
(2.14)
$\ln{|b|}=\int-\left(2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}}\right)dx+k(y)$
for some function $k(y)$ in the variable $y$, that is
(2.15)
$\begin{split}|b|&=e^{k(y)}e^{\int-\left(2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}}\right)dx},\\\
&=e^{k(y)}\frac{e^{-\int 2\alpha
dx}}{(1+\alpha^{2})^{\frac{1}{2}}},\end{split}$
where $\int 2\alpha dx$ is an anti-derivative of $2\alpha$ with respect to
$x$. Throughout this paper, we always assume, without loss of generality, that
$b>0$. For $a$, we substitute the second equation of (2.13) into the first one
to obtain the first order linear ODE
(2.16)
$a_{x}+a\left(2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}}\right)+\frac{H\alpha}{(1+\alpha^{2})^{1/2}}=0.$
To solve for $a$, we choose the integrating factor
$u=e^{\int\left(2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}}\right)dx}$ such
that the one-form
(2.17)
$u\left(\left[a\left(2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}}\right)+\frac{H\alpha}{(1+\alpha^{2})^{1/2}}\right]dx+da\right)$
is an exact form. Therefore, using the standard method of ODEs, one sees that
(2.18)
$\begin{split}a&=e^{\int-\left(2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}}\right)dx}\left(h(y)-\int\frac{H\alpha}{(1+\alpha^{2})^{1/2}}e^{\int\left(2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}}\right)dx}dx\right),\\\
&=\frac{e^{-\int 2\alpha dx}}{(1+\alpha^{2})^{\frac{1}{2}}}\left(h(y)-\int
H\alpha\left(e^{\int 2\alpha dx}\right)dx\right),\end{split}$
for some function $h(y)$ in $y$ and $\int H\alpha\left(e^{\int 2\alpha
dx}\right)dx$ is an anti-derivative of $H\alpha\left(e^{\int 2\alpha
dx}\right)$ with respect to $x$. From (2.18) and (2.15), we conclude that the
first fundamental form $I$ (or $a$ and $b$) is determined by $\alpha$ and $H$,
up to two functions $k(y)$ and $h(y)$. We are thus ready to prove a more
useful version of the fundamental theorem for surfaces (see Theorem 1.2).
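As a quick consistency check of (2.15) and (2.18) against the first two equations of (2.13), one can take concrete data; the choices $\alpha=x$, $H=1$, $h(y)=y$, $k(y)=0$ below are illustrative assumptions, not taken from the paper:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Illustrative data (assumptions): alpha = x, H = 1, h(y) = y, k(y) = 0.
alpha, H = x, sp.Integer(1)
A = sp.integrate(2*alpha, x)                # an anti-derivative of 2*alpha

b = sp.exp(-A)/sp.sqrt(1 + alpha**2)        # (2.15) with k(y) = 0
F = y - sp.integrate(H*alpha*sp.exp(A), x)  # h(y) - int H*alpha*e^{int 2alpha} dx
a = sp.exp(-A)/sp.sqrt(1 + alpha**2)*F      # (2.18)

# First two equations of (2.13); both residues vanish.
eq1 = sp.simplify(-a.diff(x) + a*b.diff(x)/b - H*alpha/sp.sqrt(1 + alpha**2))
eq2 = sp.simplify(-b.diff(x)/b - (2*alpha + alpha*alpha.diff(x)/(1 + alpha**2)))
```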
### 2.4. The proof of Theorem 1.2
We define a Riemannian metric on $U$ by regarding $\\{\frac{\partial}{\partial
x},a(x,y)\frac{\partial}{\partial x}+b(x,y)\frac{\partial}{\partial y}\\}$ as
an orthonormal frame field, where $a$ and $b$ are specified as (2.15) and
(2.18) with $h(y)$ and $k(y)$ given in (1.4). Then it is easy to see that
$\alpha$ and $H$, together with $a$ and $b$, satisfy the integrability
condition (2.13), and hence, by the fundamental theorem for surfaces (see
Theorem 2.1), $U$ can be embedded uniquely as a surface with $H$ and $\alpha$
as its $p$-mean curvature and $\alpha$-function, respectively. In addition,
the characteristic direction $\hat{e}_{1}=\frac{\partial}{\partial x}$ and
$\hat{e}_{2}=a(x,y)\frac{\partial}{\partial x}+b(x,y)\frac{\partial}{\partial
y}$ define the induced metric on the embedded surface. We thus complete the
proof of Theorem 1.2, which is another version of the fundamental theorem for
surfaces (see Theorem 2.1 for the original version).
### 2.5. The Transformation law of invariants
First of all, we compute the transformation law of compatible coordinate
systems. Let $(x,y)$ and $(\tilde{x},\tilde{y})$ be two compatible coordinate
systems and $\phi$ a coordinate transformation, i.e.,
$(\tilde{x},\tilde{y})=\phi(x,y)$. Then we have
$\phi_{*}\frac{\partial}{\partial x}=\frac{\partial}{\partial\tilde{x}}$,
which means that
$[\phi_{*}]\left(\begin{array}[]{c}1\\\
0\end{array}\right)=\left(\begin{array}[]{cc}\tilde{x}_{x}&\tilde{x}_{y}\\\
\tilde{y}_{x}&\tilde{y}_{y}\end{array}\right)\left(\begin{array}[]{c}1\\\
0\end{array}\right)=\left(\begin{array}[]{c}1\\\ 0\end{array}\right),$
where $[\phi_{*}]$ is the matrix representation of $\phi_{*}$ with respect to
these two coordinate systems. We then have the coordinate transformation
(2.19) $\tilde{x}=x+\Gamma(y),\ \ \ \tilde{y}=\Psi(y),$
for some functions $\Gamma(y)$ and $\Psi(y)$. Since
$\det{[\phi_{*}]}=\Psi^{\prime}(y)$, we immediately have that
$\Psi^{\prime}(y)\neq 0$ for all $y$. Next, we compute the transformation
law of representations of the induced metrics. Suppose that the
representations of the induced metric are, respectively, given by $a,b$ and
$\tilde{a},\tilde{b}$, that is, $\hat{e}_{2}=a\frac{\partial}{\partial
x}+b\frac{\partial}{\partial
y}=\tilde{a}\frac{\partial}{\partial\tilde{x}}+\tilde{b}\frac{\partial}{\partial\tilde{y}}$.
By (2.19), we have the following transformation law of the induced metric as
(2.20) $\tilde{a}=a+b\Gamma^{\prime}(y),\ \ \ \tilde{b}=b\Psi^{\prime}(y).$
Notice that we have omitted the sign of pull-back $\phi^{*}$ on $\tilde{a}$
and $\tilde{b}$. Since the $p$-mean curvature and $\alpha$-function are
function-type invariants, they transform by pull-back.
## 3\. Constant $p$-mean curvature surfaces
In this section, we aim to prove Theorem 1.3 and Theorem 1.1, and then
provide a new tool for studying constant $p$-mean curvature surfaces. More
precisely, we will show that the investigation of constant $p$-mean curvature
surfaces can be converted into the study of the so-called Codazzi-like
equation.
### 3.1. The proof of Theorem 1.1 and Theorem 1.3
Let $\Sigma\subset H_{1}$ be a constant $p$-mean curvature surface with $H=c$.
Then, in terms of a compatible coordinate system $(U;x,y)$, the integrability
condition (2.13) is reduced to
(3.1)
$\begin{split}-a_{x}+a\frac{b_{x}}{b}&=\frac{c\alpha}{(1+\alpha^{2})^{1/2}},\\\
-\frac{b_{x}}{b}&=2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}},\\\
\alpha_{xx}+6\alpha\alpha_{x}&+4\alpha^{3}+c^{2}\alpha=0.\end{split}$
In other words, the $\alpha$-function satisfies the Codazzi-like equation
(3.2) $\alpha_{xx}+6\alpha\alpha_{x}+4\alpha^{3}+c^{2}\alpha=0,$
which is a nonlinear ordinary differential equation. Conversely, suppose that
an arbitrary function $\alpha(x,y)$ on a coordinate neighborhood
$(U,x,y)\subset\operatorname{\mathbb{R}}^{2}$ satisfies the Codazzi-like
equation (3.2) (or (1.2)). It is easy to see that equation (3.2) is just the
equation (1.4) in the constant $p$-mean curvature case. Namely, if $\alpha$
satisfies equation (3.2), then it satisfies (1.4) for arbitrary functions
$h(y)$ and $k(y)$. Therefore, the embedding $\Sigma=X(U)$ in Theorem 1.2 is a
constant $p$-mean curvature surface with $H=c$, the given function
$\alpha(x,y)$ as its $\alpha$-function, and
$\hat{e}_{1}=\frac{\partial}{\partial x}$. Notice that, in view of (2.15) and
(2.18), the constant $p$-mean curvature surface determined by the given
function $\alpha$ is usually not unique: such surfaces depend on two
functions $h(y)$ and $k(y)$ in $y$. We then have Theorem 1.3, and hence
Theorem 1.1. This completes the proof of Theorem 1.3 and Theorem 1.1.
Throughout the rest of this paper, we apply this tool to the study of
$p$-minimal surfaces.
## 4\. Solutions to the Liénard equation
Since our strategy is to study $p$-minimal surfaces through an understanding
of the Codazzi-like equation (4.1), we focus on this equation in this
section. First, regarding $\alpha$ as a function of $x$, we discuss the
solutions to the Codazzi-like equation
(4.1) $\alpha_{xx}+6\alpha\alpha_{x}+4\alpha^{3}=0.$
This is a kind of the so-called Liénard equation [2]. The derivation of the
explicit solutions (see Theorem 1.4) to equation (4.1) is given below.
### 4.1. The proof of Theorem 1.4
Setting $v=\alpha^{\prime}$, we see that (4.1) becomes
(4.2) $v\frac{dv}{d\alpha}+6\alpha v+4\alpha^{3}=0,$
which is an Abel equation of the second kind [8, 9]. For $v\neq 0$, equation
(4.2) can be written as
(4.3) $\frac{dv}{d\alpha}=-\frac{2\alpha(3v+2\alpha^{2})}{v}.$
Denote $u=\frac{1}{v}$. Then (4.2) or (4.3) becomes
(4.4) $\frac{du}{d\alpha}=6\alpha u^{2}+4\alpha^{3}u^{3}.$
We apply Chiellini’s integrability condition (stated in [10, 11]) for the
Abel equation to (4.4), which is exactly integrable with $k=\frac{2}{9}\neq
0$. Indeed, it can be checked that
$\frac{d}{d\alpha}\left(\frac{4\alpha^{3}}{6\alpha}\right)=\frac{d}{d\alpha}\left(\frac{2}{3}\alpha^{2}\right)=\frac{4}{3}\alpha=\frac{2}{9}(6\alpha).$
Let $u=\frac{6\alpha}{4\alpha^{3}}\omega$, i.e.,
$\omega=\frac{2}{3}\alpha^{2}u$, and assume $\omega\neq 0$. Equation (4.4)
then becomes
(4.5) $\frac{d\omega}{d\alpha}=\frac{\omega}{\alpha}(2+9\omega+9\omega^{2}),$
which is a separable first order ODE, i.e.,
(4.6) $\frac{d\omega}{\omega(2+9\omega+9\omega^{2})}=\frac{d\alpha}{\alpha}.$
Using the method of partial fractions to integrate the left hand side, we have
(4.7)
$\int\left(\frac{1}{2\omega}-\frac{3}{3\omega+1}+\frac{3}{2(3\omega+2)}\right)d\omega=\int\frac{d\alpha}{\alpha},$
which gives
(4.8)
$\frac{1}{2}\left(\ln\frac{|\omega(3\omega+2)|}{|3\omega+1|^{2}}\right)=\ln|\alpha|+const.$
The implicit solution to (4.5) is then expressed as
(4.9) $\frac{\omega(3\omega+2)}{(3\omega+1)^{2}}=C\alpha^{2},$
provided that $\omega\neq 0,-\frac{1}{3},-\frac{2}{3}$, where $C$ is an
arbitrary nonzero constant. Hence, with the assumption $\alpha\neq 0$, we have
(4.10) $4\alpha^{2}(3C\alpha^{2}-1)u^{2}+4(3C\alpha^{2}-1)u+3C=0,$
or equivalently,
(4.11)
$3C(\alpha^{\prime})^{2}+4(3C\alpha^{2}-1)\alpha^{\prime}+4\alpha^{2}(3C\alpha^{2}-1)=0.$
This yields
(4.12) $\begin{split}\alpha^{\prime}&=\frac{2(1-3C\alpha^{2})\pm
2\sqrt{1-3C\alpha^{2}}}{3C}\\\
&=\frac{(1-2C\alpha^{2})\pm\sqrt{1-2C\alpha^{2}}}{C},\end{split}$
for a nonzero constant $C$ (in the last equality we have renamed the
arbitrary constant $\frac{3C}{2}$ as $C$). Since we have assumed that
$v=\alpha^{\prime}$ is not zero, neither is $1-2C\alpha^{2}$. In order to
obtain the general solutions, we proceed to solve equation (4.12) by
separation of variables. We rewrite (4.12) as
(4.13)
$\begin{split}\frac{dx}{C}&=\frac{d\alpha}{(1-2C\alpha^{2})\pm\sqrt{1-2C\alpha^{2}}}\\\
&=\frac{(1-2C\alpha^{2})\mp\sqrt{1-2C\alpha^{2}}}{[(1-2C\alpha^{2})\pm\sqrt{1-2C\alpha^{2}}][(1-2C\alpha^{2})\mp\sqrt{1-2C\alpha^{2}}]}d\alpha\\\
&=\frac{(1-2C\alpha^{2})\mp\sqrt{1-2C\alpha^{2}}}{-2C\alpha^{2}(1-2C\alpha^{2})}d\alpha\\\
&=\left(\frac{-1}{2C\alpha^{2}}\pm\frac{1}{2C\alpha^{2}\sqrt{1-2C\alpha^{2}}}\right)d\alpha.\end{split}$
Case I. If $C<0$, we use the trigonometric substitution
$\sqrt{2|C|}\alpha=\tan{\theta},\ -\frac{\pi}{2}<\theta<\frac{\pi}{2}$, to get
(4.14)
$\int\frac{d\alpha}{2C\alpha^{2}\sqrt{1-2C\alpha^{2}}}=-\frac{\sqrt{1-2C\alpha^{2}}}{2C\alpha}+c_{1},$
for some $c_{1}\in\mathbb{R}$. Substituting (4.14) into (4.13), we obtain
$\frac{x+c_{1}}{C}=\frac{1\mp\sqrt{1-2C\alpha^{2}}}{2C\alpha},$
that is,
(4.15) $2\alpha(x+c_{1})-1=\mp\sqrt{1-2C\alpha^{2}},$
for some $c_{1}\in\mathbb{R}$. Taking the square of both sides and noticing
that $\alpha\neq 0$, we obtain
(4.16) $\alpha(x)=\frac{x+c_{1}}{(x+c_{1})^{2}+c_{2}},$
for some $c_{1},\ c_{2}\in\mathbb{R}$ and $c_{2}<0$. If $\alpha$ satisfies
$2\alpha(x+c_{1})-1=+\sqrt{1-2C\alpha^{2}}$ in (4.15), then we have
$\alpha(x+c_{1})>0$, and hence (4.16) implies $x+c_{1}<-\sqrt{|c_{2}|}$ or
$\sqrt{|c_{2}|}<x+c_{1}$. On the other hand, if $\alpha$ satisfies
$2\alpha(x+c_{1})-1=-\sqrt{1-2C\alpha^{2}}$, then we have $\alpha(x+c_{1})<0$,
and we then obtain that $-\sqrt{|c_{2}|}<x+c_{1}<\sqrt{|c_{2}|}$ by (4.16).
Case II. If $C>0$, we use the trigonometric substitution
$\sqrt{2C}\alpha=\sin{\theta},\ -\frac{\pi}{2}<\theta<\frac{\pi}{2}$, to get
(4.17)
$\int\frac{d\alpha}{2C\alpha^{2}\sqrt{1-2C\alpha^{2}}}=-\frac{\sqrt{1-2C\alpha^{2}}}{2C\alpha}+c_{1},$
for some $c_{1}\in\mathbb{R}$. Substituting (4.17) into (4.13), we obtain
$\frac{x+c_{1}}{C}=\frac{1\mp\sqrt{1-2C\alpha^{2}}}{2C\alpha},$
that is,
(4.18) $2\alpha(x+c_{1})-1=\mp\sqrt{1-2C\alpha^{2}},$
for some $c_{1}\in\mathbb{R}$. Taking the square of both sides and noticing
that $\alpha\neq 0$, we obtain
(4.19) $\alpha(x)=\frac{x+c_{1}}{(x+c_{1})^{2}+c_{2}},$
for some $c_{1},\ c_{2}\in\mathbb{R}$ and $c_{2}>0$.
In deriving the general solutions to (4.1) above, we assumed that
$\alpha^{\prime}\neq 0,\alpha\neq 0,\omega\neq 0,\omega\neq-\frac{1}{3}$
and $\omega\neq-\frac{2}{3}$. Now if $\alpha^{\prime}=0$ on an open interval,
then (4.1) immediately implies $\alpha=0$ on that interval. And it is easy to
see that $\omega=0$ is equivalent to $\alpha=0$. Finally, since
$\omega=\frac{2}{3}\alpha^{2}u$, we see that
$\omega=-\frac{1}{3}\Leftrightarrow\alpha^{2}u=\frac{-1}{2}\Leftrightarrow\alpha^{\prime}=-2\alpha^{2}\Leftrightarrow\alpha(x)=\frac{1}{2(x+c_{1})}.$
Similarly, we have
$\omega=-\frac{2}{3}\Leftrightarrow\alpha(x)=\frac{1}{(x+c_{1})},$
for some $c_{1}\in\mathbb{R}$. This completes the proof of Theorem 1.4.
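The three solution families in Theorem 1.4 can be substituted back into (4.1) directly; a minimal sympy check:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

def lienard(f):
    # Left-hand side of the Codazzi-like equation (4.1).
    return f.diff(x, 2) + 6*f*f.diff(x) + 4*f**3

sol_I   = 1/(x + c1)                     # special type I
sol_II  = 1/(2*x + c1)                   # special type II
sol_gen = (x + c1)/((x + c1)**2 + c2)    # general type (c2 != 0)

residues = [sp.simplify(lienard(f)) for f in (sol_I, sol_II, sol_gen)]
```

All three residues vanish identically in $x$, $c_{1}$ and $c_{2}$.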
### 4.2. The phase plane
We remark that when $c=0$ (the $p$-minimal surface case), (3.2) is in fact
one of the so-called Liénard equations [2]
(4.20) $\alpha_{xx}=f(\alpha,\alpha_{x})=-(6\alpha\alpha_{x}+4\alpha^{3}).$
If we imagine a simple dynamical system consisting of a particle of unit mass
moving on the $\alpha$-axis, and if $f(\alpha,\alpha_{x})$ is the force acting
on it, then (4.20) is the equation of motion. The values of $\alpha$
(position) and $\alpha_{x}$ (velocity), which at each instant characterize the
state of the system, are called its phases, and the plane of the variables
$\alpha$ and $\alpha_{x}$ is called the phase plane. Setting $v=\alpha_{x}$,
we see that (4.20) can be replaced by the equivalent system
(4.21) $\left\\{\begin{split}\frac{d\alpha}{dx}&=v,\\\
\frac{dv}{dx}&=-(6\alpha v+4\alpha^{3}).\end{split}\right.$
In general a solution of (4.21) is a pair of functions $\alpha(x)$ and $v(x)$
defining a curve on the phase plane. It follows from the standard theory of
ODE that if $x_{0}$ is any number and $(\alpha_{0},v_{0})$ is any point in the
phase plane, then there exists a unique solution $(\alpha(x),v(x))$ of (4.21)
such that $\alpha(x_{0})=\alpha_{0}$ and $v(x_{0})=v_{0}$. If this solution
$(\alpha(x),v(x))$ is not a constant, then it defines a curve on the phase
plane called a path of the system; otherwise, it defines a critical point.
Actually, all paths together with critical points form a directed singular
foliation on the phase plane with critical points as singular points of the
foliation, and each path lies in a leaf of the foliation. It is easy to see
that this foliation is really defined by the vector field $V=(v,-(6\alpha
v+4\alpha^{3}))$ and $(0,0)$ is the only critical point. We express the
direction field (or the directed singular foliation) as Figure 4.1:
Figure 4.1. direction field $V$
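The claim that $(0,0)$ is the only critical point of the system (4.21) can be confirmed symbolically:

```python
import sympy as sp

alpha, v = sp.symbols('alpha v', real=True)

# The vector field of the system (4.21).
V = (v, -(6*alpha*v + 4*alpha**3))

# Critical points: both components vanish; the only real solution is (0, 0).
critical = sp.solve([V[0], V[1]], [alpha, v])
```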
## 5\. The classification of $p$-minimal surfaces
### 5.1. The classification of $p$-minimal surfaces
Theorem 1.4 suggests dividing the $p$-minimal surfaces (locally) into several
classes. In terms of compatible coordinates $(x,y)$, the function
$\alpha(x,y)$ is a solution to the Codazzi-like equation (4.1) for any given
$y$. By Theorem 1.4, the function $\alpha(x,y)$ hence has one of the
following special-type forms
$0,\ \frac{1}{x+c_{1}(y)},\ \frac{1}{2x+c_{1}(y)},$
or the general-type form
$\frac{x+c_{1}(y)}{(x+c_{1}(y))^{2}+c_{2}(y)},$
where, instead of constants, both $c_{1}(y)$ and $c_{2}(y)$ are now functions
of $y$. Notice that $c_{2}(y)\neq 0$ for all $y$. We now use the types of the
function $\alpha(x,y)$ to define the types of $p$-minimal surface as follows.
###### Definition 5.1.
Locally, we say that a $p$-minimal surface is
1. (1)
vertical if $\alpha$ vanishes (i.e., $\alpha(x,y)=0$ for all $x,y$).
2. (2)
of special type I if $\alpha=\frac{1}{x+c_{1}(y)}$.
3. (3)
of special type II if $\alpha=\frac{1}{2x+c_{1}(y)}$.
4. (4)
of general type if $\alpha=\frac{x+c_{1}(y)}{(x+c_{1}(y))^{2}+c_{2}(y)}$ with
$c_{2}(y)\neq 0$ for all $y$.
We further divide $p$-minimal surfaces of general type into three classes as
follows:
###### Definition 5.2.
We say that a $p$-minimal surface of general type is
1. (1)
of type I if $c_{2}(y)>0$ for all $y$.
2. (2)
of type II if $c_{2}(y)<0$ for all $y$, and either
$x<-c_{1}(y)-\sqrt{-c_{2}(y)}$ or $x>-c_{1}(y)+\sqrt{-c_{2}(y)}$.
3. (3)
of type III if $c_{2}(y)<0$ for all $y$, and
$-c_{1}(y)-\sqrt{-c_{2}(y)}<x<-c_{1}(y)+\sqrt{-c_{2}(y)}$.
We notice that the type is invariant under an action of a Heisenberg rigid
motion, and that the regular part of a $p$-minimal surface $\Sigma\subset
H_{1}$ is a union of these types of surfaces. The corresponding paths of each
type of $\alpha$ are marked on the phase plane (Figure 4.1). We record some
basic facts about $p$-minimal surfaces of each type as follows.
* •
If $\alpha$ vanishes, then the surface is part of a vertical plane.
* •
The two concave downward parabolas represent
$\alpha=\frac{1}{x+c_{1}},\frac{1}{2x+c_{1}}$ respectively. The one for
$\alpha=\frac{1}{x+c_{1}}$ is on top of the one for
$\alpha=\frac{1}{2x+c_{1}}$. For surfaces of special type I, we have that
(5.1) $\begin{split}\alpha\rightarrow\left\\{\begin{array}[]{rl}\infty,&\
\textrm{if}\ x\rightarrow-c_{1}\ \ \textrm{from the right},\\\ -\infty,&\
\textrm{if}\ x\rightarrow-c_{1}\ \ \textrm{from the
left};\end{array}\right.\end{split}$
and, for surfaces of special type II, we have that
(5.2) $\begin{split}\alpha\rightarrow\left\\{\begin{array}[]{rl}\infty,&\
\textrm{if}\ x\rightarrow\frac{-c_{1}}{2}\ \ \textrm{from the right},\\\
-\infty,&\ \textrm{if}\ x\rightarrow\frac{-c_{1}}{2}\ \ \textrm{from the
left};\end{array}\right.\end{split}$
* •
The closed curves, with the origin removed, correspond to the family of solutions
$\alpha(x)=\frac{x+c_{1}}{(x+c_{1})^{2}+c_{2}},$
where $c_{1},c_{2}$ are constants and $c_{2}>0$, which are of type I. There
exists a zero for $\alpha$-function at $x=-c_{1}$. For surfaces of type I, we
have $|\alpha|\leq\frac{1}{2\sqrt{c_{2}}}$, so $\alpha$ is a bounded function
for each fixed $y$, that is, along each path on the phase plane, $\alpha$ is
bounded. Therefore, there are no singular points for surfaces of type I.
* •
The curves in between the two concave downward parabolas are of type II. The
$\alpha$-function of type II does not have any zeros. For surfaces of type II,
it can be checked that
(5.3) $\begin{split}\alpha\rightarrow\left\\{\begin{array}[]{rl}\infty,&\
\textrm{if}\ x\rightarrow-c_{1}+\sqrt{-c_{2}}\ \ \textrm{from the right},\\\
-\infty,&\ \textrm{if}\ x\rightarrow-c_{1}-\sqrt{-c_{2}}\ \ \textrm{from the
left}.\end{array}\right.\end{split}$
* •
The curves beneath the lower concave downward parabola are of type III. The
$\alpha$-function has a zero at $x=-c_{1}$. For surfaces of type III,
we have
(5.4) $\begin{split}\alpha\rightarrow\left\\{\begin{array}[]{rl}-\infty,&\
\textrm{if}\ x\rightarrow-c_{1}+\sqrt{-c_{2}}\ \ \textrm{from the left},\\\
\infty,&\ \textrm{if}\ x\rightarrow-c_{1}-\sqrt{-c_{2}}\ \ \textrm{from the
right}.\end{array}\right.\end{split}$
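As a quick numerical sanity check, the solution families above can be tested against the $\alpha$-equation $\alpha_{xx}+6\alpha\alpha_{x}+4\alpha^{3}=0$ (the second equation of (6.3) below) by finite differences. The constants $c_{1},c_{2}$ in the sketch are arbitrary sample values, and the final assertion illustrates the type I bound $|\alpha|\leq\frac{1}{2\sqrt{c_{2}}}$:

```python
def residual(alpha, x, h=1e-4):
    """Finite-difference residual of alpha'' + 6*alpha*alpha' + 4*alpha^3."""
    ax = (alpha(x + h) - alpha(x - h)) / (2 * h)
    axx = (alpha(x + h) - 2 * alpha(x) + alpha(x - h)) / h**2
    return axx + 6 * alpha(x) * ax + 4 * alpha(x)**3

c1, c2 = 0.3, 1.7  # sample constants, c2 > 0 so the general family is of type I
special_I = lambda x: 1.0 / (x + c1)
special_II = lambda x: 1.0 / (2 * x + c1)
general = lambda x: (x + c1) / ((x + c1)**2 + c2)

for x in (-2.0, 0.5, 1.3, 4.0):              # sample points away from the singular sets
    for alpha in (special_I, special_II, general):
        assert abs(residual(alpha, x)) < 1e-4

# the type I bound |alpha| <= 1/(2*sqrt(c2)) from the bullet list above
assert all(abs(general(x / 10.0)) <= 1 / (2 * c2**0.5) + 1e-12
           for x in range(-100, 100))
```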
###### Proposition 5.3.
Suppose $\alpha(x,y)=\frac{x+c_{1}(y)}{(x+c_{1}(y))^{2}+c_{2}(y)}$, which is
of general type. Then the explicit formula for the induced metric on a
$p$-minimal surface with this $\alpha$ as its $\alpha$-function is given by
(5.5) $a=\frac{|\alpha|h(y)}{|x+c_{1}(y)|\sqrt{1+\alpha^{2}}},\ \
b=\frac{|\alpha|e^{k(y)}}{|x+c_{1}(y)|\sqrt{1+\alpha^{2}}},$
for some functions $h(y)$ and $k(y)$.
###### Proof.
If $\alpha=\frac{x+c_{1}(y)}{(x+c_{1}(y))^{2}+c_{2}(y)}$, we choose
$\ln{|(x+c_{1}(y))^{2}+c_{2}(y)|}$ as an anti-derivative of $2\alpha$ with
respect to $x$. Simple computations imply
$\begin{split}\frac{e^{-\int 2\alpha
dx}}{(1+\alpha^{2})^{\frac{1}{2}}}&=\frac{1}{\sqrt{(x+c_{1}(y))^{2}+((x+c_{1}(y))^{2}+c_{2}(y))^{2}}}\\\
&=\frac{|\alpha|}{|x+c_{1}(y)|\sqrt{1+\alpha^{2}}}.\end{split}$
Substituting the above formula into (2.15) and (2.18), equation (5.5) follows.
∎
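The algebraic identity at the heart of this proof can also be checked numerically at sample points; the values of $c_{1},c_{2}$ below are arbitrary, and all three closed forms agree away from $x=-c_{1}$:

```python
def forms(x, c1=0.4, c2=2.0):
    """The three equal expressions appearing in the proof of Proposition 5.3."""
    u = x + c1
    alpha = u / (u**2 + c2)
    first = (1.0 / abs(u**2 + c2)) / (1 + alpha**2)**0.5   # e^{-int 2a dx}/sqrt(1+a^2)
    second = 1.0 / (u**2 + (u**2 + c2)**2)**0.5
    third = abs(alpha) / (abs(u) * (1 + alpha**2)**0.5)
    return first, second, third

for x in (-3.0, -0.2, 1.0, 5.5):      # avoid x = -c1, where the third form is 0/0
    f1, f2, f3 = forms(x)
    assert abs(f1 - f2) < 1e-12 and abs(f2 - f3) < 1e-12
```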
Similarly, we have
###### Proposition 5.4.
Suppose $\alpha(x,y)=\frac{1}{x+c_{1}(y)}$, which is of special type I. Then
the explicit formula for the induced metric on a $p$-minimal surface with this
$\alpha$ as its $\alpha$-function is given by
(5.6) $a=\frac{\alpha^{2}h(y)}{\sqrt{1+\alpha^{2}}},\ \
b=\frac{\alpha^{2}e^{k(y)}}{\sqrt{1+\alpha^{2}}}.$
###### Proof.
In order to obtain (5.6), we choose $2\ln{|x+c_{1}(y)|}$ as an anti-derivative
of $2\alpha$ with respect to $x$. ∎
###### Proposition 5.5.
Suppose $\alpha(x,y)=\frac{1}{2x+c_{1}(y)}$, which is of special type II. Then
the explicit formula for the induced metric on a $p$-minimal surface with this
$\alpha$ as its $\alpha$-function is given by
(5.7) $a=\frac{|\alpha|h(y)}{\sqrt{1+\alpha^{2}}},\ \
b=\frac{|\alpha|e^{k(y)}}{\sqrt{1+\alpha^{2}}}.$
###### Proof.
To have (5.7), we choose $\ln{|2x+c_{1}(y)|}$ as an anti-derivative of
$2\alpha$ with respect to $x$. ∎
## 6\. A representation of $p$-minimal surfaces
Let $\Sigma\subset H_{1}$ be a $p$-minimal surface. We define an orthogonal
coordinate system $(x,y)$ to be a compatible coordinate system such that
$a=0$, that is, $\hat{e}_{2}=b\frac{\partial}{\partial y}$.
###### Proposition 6.1.
There always exists an orthogonal coordinate system around any regular point
of a $p$-minimal surface $\Sigma$.
###### Proof.
Suppose that $p\in\Sigma$ is a regular point and $(x,y)$ is an arbitrary
compatible coordinate system around $p$. Since $H=0$, equations (2.15) and
(2.18) imply that the ratio $\frac{-a}{b}=\frac{-h(y)}{e^{g(y)}}$ is just a
function of $y$. Now we define another compatible coordinates
$(\tilde{x},\tilde{y})$ by
$(\tilde{x},\tilde{y})=(x+\Gamma(y),\Psi(y)),$
for some functions $\Gamma(y)$ and $\Psi(y)$ such that
$\Gamma^{\prime}(y)=\frac{-a}{b}$. By the transformation law (2.20) of the
representation of the induced metric, we have $\tilde{a}=0$. This means that
$(\tilde{x},\tilde{y})$ are orthogonal coordinates around $p$. ∎
### 6.1. The proof of Theorem 1.5
It is convenient to choose an orthogonal coordinate system $(U;x,y)$ to
study a $p$-minimal surface. It is easy to see from (2.19) and (2.20) that
any two orthogonal coordinate systems $(x,y)$ and $(\tilde{x},\tilde{y})$ are
transformed by
(6.1) $\tilde{x}=x+C,\ \ \ \tilde{y}=\Psi(y),$
for a constant $C$ and a function $\Psi(y)$. That is, the orthogonal
coordinate systems are determined, up to a constant $C$ on the coordinate $x$
and a scaling on the coordinate $y$. The transformation law of the
representation of the induced metric hence reduces to
(6.2) $\tilde{a}=a=0,\ \tilde{b}=b\Psi^{\prime}(y).$
In terms of orthogonal coordinate systems, the integrability condition hence
reads
(6.3)
$\begin{split}-\frac{b_{x}}{b}&=2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}},\\\
\alpha_{xx}&+6\alpha\alpha_{x}+4\alpha^{3}=0.\end{split}$
Then the $\alpha$-function determines the metric representation $b$, and hence
a $p$-minimal surface, up to a positive function $e^{k(y)}$ as specified in
(2.15) (or see (5.5), (5.6) and (5.7)). Therefore, from the transformation
law of the induced metric (6.2), we are able to choose another orthogonal
coordinate system $(\tilde{x},\tilde{y})=\phi(x,y)$ with $\Psi$ satisfying
$e^{k(y)}\Psi^{{}^{\prime}}=1$. That is, we can further normalize $b$ such
that $k(\tilde{y})=0$ for each type, whether special or general. Here
$k(\tilde{y})$ is the function $k$ in the numerator of $\tilde{b}$ (see
(5.5), (5.6) and (5.7)) with $\alpha,x,y$ replaced by
$\tilde{\alpha},\tilde{x},\tilde{y}$. In fact, for the general type (the special
types are similar), it is possible to choose another orthogonal coordinate
system $(\tilde{x},\tilde{y})=\phi(x,y)$ such that
$\phi^{*}\tilde{b}=\frac{|\alpha|}{|x+c_{1}(y)|\sqrt{1+\alpha^{2}}}.$
And it is easy to see that such orthogonal coordinate systems are unique up to
a translation on the two variables $x$ and $y$. In other words, there are
constants $C_{1}$ and $C_{2}$ such that
(6.4) $(\tilde{x},\tilde{y})=\phi(x,y)=(x+C_{1},y+C_{2}).$
We call such an orthogonal coordinate system a normal coordinate system.
Indeed, since $\phi^{*}\tilde{\alpha}=\alpha$, Definition 5.1 indicates the
following transformation law for $c_{1}(y)$ and $c_{2}(y)$ functions
(6.5) $\begin{split}\tilde{c}_{1}(\tilde{y})&=c_{1}(\tilde{y}-C_{2})-C_{1},\ \
\textrm{for special type I},\\\
\tilde{c}_{1}(\tilde{y})&=c_{1}(\tilde{y}-C_{2})-2C_{1},\ \ \textrm{for
special type II, and}\\\
\tilde{c}_{1}(\tilde{y})&=c_{1}(\tilde{y}-C_{2})-C_{1},\
\tilde{c}_{2}(\tilde{y})=c_{2}(\tilde{y}-C_{2}),\ \ \textrm{for general type
},\\\ \end{split}$
where $\tilde{c}_{1}(\tilde{y})$ and $\tilde{c}_{2}(\tilde{y})$ are with
respect to $\tilde{\alpha}$. Namely, $c_{2}(y)$ is unique up to a translation
on $y$, and $c_{1}(y)$ is unique up to a translation on $y$ and its image as
well. We denote these two unique functions $c_{1}(y)$ and $c_{2}(y)$ by
$\zeta_{1}(y)$ and $\zeta_{2}(y)$, respectively. This completes the proof of
Theorem 1.5.
Both functions $\zeta_{1}(y)$ and $\zeta_{2}(y)$ are invariant under a
Heisenberg rigid motion. Therefore we call them the $\zeta_{1}$\- and
$\zeta_{2}$-invariants, respectively. In terms of $\zeta_{1}$ and $\zeta_{2}$,
we thus have a version of the fundamental theorem for $p$-minimal surfaces in
$H_{1}$ (Theorem 1.6).
### 6.2. The proof of Theorem 1.6
Given $\zeta_{1}(y)$, for (1) in Theorem 1.6, we define $\alpha,a,b$ on $U$ by
$\alpha=\frac{1}{x+\zeta_{1}(y)},a=0\ \textrm{and}\
b=\frac{\alpha^{2}}{\sqrt{1+\alpha^{2}}}.$
Notice that $(e,f)$ needs to be chosen so that $(e,f)\times(c,d)$ does not
contain the zero set of $x+\zeta_{1}(y)$. Then they satisfy the integrability
condition (3.1) with $c=0$, and hence $U$ together with $\alpha,a,b$ can be
embedded into $H_{1}$ to be a $p$-minimal surface with $\alpha$ as its
$\alpha$-function, and the induced metric $a,b$. Moreover the characteristic
direction $e_{1}=\frac{\partial}{\partial x}$. From the type of $\alpha$, this
$p$-minimal surface is of special type I. In view of $a=0$ and
$b=\frac{\alpha^{2}}{\sqrt{1+\alpha^{2}}}$, we see that the coordinates
$(x,y)$ are a normal coordinate system. Therefore $\zeta_{1}(y)$ is the
corresponding $\zeta_{1}$-invariant. The uniqueness follows from the
fundamental theorem for surfaces in $H_{1}$ or Theorem 1.5. This completes the
proof of (1) for the special type I. The proofs of (2) for the special type
II and of (3) are similar, with $(e,f)$ chosen according to their types. For
the special type II, note that $(e,f)$ needs to be chosen so that
$(e,f)\times(c,d)$ does not contain the zero set of $2x+\zeta_{1}(y)$, and we
define $\alpha,a,b$ on $U$ by
$\alpha=\frac{1}{2x+\zeta_{1}(y)},a=0\ \textrm{and}\
b=\frac{|\alpha|}{\sqrt{1+\alpha^{2}}}.$
For (3), we see that $(e,f)$ needs to be chosen so that $(e,f)\times(c,d)$
does not contain the zero set of $(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)$, and we define
$\alpha,a,b$ on $U$ by
$\alpha=\frac{x+\zeta_{1}(y)}{(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)},a=0\
\textrm{and}\ b=\frac{|\alpha|}{|x+\zeta_{1}(y)|\sqrt{1+\alpha^{2}}}.$
Thus we complete the proof of Theorem 1.6.
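As a sanity check, the three normal representations used above can be tested against the first integrability equation in (6.3), $-\frac{b_{x}}{b}=2\alpha+\frac{\alpha\alpha_{x}}{1+\alpha^{2}}$, by finite differences; $\zeta_{1},\zeta_{2}$ are taken to be sample constants here, so the $y$-dependence drops out:

```python
def check_integrability(alpha, b, x, h=1e-4, tol=1e-4):
    """Verify -b_x/b = 2*alpha + alpha*alpha_x/(1+alpha^2) at x (finite differences)."""
    bx = (b(x + h) - b(x - h)) / (2 * h)
    ax = (alpha(x + h) - alpha(x - h)) / (2 * h)
    lhs = -bx / b(x)
    rhs = 2 * alpha(x) + alpha(x) * ax / (1 + alpha(x)**2)
    assert abs(lhs - rhs) < tol, (lhs, rhs)

z1, z2 = 0.5, 2.0  # sample (constant) values of zeta_1, zeta_2

aI = lambda x: 1.0 / (x + z1)                          # special type I
bI = lambda x: aI(x)**2 / (1 + aI(x)**2)**0.5
aII = lambda x: 1.0 / (2 * x + z1)                     # special type II
bII = lambda x: abs(aII(x)) / (1 + aII(x)**2)**0.5
aG = lambda x: (x + z1) / ((x + z1)**2 + z2)           # general type
bG = lambda x: abs(aG(x)) / (abs(x + z1) * (1 + aG(x)**2)**0.5)

for x in (0.7, 1.5, 3.0):          # points in the regular region x + z1 > 0
    check_integrability(aI, bI, x)
    check_integrability(aII, bII, x)
    check_integrability(aG, bG, x)
```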
We remark that, in terms of normal coordinates $(x,y)$, the co-frame formula
(2.3) reads
(6.6) $\begin{split}\hat{\omega}^{1}&=dx-\frac{a}{b}dy=dx,\\\
\hat{\omega}^{2}&=\frac{1}{b}dy,\end{split}$
and hence the induced metric $I$ (the first fundamental form) reads
(6.7)
$\begin{split}I&=\hat{\omega}^{1}\otimes\hat{\omega}^{1}+\hat{\omega}^{2}\otimes\hat{\omega}^{2}=dx\otimes
dx+\frac{1}{b^{2}}dy\otimes dy,\\\ &=\left\\{\begin{array}[]{ll}dx\otimes
dx+\big{[}(x+\zeta_{1}(y))^{2}+(x+\zeta_{1}(y))^{4}\big{]}dy\otimes
dy,&\textrm{for {\bf special type I}},\\\ &\\\ dx\otimes
dx+\big{[}1+(2x+\zeta_{1}(y))^{2}\big{]}dy\otimes dy,&\textrm{for {\bf special
type II}},\\\ &\\\ dx\otimes
dx+\big{[}(x+\zeta_{1}(y))^{2}+[(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)]^{2}\big{]}dy\otimes
dy,&\textrm{for {\bf general type}}.\\\ \end{array}\right.\end{split}$
From (6.7), we see immediately that the induced metric $I$ degenerates on the
singular set $\\{(x,y)\ |\ x+\zeta_{1}(y)=0\\}$ for surfaces of special type
I. Therefore, such a surface cannot be extended smoothly through the singular
set. On the other hand, this phenomenon occurs for neither special type II
nor the general type.
### 6.3. The maximal $p$-minimal surfaces and the proof of Theorem 1.7
From the proof of Theorem 1.6, it is clear that
* •
for the special type I, the open rectangle $U$ in (1) of Theorem 1.6 can be
extended to be either
(6.8) $\begin{split}U_{I}^{-}&=\\{(x,y)\in\operatorname{\mathbb{R}}^{2}\ |\
y\in(c,d),\ x+\zeta_{1}(y)<0\\},\ \textrm{or}\\\
U_{I}^{+}&=\\{(x,y)\in\operatorname{\mathbb{R}}^{2}\ |\ y\in(c,d),\
x+\zeta_{1}(y)>0\\},\end{split}$
depending on whether $U$ is originally contained in $U_{I}^{-}$ or
$U_{I}^{+}$. Notice that the extension of the embedding $X$ might only be an
immersion. Since both $U_{I}^{-}$ and $U_{I}^{+}$ are connected and simply
connected, the immersion $X$ is unique, up to a Heisenberg rigid motion. We
denote these two $p$-minimal surfaces of special type I by
$S_{I}^{-}(\zeta_{1})=X(U_{I}^{-})$ and $S_{I}^{+}(\zeta_{1})=X(U_{I}^{+})$.
From (6.7), we see that the induced metric $I$ degenerates on the singular
set $\\{(x,y)\in\operatorname{\mathbb{R}}^{2}\ |\ y\in(c,d),\
x+\zeta_{1}(y)=0\\}$.
* •
for the special type II, the open rectangle $U$ in (2) of Theorem 1.6 can be
extended to be either
(6.9) $\begin{split}U_{II}^{-}&=\\{(x,y)\in\operatorname{\mathbb{R}}^{2}\ |\
y\in(c,d),\ 2x+\zeta_{1}(y)<0\\},\ \textrm{or}\\\
U_{II}^{+}&=\\{(x,y)\in\operatorname{\mathbb{R}}^{2}\ |\ y\in(c,d),\
2x+\zeta_{1}(y)>0\\},\end{split}$
depending on whether $U$ is originally contained in $U_{II}^{-}$ or
$U_{II}^{+}$. Again, the extension of the embedding $X$ might only be an
immersion.
Since both $U_{II}^{-}$ and $U_{II}^{+}$ are connected and simply connected,
the immersion $X$ is unique, up to a Heisenberg rigid motion. We denote these
two $p$-minimal surfaces of special type II by
$S_{II}^{-}(\zeta_{1})=X(U_{II}^{-})$ and
$S_{II}^{+}(\zeta_{1})=X(U_{II}^{+})$.
* •
when $\zeta_{2}(y)>0$ for all $y\in(c,d)$, since
$(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)$ has no zeros, the open rectangle $U$ in (3) can be
extended to be the product space
$V_{I}=\operatorname{\mathbb{R}}\times(c,d).$
Since the extended immersion $X$ is unique, up to a Heisenberg rigid motion,
we denote the $p$-minimal surface of type I by
$\Sigma_{I}(\zeta_{1},\zeta_{2})=X(V_{I})$.
* •
when $\zeta_{2}(y)<0$ for all $y\in(c,d)$, since the zero set of
$(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)$ consists of two separated curves defined
by $x+\zeta_{1}(y)+\sqrt{-\zeta_{2}(y)}=0$ and
$x+\zeta_{1}(y)-\sqrt{-\zeta_{2}(y)}=0$, respectively, the open rectangle $U$
in (3) can be extended to be one of the following three domains:
(6.10) $\begin{split}V_{II}^{-}&=\\{(x,y)\in\operatorname{\mathbb{R}}^{2}\ |\
y\in(c,d),x<-\zeta_{1}(y)-\sqrt{-\zeta_{2}(y)}\\},\\\
V_{II}^{+}&=\\{(x,y)\in\operatorname{\mathbb{R}}^{2}\ |\
y\in(c,d),x>-\zeta_{1}(y)+\sqrt{-\zeta_{2}(y)}\\},\ \ \textrm{and}\\\
V_{III}&=\\{(x,y)\in\operatorname{\mathbb{R}}^{2}\ |\
y\in(c,d),-\zeta_{1}(y)-\sqrt{-\zeta_{2}(y)}<x<-\zeta_{1}(y)+\sqrt{-\zeta_{2}(y)}\\}.\end{split}$
Since the extended immersion $X$ is unique, up to a Heisenberg rigid motion,
we denote these two $p$-minimal surfaces of type II by
$\Sigma_{II}^{-}(\zeta_{1},\zeta_{2})=X(V_{II}^{-})$ and
$\Sigma_{II}^{+}(\zeta_{1},\zeta_{2})=X(V_{II}^{+})$, and the $p$-minimal
surface of type III by $\Sigma_{III}(\zeta_{1},\zeta_{2})=X(V_{III})$.
We see that the $\zeta_{1}$-invariant is the only invariant for $p$-minimal
surfaces of special type, and that the $\zeta_{1}$\- and $\zeta_{2}$-invariants are the
only two invariants for the general type. From the above, Theorem 1.7 immediately
follows.
### 6.4. Symmetric $p$-minimal surfaces
A $p$-minimal surface is called symmetric if the $\zeta_{1}$-invariant is
constant for the special types, and if both the $\zeta_{1}$\- and
$\zeta_{2}$-invariants are constant for the general types. Since $\zeta_{1}$,
up to a translation on its image, is an invariant, we immediately have
###### Theorem 6.2.
All symmetric $p$-minimal surfaces of the same special type are locally
congruent to one another, whereas for the general type, locally there are a
family of symmetric $p$-minimal surfaces, depending on a parameter on
$\mathbb{R}$.
## 7\. Examples of $p$-minimal surfaces
### 7.1. Examples of special type I
The following is a family of $p$-minimal surfaces. They are defined by the
graphs of
(7.1) $u=Ax+By+C,$
for some real constants $A,B$ and $C$. It is easy to see that $(-B,A,C)$ or
$(x,y)=(-B,A)$ is the only singular point of the graph of $u=Ax+By+C$.
###### Lemma 7.1.
The graph defined by (7.1) is congruent to the graph of $u=0$.
###### Proof.
After the action of the left translation by $(B,-A,-C)$, we have
$\begin{split}(B,-A,-C)(x,y,u)&=(x+B,y-A,u-C-Ax-By)\\\
&=(x+B,y-A,0).\end{split}$
This completes the proof. ∎
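The left-translation computation above can be replayed numerically. The group law used below, $(a,b,c)\cdot(x,y,z)=(a+x,\,b+y,\,c+z+bx-ay)$, is the convention consistent with the computation in this proof (conventions for the Heisenberg product differ across the literature):

```python
def heis_mul(p, q):
    """Heisenberg product (a,b,c)*(x,y,z) = (a+x, b+y, c+z + b*x - a*y),
    the convention matching the computation in Lemma 7.1."""
    a, b, c = p
    x, y, z = q
    return (a + x, b + y, c + z + b * x - a * y)

A, B, C = 0.7, -1.2, 2.5  # sample plane u = A*x + B*y + C
for x, y in [(0.0, 0.0), (1.0, -3.0), (2.5, 4.1)]:
    u = A * x + B * y + C                      # point on the graph
    xx, yy, uu = heis_mul((B, -A, -C), (x, y, u))
    # the image lies on the graph of u = 0, translated in x and y
    assert (xx, yy) == (x + B, y - A) and abs(uu) < 1e-12
```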
###### Example 7.2.
The $p$-minimal surface defined by the graph of $u=0$ corresponds to
$\alpha=\frac{1}{r}$, where $r=\sqrt{x^{2}+y^{2}}$. Indeed, consider a surface
defined by
$X:(x,y)\rightarrow(x,y,0).$
We compute the horizontal normal
(7.2)
$e_{2}=\frac{(u_{x}-y)}{r}\overset{\circ}{e_{1}}+\frac{(u_{y}+x)}{r}\overset{\circ}{e_{2}}=\frac{-y}{\sqrt{x^{2}+y^{2}}}\overset{\circ}{e_{1}}+\frac{x}{\sqrt{x^{2}+y^{2}}}\overset{\circ}{e_{2}}.$
Thus
(7.3)
$e_{1}=\frac{x}{\sqrt{x^{2}+y^{2}}}\overset{\circ}{e_{1}}+\frac{y}{\sqrt{x^{2}+y^{2}}}\overset{\circ}{e_{2}}=\frac{x}{\sqrt{x^{2}+y^{2}}}\frac{\partial}{\partial
x}+\frac{y}{\sqrt{x^{2}+y^{2}}}\frac{\partial}{\partial y}.$
For the $\alpha$-function, we compute
$\alpha
e_{2}+T=\alpha(\frac{-y}{\sqrt{x^{2}+y^{2}}},\frac{x}{\sqrt{x^{2}+y^{2}}},\frac{-(x^{2}+y^{2})}{\sqrt{x^{2}+y^{2}}})+(0,0,1).$
On the other hand, $\alpha e_{2}+T=EX_{x}+FX_{y}=(E,F,0)$, for some $E$ and
$F$. Comparing with the above formula, we have
$\alpha=\frac{1}{\sqrt{x^{2}+y^{2}}},$
and hence $(0,0)$ is the only singular point. Notice that $(x,y)$ is not a
compatible coordinate system. In terms of the polar coordinates $(r,\theta)$
with the coordinates transformation $x=r\cos{\theta}$ and $y=r\sin{\theta}$,
that is, we consider the re-parametrization
$X:(r,\theta)\rightarrow(r\cos{\theta},r\sin{\theta},0).$
This gives
(7.4) $X_{r}=(\cos{\theta},\sin{\theta},0),\
X_{\theta}=(-r\sin{\theta},r\cos{\theta},0).$
From (7.3), it is easy to see that $e_{1}=X_{r}=\frac{\partial}{\partial r}$,
and thus $(r,\theta)$ is a compatible coordinate system. For the
$\alpha$-function and the induced metric $a$ and $b$, we solve the equation
$\frac{\alpha e_{2}+T}{\sqrt{1+\alpha^{2}}}=aX_{r}+bX_{\theta}$
to get $e_{2}=(-\sin{\theta},\cos{\theta},-r)$ from (7.2), and to obtain
$\alpha=\frac{1}{r},a=0,b=\frac{\alpha^{2}}{\sqrt{1+\alpha^{2}}},$
which, from the formula of $b$, implies that the polar coordinates
$(r,\theta)$ form a normal coordinate system. Since $\alpha=\frac{1}{r}$, the
surface $X$ is of special type I with $\zeta_{1}(\theta)=0$ as its
$\zeta_{1}$-invariant, and hence $X$ is a symmetric $p$-minimal surface.
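The relation defining $\alpha$ can be verified directly for this example: with $e_{2}$ as in (7.2), the $z$-component of $\alpha e_{2}+T$ equals $1-\alpha\sqrt{x^{2}+y^{2}}$, which vanishes precisely for $\alpha=\frac{1}{r}$. A small numerical check, writing the left-invariant frame in coordinates as $\mathring{e}_{1}=(1,0,y)$, $\mathring{e}_{2}=(0,1,-x)$ and $T=(0,0,1)$ (the convention matching (7.2)):

```python
import math

def z_component(x, y, alpha):
    """z-component of alpha*e2 + T on the plane u = 0, with
    e2 = (-y/r)*e1° + (x/r)*e2° and e1° = (1,0,y), e2° = (0,1,-x), T = (0,0,1)."""
    r = math.hypot(x, y)
    return alpha * ((-y / r) * y + (x / r) * (-x)) + 1.0

for x, y in [(1.0, 0.0), (0.6, -0.8), (3.0, 4.0)]:
    r = math.hypot(x, y)
    assert abs(z_component(x, y, 1.0 / r)) < 1e-12   # tangent: alpha = 1/r
    assert abs(z_component(x, y, 2.0 / r)) > 0.5     # any other alpha fails
```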
Figure 7.1. The characteristic direction field of $X$
Figure 7.2. $e_{1}=\frac{\partial}{\partial x}$ for
$x>-\frac{1}{2}g^{\prime}(y)$
In view of Theorem 6.2, we immediately have the following theorem.
###### Theorem 7.3.
A symmetric $p$-minimal surface of special type I is locally congruent to the
graph of $u=0$.
###### Proof.
This is because $\zeta_{1}$ is constant for a symmetric $p$-minimal
surface of special type I, and the function $\zeta_{1}$, up to a constant, is
a complete invariant. ∎
###### Theorem 7.4.
In terms of normal coordinates $(x,y)$, the induced metric (the first
fundamental form) on a $p$-minimal surface of special type I degenerates on
the singular set $\\{(x,y)\ |\ x=-\zeta_{1}(y)\\}$. Therefore if it is not
symmetric, then it will never smoothly extend through the singular set.
###### Proof.
In terms of normal coordinates, a $p$-minimal surface of special type I is
given by a function $\zeta_{1}(y)$, in which we have
$\alpha=\frac{1}{x+\zeta_{1}(y)},\ a=0,\
b=\frac{\alpha^{2}}{\sqrt{1+\alpha^{2}}}.$
Therefore,
$\hat{e}_{2}=a\frac{\partial}{\partial x}+b\frac{\partial}{\partial
y}=\frac{\alpha^{2}}{\sqrt{1+\alpha^{2}}}\frac{\partial}{\partial y}.$
This is equivalent to saying that the induced metric $I$ degenerates on the
singular set, on which $\alpha$ blows up. Actually, we have
$I=\hat{\omega}^{1}\otimes\hat{\omega}^{1}+\hat{\omega}^{2}\otimes\hat{\omega}^{2}=dx\otimes
dx+\left(\frac{1+\alpha^{2}}{\alpha^{4}}\right)dy\otimes dy,$
where $(\hat{\omega}^{1},\hat{\omega}^{2})$ is the dual co-frame of
$(\hat{e}_{1},\hat{e}_{2})$. If it is not symmetric, and suppose that it can
be smoothly extended beyond the singular set, then the singular set is
actually a singular curve and the induced metric must be non-degenerate. This
completes the proof. ∎
### 7.2. Examples of special type II
The other family of $p$-minimal surfaces are defined by the graph of
(7.5) $u=-ABx^{2}+(A^{2}-B^{2})xy+ABy^{2}+g(-Bx+Ay),$
for some real constants $A$ and $B$ such that $A^{2}+B^{2}=1$ and $g\in
C^{\infty}(\operatorname{\mathbb{R}})$.
###### Lemma 7.5.
The graph defined by (7.5) is congruent to the graph of $u=xy+g(y)$.
###### Proof.
Since $A^{2}+B^{2}=1$, the matrix
$\left(\begin{array}[]{rl}A&B\\\ -B&A\end{array}\right)$
defines a rotation on $\mathbb{R}^{2}$. Let
$\left(\begin{array}[]{c}X\\\
Y\end{array}\right)=\left(\begin{array}[]{rl}A&B\\\
-B&A\end{array}\right)\left(\begin{array}[]{c}x\\\ y\end{array}\right),$
we have
$\begin{split}XY&=(Ax+By)(-Bx+Ay)\\\
&=-ABx^{2}+(A^{2}-B^{2})xy+ABy^{2},\end{split}$
which implies
$u=XY+g(Y).$
This completes the proof. ∎
###### Example 7.6.
We now study the example of the graph of $u=xy+g(y)$. We consider a
parametrization of the graph defined by
$X:(x,y)\rightarrow(x,y,xy+g(y)),$
then we have
(7.6) $X_{x}=(1,0,y)=\mathring{e}_{1},\ \ X_{y}=(0,1,x+g^{\prime}(y)).$
The horizontal normal $e_{2}$ is taken to be
$\begin{split}e_{2}&=\left\\{\begin{array}[]{l}\frac{(u_{x}-y)}{D}\mathring{e}_{1}+\frac{(u_{y}+x)}{D}\mathring{e}_{2}\\\
\\\
-\frac{(u_{x}-y)}{D}\mathring{e}_{1}-\frac{(u_{y}+x)}{D}\mathring{e}_{2}\end{array}\right.\\\
&=\left\\{\begin{array}[]{ll}\frac{2x+g^{\prime}(y)}{D}\mathring{e}_{2}=\mathring{e}_{2}&,\
\textrm{if}\ 2x+g^{\prime}(y)>0\\\ \\\
-\frac{2x+g^{\prime}(y)}{D}\mathring{e}_{2}=\mathring{e}_{2}&,\ \textrm{if}\
2x+g^{\prime}(y)<0\end{array},\right.\end{split}$
where $D=|2x+g^{\prime}(y)|$. Combining with (7.6), we then have
(7.7) $e_{1}=-Je_{2}=\mathring{e}_{1}=X_{x}=\frac{\partial}{\partial x}.$
We proceed to compute the $\alpha$-function, $a$ and $b$ in terms of $(x,y)$,
which is a compatible coordinate system. First, the relation $\frac{\alpha
e_{2}+T}{\sqrt{1+\alpha^{2}}}=aX_{x}+bX_{y}$ gives
$\frac{1}{\sqrt{1+\alpha^{2}}}\left[\alpha(0,1,-x)+(0,0,1)\right]=(a,b,ay+b(x+g^{\prime}(y))).$
It immediately shows
(7.8) $a=0,\ b=\frac{\alpha}{\sqrt{1+\alpha^{2}}},\ \textrm{and}\ \
\alpha=\frac{1}{2x+g^{\prime}(y)}.$
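The triple in (7.8) can be confirmed by comparing both sides of the defining relation componentwise; $g(y)=0.3y^{3}$ below is just a sample choice of $g$:

```python
def g(y):  return 0.3 * y**3        # sample smooth g
def gp(y): return 0.9 * y**2        # its derivative g'

def both_sides(x, y):
    """Both sides of (alpha*e2 + T)/sqrt(1+alpha^2) = a*X_x + b*X_y with (7.8)."""
    alpha = 1.0 / (2 * x + gp(y))
    s = (1 + alpha**2)**0.5
    lhs = (0.0, alpha / s, (1.0 - alpha * x) / s)     # (alpha*(0,1,-x) + (0,0,1))/s
    a, b = 0.0, alpha / s
    rhs = (a, b, a * y + b * (x + gp(y)))
    return lhs, rhs

for x, y in [(1.0, 0.5), (2.0, -1.0), (0.3, 2.0)]:    # region 2x + g'(y) > 0
    lhs, rhs = both_sides(x, y)
    assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
```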
(i) Thus, from the formula of $b$, the coordinates $(x,y)$ form a normal
coordinate system on the part where $2x+g^{\prime}(y)>0$, and we have
$\zeta_{1}(y)=g^{\prime}(y)$. It is easy to see that the graph of $u=xy+g(y)$,
for $g\in C^{\infty}(\operatorname{\mathbb{R}})$, is just the maximal surface
$S_{II}^{+}(g^{\prime}(y))$ when we restrict to the domain $\\{(x,y)\ |\
y\in\operatorname{\mathbb{R}},\ 2x+g^{\prime}(y)>0\\}$.
(ii) For the other part with $2x+g^{\prime}(y)<0$, we have $b<0$. Therefore,
instead of $(x,y)$, the new coordinate system $(\tilde{x},\tilde{y})=(x,-y)$
is a normal coordinate system (notice that the compatible coordinates are
chosen so that $b>0$). The invariants $\alpha,a$ and $b$ read
(7.9) $a=0,\ b=-\frac{\alpha}{\sqrt{1+\alpha^{2}}}>0,\ \textrm{and}\ \
\alpha=\frac{1}{2x+g^{\prime}(-\tilde{y})}<0,$
and hence $\zeta_{1}(\tilde{y})=g^{\prime}(-\tilde{y})$. Here $^{\prime}$ denotes
the derivative with respect to $y$.
(iii) For the other part with $2x+g^{\prime}(y)<0$, we can say something more.
If, instead of $\frac{\partial}{\partial x}$, we choose
$-\frac{\partial}{\partial x}$ as the characteristic direction, that is,
$e_{1}=-\frac{\partial}{\partial x}$, then the coordinates
$(\tilde{x},\tilde{y})=(-x,-y)$ lead to the normal coordinate system for the
part with $2x+g^{\prime}(y)<0$. As a result, we consider the
re-parametrization of the surface
$X:(\tilde{x},\tilde{y})\rightarrow(-\tilde{x},-\tilde{y},\tilde{x}\tilde{y}+g(-\tilde{y}))$
such that $e_{1}=\frac{\partial}{\partial\tilde{x}}=X_{\tilde{x}}$. Similarly,
from $\frac{\alpha
e_{2}+T}{\sqrt{1+\alpha^{2}}}=aX_{\tilde{x}}+bX_{\tilde{y}}$, we have
(7.10) $a=0,\ b=\frac{\alpha}{\sqrt{1+\alpha^{2}}}>0,\ \textrm{and}\ \
\alpha=\frac{1}{2\tilde{x}+\frac{\partial\tilde{g}}{\partial\tilde{y}}(\tilde{y})}>0,$
where $\tilde{g}(\tilde{y})$ is defined by
$\tilde{g}(\tilde{y})=g(-\tilde{y})$. Thus $(\tilde{x},\tilde{y})$ is the
normal coordinate system and we have
$\zeta_{1}(\tilde{y})=\frac{\partial\tilde{g}}{\partial\tilde{y}}(\tilde{y})$.
Let $R(g(y))$ and $L(g(y))$ be the part of the surface with
$2x+g^{\prime}(y)>0$ and $2x+g^{\prime}(y)<0$, respectively. Actually, in
terms of the notations defined in Subsection 6.3, we see that
$R(g(y))=S_{II}^{+}(g^{\prime}(y))$ and $L(g(y))=S_{II}^{-}(g^{\prime}(y))$.
Then, comparing with (7.8) and (7.10), we immediately have the following
proposition, due to Theorem 1.6.
###### Proposition 7.7.
The surfaces $L(g(-y))\left(\textrm{or}\ S_{II}^{-}(-g^{\prime}(y))\right)$ and
$R(g(y))\left(\textrm{or}\ S_{II}^{+}(g^{\prime}(y))\right)$ are congruent to
each other. They in fact differ by an action of the Heisenberg rigid motion
$(x,y,t)\rightarrow(-x,-y,t)$.
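Proposition 7.7 can be illustrated pointwise: the rotation $(x,y,t)\rightarrow(-x,-y,t)$ carries the graph of $u=xy+g(-y)$ onto the graph of $u=xy+g(y)$ and exchanges the left and right parts; $g(y)=\frac{1}{2}y^{2}+y$ below is a sample choice:

```python
def g(y): return 0.5 * y**2 + y     # sample g; g'(y) = y + 1

def on_graph(x, y, u, gg):
    return abs(u - (x * y + gg(y))) < 1e-12

sigma = lambda x, y, t: (-x, -y, t)  # rotation by pi about the t-axis

g_reflected = lambda y: g(-y)
for x, y in [(-3.0, 1.0), (-2.0, -0.5), (-4.0, 2.0)]:
    assert 2 * x - (-y + 1) < 0        # point lies in L(g(-y)): 2x - g'(-y) < 0
    u = x * y + g_reflected(y)
    X, Y, U = sigma(x, y, u)
    assert on_graph(X, Y, U, g)        # image lies on the graph of u = x*y + g(y)
    assert 2 * X + (Y + 1) > 0         # ... inside the region R(g(y))
```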
###### Theorem 7.8.
Any $p$-minimal surface of special type II is locally a part of the surface
defined by $u=xy+g(y)$ for some $g\in C^{\infty}(\mathbb{R})$, up to a
Heisenberg rigid motion. In addition, it is symmetric if and only if $g(y)$ is
linear in the variable $y$. Therefore, any symmetric $p$-minimal surface of
special type II is locally a part of the surface defined by the graph of
$u=xy$, up to a Heisenberg rigid motion.
###### Proof.
Any $p$-minimal surface of special type II locally has the following normal
representation
(7.11) $a=0,\ b=\frac{|\alpha|}{\sqrt{1+\alpha^{2}}},\ \
\alpha=\frac{1}{2x+\zeta_{1}(y)},$
in terms of normal coordinates $(x,y)$. Therefore, comparing with (7.8), the
proof is finished if we choose $g$ such that $g^{\prime}(y)=\zeta_{1}(y)$.
Moreover, it is symmetric if and only if $\zeta_{1}(y)=g^{\prime}(y)=$
constant, i.e., $g$ is linear in $y$. ∎
### 7.3. Examples of type I, II and III
We consider the surface $\Sigma\subset H_{1}$ defined on
$\operatorname{\mathbb{R}}^{2}$ by
(7.12) $X:(s,t)\rightarrow(x,y,z)=(s\cos\theta(t),s\sin\theta(t),t).$
Then it can be calculated that
$X_{s}=(\cos\theta(t),\sin\theta(t),0),\quad
X_{t}=(-s\theta^{\prime}\sin\theta(t),s\theta^{\prime}\cos\theta(t),1).$
Notice that $\mathring{e}_{1}|_{(0,0,z)}=\frac{\partial}{\partial x}$,
$\mathring{e}_{2}|_{(0,0,z)}=\frac{\partial}{\partial y}$ and
$\theta(t)=\theta(z)$, i.e., $\theta$ is independent of $x$ and $y$. We
rewrite $X_{s}$ as
(7.13) $\displaystyle X_{s}$ $\displaystyle=$
$\displaystyle\cos\theta(t)\frac{\partial}{\partial
x}+\sin\theta(t)\frac{\partial}{\partial y}$ (7.14) $\displaystyle=$
$\displaystyle\cos\theta(t)(\mathring{e}_{1}(X)-y\frac{\partial}{\partial
z})+\sin\theta(t)(\mathring{e}_{2}(X)+x\frac{\partial}{\partial z})$ (7.15)
$\displaystyle=$
$\displaystyle\cos\theta(t)(\mathring{e}_{1}(X)-x\sin\theta\frac{\partial}{\partial
z})+\sin\theta(t)(\mathring{e}_{2}(X)+x\cos\theta\frac{\partial}{\partial z})$
(7.16) $\displaystyle=$
$\displaystyle\cos\theta(t)\mathring{e}_{1}(X)+\sin\theta(t)\mathring{e}_{2}(X)\in\xi,$
which is a vector tangent to the contact plane. Then we choose $e_{1}=X_{s}$,
and hence
$e_{2}=Je_{1}=-\sin\theta(t)\mathring{e}_{1}(X)+\cos\theta(t)\mathring{e}_{2}(X)$.
We compute
(7.17)
$\begin{split}\bigtriangledown_{e_{1}}e_{2}&=-(e_{1}\theta(t))(\cos\theta(t)\mathring{e}_{1}(X)-\sin\theta(t)\mathring{e}_{2}(X))\\\
&=0,\ \left(\because
e_{1}\theta(t)=\frac{d\theta(t)}{ds}=0\right).\end{split}$
This implies that the surface defined by (7.12) has $p$-mean curvature $H=0$.
We proceed to work out the $\alpha$-function $\alpha$, and $a$ and $b$. By
definition, it is defined by a function satisfying that $\alpha e_{2}+T\in
T\Sigma$, that is,
(7.18)
$\alpha(-\sin\theta\mathring{e}_{1}+\cos\theta\mathring{e}_{2})+T=EX_{s}+FX_{t},$
for some functions $E,F$. Similarly to how we rewrote $X_{s}$ as a linear
combination of $\mathring{e}_{1},\mathring{e}_{2}$ and
$\frac{\partial}{\partial z}$, we can express $X_{t}$ as
(7.19)
$X_{t}=(-s\theta^{\prime}\sin{\theta})\mathring{e}_{1}+(s\theta^{\prime}\cos{\theta})\mathring{e}_{2}+(s^{2}\theta^{\prime}+1)\frac{\partial}{\partial
z}.$
Combining (7.18) and (7.19) and noticing that $X_{s}=e_{1}$, we obtain that
$E=0,\ F=\frac{1}{s^{2}\theta^{\prime}(t)+1}$ and hence
$\alpha=\frac{s\theta^{\prime}(t)}{s^{2}\theta^{\prime}(t)+1},\ a=0,\
b=\frac{F}{\sqrt{1+\alpha^{2}}}.$
If $\theta^{\prime}(t)=0$, then we have $\alpha=0$. However, if
$\theta^{\prime}(t)\neq 0$, then $\alpha$, $a$ and $b$ read
(7.20) $\alpha=\frac{s}{s^{2}+\frac{1}{\theta^{\prime}(t)}},\ a=0,\
b=\frac{\alpha}{s\sqrt{1+\alpha^{2}}}\frac{1}{\theta^{\prime}(t)},$
which means that $(s,t)$ is an orthogonal coordinate system, but not normal.
In particular, if $\theta^{\prime}(t)>0$ for all $t$, then one sees that the
$p$-minimal surface has no singularities. From (7.20), we conclude that this
surface is of type I if $\theta^{{}^{\prime}}(t)>0$ for all $t$. On the other
hand, if $\theta^{{}^{\prime}}(t)<0$ for all $t$, then it is either of type II
on which $s>\sqrt{-\frac{1}{\theta^{\prime}(t)}}$ or
$s<-\sqrt{-\frac{1}{\theta^{\prime}(t)}}$; or of type III on which
$-\sqrt{-\frac{1}{\theta^{\prime}(t)}}<s<\sqrt{-\frac{1}{\theta^{\prime}(t)}}$.
Finally, we can further take the coordinates
$(\tilde{s},\tilde{t})=(s-C,\theta(t))$ to normalize $b$ such that it only
depends on $s$ and $\alpha$, then we have
(7.21)
$\alpha=\frac{\tilde{s}+C}{(\tilde{s}+C)^{2}+\frac{1}{\theta^{\prime}(\theta^{-1}(\tilde{t}))}},\
a=0,\ b=\frac{\alpha}{(\tilde{s}+C)\sqrt{1+\alpha^{2}}},$
for some constant $C$, and hence
(7.22) $\zeta_{1}(\tilde{t})=C,\ \
\zeta_{2}(\tilde{t})=\frac{1}{\theta^{\prime}(\theta^{-1}(\tilde{t}))}.$
### 7.4. Examples of type I
There is another surface of type I. Consider a surface $\Sigma\subset H_{1}$
defined on $\operatorname{\mathbb{R}}^{2}$ by
(7.23) $X:(s,t)\rightarrow(x,y,z)=(\cos{s}+(\sin{s})t,\sin{s}-(\cos{s})t,t),$
accordingly we obtain
$X_{s}=(-\sin{s}+(\cos{s})t,\cos{s}+(\sin{s})t,0)\mbox{ and
}X_{t}=(\sin{s},-\cos{s},1).$
Suppose that $\eta X_{s}+\zeta X_{t}\in\xi$. Then $0=\theta(\eta X_{s}+\zeta
X_{t})=\eta\theta(X_{s})+\zeta\theta(X_{t})$. It is easy to get
$\theta(X_{s})=1+t^{2}$ and $\theta(X_{t})=0$, and hence we have $\eta=0$. We
conclude that
$X_{t}=(\sin{s},-\cos{s},1)=(\sin{s})\mathring{e}_{1}-(\cos{s})\mathring{e}_{2}\in\xi\cap
T\Sigma,$
which means that $(t,s)$ is a compatible coordinate system. Taking
$e_{1}=X_{t}$, we see
$e_{2}=Je_{1}=(\cos{s})\mathring{e}_{1}+(\sin{s})\mathring{e}_{2}$, and then
$\nabla_{e_{1}}e_{2}=0$. Thus the $p$-mean curvature $H=0$. For $\alpha,a$ and
$b$, we solve
$\hat{e}_{2}=\frac{\alpha e_{2}+T}{\sqrt{1+\alpha^{2}}}=aX_{t}+bX_{s},$
to get
$\alpha=\frac{t}{1+t^{2}},\ a=b=\frac{1}{\sqrt{t^{4}+3t^{2}+1}}.$
Therefore it is of type I. We can further normalize the invariants in terms
of the new coordinates $(\tilde{t},\tilde{s})=(t-s+C,s)$ such that
(7.24)
$\tilde{\alpha}=\frac{\tilde{t}+\tilde{s}-C}{(\tilde{t}+\tilde{s}-C)^{2}+1},\
\ \tilde{a}=0,\ \
\tilde{b}=\frac{\tilde{\alpha}}{(\tilde{t}+\tilde{s}-C)\sqrt{1+\tilde{\alpha}^{2}}}>0.$
As a consequence, we have
(7.25) $\zeta_{1}(\tilde{s})=\tilde{s},\ \textrm{up to a constant}\ C;\
\textrm{and}\ \ \zeta_{2}(\tilde{s})=1.$
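The invariants computed for this example can be double-checked by evaluating both sides of the defining relation $\frac{\alpha e_{2}+T}{\sqrt{1+\alpha^{2}}}=aX_{t}+bX_{s}$ at sample points, writing the frame in coordinates as $\mathring{e}_{1}=(1,0,y)$, $\mathring{e}_{2}=(0,1,-x)$ and $T=(0,0,1)$ (the same convention as in Example 7.2):

```python
import math

def check(s, t):
    """Verify alpha = t/(1+t^2) and a = b = 1/sqrt(t^4+3t^2+1) for (7.23)."""
    x = math.cos(s) + math.sin(s) * t
    y = math.sin(s) - math.cos(s) * t
    alpha = t / (1 + t**2)
    ab = 1.0 / (t**4 + 3 * t**2 + 1)**0.5          # a = b
    # e2 = cos(s)*e1° + sin(s)*e2°, with e1° = (1,0,y), e2° = (0,1,-x)
    e2 = (math.cos(s), math.sin(s), y * math.cos(s) - x * math.sin(s))
    norm = (1 + alpha**2)**0.5
    lhs = tuple((alpha * v + w) / norm for v, w in zip(e2, (0.0, 0.0, 1.0)))
    Xt = (math.sin(s), -math.cos(s), 1.0)
    Xs = (-math.sin(s) + t * math.cos(s), math.cos(s) + t * math.sin(s), 0.0)
    rhs = tuple(ab * (p + q) for p, q in zip(Xt, Xs))  # a*X_t + b*X_s with a = b
    assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))

for s, t in [(0.0, 1.0), (1.2, -0.7), (2.5, 3.0)]:
    check(s, t)
```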
Figure 7.3. Helicoid
Figure 7.4. Conicoid
## 8\. Structures of singular sets of $p$-minimal surfaces
In this section, we assume that $\Sigma\subset H_{1}$ is a $p$-minimal
surface.
###### Proposition 8.1.
Let $p$ be a singular point of a $p$-minimal surface $\Sigma$. Then there must
be a characteristic line approaching this point $p$.
###### Proof.
Suppose, seeking a contradiction, that no characteristic line approaches the
point $p$. Firstly, by [3], there exists a small neighborhood of
$p$ whose intersection with the singular set is contained in a smooth curve
$\Gamma_{p}$. And if the neighborhood is small enough, then, on one side of
the curve $\Gamma_{p}$, we are able to find a compatible coordinate system
$(U;x,y)$ such that $p$ is contained on the boundary of $U$. Notice that, by
our assumption, $p$ does not lie at the end of any leaf of the foliation
defined by $e_{1}=\frac{\partial}{\partial x}$. Thus the image of the map
defined on $U$ by $(x,y)\rightarrow(\alpha,\alpha_{x})$ is bounded on the
phase plane (see Figure 4.1). Therefore, we have that $\lim_{(x,y)\rightarrow
p}\alpha(x,y)$ is finite, which is a contradiction since $p$ is a singular
point. ∎
### 8.1. The proof of Theorem 1.8
Due to Proposition 8.1, it suffices to prove the theorem for $p$-minimal
surfaces of each type. For the general type (notice that there are no singular
points for type I), we choose a normal coordinate system $(x,y)$ such that
$\alpha$, and $a,b$ read
(8.1) $\alpha(x,y)=\frac{x+\zeta_{1}(y)}{(x+\zeta_{1}(y))^{2}-c^{2}(y)},\
\textrm{and}\ a=0,\ b=\frac{|\alpha|}{|x+\zeta_{1}(y)|\sqrt{1+\alpha^{2}}},$
where $c(y)$ is a positive function of the variable $y$ such that
$\zeta_{2}(y)=-c^{2}(y)$. Then the singular set is the graphs of the functions
(8.2) $x=-\zeta_{1}(y)\pm c(y).$
By (6.7), the induced metric $I$ (or the first fundamental form) on the
regular part reads
(8.3) $I=dx\otimes
dx+\big{[}(x+\zeta_{1}(y))^{2}+[(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)]^{2}\big{]}dy\otimes
dy.$
Now we use the metric to compute the length of the singular set
$\\{(-\zeta_{1}(y)\pm c(y),y)\\}$, where $y$ belongs to some open interval.
Let $\gamma_{\pm}(y)=(-\zeta_{1}(y)\pm c(y),y)$, which is a parametrization of
the singular set. Then the square of the velocity at $y$ is
(8.4) $\begin{split}|\gamma^{\prime}_{\pm}(y)|^{2}&=|-\zeta_{1}^{\prime}(y)\pm
c^{\prime}(y)|^{2}+\big{[}(x+\zeta_{1}(y))^{2}+[(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)]^{2}\big{]}\\\
&=|-\zeta_{1}^{\prime}(y)\pm c^{\prime}(y)|^{2}+c^{2}(y),\\\ &>0\ \textrm{for
all}\ y,\end{split}$
where we have used the fact that $(x+\zeta_{1}(y))^{2}=c^{2}(y)$ on the
singular set. Formula (8.4) shows that the parametrized curve
$\gamma_{\pm}(y)$ of the singular set has a positive length.
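As a sanity check (not part of the original argument), the identity (8.4) can be verified numerically: substituting the singular curve (8.2) into the metric (8.3) should reproduce $|-\zeta_1'(y)\pm c'(y)|^2+c^2(y)$. The functions `zeta1` and `c` below are hypothetical sample choices, not taken from the paper.

```python
import math

# Hypothetical sample invariants for a general-type surface:
zeta1  = lambda y: math.sin(y)
dzeta1 = lambda y: math.cos(y)
c      = lambda y: 1.0 + 0.5*math.cos(y)   # positive; zeta2 = -c^2
dc     = lambda y: -0.5*math.sin(y)

def speed2(y, sign):
    x = -zeta1(y) + sign*c(y)                  # singular curve (8.2)
    dx = -dzeta1(y) + sign*dc(y)
    # metric coefficient g_yy from (8.3), with zeta2 = -c^2
    g_yy = (x + zeta1(y))**2 + ((x + zeta1(y))**2 - c(y)**2)**2
    return dx**2 + g_yy

for y in (0.0, 0.7, 2.1):
    for sign in (1, -1):
        lhs = speed2(y, sign)
        rhs = (-dzeta1(y) + sign*dc(y))**2 + c(y)**2   # right-hand side of (8.4)
        assert abs(lhs - rhs) < 1e-12 and lhs > 0
print("formula (8.4) verified at sample points; speed is strictly positive")
```

Since $(x+\zeta_1(y))^2=c^2(y)$ on the curve, the second summand of $g_{yy}$ cancels exactly, so the agreement in the check is exact up to rounding.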
Similarly, for special type II, in terms of a normal coordinate system
$(x,y)$, we have
(8.5) $\alpha=\frac{1}{2x+\zeta_{1}(y)},\ \textrm{and}\ a=0,\
b=\frac{|\alpha|}{\sqrt{1+\alpha^{2}}}.$
In this case, equation (6.7) suggests that the induced metric $I$ on the
regular part reads
(8.6) $dx\otimes dx+\big{[}1+(2x+\zeta_{1}(y))^{2}\big{]}dy\otimes dy.$
Let $\gamma(y)=(\frac{-\zeta_{1}(y)}{2},y)$ be a parametrization of the
singular set $\\{(\frac{-\zeta_{1}(y)}{2},y)\\}$, where $y$ belongs to some open
interval. Then the square of the velocity at $y$ is
(8.7)
$|\gamma^{\prime}(y)|^{2}=\frac{(\zeta_{1}^{\prime}(y))^{2}}{4}+\big{[}1+(2x+\zeta_{1}(y))^{2}\big{]}>1,$
where we have used the fact that $2x=-\zeta_{1}(y)$ on the singular set.
Again, formula (8.7) shows that the parametrized curve $\gamma(y)$ of the
singular set has a positive length.
Finally, for special type I, in terms of a normal coordinate system $(x,y)$,
we have
(8.8) $\alpha=\frac{1}{x+\zeta_{1}(y)},\ \textrm{and}\ a=0,\
b=\frac{\alpha^{2}}{\sqrt{1+\alpha^{2}}}.$
If $\gamma(y)=(-\zeta_{1}(y),y)$ is a parametrization of the singular set
$\\{(-\zeta_{1}(y),y)\\}$ for $y$ inside some open interval, then (6.7)
indicates that the induced metric $I$ on the regular part reads
(8.9) $dx\otimes
dx+\big{[}(x+\zeta_{1}(y))^{2}+(x+\zeta_{1}(y))^{4}\big{]}dy\otimes dy$
and the square of the velocity at $y$ is
(8.10)
$|\gamma^{\prime}(y)|^{2}=(\zeta_{1}^{\prime}(y))^{2}+\big{[}(x+\zeta_{1}(y))^{2}+(x+\zeta_{1}(y))^{4}\big{]}=(\zeta_{1}^{\prime}(y))^{2},$
where we have used the fact that $x=-\zeta_{1}(y)$ on the singular set. From
formula (8.10), we see that the length of $\gamma(y)$ depends on whether
$\zeta_{1}^{\prime}(y)$ vanishes, which implies that the singular set is
either an isolated point or a smooth curve of positive length. In addition,
the singular set is an isolated point if and only if $\zeta_{1}$ is a
constant, that is, the surface is part of a plane. This completes the proof
of Theorem 1.8.
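For special type I, the computation (8.10) can also be checked numerically (again, this is only a sanity check, not part of the proof): on the singular set $x=-\zeta_1(y)$, the $g_{yy}$ coefficient of the metric (8.9) vanishes, so the speed reduces to $(\zeta_1'(y))^2$. The sample function `zeta1` below is hypothetical.

```python
import math

# Hypothetical zeta1 for a special type I surface:
zeta1  = lambda y: 0.3*y + math.sin(2*y)
dzeta1 = lambda y: 0.3 + 2*math.cos(2*y)

def speed2(y):
    x = -zeta1(y)                                 # singular set
    dx = -dzeta1(y)
    g_yy = (x + zeta1(y))**2 + (x + zeta1(y))**4  # metric (8.9): vanishes here
    return dx**2 + g_yy

for y in (0.0, 1.3, -2.5):
    assert abs(speed2(y) - dzeta1(y)**2) < 1e-12  # formula (8.10)
print("on the singular set the speed equals (zeta1')^2; it vanishes iff zeta1 is constant")
```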
### 8.2. The proof of Theorem 1.9
Around the singular point $p$, we may assume that the surface is represented
by a graph $z=u(x,y)$. Let $X$ be a parametrization of the $p$-minimal surface
around $p$ defined by $X(x,y)=(x,y,u(x,y))$. Then
(8.11) $\begin{split}X_{x}&=(1,0,u_{x})=\frac{\partial}{\partial
x}+u_{x}\frac{\partial}{\partial
t}=\mathring{e}_{1}+(u_{x}-y)\frac{\partial}{\partial t};\\\
X_{y}&=(0,1,u_{y})=\frac{\partial}{\partial y}+u_{y}\frac{\partial}{\partial
t}=\mathring{e}_{2}+(u_{y}+x)\frac{\partial}{\partial t},\end{split}$
which yields
$I(X_{x},X_{x})=1+(u_{x}-y)^{2},\ \ I(X_{y},X_{y})=1+(u_{y}+x)^{2},\ \
I(X_{x},X_{y})=(u_{x}-y)(u_{y}+x),$
where $I$ is the induced metric (first fundamental form) on the surface. Now
we choose a horizontal normal as follows
$e_{2}=-\frac{(u_{x}-y)\mathring{e}_{1}+(u_{y}+x)\mathring{e}_{2}}{D},$
where $D=\left((u_{x}-y)^{2}+(u_{y}+x)^{2}\right)^{1/2}$. Then
(8.12) $e_{1}=\frac{(u_{y}+x)\mathring{e}_{1}-(u_{x}-y)\mathring{e}_{2}}{D}$
is tangent to the characteristic curves.
We first claim that either $u_{xx}(p)\neq 0$ or $(u_{xy}+1)(p)\neq 0$. Let
$f(x,y)=u_{x}-y$ and let $(x(s),y(s))$ be a parametrization of the singular
curve passing through $p$. Notice that we may assume, w.l.o.g., that the
$x$-axis through $p$ is transverse to the singular curve, i.e., $y^{\prime}\neq
0$. Since $f(x(s),y(s))=0$, taking the derivative with respect to $s$ gives
$u_{xx}x^{\prime}+(u_{xy}-1)y^{\prime}=0$. Therefore, $(u_{xy}-1)(p)=0$ if
$u_{xx}(p)=0$, and hence $(u_{xy}+1)(p)=2\neq 0$.
If $u_{xx}(p)\neq 0$, we turn to compute the angle $\zeta$ between $e_{1}$ and
$X_{x}$. First, from (8.11), we have
$I(e_{1},X_{x})=|e_{1}||X_{x}|\cos{\zeta}=(1+(u_{x}-y)^{2})^{1/2}\cos{\zeta}.$
On the other hand, using (8.12), we get
$I(e_{1},X_{x})=\frac{(u_{y}+x)}{D}.$
Combining the above two formulae, we obtain
(8.13)
$\begin{split}\cos{\zeta}&=\frac{u_{y}+x}{D\sqrt{1+(u_{x}-y)^{2}}}=\frac{\frac{u_{y}+x}{u_{x}-y}}{\frac{D}{u_{x}-y}\sqrt{1+(u_{x}-y)^{2}}}\\\
&=\pm\frac{\frac{u_{y}+x}{u_{x}-y}}{\sqrt{1+(\frac{u_{y}+x}{u_{x}-y})^{2}}\sqrt{1+(u_{x}-y)^{2}}},\end{split}$
where the sign $\pm$ depends on whether $u_{x}-y$ is positive or negative.
By the mean value theorem, it is easy to see (or see [3]) that
(8.14) $\lim_{q\rightarrow
p^{+}}\frac{u_{y}+x}{u_{x}-y}=\frac{u_{xy}+1}{u_{xx}}(p)=\lim_{q\rightarrow
p^{-}}\frac{u_{y}+x}{u_{x}-y},$
and thus
(8.15) $\lim_{q\rightarrow
p^{+}}\cos{\zeta}=\frac{\frac{u_{xy}+1}{u_{xx}}(p)}{\sqrt{1+\left(\frac{u_{xy}+1}{u_{xx}}(p)\right)^{2}}}=-\lim_{q\rightarrow
p^{-}}\cos{\zeta},$
where $\lim_{q\rightarrow p^{+}}(\lim_{q\rightarrow p^{-}})$ means that
$q\rightarrow p$ from the side in which $u_{x}-y$ is positive (negative).
If $(u_{xy}+1)(p)\neq 0$, similar computations give the angle $\eta$ between
$e_{1}$ and $X_{y}$ by
(8.16)
$\cos{\eta}=\frac{-(\frac{u_{x}-y}{u_{y}+x})}{\pm\sqrt{1+(\frac{u_{x}-y}{u_{y}+x})^{2}}\sqrt{1+(u_{y}+x)^{2}}},$
thus
(8.17) $\lim_{q\rightarrow
p^{+}}\cos{\eta}=-\frac{\frac{u_{xx}}{u_{xy}+1}(p)}{\sqrt{1+\left(\frac{u_{xx}}{u_{xy}+1}(p)\right)^{2}}}=-\lim_{q\rightarrow
p^{-}}\cos{\eta},$
where $\lim_{q\rightarrow p^{+}}(\lim_{q\rightarrow p^{-}})$ means that
$q\rightarrow p$ from the side in which $u_{y}+x$ is positive (negative).
From (8.14), it is easy to see that both $u_{x}-y$ and $u_{y}+x$ change sign
across the singular curve, which is defined by $u_{x}-y=0$ and $u_{y}+x=0$.
Therefore, from the formula of $e_{1}$ (see (8.12)), together with (8.15) and
(8.17), we conclude that the characteristic vector field $e_{1}$ differs by a
sign on the two sides of the singular curve when approaching the singular
point $p$. This completes the proof of Theorem 1.9.
### 8.3. The proof of Theorem 1.10
In terms of normal coordinates $(x,y)$, the surface $\Sigma$ is represented by
two functions $\zeta_{1}(y)$ and $\zeta_{2}(y)$. Since it is of type II, we
have $\zeta_{2}(y)<0$ and
(8.18) $\alpha=\frac{x+\zeta_{1}(y)}{(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)},\
a=0,\ b=\frac{|\alpha|}{|x+\zeta_{1}(y)|\sqrt{1+\alpha^{2}}},$
on which either $x+\zeta_{1}(y)>\sqrt{-\zeta_{2}(y)}$ or
$x+\zeta_{1}(y)<-\sqrt{-\zeta_{2}(y)}$. The induced metric is
(8.19) $I=dx\otimes dx+\frac{1}{b^{2}}dy\otimes dy.$
We assume that $\Sigma$ lies on the part $x+\zeta_{1}(y)>\sqrt{-\zeta_{2}(y)}$
(the proof for the case that $\Sigma$ lies on the part
$x+\zeta_{1}(y)<-\sqrt{-\zeta_{2}(y)}$ is similar). Suppose, in addition, that
$\Sigma$ can be smoothly extended beyond the singular curve
$x+\zeta_{1}(y)-\sqrt{-\zeta_{2}(y)}=0$. By theorem 1.9, the coordinates
$(x,y)$ can be extended beyond the singular curve to be compatible
coordinates. Then the $\alpha$-function on the other side of the singular
curve must be one of the following
1. (1)
$\frac{1}{x+\zeta_{1}(y)-\sqrt{-\zeta_{2}(y)}}$, which is of special type I;
2. (2)
$\frac{1}{2x+2(\zeta_{1}(y)-\sqrt{-\zeta_{2}(y)})}$, which is of special type
II;
3. (3)
$\alpha=\frac{x+\zeta_{1}(y)}{(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)}$, which is of
general type,
for $x+\zeta_{1}(y)<\sqrt{-\zeta_{2}(y)}$. The induced metric on this other
part is
(8.20) $I=dx\otimes dx-\frac{a}{b}dx\otimes dy-\frac{a}{b}dy\otimes
dx+\frac{(1+a^{2})}{b^{2}}dy\otimes dy.$
Comparing (8.19) and (8.20), and noticing that $I$ is smooth around the singular
curve, we have
$a=0,\ b=\frac{|\alpha|}{|x+\zeta_{1}(y)|\sqrt{1+\alpha^{2}}},$
with $\alpha=\frac{x+\zeta_{1}(y)}{(x+\zeta_{1}(y))^{2}+\zeta_{2}(y)}$. That
is, cases (1) and (2) for $\alpha$ do not happen. Therefore, the extended
coordinates beyond the singular curve are also normal coordinates. And the
formula of $\alpha$ shows that the part on the other side of the singular
curve is of type III. This completes the proof of Theorem 1.10.
### 8.4. The proof of the Bernstein-type theorem
In this subsection, we are going to show that (7.1) and (7.5) are the only
entire smooth $p$-minimal graphs. Suppose that $\Sigma\subset H_{1}$ is an
entire $p$-minimal graph. First of all, since it is a graph, we notice that
there is nowhere at which $\alpha$ is zero. Next, we claim the following
lemma.
###### Lemma 8.2.
The induced singular characteristic foliation of $\Sigma$ doesn’t contain a
leaf along which $\alpha$ is of general type, that is, in terms of normal
coordinates around the leaf, $\alpha$ is a general solution of the Codazzi-
like equation $($see (4.1)$)$.
###### Proof.
Suppose not; that is, assume that the induced singular characteristic foliation
of $\Sigma$ contains such a leaf. Then there is a piece of the surface (a
neighborhood) around the leaf such that this piece is of general type. If this
piece is of type I or of type III, then the entireness and the phase
plane (Figure 4.1) indicate that the extended $\alpha$-function must vanish
somewhere. This is a contradiction. Therefore this piece
(of general type) must be of type II. Again, since it is entire, this piece
can be smoothly extended through the singular curve. By theorem 1.10, it
contains a piece of type III, which lies on the other side of the singular
curve. This is also a contradiction, as argued above. We hence complete the
proof of Lemma 8.2. ∎
From Lemma 8.2, we know that an entire $p$-minimal graph is either of special
type I or of special type II. If it is of special type II, Theorem 7.8 and
Lemma 7.5 ensure that $\Sigma$ is one of the graphs in (7.5). If it contains a
piece of special type I, then this piece must be symmetric by Theorem 7.4.
Therefore, by Theorem 7.3 and Lemma 7.1, the surface $\Sigma$ must be one of
the graphs in (7.1). We therefore complete the proof of the Bernstein-type
theorem. We also remark that the Bernstein-type theorem still holds for
$C^{3}$ surfaces in $H_{1}$.
###### Remark 8.3.
We point out that in [3], the Bernstein-type theorem had already been proved
for $C^{2}$-graphs.
## 9\. An approach to construct $p$-minimal surfaces
In this section, we provide an approach to construct $p$-minimal surfaces. It
turns out that, in some sense, generic $p$-minimal surfaces can be constructed
by this approach; in particular, it yields the $p$-minimal surfaces of special
type I as well as those of general type. This approach is to perturb the
surface $u=0$ in some way. Recall that we
choose the parametrization of $u=0$ by
$X:(r,\theta)\rightarrow(r\cos{\theta},r\sin{\theta},0),\ \ r>0,$
where each half-ray $l_{\theta}:r\rightarrow(r\cos{\theta},r\sin{\theta},0)$
with a fixed angle $\theta$ is a Legendrian straight line. Therefore, the
image of the action of each Heisenberg rigid motion on $l_{\theta}$ is also a
Legendrian straight line. Let $\mathcal{C}$ be an arbitrary curve
$\mathcal{C}:\theta\rightarrow(x(\theta),y(\theta),z(\theta)),\
\theta\in\operatorname{\mathbb{R}}$. Then for each fixed $\theta$ and $r>0$,
the curve defined by
$L_{\mathcal{C}(\theta)}(l_{\theta}):r\rightarrow(x(\theta)+r\cos{\theta},y(\theta)+r\sin{\theta},z(\theta)+ry(\theta)\cos{\theta}-rx(\theta)\sin{\theta})$
is a Legendrian straight line. Here $L_{\mathcal{C}(\theta)}$ is the left
translation by $\mathcal{C}(\theta)$. Therefore, the union of these lines
constitutes a $p$-minimal surface with a parametrization $Y$ given by
(9.1)
$Y(r,\theta)=(x(\theta)+r\cos{\theta},y(\theta)+r\sin{\theta},z(\theta)+ry(\theta)\cos{\theta}-rx(\theta)\sin{\theta}).$
This surface depends on the curve
$\mathcal{C}(\theta)=(x(\theta),y(\theta),z(\theta))$. We have the following
proposition about the surface $Y$.
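As a quick sanity check (not from the paper), one can verify numerically that each ruling line $r\mapsto Y(r,\theta)$ in (9.1) is Legendrian, i.e., annihilated by the contact form $dz+x\,dy-y\,dx$ (the form annihilated by the frame $\mathring{e}_1,\mathring{e}_2$ appearing in (8.11)). The generating curve `x, y, z` below is a hypothetical sample.

```python
import math

# Hypothetical generating curve C(theta) = (x, y, z):
x = lambda t: math.cos(t)
y = lambda t: t*t
z = lambda t: math.sin(3*t)

def contact_on_ruling(r, t):
    # point Y(r, theta) from (9.1): only the first two coordinates are needed
    Y1 = x(t) + r*math.cos(t)
    Y2 = y(t) + r*math.sin(t)
    # ruling direction Y_r = (cos t, sin t, y(t) cos t - x(t) sin t)
    v1, v2 = math.cos(t), math.sin(t)
    v3 = y(t)*math.cos(t) - x(t)*math.sin(t)
    # evaluate the contact form dz + x dy - y dx on Y_r at the point Y(r, t)
    return v3 + Y1*v2 - Y2*v1

for r in (0.5, 2.0):
    for t in (0.0, 1.1, 4.0):
        assert abs(contact_on_ruling(r, t)) < 1e-12
print("each ruling line of Y is Legendrian")
```

The cancellation is exact: the $r$-dependent terms $r\cos\theta\sin\theta$ and $r\sin\theta\cos\theta$ cancel, as do the $x,y$ terms, for any choice of $\mathcal{C}$.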
###### Proposition 9.1.
The coordinates $(r,\theta)$ are compatible coordinates for $Y$. In terms of
this coordinate system, the $\alpha$-invariant and the induced metric read
(9.2)
$\begin{split}a&=\frac{-(x^{\prime}(\theta)\cos{\theta}+y^{\prime}(\theta)\sin{\theta})\alpha}{[r+(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})]\sqrt{1+\alpha^{2}}}\\\
&\\\
b&=\frac{\alpha}{[r+(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})]\sqrt{1+\alpha^{2}}},\end{split}$
and
(9.3)
$\alpha=\frac{r+(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})}{[r+(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})]^{2}+\Theta(\mathcal{C}^{\prime}(\theta))-(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2}},$
where
$\Theta(\mathcal{C}^{\prime}(\theta))=z^{\prime}(\theta)+x(\theta)y^{\prime}(\theta)-y(\theta)x^{\prime}(\theta)$.
###### Proof.
We make a straightforward computation for the invariants $\alpha,a$ and $b$.
First we have
(9.4)
$\begin{split}Y_{r}&=(\cos{\theta},\sin{\theta},y(\theta)\cos{\theta}-x(\theta)\sin{\theta})\\\
&=\cos{\theta}\ \mathring{e}_{1}(Y(r,\theta))+\sin{\theta}\
\mathring{e}_{2}(Y(r,\theta)).\end{split}$
From the construction of $Y$, we have $e_{1}=Y_{r}$. Thus
(9.5) $e_{2}=Je_{1}=-\sin{\theta}\ \mathring{e}_{1}(Y(r,\theta))+\cos{\theta}\
\mathring{e}_{2}(Y(r,\theta)),$
whereas we have
$Y_{\theta}=(x^{\prime}(\theta)-r\sin{\theta},y^{\prime}(\theta)+r\cos{\theta},z^{\prime}(\theta)+r(y^{\prime}(\theta)\cos{\theta}-y(\theta)\sin{\theta}-x^{\prime}(\theta)\sin{\theta}-x(\theta)\cos{\theta})).$
If we let
(9.6) $Y_{\theta}=A\ \mathring{e}_{1}(Y(r,\theta))+B\
\mathring{e}_{2}(Y(r,\theta))+C\ \frac{\partial}{\partial z},$
for some functions $A,B$ and $C$, then straightforward computations show that
(9.7) $\begin{split}A&=x^{\prime}(\theta)-r\sin{\theta},\ \
B=y^{\prime}(\theta)+r\cos{\theta},\\\
C&=z^{\prime}(\theta)-x^{\prime}(\theta)y(\theta)+y^{\prime}(\theta)x(\theta)+2r(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})+r^{2}.\end{split}$
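The coefficient $C$ in (9.7) can likewise be checked numerically (a sanity check only, with a hypothetical sample curve): the $\frac{\partial}{\partial z}$-component of $A\,\mathring{e}_1+B\,\mathring{e}_2+C\,\frac{\partial}{\partial z}$ at the point $Y(r,\theta)$ is $A\,Y_2-B\,Y_1+C$, so $C$ is recovered from the third component of $Y_\theta$.

```python
import math

# Hypothetical curve C(theta) and its derivative:
x,  y,  z  = (lambda t: math.cos(t)), (lambda t: 0.5*t), (lambda t: math.sin(2*t))
dx, dy, dz = (lambda t: -math.sin(t)), (lambda t: 0.5),  (lambda t: 2*math.cos(2*t))

def check(r, t):
    # components of Y_theta, differentiated from (9.1)
    A = dx(t) - r*math.sin(t)                     # coefficient of e1 in (9.7)
    B = dy(t) + r*math.cos(t)                     # coefficient of e2 in (9.7)
    Yth3 = dz(t) + r*(dy(t)*math.cos(t) - y(t)*math.sin(t)
                      - dx(t)*math.sin(t) - x(t)*math.cos(t))
    # z-component of A e1 + B e2 + C d/dz at Y(r,t) is A*Y2 - B*Y1 + C
    Y1 = x(t) + r*math.cos(t); Y2 = y(t) + r*math.sin(t)
    C = Yth3 - A*Y2 + B*Y1
    D = dy(t)*math.cos(t) - dx(t)*math.sin(t)
    C_expected = dz(t) - dx(t)*y(t) + dy(t)*x(t) + 2*r*D + r*r   # (9.7)
    return abs(C - C_expected)

for r in (0.3, 1.7):
    for t in (0.2, 2.8):
        assert check(r, t) < 1e-12
print("coefficient C in (9.7) verified at sample points")
```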
We recall that the three invariants $\alpha,a$ and $b$ are related by
(9.8) $\frac{\alpha e_{2}+T}{\sqrt{1+\alpha^{2}}}=aY_{r}+bY_{\theta}.$
If we substitute (9.4), (9.5) and (9.6) into (9.8), and compare the
corresponding coefficients, we then obtain (9.2) and (9.3). ∎
###### Remark 9.2.
Let $D=y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta}$. By
(9.4),(9.6) and (9.7), we have
$\begin{split}0=Y_{r}\wedge Y_{\theta}&\Leftrightarrow
B\cos{\theta}-A\sin{\theta}=0,\ C\cos{\theta}=0,\ C\sin{\theta}=0\\\
&\Leftrightarrow r+D=0,\ C=0\\\ &\Leftrightarrow r+D=0,\
\Theta(\mathcal{C}^{\prime}(\theta))+2rD+r^{2}=0,\ \ \textrm{by}\
\eqref{7031},\\\ &\Leftrightarrow r+D=0,\
r=-D\pm\sqrt{D^{2}-\Theta(\mathcal{C}^{\prime}(\theta))}\\\ &\Leftrightarrow
r+D=0,\ \Theta(\mathcal{C}^{\prime}(\theta))-D^{2}=0.\end{split}$
We conclude that $Y$ is an immersion if and only if either
$\Theta(\mathcal{C}^{\prime}(\theta))-D^{2}\neq 0\ \textrm{or}\ r+D\neq 0$ for
all $\theta$, where
$\Theta(\mathcal{C}^{\prime}(\theta))=z^{\prime}(\theta)-x^{\prime}(\theta)y(\theta)+y^{\prime}(\theta)x(\theta)$.
Formula (9.3) suggests the following: whether $Y$ defines a $p$-minimal surface
of special type depends on whether
$\Theta(\mathcal{C}^{\prime}(\theta))-(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2}$
vanishes. Theorem 1.11 and Theorem 1.12 then follow immediately.
### 9.1. The proof of Theorem 1.11
From (9.3), the function $\alpha$ reads
(9.9)
$\alpha=\frac{1}{r+(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})}.$
Therefore, the surface $Y$ defines a $p$-minimal surface of special type I.
For the $\zeta_{1}$-invariant, we proceed to normalize the three invariants
$\alpha,a$ and $b$ by the process presented in Section 6. First we choose
another compatible coordinate system
$(\tilde{r}=r+\Gamma(\theta),\tilde{\theta}=\Psi(\theta))$, for some
$\Gamma(\theta),\Psi(\theta)$, such that $\tilde{a}=0$. From the
transformation law of the induced metric (2.20), this can be chosen so that
(9.10)
$\Gamma^{\prime}(\theta)=\frac{-a}{b}=x^{\prime}(\theta)\cos{\theta}+y^{\prime}(\theta)\sin{\theta},$
or equivalently,
(9.11)
$\Gamma(\theta)=\int\big{[}x^{\prime}(\theta)\cos{\theta}+y^{\prime}(\theta)\sin{\theta}\big{]}d\theta.$
If we further choose $\Psi$ such that $\tilde{\theta}=\Psi(\theta)=\theta$
then, in terms of the compatible coordinates $(\tilde{r},\tilde{\theta})$, the
three invariants read
(9.12) $\begin{split}\tilde{a}&=0\\\
\tilde{b}&=\frac{\tilde{\alpha}^{2}}{\sqrt{1+\tilde{\alpha}^{2}}}\\\
\tilde{\alpha}&=\frac{1}{\tilde{r}+y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta}-\Gamma(\theta)}.\end{split}$
From the formula for $\tilde{b}$, we see that
$(\tilde{r},\tilde{\theta}=\theta)$ is a normal coordinate system. We
therefore acquire the $\zeta_{1}$-invariant of $Y$ as
$\zeta_{1}(\theta)=y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta}-\Gamma(\theta).$
This completes the proof.
### 9.2. The proof of Theorem 1.12
Applying the same normalization process as in the proof of Theorem
1.11, we normalize the three invariants, in terms of the normal coordinates
$(\tilde{r}=r+\Gamma(\theta),\tilde{\theta}=\theta)$ with $\Gamma(\theta)$
specified as (9.11), to be
(9.13) $\begin{split}\tilde{a}&=0,\\\
\tilde{b}&=\frac{\tilde{\alpha}}{[r+(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})]\sqrt{1+\tilde{\alpha}^{2}}},\\\
\tilde{\alpha}&=\frac{\tilde{r}-\Gamma(\theta)+(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})}{[\tilde{r}-\Gamma(\theta)+(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})]^{2}+\Theta(\mathcal{C}^{\prime}(\theta))-(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2}}.\end{split}$
By the formula for $\tilde{b}$, we see that
$(\tilde{r},\tilde{\theta}=\theta)$ are normal coordinates. Therefore, we
obtain the $\zeta_{1}$\- and $\zeta_{2}$-invariants of $Y$ as
$\begin{split}\zeta_{1}(\theta)&=-\Gamma(\theta)+(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta}),\\\
\zeta_{2}(\theta)&=z^{\prime}(\theta)+x(\theta)y^{\prime}(\theta)-y(\theta)x^{\prime}(\theta)-(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2}.\end{split}$
This completes the proof.
Comparing Theorem 1.11 and Theorem 1.12, it is convenient to regard surfaces
of special type I as surfaces of general type with $\zeta_{2}$-invariant
vanishing. Now given two arbitrary functions $\zeta_{1}$ and $\zeta_{2}$, we
solve equation system (1.13) for a smooth curve
$\mathcal{C}(\theta)=(x(\theta),y(\theta),z(\theta))$. Since system (1.13) is
equivalent to the following system
(9.14)
$\left\\{\begin{split}\zeta^{\prime}_{1}(\theta)&=y^{\prime\prime}(\theta)\cos{\theta}-x^{\prime\prime}(\theta)\sin{\theta}-2\big{(}x^{\prime}(\theta)\cos{\theta}+y^{\prime}(\theta)\sin{\theta}\big{)},\\\
\zeta_{2}(\theta)&=z^{\prime}(\theta)+x(\theta)y^{\prime}(\theta)-y(\theta)x^{\prime}(\theta)-(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2},\end{split}\right.$
which is underdetermined. Therefore, the solutions always exist. For example,
we can solve the first equation of (9.14) for $(x(\theta),y(\theta))$ and then
solve for $z(\theta)$ from the second one. It turns out that we can find a
smooth curve $\mathcal{C}$ such that the corresponding $p$-minimal surface $Y$
has the two given functions $\zeta_{1}$ and $\zeta_{2}$ as its $\zeta_{1}$-and
$\zeta_{2}$-invariants. If $\zeta_{2}=0$, then $Y$ is of special type I. We
thus conclude, together with Theorem 7.8, which states parametrizations for
$p$-minimal surfaces of special type II, that we have generically provided a
parametrization for any given $p$-minimal surface of each type. In particular,
we give a parametrization presentation for the eight classes of maximal
$p$-minimal surfaces constructed in Subsection 6.3.
Finally, we would like to point out that the $p$-minimal surfaces
constructed in Theorem 1.11 and Theorem 1.12 are all at least immersed
surfaces (in some cases, they are embedded). This is because $\tilde{b}\neq
0$ at all points. In particular, formula (9.13) says that if
$\tilde{\alpha}\rightarrow 0$ then
$\tilde{b}\rightarrow\frac{1}{|\Theta(\mathcal{C}^{\prime}(\theta))-(y^{\prime}(\theta)\cos{\theta}-x^{\prime}(\theta)\sin{\theta})^{2}|},$
which is not zero.
###### Example 9.3.
If we take $\mathcal{C}$ to be the curve $\mathcal{C}(\theta)=(0,0,z(\theta))$
with $z^{\prime}(\theta)\neq 0$, then
$Y(r,\theta)=(r\cos{\theta},r\sin{\theta},z(\theta)).$
Taking the new coordinates $(s,t)=(r,z(\theta))$, we recover the surface of
general type in Subsection 7.3 (see Figure 7.4 for the case
$z(\theta)=\theta$).
###### Example 9.4.
If we take $\mathcal{C}$ to be the curve
$\mathcal{C}(\theta)=(-\sin{\theta},\cos{\theta},\theta)$, then
$Y(r,\theta)=(-\sin{\theta}+r\cos{\theta},\cos{\theta}+r\sin{\theta},r).$
The surface of type I in Subsection 7.4 (see Figure 7.4) can be recovered by
taking a rotation by $\frac{\pi}{2}$ about the $z$-axis.
## References
* [1] V. Barone Adesi, F. Serra Cassano, and D. Vittone, The Bernstein problem for intrinsic graphs in Heisenberg groups and calibrations, Preprint.
* [2] T. A. Burton, The Nonlinear Wave Equation as a Liénard Equation, Funkcialaj Ekvacioj, 34 (1991) 529-545.
* [3] Cheng, J.-H.; Hwang, J.-F.; Malchiodi, A., and Yang, P., Minimal surfaces in Pseudohermitian geometry, Annali della Scuola Normale Superiore di Pisa Classe di Scienze V , 4 (1), 129-177, 2005.
* [4] Cheng, J.-H.; Hwang, J.-F.; Malchiodi, A., and Yang, P., A Codazzi-like equation and the singular set for $C^{1}$ smooth surfaces in the Heisenberg group, J. reine angew. Math. 671 (2012), 131-198.
* [5] Chiu, H.-L. and Lai, S.-H., The fundamental theorem for hypersurfaces in Heisenberg groups, Calc. Var. Partial Differential Equations, 54 (2015), no. 1, 1091-1118.
* [6] D. Danielli, N. Garofalo, D. M. Nhieu and S. D. Pauls, Instability of graphical strips and a positive answer to the Bernstein problem in the Heisenberg group $H^{1}$, J. Differential Geometry, 81, pp 251–295, 2009.
* [7] D. Danielli, N. Garofalo, D. M. Nhieu and S. D. Pauls, The Bernstein problem for embedded surfaces in the Heisenberg group $H_{1}$, Indiana University Mathematics Journal, pp 563-594, 2010.
* [8] Zaitsev, V. F. and Polyanin, A. D., Discrete-Group Methods for Integrating Equations of Nonlinear Mechanics, CRC Press, Boca Raton, 1994.
* [9] Polyanin, A.D. and Zaitsev,V.F., Handbook of Exact Solutions for Ordinary Differential Equations, 2nd Edition, Chapman and Hall/CRC, Boca Raton, 2003.
* [10] Liénard, A., Etude des oscillations entretenues, Revue Générale de l’Electricité 23 (1928) 901-912 & 946-954.
* [11] A. Chiellini, Sull’integrazione dell’equazione differenziale $y^{\prime}+Py^{2}+Qy^{3}=0$, Bollettino dell’Unione Matematica Italiana, 10, 301-307 (1931).
* [12] M. J. Ablowitz, D. J. Kaup, A. C. Newell and H. Segur, Method for solving the Sine-Gordon equation, Phy. Rev. Lett. 30 (1973),1262–1264
* [13] M. Remoissenet, Waves Called Solitons: Concepts and Experiments, Springer, 1994.
* [14] A. Bäcklund, Zur Theorie der Flächentransformationen, Math. Ann. XIX 387–422 (1882).
* [15] A. Bäcklund, Om ytor med konstant negativ krökning, Lunds Univ. Årsskr. XIX (1883).
* [16] A. Bäcklund, Einiges über Kugelkomplexe, Annali di Matematica Ser III XX 65–107 (1913).
* [17] O. Calin, _Geodesics on a certain step 2 sub-Riemannian manifold,_ Number 22. Annals of Global Analysis and Geometry, pp. 317-339, 2002.
* [18] O. Calin, D. C. Chang, and P. C. Greiner, _On a Step 2(k + 1) sub-Riemannian manifold,_ volume 14. Journal of Geometric Analysis, pp. 1-18, 2004.
* [19] U. Hamenstädt, _Some regularity theorems for Carnot-Carathéodory metrics,_ volume 32. J. Differential Geom., pp. 819-850, 1990.
* [20] P. Hartman, _Ordinary Differential equations,_ Wiley, 1984.
* [21] W. Liu and H. J. Sussmann, _Shortest paths for sub-Riemannian metrics on rank two distributions,_ volume 118. Mem. Amer. Math. Soc., 1995.
* [22] R. Beals, B. Gaveau, and P. C. Greiner, _Hamilton-Jacobi theory and the heat kernel on Heisenberg groups,_ Number 79. J. Math. Pure Appl., pp. 633-689, 2000.
* [23] R. Beals, B. Gaveau, and P.C. Greiner, _On a Geometric Formula for the Fundamental Solution of Subelliptic Laplacians,_ Number 181. Math. Nachr., pp. 81-163, 1996.
* [24] O. Calin, D. C. Chang, and P. C. Greiner, _Geometric mechanics on the Heisenberg group,_ Bulletin of the Institute of Mathematics, Academia Sinica, 2005.
* [25] O. Calin and V. Mangione, _Variational calculus on sub-Riemannian manifolds,_ volume 8, Balcan Journal of Geometry and Applications, 2003.
* [26] B. Gaveau, _Principe de moindre action, propagation de la chaleur et estimees sous-elliptiques sur certains groupes nilpotents,_ volume 139. Acta Math, 1977, pp. 95-153.
* [27] A. Koranyi, _Geometric properties of Heisenberg groups,_ Advances in Math., pp. 28-38, 1985.
* [28] A. Koranyi, _Geometric aspects of analysis on the Heisenberg group,_ Topics in Modern Harmonic Analysis, pp. 209-258, May-June 1982.
* [29] A. Koranyi and H. M. Riemann, _Quasiconformal mappings in the Heisenberg group,_ Invent. Math., pp. 309-338, 1985.
* [30] Gerald B. Folland, _Fourier Analysis and its Applications,_ Brooks / Cole, Pacific Grove, 1992.
* [31] Karl-Heinz Gröchenig, _Foundations of Time-Frequency Analysis,_ Birkhäuser, Boston, 2000.
* [32] Ernst Binz and Sonja Pods, _The Geometry of Heisenberg Groups With Applications in Signal Theory, Optics, Quantization, and Field Quantization,_ AMS 2008.
* [33] Malchiodi, A., Minimal surfaces in three dimensional pseudo-Hermitian geometry, Lecture Notes of Seminario Interdisciplinare di Matematica, Vol. 6, pp. 195-207, 2007.
* [34] S. D. Pauls, Minimal surfaces in the Heisenberg group, Geometriae Dedicata,104, pp. 201-231, 2004.
* [35] M. Ritoré and C. Rosales, Rotationally invariant Hypersurfaces with constant mean curvature in the Heisenberg group $H^{n}$, The Journal of Geometric Analysis, v.16, n.4 pp. 703-720 (2006).
# Joint Transmission Scheme and Coded Content Placement in Cluster-centric
UAV-aided Cellular Networks
Zohreh HajiAkhondi-Meybodi, Arash Mohammadi,
Jamshid Abouei, Ming Hou, and Konstantinos N. Plataniotis Z. HajiAkhondi-
Meybodi is with Electrical and Computer Engineering (ECE), Concordia
University, Montreal, Canada. (E-mail: z_hajiak@encs.concordia.ca). A.
Mohammadi (corresponding author) is with Concordia Institute of Information
Systems Engineering (CIISE), Concordia University, Montreal, Canada. (P: +1
(514) 848-2424 ext. 2712 F: +1 (514) 848-3171, E-mail:
arash.mohammadi@concordia.ca). J. Abouei was with the Department of Electrical
and Computer Engineering, University of Toronto, Toronto, Canada. He is now
with the Department of Electrical Engineering, Yazd University, Yazd
89195-741, Iran (E-mail: abouei@yazd.ac.ir). M. Hou is with Defence Research
and Development Canada (DRDC), Ottawa, Toronto, ON, M2K 3C9, Canada. (E-mail:
ming.hou@drdc-rddc.gc.ca). K. N. Plataniotis is with Electrical and Computer
Engineering (ECE), University of Toronto, Toronto, Canada. (E-mail:
kostas@ece.utoronto.ca).This Project was partially supported by Department of
National Defence’s Innovation for Defence Excellence & Security (IDEaS)
program, Canada.
###### Abstract
Recently, as a consequence of the COVID-19 pandemic, dependence on
telecommunication for remote learning/working and telemedicine has
significantly increased. In this context, preserving high Quality of Service
(QoS) and maintaining low latency communication are of paramount importance.
In cellular networks, incorporation of Unmanned Aerial Vehicles (UAVs) can
result in enhanced connectivity for outdoor users due to the high probability
of establishing Line of Sight (LoS) links. The UAV’s limited battery life and
its signal attenuation in indoor areas, however, make it inefficient to manage
users’ requests in indoor environments. We propose the Cluster-centric and
Coded UAV-aided Femtocaching (CCUF) framework, which increases the network’s
coverage in both indoor and outdoor environments by considering a two-phase
clustering framework for FAPs’ formation and UAVs’ deployment. Our first
objective is to increase the content diversity.
coded content placement in a cluster-centric cellular network, which is
integrated with the Coordinated Multi-Point (CoMP) approach to mitigate the
inter-cell interference in edge areas. Then, we compute, experimentally, the
number of coded contents to be stored in each caching node to increase the
cache-hit-ratio, Signal-to-Interference-plus-Noise Ratio (SINR), and cache
diversity and decrease the users’ access delay and cache redundancy for
different content popularity profiles. Capitalizing on clustering, our second
objective is to assign the best caching node to indoor/outdoor users for
managing their requests. In this regard, we define the movement speed of
ground users as the decision metric of the transmission scheme for serving
outdoor users’ requests to avoid frequent handovers between FAPs and increase
the battery life of UAVs. Simulation results illustrate that the proposed CCUF
implementation increases the cache-hit-ratio, SINR, and cache diversity and
decreases the users’ access delay, cache redundancy, and UAVs’ energy
consumption.
###### Index Terms:
Cluster-centric, Coded Femtocaching, Coordinated Multi-Point (CoMP), Unmanned
Aerial Vehicles (UAVs).
## I Introduction
As a consequence of the COVID-19 pandemic, dependence on telemedicine and
remote learning/working has significantly increased due to the exponential
rise in the demand for in-home care, remote working, schooling, and remote
reporting [1]. Caching has emerged as a promising solution to maintain low
latency communication and mitigate the network’s traffic over the backhaul.
This, in turn, improves the Quality of Service (QoS) by storing the most
popular multimedia content close to the end-users [2, 3]. Due to the limited
storage of caching nodes, however, it is critical to increase their content
diversity, thereby storing more content in the caching nodes. Recently,
Unmanned Aerial Vehicle (UAV)-based caching has gained significant attention
from both industry and academia [5, 4, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18], due to its high-quality Line of Sight (LoS) links. Although the
enhanced connectivity that comes by using UAVs will improve the QoS in outdoor
areas, indoor penetration loss and deep shadow fading caused by building walls
significantly attenuate the UAV’s signals in indoor environments, degrading the
network’s overall QoS [19]. Capitalizing on the above discussion, the paper
focuses on the issues of content diversity and transmission scheme of
indoor/outdoor users. In this context and in line with advancements of 5G
networks, the paper targets coupling UAVs as aerial caching nodes with Femto
Access Points (FAPs) [20]. In what follows, to understand the state-of-the-art
in this area and seek potential solutions, we first review the relevant
literature.
Related Work: The main objective of UAV-aided cellular networks is to bring
multimedia data closer to ground users and simultaneously improve users’ QoS
and the network’s Quality of Experience (QoE). If the requested content can be
found in the storage of one of the available caching nodes, this request would
be served directly and a cache-hit occurs; otherwise, it is known as a cache-
miss. Due to the large size of multimedia contents, however, it is not
feasible to store all contents in the storage of caching nodes. To increase
content diversity, coded caching strategies [21, 22] have received remarkable
attention lately. In coded caching strategies, only specific segments of the
most popular multimedia contents are stored in the caching nodes. Early works
on coded femtocaching such as Reference [23], however, considered homogeneous
networks, where the same segments of the most popular multimedia contents are
stored in different caching nodes. To boost content diversity, the focus of
recent research has shifted to cluster-centric networks [24], a heterogeneous
infrastructure in which distinct segments are stored in neighboring caching
nodes.
Cluster-centric cellular networks provide several benefits, such as increased
content/cache diversity, which in turn leads to an increase in the number of
requests managed by the caching nodes. However, this comes with the cost of
experiencing inter-cell interference, especially for cell-edge users. To
mitigate the inter-cell interference and improve the throughput of the cell-
edge users, content caching in Coordinated Multi-Point (CoMP)-integrated
networks [25, 26, 24, 27, 28, 29] has been studied in recent years. For
instance, Chen et al. [24] developed two transmission schemes, namely Joint
Transmission (JT) and Parallel Transmission (PT), which are selected based on
the popularity of the requested content. Alternatively, Lin et al. [27]
proposed a cluster-centric cellular network applying the CoMP technique based
on the users’ link quality, where cell-core and cell-edge users are served
through Single Transmission (ST) and JT, respectively. Despite all this
research on cluster-centric cellular networks, however, there is no framework
that determines how different segments should be cached to increase data
availability and content diversity in a UAV-aided cluster-centric cellular
network. This paper addresses this gap.
In addition to content diversity, one of the main challenges in UAV-aided
cellular networks is to optimally assign caching nodes (UAVs or FAPs) to
ground users to efficiently serve their requests [30, 31]. There are several
QoS and QoE metrics that can be considered as the decision criteria for Access
Point (AP) selection, including users’ latency; traffic load and energy
consumption of APs; users’ link quality; handover rate, and; Signal-to-
Interference-plus-Noise Ratio (SINR). For instance, Athukoralage et al. [30]
considered an AP selection framework, where ground users are supported by UAV
or WiFi APs. In this work, the users’ link quality is utilized to balance the
load between UAVs and WiFi APs. Zhu et al. [31] proposed a game theory-based
AP selection scheme, where the probability of packet collision is used to
select the optimal AP among all possible UAVs and Base Stations (BSs). In our
previous work [32], we introduced the Convolutional Neural Network (CNN) and
Q-Network-based Connection Scheduling (CQN-CS) framework with the application
to the UAV-aided cellular networks. In that work, without considering the
content diversity, ground users were autonomously trained to determine the
optimal caching node, i.e., UAVs or FAPs. Although we minimized the users’
access delay by maintaining a trade-off between the energy consumption of UAVs
and the occurrence of handovers between FAPs, there are still key challenges
ahead. Our previous work is only efficient for serving ground users in outdoor
environments, as the UAV's signal attenuation in indoor environments was not
factored in. The wide transmission range of UAVs and the high probability of
establishing LoS links provide several advantages, including the ability to
manage the majority of ground users’ requests, which leads to improved
coverage in outdoor environments. Due to the limited battery life of UAVs,
however, requests that are handled by UAVs should be controlled. Another
challenge is the handover phenomenon, which can be frequently triggered by
FAPs if the ground user moves rapidly and leaves the current position. To
date, limited research has been performed on UAV-aided cellular networks to
provide high QoS for ground users in both indoor and outdoor environments. The
paper also addresses this gap.
Contribution: In this paper, we consider an integrated UAV-aided and cluster-
centric cellular network to serve ground users positioned in both indoor and
outdoor environments. Our first objective is to increase the content diversity
that can be accessed via caching nodes. The second goal is to introduce
different transmission schemes for indoor/outdoor users to improve the
achievable QoS in terms of the users’ access delay and decrease the energy
consumption of UAVs. In the proposed Cluster-centric and Coded UAV-aided
Femtocaching (CCUF) framework, the network's coverage in both indoor and
outdoor environments is increased by a two-phase clustering approach
for FAP formation and UAV deployment. In summary, the paper makes the
following key contributions:
* •
Due to the UAV’s signal attenuation in indoor environments, we consider two
different indoor and outdoor caching service scenarios for the proposed CCUF
framework. More precisely, the indoor area is covered by FAPs, equipped with
extra storage and supported by CoMP technology. The outdoor area, however, is
supported by coupled UAVs and FAPs depending on the movement speed of ground
users.
* •
To give ground users access to a large number of contents as they move, a
two-phase clustering approach is adopted: $(i)$ The whole network (both
indoor and outdoor areas) is partitioned into sub-networks called inter-
clusters, which are used for content placement in FAPs. We show that, based
on this strategy, ground users can acquire more segments during their
movements, and; $(ii)$ For UAV formation, the outdoor environment is
partitioned into intra-clusters via a $K$-means clustering algorithm, each
covered by a UAV.
* •
To the best of our knowledge, despite all the research conducted in this
field, there is no placement strategy to determine how distinct segments of
popular content should be distributed in different caching nodes. Towards this
goal, we consider a cluster-centric cellular network, where multimedia
contents are classified into three categories, including popular, mediocre,
and non-popular contents. While the popular contents are stored completely,
distinct segments of mediocre ones are determined according to the proposed
framework to be stored in the storage of neighboring FAPs. We also determine
the best number of coded/uncoded contents in each caching node to increase the
cache-hit-ratio, SINR, and cache diversity while decreasing users’ access
delay and cache redundancy for different content popularity profiles.
The effectiveness of the proposed CCUF framework is evaluated through
simulation studies in both indoor and outdoor environments in terms of cache-
hit-ratio, users’ access delay, SINR, cache diversity, cache redundancy, and
energy consumption of UAVs. Based on the simulation results, we identify the
best number of coded contents per caching node that improves the
aforementioned metrics for different content popularity profiles.
The remainder of the paper is organized as follows: In Section II, the
network’s model is described and the main assumptions required for the
implementation of the proposed framework are introduced. Section III presents
the proposed CCUF scheme. Simulation results are presented in Section IV.
Finally, Section V concludes the paper.
Figure 1: A typical structure of the proposed UAV-aided cellular network in
(a) the indoor, and (b) the outdoor environments.
## II System Model and Problem Description
We consider a UAV-aided cellular network in a residential area that supports
both indoor and outdoor environments (see Fig. 1). There exist $N_{f}$ number
of FAPs, denoted by $f_{i}$, for ($1\leq i\leq N_{f}$), each with the cache
size of $C_{f}$ and transmission range of $R_{f}$. All FAPs are independently
and randomly distributed in the environment following Poisson Point Processes
(PPPs). There are also $N_{u}$ number of UAVs, denoted by $u_{k}$, for ($1\leq
k\leq N_{u}$), with equal transmission range of $R_{u}$, and a main server
that has access to the whole content and can manage all caching nodes. There
are $N_{g}$ number of ground users, denoted by $GU_{j}$, for ($1\leq j\leq
N_{g}$), that move through the network with different velocities. Term
$\upsilon_{j}(t)$ denotes the speed of the ground user $GU_{j}$ at time slot
$t$. When $GU_{j}$ requests content $c_{l}$ from a library of
$\mathcal{C}=\\{c_{1},\ldots,c_{N_{c}}\\}$, in which $N_{c}=|\mathcal{C}|$ is
the cardinality of multimedia data in the network, this request should be
handled by one of the nearest FAPs or UAVs having some segments of $c_{l}$. In
this work, FAP $f_{i}$, for ($1\leq i\leq N_{f}$), and UAV $u_{k}$, for
($1\leq k\leq N_{u}$), operate in an open access mode, i.e., they can serve
any ground user $GU_{j}$, for ($1\leq j\leq N_{g}$), located in their
transmission range. To completely download a requested content, a finite time
$T$ is required. In the proposed CCUF framework, each content $c_{l}$ is
fragmented into $N_{s}$ encoded segments, denoted by $c_{ls}$, for ($1\leq
s\leq N_{s}$). Without loss of generality, it is assumed that the time $T$ is
discretized into $N_{s}$ time slots with time interval $\delta_{t}$, i.e.,
$T=N_{s}\delta_{t}$, where $\delta_{t}$ is large enough for downloading one
segment $c_{ls}$.
As can be seen from Fig. 1, we consider two different indoor and outdoor
caching service scenarios for the proposed CCUF framework to improve the
network’s coverage. As will be described in Subsection II-B, the requests of
indoor users are handled through FAPs, while outdoor users are supported by
coupled UAVs and FAPs depending on their movement speed. In this regard, we
define two clustering approaches, called inter-clusters and intra-clusters,
which are used for content placement in FAPs and UAVs’ deployment, as
discussed below:
Inter-Clusters: As shown in Fig. 1, $N_{b}<N_{f}$ number of neighboring FAPs
in both indoor and outdoor environments, form a cluster, referred to as the
inter-cluster. As stated in [24, 33], the main idea of cluster-centric
content placement is that all FAPs belonging to an inter-cluster are used as
a single storage entity (unlike conventional femtocaching schemes, where each
FAP acts as standalone storage). Therefore, our
goal is to determine how different segments of popular files should be
distributed in the cache of FAPs belonging to an inter-cluster to increase the
content diversity. We construct the inter-clusters based on the following two
rules: (i) As will be described shortly in Subsection II-A, all FAPs in the
same inter-cluster save different segments of mediocre contents, and; (ii) The
cached contents of different inter-clusters are the same. In addition, all
FAPs use the CoMP transmission approach (supporting ST and JT schemes) to
mitigate the inter-cell interference in edge areas and manage ground users’
requests.
Intra-Clusters: Since the transmission range of FAPs is significantly less
than that of a UAV (see Fig. 1(b)), the outdoor area is also divided into
several intra-clusters (each intra-cluster is covered by a UAV) based on an
unsupervised learning algorithm. In what follows, we present the content
popularity profile and transmission schemes utilized to develop the proposed
CCUF framework.
### II-A Content Popularity Profile
To account for users’ behavior patterns in multimedia services, the popularity
of video contents is determined based on the Zipf distribution, where the
probability of requesting the $l^{\text{th}}$ file, denoted by $p_{l}$, is
calculated as
$p_{l}=\dfrac{l^{-\gamma}}{\sum\limits_{r=1}^{N_{c}}{r^{-\gamma}}},$ (1)
where $\gamma$ represents the skewness of the file popularity and $l$ is the
popularity rank of file $c_{l}$. For notational convenience, we assume
that $P[n]\equiv P(n\delta_{t})$ denotes the probability of accessing a new
segment in time slot $n$, with $n\in\\{1,\ldots,N_{s}\\}$. To be practical, we
investigate the probability distribution of a real multimedia data set,
namely the YouTube trending videos statistics, which follows a Zipf
distribution: a small fraction of the contents is requested with high
probability, the majority of contents are unpopular, and some contents are
requested moderately often. Consequently, in the proposed CCUF
framework, we classify multimedia contents into three categories, i.e.,
popular, mediocre, and non-popular [24]. To improve content diversity, the
storage capacity of FAPs, denoted by $C_{f}$, is divided into two spaces,
where $\alpha$ portion of the storage is allocated to store complete popular
contents, i.e., $1\leq l\leq\lfloor\alpha C_{f}\rfloor$, where $l=1$ indicates
the most popular content. Additionally, $(1-\alpha)$ portion of the cache is
assigned to store different parts of the mediocre contents, where
$\lfloor\alpha C_{f}\rfloor+1\leq l\leq N_{s}(C_{f}-\lfloor\alpha
C_{f}\rfloor)$. The best value of $\alpha$ is obtained experimentally. The
proposed model for identifying different segments to be cached in neighboring
FAPs will be discussed later on in Section III.
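The Zipf profile of Eq. (1) and the $\alpha$-based storage split described above can be sketched as follows (a minimal Python sketch; the parameter values in the example are hypothetical):

```python
import math

def zipf_popularity(n_contents: int, gamma: float) -> list[float]:
    """Eq. (1): probability of requesting the l-th most popular content."""
    norm = sum(r ** -gamma for r in range(1, n_contents + 1))
    return [(l ** -gamma) / norm for l in range(1, n_contents + 1)]

def split_storage(c_f: int, alpha: float, n_s: int) -> tuple[int, int]:
    """Split a FAP cache of C_f content slots: floor(alpha*C_f) slots hold
    complete popular contents; the remaining slots hold single segments of
    mediocre contents, so N_s*(C_f - floor(alpha*C_f)) mediocre titles can
    be represented across the cache."""
    n_popular = math.floor(alpha * c_f)
    n_mediocre = n_s * (c_f - n_popular)
    return n_popular, n_mediocre

p = zipf_popularity(1000, gamma=0.8)
assert abs(sum(p) - 1.0) < 1e-9 and p[0] > p[1]  # heavy-headed distribution
n_pop, n_med = split_storage(c_f=20, alpha=0.4, n_s=5)
assert (n_pop, n_med) == (8, 60)
```

Higher $\gamma$ concentrates requests on the few most popular contents, which is exactly the regime where caching complete popular files pays off.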
### II-B Transmission Scheme
In this subsection, we describe both the connection scheduling (serving by
FAPs or UAVs) and the transmission scheme depending on the presence of the
ground user in indoor or outdoor environments.
#### II-B1 Indoor Environment
The transmitted signal by UAVs, propagating in residential areas, becomes
weaker due to the penetration loss and shadow fading effects. It is,
therefore, assumed that ground users positioned in indoor areas are only
supported by FAPs. In the CoMP-integrated and cluster-centric cellular
network, as can be seen from Fig. 1, each inter-cluster has two regions,
named cell-edge and cell-core, which are determined based on the long-term
averaged SINR values [27] reflecting the quality of a wireless link. In the
case that the ground user $GU_{j}$ is positioned in the
vicinity of the FAP $f_{i}$, the SINR from $f_{i}$ to $GU_{j}$, denoted by
$\mathcal{S}_{i,j}$, is obtained as follows
$\mathcal{S}_{i,j}(t)=\dfrac{P_{i}|\mathcal{\tilde{H}}_{i,j}(t)|^{2}}{I_{\bm{f}_{-i}}(t)+N_{0}},$
(2)
where $P_{i}$ denotes the transmitted signal power of FAP $f_{i}$, and
$I_{\bm{f}_{-i}}(t)$ represents the interference power from other FAP-ground
users, except for the corresponding $f_{i}$ link. Term $N_{0}$ represents the
noise power related to the additive white Gaussian random variable. Moreover,
the path loss and fading channel effects between FAP $f_{i}$ and ground user
$GU_{j}$ at time slot $t$ is denoted by
$\mathcal{\tilde{H}}_{i,j}(t)=\dfrac{h_{i,j}(t)}{\sqrt{\mathcal{L}_{i,j}(t)}}$.
In this case, $h_{i,j}(t)$ denotes a complex zero-mean Gaussian random
variable with unit standard deviation and $\mathcal{L}_{i,j}(t)$ represents
the path loss between FAP $f_{i}$ and ground user $GU_{j}$ at time slot $t$,
obtained as follows
$\mathcal{L}_{i,j}(t)=\mathcal{L}_{0}+10\eta\log\big{(}d_{i,j}(t)\big{)}+\chi_{\sigma},$
(3)
where $\eta$ is the path loss exponent. Term $\chi_{\sigma}$ indicates the
shadowing effect, which is a zero-mean Gaussian-distributed random variable
with standard deviation $\sigma$. Additionally, $d_{i,j}(t)$ represents the
Euclidean distance between FAP $f_{i}$ and ground user $GU_{j}$ at time slot
$t$. Furthermore, $\mathcal{L}_{0}=20\log\left(\dfrac{4\pi
f_{c}d_{0}}{c}\right)$ is the path loss related to the reference distance
$d_{0}$ where $f_{c}$ and $c=3\times 10^{8}$ denote the carrier frequency and
the light speed, respectively. Accordingly, the ground user $GU_{j}$ is marked
as the cell-core user connected to FAP $f_{i}$, if
$\overline{\mathcal{S}}_{i,j}(t)>\mathcal{S}_{th}$; otherwise, $GU_{j}$ is
marked as the cell-edge user, where $\mathcal{S}_{th}$ is the SINR threshold.
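The indoor path-loss model of Eq. (3) and the SINR-threshold classification can be sketched as follows (assuming the log in Eq. (3) is base-10, the usual dB convention; all numeric values are illustrative):

```python
import math
import random

C = 3e8  # speed of light (m/s)

def reference_loss_db(f_c: float, d0: float = 1.0) -> float:
    """L0 in Eq. (3): free-space loss at the reference distance d0."""
    return 20 * math.log10(4 * math.pi * f_c * d0 / C)

def path_loss_db(d: float, f_c: float, eta: float, sigma: float,
                 rng: random.Random) -> float:
    """Eq. (3): log-distance path loss with log-normal shadowing chi_sigma."""
    return reference_loss_db(f_c) + 10 * eta * math.log10(d) + rng.gauss(0, sigma)

def classify_user(avg_sinr_db: float, sinr_th_db: float) -> str:
    """Cell-core if the long-term averaged SINR exceeds the threshold S_th."""
    return "cell-core" if avg_sinr_db > sinr_th_db else "cell-edge"

rng = random.Random(0)
pl = path_loss_db(d=30.0, f_c=2.4e9, eta=3.0, sigma=4.0, rng=rng)
assert classify_user(12.0, sinr_th_db=10.0) == "cell-core"
assert classify_user(7.5, sinr_th_db=10.0) == "cell-edge"
```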
The transmission scheme in the proposed CoMP-integrated and cluster-centric
cellular network is determined based on two metrics; (i) The popularity of the
requested content, described in Subsection II-A, and; (ii) The link quality of
the ground user in the cell, i.e., cell-core or cell-edge. The following two
different transmission schemes are utilized for the development of the
proposed CCUF framework:
* •
Single Transmission (ST): In this case, the requested file $c_{l}$, for
$(1\leq l\leq\lfloor\alpha C_{f}\rfloor)$, is a popular content, and the
ground user $GU_{j}$ is marked as a cell-core user of FAP $f_{i}$, i.e.,
$\overline{\mathcal{S}}_{i,j}(t)>\mathcal{S}_{th}$. This means that the content
is completely cached in the storage of FAP $f_{i}$ and a high-quality link
can be established between FAP $f_{i}$ and ground user $GU_{j}$. Consequently,
this request is served only by the corresponding FAP $f_{i}$. Moreover, if the
requested content belongs to the mediocre category, i.e., $\lfloor\alpha
C_{f}\rfloor+1\leq l\leq N_{s}(C_{f}-\lfloor\alpha C_{f}\rfloor)$, this
request is served according to the ST scheme regardless of the user’s link
quality since each FAP has a different segment of the mediocre content.
* •
Joint Transmission (JT): In this transmission scheme, the requested file
$c_{l}$ is a popular content, i.e., $1\leq l\leq\lfloor\alpha C_{f}\rfloor$.
Consequently, all FAPs have the same complete file. The ground user $GU_{j}$,
however, is marked as a cell-edge user of FAP $f_{i}$, i.e.,
$\overline{\mathcal{S}}_{i,j}(t)\leq\mathcal{S}_{th}$, meaning that the link
quality between FAP $f_{i}$ and the ground user $GU_{j}$ is not good enough.
To improve the reliability of content delivery, the corresponding content is
jointly transmitted by several FAPs in its inter-cluster. As can be seen from
Fig. 1, neighboring FAPs in an inter-cluster collaboratively serve cell-edge
ground users based on the JT scheme, shown in red.
#### II-B2 Outdoor Environment
As can be seen from Fig. 1, outdoor areas are covered by both UAVs and
FAPs. Therefore, outdoor users are classified based on their velocity into the
following two categories:
* •
Low Speed Users (LSUs): If the speed of ground user $GU_{j}$, denoted by
$\upsilon_{j}(t)$, is less than a predefined threshold $\upsilon_{\rm{th}}$,
this user is managed by inter-clusters (FAPs). Therefore, the transmission
scheme of LSUs is completely the same as the indoor users, described in
Subsection II-B-1.
* •
High Speed Users (HSUs): In this case, the speed of ground user $GU_{j}$ is
greater than or equal to $\upsilon_{\rm{th}}$. Therefore, the request should be
served by a UAV covering the corresponding intra-cluster.
This completes our discussion on the content popularity profile and
transmission schemes. Next, we develop the CCUF framework.
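The connection-scheduling rule described in this subsection reduces to a simple decision, sketched below (threshold and speed values are hypothetical):

```python
def select_serving_node(indoor: bool, speed: float, v_th: float) -> str:
    """Scheduling rule of Section II-B: indoor users are handled by FAPs
    only; outdoor LSUs (speed < v_th) are handled by FAPs (inter-clusters),
    while outdoor HSUs are served by the UAV covering their intra-cluster."""
    if indoor:
        return "FAP"
    return "FAP" if speed < v_th else "UAV"

assert select_serving_node(indoor=True, speed=20.0, v_th=5.0) == "FAP"
assert select_serving_node(indoor=False, speed=3.0, v_th=5.0) == "FAP"
assert select_serving_node(indoor=False, speed=5.0, v_th=5.0) == "UAV"
```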
## III The CCUF Framework
In conventional femtocaching schemes, it is a common assumption that all
caching nodes store the same most popular contents. This assumption is
acceptable in static femtocaching models, in which users are stationary or
move with a low velocity. With the focus on a dynamic femtocaching network, in
which users can move based on the random walk model, storing distinct content
in neighboring FAPs leads to increasing the number of requests served by
caching nodes [34]. Despite recent research on cluster-centric cellular
networks, however, there is no framework that determines how different
segments should be stored to increase content diversity. Toward this goal, we propose the CCUF
framework, which is an efficient content placement strategy for the network
model introduced in Section II. The proposed CCUF framework is implemented
based on the steps presented in the following subsections.
### III-A Content Caching for FAPs and UAVs
Identifying the best multimedia content to be stored in the storage of caching
nodes leads to a reduction in the users’ latency. It is commonly assumed [23,
2] that the total users’ access delay is determined according to the
availability of the required content in the nearby caching nodes. Based on
this assumption, in scenarios where the requested content can be served by
caching nodes, the cache-hit occurs and the ground user will experience no
delay; otherwise, the request is served by the main server resulting in a
cache-miss. Inspired by [35], we relax the above assumption and express the
actual users’ access delay as a function of the content popularity profile and
link’s quality. In this regard, we propose two optimization models for content
placement in both FAPs and UAVs to minimize the users’ latency, which are
similar in nature. Toward this goal, we first describe the delay that ground
users experience when served by FAPs and UAVs.
#### III-A1 UAVs’ Content Placement
Serving requests via UAVs leads to establishing air-to-ground links from UAVs
to ground users. Due to the obstacles in outdoor environments, the transmitted
signal from UAVs is attenuated. To be practical, we consider both LoS and Non-
LoS (NLoS) path losses from UAV $u_{k}$ to ground user $GU_{j}$ at time slot
$t$ as follows [4]
$\mathcal{L}_{k,j}^{(LoS)}(t)=\mathcal{L}_{0}+10\eta^{(LoS)}\log(d_{k,j}(t))+\chi_{\sigma}^{(LoS)},$ (4)
$\mathcal{L}_{k,j}^{(NLoS)}(t)=\mathcal{L}_{0}+10\eta^{(NLoS)}\log(d_{k,j}(t))+\chi_{\sigma}^{(NLoS)},$ (5)
where $\mathcal{L}_{0}=20\log\left(\dfrac{4\pi f_{c}d_{0}}{c}\right)$ denotes
the reference path loss in distance $d_{0}$, and $d_{k,j}(t)$ is the Euclidean
distance between UAV $u_{k}$ and the ground user $GU_{j}$ at time slot $t$. In
addition, $\eta^{(LoS)}$, $\eta^{(NLoS)}$, $\chi_{\sigma}^{(LoS)}$ and
$\chi_{\sigma}^{(NLoS)}$ indicate the LoS and NLoS path loss exponents and the
corresponding shadowing effects, respectively. Consequently, the average path
loss, denoted by $\mathcal{\overline{L}}_{k,j}(t)$, is given by
$\mathcal{\overline{L}}_{k,j}(t)=p^{(LoS)}_{k,j}(t)\mathcal{L}_{k,j}^{(LoS)}(t)+(1-p^{(LoS)}_{k,j}(t))\mathcal{L}_{k,j}^{(NLoS)}(t),$
(6)
where $p^{(LoS)}_{k,j}(t)$ is the probability of establishing LoS link between
UAV $u_{k}$ and ground user $GU_{j}$ at time slot $t$, obtained as [13]
$p^{(LoS)}_{k,j}(t)=\left(1+\vartheta\exp{\left(-\zeta[\phi_{k,j}(t)-\vartheta]\right)}\right)^{-1},$
(7)
where $\vartheta$ and $\zeta$ are constant parameters, depending on the rural
and urban areas. Moreover,
$\phi_{k,j}(t)=\sin^{-1}{\left(\dfrac{h_{k}}{d_{k,j}(t)}\right)}$ is the
elevation angle between UAV $u_{k}$ and the ground user $GU_{j}$, and $h_{k}$
is the UAV’s altitude. Without loss of generality, altitude $h_{k}$ is assumed
to be a fixed value over the hovering time. If the requested content cannot be
found in the storage of UAVs, additional ground-to-air connection is required
to provide UAVs with the requested content through the main server. Similarly,
the average path loss of the main server-to-UAV $u_{k}$ link is calculated as
$\mathcal{\overline{L}}_{m,k}(t)=p^{(LoS)}_{m,k}(t)\mathcal{L}_{m,k}^{(LoS)}(t)+(1-p^{(LoS)}_{m,k}(t))\mathcal{L}_{m,k}^{(NLoS)}(t),$
(8)
where $\mathcal{L}_{m,k}^{(LoS)}(t)=d_{m,k}^{-\varpi}(t)$ and
$\mathcal{L}_{m,k}^{(NLoS)}(t)=\psi\mathcal{L}_{m,k}^{(LoS)}(t)$, in which
$d_{m,k}(t)$ denotes the distance between the main server and UAV $u_{k}$.
Furthermore, $\varpi$ and $\psi$ denote the LoS and NLoS path loss exponents,
respectively [4].
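Eqs. (6)-(7) can be sketched as follows. The sigmoid parameters $\vartheta$ and $\zeta$ used in the example are commonly cited dense-urban values, not values taken from this paper, and the elevation angle is taken in degrees, the usual convention for this model:

```python
import math

def p_los(h: float, d: float, theta: float, zeta: float) -> float:
    """Eq. (7): LoS probability as a sigmoid of the elevation angle
    phi = asin(h/d) (in degrees), with environment parameters theta, zeta."""
    phi = math.degrees(math.asin(h / d))
    return 1.0 / (1.0 + theta * math.exp(-zeta * (phi - theta)))

def avg_a2g_path_loss(l_los: float, l_nlos: float, p: float) -> float:
    """Eq. (6): probability-weighted average of LoS and NLoS path losses."""
    return p * l_los + (1 - p) * l_nlos

# Hypothetical dense-urban parameters; steeper elevation angle -> more LoS.
p_high = p_los(h=100.0, d=120.0, theta=9.61, zeta=0.16)
p_low = p_los(h=100.0, d=1000.0, theta=9.61, zeta=0.16)
assert p_high > p_low
```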
As stated previously, another parameter that has a great impact on the users’
access delay is the presence of the requested content in the caching node,
depending on the content popularity profile. Therefore, the cache-hit and the
cache-miss probability through serving by UAV $u_{k}$ at time slot $t$,
denoted by $p_{u}^{(h)}(t)$ and $p_{u}^{(m)}(t)$, respectively, are expressed
as
as
$p_{u}^{(h)}(t)=\sum_{l\in C_{u}}p_{l}(t)\leq 1,$ (9)
$p_{u}^{(m)}(t)=1-p_{u}^{(h)}(t),$ (10)
where $C_{u}$ denotes the cache size of UAV $u_{k}$, which is assumed to be
the same for all UAVs. Consequently, the users’ access delay through UAVs is
expressed as
$\mathcal{D}_{u}(t)=p_{u}^{(h)}(t)\mathcal{D}_{u}^{(h)}(t)+p_{u}^{(m)}(t)\mathcal{D}_{u}^{(m)}(t),$
(11)
where $\mathcal{D}_{u}^{(h)}(t)$ and $\mathcal{D}_{u}^{(m)}(t)$ represent the
cache-hit and the cache-miss delays, respectively, calculated as follows [13]
$\mathcal{D}_{u}^{(h)}(t)=\dfrac{L_{c}}{R_{k,j}}=L_{c}\log^{-1}\left(1+\dfrac{P_{k}10^{\mathcal{\overline{L}}_{k,j}(t)/10}}{I_{k}(t,\bm{u}_{-k})+N_{0}}\right),$ (12)
$\mathcal{D}_{u}^{(m)}(t)=\underbrace{L_{c}\log^{-1}\left(1+\dfrac{P_{k}10^{\mathcal{\overline{L}}_{m,k}(t)/10}}{I_{k}(t,\bm{u}_{-k})+N_{0}}\right)}_{\triangleq\mathbf{L_{MU}}}+\underbrace{L_{c}\log^{-1}\left(1+\dfrac{P_{k}10^{\mathcal{\overline{L}}_{k,j}(t)/10}}{I_{k}(t,\bm{u}_{-k})+N_{0}}\right)}_{\triangleq\mathbf{L_{UG}}},$ (13)
where $L_{c}$ and $R_{k,j}$ represent the file size of $c_{l}$ and the
transmission data rate from UAV $u_{k}$ to $GU_{j}$. Furthermore, $P_{k}$ and
$I_{k}(t,\bm{u}_{-k})$ denote the transmission power of UAV $u_{k}$ and the
interference power caused by other UAV-user links for the transmission link
between $u_{k}$ and $GU_{j}$, respectively. Note that when the cache-miss
happens, the content should be first provided for the UAV by the main server.
Therefore, $L_{MU}$ and $L_{UG}$ in Eq. (13) represent the users’ access delay
related to the main server-UAV and UAV-ground user links, respectively.
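Eqs. (9)-(11) amount to a popularity-weighted mixture of hit and miss delays, sketched below with hypothetical numbers:

```python
def expected_uav_delay(popularity: dict[int, float], cached: set[int],
                       d_hit: float, d_miss: float) -> float:
    """Eqs. (9)-(11): the cache-hit probability is the total request
    probability of the cached contents; the expected access delay mixes
    the cache-hit and cache-miss delays accordingly."""
    p_hit = sum(p for l, p in popularity.items() if l in cached)
    return p_hit * d_hit + (1 - p_hit) * d_miss

pop = {1: 0.5, 2: 0.3, 3: 0.2}        # hypothetical request probabilities
# Caching the two most popular contents gives p_hit = 0.8.
d = expected_uav_delay(pop, cached={1, 2}, d_hit=1.0, d_miss=5.0)
assert abs(d - 1.8) < 1e-12           # 0.8*1.0 + 0.2*5.0
```

The miss delay is larger because it adds the main server-to-UAV hop ($L_{MU}$) on top of the UAV-to-ground hop ($L_{UG}$).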
Given users’ access delay through UAVs, the goal is to place contents in the
storage of UAVs to minimize the users’ access delay in Eq. (11). Due to the
large coverage area of UAVs, ground users rarely move between areas supported
by different UAVs. Therefore, we assume
that contents (either popular or mediocre ones) are cached completely in the
storage of UAVs. With the aim of minimizing users’ access delay, the cached
contents are selected as the solution of the following optimization problem:
$\min\limits_{\mathrm{x}_{l}}~\sum\limits_{l=1}^{N_{c}}\Big(\sum\limits_{j=1}^{N_{g}}p_{l}^{(j)}(t)\mathcal{D}_{u}^{(j)}(t)\Big)\mathrm{x}_{l}$ (14)
$\text{s.t.}~~\textbf{C1.}~~\mathrm{x}_{l}\in\\{0,1\\},$
$~~~~~~~~\textbf{C2.}~~\sum\limits_{l=1}^{N_{c}}(1-\mathrm{x}_{l})\leq C_{u},$
where $p_{l}^{(j)}(t)$ denotes the probability of requesting content $c_{l}$
by the ground user $GU_{j}$ at time slot $t$, which is obtained according to
the request history of ground user $GU_{j}$ [2]. Furthermore,
$\mathcal{D}_{u}^{(j)}(t)$ is the delay that the ground user $GU_{j}$ may
experience, which is calculated based on Eq. (11). In the constraint C1,
$\mathrm{{x}}_{l}$ is an indicator variable, which is equal to $0$ when
content $c_{l}$ exists in the cache of UAV $u_{k}$. Moreover, constraint C2
ensures that the total number of contents cached in the storage of $u_{k}$
does not exceed its storage capacity $C_{u}$.
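Because each content contributes its weight $\sum_{j}p_{l}^{(j)}(t)\mathcal{D}_{u}^{(j)}(t)$ to the objective of problem (14) only when it is left uncached ($\mathrm{x}_{l}=1$), the problem is solved exactly by caching the $C_{u}$ contents with the largest weights. A minimal sketch (weights are hypothetical):

```python
def uav_content_placement(weights: list[float], c_u: int) -> set[int]:
    """Solve problem (14): w_l = sum_j p_l^(j)(t) * D_u^(j)(t) is incurred
    only for contents that are NOT cached, so the optimal policy caches the
    c_u contents with the largest weights (i.e., sets x_l = 0 for them)."""
    ranked = sorted(range(len(weights)), key=lambda l: weights[l], reverse=True)
    return set(ranked[:c_u])

w = [0.9, 0.1, 0.6, 0.3]              # hypothetical popularity-weighted delays
assert uav_content_placement(w, c_u=2) == {0, 2}
```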
#### III-A2 FAPs’ Content Placement
Serving requests by FAPs leads to a ground-to-ground connection type between
FAPs and ground users. Similarly, the users’ access delay through FAP
connections is calculated as
$\mathcal{D}_{f}(t)=p_{k}^{(h)}(t)\mathcal{D}_{f}^{(h)}(t)+p_{k}^{(m)}(t)\mathcal{D}_{f}^{(m)}(t),$
(15)
where $\mathcal{D}_{f}^{(h)}(t)$, as the cache-hit delay, is expressed as
$\mathcal{D}_{f}^{(h)}(t)=\dfrac{L_{c}}{R_{i,j}}=L_{c}\log^{-1}\left(1+\dfrac{P_{i}|\mathcal{\tilde{H}}_{i,j}(t)|^{2}}{I_{\bm{f}_{-i}}(t)+N_{0}}\right).$
(16)
In this case, coded contents to be stored in the storage of FAPs are
determined according to the solution of the following optimization problem:
$\mathcal{F}(\mathbf{y},\mathbf{z})=\min\limits_{y_{l},z_{l}}~\sum\limits_{l=1}^{N_{c}}\Big(\sum\limits_{j=1}^{N_{g}}p_{l}^{(j)}(t)\mathcal{D}_{f}^{(j)}(t)\Big)y_{l}+\sum\limits_{l=1}^{N_{c}}\Big(\sum\limits_{j=1}^{N_{g}}p_{l}^{(j)}(t)\mathcal{D}_{f}^{(j)}(t)\Big)z_{l},$ (17)
$\text{s.t.}~~\textbf{C1.}~~y_{l},z_{l}\in\\{0,1\\},$
$~~~~~~~~\textbf{C2.}~~\sum\limits_{l=1}^{N_{c}}(1-y_{l})\leq\lfloor\alpha C_{f}\rfloor,$
$~~~~~~~~\textbf{C3.}~~\sum\limits_{l=1}^{N_{c}}(1-z_{l})\leq N_{s}(C_{f}-\lfloor\alpha C_{f}\rfloor),$
where $\mathcal{F}(\mathbf{y},\mathbf{z})$ is the cost function associated
with users’ access delay, experienced by serving the request through FAPs. By
assuming that $N_{p}=\lfloor\alpha C_{f}\rfloor$ and
$N_{a}=N_{s}(C_{f}-\lfloor\alpha C_{f}\rfloor)$ are the cardinality of popular
and mediocre contents, respectively, $\mathbf{y}=[y_{1},\ldots,y_{N_{p}}]^{T}$
is an indicator vector for popular contents, where $y_{l}$ is 0 if the
$l^{th}$ content is stored in the cache of FAPs, and 1 otherwise.
Similarly, $\mathbf{z}=[z_{1},\ldots,z_{N_{a}}]^{T}$ is an indicator vector
for mediocre contents. According to the optimization problem, unlike popular
contents, which are stored completely, only one segment of each mediocre
content is cached. Likewise, $y_{l}$ and $z_{l}$ in constraint C1
illustrate the availability of content $c_{l}$ in the cache of FAP $f_{i}$.
Finally, constraints C2 and C3 indicate the portion of cache allocated to
popular and mediocre contents, respectively. Due to the large size of video
contents and the complexity of the content placement, it is essential to
update the storage of caching nodes in the off-peak period [36]. Therefore, we
use an adaptive time window for cache updating, introduced in our previous
work [2], to maintain a trade-off between the on-time popularity recognition
of contents and the network’s traffic.
### III-B Content Placement in Multiple Inter-Clusters
Figure 2: A typical hexagonal cellular network, where seven FAPs form an
inter-cluster.
After identifying popular and mediocre contents, we need to determine how to
store different segments of mediocre contents within (i) An inter-cluster,
and; (ii) Multiple inter-clusters.
Single Inter-Cluster: The main idea behind the coded placement scheme in our
proposed CCUF framework comes from the frequency reusing technique in cellular
networks [37]. The distance between two cells with the same spectrum bandwidth
is determined in such a way that the resource availability increases and the
inter-cell interference decreases [38]. With the same argument in [37, 38],
the same mediocre contents are stored in different inter-clusters, while
different FAPs belonging to an inter-cluster store different segments of the
mediocre contents. Without loss of generality, we first consider a simple
hexagonal cellular network including $N_{b}$ FAPs as one inter-cluster (Fig.
2). Given the vector $\mathbf{z}=[z_{1},\ldots,z_{N_{a}}]^{T}$ that determines
the mediocre contents, in this phase, we need to indicate which segment of the
mediocre content $c_{l}$, denoted by $c_{ls}$ for $(1\leq l\leq N_{a})$ and
$(1\leq s\leq N_{s})$, should be cached in FAP $f_{i}$ for $(1\leq i\leq
N_{b})$. In this regard, we form an $(N_{a}\times N_{s})$ indicator matrix of
FAP $f_{i}$, denoted by $\bm{Z}^{(f_{i})}$, where the $l^{\text{th}}$ row of
$\bm{Z}^{(f_{i})}$, denoted by $\bm{z}^{(f_{i})}_{l}=[0,0,\ldots,1]_{(1\times
N_{s})}$ indicates segments of file $c_{l}$ stored in the cache of FAP
$f_{i}$. Note that $\bm{z}^{(f_{i})}_{l}$ is a zero vector with a single non-
zero element, where $z^{(f_{i})}_{ls}=1$ means that the $s^{\text{th}}$
segment of file $c_{l}$ is stored in the cache of FAP $f_{i}$. To store
different segments of mediocre contents within an inter-cluster, the cached
contents of FAP $f_{j}$, for ($1\leq j\leq N_{b}, j\neq i$), are determined as
follows
$\bm{z}^{(f_{i})}_{l}{\bm{z}_{l}^{(f_{j})}}^{T}=0,\quad i,j=1,\ldots,N_{b},\ i\neq j.$ (18)
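As an illustration, the placement condition of Eq. (18) can be sketched in Python. The rotation rule below (one distinct segment per FAP, shifted per content, with the number of FAPs equal to the number of segments) is an assumption for illustration only, and the function and variable names are ours:

```python
import numpy as np

def place_segments(num_mediocre, num_faps):
    """Build the indicator matrices Z^(f_i): row l of Z[i] is a one-hot
    vector marking which segment of mediocre content c_l FAP f_i caches."""
    Z = {}
    for i in range(num_faps):
        M = np.zeros((num_mediocre, num_faps), dtype=int)
        for l in range(num_mediocre):
            # Rotate the segment index per content so that no two FAPs of
            # the inter-cluster cache the same segment (Eq. (18) holds).
            M[l, (i + l) % num_faps] = 1
        Z[i] = M
    return Z

Z = place_segments(num_mediocre=4, num_faps=7)
# Orthogonality check of Eq. (18): rows of distinct FAPs never overlap.
for i in range(7):
    for j in range(7):
        if i != j:
            assert (np.sum(Z[i] * Z[j], axis=1) == 0).all()
```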
Multiple Inter-Clusters: After allocating mediocre contents to FAPs inside an
inter-cluster, the same content as FAP $f_{i}$ is stored in FAP $f_{k}$ in the
neighboring inter-cluster, where $k$ is given by
$\bm{Z}^{(f_{k})}=\bm{Z}^{(f_{i})}~{}~{}~{}\text{if}~{}~{}~{}k=w^{2}+wz+z^{2},$
(19)
where $w$ and $z$ represent the number of cells that must be traversed, in two
different directions, to reach another FAP storing the same contents [38] (see
Fig. 2). More precisely, one first moves $w$ cells along any direction from
FAP $f_{i}$, then turns $60$ degrees counter-clockwise and moves $z$ cells to
reach FAP $f_{k}$. For example, in Fig. 2, $N_{s}=7$, $w=2$, and $z=1$ when
starting from a FAP caching $c_{13}$ and reaching the corresponding FAP in a
neighboring inter-cluster.
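For instance, the reuse index of Eq. (19) for the seven-cell pattern of Fig. 2 can be computed directly (a trivial sketch; the function name is ours):

```python
def reuse_index(w, z):
    # Rhombic-coordinate reuse distance from cellular frequency planning:
    # FAP f_k caches the same segments as f_i when k = w^2 + w*z + z^2 (Eq. (19)).
    return w * w + w * z + z * z

# Fig. 2 example: w = 2, z = 1 yields the 7-cell reuse pattern (k = 7).
assert reuse_index(2, 1) == 7
```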
Remark 1: In a practical scenario, the coverage area of a FAP is influenced by
path-loss and shadowing models and is therefore not exactly hexagonal. A
location $p=(x,y)$ lies within the transmission area of FAP $f_{i}$ if the
strength of the received signal at point $p$, denoted by $RSSI_{p}$, exceeds a
threshold value $RSSI_{th}$, where $RSSI_{p}$ is calculated as
$RSSI_{p}(dB)=RSSI(d_{0})+10\eta\log_{10}\big(\dfrac{d}{d_{0}}\big)+X_{\sigma},$ (20)
with $d$ denoting the distance between the FAP and point $p$ on the boundary
of the FAP's transmission area, and $d_{0}$ the reference distance, set to $1$
m. Moreover, $\eta$ represents the path loss exponent, which is $10$ dB or
$20$ dB, and $X_{\sigma}$ is a zero-mean Gaussian random variable with
standard deviation $\sigma$ that models the effect of multi-path fading in the
CCUF scheme [39].
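The coverage test of Remark 1 can be sketched as follows, implementing Eq. (20) as written (the reference level plus the log-distance term and zero-mean Gaussian shadowing); the function names and parameters are ours:

```python
import math
import random

def rssi_dB(rssi_d0, d, eta, sigma, d0=1.0, rng=None):
    """Received signal strength at distance d per Eq. (20): the reference
    level RSSI(d0) plus the 10*eta*log10(d/d0) term and shadowing X_sigma."""
    rng = rng or random.Random(0)
    x_sigma = rng.gauss(0.0, sigma)  # zero-mean Gaussian shadowing
    return rssi_d0 + 10.0 * eta * math.log10(d / d0) + x_sigma

def in_coverage(rssi_d0, d, eta, sigma, rssi_th):
    # Point p lies in the FAP's transmission area iff RSSI_p exceeds RSSI_th.
    return rssi_dB(rssi_d0, d, eta, sigma) > rssi_th
```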
### III-C Success Probability in the Proposed CCUF Framework
To quantify the benefits of the proposed CCUF strategy, we define the success
probability as the probability that user $GU_{j}$ finds a new segment at time
slot $t$ under two scenarios: (i) the uncoded cluster-centric, and (ii) the
coded cluster-centric UAV-aided femtocaching network, with success
probabilities denoted by $p_{uc}$ and $p_{cc}$, respectively. Owing to the
nature of mobile networks, ground users move and leave their current
positions. In this paper, it is assumed that low-speed ground users can obtain
one segment in each contact, i.e., $T=N_{s}\delta_{t}$ is required to
completely download content $c_{l}$. First, we consider a simple mobility
model, in which ground users enter the transmission area of a new FAP in each
time slot $t$; hence, after $T=N_{s}\delta_{t}$, the whole content $c_{l}$ is
downloaded. Then, we generalize the mobility of ground users to the random
walk model, in which ground users may return to their previous positions.
#### III-C1 Simple Movement Scenario
Regarding the uncoded cluster-centric UAV-aided femtocaching framework,
content $c_{l}$, consisting of segments $c_{ls}$ for ($1\leq s\leq N_{s}$), is
stored completely in all FAPs. Consequently, the probability of downloading
$n=N_{s}$ segments of content $c_{l}$ in $T=N_{s}\delta_{t}$ depends on the
probability of requesting file $c_{l}$, denoted by $p_{l}$. Since the storage
capacity of each FAP is equal to $C_{f}$, the success probability, denoted by
$p_{uc}$, is obtained as follows
$p_{uc}[n=N_{s},t=T]=\sum\limits_{l=1}^{C_{f}}p_{l}.$ (21)
On the other hand, the success probability of the CCUF framework is obtained
as
$p_{cc}[n=N_{s},t=T]=\sum\limits_{l=1}^{N_{p}}p_{l}+\sum\limits_{l=N_{p}+1}^{N_{a}+N_{p}}p_{l}.$
(22)
To illustrate the growth of the success probability in the coded scheme, we
rewrite $p_{uc}$ in Eq. (21) as follows
$p_{uc}[n=N_{s},t=T]=\sum\limits_{l=1}^{N_{p}}p_{l}+\sum\limits_{l=N_{p}+1}^{C_{f}}p_{l}.$
(23)
As can be seen from Eqs. (22) and (23), the first term, related to popular
content, is identical in both. The second term, however, shows that the number
of distinct contents that can be served by the FAPs within an inter-cluster in
the coded cluster-centric network is $\varkappa$ times greater than in the
uncoded one, where $\varkappa$ is given by
$\varkappa=\dfrac{\lfloor\alpha C_{f}\rfloor+N_{s}(C_{f}-\lfloor\alpha
C_{f}\rfloor)}{C_{f}}.$ (24)
Accordingly, due to the allocation of different segments in the coded cluster-
centric network, more segments of the desired contents are accessible during
the users’ movement in the simple movement scenario. Therefore, more requests
can be served in comparison to the uncoded cluster-centric UAV-aided
femtocaching networks.
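Under an assumed Zipf popularity profile (Eqs. (21)-(24) do not fix the profile; the profile and all names below are our assumptions), the two success probabilities and the diversity gain $\varkappa$ of Eq. (24) can be compared numerically:

```python
import math

def zipf(num_contents, gamma):
    """Zipf popularity profile with skewness gamma (an assumed model)."""
    w = [l ** (-gamma) for l in range(1, num_contents + 1)]
    s = sum(w)
    return [x / s for x in w]

def success_probs(p, cache_size, alpha, num_segments):
    """p_uc of Eq. (21) vs p_cc of Eq. (22) in the simple-movement model."""
    n_p = math.floor(alpha * cache_size)        # fully cached popular contents
    n_a = num_segments * (cache_size - n_p)     # distinct segmented mediocre contents
    p_uc = sum(p[:cache_size])                  # Eq. (21)
    p_cc = sum(p[:n_p + n_a])                   # Eq. (22)
    kappa = (n_p + n_a) / cache_size            # diversity gain, Eq. (24)
    return p_uc, p_cc, kappa

p = zipf(1000, gamma=0.6)
p_uc, p_cc, kappa = success_probs(p, cache_size=50, alpha=0.2, num_segments=7)
# With C_f = 50, alpha = 0.2, N_s = 7: kappa = (10 + 280) / 50 = 5.8.
```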
#### III-C2 Generalizing to Random Walk Scenario
In contrast to the simple movement scenario discussed above, there are two
situations in which the ground user $GU_{j}$ cannot find a new segment during
its movement: (i) returning to a previously visited FAP coverage area; and
(ii) entering the transmission area of a FAP that stores a segment of the
content the ground user has already downloaded. Consequently, the success
probability of the coded cluster-centric network differs from that of the
previous scenario. If the requested content is a popular one, regardless of
the link quality of the ground user within an inter-
one, regardless of the link’s quality of the ground user within an inter-
cluster, the ground user can download one segment of the required content with
the probability of $\sum\limits_{l=1}^{N_{p}}p_{l}$ at each contact. While
this part of the success probability is constant, the success probability of
downloading a new segment of a mediocre content in each contact depends on the
current and previous locations of the ground user. Therefore, we first
determine the success probability of achieving a new segment of a mediocre
content, denoted by $p_{ns}(n=n_{0},t=n_{0}\delta_{t})$, for ($1\leq n_{0}\leq
N_{s})$. Then, we calculate the success probability of a coded cluster-centric
network based on the random walk movement.
As can be seen from Fig. 2, regardless of the location of $GU_{j}$, this user
can successfully download one segment in the first contact (i.e., $n_{0}=1$);
therefore, $p_{ns}(n=1,t=\delta_{t})=1$. Similarly, when $n_{0}=2$, the ground
user $GU_{j}$ can download a new segment irrespective of its location, so the
probability of downloading two segments after two contacts is
$p_{ns}[n=2,t=2\delta_{t}]=1$. More precisely, in the second contact, the
ground user can be positioned in the cell of any of the $(N_{s}-1)$ other
FAPs, where the probability of being in the cell of FAP $f_{i}$ is
$p(f=f_{i})=\dfrac{1}{N_{s}-1}$. Therefore, we have
$\displaystyle p_{ns}[n=2,t=2\delta_{t}]=\sum_{i=1}^{N_{s}-1}p_{ns}[n=2,t=2\delta_{t}|f=f_{i}]\,p(f=f_{i})=(N_{s}-1)\times 1\times\dfrac{1}{N_{s}-1}+0\times\dfrac{1}{N_{s}-1}=1.$ (25)
Accordingly, the probability of finding a new segment in the third contact is
obtained as follows
$\displaystyle p_{ns}[n=3,t=3\delta_{t}]=\sum_{i=1}^{N_{s}-1}p_{ns}[n=3,t=3\delta_{t}|f=f_{i}]\,p(f=f_{i})=(N_{s}-2)\times\dfrac{1}{N_{s}-1}\times 1+\dfrac{1}{N_{s}-1}\times 0=\dfrac{N_{s}-2}{N_{s}-1},$ (26)
where $(N_{s}-2)$ FAPs store different segments, whereas if $GU_{j}$ returns
to the FAP visited at $t=\delta_{t}$, the ground user finds an already-
downloaded segment. Similarly, it can be shown that the probability of finding
a new segment for $n_{0}>2$ is given by
$\displaystyle p_{ns}[n=n_{0},t=n_{0}\delta_{t}]=\sum_{i=1}^{N_{s}-1}p_{ns}[n=n_{0},t=n_{0}\delta_{t}|f=f_{i}]\,p(f=f_{i})=\dfrac{(N_{s}-2)^{n_{0}-2}}{(N_{s}-1)^{n_{0}-2}},\qquad\text{for}~n_{0}>2.$ (27)
Taking into account the unequal likelihood of finding new segments of mediocre
contents in different contacts, we recalculate $p_{cc}$ as follows
$\displaystyle p_{cc}[n=N_{s},t=T]=\sum\limits_{l=1}^{N_{p}}p_{l}+\sum\limits_{n=1}^{N_{s}}\dfrac{(N_{s}-2)^{n-2}}{(N_{s}-1)^{n-2}}\Big(\sum\limits_{l=N_{p}+1}^{N_{s}(C_{f}-\lfloor\alpha C_{f}\rfloor)}p_{l}\Big).$ (28)
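The per-contact probability of Eq. (27) and the aggregate of Eq. (28) can be sketched as below. Following the derivation above, $p_{ns}$ is taken as one for the first two contacts; the function names are ours:

```python
def p_new_segment(n0, num_segments):
    """Probability of finding a new segment at the n0-th contact under
    the random-walk model (Eq. (27); equals 1 for n0 <= 2)."""
    if n0 <= 2:
        return 1.0
    return ((num_segments - 2) / (num_segments - 1)) ** (n0 - 2)

def p_cc_random_walk(p, n_p, n_a, num_segments):
    """Eq. (28): the constant popular-content term plus the mediocre-content
    term discounted by the per-contact probabilities of Eq. (27)."""
    popular = sum(p[:n_p])
    mediocre = sum(p[n_p:n_p + n_a])
    discount = sum(p_new_segment(n, num_segments)
                   for n in range(1, num_segments + 1))
    return popular + discount * mediocre
```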
Remark 2: If the location of $GU_{j}$ is the same in two consecutive time
slots, $GU_{j}$ is a fixed user. In this case, two scenarios can occur
depending on the popularity of the requested content: (i) similar to the
mobile users’ case, if the requested content $c_{l}$ is popular, all segments
of file $c_{l}$ are sent by the neighboring FAP to $GU_{j}$; and (ii) if
$c_{l}$ is a mediocre content, each FAP within the inter-cluster transmits one
segment of file $c_{l}$, which is known as Parallel Transmission (PT) [24].
Algorithm 1 Proposed CCUF Strategy
1:Initialization: Set $\alpha$, $\lambda$, $N_{s}$, and $C_{f}$.
2:Input: $p_{l}^{(j)}(t)$.
3:Output: $\mathrm{x}_{l}$, $\mathtt{y}_{l}$, and $\mathtt{z}_{l}$.
4:Content Placement Phase:
5:for $u_{k},k=1,\ldots,N_{u}$, do
$\min\limits_{\begin{subarray}{c}\mathrm{x}_{l}\end{subarray}}~{}~{}\sum\limits_{l=1}^{N_{c}}\Big{(}\sum\limits_{j=1}^{N_{g}}p_{l}^{(j)}(t)\mathcal{D}_{u}^{(j)}(t)\Big{)}\mathrm{x}_{l}$
6: s.t. C1. and C2. in Eq. (14).
7:end for
8:for $f_{i},i=1,\ldots,N_{f}$, do
$\min\limits_{\begin{subarray}{c}y_{l},z_{l}\end{subarray}}~{}~{}\sum\limits_{l=1}^{N_{c}}\Big{(}\sum\limits_{j=1}^{N_{g}}p_{l}^{(j)}(t)\mathcal{D}_{f}^{(j)}(t)\Big{)}y_{l}+$
$\sum\limits_{l=1}^{N_{c}}\Big{(}\sum\limits_{j=1}^{N_{g}}p_{l}^{(j)}(t)\mathcal{D}_{f}^{(j)}(t)\Big{)}z_{l},$
9: s.t. C1.-C3. in Eq. (III-A2).
10:end for
11:$\bm{z}^{(f_{i})}_{l}{\bm{z}_{l}^{(f_{j})}}^{T}=0,\quad i,j=1,\ldots,N_{s},\ i\neq j,$
12:$\bm{Z}^{(f_{k})}=\bm{Z}^{(f_{i})}\quad\text{if}\quad k=w^{2}+wz+z^{2},$
13:Transmission Phase:
14:for $GU_{j},j=1,\ldots,N_{g}$, do
15: if $GU_{j}$ is in indoor environment then
16: if $GU_{j}$ is an edge-user and requests
17: popular content then
18: The request should be handled according to the
19: JT scheme.
20: else
21: The request should be handled according to the
22: ST scheme.
23: end if
24: else
25: if $\upsilon_{j}(t)\geq\upsilon_{\text{th}}$ then
26:
27: The request is served by UAV $u_{k}$.
28: else
29: Similar to lines $16$ to $23$.
30: end if
31: end if
32:end for
### III-D 2-D Deployment of UAVs in Intra-clusters
To increase the resource availability for ground users, the outdoor
environment is partitioned based on an unsupervised learning algorithm, where
each partition is covered by a UAV. Considering a Gaussian mixture
distribution for ground users, we have a dense population of ground users in
some areas. The main goal is to deploy UAVs in such a way that ground users
can experience high QoS communications even in a dense area. Note that the
distance between UAVs and ground users is a critical factor that can
significantly impact the QoS from different perspectives such as the energy
consumption of UAVs and the users’ access delay. Our goal is to partition
$N_{g}$ ground users into $K$ intra-clusters, where the sum of Euclidean
distances between the ground user $GU_{j}$, for $(1\leq j\leq N_{g}^{k})$, and
UAV $u_{k}$ is minimized. Here, $N_{g}^{k}$ is the number of ground users
positioned in the intra-cluster associated with UAV $u_{k}$.
Therefore, the UAVs’ deployment is obtained according to the following
optimization problem
$\displaystyle\min\limits_{\begin{subarray}{c}\bm{l}_{k}(t)\end{subarray}}~{}~{}\sum\limits_{k=1}^{N_{u}}\sum\limits_{j=1}^{N_{g}^{k}}||\bm{l}_{j}(t),\bm{l}_{k}(t)||,$
(29)
where $\bm{l}_{k}(t)$ denotes the location of the UAV $u_{k}$ at time slot
$t$, defined as the mean of the coordinates of all ground users inside the
corresponding intra-cluster as follows
$\displaystyle\bm{l}_{k}(t)=\dfrac{\sum\limits_{j=1}^{N_{g}^{k}}\bm{l}_{j}(t)}{N_{g}^{k}},~{}~{}~{}~{}~{}~{}k=1,\ldots,N_{u}.$
(30)
To solve the above optimization problem, we utilize the $K$-Means clustering
algorithm [40], which is known as an efficient unsupervised learning
framework. In the first step, a set of points, denoted by
$\mathcal{P}=\\{P_{1},\ldots,P_{N_{u}}\\}$, is generated, where $P_{k}$ for
($1\leq k\leq N_{u}$) should be within the pre-specified environment. Then,
the set of ground users in the vicinity of $P_{k}$ is determined as follows
$\displaystyle GU_{j}\in N_{g}^{k}\quad\text{if}\quad||\bm{l}_{j}(t),P_{k}||<||\bm{l}_{j}(t),P_{r}||,\quad\forall k\neq r.$ (31)
Given the set of ground users belonging to each intra-cluster, UAVs’ locations
are determined according to Eq. (30). In the second step, ground users are
reassigned between intra-clusters and the Euclidean distances between ground
users and UAVs are recalculated to update the UAVs’ locations according to Eq.
(29). The $K$-Means algorithm terminates when the assignment of ground users
to intra-clusters remains unchanged over several iterations. This
completes our discussion on development of the CCUF scheme. The pseudo-code of
the proposed CCUF framework is summarized in Algorithm 1.
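The two-step procedure above is plain Lloyd's $K$-Means; a self-contained sketch (function and variable names are ours) is:

```python
import random

def kmeans_uav(points, num_uavs, iters=100, seed=0):
    """Partition ground users into intra-clusters and place each UAV at the
    centroid of its cluster, per Eqs. (29)-(31) (Lloyd's K-Means)."""
    rng = random.Random(seed)
    centers = rng.sample(points, num_uavs)          # initial points P_k
    clusters = [[] for _ in range(num_uavs)]
    for _ in range(iters):
        # Assignment step, Eq. (31): each user joins its nearest center.
        clusters = [[] for _ in range(num_uavs)]
        for (x, y) in points:
            k = min(range(num_uavs),
                    key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2)
            clusters[k].append((x, y))
        # Update step, Eq. (30): move each UAV to its cluster centroid.
        new_centers = []
        for c, members in enumerate(clusters):
            if members:
                mx = sum(q[0] for q in members) / len(members)
                my = sum(q[1] for q in members) / len(members)
                new_centers.append((mx, my))
            else:
                new_centers.append(centers[c])      # keep an empty cluster's UAV
        if new_centers == centers:                  # converged: assignment stable
            break
        centers = new_centers
    return centers, clusters
```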
## IV Simulation Results
To demonstrate the advantage of the proposed CCUF framework, we consider a
UAV-aided cellular network with $R=5000$ m, covered by the main server. There
are $N_{f}=175$ FAPs and $N_{u}=10$ UAVs, where each inter-cluster comprises
$N_{s}=7$ FAPs. Without loss of generality and for simplicity,
we consider a static clustering scheme where there are a fixed number of FAPs
in each inter-cluster to determine how different segments of popular files
should be distributed in an inter-cluster to increase the content diversity.
Therefore, the number of FAPs in each inter-cluster is considered to be the
same as the number of segments $N_{s}$. According to the restrictions of the
aviation regulations, UAVs fly horizontally at the height of $h_{k}=100$ m,
covering a region of the outdoor environment with $R_{u}=500$ m, while the
transmission range of FAPs is $30$ m [32]. The general simulation parameters
are summarized in Table I. Since the optimal content placement is proved to be
an NP-hard problem in [23, 2], we use the fmincon solver in MATLAB (R2020a) to
solve Eqs. (14) and (III-A2). Fig. 3
illustrates a typical $20\times 20$ $m^{2}$ area, where ground users are
randomly distributed and move according to [41]. The estimated location of GUs
is required to determine the transmission scheme and select an appropriate
caching node to manage the request. Owing to its reliability and efficiency,
the AoA localization technique [42, 43] is utilized to estimate the GUs’
locations. It can be shown that the Root Mean Square Error
(RMSE) between the estimated and the actual location of the ground users is
about $0.4$ m, which is acceptable in comparison to the transmission range of
FAPs.
Figure 3: Typical location estimation results based on the AoA localization
scheme.
Fig. 4 depicts an integrated heterogeneous network, where yellow and red areas
determine indoor and outdoor environments, respectively. Fig. 4 also shows the
deployment of UAVs in the intra-clusters within the network, which is
generated by partitioning ground users according to the K-means clustering
algorithm. As a result of the Gaussian mixture distribution for clients, we
have a dense population in some areas, which can be changed over time by the
movement of ground users. Therefore, the location of $N_{u}=10$ UAVs and the
formation of intra-clusters in this paper is varying, depending on the user
density distribution. For comparison purposes, and to find the best value of
$\alpha$, three types of caching strategies are considered:
* •
Uncoded UAV-aided Femtocaching (UUF): This scheme is derived by modifying the
Fairness Scheduling algorithm with an Adaptive Time Window (FS-ATW) scheme
[2], where the proposed content placement strategy in [2] is used for both
UAVs and FAPs. In the UUF model, popular contents are stored in their entirety
in FAPs and UAVs without any coding or clustering scheme. It is therefore
equivalent to our proposed CCUF framework with $\alpha=1$, where $\alpha$
denotes the fraction of contents stored completely.
* •
Proposed Cluster-centric and Coded UAV-aided Femtocaching (CCUF): In this
case, the uncoded popular and the coded mediocre contents are stored in the
caching nodes, where $0<\alpha<1$. According to the simulation results, the
best value of $\alpha$ is obtained.
* •
The Conventional Cluster-centric and Coded UAV-aided Femtocaching
(Conventional CCUF): This scheme is an upgraded version of the FemtoCaching
scheme in [23], integrated with the CoMP technology. In this framework,
regardless of the content popularity profile, all contents are stored
partially. For simplicity, this scheme is shown by $\alpha=0$ in simulation
results.
These three strategies are evaluated in terms of the cache-hit-ratio, cache
diversity, cache redundancy, SINR, and users’ access delay to determine the
best value of $\alpha$. Moreover, to illustrate the effect of considering a
UAV-aided femtocaching framework in an integrated network, we compare the
users’ access delay and energy consumption of UAVs, by serving users in both
indoor and outdoor areas.
Figure 4: Deployment of UAVs in intra-clusters within an integrated network, where “yellow” and “red” colors indicate indoor and outdoor environments, respectively.
TABLE I: List of Parameters.
Notation | Value | Notation | Value
---|---|---|---
$N_{g}$ | $500$ | $\eta^{(LoS)}$, $\eta^{(NLoS)}$ | $2.5$, $3$
$N_{f}$ | $180$ | $h_{k}$ | $100$ m
$N_{u}$ | $10$ | $\varpi$, $\psi$ | $2$, $20$
$N_{s}$ | $7$ | $L_{c}$ | $37.5$ MB
$N_{c}$ | $40724$ | $\tau_{p}$ | $0-5$ s
$R_{u}$ | $500$ m | $P_{k}$ | $15$ dBm
$R_{f}$ | $30$ m | $\chi_{\sigma}^{(LoS)}$, $\chi_{\sigma}^{(NLoS)}$ | $3.5$, $3$
$P_{T}(t)$, $P_{R}(t)$ | $0.5$ , $0.25$ W | $N_{0}$ | $-94$ dBm
Figure 5: The cache-hit-ratio versus the popularity parameter $\gamma$ for
different values of $\alpha$. Figure 6: The cache-hit-ratio versus the
$\alpha$ percentage of contents that are stored completely.
Cache-Hit-Ratio: This metric illustrates the number of requests served by
caching nodes versus the total number of requests made across the network. A
higher cache-hit-ratio indicates a better caching framework. Since we
assume that ground users can download one segment in each contact, we evaluate
the cache-hit-ratio in terms of the number of fragmented contents served by
caching nodes. Fig. 5 compares the cache-hit-ratio of the UUF ($\alpha=1$),
the proposed CCUF ($0<\alpha<1$), and conventional CCUF ($\alpha=0$)
frameworks versus the value of $\gamma$. As previously mentioned, parameter
$\gamma$ controls the skewness of the content popularity, where
$\gamma\in[0,1]$. A large value of $\gamma$ indicates that a small number of
contents enjoys high popularity, while a small value of $\gamma$ corresponds
to an almost uniform popularity distribution over the majority of contents. As
can be seen from Fig. 5, for small values of $\gamma$ the conventional CCUF
framework results in a higher cache-hit-ratio.
The most important reason is that given a constant cache capacity, the coded
content placement of the conventional CCUF strategy leads to a remarkable
surge in the content diversity. In contrast, for a high value of $\gamma$,
where a small number of contents is widely requested, the UUF and the proposed
CCUF frameworks have better results compared to the conventional CCUF. By
considering the fact that the common value of $\gamma$ is about
$0.5\leq\gamma\leq 0.6$ (e.g., see [3, 23]), we define $CHR_{th}$ as the
threshold cache-hit-ratio, defined as the average cache-hit-ratio over
different values of $\alpha$ for a specific $\gamma$. As can be seen from Fig.
5, the proposed CCUF framework with $0<\alpha\leq 0.4$ and the UUF scheme
outperform the other schemes in terms of cache-hit-ratio.
Fig. 6 shows the cache-hit-ratio versus different values of $\alpha$ when the
popularity parameter $\gamma$ changes in the range of $0.5$ to $1$.
Accordingly, for $0.5\leq\gamma\leq 0.6$, the cache-hit-ratio decreases
drastically as $\alpha$ increases. In the following, we also investigate the
impact of $\alpha$ on the users’ access delay to determine the best value of
$\alpha$.
Figure 7: The users’ access delay in the indoor environment versus different
values of $\gamma$. Figure 8: The users’ access delay in the indoor environment
versus different values of $\alpha$.
Users’ Access Delay: Users’ access delay depends on three parameters, i.e.,
the availability of the content in caching nodes, the distance between the
ground user and the corresponding caching node, and the channel quality, known
as the SINR. Figs. 7 and 8 compare the users’ access delay of the
aforementioned frameworks, which is obtained according to Eq. (15). By
utilizing the CoMP technology in the proposed CCUF, serving edge-users
according to the JT scheme has a great impact on the SINR, and the users’
access delay decreases as the SINR increases. As can be seen from Table II,
the SINR of edge-users improves with the value of $\alpha$. Note that the JT
scheme can be performed only if the same contents are stored in neighboring
FAPs; therefore, increasing the value of $\alpha$ decreases the users’ access
delay. With the same argument, we define $\mathcal{D}_{th}$, which is
the average users’ access delay over different values of $\alpha$ for a
specific $\gamma$, shown in Fig. 7. Therefore, the best value of $\alpha$ is
$\alpha\geq 0.2$. Consequently, both the cache-hit-ratio and the users’ access
delay of the proposed CCUF framework are efficient for $\alpha\in[0.2,0.4]$.
TABLE II: The SINR experienced by edge-users for different values of $\alpha$ and $\gamma$. | $\gamma=0.5$ | $\gamma=0.6$ | $\gamma=0.7$ | $\gamma=0.8$ | $\gamma=0.9$ | $\gamma=1$
---|---|---|---|---|---|---
$\alpha=0$ | $16.37$ | $16.37$ | $16.37$ | $16.37$ | $16.37$ | $16.37$
$\alpha=0.1$ | $17.55$ | $18.12$ | $18.89$ | $19.84$ | $20.88$ | $21.88$
$\alpha=0.2$ | $18.01$ | $18.65$ | $19.46$ | $20.40$ | $21.38$ | $22.30$
$\alpha=0.4$ | $18.62$ | $19.30$ | $20.11$ | $21.00$ | $21.90$ | $22.70$
$\alpha=0.6$ | $19.06$ | $19.75$ | $20.53$ | $21.37$ | $22.20$ | $22.93$
$\alpha=0.8$ | $19.42$ | $20.09$ | $20.85$ | $21.64$ | $22.42$ | $23.08$
$\alpha=1$ | $19.72$ | $20.38$ | $21.11$ | $21.86$ | $22.58$ | $23.20$
Figure 9: The percentage of the cache diversity and the cache redundancy
versus different values of $\alpha$. Figure 10: The maximum cache capacity,
required to achieve the maximum cache diversity, versus different values of
$\alpha$.
Cache Diversity: This metric illustrates the diversity of contents in an
inter-cluster, which is defined as the number of distinct segments of
contents, expressed as follows
$\displaystyle\mathcal{CD}=\dfrac{N_{a}}{N_{s}C_{f}}=1-\dfrac{\lfloor\alpha
C_{f}\rfloor}{C_{f}}.$ (32)
As stated previously, we have $N_{a}=N_{s}(C_{f}-\lfloor\alpha C_{f}\rfloor)$.
As can be seen from Fig. 9, the value of $\mathcal{CD}$ equals one for
$\alpha=0$, meaning that all cached contents are distinct. The cache
diversity, however, decreases linearly with $\alpha$, reaching its lowest
value of zero when all contents are cached completely (i.e., $\alpha=1$).
Cache Redundancy: This metric indicates the number of identical contents that
ground users encounter during their random movements. As can be seen from Fig.
9, the cache redundancy increases as more contents are stored in their
entirety. Even with coded content placement in the proposed CCUF framework,
ground users moving randomly through the network may encounter identical coded
contents during their movements (see Fig. 2).
Maximum Required Cache Capacity: Given a specific number of contents through
the network, denoted by $N_{c}$, the storage capacity of caching nodes is
determined by $C_{f}=\beta N_{c}$. In this case, parameter $\beta$ indicates
the percentage of contents that can be stored in caching nodes. In the coded
content placement, since only one segment of the contents is cached, it is
fairly likely that the total number of possible segments that can be cached
exceeds the total number of contents. Therefore, the maximum required cache
capacity, denoted by $\beta_{max}$, for different values of $\alpha$ is
obtained as
$\beta_{max}\leq\dfrac{N_{c}}{N_{s}N_{c}-(N_{s}-1)\alpha
N_{c}}=\dfrac{1}{\alpha(1-N_{s})+N_{s}},$ (33)
where the remainder of the storage would be occupied by redundant contents if
$\beta>\beta_{max}$. As can be seen from Fig. 10, the maximum cache capacity
$\beta_{max}$ increases with the value of $\alpha$. Consequently, for smaller
values of $\alpha$, a smaller cache capacity suffices to achieve the maximum
cache diversity.
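Both metrics are straightforward to evaluate; a minimal sketch of Eqs. (32) and (33) (function names are ours):

```python
import math

def cache_diversity(alpha, cache_size):
    # Eq. (32): fraction of distinct cached segments in an inter-cluster.
    return 1.0 - math.floor(alpha * cache_size) / cache_size

def beta_max(alpha, num_segments):
    # Eq. (33): largest cache fraction before redundant caching sets in.
    return 1.0 / (alpha * (1 - num_segments) + num_segments)
```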
Figure 11: The users’ access delay experienced through UAVs in both indoor and
outdoor versus different values of $\beta$. Figure 12: The energy consumption
of UAVs in both indoor and outdoor environments in different time slots.
Figure 13: The normalized energy consumption of UAVs versus the number of
outdoor users, where $\psi$ illustrates the ratio of requests served by UAVs
to the whole outdoor users’ requests. Figure 14: The FAPs’ handover
probability versus different values of
$\zeta=\dfrac{\upsilon}{\upsilon_{\rm{th}}}$.
Users’ Access Delay through UAVs and UAVs’ Energy Consumption: We evaluate the
users’ access delay and the energy consumption of UAVs in Figs. 11 and 12 when
the ground user is located in both indoor and outdoor environments. As can be
seen from Fig. 12, serving requests through UAVs, especially when ground users
are located in the indoor environment, leads to substantial UAV energy
consumption, calculated as follows [8]
$E_{u_{k}}^{(LoS)}(t)=L_{c}P_{T}(t)\tau_{p}+L_{c}P_{R}(t)\tau_{p}+P_{j}^{(LoS)}(t)(\tau_{f}-\tau_{p}),$
(34)
$E_{u_{k}}^{(NLoS)}(t)=L_{c}P_{T}(t)\tau_{p}+L_{c}P_{R}(t)\tau_{p}+P_{j}^{(NLoS)}(t)(\tau_{f}-\tau_{p}),$
(35)
where $P_{T}(t)$ and $P_{R}(t)$ represent the powers consumed for the
transmission and reception of a $1$ Mb file, respectively. Moreover,
$P_{j}(t)$, $\tau_{f}$, and $\tau_{p}$ denote the received power at ground
user $GU_{j}$, and the flyby and pause times of UAV $u_{k}$, respectively. On
the other hand, Fig. 11 shows that indoor users served through UAVs experience
a higher delay than outdoor users. Consequently, serving indoor users by UAVs
is not efficient in terms of either the users’ access delay or the energy
consumption of UAVs. For this reason, ground users in indoor areas are served
by FAPs in inter-clusters.
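Eqs. (34)-(35) differ only in the received-power term; a minimal sketch with the symbol roles as defined above (the function name is ours):

```python
def uav_energy(l_c, p_t, p_r, p_j, tau_f, tau_p):
    """Per-request UAV energy, Eqs. (34)-(35): transmit and receive energy
    for an l_c-sized content over the pause time tau_p, plus the power term
    p_j over the remaining flyby time (tau_f - tau_p)."""
    return l_c * p_t * tau_p + l_c * p_r * tau_p + p_j * (tau_f - tau_p)
```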
Finally, Figs. 13 and 14 illustrate the advantage of serving outdoor users
with both FAPs and UAVs in the CCUF framework. Fig. 13 compares the average
normalized UAVs’ energy consumption in different scenarios, where $\psi$
illustrates the ratio of requests served by UAVs to the whole outdoor users’
requests. For instance, $\psi=1$ means all requests in an outdoor environment
are managed by UAVs regardless of the user’s velocity, while in $\psi=0.7$, it
is assumed that $70\%$ of outdoor users are HSUs, who are supported by UAVs
and $30\%$ of users are LSUs, managed by FAPs. As can be seen from Fig. 13,
serving LSUs’ requests by FAPs leads to a reduction in the UAVs’ energy
consumption. Since UAVs are energy-limited caching nodes, extending their
lifetime is of paramount importance. Moreover, Fig. 14
illustrates the effect of users’ velocity on the FAP’s handover probability,
where $\zeta=\dfrac{\upsilon}{\upsilon_{\rm{th}}}$. For comparison purposes,
two scenarios are defined, where FAP Connection is associated with a case that
all outdoor users regardless of their velocities are supported by FAPs, while
in UAV Connection, LSUs and HSUs are supported by FAPs and UAVs, respectively.
In this case, handover is triggered if the ground user leaves the current
FAP’s coverage before completely downloading one segment. As can be seen from
Fig. 14, serving HSUs ($\zeta>1$) with FAPs triggers frequent handovers, with
a handover probability of one for $\zeta>2$, whereas no FAP handover occurs
when HSUs are served by UAVs.
## V Conclusions
In this paper, we developed a Cluster-centric and Coded UAV-aided Femtocaching
(CCUF) framework for an integrated and dynamic cellular network to maximize
the number of requests served by caching nodes. To increase the cache
diversity and to store distinct segments of contents in neighboring FAPs, we
employed a two-phase clustering technique for FAPs’ formation and UAVs’
deployment. In this case, we formulated the success probability of the
proposed CCUF framework. Moreover, in the cluster-centric cellular network,
multimedia contents were coded based on their popularity profiles. To benefit
from the Coordinated Multi-Point (CoMP) technology and to mitigate inter-cell
interference, we determined the best value for the fraction of contents that
should be stored completely. According to the simulation results and using the
best value of $\alpha$, the proposed CCUF framework increases the cache-hit-
ratio, SINR, and cache diversity, and decreases the users’ access delay and
cache redundancy. Going forward, several
directions deserve further investigation. First, it is of interest to
introduce a Reinforcement Learning (RL)-based method for outdoor environment,
where ground users can be autonomously served by UAVs or FAPs, based on the
dynamic population of their current locations and their speeds. Second, the
optimum number of ground users to be served by a UAV in the proposed network
needs to be analyzed.
## References
Zohreh Hajiakhondi-Meybodi received the B.Sc. degree in Communication
Engineering from Yazd University, Yazd, Iran and the M.Sc. degree in
Communication Systems Engineering (with the highest honor) from Yazd
University, Yazd, Iran in 2013 and 2017, respectively. She is a Ph.D. degree
candidate at Electrical and Computer Engineering (ECE), Concordia University,
Montreal, Canada. Since 2019, she has been an active member of I-SIP Lab at
Concordia University. Her research interests include general areas of wireless
communication networks with a particular emphasis on Femtocaching, Internet of
Things (IoT), Indoor Localization, Optimization Algorithms, and Multimedia
Wireless Sensor Networks (WMSN).
Arash Mohammadi (S’08-M’14-SM’17) is currently an Associate Professor with
Concordia Institute for Information Systems Engineering, Concordia University,
Montreal, QC, Canada. Prior to joining Concordia University and for 2 years,
he was a Postdoctoral Fellow with the Department of Electrical and Computer
Engineering, University of Toronto, Toronto, ON, Canada. Dr. Mohammadi is a
registered professional engineer in Ontario. He is Director-Membership
Developments of the IEEE Signal Processing Society (SPS); General Co-Chair of the “2021 IEEE International Conference on Autonomous Systems (ICAS)”; and Guest Editor for the IEEE Signal Processing Magazine (SPM) Special Issue on “Signal
Processing for Neurorehabilitation and Assistive Technologies”. He is also
currently serving as Associate Editor on the editorial board of IEEE Signal
Processing Letters. He was Co-Chair of “Symposium on Advanced Bio-Signal
Processing and Machine Learning for Assistive and Neuro-Rehabilitation
Systems” as part of 2019 IEEE GlobalSIP, and “Symposium on Advanced Bio-Signal
Processing and Machine Learning for Medical Cyber-Physical Systems,” as a part
of IEEE GlobalSIP’18; The Organizing Chair of 2018 IEEE Signal Processing
Society Video and Image Processing (VIP) Cup, and the Lead Guest Editor for
IEEE Transactions on Signal & Information Processing over Networks Special
Issue on “Distributed Signal Processing for Security and Privacy in Networked
Cyber-Physical Systems”. He is recipient of several distinguishing awards
including the Eshrat Arjomandi Award for outstanding Ph.D. dissertation from
Electrical Engineering and Computer Science Department, York University, in
2013; Concordia President’s Excellence in Teaching Award in 2018; and the 2019
Gina Cody School of Engineering and Computer Science’s Research and Teaching
awards in the new scholar category.
Jamshid Abouei (S’05, M’11, SM’13) received the B.Sc. degree in Electronics
Engineering and the M.Sc. degree in Communication Systems Engineering (with
the highest honor) both from Isfahan University of Technology (IUT), Iran, in
1993 and 1996, respectively, and the Ph.D. degree in Electrical Engineering
from University of Waterloo, Canada, in 2009. He joined with the Department of
Electrical Engineering, Yazd University, Iran, in 1996 (as a Lecturer) and was
promoted to Assistant Professor in 2010, and Associate Professor in 2015. From
1998 to 2004, he served as a Technical Advisor and Design Engineer in the R &
D Center and Cable Design Department in SGCC, Iran. From 2009 to 2010, he was
a Postdoctoral Fellow in the Multimedia Lab, in the Department of Electrical &
Computer Engineering, University of Toronto, Canada, and worked as a Research
Fellow at the Self-Powered Sensor Networks (ORF-SPSN) consortium. During his
sabbatical, he was an Associate Researcher in the Department of Electrical,
Computer and Biomedical Engineering, Ryerson University, Toronto, Canada. Dr
Abouei was the International Relations Chair in 27th ICEE2019 Conference,
Iran, in 2019. Currently, Dr Abouei directs the research group at the Wireless
Networking Laboratory (WINEL), Yazd University, Iran. His research interests
are in the next generation of wireless networks (5G) and wireless sensor
networks (WSNs), with a particular emphasis on PHY/MAC layer designs including
the energy efficiency and optimal resource allocation in cognitive cell-free
massive MIMO networks, multi-user information theory, mobile edge computing
and femtocaching. Dr. Abouei is an IEEE Senior Member and a member of the IEEE Information Theory Society. He has received several awards and scholarships, including
FOE and IGSA awards for excellence in research in University of Waterloo,
Canada, MSRT Ph.D. Scholarship from the Ministry of Science, Research and
Technology, Iran in 2004, Distinguished Researcher award in province of Yazd,
Iran, 2011, and Distinguished Researcher award in Electrical Engineering
Department, Yazd University, Iran, 2013. He is a recipient of the best paper
award for the IEEE Iranian Conference on Electrical Engineering (ICEE 2018).
Ming Hou (M’05-SM’07) is currently a Senior Defence Scientist with Defence
Research and Development Canada (DRDC) and the Principal Authority of Human-
Technology Interactions with the Department of National Defence (DND), Canada.
He is responsible for providing science-based advice at national and
international levels to the Canadian Armed Forces (CAF) and coalition partners
about the investment in and application of advanced technologies for
human–machine systems requirements. He is an Integrator for the Canadian
government 16 billion IDEaS program and one of the three Scientific Advisors
to the Canadian National Centre of Expertise in Human Systems Performance with
responsibilities for guiding national research and development activities in
automation, robotics, and telepresence. He also gives advice for the
development of National Defence AI Science and Technology Strategy and Roadmap
to the CAF and DND. He is the Co-Chair of Human Factors Specialist Team within
NATO Joint Capability Group on Unmanned Aircraft Systems (UAS). His book
Intelligent Adaptive Systems: An Interaction-Centered Design Perspective
became a guiding document to the development of NATO Standard Recommendations
on UAS Human Systems Integration Guide book, UAS Human Factors Experimentation
Guidelines and UAS Sense and Avoid Guidance. As one of the four invited
lecturers, he delivers NATO Lecture Series on UAVs: Technological Challenges,
Concepts of Operations, and Regulatory Issues. He also serves multiple international associations and programs as a chair and a board member.
Konstantinos N. (Kostas) Plataniotis is a Professor and the Bell Canada
Chair in Multimedia with the ECE Department at the University of Toronto. He
is the founder and inaugural Director-Research for the Identity, Privacy and
Security Institute (IPSI) at the University of Toronto and he has served as
the Director for the Knowledge Media Design Institute (KMDI) at the University
of Toronto from January 2010 to July 2012. His research interests are:
knowledge and digital media design, multimedia systems, biometrics, image &
signal processing, communications systems and pattern recognition. Among his
publications in these fields are the books “WLAN Positioning Systems” (2012) and “Multi-linear Subspace Learning: Reduction of Multidimensional Data” (2013). Dr. Plataniotis is a registered professional engineer in
Ontario, Fellow of the IEEE and Fellow of the Engineering Institute of Canada.
He has served as the Editor-in-Chief of the IEEE Signal Processing Letters,
and as Technical Co-Chair of the IEEE 2013 International Conference in
Acoustics, Speech and Signal Processing. He was the IEEE Signal Processing
Society Vice President for Membership (2014 -2016). He is the General Co-Chair
for 2017 IEEE GlobalSIP, the 2018 IEEE International Conference on Image
Processing (ICIP 2018), and the 2021 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP 2021).
# Spectral Functions from Auxiliary-Field Quantum Monte Carlo without Analytic
Continuation: The Extended Koopmans’ Theorem Approach
Joonho Lee<EMAIL_ADDRESS>Department of Chemistry, Columbia University,
New York, NY 10027, USA. Fionn D. Malone<EMAIL_ADDRESS>Quantum
Simulations Group, Lawrence Livermore National Laboratory, 7000 East Avenue,
Livermore, CA 94551 USA. Miguel A. Morales<EMAIL_ADDRESS>Quantum
Simulations Group, Lawrence Livermore National Laboratory, 7000 East Avenue,
Livermore, CA 94551 USA. David R. Reichman<EMAIL_ADDRESS>Department
of Chemistry, Columbia University, New York, NY 10027, USA.
###### Abstract
We explore the extended Koopmans’ theorem (EKT) within the phaseless
auxiliary-field quantum Monte Carlo (AFQMC) method. The EKT allows for the
direct calculation of electron addition and removal spectral functions using
reduced density matrices of the $N$-particle system, and avoids the need for
analytic continuation. The lowest level of EKT with AFQMC, called EKT1-AFQMC,
is benchmarked using small molecules, 14-electron and 54-electron uniform
electron gas supercells, and diamond at the $\Gamma$-point. Via comparison
with numerically exact results (when possible) and coupled-cluster methods, we
find that EKT1-AFQMC can reproduce the qualitative features of spectral
functions for Koopmans-like charge excitations with errors in peak locations
of less than 0.25 eV in a finite basis. We also note the numerical
difficulties that arise in the EKT1-AFQMC eigenvalue problem, especially when
back-propagated quantities are very noisy. We show how a systematic higher
order EKT approach can correct errors in EKT1-based theories with respect to
the satellite region of the spectral function. Our work will be of use for the
study of low-energy charge excitations and spectral functions in correlated
molecules and solids where AFQMC can be reliably performed.
## I Introduction
The dynamical response to external perturbation is one of the most powerful
means of experimentally probing molecules and materials. Examples include
angle-resolved photoemission spectroscopy,Lu _et al._ (2012) electron energy
loss spectroscopy,Egerton (2008) and inelastic neutron scattering,Andreani
_et al._ (2005) each of which encodes the excitation spectrum of a many-body
system. The theoretical description of such experiments can be modeled (in the
linear response regime) by considering many-body Green’s functions.Fetter and
Walecka (2003); Onida _et al._ (2002) For example, differential cross
sections from direct and inverse photoemission experiments can be related to
the (retarded) single-particle Green’s function.Cederbaum and Domcke (1977) In
a general sense, these observables are connected to spectral functions
describing electron removal and addition via the single-particle Green’s
function.Cederbaum and Domcke (1977); Hedin _et al._ (1998)
Given the above facts, the theoretical description of dynamical response properties has been dominated by Green’s function-based approaches, mainly due to the direct access to the spectral function that they afford.Onida _et al._ (2002); Duchemin and Blase (2020) Among Green’s function methods, the G0W0
approachHybertsen and Louie (1986) is perhaps the most widely used
approximation to Hedin’s equations for the description of quasi-particle
spectra in solids.Hedin (1965) This approach has provided numerous valuable
insights for the interpretation of experimental data.Onida _et al._ (2002);
Reining (2018) Despite these successes, G0W0 has several deficiencies, such as
a notable dependence on the input Green’s function G0 and the screened Coulomb
operator W0,Rinke _et al._ (2005); Fuchs _et al._ (2007); Marom _et al._
(2012); Bruneval (2012) a poor description of satellite region of the spectral
function,Langreth (1970); Aryasetiawan _et al._ (1996); Guzzo _et al._
(2011); Lischner _et al._ (2013) and the absence of certain conserving
properties.Stan _et al._ (2009) Attempts have been made to address these
deficiencies via (partially) self-consistent GW approaches,von Barth and Holm
(1996); Holm and von Barth (1998); Faleev _et al._ (2004); van Schilfgaarde
_et al._ (2006); Shishkin and Kresse (2007); Rostgaard _et al._ (2010); Koval
_et al._ (2014) as well as with the incorporation of vertex correctionsBobbert
and Van Haeringen (1994); Schindlmayr and Godby (1998); Romaniello _et al._
(2009); Chen and Pasquarello (2015); Maggio and Kresse (2017) including via
the use of cumulant-based approaches.Holm and Aryasetiawan (1997); Kas _et
al._ (2014); Lischner _et al._ (2014); Caruso and Giustino (2016); Mayers
_et al._ (2016)
There has also been a sizable effort to construct spectral functions based on
wavefunction methods. These approaches include the use of matrix product
states (MPS),Jeckelmann (2002); Schollwöck and White (2006) algebraic
diagrammatic constructions,Schirmer (1982); Schirmer _et al._ (1983);
Tarantelli and Cederbaum (1989); Wenzel _et al._ (2014); Dreuw and Wormit
(2015); Sokolov (2018) and coupled-cluster Green’s function (or equation-of-
motion coupled-cluster) methods.Monkhorst (1977); Nooijen and Snijders (1992,
1993); Peng and Kowalski (2016); McClain _et al._ (2016, 2017); Furukawa _et
al._ (2018); Peng and Kowalski (2018); Shee and Zgid (2019) These and related
approaches have distinct strengths and weaknesses in terms of both cost and
accuracy and continue to be actively pursued.
Another useful path to the description of spectral information is based on
projector quantum Monte Carlo (PQMC) approaches.Blankenbecler and Sugar
(1983); Becca and Sorella (2017) PQMC methods provide a highly accurate means
to simulate the ground state properties of correlated solids.Foulkes _et al._
(2001) Unlike the aforementioned wavefunction-based approaches, PQMC methods
do not provide direct access to real-time and real-frequency Green’s
functions. This is a direct consequence of the imaginary-time propagation at
the heart of all PQMC approaches. A popular way around this hurdle is to first
obtain the imaginary-time Green’s function and then perform analytical
continuation to obtain the real-frequency Green’s function.Silver _et al._
(1990); Gubernatis _et al._ (1991); Jarrell and Gubernatis (1996); Motta _et
al._ (2014, 2015); Otsuki _et al._ (2017); Bertaina _et al._ (2017)
Unfortunately, analytic continuation is numerically ill-conditioned, and the
methods to perform analytic continuation such as the maximum entropy
methodSilver _et al._ (1990); Gubernatis _et al._ (1991) can exhibit
difficulties in resolving sharp features in the real-frequency spectral
function even if high quality imaginary-time Green’s functions are used as
input.Reichman and Rabani (2009); Goulko _et al._ (2017); Dornheim _et al._
(2018) Therefore, it is highly desirable to develop an alternative means to obtain spectral functions that works with PQMC methods without sacrificing their ground state accuracy. There have been approaches based on diffusion Monte
Carlo (DMC) Ceperley and Bernu (1988) and the Krylov-projected full-
configuration QMC (KP-FCIQMC)Blunt _et al._ (2015, 2018) where one samples a
low-energy Hamiltonian matrix and solves an eigenvalue problem to obtain a
low-energy spectrum. Similar to its ground state counterpart, the excited
states from a DMC-based approach would be biased due to the fixed node
error.Ceperley and Bernu (1988) On the other hand, KP-FCIQMC is numerically
exact but scales exponentially in system size analogously to its ground state
counterpart, FCIQMC.Booth _et al._ (2009) Therefore, the scope of KP-FCIQMC
has been limited to small systems.Blunt _et al._ (2015, 2018) Furthermore,
because one is trying to obtain excited state information from imaginary-time
propagation in these approaches, the higher-lying excited states are
exponentially harder to obtain. This makes it challenging for these approaches
to estimate high energy spectral information.
The approach that we will examine in this work is called the extended
Koopmans’ theorem (EKT).Day _et al._ (1975); Smith and Day (1975); Pickup
(1975); Morrison _et al._ (1975); Ellenbogen _et al._ (1977); Morrison and
Liu (1992); Morrison (1992); Sundholm and Olsen (1993); Cioslowski _et al._
(1997); Kent _et al._ (1998); Olsen and Sundholm (1998); Pernal and
Cioslowski (2001); Farnum and Mazziotti (2004); Pernal and Cioslowski (2005);
Ernzerhof (2009); Vanfleteren _et al._ (2009); Piris _et al._ (2012, 2013);
Bozkaya (2013); Welden _et al._ (2015); Bozkaya and Ünal (2018); Pavlyukh
(2018, 2019) The EKT generalizes the Koopmans’ theorem in Hartree-Fock (HF)
theory for arbitrary many-body wavefunctions. Its working ingredients are
reduced density matrices (RDMs) for an $N$-particle system and it produces
approximate ($N$-1)-particle and ($N$+1)-particle wavefunctions even without
the $N$-particle ground state wavefunction. Due to this desirable feature, EKT
methods have been widely used as a means to obtain spectral information for
approaches for which one has access neither to real-time Green’s functions nor
wavefunctions. Examples include direct RDM-based methods,Farnum and Mazziotti
(2004) density matrix functional theory,Pernal and Cioslowski (2005) natural
orbital functional methods,Piris _et al._ (2012, 2013) and second-order
Green’s function methods.Welden _et al._ (2015) EKT has also been explored
with wavefunction methods such as configuration interaction methods,Morrison
(1992) Møller-Plesset perturbation methods,Cioslowski _et al._ (1997);
Bozkaya (2013) and coupled-cluster methods.Bozkaya and Ünal (2018) It is also
a promising way for any QMC method to compute excited state and spectral
information if the necessary RDMs can be constructed. The EKT has also been
used to obtain the quasi-particle band structure of silicon,(Kent _et al._, 1998) ionization potentials and electron affinities of atoms,(Zheng, 2016) and the Fermi velocity of graphene using variational Monte Carlo (VMC).(Zheng, 2016) Lastly, the EKT has
been combined with DMC to study similar systems.(Zheng, 2016)
A PQMC approach that can be readily combined with the EKT is the phaseless
auxiliary-field QMC (ph-AFQMC) method.Zhang _et al._ (1995, 1997); Zhang and
Krakauer (2003) ph-AFQMC has emerged as a flexible, accurate and scalable
many-body method. It imposes an approximate gauge boundary condition (i.e.,
the phaseless constraint) on the imaginary-time evolution of Slater
determinant walkers, completely removing the Fermionic phase problem.Zhang and
Krakauer (2003) While the resulting energy is neither exact nor a variational
upper bound to the exact ground state energy,Carlson _et al._ (1999) many
benchmark studies have demonstrated the accuracy of ph-AFQMC and its related
variants.LeBlanc _et al._ (2015); Zheng _et al._ (2017); Motta _et al._
(2017); Zhang _et al._ (2018); Motta _et al._ (2020); Lee _et al._ (2019a,
2020a); Williams _et al._ (2020); Malone _et al._ (2020a); Lee _et al._
(2020b); Qin _et al._ (2020); Malone _et al._ (2020b) Furthermore, with
recent advances in local energy evaluation techniques in ph-AFQMC,Malone _et
al._ (2019); Lee and Reichman (2020) the cost for obtaining each statistical
sample scales cubically with system size, which renders it less expensive than
many other many-body methods. With the advent of the back-propagation (BP)
method in ph-AFQMC,Zhang _et al._ (1997); Purwanto and Zhang (2004); Motta
and Zhang (2017) with some additional effort one can compute pure estimators
for any operator, including those that do not commute with the Hamiltonian.
Therefore, one can compute the relevant input for the EKT directly from ph-
AFQMC using the BP algorithm. This is the direction we pursue in this work.
This paper is organized as follows. We first present the general framework of
the EKT, its most common form EKT-1, and its extension, EKT-3. We then discuss
how to obtain the relevant input for EKT-1 using BP and ph-AFQMC. We assess
the accuracy of EKT1-AFQMC on a variety of small molecules and the uniform
electron gas model. We further show the qualitative failure of EKT-1 for the
core spectra of the 14-electron uniform electron gas model and illustrate the drastic improvement obtained by using EKT-3 on the same model. We
also apply EKT1-AFQMC to the 54-electron uniform electron gas model and
diamond at the $\Gamma$-point. We conclude and summarize our most important
findings.
## II Theory
### II.1 The Extended Koopmans’ Theorem
In order to compute quasi-particle gaps and spectral functions, one must
compute ionization potential and electron attachment energies along with the
associated wavefunctions (or at least squared amplitudes for spectral
weights). While we focus in this work on electron removal processes, we keep
our presentation of theory general so that it is also applicable to electron
addition processes.
In the EKT approach, we consider wavefunctions
$|\Psi_{\nu}^{N\pm 1}\rangle=\hat{O}_{\nu}^{\pm}|\Psi_{0}^{N}\rangle,$ (1)
where the electron addition operator $\hat{O}_{\nu}^{+}$ is
$\hat{O}_{\nu}^{+}=\sum_{p}(c_{+})_{p}^{\nu}\hat{a}_{p}^{\dagger}$ (2)
for 1-particle excitations, and the electron removal operator
$\hat{O}_{\nu}^{-}$ is
$\hat{O}_{\nu}^{-}=\sum_{p}(c_{-})_{p}^{\nu}\hat{a}_{p}$ (3)
for 1-hole excitations. We obtain the linear coefficients $\mathbf{c}_{\pm}$
by minimizing the following variational energy expression:
$\Delta E_{\nu}^{\pm}=E_{\nu}^{(N\pm
1)}-E_{0}^{(N)}=\frac{\langle\Psi_{0}^{N}|({\hat{O}_{\nu}^{\pm}})^{\dagger}[\hat{\mathcal{H}},\hat{O}_{\nu}^{\pm}]|\Psi_{0}^{N}\rangle}{\langle\Psi_{0}^{N}|({\hat{O}_{\nu}^{\pm}})^{\dagger}\hat{O}_{\nu}^{\pm}|\Psi_{0}^{N}\rangle},$
(4)
where we have defined
$E_{\nu}^{(N\pm
1)}=\frac{\langle\Psi_{0}^{N}|({\hat{O}_{\nu}^{\pm}})^{\dagger}\hat{\mathcal{H}}\hat{O}_{\nu}^{\pm}|\Psi_{0}^{N}\rangle}{\langle\Psi_{0}^{N}|({\hat{O}_{\nu}^{\pm}})^{\dagger}\hat{O}_{\nu}^{\pm}|\Psi_{0}^{N}\rangle},$
(5)
and assumed
$\hat{\mathcal{H}}|\Psi_{0}^{N}\rangle=E_{0}^{(N)}|\Psi_{0}^{N}\rangle.$ (6)
We refer to this approach as EKT-1. The excitation levels in Eq. 3 and Eq. 2 can be systematically increased to achieve greater accuracy, in principle, at the expense of greater computational cost.Farnum and Mazziotti (2004);
Pavlyukh (2018) The next level of theory would incorporate 2h1p and 2p1h
excitations instead of Eq. 3 and Eq. 2, respectively:
$\hat{O}_{\nu}^{+}=\sum_{pqr}(c_{+})_{pqr}^{\nu}\hat{a}_{r}\hat{a}_{q}^{\dagger}\hat{a}_{p}^{\dagger}$
(7)
and
$\hat{O}_{\nu}^{-}=\sum_{pqr}(c_{-})_{pqr}^{\nu}\hat{a}_{r}^{\dagger}\hat{a}_{q}\hat{a}_{p}$
(8)
These operators include the EKT-1 excitations because when $r=q$ we recover the 1h and 1p excitations of Eq. 3 and Eq. 2. We refer to this higher level of theory as EKT-3.
#### II.1.1 EKT-1
We consider the following Lagrangian for 1h and 1p excitations
$\displaystyle\mathcal{L}[\mathbf{c}^{\nu}]=\langle\Psi_{0}^{N}|({\hat{O}_{\nu}^{\pm}})^{\dagger}[\hat{\mathcal{H}},\hat{O}_{\nu}^{\pm}]|\Psi_{0}^{N}\rangle-\epsilon^{\nu}_{\pm}((\mathbf{c}^{\nu})^{\dagger}\mathbf{S}_{\pm}\mathbf{c}^{\nu}-1),$
(9)
where $\mathbf{S}_{\pm}$ is a pertinent metric matrix for normalization and
$\epsilon^{\nu}_{\pm}$ is a Lagrange multiplier. We note that
$\epsilon_{\pm}^{\nu}=\Delta E_{\nu}^{\pm}.$ (10)
The normalization of $|\Psi_{\nu}^{N\pm 1}\rangle$ is ensured by the
constraint in Eq. 9. The stationary condition of Eq. 9 with respect to
$(\mathbf{c}_{\pm}^{\nu})^{\dagger}$ then leads to a generalized eigenvalue
equation,
$\mathbf{F}_{\pm}\mathbf{c}_{\pm}^{\nu}=\epsilon_{\pm}^{\nu}\mathbf{S}_{\pm}\mathbf{c}_{\pm}^{\nu}$
(11)
where the generalized Fock matrix is defined as (assuming that
$|\Psi_{0}^{N}\rangle$ is normalized)
$(\mathbf{F}_{-})_{pq}=\langle\Psi_{0}^{N}|\hat{a}_{p}^{\dagger}[\hat{\mathcal{H}},\hat{a}_{q}]|\Psi_{0}^{N}\rangle,$
(12)
and
$(\mathbf{F}_{+})_{pq}=\langle\Psi_{0}^{N}|\hat{a}_{p}[\hat{\mathcal{H}},\hat{a}_{q}^{\dagger}]|\Psi_{0}^{N}\rangle,$
(13)
and the corresponding metric matrix $\mathbf{S}_{\pm}$ is
$\mathbf{S}_{-}=\mathbf{P},$ (14)
and
$\mathbf{S}_{+}=\mathbf{I}-\mathbf{P}^{T}.$ (15)
Here, $\mathbf{P}$ is the one-body reduced density matrix (1-RDM),
$P_{pq}=\langle\Psi_{0}^{N}|\hat{a}_{p}^{\dagger}\hat{a}_{q}|\Psi_{0}^{N}\rangle.$
(16)
The electron affinity (EA) and ionization potential (IP) follow simply as
$\epsilon_{+}=-\text{EA}$ and $\epsilon_{-}=\text{IP}$ (assuming $\nu$
corresponds to the lowest energy state). Then the quasiparticle gap is given
as $\Delta E_{\text{qp}}=\epsilon_{+}+\epsilon_{-}$. We note that these Fock
matrices are not Hermitian unless $|\Psi_{0}^{N}\rangle$ is an exact
eigenstate of $\hat{\mathcal{H}}$.
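In practice, Eq. 11 is a generalized eigenvalue problem with a generally non-Hermitian $\mathbf{F}_{\pm}$ and a metric $\mathbf{S}_{\pm}$ that can be (near-)singular, especially when it is estimated stochastically. A minimal sketch of one standard workaround, canonical orthogonalization against the metric, is shown below; the function name and tolerance are our own, and we assume a Hermitian positive-semidefinite metric:

```python
import numpy as np
from scipy.linalg import eigh

def solve_ekt_generalized(F, S, tol=1e-8):
    """Solve F c = eps S c (Eq. 11) for a Hermitian PSD metric S that may
    be (near-)singular: project onto metric eigenvectors with eigenvalue
    > tol, then solve an ordinary, generally non-Hermitian eigenproblem."""
    s, U = eigh(S)                        # S = U diag(s) U^dagger
    keep = s > tol                        # discard (near-)null metric directions
    X = U[:, keep] / np.sqrt(s[keep])     # canonical orthogonalization transform
    F_proj = X.conj().T @ F @ X           # Fock matrix in the retained subspace
    eps = np.linalg.eigvals(F_proj)
    return np.sort(eps.real)
```

Discarding near-null metric directions is what keeps the noisy, stochastically sampled case tractable; the tolerance trades numerical stability against completeness of the retained excitation space.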
To provide more detailed expressions, let us define a generic ab-initio
Hamiltonian,
$\hat{\mathcal{H}}=\hat{\mathcal{H}}_{1}+\hat{\mathcal{H}}_{2},$ (17)
with
$\displaystyle\hat{\mathcal{H}}_{1}$
$\displaystyle=\sum_{pq}h_{pq}\hat{a}_{p}^{\dagger}\hat{a}_{q},$ (18)
$\displaystyle\hat{\mathcal{H}}_{2}$
$\displaystyle=\frac{1}{2}\sum_{pqrs}\langle
pq|rs\rangle\hat{a}_{p}^{\dagger}\hat{a}_{q}^{\dagger}\hat{a}_{s}\hat{a}_{r},$
(19)
where $h_{pq}$ is the one-body Hamiltonian matrix element and $\langle
pq|rs\rangle$ is the two-electron integral tensor in Dirac notation.
Substituting Eq. 17 into Eq. 12 and Eq. 13, it can be shown that
$\mathbf{F}_{\pm}$ can be evaluated with the 1-RDM and the two-body RDM
(2-RDM):
$(\mathbf{F}_{-})_{pq}=-\sum_{r}h_{qr}(\mathbf{S}_{-})_{pr}+\frac{1}{2}\sum_{trs}\langle
tq||rs\rangle\Gamma_{pt}^{rs},$ (20)
and
$\displaystyle(\mathbf{F}_{+})_{pq}=$
$\displaystyle\sum_{r}h_{qr}(\mathbf{S}_{+})_{pr}+\frac{1}{2}\sum_{trs}\langle
rt||qs\rangle\Gamma_{rt}^{sp}$
$\displaystyle+\sum_{rs}(\mathbf{S}_{-})_{rs}\langle pr||qs\rangle,$ (21)
where the 2-RDM $\Gamma$ is
$\Gamma_{pt}^{rs}=\langle\Psi_{0}^{N}|\hat{a}_{p}^{\dagger}\hat{a}_{t}^{\dagger}\hat{a}_{s}\hat{a}_{r}|\Psi_{0}^{N}\rangle,$
(22)
and the antisymmetrized two-electron integral tensor is defined as
$\langle pq||rs\rangle=\langle pq|rs\rangle-\langle pq|sr\rangle.$ (23)
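In practice the antisymmetrized tensor of Eq. 23 is a single tensor transpose; the generic numpy sketch below also checks that the particle-interchange symmetry $\langle pq|rs\rangle=\langle qp|sr\rangle$ carries over to $\langle pq||rs\rangle$.

```python
import numpy as np

def antisymmetrize(V):
    """<pq||rs> = <pq|rs> - <pq|sr> for V[p,q,r,s] in Dirac ordering."""
    return V - V.transpose(0, 1, 3, 2)

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4, 4, 4))
V = W + W.transpose(1, 0, 3, 2)        # impose <pq|rs> = <qp|sr>
A = antisymmetrize(V)
assert np.allclose(A, -A.transpose(0, 1, 3, 2))   # <pq||rs> = -<pq||sr>
assert np.allclose(A, A.transpose(1, 0, 3, 2))    # <pq||rs> =  <qp||sr>
```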
#### II.1.2 EKT-3
For many solid state systems, including only 1h or 1p excitations is not
sufficient, as such excitations are incapable of describing satellite
peaks.Hedin (1980) A straightforward way to obtain
satellite peaks in addition to dominant quasiparticle peaks is to include
higher-order excitations in the EKT ansatz. Thus, EKT-3 is the next level in
this hierarchy that can be attempted. Although EKT-3 has been mentioned in
literatureFarnum and Mazziotti (2004); Pavlyukh (2018, 2019) and approximately
implemented (neglecting the opposite spin term) before,Farnum and Mazziotti
(2004) to the best of our knowledge this work presents the first complete
implementation of EKT-3 along with numerical results.
The corresponding generalized Fock operator for the IP problem reads
$\displaystyle(\mathbf{F}_{-})_{pqr,stu}$
$\displaystyle=\langle\Psi_{0}^{N}|\hat{a}^{\dagger}_{p}\hat{a}^{\dagger}_{q}\hat{a}_{r}[\hat{\mathcal{H}},\hat{a}_{u}^{\dagger}\hat{a}_{t}\hat{a}_{s}]|\Psi_{0}^{N}\rangle.$
(24)
Using the SecondQuantizationAlgebra packagesqa developed by Neuscamman and
others,Neuscamman _et al._ (2009); Saitow _et al._ (2013) we derived a
complete spin-orbital equation of the generalized Fock operator:
$\displaystyle(\mathbf{F}_{-})_{ijk,lmn}$
$\displaystyle=-h_{kn}\Gamma_{ij}^{ml}-\sum_{a}(h_{la}\Gamma_{ij}^{am}\delta_{kn}+h_{ma}\delta_{kn}\Gamma_{ij}^{al}+h_{la}\Gamma_{ijn}^{amk}-h_{ma}\Gamma_{ijn}^{alk}+h_{na}\Gamma_{ija}^{mlk})$
$\displaystyle+\frac{1}{2}\sum_{ab}(\langle lm||ab\rangle\delta_{kn}\Gamma_{ij}^{ba}-\langle kl||ab\rangle\Gamma_{ijn}^{bam}+\langle km||ab\rangle\Gamma_{ijn}^{bal}-2\langle ka||nb\rangle\Gamma_{ija}^{bml}-\langle lm||ab\rangle\Gamma_{ijn}^{bak})$
$\displaystyle+\frac{1}{2}\sum_{abc}(\langle ma||bc\rangle\delta_{kn}\Gamma_{ija}^{cbl}-\langle la||bc\rangle\delta_{kn}\Gamma_{ija}^{cbm}-\langle la||bc\rangle\Gamma_{ijna}^{cbmk}+\langle ma||bc\rangle\Gamma_{ijna}^{cblk}-\langle na||bc\rangle\Gamma_{ijbc}^{amlk}),$ (25)
where the three-body RDM (3-RDM) is
$\Gamma_{ijk}^{npq}=\langle\Psi_{0}^{N}|\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}\hat{a}_{k}^{\dagger}\hat{a}_{q}\hat{a}_{p}\hat{a}_{n}|\Psi_{0}^{N}\rangle,$
(26)
and the four-body RDM (4-RDM) is
$\Gamma_{ijkl}^{mnpq}=\langle\Psi_{0}^{N}|\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}\hat{a}_{k}^{\dagger}\hat{a}_{l}^{\dagger}\hat{a}_{q}\hat{a}_{p}\hat{a}_{n}\hat{a}_{m}|\Psi_{0}^{N}\rangle.$
(27)
The pertinent metric, $\mathbf{S}$, for this generalized eigenvalue problem is
$\displaystyle S_{pqr,stu}$
$\displaystyle=\langle\Psi_{0}^{N}|\hat{a}^{\dagger}_{p}\hat{a}^{\dagger}_{q}\hat{a}_{r}\hat{a}_{u}^{\dagger}\hat{a}_{t}\hat{a}_{s}|\Psi_{0}^{N}\rangle$
$\displaystyle=\delta_{ur}\Gamma_{pq}^{st}-\Gamma_{pqu}^{str}.$ (28)
The storage requirement of the 4-RDM scales as $\mathcal{O}(N^{8})$ and it
becomes prohibitively expensive for more than 16 orbitals. To circumvent this
problem, we approximate the 4-RDM via a cumulant expansion. The cumulant
approximation to the 4-RDM has been used in multi-reference perturbation
theory and configuration interaction methods previously.Zgid _et al._ (2009);
Saitow _et al._ (2013) In essence, the 4-RDM is approximately constructed
from four classes of terms: (1) 1-RDM$\times$1-RDM$\times$1-RDM$\times$1-RDM,
(2) 2-RDM$\times$1-RDM$\times$1-RDM (3) 2-RDM$\times$2-RDM, and (4)
1-RDM$\times$3-RDM. Interested readers are referred to Ref. 120 for more
details. To construct the cumulant terms we wrote a Python code based on the
Fortran code presented in Ref. 119. For the systems we have investigated, we
found that the error of the cumulant approximation is insignificant, and we
present results with the reconstructed cumulant 4-RDM later in this work. We
further note that Mazziotti and co-workers have used a cumulant expansion for
both 3- and 4-RDMs in their EKT-3 calculations.Farnum and Mazziotti (2004)
Practical implementations may be achieved using spin-orbital expressions where
we consider two spin-blocks in $\mathbf{c}_{-}$ ($(\alpha\alpha\alpha)$ and
$(\alpha\beta\beta)$) for removing an $\alpha$ electron:
$\displaystyle\hat{O}_{\nu}^{-}$
$\displaystyle=\sum_{p_{\alpha}q_{\alpha}r_{\alpha}}(c_{-})_{p_{\alpha}q_{\alpha}r_{\alpha}}^{\nu}\hat{a}_{r_{\alpha}}^{\dagger}\hat{a}_{q_{\alpha}}\hat{a}_{p_{\alpha}}$
$\displaystyle+\sum_{p_{\alpha}q_{\beta}r_{\beta}}(c_{-})_{p_{\alpha}q_{\beta}r_{\beta}}^{\nu}\hat{a}_{r_{\beta}}^{\dagger}\hat{a}_{q_{\beta}}\hat{a}_{p_{\alpha}}.$
(29)
Consequently, this leads to four distinct spin-blocks for $\mathbf{F}$:
$(\alpha\alpha\alpha\alpha\alpha\alpha)$,
$(\alpha\beta\beta\alpha\alpha\alpha)$,
$(\alpha\alpha\alpha\alpha\beta\beta)$, and
$(\alpha\beta\beta\alpha\beta\beta)$.
### II.2 Spectral Functions from EKT
We write the retarded single-particle Green’s function in a finite basis
asFetter and Walecka (2003)
$iG^{R}_{pq}(t,t^{\prime})=\theta(t-t^{\prime})\langle\Psi_{0}^{N}|\\{\hat{a}_{p}(t),\hat{a}^{\dagger}_{q}(t^{\prime})\\}|\Psi_{0}^{N}\rangle,$
(30)
where $\theta(t)$ is the Heaviside step function. Assuming $\hat{\mathcal{H}}$
is time-independent, we can write the Green’s function in the frequency domain
as
$\displaystyle G^{R}_{pq}(\omega+i\eta)$
$\displaystyle=\langle\Psi^{N}_{0}|\hat{a}_{p}\frac{1}{\omega-(\hat{\mathcal{H}}-E_{0}^{(N)})+i\eta}\hat{a}_{q}^{\dagger}|\Psi_{0}^{N}\rangle$
$\displaystyle+\langle\Psi^{N}_{0}|\hat{a}_{q}^{\dagger}\frac{1}{\omega-(\hat{\mathcal{H}}-E_{0}^{(N)})+i\eta}\hat{a}_{p}|\Psi_{0}^{N}\rangle,$
(31)
where $\eta$ is a small positive constant and $E_{0}^{(N)}$ is the
ground-state energy of the $N$-particle system.
The EKT approach offers a systematically improvable way to approximate the
evaluation of Eq. 31. This is because one can form projection operators on the
subspace of EKT excitations,
$\hat{P}_{\pm}=\sum_{\mu\nu}|\Psi_{\mu}^{N\pm
1}\rangle(S_{\mu\nu}^{\pm})^{-1}\langle\Psi_{\nu}^{N\pm 1}|$ (32)
where $|\Psi_{\nu}^{N\pm 1}\rangle$ are approximate wavefunctions obtained via
the EKT as defined in Eq. 1 and $\mathbf{S}^{\pm}$ is a metric in the
pertinent space. Using Eq. 32, we obtain an approximate $\mathbf{G}^{R}$,
$\displaystyle G^{R}_{pq}(\omega+i\eta)$
$\displaystyle\simeq\langle\Psi^{N}_{0}|\hat{a}_{p}\hat{P}_{+}\frac{1}{\omega-(\hat{\mathcal{H}}-E_{0}^{(N)})+i\eta}\hat{P}_{+}\hat{a}_{q}^{\dagger}|\Psi_{0}^{N}\rangle$
$\displaystyle+\langle\Psi^{N}_{0}|\hat{a}_{q}^{\dagger}\hat{P}_{-}\frac{1}{\omega-(\hat{\mathcal{H}}-E_{0}^{(N)})+i\eta}\hat{P}_{-}\hat{a}_{p}|\Psi_{0}^{N}\rangle$
(33)
This approximation can be systematically improved as higher-order excitations
are included in Eq. 3 and Eq. 2. It is exact when $|\Psi_{\nu}^{N\pm
1}\rangle$ spans the entire $({N\pm 1})$-particle Hilbert space. Substituting
Eq. 32 into Eq. 33 and using
$\hat{P}_{\pm}\hat{\mathcal{H}}\hat{P}_{\pm}|\Psi_{\nu}^{N\pm
1}\rangle=E_{\nu}^{N\pm 1}|\Psi_{\nu}^{N\pm 1}\rangle$ (from Eq. 5), we obtain
the (approximate) Lehmann representation of the Green’s function
$\displaystyle G^{R}_{pq}(\omega)$
$\displaystyle=\sum_{\nu}\frac{\langle\Psi^{N}_{0}|\hat{a}_{p}|\tilde{\Psi}_{\nu}^{N+1}\rangle\langle\tilde{\Psi}^{N+1}_{\nu}|\hat{a}_{q}^{\dagger}|\Psi_{0}^{N}\rangle}{\omega-(E_{\nu}^{(N+1)}-E_{0}^{(N)})+i\eta}$
$\displaystyle+\sum_{\nu}\frac{\langle\Psi^{N}_{0}|\hat{a}_{q}^{\dagger}|\tilde{\Psi}_{\nu}^{N-1}\rangle\langle\tilde{\Psi}^{N-1}_{\nu}|\hat{a}_{p}|\Psi_{0}^{N}\rangle}{\omega+(E_{\nu}^{(N-1)}-E_{0}^{{(N)}})+i\eta},$
(34)
where we orthogonalized the eigenvectors by
$|\tilde{\Psi}^{N\pm 1}_{\nu}\rangle=\sum_{\mu}|{\Psi}^{N\pm
1}_{\mu}\rangle(S^{\pm})^{-1/2}_{\mu\nu}.$ (35)
Details concerning the numerical implementation of this orthogonalization
procedure are given in Appendix A.
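Eq. 35 is a symmetric (Löwdin-type) orthogonalization by the inverse square root of the state overlap. A minimal numpy sketch follows; the threshold guarding against a (near-)singular overlap is our assumption for illustration, not the prescription of Appendix A.

```python
import numpy as np

def sym_orthogonalize(V, thresh=1e-10):
    """Return V O^{-1/2}, where O = V^dag V is the overlap of the
    column vectors of V (cf. Eq. 35); near-null directions of O are
    discarded to keep the inverse square root well defined."""
    O = V.conj().T @ V
    s, U = np.linalg.eigh(O)
    keep = s > thresh
    O_inv_half = (U[:, keep] / np.sqrt(s[keep])) @ U[:, keep].conj().T
    return V @ O_inv_half

rng = np.random.default_rng(0)
V = rng.standard_normal((6, 3))
Vt = sym_orthogonalize(V)
assert np.allclose(Vt.T @ Vt, np.eye(3))   # orthonormal after the transform
```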
A spectral function in a finite basis set can be computed from
$\displaystyle A_{pq}(\omega)=-\frac{1}{\pi}\lim_{\eta\rightarrow
0^{+}}\mathrm{Im}\left[G_{pq}^{R}(\omega+i\eta)\right]$
$\displaystyle=A_{pq}^{>}(\omega)+A^{<}_{pq}(\omega),$ (36)
where
$\displaystyle A_{pq}^{>}(\omega)$
$\displaystyle=\sum_{\nu}\langle\Psi^{N}_{0}|\hat{a}_{p}|\Psi_{\nu}^{N+1}\rangle\langle\Psi^{N+1}_{\nu}|\hat{a}_{q}^{\dagger}|\Psi_{0}^{N}\rangle$
$\displaystyle\times\delta\left(\omega-(E_{\nu}^{(N+1)}-E_{0}^{(N)})\right),$
(37)
and
$\displaystyle A^{<}_{pq}(\omega)$
$\displaystyle=\sum_{\nu}\langle\Psi^{N}_{0}|\hat{a}_{q}^{\dagger}|\Psi_{\nu}^{N-1}\rangle\langle\Psi^{N-1}_{\nu}|\hat{a}_{p}|\Psi_{0}^{N}\rangle$
$\displaystyle\times\delta\left(\omega+(E_{\nu}^{(N-1)}-E_{0}^{(N)})\right),$
(38)
where ${\mathbf{A}}^{>}$ and ${\mathbf{A}}^{<}$ are the addition and removal
single-particle spectral functions, which describe inverse and direct
photoemission experiments, respectively, in the sudden approximation.
Using the definition of Eq. 1, Eq. 37 and Eq. 38 can be expressed in terms of
directly computable quantities, $\tilde{\mathbf{c}}^{\nu}$ (orthogonalized
eigenvectors) and $\mathbf{P}$ in the case of EKT-1:
$A^{>}_{pq}(\omega)=\sum_{rs}(T_{+})_{rs}(\omega)\left(\delta_{pr}-P_{rp}\right)\left(\delta_{sq}-P_{qs}\right),$
(39)
and
$A^{<}_{pq}(\omega)=\sum_{rs}(T_{-})_{rs}(\omega)P_{qr}P_{sp},$ (40)
where the state-averaged one-particle transition density matrix
$\mathbf{T}_{\pm}$ is defined as
$\mathbf{T}_{\pm}(\omega)=\sum_{\nu}\tilde{\mathbf{c}}^{\nu}_{\pm}(\tilde{\mathbf{c}}^{\nu}_{\pm})^{\dagger}\delta\left(\omega\mp(E_{\nu}^{(N\pm
1)}-E_{0}^{(N)})\right).$ (41)
Similarly, for EKT-3, the 2-RDM naturally arises in the evaluation of the
spectral functions. The working equation for IP states is as follows:
$\displaystyle A^{<}_{pq}(\omega)$
$\displaystyle=\sum_{\nu}\sum_{ijklmn}\Gamma^{ij}_{qk}\Gamma^{pn}_{lm}\tilde{c}_{ijk}^{\nu}(\tilde{c}_{lmn}^{\nu})^{*}$
$\displaystyle\times\delta\left(\omega+(E_{\nu}^{(N-1)}-E_{0}^{(N)})\right).$
(42)
It is straightforward to find similar equations for the EA states.
Furthermore, we note that the density of states (DOS) is simply defined as
$g(\omega)=\frac{\operatorname{tr}(\mathbf{A}(\omega))}{M}$ (43)
where $M$ is the number of single-particle basis functions. We note that for
solid-state applications the single-particle basis carries an additional index
for the crystalline momentum $\mathbf{k}$, which can be straightforwardly
incorporated into the above formalism. In such applications, it is useful to
compute the momentum-dependent DOS, $g(\mathbf{k},\omega)$.
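When plotting, the delta functions in Eqs. 37, 38 and 43 are typically broadened; a generic numpy sketch that replaces each delta by a Lorentzian of width $\eta$ (the grid and $\eta$ here are arbitrary illustrative choices):

```python
import numpy as np

def broadened_dos(omega_grid, poles, weights, eta=0.05):
    """Lorentzian-broadened spectral weight: each delta(omega - pole)
    becomes (eta/pi) / ((omega - pole)^2 + eta^2)."""
    lor = (eta / np.pi) / ((omega_grid[:, None] - poles[None, :])**2 + eta**2)
    return lor @ np.asarray(weights)

omega = np.linspace(-60.0, 60.0, 12001)
dos = broadened_dos(omega, np.array([0.0]), [1.0])
# total spectral weight is (approximately) conserved by the broadening
assert abs(np.sum(dos) * (omega[1] - omega[0]) - 1.0) < 1e-2
```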
### II.3 Phaseless Auxiliary-Field Quantum Monte Carlo
While the ph-AFQMC formalism has been presented before in detail,Motta and
Zhang (2018) we review the essence of the algorithm to provide a self-
contained description. The imaginary propagation is given as
$|\Psi_{0}\rangle\propto\lim_{\tau\rightarrow\infty}\exp{\left(-\tau\hat{\mathcal{H}}\right)}|\Phi_{0}\rangle=\lim_{\tau\rightarrow\infty}|\Psi(\tau)\rangle,$
(44)
where $\tau$ is the imaginary time, $|\Psi_{0}\rangle$ is the exact ground
state of a Hamiltonian $\hat{\mathcal{H}}$ and $|\Phi_{0}\rangle$ is an
initial starting wavefunction with a non-zero overlap with $|\Psi_{0}\rangle$.
We assume no special structure in the underlying Hamiltonian and work with the
generic ab-initio Hamiltonians of Eq. 17.
In ph-AFQMC, this imaginary-time propagation is stochastically implemented.
One discretizes the imaginary time $\tau$ with a time step of $\Delta\tau$
such that for $N$ time steps we have $\tau=N\Delta\tau$. Using the Trotter
approximation and the Hubbard-Stratonovich transformation,Hubbard (1959);
Hirsch (1983) a single time step many-body propagator can be written in
integral form,
$\exp(-\Delta\tau\hat{\mathcal{H}})\>=\int
d^{N_{\alpha}}\mathbf{x}~{}p(\mathbf{x})\hat{B}(\Delta\tau,\mathbf{x}),$ (45)
where $p(\mathbf{x})$ is the standard normal distribution, $\mathbf{x}$ is a
vector of $N_{\alpha}$ auxiliary fields and $\hat{B}$ is defined as
$\hat{B}(\Delta\tau,\mathbf{x})=e^{-\frac{\Delta\tau}{2}\hat{\mathcal{H}}_{1}}e^{-\sqrt{\Delta\tau}\mathbf{x}\cdot\hat{\mathbf{v}}}e^{-\frac{\Delta\tau}{2}\hat{\mathcal{H}}_{1}}+\mathcal{O}(\Delta\tau^{3}),$
(46)
where $\hat{\mathbf{v}}$ is defined from
$\hat{\mathcal{H}}_{2}=-\frac{1}{2}\sum_{\alpha}^{N_{\alpha}}\hat{v}_{\alpha}^{2}.$
(47)
The computation of the integral in Eq. 45 is carried out via Monte Carlo
sampling where each walker samples an instance of $\mathbf{x}$.
The global wavefunction is, with importance sampling, represented as a linear
combination of walker wavefunctions:
$|\Psi(\tau)\rangle=\sum_{i}w_{i}(\tau)\frac{|\psi_{i}(\tau)\rangle}{\langle\Psi_{T}|\psi_{i}(\tau)\rangle},$
(48)
where $w_{i}$ is the weight of the $i$-th walker, $|\psi_{i}(\tau)\rangle$ is
the single Slater determinant of the $i$-th walker, and $|\Psi_{T}\rangle$ is
the trial wavefunction. At each time step, each walker samples a set of
$\mathbf{x}$, forms $\hat{B}(\Delta\tau,\mathbf{x})$, and updates its
wavefunction by applying $\hat{B}(\Delta\tau,\mathbf{x})$ to it. Practical
implementations employ the so-called “optimal” force bias which shifts the
Gaussian distribution,Zhang and Krakauer (2003)
$\mathbf{\bar{x}}_{i}(\Delta\tau,\tau)=-\sqrt{\Delta\tau}\frac{\langle\Psi_{T}|\hat{\mathbf{v}}^{\prime}|\psi_{i}(\tau)\rangle}{\langle\Psi_{T}|\psi_{i}(\tau)\rangle}.$
(49)
With the optimal force bias, a single time step propagation can be summarized
with two equations
$\displaystyle w_{i}(\tau+\Delta\tau)$
$\displaystyle=I_{\text{ph}}(\mathbf{x}_{i},\mathbf{\bar{x}}_{i},\tau,\Delta\tau)\times
w_{i}(\tau),$ (50) $\displaystyle|\psi_{i}(\tau+\Delta\tau)\rangle$
$\displaystyle=\hat{B}(\Delta\tau,\mathbf{x}_{i}-\mathbf{\bar{x}}_{i})|\psi_{i}(\tau)\rangle,$
(51)
where the phaseless importance function in hybrid form is defined as
$I_{\text{ph}}(\mathbf{x}_{i},\mathbf{\bar{x}}_{i},\tau,\Delta\tau)=|I(\mathbf{x}_{i},\mathbf{\bar{x}}_{i},\tau,\Delta\tau)|\times\text{max}(0,\cos(\theta_{i}(\tau))),$
(52)
with
$I(\mathbf{x}_{i},\mathbf{\bar{x}}_{i},\tau,\Delta\tau)=S_{i}(\tau,\Delta\tau)e^{\mathbf{x}_{i}\cdot\mathbf{\bar{x}}_{i}-\mathbf{\bar{x}}_{i}\cdot\mathbf{\bar{x}}_{i}/2},$
(53)
and
$S_{i}(\tau,\Delta\tau)=\frac{\langle\Psi_{T}|\hat{B}(\Delta\tau,\mathbf{x}_{i}-\mathbf{\bar{x}}_{i})|\psi_{i}(\tau)\rangle}{\langle\Psi_{T}|\psi_{i}(\tau)\rangle}.$
(54)
With this specific walker update rule, all walker weights in Eq. 48 remain
real and positive, thereby completely eliminating the fermionic phase problem.
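The weight part of a single propagation step (Eqs. 50 and 52–54) can be condensed into a few lines. The sketch below is schematic: it assumes the overlap ratio $S_i$ and force bias $\bar{\mathbf{x}}_i$ have already been computed, and is not a production AFQMC kernel.

```python
import numpy as np

def phaseless_weight_update(w, S_ratio, x, xbar):
    """Phaseless weight update: w <- |I| * max(0, cos(theta)) * w,
    with I = S_i * exp(x.xbar - xbar.xbar/2) and theta = arg(I)."""
    I = S_ratio * np.exp(np.dot(x, xbar) - 0.5 * np.dot(xbar, xbar))
    theta = np.angle(I)
    return w * np.abs(I) * max(0.0, np.cos(theta))

z = np.zeros(3)
assert np.isclose(phaseless_weight_update(1.0, 2.0, z, z), 2.0)
# a negative-real overlap ratio (theta = pi) kills the walker's weight
assert phaseless_weight_update(1.0, -1.0, z, z) == 0.0
```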
In ph-AFQMC, the simplest way to evaluate the expectation value of an operator
$\hat{O}$ is to use the mixed estimator
$\displaystyle\langle\hat{O}\rangle_{\mathrm{mixed}}$
$\displaystyle\coloneqq\frac{\langle\Psi_{T}|\hat{O}|\Psi(\tau)\rangle}{\langle\Psi_{T}|\Psi(\tau)\rangle}$
(55)
$\displaystyle=\frac{\sum_{i}w_{i}(\tau)\frac{\langle\Psi_{T}|\hat{O}|\psi_{i}(\tau)\rangle}{\langle\Psi_{T}|\psi_{i}(\tau)\rangle}}{\sum_{i}w_{i}(\tau)}.$
(56)
The mixed estimator is an unbiased estimator only for operators that commute
with the Hamiltonian. For operators that do not commute with the Hamiltonian,
the mixed estimator can introduce significant biases due to the approximate
trial wavefunctions that can be practically used. To overcome this, we use
the back-propagation algorithmZhang _et al._ (1995); Purwanto and Zhang
(2004); Motta and Zhang (2017, 2018) and write
$\displaystyle\langle{\hat{O}}$
$\displaystyle\rangle\approx\lim_{\kappa\rightarrow\infty}\frac{\langle\Psi_{T}|e^{-\kappa\hat{\mathcal{H}}}\hat{O}|\Psi(\tau)\rangle}{\langle\Psi_{T}|e^{-\kappa\hat{\mathcal{H}}}|\Psi(\tau)\rangle}$
(57)
$\displaystyle=\lim_{\kappa\rightarrow\infty}\frac{\sum_{i}w_{i}(\tau+\kappa)\frac{\langle\psi_{i}(\kappa)|\hat{O}|\psi_{i}(\tau)\rangle}{\langle\psi_{i}(\kappa)|\psi_{i}(\tau)\rangle}}{\sum_{i}w_{i}(\tau+\kappa)}.$
(58)
To summarize, we propagate $|\Psi\rangle$ until $\kappa+\tau$, storing the
walker wavefunction at time $\tau$. We can then split the propagation into
$\kappa$ back-propagation and $\tau$ forward-propagation as in Eq. 58. The
back propagated wavefunction is constructed by applying a walker’s propagators
to the trial wavefunction from the $\kappa$ portion of the path. Practically,
the convergence of the expectation value has to be monitored with respect to
the back propagation time $\kappa$. It should be emphasized that in ph-AFQMC
the walker wavefunction is a single determinant wavefunction.
It was found in Ref. 115 that the standard back-propagation algorithm described
in Eq. 58 can yield poor results in ph-AFQMC when applied to ab-initio
systems. The authors devised a number of additional steps to reduce the
phaseless error, the most accurate of which was to partially restore the phase
and cosine factors along the back propagation portion of the path. In this
work we restore phases along the back propagated path as well as along the
forward direction. Practically this amounts to storing the phases and cosine
factors between $[\tau-\kappa,\tau+\kappa]$ and multiplying these by the
weights appearing in Eq. 58. This additional restoration of paths along the
forward direction was not described in Ref. 115 but was used in practiceMot ;
Chen _et al._ (2020) and we found it necessary to obtain more accurate
results for the systems studied here.
In EKT1-AFQMC, we directly sample $\mathbf{F}_{\pm}$ using the back-propagated
estimator form. This boils down to the evaluation of the 2-RDM appearing in
Eq. 22 using the back propagated 1-RDM via Wick’s theorem:
$\Gamma_{pt}^{rs}=P_{pr}P_{ts}-P_{ps}P_{tr}.$ (59)
With these ingredients, we can evaluate $\mathbf{F}_{\pm}$ by contracting the
one- and two-body matrix elements with the back propagated 1- and 2-RDMs.
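Eq. 59 maps directly onto a pair of einsum contractions; for an idempotent single-determinant 1-RDM the resulting 2-RDM obeys the trace condition $\sum_{pt}\Gamma_{pt}^{pt}=N(N-1)$, which the toy check below verifies.

```python
import numpy as np

def two_rdm_wick(P):
    """2-RDM of a single determinant via Wick's theorem (Eq. 59):
    Gamma[p,t,r,s] = P[p,r] P[t,s] - P[p,s] P[t,r]."""
    return np.einsum('pr,ts->ptrs', P, P) - np.einsum('ps,tr->ptrs', P, P)

P = np.diag([1.0, 1.0, 0.0])          # HF-like 1-RDM, N = 2 electrons
G = two_rdm_wick(P)
assert np.isclose(np.einsum('ptpt->', G), 2 * (2 - 1))
```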
While efficient implementations are not the focus of our efforts in this work,
we mention the computational cost of producing one back-propagated sample of
$\mathbf{F}^{\pm}$ and $\mathbf{S}^{\pm}$. A sample of $\mathbf{S}^{\pm}$ has
the same overhead as computing a back-propagated 1-RDM sample which scales as
$\mathcal{O}(NM^{2})$ where $M$ is the number of orbitals and $N$ is the
number of occupied orbitals. The cost for producing a sample of
$\mathbf{F}^{\pm}$ is more involved and depends on the integral factorization
that one chooses to use. Using the most common integral factorization, i.e.,
the Cholesky factorization, it can be shown that the cost scales as
$\mathcal{O}(M^{3}X)$ where $X$ is the number of Cholesky vectors (also note
that the 2-RDM is never explicitly formed). With tensor
hypercontraction,Hohenstein _et al._ (2012a); Parrish _et al._ (2012);
Hohenstein _et al._ (2012b); Malone _et al._ (2019); Lee _et al._ (2019b)
the cost can be brought down to overall cubic. If one were to just implement a
matrix-vector product for iterative eigensolvers, the Cholesky factorization
can achieve cubic scaling per matrix-vector product as well. The cost is
increased in EKT3-AFQMC where each matrix-vector product sample costs
$\mathcal{O}(M^{5})$. It is potentially possible to reduce this cost further
by also factorizing EKT amplitudes in a THC format, as is done in Ref. 128.
We leave the exploration of EKT3-AFQMC to future study and focus on
EKT1-AFQMC in this work.
### II.4 Uniform Electron Gas
Aside from small molecular benchmarks, we also study the spectral properties
of the uniform electron gas (UEG) model. The UEG model is usually defined in
the plane-wave basis set, which gives the one-body operator
$\hat{\mathcal{H}}_{1}=\sum_{\mathbf{K}}\frac{|\mathbf{K}|^{2}}{2}a_{\mathbf{K}}^{\dagger}a_{\mathbf{K}}$
(60)
and the electron-electron interaction operator is (in a spin-orbital basis)
$\hat{\mathcal{H}}_{2}=\frac{1}{2\Omega}\sum_{\mathbf{K}\neq\mathbf{0},\mathbf{K}_{1},\mathbf{K}_{2}}\frac{4\pi}{|\mathbf{K}|^{2}}a_{\mathbf{K}_{1}+\mathbf{K}}^{\dagger}a_{\mathbf{K}_{2}-\mathbf{K}}^{\dagger}a_{\mathbf{K}_{2}}a_{\mathbf{K}_{1}},$
(61)
where $\mathbf{K}$ here is a planewave vector and $\Omega$ is the volume of
the unit cell. In addition to $\hat{\mathcal{H}}_{1}$ and
$\hat{\mathcal{H}}_{2}$, there is a constant term that arises due to a finite-
size effect. Specifically, the Madelung energy $E_{M}$ should be included to
account for self-interactions associated with the Ewald sum under periodic
boundary conditionsSchoof _et al._ (2015) via
$E_{M}=\frac{N}{2}\xi,$ (62)
with
$\xi=-2\times
2.837297\times\left(\frac{3}{4\pi}\right)^{1/3}N^{-1/3}r_{s}^{-1},$ (63)
where $N$ is the number of electrons in the unit cell and $r_{s}$ is the
Wigner-Seitz radius. We define the UEG Hamiltonian as a sum of these three
terms,
$\hat{H}_{\text{UEG}}=\hat{\mathcal{H}}_{1}+\hat{\mathcal{H}}_{2}+E_{M}.$ (64)
The Madelung constant can be either included in the Hamiltonian as written in
Eq. 64 or it can be included as an a posteriori correction to the simulation
done without it. When the latter choice is made, the spectral functions have
to be shifted accordingly in order to compare the results obtained from the
former approach. The corresponding shift can be derived from a shift in the
poles
IP $\displaystyle=E(N-1)-E(N)\rightarrow-\frac{\xi}{2},$ (65) EA
$\displaystyle=E(N+1)-E(N)\rightarrow\frac{\xi}{2}.$ (66)
Regardless of whether we included the Madelung constant in the Hamiltonian,
there is an additional correction of $-\frac{\xi}{2}$ for both IP and EA to
remove spurious image interactions coming from the excess charge created.Yang
_et al._ (2020) Therefore, overall electron removal poles are shifted by
$-\xi$ and electron addition poles remain the same.
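The shifts above are easy to apply in post-processing; a small sketch of Eq. 63 and the overall $-\xi$ shift of the removal poles (toy numbers, atomic units assumed):

```python
import numpy as np

def madelung_xi(N, rs):
    """Madelung constant per Eq. 63 (atomic units)."""
    return -2.0 * 2.837297 * (3.0 / (4.0 * np.pi))**(1.0 / 3.0) \
        * N**(-1.0 / 3.0) / rs

def shift_removal_poles(poles, N, rs):
    """Shift electron-removal poles by -xi (Madelung plus image-charge
    correction), as described in the text."""
    return np.asarray(poles) - madelung_xi(N, rs)

xi = madelung_xi(14, 4.0)
assert xi < 0.0                                       # xi is negative
assert shift_removal_poles([0.5], 14, 4.0)[0] > 0.5   # poles move up by -xi
```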
For many molecular quantum chemistry methods, the two-electron integral
tensor is often assumed to be 8-fold symmetric, and practical implementations
exploit this symmetry to simplify their equations. As such, without the 8-fold
symmetry, molecular quantum chemistry methods would not produce correct
answers even though the UEG Hamiltonian contains only real-valued matrix
elements. This complication of the UEG Hamiltonian becomes more obvious once
we write it in the form of Eq. 19 with
$\displaystyle\langle pq|rs\rangle=$
$\displaystyle\frac{1}{\Omega}\frac{4\pi}{|\mathbf{K}_{p}-\mathbf{K}_{r}|^{2}}\delta_{\mathbf{K}_{p}-\mathbf{K}_{r},\mathbf{K}_{s}-\mathbf{K}_{q}}(1-\delta_{\mathbf{K}_{p},\mathbf{K}_{r}}).$ (67)
The permutation between $p$ and $r$ or between $q$ and $s$ alters the value of
the integral tensor because of the Kronecker delta term. This is a direct
consequence of using a planewave basis which is complex, unlike the usual
Gaussian orbitals. To circumvent any complications due to this, we perform a
unitary transformation that rotates the planewave basis into a real-valued
basis. Namely, for given $\mathbf{K}$ and $-\mathbf{K}$ (assuming
$\mathbf{K}\neq\mathbf{0}$), we use
$\mathbf{U}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&-i\\\ 1&i\end{pmatrix}.$ (68)
We apply this transformation to every pair of $\mathbf{K}$ and $-\mathbf{K}$
in the two-electron integral tensor in Eq. 67. The resulting transformed
integral tensor now recovers the full 8-fold symmetry. One can also transform
observables such as spectral functions back to the original basis using
$\mathbf{U}^{\dagger}$ when necessary.
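This symmetry restoration can be checked on a toy model. The sketch below is a hypothetical 1D example, not the production setup: it builds a small planewave tensor in the spirit of Eq. 67 with constants set to one and momentum conservation $\mathbf{K}_p+\mathbf{K}_q=\mathbf{K}_r+\mathbf{K}_s$ imposed, then verifies that rotating the $\pm\mathbf{K}$ pair with the $\mathbf{U}$ of Eq. 68 yields a real, 8-fold-symmetric tensor.

```python
import numpy as np

# hypothetical 1D toy: three "planewaves" K = -1, 0, 1, unit volume
K = np.array([-1.0, 0.0, 1.0])
n = len(K)
V = np.zeros((n, n, n, n))
for p in range(n):
    for q in range(n):
        for r in range(n):
            for s in range(n):
                d = K[p] - K[r]
                # momentum conservation K_p + K_q = K_r + K_s with K != 0
                if d != 0.0 and np.isclose(K[s] - K[q], d):
                    V[p, q, r, s] = 4.0 * np.pi / d**2

# the planewave tensor is NOT 8-fold symmetric: <pq|rs> != <rq|ps>
assert not np.allclose(V, V.transpose(2, 1, 0, 3))

# rotate the (+K, -K) pair with the 2x2 U of Eq. 68
M = np.eye(n, dtype=complex)
M[np.ix_([0, 2], [0, 2])] = np.array([[1, -1j], [1, 1j]]) / np.sqrt(2)
Vr = np.einsum('Pp,Qq,PQRS,Rr,Ss->pqrs', M.conj(), M.conj(), V, M, M)

# in the rotated (real) basis the tensor is real and 8-fold symmetric
assert np.allclose(Vr.imag, 0.0, atol=1e-12)
assert np.allclose(Vr.real, Vr.real.transpose(2, 1, 0, 3))
assert np.allclose(Vr.real, Vr.real.transpose(1, 0, 3, 2))
```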
## III Computational Details
All quantum chemistry calculations are performed with PySCFSun _et al._
(2018) which include mean-field (HF) calculations, coupled-cluster with
singles and doubles (CCSD) and CCSD with perturbative triples (CCSD(T)). All
one- and two-electron integrals needed for ph-AFQMC were also generated with
PySCF. ph-AFQMC calculations were mostly performed with QMCPACKKent _et al._
(2020) and PAUXYpau was used to crosscheck some results.
We use a selected configuration interaction (CI) method called heat-bath CI
(HCI)Holmes _et al._ (2016); Sharma _et al._ (2017); Smith _et al._ (2017)
to produce numerically exact IPs within a basis whenever possible.
Furthermore, we compute the HCI IPs within EKT1 (i.e., EKT1-HCI) using the 1-
and 2-RDM from variational HCI wavefunctions along with Eq. 20. Since there
can be an inherent bias of EKT1 itself, we provide EKT1-HCI as an “exact”
result for IPs within the limits of the EKT1 approach. This can be used to
quantify the phaseless and back-propagation errors in EKT1-AFQMC. In HCI,
there is a single tunable parameter, $\epsilon_{1}$ that controls the
variational energy which is used to select determinants to be included in the
variational expansion. We also use the 3-RDM of the variational wavefunction
to compute the 4-RDM via the cumulant construction.Zgid _et al._ (2009);
Saitow _et al._ (2013) These 3- and 4-RDMs are further used to construct the
EKT3 Fock matrix in Eq. 25. This approach is referred to as EKT3-HCI. The
eigenvalues and eigenvectors of the EKT3 Fock matrix (with its pertinent
metric) can then be used to produce IPs and spectral functions of EKT3. We
tuned $\epsilon_{1}$ to be such that the resulting second-order Epstein-Nesbet
perturbation energy is no greater than 1 m$E_{h}$ for every system except the
UEG model. In the UEG model, we observed a PT2 correction of 3 m$E_{h}$ at
$r_{s}=4$. This was found to be sufficient to produce accurate EKT IPs for
systems studied here. All calculations are performed with a locally modified
version of Dice.Holmes _et al._ (2016); Sharma _et al._ (2017); Smith _et
al._ (2017)
We used a timestep of 0.01 a.u., the pair-branch population control
method,Wagner _et al._ (2009) and the hybrid propagation scheme(Purwanto and
Zhang, 2004) for all ph-AFQMC simulations. For the small atoms and molecules
and 14 electron UEG examples, we used 2880 walkers while for the 54 electron
UEG we used 1152 walkers. All calculations used restricted Hartree–Fock (RHF)
trial wavefunctions except for the charged species where we instead used
unrestricted Hartree–Fock wavefunctions (UHF). The exception to this was CH4
where we used the same RHF orbitals for the charged species as we found that
the UHF solution broke spatial symmetry and led to a large phaseless
constraint bias. All AFQMC calculations employ the phaseless approximation,
so we simply refer to ph-AFQMC as AFQMC in the following sections.
We adapted the standard dynamical Lanczos algorithmHaydock _et al._ (1975);
Dagotto (1994) to obtain spectral functions of Eq. 64. Even by exploiting
symmetry and using a distributed sparse Hamiltonian, dynamical Lanczos results
could only be obtained for the smallest UEG system of 14 electrons in 19 plane
waves, corresponding to an $N$-electron Hilbert space size of $2.5\times
10^{9}$ determinants. In the dynamical Lanczos algorithm, one first obtains
the $N$-particle ground state, $|\Psi_{0}^{N}\rangle$, by iterating within the
Lanczos Krylov subspace. Ultimately, our goal is to compute Eq. 36 which then
requires another run of the Lanczos algorithm. For an electron removal
problem, we pick an orbital index $i$ and generate an initial vector in the
$(N-1)$-electron sector,
$|f_{0}\rangle=\hat{a}_{i}|\Psi_{0}^{N}\rangle/\langle\Psi_{0}^{N}|\hat{a}_{i}^{\dagger}\hat{a}_{i}|\Psi_{0}^{N}\rangle$.
Each Lanczos iteration then generates coefficients, $\\{a_{k}\\}$ and
$\\{b_{k}\\}$, for the following continued fraction expression:Haydock _et
al._ (1975); Dagotto (1994)
$A_{ii}(\omega,\eta)=-\frac{1}{\pi}\text{Im}\frac{\langle\Psi_{0}^{N}|\hat{a}_{i}^{\dagger}\hat{a}_{i}|\Psi_{0}^{N}\rangle}{z-a_{0}-\frac{b_{1}^{2}}{z-a_{1}-\frac{b_{2}^{2}}{z-a_{2}\cdots}}},$
(69)
where $z=E_{0}^{(N)}-\omega+i\eta$ with some spectral broadening constant
$\eta$. We take a total of 50 Lanczos iterations to generate the continued
fraction coefficients and this was enough to converge the low-energy spectrum
within the energy scale that is relevant in this work.
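Given the coefficients $\{a_k\}$ and $\{b_k\}$, Eq. 69 is a short bottom-up recursion; a generic numpy sketch follows, with the single-level limit (one pole, Lorentzian lineshape) as a sanity check.

```python
import numpy as np

def lanczos_spectral_function(omega, a, b, norm, E0, eta=0.05):
    """Evaluate the continued fraction of Eq. 69 bottom-up, given
    Lanczos coefficients {a_k}, {b_k} (b[0] is unused), the norm
    <a_i^dag a_i>, and the N-electron ground-state energy E0."""
    z = E0 - omega + 1j * eta
    frac = np.zeros_like(z)
    for k in range(len(a) - 1, 0, -1):   # innermost level first
        frac = b[k]**2 / (z - a[k] - frac)
    return -(norm / (z - a[0] - frac)).imag / np.pi

# single-level sanity check: one pole at omega = E0 - a0, peak 1/(pi*eta)
eta, E0 = 0.05, -1.0
a, b = np.array([-1.5]), np.array([0.0])
peak = lanczos_spectral_function(E0 - a[0], a, b, 1.0, E0, eta)
assert np.isclose(peak, 1.0 / (np.pi * eta))
```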
## IV Results and Discussion
While the EKT approach is valid for both electron removal and electron
addition, for numerical results we focus on electron removal processes (IP
energies and electron removal spectral functions) for simplicity. We benchmark
the IP energies from the proposed EKT1-AFQMC approach over several small
chemical systems and the UEG model. Furthermore, we also show promising
improvements over EKT1 using EKT3-HCI when satellite peaks are important. We
use a “$\Delta$ method” to denote a scheme where we run the pertinent method
for both $N$\- and $(N-1)$-electron systems and obtain the IP as an energy
difference.
### IV.1 Small chemical systems in the aug-cc-pVDZ basis
| Experiment | $\Delta$HCI
---|---|---
He | 24.59 | -0.23
Be | 9.32 | -0.03
Ne | 21.56 | -0.13
FH | 16.19 | -0.12
N2 | 15.6 | -0.34
CH4 | 14.35 | -0.06
H2O | 12.62 | -0.07
Table 1: Experimental first ionization potentials (eV) and deviation (eV) of
the numerically exact $\Delta$HCI from these results for chemical species
considered. $\Delta$HCI employed the aug-cc-pVDZ basis.
In this section, we study seven small chemical systems (He, Be, Ne, FH, N2,
CH4, and H2O) that have well-documented experimental IPs.van Setten _et al._
(2015) We use the nomenclature FH for the hydrogen fluoride molecule to
distinguish it from the abbreviation for Hartree-Fock (HF). All geometries
were taken from ref. 141. We used a relatively small basis set, namely aug-cc-
pVDZ,Dunning (1989) to obtain good statistics in back-propagated estimators.
We have used more than 3000 back-propagated estimator samples in all cases
considered here, each of which requires a back-propagation time of greater
than 4 a.u. This results in a total propagation time longer than 12000 a.u.,
which is unusually long for standard AFQMC calculations. The use of this basis
set also allows for a direct comparison between AFQMC and numerically exact
HCI within this basis set.
The goal of this numerical section is to quantify the three sources of error
in addition to the basis set incompleteness error in EKT1-AFQMC based on
simple examples where exact simulations are possible. These three sources of
error are:
1. 1.
Phaseless constraint errors. As mentioned, the phaseless constraint is
necessary to remove the phase problem that arises in the imaginary-time
propagation. However, due to this constraint, the resulting ground state
energies and properties (e.g., RDMs) are biased.
2. 2.
Back-propagation errors. The back-propagation algorithm incurs additional
errors. This was noted and studied in detail in ref. 115. For instance, in
ref. 115, it was shown that for neon the phaseless error with a simple trial
wavefunction is negligible (below 1 m$E_{h}$) but the error in the one-body
energy from the back-propagated 1-RDM was about 5 m$E_{h}$.
3. EKT1 errors. While systematically improvable with higher-order excitations,
EKT1 is not an exact approach to quasiparticle spectra unless all orders of
excitations are included. Nonetheless, for the first IP, it has been
numerically and analytically suggested that EKT1 approaches the exact IP in
the basis set limit if the exact 1- and 2-RDMs are used.Morrison (1992);
Sundholm and Olsen (1993); Ernzerhof (2009); Vanfleteren _et al._ (2009)
Beyond the first IP, we will show that EKT1 qualitatively fails to capture
satellite peaks that arise in the case of the core spectrum of the UEG model.
In Table 1, we present numerically exact first IPs of molecules within this
basis set using $\Delta$HCI. The basis set incompleteness error can be as
large as 0.3 eV in these molecules and therefore we will only compare AFQMC
results to these numerically exact results in the same basis set as opposed to
comparing to the experimental data. We do not expect the qualitative
conclusions of our study to change with larger basis sets.
| | Ground state | IP |
|---|---|---|
| He | 0.00 | 0.00 |
| Be | 0.01 | -0.01 |
| Ne | -0.03 | 0.07 |
| FH | -0.03 | 0.07 |
| N2 | -0.04 | 0.06 |
| CH4 | -0.03 | 0.06 |
| H2O | -0.03 | 0.04 |

Table 2: Error (eV) in the AFQMC $N$-electron system ground state energy and
in the first ionization potential with respect to the corresponding HCI
results. The statistical error bars of AFQMC are less than 0.01 eV and are
therefore not shown.
Next, we assess the phaseless bias in the $N$-electron system ground state
energy as well as the error in the $\Delta$AFQMC IP energies compared to the
$\Delta$HCI IPs. In Table 2, we present numerical data that detail the
phaseless bias in these quantities. The ground state energy error is less than
0.04 eV, which is in the neighborhood of the usual standard of accuracy, 1
m$E_{h}$. Unfortunately, in many cases $(N-1)$-electron systems incur as large
a phaseless bias as do the $N$-electron systems, albeit with an opposite sign
of the error. Thus, AFQMC does not benefit from a cancellation of errors for
the IP energy, which results in IP errors that are larger than those for the
ground state energy. The largest IP error we find is around 0.07 eV.
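The $\Delta$-method IPs discussed above are simple total-energy differences; the bookkeeping, including the quadrature combination of statistical error bars from the two independent runs, can be sketched as follows (the energies below are made-up illustrative values, not data from this work):

```python
import math

def delta_ip(e_n, sigma_n, e_nm1, sigma_nm1):
    """IP from the Delta-method: IP = E(N-1) - E(N).

    The two total energies come from independent stochastic runs,
    so their statistical error bars add in quadrature. Note that the
    systematic (phaseless) biases need not cancel in this difference.
    """
    ip = e_nm1 - e_n
    sigma_ip = math.sqrt(sigma_n ** 2 + sigma_nm1 ** 2)
    return ip, sigma_ip

# Illustrative (made-up) total energies in Hartree:
ip, err = delta_ip(-128.70, 1e-4, -127.90, 1e-4)
```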
| | EKT1-HCI | KT | EKT1-AFQMC |
|---|---|---|---|
| He | 24.36 | 0.60 | 0.02 |
| Be | 9.29 | -0.87 | 0.06 |
| Ne | 21.48 | 1.73 | -0.04 |
| FH | 16.13 | 1.58 | 0.01 |
| N2 | 15.34 | 1.92 | 0.15 |
| CH4 | 14.13 | 0.68 | 0.19 |
| H2O | 12.60 | 1.27 | -0.04 |

Table 3: Error (eV) in the first IP obtained from EKT1-AFQMC and
Koopmans’ theorem (KT) relative to EKT1-HCI; the EKT1-HCI column lists the
reference IP values. The statistical error bar of EKT1-AFQMC cannot be
estimated without bias (see main text for
discussion).Ceperley and Bernu (1988); Blunt _et al._ (2018)
We present the EKT1 results in Table 3. We refer readers to Appendix A where a
detailed description of the EKT1 calculation is given. Theoretically “exact”
EKT1 results can be obtained by using exact RDMs from HCI. To quantify the
back-propagation error of AFQMC (given that the phaseless error is very small
for these systems), we shall compare EKT1-AFQMC to EKT1-HCI. We also computed
simple Koopmans’ theorem (KT) IPs using HF and report these results in Table
3. The error of EKT1-AFQMC is small for most chemical species, but it becomes
as large as 0.19 eV for CH4. Even though the phaseless bias in the ground
state energy was found to be very small (0.04 eV or less), EKT1-AFQMC errors
from using the back-propagated 1-RDM and the EKT Fock matrix can be five times
larger. Therefore, we attribute this error mainly to the back-propagation
error. Nonetheless, the comparison against the simpler KT suggests that
EKT1-AFQMC can readily recover the correlation contribution to quasiparticle
energies with errors less than 0.2 eV in these examples.
We note that EKT1-AFQMC IPs do not have any statistical error bars. This is
not because these numbers are deterministic, but because EKT eigenvalues
are not unbiased estimators. This was also observed in a similar approach
called Krylov-projected full configuration interaction QMC. Interested readers
are referred to Ref. 70 for details. In essence, eigenvalues of a noisy matrix
where each element is normally distributed are not normally distributed.
Therefore, statistical error bars are difficult to estimate and are not simply
associated with variances of Gaussian distributions. Given the small
statistical error bars (on the order of $3\times 10^{-4}$ or less) on the
diagonal elements of the EKT1 Fock matrix and 1-RDM, we expect that these
results are reproducible up to 0.01 eV if one follows exactly the same
numerical protocol given in Appendix A.
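The non-Gaussian statistics of eigenvalues of noisy matrices can be illustrated with a toy example: perturbing a symmetric matrix by zero-mean Gaussian noise biases the sampled eigenvalues even though every matrix element is an unbiased estimator (a generic illustration, not the actual EKT1 matrices):

```python
import numpy as np

# Toy demonstration: eigenvalues of a noisy symmetric matrix are biased
# estimators, even when every matrix element is unbiased Gaussian noise.
rng = np.random.default_rng(0)
exact = np.diag([1.0, 1.0])  # degenerate spectrum: the worst case

noise = rng.normal(scale=0.1, size=(20000, 2, 2))
noise = (noise + noise.transpose(0, 2, 1)) / 2   # keep each sample symmetric
eigs = np.linalg.eigvalsh(exact + noise)          # batched diagonalization
mean_eigs = eigs.mean(axis=0)

# Level repulsion pushes the sampled eigenvalues apart, so their means are
# biased away from the true value 1.0, while the trace remains unbiased.
```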
| | EKT1-HCI | EKT1-AFQMC | EOM-IP-CCSD |
|---|---|---|---|
| He | 0.00 | 0.02 | 0.00 |
| Be | 0.00 | 0.06 | 0.00 |
| Ne | 0.05 | 0.02 | -0.28 |
| FH | 0.06 | 0.07 | -0.22 |
| N2 | 0.05 | 0.20 | 0.14 |
| CH4 | -0.15 | 0.04 | -0.02 |
| H2O | 0.05 | 0.01 | -0.15 |

Table 4: Error (eV) in the first IP obtained from EKT1-HCI, EKT1-AFQMC, and
EOM-IP-CCSD relative to $\Delta$HCI (given in Table 1) in the aug-cc-pVDZ
basis. The statistical error bar of EKT1-AFQMC cannot be estimated without
bias (see main text for discussion).Ceperley and Bernu (1988); Blunt _et
al._ (2018)
Finally, we discuss the inherent error of EKT1 by comparing EKT1-HCI and
EKT1-AFQMC against $\Delta$HCI as in Table 4. We also compare a more widely
used approach called equation-of-motion coupled cluster ionization potential
with singles and doubles (EOM-IP-CCSD) to gauge the magnitude of EKT1 errors.
Since this is a benchmark on the first IP of molecules, EKT1-HCI is expected
to be quite accurate. This expectation is due to the general belief that EKT1
with exact RDMs yields exact first IP.Morrison (1992); Sundholm and Olsen
(1993); Ernzerhof (2009); Vanfleteren _et al._ (2009) Given this, the small
errors of EKT1-HCI in Table 4 are not surprising. The only exception is CH4
with an error of -0.15 eV, which we believe will still approach the exact IP
in the complete basis set limit. EKT1-AFQMC appears to be as good as EKT1-HCI,
with an outlier for the case of N2. The error of EKT1-AFQMC relative to
EKT1-HCI is comparable to the error of EKT1-HCI relative to $\Delta$HCI in
this basis set. EOM-IP-CCSD generally does not work as well as the EKT1
approaches with a maximum error of -0.28 eV on the neon atom. While these
finite basis set comparisons are informative, we emphasize that fairer
comparisons should be conducted in the complete basis set limit, and we hope to
carry these out in the future. Regardless, the EKT1-AFQMC results are
encouraging.
Figure 1: Electron removal spectral functions from various methods (EKT1-HCI,
EKT1-AFQMC, and HF) of (a) FH and (b) N2 in the aug-cc-pVDZ basis set. Note
that in (a) EKT1-AFQMC is right on top of EKT1-HCI on the plotted scale. A
broadening parameter $\eta=0.5$ eV was used.
The main motivation for performing EKT1 within AFQMC was to obtain spectral
functions. Poles alone can be obtained using the ground state AFQMC algorithm
(i.e., $\Delta$AFQMC) by imposing a proper constraint for excited state
descriptions. While the choice of proper trials is a challenge for this
purpose, such an approach avoids the complications due to back-propagation.
However, spectral weights cannot be obtained from $\Delta$AFQMC. We show
EKT1-AFQMC spectral functions for FH and N2 in Fig. 1. Based on the results
shown in Table 3, EKT1-AFQMC is in good agreement with EKT1-HCI for FH, but
not for N2. Therefore, comparing these two cases is useful for understanding
how back-propagation errors are reflected in spectral functions. In the case
of FH, we do not see any visible differences between EKT1-HCI and EKT1-AFQMC
on the plotted energy scale. However, for N2 we can clearly see some deviation
between EKT1-AFQMC and EKT1-HCI. Nonetheless, the main features of the
spectral function are reproduced, namely three large quasiparticle peaks with
the middle peak being the largest. We note that there are peaks with very
small spectral weights in EKT1-HCI close to -30 eV, -27 eV, and -5 eV and that
these features do not appear in EKT1-AFQMC. We attribute this to a
relatively large linear dependency cutoff ($10^{-4}$) needed in EKT1-AFQMC to
stabilize the generalized eigenvalue problem as explained in Appendix A. In
both molecules, there are insignificant differences between the EKT1 and HF
spectra in terms of the peak heights, locations, and the number of peaks with a
broadening parameter of $\eta=0.5$ eV.
### IV.2 The uniform electron gas (UEG) model
AFQMC has emerged as a unique tool for simulating correlated solids.Lee _et
al._ (2019a); Zhang _et al._ (2018); Malone _et al._ (2020a, b); Lee _et
al._ (2020c) A model solid that describes the basic physics of metallic
systems is the UEG model. The accuracy and scope of AFQMC in studying the UEG
model has been well documented at zero temperature and finite temperature.Lee
_et al._ (2019a, 2020c) Motivated by these studies, we investigate the
spectral properties of the UEG model within the EKT approaches (EKT1 and
EKT3).
#### IV.2.1 14-electrons/19-planewaves
Figure 2: Electron removal spectral functions of the 14-electron in
19-planewave UEG model from various methods at $r_{s}=0.5$: (a)
$\mathbf{K}=[0,0,0]$ and (b) $\mathbf{K}=[0,0,2\pi/L]$. In (a), note that
EKT1-AFQMC, EKT1-HCI, and HF are right on top of each other. In (b),
EKT1-AFQMC and EKT1-HCI are right on top of each other. In both (a) and (b),
Lanczos and EKT3-HCI are right on top of each other. A broadening parameter of
0.2 eV was used for all plots.
Figure 3: Electron removal spectral functions of the 14-electron in
19-planewave UEG model from various methods at $r_{s}=4.0$: (a)
$\mathbf{K}=[0,0,0]$ and (b) $\mathbf{K}=[0,0,2\pi/L]$. In (a), note that
EKT1-AFQMC, EKT1-HCI, and HF are right on top of each other. In (b),
EKT1-AFQMC and EKT1-HCI are nearly on top of each other. A broadening
parameter of 0.2 eV was used for all plots.
The first example that we consider is a relatively small UEG supercell with
only 19 planewaves. It is far from the basis set limit as well as from the
thermodynamic limit. However, it is small enough for one to produce unbiased
EKT results using HCI and numerically exact dynamical Lanczos results. We note
that the spectral function of this benchmark UEG model at $r_{s}=4$ was first
presented in Ref. 50. We produced around 3000 back-propagation samples, which
yielded the largest statistical error in the Fock matrix and 1-RDM on the
order of $5\times 10^{-4}$.
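The quoted statistical errors on the Fock matrix and 1-RDM elements are standard errors of the mean over back-propagation samples; schematically (assuming approximately independent samples, which in practice requires re-blocking correlated data first):

```python
import numpy as np

def standard_error(samples):
    """Standard error of the mean over back-propagation samples.

    samples: array of shape (n_samples, ...), one estimator value
    (e.g., a 1-RDM or Fock matrix element) per back-propagation block.
    AFQMC samples are serially correlated, so in practice one should
    re-block them into approximately independent blocks first.
    """
    n = samples.shape[0]
    return samples.std(axis=0, ddof=1) / np.sqrt(n)
```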
We consider two Wigner-Seitz radii, $r_{s}=0.5$ and $r_{s}=4$. Based on our
previous benchmark study of AFQMC on this system, we expect that the phaseless
error in the ground state at $r_{s}=0.5$ is negligible while the error is
relatively more noticeable at $r_{s}=4$.Lee _et al._ (2019a) Compared to the
numerically exact energies, it was found that the constraint bias in AFQMC is
only -0.0118(6) eV at $r_{s}=0.5$ and 0.185(2) eV at $r_{s}=4$. Given this
small ground state bias, we expect the EKT1-AFQMC approach would be as
accurate as EKT1-HCI if good statistics and accuracy in the back-propagated
estimators can be achieved.
In Fig. 2, we present spectral functions of this model at $r_{s}=0.5$ for two
momenta. The first is at $\mathbf{K}=[0,0,0]$ which represents removal of an
electron from the core shell of the UEG model. Core spectra have been shown to
have rich satellite features where different many body methods do not agree in
terms of the precise satellite structure.McClain _et al._ (2016) However,
such features are very simple in a small supercell like this one. As can be
seen in Fig. 2(a), we only have two peaks from the Lanczos method. Neither of
these peaks corresponds to a single Koopmans-like excitation. Namely, they
cannot be found from a simple single ionization process that either HF (i.e.,
Koopmans’ theorem) or EKT1 describes well. As a result, HF, EKT1-AFQMC,
and EKT1-HCI all fail to capture the peak split and only yield a single peak.
The correlation effect is very marginal in the sense that nearly no
improvement was observed with the EKT1 methods compared to HF.
A significant improvement can be made by incorporating higher-order terms such
as 2h1p excitations. In other words, core ionization satellite states (up to
leading order) require excitations such as
$(\sum_{\mathbf{K}^{\prime}\neq\mathbf{0}}C_{\mathbf{K}^{\prime}}\hat{a}^{\dagger}_{\mathbf{K}^{\prime}}\hat{a}_{\mathbf{K}^{\prime}})\hat{a}_{\mathbf{K}=\mathbf{0}}$
(70)
where $C_{\mathbf{K}^{\prime}}$ are the excitation amplitudes. All of these
excitations are included in EKT3. EKT3-HCI can nearly completely reproduce the
exact spectral function despite the use of the cumulant approximation for the
4-RDM. The cumulant approximation error is small especially in weakly
correlated cases such as $r_{s}=0.5$ where the connected component of 4-RDM is
expected to be small. In Fig. 2(b), we emphasize that we observe meaningful
improvements even from the EKT1 methods compared to HF.
$\mathbf{K}=[0,0,2\pi/L]$ corresponds to the top of the valence band, i.e.,
the first IP of the UEG model. There is no satellite peak
visible at this momentum and the single peak found from the EKT1 methods is
reasonable. The peak location of HF is displaced by about +0.9 eV from the
correct location while the EKT1 methods yield a peak shifted by about -0.6 eV.
We note that EKT1-AFQMC is practically indistinguishable from EKT1-HCI for
both momenta which indicates a small phaseless bias, a small back-propagation
error, and good statistics for the estimators.
A similar conclusion can be drawn for $r_{s}=4$, as shown in Fig. 3. For the
core excitation spectrum in Fig. 3(a), we observe the same split-peak
structure as at $r_{s}=0.5$. We see another smaller peak emerging on the
left shoulder of the peak near -12 eV. While EKT3-HCI is no longer exact, it
reproduces most of the features in the exact spectral function including the
emergence of the third peak. While EKT1-HCI and EKT1-AFQMC agree well, there
is no visible improvement over HF. The EKT1 methods all yield a single peak
which is qualitatively wrong. The valence excitation structure illustrated in
Fig. 3(b) is relatively featureless, but there are small peaks emerging in the
high energy (more negative) region of the spectrum. EKT3-HCI shows good
agreement with Lanczos for the main quasiparticle peak and also produces
satellite features. There is a visible improvement of EKT1 approaches (with a
deviation of the peak energy of approximately -0.25 eV) compared to HF (with
an approximate deviation of +0.34 eV). A slight deviation of EKT1-AFQMC from
EKT1-HCI is observed, but the difference in the main quasiparticle peak
location is only about 0.01 eV.
Overall, in this small benchmark study, the EKT1 approaches provide some
improvement over HF for valence excitations but fail qualitatively in the
core region. The agreement between the EKT1 valence peaks and the Lanczos
peaks is not perfect, with an error of about -0.6 eV for $r_{s}=0.5$ and -0.25
eV for $r_{s}=4$. However, we emphasize again that we expect EKT1 to become
exact for the first IP as the complete basis set limit is approached. Finally,
EKT1-AFQMC is able to reproduce EKT1-HCI nearly perfectly even for $r_{s}=4$,
where the phaseless error in the ground state energy is about 0.185(2) eV.
#### IV.2.2 54-electrons/287-planewaves
Next, we consider a larger UEG supercell (54 electrons in 287 planewaves)
where obtaining many back-propagation samples is difficult. We study
$r_{s}=2$, where AFQMC can be reliably extrapolated to the basis set limit.Lee
_et al._ (2019a) We produced 600 back-propagation samples with a back
propagation time of 8 a.u. This amounts to a total of 4800 a.u. propagation
time, which is a long propagation for this system size. Our approach yielded a
maximum statistical error in 1-RDM and Fock matrices of $4\times 10^{-3}$.
While this error is not small, the procedure described in Appendix A was
enough to stabilize the final results. Unlike in the previous cases, we
symmetrize the Fock matrix explicitly by taking its upper triangular part. A
linear dependency cutoff of $10^{-3}$ was used in EKT1-AFQMC. It is
difficult to generate highly accurate 1- and 2-RDMs from HCI for this system
size, so for this system we do not have an exact benchmark reference to
compare to our EKT1-AFQMC results. Similarly, EKT3-HCI is also intractable for
this system size. Instead, we have performed EOM-IP-CCSD and $\Delta$AFQMC to
compare with and to gauge the magnitude of the errors of EKT1-AFQMC.
Figure 4: Electron removal spectral functions of the 54-electron in
287-planewave UEG model at $r_{s}=2$ from EKT1-AFQMC and HF. The first IPs
from $\Delta$AFQMC and EOM-IP-CCSD are shown for comparisons. A broadening
parameter of 0.2 eV was used for all plots.
In Fig. 4, the EKT1-AFQMC and HF spectral functions are shown. As expected,
EKT1-AFQMC does not show any satellite peaks and only introduces a shift to
the HF spectrum. Correlation effects in the peak height
do not appear large, but the peak location changes by about 1 eV going from HF
to EKT1-AFQMC. We also produced $\Delta$AFQMC and EOM-IP-CCSD for comparison.
The $\Delta$AFQMC IP is 2.18(1) eV, EOM-IP-CCSD yields 1.91 eV, and EKT1-AFQMC
gives 2.51 eV in this basis set. The deviation of EKT1-AFQMC from
$\Delta$AFQMC is 0.33(1) eV whereas EOM-IP-CCSD deviates by 0.27(1) eV. These
deviations are similar in magnitude but with opposite signs. The accuracy of
EOM-IP-CCSD is unclear because the ground state energy found from CCSD is
higher than that of AFQMC by 0.0291(1) eV per electron. In the basis set
limit, AFQMC was found to be as accurate as state-of-the-art diffusion Monte
Carlo for the ground state energy, differing by 0.0088(9) eV per electron or
less.Lee _et al._ (2019a) This suggests that the error in the CCSD ground
state correlation energy may be on the order of 0.03 eV per electron. How much
of this error propagates to the EOM-IP-CCSD calculation remains unclear. In the
complete basis set limit, we believe that the first IP from EKT1-AFQMC will
become more accurate and closer to that of $\Delta$AFQMC. A more complete
comparison should be conducted in this limit, but such calculations are very
difficult due to the need to procure many back-propagation samples in
EKT1-AFQMC.
### IV.3 Towards ab-initio solids: diamond at the $\Gamma$-point
Figure 5: EKT1-HCI and EKT1-AFQMC electron removal spectral functions of
diamond at the $\Gamma$-point. A broadening parameter $\eta=0.5$ eV was used.
With recent advances in open-source software such as PySCF,Sun _et al._
(2018) performing calculations on ab-initio solids is relatively
straightforward. While an implementation of AFQMC with $\mathbf{k}$-points has
been previously presented,Motta _et al._ (2019); Malone _et al._ (2020a) we
only present a $\Gamma$-point result in this work. This is mainly because our
current EKT1 implementation does not explicitly consider $\mathbf{k}$-points.
We chose to study diamond because it is one of the simplest solids, with just
two carbon atoms in its unit cell. We used the GTH-PADE
pseudopotentialGoedecker _et al._ (1996) and the GTH-DZVP basis
set.VandeVondele and Hutter (2007)
This system is overall as small as the smaller systems considered in this
work. Therefore, we could obtain over 6000 back-propagation samples with a 4
a.u. back-propagation time. Even with this many samples, the largest error
bar in the Fock and 1-RDM matrices was $6\times 10^{-3}$. Due to this large
statistical error in the matrix elements, we used a linear dependency cutoff
of 0.01 and symmetrized the Fock matrix by taking only the upper triangular
part of it (see Appendix A for details). We also included the shift by the
Madelung constant in the spectral functions.
As shown in Fig. 5, EKT1-AFQMC successfully reproduces EKT1-HCI. A small
quasiparticle peak near -5 eV is not accurately captured by EKT1-AFQMC largely
due to statistical noise. Small peaks are difficult to resolve in EKT1-AFQMC
without reducing the statistical errors on each matrix element further.
Nonetheless, two other larger quasiparticle peaks are well represented. Both
peaks are well reproduced within 0.25 eV from EKT1-HCI. The improvement over
HF is about 1 eV or so and the first IP from EOM-IP-CCSD is within 0.02 eV of
the EKT1-HCI result. It will be interesting to revisit the assessment of
EKT1-AFQMC in the basis set and thermodynamic limits via direct comparisons to
experiments.
## V Conclusions
In this work, we have explored the extended Koopmans’ theorem (EKT) approach
to the computation of spectral functions via phaseless auxiliary-field quantum
Monte Carlo (AFQMC). Previous attempts in AFQMC to obtain spectral
functions have resorted to analytic continuationMotta _et al._ (2015); Vitali
_et al._ (2016) which has well-documented drawbacks.Goulko _et al._ (2017);
Dornheim _et al._ (2018) The EKT approach is attractive because it requires
neither an explicit representation of the ground state wavefunction nor
analytic continuation to compute spectral functions. Instead, its only inputs
are N-particle reduced density matrices (N-RDMs) which can be computed in
AFQMC via the back-propagation algorithm. The motivation of our work was thus
to use the EKT approach with the aim of avoiding numerical problems arising in
analytic continuation for the accurate assessment of real-frequency spectral
information. While many studies have so far focused on the simplest level of
the EKT, the EKT approach is systematically improvable with increasing order
of excitations: 1h, 1p2h, etc. for electron removal and 1p, 1h2p, etc. for
electron addition. We presented the implementation of EKT1 (1h or 1p) and EKT3
(1p2h or 1h2p). For EKT3, we proposed the use of a cumulant approximation to
the 4-RDM to avoid the steep storage requirements.
We produced preliminary results using EKT1 within AFQMC (EKT1-AFQMC) for small
molecular systems, uniform electron gas (UEG) 14-electron and 54-electron
supercells, and diamond at the $\Gamma$-point. We focused on studying the
first ionization potential (IP) and electron removal spectral functions of
these systems. By comparing numerically exact EKT1 results based on heat-bath
configuration interaction (i.e., EKT1-HCI), we showed that despite statistical
noise, EKT1-AFQMC can capture most qualitative features of EKT1-HCI. We
provide a more detailed summary on our findings as follows:
1. In small molecular benchmarks within the aug-cc-pVDZ basis, we found the
maximum deviation of EKT1-AFQMC from EKT1-HCI in the first IP to be 0.19 eV.
These molecules have quite small phaseless biases in the ground state energy
($\leq 0.04$ eV), so we attributed the additional bias to back-propagation.
Electron removal spectral functions from EKT1-AFQMC look qualitatively similar
to those of EKT1-HCI even in the least accurate case (N2).
2. For the 14-electron (19-planewave) UEG supercell benchmark, we observed a
qualitative failure of EKT1 due to its inability to describe satellite states
at $\mathbf{K}=\mathbf{0}$. We showed that EKT3 (within HCI) significantly
improves on this. Despite these failures of EKT1, we found EKT1-AFQMC to have
peak locations that are nearly identical (within 0.01 eV) to those of EKT1-HCI
for both $r_{s}=0.5$ and $r_{s}=4$. Given the noticeable phaseless bias at
$r_{s}=4$, this result is quite encouraging. Lastly, for the valence region of
the electron removal spectral function, we observed reasonable accuracy of
EKT1 compared to the exact spectral function. The location of the first IP was
off by 0.4 eV for $r_{s}=0.5$ and 0.25 eV for $r_{s}=4.0$, which we expect to
improve in larger basis sets.
3. For the 54-electron (287-planewave) UEG supercell benchmark, we could not
obtain EKT1-HCI due to computational expense. Therefore, we attempted to
assess the accuracy of EKT1-AFQMC by comparing its first IP with those of
equation-of-motion IP coupled-cluster with singles and doubles (EOM-IP-CCSD)
and $\Delta$AFQMC. However, all three methods differ from each other by more
than 0.25 eV, and a more thorough benchmark in the basis set limit is highly
desirable.
4. For diamond at the $\Gamma$-point, EKT1-AFQMC produced a qualitatively
correct electron removal spectral function which agrees well with EKT1-HCI.
However, EKT1-AFQMC peak locations were off by 0.2 eV from those of EKT1-HCI.
We also noted that the EKT1-HCI first IP agrees with that of EOM-IP-CCSD to
within 0.01 eV.
While a more extensive benchmark study is highly desirable, we cautiously
conclude that EKT1-AFQMC is useful for charge excitations that are heavily
dominated by Koopmans-like excitations. EKT1-AFQMC errors in peak locations
can be as large as 0.25 eV compared to EKT1-HCI, but the line shapes of
EKT1-AFQMC closely follow those of EKT1-HCI in all systems considered in this
work.
The greatest challenge of EKT1-AFQMC is currently the statistical inefficiency
in obtaining relevant back-propagated quantities with error bars small enough
to enable the construction of stable EKT1-AFQMC spectral functions. Future
work must first be dedicated to improving the statistical efficiency of back-
propagation. Furthermore, better back-propagation algorithms are needed to
reduce the back-propagation error further. A practical implementation of
EKT3-AFQMC using an iterative eigenvalue solver will be an interesting topic
to explore in the future. Several interesting extensions are immediately
possible. First, extending the EKT framework to neutral excitationsPavlyukh
(2018, 2019) is relatively straightforward and could be interesting to
explore. Next, the extension of the EKT framework for finite-temperature
coupled electron-phonon problems would provide a way to compute temperature-
dependent vibronic spectra directly from AFQMC.Lee _et al._ (2020c, d) We
also leave the comparison of these EKT-based spectral functions to
analytically continued spectral functions for a future study.
## VI Acknowledgements
We thank Sandeep Sharma for providing access to a version of Dice with the
3-RDM and 4-RDM capability which was used in testing and producing EKT3-HCI
results presented in this work. We also thank Garnet Chan for discussion about
finite-size corrections to ionization energies. DRR acknowledges support of
NSF CHE- 1954791. The work of FDM and MAM was performed under the auspices of
the U.S. Department of Energy (DOE) by LLNL under Contract No. DE-
AC52-07NA27344 and was supported by the U.S. DOE, Office of Science, Basic
Energy Sciences, Materials Sciences and Engineering Division, as part of the
Computational Materials Sciences Program and Center for Predictive Simulation
of Functional Materials (CPSFM). Computing support for this work came from the
LLNL Institutional Computing Grand Challenge program.
## Appendix A Numerical Details of EKT
Determining the eigenvalues and eigenvectors of Eq. 11 from noisy QMC density
matrices is non-trivial. We first diagonalize the metric matrices $S_{\pm}$,
$S_{\pm}=\mathbf{U}\Lambda_{\pm}\mathbf{U}^{\dagger},$ (71)
and discard eigenvalues below a given threshold. Unless noted otherwise, we
used $10^{-4}$ for all EKT1-AFQMC results in the main text, which is on the
order of the largest statistical error in the EKT1 Fock matrix. We next
construct the transformation matrix
$\mathbf{X}_{\pm}=\mathbf{U}\mathbf{\Lambda}_{\pm}^{-1/2}.$ (72)
This procedure is often referred to as canonical orthogonalization in quantum
chemistry. Then, we solve
$\tilde{\mathbf{F}}_{\pm}\tilde{\mathbf{c}}^{\nu}_{\pm}=\epsilon_{\pm}^{\nu}\tilde{\mathbf{c}}^{\nu}_{\pm},$
(73)
where
$\tilde{\mathbf{F}}_{\pm}=\mathbf{X}_{\pm}^{\dagger}\mathbf{F}_{\pm}\mathbf{X}_{\pm}.$
(74)
Finally the eigenvectors in the original basis can be determined from
$(\mathbf{c}^{\nu}_{\pm})_{p}=\sum_{I}(\mathbf{X}_{\pm})_{pI}(\tilde{\mathbf{c}}^{\nu}_{\pm})_{I}.$
(75)
Following Kent et al.,Kent _et al._ (1998) we explicitly zero out all matrix
elements whose magnitude is smaller than two times the corresponding
statistical error bar. We explicitly symmetrize RDMs, but leave the Fock
matrix asymmetric as required for approximate wavefunctions. However, for the
more difficult problems considered in this work, such as the 54-electron UEG
and diamond, we found that symmetrizing the Fock matrix is useful, so we chose
to symmetrize it in such cases (by taking only the upper triangular part of
the Fock matrix). These steps improved the numerical
stability of the eigenvalue problem.
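The full stabilized procedure of this appendix (zeroing sub-noise elements, optional symmetrization, canonical orthogonalization with a linear-dependency cutoff, and diagonalization) can be sketched as follows; the function and argument names are our own, and real symmetric matrices are assumed for brevity:

```python
import numpy as np

def solve_ekt(F, S, cutoff=1e-4, sigma=None, symmetrize=False):
    """Sketch of the stabilized EKT generalized eigenvalue solve.

    F: noisy EKT Fock matrix; S: metric (e.g., the 1-RDM for IPs);
    cutoff: linear-dependency threshold on metric eigenvalues;
    sigma: per-element statistical error bars (elements with magnitude
    below 2*sigma are zeroed, following Kent et al.).
    """
    F = np.array(F, dtype=float)
    if sigma is not None:
        F[np.abs(F) < 2.0 * sigma] = 0.0
    if symmetrize:
        # Keep only the upper triangle of F and mirror it.
        F = np.triu(F) + np.triu(F, k=1).T
    # Canonical orthogonalization: S = U Lambda U^T, drop small Lambda.
    lam, U = np.linalg.eigh(S)
    keep = lam > cutoff
    X = U[:, keep] / np.sqrt(lam[keep])        # X = U Lambda^{-1/2}
    Ft = X.T @ F @ X                           # transformed Fock matrix
    eps, ct = np.linalg.eigh((Ft + Ft.T) / 2.0)
    c = X @ ct                                 # back to the original basis
    return eps, c
```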
We also note that there are generic numerical issues arising in EKT even
without any statistical sampling error. This was observed in both EKT1-HCI and
EKT3-HCI, where spurious solutions with large negative IPs appear. These
states stem from the fact that the metric matrix in EKT problems is generally
low-rank, as it is related to RDMs. For instance, in the EKT1 formulation, the
metric we diagonalize for the IP problem is a 1-RDM whose rank is not much
larger than the number of electrons in the system. These spurious states can
be removed with larger cutoffs, though this often shifts the peak locations of
quasiparticle states. Interestingly, the spurious states all carry negligible
spectral weights and do not appear in spectra. Motivated by this observation,
the most satisfying solution we found was to use as small a threshold as
possible and to measure the overlap between Koopmans states and eigenvectors
to identify quasiparticle-like eigenvectors. This was enough to identify
physical IP excitations that are quasiparticle-like. The same
principle is applicable to EKT3-HCI and also the EA calculations.
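The overlap-based identification of quasiparticle-like eigenvectors described above can be sketched as follows (the function name, the metric-weighted overlap convention, and the 0.9 threshold are illustrative choices, not prescriptions from this work):

```python
import numpy as np

def quasiparticle_indices(C_ekt, S, C_hf, threshold=0.9):
    """Flag EKT eigenvectors with large overlap onto Koopmans states.

    C_ekt: columns are EKT eigenvectors in the orbital basis;
    S: the metric (the 1-RDM for the IP problem);
    C_hf: columns are canonical HF orbitals (Koopmans states).
    Each EKT vector is normalized with respect to the metric, and a
    vector is flagged if its largest overlap exceeds the threshold.
    """
    norms = np.sqrt(np.einsum("pi,pq,qi->i", C_ekt, S, C_ekt))
    ov = np.abs(C_hf.T @ S @ C_ekt) / norms[None, :]
    return [nu for nu in range(C_ekt.shape[1]) if ov[:, nu].max() > threshold]
```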
Spectral functions are plotted by approximating the $\delta$-function in the
spectral function expression as a Lorentzian function
$\delta(\omega)\simeq\frac{1}{\pi}\left(\frac{\eta}{\omega^{2}+\eta^{2}}\right)$
(76)
for some small constant $\eta$. To ensure reproducibility, we specified the
value of $\eta$ in all relevant figures.
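Given a set of poles $\epsilon_{\nu}$ and spectral weights $w_{\nu}$, the plotted spectra follow directly from this Lorentzian approximation; a minimal sketch:

```python
import numpy as np

def spectral_function(omega, poles, weights, eta=0.5):
    """A(omega) = sum_nu w_nu * (eta/pi) / ((omega - eps_nu)^2 + eta^2),

    i.e., each delta function in the spectral sum is replaced by the
    Lorentzian of Eq. (76). eta is the broadening in eV (0.5 eV for the
    molecular plots, 0.2 eV for the UEG plots).
    """
    omega = np.asarray(omega, dtype=float)[:, None]   # shape (n_omega, 1)
    lor = (eta / np.pi) / ((omega - poles) ** 2 + eta ** 2)
    return (lor * weights).sum(axis=1)
```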
## References
* Lu _et al._ (2012) D. Lu, I. M. Vishik, M. Yi, Y. Chen, R. G. Moore, and Z.-X. Shen, Annu. Rev. Condens. Matter Phys. 3, 129 (2012).
* Egerton (2008) R. F. Egerton, Rep. Prog. Phys. 72, 016502 (2008).
* Andreani _et al._ (2005) C. Andreani, D. Colognesi, J. Mayers, G. Reiter, and R. Senesi, Adv. Phys. 54, 377 (2005).
* Fetter and Walecka (2003) A. L. Fetter and J. D. Walecka, _Quantum Theory of Many-Particle Systems (Dover Books on Physics)_ (Dover Publications, 2003).
# Eccentric binary black hole surrogate models for the gravitational waveform and remnant properties: comparable mass, nonspinning case
Tousif Islam<EMAIL_ADDRESS>(Department of Physics, University of Massachusetts, Dartmouth, MA 02747, USA; Department of Mathematics, University of Massachusetts, Dartmouth, MA 02747, USA; Center for Scientific Computing and Visualization Research, University of Massachusetts, Dartmouth, MA 02747, USA)
Vijay Varma (Department of Physics, Cornell University, Ithaca, New York 14853, USA; Cornell Center for Astrophysics and Planetary Science, Cornell University, Ithaca, New York 14853, USA; TAPIR 350-17, California Institute of Technology, 1200 E California Boulevard, Pasadena, CA 91125, USA)
Jackie Lodman (TAPIR 350-17, California Institute of Technology, 1200 E California Boulevard, Pasadena, CA 91125, USA)
Scott E. Field (Department of Mathematics, University of Massachusetts, Dartmouth, MA 02747, USA; Center for Scientific Computing and Visualization Research, University of Massachusetts, Dartmouth, MA 02747, USA)
Gaurav Khanna (Department of Physics, University of Massachusetts, Dartmouth, MA 02747, USA; Center for Scientific Computing and Visualization Research, University of Massachusetts, Dartmouth, MA 02747, USA; Department of Physics, University of Rhode Island, Kingston, RI 02881, USA)
Mark A. Scheel (TAPIR 350-17, California Institute of Technology, 1200 E California Boulevard, Pasadena, CA 91125, USA)
Harald P. Pfeiffer (Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, Potsdam 14476, Germany)
Davide Gerosa (School of Physics and Astronomy & Institute for Gravitational Wave Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom)
Lawrence E. Kidder (Cornell Center for Astrophysics and Planetary Science, Cornell University, Ithaca, New York 14853, USA)
(August 27, 2024)
###### Abstract
We develop new strategies to build numerical relativity surrogate models for
eccentric binary black hole systems, which are expected to play an
increasingly important role in current and future gravitational-wave
detectors. We introduce a new surrogate waveform model, NRSur2dq1Ecc, using 47
nonspinning, equal-mass waveforms with eccentricities up to $0.2$ when
measured at a reference time of $5500M$ before merger. This is the first
waveform model that is directly trained on eccentric numerical relativity
simulations and does not require that the binary circularizes before merger.
The model includes the $(2,2)$, $(3,2)$, and $(4,4)$ spin-weighted spherical
harmonic modes. We also build a final black hole model, NRSur2dq1EccRemnant,
which models the mass and spin of the remnant black hole. We show that our
waveform model can accurately predict numerical relativity waveforms with
mismatches $\approx 10^{-3}$, while the remnant model can recover the final
mass and dimensionless spin with absolute errors smaller than $\approx 5\times
10^{-4}M$ and $\approx 2\times 10^{-3}$ respectively. We demonstrate that the
waveform model can also recover subtle effects like mode-mixing in the
ringdown signal without any special ad-hoc modeling steps. Finally, we show
that despite being trained only on equal-mass binaries, NRSur2dq1Ecc can be
reasonably extended up to mass ratio $q\approx 3$ with mismatches $\simeq
10^{-2}$ for eccentricities smaller than $\sim 0.05$ as measured at a
reference time of $2000M$ before merger. The methods developed here should
prove useful in the building of future eccentric surrogate models over larger
regions of the parameter space.
## I Introduction
Detection of gravitational waves (GWs) Abbott _et al._ (2019a, 2020a) by the
LIGO Aasi _et al._ (2015) and Virgo Acernese _et al._ (2015) detectors has
opened a new window in astrophysics to probe binary compact objects – binary
black holes (BBHs) being the most abundant source for these detectors. Both
detection and extraction of source properties from the GW signal rely on the
availability of accurate inspiral-merger-ringdown (IMR) waveform models for
BBHs. While numerical relativity (NR) provides the most accurate gravitational
waveforms for BBHs, these simulations are computationally expensive, taking weeks to months
to generate a single waveform. Data-driven surrogate modeling strategies Field
_et al._ (2014); Pürrer (2014); Blackman _et al._ (2015, 2017a, 2017b); Varma
_et al._ (2019a); Chua _et al._ (2019); Lackey _et al._ (2019); Varma _et
al._ (2019b); Williams _et al._ (2020); Khan and Green (2020); Haegel and
Husa (2020) have been shown to be capable of producing waveforms that are
nearly indistinguishable from NR with evaluation times of less than $0.1$
seconds. While NR surrogate waveform models for nonspinning Blackman _et al._
(2015), aligned-spin Varma _et al._ (2019a), and precessing BBHs Blackman
_et al._ (2017b); Varma _et al._ (2019b) are well developed, NR surrogate
modeling of eccentric systems is completely unexplored.
So far, all GW detections of BBHs are consistent with signals emitted from
quasicircular binaries Abbott _et al._ (2019b); Romero-Shaw _et al._ (2019);
Lenon _et al._ (2020); Yun _et al._ (2020); Wu _et al._ (2020); Nitz _et
al._ (2019); Ramos-Buades _et al._ (2020a). In fact, eccentricity has been
traditionally ignored in most GW data analyses (e.g., Refs. Abbott _et
al._ (2020a, 2019a)). This is motivated by the expectation that even if a
binary is formed with a non-zero eccentricity, it should circularize before
reaching the frequency band of ground based detectors, as eccentricity gets
radiated away via GWs during the long inspiral Peters (1964). However, this
assumption may not always hold, especially for binaries formed in dense
environments like globular clusters or galactic nuclei Giesler _et al._
(2018); Rodriguez _et al._ (2018a); O’Leary _et al._ (2006); Samsing (2018);
Fragione and Kocsis (2019); Kumamoto _et al._ (2019); O’Leary _et al._
(2009); Gondán and Kocsis (2020). Indeed, recent follow-up analysis of
GW190521 Abbott _et al._ (2020b) claims this event to be consistent with a BBH
source with eccentricity ranging from $\sim 0.1$ Romero-Shaw _et al._ (2020)
up to $\sim 0.7$ Gayathri _et al._ (2020) (see also Calderón Bustillo _et
al._ (2020a, b)).
Eccentricity, if present in GW signals, carries precious astrophysical
information about the environment in which the binary was formed. The
detection of an eccentric merger would not only be a smoking-gun signature of
sources formed via dynamical encounters, but would point towards a specific
type of interaction, namely GW captures Zevin _et al._ (2019), taking place in
those environments. Catching eccentric sources in the mHz regime targeted by
the LISA space mission is also a promising avenue to distinguish astrophysical
formation channels Nishizawa _et al._ (2017, 2016); Breivik _et al._ (2016);
Fang _et al._ (2019); Rodriguez _et al._ (2018b); Gondán _et al._ (2018);
Tagawa _et al._ (2020).
Furthermore, ignoring eccentricity in our models can lead to systematic biases
if the actual signal corresponds to an eccentric system Ramos-Buades _et al._
(2020b). Such biases can also lead to eccentric systems being misidentified as
a violation of general relativity (GR). Even if all binaries are found to be
circular, eccentric models are necessary to place bounds on the eccentricity.
Therefore, including eccentricity in our GW models is important, especially as
the detectors become more sensitive.
In the last few years, a handful of eccentric inspiral-only Klein _et al._
(2018); Tiwari and Gopakumar (2020); Moore _et al._ (2018); Moore and Yunes
(2019); Liu _et al._ (2020); Tanay _et al._ (2019) and IMR models Hinderer
and Babak (2017); Hinder _et al._ (2018); Huerta _et al._ (2018); Chen _et
al._ (2020); Chiaramello and Nagar (2020); Cao and Han (2017) have become
available. We highlight some recent eccentric IMR models in the following.
ENIGMA Huerta _et al._ (2018); Chen _et al._ (2020) is a nonspinning
eccentric BBH model that attaches an eccentric post-Newtonian (PN) inspiral to
a quasicircular merger based on an NR surrogate model Blackman _et al._
(2015). SEOBNRE Cao and Han (2017) modifies an aligned-spin quasicircular EOB
waveform model Taracchini _et al._ (2012) to include some effects of
eccentricity. Similarly, Ref. Chiaramello and Nagar (2020) modifies a
different aligned-spin EOB multipolar waveform model for quasicircular BBHs
Nagar _et al._ (2018, 2020) to include some effects of eccentricity. The
model is then further improved by replacing the carrier quasicircular model
with a generic eccentric one Nagar _et al._ (2021). In addition to these
models, Ref. Setyawati and Ohme (2021) recently developed a method to add
eccentric modulations to existing quasicircular BBH models.
Notably, all of these models rely on the assumption that the binary
circularizes by the merger time. While this is approximately true for many
expected sources Huerta _et al._ (2018); Habib and Huerta (2019), this
necessarily places a limit on the range of validity of these models. In
addition, none of these models are calibrated on eccentric NR simulations,
even though their accuracy is tested by comparing against eccentric
simulations.
Apart from the waveform prediction, BBH remnant modeling from eccentric
sources is also of crucial astrophysical importance Sperhake _et al._ (2008);
Hinder _et al._ (2008); Sopuerta _et al._ (2007); Sperhake _et al._ (2020).
For example, recoils from eccentric mergers can be up to $25\%$ higher than
the circular case Sopuerta _et al._ (2007); Sperhake _et al._ (2020), which
result in a higher likelihood of ejections from astrophysical hosts like star
clusters and galaxies.
It is, therefore, timely to invest in building faithful eccentric BBHs
waveform and remnant models that address some of these limitations. In this
paper, we develop a detailed framework for constructing a surrogate model with
eccentric NR data. We then build a two-dimensional surrogate model,
NRSur2dq1Ecc, over parameters that describe eccentricity for equal-mass,
nonspinning systems to demonstrate the efficacy of the proposed methods. This
is the first eccentric waveform that is directly trained on eccentric NR
simulations and does not need to assume that the binary circularizes before
merger. The model can produce waveforms that are of comparable accuracy to the
NR simulations used to train it. Furthermore, despite being trained only on
equal-mass eccentric BBHs, we find that the model can be reasonably evaluated
beyond its training range up to mass ratio $q\approx 3$ provided the
eccentricities are small.
In addition to the waveform model, we build a surrogate model for the remnant
mass and spin, NRSur2dq1EccRemnant, which can provide accurate predictions for
the final state of eccentric binary mergers. This work paves the way forward
for building future eccentric surrogate models: we expect that the methods
developed here can be applied straightforwardly to aligned-spin eccentric
BBHs, while the precessing case requires significantly more work.
The rest of the paper is organized as follows. Sec. II describes the NR
simulations. Sec. III describes data decomposition, parameterization and
construction of the surrogate model. In Sec. IV, we test the surrogate model
by comparing against NR waveforms. We end with some concluding remarks in Sec.
V.
## II Numerical Relativity Data
NR simulations for this work are performed using the Spectral Einstein Code
(SpEC) SpE developed by the Simulating eXtreme Spacetimes (SXS) collaboration
SXS . We follow the procedure outlined in Ref. Chatziioannou _et al._ (2021)
to construct initial orbital parameters that result in a desired eccentricity.
The constraint equations are solved employing the extended conformal thin
sandwich formalism York (1999); Pfeiffer and York (2003) with superposed
harmonic Kerr free data Varma _et al._ (2018). The evolution equations are
solved employing the generalized harmonic formulation Lindblom _et al._
(2006); Rinne _et al._ (2009). The time steps during the simulations are
chosen nonuniformly using an adaptive time-stepper Boyle _et al._ (2019).
Further details can be found in Ref. Boyle _et al._ (2019) and references
within. We perform 47 new eccentric NR simulations that have been assigned the
identifiers SXS:BBH:2266 - SXS:BBH:2312, and will be made available through
the SXS public catalog SXS Collaboration .
The component BH masses, $m_{1}$ and $m_{2}$, and dimensionless spins,
$\bm{\chi}_{1}$ and $\bm{\chi}_{2}$, are measured on the apparent horizons
Boyle _et al._ (2019) of the BHs, where index 1 (2) corresponds to the
heavier (lighter) BH. The component masses at the relaxation time Boyle _et
al._ (2019) are used to define the mass ratio $q=m_{1}/m_{2}\geq 1$ and total
mass $M=m_{1}+m_{2}$. Unless otherwise specified, all masses in this paper are
given in units of the total mass. When training the surrogate model, we
restrict ourselves to $q=1$, $\bm{\chi}_{1},\bm{\chi}_{2}=0$ in this work.
The waveform is extracted at several extraction spheres at varying finite
radii from the origin and then extrapolated to future null infinity Boyle _et
al._ (2019); Boyle and Mroue (2009). These extrapolated waveforms are then
corrected to account for the initial drift of the center of mass Boyle (2016).
The spin-weighted spherical harmonic modes at future null infinity, scaled
to unit mass and unit distance, are denoted as $\mathpzc{h}_{\ell m}(t)$ in
this paper.
The complex strain $\mathpzc{h}=h_{+}-ih_{\times}$ is given by:
$\mathpzc{h}(t,\iota,\varphi_{0})=\sum^{\infty}_{\ell=2}\sum_{m=-\ell}^{\ell}\mathpzc{h}_{\ell m}(t)\,{}_{-2}Y_{\ell m}(\iota,\varphi_{0}),$ (1)
where $h_{+}$ ($h_{\times}$) is the plus (cross) polarization of the waveform,
${}_{-2}Y_{\ell m}$ are the spin$\,=\\!\\!-2$ weighted spherical harmonics,
and $\iota$ and $\varphi_{0}$ are the polar and azimuthal angles on the sky in
the source frame. We model modes with $(\ell,m)\in\{(2,2),(3,2),(4,4)\}$. Because
of the symmetries of equal-mass, nonspinning BBHs, all odd-$m$ modes are
identically zero, and the $m<0$ modes can be obtained from the $m>0$ modes.
Therefore, we model all non-zero $\ell\leq 3$ and $(4,\pm 4)$ modes, except
the $m=0$ modes. We exclude $m=0$ memory modes because (non-oscillatory)
Christodoulou memory is not accumulated sufficiently in our NR simulations
Favata (2009); this defect was recently addressed in both Cauchy
characteristic extraction (CCE) Mitman _et al._ (2020a); Barkett _et al._
(2020); Moxon _et al._ (2020) and extrapolation Mitman _et al._ (2020b)
approaches. The $(4,2)$ mode, on the other hand, was found to have significant
numerical error in the extrapolation procedure Boyle _et al._ (2019); Boyle
and Mroue (2009). We expect this issue to be resolved with CCE as well.
Therefore, in future models, we should be able to include the $m=0$ modes as
well as modes like the (4,2) mode.
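As an illustration of how Eq. (1) is used downstream, the strain can be assembled numerically from a dictionary of mode time series. The Python sketch below is our own illustrative example (not part of the model's code); for brevity it implements only the closed-form spin-weight $-2$, $\ell=2$, $m=\pm 2$ harmonics:

```python
import numpy as np

def sYlm_2m(iota, phi0, m):
    """Closed-form spin-(-2)-weighted spherical harmonics for ell=2, m=+/-2."""
    pref = np.sqrt(5.0 / (64.0 * np.pi))
    if m == 2:
        return pref * (1.0 + np.cos(iota)) ** 2 * np.exp(2j * phi0)
    if m == -2:
        return pref * (1.0 - np.cos(iota)) ** 2 * np.exp(-2j * phi0)
    raise ValueError("only m = +/-2 implemented in this sketch")

def strain(modes, iota, phi0):
    """Evaluate Eq. (1): h = sum over (ell, m) of h_lm(t) * _{-2}Y_{lm}(iota, phi0).

    `modes` maps (ell, m) tuples to complex time series h_lm(t)."""
    h = 0.0
    for (ell, m), h_lm in modes.items():
        h = h + h_lm * sYlm_2m(iota, phi0, m)
    return h
```

For a face-on observer ($\iota=0$) only the $m=2$ term contributes, since $_{-2}Y_{2,-2}$ vanishes there; this is a quick sanity check on any such implementation.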
The remnant mass $m_{f}$ and spin $\bm{\chi}_{f}$ are determined from the
common apparent horizon long after the ringdown, as described in Ref. Boyle
_et al._ (2019). For nonprecessing systems like the ones considered here, the
final spin is directed along the orbital angular momentum.
Unlike previous surrogate models Varma _et al._ (2019b, c, 2020), we do not
model the recoil kick in this work, as the symmetries of equal-mass,
nonspinning BBHs restrict the kick to be zero.
## III Surrogate methodology for eccentric waveforms
In this section, we describe our new framework to build NR surrogate models
for eccentric BBHs. We begin by applying the following post-processing steps
that simplify the modeling procedure.
### III.1 Processing the training data
In order to construct parametric fits (cf. Sec. III.4) for the surrogate
model, it is necessary to align all the waveforms such that their peaks occur
at the same time. We define the peak of each waveform, $\tau_{\rm peak}$, to be
the time when the quadrature sum,
$A_{\rm tot}(\tau)=\sqrt{\sum_{l,m}|\mathpzc{h}_{\ell m}(\tau)|^{2}}\,,$ (2)
reaches its maximum. Here the summation is taken over all the modes being
modeled. We then choose a new time coordinate,
$t=\tau-\tau_{\rm peak}\,,$ (3)
such that $A_{\rm tot}(t)$ for each waveform peaks at $t=0$.
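The alignment in Eqs. (2)-(3) amounts to a peak search over the quadrature-sum amplitude followed by a time shift. A minimal sketch (our illustration, assuming NumPy arrays and a dictionary of modes as above):

```python
import numpy as np

def align_peak(tau, modes):
    """Shift the time axis so the quadrature-sum amplitude of Eq. (2) peaks at t = 0.

    `tau` is the raw NR time array; `modes` maps (ell, m) -> complex h_lm(tau)."""
    # A_tot(tau) = sqrt( sum_lm |h_lm(tau)|^2 )   [Eq. (2)]
    A_tot = np.sqrt(sum(np.abs(h) ** 2 for h in modes.values()))
    tau_peak = tau[np.argmax(A_tot)]
    return tau - tau_peak  # new time coordinate t = tau - tau_peak   [Eq. (3)]
```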
Next, we use cubic splines to interpolate the real and imaginary parts of the
waveform modes onto a common time grid of [$-5500M$, $75M$] with a uniform
time spacing of $dt=0.1M$; this is dense enough to capture all frequencies of
interest, including near merger. The initial time of $-5500M$ is chosen so
that we can safely eliminate spurious initial transients in the waveform, also
known as junk radiation Boyle _et al._ (2019), for each waveform in our
dataset.
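This resampling step can be sketched as follows (our illustration, using `scipy.interpolate.CubicSpline` on the real and imaginary parts separately):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def to_common_grid(t, h_lm, t_common=None):
    """Interpolate a complex mode onto the common grid [-5500M, 75M] with dt = 0.1M,
    using cubic splines on the real and imaginary parts separately."""
    if t_common is None:
        t_common = np.arange(-5500.0, 75.0 + 0.1, 0.1)
    re = CubicSpline(t, h_lm.real)(t_common)
    im = CubicSpline(t, h_lm.imag)(t_common)
    return t_common, re + 1j * im
```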
Once all the waveforms are interpolated onto a common time grid, we perform a
frame rotation of the waveform modes about the z-axis such that the orbital
phase is zero at $t=-5500M$. The orbital phase is obtained from the $(2,2)$
mode [cf. Eq. (14)]. Because of the symmetry of the equal-mass, equal-spin
systems considered here, the odd-$m$ modes are identically zero and so we need
not worry about the remaining $\phi_{\rm orb}\rightarrow\phi_{\rm orb}+\pi$
rotational freedom as was necessary in Refs. Blackman _et al._ (2015, 2017a,
2017b); Varma _et al._ (2019a, b). This preprocessing of time and phase
ensures that the waveform varies smoothly across the parameter space, which in
turn makes modeling easier.
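Under a rotation by angle $\alpha$ about the z-axis, each mode transforms as $\mathpzc{h}_{\ell m}\rightarrow\mathpzc{h}_{\ell m}e^{im\alpha}$ (the sign depends on the phase convention; here we assume $\mathpzc{h}_{22}\propto e^{-2i\phi_{\rm orb}}$). A sketch of the phase-zeroing rotation:

```python
import numpy as np

def rotate_to_zero_phase(modes, phi_ref):
    """Rotate all modes about the z-axis by phi_ref so the orbital phase
    vanishes at the reference time: h_lm -> h_lm * exp(i m phi_ref).

    Assumes the convention h_lm ~ exp(-i m phi_orb) for m > 0 modes."""
    return {(ell, m): h * np.exp(1j * m * phi_ref)
            for (ell, m), h in modes.items()}
```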
### III.2 Measuring eccentricity and mean anomaly
Departure of NR orbits from circularity is measured by a time-dependent
eccentricity and mean anomaly. Eccentricity takes values in $[0,1]$, where
the boundary values correspond to a quasicircular binary and an unbound orbit
Chandrasekhar (1998), respectively. Mean anomaly, on the other hand, is
bounded by $[0,2\pi)$. While it may seem most natural to estimate orbital
parameters from the BH trajectories, this task is complicated by the fact that
any such measurement will be impacted by the gauge conditions chosen by the NR
simulation. We instead choose to estimate eccentricity and anomaly parameters
directly from the waveform data at future null infinity.
#### III.2.1 Measuring eccentricity
Various methods to extract the eccentricity from NR simulations have been
proposed in the literature Habib and Huerta (2019); Healy _et al._ (2017);
Mroue _et al._ (2010); Purrer _et al._ (2012). As the eccentricity evolves
during the binary’s orbit Peters (1964), these methods use dynamical
quantities such as some combination of the $(2,2)$ mode’s amplitude, phase, or
frequency. All of these methods reduce to the eccentricity parameter in the
Newtonian limit. The estimated value of the eccentricity may differ slightly
depending on the method used and the noise in the numerical data. However, as
long as they provide a consistent measurement of eccentricity that decays
monotonically with time, one can use any of the eccentricity estimators for
constructing a surrogate waveform model. For this work, we use the following
definition of eccentricity based on orbital frequency Mora and Will (2002):
$e(t)=\frac{\sqrt{\omega_{p}(t)}-\sqrt{\omega_{a}(t)}}{\sqrt{\omega_{p}(t)}+\sqrt{\omega_{a}(t)}},$
(4)
where $\omega_{a}$ and $\omega_{p}$ are the orbital frequencies at apocenter
(i.e. point of furthest approach) and pericenter (i.e. point of closest
approach), respectively. Unlike several other eccentricity estimators proposed
in literature Habib and Huerta (2019); Healy _et al._ (2017); Mroue _et al._
(2010); Purrer _et al._ (2012), the one defined in Eq. (4) is normalized and
reduces to the eccentricity parameter in the Newtonian limit at both low and
high eccentricities Ramos-Buades _et al._ (2020b).
We first compute the orbital frequency,
$\omega_{\rm orb}=\frac{d\phi_{\mathrm{\rm orb}}}{dt}\,,$ (5)
where $\phi_{\mathrm{\rm orb}}$ is the orbital phase inferred from the (2,2)
mode (cf. Eq. (14)), and the derivative is approximated using second-order
finite differences. We then find the times where $\omega_{\rm orb}$ passes
through a local maximum (minimum) and associate them with pericenter (apocenter)
passages, to obtain $\omega_{p}$ ($\omega_{a}$). We find that using the local
maxima/minima of the amplitude of the $(2,2)$ mode to identify the
pericenter/apocenter times leads to a consistent value for the eccentricity.
We then interpolate $\omega_{p}$ and $\omega_{a}$ onto the full time grid
using cubic splines. This gives us $\omega_{p}(t)$ and $\omega_{a}(t)$, which
are used in Eq. (4).
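The procedure above can be sketched in a few lines of numpy/scipy. The function name and the array inputs (`t`, `phi_orb`) are ours; a production implementation would treat edge effects near merger and very small eccentricities more carefully, as discussed below.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def eccentricity_from_frequency(t, phi_orb):
    """Estimate e(t) from the orbital phase via Eq. (4).

    t, phi_orb : arrays holding the time grid and orbital phase.
    Returns (t, e) restricted to where both spline interpolants
    are defined, so we interpolate rather than extrapolate.
    """
    # Eq. (5): second-order finite-difference orbital frequency.
    omega = np.gradient(phi_orb, t)

    # Pericenter (apocenter) passages: local maxima (minima) of omega.
    i_peri = argrelextrema(omega, np.greater)[0]
    i_apo = argrelextrema(omega, np.less)[0]

    # Cubic-spline interpolants for omega_p(t) and omega_a(t).
    omega_p = CubicSpline(t[i_peri], omega[i_peri])
    omega_a = CubicSpline(t[i_apo], omega[i_apo])

    # Keep only the overlap region of the two interpolants.
    lo = max(t[i_peri][0], t[i_apo][0])
    hi = min(t[i_peri][-1], t[i_apo][-1])
    mask = (t >= lo) & (t <= hi)
    sp = np.sqrt(omega_p(t[mask]))
    sa = np.sqrt(omega_a(t[mask]))
    return t[mask], (sp - sa) / (sp + sa)  # Eq. (4)
```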
Figure 1: Time evolution of the eccentricity $e(t)$ (upper panel) and the
orbital frequency $\omega_{\rm orb}(t)$ (lower panel) for NR Simulation
SXS:BBH:2304. $\omega_{p}$ and $\omega_{a}$ denote, respectively, the
orbital frequency at pericenter (local maxima, cyan circles) and apocenter
passages (local minima, green circles). From this data we construct spline
interpolants to obtain $\omega_{p}(t)$ (cyan curve) and $\omega_{a}(t)$ (green
curve). The eccentricity is then estimated using Eq. (4). The red dashed
vertical line corresponds to the reference time $t_{\rm ref}=-5500M$ at which
the surrogate model is parameterized.
Figure 1 shows an example of the measured eccentricity for the NR simulation
SXS:BBH:2304. We see that our method provides a smooth, monotonically
decreasing $e(t)$. The estimate becomes unreliable near merger, where finding
local maxima/minima in $\omega_{\rm orb}$ becomes problematic as the orbit
transitions from inspiral to plunge. The estimate also becomes problematic
whenever the eccentricity is extremely small, as there are then no
identifiable local maxima/minima. This does not affect our
modeling, however, as we only require an eccentricity value at a reference
time while the binary is still in the inspiral phase. We select a reference
time of $t_{\rm ref}=-5500M$ and parameterize our waveform model by
$e_{\rm ref}=e(t_{\rm ref})\,.$ (6)
While estimating $e_{\rm ref}$, we include the data segment slightly before
$t_{\rm ref}$ as this allows us to interpolate, rather than extrapolate, when
constructing $e(t)$ in Eq. (4).
#### III.2.2 Measuring mean anomaly
In the Newtonian context, the mean anomaly $l$ of an eccentric orbit is
defined as
$\displaystyle l$ $\displaystyle\equiv$ $\displaystyle 2\pi\frac{t-t_{0}}{P},$
(7)
where $t_{0}$ is a time corresponding to the previous pericenter passage and
$P$ is the radial period, which is defined to be the time between two
successive pericenter passages. In the Newtonian case $P$ is a constant, but
in GR it changes as the binary inspirals. However, one can continue to use Eq.
(7) as a meaningful measurement of the radial oscillation’s phase for the
purpose of constructing a waveform model Hinder _et al._ (2018).
Figure 2: Time evolution of the mean anomaly $l(t)$ (upper panel) and the
orbital frequency $\omega_{\rm orb}(t)$ (lower panel) for the NR Simulation
SXS:BBH:2304. Green dashed vertical lines indicate the times for pericenter
passages. The anomaly $l(t)$ grows linearly with time over $[0,2\pi)$ in
between two successive pericenters. The red dashed vertical line corresponds
to the reference time $t_{\rm ref}=-5500M$ at which the surrogate model is
parametrized.
For each NR waveform, we compute the times for all pericenter passages using
the same procedure as in Sec. III.2.1. We divide the time array into different
orbital windows defined as $[t_{i}^{\rm peri},t_{i+1}^{\rm peri})$, where
$t_{i}^{\rm peri}$ is the time for $i^{th}$ pericenter passage. The orbital
period in each window is given by $P_{i}=t_{i+1}^{\rm peri}-t_{i}^{\rm peri}$,
and the mean anomaly by
$\displaystyle l_{i}(t)=2\pi\frac{t-t_{i}^{\rm peri}}{P_{i}}\,.$ (8)
Note that each $l_{i}(t)$ grows linearly with time over $[0,2\pi)$ for the
window $[t_{i}^{\rm peri},t_{i+1}^{\rm peri})$. To obtain the full $l(t)$, we
simply join each $l_{i}(t)$ for consecutive orbits. Finally, the mean anomaly
parameterizing our waveform model is simply the mean anomaly evaluated at
$t_{\rm ref}=-5500M$:
$\displaystyle l_{\rm ref}=l(t_{\rm ref})\,.$ (9)
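A minimal sketch of this piecewise construction, assuming the pericenter times have already been located as in Sec. III.2.1 (the function name is ours):

```python
import numpy as np

def mean_anomaly(t, t_peri):
    """Piecewise-linear mean anomaly l(t), Eq. (8).

    t      : time array
    t_peri : sorted times of successive pericenter passages
    Returns (t, l) for times within [t_peri[0], t_peri[-1]), with
    l(t) growing linearly over [0, 2*pi) in each orbital window.
    """
    t_peri = np.asarray(t_peri)
    mask = (t >= t_peri[0]) & (t < t_peri[-1])
    ts = t[mask]
    # Index i of the window [t_peri[i], t_peri[i+1]) containing each time.
    i = np.searchsorted(t_peri, ts, side="right") - 1
    P = t_peri[i + 1] - t_peri[i]  # radial period of each window
    return ts, 2.0 * np.pi * (ts - t_peri[i]) / P
```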
Figure 2 shows an example application of our method to estimate the mean
anomaly of the NR simulation SXS:BBH:2304.
#### III.2.3 Targeted parameter space
In Fig. 3, we show the measured values for eccentricity and mean anomaly at
$t_{\rm ref}$ for all 47 NR waveforms, which leads to the following 2d
parameter space for our model:
* •
eccentricity: $e_{\rm ref}\in[0,0.2]$;
* •
mean anomaly: $l_{\rm ref}\in[0,2\pi)$.
Fig. 3 shows a large gap in the parameter space, which reflects an inherent
limitation in our current approach to achieve target eccentricity parameters
from the initial data. The method we use to construct initial orbital
parameters Chatziioannou _et al._ (2021) seeks to achieve target values of
$(e_{\rm ref},l_{\rm ref})$ at a time $500M$ after the start of the
simulation. The initial orbital frequency is chosen such that time to merger
is $6000M$, as predicted by a leading-order PN calculation. Unfortunately,
this is only approximate, leading to different merger times for different
simulations. Consequently, when we estimate the eccentricity parameters at
$t_{\rm ref}=-5500M$, this is no longer a fixed time from the start of the
simulation. The eccentricity parameters evolve differently for different
simulations during this time, leading to the clustering in Fig. 3. In the
future, we plan to resolve this using a higher order PN expression, or an
eccentric waveform model Huerta _et al._ (2018); Chen _et al._ (2020);
Chiaramello and Nagar (2020) to predict the time to merger.
Figure 3: The parameter space covered by the 47 NR waveforms (circle markers)
used in the construction of our surrogate model. The axes show the
eccentricity and mean anomaly values at $t_{\rm ref}$. We also show the
dependence of the maximum (over the sky of the source frame) flat-noise
mismatches on the parameters eccentricity and mean anomaly (cf. Sec. IV.1.2).
The colors indicate the maximum mismatch, which systematically increases near
the high eccentricity boundary where few training data points are available.
### III.3 Waveform data decomposition
Building a surrogate model becomes more challenging for oscillatory and
complicated waveform data. One solution is to transform or decompose the
waveform data into several simpler “waveform data pieces” that also vary
smoothly over the parameter space. These simpler data pieces can then be
modeled more easily and recombined to get back the original waveform.
Successful decomposition strategies have been developed for quasi-circular NR
surrogates Blackman _et al._ (2015); Varma _et al._ (2019a, b); Blackman
_et al._ (2017a, b). In order to develop similar strategies for eccentric
waveform data, we have pursued a variety of options. We now summarize the most
successful decomposition technique we have tried, while relegating some
alternatives to Appendix A.
#### III.3.1 Decomposing the quadrupolar mode $\mathpzc{h}_{22}$
The complex $(2,2)$ waveform mode,
$\displaystyle\mathpzc{h}_{22}=A_{22}\,e^{-\mathrm{i}\phi_{22}}\,,$ (10)
can be decomposed into an amplitude, $A_{22}$, and phase, $\phi_{22}$. For
non-precessing systems in quasicircular orbit, $A_{22}$ and $\phi_{22}$ are
slowly varying functions of time, and have therefore been used as waveform
data pieces for many modeling efforts. For eccentric waveforms, however, both
amplitude and phase show highly oscillatory modulations on the orbital time
scale (cf. Figs. 1 and 2 for the frequency, which is a time-derivative of the
phase). This demands further decomposition of the waveforms into even simpler
data pieces. One natural solution could have been to build interpolated
functions of the local maxima and minima of $A_{22}$ and $\phi_{22}$. The
secular trend of these functions can then be subtracted out from the original
amplitude and phase. The resulting residual amplitude and phase data may be
easier to model. Unfortunately, as mentioned in Sec. III.2.1, finding the
local maxima/minima becomes problematic near the merger.
Figure 4: Example decomposition of the amplitude and phase of the $(2,2)$
mode. Upper left: Amplitude $A_{22}$ of the eccentric waveform SXS:BBH:2304
(with eccentricity $e_{\rm ref}=0.181$) along with the amplitude $A_{22}^{0}$
of the noneccentric waveform SXS:BBH:1155. Lower left: The residual amplitude
$\Delta A_{22}=A_{22}-A_{22}^{0}$. Upper right: Phase $\phi_{22}$ of the
eccentric waveform SXS:BBH:2304 and the phase $\phi_{22}^{0}$ of the
noneccentric waveform SXS:BBH:1155. Lower right: The residual phase
$\Delta\phi_{22}=\phi_{22}-\phi_{22}^{0}$. In this work we model $\Delta
A_{22}$ and $\Delta\phi_{22}$.
We instead follow a simpler approach whereby the amplitude and phase of a
quasicircular $q=1$, nonspinning NR waveform (SXS:BBH:1155) is used as a proxy
for the secular trend of the amplitude and phase. We then compute the residual
amplitude and phase,
$\displaystyle\Delta A_{22}=A_{22}-A_{22}^{0},$ (11)
$\displaystyle\Delta\phi_{22}=\phi_{22}-\phi_{22}^{0},$ (12)
where $A_{22}^{0}$ and $\phi_{22}^{0}$ are the amplitude and phase of the
noneccentric waveform, respectively, which have been aligned according to the
same procedure outlined in Sec. III.1. In the upper-left panel of Fig. 4, we
show the amplitude of an eccentric waveform (SXS:BBH:2304) along with the
amplitude of its noneccentric counterpart (SXS:BBH:1155) which traces the
secular trend of the nonmonotonically increasing eccentric amplitude. The
difference of these two amplitudes, $\Delta A_{22}$, is then plotted in the
lower-left panel. $\Delta A_{22}$ is simpler to model than $A_{22}$, as it
isolates the oscillatory component of $A_{22}$. (In fact, the relatively
simple oscillatory behavior of $\Delta A_{22}$ suggests the use of a Hilbert
transform for further simplification. However, we found that this does not
improve the accuracy of our model. Such further simplifications may become
necessary for larger eccentricities than considered in this work, as the
modulations will be more pronounced.) Similarly, in the right
panels of Fig. 4, we show the phase evolution of the same eccentric waveform
(SXS:BBH:2304), its noneccentric counterpart (SXS:BBH:1155), and their
difference $\Delta\phi_{22}$ which isolates the oscillatory component of
$\phi_{22}$. Note that noneccentric waveform data is plentiful Boyle _et al._
(2019) and accurate surrogate models have been built for noneccentric NR
waveforms Varma _et al._ (2019a, b). So extending the residual amplitude and
phase computation to spinning, unequal-mass systems is straightforward. For
instance the surrogate model of Ref. Varma _et al._ (2019a) can be used to
generate $A_{22}^{0}$ and $\phi_{22}^{0}$ for generic aligned-spin systems.
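Given both waveforms interpolated onto the common time grid and aligned as in Sec. III.1, the residual data pieces of Eqs. (11)-(12) reduce to elementwise differences. A minimal sketch (the function name is ours; the sign of the phase follows the convention of Eq. (10)):

```python
import numpy as np

def residual_data_pieces(h22_ecc, h22_circ):
    """Residual amplitude and phase of the (2,2) mode, Eqs. (11)-(12).

    Both complex strains are assumed sampled on the common time grid
    and aligned as in Sec. III.1. With h22 = A22 * exp(-i * phi22)
    [Eq. (10)], the phase is minus the (unwrapped) complex argument.
    """
    A_ecc, phi_ecc = np.abs(h22_ecc), -np.unwrap(np.angle(h22_ecc))
    A_circ, phi_circ = np.abs(h22_circ), -np.unwrap(np.angle(h22_circ))
    return A_ecc - A_circ, phi_ecc - phi_circ  # (Delta A22, Delta phi22)
```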
#### III.3.2 Decomposing the higher order modes
In this paper, we model the quadrupolar mode and the higher-order modes
differently. For $\mathpzc{h}_{22}$, we model data pieces closely associated
with the amplitude and phase as described above. On the other hand, for higher
order modes, we first transform the waveform into a co-orbital frame in which
the waveform is described by a much simpler and slowly varying function. This
is done by applying a time-dependent rotation given by the instantaneous
orbital phase:
$\displaystyle\mathpzc{h}_{\ell m}^{C}=\mathpzc{h}_{\ell m}\,e^{\mathrm{i}m\phi_{\mathrm{orb}}},$ (13)
$\displaystyle\phi_{\mathrm{orb}}=\frac{\phi_{22}}{2},$ (14)
where $\phi_{22}$ is the phase of the $(2,2)$ mode (cf. Eq. (10)),
$\phi_{\mathrm{orb}}$ is the orbital phase, and $\mathpzc{h}_{\ell m}^{C}$
represents the complex modes in the co-orbital frame.
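The transformation of Eqs. (13)-(14) is a simple time-dependent phase rotation; a minimal sketch (function name ours):

```python
import numpy as np

def to_coorbital(h_lm, m, phi22):
    """Rotate an inertial-frame mode into the co-orbital frame.

    Implements Eqs. (13)-(14): phi_orb = phi22 / 2 and
    h^C_lm = h_lm * exp(i * m * phi_orb).
    """
    phi_orb = phi22 / 2.0                    # Eq. (14)
    return h_lm * np.exp(1j * m * phi_orb)   # Eq. (13)
```

For a mode whose rapid oscillation is set by the orbital phase, the rotation leaves only the slowly varying envelope, which is what makes the co-orbital-frame data easier to model.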
Figure 5: The waveform modes for NR Simulation SXS:BBH:2304 ($e_{\rm
ref}=0.181$) are shown. The top panel shows the dominant $(2,2)$ mode in the
inertial frame. Two higher-order modes $(3,2)$ and $(4,4)$ in the co-orbital
frame are shown in the middle and lower panels respectively. The waveform is
aligned such that the peak of the amplitude occurs at $t=0$ and the orbital
phase is zero at $t_{\rm ref}=-5500M$.
We use the real and imaginary parts of $\mathpzc{h}_{\ell m}^{C}$ as our
waveform data pieces for the nonquadrupole modes. As shown in Fig. 5, the
$\mathpzc{h}_{\ell m}^{C}$ data have less structure, making them easier to
model. We find that using quasicircular $\mathpzc{h}_{\ell m}^{C}$ to subtract
off the secular trend does not provide any modeling advantage. We, therefore,
model the real and imaginary parts of $\mathpzc{h}_{\ell m}^{C}$ without any
further data decomposition.
#### III.3.3 Summary of waveform data pieces
To summarize, the full set of waveform data pieces we model is as follows:
$\Delta A_{22}$, $\Delta\phi_{22}$ for the (2,2) mode, and real and imaginary
parts of $\mathpzc{h}_{\ell m}^{C}$ for the (3,2) and (4,4) modes.
### III.4 Building the waveform model
We decompose the inertial frame waveform data into many waveform data pieces
as summarized in Sec. III.3.3. For each of these data pieces, we now build a
surrogate model using reduced basis, empirical interpolation, and parametric
fits across the parameter space. The detailed procedure is outlined in Refs.
Blackman _et al._ (2017b); Field _et al._ (2014), which we only briefly
describe here.
For each waveform data piece, we employ a greedy algorithm to construct a
reduced basis Field _et al._ (2011) such that the projection errors (cf. Eq.
(5) of Ref. Blackman _et al._ (2017b)) for the entire data set onto this
basis are below a given tolerance. We use a basis tolerance of $10^{-2}$
radians for $\Delta\phi_{22}$, $1.5\times 10^{-3}$ for $\Delta A_{22}$, and
$2\times 10^{-5}$ for the real part of $\mathpzc{h}_{32}^{C}$. For all other
data pieces, the basis tolerance is set to $5\times 10^{-5}$.
These choices are made so that we include a sufficient number of basis
functions for each data piece [9 for $\Delta A_{22}$, 12 for
$\Delta\phi_{22}$, 7 (5) for the real (imaginary) part of
$\mathpzc{h}_{32}^{C}$ and 10 (6) for the real (imaginary) part of
$\mathpzc{h}_{44}^{C}$] to capture the underlying physical features in the
simulations while avoiding overfitting. We perform an additional visual
inspection of the basis functions to ensure that they are not noisy, as noisy
basis functions can compromise modeling accuracy (cf. Appendix B of Ref.
Blackman _et al._ (2017b)).
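The greedy reduced-basis construction can be sketched as follows. This is a simplified version of the algorithm of Field et al.: the function name is ours, and production codes typically use an iterated Gram-Schmidt step for numerical stability.

```python
import numpy as np

def greedy_reduced_basis(training, tol):
    """Greedy reduced-basis construction for one waveform data piece.

    training : (N, L) array, one time-sampled data piece per row.
    tol      : maximum allowed squared projection error.
    Returns an orthonormal basis as an (n, L) array.
    """
    # Seed with the training element of largest norm.
    norms = np.linalg.norm(training, axis=1)
    basis = [training[np.argmax(norms)] / norms.max()]
    while True:
        B = np.array(basis)
        # Project every training element onto the current basis.
        proj = training @ B.conj().T @ B
        err = np.linalg.norm(training - proj, axis=1) ** 2
        worst = np.argmax(err)
        if err[worst] < tol:
            return B
        # Gram-Schmidt the worst-represented element into the basis.
        r = training[worst] - proj[worst]
        basis.append(r / np.linalg.norm(r))
```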
The next step is to construct an empirical interpolant in time using a greedy
algorithm which picks the most representative time nodes Maday _et al._
(2009); Chaturantabut and Sorensen (2010); Field _et al._ (2014); Canizares
_et al._ (2015). The number of time nodes for each data piece is equal to
the number of basis functions used. The final surrogate-building step is to
construct parametric fits for each data piece at each of the empirical time
nodes across the two-dimensional parameter space $\\{e_{\rm ref},l_{\rm
ref}\\}$. We do this using the Gaussian process regression (GPR) fitting
method as described in Refs. Taylor and Varma (2020); Varma _et al._ (2019c).
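A minimal, self-contained sketch of such a fit with a fixed squared-exponential kernel is shown below. The production fits of Refs. Taylor and Varma (2020); Varma et al. (2019c) additionally optimize the kernel hyperparameters and use more sophisticated kernels, which we simply fix here for illustration.

```python
import numpy as np

def gpr_fit_predict(X_train, y_train, X_test, length_scales, sigma_n=1e-8):
    """Posterior mean of a GP with a fixed squared-exponential kernel.

    X_train : (N, 2) array of (e_ref, l_ref) training points
    y_train : (N,) data-piece values at one empirical time node
    length_scales : per-dimension kernel length scales (fixed here)
    sigma_n : small noise term regularizing the kernel matrix
    """
    def k(A, B):
        d = (A[:, None, :] - B[None, :, :]) / length_scales
        return np.exp(-0.5 * np.sum(d ** 2, axis=-1))

    K = k(X_train, X_train) + sigma_n * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)  # K^{-1} y
    return k(X_test, X_train) @ alpha    # GP posterior mean
```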
Figure 6: Time-domain leave-one-out errors $\mathcal{E}$, defined in Eq.
(17), for the full waveform as well as the individual modes considered in the
model. For comparison, we also show the NR error between the two highest
resolutions. The largest errors are found near the parameter domain’s boundary
where the trial surrogate, built as part of the cross-validation study, is
extrapolating.
### III.5 Evaluating the waveform surrogate
To evaluate the NRSur2dq1Ecc surrogate model, we provide the eccentricity
$e_{\rm ref}$ and mean anomaly $l_{\rm ref}$ as inputs. We then evaluate the
parametric fits for each waveform data piece at each empirical time node. Next, the
empirical interpolant is used to reconstruct the full waveform data pieces
(cf. Sec. III.3.3).
We compute the amplitude and phase of the $(2,2)$ mode,
$\displaystyle A_{22}^{S}=\Delta A_{22}^{S}+A_{22}^{0},$ (15)
$\displaystyle\phi_{22}^{S}=\Delta\phi_{22}^{S}+\phi_{22}^{0},$ (16)
where $\Delta A_{22}^{S}\approx\Delta A_{22}$ and
$\Delta\phi_{22}^{S}\approx\Delta\phi_{22}$ are the surrogate models for
$\Delta A_{22}$ and $\Delta\phi_{22}$ respectively while $A_{22}^{0}$ and
$\phi_{22}^{0}$ are the amplitude and phase of the quasicircular NR waveform
used in the decompositions [cf. Eqs. (11-12)]. We obtain the (2,2) mode
complex strain as $\mathpzc{h}^{S}_{22}=A_{22}^{S}\,e^{-\mathrm{i}\phi_{22}^{S}}$.
For the nonquadrupole modes, we similarly evaluate the surrogate models for
the real and imaginary parts of the co-orbital frame waveform data pieces
$\mathpzc{h}_{\ell m}^{C,S}\approx\mathpzc{h}_{\ell m}^{C}$ and treat it as
$\mathpzc{h}_{\ell m}^{C}$. Finally, we use Eqs. (10), (13), and (14) to
obtain the surrogate prediction for the inertial frame strain
$\mathpzc{h}_{\ell m}^{S}$ for these modes.
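The reconstruction of Eqs. (15)-(16), (10), and (13)-(14) can be sketched as follows; the function name and argument layout are ours:

```python
import numpy as np

def assemble_modes(dA22, dphi22, A22_0, phi22_0, coorb_modes):
    """Recombine surrogate data pieces into inertial-frame modes.

    dA22, dphi22   : surrogate residual amplitude/phase of the (2,2) mode
    A22_0, phi22_0 : quasicircular amplitude/phase used in Eqs. (11)-(12)
    coorb_modes    : dict {(l, m): co-orbital-frame mode h^C_lm}
    Returns a dict of inertial-frame modes {(l, m): h_lm}.
    """
    A22 = dA22 + A22_0                           # Eq. (15)
    phi22 = dphi22 + phi22_0                     # Eq. (16)
    modes = {(2, 2): A22 * np.exp(-1j * phi22)}  # Eq. (10)
    phi_orb = phi22 / 2.0                        # Eq. (14)
    for (l, m), hC in coorb_modes.items():
        modes[(l, m)] = hC * np.exp(-1j * m * phi_orb)  # invert Eq. (13)
    return modes
```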
Figure 7: Left panel: Flat noise mismatch between the NRSur2dq1Ecc model
(following the leave-one-out validation procedure) and the highest-resolution
NR waveform data. For comparison, we also show the NR resolution error,
obtained by comparing the two highest available resolutions. Right panel:
NRSur2dq1Ecc (validation) mismatches computed using the advanced LIGO design
sensitivity noise curve, as a function of the total mass of the binary. For
comparison, we also show the NR mismatches. For each mass, the distribution of
mismatches are shown as a smoothed vertical histogram (or a violin). The
histograms are normalized so that all violins have equal width. The largest
errors are found near the parameter domain’s boundary where the trial
surrogate, built as part of the cross-validation study, is extrapolating.
### III.6 Building the remnant surrogate
In addition to the waveform model, we also construct the first model for the
remnant quantities of eccentric BBHs. The new remnant model,
NRSur2dq1EccRemnant, predicts the final mass $m_{f}$ and the component of the
final spin, $\chi_{fz}$, along the orbital angular momentum direction. The
remnant model takes eccentricity $e_{\rm ref}$ and mean anomaly $l_{\rm ref}$
as its inputs and maps to the final state of the binary. The final mass and
spin fits are also constructed using the GPR fitting method as described in
Refs. Taylor and Varma (2020); Varma _et al._ (2019c).
## IV Results
In this section we demonstrate the accuracy of NRSur2dq1Ecc and
NRSur2dq1EccRemnant by comparing against the eccentric NR simulations
described in Sec. II. We do this by performing a leave-one-out cross-
validation study. In this study, we hold out one NR waveform from the training
set and build a trial surrogate from the remaining 46 eccentric NR waveforms.
We then evaluate the trial surrogate at the parameter value corresponding to
the held out data, and compare its prediction with the highest-resolution NR
waveform. We refer to the errors obtained by comparing against the left-out NR
waveforms as cross-validation errors. These represent conservative error
estimates for the surrogate models against NR. Since we have 47 eccentric NR
waveforms, we build 47 trial surrogates for each error study. We compare these
errors to the NR resolution error, estimated by comparing the two highest
available NR simulations.
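Schematically, the leave-one-out study is a loop over held-out training waveforms. The callables below are placeholders for the surrogate-building, evaluation, and error steps described in the text:

```python
import numpy as np

def leave_one_out_errors(params, data, build, evaluate, error):
    """Leave-one-out cross-validation over a training set.

    params   : list of parameter tuples, e.g. (e_ref, l_ref)
    data     : list of corresponding training waveforms
    build    : callable(params, data) -> trial surrogate
    evaluate : callable(surrogate, param) -> prediction
    error    : callable(truth, prediction) -> scalar error
    """
    errs = []
    for i in range(len(params)):
        keep = [j for j in range(len(params)) if j != i]
        trial = build([params[j] for j in keep], [data[j] for j in keep])
        errs.append(error(data[i], evaluate(trial, params[i])))
    return np.array(errs)
```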
### IV.1 NRSur2dq1Ecc errors
#### IV.1.1 Time domain error without time/phase optimization
In order to quantify the accuracy of NRSur2dq1Ecc, we first compute the
normalized $L_{2}$-norm between the NR data and surrogate approximation
$\displaystyle\mathcal{E}[\mathpzc{h},\mathpzc{\tilde{h}}]=\frac{1}{2}\frac{\sum_{\ell,m}\int_{t_{1}}^{t_{2}}|\mathpzc{h}_{\ell m}(t)-\mathpzc{\tilde{h}}_{\ell m}(t)|^{2}dt}{\sum_{\ell,m}\int_{t_{1}}^{t_{2}}|\mathpzc{h}_{\ell m}(t)|^{2}dt},$
(17)
where $\mathpzc{h}(t)$ and $\tilde{\mathpzc{h}}(t)$ correspond to the complex
strain for NR and NRSur2dq1Ecc waveforms, respectively. Here, $t_{1}$ and
$t_{2}$ denote the start and end of the waveform data. As the NR waveforms are
already aligned in time and phase, the surrogate reproduces this alignment.
Therefore, we compute the time-domain error $\mathcal{E}$ without any further
time/phase shifts.
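Eq. (17) can be evaluated directly on the mode data; a minimal sketch assuming a uniform time grid (the function name is ours):

```python
import numpy as np

def time_domain_error(t, modes_nr, modes_sur):
    """Normalized L2-norm error between two sets of modes, Eq. (17).

    modes_nr, modes_sur : dicts {(l, m): complex strain on grid t}.
    Assumes a uniform time grid; the common dt factor cancels in the
    ratio but is kept for clarity.
    """
    dt = t[1] - t[0]
    num = sum(np.sum(np.abs(modes_nr[k] - modes_sur[k]) ** 2)
              for k in modes_nr) * dt
    den = sum(np.sum(np.abs(modes_nr[k]) ** 2) for k in modes_nr) * dt
    return 0.5 * num / den
```

Restricting the dicts to a single mode yields the individual-mode errors reported below.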
In Fig. 6, we report both the full waveform and individual mode errors for
NRSur2dq1Ecc. For comparison, we also show the NR resolution errors. When
computing the full waveform error we use all modes included in the surrogate
model $(\ell,m)=(2,2),(3,2),(4,4)$ in Eq. (17). To compute errors for
individual modes, we restrict the sum in Eq. (17) to only the mode of
interest. The NRSur2dq1Ecc errors are comparable to the NR errors in Fig. 6.
However, we find that the surrogate errors have an extended tail around two
orders of magnitude larger than the largest NR mismatch. While this could
imply overfitting, we find that the highest mismatches correspond to the
region of parameter space adjacent to the high-eccentricity $e_{\rm ref}$
boundary, where few (if any) training waveforms are available. As will be discussed in
Sec. IV.1.2, the sparsely sampled region of the training domain around
$(e_{\rm ref}=0.2,l_{\rm ref}\lesssim 2)$ leads to this extended high-error
tail in Fig. 6.
We further note in Fig. 6 that the highest error in each mode corresponds to
the same point in the parameter space indicating consistency in our modeling.
Furthermore, as we only deal with mass ratio $q=1$ waveforms, the contribution
of the higher modes are expected to be negligible compared to the dominant
$(2,2)$ mode (see for example, Ref. Varma and Ajith (2017)). Therefore, even
though the $(3,2)$ and $(4,4)$ modes have larger relative errors compared to
the $(2,2)$ mode, their contribution to the total error is much smaller. This
can be verified by comparing the full waveform errors to the (2,2) mode errors
in Fig. 6.
Figure 8: Real part of the waveform modes for the case that results in the
largest flat noise mismatch ($\sim 0.04$) for NRSur2dq1Ecc (red dashed line)
in the left panel of Fig 7. We also show the corresponding NR waveform,
SXS:BBH:2308 (black solid line). The parameter values for this waveform are:
$e_{\rm ref}=0.176$ and $l_{\rm ref}=2.51$. Note that this plot is generated
using a trial surrogate that was not trained using this NR waveform data.
#### IV.1.2 Frequency domain mismatch with time/phase optimization
In this section, we estimate leave-one-out cross-validation errors by
computing mismatches between the NR waveform and the trial surrogate waveform
in the frequency domain. The mismatch between two waveforms,
$\mathpzc{h}_{1}$ and $\mathpzc{h}_{2}$, is computed from the noise-weighted inner product
$\displaystyle\left<\mathpzc{h}_{1},\mathpzc{h}_{2}\right>=4\mathrm{Re}\int_{f_{\mathrm{min}}}^{f_{\mathrm{max}}}\frac{\tilde{\mathpzc{h}}_{1}(f)\tilde{\mathpzc{h}}_{2}^{*}(f)}{S_{n}(f)}df,$
(18)
where $\tilde{\mathpzc{h}}(f)$ indicates the Fourier transform of the complex
strain $\mathpzc{h}(t)$, ∗ indicates a complex conjugation, $\mathrm{Re}$
indicates the real part, and $S_{n}(f)$ is the one-sided power spectral
density of a GW detector.
Before transforming the time domain waveform to the frequency domain, we first
taper the time domain waveform using a Planck window McKechan _et al._
(2010), and then zero-pad to the nearest power of two. The tapering at the
start of the waveform is done over $1.5$ cycles of the $(2,2)$ mode. The
tapering at the end is done over the last $20M$. Once we obtain the frequency
domain waveforms, we compute mismatches following the procedure described in
Appendix D of Ref. Blackman _et al._ (2017b). The mismatches are optimized
over shifts in time, polarization angle, and initial orbital phase. We compute
the mismatches at 37 points uniformly distributed on the sky of the source
frame, and use all available modes for the surrogate model.
We consider a flat noise curve $S_{n}(f)=1$ as well as the Advanced-LIGO
design sensitivity Zero-Detuned-HighP noise curve from Ref. LIGO Scientific
Collaboration (2018). We take $f_{\mathrm{min}}$ to be the frequency of the
$(2,2)$ mode at the end of the initial tapering window while
$f_{\mathrm{max}}$ is set at $4f^{\rm peak}_{22}$, where $f^{\rm peak}_{22}$
is the frequency of the $(2,2)$ mode at its peak. This ensures that the peak
frequencies of all modes considered in our model are captured well, and we
have confirmed that our mismatch values do not change for larger values of
$f_{\mathrm{max}}$. Note that when computing mismatches using Advanced LIGO
noise curve, for masses below $\sim 70M_{\odot}$, $f_{\mathrm{min}}$ is
greater than $20$Hz, meaning that the signal starts within the detector
sensitivity band.
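For the flat-noise case ($S_{n}(f)=1$), the optimization over time and overall phase shifts can be sketched with a single inverse FFT. This simplified version (function name ours) omits the tapering, zero-padding, polarization-angle optimization, and sky-location sum used in the actual comparison:

```python
import numpy as np

def flat_noise_mismatch(h1, h2):
    """Mismatch between two complex strains for flat noise, S_n(f) = 1,
    maximized over circular time shifts and an overall phase rotation.
    """
    H1, H2 = np.fft.fft(h1), np.fft.fft(h2)
    # The inverse FFT evaluates the inner product of Eq. (18) for every
    # discrete time shift; the absolute value maximizes over a constant
    # phase rotation of h2.
    corr = np.abs(np.fft.ifft(H1 * np.conj(H2)))
    norm = np.sqrt(np.sum(np.abs(H1) ** 2) * np.sum(np.abs(H2) ** 2))
    return 1.0 - len(h1) * np.max(corr) / norm
```

By construction the result lies in $[0,1]$ and vanishes when the two strains differ only by a time and phase shift.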
The mismatches computed using the flat noise curve are shown in the left panel
of Fig. 7. The histograms include mismatches for all 47 NR waveforms and
source-frame sky locations. We find that the typical surrogate mismatches are
$10^{-5}-10^{-3}$, which are comparable to but larger than the NR errors. As
an example, Fig. 8 shows the surrogate and NR waveforms for the case that
leads to the largest mismatch in the left panel of Fig. 7.
In Fig. 3, we show the dependence of the mismatches on the parameter space. It
can be easily recognized that the surrogate yields largest errors at and
around $(e_{\rm ref}=0.2,l_{\rm ref}\lesssim 2)$ where the training grid
becomes sparse. Further, when these sparse data points themselves are left out
when computing the cross-validation errors, the surrogate is effectively
extrapolating in parameter space. This indicates that the surrogate accuracy
could be improved by adding new NR simulations in this high-eccentricity
region. However, achieving target values of $e_{\rm ref}$ and $l_{\rm ref}$
has proven difficult. We return to this issue in the conclusions.
The right panel of Fig. 7 shows the mismatches computed using advanced LIGO
design sensitivity noise curve LIGO Scientific Collaboration (2018) for
different total masses $M$ of the binary. For each $M$, we compute the
mismatches for all 47 NR waveforms and source-frame sky locations and show the
distribution of mismatches using vertical histograms known as violin plots.
Over the mass range $20-180M_{\odot}$, the surrogate mismatches are at the
level of $\sim 10^{-4}-10^{-3}$, but with an extended tail as before. However,
we note that these errors are typically smaller than the mismatches for other
eccentric waveform models Huerta _et al._ (2018); Chen _et al._ (2020);
Chiaramello and Nagar (2020).
Figure 9: The absolute values of different spherical-harmonic modes are shown
as dashed (solid) curves for the surrogate (NR) for SXS:BBH:2308, for which
the surrogate produces largest flat noise mismatch ($\sim 0.04$). The
parameter values for this waveform are: $e_{\rm ref}=0.176$ and $l_{\rm
ref}=2.51$. Mode mixing for the $(3,2)$ mode is clearly seen in the ringdown
signal of the NR waveform and is accurately reproduced by the surrogate.
Figure 10: Leave-one-out error histograms of NRSur2dq1EccRemnant (red) for
the remnant mass $m_{f}$ (left) and remnant spin $\chi_{fz}$ (right). For
comparison we plot the NR errors (black), estimated by comparing the two
highest resolution NR simulations, and errors for the noneccentric model
NRSur3dq8cRemnant (green).
### IV.2 Mode mixing
NR waveforms are extracted as spin-weighted spherical harmonic modes Newman
and Penrose (1966); Goldberg _et al._ (1967). However, during the ringdown,
the system can be considered a single Kerr black hole perturbed by quasinormal
modes; perturbation theory tells us that the angular eigenfunctions for these
modes are the spin-weighted _spheroidal_ harmonics Teukolsky (1973, 1972). A
spherical harmonic mode $\mathpzc{h}_{\ell m}$ can be written as a linear
combination of all spheroidal harmonic modes with the same $m$ index. During
the ringdown, each (spheroidal-harmonic) quasinormal mode decays exponentially
in time, but each spherical-harmonic mode has a more complicated behavior
because it is a superposition of multiple spheroidal-harmonic modes (of the
same $m$ index) with different decay rates. This more complicated behavior is
referred to as mode mixing, since power flows between different spherical-
harmonic modes Berti and Klein (2014). This mixing is particularly evident in
the $(3,2)$ mode as significant power of the dominant $(2,2)$ spherical-
harmonic mode can leak into the $(3,2)$ spherical-harmonic mode. As the
surrogate accurately reproduces the spherical harmonic modes from the NR
simulations, it is also expected to capture the effect of mode mixing without
any additional effort Varma _et al._ (2019a). We demonstrate this for an
example case in Fig. 9 where we plot the amplitude of individual modes of the
waveform during the ringdown. We show that the mode mixing in the $(3,2)$ mode
is effectively recovered by the surrogate model.
### IV.3 NRSur2dq1EccRemnant errors
In addition to the waveform surrogate, we also build a remnant surrogate
model, NRSur2dq1EccRemnant, that predicts the mass and spin of the final BH
left behind after the merger. This is the first such model for eccentric BBHs
(but see e.g. Refs. Sopuerta _et al._ (2007); Sperhake _et al._ (2020)).
Figure 10 shows the cross-validation errors of NRSur2dq1EccRemnant in
predicting the remnant mass and spin. We find that NRSur2dq1EccRemnant can
predict the final mass and spin with an accuracy of $\lesssim 5\times
10^{-4}M$ and $\lesssim 2\times 10^{-3}$ respectively. We further compute the
errors for a noneccentric remnant model, NRSur3dq8Remnant Varma _et al._
(2019c), when compared against the same eccentric NR simulations, finding that
errors in NRSur3dq8Remnant are comparable with NRSur2dq1EccRemnant errors.
This suggests that noneccentric remnant models may be sufficient for equal-
mass nonspinning binaries with eccentricities $e_{\rm ref}\leq 0.2$. However,
we expect such models to disagree with eccentric simulations in the more
general case of unequal-mass, spinning binaries (see, e.g., Ref. Ramos-
Buades _et al._ (2020b)).
Figure 11: Mismatches against NR for the NRSur2dq1Ecc+ model (a simple
extension of NRSur2dq1Ecc) when the surrogate is evaluated beyond its training
parameter range ($q=1$). The mismatches are shown as a function of the binary
total mass $M$ (at $\iota=\pi/3$, $\varphi_{0}=0.0$), and are computed using
the advanced LIGO design sensitivity noise curve. We show mismatches for $q=2$
($q=3$) as solid lines (dashed lines). We use star markers to denote waveforms
with $e_{\rm ref}$ smaller than $\sim 0.05$ and diamond markers for the rest.
All eccentricity values are computed at a reference time of $t_{\rm ref}=-2000M$.
Figure 12: We show the NRSur2dq1Ecc+ prediction (red dashed
line) beyond training range ($q=1$) of the surrogate for the case that results
in the largest mismatch (Fig. 11) in the region defined by $e_{\rm ref}$ (at
$t_{\rm ref}=-2000M$) smaller than $\sim 0.05$. We also show the corresponding
NR waveform SXS:BBH:1371 (black solid line). The parameters for this waveform
are: $q=3$, $e_{\rm ref}=0.050$ and $l_{\rm ref}=2.45$ (at $t_{\rm
ref}=-2000M$).
### IV.4 Extending NRSur2dq1Ecc to comparable mass systems
We now assess the performance of NRSur2dq1Ecc when evaluated beyond its
training parameter range ($q=1$). To generate surrogate predictions at a given
$(q,e_{\rm ref},l_{\rm ref})$, we first evaluate NRSur2dq1Ecc at
$(q\\!=\\!1,e_{\rm ref},l_{\rm ref})$ and refer to the output as
$\mathpzc{h}^{S}_{lm}(q\\!=\\!1,e_{\rm ref},l_{\rm ref})$. We then evaluate
the noneccentric surrogate model NRHybSur3dq8 Varma _et al._ (2019a) at both the given mass ratio $q$ and at $q=1$, and refer to the outputs as
$\mathpzc{h}_{\ell m}^{0}(q)$ and $\mathpzc{h}_{\ell m}^{0}(q=1)$. We then
compute the difference in amplitude and phase between
$\mathpzc{h}^{S}_{lm}(q\\!=\\!1,e_{\rm ref},l_{\rm ref})$ and
$\mathpzc{h}_{\ell m}^{0}(q=1)$:
$\displaystyle\Delta A^{S}_{lm}(q\\!=\\!1,e_{\rm ref},l_{\rm ref})$ $\displaystyle=A^{S}_{lm}(q\\!=\\!1,e_{\rm ref},l_{\rm ref})-A^{0}_{lm}(q\\!=\\!1)\,,$ (19)
$\displaystyle\Delta\phi^{S}_{lm}(q\\!=\\!1,e_{\rm ref},l_{\rm ref})$ $\displaystyle=\phi^{S}_{lm}(q\\!=\\!1,e_{\rm ref},l_{\rm ref})-\phi^{0}_{lm}(q\\!=\\!1).$ (20)
Even though these amplitude and phase differences are computed at $q=1$, we
treat them as a proxy for the modulations due to eccentricity at any $q$. We
then add these modulations to the amplitude and phase of $\mathpzc{h}_{\ell
m}^{0}(q)$, the noneccentric surrogate model evaluated at the given $q$, to
get the full amplitude and phase:
$\displaystyle A^{S}_{lm}(q,e_{\rm ref},l_{\rm ref})$ $\displaystyle=\Delta A^{S}_{lm}(q\\!=\\!1,e_{\rm ref},l_{\rm ref})+A^{0}_{lm}(q)\,,$ (21)
$\displaystyle\phi^{S}_{lm}(q,e_{\rm ref},l_{\rm ref})$ $\displaystyle=\Delta\phi^{S}_{lm}(q\\!=\\!1,e_{\rm ref},l_{\rm ref})+\phi^{0}_{lm}(q).$ (22)
The final surrogate prediction, which we view as a new, simple model
NRSur2dq1Ecc+, is then:
$\mathpzc{h}_{\ell m}^{S}(q,e_{\rm ref},l_{\rm ref})=A^{S}_{lm}(q,e_{\rm
ref},l_{\rm ref})e^{-\mathrm{i}\phi^{S}_{lm}(q,e_{\rm ref},l_{\rm ref})}.$
(23)
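The construction of Eqs. (19)-(23) can be sketched as follows. The `toy_mode` function below is a hypothetical stand-in for evaluating the actual NRSur2dq1Ecc and NRHybSur3dq8 models through their own interfaces:

```python
import numpy as np

t = np.linspace(-2000.0, 0.0, 4001)  # time in units of M

def toy_mode(q, ecc=0.0):
    """Hypothetical stand-in for a (2,2)-mode amplitude and phase; the real
    scheme evaluates NRSur2dq1Ecc (eccentric, q=1) and NRHybSur3dq8
    (noneccentric, any q)."""
    amp = (1.0 / q) * np.exp(t / 5000.0) * (1.0 + 0.1 * ecc * np.cos(t / 30.0))
    phase = 0.1 * t + 0.05 * ecc * np.sin(t / 30.0)
    return amp, phase

# Eqs. (19)-(20): eccentric modulations measured at q = 1
A_ecc_q1, phi_ecc_q1 = toy_mode(1.0, ecc=0.1)
A_cir_q1, phi_cir_q1 = toy_mode(1.0, ecc=0.0)
dA, dphi = A_ecc_q1 - A_cir_q1, phi_ecc_q1 - phi_cir_q1

# Eqs. (21)-(22): add the q = 1 modulations to the noneccentric model at the target q
A_cir_q, phi_cir_q = toy_mode(2.0, ecc=0.0)
A_plus, phi_plus = dA + A_cir_q, dphi + phi_cir_q

# Eq. (23): reassemble the complex mode of the extended model NRSur2dq1Ecc+
h_plus = A_plus * np.exp(-1j * phi_plus)
```

The key design choice is that the eccentric modulations are measured once at $q=1$ and treated as mass-ratio independent, which is why the scheme is only expected to be reliable for small eccentricities and moderate $q$.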
Figure 13: Importance of mean anomaly for waveform modeling and data
analysis. Left panel: Flat noise mismatch (optimized over time, phase and
polarization angle shifts) between NRSur2dq1Ecc predictions with $l_{\rm
ref}=0.0$ and $l_{\rm ref}=\Delta l_{\rm ref}$, at fixed $q=1$ and $e_{\rm
ref}=0.1$. While the mismatch, as expected, is $\sim 0$ for $\Delta l_{\rm
ref}=0.0$ and $\Delta l_{\rm ref}=2\pi$, it reaches values $\sim 0.1$ near
$\Delta l_{\rm ref}=\pi$. Right panel: The $(2,2)$ amplitude of the waveforms
leading to the maximum mismatch, i.e. $l_{\rm ref}=0$ and $l_{\rm ref}=\pi$.
These differences cannot be accounted for by a time or phase shift; the mean anomaly is therefore an important parameter to include in waveform modeling and data analysis of eccentric binaries.
To assess the accuracy of NRSur2dq1Ecc+ we compare against eight publicly
available eccentric NR simulations with $q=2$ and $q=3$ Hinder _et al._
(2018); Boyle _et al._ (2019). These NR waveforms are shorter in length than
the ones used to train our surrogate model. To ensure fair comparison between
surrogate predictions and NR waveforms, we build a test surrogate which is parameterized by $e_{\rm ref}$ and $l_{\rm ref}$ at $t_{\rm ref}=-2000M$. (While building the test surrogate, we exclude SXS:BBH:2294 ($e_{\rm ref}=7\times 10^{-4}$, $l_{\rm ref}=5.766$ at $t_{\rm ref}=-5500M$) from the training set, as the binary circularizes enough by $t=-2000M$ that our eccentricity estimator defined in Eq. (4) becomes unreliable.)
In Fig. 11, we show mismatches computed using the advanced LIGO design
sensitivity noise curve, between the NRSur2dq1Ecc+ model and eccentric NR data
at $q=2,3$. We include all modes available in the model while computing the
mismatch. For simplicity, we only consider a single point in the source-frame
sky, with an inclination angle of $\pi/3$. For $e_{\rm ref}$ (at $t_{\rm
ref}=-2000M$) smaller than $\sim 0.05$, mismatches are always smaller than
$10^{-2}$. As we increase $e_{\rm ref}$ (at $t_{\rm ref}=-2000M$) to $0.09$,
the mismatches become significantly worse, especially for $q=3$, reaching
values $\sim 10^{-1}$. As an example, Fig. 12 shows the surrogate prediction
(and NR waveform) for the case that leads to the largest mismatch in Fig. 11
with $e_{\rm ref}$ (at $t_{\rm ref}=-2000M$) smaller than $\sim 0.05$.
This suggests that our scheme to extend the surrogate model to comparable mass
systems produces reasonable waveforms for small eccentricities. However, we
advise caution with extrapolation-type procedures in general.
### IV.5 Importance of mean anomaly for data analysis
Many existing waveform models Huerta _et al._ (2018); Chen _et al._ (2020);
Chiaramello and Nagar (2020); Cao and Han (2017) for eccentric binaries
parameterize eccentric characteristics of the waveform by only one parameter
$e_{\rm ref}$ while keeping $l_{\rm ref}$ fixed. We, however, use both $e_{\rm
ref}$ and $l_{\rm ref}$ as parameters in our model. We find that not allowing
$l_{\rm ref}$ as an independent parameter results in large modeling error,
indicating that the mean anomaly is important to consider when modeling the GW
signal from eccentric binaries.
To demonstrate the importance of mean anomaly also in data analysis, we
present a simple study. We generate NRSur2dq1Ecc predictions
$\mathpzc{h}_{\ell m}^{S}(q=1,e_{\rm ref}=0.1,l_{\rm ref})$ with $l_{\rm
ref}\in[0.0,2\pi]$. The left panel of Fig. 13 shows mismatches between the
waveform at $l_{\rm ref}=0$ and various $l_{\rm ref}$, parametrized by $\Delta
l_{\rm ref}=l_{\rm ref}-0$. For simplicity, we only consider a single point in
the source-frame sky, at $\iota=\pi/3$, $\varphi_{0}=0.0$. As expected, we
find that $\Delta l_{\rm ref}=0.0$ and $\Delta l_{\rm ref}=2\pi$ produce
identical waveforms. However, the mismatch reaches a value of $\sim 0.1$ at
$\Delta l_{\rm ref}=\pi$. As we already account for allowed time and frame
shifts when computing the mismatch, ignoring this difference can lead to
modeling errors or biased parameter estimation. In the right panel of Fig. 13,
we show the waveform amplitude for the cases with $l_{\rm ref}=0$ and $l_{\rm
ref}=\pi$. The clear differences in the amplitude reinforce our assertion that
this mismatch cannot be accounted for by a time or frame shift.
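A schematic version of this study is sketched below. The flat-noise mismatch maximizes the overlap over circular time shifts (via an FFT cross-correlation) and over a constant phase (via the modulus); the toy chirp is a hypothetical stand-in for NRSur2dq1Ecc output, with the mean anomaly entering through the phase of the eccentric amplitude modulation:

```python
import numpy as np

def flat_noise_mismatch(h1, h2):
    """1 - overlap for complex strain series, with the overlap maximized over
    circular time shifts (FFT cross-correlation) and a constant phase (modulus)."""
    corr = np.fft.ifft(np.fft.fft(h1) * np.conj(np.fft.fft(h2)))
    norm = np.sqrt((np.abs(h1) ** 2).sum() * (np.abs(h2) ** 2).sum())
    return 1.0 - np.abs(corr).max() / norm

t = np.linspace(-2000.0, -100.0, 4000)  # time in units of M

def toy_strain(l_ref):
    """Hypothetical chirp whose eccentric amplitude modulation is phased by the
    mean anomaly l_ref; a stand-in for evaluating NRSur2dq1Ecc."""
    amp = 1.0 + 0.1 * np.cos(0.2 * t + l_ref)
    return amp * np.exp(-1j * 1e-4 * t**2)

mm_same = flat_noise_mismatch(toy_strain(0.0), toy_strain(2 * np.pi))  # ~0
mm_opp = flat_noise_mismatch(toy_strain(0.0), toy_strain(np.pi))       # nonzero
```

Because the carrier frequency evolves, the time shift that would realign the modulation decoheres the chirp instead, so the residual mismatch survives the maximization, mirroring the behavior in Fig. 13.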
## V Conclusion
We present NRSur2dq1Ecc, the first eccentric NR surrogate waveform model. This
model is trained on 47 NR waveforms of equal-mass nonspinning BBH systems with
eccentricity $e_{\rm ref}\leq 0.2$, defined at a reference time $t_{\rm
ref}=-5500M$ before the waveform peak. The model includes the $(2,2)$, $(3,2)$
and $(4,4)$ spin-weighted spherical harmonic modes. Due to the symmetries of
the equal-mass, nonspinning systems considered here, this is equivalent to
including all $\ell\leq 3$ and $(4,\pm 4)$ modes, except the $m=0$ modes. This
is the first eccentric BBH model that is directly trained on eccentric NR
simulations and does not require that the binary circularizes before merger.
We also present NRSur2dq1EccRemnant, the first NR surrogate model for the
final BH properties of eccentric BBH mergers. This model is also trained on
the same set of simulations. We use Gaussian process regression to construct
the parametric fits for both models. Both NRSur2dq1Ecc and NRSur2dq1EccRemnant
will be made publicly available in the near future.
Through a leave-one-out cross-validation study, we show that NRSur2dq1Ecc
accurately reproduces NR waveforms with a typical mismatch of $\sim 10^{-3}$.
We further demonstrate that our remnant model, NRSur2dq1EccRemnant, can
accurately predict the final mass and spin of the merger remnant with errors
$\lesssim 5\times 10^{-4}M$ and $\lesssim 2\times 10^{-3}$ respectively. We
show that despite being trained on equal-mass binaries, NRSur2dq1Ecc can be
reasonably extended up to mass ratio $q\approx 3$ with mismatches $\simeq
10^{-2}$ for eccentricities $e_{\rm ref}\lesssim 0.05$ at $t_{\rm
ref}=-2000M$. Finally, we demonstrate that the mean anomaly, which is often
ignored in waveform modeling and parameter estimation of eccentric binaries,
is an important parameter to include. Exclusion of mean anomaly can result in
poor modeling accuracy and/or biased parameter inference.
The NR simulations used for this work were performed using the Spectral
Einstein Code (SpEC) SpE . SpEC’s development efforts have been primarily
focused on evolutions of binary black hole systems in quasi-circular orbits
Boyle _et al._ (2019). To efficiently generate accurate training data for
high eccentricity systems, it may be necessary to improve certain algorithmic
subroutines. For example, as noted in Sec. III.2.3, we found it difficult to
achieve target values of $(e_{\rm ref},l_{\rm ref})$ at a reference time
before merger. We also noticed that the waveform’s numerical error was
noticeably larger near pericenters, suggesting better adaptive mesh refinement
algorithms Szilagyi _et al._ (2009) may be necessary for highly eccentric
simulations.
We have also explored several data decomposition techniques and
parametrizations for building eccentric NR surrogate models, which can guide
strategies for future models. Our final framework for building eccentric NR
surrogates is quite general, and we expect that it can be applied
straightforwardly to higher dimensional parameter spaces including unequal
masses and aligned-spins. We leave these explorations to future work.
###### Acknowledgements.
We thank Geraint Pratten for comments on the manuscript. We thank Nur Rifat
and Feroz Shaik for helpful discussions. We thank Katerina Chatziioannou for
the implementation of an improved eccentricity control system used in many of
our simulations. T.I. is supported by NSF grant PHY-1806665 and a doctoral
fellowship provided by UMassD Graduate Studies. V.V. is supported by a Klarman
Fellowship at Cornell, the Sherman Fairchild Foundation, and NSF grants
PHY–170212 and PHY–1708213 at Caltech. J.L. is supported by the Caltech Summer
Undergraduate Research Fellowship Program and the Rose Hills Foundation. S.F.
is supported by NSF grants No. PHY-1806665 and No. DMS-1912716. G.K.
acknowledges research support from NSF Grants No. PHY-2106755 and No.
DMS-1912716. M.S. is supported by Sherman Fairchild Foundation and by NSF
Grants PHY-2011961, PHY-2011968, and OAC-1931266 at Caltech. D.G. is supported
by European Union H2020 ERC Starting Grant No. 945155–GWmining, Leverhulme
Trust Grant No. RPG-2019-350, and Royal Society Grant No. RGS-R2-202004. L.K.
is supported by the Sherman Fairchild Foundation, and NSF Grants PHY-1912081
and OAC-1931280 at Cornell. A portion of this work was carried out while a
subset of the authors were in residence at the Institute for Computational and
Experimental Research in Mathematics (ICERM) in Providence, RI, during the
Advances in Computational Relativity program. ICERM is supported by the
National Science Foundation under Grant No. DMS-1439786. Simulations were
performed on the Wheeler cluster at Caltech, which is supported by the Sherman
Fairchild Foundation and by Caltech; and on CARNiE at the Center for
Scientific Computing and Visualization Research (CSCVR) of UMassD, which is
supported by the ONR/DURIP Grant No. N00014181255. Computations for building
the model were performed on both CARNiE and Wheeler.
## Appendix
In this Appendix, we describe various alternate modeling strategies we pursued
before deciding on the formalism presented in the main text.
## Appendix A Choice of data decomposition
In this work, we have modeled the amplitude ($A_{22}$) and phase ($\phi_{22}$)
of the $(2,2)$ mode by modeling the residual ($\Delta A_{22}$,
$\Delta\phi_{22}$) of these quantities with respect to a quasicircular NR
waveform (cf. Sec. III.3). Alternatively, one could instead model the
amplitude and frequency (or their residuals), and then integrate the frequency
to obtain the phase. The frequency of the (2,2) mode is given by
$\displaystyle\omega_{22}=\frac{d\phi_{22}}{dt},$ (24)
where $\phi_{22}$ is defined in Eq. (10). The corresponding residual is given
by:
$\displaystyle\Delta\omega_{22}=\omega_{22}-\omega_{22}^{0},$ (25)
where $\omega_{22}^{0}$ is the frequency of $(2,2)$ mode for the quasicircular
NR waveform.
We, therefore, explore four different data decomposition strategies for the
$(2,2)$ mode, summarized below:
* Model $\\{A_{22},\phi_{22}\\}$ directly.
* Model $\\{\Delta A_{22},\Delta\phi_{22}\\}$ and then add them to the amplitude and phase of the quasicircular NR waveform to obtain $\\{A_{22},\phi_{22}\\}$.
* Model $\\{A_{22},\omega_{22}\\}$ and integrate the frequency data to get $\\{A_{22},\phi_{22}\\}$.
* Model $\\{\Delta A_{22},\Delta\omega_{22}\\}$; add them to the amplitude and frequency of the quasicircular NR waveforms, and finally integrate the frequency data to obtain $\\{A_{22},\phi_{22}\\}$.
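The phase-to-frequency step of Eqs. (24)-(25) and the inverse integration used by the last two strategies can be sketched as follows; the two phase series are hypothetical stand-ins for the NR data:

```python
import numpy as np

t = np.linspace(-5500.0, 0.0, 5501)  # time in units of M
dt = t[1] - t[0]

# Hypothetical stand-ins for the eccentric and quasicircular (2,2) phases
phi22 = 0.3 * t + 0.5 * np.sin(t / 40.0)  # eccentric: secular trend + modulation
phi22_0 = 0.3 * t                          # quasicircular

# Eq. (24): frequency as the time derivative of the phase
omega22 = np.gradient(phi22, dt)
omega22_0 = np.gradient(phi22_0, dt)

# Eq. (25): residual frequency relative to the quasicircular waveform
d_omega22 = omega22 - omega22_0

# Recovering the phase requires a trapezoidal integration of the frequency
# plus an integration constant fixed at the start of the waveform
d_phi22 = np.concatenate(
    [[0.0], np.cumsum(0.5 * (d_omega22[1:] + d_omega22[:-1]) * dt)]
)
phi22_rec = phi22_0 + d_phi22 + (phi22[0] - phi22_0[0])
```

The round trip phase → frequency → phase is exact only up to differentiation and quadrature errors, which is one plausible contributor to the larger $\mathcal{E}$ found for the frequency-based strategies in Fig. 14.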
Figure 14: Histograms of surrogate errors (defined in Eq. (17)) for the four
different decomposition strategies we consider. We find that modeling the
residual amplitude $\Delta A_{22}$ and residual phase $\Delta\phi_{22}$ yields
the least errors.
In order to explore the effectiveness of these strategies, we build a separate
surrogate model using each strategy. When building the frequency surrogates
($\omega_{22}$ or $\Delta\omega_{22}$) we use a basis tolerance of $10^{-3}$
rad/$M$. For $A_{22}$ ($\phi_{22}$) we use the same tolerance as used for
$\Delta A_{22}$ ($\Delta\phi_{22}$) in Section III.4. We compute the
normalized $L_{2}$-norm between the NR data and each surrogate approximation
using Eq. (17). In Fig. 14, we show the surrogate errors for all four
different strategies. We find that modeling the frequency $\omega_{22}$ or
residual frequency $\Delta\omega_{22}$ yields $\mathcal{E}$ errors at least two to three orders of magnitude larger than modeling the phase $\phi_{22}$ or $\Delta\phi_{22}$. Furthermore, modeling the residual amplitude $\Delta
A_{22}$ proves to be slightly more accurate than the case where we model the
amplitude $A_{22}$ directly. Therefore, in the main text, we build surrogate
models of the residual $\\{\Delta A_{22},\Delta\phi_{22}\\}$ (cf. Sec. III.3).
## Appendix B Choice of fit parameterization
When building the surrogate models in main text, fits across parameter space
are required for the waveform model as well as the remnant model (cf. Sec.
III.4). These fits are parameterized by the eccentricity ($e_{\rm ref}$) and
mean anomaly $(l_{\rm ref})$ at the reference time $t_{\rm ref}$. While
$\\{e_{\rm ref},l_{\rm ref}\\}$ is a natural choice, we also explore the
following choices of parameterizations:
* $\\{e_{\rm ref},l_{\rm ref}\\}$,
* $\\{e_{\rm ref},\sin(l_{\rm ref}/2)\\}$,
* $\\{\log_{10}(1-e_{\rm ref}),l_{\rm ref}\\}$,
* $\\{\log_{10}(1-e_{\rm ref}),\sin(l_{\rm ref}/2)\\}$.
Here $\sin(l_{\rm ref}/2)$ is considered because it maps the periodic parameter $l_{\rm ref}\in[0,2\pi)$ to the range $[0,1]$, while still mapping the physically equivalent points $l_{\rm ref}=0$ and $l_{\rm ref}=2\pi$ to the same point ($\sin(l_{\rm ref}/2)=0$). The same is not true
for other possible parameterizations such as $\sin(l_{\rm ref})$, $\cos(l_{\rm
ref})$, or $\cos(l_{\rm ref}/2)$. $\log_{10}(1-e_{\rm ref})$ is considered
because it flattens the spread in eccentricity, which can be useful if the
eccentricity varies over several orders of magnitude in the NR dataset.
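These candidate coordinate maps can be written as a small helper; the function name and interface below are illustrative, not part of the surrogate code:

```python
import numpy as np

def fit_coords(e_ref, l_ref, e_map="e", l_map="l"):
    """Map (e_ref, l_ref) into one of the candidate fit parameterizations.

    e_map: "e" keeps e_ref itself, "log" uses log10(1 - e_ref);
    l_map: "l" keeps l_ref itself, "sin" uses sin(l_ref / 2)."""
    e = np.log10(1.0 - e_ref) if e_map == "log" else e_ref
    l = np.sin(l_ref / 2.0) if l_map == "sin" else l_ref
    return e, l

# sin(l_ref/2) sends the physically equivalent points l_ref = 0 and
# l_ref = 2*pi to the same coordinate value, 0
x0 = fit_coords(0.1, 0.0, l_map="sin")
x1 = fit_coords(0.1, 2 * np.pi, l_map="sin")
```

A coordinate that identifies $l_{\rm ref}=0$ with $l_{\rm ref}=2\pi$ spares the fits from having to learn the periodicity themselves.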
Figure 15: Histograms of the error for the full waveform, for the six
different fit parameterizations we consider.
Similarly to the previous section, to explore the effectiveness of these
strategies, we build a separate surrogate model using each strategy. Here,
however, we consider all included modes, $(\ell,m)=(2,2),(3,2),(4,4)$, and
evaluate $\mathcal{E}$ errors [cf. Eq. (17)]. In Fig. 15, we show
$\mathcal{E}$ errors for each parameterization strategy. We find that while
the alternative strategies using either $\log_{10}(1-e_{\rm ref})$, or
$\sin(l_{\rm ref}/2)$, or both, may be comparable, none of them results in
errors smaller than the original choice $\\{e_{\rm ref},l_{\rm ref}\\}$. As we
do not achieve a noticeable improvement with these alternative parameterizations, we retain the original choice $\\{e_{\rm ref},l_{\rm ref}\\}$ in the main text.
## References
* Abbott _et al._ (2019a) B. P. Abbott _et al._ (LIGO Scientific, Virgo), “GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs,” Phys. Rev. X9, 031040 (2019a), arXiv:1811.12907 [astro-ph.HE] .
* Abbott _et al._ (2020a) R. Abbott _et al._ (LIGO Scientific, Virgo), “GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run,” (2020a), arXiv:2010.14527 [gr-qc] .
* Aasi _et al._ (2015) J. Aasi _et al._ (LIGO Scientific), “Advanced LIGO,” Class. Quant. Grav. 32, 074001 (2015), arXiv:1411.4547 [gr-qc] .
* Acernese _et al._ (2015) F. Acernese _et al._ (Virgo), “Advanced Virgo: a second-generation interferometric gravitational wave detector,” Class. Quant. Grav. 32, 024001 (2015), arXiv:1408.3978 [gr-qc] .
* Field _et al._ (2014) S. E. Field, C. R. Galley, J. S. Hesthaven, J. Kaye, and M. Tiglio, “Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models,” Phys. Rev. X 4, 031006 (2014), arXiv:1308.3565 [gr-qc] .
* Pürrer (2014) Michael Pürrer, “Frequency domain reduced order models for gravitational waves from aligned-spin compact binaries,” Class. Quant. Grav. 31, 195010 (2014), arXiv:1402.4146 [gr-qc] .
* Blackman _et al._ (2015) Jonathan Blackman, Scott E. Field, Chad R. Galley, Béla Szilágyi, Mark A. Scheel, Manuel Tiglio, and Daniel A. Hemberger, “Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models,” Phys. Rev. Lett. 115, 121102 (2015), arXiv:1502.07758 [gr-qc] .
* Blackman _et al._ (2017a) Jonathan Blackman, Scott E. Field, Mark A. Scheel, Chad R. Galley, Christian D. Ott, Michael Boyle, Lawrence E. Kidder, Harald P. Pfeiffer, and Béla Szilágyi, “Numerical relativity waveform surrogate model for generically precessing binary black hole mergers,” Phys. Rev. D96, 024058 (2017a), arXiv:1705.07089 [gr-qc] .
* Blackman _et al._ (2017b) Jonathan Blackman, Scott E. Field, Mark A. Scheel, Chad R. Galley, Daniel A. Hemberger, Patricia Schmidt, and Rory Smith, “A Surrogate Model of Gravitational Waveforms from Numerical Relativity Simulations of Precessing Binary Black Hole Mergers,” Phys. Rev. D95, 104023 (2017b), arXiv:1701.00550 [gr-qc] .
* Varma _et al._ (2019a) Vijay Varma, Scott E. Field, Mark A. Scheel, Jonathan Blackman, Lawrence E. Kidder, and Harald P. Pfeiffer, “Surrogate model of hybridized numerical relativity binary black hole waveforms,” Phys. Rev. D99, 064045 (2019a), arXiv:1812.07865 [gr-qc] .
* Chua _et al._ (2019) Alvin J.K. Chua, Chad R. Galley, and Michele Vallisneri, “Reduced-order modeling with artificial neurons for gravitational-wave inference,” Phys. Rev. Lett. 122, 211101 (2019), arXiv:1811.05491 [astro-ph.IM] .
* Lackey _et al._ (2019) Benjamin D. Lackey, Michael Pürrer, Andrea Taracchini, and Sylvain Marsat, “Surrogate model for an aligned-spin effective one body waveform model of binary neutron star inspirals using Gaussian process regression,” Phys. Rev. D 100, 024002 (2019), arXiv:1812.08643 [gr-qc] .
* Varma _et al._ (2019b) Vijay Varma, Scott E. Field, Mark A. Scheel, Jonathan Blackman, Davide Gerosa, Leo C. Stein, Lawrence E. Kidder, and Harald P. Pfeiffer, “Surrogate models for precessing binary black hole simulations with unequal masses,” Phys. Rev. Research. 1, 033015 (2019b), arXiv:1905.09300 [gr-qc] .
* Khan and Green (2020) Sebastian Khan and Rhys Green, “Gravitational-wave surrogate models powered by artificial neural networks: The ANN-Sur for waveform generation,” (2020), arXiv:2008.12932 [gr-qc] .
* Williams _et al._ (2020) Daniel Williams, Ik Siong Heng, Jonathan Gair, James A. Clark, and Bhavesh Khamesra, “Precessing numerical relativity waveform surrogate model for binary black holes: A Gaussian process regression approach,” Phys. Rev. D 101, 063011 (2020), arXiv:1903.09204 [gr-qc] .
* Abbott _et al._ (2019b) B.P. Abbott _et al._ (LIGO Scientific, Virgo), “Search for Eccentric Binary Black Hole Mergers with Advanced LIGO and Advanced Virgo during their First and Second Observing Runs,” Astrophys. J. 883, 149 (2019b), arXiv:1907.09384 [astro-ph.HE] .
* Romero-Shaw _et al._ (2019) Isobel M. Romero-Shaw, Paul D. Lasky, and Eric Thrane, “Searching for Eccentricity: Signatures of Dynamical Formation in the First Gravitational-Wave Transient Catalogue of LIGO and Virgo,” Mon. Not. Roy. Astron. Soc. 490, 5210–5216 (2019), arXiv:1909.05466 [astro-ph.HE] .
* Lenon _et al._ (2020) Amber K. Lenon, Alexander H. Nitz, and Duncan A. Brown, “Measuring the eccentricity of GW170817 and GW190425,” Mon. Not. Roy. Astron. Soc. 497, 1966–1971 (2020), arXiv:2005.14146 [astro-ph.HE] .
* Yun _et al._ (2020) Qian-Yun Yun, Wen-Biao Han, Gang Wang, and Shu-Cheng Yang, “Investigating eccentricities of the binary black hole signals from the LIGO-Virgo catalog GWTC-1,” (2020), arXiv:2002.08682 [gr-qc] .
* Wu _et al._ (2020) Shichao Wu, Zhoujian Cao, and Zong-Hong Zhu, “Measuring the eccentricity of binary black holes in GWTC-1 by using the inspiral-only waveform,” Mon. Not. Roy. Astron. Soc. 495, 466–478 (2020), arXiv:2002.05528 [astro-ph.IM] .
* Nitz _et al._ (2019) Alexander H. Nitz, Amber Lenon, and Duncan A. Brown, “Search for Eccentric Binary Neutron Star Mergers in the first and second observing runs of Advanced LIGO,” Astrophys. J. 890, 1 (2019), arXiv:1912.05464 [astro-ph.HE] .
* Ramos-Buades _et al._ (2020a) Antoni Ramos-Buades, Shubhanshu Tiwari, Maria Haney, and Sascha Husa, “Impact of eccentricity on the gravitational wave searches for binary black holes: High mass case,” Phys. Rev. D 102, 043005 (2020a), arXiv:2005.14016 [gr-qc] .
* Peters (1964) P. C. Peters, “Gravitational radiation and the motion of two point masses,” Phys. Rev. 136, B1224–B1232 (1964).
* Giesler _et al._ (2018) Matthew Giesler, Drew Clausen, and Christian D. Ott, “Low-mass X-ray binaries from black-hole retaining globular clusters,” Mon. Not. Roy. Astron. Soc. 477, 1853–1879 (2018), arXiv:1708.05915 [astro-ph.HE] .
* Rodriguez _et al._ (2018a) Carl L. Rodriguez _et al._ , Phys. Rev. D98, 123005 (2018a), arXiv:1811.04926 [astro-ph.HE] .
* O’Leary _et al._ (2006) Ryan M. O’Leary, Frederic A. Rasio, John M. Fregeau, Natalia Ivanova, and Richard W. O’Shaughnessy, “Binary mergers and growth of black holes in dense star clusters,” Astrophys. J. 637, 937–951 (2006), arXiv:astro-ph/0508224 .
* Samsing (2018) Johan Samsing, “Eccentric Black Hole Mergers Forming in Globular Clusters,” Phys. Rev. D 97, 103014 (2018), arXiv:1711.07452 [astro-ph.HE] .
* Fragione and Kocsis (2019) Giacomo Fragione and Bence Kocsis, “Black hole mergers from quadruples,” Mon. Not. Roy. Astron. Soc. 486, 4781–4789 (2019), arXiv:1903.03112 [astro-ph.GA] .
* Kumamoto _et al._ (2019) Jun Kumamoto, Michiko S. Fujii, and Ataru Tanikawa, “Gravitational-Wave Emission from Binary Black Holes Formed in Open Clusters,” Mon. Not. Roy. Astron. Soc. 486, 3942–3950 (2019), arXiv:1811.06726 [astro-ph.HE] .
* O’Leary _et al._ (2009) Ryan M. O’Leary, Bence Kocsis, and Abraham Loeb, “Gravitational waves from scattering of stellar-mass black holes in galactic nuclei,” Mon. Not. Roy. Astron. Soc. 395, 2127–2146 (2009), arXiv:0807.2638 [astro-ph] .
* Gondán and Kocsis (2020) László Gondán and Bence Kocsis, “High Eccentricities and High Masses Characterize Gravitational-wave Captures in Galactic Nuclei as Seen by Earth-based Detectors,” (2020), arXiv:2011.02507 [astro-ph.HE] .
* Abbott _et al._ (2020b) R. Abbott _et al._ (LIGO Scientific, Virgo), “GW190521: A Binary Black Hole Merger with a Total Mass of $150\leavevmode\nobreak\ M_{\odot}$,” Phys. Rev. Lett. 125, 101102 (2020b), arXiv:2009.01075 [gr-qc] .
* Romero-Shaw _et al._ (2020) Isobel M. Romero-Shaw, Paul D. Lasky, Eric Thrane, and Juan Calderon Bustillo, “GW190521: orbital eccentricity and signatures of dynamical formation in a binary black hole merger signal,” (2020), arXiv:2009.04771 [astro-ph.HE] .
* Gayathri _et al._ (2020) V. Gayathri, J. Healy, J. Lange, B. O’Brien, M. Szczepanczyk, I. Bartos, M. Campanelli, S. Klimenko, C. Lousto, and R. O’Shaughnessy, “GW190521 as a Highly Eccentric Black Hole Merger,” (2020), arXiv:2009.05461 [astro-ph.HE] .
* Calderón Bustillo _et al._ (2020a) Juan Calderón Bustillo, Nicolas Sanchis-Gual, Alejandro Torres-Forné, and José A. Font, “Confusing head-on and precessing intermediate-mass binary black hole mergers,” (2020a), arXiv:2009.01066 [gr-qc] .
* Calderón Bustillo _et al._ (2020b) Juan Calderón Bustillo, Nicolas Sanchis-Gual, Alejandro Torres-Forné, José A. Font, Avi Vajpeyi, Rory Smith, Carlos Herdeiro, Eugen Radu, and Samson H.W. Leong, “The (ultra) light in the dark: A potential vector boson of $8.7\times 10^{-13}$ eV from GW190521,” (2020b), arXiv:2009.05376 [gr-qc] .
* Zevin _et al._ (2019) Michael Zevin, Johan Samsing, Carl Rodriguez, Carl-Johan Haster, and Enrico Ramirez-Ruiz, “Eccentric Black Hole Mergers in Dense Star Clusters: The Role of Binary–Binary Encounters,” Astrophys. J. 871, 91 (2019), arXiv:1810.00901 [astro-ph.HE] .
* Nishizawa _et al._ (2017) Atsushi Nishizawa, Alberto Sesana, Emanuele Berti, and Antoine Klein, “Constraining stellar binary black hole formation scenarios with eLISA eccentricity measurements,” Mon. Not. Roy. Astron. Soc. 465, 4375–4380 (2017), arXiv:1606.09295 [astro-ph.HE] .
* Nishizawa _et al._ (2016) Atsushi Nishizawa, Emanuele Berti, Antoine Klein, and Alberto Sesana, “eLISA eccentricity measurements as tracers of binary black hole formation,” Phys. Rev. D94, 064020 (2016), arXiv:1605.01341 [gr-qc] .
* Breivik _et al._ (2016) Katelyn Breivik, Carl L. Rodriguez, Shane L. Larson, Vassiliki Kalogera, and Frederic A. Rasio, “Distinguishing Between Formation Channels for Binary Black Holes with LISA,” Astrophys. J. Lett. 830, L18 (2016), arXiv:1606.09558 [astro-ph.GA] .
* Fang _et al._ (2019) Xiao Fang, Todd A. Thompson, and Christopher M. Hirata, “The Population of Eccentric Binary Black Holes: Implications for mHz Gravitational Wave Experiments,” Astrophys. J. 875, 75 (2019), arXiv:1901.05092 [astro-ph.HE] .
* Rodriguez _et al._ (2018b) Carl L. Rodriguez, Pau Amaro-Seoane, Sourav Chatterjee, and Frederic A. Rasio, “Post-Newtonian Dynamics in Dense Star Clusters: Highly-Eccentric, Highly-Spinning, and Repeated Binary Black Hole Mergers,” Phys. Rev. Lett. 120, 151101 (2018b), arXiv:1712.04937 [astro-ph.HE] .
* Gondán _et al._ (2018) László Gondán, Bence Kocsis, Péter Raffai, and Zsolt Frei, “Eccentric Black Hole Gravitational-Wave Capture Sources in Galactic Nuclei: Distribution of Binary Parameters,” Astrophys. J. 860, 5 (2018), arXiv:1711.09989 [astro-ph.HE] .
* Tagawa _et al._ (2020) Hiromichi Tagawa, Bence Kocsis, Zoltan Haiman, Imre Bartos, Kazuyuki Omukai, and Johan Samsing, “Eccentric Black Hole Mergers in Active Galactic Nuclei,” (2020), arXiv:2010.10526 [astro-ph.HE] .
* Ramos-Buades _et al._ (2020b) Antoni Ramos-Buades, Sascha Husa, Geraint Pratten, Héctor Estellés, Cecilio García-Quirós, Maite Mateu-Lucena, Marta Colleoni, and Rafel Jaume, “First survey of spinning eccentric black hole mergers: Numerical relativity simulations, hybrid waveforms, and parameter estimation,” Phys. Rev. D 101, 083015 (2020b), arXiv:1909.11011 [gr-qc] .
* Klein _et al._ (2018) Antoine Klein, Yannick Boetzel, Achamveedu Gopakumar, Philippe Jetzer, and Lorenzo de Vittori, “Fourier domain gravitational waveforms for precessing eccentric binaries,” Phys. Rev. D98, 104043 (2018), arXiv:1801.08542 [gr-qc] .
* Tiwari and Gopakumar (2020) Srishti Tiwari and Achamveedu Gopakumar, “Combining post-circular and Padé approximations to compute Fourier domain templates for eccentric inspirals,” Phys. Rev. D102, 084042 (2020), arXiv:2009.11333 [gr-qc] .
* Moore _et al._ (2018) Blake Moore, Travis Robson, Nicholas Loutrel, and Nicolas Yunes, “Towards a Fourier domain waveform for non-spinning binaries with arbitrary eccentricity,” Class. Quant. Grav. 35, 235006 (2018), arXiv:1807.07163 [gr-qc] .
* Moore and Yunes (2019) Blake Moore and Nicolás Yunes, “A 3PN Fourier Domain Waveform for Non-Spinning Binaries with Moderate Eccentricity,” Class. Quant. Grav. 36, 185003 (2019), arXiv:1903.05203 [gr-qc] .
* Liu _et al._ (2020) Xiaolin Liu, Zhoujian Cao, and Lijing Shao, “Validating the Effective-One-Body Numerical-Relativity Waveform Models for Spin-aligned Binary Black Holes along Eccentric Orbits,” Phys. Rev. D101, 044049 (2020), arXiv:1910.00784 [gr-qc] .
* Tanay _et al._ (2019) Sashwat Tanay, Antoine Klein, Emanuele Berti, and Atsushi Nishizawa, “Convergence of Fourier-domain templates for inspiraling eccentric compact binaries,” Phys. Rev. D100, 064006 (2019), arXiv:1905.08811 [gr-qc] .
# On explosive boiling of a multicomponent Leidenfrost drop
Sijia Lyu¹, Huanshu Tan², Yuki Wakata¹, Xianjun Yang¹, Chung K. Law³, Detlef Lohse⁴,⁵, and Chao Sun¹,⁴,⁶
¹Center for Combustion Energy, Key Laboratory for Thermal Science and Power Engineering of Ministry of Education, Department of Energy and Power Engineering, Tsinghua University, 100084 Beijing, China
²Department of Chemical Engineering, University of California, Santa Barbara, USA
³Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544, USA
⁴Physics of Fluids Group, MESA+ Institute and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500AE Enschede, The Netherlands
⁵Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
⁶Department of Engineering Mechanics, School of Aerospace Engineering, Tsinghua University, Beijing 100084, China
Corresponding author: Chao Sun <EMAIL_ADDRESS>
###### Abstract
The gasification of multicomponent fuel drops is relevant in various energy-
related technologies. An interesting phenomenon associated with this process
is the self-induced explosion of the drop, producing a multitude of smaller
secondary droplets, which promotes overall fuel atomization and, consequently,
improves the combustion efficiency and reduces emissions of liquid-fueled
engines. Here, we study a unique explosive gasification process of a tri-
component droplet consisting of water, ethanol, and oil (“ouzo”), by high-
speed monitoring of the entire gasification event taking place in the well-
controlled, levitated Leidenfrost state over a superheated plate. It is
observed that the preferential evaporation of the most volatile component,
ethanol, triggers nucleation of the oil microdroplets/nanodroplets in the
remaining drop, which, consequently, becomes an opaque oil-in-water
microemulsion. The tiny oil droplets subsequently coalesce into a large one,
which, in turn, wraps around the remnant water. Because of the encapsulating
oil layer, the droplet can no longer produce enough vapor for its levitation,
and, thus, falls and contacts the superheated surface. The direct thermal
contact leads to vapor bubble formation inside the drop and consequently drop
explosion in the final stage.
Keywords: multicomponent drop | Leidenfrost state | internal interaction | volatility differentials | mutual solubility differentials
Drop gasification, possibly followed by combustion, is a major component in
daily practices and technologies such as vehicle propulsion via internal
combustion engines, spray cooling and painting, and various energy conversion
industries demirbas2005potential ; law2010combustion ; Detlef_physio . These
drops are frequently multicomponent, governed by complex gasification and
combustion processes, due to different volatilities and mutual solubilities of
the liquid components christy2011flow ; diddens2017evaporating , the
interaction between the substrate and the drop bourrianne2019cold ;
tran2013droplet ; nair2014leidenfrost ; Duan2014 ; gao2018evaporation , and
different boiling regimes at varying local temperatures shirota2016dynamic ;
tran2012drop ; staat2015phase .
Recent studies on miscible and emulsified fuel drops tsue1998effect ;
takashima2005evaporation ; calabria2007combustion ; watanabe2010experimental ;
mura2012study have shown that, for certain optimal compositions, the drop can
undergo self-induced (micro)explosion upon gasification, producing multitudes
of much smaller, secondary drops which promote the overall fuel atomization,
and, consequently improve the combustion efficiency and reduce emissions
law1982recent ; sirignano1983fuel ; kadota2007microexplosion . It has been
further shown that the explosion is especially intense for emulsion drops,
such as water dispersed in a heavy oil, whose boiling temperature is
substantially higher than the limit of superheat of water. However, its
practical utilization is frequently constrained by the requirement of
substantial amount of surfactants to achieve phase stability for storage.
Capitalizing on the concept of inducing the superheating of one volatile
liquid component by another high-boiling-point immiscible component, as
observed in the above water-in-oil emulsion, the present study is motivated by
the desire to explore alternate fuel mixture systems exhibiting microexplosion
capabilities. In particular, in one of these smart systems, the mixture can
phase separate during the course of drop gasification, and subsequently self-
explode through the superheating of the volatile liquid components by the
phase-separated, relatively less volatile, high-boiling point component. The
multicomponent liquid we selected is the ouzo mixture, which consists of
ethanol, water, and a high boiling point trans-anethole oil, and has been
extensively studied in physicochemical hydrodynamic problems
tan2016evaporation ; lu2017universal ; moerman2018emulsion ;
otero2018compositional ; li2019bouncing ; Detlef_physio . The ternary phase
diagram of the solution provides information on phase separation (SI Appendix,
Fig. S1). In particular, it quantifies how increasing the water-ethanol ratio
can lead to oil droplet nucleation because of the progressive reduction of oil
solubility. Here, we use the Leidenfrost arrangement quere2013leidenfrost by
placing a small ouzo drop on a superheated substrate such that the drop
levitates on its own vapor layer leidenfrost1756aquae ; quere2013leidenfrost
and is heated through it. We shall investigate the entire boiling process of
such an ouzo droplet in the Leidenfrost state, and demonstrate in due course
the occurrence of the sequential process of the initial nucleation of the oil
microdroplets/nanodroplets; the self-encapsulation of the parent drop by an
oil cap, which emerged out of the coalesced oil droplets, and finally, the
micro-explosion of the parent drop.
## Results and Discussion
### Four Life Stages of a Leidenfrost Ouzo Drop.
A millimeter drop of a miscible ouzo solution (with three components
consisting of water, ethanol, and oil, see Materials and Methods for details)
is deposited with a microliter-pipette onto a superheated surface at
$400\,^{\circ}\mathrm{C}$. As expected, the ouzo drop is in
the levitated Leidenfrost state (Fig. 1A), and a vapor layer is formed below
it, isolating it from direct contact with the solid surface and, thus,
constituting the so-called film boiling state. A slightly curved quartz lens
is used as the solid surface to prevent the drop from moving around, as
detailed in the Materials and Methods section.
Figure 1: The entire boiling process of an ouzo drop initially in the
Leidenfrost state can be divided into four stages. For each row, A-C are
experimental results, and D-F are the corresponding sketches in the cross-
section view. In the sketches, the orange parts represent the oil-rich phase
and the blue parts the water-rich phase. $t_{1},t_{2},t_{3}$ and $t_{4}$
indicate the end moment of each stage. (A and D) Stages 1 and 2: The drop
transforms from the transparent state to the early milky state. Oil
microdroplets/nanodroplets first nucleate on the interface (see arrows),
making the end moment $t_{1}$ of stage 1, defined as $t_{1}=0$. Then the drop
is filled with oil emulsions and, accordingly, becomes fully opaque at
$t_{2}$. (B and E) Stage 3: Tiny oil microdroplets gradually coalesce and,
finally, form an oil cap on the upper half of the drop. The drop transforms
from the early-milky state to a transparent state with two separate phases.
The oil cap tries to cover the drop and, finally, wraps the drop at $t_{3}$.
(C and F) Stage 4: The drop, which previously was in a stable film-boiling
Leidenfrost state, now becomes unstable. The vapor generated from the oil-
encapsulation is no longer sufficient to levitate the drop, and consequently,
the drop directly contacts the solid surface and explodes at $t_{4}$. (Scale
bars: 2 mm.)
The observed boiling process takes more than 20 seconds, during which the evaporating ouzo drop passes through four distinct stages. Fig. 1 shows representative
experimental snapshots of the drop in each stage from a side view recording,
together with the corresponding sketches in the cross-section view. As shown
in the first image of Fig. 1A, the ouzo drop is initially miscible and hence
optically transparent. Subsequently, the preferential evaporation of the most
volatile ethanol component reduces the oil solubility. Consequently,
spontaneous nucleation of the oil microdroplets/nanodroplets (arrows in the
second image in Fig. 1A), i.e., the ouzo effect, is triggered, as was similarly
observed by Tan et al tan2016evaporation ; tan2017self for an evaporating
sessile ouzo droplet, with the drop sitting on a solid substrate with a three-
phase contact line. This marks the end of the first stage. Many microdroplets
appear and are advected around by the internal flow (the third image in Fig.
1A). Then more and more oil microdroplets visibly appear in the drop—at the
beginning only on the drop surface, but later, also in the bulk of the drop.
After about 1.28 seconds, the drop is full of the oil microdroplets and
becomes opaque (the fourth image in Fig. 1A), defined as the end of stage 2.
This is, again, similarly seen by Tan et al tan2016evaporation ; tan2017self
for an evaporating ouzo drop on a solid substrate. The entire process of the
first two stages is sketched in Fig. 1D. Starting from stage 3, the oil
microdroplets start to coalesce and form large and transparent oil droplets,
as shown in the first image of Fig. 1B. These gradually merge into one entity,
which constitutes part of the drop surface. Finally, an oil cap is formed on
the upper half of the clear aqueous drop, as shown in the second image of Fig.
1B. Remarkably, the oil cap then gradually spreads over the drop surface (the
third image in Fig. 1B), as also seen for a drop on an atmospheric surface
li2020evaporating ; tan2019porous . The process of the third stage is sketched
in Fig. 1E. After the wrapping-up process is finished, a fully oil-capsulated
Leidenfrost drop has been established. However, this aqueous-in-oil drop
cannot be stably levitated, due to the lack of sufficient vapor, and leads to
the last stage of the drop’s life, as shown in Fig. 1C. Specifically, in the
first image in Fig. 1C, the top part of the drop appears blurry, due to the
unstable motion of the drop surface. With the levitating vapor layer formed by
the vaporizing volatile water being cut off by the encapsulating surface oil
layer, the droplet falls down and contacts the superheated surface. The
droplet is now heated rapidly, eventually leading to the nucleation of the
encapsulated volatile liquid core and, hence, the explosion of the drop as the
end of its life, as shown with the third and fourth images of Fig. 1C and the
sketches in Fig. 1F.
Figure 2: Quantitative analysis of the entire evaporation process. (A) Volume
$V$ versus time $t$ of the ouzo drop. Blue circles show experimental results.
The spreading of the data originates from the drop deformation, leading to an
error in identifying the droplet volume from the cross-section (shown in the
blue area). The red dashed line is the fitting line of the volume. The red
circle with the error bar shows the average volume in stage 4. (B) Evolution
of the absolute value of the evaporation volume flux rate per unit area
$\left|{\overline{q}}\right|$ versus time. The red dashed line is calculated
by the fitting lines of volume $V$ and entire area $S_{a}$ of the drop. The
blue area shows the possible range of $\left|{\overline{q}}\right|$. The red
circle with the error bar shows the average $\left|{\overline{q}}\right|$ in
stage 4. Moments of $t_{1}$, $t_{2}$, $t_{3}$, and $t_{4}$ are marked through
the vertical dashed lines.
To start a quantitative analysis, we calculate the volume evolution $V(t)$ of the Leidenfrost ouzo drop, assuming that the drop maintains its
axisymmetric shape during the entire evaporation process. As shown in Fig. 2A,
the drop size decreases rapidly within 24 seconds. Blue circles show
experimental results. The scatter of the data originates from the drop
deformation. The red dashed line and the blue area respectively show the
fitting line and the changing range of the volume. It is seen that during
stage 4, the drop is unsteady and asymmetric. Thus, it is hard to determine
the exact volume evolution in time and only the drop volume before the
explosion moment is calculated. The red circle with the error bar shows the
average volume in stage 4. To gain an overall view of the evaporation rate, we
calculate the absolute value of the volume decreasing rate per unit area,
$\left|{\overline{q}}\right|=\left|{{\mathrm{d}V}/{\mathrm{d}t}}\right|/S_{a}$,
where $S_{a}$ is the entire surface area of the drop determined from the side
profile of the drop. As shown in Fig. 2B, the red dashed line is calculated by
the fitting lines of volume $V$ and the entire area $S_{a}$ of the drop. The
blue area shows the possible range of $\left|{\overline{q}}\right|$. The red
circle with the error bar shows the average $\left|{\overline{q}}\right|$ in
stage 4. The range of the blue area and the value of the error bar are
explained in the SI Appendix. It is seen that $\left|{\overline{q}}\right|$
initially does not change substantially, and then decreases rapidly. In stage
2, ethanol is the main evaporating component with a faster evaporation rate.
In stage 3, water gradually becomes the main evaporating component with a
slower evaporation rate than that of ethanol. As the oil film covers the water
drop (between $t_{3}$, and $t_{4}$), the average evaporation rate decreases
rapidly, as shown with the red circle in Fig. 2B. The corresponding ending
moments of each stage $t_{1},t_{2}$, $t_{3}$, and $t_{4}$ are marked by the
dashed vertical lines in Fig. 2. Three further and independent experimental
realizations of the same experiment are shown in the SI Appendix, all giving
similar results.
As discussed above, the Leidenfrost ouzo drop experiences complex boiling
dynamics, including four different stages, which we now discuss in more
detail.
### Stages 1 and 2: From Transparent Miscible Ouzo Drop to Emulsification due
to Microdroplet Nucleation.
Figure 3: From transparent miscible ouzo drop to emulsification due to
microdroplet nucleation. (A) The white nucleated oil areas first occur on the
interface and are advected by the internal flow. A laser sheet vertically
shines through the center of the drop from the top for visualization. In the
cross-section of the drop, the drop is transparent in the beginning. Then, the
nucleated oil microdrops are distributed through two large vortexes.
Subsequently, the oil microdroplets distribute throughout the entire drop. As
these oil microdroplets coalesce, the drop becomes more transparent, and a
strong flow is observed within the drop. (B, C) The images of A are used to
analyze the velocity within the drop using PIV. (B) The internal velocity
distribution in the cross-section of the drop shows that the flow moves
downwards near the axis and upwards near the interface. Two large vortices
form in the cross-section, corresponding to one toroidal vortex in the 3D
drop. For all three times, the maximum velocity is around the center of the
drop. At 8.45 s, the maximum magnitude is about 15 cm/s. (C) Temporal
evolution of the velocity at the half-height of the drop. With advancing time
$t$, both the maximum velocity and mean velocity decrease.
The first characteristic event during the boiling process of the Leidenfrost
ouzo drop is the transformation from the initial miscible drop to an opaque
one due to the nucleated oil microdroplets, i.e., an emulsified drop. The
emulsification is due to the higher evaporation rates of ethanol, which leads
to lower oil solubility and eventually, nucleation of oil microdroplets
tan2016evaporation .
The temperature of the Leidenfrost drop can be approximated by the boiling
temperature of the mixture burton2012geometry . At the initial drop
composition, the boiling temperature is around $80\,^{\circ}\mathrm{C}$ noyes1901boiling, which is higher than the boiling point of pure ethanol ($78\,^{\circ}\mathrm{C}$), but lower than that of pure water ($100\,^{\circ}\mathrm{C}$), and much lower than that of trans-anethole ($234\,^{\circ}\mathrm{C}$). Consequently,
ethanol preferentially vaporizes, and its reduced concentration leads to the
corresponding reduction of the oil solubility, which in turn induces the ouzo
effect tan2016evaporation characterized by the spontaneous nucleation of oil
emulsion. Fig. 3A gives four snapshots, showing the oil microdroplets start
nucleating near the drop surface and then are advected by the convection
inside the drop.
We use the microdroplets as tracer particles to reveal the flow field in the
drop by vertically shining a laser sheet at the center of the drop. A
particle-image velocimetry (PIV) calculation (PIVlab) shown in Fig. 3B
displays the motion of the microdroplets, which move upwards along the surface
and then downwards inside the drop (Movie S1). Two large vortices form in the
cross-section, corresponding to one toroidal vortex in the three-dimensional
(3D) drop. For a gravity-flattened drop (see the third image in Fig. 3A), the
liquid at the drop base is entrained by the viscous flow in the vapor layer
from the base center to the periphery, which contributes to the flow moving
upwards near the interface and downwards near the axis
bouillant2018leidenfrost . In addition, the temperature is lower at the apex
of a large Leidenfrost drop bouillant2018leidenfrost , again giving a higher
surface tension there. The temperature gradient also causes thermal Marangoni
flows along with the interface from the base to the apex
bouillant2018leidenfrost . However, a higher temperature at the drop base may
induce a higher evaporation rate of ethanol, giving a higher surface tension
there. The concentration gradient may induce solutal Marangoni flows along
with the interface in the opposite direction of the thermal Marangoni flow.
Apparently, the measured motion of the microdroplets (Fig. 3B) suggests that
solutal Marangoni flows are weaker in the current situation.
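At its core, the PIV step used above (PIVlab) reduces to locating the peak of the cross-correlation between interrogation windows of two successive images. The numpy sketch below is a minimal single-window illustration, not a reimplementation of PIVlab, which additionally performs sub-pixel peak fitting, window deformation, and outlier rejection.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows,
    found as the peak of their circular cross-correlation (via FFT)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map each peak index to a signed shift in [-N/2, N/2)
    return tuple(int(p) if p < s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic check: a random tracer pattern shifted by (3, -2) pixels
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -2), axis=(0, 1))
dy, dx = piv_displacement(frame, shifted)
```

Tiling the images into many such windows and repeating the correlation yields a velocity field like that of Fig. 3B, once the frame interval and magnification are known.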
### Stage 3: Water-in-Oil Drop by Oil Wrapping-Up.
Figure 4: Stage 3 in the drop history. (A) The internal oil microdroplets
coalesce, and correspondingly the mean size of oil droplets increases with
time. The drop becomes more and more transparent. (B) The oil cap gradually
covers the aqueous drop, and finally wraps around it. (C) The corresponding
sketches of the wrapping process in the side view. Note that water is trapped
inside the droplet. (Scale bars: 1 mm.)
Figure 5: Stage 4 in the drop history. (A) Calculated vapor-layer thickness
(based on Eq. 1) versus time during the end of stage 3 and in stage 4. The
final vapor-layer thickness is above 30 $\mu$m in stage 3, but it decreases to
around 10 $\mu$m in stage 4. The red dashed line and the blue area,
respectively, show the average value and the changing range of the vapor-layer
thickness. The red point with the error bar indicates the average vapor-layer
thickness in stage 4. (B) Explosion of the drop. Combining the side (B, Upper)
and the bottom (B, Lower) views, after the drop violently oscillates for a
while, it locally contacts the heated substrate (shown in the red circle).
Vapor bubbles are generated within the drop at the contact area (shown in the
green circle). The vapor bubbles formed have large internal pressure, which
causes a violent explosion in the end (shown in the blue circle). (Scale bars:
1 mm.)
The opaque emulsified drop is in a transient state, as the nucleated oil
microdroplets are metastable. The continuing oil nucleation and the
microdroplet coalescence lead to the growth of the microdroplets. However, the
larger oil microdroplets cannot follow the flow closely. As shown in Fig. 4A,
because of buoyancy they tend to float around the apex of the drop and
consequently the oil phase merges at the top of the drop.
To provide some insight into the merging and floating processes of the oil
microdroplets, we consider the motion of the microdroplets that is controlled
by two forces—i.e., the drag force $F_{d}\propto\mu R_{o}v_{o}$ and the
buoyancy force $F_{b}\propto\Delta\rho R_{o}^{3}$, where $\mu$ is the liquid
dynamic viscosity in the drop, and $R_{o}$ and $v_{o}$ are the radius and
velocity of the microdroplets. Thus, the drag-buoyancy force ratio is
inversely proportional to the droplet size squared, $F_{d}/F_{b}\propto\mu
v_{o}\Delta\rho^{-1}R_{o}^{-2}$. When the microdroplets are small, the drag
force prevails, and hence, they follow the flow closely. However, the buoyancy
force starts to dominate for the larger microdroplets. Fig. 3C shows the
temporal evolution of the velocity at the half-height of the drop. For all
three times, the maximum velocity is around the center of the drop. At 8.45 s,
the maximum magnitude is about 15 cm/s. Since most ethanol may be depleted at this time, we evaluate the dimensionless parameters of the water-rich aqueous drop using the physical properties of liquid water at the boiling temperature. The Péclet number, representing the ratio of
convective transfer over the diffusive one, is $Pe=v_{max}R/\alpha\approx
1741$, where $v_{max}$ is the maximum velocity within the drop, $R$ the
equivalent radius of the drop, and $\alpha$ the thermal diffusivity of liquid
water. Here we only consider the thermal transport, as the thermal effect is
the dominant one in the current stage. The advective transport rate is much
higher than that of thermal diffusion, suggesting that diffusion is far too
slow to level out temperature differences. Furthermore, both the maximum
velocity and mean velocity decrease with time. Since the small microdroplets
grow and the average driven flow slows down, more and more oil droplets float
up and accumulate at the top. As demonstrated in Fig. 4A, the oil droplets
float around the apex of the drop, and then gradually merge into an oil cap.
This phenomenon differs from the evaporation of an ouzo drop in direct contact
with a solid surface, where the contact line has major effects
on the evaporation process tan2016evaporation ; tan2017self . In such a case,
the oil droplets accumulate at the contact line and form an oil ring. In
contrast, here, the ternary Leidenfrost drop does not have a contact line, and
the oil droplets can move freely within the Leidenfrost drop. Oil droplets
start to coalesce and gradually merge into one entity. Finally, the water-rich
aqueous drop is attached to an oil cap.
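The two scaling arguments above (the drag-buoyancy ratio and the Péclet number) can be checked with a short back-of-the-envelope sketch. All property values below are assumptions for liquid water near its boiling point, not values reported in the paper, and the force ratio is a pure scaling since only proportionalities are given:

```python
# Back-of-the-envelope check of the two estimates above. All property
# values are assumed (liquid water near 100 C), not taken from the paper.
mu = 2.8e-4       # Pa*s, viscosity of liquid water near 100 C (assumed)
drho = 130.0      # kg/m^3, water-oil density difference (assumed)
v_o = 0.01        # m/s, a representative microdroplet velocity (assumed)

def drag_buoyancy_ratio(R_o):
    """F_d/F_b ~ mu*v_o / (drho * R_o**2), up to omitted prefactors."""
    return mu * v_o / (drho * R_o ** 2)

# The ratio falls as 1/R_o^2: a 10x larger microdroplet feels 100x less
# drag relative to buoyancy, which is why the large droplets float up.
print(round(drag_buoyancy_ratio(1e-6) / drag_buoyancy_ratio(1e-5), 6))  # -> 100.0

# Peclet number Pe = v_max * R / alpha for the late-stage aqueous drop
v_max = 0.15      # m/s (the 15 cm/s maximum velocity quoted above)
R = 2.0e-3        # m, equivalent drop radius (assumed, millimetric drop)
alpha = 1.69e-7   # m^2/s, thermal diffusivity of water near 100 C (assumed)
Pe = v_max * R / alpha
print(round(Pe))  # O(10^3): advection far outpaces thermal diffusion
```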
The reduction of the ethanol concentration leads to an increase of the drop
surface energy and correspondingly changes the wetting condition of the oil
cap on the drop. Because the surface tension of water is very high, the oil
cap tends to spread over the aqueous drop to reduce the surface energy. Fig.
4B then shows a series of snapshots that capture the spreading of the oil cap
on the drop surface. Additionally, the upward interfacial flow competes with
the spreading of the oil from the top, causing the oil film to repeatedly
advance and retreat (Movie S2). In the end, the oil film
spreads and wraps the entire drop. The sketches in Fig. 4C show the wrapping
process of the oil film as discussed. Consequently, the emulsified drop
completes the oil-encapsulation and becomes a water-in-oil Leidenfrost drop.
We note in passing that a similar wrapping-up phenomenon has also been
observed in other fluid systems by Li et al. li2020evaporating , Tan et al.
tan2019porous , and Kadota et al. kadota2007microexplosion .
### Stage 4: The Final Drop Explosion.
According to the lubrication approximation, the lifting force induced by the
vapor layer should balance the weight of the drop. The thickness of the vapor
layer can then be evaluated as lyu2019final
$h=\left(\dfrac{3\pi}{2}\dfrac{\mu_{v}\left|{\overline{q}}\right|\rho_{l}r_{b}^{4}}{\rho_{v}G}\right)^{1/3},$
(1)
where $r_{b}$ is the horizontal extent of the drop bottom surface, and
$\mu_{v}$, $\rho_{v}$, $\rho_{l}$ are the dynamic viscosity of vapor, the
vapor density and the liquid-phase density of the evaporating component (water
in stage 3 and oil in stage 4), respectively. The physical properties of water
and oil are given in the SI Appendix. Because the volume of oil is much
smaller than that of water and the density difference between oil and water is
small, the weight of the drop can be calculated as $G=\rho_{l,w}Vg$, where
$\rho_{l,w}$ is the density of liquid water, $V$ is the drop volume, and $g$
is the gravitational acceleration.
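The cube-root scaling of Eq. 1 can be illustrated with a short numerical sketch. The property values below are placeholders chosen only for illustration, and treating $\left|\overline{q}\right|$ as a volume flux is our assumption, not the paper's measured input:

```python
import math

# Numerical sketch of Eq. 1 with placeholder property values (assumed, not
# the paper's measured inputs); |q| is treated here as a volume flux.
def vapor_layer_thickness(mu_v, q, rho_l, r_b, rho_v, G):
    """h = (3*pi/2 * mu_v*|q|*rho_l*r_b**4 / (rho_v*G))**(1/3) (Eq. 1)."""
    return (1.5 * math.pi * mu_v * abs(q) * rho_l * r_b ** 4
            / (rho_v * G)) ** (1 / 3)

# Shared placeholder parameters near the stage 3 -> 4 transition
params = dict(mu_v=1.2e-5, rho_l=960.0, r_b=1.5e-3, rho_v=0.6,
              G=960.0 * 5e-8 * 9.8)   # G = rho_l,w * V * g with V ~ 50 uL

h_water = vapor_layer_thickness(q=1e-4, **params)     # fast water evaporation
h_oil = vapor_layer_thickness(q=1e-4 / 27, **params)  # ~27x slower oil evaporation
# h scales as |q|**(1/3): a ~27x drop in evaporation rate thins the vapor
# layer ~3x, consistent with the ~30 um -> ~10 um change shown in Fig. 5A.
print(round(h_water / h_oil, 3))  # -> 3.0
```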
As shown in Fig. 5A, the vapor-layer thickness near the end of stage 3 is
above 30 $\mu$m when water is the main evaporation phase. However, the
thickness decreases to around 10 $\mu$m based on Eq. 1 when the drop is
wrapped with the oil phase in stage 4, as shown in Fig. 5A. The range of the
blue area and the value of the error bar are explained in SI Appendix. The
very thin vapor layer results from the extremely slow evaporation rate of the
oil phase, as shown in Fig. 2B. Such a thin vapor layer cannot stably levitate
a millimeter-sized drop burton2012geometry . A small perturbation will induce
direct contact between the drop and the superheated solid surface, and this
local contact consequently triggers microexplosion of the water droplet fully
entrapped by the oil layer.
We combine the bottom and side recordings, as presented in Fig. 5B, to
identify the moment around the explosion. In the first column, there is no
contact yet, as shown in the lower panel. Then in the second column, the dark area
marked in the red circle demonstrates that the drop contacts the heated
substrate. Once the drop contacts the superheated substrate, the temperature
of the contacting liquid increases rapidly above the oil boiling point,
$T_{o}$, which is much higher than that of water, $T_{w}$. Thus, the water
phase next to the oil contacting area is further superheated, and vapor
bubbles are observed in the green circle of Fig. 5B. Once the drop is unable
to sustain the inner high pressure of the vapor bubbles prosperetti2017vapor ,
the drop undergoes a violent explosion (see the blue circle of Fig. 5B).
## Conclusions and Outlook
In conclusion, we have experimentally investigated the entire gasification
process of a multicomponent Leidenfrost drop of ouzo composition. It is found
that, upon gasification, the preferential evaporation and, hence,
concentration reduction of the most volatile ethanol leads to the massive
formation of oil microdroplets, which, in turn, agglomerate and form a surface
layer that encapsulates the remaining aqueous drop. The aqueous drop, within
the encapsulating oil layer, is then heated to the limit of superheat and
consequently nucleates vapor bubbles near the superheated surface, rupturing
the entire droplet.
The observed phenomena illustrate the beauty and richness of multicomponent
drop systems with phase transitions, which are also relevant to many
combustion processes.
## Materials and Methods
### Experimental Setup and Procedures
A high-speed camera (Photron Fastcam NOVA S12) with macro lens (Nikon 105 mm)
was placed in the side view, and another high-speed camera (Photron Fastcam
AX200) with a long-distance microscope (Navitar) was placed in the bottom view
to simultaneously record the entire process. A slightly curved quartz lens was
used as a substrate to limit the movement of drops. A sapphire base was placed
under the quartz lens to improve the temperature uniformity of the substrate.
Both of them were heated by an aluminum block heater. The surface temperature
of the substrate, $T_{s}$, was measured by a thermocouple attached to the
surface. The transparent ouzo drop solution was prepared with an initial
composition of 31.80$\%$ (volume [vol]/vol) ultrapure water, 66.05$\%$
(vol/vol) ethanol ($\geq$99.7$\%$), and 2.15$\%$ (vol/vol) trans-anethole oil
(Sigma-Aldrich; 99$\%$), which places the liquid initially in the one-phase
regime, from which it can easily enter the nucleation region tan2017self ;
tan2016evaporation . A large ouzo drop produced by a microlitre pipette was
deposited on the superheated substrate. The initial volume of the drop was 100
$\mu$L. The quantitative calculation starts at around 50 $\mu$L, and the
substrate temperature was around 400 $^{\circ}$C.
### Image analyses
The images were analyzed using a MATLAB code. The shape of the drop was
assumed to be axisymmetric ma2018self , and the calculation of the volume
evolution starts from stage 2.
### Acknowledgments
We thank X. Chao, Y. C. Liu, and Y. S. Li for insightful discussions. The work
was supported by Natural Science Foundation of China Grant 11988102, National
Natural Science Foundation of China Joint Research Program 11861131005,
Deutsche Forschungsgemeinschaft Program OH 75/3-1, and Tsinghua University
Initiative Scientific Research Program 20193080058.
## References
* (1) Demirbas A (2005) Potential applications of renewable energy sources, biomass combustion problems in boiler power systems and combustion related environmental issues. Prog. Energy Combust. Sci. 31(2):171–192.
* (2) Law CK (2010) Combustion Physics. (Cambridge University Press).
* (3) Lohse D, Zhang X (2020) Physicochemical hydrodynamics of droplets out of equilibrium. Nat. Rev. Phys. https://doi.org/10.1038/s42254-020-0199-z.
* (4) Christy JR, Hamamoto Y, Sefiane K (2011) Flow transition within an evaporating binary mixture sessile drop. Phys. Rev. Lett. 106(20):205701.
* (5) Diddens C, Tan H, Lv P, Versluis M, Kuerten J, Zhang X, Lohse D (2017) Evaporating pure, binary and ternary droplets: thermal effects and axial symmetry breaking. J. Fluid Mech. 823:470–497.
* (6) Bourrianne P, Lv C, Quéré D (2019) The cold Leidenfrost regime. Sci. Adv. 5(6):eaaw0304.
* (7) Tran T, Staat HJ, Susarrey-Arce A, Foertsch TC, van Houselt A, Gardeniers HJ, Prosperetti A, Lohse D, Sun C (2013) Droplet impact on superheated micro-structured surfaces. Soft Matter 9(12):3272–3282.
* (8) Nair H, Staat HJ, Tran T, van Houselt A, Prosperetti A, Lohse D, Sun C (2014) The Leidenfrost temperature increase for impacting droplets on carbon-nanofiber surfaces. Soft Matter 10(13):2102–2109.
* (9) Lv P, Xue Y, Shi Y, Lin H, Duan H (2014) Metastable states and wetting transition of submerged superhydrophobic structures. Phys. Rev. Lett. 112(19):196101.
* (10) Gao M, Kong P, Zhang Lx (2018) Evaporation dynamics of different sizes sessile droplets on hydrophilic and hydrophobic heating surface under constant wall heat fluxes conditions. Int. Commun. Heat Mass Transfer 93:93–99.
* (11) Shirota M, van Limbeek MA, Sun C, Prosperetti A, Lohse D (2016) Dynamic Leidenfrost effect: relevant time and length scales. Phys. Rev. Lett. 116(6):064501.
* (12) Tran T, Staat HJ, Prosperetti A, Sun C, Lohse D (2012) Drop impact on superheated surfaces. Phys. Rev. Lett. 108(3):036101.
* (13) Staat HJ, Tran T, Geerdink B, Riboux G, Sun C, Gordillo JM, Lohse D (2015) Phase diagram for droplet impact on superheated surfaces. J. Fluid Mech. 779, R3.
* (14) Tsue M, Yamasaki H, Kadota T, Segawa D, Kono M (1998) Effect of gravity on onset of microexplosion for an oil-in-water emulsion droplet in Symposium (international) on combustion. (Elsevier), Vol. 27, pp. 2587–2593.
* (15) Takashima T, Shiota H (2005) Evaporation of an oil-in-water type emulsion droplet on a hot surface. Heat Transfer—Asian Research: Co-sponsored by the Society of Chemical Engineers of Japan and the Heat Transfer Division of ASME 34(7):527–537.
* (16) Calabria R, Chiariello F, Massoli P (2007) Combustion fundamentals of pyrolysis oil based fuels. Exp. Therm. Fluid Sci. 31(5):413–420.
* (17) Watanabe H, Suzuki Y, Harada T, Matsushita Y, Aoki H, Miura T (2010) An experimental investigation of the breakup characteristics of secondary atomization of emulsified fuel droplet. Energy 35(2):806–813.
* (18) Mura E, Massoli P, Josset C, Loubar K, Bellettre J (2012) Study of the micro-explosion temperature of water in oil emulsion droplets during the Leidenfrost effect. Exp. Therm. Fluid Sci. 43:63–70.
* (19) Law CK (1982) Recent advances in droplet vaporization and combustion. Prog. Energy Combust. Sci. 8(3):171–201.
* (20) Sirignano WA (1983) Fuel droplet vaporization and spray combustion theory. Prog. Energy Combust. Sci. 9(4):291–322.
* (21) Kadota T, Tanaka H, Segawa D, Nakaya S, Yamasaki H (2007) Microexplosion of an emulsion droplet during Leidenfrost burning. Proc. Combust. Inst. 31(2):2125–2131.
* (22) Tan H, Diddens C, Lv P, Kuerten J, Zhang X, Lohse D (2016) Evaporation-triggered microdroplet nucleation and the four life phases of an evaporating ouzo drop. Proc. Natl. Acad. Sci. 113(31):8642–8647.
* (23) Lu Z, Schaarsberg MHK, Zhu X, Yeo LY, Lohse D, Zhang X (2017) Universal nanodroplet branches from confining the ouzo effect. Proc. Natl. Acad. Sci. 114(39):10332–10337.
* (24) Moerman PG, Hohenberg PC, Vanden-Eijnden E, Brujic J (2018) Emulsion patterns in the wake of a liquid–liquid phase separation front. Proc. Natl. Acad. Sci. 115(14):3599–3604.
* (25) Otero J, Meeker S, Clegg PS (2018) Compositional ripening of particle-stabilized drops in a three-liquid system. Soft Matter 14(19):3783–3790.
* (26) Li Y, Diddens C, Prosperetti A, Chong KL, Zhang X, Lohse D (2019) Bouncing oil droplet in a stratified liquid and its sudden death. Phys. Rev. Lett. 122(15):154502.
* (27) Quéré D (2013) Leidenfrost dynamics. Annu. Rev. Fluid Mech. 45(1):197–215.
* (28) Leidenfrost JG (1756) De aquae communis nonnullis qualitatibus tractatus. (Ovenius).
* (29) Tan H, Diddens C, Versluis M, Butt HJ, Lohse D, Zhang X (2017) Self-wrapping of an ouzo drop induced by evaporation on a superamphiphobic surface. Soft Matter 13(15):2749–2759.
* (30) Li Y, Diddens C, Segers T, Wijshoff H, Versluis M, Lohse D (2020) Evaporating droplets on oil-wetted surfaces: Suppression of the coffee-stain effect. Proc. Natl. Acad. Sci. 117(29):16756–16763.
* (31) Tan H, Wooh S, Butt HJ, Zhang X, Lohse D (2019) Porous supraparticle assembly through self-lubricating evaporating colloidal ouzo drops. Nat. Commun. 10(1):1–8.
* (32) Burton J, Sharpe A, Van Der Veen R, Franco A, Nagel S (2012) Geometry of the vapor layer under a Leidenfrost drop. Phys. Rev. Lett. 109(7):074301.
* (33) Noyes WA, Warfel R (1901) The boiling-point curve for mixtures of ethyl alcohol and water. J. Am. Chem. Soc. 23(7):463–468.
* (34) Bouillant A, Mouterde T, Bourrianne P, Lagarde A, Clanet C, Quéré D (2018) Leidenfrost wheels. Nat. Phys. 14(12):1188–1192.
* (35) Lyu S, Mathai V, Wang Y, Sobac B, Colinet P, Lohse D, Sun C (2019) Final fate of a Leidenfrost droplet: Explosion or takeoff. Sci. Adv. 5(5):eaav8081.
* (36) Prosperetti A (2017) Vapor bubbles. Annu. Rev. Fluid Mech. 49(1):221–248.
* (37) Ma X, Burton JC (2018) Self-organized oscillations of Leidenfrost drops. J. Fluid Mech. 846:263–291.
# Weakly Supervised Neuro-Symbolic Module Networks for Numerical Reasoning
Amrita Saha (Salesforce AI Research), Shafiq Joty, Steven C.H. Hoi (Salesforce
AI Research)
###### Abstract
Neural Module Networks (NMNs) have been quite successful in incorporating
explicit reasoning as learnable modules in various question answering tasks,
including the most generic form of numerical reasoning over text in Machine
Reading Comprehension (MRC). However, to achieve this, contemporary NMNs need
strong supervision in executing the query as a specialized program over
reasoning modules and fail to generalize to more open-ended settings without
such supervision. Hence we propose Weakly-Supervised Neuro-Symbolic Module
Network (WNSMN) trained with answers as the sole supervision for numerical
reasoning based MRC. It learns to execute a noisy heuristic program obtained
from the dependency parsing of the query, as discrete actions over both neural
and symbolic reasoning modules and trains it end-to-end in a reinforcement
learning framework with discrete reward from answer matching. On the
numerical-answer subset of DROP, WNSMN outperforms NMN by 32% and the
reasoning-free language model GenBERT by 8% in exact match accuracy when
trained under comparable weak supervised settings. This showcases the
effectiveness and generalizability of modular networks that can handle
explicit discrete reasoning over noisy programs in an end-to-end manner.
## 1 Introduction
End-to-end neural models have proven to be powerful tools for an expansive set
of language and vision problems by effectively emulating the _input-output_
behavior. However, many real problems like Question Answering (QA) or Dialog
need more interpretable models that can incorporate explicit reasoning in the
inference. In this work, we focus on the most generic form of numerical
reasoning over text, encompassed by the reasoning-based MRC framework. A
particularly challenging setting for this task is where the answers are
numerical in nature as in the popular MRC dataset, DROP (Dua et al., 2019).
Figure 1 shows the intricacies involved in the task, (i) passage and query
language understanding, (ii) contextual understanding of the passage date and
numbers, and (iii) application of quantitative reasoning (e.g., _max, not_)
over dates and numbers to reach the final numerical answer.
Three broad genres of models have proven successful on the DROP numerical
reasoning task.
First, _large-scale pretrained language models_ like GenBERT (Geva et al.,
2020) use a monolithic Transformer architecture and decode numerical answers
digit-by-digit. Though they deliver mediocre performance when trained only on
the target data, their competency is derived from pretraining on massive
synthetic data augmented with explicit supervision of the gold numerical
reasoning.
The second kind of models are the _reasoning-free hybrid models_ like NumNet
(Ran et al., 2019), NAQANet (Dua et al., 2019), NABERT+ (Kinley & Lin, 2019),
MTMSN (Hu et al., 2019), and NeRd (Chen et al., 2020). They explicitly incorporate
numerical computations in the standard extractive QA pipeline by learning a
multi-type answer predictor over different reasoning types (e.g., _max/min_ ,
_diff/sum_ , _count_ , _negate_) and directly predicting the corresponding
numerical expression, instead of learning to reason. This is facilitated by
exhaustively precomputing all possible outcomes of discrete operations and
augmenting the training data with the reasoning-type supervision and numerical
expressions that lead to the correct answer.
Lastly, the most relevant class of models to consider for this work are the
_modular networks for reasoning_. Neural Module Networks (NMN) (Gupta et al.,
2020) is the first explicit reasoning based QA model which parses the query
into a specialized program and executes it step-wise over learnable reasoning
modules. However, to do so, apart from the exhaustive precomputation of all
discrete operations, it also needs more fine-grained supervision of the gold
program and the gold program execution, obtained heuristically, by leveraging
the abundance of templatized queries in DROP.
Figure 1: Example (passage, query, answer) from DROP and outline of our
method: executing noisy program obtained from dependency parsing of query by
learning date/number entity specific cross attention, and sampling and
execution of discrete operations on entity arguments to reach the answer.
While being more pragmatic and richer at interpretability, both modular and
hybrid networks are also tightly coupled with the additional supervision. For
instance, the hybrid models cannot learn without it, and while NMN is the
first to _enable_ learning from the QA pair alone, it still needs finer-
grained supervision for at least a part of the training data. With this
supervision, it manages to supersede the SoTA models NABERT and MTMSN on a
carefully chosen subset of DROP. However, NMN generalizes poorly to more
open-ended settings where such supervision is not easy to handcraft.
Need for symbolic reasoning. One striking characteristic of the modular
methods is to avoid discrete reasoning by employing only learnable modules
with an exhaustively precomputed space of outputs. While they perform well on
DROP, their modeling complexity grows arbitrarily with more complex non-linear
numerical operations (e.g., $\exp$, $\log$, $\cos$). Contrarily, symbolic
modular networks that execute the discrete operations are possibly more robust
or pragmatic in this respect by remaining unaffected by the operation
complexity. Such discrete reasoning has indeed been incorporated for simpler,
well-structured tasks like math word problems (Koncel-Kedziorski et al., 2016)
or KB/Table-QA (Zhong et al., 2017; Liang et al., 2018; Saha et al., 2019),
with Deep Reinforcement Learning (RL) for end-to-end training. MRC however
needs a more generalized framework of modular neural networks involving more
fuzzy reasoning over noisy entities extracted from open-ended passages.
In view of this, we propose a Weakly-Supervised Neuro-Symbolic Module Network
(WNSMN)
* •
A first attempt at numerical reasoning based MRC, trained with answers as the
sole supervision;
* •
Based on a generalized framework of dependency parsing of queries into noisy
heuristic programs;
* •
End-to-end training of neuro-symbolic reasoning modules in a RL framework with
discrete rewards;
To concretely compare WNSMN with contemporary NMN, consider the example in
Figure 1. In comparison to our generalized query-parsing, NMN parses the query
into a program form _MAX(FILTER(FIND(‘Carpenter’), ‘goal’))_, which is step-
wise executed by different learnable modules with an exhaustively precomputed
output set. To train the network, it employs various forms of strong
supervision such as gold program operations and gold query-span attention at
each step of the program, and gold execution, i.e., supervision of the passage
numbers (_23, 26, 42_) on which to execute the _MAX_ operation.
While NMN can only handle the 6 reasoning categories that the supervision was
tailored to, WNSMN focuses on the full DROP with numerical answers (called
DROP-_num_) that involves more diverse reasoning on more open-ended questions.
We empirically compare WNSMN on DROP-_num_ with the SoTA NMN and GenBERT that
allow learning with partial or no strong supervision. Our results showcase
that the proposed WNSMN achieves 32% better accuracy than NMN in absence of at
least one or more types of supervision and performs 8% better than GenBERT
when the latter is fine-tuned only on DROP in a comparable setup, without
additional synthetic data having explicit supervision.
## 2 Model: Weakly Supervised Neuro-Symbolic Module Network
We now describe our proposed WNSMN that learns to infer the answer based on
weak supervision of the QA pair by generating the program form of the query
and executing it through explicit reasoning.
##### Parsing Query into Programs
To keep the framework generic, we use a simplified representation of the
Stanford dependency parse tree (Chen & Manning, 2014) of the query to get a
generalized program (Section A.5). First, a node is constructed for the
subtree rooted at each child of the root by merging its descendants in the
original word order. Next an edge is added from the left-most node (which we
call the _root clause_) to every other node. Then by traversing left to right,
each node is organized into a step of a program having a linear flow. For
example, the program obtained in Figure 1 is _X1 = (‘which is the longest’)_ ;
_X2 = (‘goal by Carpenter’, X1)_ ; _Answer = Discrete-Reasoning(‘which is the
longest’, X2)_. Each program step consists of two types of arguments (i) Query
Span Argument obtained from the corresponding node, indicates the query
segment referred to, in that program step e.g., _‘goal by Carpenter’_ in Step
2 (ii) Reference Argument(s) obtained from the incoming edges to that node,
refers to the previous steps of the program that the current one depends on
e.g., _X1_ in Step 2. Next, a final step of the program is added, which has
the reference argument as the leaf node(s) obtained in the above manner and
the query span argument as the root-clause. This step is specifically
responsible for handling the discrete operation, enabled by the root-clause
which is often indicative of the kind of discrete reasoning involved (e.g.,
_max_). However this being a noisy heuristic, the QA model needs to be robust
to such noise and additionally rely on the full query representation in order
to predict the discrete operation. For simplicity we limit the number of
reference arguments to 2.
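The parsing heuristic described above can be sketched as follows. This toy implementation (not the authors' code) takes the merged child subtrees as given, whereas a real system would obtain them from the Stanford dependency parser:

```python
# Minimal sketch (not the authors' code) of the parse-to-program heuristic:
# each child subtree of the parse root becomes one program step, the
# left-most node is treated as the root clause, and every later step
# references X1. The subtree spans below are hand-built toy inputs.
def parse_to_program(subtrees):
    """subtrees: child subtrees of the parse root, left to right,
    each given as its merged token span (a string)."""
    steps = []
    root_clause = subtrees[0]
    steps.append(("X1", root_clause, []))             # step 1: root clause
    for i, span in enumerate(subtrees[1:], start=2):  # later steps reference X1
        steps.append((f"X{i}", span, ["X1"]))
    # final step: discrete reasoning keyed on the root clause, referencing
    # the leaf node of the chain built above
    steps.append(("Answer", f"Discrete-Reasoning({root_clause!r})",
                  [steps[-1][0]]))
    return steps

program = parse_to_program(["which is the longest", "goal by Carpenter"])
for name, span, refs in program:
    print(name, "=", span, refs)
```

Running this on the Figure 1 query reproduces the three-step program quoted above.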
### 2.1 Program Execution
Our proposed WNSMN learns to execute the program over the passage in three
steps. In the preprocessing step, it identifies numbers and dates from the
passage, and maintains them as separate canonicalized entity-lists along with
their mention locations. Next, it learns an entity-specific _cross-attention_
model to rank the entities _w.r.t._ their query-relevance (Section 2.1.1), and
then _samples_ relevant entities as discrete arguments (Section 2.1.2) and
executes appropriate discrete operations on them to reach the answer. An RL
framework (Section 2.1.3) trains it end-to-end with the answer as the sole
supervision.
#### 2.1.1 Entity-Specific Cross Attention for Information Extraction
To rank the query-relevant passage entities, we model the passage, program and
entities jointly.
##### Modeling interaction between program and passage
This module (Figure 2, left) learns to associate _query span_ arguments of the
program with the passage. For this, similar to NMN, we use a BERT-base
pretrained encoder (Devlin et al., 2018) to get contextualized token
embeddings of the passage and query span argument of each program step,
respectively denoted by ${\bm{P}}_{k}$ and ${\bm{Q}}_{k}$ for the $k$’th
program step. Based on it, we learn a _similarity_ matrix
${\bm{\mathsfit{S}}}\in\mathbbm{R}^{l\times n\times m}$ between the program
and passage, where $l$, $n$, and $m$ are, respectively, the program length,
the query span argument length, and the passage length (in tokens). Each
${\bm{S}}_{k}\in\mathbbm{R}^{n\times m}$ represents the affinity over the
passage tokens for the $k$’th program argument and is defined as
${\bm{S}}_{k}(i,j)={\bm{w}}^{T}[{\bm{Q}}_{ki};{\bm{P}}_{kj};{\bm{Q}}_{ki}\odot{\bm{P}}_{kj}]$,
where ${\bm{w}}$ is a learnable parameter and $\odot$ is element-wise
multiplication. From this, an attention map ${\bm{A}}_{k}$ is computed over
the passage tokens for the $k$’th program argument as
${\bm{A}}_{k}(i,j)=\mathrm{softmax}_{j}({\bm{S}}_{k}(i,j))=\frac{\exp({\bm{S}}_{k}(i,j))}{\sum_{j}\exp({\bm{S}}_{k}(i,j))}$.
Similarly, for the $i$’th token of the $k$’th program argument the cumulative
attention $a_{ki}$ _w.r.t._ the passage is
$a_{ki}=\mathrm{softmax}_{i}(\sum_{j}{\bm{S}}_{k}(i,j))$. A linear combination
of the attention map ${\bm{A}}_{k}(i,\cdot)$ weighted by $a_{ki}$ gives the
expected passage attention for the $k$’th step,
$\bar{{\bm{\alpha}}}_{k}=\sum_{i}a_{ki}{\bm{A}}_{k}(i,\cdot)\in\mathbbm{R}^{m}$.
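The similarity-and-attention computation above can be sketched in NumPy. Random vectors stand in for the BERT embeddings, and the shapes are illustrative assumptions:

```python
import numpy as np

# Sketch of the program-passage interaction for one program step k:
# S_k(i,j) = w^T [Q_ki; P_kj; Q_ki * P_kj], row-softmax to A_k, token
# weights a_ki, and the expected passage attention alpha_bar. Random
# embeddings stand in for BERT; shapes are illustrative assumptions.
rng = np.random.default_rng(0)
n, m, d = 4, 12, 8                 # span tokens, passage tokens, hidden dim
Q = rng.normal(size=(n, d))        # query-span token embeddings (stand-in)
P = rng.normal(size=(m, d))        # passage token embeddings (stand-in)
w = rng.normal(size=3 * d)         # learnable weight vector

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

# for each (i, j), concatenate [Q_i; P_j; Q_i*P_j] and project by w
feat = np.concatenate([np.broadcast_to(Q[:, None, :], (n, m, d)),
                       np.broadcast_to(P[None, :, :], (n, m, d)),
                       Q[:, None, :] * P[None, :, :]], axis=-1)  # (n, m, 3d)
S = feat @ w                       # similarity matrix S_k, (n, m)
A = softmax(S, axis=1)             # attention over passage per span token
a = softmax(S.sum(axis=1), axis=0) # cumulative weight a_ki per span token
alpha_bar = a @ A                  # expected passage attention, (m,)
print(alpha_bar.shape, round(float(alpha_bar.sum()), 6))  # (12,) 1.0
```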
Figure 2: Modeling the interaction between the passage and (left) the program,
& (right) its number/date entities. For each program step $k$, they
respectively yield (i) Stacked Span Prediction Logits and (ii) Attention over
Number/Date entities for each passage token. The linear combination of these
two gives the expected distribution over entities, $\mathcal{T}^{num}_{k}$ and
$\mathcal{T}^{date}_{k}$ for the step $k$
Span-level smoothed attention. To facilitate information spotting and
extraction over contiguous spans of text, we regularize the passage attention
so that the attention on a passage token is high if the attention over its
neighbors is so. We achieve this by adopting a heuristic smoothing technique
(Huang et al., 2020), taking a sliding window of different lengths
$\omega=\\{1,2,\ldots 10\\}$ over the passage, and replacing the token-level
attention with the attention averaged over the window. This results in $10$
different attention maps over the passage for the $k$’th step of the program:
$\\{\bar{{\bm{\alpha}}}^{\omega}_{k}|\omega\in\\{1,2,\ldots,10\\}\\}$.
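One plausible reading of this smoothing step is a centered moving average over the passage; the exact window handling at the passage edges is our assumption:

```python
import numpy as np

# Sketch of span-level smoothing: replace token-level passage attention
# with its average over a sliding window of length omega (assumed: a
# centered moving average, zero-padded at the passage edges).
def smooth(alpha, omega):
    kernel = np.ones(omega) / omega
    return np.convolve(alpha, kernel, mode="same")

alpha = np.zeros(20)
alpha[9] = 1.0                          # all mass on one passage token
maps = {w: smooth(alpha, w) for w in range(1, 11)}
# omega = 1 leaves the attention unchanged; larger omega spreads it over
# neighbours, so a token scores high when its neighbourhood does.
print(np.allclose(maps[1], alpha), maps[3][8] > 0, maps[3][2] == 0)
```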
Soft span prediction. This network takes a multi-scaled (Gupta et al., 2020)
version of $\bar{{\bm{\alpha}}}^{\omega}_{k}$ , by multiplying the attention
map with $|{\bm{s}}|$ different scaling factors (${\bm{s}}=\\{1,2,5,10\\}$),
yielding a $|{\bm{s}}|$-dimensional representation for each passage token,
i.e., $\bar{{\bm{\alpha}}}^{\omega}_{k}\in\mathbbm{R}^{m\times|{\bm{s}}|}$.
This is then passed through a $L$-layered stacked self-attention _transformer_
block (Vaswani et al., 2017), which encodes it to $m\times d$ dimension,
followed by a _linear layer_ of dimension $d\times 1$, to obtain the span
prediction logits:
${\bm{\alpha}}^{\omega}_{k}=Linear(Transformer(MultiScaling(\bar{{\bm{\alpha}}}^{\omega}_{k}))\in\mathbbm{R}^{m}$.
Further the span prediction logits at each program step (say $k$) is
additively combined with those from the previous steps referenced in the
current one, through the reference argument ($ref(k)$) at step $k$, i.e.,
${\bm{\alpha}}^{\omega}_{k}={\bm{\alpha}}^{\omega}_{k}+\sum_{k^{\prime}\in
ref(k)}{\bm{\alpha}}^{\omega}_{k^{\prime}}$.
##### Modeling interaction between program and number/date entities
This module (Figure 2, right) facilitates an entity-based information spotting
capability, that is, given a passage mention of a number/date entity relevant
to the query, the model should be able to attend to the neighborhood around
it. To do this, for each program step, we first compute a _passage tokens to
number tokens_ attention map ${\bm{\mathsfit{A}}}^{num}\in\mathbbm{R}^{l\times
m\times N}$, where $N$ is the number of unique number entities. Note that this
attention map is different for each program step as the contextual BERT
encoding of the passage tokens (${\bm{P}}_{k}$) is coupled with the program’s
span argument of that step. At the $k$-th step, the row
${\bm{A}}^{num}_{k}(i,\cdot)$ denotes the probability distribution over the
$N$ unique number tokens _w.r.t._ the $i$-th passage token. The attention maps
are obtained by a softmax normalization of each row of the corresponding
_passage tokens to number tokens_ similarity matrix,
${\bm{S}}^{num}_{k}\in\mathbbm{R}^{m\times N}$ for $k=\\{1\ldots l\\}$, where
the elements of ${\bm{S}}^{num}_{k}$ are computed as
${\bm{S}}^{num}_{k}(i,j)={\bm{P}}_{ki}^{T}{\bm{W}}_{n}{\bm{P}}_{kn_{j}}$ with
${\bm{W}}_{n}\in\mathbbm{R}^{d\times d}$ being a learnable projection matrix
and $n_{j}$ being the passage location of the $j$-th number token. These
similarity scores are additively aggregated over all mentions of the same
number entity in the passage.
The relation between program and entities is then modeled as
${\bm{\tau}}^{\omega}_{k}=\mathrm{softmax}(\sum_{i}{\alpha}^{\omega}_{ki}{\bm{A}}^{num}_{k}(i,\cdot))\in\mathbbm{R}^{N}$,
which gives the expected distribution over the $N$ number tokens for the
$k$-th program step and using $\omega$ as the smoothing window size. The final
stacked attention map obtained for the different windows is
${\mathcal{T}}^{num}_{k}=\\{{\bm{\tau}}^{\omega}_{k}|\omega\in\\{1,2,\ldots
10\\}\\}$. Similarly, for each program step $k$, we also compute a separate
stacked attention map ${\mathcal{T}}^{date}_{k}$ over the unique date tokens,
parameterized by a different ${\bm{W}}_{d}$.
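A NumPy sketch of this entity attention for a single program step follows; the embeddings, projection matrix, span logits, and mention table are random or hand-built stand-ins, not the model's learned values:

```python
import numpy as np

# Sketch of the program-entity interaction for one program step:
# S^num_k(i,j) = P_ki^T W_n P_k,n_j, additively aggregated over mentions of
# the same number entity, row-softmaxed to A^num_k, then combined with the
# span-prediction logits alpha to give the expected distribution tau over
# the N number entities. All inputs below are stand-ins.
rng = np.random.default_rng(1)
m, d, N = 12, 8, 3                       # passage tokens, hidden dim, entities
P = rng.normal(size=(m, d))
W_n = rng.normal(size=(d, d))
mentions = {0: [2, 7], 1: [4], 2: [9]}   # entity -> passage positions (toy)

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

S = np.zeros((m, N))
for j, locs in mentions.items():         # aggregate over entity mentions
    for n_j in locs:
        S[:, j] += P @ W_n @ P[n_j]
A_num = softmax(S, axis=1)               # (m, N), rows are distributions
alpha = rng.normal(size=m)               # span-prediction logits (stand-in)
tau = softmax(alpha @ A_num)             # expected distribution over entities
print(tau.shape, round(float(tau.sum()), 6))  # (3,) 1.0
```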
A critical requirement for a meaningful attention over entities is to
incorporate information extraction capability in the number and date attention
maps ${\bm{\mathsfit{A}}}^{num}$ and ${\bm{\mathsfit{A}}}^{date}$, by enabling
the model to attend over the neighborhood of the relevant entity mentions.
This is achieved by minimizing the unsupervised auxiliary losses
${\mathcal{L}}^{num}_{aux}$ and ${\mathcal{L}}^{date}_{aux}$ in the training
objective, which impose an inductive bias over the number and date entities,
similar to Gupta et al. (2020). Its purpose is to ensure that the passage
attention is densely distributed inside the neighborhood of $\pm~{}\Omega$ (a
hyperparameter, e.g., 10) of the passage location of the entity mention,
without imposing any bias on the attention distribution outside the
neighborhood. Consequently, it maximises the log-form of cumulative likelihood
of the attention distribution inside the window and the entropy of the
attention distribution outside of it.
${\mathcal{L}}^{num}_{aux}=-\frac{1}{l}\sum\limits_{k=1}^{l}\bigg{[}\sum\limits_{i=1}^{m}\big{[}\log(\sum\limits_{j=1}^{N}\displaystyle{\mathbbm{1}}_{n_{j}\in[i\pm~{}\Omega]}a^{num}_{kij})-\sum\limits_{j=1}^{N}\displaystyle{\mathbbm{1}}_{n_{j}\not\in[i\pm~{}\Omega]}a^{num}_{kij}\log(a^{num}_{kij})\big{]}\bigg{]}$
(1)
where $\displaystyle{\mathbbm{1}}$ is the indicator function and
$a^{num}_{kij}={\bm{A}}^{num}_{k}(i,j)$. ${\mathcal{L}}^{date}_{aux}$ for date
entities is similarly defined.
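The auxiliary loss above can be transcribed directly into NumPy for a single program step ($l=1$), assuming one mention per entity for brevity. A toy example shows that attention concentrated inside the $\pm\Omega$ windows incurs a lower loss than a uniform spread:

```python
import numpy as np

# Direct transcription of the auxiliary loss for one program step (l = 1),
# with one mention per entity assumed for brevity. It rewards attention mass
# inside the +/- Omega window of a mention and high entropy outside it.
def aux_loss(A, mention_locs, Omega):
    """A: (m, N) attention over N entities per passage token;
    mention_locs[j]: passage position n_j of entity j (one mention each)."""
    m, N = A.shape
    total = 0.0
    for i in range(m):
        inside = [j for j in range(N) if abs(mention_locs[j] - i) <= Omega]
        outside = [j for j in range(N) if abs(mention_locs[j] - i) > Omega]
        in_mass = sum(A[i, j] for j in inside)
        ent = -sum(A[i, j] * np.log(A[i, j] + 1e-12) for j in outside)
        total += np.log(in_mass + 1e-12) + ent
    return -total  # l = 1, so no averaging over program steps

m, N, Omega = 10, 2, 2
locs = [2, 7]                            # each token has one entity in window
concentrated = np.full((m, N), 1e-3)
for i in range(m):                       # put mass on the nearby entity
    concentrated[i, 0 if i < 5 else 1] = 1.0
concentrated /= concentrated.sum(axis=1, keepdims=True)
uniform = np.full((m, N), 0.5)
# -> True: window-concentrated attention scores a lower loss than uniform
print(aux_loss(concentrated, locs, Omega) < aux_loss(uniform, locs, Omega))
```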
#### 2.1.2 Modeling Discrete Reasoning
The model next learns to execute a single step of discrete reasoning (Figure
3) based on the final program step. (A single step is a reasonable assumption
for DROP, with a recall of 90% on the training set; nor does it limit the
generalizability of WNSMN, since with standard beam search it is possible to
scale to an $l$-step MDP.) The final step contains (i) the root-clause of the
query which often indicates the type of discrete operation (e.g., _‘what is
the longest’_ indicates max, _‘how many goals’_ indicates count), and (ii)
reference argument indicating the previous program steps the final step
depends on. Each previous step (say $k$) is represented as stacked attention
maps $\mathcal{T}^{num}_{k}$ and $\mathcal{T}^{date}_{k}$, obtained from
Section 2.1.1.
##### Operator Sampling Network
Owing to the noisy nature of the program, the operator network takes as input:
(i) BERT’s [CLS] representation for the passage-query pair and LSTM
(Hochreiter & Schmidhuber, 1997) encoding (randomly initialized) of the BERT
contextual representation of (ii) the root-clause from the final program step
and (iii) full query (_w.r.t._ passage), to make two predictions:
* •
Entity-Type Predictor Network, an Exponential Linear Unit (Elu) activated
fully-connected layer followed by a $\mathrm{softmax}$ that outputs the
probabilities of sampling either date or number types.
* •
Operator Predictor Network, a similar Elu-activated fully connected layer
followed by a $\mathrm{softmax}$ which learns a probability distribution over
a fixed catalog of 6 numerical and logical operations (count, max, min, sum,
diff, negate), each represented with learnable embeddings.
Apart from the diff operator, which acts on exactly two arguments, all other
operations can take an arbitrary number of arguments. Some of these operations
can be applied only to numbers (e.g., sum, negate), while others can be applied
to both numbers and dates (e.g., max, count).
Figure 3: Operator & Argument Sampling Network and RL framework over sampled
discrete actions
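A minimal sketch of the two predictor heads, using toy dimensions and random weights in place of the BERT [CLS] and LSTM encodings (all names and sizes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

OPS = ["count", "max", "min", "sum", "diff", "negate"]   # fixed 6-operation catalog
TYPES = ["number", "date"]

d = 16                                   # toy hidden size standing in for the
W_op = rng.normal(size=(d, len(OPS)))    # concatenated BERT [CLS] + LSTM encodings
W_ty = rng.normal(size=(d, len(TYPES)))

h = rng.normal(size=d)                   # stand-in for the input representation
p_op = softmax(elu(h @ W_op))            # Operator Predictor head
p_ty = softmax(elu(h @ W_ty))            # Entity-Type Predictor head
op = OPS[rng.choice(len(OPS), p=p_op)]   # sample one operation from the catalog
```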
##### Argument Sampling Network
This network learns to sample date/number entities as arguments for the
sampled discrete operation, given the entity-specific stacked attentions
($\mathcal{T}^{num}_{k}$ and $\mathcal{T}^{date}_{k}$) for each previous step
(say, $k$), that appears in the reference argument of the final program step.
In order to allow sampling of fixed or arbitrary number of arguments, the
argument sampler learns four types of networks, each modeled with a
$L$-layered stacked self attention based $Transformer$ block (with output
dimension $d$) followed by different non-linear layers embodying their
functionality and a $\mathrm{softmax}$ normalization to get the corresponding
probability of the argument sampling (Figure 3).
* •
Sample $n\in\\{1,2\\}$ Argument Module: $\mathrm{softmax}(Elu(Linear_{d\times
n}(Transformer(\mathcal{T}))))$, outputs a distribution over the single
entities ($n=1$) or a joint distribution over the entity-pairs ($n=2$).
* •
Counter Module: $\mathrm{softmax}(Elu(Linear_{d\times
10}(CNN\text{-}Encoder(Transformer(\mathcal{T})))))$, predicts a distribution
over possible count values ($\in[1,\ldots,10]$) of number of entity arguments
to sample.
* •
Entity-Ranker Module: $\mathrm{softmax}(PRelu(Linear_{d\times
1}(Transformer(\mathcal{T}))))$, learns to rerank the entities and outputs a
distribution over all the entities given the stacked attention maps as input.
* •
Sample Arbitrary Argument: $Multinomial$(Entity-Ranked Distribution, Counter
Prediction).
Depending on the number of arguments needed by the discrete operation and the
number of reference arguments in the final program step, the model invokes one
of _Sample{1, 2, Arbitrary} Argument_. For instance, if the sampled operator
is diff which needs 2 arguments, and the final step has 1 or 2 reference
arguments, then the model respectively invokes either _Sample 2 argument_ or
_Sample 1 argument_ on the stacked attention $\mathcal{T}$ corresponding to
each reference argument. For operations needing an arbitrary number of
arguments, the model invokes _Sample Arbitrary Argument_. For the
_Arbitrary Argument_ case, the model first predicts the number of entities
$\mathit{c}\in\\{1,\ldots,10\\}$ to sample using the Counter Network, and then
samples from the multinomial distribution based on the joint of
$\mathit{c}$-combinations of entities constructed from the output distribution
of the Entity Ranker module.
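The arbitrary-argument path described above can be sketched as follows; the function name and the capping of the count by the number of available entities are our own simplifications of the Counter + Entity-Ranker multinomial step:

```python
import numpy as np

def sample_arbitrary_arguments(entity_probs, count_probs, rng):
    """Sketch of the Sample Arbitrary Argument module: draw the argument
    count c in {1..10} from the Counter distribution, then draw c distinct
    entities weighted by the Entity-Ranker distribution (hypothetical
    rendering, not the authors' code)."""
    c = rng.choice(len(count_probs), p=count_probs) + 1     # counts range over 1..10
    c = min(c, int(np.count_nonzero(entity_probs)))         # cannot pick more than available
    args = rng.choice(len(entity_probs), size=c, replace=False, p=entity_probs)
    return sorted(int(a) for a in args)

# hypothetical usage with a toy 5-entity passage
rng = np.random.default_rng(0)
entity_probs = np.full(5, 0.2)                     # Entity-Ranker output
count_probs = np.zeros(10); count_probs[2] = 1.0   # Counter puts all mass on c = 3
print(sample_arbitrary_arguments(entity_probs, count_probs, rng))
```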
#### 2.1.3 Training with Weak Supervision in the Deep RL Framework
We use an RL framework to train the model with only discrete binary feedback
from the exact match of the gold and predicted numerical answer. In
particular, we use the REINFORCE (Williams, 1992) policy gradient method where
a stochastic policy comprising a sequence of actions is learned with the goal
of maximizing the expected reward. In our case, the discrete operation along
with argument sampling constitute the _action_. However, because of our
assumption that a single step of discrete reasoning suffices for most
questions in DROP, we further simplify the RL framework to a contextual
multi-armed bandit (MAB) problem with a 1-step MDP, i.e., the agent performs only one
step action.
Despite the simplifying assumption of 1-step MDP, the following
characteristics of the problem render it highly challenging: (i) the action
space $\mathcal{A}$ is exponential in the order of number of operations and
argument entities in the passage (averaging to _12K_ actions for DROP-_num_);
(ii) the extreme reward sparsity owing to the binary feedback is further
exacerbated by the presence of spurious rewards, as the same answer can be
generated by multiple diverse actions. Note that previous approaches like NMN
can avoid such spurious supervision because they heuristically obtain
additional annotation of the question category, the gold program or gold
program execution at least for some training instances.
In our contextual MAB framework, for an input $x$ = (passage($p$),
query($q$)), the context or environment state $s_{\phi}(x)$ is modeled by the
entity specific cross attention (Section 2.1.1, parameterized by $\phi$)
between the (i) passage (ii) program-form of the query and (iii) extracted
passage date/number entities. Given the state $s_{\phi}(x)$, the layout policy
(section 2.1.2, parameterized by $\theta$) then learns the query-specific
inference layout, i.e., the discrete action sampling policy
$P_{\theta}(a|s_{\phi}(x))$ for action $a\in\mathcal{A}$. The action sampling
probability is a product of the probability of sampling entities from the
appropriate entity type ($P_{\theta}^{type}$), probability of sampling the
operator ($P_{\theta}^{op}$), and probability of sampling the entity
argument(s) ($P_{\theta}^{arg}$), normalized by number of arguments to sample.
Therefore, with the learnable context representation $s_{\phi}(x)$ of input
$x$, the end-to-end objective is to jointly learn $\\{\theta,\phi\\}$ that
maximises the expected reward $R(x,a)\in\\{-1,+1\\}$ over the sampled actions
($a$), based on exact match with the gold answer.
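A toy rendering of this factorized action probability (our own simplification; the exact normalization used in the model may differ):

```python
import numpy as np

def action_log_prob(p_type, p_op, arg_probs):
    """Sketch: combine entity-type, operator, and per-argument sampling
    probabilities into log P(a|s), normalizing the argument term by the
    number of sampled arguments (illustrative rendering)."""
    log_args = sum(np.log(p) for p in arg_probs) / max(len(arg_probs), 1)
    return np.log(p_type) + np.log(p_op) + log_args
```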
To mitigate the learning instability in such sparse confounding reward
settings, we initialize with a simpler iterative _hard-Expectation Maximization
(EM)_ learning objective, called Iterative Maximal Likelihood (IML) (Liang et
al., 2017). With the assumption that the sampled actions are extensive enough
to contain the gold answer, IML greedily searches for the _good_ actions by
fixing the policy parameters, and then maximises the likelihood of the _best_
action that led to the highest reward. We define _good_ actions
($\mathcal{A}^{good}$) as those that result in the gold answer itself and take
a conservative approach of defining _best_ among them as simply the most
likely one according to the current policy.
$J^{IML}(\theta,\phi)=\sum_{x}\max_{a\in\mathcal{A}^{good}}\log{P_{\theta,\phi}(a|x)}$
(2)
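A minimal per-instance hard-EM step consistent with Eq. (2) might look like this (illustrative only; reward $+1$ marks the _good_ actions):

```python
def iml_best_action_log_prob(log_probs, rewards):
    """Sketch of one IML step for one instance: among good actions (those
    achieving reward +1, i.e., matching the gold answer), pick the most
    likely under the current policy and return its log-likelihood, which
    the outer loop maximises. Illustrative, not the authors' code."""
    good = [lp for lp, r in zip(log_probs, rewards) if r == 1]
    return max(good) if good else None  # None: no sampled action hit the gold answer
```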
After a few epochs of IML initialization, we switch to the REINFORCE learning
objective, where the goal is to maximise the expected reward
($J^{RL}(\theta,\phi)=\sum_{x}\mathbb{E}_{P_{\theta,\phi}(a|x)}R(x,a)$) as
$\nabla_{(\theta,\phi)}J^{RL}=\sum_{x}\sum_{a\in\mathcal{A}}P_{\theta,\phi}(a|x)(R(x,a)-B(x))\nabla_{\theta,\phi}(\log
P_{\theta,\phi}(a|x))$ (3)
where $B(x)$ is simply the average (baseline) reward obtained by the policy
for that instance $x$. Further, in order to mitigate overfitting, in addition
to $L_{2}$-regularization and dropout, we also add entropy based
regularization over the argument sampling distribution, in each of the
sampling networks.
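Putting the REINFORCE update of Eq. (3) together for a single instance, a toy categorical-policy version (with illustrative names and shapes) is:

```python
import numpy as np

def reinforce_grad(logits, sampled_actions, rewards):
    """Toy REINFORCE gradient for a categorical policy pi = softmax(logits),
    using the per-instance average reward as the baseline B(x). This is an
    illustrative single-instance version of Eq. (3), not the model's code."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    baseline = np.mean(rewards)                  # B(x): average reward for this instance
    grad = np.zeros_like(logits, dtype=float)
    for a, r in zip(sampled_actions, rewards):
        g = -probs.copy()                        # grad of log softmax(a): one_hot(a) - probs
        g[a] += 1.0
        grad += (r - baseline) * g
    return grad / len(sampled_actions)
```

Rewarded actions ($r$ above baseline) have their logits pushed up, penalized ones pushed down, which is the behaviour Eq. (3) describes.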
## 3 Experiments
We now empirically compare the _exact-match_ performance of WNSMN with SoTA
baselines on versions of DROP dataset and also examine how it fares in
comparison to strong supervised skylines.
The Primary Baselines for WNSMN are the explicit reasoning based NMN (Gupta et
al., 2020) which uses additional strong supervision and the BERT based
language model GenBERT (Geva et al., 2020) that does not embody any reasoning
and autoregressively generates numeric answer tokens. As the Primary Dataset
we use DROP-_num_ , the subset of DROP with numerical answers. This subset
contains 45K and 5.8K instances respectively from the standard DROP train and
development sets. Originally, NMN was showcased on a very specific subset of
DROP, restricted to the 6 reasoning-types it could handle, out of which three
(_count_ , _date-difference_ , _extract-number_) have numeric answers. This
subset comprises 20K training and 1.8K development instances, out of which
only 10K and 800 instances respectively have numerical answers. We further
evaluate on this numerical subset, referred to as DROP-Pruned-_num_. In both
cases, the training data was randomly split into 70%:30% for train and
internal validation and the standard DROP development set was treated as the
Test set.
Figure 4: t-SNE plot of DROP-_num_ -Test questions.
Figure 4 shows the t-SNE plot of pretrained Sentence-BERT (Reimers & Gurevych,
2019) encoding of _all_ questions in DROP-_num_ -Test and also the DROP-
Pruned-_num_ -Test subset with different colors (red, green, yellow)
representing different types. Not only are the DROP-_num_ questions more
diverse than the carefully chosen DROP-Pruned-_num_ subset, the latter also
forms well-separated clusters corresponding to the three reasoning types.
Additionally, the average perplexity (using nltk) of the DROP-Pruned-_num_ and
DROP-_num_ questions was found to be _3.9_ and _10.65_ respectively, further
indicating the comparatively open-ended nature of the latter.
For the primary baselines NMN and GenBERT, we report the performance of in-house
trained models on the respective datasets, using the code open-sourced by the
authors. The remaining results, taken from Geva et al. (2020), Kinley & Lin
(2019), and Ran et al. (2019), refer to models trained on the full DROP dataset.
All models use the same pretrained BERT-base. Also note that a primary
requirement of all models other than GenBERT and WNSMN (i.e., NMN, MTMSN,
NABERT, NAQANet, NumNet) is the exhaustive enumeration of the output space of
all possible discrete operations. This simplifies the QA task to a
classification setting, thus alleviating the need for discrete reasoning at
inference time.
Table 1: DROP-_num_ -Test Performance of Baselines and WNSMN.
Model | Prog. | Exec. | QAtt. | Acc. (%)
---|---|---|---|---
NMN-_num_ | ✗ | ✓ | ✓ | 11.77
NMN-_num_ | ✓ | ✗ | ✓ | 17.52
NMN-_num_ | ✓ | ✓ | ✗ | 18.27
NMN-_num_ | ✓ | ✗ | ✗ | 18.54
NMN-_num_ | ✗ | ✓ | ✗ | 12.27
NMN-_num_ | ✗ | ✗ | ✓ | 11.80
NMN-_num_ | ✗ | ✗ | ✗ | 11.70
GenBERT | ✗ | ✗ | ✗ | 42.30
GenBERT-_num_ | ✗ | ✗ | ✗ | 38.41
WNSMN | ✗ | ✗ | ✗ | 50.97
Table 1 presents our primary results on DROP-_num_ , comparing the performance
of WNSMN (accuracy of the top-1 sampled action by the RL agent) with various
ablations of NMN (provided in the authors’ implementation) by removing at least
one of Program, Execution, and Query Attention supervision (Section A.4.1) and
GenBERT models with pretrained BERT that are finetuned on DROP or DROP-_num_
(denoted as GenBERT and GenBERT-_num_). For a fair comparison with our weakly
supervised model, we do not treat NMN with all forms of supervision or GenBERT
model pretrained with additional _synthetic_ numerical and textual data as
comparable baselines. Note that these GenBERT variants indeed enjoy strong
reasoning supervision in terms of gold arithmetic expressions provided in
these auxiliary datasets.
NMN’s performance is abysmally poor, a drastic degradation compared to its
performance on the pruned DROP subset reported by Gupta et al. (2020) and in
our subsequent experiments in Table 2. This can be attributed to its limitation
in handling the more diverse classes of reasoning and open-ended queries in
DROP-_num_, further exacerbated by the lack of one or more types of strong
supervision. (Both the results and limitations of NMN in Tables 1 and 2 were
confirmed by the authors of NMN as well.) Our earlier analysis of the
complexity of the questions in the subset versus the full DROP-_num_ further
quantifies the relative difficulty of the latter. On the other
hand, GenBERT delivers a mediocre performance, while GenBERT-_num_ degrades
additionally by 4%, as learning from numerical answers alone further curbs the
language modeling ability. Our model performs significantly better than both
these baselines, surpassing GenBERT by 8% and the NMN baseline by around 32%.
This showcases the significance of incorporating explicit reasoning in neural
models in comparison to vanilla large-scale LMs like GenBERT. It also
establishes the generalizability of such reasoning based models to more open-
ended forms of QA, in comparison to contemporary modular networks like NMN,
owing to its ability to handle both learnable and discrete modules in an end-
to-end manner.
Next, in Table 2, we compare the performance of the proposed WNSMN with the
same NMN variants (as in Table 1) on DROP-Pruned-_num_. Some of the salient
observations are:
Table 2: DROP-Pruned-_num_ -Test Performance of NMN variants and WNSMN.
Model | Prog. | Exec. | QAtt. | Acc. (%) | Count | Extract-num | Date-differ
---|---|---|---|---|---|---|---
NMN-_num_ | ✓ | ✓ | ✓ | 68.6 | 50.0 | 88.4 | 72.5
NMN-_num_ | ✗ | ✓ | ✓ | 42.4 | 24.1 | 73.9 | 36.4
NMN-_num_ | ✓ | ✗ | ✓ | 54.3 | 47.9 | 80.7 | 40.9
NMN-_num_ | ✓ | ✓ | ✗ | 63.3 | 45.5 | 81.1 | 68.7
NMN-_num_ | ✗ | ✗ | ✓ | 48.2 | 38.1 | 72.4 | 41.9
NMN-_num_ | ✓ | ✗ | ✗ | 61.0 | 44.7 | 81.1 | 63.2
NMN-_num_ | ✗ | ✓ | ✗ | 62.3 | 43.7 | 84.1 | 67.7
NMN-_num_ | ✗ | ✗ | ✗ | 62.1 | 46.8 | 83.6 | 66.1
WNSMN | ✗ | ✗ | ✗ | 66.5 | 58.8 | 66.8 | 75.2
(i) WNSMN in fact reaches a performance quite close to the _strongly
supervised_ NMN variant (first row), and attains an improvement margin of at
least $4\%$ over all other variants obtained by removing one or more types of
supervision. This is despite all variants of NMN _additionally_ enjoying the
exhaustive precomputation of the output space of possible numerical answers;
(ii) WNSMN suffers only in the case of _extract-number_ type operations (e.g.,
_max, min_) that involve a more complex process of sampling an arbitrary number
of arguments; (iii) the performance drop of NMN is not very large when all or
none of the strong supervision is present, possibly because of the limited
diversity of reasoning types and query language; and (iv) Query-Attention
supervision in fact adversely affects NMN’s performance in the absence of the
_program_ or _execution_ supervision or both, possibly owing to an undesirable
biasing effect. However, when both supervisions are available, query-attention
improves the model performance by $5\%$. Further, we believe the test set of
800 instances is too small to give an unbiased reflection of the models’
performance.
In Table 3, we additionally inspect recall over the top-$k$ actions sampled by
WNSMN to estimate how it fares in comparison to the strongly supervised
skylines: (i) NMN with all forms of strong supervision; (ii) GenBERT variants
+ND, +TD and +ND+TD further pretrained on synthetic Numerical and Textual Data
and both; (iii) reasoning-free hybrid models like MTMSN (Hu et al., 2019) and
NumNet (Ran et al., 2019), NAQANet (Dua et al., 2019) and NABERT, NABERT+
(Kinley & Lin, 2019). Note that both NumNet and NAQANet do not use pretrained
BERT. MTMSN achieves SoTA performance through a supervised framework of
training specialized predictors for each reasoning type to predict the
numerical expression directly instead of learning to reason. While top-1
performance of WNSMN (in Table 1) is $4\%$ worse than NABERT, Recall@top-2 is
equivalent to the strongly supervised NMN, top-5 and top-10 is comparable to
NABERT+, NumNet and GenBERT models +ND, +TD and top-20 nearly achieves SoTA.
Such promising recall over the top-$k$ actions suggests that more
sophisticated RL algorithms with better exploration strategies can possibly
bridge this performance gap.
Table 3: Skylines & WNSMN top-$k$ performance on DROP-_num_ -Test
Strongly Supervised Models | Acc. (%)
---|---
NMN-_num_ (all supervision) | 58.10
GenBERT+ND | 69.20
GenBERT+TD | 70.50
GenBERT+ND+TD | 75.20
NAQANet | 44.97
NABERT | 54.27
NABERT+ | 66.60
NumNet | 69.74
MTMSN | 75.00
Recall@top-$k$ actions of WNSMN (%)
---
$k=2$ | $k=3$ | $k=4$ | $k=5$ | $k=10$ | $k=20$
58.6 | 63.0 | 65.4 | 67.4 | 72.3 | 74.2
## 4 Analysis & Future Work
##### Performance Analysis
Despite the notorious instabilities of RL due to high variance, the training
trend, as shown in Figure 5(a), is not afflicted by catastrophic forgetting.
The sudden performance jump between epochs 10-15 is because of switching from
iterative ML initialization to REINFORCE objective. Figure 5(b) shows the
individual module-wise performance evaluated using the noisy pseudo-rewards,
that indicate whether the action sampled by this module _led_ to the correct
answer or not (details in Section A.6). Further, by bucketing the performance
by the total number of passage entities in Figure 5(c), we observe that WNSMN
remains unimpacted by the increasing number of date/numbers, despite the
action space explosion. On the other hand, GenBERT’s performance drops
linearly beyond 25 passage entities and NMN-_num_ degrades exponentially from
the beginning, owing to its direct dependency on the exponentially growing
exhaustively precomputed output space.
Module | Performance
---|---
Sample 1 Argument | 54% (Acc.)
Sample 2 Argument | 52% (Acc.)
Counter | 50% (Acc.)
Entity Ranker | 53% (Acc.)
Operator Predictor | 78% (Acc.)
Entity Type Predictor | 83% (Acc.)
Overall Action Sampler | 84% (Rec@All)
Figure 5: (a) Training trend showing the Recall@top-$k$ and all actions,
accuracy of Operator and Entity-type Predictor, estimated based on noisy
pseudo rewards (Section A.6), (b) Module-wise performance (using pseudo-
reward) on DROP-_num_ -Test, (c) Bucketing performance by total number of
passage entities for WNSMN, and the best performing NMN and GenBERT model from
Table 1.
##### More Stable RL Framework
The training trend in Figure 5(a) shows early saturation and the module-wise
performance indicates overfitting despite the regularization tricks in Section
2.1.3 and Section A.6. While more stable RL algorithms like Actor-Critic,
Trust Region Policy Optimization (Schulman et al., 2015) or Memory Augmented
Policy Optimization (Liang et al., 2018) can mitigate these issues, we leave
them for future exploration. Also, though this work’s objective was to train
module networks with weak supervision, the sparse confounding rewards in the
exponential action space indeed render the RL training quite challenging. One
practical future direction to bridge the performance gap would be to pretrain
with strong supervision on at least a subset of reasoning categories or on
more constrained forms of synthetic questions, similar to GenBERT. Such a
setting would require inspection and evaluation of generalizability of the RL
model to unknown reasoning types or more open-ended questions.
## 5 Related Work
In this section we briefly compare our proposed WNSMN to the two closest genres
of models that have proven quite successful on DROP (a more detailed related
work section is presented in Appendix A.4): i) reasoning-free hybrid models
NumNet, NAQANet, NABERT, NABERT+, MTMSN, and NeRd; ii) modular networks for
reasoning, i.e., NMN. Their main distinction from WNSMN is that in order to
address the challenges of weak supervision, they obtain program annotation from
the QA pairs through i) various heuristic parsing of the templatized queries in
DROP to get supervision of the reasoning type (max/min, diff/sum, count,
negate), and ii) exhaustive search over all possible discrete operations to get
supervision of the arguments in the reasoning.
Such heuristic supervision makes the learning problem significantly simpler in
the following ways:
* •
These models enjoy supervision of specialized programs that carry explicit
information about the type of reasoning to apply for a question, e.g., SUM(10, 12)
* •
A simplistic (contextual BERT-like) _reader_ model to read query related
information from the passage trained with direct supervision of the query span
arguments at each step of the program
* •
A _programmer_ model that can be directly trained to decode the specialized
programs
* •
_Executing_ numerical functions (e.g., _difference, count, max, min_) either
by i) training purely neural modules in a strong supervised setting using the
annotated programs or by ii) performing the actual discrete operation as a
post processing step on the model’s predicted program. For each of these
previous works, it is possible to directly apply the learning objective on the
space of decoded program, without having to deal with the discrete answer or
any non-differentiability.
However, such heuristic techniques of program annotation or exhaustive search
are not practical as the language of questions or the space of discrete
operations becomes more complex. Hence WNSMN learns in the challenging weakly
supervised setting without any additional annotation through:
* •
A noisy symbolic query decomposition that is oblivious to the reasoning type
and simply based on generic text parsing techniques
* •
An entity specific cross attention model extracting passage information
relevant to each step of the decomposed query and learning an attention
distribution over the entities of each type
* •
Learning to apply discrete reasoning by employing neural modules that learn to
sample the operation and the entity arguments
* •
Leveraging a combination of neural and discrete modules when executing the
discrete operation, instead of using only neural modules which need strong
supervision of the programs for learning the functionality
* •
Fundamentally different learning strategy by incorporating inductive bias
through auxiliary losses and Iterative Maximal Likelihood for a more
conservative initialization followed by REINFORCE
These reasoning-free hybrid models are not comparable with WNSMN because of
their inability to learn in the absence of any heuristic program annotation.
Instead of learning to reason based only on the final answer supervision, they
reduce the task to learning to decode the program, based on heuristic program
annotation. NMN is the only reasoning-based model that, similar to ours,
employs various auxiliary losses to learn even in the absence of additional
supervision.
To our knowledge WNSMN is the first work on modular networks for fuzzy
reasoning over text in RC framework, to handle the challenging cold start
problem of the weak supervised setting without needing any additional
specialized supervision of heuristic programs.
## 6 Conclusion
In this work, we presented the Weakly Supervised Neuro-Symbolic Module Network
for numerical reasoning over MRC, built on a generalized framework of query
parsing to noisy heuristic programs. It trains both neural and discrete
reasoning modules end-to-end in a deep RL framework with only a discrete reward
based on exact answer match. Our empirical analysis on the _numerical-answer
only subset_ of DROP shows significant performance improvement of the proposed
model over SoTA NMNs and the Transformer-based language model GenBERT when
trained in comparable weakly supervised settings. While, to our knowledge, this
is the first effort towards training modular networks for fuzzy reasoning over
RC in a weakly supervised setting, there is significant scope for improvement,
such as employing more sophisticated RL frameworks or leveraging pretraining of
reasoning.
## References
* Chen & Manning (2014) Danqi Chen and Christopher Manning. A fast and accurate dependency parser using neural networks. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pp. 740–750, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1082. URL https://www.aclweb.org/anthology/D14-1082.
* Chen et al. (2020) Xinyun Chen, Chen Liang, Adams Wei Yu, Denny Zhou, Dawn Song, and Quoc V. Le. Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net, 2020. URL https://openreview.net/forum?id=ryxjnREFwH.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2018. URL http://arxiv.org/abs/1810.04805.
* Dua et al. (2019) Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In _Proc. of NAACL_ , 2019.
* Geva et al. (2020) Mor Geva, Ankit Gupta, and Jonathan Berant. Injecting numerical reasoning skills into language models. In _ACL_ , 2020.
* Gupta et al. (2020) Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, and Matt Gardner. Neural module networks for reasoning over text. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=SygWvAVFPr.
* Hochreiter & Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. _Neural computation_ , 9(8):1735–1780, 1997.
* Hu et al. (2019) Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. A multi-type multi-span network for reading comprehension that requires discrete reasoning. In _Proceedings of EMNLP_ , 2019.
* Huang et al. (2020) Ting Huang, Zhi-Hong Deng, Gehui Shen, and Xi Chen. A window-based self-attention approach for sentence encoding. _Neurocomputing_ , 375:25–31, 2020. doi: 10.1016/j.neucom.2019.09.024. URL https://doi.org/10.1016/j.neucom.2019.09.024.
* Kinley & Lin (2019) Jambay Kinley and Raymond Lin. NABERT+: Improving numerical reasoning in reading comprehension. 2019. URL https://github.com/raylin1000/drop-bert.
* Koncel-Kedziorski et al. (2016) Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. MAWPS: A math word problem repository. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pp. 1152–1157, San Diego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-1136. URL https://www.aclweb.org/anthology/N16-1136.
* Liang et al. (2017) Chen Liang, Jonathan Berant, Quoc Le, Kenneth D Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , volume 1, pp. 23–33, 2017.
* Liang et al. (2018) Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc V Le, and Ni Lao. Memory augmented policy optimization for program synthesis and semantic parsing. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), _Advances in Neural Information Processing Systems 31_ , pp. 10015–10027. Curran Associates, Inc., 2018.
* Ran et al. (2019) Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, and Zhiyuan Liu. NumNet: Machine reading comprehension with numerical reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pp. 2474–2484, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1251. URL https://www.aclweb.org/anthology/D19-1251.
* Reimers & Gurevych (2019) Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, November 2019. URL http://arxiv.org/abs/1908.10084.
* Saha et al. (2019) Amrita Saha, Ghulam Ahmed Ansari, Abhishek Laddha, Karthik Sankaranarayanan, and Soumen Chakrabarti. Complex program induction for querying knowledge bases in the absence of gold programs. _Transactions of the Association for Computational Linguistics_, 7:185–200, March 2019. doi: 10.1162/tacl_a_00262. URL https://www.aclweb.org/anthology/Q19-1012.
* Schulman et al. (2015) John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. volume 37 of _Proceedings of Machine Learning Research_ , pp. 1889–1897, Lille, France, 07–09 Jul 2015. PMLR. URL http://proceedings.mlr.press/v37/schulman15.html.
* Subramanian et al. (2020) Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, and Matt Gardner. Obtaining Faithful Interpretations from Compositional Neural Networks. In _Association for Computational Linguistics (ACL)_ , 2020.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), _Advances in Neural Information Processing Systems 30_ , pp. 5998–6008. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf.
* Williams (1992) R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine Learning_ , 8:229–256, 1992.
* Zhong et al. (2017) Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning, 2017.
## Appendix A Appendix
### A.1 Qualitative Analysis
Weakly Supervised Neuro-Symbolic Module Network | GenBERT
---|---
1\. Query: how many times did a game between the patriots versus colts result
in the exact same scores?, Ans: 2
Num. of Passage Entities: Date(10), Number(9)
D, N = _Entity-Attention_(‘how many times’) // _D, N are the attention distribution over date and number entities_ | Predicted AnsType: Decoded
D1, N1 = _Entity-Attention_(‘did a game between the patriots versus colts result in the exact same scores’, (D, N)) | Decoder output: 2
‘Number’, ‘Count’ = _EntType-Operator-Selector_(‘how many times’, Query) | Span extracted: “colts”
Answer 2 = _Count_(N1) | Answer = 2
2\. Query: how many people in chennai, in terms of percent population, are not
hindu?, Ans: 19.3
Num. of Passage Entities: Date(2), Number(26)
D, N = _Entity-Attention_(‘how many people in chennai, in terms of percent population’) | Predicted AnsType: Decoded
D1, N1 = _Entity-Attention_(‘are not hindu’, (D, N)) | Decoder output: 19.3
‘Number’, ‘Negate’ = _EntType-Operator-Selector_(’are not hindu’, Query) | Span extracted: “80.7”
1 = Count(N) | Answer = 19.3
{80.7} = _Sample-Arbitrary-Arguments_(N1, 1) |
Answer = 19.3 = _Negate_({80.7}) |
3\. Query: how many more percent of the population was male than female?, Ans:
0.4
Num. of Passage Entities: Date(4), Number(29)
D, N = _Entity-Attention_(‘how many’) | Predicted AnsType: Decoded
D1, N1 = _Entity-Attention_(‘more percent of the population was male’, (D, N)) | Decoder output: 3.2
D2, N2 = _Entity-Attention_(‘than female’, (D, N)) | Span extracted: “49.8”
‘Number’,‘Difference’ = _EntType-Operator-Selector_(’how many’, Query) | Answer = 3.2
50.2 = Sample-1-Argument(N1) |
49.8 = _Sample-2-Argument_(N2) |
Answer = 0.4 = _Difference_({50.2, 49.8}) |
4\. Query: how many more, in percent population of aigle were between 0 and 9
years old than are 90 and older?, Ans: 9.8
Num. of Passage Entities: Date(0), Number(25)
D, N = _Entity-Attention_(‘how many more’) | Predicted AnsType: Decoded
D1, N1 = _Entity-Attention_(‘in percent population of aigle were between 0 and 9 years old’, (D, N)) | Decoder output: 1.7
D2, N2 = _Entity-Attention_(‘than are 90 and older’, (D, N)) | Span extracted: “0.9”
‘Number’, ‘Difference’ = _EntType-Operator-Selector_(‘how many more’, Query) | Answer = 1.7
10.7 = _Sample-1-Argument_(N1) |
0.9 = _Sample-1-Argument_(N2) |
Answer = 9.8 = _Difference_({10.7, 0.9}) |
5\. Query: going into the 1994 playoffs, how many years had it been since the
suns had last reached the playoffs?, Ans: 3
Num. of Passage Entities: Date(3), Number(17)
D, N = _Entity-Attention_(‘going into the 1994 playoffs : how many years’) | Predicted AnsType: Decoded
D1, N1 = _Entity-Attention_(‘had it been since the suns had last reached the playoffs’, (D, N)) | Decoder output: 7
‘Date’, ‘Difference’ = _EntType-Operator-Selector_(‘going into the 1994 playoffs : how many years’, Query) | Span extracted:“1991”
{1991, 1994} = _Sample-2-Argument_(D) | Answer = 7
Answer = 3 = _Difference_({1991, 1994}) |
6\. Query: how many more points did the cats have in the fifth game of the AA
championship playoffs compared to st. paul saints?,
Ans: 3
Num. of Passage Entities: Date(3), Number(12)
D, N = _Entity-Attention_(‘how many’) | Predicted AnsType: Decoded
D1, N1 = _Entity-Attention_(‘more points did the cats have in the fifth game of the AA championship playoffs’, (D, N)) | Decoder output: 3
D2, N2 = _Entity-Attention_(‘compared to the st. paul saints’, (D, N)) | Span extracted: “4 - 1 in the fifth game”
‘Number’, ‘Difference’ = _EntType-Operator-Selector_(‘how many’, Query) | Answer = 3
5.0 = _Sample-1-Argument_(N1) |
2.0 = _Sample-1-Argument_(N2) |
Answer = 3.0 = _Difference_({5.0, 2.0}) |
7\. Query: how many total troops were there in the battle?, Ans: 40000
Num. of Passage Entities: Date(1), Number(3)
D, N = _Entity-Attention_(‘how many total troops’) | Predicted AnsType: Decoded
D1, N1 = _Entity-Attention_(‘were there in the battle’, (D, N)) | Decoder output: 100000
‘Number’, ‘Sum’ = _EntType-Operator-Selector_(’how many total troops’, Query) |
2 = _Count_(N1) | Span extracted: “10000 korean troops”
{10000.0, 30000.0} = _Sample-Arbitrary-Arguments_(N1, 2) | Answer = 100000
Answer = 40000.0 = _Sum_({10000.0, 30000.0}) |
Weakly Supervised Neuro-Symbolic Module Network | NMN-_num_ | GenBERT
---|---|---
8\. Query: how many field goals did sebastian janikowski and kris brown both
score each? Ans: 2
Num. of Passage Entities: Date(0), Number(9)
D, N = _Entity-Attention_(‘how many field goals’) | P1 = _Find-Passage-Attention_() | Predicted AnsType: Decoded
D1, N1 = _Entity-Attention_(‘did sebastian janikowski and kris brown both score each’, (D, N)) | P2 = _Filter-Passage-Attention_(P1) | Decoder output: 2
‘Number’, ‘Count’ = _EntType-Operator-Selector_(‘how many field goals’, Query) | 2 = _Passage-Attn-To-Count_(P2) | Span extracted: “33 - yard”
Answer = 2.0 = _Count_(N1) | Answer = 2 | Answer = 2
9\. Query: how many years was between the oil crisis and the energy crisis?
Ans: 6
Num. of Passage Entities: Date(19), Number(14)
D1, N1 = _Entity-Attention_(‘was between the oil crisis and the energy crisis’) | _year-diffs_ $\in\mathbbm{R}^{40}$ (// generated exhaustive output space of all differences) | Predicted AnsType: Decoded
D, N = _Entity-Attention_(‘how many years’, D1, N1) | P1 = _Find-Passage-Attention_() | Decoder output: 3
‘Date’, ‘Difference’ = _EntType-Operator-Selector_(‘how many years’, Query) | 6 = _Year-Difference_(P1, _year-diffs_) | Span extracted: “1973”
{1973, 1979} = _Sample-2-Argument_(D) | Answer = 6.0 | Answer = 3
Answer = 6.0 = _Difference_({1973, 1979}) | |
10\. Query: how many yards was the longest touchdown pass? Ans: 40
Num. of Passage Entities: Date(0), Number(5)
D, N = _Entity-Attention_(‘how many yards was the’) | P1 = _Find-Passage-Attention_() | Predicted AnsType: Extract-Span
D1, N1 = _Entity-Attention_(‘longest touchdown pass’, (D, N)) | N1 = _Find-Passage-Number_(P1) | Decoder output: 43
‘Number’, ‘Sum’ = _EntType-Operator-Selector_(‘how many yards was the’, Query) | 40 = _Find-Max-Num_(N1) | Span extracted: “40”
1 = _Count_(N) | Answer = 40 | Answer = 40
{40.0} = _Sample-Arbitrary-Argument_(N, 1) | |
Answer = 40.0 = _Sum_({40.0}) | |
Table 4: Example questions from DROP-_num_ along with predictions of the
proposed model WNSMN and the best-performing versions of the NMN-_num_ and
GenBERT baselines from Table 1. Detailed elaboration of the outputs of these
three models below:
(i) WNSMN first parses the dependency structure in the query into a program-
form. Next, for each step of the program, it generates an attention
distribution over the date and number entities. _Entity-Attention_ refers to
that learnt entity-specific cross attention described in Section 2.1.1. It
then performs the discrete reasoning by sampling an operation and specific
entity-arguments, in order to reach the answer. _EntType-Operator-Selector_
refers to the Entity-Type and Operator Predictor in Operator Sampling Network
and Sample-*-Argument refers to the Argument Sampling Network described in
Section 2.1.2. _Sum/Difference/Logical-Not_ are some of the discrete
operations that are executed to get the answer. In some cases (e.g., Query 3),
despite wrong parsing, the model was able to predict the correct operation
even though the root clause did not have sufficient information. In Query 10,
the correct operation is _Max_ , but WNSMN reaches the right answer by sampling
only the maximum number entity through the Sample-Arbitrary-Argument network
and then applying a spurious Sum operation on it.
(ii) On the other hand, the steps of the program generated by NMN-_num_ first
compute or further filter attention distribution over the passage or entities
which are then fed into the learnable modules (_Passage-Attn-To-Count_ ,
_Year-Difference_) that predict the answer. In order to do so, it needs to
precompute all possible outputs of numerical operations that generate new
numbers, e.g., _year-diffs_ in Example 9. Because of the relatively poorer
performance of NMN-_num_ , its outputs are reported only for the last 3
instances, which were cherry-picked based on NMN-_num_ ’s predictions.
(iii) GenBERT first predicts whether the answer should be decoded or extracted
from passage span and accordingly uses the Decoder output or extracted span as
the answer. By design, the modular networks provide a more interpretable
output than the monolithic encoder-decoder model GenBERT.
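The discrete operations appearing in these traces (_Count_, _Sum_, _Difference_, _Negate_) are executed symbolically once arguments are sampled. A minimal Python sketch, purely illustrative rather than the released implementation; following Query 2 and the analysis in Table 6, _Negate_ is taken as the percent complement 100 − x:

```python
def count_op(args):
    """Count: the number of sampled entity arguments."""
    return float(len(args))

def sum_op(args):
    """Sum: total over the sampled number arguments."""
    return float(sum(args))

def difference_op(args):
    """Difference: absolute difference of exactly two arguments."""
    a, b = args
    return abs(a - b)

def negate_op(args):
    """Negate: percent complement (100 - x) of a single argument."""
    (x,) = args
    return 100.0 - x
```

For instance, the final step of Query 3 corresponds to `difference_op([50.2, 49.8])` and that of Query 7 to `sum_op([10000.0, 30000.0])`.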
### A.2 Implementation & Pseudo-Code
The source code and models pertaining to this work will be open-sourced on
acceptance of this work. A detailed pseudo-code of the WNSMN algorithm is
provided below.
Algorithm 1 WNSMN Algorithm
Input: (Query ($q$), Passage ($p$)) = $x$
Output (or Supervision): Answer($y$) $\in\mathbbm{R}$
Preprocessing:
[$num_{1}$, $num_{2}$, $\ldots$, $num_{N}$] = $Num$ = Extract-Numbers($p$) _//
Number and Date_
[$date_{1}$, $date_{2}$, $\ldots$, $date_{D}$] = $Date$ = Extract-
Dates($p$)_// Entity and Passage Mentions_
Inference:
$[(q_{1},ref_{1}),\ldots(q_{k},ref_{k}),\ldots(q_{l},ref_{l})]$ = $Program$ =
Query-Parsing($q$)
for step $(q_{k},ref_{k})\in Program$ do
$({\bm{A}}^{num}_{k}$, $\mathcal{T}^{num}_{k}$), (${\bm{A}}^{date}_{k}$,
$\mathcal{T}^{date}_{k}$) = Entity-Attention($q_{k}$, $p$, $ref_{k}$, $Num$,
$Date$) _// Section 2.1.1_
end for
${\mathcal{L}}^{num}_{aux}$, ${\mathcal{L}}^{date}_{aux}$ = Entity-Inductive-
Bias(${\bm{\mathsfit{A}}}^{num}$, ${\bm{\mathsfit{A}}}^{date}$)_Equation 1_
${\mathcal{L}}_{aux}={\mathcal{L}}^{num}_{aux}+{\mathcal{L}}^{date}_{aux}$
$q_{l}$ = _Query Span_ Argument of Last Step _// Program Arguments and Stacked
Attention_
$ref_{l}$ = _Reference_ Argument of Last Step _// Map over Entities for Last
Step_
$\mathcal{T}^{num}$ = {${\mathcal{T}^{num}_{k}|k\in ref_{l}}$},
$\mathcal{T}^{date}$ = {${\mathcal{T}^{date}_{k}|k\in ref_{l}}$}
$Operators$ = {$op_{1}$, $op_{2}$, …, $op_{k1}$} = Operator-Predictor($q_{l}$,
$q$) _// Operator and EntityType_
$EntTypes$ = {$type_{1}$, $type_{2}$, …, $type_{k1}$} = Entity-Type-
Predictor($q_{l}$, $q$)_// Sampling_
$Actions$ = {}_// Action Sampling for each Operator_
for $op$, $type$ $\in$ ($Operators$, $EntTypes$) do
if $type$ is Number then
$\mathcal{T}=\mathcal{T}^{num}$
else if $type$ is Date then
$\mathcal{T}=\mathcal{T}^{date}$
end if
if $op$ is diff then
if $|ref_{l}|==2$ then
$arg1$ = {$arg1_{1}$, $arg1_{2}$, $\ldots$, $arg1_{k2}$} =
Sample-1-Argument($\mathcal{T}_{0}$)
$arg2$ = {$arg2_{1}$, $arg2_{2}$, $\ldots$, $arg2_{k2}$} =
Sample-1-Argument($\mathcal{T}_{1}$)
$args$ = {$(a1,a2)|\hskip 5.0pt(a1,a2)\in(arg1,arg2)$}
else if $|ref_{l}|==1$ then
$args$ = {$arg_{1}$, $arg_{2}$, $\ldots$, $arg_{k2}$} =
Sample-2-Argument($\mathcal{T}_{0}$)
end if
else if $op$ is count then
$args$ = {$count_{1}$, $count_{2}$, $\ldots$, $count_{k2}$} = Count-
Network($\sum_{j}\mathcal{T}_{j}$)
else
$args$ = {$arg_{1}$, $arg_{2}$, $\ldots$, $arg_{k2}$} = Sample-Arbitrary-
Argument($\sum_{j}\mathcal{T}_{j}$)
end if
$probs$ = {$(p^{type}*p^{op}*p)\mid p\in p^{arg}$} $\in\mathbbm{R}^{k2}$ _// $p$’s
refer to the corresponding probabilities_
$answers$ = {Execute-Discrete-Operation($type$, $op$, $arg$) $\mid arg\in args$} $\in\mathbbm{R}^{k2}$
$actions$ = {($prob$, $answer$) $\mid prob\in probs, answer\in answers$}
$Actions$ = $Actions$ $\cup$ $actions$
end for
Training:
for $i\in\\{1,\ldots,N_{IML}+N_{RL}\\}$ do
for $(x,y)\in\mathcal{D}$ do
$\mathcal{A}(x)\longleftarrow Actions$ sampled for input($x$) _// Using above
Algorithm_
$R(x,a,y)\longleftarrow$ Exact Match Reward for action $a$ for instance $x$
with gold answer $y$
if $i\leq N_{IML}$ then
$(\theta,\phi)\longleftarrow\displaystyle\max_{\theta,\phi}J^{IML}$ over
($\mathcal{A},R)+\displaystyle\min_{\phi}{\mathcal{L}}_{aux}$_$J^{IML}$ from
Equation 2_
else
$(\theta,\phi)\longleftarrow\displaystyle\max_{\theta,\phi}J^{RL}$ over
($\mathcal{A},R)+\displaystyle\min_{\phi}{\mathcal{L}}_{aux}$_$J^{RL}$ from
Equation 3_
end if
end for
end for
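The action-construction core of the Inference block above can be condensed as follows. This is a toy sketch in which dictionaries of pre-supplied probabilities stand in for the learned Operator, Entity-Type, and Argument Sampling networks, and `execute` stands in for Execute-Discrete-Operation:

```python
import itertools

def enumerate_actions(op_probs, type_probs, arg_samples, execute):
    """Build the action set of Algorithm 1: each action pairs a joint
    probability p(type) * p(op) * p(args) with the answer obtained by
    executing the discrete operation on the sampled arguments."""
    actions = []
    for (op, p_op), (etype, p_type) in itertools.product(
            op_probs.items(), type_probs.items()):
        for args, p_arg in arg_samples.get((op, etype), []):
            actions.append((p_type * p_op * p_arg, execute(etype, op, args)))
    return sorted(actions, reverse=True)  # most probable action first
```

The top-ranked pair then plays the role of the top-1 action that is rewarded by exact match during training.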
### A.3 Qualitative Inspection of WNSMN Predictions
Good Action: action resulting in an exact match with the gold answer
---
Correct Action: action manually annotated to be correct
Number of test instances (DROP-num Test) | 5800
Number of instances with at least 1 good action | 4868
Number of instances with more than 1 good action | 2533
Average number of good actions (where there is at least 1 good action) | 1.5
Average number of good actions (where there is more than 1 good action) | 2.25
Number of instances where the top-1 action is good action | 2956
Number of instances where top-1 is the only good action | 2335 (79% of 2956)
Number of instances with possibility of top-1 action being spuriously good | 620 (21% of 2956)
Number of instances manually annotated (out of possible cases of spurious top-1 action) | 334 (out of 620)
Number of instances where top-1 action is found to be spurious | 28 (8.4% of 334)
Avg Ratio of Probability of Top Action and Maximum Probability of all other spuriously good actions (if any) | 4.4e+11
Table 5: Analysis of the predictions of WNSMN on DROP-_num_ Test
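The headline counts in Table 5 can be derived from ranked per-instance action lists carrying goodness flags. A hypothetical helper (the names and input format are our own, for illustration only):

```python
def good_action_stats(instances):
    """instances: one list per test instance of (prob, is_good) pairs,
    sorted by descending probability. Returns four counts analogous to
    Table 5: >=1 good, >1 good, top-1 good, top-1 the only good."""
    at_least_one = sum(any(g for _, g in acts) for acts in instances)
    more_than_one = sum(sum(g for _, g in acts) > 1 for acts in instances)
    top1_good = sum(bool(acts) and acts[0][1] for acts in instances)
    top1_only = sum(bool(acts) and acts[0][1]
                    and sum(g for _, g in acts) == 1 for acts in instances)
    return at_least_one, more_than_one, top1_good, top1_only
```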
Generic Observations/Notes
* •
Note: When the model selects a single number in the Argument Sampling network
and the Operator sampled is not of type count, we forcefully consider the
operation as a NO-OP. For example, sum/min/max over a single number or date is
treated as a NO-OP.
* •
One potential source of spuriously correct answers is the neural ‘counter’
module, which can predict numbers in [1, 10]. However, out of the cases where
at least one of the top-50 actions is a good action, we observe that the model
is able to learn when the answer is directly present as an entity or can be
obtained through (non-count) operations over other entities, and when it cannot
be obtained directly from the passage but needs to aggregate (i.e., count)
over multiple entities. Table 8 below gives some examples of _hard_ instances
where the WNSMN top-1 prediction was found to be correct.
True Reasoning | Model Prediction | Count
---|---|---
_negate_ a passage entity i.e., 100 - number | the model was able to select _negate_ of the correct entity as the top action. | 34
_min/max_ of a set of passage entities | the model instead directly sampled the correct minimum/maximum entity as a single argument and then applied NO-OP operation over it. | 11
_select_ one of the passage entities | the model was able to select the right entity and apply NO-OP on it as the top action. | 18
_count_ over passage entities | the model was able to put count as the top action and the spurious actions came much lower with almost epsilon probability | 88
_difference_ over passage entities (the same answer could be spuriously obtained by other non-difference operations over unrelated entities) | the model was able to put difference as the top action and the spurious actions came much lower with almost epsilon probability | 89
_difference_ over passage entities (the same answer could be spuriously obtained by difference over other unrelated entities) | the model was able to put difference over the correct arguments as the top action | 66
Table 6: Case study of the 306 instances manually annotated as Correct out of 334 instances
True Reasoning | Model Prediction | Count
---|---|---
difference of dates/months | count over years | 4
sum(number1, count([number2]) | count over numbers | 1
difference between entities | sum over two arguments (both arguments wrong) | 1
difference between entities | difference over two arguments (both arguments wrong) | 1
difference between entities | count over entities | 1
difference between entities | sum over arguments (one correct) (correct action was taken in one of the other top-5 beams) | 2
question is vague/incomplete/could not be answered manually | count or difference | 2
counting over text spans (Very rare type of question, only 2 found out of 334) | wrong operator | 2
miscellaneous | wrong operator | 7
miscellaneous | correct operator, wrong arguments (one correct) | 2
miscellaneous | correct operator, wrong arguments (all wrong) | 5
Table 7: Case study of the 28 instances manually annotated as Wrong out of 334 instances.
Question | Relevant Passage Excerpt | Model Prediction Analysis
---|---|---
How many printing had Green Mansions gone through by 1919? | “W. H. Hudson which went through nine printings by 1919 and sold over 20,000 copies…. ” | Model was able to rank the operation sum([9.0]) highest. The _count-number_ operator had near-epsilon probability, indicating that it indeed did not find any indication of the answer being 9 by counting entities over the passage. This is despite the fact that most of the “how many” type questions need counting.
The Steelers finished their 1995 season having lost how many games difference to the number of games they had won? | “In 1995, the Steelers overcame a 3-4 start (including a 20-16 upset loss to the expansion 1995 Jacksonville Jaguars season) to win eight of their final nine games and finished with an record, the second-best in the AFC”. | Model had to avoid distracting numbers (3,4) and (20,16) to understand that the correct operation is difference of (9-8)
How many more field goals did Longwell boot over Kasay? | “26-yard field goal by kicker Ryan Longwell … Carolina got a field goal with opposing kicker John Kasay. … Vikings would respond with another Longwell field goal (a 22-yard FG) … Longwell booted the game-winning 19-yard field goal ” | Question needed counting of certain events and none of these appeared as numbers. Model was able to apply count over number entities correctly
How many delegates were women from both the Bolshevik delegates and the Socialist Revolutionary delegates? | “Of these mandatory candidates, only one Bolshevik and seven Socialist Revolutionary delegates were women.” | Model was able to apply sum on the correct numbers, even though many of the ”how many” type questions need counting
How many years in a row did the GDP growth fall into negatives? | “Growth dropped to 0.3% in 1997, -2.3% in 1998, and -0.5% in 1999.” | Model had to understand which numbers are ”negative”. It also needed to understand to count the two events instead of taking difference of the years
At it’s lowest average surface temperature in February, how many degrees C warmer is it in May? | “The average surface water temperature is 26-28 C in February and 29 C in May.” | Passage had distracting unrelated numbers in the proximity but the model was able to select the lowest temperature out of (26,28) and then take the difference (29-26)
How many years before the blockade was the Uqair conference taken place? | “Ibn Saud imposed a trade blockade against Kuwait for 14 years from 1923 until 1937… At the Uqair conference in 1922, … ” | Passage had other distracting unrelated numbers in the proximity but the model was able to select the correct difference operation
Table 8: Manual Analysis of a few _hard_ instances (with Question and Relevant
Passage Excerpt) where WNSMN top-1 prediction was found to be correct
### A.4 Background: Numerical Reasoning over Text
The most generic form of Numerical reasoning over text (NRoT) is probably
encompassed by the machine reading comprehension (MRC) framework (as in Dua et
al. (2019)), where given a long passage context, $c$, the model needs to
answer a query $q$, which can involve generating a numerical or textual answer
or selecting a numerical quantity or span of text from the passage or query.
The distinguishing factor from general RC is the need to perform some
numerical computation using the entities and numbers in the passage to reach
the goal.
_Discrete/symbolic reasoning in NRoT_ : In the early NRoT datasets (Hosseini
et al. (2014); Roy & Roth (2015); Koncel-Kedziorski et al. (2016)), which deal
with simpler math word problems with a small context and few number entities,
symbolic techniques to apply discrete operations were quite popular.
However, as the space of operations grow or the question or the context
becomes more open-ended these techniques fail to generalize. Incorporating
explicit reasoning in neural models as discrete operations requires handling
non-differentiable components in the network which leads to optimization
challenges.
_Discrete reasoning using RL_ : Recently Deep Reinforcement Learning (DRL) has
been employed in various neural symbolic models to handle discrete reasoning,
but mostly in simpler tasks like KBQA, Table-QA, or Text-to-SQL Zhong et al.
(2017); Liang et al. (2018; 2017); Saha et al. (2019); ijcai2019-679;
Neelakantan et al. (2017). Such tasks can be handled by well-defined
components or modules, with well structured function-prototypes (i.e.,
function arguments can be of specific variable-types e.g., KB entities or
relations or Table row/column/cell values), which can be executed entirely as
a symbolic process. On the other hand, MRC needs more generalized frameworks
of modular networks involving fuzzy forms of reasoning, which can be achieved
by _learning_ to execute the query over a sequence of learnable neural
modules, as explored in Gupta et al. (2020). This was inspired by the Neural
Modular Networks which have proved quite promising for tasks requiring similar
fuzzy reasoning like Visual QA DBLP:conf/cvpr/AndreasRDK16;
DBLP:journals/corr/AndreasRDK15.
##### SoTA models on DROP:
While the current leaderboard-topping models already showcase quite superior
performance on this reasoning-based RC task, closer inspection is needed to
understand whether the problem has indeed been fully solved.
_Pre-trained Language Models_ : On one hand, large-scale pretrained
language models (Geva et al. (2020)) use a Transformer encoder-decoder (with
pretrained BERT) to emulate the input-output behavior, decoding digit-by-digit
for numeric and token-by-token for span-based answers. However, such models
perform poorly when trained only on DROP and need additional synthetic datasets
of numeric expressions and DROP-like numeric textual problems, each augmented
with the _gold numeric expression_ form.
_Reasoning-free Hybrid Models_ : On the other hand, a class of _hybrid_ neural
models have also gained SoTA status on DROP by explicitly handling the
different types of numerical computations in the standard extractive QA
pipeline. Most of the models in this genre, like NumNet (Ran et al. (2019)),
NAQANet (Dua et al. (2019)), NABERT+(Kinley & Lin (2019)), MTMSN (Hu et al.
(2019)) and NeRd (Chen et al. (2020)) do not actually treat it as a reasoning
task; instead they precompute an exhaustive enumeration of all possible
outcomes of numerical and logical operations (e.g., _sum/diff, negate, count,
max/min_) and augment the training data with knowledge of the query-type
(depending on reasoning-type) and _all_ the numerical expression that leads to
the correct answer. This reduces the question-answering task to simply
learning a multi-type answer predictor to classify into the reasoning-type and
directly predict the numerical expression, thus alleviating the need for
rationalizing the inference or handling any (non-differentiable) discrete
operation in the optimization. Some of the initial models in this genre are
NAQANet (Dua et al. (2019)) and NumNet (Ran et al. (2019)), which are
respectively numerically aware enhancements of QANet (Yu et al. (2018)) and
Graph Neural Networks. These were followed by the BERT-based models NABERT and
NABERT+ (Kinley & Lin (2019)), i.e., a BERT version of the former, enhanced with
_standard numbers_ and _expression templates_ for constraining numerical
expressions. MTMSN Hu et al. (2019) models a specialized multi-type answer
predictor designed to support specific answer types (e.g.,
count/negation/add/sub) with supervision of the arithmetic expressions that
lead to the gold answer, for each type.
_Modular Networks for Reasoning_ : NMN (Gupta et al., 2020) is the first model
to address the QA task through explicit reasoning by learning to execute the
query as a specialized program over learnable modules tailored to handle
different types of numerical and logical operations. However, to do so, it
further needs to augment the training data with annotation of the _gold
program_ and _gold program execution_ i.e. the _exact_ discrete operation and
numerical expression (i.e., the numerical operation and operands) that leads
to the correct answer for e.g., the supervision of the gold numerical
expression in Figure 1 is _SUM(23, 26, 42)_. This is usually obtained through
manual inspection of the data through regex based pattern matching and
heuristics applied on the query language. However, because of the abundance of
templatized queries in DROP this pattern matching is infact quite effective
and noise-free, resulting in the annotations acting as strong supervision.
However, such a manually intensive process severely limits the overall model
from scaling to more general settings. This is especially true for some of the
previous reasoning-based models, NABERT+, NumNet and MTMSN, which perform
better than NMN (in fact achieve SoTA performance) on the full DROP dataset.
But we do not consider them as our primary baselines, as, unlike NMN, these
models (Hu et al. (2019); Efrat et al. (2019); Dua et al. (2019); Ran et al.
(2019)) do not have any provision to learn in the absence of the additional
supervision generated through exhaustive enumeration and manual inspection.
Gupta et al. (2020) were the first to train a modular network with strong,
_albeit_ more fine-grained, supervision for only a fraction of the training
data, together with auxiliary losses that allow them to learn from the QA pairs alone.
Consequently on a carefully-chosen subset of DROP, NMN showcased better
performance than NABERT and MTMSN, when strong supervision is available only
for partial training data.
Our work takes this direction further in two ways:
* •
while the NMN baseline can handle only 6 specific kinds of reasoning, for which
its program generation and gold reasoning annotation were tailored, our model
works on the full DROP-_num_ , which involves more diverse kinds of reasoning
and more open-ended questions, and requires evaluating on a $\times$7.5 larger
test subset while training on $\times$4.5 larger training data.
* •
while NMN generalized poorly on the full DROP-_num_ , especially when one
or more types of supervision are removed, our model performs significantly
better without any of these types of supervision.
Together, NMN and GenBERT are some of the latest works in the two popular
directions (reasoning-based and language-model-based) for DROP that allow
learning with partial or no strong supervision, and hence act as primary
baselines for our model.
Since in this work we are investigating how neural models can incorporate
explicit reasoning, we focus only on answering questions having a numerical
answer (DROP-_num_), where we believe the effect of explicit reasoning is more
directly observable. This is backed up by the category-wise performance
comparison of reasoning-free language model GenBERT (reported in Geva et al.
(2020)) with other hybrid models (MTMSN and NABERT+) that exploit numerical
computation required in answering DROP questions. While, on DROP-_num_ , there
is an accuracy gap of 33% between the GenBERT model and the hybrid models
(when all are trained on DROP only), there is only a 2-3% performance gap on
the subset having answers as single span, despite the latter also needing
reasoning. This evinces that the performance gap is indeed due to exploiting
explicit reasoning under such strongly supervised settings.
#### A.4.1 Limitations of NMN
The primary motivation behind our work comes from some of the limitations of
the contemporary neural module networks, NMN and the reasoning-free hybrid
models MTMSN, NABERT+, NumNet, NAQANet; specifically their dependence on the
availability of various kinds of strong supervision. For that we first
describe the nature of programmatic decompositions of queries used in the
modular architectures in the closest comparable work of NMN.
NMN defined a program structure with modules like ‘find’, ‘filter’,
‘relocate’, ‘find-num’, ‘find-date’, ‘year-difference’, ‘max-num’, ‘min-num’,
‘compose-number’ etc., to handle a carefully chosen subset of DROP showcasing
only 6 types of reasoning (e.g., _Date-Difference, Count, Extract Number,
Number Compare_). For example, for the query _Which is the longest goal by
Carpenter?_ the program structure would be _MAX(FILTER(FIND(‘Carpenter’),
‘goal’))_ , where each of these operations is a learnable network. However, to
facilitate learning of such specialized programs and the networks
corresponding to these modules, the model needs precomputation of the
exhaustive output space for different discrete operation and also various
kinds of strong supervision signals pertaining to the program generation and
execution.
_Precomputation of the Exhaustive Output-Space_ : For operations that generate
a new number as its output (e.g., _sum/diff_), the annotation enumerates the
set of all possible outputs by computing over all subsets of number or date
entities in the passage. This simplifies the task by allowing the model to
directly learn to optimize the likelihood of the arithmetic expression that
lead to the final answer, without any need for handling discrete operations.
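Concretely, for pairwise _sum/diff_ this enumeration can be sketched as follows (a schematic illustration only; the actual models also enumerate other operation types and larger argument sets):

```python
from itertools import combinations

def exhaustive_outputs(numbers):
    """Precompute every value reachable by a single sum or (signed)
    difference over a pair of passage numbers. A model can then rank
    expressions over this fixed space instead of executing any
    discrete operation at inference time."""
    space = set(numbers)  # a passage number may also be the answer itself
    for a, b in combinations(numbers, 2):
        space.update({a + b, a - b, b - a})
    return space
```

Even this restricted space grows quadratically in the number of passage entities, which is the kind of precomputed space the _year-diffs_ vector in Example 9 of Table 4 corresponds to.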
_Program Supervision_ provides supervision of the query category out of the 6
reasoning categories to which their program induction grammar is tailored.
With this knowledge they can directly use the category-specific grammar to
induce the program (e.g., _SUM(FILTER(FIND))_ in Fig. 1). Further, all these
models (NMN, MTMSN, NABERT+, NumNet, NAQANet) use the supervision of the query
category to understand whether the ‘gold’ discrete operation to perform is of
type count, add/sub, or max/min.
_Query Attention Supervision_ provides information about the query segment to
attend upon in each step of the program, as the program argument; e.g., in
Fig. 1, _‘Carpenter’_ and _‘goal’_ in the 1st and 2nd steps of the program.
_Execution Supervision_ : For operations that select one or more of the
number/date entities in the passage (e.g., max/min), rule-based techniques
provide supervision of the subset of number or date entities from the
passage over which the operation is to be performed.
These annotations are heuristically generated through manual inspection and
regular expression based pattern matching of queries, thus limiting their
applicability to a small subset of DROP only. Furthermore, using a hand-
crafted grammar to cater to the program generation for each of their reasoning
categories hinders their generalizability to more open-ended settings. While
this kind of annotation is feasible to obtain for DROP, this is clearly not
the case for future datasets with more open-ended forms of query, thus
calling for other paradigms of learning that do not require such
manually intensive annotation effort.
#### A.4.2 Pretraining Data for GenBERT
Figure 6: t-SNE of questions in DROP-_num_ -Test and Synthetic Textual Data
used in GenBERT models (TD and ND+TD)
While GenBERT (Geva et al. (2020)) greatly benefits from pretraining on
synthetic data, there are few notable aspects of how the synthetic textual
data was carefully designed to be similar to DROP. The textual data was
generated for the same two categories _nfl_ and _history_ as DROP with similar
vocabulary and involving the same numerical operations over similar ranges of
numbers (2-3 digit numbers for DROP and 2-4 digit numbers for synthetic
textual data). The intentional overlap between these two datasets is evident
from the t-SNE plots (in Figure 6) of the pretrained Sentence-Transformer
embedding of questions from DROP-_num_ (blue) and the Synthetic Textual Data
(red). Further, while the generalizability of GenBERT was tested on add/sub
operations from math word problems (MWP) datasets ADD-SUB, SOP, SEQ, their
synthetic textual data was also generated using the same structure involving
world state, entities, and verb categories used by Hosseini et al. (2014) to
generate these MWP datasets. Such bias mitigates the real challenges of
generalization, limiting the true test of robustness
of such language models for numerical reasoning.
Figure 7: Examples of Programs for WNSMN obtained from the Dependency Parse
Tree of the Query
### A.5 Query Parsing: Details
The Stanford Dependency parse tree of the query is organized into a program
structure as follows:
* •
Step 1) A node is constructed out of the subtrees rooted at each immediate
child of the root, the left-most node is called the root-clause
* •
Step 2) Traversing the nodes from left to right, an edge is added from the
left-most node to every other node, and each of these nodes is added as a step
of the program, with the node as the query-span argument of that step and the
incoming edges from past program steps as its reference arguments
* •
Step 3) The terminal (leaf) nodes obtained in this manner are then further
used to add a final step of the program which is responsible for handling the
discrete operation. The query-span argument of this step is the root-clause,
which often is indicative of the kind of discrete reasoning to perform. The
reference arguments of this step are the leaf nodes obtained from Step 2).
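The three steps above can be sketched as a small routine; here the parse is assumed to be already reduced to the list of query spans under the root (left to right), a deliberate simplification of the full Stanford tree:

```python
def parse_to_program(spans):
    """Organize the query spans under the dependency root into program
    steps (Steps 1-3 above). The left-most span is the root-clause; the
    final step carries it, with references to at most 2 terminal steps."""
    root_clause, rest = spans[0], list(spans[1:])
    steps = [(span, []) for span in rest]                     # Step 2
    if len(steps) > 2:  # collapse extra terminal nodes into one (see below)
        steps = [steps[0], (" ".join(s for s, _ in steps[1:]), [])]
    return steps + [(root_clause, list(range(len(steps))))]   # Step 3
```

For instance, for Query 3 of Table 4 this yields the two entity-attention steps followed by a final `('how many', [0, 1])` discrete-reasoning step.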
Figure 7 provides some example queries similar to those in DROP along with
their Dependency Parse Tree and the Simplified Representation obtained by
constructing the nodes and edges as in Step 1) and 2) above, and the final
program which is used by WNSMN. Note that in this simplified representation of
the parse tree the root-word of the original parse tree is absorbed in its
immediate succeeding child. Also, we simplify the structure in order to limit
the number of reference arguments in any step of the program to 2, which in
turn requires the number of terminal nodes (after Step 2 of the above process)
to be limited to 2. This is done in our left-to-right traversal by collapsing
any additional terminal nodes into a single node.
### A.6 RL Framework: Details
In this section we discuss some additional details of the RL framework and the
tricks applied in the objective function.
Iterative ML Objective: In the absence of supervision of the true discrete action
that leads to the correct answer, this iterative procedure fixes the policy
parameters to search for the _good_ actions (where
$\mathcal{A}^{good}=\\{a:R(x,a)=1\\}$) and then optimizes the likelihood of
the _best_ one out of them. However, the simple, conservative approach of
defining the best action as the most likely one according to the current
policy can lead to local minima and overfitting issues, especially in our
particularly sparse and confounding reward setting. So we take a convex
combination of a conservative and a non-conservative selection that
respectively pick the most and least likely action according to the current
policy out of $\mathcal{A}^{good}$ as best. The hyperparameter $\lambda$ weighs
these two parts of the objective and is chosen to be quite low ($10^{-3}$), to
serve the purpose of an epsilon-greedy exploration strategy without diverging
significantly from the current policy.
$J^{IML}(\theta,\phi)=\sum_{x}(1-\lambda)\max_{a\in\mathcal{A}^{good}}\log{P_{\theta,\phi}(a|x)}+\lambda\min_{a\in\mathcal{A}^{good}}\log{P_{\theta,\phi}(a|x)}$
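The per-example objective can be sketched as a loss function. This is a minimal illustration of the convex combination above, assuming `log_probs_good` holds the current-policy log-likelihoods $\log P_{\theta,\phi}(a|x)$ of the actions in $\mathcal{A}^{good}$:

```python
def iml_loss(log_probs_good, lam=1e-3):
    """Negative of the per-example IML objective.

    log_probs_good: log P(a|x) for each action in A_good (reward 1)."""
    conservative = max(log_probs_good)  # most likely good action
    exploratory = min(log_probs_good)   # least likely good action
    return -((1 - lam) * conservative + lam * exploratory)
```

With the small default $\lambda$, the loss is dominated by the conservative term, while the exploratory term keeps some probability mass on less likely good actions.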
Using Noisy Pseudo-Reward: In addition to using the REINFORCE objective to
maximise the likelihood of actions that lead to the correct answer, we can
also obtain different noisy _pseudo rewards_ ($\in\\{-1,+1\\}$) for the
different modules that contribute towards the action sampling (i.e. the
operator and the entity-type and different argument sampler networks). Towards
this end, we define pseudo-reward for sampling an operator as the maximum of
the reward obtained from _all_ the actions involving that operator. Similarly,
we can also define reward for predicting the entity-type (date or number) over
which the discrete operation should be executed. Following the same idea, we
also obtain pseudo rewards for the different argument sampling modules. For
example, if the most likely operator (as selected by the Operator Sampler) is of
type count and it gets a pseudo-reward of $+1$, then, in that case, we can use
the reward obtained by the different possible outputs of the Counter network
as a noisy pseudo-label supervision and subsequently add an explicit loss of
negative log-likelihood to the final objective for the Counter module. Similar
pseudo-reward can be designed for the Entity-Ranker module when the most
likely operator sampled by the Operator Sampler needs an arbitrary number of
arguments. Treating the pseudo-reward as a noisy label leads to a negative
log-likelihood loss on the output distribution from the Entity-Ranker,
following the idea that the correct entities should at least be ranked high
enough to be selected when sampling any arbitrary number of entities.
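The operator-level pseudo-reward described above (the maximum reward over all actions involving that operator) can be sketched as follows; the action encoding and function name are illustrative assumptions:

```python
def operator_pseudo_rewards(actions, rewards):
    """actions: list of (operator, args) pairs that were sampled;
    rewards: matching list of rewards in {-1, +1}.
    Returns, per operator, the max reward over all actions using it."""
    best = {}
    for (op, _args), r in zip(actions, rewards):
        best[op] = max(best.get(op, -1), r)
    return best
```

The same max-over-actions idea yields pseudo-rewards for the entity-type and argument sampling modules.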
|
In memory of Dick Askey, 4th June 1933 – 9th October 2019
42C05 (primary), 33C45 (secondary) This research was conducted during the
second author’s PhD, funded by the EPSRC Centre for Doctoral Training in the
Mathematics of Planet Earth [EP/L016613/1].
# Integral Representations of Ultraspherical Polynomials II
N. H. Bingham and Tasmin L. Symons<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
In the first part, by the first author’s work of 1972, an integral
representation for an ultraspherical polynomial of higher index in terms of
one of lower index and an infinite series was obtained. While this
representation works well from a theoretical point of view, it is not
numerically satisfactory as it involves polynomials of high degree, which are
numerically unstable. Here we sum this series to obtain an integral, which is
numerically tractable.
## 1 Introduction
As in [7], we write $W_{n}^{\lambda}(x)$ for the ultraspherical (or
Gegenbauer) polynomial of degree $n$ and index $\lambda>0$ (see e.g. [1, p.
302], [21, §4.4]), normalised so that $W_{n}^{\lambda}(1)=1$. This choice of
normalisation, due to Bochner [15], is convenient probabilistically; the first
part [7] was probabilistically motivated [8]; so too is this sequel (see §5).
The $W_{n}^{\nu}$ are orthogonal polynomials on the interval $[-1,1]$ with
respect to the probability measure $G_{\nu}$, where
$G_{\nu}(dy)=\frac{\Gamma(\nu+1)}{\sqrt{\pi}\Gamma(\nu+1/2)}(1-y^{2})^{\nu-1/2}dy:$
(1)
$\int_{-1}^{1}W_{m}^{\nu}(x)W_{n}^{\nu}(x)G_{\nu}(dx)={\delta}_{mn}/{\omega}^{\nu}_{n},$
where
$\omega^{\nu}_{n}=\frac{n+\nu}{\nu}\frac{\Gamma(n+2\nu)}{n!\Gamma(2\nu)}.$ (2)
###### Theorem 1.1 ([7])
If $0<\nu<\lambda$, $x\in[-1,1]$, there exists a probability measure
$M_{\nu}^{\lambda}(x;dy)$ on $[-1,1]$ such that
$W^{\lambda}_{n}(x)=\int_{-1}^{1}W^{\nu}_{n}(y)\,M^{\lambda}_{\nu}(x;dy).$ (3)
Moreover, when $\lambda\neq\nu$ the measure $M^{\lambda}_{\nu}$ is absolutely
continuous with density
$M^{\lambda}_{\nu}(x;dy)=G_{\nu}(dy)\sum_{m=0}^{\infty}\omega^{\nu}_{m}W^{\lambda}_{m}(x)W^{\nu}_{m}(y).$
(4)
The series in (4) is transparent from a theoretical point of view (it is derived
in [7] from the earlier work of Askey and Fitch [3] by an Abel-limit
operation), but unsuitable for numerical use as it involves polynomials of
high degree, which oscillate wildly. Our purpose here is to circumvent this by
giving an explicit formula for the sum of the infinite series as a double
integral, which is numerically tractable. Our result, Theorem 3.1 below, is
interesting in its own right, completing the integral representations in [7]
by showing the dependence on the higher index, $\lambda$, in a more convenient
and structurally revealing way.
## 2 Preliminaries
The Poisson kernel for the Jacobi polynomials reduces in the ultraspherical
case to the generating function
$\sum_{n=0}^{\infty}{\omega}_{n}^{\nu}r^{n}W_{n}^{\nu}(x)=(1-r^{2})/(1-2rx+r^{2})^{\nu+1},\quad
r\in(-1,1),$ (5)
cf. [7, (2.1)]. Note that this is not the usual generating function for the
ultraspherical polynomials [21, §4.7.23].
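As a quick numerical sanity check of the identity $\sum_{n}\omega_{n}^{\nu}r^{n}W_{n}^{\nu}(x)=(1-r^{2})/(1-2rx+r^{2})^{\nu+1}$, note that $\omega_{n}^{\nu}W_{n}^{\nu}(x)=\frac{n+\nu}{\nu}C_{n}^{\nu}(x)$ in terms of the standard Gegenbauer polynomials $C_{n}^{\nu}$, so the left side can be summed directly with SciPy (a sketch; the parameter values are illustrative):

```python
import numpy as np
from scipy.special import eval_gegenbauer

# omega_n^nu * W_n^nu(x) = ((n + nu)/nu) * C_n^nu(x), since
# omega_n^nu = ((n + nu)/nu) * C_n^nu(1) and W_n^nu = C_n^nu / C_n^nu(1).
def poisson_kernel_series(r, x, nu, n_terms=200):
    n = np.arange(n_terms)
    return np.sum((n + nu) / nu * eval_gegenbauer(n, nu, x) * r**n)

r, x, nu = 0.5, 0.3, 0.7
closed_form = (1 - r**2) / (1 - 2*r*x + r**2)**(nu + 1)
# poisson_kernel_series(r, x, nu) agrees with closed_form
```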
Askey and Fitch [3] showed that for $x,y\in[-1,1]$, $r\in(-1,1)$,
$0\leq\nu<\lambda\leq\infty$, the series
$\sum_{n=0}^{\infty}{\omega}_{n}^{\nu}r^{n}W_{n}^{\lambda}(x)W_{n}^{\nu}(y)$
(6)
converges to a non-negative sum-function, which leads to a corresponding
probability measure $M^{\lambda}_{\nu}(x)$ satisfying
$W_{n}^{\lambda}(x)=\int_{-1}^{1}W_{n}^{\nu}(y)M^{\lambda}_{\nu}(x;dy),\quad
n=0,1,2,\ldots.$ (7)
Here (see [7]) we may take $0\leq\nu\leq\lambda\leq\infty$, $x\in[-1,1]$. Some
cases give Dirac laws: if $x=\pm 1$, $M^{\lambda}_{\nu}(\pm 1)=\delta_{\pm 1}$
(as $W_{n}^{\lambda}(\pm 1)=(\pm 1)^{n}$). If $\lambda=\nu$, then
$M^{\lambda}_{\lambda}(x)={\delta}_{x}$ (as there is no projection to be
done); so we may restrict to $\nu<\lambda$ as before. Now [7, Lemma 1] gives
the Abel-limit operation explicitly: for $x,y\in(-1,1)$, we may take $r=1$
here to get
$m^{\lambda}_{\nu}(x;y):=\sum_{n=0}^{\infty}{\omega}_{n}^{\nu}W_{n}^{\lambda}(x)W_{n}^{\nu}(y)\geq
0,$ (8)
a non-negative function in $L_{1}(G_{\nu})$, finite-valued unless $x=y$ and
$\nu<\lambda\leq\nu+1$. It is in fact the Radon-Nikodym derivative
$dM^{\lambda}_{\nu}(x;dy)/dG_{\nu}(dy)$:
$M^{\lambda}_{\nu}(x;dy)=G_{\nu}(dy)\cdot
m^{\lambda}_{\nu}(x;y)=G_{\nu}(dx)\cdot\sum_{n=0}^{\infty}{\omega}_{n}^{\nu}W_{n}^{\lambda}(x)W_{n}^{\nu}(y).$
(9)
Following [7], for $\lambda>\nu$ write $H_{\nu}^{\lambda}$ for the probability
measure of Beta type on $[0,1]$ given by the Sonine law
$H_{\nu}^{\lambda}(dx):=\frac{2\Gamma(\lambda+\frac{1}{2})}{\Gamma(\nu+\frac{1}{2})\Gamma(\lambda-\nu)}\cdot
x^{2\nu}(1-x^{2})^{\lambda-\nu-1}dx.$ (10)
This occurs in Sonine’s first finite integral for the Bessel function [22, p.
373]: for
$\displaystyle\Lambda_{\mu}(t)$
$\displaystyle:=\Gamma(\mu+1)J_{\mu}(t)(t/2)^{-\mu},$ (11)
$\displaystyle\Lambda_{\lambda-\frac{1}{2}}(t)$
$\displaystyle=\int_{0}^{1}\Lambda_{\nu-\frac{1}{2}}(ut)H_{\nu}^{\lambda}(du)$
(12)
(the drop by a half-integer in parameter here reflects the drop in dimension
in ${\mathbb{S}}^{d}\subset{\mathbb{R}}^{d+1}$; see §4 below).
For the product of $W_{n}$ terms in (8), we need Gegenbauer’s multiplication
theorem for the ultraspherical polynomials [22, p. 369],
$W_{n}^{\nu}(x)W_{n}^{\nu}(y)=\int_{-1}^{1}W_{n}^{\nu}(xy+\sigma\sqrt{1-x^{2}}\sqrt{1-y^{2}})G_{\nu-\frac{1}{2}}(d\sigma).$
(13)
To cope with the drop in index (dimension) in (8), we need the Feldheim-
Vilenkin integral [7, (2.11)], [1, p.315], [3],
$\displaystyle W_{n}^{\lambda}(x)=$
$\displaystyle\left[\frac{2\Gamma(\lambda+\frac{1}{2})}{\Gamma(\nu+\frac{1}{2})\Gamma(\lambda-\nu)}\right]\int_{0}^{1}u^{2\nu}(1-u^{2})^{\lambda-\nu-1}$
$\displaystyle\cdot[x^{2}-x^{2}u^{2}+u^{2}]^{\frac{1}{2}n}W_{n}^{\nu}\left(\frac{x}{\sqrt{x^{2}-x^{2}u^{2}+u^{2}}}\right)du.$
(14)
## 3 The result
We can now formulate our result.
###### Theorem 3.1
For $r\in(-1,1)$, the sum of the Askey–Fitch series (6) above is given by the
integral (15) below:
$\int_{0}^{1}H_{\nu}^{\lambda}(du)\int_{-1}^{1}G_{\nu-\frac{1}{2}}(dv)\,\frac{\left[1-r^{2}(x^{2}-x^{2}u^{2}+u^{2})\right]}{I^{\nu+1}},$
(15)
where $I$ is given by
$I:=1-2r\left(xy+uv\sqrt{1-x^{2}}\sqrt{1-y^{2}}\right)+r^{2}\left(x^{2}-x^{2}u^{2}+u^{2}\right).$
(16)
Moreover, this holds also for $r=1$ unless $\nu<\lambda\leq\nu+1$.
###### Proof 3.2.
We sum the series by reducing it to the generating function (5). There are two
steps: reduction of $\lambda$ to $\nu$ by the Feldheim-Vilenkin integral (14)
and reduction of two $W_{n}$ terms to one by Gegenbauer’s multiplication
theorem (13).
We follow [7]. As there, we may substitute for $W_{n}^{\lambda}(x)$ from (14)
into the series (8) and integrate termwise, rewriting (8) as
$\int_{0}^{1}H_{\nu}^{\lambda}(du)\sum_{n=0}^{\infty}{\omega}_{n}^{\nu}(r[x^{2}-x^{2}u^{2}+u^{2}]^{\frac{1}{2}})^{n}\cdot
W_{n}^{\nu}(y)W_{n}^{\nu}\left(\frac{x}{\sqrt{x^{2}-x^{2}u^{2}+u^{2}}}\right).$
(17)
We use Gegenbauer’s multiplication theorem (13) to replace the product of
$W_{n}^{\nu}$ factors in the above, at the cost of another integration over
$G_{\nu-\frac{1}{2}}(dv)$, by a single $W_{n}^{\nu}$ term, with argument
$\frac{xy}{\sqrt{x^{2}-x^{2}u^{2}+u^{2}}}+v\sqrt{1-y^{2}}\cdot\sqrt{1-\frac{x^{2}}{x^{2}-x^{2}u^{2}+u^{2}}}=\frac{xy+uv\sqrt{1-x^{2}}\sqrt{1-y^{2}}}{\sqrt{x^{2}-x^{2}u^{2}+u^{2}}}.$
(18)
The integrand is now of the form
$\sum\omega^{\nu}_{n}\tilde{r}^{n}W^{\nu}_{n}(\cdot)$ with
$\tilde{r}=r\sqrt{x^{2}-x^{2}u^{2}+u^{2}}$, and the result now follows from the
generating function (5).
This result completes and complements the work in [3] and [7] by displaying
the dependence on the higher index $\lambda$ in a structurally revealing way:
for simplicity, let $r=1$ so that
$I=1-2\left(xy+uv\sqrt{1-x^{2}}\sqrt{1-y^{2}}\right)+\left(x^{2}-x^{2}u^{2}+u^{2}\right),$
(19)
and (15) is given by
$\int_{0}^{1}H_{\nu}^{\lambda}(du)\int_{-1}^{1}G_{\nu-\frac{1}{2}}(dv)\frac{1-(x^{2}-x^{2}u^{2}+u^{2})}{I^{\nu+1}}.$
(20)
Using the definition of $H^{\lambda}_{\nu}$ and the probability measure
$G_{\nu-\frac{1}{2}}$ and simplifying, (15) becomes
$\displaystyle\frac{2}{\sqrt{\pi}}\frac{\Gamma(\lambda+\frac{1}{2})}{\Gamma(\nu)\Gamma(\lambda-\nu)}\int_{0}^{1}u^{2\nu}(1-u^{2})^{\lambda-\nu-1}\int_{-1}^{1}(1-v^{2})^{\nu-1}\left[\frac{1-(x^{2}-x^{2}u^{2}+u^{2})}{I^{\nu+1}}\right]\,dvdu.$
(21)
Note that the higher index $\lambda$ occurs only in the outer integral.
Moreover, the interaction between the indices in the outer integral occurs
only in the Gamma function $\Gamma(\lambda-\nu)$ and the power
$\lambda-\nu-1$ of $(1-u^{2})$.
Figure 1: Numerical evaluation of the integral (15) for $\lambda=3.0$, $\nu=0.5$.
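The identity of Theorem 3.1 can be checked numerically by comparing a partial sum of the series with the double integral. The sketch below uses the form of $I$ obtained by combining (13), (14) and (5) as in the proof, $I=1-2r(xy+uv\sqrt{1-x^{2}}\sqrt{1-y^{2}})+r^{2}(x^{2}-x^{2}u^{2}+u^{2})$, and the Sonine density with exponent $\lambda-\nu-1$; the parameter values are illustrative:

```python
import numpy as np
from math import gamma, sqrt, pi
from scipy.special import eval_gegenbauer
from scipy.integrate import dblquad

lam, nu, r, x, y = 2.5, 1.0, 0.6, 0.3, -0.4

# Series: omega_n^nu W_n^lambda(x) W_n^nu(y) r^n
#       = ((n+nu)/nu) * (C_n^lambda(x)/C_n^lambda(1)) * C_n^nu(y) * r^n.
n = np.arange(150)
series = np.sum((n + nu) / nu
                * eval_gegenbauer(n, lam, x) / eval_gegenbauer(n, lam, 1.0)
                * eval_gegenbauer(n, nu, y) * r**n)

cH = 2 * gamma(lam + 0.5) / (gamma(nu + 0.5) * gamma(lam - nu))  # H_nu^lam normaliser
cG = gamma(nu + 0.5) / (sqrt(pi) * gamma(nu))                    # G_{nu-1/2} normaliser
sx, sy = sqrt(1 - x**2), sqrt(1 - y**2)

def integrand(v, u):
    A = x**2 - x**2 * u**2 + u**2
    I = 1 - 2 * r * (x * y + u * v * sx * sy) + r**2 * A
    return (cH * u**(2 * nu) * (1 - u**2)**(lam - nu - 1)
            * cG * (1 - v**2)**(nu - 1)
            * (1 - r**2 * A) / I**(nu + 1))

integral, _ = dblquad(integrand, 0.0, 1.0, lambda u: -1.0, lambda u: 1.0)
# series and integral agree to quadrature accuracy
```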
## 4 Dimension walks
Write ${\cal P}_{\nu}$ for the class of functions $f$ on $[-1,1]$ which are
mixtures of $W_{n}^{\nu}$, i.e., of the form
$f(x)=\sum_{n=0}^{\infty}a_{n}W_{n}^{\nu}(x),\quad\sum a_{n}=1,a_{n}\geq 0$
(22)
for some probability law $a=\\{a_{n}\\}_{0}^{\infty}$ (the ultraspherical
series converges uniformly as $|W_{n}^{\nu}(x)|\leq 1$). The classes ${\cal
P}_{\nu}$ are decreasing in $\nu\in[0,\infty]$, and are continuous in $\nu$,
in that
$\bigcap\\{{\cal P}_{\mu}:0\leq\mu<\nu\\}={\cal P}_{\nu},\qquad\bigcup\\{{\cal
P}_{\mu}:\nu<\mu\leq\infty\\}={\cal P}_{\nu}$
([9, Th. 1]). While the parameters $\lambda$, $\nu>0$ are continuous here, the
half-integer values are particularly important. With ${\mathbb{S}}^{d}$ the
$d$-sphere – the unit sphere in Euclidean $(d+1)$-space ${\mathbb{R}}^{d+1}$,
a $d$-dimensional Riemannian manifold – the relevant index for the
ultraspherical polynomial is $\nu$, where
$\nu=\frac{1}{2}(d-1).$
With $\nu<\lambda$ as above, the higher dimension corresponding to $\lambda$
will be written $d^{\prime}$ (so $\lambda=\frac{1}{2}(d^{\prime}-1)$). Then,
as in [8], [7], the passage from $\lambda$ to $\nu<\lambda$ corresponds to
projection from the $d^{\prime}$-sphere to the $d$-sphere. The limiting case
$\nu=\infty$ gives $W_{n}^{\infty}(x)=x^{n}$, and ${\cal P}_{\infty}$ is the
class of probability generating functions, or the class of positive definite
functions on the unit sphere in Hilbert space ([9, Lemma 2], [20]).
Covariance functions on spheres are very valuable in applications to Planet
Earth (see §5). Operations which preserve positive-definiteness are useful in
the construction of new families of such covariance functions. Two such
operations, coined ‘walks on dimensions’, one changing the dimension by 1, and
the other by 2, were proposed for positive-definite functions on spheres by
Beatson and zu Castell [5], [6]. The one-step walks in [5] are based on the
Riemann-Liouville operators, but lack the highly desirable semi-group
property, in which passage from $\lambda$ to $\nu$ and then $\nu$ to $\mu$ is
the same as passage from $\lambda$ to $\mu$ directly.
###### Theorem 4.1.
For $f\in{\cal P}_{\nu}$ as in (22) and $\lambda>\nu$,
$\int_{-1}^{1}f(y)\,M^{\lambda}_{\nu}(x;dy)=\sum_{n=0}^{\infty}a_{n}W_{n}^{\lambda}(x)\in{\cal P}_{\lambda}.$ (23)
###### Proof 4.2.
$\displaystyle\int_{-1}^{1}f(y)M^{\lambda}_{\nu}(x;dy)$
$\displaystyle=\int_{-1}^{1}\sum_{n=0}^{\infty}a_{n}W_{n}^{\nu}(y)M^{\lambda}_{\nu}(x;dy)$
(24)
$\displaystyle=\sum_{n=0}^{\infty}a_{n}\int_{-1}^{1}W_{n}^{\nu}(y)M^{\lambda}_{\nu}(x;dy)$
(25)
$\displaystyle=\sum_{n=0}^{\infty}a_{n}W^{\lambda}_{n}(x)\in\mathcal{P}_{\lambda},$
(26)
interchange of the sum and integral in (25) being justified by the uniform
convergence of the Schoenberg expansion.
###### Corollary 4.3.
The operation of passing from $f(x)\in{\cal P}_{\nu}$ to
$\int_{-1}^{1}f(y)M_{\nu}^{\lambda}(x,dy)\in{\cal P}_{\lambda}$ in the theorem
has the semigroup property.
###### Proof 4.4.
The mixture coefficients $a_{n}$ are unchanged by this operation, and so
remain unchanged under further operations of the same type.
## 5 Complements
### 5.1 Hypergroups and symmetric spaces.
Hypergroups are ‘locally compact spaces with a group-like structure on which
the bounded measures convolve in a similar way to that on a locally compact
group’, to quote from the standard work on this important subject, [14, p.1].
The probabilistic setting of random walks on spheres [8, p.196-197] that
inspired both [7] and this paper, its sequel, is in hypergroup language that
of the Bingham (or Bingham-Gegenbauer) hypergroup. This in turn was inspired
by Kingman’s work on random walks with spherical symmetry [18], which gives
the Kingman (or Kingman-Bessel) hypergroup. The theory for spheres and for
spherical symmetry give the prototypical examples of symmetric spaces of rank
one of compact type (constant positive curvature) and of Euclidean type (zero
curvature); these are complemented by the case of constant negative curvature,
the hyperbolic or Zeuner hypergroups [24]. For background on symmetric spaces
we refer to Helgason [17], for spaces of constant curvature to Wolf [23], and
for compact symmetric spaces to Askey and Bingham [2].
We note that the Kingman situation (Euclidean space with spherical symmetry)
may be recovered from the spherical one here by letting the radius of the
sphere tend to infinity. The Bessel functions in the Kingman theory arise from
radialisation of the Fourier transform in Euclidean space under spherical
symmetry [16, II.7].
### 5.2 Gaussian processes, path properties, Tauberian theorems.
The positive definite functions in the classes ${\cal P_{\nu}}$ of §4 serve as
covariances of Gaussian processes parametrised by spheres. Their distributions
are determined by the sequence $a=\\{a_{n}\\}$ (the angular power spectrum) of
the Schoenberg expansion coefficients above. In particular, the rate of decay
of the $a_{n}$ governs the path properties: the faster the decay, the smoother
the paths. For details, see [13]. Crucial here is Malyarenko’s theorem [19,
Ch. 4]. This rests on a Tauberian theorem of the first author [10], which in
turn derives from work of Askey and Wainger [4]. Here it is necessary to move
from the one-parameter family of ultraspherical polynomials $W_{n}^{\nu}$ to
the two-parameter family of Jacobi polynomials $J_{n}^{\alpha,\beta}$
containing it ([1, Ch. 6], [21, Ch. IV]).
### 5.3 Sphere cross line.
The motivation for much of the interest in positive definite functions on
spheres derives from its applications in geostatistics. Here one has both
spatial dependence and temporal evolution, and so one is dealing with
geotemporal processes. For background here, see e.g. [11], [12].
## Postscript
To close, the first author takes pleasure in noting the half-century between
Part I [7] (which derives from his own PhD of 1969) and the present Part II
(which derives from the second author’s PhD of 2020). We both take pleasure in
dedicating the paper to the memory of Dick Askey, whose influence pervades it.
Dick was a famous expert on special functions, but was interested in their
applications, including those to probability. When [2] was written, he used to
dine out by saying, with tongue in cheek, “I’ve just written a paper with
Bingham on Gaussian processes – whatever they are.”
## References
* [1] G. E. Andrews, R. Askey, and R. Roy. Special Functions. Cambridge University Press, 1999.
* [2] R. Askey and N. H. Bingham. Gaussian processes on compact symmetric spaces. Z. Wahrschein., 37:127 – 143, 1976.
* [3] R. Askey and J. Fitch. Integral representations for Jacobi polynomials, and some applications. J. Math. Anal. Appl., 25:411–437, 1969.
* [4] R. Askey and S. Wainger. On the behaviour of special classes of ultraspherical polynomials i & ii. J. Anal. Math., 15:193 – 244 and 245 – 262, 1965.
* [5] R. K. Beatson and W. Zu Castell. One-step recurrences for stationary random fields on the sphere. SIGMA, 12:19p, 2016.
* [6] R. K. Beatson and W. Zu Castell. Dimension hopping and families of strictly positive definite zonal basis functions on spheres. J. Approximation Theory, 221:22 – 37, 2017.
* [7] N. H. Bingham. Integral representations for ultraspherical polynomials. J. London Math. Soc., 6:1 – 11, 1972.
* [8] N. H. Bingham. Random walks on spheres. Z. Wahrscheinlichkeitstheorie verw. Geb., 22:169 – 192, 1972.
* [9] N. H. Bingham. Positive definite functions on spheres. Proc. Cambridge Phil. Soc., 73:145–156, 1973.
* [10] N. H. Bingham. Tauberian theorems for Jacobi series. Proc. London Math. Soc., 36:285 – 309, 1978.
* [11] N. H. Bingham, Aleksandar Mijatović, and Tasmin L. Symons. Brownian manifolds, negative type and geo-temporal covariances. Communications on Stochastic Analysis, 10:421 – 432, 2016.
* [12] N. H. Bingham and Tasmin L. Symons. Dimension walks on the sphere cross line. Statist. Probab. Lett., 147:12 – 17, 2019.
* [13] N. H. Bingham and Tasmin L. Symons. Gaussian random fields on the sphere and sphere cross line. Stoc. Proc. and Appl., 2019. https://doi.org/10.1016/j.spa.2019.08.007. (To appear in Larry Shepp memorial issue.)
* [14] W. R. Bloom and H. Heyer. Harmonic analysis of probability measures on hypergroups, volume 20 of De Gruyter Studies in Math. Walter de Gruyter, 1994.
* [15] S. Bochner. Positive zonal functions on spheres. Proc. nat. Acad. Sci. (USA), 40:1141–1147, 1954.
* [16] S. Bochner and K. Chandrasekharan. Fourier transforms. Annals of Mathematics Studies 19. Princeton University Press, 1949.
* [17] S. Helgason. Differential geometry, Lie groups and symmetric spaces. Academic Press, 1978.
* [18] J. F. C. Kingman. Random walks with spherical symmetry. Acta. Math., 109:11 – 53, 1963.
* [19] A. A. Malyarenko. Invariant random fields on spaces with a group action. Springer, 2013.
* [20] I. J. Schoenberg. Positive definite functions on spheres. Duke Math. J., 9:96 – 108, 1942.
* [21] G. Szegő. Orthogonal polynomials. American Mathematical Society, Colloquium Publications Volume XXIII, 1939.
* [22] G. N. Watson. A treatise on the theory of Bessel functions. Cambridge University Press, 2nd edition, 1962.
* [23] J. A. Wolf. Spaces of constant curvature. Amer. Math. Soc., 6th edition, 2011.
* [24] H. Zeuner. On hyperbolic hypergroups. In Probability measures on groups, VIII (Oberwolfach, 1985), Lecture Notes in Math., pages 216–224. Springer, 1986.
N. H. Bingham
Department of Mathematics
Imperial College London
South Kensington Campus
London, SW7 1AZ
UK Tasmin L. Symons
Telethon Kids Institute
15 Hospital Avenue
Perth, WA 6009
Australia
|
# Non-intrusive reduced order modeling of poroelasticity of heterogeneous
media based on a discontinuous Galerkin approximation
Teeratorn Kadeethum
Sibley School of Mechanical and Aerospace Engineering
Cornell University, Ithaca, New York, USA
<EMAIL_ADDRESS>
&Francesco Ballarin
mathLab, Mathematics Area
SISSA, Italy
<EMAIL_ADDRESS>
&Nikolaos Bouklas
Sibley School of Mechanical and Aerospace Engineering
Cornell University, Ithaca, New York, USA
<EMAIL_ADDRESS>
###### Abstract
A simulation tool capable of speeding up the calculation for linear
poroelasticity problems in heterogeneous porous media is of large practical
interest for engineers, in particular, to effectively perform sensitivity
analyses, uncertainty quantification, optimization, or control operations on
the fluid pressure and bulk deformation fields. Towards this goal, we present
here a non-intrusive model reduction framework using proper orthogonal
decomposition (POD) and neural networks, based on the usual offline-online
paradigm. As the conductivity of porous media can be highly heterogeneous and
span several orders of magnitude, we utilize the interior penalty
discontinuous Galerkin (DG) method as a full order solver to handle
discontinuity and ensure local mass conservation during the offline stage. We
then use POD as a data compression tool and compare the nested POD technique,
in which time and uncertain parameter domains are compressed consecutively, to
the classical POD method in which all domains are compressed simultaneously.
The neural networks are finally trained to map the set of uncertain
parameters, which could correspond to material properties, boundary
conditions, or geometric characteristics, to the collection of coefficients
calculated from an $L^{2}$ projection over the reduced basis. We then perform
a non-intrusive evaluation of the neural networks to obtain coefficients
corresponding to new values of the uncertain parameters during the online
stage. We show that our framework provides reasonable approximations of the DG
solution, but it is significantly faster. Moreover, the reduced order
framework can capture sharp discontinuities of both displacement and pressure
fields resulting from the heterogeneity in the media conductivity, which is
generally challenging for intrusive reduced order methods. The sources of
error are presented, showing that the nested POD technique is computationally
advantageous and still provides comparable accuracy to the classical POD
method. We also explore the effect of different choices of the hyperparameters
of the neural network on the framework performance.
Keywords: poroelasticity $\cdot$ reduced order modeling $\cdot$ neural
networks $\cdot$ discontinuous Galerkin $\cdot$ heterogeneity $\cdot$ finite
element
## 1 Introduction
Coupled hydro-mechanical (HM) processes in porous media, in which fluid flow
and solid deformation tightly interact, are involved in various problems
ranging from groundwater and contaminant hydrology to biomedical engineering
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Porous media are generally anisotropic
and heterogeneous. Additionally, the sharp contrast of phases of the porous
microstructure leads to discontinuous material properties that can span
several orders of magnitude [12, 13, 14, 15, 16, 1, 17, 18]. The sharp
discontinuities of the strongly heterogeneous microstructure significantly
influence the time-dependent solid-fluid interactions. For instance, spatially
dependent volumetric deformation of a porous medium caused by fluid pressure
changes in pore spaces may impact the hydraulic storability and permeability
of porous material, which enhances the complexity of the fluid flow field [19,
20, 9, 21]. Therefore, accurate numerical modeling of coupled HM processes
requires a computational method that can robustly handle substantial
heterogeneity in porous media and ensure local mass conservation.
Solution of coupled HM processes can be handled analytically for simple
geometries [22, 23]. However, for complex geometries and non-homogeneous
boundary conditions, we could use numerical approximations such as finite
volume discretization [24, 25, 26] and finite element methods [27, 28, 29, 30,
31, 32, 33, 34]. Recently, the possibility of solving linear elasticity
problems and coupled HM processes using physics-informed neural networks is
also presented in [35, 36, 37, 38]. In the context of finite element methods,
the coupled HM processes can be cast as two-, three-, four-, or five-field
formulations [30, 39, 40, 41, 42]. In this study, we use the two-field
formulation, in which fluid pressure and displacement fields are the primary
variables, and in the following adapt the discontinuous Galerkin (DG) finite element
solver used in [43, 44, 45, 13, 6]. The DG solver is selected because it can
handle strong heterogeneity of permeability, and it is locally mass
conservative by construction [46, 47, 48, 27].
The DG solver, also referred to as the full order model (FOM) in the following,
requires substantial computational resources [49, 50]. Hence, it is not
directly suitable to handle large-scale inverse problems, optimization, or
control, in which an extensive set of simulations are needed to be explored
[51, 52, 53, 49, 50]. Consequently, reduced order modeling (ROM) is a
framework that can be employed towards handling large-scale inverse problems,
optimization, or control, since ROM aims to produce a low-dimensional
representation of FOM, which requires much less computational time than the
FOM while maintaining computational accuracy [54, 55]. The ROM methodology is
applicable to a parametrized problem (i.e., to repeated evaluations of a
problem depending on a parameter $\bm{\mu}$, which could correspond to
physical properties, geometric characteristics, or boundary conditions [51,
56, 50]). ROM is generally composed of two stages, the offline and online
stages [50]. The offline stage begins with the initialization of uncertain
input parameters, which we call a training set. Then the FOM is solved
corresponding to each member in the training set, and in the following, we
will refer to the corresponding solutions as snapshots. Data compression
techniques are then used to compress FOM snapshots to produce basis functions
that span a reduced space of very low dimensionality but still guarantee
accurate reproduction of the snapshots [57, 58, 52]. ROM can then be solved
during the online stage for any new value of $\bm{\mu}$ by seeking an
approximated solution in the reduced space.
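The offline compression step can be sketched as a truncated SVD of the snapshot matrix. This is a minimal illustration; the energy-based truncation criterion is one common choice, not necessarily the one used here:

```python
import numpy as np

def pod_basis(snapshots, tol=1e-8):
    """POD via thin SVD.

    snapshots: (n_dof, n_snapshots) matrix whose columns are FOM solutions
    for different parameter samples; tol: allowed fraction of lost
    singular-value energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    rank = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :rank]  # reduced basis spanning the dominant modes
```

During the online stage, a new solution is sought as a combination of these basis columns, with the coefficients here supplied by the trained neural network rather than by projecting the governing equations.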
This study focuses on utilizing a non-intrusive ROM approach [59, 60, 50] to
tackle the solution of strongly heterogeneous linear poroelasticity problems.
This approach has not yet been extended to coupled HM processes in porous
media. The non-intrusive approach is especially attractive because it does not
require any cumbersome modifications of FOM source codes, and it can capture
sharp changes present in the FOM approximation, which is a challenge in
classical intrusive ROM models [50]. This characteristic alleviates several
code compatibility complications since many of the source codes used to build
FOMs may not be available or easily accessible, especially in legacy codes or
commercial software [61]. Hence, the non-intrusive variant of ROM can provide
more flexibility in coupling to existing FOM platforms [62, 63, 64]. The non-
intrusive ROM framework presented in this manuscript is based on the
development of [63], which has been adapted and applied to a wide range of
problems [65, 66, 67, 68, 69, 70]. In this approach, a set of reduced basis
functions are extracted from high-fidelity FOM solutions through proper
orthogonal decomposition (POD). The coefficients of the reduced basis
functions are in turn approximated by artificial neural networks (ANN) using
$\bm{\mu}$ as an input. This work has three main novelties:
1. 1.
The non-intrusive ROM framework has been extended to the coupled HM problem,
allowing the exploration of cases where media permeability is strongly
heterogeneous.
2. 2.
We illustrate that the non-intrusive ROM solution can mimic the sharp
discontinuities of both displacement and pressure fields approximated by the
DG solver. This problem is challenging for classical intrusive ROM approaches.
3. 3.
We distinguish between the error due to the truncation of reduced bases and
the error introduced by the prediction of ANN. Besides, we illustrate that the
nested POD technique, in which time and uncertain parameter domains are
compressed consecutively, could provide comparable accuracy to the classical
POD method, in which all domains are compressed simultaneously.
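The two compression strategies contrasted in the third point can be sketched as follows. This is a simplified illustration: the truncation criteria and any singular-value weighting of the first-stage modes are our assumptions:

```python
import numpy as np

def nested_pod(snapshots_per_param, n_time_modes, n_final_modes):
    """Stage 1: compress each parameter's time trajectory;
    stage 2: compress the collected temporal modes across parameters."""
    stage1 = []
    for S in snapshots_per_param:            # S: (n_dof, n_time_steps)
        U, _, _ = np.linalg.svd(S, full_matrices=False)
        stage1.append(U[:, :n_time_modes])   # temporal compression
    U, _, _ = np.linalg.svd(np.hstack(stage1), full_matrices=False)
    return U[:, :n_final_modes]              # parameter compression

def classical_pod(snapshots_per_param, n_final_modes):
    """Compress time and parameter domains simultaneously."""
    U, _, _ = np.linalg.svd(np.hstack(snapshots_per_param), full_matrices=False)
    return U[:, :n_final_modes]
```

The nested variant factorizes many small SVDs plus one moderate one instead of a single SVD of the full snapshot matrix, which is the source of its computational advantage.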
The rest of the manuscript is organized as follows. We begin with the model
description and corresponding governing equations (section 2) and follow with
the presentation of the discretization of the coupled HM problem and the DG
solver structure (section 3). Subsequently, the ROM framework and its
components are discussed (section 4). We validate our developed ROM framework
using a series of benchmark problems (section 5). We then analyze the ROM
performance concerning different hyperparameters and data compression
strategies. We provide discussions on the sources of error, the ROM
performance, and circumstances in which the ROM is more suitable than the FOM
(section 6).
## 2 Governing equations
This section briefly describes governing equations following Biot’s
formulation for linear poroelasticity [71, 72]. Let
$\Omega\subset\mathbb{R}^{d}$ ($d\in\\{1,2,3\\}$) denote the physical domain
and $\partial\Omega$ denote its boundary. The time domain is denoted by
$\mathbb{T}=\left(0,\mathrm{T}\right]$ with $\mathrm{T}>0$. Primary variables
used in this paper are $p:\Omega\times\mathbb{T}\to\mathbb{R}$, which is a
scalar-valued fluid pressure ($\mathrm{P}\mathrm{a}$) and
$\bm{u}:\Omega\times\mathbb{T}\to\mathbb{R}^{d}$, which is a vector-valued
displacement ($\mathrm{m}$).
Although linear poroelasticity theory may oversimplify deformations in soft
porous materials such as soils [73, 74, 75, 76], this description is
reasonably good for stiff materials such as rocks, which are the focus of this
work. The theory of linear poroelasticity describes the HM problem through two
coupled governing equations, namely the linear momentum and mass balance
equations.
The infinitesimal strain tensor $\bm{\varepsilon}$ is defined as
$\bm{\varepsilon}(\bm{u}):=\frac{1}{2}\left(\nabla\bm{u}+(\nabla\bm{u})^{\intercal}\right),$
(1)
The Lamé constants $\lambda_{l}$ and $\mu_{l}$, which appear in the
constitutive relationship (5) below, are related to the bulk modulus $K$ and
the Poisson ratio $\nu$ of the porous solid as
$\lambda_{l}=\frac{3K\nu}{1+\nu},\quad\text{ and
}\quad\mu_{l}=\frac{3K(1-2\nu)}{2(1+\nu)}.$ (2)
Further, $\bm{\sigma}$ is the total Cauchy stress tensor, which may be related
to the effective Cauchy stress tensor $\bm{\sigma}^{\prime}$ and the pore
pressure $p$ as
$\bm{\sigma}(\bm{u},p)=\bm{\sigma}^{\prime}(\bm{u})-\alpha p\mathbf{I}.$ (3)
Here, $\mathbf{I}$ is the second-order identity tensor, and $\alpha$ is the
Biot coefficient defined as [77]:
$\alpha=1-\frac{K}{K_{{s}}},$ (4)
with $K$ and $K_{s}$ being the bulk moduli of the bulk porous material and the
solid matrix, respectively. According to linear elasticity, the effective
stress tensor relates to the infinitesimal strain tensor, and therefore to the
displacement, through the following constitutive relationship, which can be
written as
$\bm{\sigma}^{\prime}(\bm{u})=\lambda_{l}\operatorname{tr}(\bm{\varepsilon}(\bm{u}))\mathbf{I}+2\mu_{l}\bm{\varepsilon}{(\bm{u})}.$
(5)
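To make these parameter relationships concrete, the constitutive relations of Eqs. (1)–(5) can be sketched in a few lines of NumPy (the function names are ours; this is an illustration, not part of the solver):

```python
import numpy as np

def lame_constants(K, nu):
    """Lame constants from bulk modulus K and Poisson ratio nu, Eq. (2)."""
    lam = 3.0 * K * nu / (1.0 + nu)
    mu = 3.0 * K * (1.0 - 2.0 * nu) / (2.0 * (1.0 + nu))
    return lam, mu

def biot_coefficient(K, K_s):
    """Biot coefficient alpha = 1 - K/K_s, Eq. (4)."""
    return 1.0 - K / K_s

def effective_stress(grad_u, lam, mu):
    """Effective Cauchy stress from a displacement gradient, Eqs. (1) and (5)."""
    eps = 0.5 * (grad_u + grad_u.T)              # infinitesimal strain, Eq. (1)
    I = np.eye(grad_u.shape[0])
    return lam * np.trace(eps) * I + 2.0 * mu * eps
```

Note that for $\nu=0.25$ the two Lamé constants coincide, and $K=\lambda_{l}+\tfrac{2}{3}\mu_{l}$ is recovered.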
Under quasi-static conditions, the linear momentum balance equation can be
written as
$\nabla\cdot\bm{\sigma}(\bm{u},p)+\bm{f}=\bm{0},$ (6)
where $\bm{f}$ is the body force term defined as
$\rho\phi\mathbf{g}+\rho_{s}(1-\phi)\mathbf{g}$, where $\rho$ is the fluid
density, $\rho_{s}$ is the solid density, $\phi$ is the porosity, and
$\mathbf{g}$ is the gravitational acceleration vector. The gravitational force
will be neglected in this study, but the body force term will be kept in the
succeeding formulations for a more general case.
For this solid deformation problem, the domain boundary $\partial\Omega$ is
assumed to be suitably decomposed into displacement and traction boundaries,
$\partial\Omega_{u}$ and $\partial\Omega_{t}$, respectively. Then the linear
momentum balance equation is supplemented by the boundary and initial
conditions as:
$\begin{split}\nabla\cdot\bm{\sigma}^{\prime}(\bm{u})-\alpha\nabla\cdot\left(p\mathbf{I}\right)+\bm{f}=\bm{0}&\text{
\> in \> }\Omega\times\mathbb{T},\\\ \bm{u}=\bm{u}_{D}&\text{ \> on \>
}\partial\Omega_{u}\times\mathbb{T},\\\
\bm{\sigma}{(\bm{u})}\cdot\mathbf{n}=\bm{t_{D}}&\text{ \> on \>
}\partial\Omega_{t}\times\mathbb{T},\\\ \bm{u}=\bm{u}_{0}&\text{ \> in \>
}\Omega\text{ at }t=0,\end{split}$ (7)
where $\bm{u}_{D}$ and ${\bm{t}_{D}}$ are prescribed displacement and traction
values at the boundaries, respectively, and $\mathbf{n}$ is the unit normal
vector to the boundary.
Next, the mass balance equation is given as [78, 79]:
$\frac{1}{M}\dfrac{\partial p}{\partial
t}+\alpha\frac{\partial{\varepsilon_{v}}}{\partial
t}+\nabla\cdot\bm{q}=g\text{ in }\Omega\times\mathbb{T},$ (8)
where
$\frac{1}{M}=\left(\phi c_{f}+\dfrac{\alpha-\phi}{K_{s}}\right)$ (9)
is the Biot modulus. Here, $c_{f}$ is the fluid compressibility,
${\varepsilon_{v}}:=\operatorname{tr}(\bm{\varepsilon})=\nabla\cdot\bm{u}$
is the volumetric strain, and $g$ is a sink/source term. The superficial
velocity vector $\bm{q}$ is given by Darcy’s law as
$\bm{q}=-\bm{\kappa}(\nabla p-\rho\mathbf{g}).$ (10)
Here, $\bm{\kappa}=\frac{\bm{k}}{\mu_{f}}$ is the porous media conductivity,
and $\mu_{f}$ is the fluid viscosity. Again, the gravitational force,
$\rho\mathbf{g}$, will be neglected in this work, without loss of generality.
In addition, $\bm{k}$ is the matrix permeability tensor defined as
$\bm{k}:=\begin{cases}\left[\begin{array}[]{lll}{{k}_{xx}}&{{k}_{xy}}&{{k}_{xz}}\\\
{{k}_{yx}}&{{k}_{yy}}&{{k}_{yz}}\\\
{{k}_{zx}}&{{k}_{zy}}&{k}_{zz}\end{array}\right]&\text{if}\ d=3,\\\ \\\
\left[\begin{array}[]{ll}{{k}_{xx}}&{{k}_{xy}}\\\ {{k}_{yx}}&{{k}_{yy}}\\\
\end{array}\right]&\text{if}\ d=2,\\\ \\\ \ {k}_{xx}&\text{if}\
d=1,\end{cases}$ (11)
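As a small numerical illustration of Eqs. (9)–(11) (again with hypothetical function names, and gravity neglected as stated above):

```python
import numpy as np

def inverse_biot_modulus(phi, c_f, alpha, K_s):
    """Inverse Biot modulus 1/M, Eq. (9)."""
    return phi * c_f + (alpha - phi) / K_s

def darcy_flux(k, mu_f, grad_p):
    """Superficial velocity q = -(k / mu_f) grad(p), Eqs. (10)-(11)."""
    kappa = np.asarray(k) / mu_f                 # conductivity tensor
    return -kappa @ np.asarray(grad_p)
```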
For the fluid flow problem, the domain boundary $\partial\Omega$ is also
suitably decomposed into the pressure and flux boundaries,
$\partial\Omega_{p}$ and $\partial\Omega_{q}$, respectively. In what follows,
we apply the fixed stress split scheme [79, 80], assuming
$\left(\sigma_{v}-\sigma_{v,0}\right)+\alpha\left(p-p_{0}\right)=K\varepsilon_{v}$.
Then we write the fluid flow problem with boundary and initial conditions as
$\begin{split}\left(\frac{1}{M}+\frac{\alpha^{2}}{K}\right)\frac{\partial
p}{\partial t}+\frac{\alpha}{K}\frac{\partial\sigma_{v}}{\partial
t}-\nabla\cdot\left(\bm{\kappa}\nabla p\right)=g&\text{ \> in \> }\Omega\times\mathbb{T},\\\
p=p_{D}&\text{ \> on \> }\partial\Omega_{p}\times\mathbb{T},\\\
-\bm{\kappa}\nabla p\cdot\mathbf{n}=q_{D}&\text{ \> on
\>}\partial\Omega_{q}\times\mathbb{T},\\\ p=p_{0}&\text{ \> in \>
}\Omega\text{ at }t=0,\end{split}$ (12)
where $\sigma_{v}:=\frac{1}{3}\operatorname{tr}(\bm{\sigma})$ is the
volumetric stress, and $p_{D}$ and $q_{D}$ are the given boundary pressure and
flux, respectively.
## 3 Finite element discretization and solution
We use the discontinuous Galerkin solver from [43, 44, 45, 13, 6], and we
briefly revisit the discretization in this section. We begin by introducing
the necessary notation. Let $\mathcal{T}_{h}$ be a shape-regular triangulation
obtained by a partition of $\Omega$ into $d$-simplices (segments in $d=1$,
triangles in $d=2$, tetrahedra in $d=3$). For each cell $T\in\mathcal{T}_{h}$,
we denote by $h_{T}$ the diameter of $T$, and we set
$h=\max_{T\in\mathcal{T}_{h}}h_{T}$ and
$h_{l}=\min_{T\in\mathcal{T}_{h}}h_{T}$. We further denote by
$\mathcal{E}_{h}$ the set of all facets (i.e., $(d-1)$-dimensional entities
connected to at least one $T\in\mathcal{T}_{h}$) and by $\mathcal{E}_{h}^{I}$
and $\mathcal{E}_{h}^{\partial}$ the collection of all interior and boundary
facets, respectively. The boundary set $\mathcal{E}_{h}^{\partial}$ is
decomposed into two disjoint subsets associated with the Dirichlet boundary
facets, and the Neumann boundary facets for each of Eqs. (7) and (12). In
particular, $\mathcal{E}_{h}^{D,u}$ and $\mathcal{E}_{h}^{N,u}$ correspond to
the facets on $\partial\Omega_{u}$ and $\partial\Omega_{t}$, respectively,
for Eq. (7). On the other hand, for Eq. (12), $\mathcal{E}_{h}^{D,m}$ and
$\mathcal{E}_{h}^{N,m}$ correspond to $\partial\Omega_{p}$ and
$\partial\Omega_{q}$, respectively.
We also define
$e=\partial T^{+}\cap\partial T^{-},\ \ e\in\mathcal{E}_{h}^{I},$
where $T^{+}$ and $T^{-}$ are the two neighboring elements to $e$. We denote
by $h_{e}$ the characteristic length of $e$ calculated as
$h_{e}:=\frac{\operatorname{meas}\left(T^{+}\right)+\operatorname{meas}\left(T^{-}\right)}{2\operatorname{meas}(e)},$
(13)
where, depending on the argument, meas($\cdot$) represents the measure of a cell or
of a facet.
Let $\mathbf{n}^{+}$ and $\mathbf{n}^{-}$ be the outward unit normal vectors
to $\partial T^{+}$ and $\partial T^{-}$, respectively. For any given scalar
function $\zeta:\mathcal{T}_{h}\to\mathbb{R}$ and vector function
$\bm{\tau}:\mathcal{T}_{h}\to\mathbb{R}^{d}$, we denote by $\zeta^{\pm}$ and
$\bm{\tau}^{\pm}$ the restrictions of $\zeta$ and $\bm{\tau}$ to $T^{\pm}$,
respectively. Subsequently, we define the weighted average operator as
$\\{\zeta\\}_{\delta
e}=\delta_{e}\zeta^{+}+\left(1-\delta_{e}\right)\zeta^{-},\ \text{ on
}e\in\mathcal{E}_{h}^{I},$ (14)
and
$\\{\bm{\tau}\\}_{\delta
e}=\delta_{e}\bm{\tau}^{+}+\left(1-\delta_{e}\right)\bm{\tau}^{-},\ \text{ on
}e\in\mathcal{E}_{h}^{I},$ (15)
where $\delta_{e}$ is calculated by [81, 82]:
$\delta_{e}:=\frac{{k}^{-}_{e}}{{k}^{+}_{e}+{k}^{-}_{e}}.$ (16)
Here,
${k}^{+}_{e}:=\left(\mathbf{n}^{+}\right)^{\intercal}\cdot\bm{k}^{+}\mathbf{n}^{+},\
\text{ and
}{k}^{-}_{e}:=\left(\mathbf{n}^{-}\right)^{\intercal}\cdot\bm{k}^{-}\mathbf{n}^{-},$
(17)
Moreover, ${k_{e}}$ is the harmonic average of ${k}^{+}_{e}$ and ${k}^{-}_{e}$, which
reads
${k_{e}}:=\frac{2{k}^{+}_{e}{k}^{-}_{e}}{{k}^{+}_{e}+{k}^{-}_{e}},$ (18)
and $\bm{k}$ is defined as in Eq. (11). The jump across an interior edge will
be defined as
$\displaystyle\left[\\!\left[\zeta\right]\\!\right]=\zeta^{+}\mathbf{n}^{+}+\zeta^{-}\mathbf{n}^{-}\quad\mbox{
and
}\quad\left[\\!\left[{\bm{\tau}}\right]\\!\right]=\bm{\tau}^{+}\cdot\mathbf{n}^{+}+\bm{\tau}^{-}\cdot\mathbf{n}^{-}\quad\mbox{on
}e\in\mathcal{E}_{h}^{I}.$
Finally, for $e\in\mathcal{E}^{\partial}_{h}$, we set
$\left\\{\zeta\right\\}_{\delta_{e}}:=\zeta$ and
$\left\\{\bm{\tau}\right\\}_{\delta_{e}}:=\bm{\tau}$ for what concerns the
definition of the weighted average operator, and
$\left[\\!\left[\zeta\right]\\!\right]:=\zeta\mathbf{n}$ and
$\left[\\!\left[\bm{\tau}\right]\\!\right]:=\bm{\tau}\cdot\mathbf{n}$ as
definition of the jump operator.
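A useful sanity check on Eqs. (13), (16) and (18) is that the weights $\delta_e$ are precisely those for which the weighted average of the normal permeabilities equals their harmonic mean, i.e. $\delta_e k^{+}_{e}+(1-\delta_e)k^{-}_{e}=k_e$. A minimal sketch (function names are ours):

```python
def characteristic_length(meas_T_plus, meas_T_minus, meas_e):
    """Characteristic facet length h_e, Eq. (13)."""
    return (meas_T_plus + meas_T_minus) / (2.0 * meas_e)

def facet_weight(k_plus, k_minus):
    """Weight delta_e for the weighted average, Eq. (16)."""
    return k_minus / (k_plus + k_minus)

def harmonic_average(k_plus, k_minus):
    """Harmonic average k_e of the normal permeabilities, Eq. (18)."""
    return 2.0 * k_plus * k_minus / (k_plus + k_minus)
```

This property is what makes the weighted interior penalty robust for discontinuous permeability fields.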
### 3.1 Temporal Discretization
We adapt the Biot system solver from [6, 13]. The time domain,
$\mathbb{T}=\left(0,\mathrm{T}\right]$, is partitioned into $N^{t}$ open
intervals such that, $0=:t^{0}<t^{1}<\cdots<t^{N^{t}}:=\mathrm{T}$. The length
of the interval, $\Delta t^{n}$, is defined as $\Delta t^{n}=t^{n}-t^{n-1}$
where $n$ represents the current time step. $\Delta t^{0}$ is an initial
$\Delta t$, which is defined as $t^{1}-t^{0}$, while the other time steps,
$\Delta t^{n}$, are calculated as follows
$\Delta t^{n}:=\begin{cases}\Delta t_{mult}\times\Delta t^{n-1}&\text{if}\
\Delta t_{mult}\times\Delta t^{n-1}\leq\Delta t_{max}\\\ \Delta
t_{max}&\text{otherwise},\end{cases}$ (19)
where $\Delta t_{mult}$ is a positive constant multiplier, and $\Delta
t_{max}$ is the maximum allowable time step. Then, let $\varphi(\cdot,t)$ be a
scalar function and $\varphi^{n}$ be its approximation at time $t^{n}$, i.e.
$\varphi^{n}\approx\varphi\left(t^{n}\right)$. We employ the following
backward differentiation formula for time discretization of all primary
variables [83, 84, 85, 86]
$\mathrm{BDF}_{1}\left(\varphi^{n}\right):=\frac{1}{\Delta
t^{n}}\left(\varphi^{n}-\varphi^{n-1}\right).$ (20)
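The time-step growth rule of Eq. (19) and the BDF$_1$ operator of Eq. (20) can be sketched as follows (we additionally clip the final step at $\mathrm{T}$, a choice not specified above):

```python
def time_steps(T, dt0, dt_mult, dt_max):
    """Partition 0 = t^0 < ... < t^{N_t} = T with the growth rule of Eq. (19)."""
    ts, dt = [0.0], dt0
    while ts[-1] < T:
        ts.append(min(ts[-1] + dt, T))       # clip the last step at T (our choice)
        dt = min(dt * dt_mult, dt_max)       # Eq. (19)
    return ts

def bdf1(phi_n, phi_prev, dt_n):
    """First-order backward differentiation formula, Eq. (20)."""
    return (phi_n - phi_prev) / dt_n
```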
### 3.2 Full Discretization
Following [48, 6, 13, 43], in this study, the displacement field is
approximated by the classical continuous Galerkin (CG) method, and the
fluid pressure field is discretized by the discontinuous Galerkin (DG) method to
ensure local mass conservation and provide a better flux approximation [6,
13].
We begin with defining the finite element space for the continuous Galerkin
(CG) method for a vector-valued function
$\mathcal{U}_{h}^{\mathrm{CG}_{k}}\left(\mathcal{T}_{h}\right):=\left\\{\bm{\psi_{u}}\in\mathbb{C}^{0}(\Omega{;\mathbb{R}^{d}}):\left.\bm{\psi_{u}}\right|_{T}\in\mathbb{Q}_{k}(T{;\mathbb{R}^{d}}),\forall
T\in\mathcal{T}_{h}\right\\},$ (21)
where $k$ indicates the order of polynomial that can be approximated in this
space, $\mathbb{C}^{0}(\Omega{;\mathbb{R}^{d}})$ denotes the space of vector-
valued piece-wise continuous polynomials, $\mathbb{Q}_{k}(T{;\mathbb{R}^{d}})$
is the space of polynomials of degree at most $k$ over each element $T$, and
$\mathbb{R}$ is the set of real numbers. We will denote in the following by
$N_{h}^{u}$ the dimension of the space
$\mathcal{U}_{h}^{\mathrm{CG}_{k}}\left(\mathcal{T}_{h}\right)$, i.e. the
number of degrees of freedom for the displacement approximation.
Next, the DG space for scalar-valued functions is defined as
$\mathcal{P}_{h}^{\mathrm{DG}_{k}}\left(\mathcal{T}_{h}\right):=\left\\{\psi_{p}\in
L^{2}(\Omega):\left.\psi_{p}\right|_{T}\in\mathbb{Q}_{k}(T),\forall
T\in\mathcal{T}_{h}\right\\},$ (22)
where $L^{2}(\Omega)$ is the space of square integrable functions. This
nonconforming finite element space allows us to consider discontinuous
coefficients and preserves local mass conservation. We will denote in the
following by $N_{h}^{p}$ the dimension of
$\mathcal{P}_{h}^{\mathrm{DG}_{k}}\left(\mathcal{T}_{h}\right)$.
We seek the approximated displacement ($\bm{u}_{h}$) and pressure (${p}_{h}$)
solutions by discretizing the linear momentum balance equation Eq. (7)
employing the above CG finite element spaces for $\bm{u}_{h}$ and the DG
spaces for ${p}_{h}$. The fully discretized linear momentum balance equation
Eq. (7) can be defined using the following forms
$\mathcal{A}_{u}\left((\bm{u}_{h}^{n},p_{h}^{n}),\bm{\psi_{u}}\right)=\mathcal{L}_{u}\left(\bm{\psi_{u}}\right),\quad\forall\bm{\psi_{u}}\in\mathcal{U}_{h}^{\mathrm{CG}_{2}}\left(\mathcal{T}_{h}\right),$
(23)
at each time step $t^{n}$, where
$\mathcal{A}_{u}\left((\bm{u}_{h}^{n},p_{h}^{n}),\bm{\psi_{u}}\right)=\sum_{T\in\mathcal{T}_{h}}\int_{T}\bm{\sigma}^{\prime}\left(\bm{u}_{h}\right):\nabla^{s}\bm{\psi_{u}}\>dV-\sum_{T\in\mathcal{T}_{h}}\int_{T}\alpha
p_{h}\mathbf{I}:\nabla^{s}\bm{\psi_{u}}\>dV,$ (24)
and
$\mathcal{L}_{u}\left(\bm{\psi_{u}}\right)=\sum_{T\in\mathcal{T}_{h}}\int_{T}\bm{f}\bm{\psi_{u}}\>dV+\sum_{e\in\mathcal{E}_{h}^{N}}\int_{e}\bm{t_{D}}\bm{\psi_{u}}\>dS.$
(25)
Here, $\nabla^{s}$ is a symmetric gradient operator. We then discretize Eq.
(12) as
$\mathcal{A}_{p}\left((\bm{u}_{h}^{n},p_{h}^{n}),\psi_{p}\right)=\mathcal{L}_{p}\left(\psi_{p}\right),\quad\forall\psi_{p}\in\mathcal{P}_{h}^{\mathrm{DG}_{1}}\left(\mathcal{T}_{h}\right),$
(26)
for each time step $t^{n}$, where
$\begin{split}\mathcal{A}_{p}\left((\bm{u}_{h}^{n},p_{h}^{n}),\psi_{p}\right)&=\sum_{T\in\mathcal{T}_{h}}\int_{T}\frac{\alpha}{K}\mathrm{BDF}_{1}(\sigma_{v})\psi_{p}\>dV+\sum_{T\in\mathcal{T}_{h}}\int_{T}\left(\frac{1}{M}+\frac{\alpha^{2}}{K}\right)\mathrm{BDF}_{1}\left(p_{h}^{n}\right)\psi_{p}\>dV\\\
&+\sum_{T\in\mathcal{T}_{h}}\int_{T}\bm{\kappa}\nabla
p_{h}^{n}\cdot\nabla\psi_{p}\>dV-\sum_{e\in\mathcal{E}_{h}^{I}\cup\mathcal{E}_{h}^{D}}\int_{e}\left\\{\bm{\kappa}\nabla
p_{h}^{n}\right\\}_{\delta_{e}}\cdot\llbracket\psi_{p}\rrbracket\>dS\\\
&-\sum_{e\in\mathcal{E}_{h}^{I}\cup\mathcal{E}_{h}^{D}}\int_{e}\left\\{\bm{\kappa}\nabla\psi_{p}\right\\}_{\delta_{e}}\cdot\llbracket
p_{h}^{n}\rrbracket\>dS+\sum_{e\in\mathcal{E}_{h}^{I}\cup\mathcal{E}_{h}^{D}}\int_{e}\frac{\beta}{h_{e}}{\bm{\kappa}}_{{e}}\llbracket
p_{h}^{n}\rrbracket\cdot\llbracket\psi_{p}\rrbracket\>dS,\end{split}$ (27)
and
$\begin{split}\mathcal{L}_{p}\left(\psi_{p}\right)&:=\sum_{T\in\mathcal{T}_{h}}\int_{T}g\psi_{p}\>dV+\sum_{e\in\mathcal{E}_{h}^{N}}\int_{e}q_{D}\psi_{p}\>dS\\\
&-\sum_{e\in\mathcal{E}_{h}^{D}}\int_{e}\bm{\kappa}\nabla\psi_{p}\cdot
p_{D}\mathbf{n}\>dS+\sum_{e\in\mathcal{E}_{h}^{D}}\int_{e}\frac{\beta}{h_{e}}{\bm{\kappa}}_{{e}}\llbracket\psi_{p}\rrbracket\cdot
p_{D}\mathbf{n}\>dS.\end{split}$ (28)
More details regarding the block structure and solver algorithm can be found in
[13, 6]. For all the computations, matrices and vectors are built using the
FEniCS form compiler [87]. The block structure is assembled by using the
multiphenics toolbox [88]. Solvers are employed from the PETSc package [89]. All
simulations are computed on an AMD Ryzen Threadripper 3970X using a single thread.
## 4 Reduced order modeling
The DG solution scheme introduced in the previous section is typically a time-
consuming operation, making it impractical to query such a solver in a real-
time context whenever parametric studies are carried out. Such parametric
studies are often of interest to account for uncertain material properties of
porous media. Therefore, in this work, we propose to employ a reduced order
modeling strategy, based on the developments in [63]. A graphical summary of
the reduced order modeling paradigm is presented in Figure 1. The idea of this
framework has been adapted and applied to a wide range of problems [65, 66,
67, 68, 69, 70]. The computations are divided into an offline phase for the
ROM construction, which we will present through five consecutive steps, and a
(single-step) online stage for the ROM evaluation, described as follows.
The first step of the offline stage (colored in blue in the figure) represents
the initialization of a training set of $\mathrm{M}$ parameter instances used
to train the framework. Then, in the second step (green), we query the
full order model (FOM), based on the DG finite element solver discussed in the
previous section, for each parameter $\bm{\mu}$ in the training set. At this
point, we have $\mathrm{M}$ snapshots (FOM results) associated with the
different parametric configurations in the training set. Each snapshot
contains approximations of the primary variables ($\bm{u}_{h}$ and $p_{h}$) at
each time step of the partition of the time domain $\mathbb{T}$ as introduced
in the previous section. The third step (yellow) aims to compress the
information provided by the snapshots through the proper orthogonal
decomposition (POD) technique [50, 90, 63, 68, 91, 92]. The POD is used to
determine characteristic spatial modes based on relative energy content
criteria [93, 94, 95]. In order to carry out a compression, only the first
$\mathrm{N}$ spatial modes are retained [50], and employed as basis functions
for the reduced basis spaces $\mathcal{U}_{\mathrm{N}}$ and
$\mathcal{P}_{\mathrm{N}}$, used for approximating the displacement and
pressure fields respectively. The typical goal is to achieve
$\mathrm{N}\ll\mathrm{M}N^{t}$ (compression of the snapshot data), but also
$\mathrm{N}\ll N_{h}^{u}$ and $\mathrm{N}\ll N_{h}^{p}$ (dimensionality
reduction for the model discretization). Next, in the fourth step (purple) we
obtain the optimal representation of each snapshot in the reduced basis spaces
by means of an $L^{2}$ projection [65, 66, 67, 68, 69, 70]. This operation
defines a map between each pair $(t,\bm{\mu})$, with
$t\in\\{t^{0},\ldots,t^{N^{t}}\\}$ and $\bm{\mu}$ in the training set, and a
vector of coefficients $\bm{\theta}^{u}(t,\bm{\mu})$ that characterize the
best approximation in the reduced space $\mathcal{U}_{\mathrm{N}}$ for the
displacement field $\bm{u}_{h}(\bm{\mu})$ at time $t$. A similar map can be
defined for the pressure field, denoted in the following by
$\bm{\theta}^{p}(t,\bm{\mu})$. Finally, in the fifth step (white) we aim to
define maps $\widehat{\bm{\theta}}^{u}(t,\bm{\mu})$ and
$\widehat{\bm{\theta}}^{p}(t,\bm{\mu})$ for any time $t$ in the time interval
$\mathbb{T}$ and any value of the parameter $\bm{\mu}$ by training artificial
neural networks (ANN) to approximate $\bm{\theta}^{u}(t,\bm{\mu})$ and
$\bm{\theta}^{p}(t,\bm{\mu})$ based on the training data points obtained at
the fourth step [63]. We note that the wall time used to perform the $L^{2}$
projection during the fourth step is much smaller than the
wall time used to train the ANN in the fifth step. This concludes the offline
stage.
Finally, during the online phase (red), for given values of the parameter
$\bm{\mu}$ and time instance $t$ we aim to recover the online approximation to
our primary variables by querying the ANN evaluation for
$\widehat{\bm{\theta}}^{u}(t,\bm{\mu})$ and
$\widehat{\bm{\theta}}^{p}(t,\bm{\mu})$ and reconstructing the resulting
finite element representation by means of the reduced basis functions spanning
$\mathcal{U}_{\mathrm{N}}$ and $\mathcal{P}_{\mathrm{N}}$, respectively [63].
The details of each phase will be further discussed in the following
paragraphs.
Figure 1: Summary of non-intrusive model reduction framework for Biot’s
system. We note that, for the sake of simplicity, we only show $\bm{\theta}$
and $\widehat{\bm{\theta}}$ in this figure. They, however, represent
$\bm{\theta}=[\bm{\theta}^{u}(t,\bm{\mu}),\bm{\theta}^{p}(t,\bm{\mu})]$ and
$\widehat{\bm{\theta}}=[\widehat{\bm{\theta}}^{u}(t,\bm{\mu}),\widehat{\bm{\theta}}^{p}(t,\bm{\mu})]$.
### 4.1 Initialization of the training set
Let $\mathbb{P}\subset\mathbb{R}^{P}$, $P\in\mathbb{N}$, be a compact set
representing the range of variation of the parameters $\bm{\mu}\in\mathbb{P}$.
For the sake of notation we will denote by $\mu_{p}$, $p=1,\ldots,P$, the
$p$-th component of $\bm{\mu}$. To explore the parametric dependence of the
phenomena, we define a discrete training set of $\mathrm{M}$ parameter
instances. Each parameter instance in the training set will be indicated with
the notation $\bm{\mu}^{(i)}$, for $i=1,\ldots,\mathrm{M}$. Thus, the $p$-th
component of the $i$-th parameter instance in the training set is denoted by
$\mu_{p}^{(i)}$ in the following. The choice of the value of $\mathrm{M}$, as
well as the sampling procedure from the range $\mathbb{P}$ is typically user-
and problem-dependent. In this work, we use an equispaced distribution for the
training set, and we will also briefly discuss the robustness of numerical
results for varying $\mathrm{M}$. We note that adaptive sampling approaches
could be employed and might result in a better model accuracy with a lower
number of training instances [96, 97]. Still, we prefer here to use a
straightforward equispaced sampling as a thorough discussion of the parameter
space exploration is not among the main goals of this work.
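A minimal sketch of the equispaced tensor-product sampling described above (the helper name is ours):

```python
import numpy as np
from itertools import product

def equispaced_training_set(bounds, n_per_dim):
    """Equispaced tensor-product sampling of the parameter box P.

    bounds: list of P pairs (a_p, b_p); n_per_dim: samples per parameter.
    Returns an array of shape (M, P) with M = n_per_dim ** P rows mu^(i).
    """
    grids = [np.linspace(a, b, n_per_dim) for (a, b) in bounds]
    return np.array(list(product(*grids)))
```

Note that $\mathrm{M}$ grows exponentially with the number of parameters $P$, which is one reason adaptive sampling can become attractive.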
### 4.2 Full order model (FOM)
We employ the DG scheme discussed in Section 3 as our full order model (FOM).
The FOM finite element solver is operated $\mathrm{M}$ times, corresponding to
each parameter instance of $\bm{\mu}^{(i)}$. As we develop our solver on top
of the FEniCS platform [87], this part could be performed in parallel using
any desired number of processors. In this study, however, we perform our FOM
simulation using only a single core and run each FOM snapshot sequentially to
provide a clear comparison of wall time used to perform the FOM and the wall
time used to construct the ROM solution in the online phase. Since the problem
formulation is time-dependent, the output of the FOM solver for each parameter
instance $\bm{\mu}^{(i)}$ collects the time series representing the time
evolution of the primary variables for each time step $t$. Therefore, based on
the training set cardinality $\mathrm{M}$ and the number $N^{t}$ of time
steps, we have a total of $N^{t}\mathrm{M}$ training data to be employed in
the subsequent steps.
### 4.3 Proper orthogonal decomposition (POD)
In this work, we utilize POD as a data compression tool, i.e. we seek a
reduced order approximation in an optimal linear subspace [50, 90, 63, 68, 91,
92]. If the problem does not allow such representation, nonlinear variants
(e.g., autoencoders) could be considered as data compression tools [98, 99,
100, 101]. Here, we prefer to employ POD because it is generally faster than
the nonlinear variants. As the numerical results will show, POD spaces provide
ROM results of sufficient accuracy for the problem at hand.
Let $\bm{\mu}^{(i)}$ be a parameter instance in the training set,
$i=1,\ldots,\mathrm{M}$. The corresponding displacement field snapshot
contains
${\mathbb{S}_{u}^{(i)}}=\left[{\bm{u}}_{h}\left({\cdot;t^{0},\bm{\mu}^{(i)}}\right),\cdots,{\bm{u}}_{h}\left({\cdot;t^{{N^{t}}},\bm{\mu}^{(i)}}\right)\right]\in\mathbb{R}^{{N_{h}^{u}}\times
N^{t}},$ (29)
where ${\bm{u}}_{h}\left({\cdot;t^{n},\bm{\mu}^{(i)}}\right)$ represents the
displacement field at time $t^{n}$ and parameter instance $\bm{\mu}^{(i)}$. We
recall that $N_{h}^{u}$ is the number of DOFs in the displacement finite
element space, and $N^{t}$ is the total number of time steps. In this study,
$N_{h}^{u}$ is constant (i.e., the mesh and finite element function space
remain the same), and $N^{t}$ is fixed (i.e., each snapshot utilizes the same
initial and final time, and the same time step).
We compare in the rest of the paper two variants of POD-based compression for
the set of snapshots ${\mathbb{S}_{u}^{(i)}}$, $i=1,\ldots,\mathrm{M}$. For
compactness of exposition in the rest of this section, we will focus on the
displacement field $\bm{u}_{h}$, but a very similar procedure is indeed
carried out for the pressure field $p_{h}$ as well. Apart from primary
variables $\bm{u}_{h}$ and $p_{h}$, compression of any other quantity of
interest (e.g., the fluid flux at internal and external boundaries or the
maximum total stress) could also be carried out within the ROM framework. While
this may be of great interest in applications, we restrict ourselves to the
primary variables, as our focus is to validate the methodology.
The first choice is based on a _standard_ POD algorithm where all snapshots
are compressed in a single procedure. We first collect all snapshots in a
matrix
${\mathbb{S}_{u}}=\left[{\mathbb{S}_{u}^{(1)},\cdots,\mathbb{S}_{u}^{(\mathrm{M})}}\right]\in\mathbb{R}^{{N_{h}^{u}}\times
N^{t}\mathrm{M}},$ (30)
by horizontally stacking all matrices $\mathbb{S}_{u}^{(i)}$,
$i=1,\ldots,\mathrm{M}$. We then perform the singular value decomposition
(SVD) of ${\mathbb{S}_{u}}$ as
${\mathbb{S}_{u}}=\mathbb{W}\left[\begin{array}[]{cc}\mathbb{D}&0\\\
0&0\end{array}\right]\mathbb{Z}^{\top}$ (31)
where
$\mathbb{W}=\left[\mathbf{w}_{1},\cdots,\mathbf{w}_{{N_{h}^{u}}}\right]\in\mathbb{R}^{{N_{h}^{u}}\times{N_{h}^{u}}}$
and
$\mathbb{Z}=\left[\mathbf{z}_{1},\cdots,\mathbf{z}_{N^{t}\mathrm{M}}\right]\in\mathbb{R}^{N^{t}\mathrm{M}\times
N^{t}\mathrm{M}}$ are orthogonal matrices,
$\mathbb{D}=\operatorname{diag}\left(\sigma_{1},\cdots,\sigma_{r}\right)\in\mathbb{R}^{r\times
r}$ is a diagonal matrix, with singular values
$\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{r}>0.$ Here, $r$ is the number
of non-zero singular values and
$r\leq\min\left\\{{N_{h}^{u}},N^{t}\mathrm{M}\right\\}$. The columns of
$\mathbb{W}$ are called the left singular vectors of $\mathbb{S}_{u}$, and the columns
of $\mathbb{Z}$ are called the right singular vectors of $\mathbb{S}_{u}$. To carry
out a dimensionality reduction, the POD basis of rank $\mathrm{N}\ll r$
consists of the first $\mathrm{N}$ left singular vectors of $\mathbb{S}_{u}$,
and it has the property of minimizing the projection error defined by
$\left\\{\mathbf{w}_{1},\cdots,\mathbf{w}_{\mathrm{N}}\right\\}=\arg\min\left\\{\varepsilon\left(\tilde{\mathbf{w}}_{1},\cdots,\tilde{\mathbf{w}}_{\mathrm{N}}\right)={\sum_{i=1}^{\mathrm{M}}\sum_{k=0}^{N^{t}}\left\|\bm{u}_{h}\left(\cdot;t^{k},\bm{\mu}^{(i)}\right)-\sum_{n=1}^{\mathrm{N}}\left(\bm{u}_{h}\left(\cdot;t^{k},\bm{\mu}^{(i)}\right),\tilde{\mathbf{w}}_{n}\right)_{u}\tilde{\mathbf{w}}_{n}\right\|_{u}^{2}}\right\\}$
(32)
among all the orthonormal bases
$\left\\{\tilde{\mathbf{w}}_{1},\cdots,\tilde{\mathbf{w}}_{\mathrm{N}}\right\\}{\subset\mathbb{R}^{N_{h}^{u}}}$.
Here $(\cdot,\cdot)_{u}$ denotes an inner product for the displacement space,
while $\left\|\cdot\right\|_{u}$ denotes its induced norm. The reduced basis space
$\mathcal{U}_{\mathrm{N}}$ is then defined as the span of
$\left\\{\mathbf{w}_{1},\cdots,\mathbf{w}_{\mathrm{N}}\right\\}$. The effect
of the dimension $\mathrm{N}$ on the accuracy of the resulting ROM will be
discussed in the numerical examples section (see Section 5).
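With the Euclidean inner product standing in for $(\cdot,\cdot)_u$, the minimization property of Eq. (32) can be observed numerically: the squared projection error of the rank-$\mathrm{N}$ POD basis equals the sum of the discarded squared singular values. A sketch (function names are ours):

```python
import numpy as np

def pod_basis(S, N):
    """First N left singular vectors of the snapshot matrix S (Euclidean inner product)."""
    W, sigma, _ = np.linalg.svd(S, full_matrices=False)
    return W[:, :N], sigma

def projection_error_sq(S, W_N):
    """Squared Frobenius-norm error of projecting the snapshot columns onto span(W_N)."""
    return np.linalg.norm(S - W_N @ (W_N.T @ S), "fro") ** 2
```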
The second choice is instead based on a _nested_ POD algorithm. The primary
rationale for this choice is that the SVD computation of the standard POD case
may become unfeasible when $\mathrm{M}$ is large (i.e., finer sampling of the
parameter space) and $N^{t}$ is large as well (i.e., small time steps).
Indeed, the SVD of a matrix with a large number $N^{t}\mathrm{M}$ of columns
may require a large amount of resources, both in terms of CPU time and memory
storage. The bottleneck is due to the simultaneous compression in parameter
space and time. The nested POD algorithm, instead, aims at decoupling the
compression in consecutive stages, operating only either on the time interval
or on the parameter space. Similar algorithms are often employed by
practitioners in the reduced order modeling community and can be found in the
literature with various names, such as two-level POD, or hierarchical
approximate POD [102, 103, 104, 105, 106, 59, 107]. The nested POD algorithm
can be summarized in the two following sequential stages:
1. 1)
_compression on the temporal evolution_ : for each parameter instance
$\bm{\mu}^{(i)}$ in the training set compress the temporal evolution stored in
$\mathbb{S}_{u}^{(i)}\in\mathbb{R}^{N_{h}^{u}\times N^{t}}$ by means of a POD,
retaining only the first $\mathrm{N}_{\mathrm{int}}\ll N^{t}$ modes. A
compressed matrix
$\widetilde{\mathbb{S}}_{u}^{(i)}\in\mathbb{R}^{N_{h}^{u}\times\mathrm{N}_{\mathrm{int}}}$
is then assembled by storing column-wise the first $\mathrm{N}_{\mathrm{int}}$
modes, scaled by the respective singular values. The value of
$\mathrm{N}_{\mathrm{int}}$ can be chosen according to energy criteria (and
thus it will, in general, depend on the index $i$) or can be fixed a
priori (as we do in this study), and is typically considerably smaller than
the number of time steps $N^{t}$.
2. 2)
_compression on the parameter space_ : after the temporal evolution of each
parameter instance has been compressed, one can assemble the following matrix
${\widetilde{\mathbb{S}}_{u}}=\left[{\widetilde{\mathbb{S}}_{u}^{(1)},\cdots,\widetilde{\mathbb{S}}_{u}^{(\mathrm{M})}}\right]\in\mathbb{R}^{{N_{h}^{u}}\times\mathrm{N}_{\mathrm{int}}\mathrm{M}}.$
(33)
One can proceed as in the standard POD and define the reduced basis space
$\mathcal{U}_{\mathrm{N}}$ obtained after compression of
${\widetilde{\mathbb{S}}_{u}}$. Note that the final goal of the nested POD is
still to obtain a reduced basis space of dimension $\mathrm{N}$, which is
computed from an SVD of a matrix with $\mathrm{N}_{\mathrm{int}}\mathrm{M}\ll
N^{t}\mathrm{M}$ columns, thus overcoming the bottleneck of the first
algorithm.
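The two stages above can be sketched as follows (Euclidean inner product, hypothetical function name):

```python
import numpy as np

def nested_pod(snapshots, N_int, N):
    """Nested POD: stage 1 compresses each temporal evolution, stage 2 the parameter space.

    snapshots: list of M matrices S^(i), each of shape (N_h, N_t).
    """
    compressed = []
    for S_i in snapshots:
        W, sigma, _ = np.linalg.svd(S_i, full_matrices=False)
        # keep N_int modes, scaled by the corresponding singular values
        compressed.append(W[:, :N_int] * sigma[:N_int])
    S_tilde = np.hstack(compressed)              # N_h x (N_int * M), Eq. (33)
    W, _, _ = np.linalg.svd(S_tilde, full_matrices=False)
    return W[:, :N]
```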
We summarize the computations required by each of the two algorithms in Figure
2. Figure 2a reports the input data to the two algorithms, namely parameters
$\bm{\mu}^{(i)}$ in the training set, time steps $t^{n}$, and corresponding
displacement or pressure fields obtained querying the DG solver. A generic
field $\varphi_{h}$ is shown to serve as a reminder that the compression is
carried out for both primary variables to obtain reduced spaces
$\mathcal{U}_{\mathrm{N}}$ and $\mathcal{P}_{\mathrm{N}}$. In the first
approach (see Figure 2b), we perform a compression over the whole matrix
$\mathbb{S}$; a reduction by an SVD is represented in the picture by means of a
colored box. In the second approach (see Figure 2c), we utilize a nested POD
method instead; compressions on the temporal evolution are represented by a
blue box, while the final compression on the parameter space is depicted by a
red box. We finally note that, due to the adopted scaling in 1), the standard
POD is formally equivalent to a nested POD algorithm with
$\mathrm{N_{int}}=N^{t}$. However, it would be impractical to carry out the
standard POD in such a manner because it would require $\mathrm{M}$
intermediate compressions without resolving the underlying bottleneck. Still,
this formal equivalence motivates us to present numerical results for the
standard POD with the label $\mathrm{N_{int}}=\infty$, where the symbol
$\infty$ (instead of the actual value $N^{t}$) serves us as a reminder that no
intermediate compressions are carried out.
Figure 2: Proper orthogonal decomposition (POD) variants used in this study:
(a) input data based on $\mathrm{M}$ training instances and ${N^{t}}$ time
steps, (b) standard POD, and (c) nested POD. Each colored box represents a
compression by SVD.
### 4.4 $L^{2}$ projection
Again, for the sake of compactness, we will focus on $\bm{u}_{h}$, but a
similar procedure is carried out for $p_{h}$. Let
$\left\\{\mathbf{w}_{1},\cdots,\mathbf{w}_{\mathrm{N}}\right\\}$ denote the
basis functions spanning $\mathcal{U}_{\mathrm{N}}$. Given a time $t^{n}$ in
the discretization of the time interval $\mathbb{T}$ and a parameter instance
$\bm{\mu}^{(i)}$ in the training set we can define the best approximation
$\widetilde{\bm{u}}_{h}\left(\cdot;t^{n},\bm{\mu}^{(i)}\right)$ to
$\bm{u}_{h}\left(\cdot;t^{n},\bm{\mu}^{(i)}\right)$ in
$\mathcal{U}_{\mathrm{N}}$ as
$\widetilde{\bm{u}}_{h}\left(\cdot;t^{n},\bm{\mu}^{(i)}\right)=\sum_{k=1}^{\mathrm{N}}{\theta}_{k}^{u}(t^{n},\bm{\mu}^{(i)})\mathbf{w}_{k}$
(34)
Here we collect the coefficients in the vector
$\bm{\theta}^{u}(t^{n},\bm{\mu}^{(i)})=\left[\theta_{1}^{u}(t^{n},\bm{\mu}^{(i)}),\cdots,\theta_{\mathrm{N}}^{u}(t^{n},\bm{\mu}^{(i)})\right]\in\mathbb{R}^{\mathrm{N}}$,
where each $\theta_{j}^{u}$ solves the $L^{2}$
projection problem, which can be stated as: given
$\bm{u}_{h}\left(\cdot;t^{n},\bm{\mu}^{(i)}\right)$, find
$\bm{\theta}^{u}(t^{n},\bm{\mu}^{(i)})$ such that:
$\sum_{j=1}^{\mathrm{N}}{\theta}_{j}^{u}(t^{n},\bm{\mu}^{(i)})(\mathbf{w}_{j},\mathbf{w}_{k})_{u}=(\bm{u}_{h}\left(\cdot;t^{n},\bm{\mu}^{(i)}\right),\mathbf{w}_{k})_{u},\quad k=1,\ldots,\mathrm{N}.$
We note that this results in a linear system, whose left-hand side
$(\mathbf{w}_{j},\mathbf{w}_{k})_{u}$ can be easily precomputed and stored in
a $\mathrm{N}\times\mathrm{N}$ matrix. However, the right-hand side can only
be computed once the DG solutions are available for the training set and
corresponding time steps. The goal of the next subsection is to generalize the
computation of the coefficients of the ROM expansion for any (time, parameter)
pair using artificial neural networks trained on the available data
$\bm{\theta}^{u}(t^{n},\bm{\mu}^{(i)})$.
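As a minimal sketch of this projection step, assume the basis functions are stored as columns of a matrix $W$ and the inner product $(\cdot,\cdot)_{u}$ is represented by a mass matrix $M$ (taken as the identity below purely for illustration):

```python
import numpy as np

def l2_project(u_h, W, M):
    """Solve the L^2 projection system G theta = b, where the Gram matrix
    G_kj = (w_j, w_k)_u and the right-hand side b_k = (u_h, w_k)_u.
    The inner product is (a, b)_u = a^T M b for a mass matrix M."""
    G = W.T @ M @ W          # N x N Gram matrix, precomputable
    b = W.T @ (M @ u_h)      # depends on the available DG solution
    return np.linalg.solve(G, b)

# Toy check: if u_h lies in span(W), the projection reproduces it exactly.
rng = np.random.default_rng(1)
n_dof, N = 30, 4
W = np.linalg.qr(rng.standard_normal((n_dof, N)))[0]
M = np.eye(n_dof)            # identity mass matrix for the sketch
theta_true = rng.standard_normal(N)
u_h = W @ theta_true
theta = l2_project(u_h, W, M)
print(np.allclose(theta, theta_true))  # True
```

In the actual offline phase the Gram matrix is assembled once, while the right-hand side is recomputed for every snapshot.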
### 4.5 Artificial neural networks (ANN)
Following the determination of the correspondence between
$(t^{n},\bm{\mu}^{(i)})$ and ${\bm{\theta}^{u}(t^{n},\bm{\mu}^{(i)})}$, we aim
to construct artificial neural networks (ANN) to map an input space of
$\mathbb{T}\times\mathbb{P}\ni(t,\bm{\mu})$ to a vector of coefficients
$\widehat{\bm{\theta}}^{u}(t,\bm{\mu})$ that reproduce the training data. The
network architecture used in this work is presented in Figure 3. The number of
hidden layers ($\mathrm{N_{hl}}$) and the number of neurons per hidden layer
($\mathrm{N_{nn}}$) act as so-called hyperparameters [108]. Each neuron
(e.g., $H_{1,1},\ldots,H_{1,\mathrm{N_{nn}}}$) is connected to the nodes of
the previous layer with adjustable weights and also has an adjustable bias.
We denote the sets of weights and biases as $\mathrm{W}$ and $\mathrm{b}$,
respectively. These variables are learned during a training phase [109, 108]. The neural networks
are built on the PyTorch platform [110]. The results produced using either the
rectified linear unit (ReLU) or the hyperbolic tangent ($\tanh$) were
comparable. Hence, we only present the results using the $\tanh$ activation
function in this paper.
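The architecture just described can be illustrated with a plain NumPy forward pass (the actual networks are built in PyTorch); the weight initialization and sample layer sizes below are assumptions made for the sketch.

```python
import numpy as np

def init_mlp(n_in, n_out, n_hl=3, n_nn=7, seed=0):
    """Random weights/biases for an MLP with n_hl hidden layers of n_nn
    tanh neurons, mapping (t, mu_1, ..., mu_P) to N reduced coefficients."""
    rng = np.random.default_rng(seed)
    sizes = [n_in] + [n_nn] * n_hl + [n_out]
    return [(rng.standard_normal((m, n)) * np.sqrt(1.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """tanh activations on the hidden layers, linear output layer."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# Input: time t plus P = 2 parameters; output: N = 10 coefficients.
params = init_mlp(n_in=3, n_out=10)
theta_hat = mlp_forward(params, np.array([[0.5, 0.2, 0.8]]))
print(theta_hat.shape)  # (1, 10)
```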
Figure 3: Artificial neural network architecture used in this study. The
input layer contains up to $P+1$ input nodes ($\mu_{1}$ to $\mu_{P}$ and
$t$), and the output layer is composed of $\mathrm{N}$ output nodes
($\widehat{\theta}_{1}^{u}(t,\bm{\mu})$ to
$\widehat{\theta}_{\mathrm{N}}^{u}(t,\bm{\mu})$). The number of hidden layers
is denoted by $\mathrm{N_{hl}}$, and each hidden layer is composed of
$\mathrm{N_{nn}}$ neurons ($H_{1}$ to $H_{\mathrm{N_{nn}}}$).
Here we use a mean squared error ($\mathrm{MSE}^{\theta,u}$) as the metric of
our network loss function, defined as follows
${\mathrm{MSE}^{\theta,u}}=\frac{1}{\mathrm{M}N^{t}}\sum_{i=1}^{\mathrm{M}}\sum_{k=0}^{N^{t}}\left|\widehat{\bm{\theta}}^{u}\left(t^{k},\bm{\mu}^{(i)}\right)-{\bm{\theta}^{u}}\left(t^{k},\bm{\mu}^{(i)}\right)\right|^{2}.$
(35)
To minimize Eq. (35), we train the neural network using the adaptive moment
estimation (ADAM) algorithm [111]. Throughout this study, we use a batch size
of 32, a learning rate of 0.001, and 20,000 epochs, and we normalize both our
input and output to $[0,1]$. To prevent our networks from overfitting, we
follow early stopping and generalized cross-validation criteria
[63, 112, 113]. Instead of literally stopping the training cycle, however, we
save the set of trained $\mathrm{W}$ and $\mathrm{b}$ to be used in the
online phase whenever the current validation loss is lower than the lowest
validation loss from all previous epochs. This procedure ensures that ANN
training times are compared at a fixed number of epochs. As already noted in
the two previous subsections, we train the ANN separately for each primal
variable.
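The checkpoint-selection procedure described above (train for a fixed number of epochs but keep the weights with the lowest validation loss) can be sketched as follows; `step_fn` and `val_loss_fn` are hypothetical stand-ins for one ADAM epoch and the validation-loss evaluation.

```python
import copy
import numpy as np

def train_with_best_checkpoint(model_params, step_fn, val_loss_fn, n_epochs):
    """Run a fixed number of epochs (no literal early stopping) but keep
    the parameters from the epoch with the lowest validation loss, to be
    reused in the online phase."""
    best_loss = np.inf
    best_params = copy.deepcopy(model_params)
    for epoch in range(n_epochs):
        model_params = step_fn(model_params, epoch)  # one optimizer epoch
        loss = val_loss_fn(model_params)
        if loss < best_loss:
            best_loss = loss
            best_params = copy.deepcopy(model_params)
    return best_params, best_loss

# Dummy model: a single scalar; "training" moves it toward 0 and then past
# it, so the best validation loss occurs mid-run rather than at the end.
step = lambda p, e: p - 0.3
val = lambda p: abs(p)
best, loss = train_with_best_checkpoint(2.0, step, val, n_epochs=10)
print(best, loss)
```

The deep copies matter: without them the "best" parameters would keep tracking the (deteriorating) current ones.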
### 4.6 Online phase
During the online phase, for each inquiry (i.e., a novel value of $\bm{\mu}$),
we evaluate the ANN to obtain $\widehat{\bm{\theta}}^{u}(t,\bm{\mu})$ for each
$t\in\\{t^{0},\cdots,t^{{N^{t}}}\\}$. Subsequently, we reconstruct the
displacement as
$\widehat{\bm{u}}_{h}\left(\cdot;t,\bm{\mu}\right)=\sum_{k=1}^{\mathrm{N}}\widehat{\theta}_{k}^{u}(t,\bm{\mu})\mathbf{w}_{k},$
(36)
and similarly for the pressure. We note that the reduced basis
$\\{\mathbf{w}_{k}\\}_{k=1}^{\mathrm{N}}$ is already constructed during the
POD phase; hence, recovering the online solutions only requires evaluating
$\widehat{\theta}_{k}^{u}(t,\bm{\mu})$ with the trained ANN (which is
typically extremely fast) and performing the reconstruction in Eq. (36)
(which only requires a linear combination of finite element functions). As a
result, the online phase is inexpensive for each inquiry.
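A sketch of the online reconstruction of Eq. (36), assuming the reduced basis is stored column-wise in a matrix:

```python
import numpy as np

def reconstruct(W, theta_hat):
    """Online reconstruction, Eq. (36): a linear combination of the
    precomputed reduced basis columns with the ANN-predicted
    coefficients."""
    return W @ theta_hat  # one (n_dof,) vector per queried (t, mu)

rng = np.random.default_rng(2)
W = rng.standard_normal((100, 5))       # basis from the POD phase
theta_hat = rng.standard_normal(5)      # from the trained ANN
u_hat = reconstruct(W, theta_hat)
print(u_hat.shape)  # (100,)
```

Since `W` is fixed, the per-query cost is just the ANN evaluation plus one matrix-vector product, which is the reason the online phase is cheap.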
## 5 Numerical Examples
Throughout this section we take $\Omega=\left(0,1\right)^{2}$ corresponding to
a square domain of $1\mathrm{m}^{2}$ area, and decompose its boundary
$\partial\Omega$ with the following labels
$\displaystyle\mathrm{Left}$ $\displaystyle=\\{0\\}\times[0,1]$ (37)
$\displaystyle\mathrm{Top}$ $\displaystyle=[0,1]\times\\{1\\}$
$\displaystyle\mathrm{Right}$ $\displaystyle=\\{1\\}\times[0,1]$
$\displaystyle\mathrm{Bottom}$ $\displaystyle=[0,1]\times\\{0\\}.$
A plot of the domain, its boundary labels, and the mesh we utilized for its
discretization is shown in Figure 4. The mesh contains 2370 elements, and its
maximum element size $h$ is $0.047\,\mathrm{m}$. We note that the mesh is
split into two conforming subdomains $(0,1)\times(0,0.5)$ and
$(0,1)\times(0.5,1)$ because
one of the test cases presented in this section will employ different material
properties in the two subdomains. The degrees of freedom associated with this
mesh are $9722$ for the continuous approximation of the displacement field
$\bm{u}$, and $7110$ for the discontinuous approximation of the pressure $p$.
For what concerns the time discretization, we choose $\Delta
t^{0}=20.0\,\mathrm{s}$, $\Delta t_{mult}=1.0\,\mathrm{s}$, and $\Delta
t_{max}=20\,\mathrm{s}$. In each of the following subsections, we will specify
the input parameters and boundary and initial conditions for each considered
test case.
Figure 4: Domain, its boundaries, and mesh used for all numerical examples
### 5.1 Model validation
In this subsection, we verify the developed reduced order modeling framework
through a series of benchmark problems. Here we fix $\mathrm{N}=5$,
$\mathrm{N_{int}}=5$, $\mathrm{N_{hl}}=3$, and $\mathrm{N_{nn}}=7$ to simplify
the presentation, since the goal of this subsection is to showcase the
versatility of the proposed framework for test cases of increasing difficulty.
Furthermore, the effects (e.g., in terms of training time and model accuracy)
of each of the aforementioned hyperparameters will be discussed later in
Section 5.2.
#### 5.1.1 Example 1: Terzaghi’s consolidation problem
We first verify the presented reduced order model by means of the test case
that we have already employed in the validation of the finite element solver
in [43, 13, 6]. This benchmark problem is built upon Terzaghi’s 1-dimensional
consolidation problem [22]. We assume the domain is homogeneous, isotropic,
and saturated with a single-phase fluid. The boundary conditions are described
as follows
$\begin{split}\bm{u}_{D}\cdot\mathbf{n}=0\;\mathrm{m}&\text{ \> on \> }\mathrm{Left}\times\mathbb{T},\\\
\bm{t_{D}}=[0,-1]\;\mathrm{kPa}&\text{ \> on \> }\mathrm{Top}\times\mathbb{T},\\\
\bm{u}_{D}\cdot\mathbf{n}=0\;\mathrm{m}&\text{ \> on \> }\mathrm{Right}\times\mathbb{T},\\\
\bm{u}_{D}\cdot\mathbf{n}=0\;\mathrm{m}&\text{ \> on \> }\mathrm{Bottom}\times\mathbb{T},\end{split}$ (38)
for Eq. (7), and
$\begin{split}{q}_{D}=0\;\mathrm{m/s}&\text{ \> on \> }\mathrm{Left}\times\mathbb{T},\\\
{p_{D}}=0\;\mathrm{Pa}&\text{ \> on \> }\mathrm{Top}\times\mathbb{T},\\\
{q}_{D}=0\;\mathrm{m/s}&\text{ \> on \> }\mathrm{Right}\times\mathbb{T},\\\
{q}_{D}=0\;\mathrm{m/s}&\text{ \> on \> }\mathrm{Bottom}\times\mathbb{T},\end{split}$ (39)
for Eq. (12). A graphical summary of such boundary conditions is reported in
Figure 5. The coefficients appearing in section 2 will either be considered as
input parameters, or given fixed values. In particular, we fix $\alpha\approx
1$, as the porous matrix is characterized by $K=1000$ $\mathrm{kPa}$ while
the bulk solid is modeled by $K_{\mathrm{s}}\to\infty$; furthermore,
$c_{f}=1.0\times 10^{-9}$ $\mathrm{Pa^{-1}}$, $\phi=0.3$, and the fluid
viscosity is $\mu_{f}=10^{-3}$ $\mathrm{Pa\cdot s}$. The Poisson ratio $\nu$ and the
permeability coefficient $k_{xx}$ are instead considered as input parameters
$\bm{\mu}=(\nu,k_{xx})$. The admissible range of variation for $\nu$ is
$[0.1,0.4]$, while that for $k_{xx}$ is $[1.0\times 10^{-15},1.0\times
10^{-11}]$. For any parametric realization of $\nu$ one can then easily
compute the corresponding Lamé constants $\lambda_{l}$ and $\mu_{l}$ by Eq.
(2); for any parametric realization of $k_{xx}$ the matrix permeability tensor
$\bm{k}$ is defined as
$\bm{k}:=\left[\begin{array}[]{ll}k_{xx}&k_{xy}:=0.0\\\
k_{yx}:=0.0&k_{yy}:=k_{xx}\end{array}\right].$ (40)
Figure 5: Example 1: Setup for the 1-dimensional consolidation problem in a
homogeneous material. Here $H=1$, ${p_{D}}=0$, and $\bm{t_{D}}=[0,-1]$
$\mathrm{kPa}$. This figure is adapted from [13].
For the validation carried out in this section, we focus on a fixed
realization of the uncertain parameter $\bm{\mu}$ that lies outside of the
training set. Further discussion on the sensitivity of the reduced order model
over the entire parametric range $\mathbb{P}$ will follow in section 5.2.5. We
use a mean squared error ($\mathrm{MSE}_{\varphi}(t,\bm{\mu})$) and maximum
error ($\mathrm{ME}_{\varphi}(t,\bm{\mu})$) as the metrics to evaluate our
developed framework. $\mathrm{MSE}_{\varphi}(t,\bm{\mu})$ and
$\mathrm{ME}_{\varphi}(t,\bm{\mu})$ are defined as follows
${\mathrm{MSE}_{\varphi}(t,\bm{\mu}):=\left\|\varphi_{h}(\cdot;t,\bm{\mu})-\widehat{\varphi}_{h}(\cdot;t,\bm{\mu})\right\|_{\varphi}^{2}},$
(41)
and
${\mathrm{ME}_{\varphi}(t,\bm{\mu}):=\left\|\varphi_{h}(\cdot;t,\bm{\mu})-\widehat{\varphi}_{h}(\cdot;t,\bm{\mu})\right\|_{\varphi}^{\infty},}$
(42)
where $\varphi$ stands for either the displacement $\bm{u}$ or the pressure
$p$, $\varphi_{h}(\cdot;t,\bm{\mu})$ is the corresponding finite element
solution at time $t$, and $\widehat{\varphi}_{h}(\cdot;t,\bm{\mu})$ is the
corresponding ROM solution at time $t$. Furthermore,
$\left\|\cdot\right\|_{\varphi}^{2}$ denotes the norm in the space of the
primary variable $\varphi$, $\left\|\cdot\right\|_{\varphi}^{\infty}$ the
infinity norm (i.e., the maximum pointwise absolute value of its function
argument). We remark that, even though Eqs. (35) and (41) are both MSE errors;
they play two fundamentally distinct roles: Eq. (35) is employed during the
training phase and provides a measurement of the error over the entire time
and parametric range; in contrast, Eq. (41) is employed during the testing
phase and provides a measurement of the error for each time step and each
parametric instance. Furthermore, Eq. (35) accounts for errors introduced due
to the ANN approximation and measures such errors in the space
$\mathbb{R}^{\mathrm{N}}$ of the reduced order coefficients; in contrast, Eq. (41)
accounts for errors introduced by both the POD basis truncation and ANN
approximation, and measures such errors in the spatial norm associated to each
primary variable. Consistently with this observation, it will then be quite
natural to study further whether the error is primarily introduced by the
basis truncation or the ANN evaluation, as we will do in section 5.2.2.
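A minimal sketch of the two error metrics of Eqs. (41)-(42), assuming discrete solution vectors and a mass-matrix representation of the spatial norm (identity if omitted); these names and shapes are assumptions for the sketch.

```python
import numpy as np

def mse_me(phi_fom, phi_rom, M=None):
    """Error metrics of Eqs. (41)-(42): MSE is the squared spatial norm of
    the difference (here (e, e) = e^T M e, with M the mass matrix, identity
    if omitted); ME is the maximum pointwise absolute difference."""
    e = phi_fom - phi_rom
    mse = float(e @ e) if M is None else float(e @ (M @ e))
    me = float(np.max(np.abs(e)))
    return mse, me

fom = np.array([1.0, 2.0, 3.0])
rom = np.array([1.0, 2.1, 2.8])
mse, me = mse_me(fom, rom)
print(mse, me)  # approximately 0.05 and 0.2
```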
The $\mathrm{MSE}$ and $\mathrm{ME}$ results are presented in Figure 6. We
observe that the error in both primary variables maintains the same order of
magnitude in the entire time interval. In particular, the reduced order
approximation of the displacement field is affected by an MSE of $O(10^{-7})$
$\mathrm{m}^{2}$ and ME of $O(10^{-9})$ $\mathrm{m}$. Since the finite element
displacement $\bm{u}_{h}$ is $O(10^{-4})$ $\mathrm{m}$ for the current
parameter value, the corresponding relative ME (i.e., the ratio between ME and
the magnitude of the finite element displacement) is $O(10^{-5})$, which makes
the online evaluation an accurate surrogate for any practical engineering
scenario. Similarly, the pressure field has an MSE of $O(10^{0})$
$\mathrm{P}\mathrm{a}^{2}$ and an ME of $O(10^{-2})$ $\mathrm{P}\mathrm{a}$;
since the finite element pressure $p_{h}$ has values of $O(10^{3})$
$\mathrm{P}\mathrm{a}$, the online evaluation of the pressure also results in
an approximation with a relative ME of $O(10^{-5})$.
Figure 6: Example 1: mean squared error (MSE) and maximum error (ME) plots
using $\bm{\mu}=(\nu,k_{xx})=(0.2,1.0\times 10^{-12})$ \- outside of the
training snapshots: (a) displacement field ($\bm{u}$) and (b) fluid pressure
field ($p$).
#### 5.1.2 Example 2: Consolidation problem with anisotropic permeability
We then move to a case where medium permeability is anisotropic. This
benchmark case has been employed, e.g., in [114, 5] and shows the advantages
of the DG formulation that we use in this work over traditional finite volume
methods, which use a standard two-point flux approximation scheme, as the
latter requires the grid to be aligned with the principal directions of the
permeability/diffusivity tensors. Boundary conditions, fixed coefficients, and
input parameters are as in Section 5.1.1, except for the anisotropic
permeability tensor
$\bm{k}:=\left[\begin{array}[]{ll}k_{xx}&k_{xy}:=0.1k_{xx}\\\
k_{yx}:=0.1k_{xx}&k_{yy}:=0.1k_{xx}\end{array}\right],$ (43)
where $k_{xx}$ is the second input parameter. The $\mathrm{MSE}$ and
$\mathrm{ME}$ results of this case are illustrated in Figure 7. Similarly to
the previous example, considering that $\bm{u}_{h}$ and $p_{h}$ at the initial
state are $O(10^{-4})$ $\mathrm{m}$ and $O(10^{3})$ $\mathrm{P}\mathrm{a}$,
respectively, we get a relative ME of $O(10^{-5})$ for both displacement and
pressure.
Figure 7: Example 2: mean squared error (MSE) and maximum error (ME) plots
using ${\bm{\mu}=(\nu,k_{xx})=(0.2,1.0\times 10^{-12})}$ \- outside of the
training snapshots: (a) displacement field ($\bm{u}$) and (b) fluid pressure
field ($p$).
#### 5.1.3 Example 3: Consolidation problem with 2-layered material
Finally, we evaluate the developed model reduction framework using a 2-layered
material as presented in Figure 8. Boundary conditions, fixed coefficients,
and input parameters are as in Section 5.1.1, except for medium permeability
defined as
$\bm{k}(x,y)=\begin{cases}\bm{k}_{1},y>0.5\\\ \bm{k}_{2},y<0.5\\\
\end{cases},\quad\text{where}\quad\begin{aligned}
\bm{k}_{1}&:=\left[\begin{array}[]{ll}1.0\times 10^{-12}&0.0\\\ 0.0&1.0\times
10^{-12}\end{array}\right],\text{and}\\\
\bm{k}_{2}&:=\left[\begin{array}[]{ll}k_{xx}&0.0\\\
0.0&k_{xx}\end{array}\right].\end{aligned}$ (44)
The second input parameter thus affects the isotropic permeability
$\bm{k}_{2}$ in the bottom subdomain depicted in Figure 8; instead, the
permeability tensor $\bm{k}_{1}$ is parameter independent in the top
subdomain. We restrict the range for the second parameter $k_{xx}$ to the
interval $[1.0\times 10^{-16},1.0\times 10^{-15}]$ to simulate parametric
configurations in which the two layers have very different material
properties.
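For illustration, the two-layer permeability of Eq. (44) can be evaluated pointwise as follows; the function name and the sample value of $k_{xx}$ are assumptions made for the sketch.

```python
import numpy as np

def permeability(y, k_xx):
    """Two-layer permeability of Eq. (44): fixed isotropic k_1 in the top
    subdomain (y > 0.5), parameter-dependent isotropic k_2 below."""
    if y > 0.5:
        return 1.0e-12 * np.eye(2)   # k_1, parameter independent
    return k_xx * np.eye(2)          # k_2, from the input parameter

k_top = permeability(0.75, k_xx=5.0e-16)
k_bot = permeability(0.25, k_xx=5.0e-16)
print(k_top[0, 0] / k_bot[0, 0])  # sharp contrast between the two layers
```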
Figure 8: Example 3: Setup for the 1-dimensional consolidation problem in a
2-layered material. This figure is adapted from [13].
This test case has been used, e.g., in [27, 43, 13, 6] to underline how,
without a suitable stabilization, the solution may exhibit spurious
oscillations at the interface between the two layers. Since the DG method we
employ in this work is oscillation-free, not only are the FOM solutions free
of spurious oscillations at the interface (i.e., at $y=0.5$), but the ROM
also fulfills this desirable property, see Figures 9c-d. For what concerns the
quantitative behavior of the error in time, we get a relative ME (the ratio
between ME and the magnitude of the finite element solutions) of $O(10^{-2})$
for both primary variables. We note that $\mathrm{MSE}$, $\mathrm{ME}$ and
relative ME values are higher than the ones presented in Sections 5.1.1 and
5.1.2 since the permeability is strongly heterogeneous, exhibiting a sharp
contrast between the two material phases resulting in a discontinuity in the
pressure field, which makes the problem significantly more challenging than
the two previous test cases.
Figure 9: Example 3: mean squared error (MSE) and maximum error (ME) plots
using ${\bm{\mu}=(\nu,k_{xx})=(0.2,5.0\times 10^{-15})}$ \- outside of the
training snapshots: (a) displacement field ($\bm{u}$), (b) fluid pressure
field ($p$); full order model (FOM) and reduced order model (ROM) results
along the $x=0.5$ line of (c) displacement field ($\bm{u}$), and (d) fluid
pressure field ($p$).
### 5.2 Model analysis
Following the verification of the developed ROM framework for a representative
realization of the input parameters and for fixed values of the
hyperparameters, we now perform a comprehensive analysis of the ROM using a
more realistic example.
#### 5.2.1 Example 4: Consolidation problem with heterogeneous permeability
In this example, the matrix permeability is heterogeneous as presented in
Figure 10. This field is generated as in [115]; the average of $k_{xx}$ is
$1.77\times 10^{-12}$ $\mathrm{m}^{2}$ and its variance is $5.53\times
10^{-24}$ $\mathrm{m}^{4}$. The field $\bm{k_{m}}$ is isotropic, i.e.,
$\bm{k_{m}}=k_{xx}\mathbf{I}$. The Zinn & Harvey transformation is applied at
the end of the permeability field generation [115, 116]. Thus, in contrast to
the previous test cases, the matrix
permeability is fixed and not parametric. The input parameters are
$\bm{\mu}=(\nu,\alpha)\in[0.1,0.4]\times[0.4,1.0]$. The remaining
coefficients, as well as the boundary conditions, are as in Section 5.1.1. We
note that the magnitude of the initial values for $\bm{u}_{h}$ and $p_{h}$ are
$O(10^{-4})$ $\mathrm{m}$ and $O(10^{3})$ $\mathrm{P}\mathrm{a}$, respectively
throughout this example.
Figure 10: Example 4: A permeability field ($\bm{k_{m}}=k_{xx}\mathbf{I}$)
generated as in [115], with average $k_{xx}=1.77\times 10^{-12}$
$\mathrm{m}^{2}$ and variance $5.53\times 10^{-24}$ $\mathrm{m}^{4}$. The
Zinn & Harvey transformation is applied at the end of the permeability field
generation [115, 116].
The eigenvalue behavior obtained from the POD phase for both displacement and
pressure fields is presented in Figure 11. We note that these eigenvalues are
normalized by their maximum value for the sake of presentation. From this
figure, we observe that 30 to 50 reduced basis functions capture most of the
information produced by the FOM (i.e., the normalized eigenvalue reaches
machine precision), regardless of the choice of $\mathrm{N_{int}}$. Besides,
as $\mathrm{N_{int}}$ increases, the eigenvalue behavior becomes similar to
that of the standard POD ($\mathrm{N_{int}}=\infty$) case: with a larger
$\mathrm{N_{int}}$, most of the information in the time domain is retained,
and there is no difference between the nested POD and the standard POD. With
$\mathrm{N_{int}}=10$, the eigenvalue behavior is almost identical to the
$\mathrm{N_{int}}=\infty$ case; in fact, the lines overlap for most of the
plot, except for the trailing eigenvalues, which are below numerical
precision.
Figure 11: Example 4: normalized eigenvalue as a function of basis for (a)
displacement field ($\bm{u}$) (b) fluid pressure field ($p$).
$\mathrm{N_{int}}=\infty$ represents a case where we do not use the nested POD
technique.
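The criterion used above (normalized eigenvalues reaching machine precision) can be sketched as a simple mode count; the synthetic decaying spectrum below is illustrative only.

```python
import numpy as np

def modes_above_precision(eigenvalues, tol=np.finfo(float).eps):
    """Normalize eigenvalues by their maximum (as in Figure 11) and count
    how many stay above machine precision."""
    lam = np.sort(np.asarray(eigenvalues))[::-1]
    lam_normalized = lam / lam[0]
    return int(np.sum(lam_normalized > tol))

# Synthetic exponentially decaying spectrum, typical of POD eigenvalues.
lam = 10.0 ** (-0.5 * np.arange(60))
print(modes_above_precision(lam))
```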
Table 1 compares the wall time (in seconds) used for the SVD computations for
different numbers of snapshots ($\mathrm{M}$) and intermediate reduced basis
functions ($\mathrm{N_{int}}$). We observe that the
$\mathrm{N_{int}}=\infty$ case consumes the longest wall time, and that the
SVD computations become faster as $\mathrm{N_{int}}$ decreases. In general,
the nested POD technique significantly reduces the wall time required by the
SVD computations. For instance, the $\mathrm{N_{int}}=10$ case provides an
eigenvalue behavior comparable to the $\mathrm{N_{int}}=\infty$ case, yet the
generation of the reduced spaces is approximately ten times faster.
Table 1: Example 4: Comparison of the wall time (seconds) used for the proper orthogonal decomposition (POD) operation with different numbers of snapshots ($\mathrm{M}$) and intermediate reduced basis functions ($\mathrm{N_{int}}$). $\mathrm{N_{int}}=\infty$ represents the case where the nested POD technique is not used.
 | $\mathrm{N_{int}}=2$ | $\mathrm{N_{int}}=5$ | $\mathrm{N_{int}}=10$ | $\mathrm{N_{int}}=\infty$
---|---|---|---|---
$\mathrm{M}=100$ | 100 | 125 | 170 | 1574
$\mathrm{M}=400$ | 437 | 650 | 1437 | 36705
$\mathrm{M}=900$ | 1168 | 2319 | 6475 | 268754
#### 5.2.2 Sources of error
Throughout this subsection, we study the model accuracy and the sources of
error. Our goal is to differentiate the error arising from the truncation of
the reduced basis (i.e., associated with a choice of $\mathrm{N}$ that is
less than $\mathrm{M}N^{t}$) and the error introduced by the mapping between
$(t,\bm{\mu})$ and the reduced order coefficients
$\widehat{\bm{\theta}}^{u}(t,\bm{\mu})$ and
$\widehat{\bm{\theta}}^{p}(t,\bm{\mu})$ provided by ANN. In particular, we
will
1. 1.
investigate MSE and ME results of ROM framework for a realization $\bm{\mu}$
_in the training set_ , and the coefficients ${\bm{\theta}}^{u}(t,\bm{\mu})$
and ${\bm{\theta}}^{p}(t,\bm{\mu})$ determined from the $L^{2}$ projection,
rather than the ones from ANN, see Figure 12;
2. 2.
investigate MSE and ME results of ROM framework for a realization $\bm{\mu}$
_in the training set_ , and $\widehat{\bm{\theta}}^{u}(t,\bm{\mu})$ and
$\widehat{\bm{\theta}}^{p}(t,\bm{\mu})$ obtained by means of ANN, see Figure
13;
3. 3.
investigate MSE and ME results of ROM framework for a realization $\bm{\mu}$
_outside of the training set_ , and $\widehat{\bm{\theta}}^{u}(t,\bm{\mu})$
and $\widehat{\bm{\theta}}^{p}(t,\bm{\mu})$ obtained by means of ANN, see
Figure 14.
For what concerns the first goal, we select $\bm{\mu}=(\nu,\alpha)=(0.1,0.8)$
in the training set and reuse coefficients ${\bm{\theta}}^{u}(t,\bm{\mu})$ and
${\bm{\theta}}^{p}(t,\bm{\mu})$ obtained by the $L^{2}$ projection. The
corresponding results for MSE and ME indices, and for both primal variables,
are presented in Figure 12. Different colors correspond to different values of
$\mathrm{N_{int}}$, including the label $\mathrm{N_{int}}=\infty$, which
represents the use of the standard POD. Different line styles (solid, dashed,
and dotted) correspond to increasing dimension $\mathrm{N}$ of the reduced
basis spaces. As expected, the ROM accuracy increases as $\mathrm{N}$
increases, following an exponential trend. As we increase
$\mathrm{N_{int}}$, the MSE and ME behaviors approach those of the
$\mathrm{N_{int}}=\infty$ case, which means that even a nested POD
compression with a moderate value of $\mathrm{N_{int}}$ is able to correctly
capture the time evolution. Besides, the MSE and ME values initially decrease
and then remain constant over time, since our problem reaches the
steady-state solution.
Figure 12: Example 4: Errors of reconstruction solutions using
$\bm{\mu}=(\nu,\alpha)=(0.1,0.8)$ \- in the training snapshots and using
${\bm{\theta}}^{u}$, ${\bm{\theta}}^{p}$, using different numbers of reduced
basis ($\mathrm{N}$): (a) mean squared error (MSE) of displacement field
($\bm{u}$), (b) mean squared error (MSE) of fluid pressure field ($p$), (c)
maximum error (ME) of displacement field ($\bm{u}$) (d) maximum error (ME) of
fluid pressure field ($p$). Colors correspond to increasing values of
$\mathrm{N_{int}}$; solid, dashed, and dotted lines represent $\mathrm{N}=5$,
$\mathrm{N}=10$, and $\mathrm{N}=20$ cases, respectively.
For the second goal, the MSE and ME results using $\bm{\mu}$ in the training
set and $\widehat{\bm{\theta}}^{u}$, $\widehat{\bm{\theta}}^{p}$ predicted by
the ANN are shown in Figure 13. For a fair comparison to Figure 12, we use the
same parameter instance $\bm{\mu}=(\nu,\alpha)=(0.1,0.8)$ in the training set,
so as to compare the different approximation properties stemming from the use
of ${\bm{\theta}}^{u}$, ${\bm{\theta}}^{p}$ (Figure 12) and
$\widehat{\bm{\theta}}^{u}$, $\widehat{\bm{\theta}}^{p}$ (Figure 13).
Compared to the previous results shown in Figure 12, the MSE and ME values in
Figure 13 are approximately three orders of magnitude higher. Moreover, there
is no clear trend in how increasing $\mathrm{N}$ and $\mathrm{N_{int}}$
affects the MSE and ME results. Indeed, as $\mathrm{N_{int}}$ increases, the
MSE and ME behaviors do not approach those of the $\mathrm{N_{int}}=\infty$
case. The MSE and ME values, however, still decrease as time progresses. To
better quantify the observations from Figure 13, we take the average over all
time steps of the MSE of the fluid pressure field ($p$) and present it in
Table 2. We can see that the average MSE remains of the same order of
magnitude for any $\mathrm{N}$ and $\mathrm{N_{int}}$ pair, which is a marked
difference from the MSE results shown in Figure 12. This indicates that to
improve the accuracy, one should also consider other ROM properties, such as
the number of snapshots, the hyperparameters, or the network architecture.
Each of these options will be explored in later subsections.
Figure 13: Example 4: Errors of reconstruction solutions using $\bm{\mu}=(\nu,\alpha)=(0.1,0.8)$ \- in the training snapshots and $\widehat{\bm{\theta}}^{u}$, $\widehat{\bm{\theta}}^{p}$, using different numbers of reduced basis ($\mathrm{N}$): (a) mean squared error (MSE) of displacement field ($\bm{u}$), (b) mean squared error (MSE) of fluid pressure field ($p$), (c) maximum error (ME) of displacement field ($\bm{u}$), (d) maximum error (ME) of fluid pressure field ($p$). Colors correspond to increasing values of $\mathrm{N_{int}}$; solid, dashed, and dotted lines represent $\mathrm{N}=5$, $\mathrm{N}=10$, and $\mathrm{N}=20$ cases, respectively.
Table 2: Example 4: Average over all time steps of the mean squared error (MSE) of the fluid pressure field ($p$) presented in Figure 13.
 | $\mathrm{N_{int}}=2$ | $\mathrm{N_{int}}=5$ | $\mathrm{N_{int}}=10$ | $\mathrm{N_{int}}=\infty$
---|---|---|---|---
$\mathrm{N}=5$ | 0.0053 | 0.0057 | 0.0050 | 0.0051
$\mathrm{N}=10$ | 0.0045 | 0.0042 | 0.0052 | 0.0066
$\mathrm{N}=20$ | 0.0049 | 0.0073 | 0.0073 | 0.0052
We then move to the third goal, where we present the MSE and ME results using
$\bm{\mu}$ outside of the training set and $\widehat{\bm{\theta}}^{u}$,
$\widehat{\bm{\theta}}^{p}$ predicted by the ANN, as shown in Figure 14. These
results are comparable to ones presented in Figure 13 as there is no clear
relationship between the MSE or ME values and the numbers of $\mathrm{N}$ or
$\mathrm{N_{int}}$. Furthermore, the MSE and ME values are approximately three
orders of magnitude higher than those presented in Figure 12.
Figure 14: Example 4: Errors of reconstruction solutions using
${\bm{\mu}=(\nu,\alpha)=(0.2,0.5)}$ \- outside of the training snapshots and
$\widehat{\bm{\theta}}^{u}$, $\widehat{\bm{\theta}}^{p}$, using different
numbers of reduced basis ($\mathrm{N}$): (a) mean squared error (MSE) of
displacement field ($\bm{u}$), (b) mean squared error (MSE) of fluid pressure
field ($p$), (c) maximum error (ME) of displacement field ($\bm{u}$) (d)
maximum error (ME) of fluid pressure field ($p$). Colors correspond to
increasing values of $\mathrm{N_{int}}$; solid, dashed, and dotted lines
represent $\mathrm{N}=5$, $\mathrm{N}=10$, and $\mathrm{N}=20$ cases,
respectively.
The average over all time steps of the MSE of the fluid pressure field ($p$)
is presented in Table 3. Similar to Table 2, the average MSE is independent
of $\mathrm{N}$ and $\mathrm{N_{int}}$. Hence, from Figures 12, 13, and 14,
and Tables 2 and 3, we can see that the errors introduced by the POD and
$L^{2}$ projection phases are negligible compared to the errors originating
from the ANN phase (the prediction of $\widehat{\bm{\theta}}^{u}$,
$\widehat{\bm{\theta}}^{p}$). Therefore, we conclude that in practical
applications one may want to choose moderate values for both $\mathrm{N}$ and
$\mathrm{N_{int}}$. Keeping a moderate value for $\mathrm{N}$ guarantees the
evaluation of a small network during the online phase; keeping a moderate
value for $\mathrm{N_{int}}$ results in large computational savings during
the offline phase. Once $\mathrm{N}$ and $\mathrm{N_{int}}$ are fixed, if a
further increase in accuracy is desired, one may then explore changing the
remaining properties of the ROM framework, as we discuss in the following.
Table 3: Example 4: Average over all time steps of the mean squared error (MSE) of the fluid pressure field ($p$) presented in Figure 14.
 | $\mathrm{N_{int}}=2$ | $\mathrm{N_{int}}=5$ | $\mathrm{N_{int}}=10$ | $\mathrm{N_{int}}=\infty$
---|---|---|---|---
$\mathrm{N}=5$ | 0.0042 | 0.0035 | 0.0049 | 0.0027
$\mathrm{N}=10$ | 0.0084 | 0.0126 | 0.0100 | 0.0110
$\mathrm{N}=20$ | 0.0048 | 0.0054 | 0.0049 | 0.0048
#### 5.2.3 Effect of number of snapshots
The effect of the number of snapshots ($\mathrm{M}$) is studied by comparing
the MSE and ME results of cases with different $\mathrm{M}$. To reiterate,
the higher $\mathrm{M}$, the longer the ROM framework takes to perform the
FOM solves and POD compressions. For instance, the wall time used to solve
all FOM problems is 1780, 7120, and 16020 seconds for $\mathrm{M}=100$,
$\mathrm{M}=400$, and $\mathrm{M}=900$, respectively. The wall times
corresponding to the POD compressions for different $\mathrm{M}$ are
presented in Table 1.
The MSE and ME results for different numbers of snapshots ($\mathrm{M}$) are
presented in Figure 15, and the average over all time steps of the MSE of $p$
is presented in Table 4. We fix $\mathrm{N}=10$, $\mathrm{N_{hl}}=3$, and
$\mathrm{N_{nn}}=7$. From Figure 15 and Table 4, we observe that the model
with the highest $\mathrm{M}$ provides the lowest MSE and ME results.
However, there is no distinct difference in the MSE and ME results among the
models using different $\mathrm{N_{int}}$. Therefore, an actionable way of
increasing the ROM accuracy is to provide more input data to the training
phase of the ANN; computational savings related to the basis generation
during the offline stage can still be achieved by choosing a moderate value
for $\mathrm{N_{int}}$.
Figure 15: Example 4: Errors of reconstruction solutions using $\bm{\mu}=(\nu,\alpha)=(0.2,0.5)$ \- outside of the training snapshots and $\widehat{\bm{\theta}}^{u}$, $\widehat{\bm{\theta}}^{p}$, using different numbers of snapshots ($\mathrm{M}$): (a) mean squared error (MSE) of displacement field ($\bm{u}$), (b) mean squared error (MSE) of fluid pressure field ($p$), (c) maximum error (ME) of displacement field ($\bm{u}$), (d) maximum error (ME) of fluid pressure field ($p$). Colors correspond to increasing values of $\mathrm{N_{int}}$; solid, dashed, and dotted lines represent $\mathrm{M}=100$, $\mathrm{M}=400$, and $\mathrm{M}=900$ cases, respectively. Note that we fix $\mathrm{N}=10$, $\mathrm{N_{hl}}=3$, and $\mathrm{N_{nn}}=7$.
Table 4: Example 4: Average over all time steps of the mean squared error (MSE) of the fluid pressure field ($p$) presented in Figure 15.
 | $\mathrm{N_{int}}=2$ | $\mathrm{N_{int}}=5$ | $\mathrm{N_{int}}=10$ | $\mathrm{N_{int}}=\infty$
---|---|---|---|---
$\mathrm{M}=100$ | 0.0139 | 0.0361 | 0.0271 | 0.0299
$\mathrm{M}=400$ | 0.0084 | 0.0126 | 0.0100 | 0.0110
$\mathrm{M}=900$ | 0.0054 | 0.0036 | 0.0037 | 0.0039
#### 5.2.4 Effect of network architecture
We then examine the effect of the network architecture, i.e., the number of
hidden layers ($\mathrm{N_{hl}}$) and the number of neurons per hidden layer
($\mathrm{N_{nn}}$). We begin with cases where we fix $\mathrm{N}=10$ and
$\mathrm{N_{nn}}=7$ but vary $\mathrm{N_{hl}}$, as presented in Figure 16 and
Table 5. From these results, we observe that the MSE and ME values decrease as
$\mathrm{N_{hl}}$ increases. Similar to the previous cases, however, the MSE
and ME values seem to be independent of the choice of $\mathrm{N_{int}}$.
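The architecture sweep above can be reproduced with a small builder that assembles a fully connected network from $\mathrm{N_{hl}}$ and $\mathrm{N_{nn}}$. The sketch below uses PyTorch (on which the framework is built), but the Tanh activation and the exact input layout are illustrative assumptions, not the configuration reported in the paper:

```python
import torch
import torch.nn as nn

def build_mlp(n_inputs: int, n_outputs: int, n_hl: int, n_nn: int) -> nn.Sequential:
    """Fully connected network with n_hl hidden layers of n_nn neurons each.

    Inputs are (t, mu) tuples; outputs are the N reduced-basis coefficients.
    Tanh is an assumed activation for illustration.
    """
    layers = [nn.Linear(n_inputs, n_nn), nn.Tanh()]
    for _ in range(n_hl - 1):
        layers += [nn.Linear(n_nn, n_nn), nn.Tanh()]
    layers.append(nn.Linear(n_nn, n_outputs))
    return nn.Sequential(*layers)

# (t, nu, alpha) -> N = 10 coefficients, with N_hl = 3 and N_nn = 7
model = build_mlp(n_inputs=3, n_outputs=10, n_hl=3, n_nn=7)
coeffs = model(torch.rand(5, 3))  # batch of 5 (t, mu) samples
print(coeffs.shape)  # torch.Size([5, 10])
```

Varying `n_hl` and `n_nn` in this builder corresponds to the sweeps summarized in Tables 5 and 7; the growth in parameter count with depth and width also explains the training-time trends in Tables 6 and 8.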
Figure 16: Example 4: Errors of the reconstructed solutions using $\bm{\mu}=(\nu,\alpha)=(0.2,0.5)$ (outside of the training snapshots) and $\widehat{\bm{\theta}}^{u}$, $\widehat{\bm{\theta}}^{p}$, for different numbers of hidden layers ($\mathrm{N_{hl}}$): (a) mean squared error (MSE) of the displacement field ($\bm{u}$), (b) mean squared error (MSE) of the fluid pressure field ($p$), (c) maximum error (ME) of the displacement field ($\bm{u}$), (d) maximum error (ME) of the fluid pressure field ($p$). Colors correspond to increasing values of $\mathrm{N_{int}}$; solid, dashed, and dotted lines represent the $\mathrm{N_{hl}}=1$, $\mathrm{N_{hl}}=3$, and $\mathrm{N_{hl}}=5$ cases, respectively. Note that we fix $\mathrm{N}=10$ and $\mathrm{N_{nn}}=7$.

Table 5: Example 4: Average over all time steps of the mean squared error (MSE) of the fluid pressure field ($p$) presented in Figure 16.

 | $\mathrm{N_{int}}=2$ | $\mathrm{N_{int}}=5$ | $\mathrm{N_{int}}=10$ | $\mathrm{N_{int}}=\infty$
---|---|---|---|---
$\mathrm{N_{hl}}=1$ | 0.0155 | 0.0117 | 0.0115 | 0.0191
$\mathrm{N_{hl}}=3$ | 0.0084 | 0.0126 | 0.0100 | 0.0110
$\mathrm{N_{hl}}=5$ | 0.0031 | 0.0038 | 0.0047 | 0.0031
The wall time as a function of $\mathrm{N_{hl}}$ and $\mathrm{N_{int}}$ is
presented in Table 6. As expected, as $\mathrm{N_{hl}}$ grows, the model takes
longer to train during the ANN phase. Moreover, this table shows that
$\mathrm{N_{int}}$ does not affect the computational cost of the ANN phase. As
mentioned in the methodology section, the wall time shown in Table 6 combines
the wall time used to perform the $L^{2}$ projection and the wall time used to
train the ANN. We combine these two operations because the wall time of the
$L^{2}$ projection is negligible compared with that of the ANN training and of
the other phases.
Table 6: Example 4: Comparison of the wall time (seconds) used for training the neural networks with different numbers of hidden layers ($\mathrm{N_{hl}}$) and intermediate reduced basis functions ($\mathrm{N_{int}}$). $\mathrm{N_{int}}=\infty$ represents the case where we do not use the nested POD technique.

 | $\mathrm{N_{int}}=2$ | $\mathrm{N_{int}}=5$ | $\mathrm{N_{int}}=10$ | $\mathrm{N_{int}}=\infty$
---|---|---|---|---
$\mathrm{N_{hl}}=1$ | 6217 | 5963 | 6065 | 6039
$\mathrm{N_{hl}}=3$ | 7114 | 7108 | 7064 | 7103
$\mathrm{N_{hl}}=5$ | 8279 | 8271 | 8340 | 8152
We then investigate the MSE and ME results of cases where we vary
$\mathrm{N_{nn}}$ but fix $\mathrm{N}=10$ and $\mathrm{N_{hl}}=3$, as
presented in Figure 17 and Table 7. Again, we do not observe any clear
relationship between the errors and $\mathrm{N_{int}}$. We do, however, see
that the model accuracy improves as we increase $\mathrm{N_{nn}}$.
Figure 17: Example 4: Errors of the reconstructed solutions using $\bm{\mu}=(\nu,\alpha)=(0.2,0.5)$ (outside of the training snapshots) and $\widehat{\bm{\theta}}^{u}$, $\widehat{\bm{\theta}}^{p}$, for different numbers of neurons per hidden layer ($\mathrm{N_{nn}}$): (a) mean squared error (MSE) of the displacement field ($\bm{u}$), (b) mean squared error (MSE) of the fluid pressure field ($p$), (c) maximum error (ME) of the displacement field ($\bm{u}$), (d) maximum error (ME) of the fluid pressure field ($p$). Colors correspond to increasing values of $\mathrm{N_{int}}$; solid, dashed, and dotted lines represent the $\mathrm{N_{nn}}=4$, $\mathrm{N_{nn}}=7$, and $\mathrm{N_{nn}}=10$ cases, respectively. Note that we fix $\mathrm{N}=10$ and $\mathrm{N_{hl}}=3$.

Table 7: Example 4: Average over all time steps of the mean squared error (MSE) of the fluid pressure field ($p$) presented in Figure 17.

 | $\mathrm{N_{int}}=2$ | $\mathrm{N_{int}}=5$ | $\mathrm{N_{int}}=10$ | $\mathrm{N_{int}}=\infty$
---|---|---|---|---
$\mathrm{N_{nn}}=4$ | 0.0126 | 0.0096 | 0.0140 | 0.0142
$\mathrm{N_{nn}}=7$ | 0.0084 | 0.0126 | 0.0100 | 0.0110
$\mathrm{N_{nn}}=10$ | 0.0030 | 0.0026 | 0.0031 | 0.0038
A comparison of the wall time used for the $L^{2}$ projection phase and for
training the neural networks with different values of $\mathrm{N_{nn}}$ and
$\mathrm{N_{int}}$ is presented in Table 8. Similar to Table 6,
$\mathrm{N_{int}}$ does not significantly affect the wall time.
$\mathrm{N_{nn}}$, on the other hand, does influence the wall time used to
train the ANN.
Table 8: Example 4: Comparison of the wall time (seconds) used for training the neural networks with different numbers of neurons per hidden layer ($\mathrm{N_{nn}}$) and intermediate reduced basis functions ($\mathrm{N_{int}}$). $\mathrm{N_{int}}=\infty$ represents the case where we do not use the nested POD technique.

 | $\mathrm{N_{int}}=2$ | $\mathrm{N_{int}}=5$ | $\mathrm{N_{int}}=10$ | $\mathrm{N_{int}}=\infty$
---|---|---|---|---
$\mathrm{N_{nn}}=4$ | 6956 | 6932 | 6923 | 6983
$\mathrm{N_{nn}}=7$ | 7114 | 7108 | 7064 | 7103
$\mathrm{N_{nn}}=10$ | 7607 | 7552 | 7657 | 7538
#### 5.2.5 Sensitivity analysis
So far, we have examined the ROM framework's performance for only a single
instance of $\bm{\mu}$. This section presents the model's performance when it
is utilized as a sensitivity analysis tool. We randomly select 1000 test cases
from the parameter range
$\mathbb{P}=[0.1,0.4]\times[0.4,1.0]\ni(\nu,\alpha)=\bm{\mu}$. We employ two
model settings,

1. model 1: $\mathrm{M}=400$, $\mathrm{N_{int}}=10$, $\mathrm{N}=10$, $\mathrm{N_{hl}}=3$, and $\mathrm{N_{nn}}=7$,
2. model 2: $\mathrm{M}=900$, $\mathrm{N_{int}}=10$, $\mathrm{N}=20$, $\mathrm{N_{hl}}=5$, and $\mathrm{N_{nn}}=10$,
and present the range of MSE values of the $\bm{u}$ and $p$ fields in Figure
18. The red squares represent outliers, and each box covers the interval from
the 25th to the 75th percentile, with the median (50th percentile) highlighted
by an orange line. Most of the outliers are located above the 75th percentile,
i.e., they correspond to cases in which the error is larger than typical. From
this figure, it is clear that the second model achieves MSE values
approximately one order of magnitude lower than the first model. Besides, the
range of uncertainties is reduced significantly (see the outliers and the
lengths of the boxes). Again, the MSE values tend to decrease with time since
the solutions approach the steady-state solutions.
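The 1000-member test set can be drawn by sampling $(\nu,\alpha)$ from $\mathbb{P}$; a minimal sketch is below, where the uniform distribution and the fixed seed are assumptions (the paper only states that the cases are randomly selected):

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed, for reproducibility only

n_test = 1000
nu = rng.uniform(0.1, 0.4, size=n_test)     # first parameter range of P
alpha = rng.uniform(0.4, 1.0, size=n_test)  # second parameter range of P
mu_test = np.column_stack([nu, alpha])      # shape (1000, 2)

# every sample lies inside P = [0.1, 0.4] x [0.4, 1.0]
assert mu_test.shape == (1000, 2)
assert nu.min() >= 0.1 and nu.max() <= 0.4
assert alpha.min() >= 0.4 and alpha.max() <= 1.0
```

Each row of `mu_test`, paired with a time step, would then be fed to the trained ANN to predict the reduced-basis coefficients, and the resulting per-case MSE values aggregated into the box plots of Figure 18.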
Figure 18: Example 4: Sensitivity analysis - Errors of the reconstructed
solutions using 1000 testing $\bm{\mu}$: (a) mean squared error (MSE) of the
displacement field ($\bm{u}$) with $\mathrm{M}=400$, $\mathrm{N}=10$,
$\mathrm{N_{hl}}=3$, and $\mathrm{N_{nn}}=7$, (b) mean squared error (MSE) of
the displacement field ($\bm{u}$) with $\mathrm{M}=900$, $\mathrm{N}=20$,
$\mathrm{N_{hl}}=5$, and $\mathrm{N_{nn}}=10$, (c) mean squared error (MSE) of
the fluid pressure field ($p$) with $\mathrm{M}=400$, $\mathrm{N}=10$,
$\mathrm{N_{hl}}=3$, and $\mathrm{N_{nn}}=7$, and (d) mean squared error (MSE)
of the fluid pressure field ($p$) with $\mathrm{M}=900$, $\mathrm{N}=20$,
$\mathrm{N_{hl}}=5$, and $\mathrm{N_{nn}}=10$. Note that we fix
$\mathrm{N_{int}}=10$. The red squares represent outliers, and each box covers
the interval from the 25th to the 75th percentile, with the median (50th
percentile) highlighted by an orange line.
Next, we compare the wall time used by the ROM and the FOM to perform the
sensitivity analysis (test set of 1000 members), as shown in Table 9. We do
not report the wall time used for the initialization of $\bm{\mu}$ (see the
blue box in Figure 1) because it is insignificant compared to the other
phases. The second model (i.e., the one with better accuracy) requires a
higher wall time for all operations than the first model. Using the ROM, we
achieve a roughly six-fold speed-up, as seen from the last row of Table 9. A
more comprehensive discussion of the effectiveness of the ROM framework is
provided in the following section.
Table 9: Example 4: Comparison of the wall time (seconds) used for the sensitivity analysis.

 | M = 400 (model 1) | M = 900 (model 2) | FOM
---|---|---|---
Train FOM snapshots | 7160 | 16020 | -
Perform POD | 1437 | 6475 | -
Train ANN | 7064 | 18492 | -
Prediction - 1000 testing $\bm{\mu}$ | 2895 | 3160 | 17790
Prediction - per testing $\bm{\mu}$ | 2.9 | 3.2 | 17.8
## 6 Discussion
The numerical observations highlighted by the benchmark cases in the previous
section can be summarized in three main points. First, in Section 5.2.2 we
investigated the sources of the ROM error and observed that the main error
contribution comes from the prediction of $\widehat{\bm{\theta}}^{u}$,
$\widehat{\bm{\theta}}^{p}$ by the ANN for a given $t$ and $\bm{\mu}$. The
main evidence lies in the comparison of the MSE and ME values between Figures
12 and 13: with the same parameter $\bm{\mu}$, the MSE and ME values resulting
from $\widehat{\bm{\theta}}^{u}$, $\widehat{\bm{\theta}}^{p}$ are about three
orders of magnitude higher than those obtained from ${\bm{\theta}}^{u}$,
${\bm{\theta}}^{p}$. Therefore, in future work, we will focus on improving the
ANN model’s accuracy by using different types of network architecture (e.g.,
recurrent neural networks) or regularization (e.g., physics-guided machine
learning).
Second, throughout Section 5.2, we could not observe any clear relationship
between the ROM’s accuracy and $\mathrm{N_{int}}$ (see Figures 13, 14, 15, 16,
and 17 and Tables 2, 3, 4, 5, and 7). As discussed in the previous paragraph,
the errors introduced by the POD and $L^{2}$ projection phases are much
smaller than the errors stemming from the ANN phase. Consequently, the errors
introduced by the truncation to $\mathrm{N_{int}}$ basis functions are not
visible in the final results. This observation implies that we can utilize the
nested POD technique to save computational time (see Table 1) without any
observable loss in the model’s accuracy.
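The nested POD referred to here compresses each training parameter's time trajectory first and only then compresses across parameters. A minimal SVD-based sketch of this two-stage compression is below; the snapshot sizes are illustrative, and the intermediate modes are stacked unweighted, whereas implementations typically scale them by their singular values:

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, n_modes: int) -> np.ndarray:
    """Return the first n_modes left singular vectors of a snapshot matrix."""
    u, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :n_modes]

def nested_pod(snapshots_per_mu: list, n_int: int, n: int) -> np.ndarray:
    """Two-stage (nested) POD: compress each parameter's time snapshots to
    n_int intermediate modes, stack them, then compress to the final n modes."""
    intermediate = [pod_basis(s, n_int) for s in snapshots_per_mu]
    return pod_basis(np.hstack(intermediate), n)

# Illustrative sizes: 50 dofs, 8 parameter samples, 30 time steps each
rng = np.random.default_rng(0)
snaps = [rng.standard_normal((50, 30)) for _ in range(8)]
basis = nested_pod(snaps, n_int=10, n=5)
print(basis.shape)  # (50, 5)
```

The computational saving comes from the second SVD acting on a matrix with only `n_int * n_mu` columns instead of `n_t * n_mu`, which is why a moderate $\mathrm{N_{int}}$ reduces the offline cost (Table 1) while, per the observations above, leaving the final accuracy essentially unchanged.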
Third, according to Table 9, the ROM framework is approximately six times
faster than the FOM (i.e., the finite element model) during the online phase.
Moreover, the ROM framework errors are very small (see Figure 18), especially
relative to the magnitude of the FOM solutions. The ROM framework, however,
incurs an extra training cost (i.e., the initialization, FOM, POD, $L^{2}$
projection, and ANN phases; see Figure 1). The training times (wall time) of
model 1 and model 2 in Section 5.2.5 are 15661 and 40987 seconds,
respectively. Taking the training time into account, we need to perform at
least about 1050 and 2850 queries (online phase) to reach the break-even point
for model 1 and model 2, respectively. Therefore, before building a ROM
framework, one should consider how many queries are expected.
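The break-even estimate follows from dividing the one-off training cost by the per-query wall-time saving. A short check using the values from Table 9 (the exact rounding behind the quoted 1050 and 2850 queries may differ slightly):

```python
import math

def break_even(train_s: float, fom_per_query_s: float, rom_per_query_s: float) -> int:
    """Number of online queries after which the ROM training cost is recouped."""
    return math.ceil(train_s / (fom_per_query_s - rom_per_query_s))

# Model 1: 7160 + 1437 + 7064 = 15661 s of training; 2.9 s vs 17.8 s per query
print(break_even(15661, 17.8, 2.9))   # ~1050 queries
# Model 2: 16020 + 6475 + 18492 = 40987 s of training; 3.2 s per query
print(break_even(40987, 17.8, 3.2))   # roughly 2800 queries
```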
## 7 Conclusion
A non-intrusive reduced order model (ROM) has been developed for linear
poroelasticity problems in heterogeneous media. We employ the discontinuous
Galerkin (DG) finite element framework as a full order model (FOM); the DG
solutions are thus employed as snapshots to train and test the ROM. During the
offline phase, this framework utilizes one of two variants of the proper
orthogonal decomposition (POD) to define a reduced basis space, namely a
standard POD and a nested POD, and artificial neural networks (ANN) to
construct an inexpensive map from a time and parameter pair to coefficients
associated with each reduced basis. We validate the framework through a series
of benchmark problems. Our results show that the framework could provide
reasonable approximations of the FOM results, but it is significantly faster.
Moreover, the reduced order framework can capture both displacement and
pressure fields’ sharp discontinuities resulting from the heterogeneity in the
media’s conductivity. We then present the error sources and show that the
error inherited from the ANN model trumps the error associated with the POD
operation. Consequently, we illustrate that the nested POD technique, in which
time and uncertain parameter domains are compressed consecutively, could
provide comparable accuracy to the classical POD method, in which all domains
are compressed simultaneously, but at a fraction of the offline computational
cost. Finally, we emphasize in which circumstances the ROM framework is more
suitable than the FOM. Further developments could consider different ANN
architectures, as well as coupled problems involving poroelasticity.
## 8 Acknowledgements
The computational results in this work have been produced by the RBniCS
project [117] (a reduced order modeling library built upon FEniCS [87]), the
multiphenics library [88] (an extension of FEniCS for multiphysics problems),
and PyTorch [110]. We acknowledge the developers of and contributors to these
libraries. FB thanks Horizon 2020 Program for Grant H2020 ERC CoG 2015 AROMA-
CFD project 681447 that supported the development of RBniCS and multiphenics.
NB acknowledges startup support from the Sibley School of Mechanical and
Aerospace Engineering, Cornell University.
## References
* [1] K. Bisdom, G. Bertotti, and H. Nick. A geometrically based method for predicting stress-induced fracture aperture and flow in discrete fracture networks. AAPG Bulletin, 100(7):1075–1097, 2016.
* [2] R. Juanes, B. Jha, B. Hager, J. Shaw, A. Plesch, L. Astiz, J. Dieterich, and C. Frohlich. Were the May 2012 Emilia-Romagna earthquakes induced? A coupled flow-geomechanics modeling assessment. Geophysical Research Letters, 43(13):6891–6897, 2016.
* [3] S. Lee, M. Wheeler, and T. Wick. Pressure and fluid-driven fracture propagation in porous media using an adaptive finite element phase field model. Computer Methods in Applied Mechanics and Engineering, 305:111–132, 2016.
* [4] H. Nick, A. Raoof, F. Centler, M. Thullner, and P. Regnier. Reactive dispersive contaminant transport in coastal aquifers: numerical simulation of a reactive henry problem. Journal of contaminant hydrology, 145:90–104, 2013.
* [5] J. Choo and W. Sun. Cracking and damage from crystallization in pores: Coupled chemo-hydro-mechanics and phase-field modeling. Computer Methods in Applied Mechanics and Engineering, 335:347–349, 2018.
* [6] T. Kadeethum, H. Nick, S. Lee, and F. Ballarin. Enriched Galerkin discretization for modeling poroelasticity and permeability alteration in heterogeneous porous media. Journal of Computational Physics, page 110030, 2021.
* [7] Y. Yu, N. Bouklas, C. Landis, and R. Huang. Poroelastic effects on the time-and rate-dependent fracture of polymer gels. Journal of Applied Mechanics, 87(3), 2020.
* [8] V. Vinje, J. Brucker, M. Rognes, K. Mardal, and V. Haughton. Fluid dynamics in syringomyelia cavities: Effects of heart rate, CSF velocity, CSF velocity waveform and craniovertebral decompression. The neuroradiology journal, page 1971400918795482, 2018.
* [9] T. Kadeethum, S. Salimzadeh, and H. Nick. An investigation of hydromechanical effect on well productivity in fractured porous media using full factorial experimental design. Journal of Petroleum Science and Engineering, 181:106233, 2019.
* [10] N. Bouklas, C. Landis, and R. Huang. A nonlinear, transient finite element method for coupled solvent diffusion and large deformation of hydrogels. Journal of the Mechanics and Physics of Solids, 79:21–43, 2015.
* [11] S. Salimzadeh, E. Hagerup, T. Kadeethum, and H. Nick. The effect of stress distribution on the shape and direction of hydraulic fractures in layered media. Engineering Fracture Mechanics, 215:151–163, 2019.
* [12] S. Matthai and H. Nick. Upscaling two-phase flow in naturally fractured reservoirs. AAPG bulletin, 93(11):1621–1632, 2009.
* [13] T. Kadeethum, S. Lee, and H. Nick. Finite element solvers for Biot’s poroelasticity equations in porous media. Mathematical Geosciences, 52:977–1015, 2020.
* [14] R. Baker, H. Yarranton, and J. Jensen. Practical reservoir engineering and characterization. Gulf Professional Publishing, 2015.
* [15] P. Jia, L. Cheng, S. Huang, Z. Xu, Y. Xue, R. Cao, and G. Ding. A comprehensive model combining Laplace-transform finite-difference and boundary-element method for the flow behavior of a two-zone system with discrete fracture network. Journal of Hydrology, 551:453–469, 2017.
* [16] T. Kadeethum, S. Salimzadeh, and H. Nick. Investigation on the productivity behaviour in deformable heterogeneous fractured reservoirs. In 2018 International Symposium on Energy Geotechnics, 2018.
* [17] B. Muljadi, M. Blunt, A. Raeini, and B. Bijeljic. The impact of porous media heterogeneity on non-Darcy flow behaviour from pore-scale simulation. Advances in Water Resources, 95:329–340, 2016.
* [18] C. Nicolaides, B. Jha, L. Cueto-Felgueroso, and R. Juanes. Impact of viscous fingering and permeability heterogeneity on fluid mixing in porous media. Water Resources Research, 51(4):2634–2647, 2015.
* [19] Z. Chen. Reservoir simulation: mathematical techniques in oil recovery, volume 77. Siam, 2007.
* [20] J. Du and R. Wong. Application of strain-induced permeability model in a coupled geomechanics-reservoir simulator. Journal of Canadian Petroleum Technology, 46(12):55–61, 2007.
* [21] T. Kadeethum, S. Salimzadeh, and H. Nick. Well productivity evaluation in deformable single-fracture media. Geothermics, 87, 2020.
* [22] K. Terzaghi. Theoretical soil mechanics. Chapman And Hall, Limited.; London, 1951.
* [23] H. Wang. Theory of linear poroelasticity with applications to geomechanics and hydrogeology. Princeton University Press, 2017.
* [24] J. Nordbotten. Cell-centered finite volume discretizations for deformable porous media. International journal for numerical methods in engineering, 100(6):399–418, 2014.
* [25] I. Sokolova, M. Bastisya, and H. Hajibeygi. Multiscale finite volume method for finite-volume-based simulation of poroelasticity. Journal of Computational Physics, 379:309–324, 2019.
* [26] H. Honorio, C. Maliska, M. Ferronato, and C. Janna. A stabilized element-based finite volume method for poroelastic problems. Journal of Computational Physics, 364:49–72, 2018.
* [27] J. Choo and S. Lee. Enriched Galerkin finite elements for coupled poromechanics with local mass conservation. Computer Methods in Applied Mechanics and Engineering, 341:311–332, 2018.
* [28] Q. Deng, V. Ginting, B. McCaskill, and P. Torsu. A locally conservative stabilized continuous Galerkin finite element method for two-phase flow in poroelastic subsurfaces. Journal of Computational Physics, 347:78–98, 2017.
* [29] B. Li and N. Bouklas. A variational phase-field model for brittle fracture in polydisperse elastomer networks. International Journal of Solids and Structures, 182:193–204, 2020\.
* [30] J. Haga, H. Osnes, and H. Langtangen. On the causes of pressure oscillations in low permeable and low compressible porous media. International Journal for Numerical and Analytical Methods in Geomechanics, 36(12):1507–1522, 2012.
* [31] J. Liu, S. Tavener, and Z. Wang. Lowest-order weak Galerkin finite element method for Darcy flow on convex polygonal meshes. SIAM Journal on Scientific Computing, 40(5):B1229–B1252, 2018.
* [32] M. Murad, M. Borges, J. Obregon, and M. Correa. A new locally conservative numerical method for two-phase flow in heterogeneous poroelastic media. Computers and Geotechnics, 48:192–207, 2013.
* [33] M. Wheeler, G. Xue, and I. Yotov. Coupling multipoint flux mixed finite element methods with continuous Galerkin methods for poroelasticity. Computational Geosciences, 18(1):57–75, 2014.
* [34] N. Bouklas, C. Landis, and R. Huang. Effect of solvent diffusion on crack-tip fields and driving force for fracture of hydrogels. Journal of Applied Mechanics, 82(8), 2015.
* [35] T. Kadeethum, T. Jørgensen, and H. Nick. Physics-informed neural networks for solving nonlinear diffusivity and Biot’s equations. PLoS ONE, 15(5):e0232683, 2020.
* [36] T. Kadeethum, T. Jørgensen, and H. Nick. Physics-informed Neural Networks for Solving Inverse Problems of Nonlinear Biot’s Equations: Batch Training. In 54th US Rock Mechanics/Geomechanics Symposium, Golden, CO, USA, 2020. American Rock Mechanics Association.
* [37] M. Guo and E. Haghighat. An energy-based error bound of physics-informed neural network solutions in elasticity. arXiv preprint arXiv:2010.09088, 2020.
* [38] E. Haghighat, M. Raissi, A. Moure, H. Gomez, and R. Juanes. A deep learning framework for solution and discovery in solid mechanics: linear elasticity. arXiv preprint arXiv:2003.02751, 2020.
* [39] P. Phillips and M. Wheeler. A coupling of mixed and continuous Galerkin finite element methods for poroelasticity I: the continuous in time case. Computational Geosciences, 11(2):131, 2007.
* [40] P. Phillips and M. Wheeler. A coupling of mixed and continuous Galerkin finite element methods for poroelasticity II: the discrete-in-time case. Computational Geosciences, 11(2):145–158, 2007.
* [41] S. Kumar, R. Oyarzua, R. Ruiz-Baier, and R. Sandilya. Conservative discontinuous finite volume and mixed schemes for a new four-field formulation in poroelasticity. ESAIM: Mathematical Modelling and Numerical Analysis, 54(1):273–299, 2020.
* [42] A. Zdunek, W. Rachowicz, and T. Eriksson. A five-field finite element formulation for nearly inextensible and nearly incompressible finite hyperelasticity. Computers & Mathematics with Applications, 72(1):25–47, 2016.
* [43] T. Kadeethum, H. Nick, S. Lee, C. Richardson, S. Salimzadeh, and F. Ballarin. A Novel Enriched Galerkin Method for Modelling Coupled Flow and Mechanical Deformation in Heterogeneous Porous Media. In 53rd US Rock Mechanics/Geomechanics Symposium, New York, NY, USA, 2019. American Rock Mechanics Association.
* [44] T. Kadeethum, H. Nick, and S. Lee. Comparison of two-and three-field formulation discretizations for flow and solid deformation in heterogeneous porous media. In 20th Annual Conference of the International Association for Mathematical Geosciences, PA, USA, 2019.
* [45] S. Lee, T. Kadeethum, and H. Nick. Choice of interior penalty coefficient for interior penalty discontinuous Galerkin method for Biot’s system by employing machine learning. submitted, 2019.
* [46] B. Riviere. Discontinuous Galerkin methods for solving elliptic and parabolic equations: theory and implementation. SIAM, 2008.
* [47] P. Phillips and M. Wheeler. A coupling of mixed and discontinuous Galerkin finite-element methods for poroelasticity. Computational Geosciences, 12(4):417–435, 2008.
* [48] R. Liu, M. Wheeler, C. Dawson, and R. Dean. On a coupled discontinuous/continuous Galerkin framework and an adaptive penalty scheme for poroelasticity problems. Computer Methods in Applied Mechanics and Engineering, 198(41-44):3499–3510, 2009.
* [49] P. Hansen. Discrete inverse problems: insight and algorithms, volume 7. Siam, 2010.
* [50] J. Hesthaven, G. Rozza, B. Stamm, et al. Certified reduced basis methods for parametrized partial differential equations. Springer, 2016.
* [51] F. Ballarin, A. D’amario, S. Perotto, and G. Rozza. A POD-selective inverse distance weighting method for fast parametrized shape morphing. International Journal for Numerical Methods in Engineering, 117(8):860–884, 2019.
* [52] S. Hijazi, S. Ali, G. Stabile, F. Ballarin, and G. Rozza. The effort of increasing Reynolds number in projection-based reduced order methods: from laminar to turbulent flows. In Numerical Methods for Flows, pages 245–264. Springer, 2020.
* [53] M. Strazzullo, F. Ballarin, R. Mosetti, and G. Rozza. Model reduction for parametrized optimal control problems in environmental marine sciences and engineering. SIAM Journal on Scientific Computing, 40(4):B1055–B1079, 2018.
* [54] W. Schilders, H. Van der Vorst, and J. Rommes. Model order reduction: theory, research aspects and applications, volume 13. Springer, 2008.
* [55] W. Schilders. Introduction to model order reduction. In Model order reduction: Theory, research aspects and applications, pages 3–32. Springer, 2008.
* [56] L. Venturi, F. Ballarin, and G. Rozza. A weighted POD method for elliptic PDEs with random inputs. Journal of Scientific Computing, 81(1):136–153, 2019.
* [57] V. DeCaria, T. Iliescu, W. Layton, M. McLaughlin, and M. Schneier. An artificial compression reduced order model. SIAM Journal on Numerical Analysis, 58(1):565–589, 2020.
* [58] J. Cleary and I. Witten. Data compression using adaptive coding and partial string matching. IEEE transactions on Communications, 32(4):396–402, 1984.
* [59] Q. Wang, J. Hesthaven, and D. Ray. Non-intrusive reduced order modeling of unsteady flows using artificial neural networks with application to a combustion problem. Journal of computational physics, 384:289–307, 2019.
* [60] D. Xiao, C. Heaney, F. Fang, L. Mottet, R. Hu, D Bistrian, E. Aristodemou, I. Navon, and C. Pain. A domain decomposition non-intrusive reduced order model for turbulent flows. Computers & Fluids, 182:15–27, 2019.
* [61] D. Xiao, F. Fang, C. Pain, and G. Hu. Non-intrusive reduced-order modelling of the Navier–Stokes equations based on rbf interpolation. International Journal for Numerical Methods in Fluids, 79(11):580–595, 2015.
* [62] M. Mignolet, A. Przekop, S. Rizzi, and M. Spottswood. A review of indirect/non-intrusive reduced order modeling of nonlinear geometric structures. Journal of Sound and Vibration, 332(10):2437–2460, 2013.
* [63] J. Hesthaven and S. Ubbiali. Non-intrusive reduced order modeling of nonlinear problems using neural networks. Journal of Computational Physics, 363:55–78, 2018.
* [64] D. Xiao, F. Fang, A. Buchan, C. Pain, I. Navon, and A. Muggeridge. Non-intrusive reduced order modelling of the Navier–Stokes equations. Computer Methods in Applied Mechanics and Engineering, 293:522–541, 2015.
* [65] M. Girfoglio, L. Scandurra, F. Ballarin, G. Infantino, F. Nicolò, A. Montalto, G. Rozza, R. Scrofani, M. Comisso, and F. Musumeci. A non-intrusive data-driven ROM framework for hemodynamics problems. arXiv preprint arXiv:2010.08139, 2020.
* [66] M. Girfoglio, F. Ballarin, G. Infantino, F. Nicolò, A. Montalto, G. Rozza, R. Scrofani, M. Comisso, and F. Musumeci. Non-intrusive PODI-ROM for patient-specific aortic blood flow in presence of a LVAD device. arXiv preprint arXiv:2007.03527, 2020.
* [67] G. Ortali, N. Demo, and G. Rozza. Gaussian process approach within a data-driven POD framework for fluid dynamics engineering problems. arXiv preprint arXiv:2012.01989, 2020.
* [68] S. Hijazi, G. Stabile, A. Mola, and G. Rozza. Data-driven POD-Galerkin reduced order model for turbulent flows. Journal of Computational Physics, page 109513, 2020.
* [69] N. Demo, G. Ortali, G. Gustin, G. Rozza, and G. Lavini. An efficient computational framework for naval shape design and optimization problems by means of data-driven reduced order modeling techniques. arXiv preprint arXiv:2004.11201, 2020.
* [70] M. Gadalla, M. Cianferra, M. Tezzele, G. Stabile, A. Mola, and G. Rozza. On the comparison of LES data-driven reduced order approaches for hydroacoustic analysis. arXiv preprint arXiv:2006.14428, 2020.
* [71] M. Biot. General theory of three-dimensional consolidation. Journal of applied physics, 12(2):155–164, 1941.
* [72] M. Biot and D. Willis. The elastic coefficients of the theory of consolidation. J. appl. Mech, 15:594–601, 1957.
* [73] J. Choo, J. White, and R. Borja. Hydromechanical modeling of unsaturated flow in double porosity media. International Journal of Geomechanics, 16(6):D4016002, 2016.
* [74] R. Borja and J. Choo. Cam-Clay plasticity, Part VIII: A constitutive framework for porous materials with evolving internal structure. Computer Methods in Applied Mechanics and Engineering, 309:653–679, 2016.
* [75] C. Macminn, E. Dufresne, and J. Wettlaufer. Large deformations of a soft porous material. Physical Review Applied, 5(4):1–30, 2016.
* [76] Y. Zhao and J. Choo. Stabilized material point methods for coupled large deformation and fluid flow in porous materials. Computer Methods in Applied Mechanics and Engineering, 362:112742, 2020.
* [77] J. Jaeger, Neville G. Cook, and R. Zimmerman. Fundamentals of rock mechanics. John Wiley & Sons, 2009.
* [78] O. Coussy. Poromechanics. John Wiley & Sons, 2004.
* [79] J. Kim, H. Tchelepi, and R. Juanes. Stability and convergence of sequential methods for coupled flow and geomechanics: Fixed-stress and fixed-strain splits. Computer Methods in Applied Mechanics and Engineering, 200(13-16):1591–1606, 2011.
* [80] A. Mikelic and M. Wheeler. Convergence of iterative coupling for coupled flow and geomechanics. Computational Geosciences, 17(3):455–461, 2013.
* [81] A. Ern, A. Stephansen, and P. Zunino. A discontinuous Galerkin method with weighted averages for advection-diffusion equations with locally small and anisotropic diffusivity. IMA J. Numer. Anal., 29(2):235–256, 2009.
* [82] A. Ern and A. Stephansen. A posteriori energy-norm error estimates for advection-diffusion equations approximated by weighted interior penalty methods. Journal of Computational Mathematics, pages 488–510, 2008.
* [83] Z. Ibrahim, K. Othman, and M. Suleiman. Implicit r-point block backward differentiation formula for solving first-order stiff ODEs. Applied Mathematics and Computation, 186(1):558–565, 2007.
* [84] O. Akinfenwa, S. Jator, and N. Yao. Continuous block backward differentiation formula for solving stiff ordinary differential equations. Computers & Mathematics with Applications, 65(7):996–1005, 2013.
* [85] S. Lee, A. Mikelic, M. Wheeler, and T. Wick. Phase-field modeling of two phase fluid filled fractures in a poroelastic medium. Multiscale Modeling & Simulation, 16(4):1542–1580, 2018.
* [86] T. Kadeethum, S. Lee, F. Ballarin, J. Choo, and H. Nick. A locally conservative mixed finite element framework for coupled hydro-mechanical-chemical processes in heterogeneous porous media. arXiv preprint arXiv:2010.04994, 2020.
* [87] M. Alnaes, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. Rognes, and G. Wells. The FEniCS Project Version 1.5. Archive of Numerical Software, 3(100), 2015.
* [88] multiphenics - easy prototyping of multiphysics problems in FEniCS, 2019.
* [89] S. Balay, S. Abhyankar, M. Adams, J. Brown, P. Brune, K. Buschelman, L. Dalcin, A. Dener, V. Eijkhout, W. Gropp, D. Kaushik, M. Knepley, D. May, L. McInnes, R. Mills, T. Munson, K. Rupp, P. Sanan, B. Smith, S. Zampini, H. Zhang, and H. Zhang. PETSc Users Manual. Technical Report ANL-95/11 - Revision 3.10, Argonne National Laboratory, 2018.
* [90] Q. Wang, N. Ripamonti, and J. Hesthaven. Recurrent neural network closure of parametric POD-Galerkin reduced-order models based on the Mori-Zwanzig formalism. Journal of Computational Physics, page 109402, 2020.
* [91] G. Stabile, S. Hijazi, A. Mola, S. Lorenzi, and G. Rozza. POD-Galerkin reduced order methods for CFD using finite volume discretisation: vortex shedding around a circular cylinder. Communications in Applied and Industrial Mathematics, 8(1):210–236, 2017.
* [92] K. Willcox and J. Peraire. Balanced model reduction via the proper orthogonal decomposition. AIAA journal, 40(11):2323–2330, 2002.
* [93] A. Chatterjee. An introduction to the proper orthogonal decomposition. Current science, pages 808–817, 2000.
* [94] Y. Liang, H. Lee, S. Lim, W. Lin, K. Lee, and C. Wu. Proper orthogonal decomposition and its applications—part i: Theory. Journal of Sound and vibration, 252(3):527–544, 2002.
* [95] Z. Wang, D. Xiao, F. Fang, R. Govindan, C. Pain, and Y. Guo. Model identification of reduced order fluid dynamics systems using deep learning. International Journal for Numerical Methods in Fluids, 86(4):255–268, 2018.
* [96] A. Paul-Dubois-Taine and D. Amsallem. An adaptive and efficient greedy procedure for the optimal training of parametric reduced-order models. International Journal for Numerical Methods in Engineering, 102(5):1262–1292, 2015.
* [97] M. Vasile, E. Minisci, D. Quagliarella, M. Guénot, I. Lepot, C. Sainvitu, J. Goblet, and R. Coelho. Adaptive sampling strategies for non-intrusive pod-based surrogates. Engineering computations, 2013.
* [98] G. Hinton and R. Zemel. Autoencoders, minimum description length and helmholtz free energy. In Advances in neural information processing systems, pages 3–10, 1994.
* [99] T. Phillips, C. Heaney, P. Smith, and C. Pain. An autoencoder-based reduced-order model for eigenvalue problems with application to neutron diffusion. arXiv preprint arXiv:2008.10532, 2020.
* [100] D. O’Malley, J. Golden, and V. Vesselinov. Learning to regularize with a variational autoencoder for hydrologic inverse analysis. arXiv preprint arXiv:1906.02401, 2019.
* [101] H. Goh, S. Sheriffdeen, and T. Bui-Thanh. Solving forward and inverse problems using autoencoders. arXiv preprint arXiv:1912.04212, 2019.
* [102] C. Audouze, F. De Vuyst, and P. B. Nair. Reduced-order modeling of parameterized pdes using time–space-parameter principal component analysis. International Journal for Numerical Methods in Engineering, 80(8):1025–1057, 2009.
* [103] María-Luisa Rapún and José M Vega. Reduced order models based on local pod plus galerkin projection. Journal of Computational Physics, 229(8):3046–3063, 2010.
* [104] Christophe Audouze, Florian De Vuyst, and Prasanth B. Nair. Nonintrusive reduced-order modeling of parametrized time-dependent partial differential equations. Numerical Methods for Partial Differential Equations, 29(5):1587–1628, 2013.
* [105] Francesco Ballarin, Elena Faggiano, Sonia Ippolito, Andrea Manzoni, Alfio Quarteroni, Gianluigi Rozza, and Roberto Scrofani. Fast simulations of patient-specific haemodynamics of coronary artery bypass grafts based on a pod–galerkin method and a vascular shape parametrization. Journal of Computational Physics, 315:609–628, 2016.
* [106] Christian Himpe, Tobias Leibner, and Stephan Rave. Hierarchical approximate proper orthogonal decomposition. SIAM Journal on Scientific Computing, 40(5):A3267–A3292, 2018.
* [107] P. Jacquier, A. Abdedou, V. Delmas, and A. Soulaimani. Non-intrusive reduced-order modeling using uncertainty-aware deep neural networks and proper orthogonal decomposition: Application to flood modeling. arXiv preprint arXiv:2005.13506, 2020.
* [108] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. MIT press, 2016.
* [109] G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. science, 313(5786):504–507, 2006.
* [110] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019.
* [111] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [112] L. Prechelt. Early stopping-but when? In Neural Networks: Tricks of the trade, pages 55–69. Springer, 1998.
* [113] L. Prechelt. Automatic early stopping using cross validation: quantifying the criteria. Neural Networks, 11(4):761–767, 1998.
* [114] K. Lipnikov, M. Shashkov, and I. Yotov. Local flux mimetic finite difference methods. Numerische Mathematik, 112(1):115–152, 2009.
* [115] S. Müller and L. Schüler. GeoStat-Framework/GSTools. Zenodo, 2020.
* [116] B. Zinn and C. Harvey. When good statistical models of aquifer heterogeneity go bad: A comparison of flow, dispersion, and mass transfer in connected and multivariate gaussian hydraulic conductivity fields. Water Resources Research, 39(3), 2003.
* [117] RBniCS - reduced order modelling in FEniCS. https://www.rbnicsproject.org/, 2015.
|
# Acoustic Communication and Sensing for
Inflatable Modular Soft Robots
Daniel S. Drew1, Matthew Devlin2, Elliot Hawkes2, and Sean Follmer1
1Daniel S. Drew and Sean Follmer are with the Department of Mechanical
Engineering, Stanford University, Stanford, CA, USA <EMAIL_ADDRESS>.
2Matthew Devlin and Elliot Hawkes are with the Department of Mechanical
Engineering, University of California Santa Barbara, Santa Barbara, CA, USA.
###### Abstract
Modular soft robots combine the strengths of two traditionally separate areas
of robotics. As modular robots, they can show robustness to individual failure
and reconfigurability; as soft robots, they can deform and undergo large shape
changes in order to adapt to their environment, and have inherent human
safety. However, for sensing and communication these robots also combine the
challenges of both: they require solutions that are scalable (low cost and
complexity) and efficient (low power) to enable collectives of large numbers
of robots, and these solutions must also be able to interface with the high
extension ratio elastic bodies of soft robots. In this work, we seek to
address these challenges using acoustic signals produced by piezoelectric
surface transducers that are cheap, simple, and low power, and that not only
integrate with but also leverage the elastic robot skins for signal
transmission. Importantly, to further increase scalability, the transducers
exhibit multi-functionality made possible by a relatively flat frequency
response across the audible and ultrasonic ranges. With minimal hardware, they
enable directional contact-based communication, audible-range communication at
a distance, and exteroceptive sensing. We demonstrate a subset of the
decentralized collective behaviors these functions make possible with multi-
robot hardware implementations. The use of acoustic waves in this domain is
shown to provide distinct advantages over existing solutions.
Figure 1: Inflatable soft modular robots. A) Each robot unit comprises a latex
membrane, an internal pump and release valve, and “acoustic modules,”
consisting of piezoelectric transducers attached to magnetic connectors,
distributed over their internal surface. These acoustic modules are both able
to overcome challenges associated with instrumenting soft, high extension
ratio robots as well as being scalable and efficient enough to enable modular,
multi-robot systems. The proposed architecture allows (B) communication at a
distance for synchronization, (C) directional neighbor-to-neighbor data
transfer, and (D) external contact sensing.
## I Introduction
Modular robots overcome individual platform limitations by physically
connecting and reconfiguring in order to tailor their system-level
capabilities to their application and environment [1]. At the same time, soft,
shape-changing robots have distinct advantages over rigid-bodied robots,
including passive adaptation to their environment through structural
compliance, inherent safety for human-robot interaction tasks, and the ability
to exert relatively large forces and undergo relatively large strains with
low-cost actuators [2]. Modular soft robots, which take inspiration from
biological collectives (as “cellular robots” [3]), combine these advantages in
order to perform useful behaviors emergent from interactions between
relatively simple individual units. A major barrier to progress, however, is
the fact that these robots also combine the challenges of these two realms.
For example, a significant challenge in the design of modular robots meant to
be deployed in large collectives is balancing individual platform size,
complexity, and cost with the architecture and functionality of the conjoined
system. The design of multi-functional components, which can adequately
fulfill the function of multiple robotic subsystems without requiring
additional hardware, is a potential solution. The soft, extensible structure
of a modular soft robot compounds the challenge by placing additional
constraints on the possible implementations, which must be both robust to high
extension ratios as well as able to be coupled to elastic surfaces.
Many modular and swarm robots have sought to address the challenge of scalable
inter-agent communication and sensing via infrared (IR) optical transmission
[1]. This relatively low range and line-of-sight constrained method may be
supplemented by wider area radio-frequency networking [4]. In contrast, in
nature the use of acoustic signals is ubiquitous, including among the social
insects which inspire many designers of modular and swarm robots [5]. These
acoustic signals include substrate-borne vibrations, audible sound, and
vibrations shared through direct body contact [6, 7]. Inspired by the way that
existing organisms use passive mechanical body structures to efficiently
produce, receive, and transmit acoustic signals – from the audible range of
the cricket [8] to the ultrasonic range of the moth [9] – the same pre-
tensioned elastic membranes that make soft robots so difficult to instrument
for sensing and communication make them particularly attractive for multi-
functional acoustics-based components.
Existing acoustic transducers are well-suited for acting as multi-functional
components due, in part, to their ability to be operated across a wide
spectrum. The Huygens-Fresnel principle dictates that the directivity of a
wave corresponds to the size of the source relative to the wavelength. In
practice this change in directivity is beneficial for applications like
ultrasonic obstacle detection [10], where it limits the field-of-view of the
transducer and focuses the signal just as a lens does for an infrared source,
and is a challenge for designers of speakers with desirable “dispersion
patterns.” In addition to this variable directivity, the attenuation of
acoustic waves in air is proportional to the wave frequency; the absorption
coefficient of air increases approximately 30dB from 1kHz to 20kHz [11]. The
relatively flat frequency response (up to about 20kHz) of the simple commodity
piezoelectric disc transducers used in this work therefore means that they can
be operated with variable attenuation and directionality depending on desired
function.
Figure 2: A) An inflated robot with acoustic modules dispersed on its interior
surface. B) The robot, when fully deflated, is roughly the same size as the
pump it contains. C) The acoustic modules are composed of 3D printed
enclosures containing three diametrically polarized cylindrical magnets with
an affixed piezoelectric transducer. D) Magnetic connection between two
adjacent inflated robots, made through their acoustic modules.
The contribution of this work is a communication and sensing modality (Fig. 1)
based on surface-distributed “acoustic modules,” which use piezoelectric
transducers to both send and receive acoustic waves across the audible to
ultrasonic spectra, implemented on modular soft robots with high extension
ratios (Fig. 2). The modules are scalable (i.e., of minimal cost and
complexity), efficient (i.e., each module consumes 60mW, compared to the 160mW
IR emitter of the Kilobot [12]), and help to perform multiple core robotic
functions. They not only integrate simply with elastic skins through surface
attachment, they also take advantage of the structure itself as a transmission
medium that is robust to large shape change. Together, this makes our solution
cost effective, capable, and versatile compared to other options for shape-
changing modular soft robots. After discussing related work in Section II, we
describe the technical implementation and results of testing in Section III,
showing core collective functions like communication between robots,
synchronization at a distance, and sensing of external stimuli. In Section IV,
we demonstrate two enabled collaborative behaviors in a group of three robots,
including synchronized lifting and a decentralized inchworm-based gait.
## II Related Work
The most directly relevant related work includes other modular and swarm
robots that use individual subsystems or components for multiple functions and
other examples of acoustic sensing and communication in multi-robot systems.
### II-A Multifunctional Hardware for Multi-robot Systems
The Linbot soft modular platform [13] is the most directly related to this
work. It uses a voice coil for actuation, sensing, and communication, taking
advantage of the wide frequency response in a similar manner to how we use our
piezoelectric transducers. A Hall-effect sensor is used for proprioception
through sensing of the voice coil position, electromagnetic coupling between
neighboring Linbots allows for omnidirectional communication, and audible
range waves can be produced for external communication. To accomplish this
they rely on the rigid connections between the actuator core and the extents
of the soft shell, only operating with shape changes of up to approximately
30$\%$ on their principal axis. In contrast, our robots undergo maximum volume
changes of close to 1000$\%$, and the exterior surfaces do not remain in
contact with the primary actuator.
Swarm platforms are relevant in this context because they are also motivated
by finding low complexity and cost, scalable solutions [14]. The Kilobot [12]
platform uses an IR transmitter and receiver on its underside to both
communicate with and detect the distance of neighbors, using only one pair for
both functions but doing so only omnidirectionally and only up to about 10cm
away. The Open E-Puck platform [15] uses a set of 12 pairs of radially
arranged IR transmitters and receivers to perform inter-robot communication as
well as range and bearing measurements, which allows it to send and receive
signals from specific directions. Our acoustic solution adds the additional
functionality of long-range ($>$1m) communication with no line-of-sight
requirements, as well as contact/deformation sensing, while only requiring a
single transducer instead of an emitter/receiver pair.
### II-B Multi-robot Acoustic Sensing and Communication
A common use of acoustic waves in multi-robot systems is for ultrasonic range
estimation. The relatively slow speed of sound lessens signal processing
constraints relative to radio frequency solutions (e.g., RSSI) by enabling
direct time of flight measurements, making it a useful supplement to improve
robustness of distance estimation [16]. Relative positioning of multi-robot
systems using ultrasonic ranging at distances up to seven meters has been
demonstrated with absolute average error of only 8mm [17].
As opposed to sensing, acoustic communication between autonomous robots is a
relatively underexplored area. An exception is in the realm of autonomous
underwater vehicles, which are driven towards acoustic modes by the high
electromagnetic absorption of seawater [18, 19]. Audible range communication
has been noted as a potentially useful supplement to radio frequency
networking for land-based multi-robot systems due to the fact that the
relatively strong environmental attenuation of acoustic waves can encode
environmental information [20]. In this work, we take this idea further by
using the soft pressurized structure of the robot itself as the information-
encoding transmission environment.
Outside of the robotics domain, acoustic communication has been shown between
pressurized mylar balloons that act as amplifiers and speakers when actuated
by piezoelectric transducers [21], which served as an inspiration for the
communication-at-a-distance in this work.
## III Implementation and Results
### III-A System Hardware
Figure 3: A system block diagram illustrating how the analog switch array is
used to dynamically connect the piezoelectric actuators to either the
preamplifier or to the motor driver depending on desired function.
The inflatable robot units (Fig. 2A) are based on our co-authors’ recent prior
work [22] demonstrating untethered cellular robots. Each is composed of a 45cm
maximum diameter, 0.4mm thick latex membrane enclosing a DC air pump,
solenoid-controlled valve, and up to N (N = 8 given the analog switch array
used in this work) acoustic modules distributed across the membrane. Each
acoustic module represents a sensing, communication, and connection point for
the robots to interact with their environment and each other. There is an
inherent tradeoff between the increased functionality (e.g., in terms of
sensing resolution) and the increased complexity for each additional acoustic
module which bears future investigation.
The acoustic modules (Fig. 2C) comprise FDM 3D printed, cylindrical enclosures
(30mm diameter, 7mm thickness, PLA) with 60 degree radially arrayed slots for
diametrically polarized cylindrical magnets (3.2mm diameter, 6.4mm height).
The magnet housings are slightly over-sized, allowing the magnets to reorient
when connectors are drawn together, making them “genderless.” We rely upon
passive reorientation of the units for alignment, although notably vibrations
can be transmitted with sufficient signal-to-noise ratio through even
imperfectly aligned modules. The piezoelectric transducers (27mm diameter
brass plate with 20mm diameter ceramic piezo, 0.5mm thickness) are standard
contact microphones, fixed into the printed enclosures with 3M 300LSE double
sided adhesive tape. The acoustic modules are each fixed to the inside of the
latex skin with the same tape.
A Teensy4, which contains a 600MHz Cortex M7 microprocessor, is sufficient for
the software-defined radio architecture of the acoustic communication. To
minimize cost and complexity the piezoelectric transducers are connected
through an 8:16 analog switch array (MT8816) to a single full H-bridge dual-
channel motor driver (TB6612FNG) and a single audio amplifier (MAX9814,
includes preamplifier, variable gain stage, and output amplifier) with 60dB
gain. A block diagram of the system is shown in Fig. 3. Each piezoelectric
transducer consumes approximately 60mW during full-duty cycle operation
between 3kHz and 20kHz (10mA at 6V). The poor impedance matching between the
piezoelectric transducer and the amplifier, which is designed for standard
electret condenser microphones, creates a high pass filter around 2kHz.
Although here we show units with electronics and power located externally to
the robot membrane, prior work shows that the required electronics, battery
pack, and charging circuit can be incorporated into the latex membranes inside
a 3D printed enclosure [22].
### III-B Contact-based Communication
Swarm and modular robot systems designed for large agent counts typically rely
heavily on local communication as a way to overcome challenges with scaling of
radio-based networks [14]. For modular robots the connection points represent
natural avenues for information transfer, such as through direct electrical
connections [23]. Methods that do not rely on mechanically flush or material-
specific connections, like IR transmit/receive pairs built into the faces of
the connectors [24], are more suitable for deformable surfaces. In our robots,
the piezoelectric transducers in the rigid enclosure of the magnetic
connectors can transfer information in the form of shared vibration through
even imperfect contact made between connectors; the received signal amplitude
for a 18kHz tone decreases from its full value when all three magnets are
aligned, $N_{contacts}=3$, by about 35$\%$ for $N$=2 and 45$\%$ for $N$=1,
never falling below about 40dB signal-to-noise ratio (SNR). An advantage over
an IR-based method is that vibrations are coupled from the interior modules
through the exterior surfaces of the robots mechanically, removing any optical
property design constraints.
We implement acoustic communication through binary frequency-shift keying
(FSK), chosen over amplitude modulation in order to resist contact-quality
based errors. A demonstration of the achievable packet delivery ratio (PDR)
for a 1:1 module pair is shown in Fig. 4. Packets consist of a 4-bit start
sequence, a 4-bit data field, and one parity bit. The decrease in achievable
PDR is correlated with increasingly tight timing requirements (i.e., a shorter
symbol time requires stricter phase alignment) and a decreased SNR caused
presumably by the piezoelectric transducers being unable to ring up to full
vibration amplitude before a bit transition.
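The packet framing just described can be sketched as follows. This is an illustrative reconstruction, not the authors' firmware: the start sequence (0111) is taken from Section III-B, while the parity convention (even), the mark/space frequencies, and the symbol time are assumptions chosen for the example.

```python
# Hypothetical sketch of the binary FSK packet framing described above:
# a 4-bit start sequence, 4 data bits, and one parity bit.
# Even parity and the tone frequencies are assumptions, not paper values.

START_SEQ = [0, 1, 1, 1]          # start sequence (0111), per Section III-B
F_MARK, F_SPACE = 18_000, 16_000  # assumed FSK frequencies (Hz) for bits 1/0

def build_packet(data_bits):
    """Frame 4 data bits with the start sequence and an even-parity bit."""
    assert len(data_bits) == 4
    parity = sum(data_bits) % 2
    return START_SEQ + list(data_bits) + [parity]

def parse_packet(bits):
    """Return the data bits if framing and parity check out, else None."""
    if len(bits) != 9 or bits[:4] != START_SEQ:
        return None
    data, parity = bits[4:8], bits[8]
    return data if sum(data) % 2 == parity else None

def bits_to_tones(bits, symbol_time=0.01):
    """Map each bit to a (frequency, duration) FSK symbol for transmission."""
    return [(F_MARK if b else F_SPACE, symbol_time) for b in bits]
```

Shortening `symbol_time` in this sketch corresponds to raising the bitrate in Fig. 4, with the attendant SNR and phase-alignment penalties noted above.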
Figure 4: Packet delivery ratio (PDR) versus bitrate in bits/second for a
single acoustic module pairing between two robots. Packet delivery ratio is
calculated from 1000 packet send attempts, each with randomized data bits. Any
bit difference between the sent and received packet is classified as a
failure.
Having multiple individually addressable communication points on each robot
allows for directional communication between any number of connected
neighbors. For this to be possible, signals received at each transducer must
be able to be successfully disambiguated from those received at their
neighboring nodes; as the signals here are mechanically coupled to the
structure and not based on line-of-sight, they radiate symmetrically from
their coupling point through the elastic membrane and are received at
neighboring points. Fig. 5 shows the received signal amplitude at the receiver
node versus the received signal amplitude at the neighboring nodes on both the
send and receive robots. Vibrations are increasingly attenuated at higher
signal frequencies resulting in a higher SNR in the ultrasonic range. This
means that ultrasonic signals are the best option for sending information
directionally through the connections, and can do so with the added benefits
of being inaudible and having minimal chance of encountering relevant
environmental noise.
For multi-connection data routing over a single channel – in this case, an
individual robot’s software-defined FSK receiver, which is only hooked up to a
single acoustic module at a time – we implement a slotless architecture based
on the ALOHA protocol [25]. The default listening behavior is to time
multiplex through the $N_{module}$ acoustic modules with an interval equal to
a single packet duration $t_{packet}$, waiting to detect a start sequence
(0111) and “locking” (i.e., remaining listening) if one is detected. If a full
packet is decoded with the correct parity bit, an acknowledgement is then sent
through the appropriate module. The corresponding sending behavior is to
continuously broadcast a packet on all desired output modules for a duration
equal to $N_{module}\cdot t_{packet}$, then listen on those modules for the
acknowledgement; if no acknowledgement is received the packet is resent.
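The listen and send behaviors above can be summarized in a short sketch. The module I/O is stubbed out as callables, and all function names here are assumptions for illustration; the control flow (time-multiplexed listening with lock-on, broadcast for $N_{module}\cdot t_{packet}$, then wait for an acknowledgement and resend on failure) follows the protocol described in the text.

```python
# Illustrative sketch of the slotless ALOHA-style routing loop described
# above. receive_packet/broadcast/wait_ack are stand-in callables for the
# module hardware; their names and signatures are assumptions.

def listen_loop(modules, receive_packet, t_packet):
    """Cycle through the acoustic modules, one packet duration each,
    returning (module, data) once a valid packet is decoded."""
    while True:
        for m in modules:
            result = receive_packet(m, timeout=t_packet)
            if result is not None:      # start sequence detected, packet
                return m, result        # decoded with correct parity

def send_with_ack(out_modules, packet, broadcast, wait_ack,
                  t_packet, n_modules, max_retries=5):
    """Broadcast for n_modules * t_packet so every listener's slot is
    covered, then listen for an acknowledgement; resend on failure."""
    for _ in range(max_retries):
        broadcast(out_modules, packet, duration=n_modules * t_packet)
        if wait_ack(out_modules, timeout=n_modules * t_packet):
            return True
    return False
```

Broadcasting for the full $N_{module}\cdot t_{packet}$ interval guarantees the receiver's time-multiplexed listener visits the relevant module at least once during the transmission.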
Figure 5: Mean FFT amplitude ($n=20$) at the receiver node and at the
neighbors to the receiver and sender, both at approximately 15cm distance, as
a function of signal frequency for a pure tone generated by the sending node.
Points are normalized to the mean of the amplitude of the receiver node signal
for each frequency.
### III-C Communication at a Distance
Collaboration between our robots is possible without either direct contact or
line-of-sight via transmission of signals in the audible range, produced
effectively by the same piezoelectric transducers thanks to their flat
frequency response. In this case, the pressurized elastic skin acts as an
omnidirectional pickup for the airborne acoustic waves, letting the ostensibly
contact-based piezoelectric transducers act as true microphones. By operating
at the approximate resonance of the piezoelectric transducers (about 6kHz),
signals from robots up to a meter away can be received through the air with a measured
SNR of $\approx$7dB through the entire operational volume range ($\approx
0.05-0.5m^{3}$). The received signal amplitude is determined by factors
including the robot distance, each robots’ volume, and the contact quality
between the acoustic modules and the elastic membrane.
One important and fundamental function of decentralized multi-robot systems is
the ability to synchronize in time [26]. In nature, animals use both acoustic
and optical (e.g., in katydids [27] and fireflies [28], respectively) signals
to achieve synchronicity in a process known as “synchronized chorusing,” or
more formally as groups of pulse coupled oscillators.
Here, pulse coupled synchronization using audible signals is implemented
simply; an example spectrogram from synchronization of two robots is shown in
Fig. 6. Each robot starts with some initial phase offset from its neighbors
(about 250ms in Fig. 6). After a delay $t_{a}$, a synchronization pulse is
produced by all $N_{module}$ transducers simultaneously at 6kHz for a duration
$t_{chirp}$. The cycle repeats after another delay $t_{b}$. During each delay
interval a module acting as the receiver is continuously sampled in order to
detect amplitude peaks at 6kHz above a predetermined ambient noise threshold.
At the conclusion of the $t_{a}+t_{chirp}+t_{b}$ duration cycle, the tallied
detections are used to determine whether the chirp should be shifted “forward”
or “backward” in a binary fashion; if more are detected during $t_{a}$, for
example, then the majority of neighboring robots are pulsing before this one,
so the phase is shifted without changing the period by setting
$t_{a}\mathrel{{-}{=}}t_{shift}$ and $t_{b}\mathrel{{+}{=}}t_{shift}$.
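The binary phase-shift rule can be sketched as below. The text specifies the forward case ($t_a$ decreases, $t_b$ increases when more chirps arrive before ours); the backward case is assumed symmetric, and the tie-handling is an assumption.

```python
# Minimal sketch of the binary pulse-coupled update described above.
# The backward-shift branch is assumed symmetric to the forward one.

def update_phase(t_a, t_b, detections_before, detections_after, t_shift):
    """Shift the chirp earlier if more neighbors pulse during t_a
    (before ours), later if more pulse during t_b; the cycle period
    t_a + t_chirp + t_b is left unchanged."""
    if detections_before > detections_after:
        t_a -= t_shift   # neighbors lead: chirp sooner next cycle
        t_b += t_shift
    elif detections_after > detections_before:
        t_a += t_shift   # neighbors lag: chirp later next cycle
        t_b -= t_shift
    return t_a, t_b      # on a tie, hold the current phase
```

Because the shifts on $t_a$ and $t_b$ cancel, every robot keeps the same period while its phase drifts toward its neighbors, which is why the clock gap in Fig. 6 closes over successive cycles.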
Figure 6: Spectrogram for visual demonstration of clock synchronization
between two robots with center-to-center distance of approximately 1m. A
randomly determined initial clock gap of approximately 250ms is decreased to
less than 5ms after five two-second cycles. For clarity, ambient noise
amplitude has been subtracted from the data during post-processing. Data
collected using an external microphone.
There is a tradeoff between synchronization time and total (audible) robot
count. In the most extreme case, all time slots in the listening period would
be filled with chirps and therefore balanced. This means that the time for
synchronization is expected to scale with the number of robots as the
listening period must increase for additional robots. Time-varying chirps
(such as those produced by katydids [29]) could provide an additional layer of
information that improves the scalability of this approach.
The synchronization accuracy is related to both the chirp duration and the
digital signal processing on the receiver. The minimum chirp duration is
bounded by the response time of the piezoelectric transducers and the
associated SNR at the receiver side. For the receiver processing, non-
overlapping 256-point FFT segments with a sampling frequency of 50kS/s results
in a minimum synchronization window of approximately 5ms.
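As a quick check of the receiver-side numbers: non-overlapping 256-point frames sampled at 50 kS/s each span 256/50000 s, giving the roughly 5 ms resolution quoted above.

```python
# Worked arithmetic for the receiver DSP parameters stated above.

def fft_frame_duration(n_fft, fs):
    """Duration in seconds of one non-overlapping FFT frame."""
    return n_fft / fs

# 256 / 50_000 = 5.12 ms, i.e., the ~5 ms synchronization window above.
```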
### III-D Exteroceptive Sensing
Sensing external stimuli like applied loads and environmental contacts is a
critical robotic function. Existing solutions for soft robots, such as adding
flexible signal transmission channels (e.g., optical channels [30] or printed
traces [31]) throughout the robot surface, are costly, complex, and not robust
to the high-percentage shape change exhibited by our robots. In order to sense
contact we instead take advantage of the coupling between loads on the robot
and the resultant attenuation of the acoustic waves being transmitted through
the existing unmodified external surface, reducing instrumentation cost and
complexity by taking advantage of the compliant nature of the robot.
Fig. 7 demonstrates that acoustic signals, received at a central receiving
node from tones transmitted by surrounding nodes, can be used to detect
compression of the robot. Regions are effectively “sensitized” by adding a
continuously sampling receiver. Contacts with areas $\geq a_{mod}$ centered on
the transmitting modules both dampen the vibrations of the piezoelectric
transducer in its magnetic enclosure as well as decrease the coupling of the
surrounding elastic membrane to the node, producing a clearly distinguishable
shift in received signal FFT amplitude at the tone frequency. The sensitive
region size is determined by the initial SNR of the received tones, which is a
function of inflated volume, pressure, and contact quality. The spatial
resolution is determined geometrically by the acoustic module area, $a_{mod}$,
the module dispersion density, and the current inflated volume. In this
inverse to the problem of private contact-based communication, it is important
to maximize signal transmission to neighboring nodes and hence requires
audible-range signals (see Fig. 5). Importantly, contact at the receiver node
itself manifests as decreases in amplitude from all surrounded nodes;
switching the set of “sensitized” nodes by reconfiguring the analog crosspoint
array could allow for diambiguation.
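The detection idea reduces to monitoring the FFT amplitude at each transmitter's tone frequency and flagging contact when it falls below a fraction of its no-contact baseline. The sketch below assumes NumPy; the 10% threshold mirrors the drop reported in Fig. 7, and the function names are illustrative.

```python
# Hedged sketch of the contact-detection scheme above: track the FFT
# amplitude at the transmitted tone's bin and flag contact on a large
# drop relative to the uncontacted baseline. Threshold is illustrative.
import numpy as np

def tone_amplitude(samples, fs, f_tone, n_fft=256):
    """FFT magnitude of one frame at the bin nearest f_tone."""
    spectrum = np.abs(np.fft.rfft(samples[:n_fft]))
    bin_idx = int(round(f_tone * n_fft / fs))
    return spectrum[bin_idx]

def detect_contact(amplitude, baseline, threshold=0.10):
    """Contact if the amplitude falls below 10% of the no-contact
    baseline, as in the Fig. 7 measurements."""
    return amplitude < threshold * baseline
```

In practice the baseline would be re-estimated whenever the inflated volume changes, since (as noted above) the received SNR depends on volume, pressure, and membrane contact quality.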
Figure 7: Time-multiplexing the transmission of a pure tone from nodes
arranged around a central receiver allows for areas of the robot to be
“sensitized” to contact. During contact, received amplitude at the central
node falls to below 10$\%$ of the initial value. The node emitting the tone
switches every 250ms (two alternating nodes shown here) and the FFT results
are averaged for a 1Hz update.
## IV Autonomous Behavior Demonstrations
With 1-DOF actuation, coordination between connected robots allows for
locomotion based on an inchworm gait [22]. Contact-based communication allows
the robots to selectively initiate inflation cycles in neighboring robots. A
“one-dimensional” locomotion example using this acoustic communication
strategy is shown in Figure 8. Here, forward motion is only possible when the
robots make full contact with the duct walls: the contact detection described
in Section III could be used to control the inflation and deflation cycles.
Locomotion in the X-Y plane could be performed with a minimum group of six
such interconnected robots with the ability to communicate in this manner.
Figure 8: Fully decentralized linear locomotion is possible using the contact-
based acoustic communication method developed here. Here, three robots move
within a clear cylindrical duct. A) The first robot is commanded to initiate
movement. Once it reaches its desired inflation volume (open loop pump
control) it sends a data packet through the module on one side of its body. B)
The next robot senses a signal at its connector and begins to parse the
incoming packet. It understands it is being told to inflate and the cycle
continues. C) The signal successfully passes from the first robot to the third
robot in the chain. D) If another robot was added to the end before the third
had finished inflating it would join in the behavior.
Clock synchronization is of practical use for an application like coordinated
lifting of unstable or safety-critical objects, such as those theoretically
encountered in search and rescue or human-assistive contexts. Figure 9 shows
that a group of three robots can lift a balanced load in tandem. A centralized
initiation signal tells all three robots to attempt a synchronous lift and
sets an initial random clock offset. They begin to use audible-range
communication to synchronize (as determined by a maximum number of FFT frames
with detected chirps) and once this condition is reached for a minimum of four
periods they begin to inflate.
Figure 9: Cooperative lifting of balanced loads is possible via audible-range
synchronization at a distance. Here, three robots lift a container of fluid
with the load distributed using a sheet of clear acrylic. A) The three robots
are commanded to begin a synchronized lift and start communicating through
audible chirps with some initial clock skew. B) Once their clocks converge to
within a threshold for a set number of periods (four in this experiment) they
simultaneously inflate. C) The load is lifted without disturbance. Vertical
bars with width roughly equal to chirp duration added to spectrogram for
clarity.
## V Future Work
There are additional sensing modalities possible using the architecture
presented here with relatively minor changes to the system hardware. In the
future, sourcing or fabricating properly tuned (i.e., a higher quality factor
in the ultrasonic region) or properly coupled (e.g., with an attached acoustic
horn) transducers for the acoustic modules may be sufficient for monostatic
ultrasonic range finding from each [32]. Bistatic range finding would be an
opportunity to take advantage of the shape changing nature of the robots,
letting them act as reconfigurable “acoustic lenses” which vary field-of-view
through changes in volume. Contact quality repeatability between modules, and
variable contact quality over multiple inflate-deflate cycles, prevented more
nuanced force and deformation sensing based on learned models, as in [33, 34].
A way to more permanently distribute and fix the modules onto the membrane
would allow for more functionality. Multi-material composite membranes for the
robot exterior could boost SNR through better acoustic impedance matching or
add region-dependent sensitivity at design time through acoustic wave guides,
as in [35].
There are a number of interesting questions related to network architecture
for a collection of robots with wide-spectrum transmission capabilities. For
example, the audible-range clock synchronization functionality could be used
for a slot-based network architecture (e.g., slotted ALOHA), increasing
network throughput. In the future, a multi-hop mesh network based on acoustic
signals could choose between omnidirectional audible broadcasts and neighbor-
to-neighbor ultrasonic modes depending on the traffic route. Additionally, the
use of audible range acoustic signals as a primary mode of communication
presents opportunities for the study of how human-interpretable modes of
multi-robot collaboration affects human operators and bystanders [36].
## VI Conclusion
Acoustic waves are fundamentally different from electromagnetic (e.g., optical
and radio frequency) waves in their transmission properties. Simple and low-
cost transducers are available with operation ranges covering broad swaths of
the spectrum. By taking advantage of the variable attenuation and directivity
of acoustic waves as a function of their frequency, these transducers can be
used for functions ranging from communication to sensing. Further, the same
high extension ratio pressurized membranes that make soft shape-changing
robots difficult to instrument can instead become useful parts of the acoustic
transduction strategy by acting as signal channels and state-dependent
amplifiers/attenuators.
## ACKNOWLEDGMENT
This work was supported in part by the Intelligence Community Postdoctoral
Research Fellowship Program, administered by the Oak Ridge Institute for
Science and Education through an Interagency Agreement between the U.S. DoE
and the ODNI.
## References
* [1] A. Brunete, A. Ranganath, S. Segovia, J. P. de Frutos, M. Hernando, and E. Gambao, “Current trends in reconfigurable modular robots design,” _International Journal of Advanced Robotic Systems_ , vol. 14, no. 3, p. 1729881417710457, May 2017, publisher: SAGE Publications.
  * [2] C. Zhang, P. Zhu, Y. Lin, Z. Jiao, and J. Zou, “Modular Soft Robotics: Modular Units, Connection Mechanisms, and Applications,” _Advanced Intelligent Systems_ , vol. 2, no. 6, p. 1900166, 2020, eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/aisy.201900166.
* [3] C.-H. Yu, K. Haller, D. Ingber, and R. Nagpal, “Morpho: A Self-Deformable Modular Robot Inspired by Cellular Structure,” in _2008 IEEE/RSJ International Conference on Intelligent Robots and Systems_. Nice: IEEE, Sept. 2008, pp. 3571–3578.
  * [4] J. Seo, J. Paik, and M. Yim, “Modular Reconfigurable Robotics,” _Annual Review of Control, Robotics, and Autonomous Systems_ , vol. 2, no. 1, pp. 63–88, 2019, eprint: https://doi.org/10.1146/annurev-control-053018-023834.
  * [5] R. B. Cocroft, “The public world of insect vibrational communication,” _Molecular Ecology_ , vol. 20, no. 10, pp. 2041–2043, 2011, eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1365-294X.2011.05092.x.
* [6] P. S. M. Hill, “How do animals use substrate-borne vibrations as an information source?” _Naturwissenschaften_ , vol. 96, no. 12, pp. 1355–1371, Dec. 2009.
* [7] B. Hölldobler, “Multimodal signals in ant communication,” _Journal of Comparative Physiology A_ , vol. 184, no. 2, pp. 129–141, 1999.
* [8] F. Montealegre-Z, T. Jonsson, and D. Robert, “Sound radiation and wing mechanics in stridulating field crickets (Orthoptera: Gryllidae),” _Journal of Experimental Biology_ , vol. 214, no. 12, pp. 2105–2117, June 2011.
  * [9] H. G. Spangler, M. D. Greenfield, and A. Takessian, “Ultrasonic mate calling in the lesser wax moth,” _Physiological Entomology_ , vol. 9, no. 1, pp. 87–95, 1984.
* [10] D. Bank, “A novel ultrasonic sensing system for autonomous mobile systems,” _IEEE Sensors Journal_ , vol. 2, no. 6, pp. 597–606, Dec. 2002, conference Name: IEEE Sensors Journal.
* [11] H. E. Bass, L. C. Sutherland, A. J. Zuckerwar, D. T. Blackstock, and D. M. Hester, “Atmospheric absorption of sound: Further developments,” _The Journal of the Acoustical Society of America_ , vol. 97, no. 1, pp. 680–683, Jan. 1995.
* [12] M. Rubenstein, C. Ahler, and R. Nagpal, “Kilobot: A low cost scalable robot system for collective behaviors,” in _2012 IEEE International Conference on Robotics and Automation_ , May 2012, pp. 3293–3298, iSSN: 1050-4729.
* [13] R. M. McKenzie, M. E. Sayed, M. P. Nemitz, B. W. Flynn, and A. A. Stokes, “Linbots: Soft Modular Robots Utilizing Voice Coils,” _Soft Robotics_ , vol. 6, no. 2, pp. 195–205, Dec. 2018, publisher: Mary Ann Liebert, Inc., publishers.
* [14] M. Brambilla, E. Ferrante, M. Birattari, and M. Dorigo, “Swarm robotics: a review from the swarm engineering perspective,” _Swarm Intelligence_ , vol. 7, no. 1, pp. 1–41, Mar. 2013.
* [15] A. Gutierrez, A. Campo, M. Dorigo, J. Donate, F. Monasterio-Huelin, and L. Magdalena, “Open E-puck Range Bearing miniaturized board for local communication in swarm robotics,” in _2009 IEEE International Conference on Robotics and Automation_ , May 2009, pp. 3111–3116, iSSN: 1050-4729.
* [16] L. Girod and D. Estrin, “Robust range estimation using acoustic and multimodal sensing,” in _Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the the Next Millennium (Cat. No.01CH37180)_ , vol. 3, Oct. 2001, pp. 1312–1320 vol.3.
* [17] F. Rivard, J. Bisson, F. Michaud, and D. Letourneau, “Ultrasonic relative positioning for multi-robot systems,” in _2008 IEEE International Conference on Robotics and Automation_ , May 2008, pp. 323–328, iSSN: 1050-4729.
* [18] S. Chappell, J. Jalbert, P. Pietryka, and J. Duchesney, “Acoustic communication between two autonomous underwater vehicles,” in _Proceedings of IEEE Symposium on Autonomous Underwater Vehicle Technology (AUV’94)_ , July 1994, pp. 462–469.
* [19] A. Bahr, J. J. Leonard, and M. F. Fallon, “Cooperative Localization for Autonomous Underwater Vehicles,” _The International Journal of Robotics Research_ , vol. 28, no. 6, pp. 714–728, June 2009, publisher: SAGE Publications Ltd STM.
* [20] P. Karimian, R. Vaughan, and S. Brown, “Sounds Good: Simulation and Evaluation of Audio Communication for Multi-Robot Exploration,” in _2006 IEEE/RSJ International Conference on Intelligent Robots and Systems_ , Oct. 2006, pp. 2711–2716, iSSN: 2153-0866.
* [21] J. A. Paradiso, “The interactive balloon: Sensing, actuation and behavior in a common object,” _IBM Systems Journal_ , vol. 35, no. 3.4, pp. 473–487, 1996, conference Name: IBM Systems Journal.
* [22] M. R. Devlin, B. T. Young, N. D. Naclerio, D. A. Haggerty, and E. W. Hawkes, “An untethered soft cellular robot with variable volume, friction, and unit-to-unit cohesion,” in _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, 2020, p. 7.
* [23] K. Gilpin, A. Knaian, and D. Rus, “Robot pebbles: One centimeter modules for programmable matter through self-disassembly,” in _2010 IEEE International Conference on Robotics and Automation_ , May 2010, pp. 2485–2492, iSSN: 1050-4729.
* [24] K. Gilpin, K. Kotay, D. Rus, and I. Vasilescu, “Miche: Modular Shape Formation by Self-Disassembly,” _The International Journal of Robotics Research_ , vol. 27, no. 3-4, pp. 345–372, Mar. 2008, publisher: SAGE Publications Ltd STM.
* [25] N. Abramson, “THE ALOHA SYSTEM: another alternative for computer communications,” in _Proceedings of the November 17-19, 1970, fall joint computer conference_ , ser. AFIPS ’70 (Fall). New York, NY, USA: Association for Computing Machinery, Nov. 1970, pp. 281–285.
* [26] V. Trianni and A. Campo, “Fundamental collective behaviors in swarm robotics,” in _Springer handbook of computational intelligence_. Springer, 2015, pp. 1377–1394.
* [27] A. Ravignani, D. L. Bowling, and W. Fitch, “Chorusing, synchrony, and the evolutionary functions of rhythm,” _Frontiers in psychology_ , vol. 5, p. 1118, 2014.
  * [28] F. Perez Diaz, “Firefly-Inspired Synchronization in Swarms of Mobile Agents,” PhD thesis, University of Sheffield, Sept. 2016.
* [29] R. Pipher and G. Morris, “Frequency modulation in Conocephalus nigropleurum, the black-sided meadow katydid (Orthoptera: Tettigoniidae),” _The Canadian Entomologist_ , vol. 106, pp. 997–1001, Sept. 1974.
* [30] P. A. Xu, A. K. Mishra, H. Bai, C. A. Aubin, L. Zullo, and R. F. Shepherd, “Optical lace for synthetic afferent neural networks,” _Science Robotics_ , vol. 4, no. 34, Sept. 2019, publisher: Science Robotics Section: Research Article.
* [31] I. Wicaksono, E. Kodama, A. Dementyev, and J. A. Paradiso, “SensorNets: Towards Reconfigurable Multifunctional Fine-grained Soft and Stretchable Electronic Skins,” in _Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems_ , ser. CHI EA ’20. New York, NY, USA: Association for Computing Machinery, Apr. 2020, pp. 1–8.
* [32] J. Borenstein and Y. Koren, “Obstacle avoidance with ultrasonic sensors,” _IEEE Journal on Robotics and Automation_ , vol. 4, no. 2, pp. 213–218, Apr. 1988, conference Name: IEEE Journal on Robotics and Automation.
* [33] G. Laput, X. A. Chen, and C. Harrison, “SweepSense: Ad Hoc Configuration Sensing Using Reflected Swept-Frequency Ultrasonics,” in _Proceedings of the 21st International Conference on Intelligent User Interfaces_ , ser. IUI ’16. New York, NY, USA: Association for Computing Machinery, Mar. 2016, pp. 332–335.
* [34] S. Swaminathan, M. Rivera, R. Kang, Z. Luo, K. B. Ozutemiz, and S. E. Hudson, “Input, Output and Construction Methods for Custom Fabrication of Room-Scale Deployable Pneumatic Structures,” _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_ , vol. 3, no. 2, pp. 62:1–62:17, June 2019.
* [35] T. M. Huh, C. Liu, J. Hashizume, T. G. Chen, S. A. Suresh, F. Chang, and M. R. Cutkosky, “Active Sensing for Measuring Contact of Thin Film Gecko-Inspired Adhesives,” _IEEE Robotics and Automation Letters_ , vol. 3, no. 4, pp. 3263–3270, Oct. 2018, conference Name: IEEE Robotics and Automation Letters.
  * [36] D. Moore, N. Martelaro, W. Ju, and H. Tennent, “Making Noise Intentional: A Study of Servo Sound Perception,” in _2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI)_ , Mar. 2017, pp. 12–21, ISSN: 2167-2148.
# Contagion-Preserving Network Sparsifiers: Exploring Epidemic Edge Importance
Utilizing Effective Resistance
Alexander Mercier1,2
###### Abstract
Network epidemiology has become a vital tool in understanding the effects of
high-degree vertices, geographic and demographic communities, and other
inhomogeneities in social structure on the spread of disease. However, many
networks derived from modern datasets are quite dense, such as mobility
networks where each location has links to a large number of potential
destinations. One way to reduce the computational effort of simulating
epidemics on these networks is sparsification, where we select a
representative subset of edges based on some measure of their importance.
Recently an approach was proposed using an algorithm based on the effective
resistance of the edges. We explore how effective resistance is correlated
with the probability that an edge transmits disease in the SI model. We find
that in some cases these two notions of edge importance are well correlated,
making effective resistance a computationally efficient proxy for the
importance of an edge to epidemic spread. In other cases, the correlation is
weaker, and we discuss situations in which effective resistance is not a good
proxy for epidemic importance.
## 1 Introduction
### 1.1 Motivation
Networks arise in a variety of contexts, from the study of epidemics, social
contagions, terrorism, and biological invasions, to how knowledge itself is
organized through epistemological networks. Use of network-based models for
simulating epidemics has become particularly popular. However, simulating a
stochastic epidemic model on a large network is computationally expensive,
especially for dense networks, such as those derived from high-resolution
mobility data which have been increasingly used in modeling contagion spread
[5, 7, 14]. In these networks, there is a link between every pair of
destinations, with weights corresponding to the flow of people who travel
between them [14]. Considering all possible links as potential paths of
infection requires significant computation, and the problem is exacerbated by
the need to perform many independent runs to get a sense of the probability
distribution of epidemic sizes in stochastic models, as well as to test the
effect of various intervention strategies. It is common to apply naive
heuristics, like simply removing links whose weights are below some threshold,
but it is not clear to what extent this preserves the true behavior of an
epidemic, since rare events on low-weight edges can have important downstream
consequences.
### 1.2 Sparsification
One way to address this is _sparsification_ : choosing a subset of important
links in the network, deriving a sparser network whose behavior would be
faithful to the original but which is far less costly to study. The aim of
sparsification is to approximate a network, $G(V,E,\phi)$ by a graph
sparsifier, $\tilde{G}(V,\tilde{E},\tilde{\phi})$, on the same set of
vertices, $V$, but with a reduced number of edges, $\tilde{E}$, and modified
edge weights, $\tilde{\phi}$, such that $\tilde{G}$ approximates $G$ in some
appropriate metric or metrics. Since graphs arise in the study of complex
networks, graph sparsification has become both a topically important area of
study and an interesting mathematical challenge.
Sparsification of networks used in stochastic epidemic simulations is
therefore motivated by two primary aspirations: first, to lower the
computational cost of simulating epidemics on the network while retaining the
same average dynamics; and second, to conserve edges important to epidemic
spread while removing those that are not. Likewise, when paired with
associated metadata, the removal of
“unimportant” edges and the conservation of “important” edges may allow
further analytic insight into network topological structure. We explore the
notion of a contagion-preserving network sparsifier (CPNS) which seeks to
reduce the number of edges in a network while simultaneously approximating
average epidemic dynamics. In this way, a CPNS reduces the computational costs
incurred in dynamical simulations on $G$, by approximating typical dynamics on
$\tilde{G}$. Yet, how do we determine which links of the original network are
the most important in an epidemic?
One possible approach comes from an algorithm created by Daniel Spielman and
Shang-Hua Teng, for which they won the Gödel Prize in 2015, and a
simplification and improvement by Spielman and Srivastava [9, 10]. The idea
from the Spielman–Srivastava algorithm is to randomly sample edges with
probability proportional to their effective resistance: in physical terms, the
potential difference between their endpoints when one unit of current flows
between them (and where each edge has resistance equal to the reciprocal of
its weight). Effective resistance, also called “current-flow betweenness” or
“spanning edge betweenness,” has been explored as a measure of edge importance
[12, 13, 1]. This resistance is high if an edge is one of the only ways to
quickly get from one part of the network to another or if alternate paths are
long or consist of low-weight edges. Choosing edges this way preserves
important aspects of the graph spectrum — approximately preserving the graph
Laplacian — and makes it possible to solve certain systems of equations in
nearly-linear time [10]. However, this notion of “importance” may or may not
align with the role edges play in an epidemic, since approximately preserving
the graph Laplacian (which governs linear dynamics on the network) might not
preserve the highly nonlinear dynamics of an epidemic. Are the edges with
higher probability of selection in the Spielman–Srivastava algorithm more
likely to spread disease? What is the right way to sparsify a network if our
goal is to preserve its epidemic behavior rather than its spectrum?
### 1.3 Methodology
Towards this end, we develop the novel concept of an infection spanning tree,
from which a general notion of epidemic edge importance may be formed and
contrasted against effective resistance. Another formulation of effective
resistance is the probability that a random spanning tree includes a given
edge, where the spanning tree is chosen uniformly or with probability
proportional to the product of edge weights. In contrast, the infection
spanning tree lets us measure which trees and paths are most likely to spread
a contagion.
Focusing on SI contagion dynamics with a discrete-time SI model, we use the
Spielman–Srivastava algorithm utilizing effective resistance in
$O(n\log^{c}n)$ time to produce CPNS, drawing parallels between the linear
flow conceptualization of a network and contagion spread on that network. This
research takes four approaches not taken in previous literature [11]. Swarup
et al. also used effective resistance, but in comparing with epidemics on the
original network they used only the metric of minimum Hamming distance [11].
First, we use a suite of metrics to study the performance of CPNS, namely _average_
Hamming distance, mutual information, and how well the fraction of infected
vertices over time matches the epidemic on the original network. Second, we
compare effective resistance with the epidemic edge importance using infection
spanning trees. Third, while preceding work centered around aggregate SI
dynamics on CPNS, this research examines SI contagion dynamics on CPNS
primarily through time. Lastly, we compare the Spielman–Srivastava algorithm
with a simpler method that samples edges uniformly.
In order to explore important edges in contagion spread and CPNS, we conduct a
range of experiments on four random networks and a real-world air transport
network. The results will show that while the linear flow analog to contagion
processes is conceptually fertile, a range of diverse metrics must be
implemented in order to view the full picture of the effectiveness of a given
CPNS. This spectral sparsification algorithm can be used to create effective
CPNS, permitting the removal of $75\%$ of edges in some networks while
approximately preserving the same average SI dynamics through time. However,
more research must be conducted to understand the importance of any given edge
within the context of an epidemic in order to fully grasp the workings of a
CPNS.
## 2 Methods
### 2.1 Linear Flow and Contagion Spread
Current Flow | Contagion Spread
---|---
Conductance | Probability of Transmission Along An Edge
Resistance | Expected Time to Infection
Current | Expected Contagion Flow
Table 1: Linear Flow and Contagion Processes
We begin by noting the parallels between linear flow in electrical networks
and epidemic processes on social contact networks (Table 1). In contagion
processes, we assume that transmission can occur along each edge
independently. Therefore, the probability of the contagion spreading along an
edge, $\pi_{e}$ is treated as analogous to the conductance of that edge. The
reciprocal of conductance, known as resistance, is the expected time to
infection along that edge or commute time, $T_{e}$. The flow of a contagion
along the network can then be thought of as current flow on the corresponding
electrical network. Effective resistance, $R_{e}$, is then defined as the
potential difference across an electrical network when one unit of current is
injected at a vertex, $i$, and extracted at another vertex, $j$, taking into
account all possible paths between $i$ and $j$. In order to approximate the
effective resistance for all pairs of vertices, we created an implementation
in R of the Spielman–Srivastava algorithm. Specifically, we implemented the
formulation by Koutis, Levin, and Peng which works in time nearly linear to
the number of edges [6].
Effective resistance between any two given vertices is given by the graph
Laplacian when the resistance of each edge is defined as the inverse of its
weight. The effective resistance between $i$ and $j$ is given by
$R_{ij}=(e_{i}-e_{j})^{T}L^{+}(e_{i}-e_{j})$
where $L^{+}$ is the pseudoinverse of the graph Laplacian and $e_{i}$ is the
column vector where there is a $1$ at $i$ and zero elsewhere. The algorithm to
approximate effective resistance between all vertex pairs inverts the
Laplacian approximately using a random projection technique based upon the
Johnson-Lindenstrauss lemma [10]. This approximation guarantees the
approximate effective resistance given by the algorithm is between
$(1-\varepsilon)R_{ij}$ and $(1+\varepsilon)R_{ij}$ for some constant error
parameter $\varepsilon$ which can be made as small as desired. In our
implementation we typically have $\varepsilon\leq 0.1$.
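The formula above can be checked directly with a dense Laplacian pseudoinverse; this exact computation is what the Johnson–Lindenstrauss-based algorithm approximates in nearly-linear time. A minimal sketch (the function name and input format are our own, and this dense version is only practical for small graphs):

```python
# Exact effective resistance via the Laplacian pseudoinverse:
# R_ij = (e_i - e_j)^T L^+ (e_i - e_j), with edge resistance = 1/weight.
import numpy as np

def effective_resistance(weights, i, j):
    """weights: symmetric n x n matrix of edge weights (0 = no edge)."""
    W = np.asarray(weights, dtype=float)
    L = np.diag(W.sum(axis=1)) - W   # graph Laplacian
    Lp = np.linalg.pinv(L)           # pseudoinverse L^+
    e = np.zeros(len(W))
    e[i], e[j] = 1.0, -1.0           # indicator vector e_i - e_j
    return e @ Lp @ e
```

As a sanity check, two unit-weight edges in series give a resistance of 2 between the endpoints, while a unit-weight triangle gives 2/3 between any pair (one edge in parallel with a two-edge path).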
### 2.2 Sparsification using Effective Resistance
Algorithm 1: Sparsification by effective resistance [9]
Input: network $G(V,E,\phi)$
Output: network $\tilde{G}(V,\tilde{E},\tilde{\phi})$
Parameters: $q$, the number of samples
Procedure:
1. Choose a random edge $e$ from $G$ with probability $p_{e}\propto w_{e}R_{e}$.
2. Add edge $e$ to $\tilde{G}$ with weight $\tilde{w}_{e}=w_{e}/(p_{e}q)$.
3. Take $q$ samples with replacement, summing weights if an edge is chosen more than once.
Sparsification via effective resistance approximately preserves the effective
resistance between all vertices, ensuring the expected time to infection for
any vertex remains approximately the same [11]. This is due to the fact the
spectrum of the graph will remain similar to that of the original graph; the
spectrum of a graph has been shown to govern many aspects of the diffusion on
that graph [3]. The Spielman–Srivastava algorithm takes the effective
resistance of all edges of a graph and uses $R_{e}$ to sample edges with
probability $p_{e}$ proportional to $w_{e}R_{e}$, where $w_{e}$ is the weight of
that edge. Notably, according to Algorithm 1, edge weights are sampled
proportional to $\pi_{e}T_{e}$ [11]. An edge selected by the
Spielman–Srivastava algorithm is assigned a new weight $\tilde{w}_{e}$
equal to $w_{e}/(p_{e}q)$, where $q$ is the total number of samples taken.
Edges are sampled with replacement, so that they can be selected multiple
times. If the same edge is selected more than once, the edge weights are added
together. Since the expected number of times $e$ is selected is $qp_{e}$, the
expectation of $\tilde{w}_{e}$ is equal to its original edge weight $w_{e}$.
Thus, the expected adjacency matrix will equal the original adjacency matrix
and the expected graph Laplacian will be equal to the original graph
Laplacian. In this way, Algorithm 1 modifies edge weights to compensate for
reduced edge number and can be thought of as being a part of a more general
sampling strategy whereby edges are assigned probabilities by some metric of
edge importance.
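The sampling-and-reweighting loop of Algorithm 1 can be sketched as follows. The data layout (edge tuples plus a precomputed map of effective resistances) is a simplifying assumption of ours; a real implementation would use the nearly-linear-time estimates of $R_e$ rather than exact values.

```python
# Sketch of Algorithm 1: sample q edges with replacement, with probability
# p_e proportional to w_e * R_e, reweighting each pick by w_e / (p_e * q)
# so that the expected new weight of every edge equals its original weight.
import random
from collections import defaultdict

def sparsify(edges, R, q, rng=random):
    """edges: list of (u, v, weight); R: dict mapping (u, v) -> eff. resistance."""
    total = sum(w * R[(u, v)] for u, v, w in edges)
    p = {(u, v): w * R[(u, v)] / total for u, v, w in edges}  # p_e ∝ w_e R_e
    w = {(u, v): w_ for u, v, w_ in edges}
    new_w = defaultdict(float)
    population = list(p)
    weights = [p[e] for e in population]
    for _ in range(q):                        # q samples with replacement
        e = rng.choices(population, weights=weights)[0]
        new_w[e] += w[e] / (p[e] * q)         # duplicates sum, so E[w~_e] = w_e
    return dict(new_w)
```

Note that because each sample contributes $w_e/(p_e q)$, the total weight returned is deterministic even though the edge set is random, consistent with the expectation-preserving property described above.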
Broadly, if there is an edge $e$ which connects vertices $i$ and $j$ such that
no alternate paths exist between $i$ and $j$, or more generally if the
alternate paths are long or involve lower-weight edges, then $T_{e}$ will be
equal to the resistance of that edge with $\pi_{e}T_{e}$ equal to $1$.
Conversely, if more paths are added between $i$ and $j$, then the expected
time to infection decreases and $\pi_{e}T_{e}$ becomes less than 1. This
suggests that as more paths are added between $i$ and $j$, there is less
incentive for the Spielman–Srivastava algorithm to select $e$ for the
sparsifier. This notion is similar to the concept of the “embeddedness” of an
edge from Schaub et al., which is defined as $(1-\pi_{e}T_{e})$ and conveys how
important an edge is in weighted cuts of that graph, or how much an edge acts
as a “bottleneck” on that network [8].
### 2.3 Contagion-Preserving Network Sparsifiers Methodology and Metrics
In order to gauge the success of a CPNS, the discrete-time SI processes on the
original network and on the CPNS are saved as indexed lists of strings. Each
string has length $N$, where $N$ is the number of vertices in the network.
Vertices are indexed $1$ through $N$, so that entry $n$ of the string denotes
vertex $n$ and is either $0$ or $1$, representing a susceptible or infected
vertex, respectively. The cardinality of
the indexed list of strings is $T$, where $T$ is the number of timesteps
designated to run the model. The first string of the indexed list corresponds
to the state of the SI model at timestep $1$, the second string corresponding
to timestep $2$, and so on. The probability of an edge with weight $w_{e}$
transmitting a contagion with probability of transmission $\gamma$ is given by
$\pi_{e}=1-(1-\gamma)^{w_{e}}$
Because we are utilizing a discrete-time SI model, a contagion might be
transmitted to a new vertex by two or more of its edges simultaneously, i.e.
on the same time step. In this case, the edge that has the opportunity to
transmit first is chosen uniformly at random.
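A minimal sketch of one step of this discrete-time SI process, using the transmission rule $\pi_{e}=1-(1-\gamma)^{w_{e}}$ above. The adjacency-list layout and function name are illustrative; for the SI state itself, the uniform tie-break between simultaneous transmitters does not change which vertices become infected, so it is left implicit here.

```python
# One discrete-time SI step: every infected vertex independently attempts
# to transmit along each edge with probability pi_e = 1 - (1 - gamma)^w_e.
import random

def si_step(adj, infected, gamma, rng=random):
    """adj: dict vertex -> list of (neighbor, weight). Returns new infected set."""
    newly = set()
    for u in infected:
        for v, w in adj[u]:
            if v not in infected:
                pi = 1.0 - (1.0 - gamma) ** w   # edge transmission probability
                if rng.random() < pi:
                    newly.add(v)
    return infected | newly
```

With $\gamma=1$ every exposed neighbor is infected in one step, and with $\gamma=0$ the state never changes, which gives two easy sanity checks.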
We employ the following set of metrics between the original network and CPNS
contagion processes: average Hamming distance, mutual information score, and
fraction of infected vertices in the network. The SI model is examined through
time, where each metric is computed per time step. When contagion processes on
the initial network are compared to those on the CPNS, the same patient zero
is selected.
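The per-timestep metrics can be computed directly from two such state strings at the same timestep. A minimal sketch with our own helper names, where mutual information is taken over the joint distribution of vertex states in the two runs:

```python
# Per-timestep comparison metrics between two state strings of '0'/'1'
# characters of equal length (one from the original network, one from the CPNS).
import math
from collections import Counter

def hamming(a, b):
    """Number of vertices whose infection state differs."""
    return sum(x != y for x, y in zip(a, b))

def mutual_information(a, b):
    """Mutual information (in nats) of the joint vertex-state distribution."""
    n = len(a)
    joint = Counter(zip(a, b))
    pa, pb = Counter(a), Counter(b)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        # pxy / (p(x) p(y)) with counts converted to probabilities
        mi += pxy * math.log(pxy * n * n / (pa[x] * pb[y]))
    return mi
```

Identical state strings give mutual information equal to the entropy of the state distribution, while independent-looking states give values near zero.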
Because the SI model is stochastic, the purpose of the CPNS is not to
precisely mimic the progress of any one run of the epidemic: even independent
runs of the SI model on the original network will vary and have some typical,
nonzero Hamming distance and mutual information. Then, it is appropriate for
the CPNS to be evaluated on its preservation of average metrics over multiple
runs. Therefore, we begin by calculating a baseline. This baseline is computed
by averaging the respective metric over multiple runs on the original network.
Additionally, we pick a small number of CPNS to generate and run multiple runs
of the SI model and take the average of the Hamming distance, mutual
information, and fraction of infected. To ensure that the CPNS is robust, we
pick uniformly at random one patient zero to begin the simulation, keeping the
same patient zero for both the runs on the original network and the CPNS. For
each average metric, a $95\%$ confidence interval is also calculated. An
effective CPNS will remain “close” to the given metric’s baseline while
matching its confidence interval.
To better determine if an improved metric score produced by a CPNS is due to
the addition of edges important to contagion spread or is merely due to the
addition of more edges to the CPNS, we include a null model with which to
compare. The null model uses the same framework as the Spielman–Srivastava
algorithm, but samples uniformly from the set of edges with replacement. It
will be referred to as “uniform sampling.” If the Spielman–Srivastava algorithm is
selecting edges of relevance to contagion spread, then the Spielman–Srivastava
CPNS should perform better than the corresponding uniform sampling
CPNS while also selecting fewer edges. For easy comparison across multiple
types of networks with a varying number of edges and to control edge number
between the uniform sampling and SS sampling procedures, a CPNS with $25\%$,
$50\%$ and $75\%$ of the total edges will be created for each network using
both sampling procedures.
### 2.4 Epidemic Edge Importance and Networks
To investigate the relationship between effective resistance, the
Spielman–Srivastava sparsification algorithm, and the spread of contagion, we
introduce the measure of epidemic edge importance and an infection spanning
tree. An infection spanning tree is created by simulating an SI contagion
process from a patient zero until all vertices of the same component as
patient zero are infected, keeping track of edges that transmit the contagion.
Those edges which transmit the contagion make up the infection spanning tree.
This process is performed over multiple runs, considering all possible patient
zeros in the given network; the probability that an edge is found in any given
infection spanning tree is termed the epidemic edge importance. The
probability that an edge is selected by the Spielman–Srivastava algorithm,
$w_{e}R_{e}$, and epidemic edge importance of all edges are normalized and
organized in a Q-Q scatter plot, whereby the Pearson correlation coefficient
can be used as a quantitative measure of similarity between the two metrics of
edge importance.
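The construction above can be sketched as follows, assuming a connected graph with $\gamma>0$ so every run saturates; the names and data layout are illustrative, and here the uniform tie-break between simultaneous transmitters is made explicit because it determines which edge enters the infection spanning tree.

```python
# Estimate epidemic edge importance: run the SI process to saturation from
# each patient zero, record which edge first infects each vertex (ties broken
# uniformly at random), and count how often each edge appears across trees.
import random
from collections import Counter

def epidemic_edge_importance(adj, gamma, runs_per_seed=10, rng=random):
    """adj: dict vertex -> list of (neighbor, weight). Returns edge -> frequency."""
    counts, trees = Counter(), 0
    for seed in adj:
        for _ in range(runs_per_seed):
            infected, tree = {seed}, set()
            while len(infected) < len(adj):
                attempts = {}  # vertex -> edges that transmit to it this step
                for u in infected:
                    for v, w in adj[u]:
                        pi = 1.0 - (1.0 - gamma) ** w
                        if v not in infected and rng.random() < pi:
                            attempts.setdefault(v, []).append(frozenset((u, v)))
                for v, edges in attempts.items():
                    tree.add(rng.choice(edges))   # uniform tie-break
                    infected.add(v)
            counts.update(tree)
            trees += 1
    return {e: c / trees for e, c in counts.items()}
```

On a path graph with $\gamma=1$, both edges appear in every infection spanning tree regardless of the patient zero, so both get importance 1.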
To explore the notion of edge importance as it relates to effective
resistance, we test this methodology on four random networks and a real world
airline network. The four random networks are a random network drawn from the
configuration model with exponential logarithmic degree distribution,
stochastic block model, complete network with edge weights drawn from a normal
distribution, and a complete network with edge weights drawn from a power law
distribution. The configuration network and stochastic block model each have
$500$ vertices and both complete networks have $100$ vertices, denoted
$K_{100}$. The configuration network and stochastic block model both have all
edge weights set to one. The airline network (AirNet) contains $500$ vertices
corresponding to the $500$ airports with the most traffic during the year 2002
in the United States [2]. Edge weights correspond to the total number of seats
passing between a pair of airports [2].
## 3 Results
### 3.1 Contagion-Preserving Network Sparsifiers
We wish to check if the Spielman–Srivastava algorithm CPNS is close to the
dynamics on the original network for average Hamming distance, mutual
information, and fraction of vertices infected by the SI model as a function
of time. Because the SI model is stochastic, the objective is not for the
Hamming distance to be zero; rather, it should be comparable to the output of
multiple independent runs on the original network, and likewise for both
mutual information and fraction of infected. For comparison, we also
measure these quantities for CPNS with the same percentage of total edges produced
by uniform sampling.
Figure 1: Comparison of CPNS performance on a configuration network with
degree list generated from an exponential logarithmic distribution. From top
to bottom, the plot displays the Hamming distance, mutual information, and
fraction of infected vertices. On the left are the uniform sampling CPNS,
termed ”Uniform Sampling”, and the right the Spielman–Srivastava CPNS, called
”SS Sampling.” The baseline is shown in purple, $25\%$ edge sparsifier in red,
$50\%$ in green, and $75\%$ in blue, with shaded region in each color
representing the $95\%$ confidence interval.
On the configuration network with an exponential-logarithmic degree
distribution, the $25\%$ and $50\%$ uniform and Spielman–Srivastava sampling
CPNS are comparable across all metrics (Figure 1). The $75\%$
Spielman–Srivastava sampling CPNS performs better than the uniform sampling
CPNS, adhering closer to the baseline for all metrics (Figure 1). It should be
noted that the $75\%$ uniform sampling CPNS and $50\%$ Spielman–Srivastava
sampling CPNS have lower than baseline Hamming distances (Figure 1). This
implies that variation found when the SI model was run on the original network
was lowered by the CPNS. This could be disadvantageous for the SI model on the
CPNS if it wishes to capture the average dynamics found on the original
network.
For the stochastic block model in Figure 2, the three CPNS of varying edge
number for the uniform sampling and SS sampling are comparable. Each of the
CPNS adhere closely to the baseline, with the exception of both the uniform
sampling and SS sampling $25\%$ CPNS, for both Hamming distance and fraction
of infected. No CPNS correctly captured the average mutual information
dynamics through time (Figure 2).
Figure 2: Comparison of CPNS performance on a stochastic block model with four
communities of equal size. From top to bottom, the plot displays the Hamming
distance, mutual information, and fraction of infected vertices. On the left
are the uniform sampling CPNS and the right the Spielman–Srivastava CPNS. The
baseline is shown in purple, $25\%$ edge sparsifier in red, $50\%$ in green,
and $75\%$ in blue, with shaded region in each color representing the $95\%$
confidence interval.
Likewise, for a complete graph with 100 vertices, $K_{100}$, with edge weights
drawn from a normal distribution, the three CPNS of varying edge number for
the uniform sampling and Spielman–Srivastava sampling are similar, staying
close to the baseline. The only exception is the $75\%$ SS sampling CPNS,
which has a higher Hamming distance than the baseline in the middle timesteps
of the SI simulation (Figure 3). However, this elevated average Hamming
distance does not affect the $75\%$ Spielman–Srivastava sampling CPNS’s
performance with either mutual information or the fraction of infected through
time. The $75\%$ Spielman–Srivastava sampling CPNS is closer to the mutual
information baseline than the corresponding uniform sampling CPNS.
Figure 3: Comparison of CPNS performance on a complete network with $100$
vertices and edge weights drawn from a normal distribution. From top to
bottom, the plot displays the Hamming distance, mutual information, and
fraction of infected vertices. On the left are the uniform sampling CPNS,
termed "Uniform Sampling", and on the right the Spielman–Srivastava CPNS, called
"SS Sampling." The baseline is shown in purple, $25\%$ edge sparsifier in red,
$50\%$ in green, and $75\%$ in blue, with shaded region in each color
representing the $95\%$ confidence interval.
Additionally, the uniform and Spielman–Srivastava sampling CPNS perform
similarly on the $K_{100}$ network with edge weights drawn from a power
distribution. The $25\%$ and $75\%$ SS sampling are closer to the baseline
than the uniform sampling $25\%$ and $75\%$ CPNS (Figure 4). However, the
$50\%$ Spielman–Srivastava sampling CPNS performs worse with mutual
information as a metric than all other CPNS. Lastly, when compared to the
uniform sampling CPNS, it appears that the Spielman–Srivastava sampling CPNS
is closer to the fraction of infected baseline through time.
Figure 4: Comparison of CPNS performance on a complete network with $100$
vertices and edge weights drawn from a power law distribution. From top to
bottom, the plot displays the Hamming distance, mutual information, and
fraction of infected vertices. On the left are the uniform sampling CPNS,
termed "Uniform Sampling", and on the right the Spielman–Srivastava CPNS, called
"SS Sampling." The baseline is shown in purple, $25\%$ edge sparsifier in red,
$50\%$ in green, and $75\%$ in blue, with shaded region in each color
representing the $95\%$ confidence interval.
Lastly, we examine a real-world airline network, AirNet. It is notable
that neither the uniform nor Spielman–Srivastava sampling CPNS fully capture
the SI dynamics on AirNet (Figure 5). As measured by Hamming distance, it appears that
uniform sampling produced more effective CPNS than the Spielman–Srivastava
sampling CPNS. Additionally, both $50\%$ uniform and Spielman–Srivastava CPNS
and the $75\%$ Spielman–Srivastava CPNS have a larger than baseline fraction
of infected through time. Only both $25\%$ CPNS correctly captured the
fraction of infected through time. However, as both $25\%$ CPNS performed
poorly in Hamming distance and mutual information, regardless of adherence to
the baseline, both the uniform and Spielman-Srivastava sampling $25\%$ CPNS
fail to capture the totality of dynamics from AirNet.
Figure 5: Comparison of CPNS performance on an airline network, AirNet,
containing the top $500$ airports in the United States in 2002, with edge
weights corresponding to the number of seats passing between pairs of
airports. From top to bottom, the plot displays the Hamming distance, mutual
information, and fraction of infected vertices. On the left are the uniform
sampling CPNS, termed "Uniform Sampling", and on the right the
Spielman–Srivastava CPNS, called "SS Sampling." The baseline is shown in
purple, $25\%$ edge sparsifier in red, $50\%$ in green, and $75\%$ in blue,
with shaded region in each color representing the $95\%$ confidence interval.
### 3.2 Comparing Effective Resistance and Epidemic Edge Importance
The Q-Q plots showing the similarity of epidemic edge importance and
$w_{e}R_{e}$ show relatively good correlation for the configuration network
$(r=0.93)$, the stochastic block model $(r=0.8)$, and $K_{100}$ with edge
weights selected from a normal distribution $(r=0.96)$ (Figure 6). However,
AirNet has poor correlation, with $r=0.067$ (Figure 6). For AirNet,
$w_{e}R_{e}$ undervalues the majority of edges deemed important by epidemic
edge importance and overvalues certain select edges (Figure 6). When
considering AirNet, this discrepancy between $w_{e}R_{e}$ and epidemic edge
importance seems to be because the majority of highly important $w_{e}R_{e}$
edges do not coincide with the edges epidemic edge importance deems as
important (Figure 6).
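The $r$ values reported here are ordinary Pearson correlation coefficients between the two normalized edge-importance vectors; for reference, a minimal stdlib-only version:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Here `x` would hold the normalized epidemic edge importance of each edge and `y` the corresponding normalized $w_{e}R_{e}$.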
Figure 6: Displayed are the Q-Q plots of the (a) complete network with edge
weights from a normal distribution, (b) stochastic block model, (c) AirNet,
and (d) configuration network with degree drawn from an exponential-
logarithmic distribution. The x-axis corresponds to a normalized epidemic edge
importance of edge $e$ and the y-axis to a normalized $w_{e}R_{e}$. The value
$r$ represents the Pearson correlation coefficient.
(a) $\gamma=3\times 10^{-5}$
(b) $\gamma=3\times 10^{-3}$
Figure 7: Q-Q Plot of epidemic edge importance and $w_{e}R_{e}$ at two
different probabilities of transmission, $\gamma$, resulting in two Pearson
correlation coefficient values: (a) $r=0.99$ and (b) $r=0.51$.
To further investigate the relationship between epidemic edge importance and
$w_{e}R_{e}$, two visualizations of AirNet were generated using the NetworkX
package in Python [4]: one with edge color dependent on $w_{e}R_{e}$ and another with
edge color dependent on epidemic edge importance (Figure 8). If an edge has
high metric importance, the edge will be colored red. The generation of the
two network visualizations suggests a key difference: a subset of edges
connecting the core to a singular vertex, which is connected to the periphery
of the network, are marked important by epidemic edge importance while
$w_{e}R_{e}$ does not mark the same subset of edges as important. Instead, the
$w_{e}R_{e}$ measure of importance more evenly picks edges throughout the
network, with only a few edges in the core of the network being marked as
especially important.
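The quantity $w_{e}R_{e}$ requires the effective resistance of each edge. As a hedged illustration: effective resistance between two vertices equals the potential difference induced by a unit current between them, obtainable by grounding one node of the weighted graph Laplacian and solving the resulting linear system. The pure-Python sketch below (naive Gaussian elimination, suitable only for small graphs) is illustrative, not the implementation used in this work.

```python
def effective_resistance(n, edges, u, v):
    """R_uv of a connected weighted graph: ground node n-1 and solve the
    reduced Laplacian system for the potentials of a unit u->v current."""
    # Build the full weighted Laplacian.
    L = [[0.0] * n for _ in range(n)]
    for a, b, w in edges:
        L[a][a] += w
        L[b][b] += w
        L[a][b] -= w
        L[b][a] -= w
    # Ground node n-1: drop its row and column.
    m = n - 1
    A = [row[:m] for row in L[:m]]
    rhs = [0.0] * m
    if u < m:
        rhs[u] += 1.0   # unit current injected at u
    if v < m:
        rhs[v] -= 1.0   # unit current extracted at v
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution.
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = rhs[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))
        x[r] = s / A[r][r]
    pot = x + [0.0]  # the grounded node has potential 0
    return pot[u] - pot[v]
```

For an unweighted triangle, a direct edge (1 ohm) in parallel with a two-edge path (2 ohms) gives $R_{e}=2/3$, which the solver reproduces.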
(a) AirNet: Epidemic Edge Importance
(b) AirNet: $w_{e}R_{e}$
Figure 8: A visualization of the air traffic network AirNet showing (a)
epidemic edge importance and (b) $w_{e}R_{e}$ . Edges are colored such that
red means an edge is more important and blue means an edge is less important
for each respective metric.
Lastly, on the $K_{100}$ network with edge weights drawn from a power law
distribution, epidemic edge importance and $w_{e}R_{e}$ are poorly correlated
($r=0.51$) for a larger probability of transmission, $\gamma=3\times 10^{-3}$,
and well correlated ($r=0.99$) for a probability of transmission that is
sufficiently small, $\gamma=3\times 10^{-5}$ (Figure 7). Lowering $\gamma$ has
two consequences. First, lowering $\gamma$ lowers the possibility that a
vertex can be infected simultaneously by two or more of its edges within our
SI model. Second, the SI discrete-time model moves closer to a continuous-time
model as $\gamma$ is lowered. This suggests that a continuous-time SI model
would potentially produce better correlation between $w_{e}R_{e}$ and epidemic
edge importance.
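The first consequence can be quantified: with unit edge weights, a susceptible vertex with $k$ infected neighbors is infected in one step with probability $1-(1-\gamma)^{k}$, while two or more of its edges fire simultaneously with probability $1-(1-\gamma)^{k}-k\gamma(1-\gamma)^{k-1}$, which shrinks quadratically in $\gamma$. A quick check:

```python
def p_any(k, gamma):
    """Probability that at least one of k independent unit-weight edges
    transmits to a susceptible vertex in a single step."""
    return 1 - (1 - gamma) ** k

def p_multi(k, gamma):
    """Probability that two or more of the k edges fire in the same step
    (binomial tail beyond zero or one success)."""
    return 1 - (1 - gamma) ** k - k * gamma * (1 - gamma) ** (k - 1)
```

For $k=5$, the fraction of infection events caused by simultaneous firings, `p_multi(5, g) / p_any(5, g)`, drops from roughly $6\times 10^{-3}$ at $\gamma=3\times 10^{-3}$ to roughly $6\times 10^{-5}$ at $\gamma=3\times 10^{-5}$, consistent with the discrete-time model approaching continuous time.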
## 4 Discussion
### 4.1 Contagion-Preserving Network Sparsifiers and the Spielman–Srivastava
Sparsification Algorithm
To some extent, the Spielman–Srivastava sparsification algorithm successfully
created effective CPNS to preserve average SI dynamics across the three
metrics with all four random networks. With respect to the configuration
network, the Spielman–Srivastava sampling $75\%$ CPNS best adheres to the
baseline, allowing for a removal of $25\%$ of the original edges while
maintaining approximately the same average SI dynamics as measured by the
average Hamming distance, mutual information, and fraction of infected. One
quality worth noting is that both the $75\%$ uniform sampling CPNS and the
$50\%$ Spielman–Srivastava sampling CPNS have a smaller Hamming distance than
the baseline, suggesting that both CPNS reduced the run-to-run variance
present in SI simulations on the original network. However, by removing some
of this variance, both CPNS may be less faithful to the original network in a
probabilistic sense, i.e., in generating similar distributions of
trajectories.
For the stochastic block model, $50\%$ of the edges could be removed with both
the uniform and Spielman–Srivastava sampling, with both $50\%$ CPNS staying
close to the baseline in each metric except mutual information where it was
lower for both sampling procedures. Both $K_{100}$ networks see effective
uniform and Spielman–Srivastava sampling, with the $25\%$ CPNS performing
adequately. This allows the removal of $75\%$ of edges in both networks while
retaining average SI dynamics. The two aspects of note are the relatively high
Hamming distance for the $75\%$ Spielman–Srivastava sampling CPNS on $K_{100}$
with edge weights from a normal distribution and the SS sampling procedure
performing marginally better on $K_{100}$ with edge weights from a power law
distribution when viewed through the fraction of infected metric. However,
while moderately successful CPNS were produced for the four random networks,
the Spielman–Srivastava algorithm only unambiguously performed better than the
uniform sampling on the configuration network. Moreover, the networks used in
this research were limited to small sizes by the need to run the SI model many
times on the original network.
This does not necessarily imply that the Spielman–Srivastava performed poorly
at generating CPNS. Rather, we suggest that this in part could be explained as
a byproduct of the respective network structures. The stochastic block model
and complete networks are both well connected, while the configuration network
contains more vertices of low degree. Because uniform sampling performs
comparably to Spielman–Srivastava sampling on these networks, with both
successfully preserving average SI dynamics, edges within those networks must
carry nearly the same level of importance to the epidemic. Equivalently, no
specific edge is critical: it does not matter which edges are chosen to create
the CPNS for those networks, only that enough edges are retained.
Nevertheless, the relative
success of the Spielman–Srivastava sampling procedure on the configuration
network when compared to the uniform sampling procedure suggests that there
are some instances where the Spielman–Srivastava algorithm will succeed and
the uniform sampling procedure will fall short.
The airline network AirNet was the only network for which all CPNS were
ineffective. This may be due to how the Spielman–Srivastava algorithm modifies
edge weights to compensate for the reduced number of edges. The
Spielman–Srivastava CPNS may be ensuring that certain vertices usually
infected on the original network are almost always infected on the CPNS,
inflating the mutual information score above the baseline by removing
variation innate to the SI model on the original network. Similarly, the
modified edge weights may account for the $50\%$ uniform and
Spielman–Srivastava CPNS and the $75\%$ Spielman–Srivastava CPNS having a
higher than baseline fraction of infected, with the Spielman–Srivastava
algorithm assigning higher weights to certain edges on the CPNS than are found
on the original network [11]. In this way, the Spielman–Srivastava algorithm
may cause the contagion to spread faster, producing a higher than baseline
fraction of infected and causing the CPNS Hamming distance to deviate from the
baseline.
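The reweighting discussed above can be sketched with the Spielman–Srivastava sampling rule: draw $q$ edges with replacement, with probability $p_{e}\propto w_{e}R_{e}$, and give each sampled copy weight $w_{e}/(qp_{e})$, so each edge's sparsifier weight is unbiased in expectation [9]. The sketch below assumes the effective resistances are precomputed; `ss_sample` is an illustrative name, not the authors' implementation.

```python
import random

def ss_sample(edges, resistances, q, seed=0):
    """Spielman–Srivastava-style sampling: q draws with p_e proportional to
    w_e * R_e; each draw of edge e adds weight w_e / (q * p_e), so
    E[sparsifier weight of e] = w_e."""
    rng = random.Random(seed)
    scores = [w * r for (_, _, w), r in zip(edges, resistances)]
    total = sum(scores)
    probs = [s / total for s in scores]
    sparse = {}
    for _ in range(q):
        i = rng.choices(range(len(edges)), weights=probs)[0]
        u, v, w = edges[i]
        sparse[(u, v)] = sparse.get((u, v), 0.0) + w / (q * probs[i])
    return sparse
```

On an unweighted triangle every edge has $R_{e}=2/3$, so the probabilities are uniform and the total sparsifier weight equals the original total weight regardless of which edges are drawn.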
The relative success of both uniform sampling and Spielman–Srivastava sampling
procedures speaks to the effectiveness of random sampling in preserving
certain topological features of a network that a deterministic algorithm would
not [11]. Consider the example of a network with groups that contain many
intra-group edges and few inter-group edges such that edges between groups
have lower weight than edges within groups. In this scenario, one common
deterministic strategy would be to simply remove edges below a certain weight
threshold; this would cut off the communities from one another, resulting in a
poor CPNS. In contrast, a random sampling procedure (either uniform or
Spielman–Srivastava) would most likely retain a few of the inter-group edges
and produce a better performing CPNS.
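This thought experiment can be checked on a toy network of two dense communities joined by a single low-weight bridge: a weight threshold removes the only bridge and disconnects the graph, while uniform sampling keeps it with high probability. This is an illustrative stdlib-only sketch, not one of the networks used in the experiments.

```python
import random

def connected(n, edges):
    """DFS connectivity check on an undirected weighted edge list."""
    adj = {i: [] for i in range(n)}
    for u, v, _ in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen) == n

# Two 4-cliques with weight-2 edges, joined by a single weight-1 bridge.
groups = ([0, 1, 2, 3], [4, 5, 6, 7])
intra = [(u, v, 2.0) for grp in groups
         for i, u in enumerate(grp) for v in grp[i + 1:]]
edges = intra + [(0, 4, 1.0)]  # 13 edges total

# Thresholding on weight drops the only bridge, disconnecting the graph;
# a uniform sample keeping 10 of the 13 edges retains the bridge with
# probability 10/13.
thresholded = [e for e in edges if e[2] > 1.0]
sampled = random.Random(0).sample(edges, k=10)
```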
### 4.2 Epidemic Edge Importance and Probability of Selection
The relatively high correlation of $w_{e}R_{e}$ with epidemic edge importance
– probability of that same edge appearing in an infection spanning tree – on
the unweighted configuration and stochastic block model networks suggests that
the Spielman–Srivastava algorithm is selecting edges with high importance to
the SI model. This is especially notable in the configuration network, which
is sparser. Similarly, $K_{100}$ with edge weights from a normal distribution
also has high correlation between epidemic edge importance and $w_{e}R_{e}$.
Particularly, for the configuration network, edges connecting low degree
vertices have epidemic edge importance and probability of selection close to
$1$. Yet, AirNet has relatively low correlation between epidemic edge
importance and probability of selection. For AirNet, probability of selection
undervalues many edges with high epidemic edge importance, suggesting that the
Spielman–Srivastava algorithm is overvaluing certain edges that are not as
important to disease spread. Additionally, the correlation between
$w_{e}R_{e}$ and epidemic edge importance may be dependent on the probability
of transmission $\gamma$: if $\gamma$ is small enough, the differences in edge
weight become more pronounced, and the bottlenecks marked important by
effective resistance also have higher epidemic edge importance, as supported
by Figure 7.
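Epidemic edge importance, the probability that an edge appears in the infection spanning tree, can be estimated by Monte Carlo: run the SI model from random patient zeros and record, for each newly infected vertex, the edge that transmitted to it. The sketch below is a simplified stdlib-only illustration (ties between edges firing in the same step are broken by edge-list order, a simplifying assumption), not the implementation used in this work.

```python
import random

def edge_importance(n, edges, gamma, runs, seed=0):
    """Monte Carlo estimate of epidemic edge importance: the fraction of SI
    runs (uniform random patient zero) in which each edge carries the
    infection to a newly infected vertex, i.e. appears in the infection
    tree. Assumes a connected graph so every run terminates."""
    rng = random.Random(seed)
    counts = {(u, v): 0 for u, v, _ in edges}
    for _ in range(runs):
        infected = [False] * n
        infected[rng.randrange(n)] = True
        used = set()
        while not all(infected):
            newly = {}
            for u, v, w in edges:
                if infected[u] != infected[v] and rng.random() < min(1.0, gamma * w):
                    target = v if infected[u] else u
                    # First edge (in list order) to reach a target wins the tie.
                    newly.setdefault(target, (u, v))
            for target, e in newly.items():
                infected[target] = True
                used.add(e)
        for e in used:
            counts[e] += 1
    return {e: c / runs for e, c in counts.items()}
```

A bridge that is the only path between two parts of the network appears in every infection tree, so its estimated importance is exactly $1$, matching the discussion of periphery edges below.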
Because epidemic edge importance is built on the SI model, the metric depends
only on the infection rate and the topological structure of the network. This
is in contrast to something like the SIR model,
where it would be dependent on both the infection and recovery rates. One
consequence of this is that for vertices on the periphery of the network, the
SI epidemic will eventually infect them where an SIR metric of edge importance
may or may not. This is important when considering all possible patient zeros:
an SIR-based measure of epidemic edge importance may bias towards the core of
the network, undervaluing potentially important edges on the periphery. The
even coverage of the SI-based measure is preferable when considering potential
intervention strategies that necessitate interdiction of an edge. Note that
any edge which exists as the only path from one part of the network to another
is assigned epidemic edge importance $1$ by our method, as well as an
effective resistance of $1$. If this edge leads to only a single isolated
vertex on the periphery of the network, this may seem counterintuitive, since
a typical epidemic might not reach this vertex. However, in the SI model, all
vertices in the connected component containing patient zero eventually become
infected. Moreover, this isolated vertex might itself be patient zero, in
which case its single edge is crucial.
In general, the idea of epidemic edge importance depends on the details of the
epidemic model and parameters used; the probability that any given edge
transmits a contagion depends on the specifics of the contagion. For instance,
an "SIR epidemic edge importance" could also be defined. We chose not to examine
SIR epidemic edge importance, as this measure of edge importance depends on
two parameters, rate of infection _and_ recovery, instead of one, rate of
infection. We focus on the SI model version of epidemic edge importance
because it is a simple measure of whether the contagion is likely to spread by
an edge if the epidemic reaches (or begins at) either of its endpoints. In
this way, the SI version of epidemic edge importance is robust.
Even so, AirNet had poor correlation between $w_{e}R_{e}$ and epidemic edge
importance. This may be because even if there are many alternate paths out of
the core of the network, those paths may be long or consist of low-weight
edges. Then, especially in a discrete-time model where $\gamma$ is fairly
large, the epidemic will typically cross to the other part of the network
before it has time to traverse the alternate paths. This suggests a tension
between vertex centrality and effective resistance of an edge in this
particular network, where an edge that is connected to a highly central vertex
has high epidemic edge importance but low effective resistance. We see this in
the subset of edges with high epidemic edge importance connecting a vertex
which links the core of the network with the periphery of the network. The
fact that AirNet is the only network to have poor correlation of $w_{e}R_{e}$
and epidemic edge importance and the only network to produce poorly performing
CPNS implies that the correlation of $w_{e}R_{e}$ with epidemic edge
importance may be a good predictor of CPNS effectiveness.
### 4.3 Future Outlook
The Spielman–Srivastava algorithm was shown to be reasonably effective at
producing reliable and robust contagion-preserving network sparsifiers on an
assortment of different networks, allowing for the removal of up to $75\%$ of
the edges in certain networks. To some extent, this means that the
Spielman–Srivastava algorithm, which approximately preserves the graph
Laplacian and therefore linear dynamics, can also approximately preserve the
nonlinear behavior of a contagion. However, the fact that even a simple
uniform sampling procedure also works fairly well reveals that in some
networks there is not a strong enough difference between the importance of
different edges to illustrate the specific virtues or faults of the
Spielman–Srivastava algorithm within the context of preserving average
epidemic dynamics. Moreover, both the uniform and Spielman–Srivastava sampling
procedures failed to generate effective contagion-preserving network
sparsifiers for AirNet, demonstrating potential flaws. Therefore, a greater
variety of networks tailored to illustrate the workings of the
Spielman–Srivastava sampling procedure in the context of an epidemic should be
considered.
Additionally, for a majority of the networks considered we found a strong
correlation between SI epidemic edge importance and the importance,
$w_{e}R_{e}$, assigned by the Spielman–Srivastava algorithm, suggesting that
effective resistance operates as a good approximation of the importance of any
given edge in contagion spread. AirNet, the only network for which all
Spielman–Srivastava CPNS were ineffective, was also the only network with
poorly correlated $w_{e}R_{e}$ and epidemic edge importance, suggesting that
the correlation between $w_{e}R_{e}$ and epidemic edge importance may act as
an indicator of Spielman–Srivastava CPNS performance. We
also found that this correlation increases when the parameter $\gamma$ in a
discrete-time simulation is reduced, or equivalently when we approach a
continuous-time version of the SI model, for $K_{100}$ where the edge weights
were drawn from a power law distribution. Still, the fact that the real-world
airline network did not have well correlated $w_{e}R_{e}$ and epidemic edge
importance shows that a greater understanding of the relationship between the
two metrics is needed. To better explore the notion of epidemic edge
importance, a transition from a discrete time to a continuous time SI model is
suggested. More comprehensively, further work is needed to understand the
importance of an edge within the context of contagion spread. Intervention
strategies acting on vertices, such as vaccinations to protect a vertex, and
behavioral interventions, through the interdiction of edges, are critical to
controlling and containing disease spread [14].
Much future work remains to address the problem of epidemic sparsifiers. A
deeper exploration of network sparsifiers which approximately preserve the
simplest case of SI dynamics on larger and more complex real-world networks is
needed, as well as an exploration of SIR and more complex epidemic models such
as SEIR. Additionally, while there are appealing parallels between important
edges in epidemics and edges with high effective resistance, a better notion
of the importance of an edge in the context of an epidemic may exist.
Likewise, other notions of edge importance using more complex models than SI
should be explored. In general, a deeper understanding of the relationship
between linear flows and epidemic processes on networks would greatly benefit
this course of research. The question of what edge importance means in the
context of contagion spread is critical to producing effective
contagion-preserving network sparsifiers. Lastly, broader testing of other
sparsification algorithms within the same class of random sampling sparsifiers
to approximately preserve average contagion dynamics is recommended. This
leaves the issue of producing effective contagion-preserving network
sparsifiers an open question.
## Acknowledgments
This work was carried out as part of an REU (Research Experience for
Undergraduates) program at the Santa Fe Institute under the mentorship of
Cristopher Moore and Maria Riolo, funded by NSF grants OAC-1757923 and
IIS-1838251. We are also grateful to Samuel Scarpino, Sayandeb Basu, and
Andrew Kramer for their helpful conversations.
## References
* [1] Enrico Bozzo and Massimo Franceschet. Effective and efficient approximations of the generalized inverse of the graph Laplacian matrix with an application to current-flow betweenness centrality. CoRR, abs/1205.4894, 2012.
* [2] Vittoria Colizza, Romualdo Pastor-Satorras, and Alessandro Vespignani. Reaction–diffusion processes and metapopulation models in heterogeneous networks. Nature Physics, 3(4):276–282, 2007.
* [3] Ayalvadi Ganesh, Laurent Massoulié, and Don Towsley. The effect of network topology on the spread of epidemics. In Proceedings IEEE 24th Annual Joint Conference of the IEEE Computer and Communications Societies, volume 2, pages 1455–1466. IEEE, 2005.
* [4] Aric Hagberg, Pieter Swart, and Daniel S Chult. Exploring network structure, dynamics, and function using networkx. Technical report, Los Alamos National Lab.(LANL), Los Alamos, NM (United States), 2008.
* [5] M Elizabeth Halloran, Alessandro Vespignani, Nita Bharti, Leora R Feldstein, KA Alexander, Matthew Ferrari, Jeffrey Shaman, John M Drake, Travis Porco, Joseph NS Eisenberg, et al. Ebola: mobility data. Science (New York, NY), 346(6208):433, 2014.
* [6] Ioannis Koutis, Alex Levin, and Richard Peng. Improved spectral sparsification and numerical algorithms for SDD matrices, 2012.
* [7] Nuria Oliver, Bruno Lepri, Harald Sterly, Renaud Lambiotte, Sébastien Deletaille, Marco De Nadai, Emmanuel Letouzé, Albert Ali Salah, Richard Benjamins, Ciro Cattuto, et al. Mobile phone data for informing public health actions across the COVID-19 pandemic life cycle, 2020.
* [8] Michael T Schaub, Jörg Lehmann, Sophia N Yaliraki, and Mauricio Barahona. Structure of complex networks: Quantifying edge-to-edge relations by failure-induced flow redistribution. Network Science, 2(1):66–89, 2014.
* [9] Daniel A Spielman and Nikhil Srivastava. Graph sparsification by effective resistances. SIAM Journal on Computing, 40(6):1913–1926, 2011.
* [10] Daniel A Spielman and Shang-Hua Teng. Spectral sparsification of graphs. SIAM Journal on Computing, 40(4):981–1025, 2011.
* [11] Samarth Swarup, SS Ravi, MM Hassan Mahmud, and Kristian Lum. Identifying core network structure for epidemic simulations. https://nssac.bii.virginia.edu/~swarup/papers/swarup_etal_admi2016_sparsification.pdf, 2016.
* [12] Andreia Teixeira, Pedro Monteiro, João Carriço, Mario Ramirez, and A.P. Francisco. Spanning edge betweenness. volume 24, pages 27–31, 01 2013.
* [13] Andreia Sofia Teixeira, Francisco C. Santos, and Alexandre P. Francisco. Spanning Edge Betweenness in Practice, pages 3–10. Springer International Publishing, 2016.
* [14] Amy Wesolowski, Taimur Qureshi, Maciej F Boni, Pål Roe Sundsøy, Michael A Johansson, Syed Basit Rasheed, Kenth Engø-Monsen, and Caroline O Buckee. Impact of human mobility on the emergence of dengue epidemics in pakistan. Proceedings of the National Academy of Sciences, 112(38):11887–11892, 2015.
# HCT/HESP study of two carbon stars from the LAMOST survey ††thanks: Based on
data collected using HCT/HESP
J. Shejeelammal1, Aruna Goswami1, Jianrong Shi2,
1Indian Institute of Astrophysics, Koramangala, Bangalore 560034, India;
<EMAIL_ADDRESS>
2 CAS Key Laboratory of Optical Astronomy, National Astronomical
Observatories, Beijing 100101, China.
( Accepted —; Received —; in original form — )
###### Abstract
Carbon stars, enhanced in carbon and neutron-capture elements, provide a wealth
of information about the nucleosynthesis history of the Galaxy. In this work,
we present the first ever detailed abundance analysis of carbon star
LAMOSTJ091608.81+230734.6 and a detailed abundance analysis of neutron-capture
elements for the object LAMOSTJ151003.74+305407.3. Updates on the abundances
of elements C, O, Mg, Ca, Cr, Mn and Ni for LAMOSTJ151003.74+305407.3 are also
presented. Our analysis is based on high resolution spectra obtained using
Hanle Echelle Spectrograph (HESP) attached to the Himalayan Chandra Telescope
(HCT), IAO, Hanle. The stellar atmospheric parameters (Teff, log g,
microturbulence ${\zeta}$, metallicity [Fe/H]) are found to be (4820, 1.43, 1.62,
$-$0.89) and (4500, 1.55, 1.24, $-$1.57) for these two objects, respectively.
The abundance estimates of several elements, C, N, O, Na, $\alpha$-elements,
Fe-peak elements and neutron-capture elements Rb, Sr, Y, Zr, Ba, La, Ce, Pr,
Nd, Sm and Eu are presented. Our analysis shows the star
LAMOSTJ151003.74+305407.3 to be a CEMP-r/s star, and LAMOSTJ091608.81+230734.6
a CH giant. Using a parametric-model-based analysis, we have examined whether
the i-process model yields ([X/Fe]) of heavy elements could explain the
observed abundances of the CEMP-r/s star. The negative values obtained for the
neutron-density-dependent [Rb/Zr] ratio confirm former low-mass AGB companions for
both the stars. Kinematic analysis shows that LAMOSTJ151003.74+305407.3
belongs to the Galactic halo population and LAMOSTJ091608.81+230734.6 to the
disc population.
###### keywords:
stars:individual - stars: Carbon - stars: Abundances - stars: nucleosynthesis
## 1 Introduction
By allowing measurements of carbon and neutron-capture elements, the atmospheres of
less-evolved low-mass stars form a unique treasure trove of information
for astrophysicists probing the chemical evolution history of the Galaxy.
Thus, studies of metal-poor stars such as CH stars (Keenan 1942), along with
their more metal-poor counterparts, Carbon Enhanced Metal-Poor (CEMP) stars,
offer the best means to constrain the neutron-capture nucleosynthesis
processes, especially the nucleosynthesis occurring in the Asymptotic Giant
Branch (AGB) stars. The spectra of these peculiar stars show strong CH and C2
molecular bands and features due to enhanced neutron-capture elements compared
to the normal stars. They are characterized by C/O$>$ 1.
The CEMP stars are more metal-poor ([Fe/H]$<$$-$1) than the classical CH stars
(Lucatello et al. 2005, Aoki et al. 2007, Abate et al 2016, Hansen et al.
2016a, c) with [C/Fe]$>$1 (Beers & Christlieb 2005, Abate et al. 2016). They
were first identified among the Very Metal-Poor stars discovered in the
extensive spectroscopic survey to identify a large sample of most metal-poor
stars, the HK survey (Beers et al. 1985, 1992, 2007, Beers 1999), and later in a
number of successive surveys like Hamburg/ESO Survey (HES; Christlieb et al.
2001a, 2001b, Christlieb 2003), Sloan Digital Sky Survey (SDSS; York et al.
2000), Sloan Extension for Galactic Understanding and Exploration (SEGUE;
Yanny et al. 2009) etc. A number of other large sky survey programs in the
past were also dedicated to identify the Galactic carbon stars, for instance,
the First Byurakan Spectral Sky Survey (Gigoyan et al. 1998), the Automatic
Plate Measuring survey (Totten & Irwin 1998, Ibata et al. 2001), infrared
objective-prism surveys (Alksnis et al. 2001), Large Sky Area Multi-Object
Fiber Spectroscopic Telescope (LAMOST) pilot survey (Cui et al. 2012, Deng et
al. 2012, Zhao et al. 2012).
Beers & Christlieb (2005) put forward the first classification scheme for CEMP
stars and classified them into different sub-classes, CEMP-s, CEMP-r, CEMP-r/s
and CEMP-no depending on the level of enrichment of neutron-capture elements
Ba and Eu. A slight deviation from the original classification schemes have
been adopted by several authors (Aoki et al. 2007, Abate et al. 2016, Frebel
2018, Hansen et al. 2019). High-resolution spectroscopic analyses have shown
that, at present, about 80% of the CEMP stars are CEMP-s stars (Aoki et al.
2007) and about half of the CEMP-s stars are CEMP-r/s stars (Sneden et al.
2008, Käppeler et al. 2011, Bisterzo et al. 2011).
CH stars and CEMP-s/(r/s) stars belong to the main-sequence or giant phase of
stellar evolution. Hence, the observed over abundances of the carbon and
neutron-capture elements are attributed to an extrinsic origin. In the case of
CH and CEMP-s stars, enriched in s-process elements, the most accepted
scenario involves binary mass-transfer from an AGB companion. There exist a
number of proposed scenarios for the simultaneous r- and s- process enrichment
observed in CEMP-r/s stars (Jonsell et al. 2006 and references therein);
however, none of them could successfully reproduce the observed frequency and
high [hs/ls] ratio of CEMP-r/s stars (Abate et al. 2016). An intermediate
neutron-capture process (i-process) that operates with neutron densities in
between s- and r-process neutron densities had been invoked to explain the
observed abundances of CEMP-r/s stars. Hampel et al. (2016, 2019) could
successfully reproduce the observed abundance trend of several CEMP-r/s stars
considering this production scenario. The i-process was originally proposed by
Cowan & Rose (1977). Among the proposed scenarios for the nucleosynthetic
sites of the i-process are massive (5 - 10 M⊙) super-AGB stars (Doherty et
al. 2015; Jones et al. 2016), evolved low-mass stars (Herwig et al. 2011;
Hampel et al. 2019), low-mass, low-metallicity ([Fe/H] ${\leq}$ $-$3) stars
(Campbell & Lattanzio 2008; Campbell et al. 2010; Cruz et al. 2013; Cristallo
et al. 2016), and Rapidly Accreting White Dwarfs (Herwig et al. 2014;
Denissenkov et al. 2017). Clarkson et al. (2018) and Banerjee et al. (2018)
have suggested that massive (M ${\geq}$ 20 M⊙), metal-poor stars could also
play a role in the production of i-process elements. In spite of several
efforts, large uncertainties still exist regarding i-process nucleosynthesis
and the possible astrophysical sites of its occurrence (Frebel 2018; Koch et
al. 2019).
Long-term radial velocity monitoring studies have shown that the vast
majority of CH stars (McClure et al. 1980, McClure 1983, 1984, McClure &
Woodsworth 1990, Jorissen et al. 2016) and CEMP-s/rs stars (Lucatello et al.
2005, Starkenburg et al. 2014, Jorissen et al. 2016, Hansen et al. 2016c) are
most likely binaries, thus strongly favoring the binary mass transfer
scenario.
The fraction of CEMP stars in the Galactic halo is known to increase with
decreasing metallicity: $\sim$20% for [Fe/H]$<$$-$2 (Norris et
al. 1997, Rossi et al. 1999, 2005, Christlieb 2003, Cohen et al. 2005,
Marsteller et al. 2005, Frebel et al. 2006, Lucatello et al. 2006, Carollo et
al. 2012, Lee et al. 2013), $\sim$40% for [Fe/H]$<$$-$3 (Aoki et al. 2013, Lee
et al. 2013, Yong et al. 2013b), $\sim$75% for [Fe/H]$<$$-$4 (Lee et al. 2013,
Placco et al. 2014, Frebel & Norris 2015), making them important tools for
studies of the formation and evolution of the early Galactic halo.
Ji et al. (2016) have identified 894 new carbon stars from LAMOST DR2, which
contains almost four million medium-resolution (R$\sim$1800) stellar spectra,
based on measurements of multiple line indices. In this work, we have
carried out a detailed spectroscopic analysis of two carbon stars
LAMOSTJ091608.81+230734.6 and LAMOSTJ151003.74+305407.3 from Ji et al. (2016).
Observations and data reduction are presented in Section 2. The radial
velocities of the stars and the methodology used for the determination of the
stellar atmospheric parameters are presented in Section 3. The same section also
provides a brief discussion on the stellar mass determination. Section 4
provides a discussion on abundance uncertainties. Elemental abundance
determination is discussed in Section 5. In Section 6, we present the
kinematic analysis of the program stars, followed by the discussion on the
binary status of the stars in Section 7. Interpretations of abundance ratios
are presented in Section 8. A discussion on the individual stars, along with
a parametric-model-based analysis, is also given in Section 8. Conclusions are
drawn in Section 9.
## 2 OBSERVATIONS AND DATA REDUCTION
High-quality, high-resolution spectra of the objects LAMOSTJ091608.81+230734.6
and LAMOSTJ151003.74+305407.3 were obtained with the HESP (Hanle Echelle
SPectrograph) attached to the 2m Himalayan Chandra Telescope (HCT) operated by
Indian Astronomical Observatory, Hanle. The HESP spectra covers the wavelength
range 3530 - 9970 Å. The spectra of LAMOSTJ091608.81+230734.6 (Vmag = 10.4)
were obtained on April 4, 2018 at a spectral resolution
($\lambda/\delta\lambda$) $\sim$ 60,000, and the spectra of
LAMOSTJ151003.74+305407.3 (Vmag = 11.38) were obtained on May 23, 2018 at a
spectral resolution ($\lambda/\delta\lambda$) $\sim$ 30,000. For both
objects, three frames were acquired, each with an exposure time of 2700
seconds. The three frames were added to enhance the S/N ratio, and the
co-added spectrum was used for further analysis. The data were reduced using
the standard procedures in IRAF (Image Reduction and Analysis Facility,
distributed by the National Optical Astronomy Observatories, which is operated
by the Association of Universities for Research in Astronomy, Inc., under
contract to the National Science Foundation). The basic information on the
program stars is given in Table 1. Two sample spectra of
the stars in the wavelength region 5160 - 5190 Å are shown in Figure 1.
Figure 1: Sample spectra of the program stars in the wavelength region 5160
to 5190 Å. Table 1: Basic information of the program stars.
Star | RA (2000) | Dec. (2000) | B | V | J | H | K | Exposure (seconds) | Date of obs. | S/N (5500 Å) | S/N (7500 Å)
---|---|---|---|---|---|---|---|---|---|---|---
LAMOSTJ091608.81+230734.6 | 09 16 8.82 | +23 07 34.86 | 11.44 | 10.40 | 8.654 | 8.141 | 8.022 | 2700(3) | 04/04/2018 | 37.15 | 42.38
LAMOSTJ151003.74+305407.3 | 15 10 3.30 | +30 54 7.36 | 13.50 | 11.38 | 9.33 | 8.737 | 8.539 | 2700(3) | 23/05/2018 | 33.67 | 70.17
The number of frames taken is indicated in parentheses next to the exposure time.
## 3 ESTIMATION OF ATMOSPHERIC PARAMETERS AND RADIAL VELOCITY
A set of clean lines of several elements is used to calculate the radial
velocities of the program stars. While LAMOSTJ151003.74+305407.3 is found to
be a high velocity object with an estimated radial velocity of
$-$141.58$\pm$3.57 kms-1, the radial velocity of LAMOSTJ091608.81+230734.6 is
found to be 16.13$\pm$4.30 kms-1. The radial velocities of these two objects
are $-$145.2 and 17.3 kms-1 respectively, as noted from the SIMBAD
astronomical database (Gaia collaboration et al. 2018). For the star
LAMOSTJ151003.74+305407.3, our estimate differs by $\sim$4 kms-1 from the
SIMBAD value; this may indicate that the star is a binary.
The equivalent width measurements of a set of Fe I and Fe II lines are used to
derive the atmospheric parameters of the stars. The lines are selected such
that their equivalent widths and excitation potentials lie in the ranges 20 -
180 mÅ and 0.0 - 6.0 eV respectively. The photometric temperatures, estimated
using the temperature calibration equations of Alonso et al. (1999, 2001),
were used as initial guesses for deriving the stellar atmospheric parameters.
The final model atmosphere is obtained through an iterative process starting
from an initial model taken from the Kurucz grid of model atmospheres with no
convective overshooting (http://cfaku5.cfa.harvard.edu/). We made use
of the recent version MOOG2013 of the radiative transfer code MOOG (Sneden
1973) assuming Local Thermodynamic Equilibrium (LTE) for the analysis.
The temperature which gives nearly zero slope between the abundance and
excitation potential of Fe I lines is taken as the effective temperature. The
microturbulent velocity at this fixed effective temperature is then determined
such that there is no dependence of the Fe I abundance on equivalent width.
With these temperature and microturbulent velocity estimates, the surface
gravity is determined by requiring that the abundances obtained from Fe I and
Fe II lines be nearly the same. Figure 2 shows the abundances estimated from Fe I
and Fe II lines, as functions of excitation potential and equivalent widths.
The derived atmospheric parameters and the radial velocity estimates are given
in Table 2.
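The excitation-equilibrium criterion used to fix the effective temperature can be sketched as below. This is an illustrative check only: in practice the per-line abundances come from MOOG at each trial temperature, and the Fe I line data here are hypothetical.

```python
# Sketch of the excitation-equilibrium criterion for Teff: accept a trial
# temperature when the Fe I abundances show no trend with excitation
# potential. The line data below are invented for illustration.

def slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def excitation_balanced(ep, abund, tol=0.005):
    """True when the abundance-vs-excitation-potential slope of the
    Fe I lines is consistent with zero (dex/eV)."""
    return abs(slope(ep, abund)) < tol

# A flat abundance trend passes; a tilted one signals a wrong Teff.
ep = [0.9, 2.2, 3.4, 4.6]
print(excitation_balanced(ep, [7.50, 7.49, 7.51, 7.50]))  # True
print(excitation_balanced(ep, [7.40, 7.48, 7.55, 7.62]))  # False
```

The microturbulence is fixed by the analogous zero-slope condition against equivalent width.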
Figure 2: Iron abundances of the program stars derived from individual Fe I
and Fe II lines as functions of (i) excitation potential (lower panel) and
(ii) equivalent width (upper panel). Solid circles correspond to Fe I and
solid triangles correspond to Fe II lines.
Table 2: Derived atmospheric parameters of the program stars.
Star | Teff | log g | $\zeta$ | [Fe I/H] | [Fe II/H] | Vr | Vr | Remarks
---|---|---|---|---|---|---|---|---
| (K) | cgs | (km s-1) | | | (km s-1) | (km s-1) |
| $\pm$100 | $\pm$0.2 | $\pm$0.2 | | | our estimates | SIMBAD |
LAMOSTJ091608.81+230734.6 | 4820 | 1.43 | 1.62 | $-$0.89$\pm$0.14 | $-$0.89$\pm$0.12 | +16.13$\pm$4.30 | +17.33$\pm$0.40 | 1
LAMOSTJ151003.74+305407.3 | 4500 | 1.55 | 1.24 | $-$1.57$\pm$0.12 | $-$1.57$\pm$0.12 | $-$141.58$\pm$3.57 | $-$145.25$\pm$0.003 | 1
| 4358.31 | 0.956 | 1.667 | $-$1.346 | – | – | – | 2
Remarks: 1. Our work, 2. Hayes et al. (2018)
We have determined the mass of the star LAMOSTJ151003.74+305407.3 from its
position in the Hertzsprung-Russell diagram, generated using the evolutionary
tracks of Girardi et al. (2000), with the estimates of the spectroscopic
temperature, Teff, and the luminosity, log$(L/L_{\odot})$. The value of log g
is then recalculated using this mass estimate, as described in our previous
work (Shejeelammal et al. 2020). For the estimation of log$(L/L_{\odot})$, the
required visual magnitudes V and parallaxes $\pi$ are taken from SIMBAD and
Gaia DR2 (Gaia collaboration et al. 2018, https://gea.esac.esa.int/archive/)
respectively. We have used the z = 0.001 tracks for this star. The
evolutionary tracks for LAMOSTJ151003.74+305407.3 are shown in Figure 3. The
estimated mass and the log g determined using the parallax method are
presented in Table 3. The mass could not be determined for the other star
using this method, as the evolutionary tracks corresponding to its temperature
and luminosity are not available.
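The parallax-based luminosity estimate can be sketched as follows. The bolometric correction BC used below is an assumed value for a cool giant, and interstellar extinction is neglected in this sketch.

```python
import math

# Sketch of log(L/Lsun) from apparent V magnitude and parallax:
# M_V from the distance modulus, then M_bol = M_V + BC, then
# log(L/Lsun) = (M_bol,sun - M_bol) / 2.5.

def log_luminosity(V, parallax_mas, BC, M_bol_sun=4.74):
    """log(L/Lsun) from apparent V magnitude and parallax in mas."""
    parallax_arcsec = parallax_mas / 1000.0
    M_V = V + 5.0 + 5.0 * math.log10(parallax_arcsec)  # absolute V magnitude
    M_bol = M_V + BC                                   # bolometric magnitude
    return (M_bol_sun - M_bol) / 2.5

# LAMOSTJ151003.74+305407.3: V = 11.38, parallax = 0.2778 mas (Table 3);
# BC = -0.49 is an assumed bolometric correction, not a value from the text.
print(round(log_luminosity(11.38, 0.2778, BC=-0.49), 2))  # 2.65
```

With this assumed BC the sketch reproduces the log(L/L⊙) of 2.653 listed in Table 3.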
Figure 3: The location of LAMOSTJ151003.74+305407.3. The evolutionary tracks for 0.6, 0.7 and 0.8 M⊙ are shown from bottom to top for z = 0.001.
Table 3: Mass and log g estimates by the parallax method.
Star name | Parallax | $M_{bol}$ | log(L/L⊙) | Mass(M⊙) | log g | log g (spectroscopic)
---|---|---|---|---|---|---
| (mas) | | | | (cgs) | (cgs)
LAMOSTJ091608.81+230734.6 | 1.0543$\pm$0.0517 | 0.105$\pm$0.107 | 1.854$\pm$0.043 | – | – | 1.43
LAMOSTJ151003.74+305407.3 | 0.2778$\pm$0.0346 | 1.891$\pm$0.271 | 2.653$\pm$0.109 | 0.70$\pm$0.10 | 1.20$\pm$0.05 | 1.55
## 4 ABUNDANCE UNCERTAINTIES
The main sources of uncertainty in the abundance are the errors in the
estimated stellar atmospheric parameters and the errors in the line
parameters. The total uncertainty in the elemental abundance, log $\epsilon$,
is given as:
$\sigma_{\log\epsilon}^{2}=\sigma_{ran}^{2}+\left(\frac{\partial\log\epsilon}{\partial T}\right)^{2}\sigma_{T_{eff}}^{2}+\left(\frac{\partial\log\epsilon}{\partial\log g}\right)^{2}\sigma_{\log g}^{2}+\left(\frac{\partial\log\epsilon}{\partial\zeta}\right)^{2}\sigma_{\zeta}^{2}+\left(\frac{\partial\log\epsilon}{\partial[Fe/H]}\right)^{2}\sigma_{[Fe/H]}^{2}$
where $\sigma_{ran}$ = $\frac{\sigma_{s}}{\sqrt{N}}$, $\sigma_{s}$ being the
standard deviation of the abundances derived from N lines of a particular
element considered. The other $\sigma$’s in the equation are the typical
uncertainties in the atmospheric parameters; $\Delta$Teff$\sim$ $\pm$100 K,
$\Delta$log g$\sim$ $\pm$0.2 dex, $\Delta$$\zeta$$\sim$ $\pm$0.2 km s-1 and
$\Delta$[Fe/H]$\sim$ $\pm$0.1 dex.
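The quadrature sum above can be sketched as follows; the sensitivity terms are the abundance changes obtained by varying each atmospheric parameter by its typical uncertainty (the columns of Table 4).

```python
import math

# Sketch of the abundance-uncertainty quadrature: the random term
# sigma_s/sqrt(N) is combined with the parameter sensitivities.

def abundance_uncertainty(sigma_s, n_lines, deltas):
    """Total uncertainty in log(eps): line-to-line scatter term plus
    the parameter-sensitivity terms, added in quadrature."""
    sigma_ran = sigma_s / math.sqrt(n_lines)
    return math.sqrt(sigma_ran ** 2 + sum(d ** 2 for d in deltas))

# Na I in LAMOSTJ091608.81+230734.6: sensitivities for Teff, log g,
# microturbulence and [Fe/H] from Table 4; sigma_s and N from Table 5.
print(round(abundance_uncertainty(0.15, 4, [0.08, 0.01, 0.05, 0.01]), 2))  # 0.12
```

The quoted $\sigma_{[X/Fe]}$ values additionally fold in the uncertainty of the Fe abundance itself, so they are somewhat larger than this per-element figure.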
We have calculated the uncertainty in the abundance ratio, [X/Fe], for each
element following the detailed procedure described in our earlier paper
(Shejeelammal et al. 2020). As an example, these values for the star LAMOSTJ091608.81+230734.6
are given in Table 4.
Table 4: Differential Abundance ($\Delta$log$\epsilon$) of different elemental
species due to the variations in stellar atmospheric parameters for
LAMOSTJ091608.81+230734.6. The sixth column gives the computed rms uncertainty
of the second to fifth columns. The seventh column gives the total uncertainty
in the abundance ratio, [X/Fe], of each elemental species.
Element | $\Delta$Teff | $\Delta$log g | $\Delta$$\zeta$ | $\Delta$[Fe/H] | ($\Sigma\sigma_{i}^{2}$)1/2 | $\sigma_{[X/Fe]}$
---|---|---|---|---|---|---
| ($\pm$100 K) | ($\pm$0.2 dex) | ($\pm$0.2 kms-1) | ($\pm$0.1 dex) | |
C | $\pm$0.10 | 0.00 | 0.00 | $\mp$0.05 | 0.11 | 0.19
N | $\pm$0.13 | $\pm$0.03 | 0.00 | $\pm$0.02 | 0.13 | 0.21
O | 0.00 | $\pm$0.04 | $\mp$0.02 | 0.00 | 0.04 | 0.16
Na I | $\pm$0.08 | $\mp$0.01 | $\mp$0.05 | $\mp$0.01 | 0.10 | 0.19
Mg I | $\pm$0.09 | $\mp$0.03 | $\mp$0.08 | $\mp$0.01 | 0.05 | 0.17
Si I | $\pm$0.04 | $\pm$0.01 | $\mp$0.02 | 0.00 | 0.06 | 0.17
Ca I | $\pm$0.10 | $\mp$0.01 | $\mp$0.08 | $\mp$0.01 | 0.13 | 0.21
Sc II | $\mp$0.02 | $\pm$0.09 | $\mp$0.06 | $\pm$0.02 | 0.11 | 0.25
Ti I | $\pm$0.14 | $\mp$0.01 | $\mp$0.07 | $\mp$0.01 | 0.16 | 0.23
Ti II | 0.00 | $\pm$0.09 | $\mp$0.08 | $\pm$0.03 | 0.12 | 0.26
V I | $\pm$0.16 | $\mp$0.01 | $\mp$0.07 | $\mp$0.01 | 0.18 | 0.24
Cr I | $\pm$0.17 | $\mp$0.02 | $\mp$0.13 | $\mp$0.03 | 0.22 | 0.27
Mn I | $\pm$0.09 | $\mp$0.02 | $\mp$0.16 | $\mp$0.01 | 0.18 | 0.24
Fe I | $\pm$0.10 | $\pm$0.01 | $\mp$0.11 | $\pm$0.05 | 0.16 | –
Fe II | $\mp$0.06 | $\pm$0.11 | $\mp$0.10 | $\mp$0.17 | 0.23 | –
Co I | $\pm$0.07 | $\pm$0.02 | $\mp$0.06 | $\pm$0.01 | 0.09 | 0.18
Ni I | $\pm$0.09 | 0.00 | $\mp$0.05 | $\mp$0.01 | 0.10 | 0.19
Zn I | $\mp$0.03 | $\pm$0.05 | $\mp$0.03 | $\pm$0.02 | 0.07 | 0.17
Rb I | $\pm$0.10 | 0.00 | $\mp$0.03 | 0.00 | 0.10 | 0.19
Sr I | $\pm$0.18 | $\mp$0.02 | $\mp$0.13 | $\mp$0.02 | 0.22 | 0.27
Y I | $\pm$0.17 | $\mp$0.02 | $\mp$0.04 | $\mp$0.02 | 0.18 | 0.24
Y II | 0.00 | $\pm$0.09 | $\mp$0.11 | $\pm$0.03 | 0.14 | 0.27
Zr I | $\pm$0.17 | $\mp$0.02 | $\mp$0.07 | $\mp$0.02 | 0.19 | 0.25
Zr II | $\pm$0.01 | $\pm$0.08 | $\mp$0.06 | $\pm$0.02 | 0.10 | 0.25
Ba II | $\pm$0.02 | $\pm$0.05 | $\mp$0.15 | $\pm$0.02 | 0.16 | 0.28
La II | $\pm$0.01 | $\pm$0.09 | $\mp$0.06 | $\pm$0.03 | 0.11 | 0.25
Ce II | $\pm$0.02 | $\pm$0.08 | $\mp$0.13 | $\pm$0.02 | 0.16 | 0.28
Pr II | $\pm$0.02 | $\pm$0.08 | $\mp$0.07 | $\pm$0.02 | 0.11 | 0.25
Nd II | $\pm$0.03 | $\pm$0.08 | $\mp$0.16 | $\pm$0.02 | 0.18 | 0.29
Sm II | $\pm$0.04 | $\pm$0.09 | $\mp$0.08 | $\pm$0.03 | 0.13 | 0.26
Eu II | $\mp$0.02 | $\pm$0.09 | $\mp$0.03 | $\pm$0.03 | 0.10 | 0.25
## 5 Abundance determination
Elemental abundances are derived from the measured equivalent widths and the
spectral synthesis calculation of spectral lines of neutral and ionized atoms.
Only clean unblended lines have been considered for the abundance
determination. Absorption lines due to each element is identified from the
close comparison of Doppler-corrected spectrum of the star Arcturus and the
program stars spectra. The line information such as log $gf$ and the lower
excitation potential are from the Kurucz database of atomic line lists. In
addition to the equivalent width method, spectral synthesis calculation is
also performed for the elements showing hyper-fine splitting. Also, the
abundances derived from the molecular bands are based on spectral synthesis
calculation. The hyperfine components of Eu are taken from Worley et al.
(2013), Ba from McWilliam (1998), V, Co and Cu from Prochaska et al. (2000),
and Sc and Mn from Prochaska & McWilliam (2000). Solar abundance values are
taken from Asplund et al. (2009).
The abundance results are presented in Table 5. The lines used for the
estimation of abundances are given in Tables A1 and A2.
Table 5: Elemental abundances in LAMOSTJ091608.81+230734.6 and
LAMOSTJ151003.74+305407.3
| | | | LAMOSTJ091608.81+230734.6 | | | LAMOSTJ151003.74+305407.3 |
---|---|---|---|---|---|---|---|---
| Z | solar log$\epsilon^{\ast}$ | log$\epsilon$ | [X/H] | [X/Fe] | log$\epsilon$ | [X/H] | [X/Fe]
C (C2 band 5165 Å) | 6 | 8.43 | 7.90(syn) | $-$0.53 | 0.36 | 8.60(syn) | 0.17 | 1.74
C (C2 band 5635 Å) | 6 | 8.43 | 7.90(syn) | $-$0.53 | 0.36 | 8.60(syn) | 0.17 | 1.74
N | 7 | 7.83 | 7.79(syn) | $-$0.04 | 0.85 | 7.53(syn) | $-$0.30 | 1.27
O | 8 | 8.69 | 7.76(syn) | $-$0.93 | $-$0.04 | – | – | –
Na I | 11 | 6.24 | 6.03$\pm$0.15(4) | $-$0.21 | 0.68 | 5.24$\pm$0.05(2) | $-$1.00 | 0.57
Mg I | 12 | 7.60 | 7.12$\pm$0.15(2) | $-$0.46 | 0.43 | 6.20$\pm$0.18(3) | $-$1.58 | $-$0.01
Si I | 14 | 7.51 | 6.59$\pm$0.18(3) | $-$0.92 | $-$0.03 | – | – | –
Ca I | 20 | 6.34 | 5.38$\pm$0.15(10) | $-$0.96 | $-$0.07 | 4.52$\pm$0.13(8) | $-$1.82 | $-$0.25
Sc II | 21 | 3.15 | 2.19(syn) | $-$0.96 | $-$0.07 | 1.85(syn) | $-$1.30 | 0.27
Ti I | 22 | 4.95 | 4.07$\pm$0.17(12) | $-$0.88 | 0.01 | 3.30$\pm$0.14(4) | $-$1.65 | $-$0.08
Ti II | 22 | 4.95 | 3.77$\pm$0.17(4) | $-$1.18 | $-$0.29 | 3.21$\pm$0.19(5) | $-$1.74 | $-$0.17
V I | 23 | 3.93 | 3.42(syn) | $-$0.51 | 0.38 | 3.11(syn) | $-$0.82 | 0.75
Cr I | 24 | 5.64 | 5.13$\pm$0.18(6) | $-$0.51 | 0.38 | 3.59$\pm$0.10(5) | $-$2.05 | $-$0.48
Mn I | 25 | 5.43 | 4.70(syn) | $-$0.73 | 0.16 | 3.43(syn) | $-$2.00 | $-$0.43
Fe I | 26 | 7.50 | 6.61$\pm$0.14(73) | $-$0.89 | - | 5.93$\pm$0.12(35) | $-$1.57 | -
Fe II | 26 | 7.50 | 6.61$\pm$0.12(6) | $-$0.89 | - | 5.93$\pm$0.12(4) | $-$1.57 | -
Co I | 27 | 4.99 | 3.89(syn) | $-$1.10 | $-$0.21 | – | – | –
Ni I | 28 | 6.22 | 5.63$\pm$0.11(11) | $-$0.59 | 0.30 | 4.61$\pm$0.17(6) | $-$1.61 | $-$0.04
Zn I | 30 | 4.56 | 4.11(1) | $-$0.49 | 0.40 | 2.56$\pm$0.14(2) | $-$2.00 | $-$0.43
Rb I | 37 | 2.52 | 2.10(syn) | $-$0.42 | 0.47 | 1.00(syn) | $-$1.52 | 0.05
Sr I | 38 | 2.87 | 2.95(syn) | 0.08 | 0.97 | – | – | –
Y I | 39 | 2.21 | 2.56(syn) | 0.35 | 1.24 | 2.20(syn) | $-$0.01 | 1.56
Y II | 39 | 2.21 | 2.32$\pm$0.10(6) | 0.11 | 1.00 | 3.05$\pm$0.11(2) | 0.84 | 2.40
Zr I | 40 | 2.58 | 3.08(syn) | 0.50 | 1.39 | 2.05(syn) | $-$0.53 | 1.04
Zr II | 40 | 2.58 | 2.73(syn) | 0.15 | 1.04 | 2.09(syn) | $-$0.49 | 1.08
Ba II | 56 | 2.18 | 2.48(syn) | 0.30 | 1.19 | 2.00(syn) | $-$0.18 | 1.39
La II | 57 | 1.10 | 2.00(syn) | 0.90 | 1.79 | 1.10(syn) | 0.00 | 1.57
Ce II | 58 | 1.58 | 2.34$\pm$0.11(7) | 0.76 | 1.65 | 1.32$\pm$0.13(7) | $-$0.26 | 1.31
Pr II | 59 | 0.72 | 1.48$\pm$0.18(6) | 0.76 | 1.65 | 1.02$\pm$0.11(5) | 0.30 | 1.87
Nd II | 60 | 1.42 | 2.17$\pm$0.17(9) | 0.75 | 1.64 | 1.35$\pm$0.14(12) | $-$0.07 | 1.50
Sm II | 62 | 0.96 | 1.41$\pm$0.11(10) | 0.45 | 1.34 | 1.03$\pm$0.08(6) | 0.07 | 1.64
Eu II | 63 | 0.52 | 0.33(syn) | $-$0.19 | 0.70 | 0.09(syn) | $-$0.43 | 1.14
$\ast$ Asplund et al. (2009). The numbers within parentheses are the numbers
of lines used for the abundance estimation.
### 5.1 Abundance analysis: C, N, O, 12C/13C, Na, $\alpha$- and Fe-peak
elements
The abundance of oxygen is derived using the spectral synthesis calculation of
[O I] line at 6300.304 Å in LAMOSTJ091608.81+230734.6. The O I triplet lines
at around 7770 Å and the [O I] 6363.776 Å line are blended and could not be
used for the abundance determination. The estimated oxygen abundance is near
solar, with [O/Fe]$\sim$$-$0.04. We could not estimate the abundance of oxygen
for the star LAMOSTJ151003.74+305407.3, as no clean lines of oxygen could be
detected in the spectrum of this object.
The carbon abundance is derived from the spectral synthesis calculation of the
C2 molecular bands at 5165 and 5635 Å (Figure 4). Both the bands gave the same
carbon abundance value in both the stars. Carbon is found to be enhanced in
LAMOSTJ151003.74+305407.3 with [C/Fe]$\sim$1.74 and mildly enhanced in the
other object with [C/Fe]$\sim$0.36.
Figure 4: Synthesis of C2 band around 5165 Å (lower panel) and 5635 Å (upper
panel). Dotted and solid lines represent synthesized and observed spectra
respectively. Short dashed and long dashed lines represent the synthetic
spectra for $\Delta$ [C/Fe] = $-$0.3 and +0.3 respectively.
Once the carbon abundance is estimated, the abundance of nitrogen is derived
using the spectral synthesis calculation of 12CN lines in the 8000 Å region.
The 12CN molecular band at 4215 Å is not usable in either star. The CN and C2
molecular lines are taken from Ram et al. (2014), Sneden et al. (2014) and
Brooke et al. (2013). In both the stars nitrogen is enhanced with
[N/Fe]$\geq$0.85. The final C, N and O abundances are determined by an
iterative process. Using the first estimate of oxygen abundance derived from
the spectral synthesis calculation of 6300.304 Å [O I] line, the abundance of
carbon is estimated from the C2 molecular bands at 5165 and 5635 Å. The
abundance of nitrogen is then determined using these derived abundance
estimates of O and C. Once the nitrogen abundance is obtained, the oxygen and
carbon abundances are re-determined using this nitrogen abundance. This
iteration is continued until convergence is reached.
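The iterative C-N-O determination described above is a fixed-point iteration, which can be sketched schematically as below. The "derive" functions are toy stand-ins for the actual spectral-synthesis fits (which couple the three elements through CO and CN molecular equilibrium); the coupling coefficients are invented for illustration.

```python
# Schematic of the C-N-O iteration: each abundance is re-derived with the
# current values of the other two until all three stop changing.

def iterate_cno(derive_c, derive_n, derive_o, c0, n0, o0,
                tol=1e-4, max_iter=50):
    c, n, o = c0, n0, o0
    for _ in range(max_iter):
        o_new = derive_o(c, n)          # [O I] 6300.304 A line
        c_new = derive_c(n, o_new)      # C2 bands at 5165 and 5635 A
        n_new = derive_n(c_new, o_new)  # 12CN lines near 8000 A
        if max(abs(c_new - c), abs(n_new - n), abs(o_new - o)) < tol:
            return c_new, n_new, o_new
        c, n, o = c_new, n_new, o_new
    return c, n, o

# Toy fits with weak mutual coupling; fixed-point values loosely echo the
# Table 5 abundances of LAMOSTJ091608.81+230734.6.
c_fit = lambda n, o: 7.90 + 0.05 * (o - 7.76)
n_fit = lambda c, o: 7.79 + 0.10 * (c - 7.90) - 0.05 * (o - 7.76)
o_fit = lambda c, n: 7.76 + 0.02 * (c - 7.90)
c, n, o = iterate_cno(c_fit, n_fit, o_fit, 8.2, 7.5, 7.6)
```

Because the mutual dependences are weak, the iteration converges in a few steps regardless of the starting guesses.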
The carbon isotopic ratio, 12C/13C, is estimated from the spectral synthesis
calculation of 12CN lines at 8003.292, 8003.553, 8003.910 Å, and 13CN features
at 8004.554, 8004.728, 8004.781 Å. The spectrum synthesis fits for the
program stars in this region are shown in Figure 5. The values of this ratio are 8.67
and 13.33 in LAMOSTJ091608.81+230734.6 and LAMOSTJ151003.74+305407.3
respectively. These are typical values normally found in the case of giants
(Smith et al. 1993). For CEMP-s and CEMP-r/s stars, the values of 12C/13C
ratio lie in the range 2.5 - 40 (Bisterzo et al. 2011). We could estimate the
C/O ratio in LAMOSTJ091608.81+230734.6; it is found to be greater than 1, as
is typically seen in CH stars.
Figure 5: Spectral synthesis of CN band around 8005 Å. Dotted and solid lines
represent synthesized and observed spectra respectively. Short dashed and long
dashed lines are the synthetic spectra for 12C/13C $\simeq$ 83 and 2.7
respectively.
The abundances of the elements Na, Mg, Si, Ca, Ti, Cr, Ni and Zn are derived
from the measured equivalent width of spectral lines listed in Table A2.
Scandium abundance is derived from the spectral synthesis calculation of Sc II
line at 6245.637 Å, vanadium abundance from the V I lines at 6251.827 and
4864.731 Å, manganese abundance from Mn I line at 6013.513 Å and the abundance
of cobalt from Co I line at 5342.695 Å.
A comparison of our estimated light element abundances for the star
LAMOSTJ151003.74+305407.3 with the literature values is given in Table 6.
Within the error limits, our estimates of Mg and Ni match with the estimates
of Hayes et al. (2018). However, our estimates are lower by $\sim$0.6 dex for
[Ca/Fe] and [Cr/Fe] and by $\sim$0.3 dex for [Mn/Fe]. We have obtained
[C/Fe]$\sim$1.74, which differs markedly from the value of $\sim$0.81
reported by Hayes et al. (2018).
Table 6: Comparison of the light element abundances of
LAMOSTJ151003.74+305407.3 with the literature values.
Star name | [C/Fe] | [Mg/Fe] | [Ca/Fe] | [Cr/Fe] | [Mn/Fe] | [Ni/Fe] | Ref
---|---|---|---|---|---|---|---
LAMOSTJ151003.74+305407.3 | 1.74 | $-$0.01 | $-$0.25 | $-$0.48 | $-$0.43 | $-$0.04 | 1
| 0.81 | 0.21 | 0.36 | 0.11 | $-$0.15 | 0.07 | 2
References: 1. Our work, 2. Hayes et al. 2018
### 5.2 Heavy element abundance analysis
#### 5.2.1 The light s-process elements: Rb, Sr, Y, Zr
The spectral synthesis calculation of the Rb I resonance line at 7800.259 Å
is used to derive the Rb abundance in both the stars. The Rb I 7947.597 Å line
was not usable for the abundance estimation. The Rb hyperfine components are
taken from Lambert & Luck (1976). It is mildly enhanced in
LAMOSTJ091608.81+230734.6 with [Rb/Fe]$\sim$0.47, while it is near-solar in
LAMOSTJ151003.74+305407.3.
We could derive the strontium abundance only in LAMOSTJ091608.81+230734.6 as
no usable lines could be measured in the spectrum of the other star. The Sr I
4607.327 Å line in LAMOSTJ091608.81+230734.6 returned a value
[Sr/Fe]$\sim$0.97 from the spectral synthesis calculation.
The abundance of yttrium is derived using the spectral synthesis calculation
of Y I line at 6435.004 Å and equivalent width measurement of a few Y II lines
in both the stars. Both species give a value of [Y/Fe] $\geq$ 1. The spectrum
synthesis fits for Y I of the program stars are shown in Figure 6.
The spectral synthesis calculation of Zr I line at 6134.585 Å and Zr II line
at 5112.297 Å are used to derive the zirconium abundance in the program stars.
In both cases, Zr is found to be enhanced, with [Zr/Fe] $>$ 1. The spectrum
synthesis fits for Zr I of the program stars are shown in Figure 7.
In the case of LAMOSTJ091608.81+230734.6, neutral lines of Y and Zr give a
higher abundance than the singly ionized lines.
Figure 6: Synthesis of Y I line at 6435.004 Å. Dotted and solid lines
represent synthesized and observed spectra respectively. Short dashed and long
dashed lines represent the synthetic spectra for $\Delta$[Y/Fe] = $-$0.3 and
+0.3 respectively. Figure 7: Synthesis of Zr I line at 6134.585 Å. Dotted and
solid lines represent synthesized and observed spectra respectively. Short
dashed and long dashed lines represent the synthetic spectra for
$\Delta$[Zr/Fe] = $-$0.3 and +0.3 respectively.
#### 5.2.2 The heavy s-process elements: Ba, La, Ce, Pr, Nd
The spectral synthesis calculation of Ba II line at 5853.668 Å in
LAMOSTJ091608.81+230734.6 and Ba II 6141.713 Å line in
LAMOSTJ151003.74+305407.3 are used to derive the barium abundances. La
abundance is derived from the spectral synthesis calculation of La II line at
5259.380 Å in both the stars. We could not detect any other useful lines due
to lanthanum in the program stars for the abundance determination. The
abundances of Ce, Pr and Nd are derived using the measured equivalent widths
of several spectral lines due to singly ionized species of the respective
elements. All these elements are found to be enhanced in both the stars with
[X/Fe]$>$1.
Finally, we have estimated the [ls/Fe], [hs/Fe] and [hs/ls] ratios in the
program stars. Here, ls and hs denote the light (Sr, Y and Zr) and heavy
(Ba, La, Ce and Nd) s-process elements respectively. We have also estimated
the mean abundance ratio of the s-process elements (Sr, Y, Zr, Ba, La, Ce,
Nd), [s/Fe], to quantify the s-process content.
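The indices defined above can be sketched as follows, using the [X/Fe] values of LAMOSTJ091608.81+230734.6 from Table 5. Where an element has both neutral and ionized estimates, taking the mean of the two species is a simplifying assumption made here, not necessarily the prescription used in the paper.

```python
# Sketch of the [ls/Fe], [hs/Fe] and [hs/ls] indices. In logarithmic
# bracket notation, [hs/ls] = [hs/Fe] - [ls/Fe].

def mean(values):
    return sum(values) / len(values)

# [X/Fe] values from Table 5 for LAMOSTJ091608.81+230734.6; means over
# neutral and ionized species where both are available (an assumption).
ls_fe = mean([0.97, (1.24 + 1.00) / 2, (1.39 + 1.04) / 2])  # Sr, Y, Zr
hs_fe = mean([1.19, 1.79, 1.65, 1.64])                      # Ba, La, Ce, Nd
hs_ls = hs_fe - ls_fe
print(round(ls_fe, 2), round(hs_fe, 2), round(hs_ls, 2))  # 1.1 1.57 0.47
```

The positive [hs/ls] obtained this way reflects the heavy-s dominance typical of low-metallicity AGB nucleosynthesis.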
#### 5.2.3 The r-process elements: Sm, Eu
The Sm abundance is derived from the equivalent width measurement of Sm II
lines listed in Table A2. Both the stars show enhancement of Sm with
[Sm/Fe]$\sim$1.34 and 1.64 in LAMOSTJ091608.81+230734.6 and
LAMOSTJ151003.74+305407.3 respectively.
The abundance of europium is estimated from the spectral synthesis
calculation of the Eu II line at 6645.064 Å. In LAMOSTJ091608.81+230734.6, Eu
is enhanced with [Eu/Fe]$\sim$0.70, while the other star shows a value of 1.14.
## 6 Kinematic Analysis
The spatial velocity of a star in the solar neighborhood is measured with
respect to the Local Standard of Rest (LSR). The components of the spatial
velocity are $U_{LSR}$, $V_{LSR}$ and $W_{LSR}$; measured along the axes
pointing towards the Galactic center, the direction of Galactic rotation and
the North Galactic Pole respectively (Johnson & Soderblom 1987). The space
velocities of the program stars are calculated following the procedures in
Bensby et al. (2003). The components of the spatial velocity of a star with
respect to the LSR are:
$(U,V,W)_{LSR}=(U,V,W)+(U,V,W)_{\odot}$ km/s.
where, $(U,V,W)_{\odot}$ is the solar motion with respect to LSR and its value
is (11.1, 12.2, 7.3) km/s (Schönrich et al., 2010) and
$\left[\begin{array}[]{c}U\\\ V\\\
W\end{array}\right]=B.\left[\begin{array}[]{c}V_{r}\\\ k.\mu_{\alpha}/\pi\\\
k.\mu_{\delta}/\pi\end{array}\right]$
where $B=T\cdot A$, T is the transformation matrix connecting the Galactic
and equatorial coordinate systems, and A is the coordinate matrix defined
below:
$T=\left[\begin{array}[]{ccc}-0.06699&-0.87276&-0.48354\\\
+0.49273&-0.45035&+0.74458\\\ -0.86760&-0.18837&0.46020\end{array}\right]$
$A=\left[\begin{array}[]{ccc}\cos\alpha\cos\delta&-\sin\alpha&-\cos\alpha\sin\delta\\\
\sin\alpha\cos\delta&\cos\alpha&-\sin\alpha\sin\delta\\\
\sin\delta&0&\cos\delta\end{array}\right]$
where $\alpha$ is the RA, $\delta$ the Dec, $V_{r}$ the radial velocity in
km/s, $k=4.74057$ km/s the equivalent of 1 AU per year, $\mu_{\alpha}$ and
$\mu_{\delta}$ respectively the proper motions in RA and Dec in arcsec/year,
and $\pi$ the parallax in arcsec. The proper motion and parallax values are
taken from GAIA DR2 (Gaia collaboration et al. 2018,
https://gea.esac.esa.int/archive/) and SIMBAD astronomical database. The
spectroscopic velocity estimates have been used in this calculation.
The total spatial velocity of the star:
$V_{spa}^{2}=U_{LSR}^{2}+V_{LSR}^{2}+W_{LSR}^{2}$
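The computation laid out above can be sketched as follows. The T matrix, solar motion and value of k are taken from the text; the input coordinates and proper motions in any actual call would come from Gaia DR2, which is why none are hard-coded here.

```python
import math

# Sketch of the (U, V, W) space-velocity computation: B = T.A, the velocity
# vector (Vr, k*mu_ra/pi, k*mu_dec/pi), and the solar-motion correction.

K = 4.74057  # km/s equivalent of 1 AU per year
T = [[-0.06699, -0.87276, -0.48354],
     [+0.49273, -0.45035, +0.74458],
     [-0.86760, -0.18837, +0.46020]]
SOLAR = (11.1, 12.2, 7.3)  # (U, V, W) of the Sun w.r.t. the LSR (km/s)

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def uvw_lsr(ra_deg, dec_deg, vr, pm_ra, pm_dec, parallax):
    """U, V, W (km/s) relative to the LSR. Proper motions in arcsec/yr,
    parallax in arcsec, radial velocity in km/s."""
    a, d = math.radians(ra_deg), math.radians(dec_deg)
    A = [[math.cos(a) * math.cos(d), -math.sin(a), -math.cos(a) * math.sin(d)],
         [math.sin(a) * math.cos(d),  math.cos(a), -math.sin(a) * math.sin(d)],
         [math.sin(d),                0.0,          math.cos(d)]]
    B = matmul(T, A)
    v = (vr, K * pm_ra / parallax, K * pm_dec / parallax)
    U, V, W = (sum(B[i][j] * v[j] for j in range(3)) for i in range(3))
    return U + SOLAR[0], V + SOLAR[1], W + SOLAR[2]

def v_spatial(U, V, W):
    """Total spatial velocity in km/s."""
    return math.sqrt(U * U + V * V + W * W)
```

Since B is (to rounding) orthogonal, the magnitude of the velocity vector is preserved: with zero proper motion, the heliocentric speed reduces to |Vr|.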
Errors in the respective velocity components are calculated as follows:
$\left[\begin{array}[]{c}\sigma_{U}^{2}\\\ \sigma_{V}^{2}\\\
\sigma_{W}^{2}\end{array}\right]=C.\left[\begin{array}[]{c}\sigma_{V_{r}}^{2}\\\
(k/\pi)^{2}[\sigma_{\mu_{\alpha}}^{2}+(\mu_{\alpha}\sigma_{\pi}/\pi)^{2}]\\\
(k/\pi)^{2}[\sigma_{\mu_{\delta}}^{2}+(\mu_{\delta}\sigma_{\pi}/\pi)^{2}]\end{array}\right]+$
$\frac{2\mu_{\alpha}\mu_{\delta}k^{2}\sigma_{\pi}^{2}}{\pi^{4}}.\left[\begin{array}[]{c}b_{12}.b_{13}\\\
b_{22}.b_{23}\\\ b_{32}.b_{33}\end{array}\right]$
where $C_{ij}=b_{ij}^{2}$ and the $\sigma$'s are the errors in the respective
quantities.
The probability that a star belongs to the Galactic thin/thick disc or halo
population is calculated following the procedure described in Mishenina et al.
(2004), Bensby et al. (2003, 2004) and Reddy et al. (2006), with the
assumption that the space velocities follow Gaussian distributions.
$P_{thin}=\frac{f_{1}p_{1}}{P},\quad P_{thick}=\frac{f_{2}p_{2}}{P},\quad P_{halo}=\frac{f_{3}p_{3}}{P}$
$P=\sum_{i}f_{i}p_{i}$
$p_{i}=K_{i}\exp\left[-\frac{U_{LSR}^{2}}{2\sigma_{U_{i}}^{2}}-\frac{(V_{LSR}-V_{ad})^{2}}{2\sigma_{V_{i}}^{2}}-\frac{W_{LSR}^{2}}{2\sigma_{W_{i}}^{2}}\right]$
$K_{i}=\frac{1}{(2\pi)^{3/2}\sigma_{U_{i}}\sigma_{V_{i}}\sigma_{W_{i}}};\quad i=1,2,3$
where the $\sigma$'s are the velocity dispersions, $V_{ad}$ the mean Galactic
rotation velocity of each stellar population relative to the LSR, and $f$ the
fractional population; these values are taken from Reddy et al. (2006). The
estimates of the total spatial velocity and its components, along with the
probability estimates, are given in Table 7. Our
analysis shows that the star LAMOSTJ091608.81+230734.6 belongs to the thin
disc population and LAMOSTJ151003.74+305407.3 belongs to the Galactic halo
population.
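The membership probabilities defined above can be sketched as below. The dispersions, asymmetric drifts and fractions used here are illustrative values of the kind tabulated by Bensby et al. (2003); the paper itself adopts the characteristic values of Reddy et al. (2006), so the probabilities from this sketch will differ somewhat from Table 7.

```python
import math

# Sketch of the Gaussian kinematic-membership probabilities. The population
# parameters below are illustrative, not the values adopted in the paper.

POPULATIONS = {
    # name: (sigma_U, sigma_V, sigma_W, V_ad, fraction f)
    "thin":  (35.0,  20.0, 16.0,  -15.0, 0.90),
    "thick": (67.0,  38.0, 35.0,  -46.0, 0.10),
    "halo":  (160.0, 90.0, 90.0, -220.0, 0.0015),
}

def membership(U, V, W):
    """Normalized thin-disc / thick-disc / halo probabilities for a star
    with LSR velocity components (U, V, W) in km/s."""
    fp = {}
    for name, (sU, sV, sW, Vad, f) in POPULATIONS.items():
        k = 1.0 / ((2.0 * math.pi) ** 1.5 * sU * sV * sW)
        p = k * math.exp(-U ** 2 / (2 * sU ** 2)
                         - (V - Vad) ** 2 / (2 * sV ** 2)
                         - W ** 2 / (2 * sW ** 2))
        fp[name] = f * p
    total = sum(fp.values())
    return {name: v / total for name, v in fp.items()}

# LAMOSTJ091608.81+230734.6 (Table 7): clearly thin-disc kinematics.
probs = membership(-12.14, 1.57, 2.71)
```

Even with these illustrative parameters, the small space velocity of LAMOSTJ091608.81+230734.6 yields a thin-disc probability close to unity, while the large velocity of LAMOSTJ151003.74+305407.3 favors the halo.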
Table 7: Spatial velocity and probability estimates for the program stars.
Star name | ULSR | VLSR | WLSR | Vspa | pthin | pthick | phalo
---|---|---|---|---|---|---|---
| (kms-1) | (kms-1) | (kms-1) | (kms-1) | | |
LAMOSTJ091608.81+230734.6 | $-$12.14$\pm$2.99 | 1.57$\pm$1.41 | 2.71$\pm$2.95 | 12.53$\pm$1.98 | 0.99 | 0.01 | 0.00
LAMOSTJ151003.74+305407.3 | $-$16.38$\pm$2.84 | $-$207.80$\pm$20.74 | $-$49.68$\pm$8.67 | 214.25$\pm$22.33 | 0.00 | 0.29 | 0.71
## 7 Binary status of the program stars
For a precise identification of the source of the carbon and heavy-element
enrichment in peculiar stars, it is crucial to know their binary status. A
number of investigations dedicated to identifying the binarity of peculiar
stars have been carried out to date. Precise radial velocity monitoring
studies have shown that most of the Ba and CH stars (McClure et al. 1980,
McClure 1983, 1984, McClure & Woodsworth 1990, Udry et al. 1998a,b, Lucatello
et al. 2005, Jorissen et al. 2016) and a high fraction of CEMP-s/(r/s) stars
(Lucatello et al. 2005, Starkenburg et al. 2014, Jorissen et al. 2016) are in
binary systems. The compilations of Duquennoy & Mayor (1991) and Starkenburg
et al. (2014) have shown the binary fraction of CEMP-s/(r/s) stars
to be 100%. However, a few recent studies have reported a binary frequency of
82$\pm$10% for CEMP-s/(r/s) stars (Hansen et al. 2016c) and 17$\pm$9% for
CEMP-no stars (Hansen et al. 2016b).
All these conclusions are based on the data available to each study;
information available in the literature on the binary status of many CEMP
stars is still very limited. Yoon et al. (2016) (and references therein) have
compiled literature data for 305 CEMP stars, including 147 CEMP-s/(r/s) stars
and 127 CEMP-no stars. Of these, 35 CEMP-s/(r/s) stars and 22 CEMP-no stars
have known binary status. Prior to this, Spite et al.
(2013) observed for the first time the bimodality in the absolute carbon
abundances among the CEMP stars, with CEMP-s stars populating the high-carbon
region (A(C)$\sim$8.25) and CEMP-no stars populating the low-carbon region
(A(C)$\sim$6.5). This bimodality was later confirmed by Bonifacio et al.
(2015) and Hansen et al. (2015a) for an extended sample of CEMP stars. Yoon
et al. (2016) investigated anew the distribution of the absolute carbon
abundance, A(C), as a function of metallicity for these CEMP stars. Their
analysis has shown that the fiducial line at A(C) = 7.1 in the [Fe/H] versus
A(C) diagram can well separate the binary nature of all the CEMP-s/(r/s)
stars and the majority of CEMP-no stars; despite a few outliers, the majority
of the binary stars lie above this line. Thus, this diagram can be used as a
tool to derive clues on the binary status of CEMP stars as well as the origin
of their chemical peculiarities. We have used such a figure to assess the
binary nature of
our program stars. The distribution of absolute carbon abundance with
metallicity for different classes of chemically peculiar stars are shown in
figure 8. In this figure, both the program stars lie in the region occupied by
binary stars, indicating that our program stars are likely binaries. In
addition, the variation in radial velocity estimate for
LAMOSTJ151003.74+305407.3 by $\sim$4 kms-1 also suggests that this object
could be a binary star.
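The fiducial-line check described above is simple enough to express directly. The following Python sketch is our own illustration (function names are ours): the threshold A(C) = 7.1 is from Yoon et al. (2016), and the solar carbon abundance A(C)$_{\odot}$ = 8.43 is taken from Asplund et al. (2009).

```python
# Illustrative sketch of the Yoon et al. (2016) fiducial check: CEMP stars
# with absolute carbon abundance A(C) > 7.1 in the [Fe/H] vs A(C) plane
# are likely binaries. Function names and structure are ours.

A_C_SUN = 8.43      # solar A(C) = log eps(C) (Asplund et al. 2009)
FIDUCIAL_A_C = 7.1  # dividing line from Yoon et al. (2016)

def absolute_carbon(c_fe, fe_h):
    """A(C) = [C/Fe] + [Fe/H] + A(C)_sun."""
    return c_fe + fe_h + A_C_SUN

def likely_binary(c_fe, fe_h):
    """True if the star falls in the binary-dominated region above the fiducial."""
    return absolute_carbon(c_fe, fe_h) > FIDUCIAL_A_C

# LAMOSTJ091608.81+230734.6: [C/Fe] ~ 0.36, [Fe/H] = -0.89 (Section 8)
print(likely_binary(0.36, -0.89))  # True: A(C) ~ 7.90 lies above the fiducial
```

Both program stars evaluate as likely binaries under this cut, consistent with their positions in Figure 8.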
Figure 8: Distribution of A(C) as a function of [Fe/H] for known (and likely)
binary and single stars. Red nine-sided stars, magenta six-sided crosses, red
starred triangles, red filled circles and magenta five-sided stars represent
binary CEMP-s, single CEMP-s, binary CEMP-r/s, binary CEMP-no and single
CEMP-no stars respectively from the literature (Yoon et al. 2016). All the
red symbols correspond to binary CEMP stars and the magenta symbols to single
CEMP stars. Blue crosses represent binary CH stars from the literature
(Purandardas et al. 2019, Karinkuzhi & Goswami 2014, 2015, Luck 2017). Binary
Ba stars from the literature (Shejeelammal et al. 2020, Karinkuzhi et al.
2018) are represented by green open hexagons. Our program stars are
LAMOSTJ091608.81+230734.6 (filled triangle) and LAMOSTJ151003.74+305407.3
(filled square). The dashed line indicates the fiducial that separates binary
and single stars.
## 8 Discussion
From the multiple line indices measured from the stellar spectra of LAMOST
DR2, Ji et al. (2016) identified the objects LAMOSTJ091608.81+230734.6 and
LAMOSTJ151003.74+305407.3 as CH stars. Our detailed abundance analysis
confirms LAMOSTJ091608.81+230734.6 to be a CH star and
LAMOSTJ151003.74+305407.3 to be a CEMP-r/s star. The estimated abundances of
the light elements from Na to Zn are quite similar to those of normal giants
following the Galactic trend, as shown in Figure 9. The estimated abundances
of the heavy elements, however, show enhancement compared to their
counterparts in other normal stars (Figure 10).
The neutron-density-dependent [Rb/Zr] ratio has been estimated in order to
probe the nature of the companion star. For this ratio, AGB models predict a
negative value in low-mass AGB stars (M $\leq$ 3 M$_{\odot}$) and a positive
value in intermediate-mass AGB stars (M $\geq$ 4 M$_{\odot}$) (Abia et al.
2001, van Raai et al. 2012, Karakas et al. 2012). This trend has been
observed among the AGB stars in the Galaxy and the Magellanic Clouds (Plez et
al. 1993, Lambert et al. 1995, Abia et al. 2001, García-Hernández et al.
2006, 2007, 2009). The estimated values of this ratio (Table 8) are shown in
Figure 11. The ranges of Rb and Zr observed in the Galactic and Magellanic
Cloud low- and intermediate-mass AGB stars (shaded regions) are also shown in
the same figure.
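As a minimal illustration of this diagnostic (our own sketch, not code from any of the cited works), the sign test on [Rb/Zr] reads:

```python
# Sign of the neutron-density-sensitive [Rb/Zr] ratio as a diagnostic of the
# former AGB companion's mass (Abia et al. 2001, van Raai et al. 2012,
# Karakas et al. 2012). Function name and structure are ours.

def agb_mass_class(rb_zr):
    """Negative [Rb/Zr] -> low-mass AGB (M <= 3 Msun);
    positive -> intermediate-mass AGB (M >= 4 Msun)."""
    return "low-mass" if rb_zr < 0 else "intermediate-mass"

# Program-star values from Table 8
print(agb_mass_class(-0.92))  # LAMOSTJ091608.81+230734.6 -> low-mass
print(agb_mass_class(-0.99))  # LAMOSTJ151003.74+305407.3 -> low-mass
```

Both program stars have negative [Rb/Zr], pointing to low-mass former AGB companions, consistent with Figure 11.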
Table 8: Estimates of [ls/Fe], [hs/Fe], [s/Fe], [hs/ls], [Rb/Zr], [Ba/Eu] and C/O

Star name | [Fe/H] | [ls/Fe] | [hs/Fe] | [s/Fe] | [hs/ls] | [Rb/Zr] | [Ba/Eu] | C/O
---|---|---|---|---|---|---|---|---
LAMOSTJ091608.81+230734.6 | $-$0.89 | 1.20 | 1.57 | 1.41 | 0.37 | $-$0.92 | 0.49 | 1.38
LAMOSTJ151003.74+305407.3 | $-$1.57 | 1.30 | 1.44 | 1.67 | 0.14 | $-$0.99 | 0.25 | –
Figure 9: Observed [X/Fe] ratios of the light elements in the program stars
with respect to metallicity [Fe/H]. Red open circles correspond to normal
giants from the literature (Honda et al. 2004, Venn et al. 2004, Aoki et al.
2005, 2007, Reddy et al. 2006, Luck & Heiter 2007, Hansen et al. 2016a, Yoon
et al. 2016). Magenta nine-sided stars and blue starred triangles represent
CEMP-s and CEMP-r/s stars respectively from the literature (Masseron et al.
2010). Cyan crosses and green open squares represent giant and sub-giant CH
stars respectively from the literature (Vanture 1992, Karinkuzhi & Goswami
2014, 2015, Goswami et al. 2016). Our program stars are
LAMOSTJ091608.81+230734.6 (filled triangle) and LAMOSTJ151003.74+305407.3
(filled square).
Figure 10: Observed [X/Fe] ratios of the heavy elements in the program stars
with respect to metallicity [Fe/H]. Symbols have the same meaning as in
Figure 9.
Figure 11: The observed [Rb/Fe] and [Zr/Fe] ratios. Our program stars are
LAMOSTJ091608.81+230734.6 (filled triangle) and LAMOSTJ151003.74+305407.3
(filled square). The shaded regions represent the observed ranges of Zr and
Rb in intermediate-mass (short-dashed lines) and low-mass (dots) AGB stars of
the Galaxy and the Magellanic Clouds (van Raai et al. 2012). The Rb and Zr
abundances in the program stars are consistent with those of low-mass AGB
stars.
LAMOSTJ091608.81+230734.6: The object is found to be metal-poor, with a
metallicity of $-$0.89. Even though we could not locate its position on the
HR diagram, its estimated log g value suggests that the star is on the RGB
(Allen & Barbuy 2006). The star shows a mild overabundance of carbon, with
[C/Fe]$\sim$0.36. Similar values of carbon in CH stars have been reported in
the literature (Purandardas et al. 2019, Goswami et al. 2016, Karinkuzhi &
Goswami 2015, Vanture 1992). The nitrogen enhancement relative to carbon in
the star, along with the low $^{12}$C/$^{13}$C ratio of $\sim$8.67, suggests
CN processing and the First Dredge-Up (FDU) during the giant phase. Further,
with its estimated mean s-process abundance ratio, [s/Fe]$\sim$1.41, and
C/O$\sim$1.38, this star can be included in the CH giant category. The star
shows a negative value of [Rb/Zr], which indicates a low-mass former AGB
companion, as expected for CH stars (Figure 11). The estimated [hs/ls] ratio
is $\sim$0.37, indicating an overabundance of second-peak s-process elements
over the first peak, as is normally seen in most CH stars. Our kinematic
analysis shows that this star belongs to the Galactic thin-disc population.
Figure 12 compares the observed abundance ratios [Ba/Fe] and [Eu/Fe],
representative of the s- and r-processes respectively, in different classes
of chemically peculiar stars. The star LAMOSTJ091608.81+230734.6 falls in the
region occupied by other CH stars, Ba stars and CEMP-s stars. These three
classes of stars are known to share the same origin of s-process enhancement:
pollution from a binary low-mass AGB companion.
Figure 12: Observed [Eu/Fe] and [Ba/Fe] ratios for different classes of
chemically peculiar stars. Magenta nine-sided stars, blue starred triangles
and red six-sided crosses represent CEMP-s, CEMP-r/s and r (including both
CEMP-r and rI/rII) stars respectively from the literature (Masseron et al.
2010). Cyan crosses represent CH stars from the literature (Vanture 1992,
Karinkuzhi & Goswami 2014, 2015, Goswami et al. 2016). Green open hexagons
represent Ba stars from the literature (Shejeelammal et al. 2020, Yang et al.
2016, Allen & Barbuy 2006). Our program stars are LAMOSTJ091608.81+230734.6
(filled triangle) and LAMOSTJ151003.74+305407.3 (filled square). The dashed
line is the least-squares fit to the observed abundances in r-stars.
LAMOSTJ151003.74+305407.3: This star is found to be metal-poor, with a
metallicity [Fe/H]$\sim$$-$1.57, and enhanced in carbon, with [C/Fe]$>$1.
With its estimated values of [Ba/Fe]$\sim$1.39 and [Ba/Eu]$\sim$0.25, along
with the enhanced carbon abundance, the star satisfies the criteria for a
CEMP-r/s star (Beers & Christlieb 2005, Abate et al. 2016). From its position
on the HR diagram, it is found to be a giant. The abundances of heavy
elements in this star are well within the range observed in other CEMP-r/s
stars (Figure 10). In the [C/N] versus log($^{12}$C/$^{13}$C) plane, this
star occupies the same region as the CEMP stars (Figure 13).
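The classification applied here amounts to a set of abundance-ratio cuts. The sketch below encodes Beers & Christlieb (2005)-style criteria as we understand them ([C/Fe] > 1 for CEMP; [Ba/Fe] > 1 with [Ba/Eu] > 0.5 for CEMP-s; 0 < [Ba/Eu] < 0.5 for CEMP-r/s; [Ba/Fe] < 0 for CEMP-no); the exact boundary values vary between authors, so treat these thresholds as illustrative rather than definitive.

```python
# Illustrative CEMP subclassification cuts in the style of
# Beers & Christlieb (2005); threshold values vary between authors.

def cemp_subclass(c_fe, ba_fe, ba_eu, eu_fe=None):
    """Return an illustrative CEMP subclass from abundance ratios."""
    if c_fe <= 1.0:
        return "not CEMP"
    if ba_fe > 1.0 and ba_eu > 0.5:
        return "CEMP-s"
    if ba_fe > 1.0 and 0.0 < ba_eu < 0.5:
        return "CEMP-r/s"
    if ba_fe < 0.0:
        return "CEMP-no"
    if eu_fe is not None and eu_fe > 1.0:
        return "CEMP-r"
    return "CEMP (unclassified)"

# LAMOSTJ151003.74+305407.3: [C/Fe] > 1, [Ba/Fe] ~ 1.39, [Ba/Eu] ~ 0.25
print(cemp_subclass(1.1, 1.39, 0.25))  # CEMP-r/s
```

With the measured ratios, the star falls in the CEMP-r/s regime, as stated above.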
Figure 13: Observed [C/N] and log($^{12}$C/$^{13}$C) ratios of the CEMP
stars. Magenta nine-sided stars, red six-sided crosses, blue starred
triangles and green filled pentagons represent CEMP-s, CEMP-r, CEMP-r/s and
CEMP-no stars respectively from the literature (Masseron et al. 2010). Our
program star LAMOSTJ151003.74+305407.3 is shown as a filled square.
From their study of 22 metal-poor stars ($-$3.3 $\leq$ [Fe/H] $\leq$ $-$1.0)
exhibiting strong CH and/or C$_{2}$ molecular bands, Aoki et al. (2007)
proposed a new empirical definition of CEMP stars in terms of [C/Fe] and
log(L/L$_{\odot}$). Based on this definition, there exists a clear division
between the CEMP stars and the carbon-normal metal-poor stars in the
log(L/L$_{\odot}$) versus [C/Fe] plane. We have demonstrated this in Figure
14, in which the star LAMOSTJ151003.74+305407.3 is also shown. This star
falls in the region occupied by the CEMP stars.
Figure 14: Observed [C/Fe] ratios as a function of the luminosity estimated
from the effective temperature. Red filled hexagons represent CEMP stars from
the literature (Aoki et al. 2007 and references therein, Purandardas et al.
2019, Goswami et al. 2016). Blue crosses represent carbon-normal metal-poor
stars from the literature (Aoki et al. 2005, 2007, Cayrel et al. 2004, Honda
et al. 2004). Our program star LAMOSTJ151003.74+305407.3 is shown as a filled
square. The dashed line indicates the dividing line between CEMP and
carbon-normal metal-poor stars.
Figure 12 shows the position of the star LAMOSTJ151003.74+305407.3 in the
[Eu/Fe] - [Ba/Fe] plane. The star does not lie within the bulk of the
CEMP-r/s stars; however, a few CEMP-r/s stars do occupy the same region as
LAMOSTJ151003.74+305407.3.
With the knowledge that the r-process and s-process are ascribed to different
astrophysical sites (Burbidge et al. 1957), several formation scenarios for
CEMP-r/s stars have been proposed in the literature (Cohen et al. 2003, Qian
& Wasserburg 2003, Zijlstra 2004, Barbuy et al. 2005, Wanajo et al. 2006,
Jonsell et al. 2006, Bisterzo et al. 2011 and references therein), most of
them suggesting different, independent processes for the r- and
s-peculiarities. One scenario holds that the CEMP-r/s star could have been
the secondary in a binary system formed out of r-process-enriched ISM, and
that its s-element and carbon enhancements are due to later pollution from
the AGB companion through the mass-transfer mechanism (Hill et al. 2000,
Cohen et al. 2003, Ivans et al. 2005). However, this could not successfully
explain the observed frequency of CEMP-r/s stars: the study by Barklem et al.
(2005) revealed that on the order of 1% of Population II stars are CEMP-r/s
stars. Cohen et al. (2003) discussed another scenario that invokes a triple
system in which the star could have been the least massive tertiary, polluted
first with r-elements from the massive primary that exploded as a supernova,
and later polluted with s-elements by the secondary star that evolved into an
AGB star. This hypothesis was discarded, as such a dynamically stable
tertiary system is unlikely to exist. Accretion-Induced Collapse (AIC) in a
binary system was suggested by Qian & Wasserburg (2003) and Cohen et al.
(2003), but discarded since it is physically uncertain given existing
neutrino theories (Qian & Woosley 1996). Another scenario is a binary picture
in which the primary star, having evolved through the AGB and contributed
s-rich material to the secondary star, later explodes as a Type 1.5 supernova
(Zijlstra 2004, Wanajo et al. 2005), depositing r-material on the surface of
the secondary. Abate et al. (2016) calculated the frequency of CEMP-r/s stars
among the CEMP-s stars for all these formation scenarios. The theoretical
frequency predicted in most of the scenarios underestimates the observed
frequency ($\sim$54% in their sample) by at least a factor of five. A
simulation based on the hypothesis of independent enrichment of s- and
r-elements could predict a frequency ($\sim$22%) that approaches the
observations; however, it fails to reproduce the observed correlation of Ba
and Eu abundances in CEMP-r/s stars. The simulations of Jonsell et al. (2006)
for a high-neutron-density s-process in an AGB star in a binary system also
could not reproduce the observed abundance pattern of CEMP-r/s stars.
Allen et al. (2012) claimed from their analysis of a sample of CEMP stars
that CEMP-s and CEMP-r/s stars have the same astrophysical origin. A modified
neutron-capture process called the intermediate neutron-capture process
(i-process), first proposed by Cowan & Rose (1977), has recently been invoked
in AGB stars (Dardelet et al. 2014, Hampel et al. 2016, 2019, Hansen et al.
2016c) to explain the observed abundance trend of CEMP-r/s stars in the
context of the binary mass-transfer scenario. When a substantial amount of
hydrogen-rich material is mixed into the intershell region of an evolved
red-giant star undergoing a helium shell flash (Proton Ingestion Episodes,
PIE), a significantly high neutron density of the order of N$_{n}$ $\sim$
10$^{15}$ - 10$^{17}$ cm$^{-3}$ (intermediate between the s- and r-processes)
can be produced (Cowan & Rose 1977). A number of sites have been proposed for
the PIEs in which i-process nucleosynthesis may take place. Recent
simulations have shown that neutron densities of the order of 10$^{12}$ -
10$^{15}$ cm$^{-3}$ can be achieved in very low-metallicity (z $\leq$
10$^{-4}$), low-mass (M $\leq$ 2 M$_{\odot}$) AGB stars (Fujimoto et al.
2000, Campbell & Lattanzio 2008, Lau et al. 2009, Cristallo et al. 2009,
Campbell et al. 2010, Stancliffe et al. 2011). A similar neutron density is
achieved during the dual core flash (an H flash following the PIEs during the
He flash) in low-mass, extremely low-metallicity (z $\leq$ 10$^{-5}$) stars
(Fujimoto et al. 1990, Hollowell et al. 1990, Lugaro et al. 2009); in the
very-late thermal pulses of post-AGB stars (Herwig et al. 2011, 2014,
Bertolli et al. 2013, Woodward et al. 2015); during the thermal pulses of
low-metallicity super-AGB stars (Doherty et al. 2015, Jones et al. 2016); and
in Rapidly Accreting White Dwarfs (RAWD) (Denissenkov et al. 2019). Dardelet
et al. (2014), Hampel et al. (2016) and Hampel et al. (2019) considered the
possibility of the i-process in their simulations to see whether the CEMP-r/s
phenomenon could be explained on the basis of s- and r-process elements
produced at a single stellar site. The i-process models could satisfactorily
reproduce the abundance patterns of twenty CEMP-r/s stars (Hampel et al.
2016) and seven low-Pb Magellanic post-AGB stars (Hampel et al. 2019).
Hampel et al. (2016) used single-zone nuclear network calculations to
simulate the properties of the intershell region of a low-mass (1
M$_{\odot}$), low-metallicity (z = 10$^{-4}$) AGB star and studied the
neutron-capture nucleosynthesis under the influence of different constant
neutron densities ranging from 10$^{7}$ to 10$^{15}$ cm$^{-3}$. The physical
input conditions of the intershell region are adapted from Stancliffe et al.
(2011) and the composition from Abate et al. (2015b). The adopted temperature
and density of the intershell region are 1.5 $\times$ 10$^{8}$ K and 1600 g
cm$^{-3}$ respectively. Compared to the classical s-process, at i-process
neutron densities this simulation resulted in an increased production of
heavy s-process and r-process elements, while producing similar abundances of
light s-process elements, as is typically observed for CEMP-r/s stars (Abate
et al. 2015a, Hollek et al. 2015). A range of temperatures (1.0 $\times$
10$^{8}$ - 2.2 $\times$ 10$^{8}$ K) and densities (800 - 3200 g cm$^{-3}$)
have been tested in this simulation, which did not produce significant
changes in the result.
We have compared the observed heavy-element abundance ratios of the star
LAMOSTJ151003.74+305407.3 with the model yields, [X/Fe], of Hampel et al.
(2016) for a range of neutron densities, n $\sim$ 10$^{9}$ - 10$^{15}$
cm$^{-3}$. A further dilution of the accreted material can occur on the
surface of CEMP-r/s stars, just as in similarly formed CEMP-s stars
(Stancliffe et al. 2007, 2013). The neutron density responsible for the
observed abundances of this star is derived by fitting the observed
abundances with a parametric model function that incorporates the dilution
factor:

X = X$_{i}$ $\times$ (1 - d) + X$_{\odot}$ $\times$ d

where X is the final abundance, X$_{i}$ is the i-process abundance,
X$_{\odot}$ is the solar-scaled abundance and d is the dilution factor. The
best fit obtained, along with the corresponding neutron density and dilution
factor, is shown in Figure 15. The neutron density responsible for the
observed abundances of this star is found to be n $\sim$ 10$^{12}$
cm$^{-3}$, if we assume a single stellar site for the production of the
observed neutron-capture abundance pattern.
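Because the model function is linear in the dilution factor d, the best-fit d for a given i-process yield set has a closed form. The sketch below is our own illustration with placeholder numbers (these are not the Hampel et al. 2016 yields nor our measured abundances); it recovers d by least squares for one yield set.

```python
# Closed-form least-squares dilution factor for X = X_i*(1 - d) + X_sun*d,
# i.e. X = X_i + d*(X_sun - X_i). All abundance values below are
# placeholders for illustration only.

X_i   = [5e-9, 2e-9, 8e-10, 3e-9]     # undiluted i-process abundances (placeholder)
X_sun = [1e-10, 5e-11, 2e-11, 8e-11]  # solar-scaled abundances (placeholder)

def best_dilution(X_obs, X_i, X_sun):
    """Least-squares d minimising sum((X_obs - X_i - d*(X_sun - X_i))^2)."""
    num = sum((o - i) * (s - i) for o, i, s in zip(X_obs, X_i, X_sun))
    den = sum((s - i) ** 2 for i, s in zip(X_i, X_sun))
    return num / den

# Synthetic "observed" abundances generated with a known dilution d = 0.7
X_obs = [i * (1 - 0.7) + s * 0.7 for i, s in zip(X_i, X_sun)]
print(round(best_dilution(X_obs, X_i, X_sun), 3))  # recovers 0.7
```

In the actual analysis, a fit of this kind is repeated over the grid of neutron-density models, and the (n, d) combination giving the best overall fit is retained.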
Figure 15: The best fit obtained for the parametric model function is
represented by the solid line. The squares with error bars are the observed
abundances in LAMOSTJ151003.74+305407.3.
From the kinematic analysis, we find that this object belongs to the Galactic
halo with a probability of 71%. The spatial velocity estimate of the star is
similar to the typical velocity of halo objects, V$_{spa}$ $>$ 180 km
s$^{-1}$ (Chen et al. 2004). It also satisfies the criteria [Fe/H] $<$
$-$0.90 and V$_{LSR}$ $<$ $-$120 km s$^{-1}$ (Eggen 1997) for a halo object.
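The two membership criteria quoted above amount to simple threshold checks. The sketch below is our own illustration; the variable names and the example velocities are invented and are not our measured kinematics for the program star.

```python
# Threshold checks for halo membership as quoted in the text.
# Variable names and example values are ours, for illustration only.

def chen_halo(v_spa):
    """Chen et al. (2004): spatial velocity V_spa > 180 km/s is typical of halo objects."""
    return v_spa > 180.0

def eggen_halo(fe_h, v_lsr):
    """Eggen (1997): [Fe/H] < -0.90 and V_LSR < -120 km/s for a halo object."""
    return fe_h < -0.90 and v_lsr < -120.0

# Hypothetical example: a star passing both criteria
print(chen_halo(210.0) and eggen_halo(-1.57, -150.0))  # True
```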
## 9 Conclusions
The results of a detailed high-resolution spectroscopic analysis of two
carbon stars identified from LAMOST DR2 are presented. Both objects were
identified as CH stars by Ji et al. (2016). Our analysis shows that the
object LAMOSTJ151003.74+305407.3 is a CEMP-r/s star, while the object
LAMOSTJ091608.81+230734.6 is a CH star.
Although a few light-element abundances are available in the literature for
LAMOSTJ151003.74+305407.3, we have presented the first detailed abundance
analysis for both stars. We have estimated the abundances of 26 elements,
along with the carbon isotopic ratio $^{12}$C/$^{13}$C.
Our analysis based on the neutron-density-dependent [Rb/Zr] ratio confirms a
low mass for the former AGB companions of the program stars. The kinematic
analysis shows that LAMOSTJ091608.81+230734.6 belongs to the Galactic disc
population, and LAMOSTJ151003.74+305407.3 to the Galactic halo population.
An i-process parametric-model analysis performed for the CEMP-r/s star
LAMOSTJ151003.74+305407.3 yields a neutron density of n $\sim$ 10$^{12}$
cm$^{-3}$ at the neutron-capture nucleosynthesis site, which may indicate
that the i-process in the companion AGB star is responsible for its observed
abundance pattern.
## 10 Acknowledgments
We thank the staff at IAO and at the remote control station at CREST,
Hosakotte, for assisting during the observations. Funding from the DST SERB
project No. EMR/2016/005283 is gratefully acknowledged. We are thankful to
the referee, Dr. Luca Sbordone, for useful comments and suggestions that have
considerably improved the paper. We are thankful to Melanie Hampel for
providing us with the i-process yields in the form of number fractions, and
to Partha Pratim Goswami for generating the model fits used in Figure 15.
This work made use of the SIMBAD astronomical database, operated at CDS,
Strasbourg, France, and of the NASA ADS, USA. This work has made use of data
from the European Space Agency (ESA) mission Gaia
(https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and
Analysis Consortium (DPAC,
https://www.cosmos.esa.int/web/gaia/dpac/consortium). This work is also based
on data collected using HESP.

Data Availability
The data underlying this article will be shared on reasonable request to the
corresponding author.
## References
* [1] Abate C., Pols O. R., Izzard R. G. & Karakas A. I., 2015a, A&A, 581, A22
* [2] Abate C., Pols O. R., Karakas A. I. & Izzard R. G., 2015b, A&A, 576, A118
* [3] Abate C., Stancliffe R.J., Liu Z-W., 2016, A&A, 587, 50
* [4] Abia C., Busso M., Gallino R., Domínguez I., Straniero O. & Isern J., 2001, ApJ, 559, 1117
* [5] Alksnis A., Balklavs A., Dzervitis U. et al., 2001, BaltA, 10, 1
* [6] Allen D.M., Barbuy B., 2006, A&A, 454, 895
* [7] Allen D. M., Ryan S. G., Rossi S., Beers T. C. & Tsangarides S. A., 2012, A&A, 548, A34
* [8] Alonso A., Arribas S., Martinez-Roger C., 1999, A&AS, 140, 261
* [9] Alonso A., Arribas S., Martinez-Roger C., 2001, A&A, 376, 1039
* [10] Aoki W. et al., 2005, ApJ, 632, 611
* [11] Aoki W., Beers T.C., Christlieb N., Norris J.E., Ryan S.G et al., 2007, ApJ, 655, 492
* [12] Aoki W. et al., 2013, AJ, 145, 13
* [13] Asplund M., Grevesse N., Sauval A.J., 2009, Ann. Rev. Astron. Astrophy., 47, 481
* [14] Banerjee P., Qian Y.-Z. & Heger A., 2018, ApJ, 865, 120
* [15] Barbuy B., Spite M., Spite F., Hill V., Cayrel R., Plez B., Petitjean P., 2005, A&A, 429, 1031
* [16] Barklem P. S., Christlieb N., Beers T. C. et al., 2005, A&A, 439, 129
* [17] Beers, T. C. 1999, in ASP Conf. Ser. 165, The Third Stromlo Symposium: The 1443 Galactic Halo, ed. B. Gibson, T. Axelrod, & M. Putman (San Francisco: Hill, V., et al. 2000, A&A, 353, 557 ASP), 202
* [18] Beers T. C., Preston G. W. & Shectman S. A., 1985, AJ, 90, 2089
* [19] Beers T. C., Preston G. W. & Shectman S. A., 1992, AJ, 103, 1987
* [20] Beers T. C., Flynn C., Rossi S. et al., 2007, ApJS, 168, 128
* [21] Beers T. C., Christlieb N., 2005, ARA&A, 43, 531
* [22] Bensby T., Feltzing S., Lundstrom I., 2003, A&A, 410, 527
* [23] Bensby T., Feltzing S., Lundstrom I., 2004, A&A, 415, 155
* [24] Bertolli M. G., Herwig F., Pignatari M. & Kawano T., 2013, ArXiv e-prints [arXiv:1310.4578]
* [25] Biemont E., Grevesse N., Hannaford P., Lowe R.M., 1981, ApJ 248, 867-873
* [26] Bisterzo S., Gallino R., Straniero O., Cristallo S., Käppeler F., 2011, MNRAS, 418, 284
* [27] Bonifacio P., Caffau E., Spite M., Limongi M., Klessen R. S. et al., 2015, A&A, 579, A28
* [28] Brooke J. S. A., Bernath P. F., Schmidt T. W. & Bacskay G. B., 2013, JQSRT, 124, 11
* [29] Burbidge E. M., Burbidge G. R., Fowler W. A. & Hoyle F., 1957, RvMP, 29, 547
* [30] Campbell S. W. & Lattanzio J. C., 2008, A&A, 490, 769
* [31] Campbell S. W., Lugaro M. & Karakas A. I., 2010, A&A, 522, L6
* [32] Carollo D. et al., 2012, ApJ, 744, 195
* [33] Cayrel R. et al., 2004, A&A, 416, 1117
* [34] Chen Y.Q., Nissen P.E., Zhao G., 2004, A&A, 425, 697
* [35] Christlieb N., 2003, RvMA, 16, 191
* [36] Christlieb N., Green P. J., Wisotzki L., & Reimers D., 2001a, A&A, 375, 366
* [37] Christlieb N., Wisotzki L., Reimers D., Homeier D., Koester D. & Heber U., 2001b, A&A, 366, 898
* [38] Clarkson O., Herwig F. & Pignatari M., 2018, MNRAS, 474, L37
* [39] Cohen J.G., Christlieb N., Qian Y.Z., Wasserburg G.J., 2003, AJ, 588, 1082
* [40] Cohen J. G. et al., 2005, ApJ, 633, L109
* [41] Corliss C.H., Bozman W.R., 1962, NBS Monograph 53
* [42] Cowan J. J. & Rose W. K., 1977, ApJ, 212, 149
* [43] Cowley C.R., Corliss C.H., 1983, MNRAS 203, 651-659
* [44] Cristallo S., Straniero O., Gallino R., Piersanti L., Domínguez I., Lederer M. T., 2009, ApJ, 696, 797
* [45] Cristallo S., Karinkuzhi D., Goswami A., Piersanti L. & Gobrecht D., 2016, ApJ, 833, 181
* [46] Cruz M. A., Serenelli A. & Weiss A., 2013, A&A, 559, A4
* [47] Cui X., Zhao Y. H., Chu Y. Q. et al., 2012, RAA, 12, 1197
* [48] Dardelet L., Ritter C., Prado P. et al., 2014, in POS XIII Nuclei in the Cosmos (NIC XIII), ed. E. Zoltán & Z. Fülöp, 145
* [49] Deng L., Newberg H. J., Liu C. et al., 2012, RAA, 12, 735
* [50] Denissenkov P. A., Herwig F., Battino U. et al., 2017, ApJL, 834, L10
* [51] Denissenkov P. A., Herwig F., Woodward P. et al., 2019, MNRAS, 488, 4258
* [52] Doherty C. L., Gil-Pons P., Siess L., Lattanzio J. C. & Lau H. H. B., 2015, MNRAS, 446, 2599
* [53] Duquennoy A. & Mayor M., 1991, A&A, 248, 485
* [54] Eggen O.J., 1997, AJ, 114, 825
* [55] Frebel A., 2018, Annual Review of Nuclear and Particle Science, 68, 237
* [56] Frebel A. & Norris J. E., 2015, ARA&A, 53, 631
* [57] Frebel A. et al., 2006, ApJ, 652, 1585
* [58] Führ J.R., Martin G.A., Wiese W.L., 1988, J. Phys. Chem. Ref. Data 17, Suppl. 4
* [59] Fujimoto M. Y., Iben I. Jr. & Hollowell D., 1990, ApJ, 349, 580
* [60] Fujimoto M. Y., Ikeda Y. & Iben I. Jr., 2000, ApJL, 529, L25
* [61] Gaia collaboration, Kartz D., Antoja T. et al., 2018, A&A, 616, A11
* [62] García-Hernández D. A., García-Lario P., Plez B., D’Antona F., Manchado A., Trigo-Rodríguez M., 2006, Science, 314, 1751
* [63] García-Hernández D. A., García-Lario P., Plez B., Manchado A., D’Antona F., Lub J. & Habing H., 2007, A&A, 462, 711
* [64] García-Hernández D. A. et al., 2009, ApJ, 705, L31
* [65] Garz T., 1973, A&A. 26, 471.
* [66] Gigoyan K. S., HamBaryan V. V. & Azzopardi M., 1998, APADS, 41, 545
* [67] Girardi L., Bressan A., Bertelli G., Chiosi C., 2000, A&AS 141, 371
* [68] Goswami A., Aoki W., Karinkuzhi D., 2016, MNRAS, 455, 402
* [69] Hampel M., Stancliffe R.J., Lugaro M., Meyer B.S., 2016, ApJ, 831, 171
* [70] Hampel M., Karakas A. I., Stancliffe R. J., Meyer B. S., Lugaro M., 2019, ApJ, 887, 11
* [71] Hannaford P., Lowe R.M., Grevesse N., Biemont E., Whaling W., 1982, ApJ 261, 736-746
* [72] Hansen T., Hansen C. J., Christlieb N., Beers T. C., Yong D. et al., 2015a, ApJ, 807, 173
* [73] Hansen C. J., Nordström B., Hansen T. T., Kennedy C. R., Placco V. M. et al., 2016a, A&A, 588, A37
* [74] Hansen T. T., Andersen J., Nordström B., Beers T. C., Placco V. M. et al., 2016b, A&A, 586, A160
* [75] Hansen T. T., Andersen J., Nordström B., Beers T. C., Placco V. M. et al., 2016c, A&A, 588, A3
* [76] Hansen C. J., Hansen T. T., Koch A. et al., 2019, A&A, 623, A128
* [77] Hayes C. R., Majewski S. R., Shetrone M., Fernandez-Alvar E., Prieto C. A. et al., 2018, ApJ, 852, 49H
* [78] Herwig F., Pignatari M., Woodward P. R. et al., 2011, ApJ, 727, 89
* [79] Herwig F., Woodward P. R., Lin P.-H., Knox M. & Fryer C., 2014, ApJL, 792, L3
* [80] Hill V., Barbuy B., Spite M. et al., 2000, A&A, 353, 557
* [81] Hollek J. K., Frebel A., Placco V. M. et al., 2015, ApJ, 814, 121
* [82] Hollowell D., Iben I. Jr. & Fujimoto M. Y., 1990, ApJ, 351, 245
* [83] Honda S., Aoki W., Kajino T., Ando H., Beers T. C. et al., 2004, ApJ, 607, 474
* [84] Ibata R., Lewis G. F., Irwin M., Totten E. & Quinn T., 2001, ApJ, 551, 294
* [85] Ivans I. I., Sneden C., Gallino R. et al., 2005, ApJ, 627, L145
* [86] Ji W., Cui W., Liu C., Luo A., Zhao G. & Zhao B., 2016, ApJS, 226, 1
* [87] Johnson D.R.H., Soderblom D.R., 1987, AJ, 93, 864
* [88] Jones S., Ritter C., Herwig F. et al., 2016, MNRAS, 455, 3848
* [89] Jonsell K., Barklem P.S., Gustafsson B., Christlieb N., Hill V. et al., 2006, A&A, 451, 651
* [90] Jorissen A., Eck S. V., Winckel H. V., Merle T., Boffin H. M. J. et al., 2016, A&A, 586, 158
* [91] Shejeelammal J., Goswami A., Goswami P. P., Rathour R. S., Masseron T., 2020, MNRAS, 492, 3708
* [92] Käppeler F., Gallino R., Bisterzo S., Aoki W., 2011, RvMP, 83, 157
* [93] Karakas A.I., García-Hernández D. A. & Lugaro M., 2012, ApJ, 751, 8
* [94] Karinkuzhi D., Goswami A., 2014, MNRAS, 440, 1095
* [95] Karinkuzhi D., Goswami A., 2015, MNRAS, 446, 2348
* [96] Karinkuzhi D., Van Eck S., Jorissen A., Goriely S., Siess L., Merle T., Escorza A., Van der Swaelmen M., Boffin H. M. J., Masseron T., Shetye S. & Plez B., 2018, A&A, 618, A32
* [97] Keenan P.C., 1942, ApJ, 96, 101
* [98] Koch A., Reichert M., Hansen C. J. et al., 2019, A&A, 622, A159
* [99] Kurucz R.L., Peytremann E., 1975, SAO Special Report 362
* [100] Kurucz R.L., 1988, Trans. IAU, XXB, M. McNally, ed., Dordrecht: Kluwer, 168-172
* [101] Lage C.S., Whaling W., 1976, JQSRT 16, 537-542
* [102] Lambert D.L., Mallia E.A. & Warner B., 1969, MNRAS 142, 71.
* [103] Lambert D. L. & Luck R. E., 1976, Obs., 96, 100L
* [104] Lambert D. L., Smith V. V., Busso M., Gallino R. & Straniero O., 1995, ApJ, 450, 302
* [105] Lambert D.L., Heath J.E., Lemke M., Drake J., 1996, ApJS, 103, 183
* [106] Lau H. H. B., Stancliffe R. J. & Tout C. A., 2009, MNRAS, 396, 1046
* [107] Lee Y. S. et al., 2013, AJ, 146, 132
* [108] Lincke R., Ziegenbein G., 1971, Z. Phyzik, 241, 369.
* [109] Lucatello S., Tsangarides S., Beers T. C., Carretta E., Gratton R. G., Ryan S. G., 2005, ApJ, 652, 825
* [110] Lucatello S., Beers T. C., Christlieb N., Barklem P. S., Rossi S. et al., 2006, ApJ, 652, L37
* [111] Luck R. E., 2017, AJ, 153, 21
* [112] Luck R. E. & Heiter U., 2007, AJ, 133, 2464
* [113] Lugaro M., Campbell S. W. & de Mink S. E., 2009, PASA, 26, 322
* [114] Marsteller B., Beers T. C., Rossi S., Christlieb N., Bessell M., Rhee J., 2005, Nucl. Phys. A, 758, 312
* [115] Martin G.A., Führ J.R., Wiese W.L., 1988, J.Phys.Chem.Ref.Data, 17, Suppl.3
* [116] Masseron T., Johnson J.A., Plez B., Van Eck S., Goriely S., Jorissen A., 2010, A&A, 509, 93
* [117] McClure R.D., 1983, ApJ, 208, 264
* [118] McClure R.D., 1984, ApJ, 280, 31
* [119] McClure R.D., Woodsworth W., 1990, ApJ, 352, 709
* [120] McClure R. D., Fletcher J. M., Nemec J., 1980, ApJ, 238, L35
* [121] McWilliam A., 1998, AJ, 115, 1640
* [122] Meggers W.F., Corliss C.H., Scribner B.F., 1975, NBS Monograph 145
* [123] Mishenina T.V., Soubiran C., Kovtyukh V.V., Korotin S.A., 2004, A&A, 418, 551
* [124] Norris J. E., Ryan S. G., Beers T. C., 1997, ApJ, 488, 350
* [125] Placco V. M., Frebel A., Beers T. C. & Stancliffe R. J., 2014, ApJ, 797, 21
* [126] Plez B., Smith V. V. & Lambert D. L., 1993, ApJ, 418, 812
* [127] Prochaska J.X., McWilliam A., 2000, ApJ, 537, 57
* [128] Prochaska J. X., Naumov S. O., Carney B. W., McWilliam A., Wolfe A. M., 2000, AJ, 120, 2513
* [129] Purandardas M., Goswami A., Goswami P. P., Shejeelammal J., Masseron T., 2019, MNRAS, 486, 3266
* [130] Qian Y.-Z & Wasserburg G. J., 2003, ApJ, 588, 1099
* [131] Qian Y.-Z. & Woosley, 1996, ApJ, 471, 331
* [132] Ram R. S., Brooke James S. A., Bernath P. F., Sneden C., Lucatello S., 2014, ApJS, 211, 5
* [133] Reddy B.E., Lambert D.L., Priesto C.A., 2006, MNRAS, 367, 1329
* [134] Rossi S., Beers T. C., Sneden C., 1999, in Gibson B. K., Axelrod R. S., Putman M. E., eds, ASP Conf. Ser., Vol. 165, The Third Stromlo Symposium: The Galactic Halo. Astron. Soc. Pac., San Francisco, p. 264
* [135] Rossi S., Beers T. C., Sneden C. et al., 2005, AJ, 130, 2804
* [136] Schönrich R., Binney J., Dehnen W., 2010, MNRAS, 403, 1829
* [137] Schulz-Gulde E., 1969, JQSRT 9, 13
* [138] Smith P. L., Kuhne M., 1978, Proc. Roy. Soc., 363, 263
* [139] Smith V. V., Coleman H. & Lambert D. L., 1993, ApJ, 417, 287
* [140] Sneden C., 1973, PhD thesis, Univ. Texas
* [141] Sneden C., Cowan J.J., Gallino R., 2008, ARA&A, 46, 241
* [142] Sneden C., Lucatello S., Ram R. S., Brooke J. S. A. & Bernath P. F., 2014, ApJS, 214, 26
* [143] Spite M., Caffau E., Bonifacio P., Spite F., Ludwig H. -G. et al., 2013, A&A, 552, A107
* [144] Stancliffe R.J., Glebbeek E., Izzard R.G., Pols O.R., 2007, A&A, 464, 57
* [145] Stancliffe R. J., Dearborn D. S. P., Lattanzio J. C., Heap S. A. & Campbell S. W., 2011, ApJ, 742, 121
* [146] Stancliffe R. J., Kennedy C. R., Lau H. H. B. & Beers T. C., 2013, MNRAS, 435, 698
* [147] Starkenburg E., Shetrone M D., McConnachie A W., Venn K A., 2014, MNRAS, 441, 1217
* [148] Totten E. J. & Irwin M. J., 1998, MNRAS, 294, 1
* [149] Udry S., Jorissen A., Mayor M., Van Eck S., 1998a, A&AS, 131, 25
* [150] Udry S., Mayor M., Van Eck S., Jorissen A., Prévot L., Grenier S., Lindgren H., 1998b, A&AS, 131, 43
* [151] van Raai M. A., Lugaro M., Karakas A. I., García-Hernández D. A. & Yong D., 2012, A&A, 540, A44
* [152] Vanture A.D., 1992, AJ, 104, 1977
* [153] Venn K. A., Irwin M., Shetrone M. D. et al., 2004, AJ, 128, 1177
* [154] Wanajo S., Nomoto K., Iwamoto N., Ishimaru Y. & Beers T. C., 2006, ApJ, 636, 842
* [155] Wanajo S., Itoh N., Goriely S., Samyn M., Ishimaru Y., 2005, NuphA, 758, 671
* [156] Ward L., Vogel O., Arnesen A., Hallin R., Wannstrom A., 1985, Phys. Scripta 31, 162-165
* [157] Warner B., 1968, MNRAS 140, 53-59
* [158] Woodward P. R., Herwig F. & Lin P.-H., 2015, ApJ, 798, 49
* [159] Worley C. C., Hill V. J., Sobeck J., Carretta E., 2013, A&A, 553, A47
* [160] Yang G.C. et al., 2016, RAA, 16, 1
* [161] Yanny B. et al., 2009, AJ, 137, 4377
* [162] Yong D. et al., 2013, ApJ, 762, 27
* [163] Yoon J. et al., 2016, ApJ, 833, 20
* [164] York D. et al., 2000, AJ, 120, 1579
* [165] Zhao G., Zhao Y., Chu Y., Jing Y. & Deng L., 2012, RAA, 12, 723
* [166] Zijlstra A.A., 2004, MNRAS, 348, 23
## Appendix
Table A1 : Equivalent widths (in mÅ) of Fe lines used for deriving atmospheric
parameters Wavelength(Å) El $E_{low}$(eV) log gf LAMOSTJ091608.81+230734.6
LAMOSTJ151003.74+305407.3 Ref 4445.471 Fe I 0.087 -5.380 77.50(6.44) - 1
4484.220 3.603 -0.720 91.70(6.40) 88.1(5.72) 1 4489.739 0.121 -3.966
133.5(6.39 - 1 4619.288 3.603 -1.120 84.2(6.60) 76.7(5.95) 1 4635.846 2.845
-2.420 56.2(6.45) - 1 4637.503 3.283 -1.390 86.3(6.53) - 1 4643.464 3.654
-1.290 70.9(6.54) - 1 4690.138 3.686 -1.640 53.7(6.60) - 1 4882.143 3.417
-1.640 82.1(6.80) 55.7(5.94) 1 4907.732 3.430 -1.840 61.7(6.62) - 1 4908.031
4.217 -1.396 39.8(6.58) - 2 4917.229 4.191 -1.180 35.7(6.38) - 1 4924.770
2.278 -2.220 111(6.64) 112(5.85) 1 4939.687 0.859 -3.340 142(6.64) 173(6.02) 1
4967.890 4.191 -0.622 82.8(6.7) - 2 4969.917 4.216 -0.710 56.7(6.31) - 1
4985.253 3.930 -0.560 84.8(6.36) - 2 5022.236 3.984 -0.530 80.8(6.31) 96(6.05)
1 5028.126 3.573 -1.474 71.9(6.59) 42.6(5.78) 2 5049.819 2.278 -1.420
160.4(6.77) - 1 5109.652 4.302 -0.980 62.3(6.77) - 1 5127.359 0.915 -3.307
133.2(6.37) - 1 5159.058 4.283 -0.820 55.3(6.46) - 1 5187.915 4.143 -1.260
54.2(6.71) - 1 5215.179 3.266 -0.933 121(6.67) - 1 5242.491 3.634 -0.840
95.7(6.58) 84.2(5.87) 1 5247.050 0.087 -4.946 - 136.1(5.89) 1 5250.209 0.121
-4.938 120.4(6.63) 131.1(5.82) 1 5253.462 3.283 -1.670 92.1(6.8) 78.6(6.06) 1
5281.790 3.038 -1.020 - 150.6(5.94) 1 5322.040 2.280 -2.840 81.2(6.48)
76.4(5.88) 3 5339.930 3.270 -0.680 132.7(6.63) - 3 5364.858 4.445 0.2200
110.5(6.68) - 3 5365.399 3.573 -1.440 86.7(6.8) 55.7(5.9) 2 5367.479 4.415
0.3500 113.5(6.57) - 1 5369.961 4.370 0.3500 114.3(6.54) - 1 5383.369 4.312
0.5000 130(6.65) - 1 5543.936 4.217 -1.140 58.8(6.74) - 1 5569.620 3.420
-0.490 125.7(6.45) - 3 5576.090 3.430 -0.851 113.5(6.69) - 1 5586.756 3.368
-0.210 157.8(6.71) - 1 5618.631 4.209 -1.380 51.4(6.72) - 1 5701.544 2.559
-2.216 108.5(6.69) 90.7(5.79) 1 5741.848 4.256 -1.730 20.8(6.63) - 1 5753.120
4.260 -0.760 77.6(6.72) 58.6(6.08) 1 5856.088 4.294 -1.640 34(6.79) - 1
5859.586 4.549 -0.386 83.1(6.79) - 2 5862.357 4.549 -0.051 81.7(6.31) - 2
5956.692 0.859 -4.605 95.9(6.62) 81.6(5.81) 1 6003.010 3.881 -1.120 -
47.7(5.82) 1 6082.710 2.222 -3.573 55.2(6.66) 28.3(5.86) 1 6136.994 2.198
-2.950 84.5(6.47) 78.3(5.88) 1 6137.694 2.588 -1.403 157.5(6.76) - 1 6151.620
2.180 -3.290 79.8(6.71) 62.4(6.01) 3 6173.340 2.220 -2.880 101.7(6.72)
108.2(6.17) 3 6180.204 2.727 -2.780 64.8(6.64) 43.2(5.96) 1 6200.314 2.608
-2.437 - 85(5.96) 1 6213.429 2.222 -2.660 113(6.68) 102.7(5.88) 1 6219.279
2.198 -2.433 120.5(6.56) 140.4(6.08) 1 6240.646 2.222 -3.380 70.8(6.49)
54.1(5.84) 1 6252.554 2.404 -1.687 155.1(6.73) 152.2(5.76) 1 6254.258 2.279
-2.480 - 117.3(5.86) 1
The numbers within the parenthesis in columns 5-6 give the derived abundances
from the respective line.
Table A1 continues…
Wavelength(Å) El $E_{low}$(eV) log gf LAMOSTJ091608.81+230734.6
LAMOSTJ151003.74+305407.3 Ref 6280.617 0.859 -4.390 120.5(6.75) - 1 6297.800
2.222 -2.740 90.3(6.36) - 1 6301.500 3.654 -0.672 107.8(6.45) - 2 6322.690
2.588 -2.426 - 99.7(6.08) 2 6335.328 2.198 -2.230 128(6.46) - 1 6393.602 2.432
-1.620 158.3(6.71) - 1 6408.016 3.686 -1.048 93.5(6.52) - 2 6419.950 4.730
-0.090 81.3(6.77) 53.5(6.05) 3 6421.349 2.278 -2.027 - 153.1(5.91) 1 6430.850
2.180 -2.010 151.5(6.63) 160.5(5.84) 3 6481.870 2.278 -2.984 99.5(6.8) - 1
6574.227 0.990 -5.040 73.2(6.81) 69.5(6.23) 1 6575.019 2.588 -2.820 79.4(6.69)
- 1 6592.910 2.720 -1.470 131.8(6.53) - 3 6593.871 2.432 -2.422 107.1(6.55) -
1 6677.989 2.692 -1.470 148.4(6.64) 169(6.07) 1 6739.521 1.557 -4.950
28.6(6.59) - 1 6750.150 2.424 -2.621 113.7(6.82) - 1 4416.830 Fe II 2.778
-2.600 - 71.3(5.95) 1 4629.339 2.807 -2.280 108.3(6.55) - 1 4923.927 2.891
-1.320 159.8(6.68) - 1 5234.620 3.220 -2.240 99.2(6.48) 60.9(5.76) 3 6247.550
3.890 -2.340 57.1(6.6) 20.3(5.98) 3 6369.462 2.891 -4.253 29.7(6.81) - 2
6456.383 3.903 -2.075 67.3(6.56) 30.8(6.03) 1
The numbers within the parenthesis in columns 5-6 give the derived abundances
from the respective line.
References: 1. Führ et al. (1988), 2. Kurucz (1988), 3. Lambert et al. (1996)
Table A2 : Equivalent widths (in mÅ) of lines used for deriving elemental
abundances Wavelength(Å) El $E_{low}$(eV) log gf LAMOSTJ091608.81+230734.6
LAMOSTJ151003.74+305407.3 Ref 5682.633 Na I 2.102 -0.700 109.4(6.15) - 1
5688.205 2.100 -0.450 106.8(5.86) - 1 6154.226 2.102 -1.560 59.90(6.16)
23.10(5.27) 1 6160.747 2.104 -1.260 66.40(5.97) 35.20(5.20) 1 4702.991 Mg I
4.346 -0.666 - 116.9(5.85) 2 5528.405 4.346 -0.620 175.6(7.23) 138.9(6.00) 2
5711.088 4.346 -1.833 92.80(7.01) 64.30(6.20) 2 5690.425 Si I 4.929 -1.870
31.90(6.67) - 3 5948.541 5.083 -1.230 61.40(6.72) - 3 6145.016 5.616 -0.820
31.60(6.38) - 4 4435.679 Ca I 1.890 -0.520 - 119.2(4.77) 5 5349.465 2.709
-1.178 38.20(5.42) - 5 5512.980 2.932 -0.290 69.90(5.36) - 5 5581.965 2.523
-1.833 - 39.60(4.41) 5 5590.114 2.521 -0.710 89.50(5.67) 35.30(4.33) 5
5857.451 2.932 0.23 93.60(5.27) 76.20(4.47) 5 6102.723 1.879 -0.890
102.0(5.26) 103.5(4.59) 5 6166.439 2.521 -0.900 64.10(5.33) - 5 6169.042 2.523
-0.550 79.50(5.26) - 5 6169.563 2.523 -0.270 - 83.30(4.53) 5 6439.075 2.525
0.47 157.5(5.64) - 5 6449.808 2.523 -0.550 86.90(5.37) 70.30(4.61) 5 6471.662
2.525 -0.590 - 56.80(4.48) 5 6493.781 2.521 0.14 117.9(5.27) - 5
The numbers within the parenthesis in columns 5-6 give the derived abundances
from the respective line.
Table A2 continues…
Wavelength(Å) El $E_{low}$(eV) log gf LAMOSTJ091608.81+230734.6
LAMOSTJ151003.74+305407.3 Ref 4512.734 Ti I 0.836 -0.480 84.90(4.04) - 6
4617.269 1.749 0.389 87.00(4.29) - 6 4759.272 2.255 0.514 40.30(3.82) - 6
4840.874 0.899 -0.509 81.20(3.98) 78.00(3.19) 6 5007.210 0.820 0.1700
131.4(4.25) - 6 5024.842 0.818 -0.602 82.10(3.95) 90.20(3.30) 6 5210.386 0.047
-0.884 110.0(3.79) 143.9(3.23) 6 5460.499 0.050 -2.880 27.30(4.26) - 7
5918.535 1.070 -1.460 33.30(4.16) 25.70(3.50) 6 5922.110 1.046 -1.466
35.10(4.17) - 6 5941.751 1.050 -1.510 30.70(4.14) - 6 6303.757 1.443 -1.566
25.00(4.03) - 6 4568.314 Ti II 1.220 -2.650 69.10(3.98) 30.20(3.02) 6 4571.960
1.571 -0.530 - 140.4(3.16) 6 4764.526 1.236 -2.770 - 49.90(3.47) 6 4798.521
1.080 -2.430 72.10(3.62) - 6 4865.612 1.116 -2.610 71.30(3.82) - 6 5185.900
1.890 -1.350 82.10(3.66) 86.30(3.34) 6 5381.015 1.566 -2.080 - 46.00(3.07) 6
5247.565 Cr I 0.961 -1.640 - 81.80(3.60) 6 5296.691 0.982 -1.400 131.8(5.17) -
6 5300.744 0.982 -2.120 - 49.00(3.68) 6 5345.801 1.003 -0.980 150.1(5.12)
128.7(3.64) 6 5348.312 1.003 -1.290 121.6(4.86) 106.5(3.62) 6 5409.772 1.030
-0.720 156.1(4.98) 130.0(3.41) 6 5787.965 3.323 -0.083 67.70(5.30) - 6
6362.862 0.941 -3.623 33.00(5.33) - 5 4686.207 Ni I 3.597 -0.640 64.70(5.44) -
5 4752.415 3.658 -0.700 - 40.90(4.76) 8 4821.130 4.153 -0.850 37.40(5.73) - 8
4953.200 3.74 -0.670 59.80(5.51) 22.00(4.45) 5 4980.166 3.606 -0.110 -
72.90(4.55) 8 5082.350 3.657 -0.540 77.40(5.63) 36.50(4.49) 8 5102.960 1.676
-2.620 - 93.90(4.87) 8 6086.280 4.266 -0.530 53.50(5.79) - 8 6175.360 4.089
-0.530 52.30(5.55) - 8 6177.236 1.826 -3.500 38.40(5.56) - 8 6186.710 4.106
-0.777 41.80(5.62) - 8 6204.600 4.088 -1.130 32.40(5.76) - 8 6327.593 1.676
-3.150 79.10(5.72) - 8 6378.247 4.154 -0.890 31.40(5.57) - 5 6643.629 1.676
-2.300 - 106.6(4.53) 1 4722.150 Zn I 4.029 -0.370 - 34.80(2.66) 9 4810.530
4.080 -0.170 - 33.80(2.46) 9 6362.338 0.150 5.796 29.60(4.11) - 10 4607.327 Sr
I 0.000 -0.570 84.20(3.21) - 11 6435.004 Y I 0.066 -0.820 47.70(2.73)
55.20(2.19) 12 4883.684 Y II 1.084 0.07 145.4(2.37) - 12 5119.112 0.992 -1.360
- 172.7(3.13) 12 5289.815 1.033 -1.850 63.50(2.27) - 12 5402.774 1.839 -0.510
77.50(2.20) - 13 5544.611 1.738 -1.090 64.50(2.36) - 12 5546.009 1.748 -1.110
- 124.9(2.97) 12 5662.925 1.944 0.16 105.3(2.26) - 13 6613.733 1.748 -1.100
74.20(2.49) - 12
The numbers within the parenthesis in columns 5-6 give the derived abundances
from the respective line.
Table A2 continues…
Wavelength(Å) El $E_{low}$(eV) log gf LAMOSTJ091608.81+230734.6
LAMOSTJ151003.74+305407.3 Ref 4739.480 Zr I 0.651 0.230 84.30(3.30)
47.30(1.89) 14 4772.323 0.623 0.044 60.90(2.92) 63.90(2.27) 14 4805.889 0.687
-0.420 40.40(3.06) - 14 6134.585 0.000 -1.280 54.10(3.18) 32.50(2.15) 14
4257.120 Ce II 0.460 -1.116 - 36.80(1.15) 15 4336.244 0.704 -0.564 82.70(2.32)
- 15 4349.789 0.701 -0.107 - 65.90(1.34) 13 4407.273 0.701 -0.741 -
53.70(1.32) 15 4497.846 0.958 -0.349 - 67.00(1.44) 15 4508.079 0.621 -1.238
60.90(2.28) - 15 4628.161 0.516 0.008 120.8(2.43) - 15 4747.167 0.320 -1.246 -
50.80(1.25) 15 4873.999 1.107 -0.892 49.60(2.22) - 15 5187.458 1.211 -0.104
86.20(2.35) - 15 5274.229 1.044 -0.323 - 96.10(1.21) 6 5330.556 0.869 -0.760
72.30(2.23) 59.40(1.53) 15 6034.205 1.458 -1.019 42.10(2.52) - 15 5188.217 Pr
II 0.922 -1.145 26.00(1.59) 20.40(1.18) 15 5219.045 0.795 -0.240 - 77.20(1.02)
16 5259.728 0.633 -0.682 65.90(1.60) 61.90(1.04) 16 5292.619 0.648 -0.300
83.40(1.62) - 16 5322.772 0.482 -0.315 90.30(1.58) 98.00(0.98) 15 6165.891
0.923 -0.205 59.20(1.24) 63.00(0.87) 15 6278.676 1.196 -0.630 22.20(1.23) - 16
4446.384 Nd II 0.204 0.590 - 110.9(1.27) 17 4451.563 0.380 -0.040 115.1(2.04)
- 17 4556.133 0.064 -1.610 76.60(2.13) - 15 4811.342 0.064 -1.140 92.80(2.00)
- 15 4825.478 0.182 -0.860 112.4(2.37) - 15 4947.020 0.559 -1.250 -
58.00(1.42) 15 4961.387 0.631 -0.710 96.00(2.32) 98.20(1.56) 15 5212.361 0.204
-0.870 - 104.7(1.21) 15 5276.869 0.859 -0.440 - 71.80(1.15) 17 5287.133 0.744
-1.300 - 39.30(1.39) 15 5356.967 1.264 -0.250 - 68.20(1.42) 17 5361.510 0.680
-0.400 - 92.40(1.16) 17 5442.264 0.680 -0.910 84.40(2.19) 86.00(1.56) 17
5485.696 1.264 -0.120 77.70(1.96) 68.70(1.28) 17 5603.648 0.380 -1.830
71.70(2.43) - 15 5718.118 1.410 -0.340 - 48.90(1.39) 17 5825.857 1.080 -0.760
65.90(2.08) 50.00(1.40) 15 4458.509 Sm II 0.104 -1.110 76.30(1.45) - 15
4499.475 0.248 -1.413 52.10(1.34) 61.20(1.04) 15 4519.630 0.543 -0.751 -
81.70(1.07) 15 4566.210 0.330 -1.245 57.80(1.38) 63.70(1.01) 15 4615.444 0.544
-1.262 50.60(1.49) - 15 4642.228 0.379 -0.951 71.20(1.45) 88.90(1.15) 15
4674.593 0.184 -1.055 68.10(1.23) - 15 4676.902 0.040 -1.407 62.30(1.28) - 15
4704.400 0.000 -1.562 73.10(1.62) 65.00(0.89) 15 4726.026 0.333 -1.849
31.80(1.42) 27.70(1.02) 15 4854.368 0.379 -1.873 32.10(1.49) - 15
The numbers within the parenthesis in columns 5-6 give the derived abundances
from the respective line.
References: 1. Kurucz et al. (1975), 2. Lincke et al. (1971), 3. Garz (1973),
4. Schulz-Gulde (1969), 5. Kurucz (1988), 6. Martin et al. (1988), 7. Smith
et al. (1978), 8. Führ et al. (1988), 9. Warner (1968), 10. Lambert et al.
(1969), 11. Corliss et al. (1962), 12. Hannaford et al. (1982), 13. Cowley et
al. (1983), 14. Biemont et al. (1981), 15. Meggers et al. (1975), 16.
Lagehaling et al. (1976), 17. Ward et al. (1985)
1Indian Institute of Astrophysics, Koramangala, Bangalore 560034, India
# Abundances of neutron-capture elements in CH and Carbon-Enhanced Metal-Poor
(CEMP) stars
Meenakshi Purandardas1 and Aruna Goswami1
###### Abstract
All the elements heavier than Fe are produced either by the slow (s-) or the
rapid (r-) neutron-capture process. The neutron density prevailing at the
stellar site is one of the major factors that determine the type of
neutron-capture process. We present the results based on the estimates of the corrected value of
absolute carbon abundance, [C/N] ratio, carbon isotopic ratio and [hs/ls]
ratio obtained from the high resolution spectral analysis of six stars that
include both CH stars and CEMP stars. All the stars show enhancement of
neutron-capture elements. Location of these objects in the A(C) vs. [Fe/H]
diagram shows that they are Group I objects, with external origin of carbon
and neutron-capture elements. Low values of carbon isotopic ratios estimated
for these objects may also be attributed to some external sources. As the
carbon isotopic ratio is a good indicator of mixing, we have used the
estimates of 12C/13C ratios to examine the occurrence of mixing in the stars.
While the object HD 30443 might have experienced an extra mixing process that
usually occurs after red giant branch (RGB) bump for stars with log(L/L⊙) $>$
2.0, the remaining objects do not show any evidence of having undergone any
such mixing process. The higher values of [C/N] ratios obtained for these
objects also indicate that none of these objects have experienced any strong
internal mixing processes. Based on the estimated abundances of carbon and the
neutron-capture elements, and the abundance ratios, we have classified the
objects into different groups. While the objects HE 0110$-$0406, HD 30443 and
CD$-$38 2151 are found to be CEMP-s stars, HE 0308$-$1612 and HD 176021 show
characteristic properties of CH stars with moderate enhancement of carbon. The
object CD$-$28 1082 with enhancement of both r- and s-process elements is
found to belong to the CEMP-r/s group.
###### keywords:
Stars—Individual; Stars—Abundances; Stars—Carbon; Stars—Nucleosynthesis.
March 2020
## 1 Introduction
Chemical analysis of metal-poor stars such as CH stars and CEMP stars can
provide important clues about the nature of the nucleosynthesis processes that occurred
in the early Galaxy. Especially, the abundances of neutron-capture elements
can be used to constrain the Galactic chemical evolution due to heavy
elements. Various sky survey programmes (HK survey and Hamburg/ESO survey,
Beers et al. 1985; Wisotzki et al. 2000; Christlieb et al. 2001) were
conducted in the past to find metal-poor stars. All these surveys show that
the fraction of carbon enhanced objects increases with decreasing metallicity
(Beers & Christlieb 2005; Frebel et al. 2005; Norris et al. 2007; Spite et al.
2013; Yong et al. 2013).
CH stars are FGK giants that show strong carbon molecular bands in their
spectra. They are high radial velocity objects, mostly found in the halo of
our Galaxy. CEMP stars are the metal-poor ([Fe/H] $<$ $-1$) counterparts of CH
stars. Both the CH stars and CEMP stars show enhancement of carbon and
neutron-capture elements. Hence these objects are ideal candidates to study
the origin and evolution of these elements. Based on the type of enhancement
of neutron-capture elements CEMP stars are classified into different groups,
such as CEMP-s, CEMP-r, CEMP-r/s and CEMP-no stars (Beers & Christlieb 2005).
The evolutionary status of CH and CEMP stars cannot by itself account for the
enhancement of carbon and heavy elements observed in these stars. The widely accepted
scenario to explain this enhancement is that these objects are in a binary
system. The primary companion once passed through the asymptotic giant branch
(AGB) phase and synthesized carbon and heavy elements. The synthesized
materials are then transferred to the secondary companion through some mass
transfer mechanisms. The radial velocity variations exhibited by CH and CEMP
stars (McClure 1983, 1984; McClure & Woodsworth 1990; Hansen et al. 2016a)
support this idea.
In this paper, we have presented the results from the high-resolution analysis
of six stars that include four CEMP stars and two CH stars. The paper is
organized as follows. In section 2, we have presented a brief discussion on
the new results obtained for our programme stars as some of the results from
abundance analysis of these objects were presented in Purandardas et al.
(2019), and Purandardas, Goswami & Doddamani (2019). Section 3 presents the
sample selection, observations and data reductions. Section 4 describes the
determination of radial velocity and stellar atmospheric parameters. Details
of the abundance analysis are presented in section 5. In section 6, the
interpretation of our results is presented. Conclusions are drawn in Section
7.
## 2 Novelty of this work
We have presented the abundance analysis results for 26 elements in our
programme stars in Purandardas et al. (2019), and Purandardas, Goswami &
Doddamani (2019). In these works, we have also reported the mass and age of
these stars as well as the results from the kinematic analysis. The location
of these objects in the H-R diagram shows that they are either sub-giants or
in the ascending stage of the giant branch. Various mixing processes have been
found to operate in giant stars. It is therefore important to understand
whether the stars have undergone any internal mixing processes before
interpreting the observed abundances. We had not addressed this problem in
our previous works. In this paper, we have checked whether any internal mixing
processes have altered the surface chemical composition of these stars based
on [C/N] and carbon isotopic ratios. While HD 30443 might have experienced an
extra mixing process that usually occurs after red giant branch (RGB) bump for
stars with log(L/L⊙) $>$ 2.0, the remaining objects do not show any evidence
of having undergone any such mixing process. In our previous works, we have
not checked the possible source of enrichment of neutron-capture elements
observed in our programme stars. In the present work, we have tried to
understand the possible source of neutron-capture elements based on the
location of the absolute carbon abundance values of the programme stars, in
the A(C) vs. [Fe/H] diagram. Based on the estimated value of carbon abundances
together with the observed enhancement of neutron-capture elements, we have
classified them as Group I objects following Yoon et al. (2016) classification
scheme. As the Group I objects are all binaries, it is likely that our
programme stars that belong to this group are also in binary systems with
external origin of carbon and neutron-capture elements. Low values of carbon
isotopic ratios estimated for these objects may also be attributed to external
sources. In the present work, we have re-calculated the [hs/ls] ratio for our
programme stars without considering the contribution from samarium which is an
r-process element which we had taken into account for this estimation in our
previous works.
## 3 Observations and Data reduction
Programme stars are selected from the CH star catalogue of Bartkevicius
(1996), Goswami (2005) and Goswami et al. (2010). In the latter papers,
potential CH star candidates are identified based on the low resolution
(R=$\lambda/\delta\lambda$$\sim$1330) spectroscopic studies of faint high
latitude carbon stars. In these works, two of our programme stars HE
0110$-$0406 and HE 0308$-$1612 are identified to be potential CH star
candidates. We have obtained high resolution (R $\sim$ 60 000) spectra of
these objects along with HD 30443 using the high-resolution fiber fed Hanle
Echelle Spectrograph (HESP) attached to the 2 m Himalayan Chandra Telescope
(HCT) at the Indian Astronomical Observatory, Hanle. The spectra cover the
wavelength range from 3530 to 9970 Å. The spectrograph allows a resolution of
60 000 with slicer, and a resolution of 30 000 without slicer. The spectrum is
recorded on a CCD with 4096$\times$4096 pixels of 15 micron size. For the
programme stars, CD$-28$ 1082 and HD 176021, we have used the high resolution
spectra from the Fiber-fed Extended Range Optical Spectrograph (FEROS) on the
1.52 m telescope of the European Southern Observatory at La Silla. The wavelength
coverage of the FEROS spectra is from 3500 - 9000 Å with a spectral resolution
of $\sim$ 48 000. The detector is a back-illuminated CCD with 2948$\times$4096
pixels of 15 $\mu$m size. For the object CD$-38$ 2151, a high resolution (R
$\sim$ 72 000) spectrum was obtained using the high-resolution fiber fed
Echelle spectrometer attached to the 2.34 m Vainu Bappu Telescope (VBT) at the
Vainu Bappu Observatory (VBO), Kavalur. The spectrum covers the wavelength
region from 4100 to 9350 Å with gaps between orders. The spectrometer operates
in two modes. It allows a resolution of 72 000 with a 60 micron slit and a
resolution of 27 000 without the slit. The spectrum is recorded on a CCD with
4096$\times$4096 pixels of 12 $\mu$m size. The data reduction is carried out
using various spectroscopic reduction packages such as IRAF. Examples of a few
sample spectra of the programme stars are shown in Figure 1.
Figure 1: Sample spectra of the programme stars in the wavelength region
7980-8030 Å.
## 4 Determination of radial velocity and stellar atmospheric parameters
Radial velocities of the programme stars are determined by measuring the shift
in the wavelength for a large number of unblended and clean lines in their
spectra. The radial velocities range from $-26.7$ to 139.7 km s-1. The
estimated radial velocities of the programme stars are presented in Table 1.
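The line-shift measurement described above amounts to averaging the Doppler velocity over many lines. The following is a minimal sketch (not the authors' pipeline; the wavelength values are hypothetical) of how per-line shifts combine into a mean radial velocity and its scatter:

```python
# Radial velocity from the measured wavelength shifts of clean, unblended lines.
C_KM_S = 299792.458  # speed of light in km/s

def radial_velocity(rest_wavelengths, observed_wavelengths):
    """Mean radial velocity (km/s) and its standard deviation from line shifts."""
    velocities = [C_KM_S * (obs - rest) / rest
                  for rest, obs in zip(rest_wavelengths, observed_wavelengths)]
    n = len(velocities)
    mean = sum(velocities) / n
    sigma = (sum((v - mean) ** 2 for v in velocities) / n) ** 0.5
    return mean, sigma

# Illustrative values only: a redshift of ~2.3 A at 5000 A is ~140 km/s,
# comparable to the largest velocity quoted above for CD-38 2151.
v, s = radial_velocity([5000.0, 6000.0], [5002.33, 6002.80])
```

In practice the quoted uncertainties also absorb the heliocentric correction and line-centering errors, which this sketch omits.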
Stellar atmospheric parameters are determined from the measured equivalent
widths of clean and unblended Fe I and Fe II lines using the local
thermodynamic equilibrium (LTE) analysis. We made use of the recent version of
MOOG of Sneden (1973) for our analysis. Model atmospheres are selected from
Kurucz grid of model atmospheres with no convective overshooting
(http://cfaku5.cfa.harvard.edu/). Solar abundances are taken from Asplund,
Grevesse & Sauval (2009). Effective temperature is taken to be that value for
which the trend between the abundance derived from Fe I lines and the
corresponding excitation potential gives a zero slope. At this temperature,
microturbulent velocity is determined for which the abundance derived from Fe
I lines do not exhibit any dependence on the reduced equivalent width.
Corresponding to these values of effective temperature and microturbulent
velocity, log g is determined in such a way that the abundances obtained from
Fe I and Fe II lines are nearly the same. Only those lines with excitation
potential from 0.0 - 5.0 eV and equivalent widths from 20 - 180 mÅ are
considered for the analysis. The derived atmospheric parameters and the radial
velocities are listed in Table 1.
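The excitation-equilibrium condition used above can be sketched as a simple slope test (illustrative only, not MOOG; the line data are hypothetical): at the adopted effective temperature, the least-squares slope of the Fe I line abundances against lower excitation potential should vanish, and a non-zero slope signals a wrong trial temperature. The same machinery applies to the microturbulence check, with reduced equivalent width on the abscissa.

```python
# Least-squares slope of A(Fe I) versus lower excitation potential (dex/eV).
def excitation_slope(excitation_potentials, abundances):
    n = len(abundances)
    mean_x = sum(excitation_potentials) / n
    mean_y = sum(abundances) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(excitation_potentials, abundances))
    var = sum((x - mean_x) ** 2 for x in excitation_potentials)
    return cov / var

# A flat run of abundances with excitation potential (slope ~ 0) indicates
# the trial Teff is acceptable; hypothetical line data shown.
eps = [0.1, 1.5, 2.6, 3.4, 4.2]
abunds = [6.62, 6.60, 6.63, 6.61, 6.62]
slope = excitation_slope(eps, abunds)
```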
Table 1: Derived atmospheric parameters and radial velocities of the programme
stars.
Star | Teff | log g | $\zeta$ | [Fe I/H] | [Fe II/H] | Vr
---|---|---|---|---|---|---
| (K) | (cgs) | (km s-1) | | | (km s-1)
HE 0110$-$0406 | 4670 | 1.00 | 1.92 | $-1.31$$\pm$0.09 | $-1.29$$\pm$0.12 | $-44.40$$\pm$3.8 (HESP)
HE 0308$-$1612 | 4600 | 1.70 | 1.42 | $-0.72$$\pm$0.19 | $-0.73$$\pm$0.15 | 85.5$\pm$1.22 (HESP)
CD$-28$ 1082 | 5200 | 1.90 | 1.42 | $-2.46$$\pm$0.08 | $-2.44$$\pm$0.02 | $-26.7$$\pm$0.3 (FEROS)
HD 30443 | 4040 | 2.05 | 2.70 | $-1.68$$\pm$0.05 | $-1.69$$\pm$0.11 | 66.61$\pm$0.20 (HESP)
CD$-38$ 2151 | 4600 | 0.90 | 2.30 | $-2.03$$\pm$0.10 | $-2.03$ | 139.7$\pm$1.9 (VBT)
HD 176021 | 5900 | 3.95 | 1.02 | $-0.62$$\pm$0.08 | $-0.65$$\pm$0.05 | 109.1$\pm$0.5 (FEROS)
## 5 Abundance analysis
A detailed discussion of the abundance analysis results for the six programme
stars is presented in Purandardas et al. (2019), and Purandardas, Goswami &
Doddamani (2019). Here we present a brief summary of these results, with more
emphasis on the results based on the absolute carbon abundance, the [C/N]
ratio, the carbon isotopic ratio, and the [hs/ls] ratio, which is recalculated
in this work.
The abundances of various elements are determined from the measured equivalent
widths of absorption lines due to neutral and ionized elements. We have used
only the symmetric and clean lines for our analysis. Lines are identified by
overplotting the Arcturus spectrum upon the individual spectra of our programme
stars. Then a master linelist is prepared using the measured equivalent widths
and other line information, such as the lower excitation potential and the
log gf values, taken from the Kurucz database. We have also consulted the VALD database.
We could estimate the abundances of 24 elements which include the light
elements C, N, O, odd-Z element Na, $\alpha$ \- and Fe-peak elements Mg, Si,
Ca, Ti, V, Cr, Mn, Co, Ni and Zn and the neutron-capture elements Sr, Y, Zr,
Ba, La, Ce, Pr, Nd, Sm and Eu. We have also used spectrum synthesis
calculations for elements such as Sc, V, Mn, Ba, La and Eu taking their
hyperfine structures into considerations. The hyperfine structures of Sc, V
and Mn are taken from Prochaska & McWilliam (2000). For Ba, La and Eu, the
hyperfine structures are taken from McWilliam (1998), Jonsell et al. (2006)
and Worley et al. (2013), respectively.
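The abundance ratios quoted throughout use the standard bracket notation, [X/H] = log ε(X)star − log ε(X)sun and [X/Fe] = [X/H] − [Fe/H]. A minimal sketch (the helper name is ours; solar values are the Asplund, Grevesse & Sauval 2009 photospheric abundances):

```python
# Convert an absolute abundance log eps(X) into [X/Fe] via bracket notation.
SOLAR = {"C": 8.43, "Fe": 7.50}  # Asplund, Grevesse & Sauval (2009)

def x_over_fe(log_eps_x, element, fe_h):
    """[X/Fe] = (log eps(X)_star - log eps(X)_sun) - [Fe/H]."""
    x_h = log_eps_x - SOLAR[element]
    return x_h - fe_h

# HE 0110-0406: log eps(C) = 7.85 with [Fe I/H] = -1.31 reproduces the
# [C/Fe] = 0.73 listed in Table 2.
c_fe = x_over_fe(7.85, "C", -1.31)
```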
We could estimate oxygen abundance only for CD$-38$ 2151 and HD 30443. For
these objects, the abundance of oxygen is determined from the spectrum
synthesis calculations of the [OI] line at 6300.3 Å and the OI line at 6363.8 Å. The
carbon abundance could be determined for all the objects from the spectrum
synthesis calculations of the C2 molecular band at 5165 Å. We could estimate
the carbon isotopic ratio for all of our programme stars except HD 176021
using the spectrum synthesis calculation of CN band at 8005 Å (Figure 2). The
values lie in the range from 7.4 to 45. Abundance of nitrogen is estimated
using the spectrum synthesis calculation of CN band at 4215 Å. Nitrogen is
found to be enhanced in CD$-28$ 1082 and CD$-38$ 2151. Other objects exhibit
moderate enhancement in nitrogen. The molecular lines for C2 and CN are taken
from Brooke et al. (2013), Sneden et al. (2014) and Ram et al. (2014). The
estimated values of carbon and nitrogen are presented in Table 2.
Table 2: Abundance results for carbon, nitrogen, C/O and carbon isotopic
ratios.
Star | log$\epsilon$(C) | log$\epsilon$(C)∗ | [C/Fe] | log$\epsilon$(N) | [N/Fe] | C/O | 12C/13C
---|---|---|---|---|---|---|---
HE 0110$-$0406 | 7.85 | 7.98 | 0.73 | 7.15 | 0.63 | - | 45.0
HE 0308$-$1612 | 8.48 | 8.50 | 0.78 | 7.25 | 0.15 | - | 15.6
CD$-28$ 1082 | 8.16 | 8.23 | 2.19 | 8.10 | 2.73 | - | 16.0
HD 30443 | 8.43 | 8.47 | 1.68 | 6.55 | 0.40 | 1.02 | 7.40
CD$-38$ 2151 | 7.90 | 8.02 | 1.50 | 7.20 | 1.40 | 2.95 | 11.2
HD 176021 | 8.33 | 8.33 | 0.52 | 7.80 | 0.59 | - | -
∗ Corrected value of carbon.
Figure 2: Synthesis of the CN band around 8005 Å. The synthesized spectrum is
shown in red and the observed spectrum in black. Synthetic
spectra corresponding to 12C/13C $\simeq$ 12 (blue) and 1 (magenta) are also
shown.
Sodium is moderately enhanced in all our programme stars except HD 176021 in
which Na is near solar. HE 0110$-$0406 and HD 176021 show near solar
abundances of the alpha elements, while HE 0308$-$1612 shows a slight enhancement
of these elements. We could not estimate Si in these objects. Among the alpha
elements, we could estimate only Mg and Ca in CD$-28$ 1082 with [Mg/Fe] $\sim$
0.45 and [Ca/Fe] $\sim$ 0.27. In HD 30443, Si and Ca are moderately enhanced
with [Si/Fe] $\sim$ 0.82 and [Ca/Fe] $\sim$ 0.51, while Mg, Sc and Ti are near
solar. Magnesium and Ca are found to be moderately enhanced in CD$-38$ 2151,
while Si is enhanced with [Si/Fe] $\sim$ 1.62. Scandium and Ti are found to be
near solar in this object.
HE 0110$-$0406 shows near solar abundance of Fe peak elements except Ni which
is slightly enhanced with [Ni/Fe] $\sim$ 0.45. The Fe-peak elements are
slightly enhanced in HE 0308$-$1612; Co could not be estimated in this object.
Among the Fe-peak elements, we could estimate only Mn in CD$-28$ 1082, which
is found to be enhanced with [Mn/Fe] $\sim$ 1.48. In HD 30443, Mn
and Co are near solar and Ni is moderately enhanced with [Ni/Fe]$\sim$ 0.59.
Cobalt and Ni are near solar in CD$-38$ 2151, while Cr is slightly enhanced
and Mn is underabundant with [Mn/Fe] $\sim$ $-0.20$. In HD 176021,
all the Fe peak elements are found to be near solar.
All of our programme stars exhibit enhancement of neutron-capture elements.
From our detailed analysis, we found that CD$-28$ 1082 is a CEMP-r/s star and
the objects HE 0110$-$0406, CD$-38$ 2151 and HD 30443 are CEMP-s stars. We
could not estimate Eu in HD 30443 and CD$-38$ 2151. Hence, it is not possible
to classify these stars based on the criteria as given by Beers & Christlieb
(2005). In this case, we have used the criteria for the classification of
CEMP-s stars as given by Hansen et al. (2019) based on [Sr/Ba] ratio.
According to this classification scheme, [Sr/Ba] $>$ $-0.5$ can be used to
separate CEMP-s stars from CEMP-r/s stars. HD 30443 and CD$-38$ 2151 show
[Sr/Ba] $\sim$ $-0.27$ and [Sr/Ba] $\sim$ 0.88 respectively. HE 0308$-$1612
and HD 176021 exhibit the properties of CH subgiants. All the programme stars
except CD$-38$ 2151 and HD 176021 show a larger enhancement of the heavy
s-process elements than of the light s-process elements. The abundance results for
neutron-capture elements are listed in Table 3. In this table, ls stands for
light s-process elements (Sr, Y and Zr) and hs represents the heavy s-process
elements (Ba, La, Ce, and Nd).
Table 3: Ratios of light and heavy s-process elements
Star | [Fe/H] | [ls/Fe] | [hs/Fe] | [hs/ls]
---|---|---|---|---
HE 0110$-$0406 | $-1.30$ | 1.03 | 1.36 | 0.33
HE 0308$-$1612 | $-0.73$ | 1.11 | 1.62 | 0.51
CD$-28$ 1082 | $-2.45$ | 1.52 | 1.90 | 0.38
HD 30443 | $-1.69$ | 1.24 | 1.93 | 0.69
CD$-38$ 2151 | $-2.03$ | 1.24 | 1.10 | $-0.14$
HD 176021 | $-0.64$ | 1.50 | 1.37 | $-0.13$
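The [hs/ls] values in Table 3 and the [Sr/Ba] classification described above can be sketched as follows (illustrative only: here the ls and hs indices are unweighted means of the available [X/Fe] ratios, and the [X/Fe] inputs are hypothetical):

```python
# [hs/ls] index and the Hansen et al. (2019) [Sr/Ba] classification cut.
LS = ("Sr", "Y", "Zr")          # light s-process elements
HS = ("Ba", "La", "Ce", "Nd")   # heavy s-process elements

def hs_over_ls(x_fe):
    """[hs/ls] from a dict of available [X/Fe] ratios (unweighted means)."""
    ls = [x_fe[e] for e in LS if e in x_fe]
    hs = [x_fe[e] for e in HS if e in x_fe]
    return sum(hs) / len(hs) - sum(ls) / len(ls)

def classify_by_sr_ba(sr_ba):
    """CEMP-s if [Sr/Ba] > -0.5, otherwise CEMP-r/s (Hansen et al. 2019)."""
    return "CEMP-s" if sr_ba > -0.5 else "CEMP-r/s"

# Hypothetical inputs giving [ls/Fe] = 1.2 and [hs/Fe] = 1.9, hence [hs/ls] = 0.7:
ratio = hs_over_ls({"Sr": 1.2, "Y": 1.2, "Zr": 1.2,
                    "Ba": 1.9, "La": 1.9, "Ce": 1.9, "Nd": 1.9})
```

With this cut, the measured [Sr/Ba] of −0.27 for HD 30443 falls on the CEMP-s side, consistent with the classification adopted above.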
## 6 Interpretation of results
The interpretation of the results based on the corrected value of absolute
carbon abundance, [C/N], carbon isotopic ratios and [hs/ls] ratios obtained
for our programme stars is presented here in detail.
possible source of origin of the enhancement of neutron-capture elements is
very important to understand the type of nucleosynthesis process which
produced it. One of the best ways to get any clues regarding the source is to
locate the stars in the A(C) vs. [Fe/H] diagram. It is found that CEMP stars
exhibit a bimodal distribution in the A(C) vs. [Fe/H] diagram. Spite et al.
(2013) claim the occurrence of two plateaus, one at A(C) = 8.25 and another at
A(C) = 6.50, while Yoon et al. (2016) confirmed two peaks at A(C) = 7.96 and
A(C) = 6.28, respectively, for the high- and low-carbon regions corresponding to the
corrected carbon values. The stars that occupy the high carbon region are
mostly found to be in binary systems. Hence the observed enhancement of carbon
is attributed to the binary companion. Stars that occupy the low carbon region
are found to be single and the observed carbon abundance is intrinsic in
origin. CEMP stars are classified by Yoon et al. (2016) into three groups
based on the morphology in the A(C) vs. [Fe/H] diagram. Group I objects are
mainly composed of CEMP-s and CEMP-r/s stars. These objects show a weak
dependence of A(C) on [Fe/H]. The absolute carbon abundance, A(C), of Group II
objects shows a clear dependence on [Fe/H], while for Group III objects A(C)
is found to be independent of [Fe/H]. Group II and Group III objects are
mainly composed of CEMP-no stars. In Figure 3, the locations of our programme
stars are shown in the A(C) vs. [Fe/H] diagram. We have applied corrections to
the estimated carbon abundances using the public online tool by Placco et al.
(2014) available at http://vplacco.pythonanywhere.com/. The corrected carbon
values are listed in Table 2. Figure 3 shows that all of our programme stars
are Group I objects. Hence we assume that the observed enhancement of carbon
may be attributed to the binary companion.
Figure 3: Corrected A(C) vs. [Fe/H] diagram for the compilation of CEMP stars
taken from Yoon et al. (2016). CEMP-s stars are represented by open circles.
CEMP-r/s and CEMP-no stars are shown by open squares and open triangles
respectively. Programme stars are represented by black coloured symbols, HE
0110$-$0406 (cross), HE 0308$-$1612 (filled circle), CD$-28$ 1082 (filled
triangle), HD 30443 (open pentagon), CD$-38$ 2151 (filled square) and HD 176021
(filled pentagon).
The low values of 12C/13C ratio observed in our programme stars also support
the extrinsic origin of carbon. Various mixing processes such as first dredge
up, thermohaline mixing and rotation induced mixing (Charbonnel 2005; Dearborn
et al. 2006; Eggleton et al. 2006) can cause a low carbon isotopic ratio.
Vanture (1992) suggests that certain nucleosynthetic reactions of the accreted
material can reduce the 12C abundance. We have estimated the [C/N] ratio of
our programme stars to look for any clues as to whether they have undergone
any mixing processes. Since nitrogen is produced at the expense of carbon, the [C/N] ratio
acts as a sensitive indicator of mixing. Spite et al. (2005) analysed CNO and
Li abundances for a sample of extremely metal-poor stars. Their analysis shows
that the stars that exhibit clear evidence of mixing have a low value of [C/N]
ratio ($<$ $-0.60$), and they belong to upper RGB or horizontal branch. The
stars that do not show any evidence of mixing have [C/N] $>$ $-0.60$ and lie
on the lower RGB. Figure 4 shows that all of our programme stars have [C/N]
$>$ $-0.6$. This implies that none of our programme stars have undergone any
significant internal mixing processes.
Figure 4: Position of the programme stars in the [C/N] vs. Teff diagram.
Programme stars are represented using red coloured symbols. The symbols used
for the programme stars are the same as in Figure 3. Open squares represent the
stars from literature (Spite et al. 2006, Aoki et al. 2007, Goswami et al.
2016, Hansen et al. 2016b and Hansen et al. 2019).
Stellar models predict that when a low mass star ascends the red giant branch,
the outer convective envelope expands inwards and penetrates into the region
of CN processed materials (First dredge up (FDU)). The luminosity at which the
FDU occurs in low mass field stars is log(L/L⊙) $\sim$ 0.80 (Gratton et al.
2000). All of our programme stars have log(L/L⊙) $>$ 0.80, except HD 176021.
Hence they might have undergone first dredge up. However, these stars fall in
the unmixed category. Even though in solar mass/metallicity stars, 12C/13C
ratio decreases by a factor of 20-30 from the original value and surface
abundance of nitrogen increases after FDU (Iben & Renzini 1984), it is found to
be less efficient in metal-poor stars (VandenBerg & Smith 1988, Charbonnel
1994). This implies that the changes in the surface compositions of C and N
after the occurrence of FDU are very small in metal-poor stars. The carbon
abundance is found to be decreased only by about 0.05 dex and the decrease in
the 12C/13C ratio is not large enough for these stars (Gratton et al. 2000).
A second mixing episode can happen when the star becomes brighter than the RGB
bump at a luminosity of log(L/L⊙) $\sim$ 2.0. In this process, 12C abundance
decreases by a factor of $\sim$ 2.5 and 12C/13C reaches a value of $\sim$ 6 to
10. Nitrogen abundance also increases by a factor of nearly 4 (Gratton et al.
2000). Two of our programme stars HD 30443 and CD$-38$ 2151 have log(L/L⊙)
$\sim$ 2.2 and 2.87 respectively. Hence they are expected to show the
signatures of mixing. However, these stars show no signatures of mixing based on
the [C/N] ratio. Spite et al. (2006) suggest that the [C/N] ratio is not a clean
indicator of mixing as the abundance of C and N in the interstellar medium
from which these stars are formed show large variations. In that case, one can
use the 12C/13C ratio as a good indicator of mixing, since it is high in
primordial matter ($>$ 70) and the determination of the carbon isotopic ratio
is found to be insensitive to the choice of atmospheric parameters of the stars (Spite et al.
2006). We have plotted 12C/13C ratio and Teff of our programme stars as shown
in Figure 5. From the figure, it is clear that the object HD 30443 shares the
region occupied by the stars that have undergone internal mixing processes.
This means that HD 30443 has experienced internal mixing processes that might
have altered its initial surface chemical composition. CD$-38$ 2151 lies
close to the boundary separating mixed and unmixed stars. The low value
of the carbon isotopic ratio in the other programme stars may be interpreted as the
result of strong processing in the AGB progenitors (Hansen et al. 2016b) and
not in the star itself.
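Taken together, the mixing diagnostics used above reduce to a handful of numeric thresholds. The following is a minimal sketch (Python) collecting them; the input values in the example call are illustrative placeholders, not the measured abundances from this work:

```python
def mixing_diagnostics(c_over_n, c12_c13, log_l):
    """Flag the mixing indicators discussed in the text, using the
    thresholds quoted from the literature:
      [C/N] < -0.60      -> clear evidence of mixing (Spite et al. 2005)
      12C/13C > 70       -> primordial, i.e. unprocessed, material
      log(L/Lsun) > 0.80 -> star has likely passed first dredge-up
      log(L/Lsun) > 2.0  -> star is brighter than the RGB bump
    A rough illustration only, not a rigorous classifier."""
    return {
        "cn_mixed": c_over_n < -0.60,
        "primordial_isotope_ratio": c12_c13 > 70,
        "past_fdu": log_l > 0.80,
        "above_rgb_bump": log_l > 2.0,
    }

# HD 30443 from the text has log(L/Lsun) ~ 2.2; the [C/N] and 12C/13C
# values below are placeholders, not measurements.
flags = mixing_diagnostics(c_over_n=-0.3, c12_c13=8, log_l=2.2)
```

Under these thresholds a star such as HD 30443 is flagged as both past first dredge-up and above the RGB bump, which is why mixing signatures are expected for it.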
Figure 5: Position of the programme stars in 12C/13C vs. Teff diagram.
Programme stars are represented using red coloured symbols. The symbols used
for the programme stars are the same as in Figure 3. Open squares represent the
stars from Spite et al. (2006) and Aoki et al. (2007).
From the estimated carbon isotopic ratio and the location of our programme
stars in the A(C) vs. [Fe/H] diagram, we assume that the observed enhancement
of neutron-capture elements may be attributed to an extrinsic source. As Group
I objects are mostly found to be associated with binary systems, the
enhancement may be justified by the mass transfer from the binary companion.
According to Busso et al. (2001), the light s-process elements such as Y, Zr
and Sr are predominantly produced in AGB stars with near solar metallicity.
Hence, mass transfer from such a companion can produce more enhancement of
light s-process elements than the heavy s-process elements. The AGB stars with
[Fe/H] $\sim$ $-1.0$ can produce more heavy s-process elements such as Ba, La,
Ce and Nd than the light s-process elements, and this leads to lower [ls/Fe]
than [hs/Fe] in the stars that have accreted materials from the metal-poor AGB
stars. The reason is that, in metal-poor AGB stars, the number of Fe seed
nuclei available for the neutron-capture process is smaller, and hence the
neutron exposure per Fe seed nucleus is larger. This leads to the formation of
heavier elements. The binary mass transfer scenario can similarly be
applied to justify the observed enhancement of neutron-capture elements in
CD$-28$ 1082 which is a CEMP-r/s star. Several authors (Hampel et al. 2016;
Hansen et al. 2016a and references therein) have shown that the observed
abundance patterns of CEMP-r/s stars are consistent with the model
calculations of the i-process in low-metallicity AGB stars.
## 7 Conclusion
The possible origin of the enhancement of neutron-capture elements in six
carbon stars is examined in the light of absolute carbon abundances. The
location of our programme stars in the A(C) vs. [Fe/H] diagram shows that
these objects belong to the Group I category. Various studies of the radial
velocities of these objects show that most of them exhibit radial velocity variations.
That is, these objects are mostly found to be associated with binary systems.
Hence the observed enhancement of carbon and neutron-capture elements in our
programme stars may be attributed to a binary companion, although none of the
programme stars are confirmed binaries. The low values of the carbon
isotopic ratio also support the extrinsic origin.
Elemental abundance ratios also bear important signatures about the source of
enrichment of neutron-capture elements. We have re-estimated the [hs/ls] ratio
for our programme stars. Abate et al. (2015) predict an [ls/hs] ratio less
than zero for AGB models. All the programme stars except CD$-38$ 2151 and HD
176021 show an [hs/ls] ratio characteristic of AGB progenitors.
We have also examined whether the programme stars have undergone any internal
mixing based on the [C/N] and carbon isotopic ratios, because it is important
to understand whether mixing has modified the surface composition of a star
before interpreting its observed abundance pattern. We found that none of the
programme stars except HD 30443 have experienced internal mixing. The
estimated values of [C/N] ratio also show that the objects have not gone
through any significant mixing processes. Thus, the observed values of low
carbon isotopic ratio in the unmixed stars may be due to the strong mixing
processes that occurred in the AGB progenitors. In other words, the observed surface
chemical compositions of our programme stars except HD 30443, preserve the
fossil records of the materials synthesised in the AGB stars from which they
have accreted the materials.
## Acknowledgements
We thank the staff members at IAO, CREST and VBO for their assistance and
cooperation during the observations. Funding from DST SERB project No.
EMR/2016/005283 is gratefully acknowledged. This work made use of the SIMBAD
astronomical database, operated at CDS, Strasbourg, France, the NASA ADS, USA
and data from the European Space Agency (ESA) mission Gaia
(https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and
Analysis Consortium (DPAC,
https://www.cosmos.esa.int/web/gaia/dpac/consortium).
## References
* [1] Abate C., Pols O.R., Izzard R.G., Karakas A.I., 2015, A&A, 581, A22
* [2] Aoki W., Beers T.C., Christlieb N., Norris J.E., Ryan S.G et al., 2007, ApJ, 655, 492
* [3] Asplund M., Grevesse N., Sauval A.J., 2009, Ann. Rev. Astron. Astrophy., 47:481
* [4] Bartkevicious A., 1996, Baltic Astron, 5, 217
* [5] Beers T.C., Preston G.W., Shectman S.A., 1985, AJ, 90, 2089
* [6] Beers T.C., Christlieb N., 2005, ARA&A, 43, 531
* [7] Brooke J.S., Bernath P.F., Schmidt T.W., Bacskay G.B., 2013, J.Quant. Spectrosc. Radiat. Transfer, 124, 11
* [8] Busso M., Gallino R., Lambert D.L., Travaglio C., Smith V.V., 2001, ApJ, 557, 802
* [9] Charbonnel C., 2005, ASP Conference Series, Vol. 336
* [10] Christlieb N., Green P. J., Wisotzki L., Reimers D., 2001, A&A, 375, 366
* [11] Dearborn D. S. P., Lattanzio J. C., Eggleton P. P., 2006, ApJ, 639, 405
* [12] Eggleton P. P., Dearborn D. S. P., Lattanzio J. C., 2006, Science, 314, 1580
* [13] Frebel A., Aoki W., Christlieb N., Ando H., Asplund M. et al., 2005, Nature, 434, 871
* [14] Goswami A., 2005, MNRAS, 359, 531
* [15] Goswami A., Karinkuzhi D., Shantikumar N.S., 2010, MNRAS, 402, 1111
* [16] Goswami A., Aoki W., Karinkuzhi D., 2016, MNRAS, 455, 402
* [17] Gratton R., Sneden C., Carretta E., Bragaglia A., 2000, A&A, 354, 169
* [18] Hampel M., Stancliffe R.J., Lugaro M., Meyer B.S., 2016, ApJ, 831, 171
* [19] Hansen C.J., Nordström B., Hansen T.T., Kennedy C.R., Placco V M. et al., 2016b, A&A, 588, A37
* [20] Hansen T. T., Andersen J., Nordström B., Beers T.C., Placco V.M. et al. 2016a, A&A, 586, A160
* [21] Hansen C.J., Hansen T.T., Koch A., Beers T.C., Nordström B. et al., 2019, A&A, 623, 128
* [22] Iben I.Jr., Renzini A., 1984, Phys. Letters 105, 329
* [23] Jonsell K., Barklem P.S., Gustafsson B., Christlieb N., Hill V. et al., 2006, A&A, 451, 651
* [24] McClure R.D., 1983, ApJ, 208, 264
* [25] McClure R.D., 1984, ApJ, 280, 31
* [26] McClure R.D., Woodsworth W., 1990, ApJ, 352, 709
* [27] McWilliam A., 1998, AJ, 115, 1640
* [28] Norris J.E., Christlieb N., Korn A.J., Eriksson K., Bessell M.S. et al., 2007, ApJ, 670, 774
* [29] Placco V.M., Frebel A., Beers T. C., Stancliffe R. J., 2014, ApJ, 797, 21
* [30] Prochaska J.X., McWilliam A. 2000, ApJ, 537, 57
* [31] Purandardas M., Goswami A., Goswami P.P., Shejeelammal J., Masseron T., 2019, MNRAS, 486, 3266
* [32] Purandardas M., Goswami A., Doddamani V.H., 2019, BSRSL, 88, 207
* [33] Ram R.S., Brooke James S.A., Bernath P.F., Sneden C., Lucatello S., 2014, ApJS, 211, 5
* [34] Sneden C., 1973, PhD thesis, Univ. Texas
* [35] Sneden C., Lucatello S., Ram R.S., Brook J.S.A., Bernath P., 2014, Ap. J. Supp., 214, 26
* [36] Spite M., Cayrel R., Plez B., Hill V., Spite F. et al., 2005, A&A 430, 655
* [37] Spite M., Cayrel R., Hill V, Spite F., Francois P. et al., 2006, A&A 455, 291
* [38] Spite, M., Caffau, E., Bonifacio, P., Spite F., Ludwig H.-G. et al. 2013, A&A, 552, 107
* [39] VandenBerg D.A., Smith G.H., 1988, PASP 100, 314
* [40] Yong D., Norris J. E., Bessell M. S., Christlieb N., Asplund M. et al., 2013, ApJ, 762, 26
* [41] Yoon J., Beers T.C., Placco V.M., Rasmussen K.C., Carollo D. et al., 2016, ApJ, 833, 20
* [42] Vanture A.D., 1992, AJ, 104, 1977
* [43] Wisotzki L., Christlieb N., Bade N., Beckmann V., Köhler T, et al., 2000, A&A, 358, 77
* [44] Worley C.C., Hill V.J., Sobeck J., Carretta E., 2013, A&A, 553, A47
# Exploring Lightweight Interventions at Posting Time to Reduce the Sharing of
Misinformation on Social Media
Farnaz Jahanbakhsh, Computer Science and Artificial Intelligence Laboratory,
Massachusetts Institute of Technology, Cambridge, USA; Amy X. Zhang, Allen
School of Computer Science & Engineering, University of Washington, Seattle,
USA; Adam J. Berinsky, Political Science, Massachusetts Institute of
Technology, Cambridge, USA; Gordon Pennycook, Hill/Levene Schools of Business,
University of Regina, Regina, Canada; David G. Rand, Sloan School of
Management/Brain and Cognitive Sciences, Massachusetts Institute of
Technology, Cambridge, USA; and David R. Karger, Computer Science and
Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, USA
(June 2020; October 2020; December 2020)
###### Abstract.
When users on social media share content without considering its veracity,
they may unwittingly be spreading misinformation. In this work, we investigate
the design of lightweight interventions that nudge users to assess the
accuracy of information as they share it. Such assessment may deter users from
posting misinformation in the first place, and their assessments may also
provide useful guidance to friends aiming to assess those posts themselves.
In support of lightweight assessment, we first develop a taxonomy of the
reasons why people believe a news claim is or is not true; this taxonomy
yields a checklist that can be used at posting time. We conduct evaluations to
demonstrate that the checklist is an accurate and comprehensive encapsulation
of people’s free-response rationales.
In a second experiment, we study the effects of three behavioral nudges—1)
checkboxes indicating whether headlines are accurate, 2) tagging reasons (from
our taxonomy) that a post is accurate via a checklist and 3) providing free-
text rationales for why a headline is or is not accurate—on people’s intention
of sharing the headline on social media. From an experiment with 1668
participants, we find that both providing accuracy assessment and rationale
reduce the sharing of false content. They also reduce the sharing of true
content, but to a lesser degree, yielding an overall decrease in the
fraction of shared content that is false.
Our findings have implications for designing social media and news sharing
platforms that draw from richer signals of content credibility contributed by
users. In addition, our validated taxonomy can be used by platforms and
researchers as a way to gather rationales in an easier fashion than free-
response.
Misinformation, Social Media, Behavioral Nudges, Reasons Why People Believe
News
copyright: rights retained; journal: PACMHCI; journal year: 2021; journal
volume: 5; journal number: CSCW1; article: 18; publication month: 4; doi:
10.1145/3449092; ccs: Human-centered computing, Empirical studies in
collaborative and social computing; ccs: Human-centered computing, Empirical
studies in HCI
## 1\. Introduction
Social media has lowered the barrier to publishing content. While this
empowerment of the individual has led to positive outcomes, it has also
encouraged the fabrication of misinformation by malicious actors and its
circulation by unwitting platform users (Vosoughi et al., 2018; Grinberg et
al., 2019). Given widespread concerns about misinformation on social media
(Reed, 2018; Bengali, 2019; Dixit and Mac, 2018; Coleman, [n.d.]; Argentino,
[n.d.]), many researchers have investigated measures to counter misinformation
on these platforms. Some of these initiatives include detecting false or
misleading information using machine learning algorithms (Castillo et al.,
2011; Potthast et al., 2016; Shu et al., 2017) and crowdsourcing (Epstein et
al., 2020; Pennycook and Rand, 2019a; Allen et al., 2020; Kim et al., 2018;
Bhuiyan et al., 2020), identifying bad actors and helping good actors
differentiate themselves (e.g., The Trust Project: https://thetrustproject.org/;
Credibility Coalition: https://credibilitycoalition.org/) (Zhang et al., 2018),
and providing fact-checked information related to circulated news claims
(Kriplean et al., 2014; Graves, 2016; Pennycook et al., 2020a; Yaqub et al.,
2020). Social media companies themselves have also enlisted more contract
moderators as well as third-party fact-checkers to remove or down-rank
misinformation after publication (Fac, [n.d.]b).
While credibility signals and fact-check flags provide useful filters and
signals for readers, and algorithmic and human moderators help stop
misinformation that is already spreading, none of these initiatives confront
the underlying design of social media that leads to the proliferation of
misinformation in the first place. Part of the reason misinformation spreads
so well is that social media prioritizes engagement over accuracy. For
instance, a user who encounters an incorrect post and then leaves a comment
refuting it may in fact be inadvertently disseminating it farther because the
system considers the comment as engagement. A related driver is an emphasis on
low barriers to sharing that allows users to share content without much
attention to its accuracy or potential negative consequences, simply to
receive social feedback or elevated engagement (Bazarova et al., 2015;
Grinberg et al., 2017).
Thus, in this work, we begin to explore how one might alter social media
platforms to better surface accuracy and evidence, not just engagement, when
spreading content. To achieve this, we consider how to raise some small
barriers to sharing in order to promote accurate content, without drastically
changing the lightweight sharing practices typical in social media. For
example, sharing of any kind of content would likely plummet if platforms
demanded that users receive substantial training in assessing information
before being allowed to share, or that they perform extensive research on
every post before sharing it. Instead, we look to interventions matched in
scale to current sharing behavior such as clicking a “like button” or
emoticon, adding a hashtag, or writing a comment. We therefore explore the
potential impact of (i) requiring sharers to click a button to indicate
whether they think a story is accurate or not when sharing it, (ii) requiring
sharers to choose at least one tag from a small checklist indicating _why_
they consider it accurate, and (iii) writing a short comment explaining their
accuracy assessment. Introducing these interventions at posting time would
encourage users to reflect on veracity before posting and perhaps reconsider
posting an inaccurate piece of content. These assessments could also provide
valuable information to other users seeking to form their own assessments of a
post’s accuracy.
In this work, we describe two studies completed using participants from
Mechanical Turk motivated by these ideas. The first aims to develop a small
set of categories that (i) cover the majority of reasons people consider
content to be accurate or inaccurate and (ii) can be used accurately by them
when providing assessments. This allows us to test as one of our interventions
a checklist that can reasonably replace a free-text response. The second study
considers the impact of our three proposed interventions—accuracy assessment,
reasoning “categories” provided by the first study, and textual reasoning—on
sharing behavior.
### 1.1. Research Questions
Prior work has shown that priming people to think about accuracy when they are
considering sharing a post can help reduce their likelihood of sharing
falsehoods (Pennycook et al., 2020b, 2021). But little work has been done on
how asking about accuracy could be integrated into platforms in a lightweight
way. In the first study, we examine how to capture people’s rationale for a
claim’s (in)accuracy in a structured format. To create categories of
rationales, we ask the following research question:
* •
RQ1: What are the reasons why people believe or disbelieve news claims?
In this study, we developed a taxonomy of reasons and presented a set of
claims along with the taxonomy to participants to choose from, as they
provided their rationales for the (in)accuracy of the news claims. We iterated
on the taxonomy until participants were able to use it reliably to label their
rationales.
Armed with our taxonomy of reasons, the second research question that we
address is:
* •
RQ2: How does asking people to provide accuracy assessments of news claims and
their rationales affect their self-reported sharing intentions on social
media?
Based on results from prior literature (Pennycook et al., 2021, 2020b), our
hypothesis is that asking people about content accuracy lowers their intention
of sharing false content, but also of sharing true content to some degree. An
ideal intervention would significantly reduce sharing of false content without
much impacting the sharing of true. We further hypothesize that additionally
asking people to provide their reasoning when evaluating the accuracy of
content would help them even more with the discernment than simply asking for
an accuracy judgement.
The study we conducted involved presenting news claims spanning different
domains and partisan orientations to a set of participants and asking a subset
if the claims were accurate and another subset their reasons for believing so
in addition to accuracy assessment. For each news claim, participants
indicated whether they would share it on social media. We find that asking
about accuracy decreases the likelihood of sharing both true and false news
but of false news to a greater degree. We also find that additionally asking
for rationale via free-text leads to a similar outcome, reducing sharing of
both true and false headlines, but further decreasing the ratio of shared
false headlines to true ones. We delve deeper into providing rationales by
comparing a condition where reasoning is provided in free-text to one where
users must also select from a checklist of the reason categories taken from
our taxonomy. We find that having people additionally work through the
checklist of reasons does not further reduce the ratio of shared false headlines
to true ones compared to free-text reasoning alone. However, such structured
rationales can be beneficial for integration into platforms as signals of
credibility.
Our findings on the effects of the accuracy and reasoning nudges on sharing
decisions point to potential designs for social media platforms to introduce
at posting time to reduce sharing of misinformation. In addition, our taxonomy
of rationales can be used to form a checklist for users to report their
reasons for believing or disbelieving a claim in an easier fashion. Because
these inputs are structured annotations, they provide a potential added
benefit in that they can be _aggregated_ and presented to friends of the
poster, for example indicating that “2 friends think this post is true and 27
think it is false, 2 based on firsthand knowledge.” This aggregate assessment
could help warn a user about misinformation and could also help guide users to
friends who are misinformed and could benefit from a conversation about the
post. We conclude by discussing these and other possible social media designs
that could be introduced as a result of this work.
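The kind of aggregation described above (e.g., "2 friends think this post is true and 27 think it is false") amounts to counting structured annotations. A minimal sketch follows, where the verdict values and taxonomy tag names are illustrative placeholders rather than the paper's actual taxonomy:

```python
from collections import Counter

def summarize_assessments(assessments):
    """Aggregate friends' structured assessments of a post.
    Each assessment is a (verdict, reasons) pair, where verdict is
    "true" or "false" and reasons is a list of taxonomy tags."""
    verdicts = Counter(v for v, _ in assessments)
    reasons = Counter(r for _, rs in assessments for r in rs)
    return verdicts, reasons

verdicts, reasons = summarize_assessments([
    ("true", ["firsthand knowledge"]),
    ("false", ["implausible claim"]),
    ("false", ["untrustworthy source", "implausible claim"]),
])
# verdicts counts how many friends judged the post true vs. false;
# reasons counts how often each taxonomy tag was selected.
```

Because the annotations are structured rather than free-text, a platform could render these counters directly in the interface without any language processing.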
## 2\. Related Work
We situate our study in prior work related to how misinformation spreads and
what factors affect people’s sharing behaviors, measures to combat
misinformation, and factors influencing people’s perceptions of content
accuracy.
### 2.1. Spread of Misinformation
Although misinformation is not a recent phenomenon (Posetti and Matthews,
2018), the fairly recent use of online platforms for its dissemination has
brought it fresh attention. Researchers have focused on defining
the problem space of misinformation (Wardle and Derakhshan, 2017; Vraga and
Bode, 2020) and determining how it has changed the societal and political
arena (Bovet and Makse, 2019; Shane, 2017). A body of work examines data
collected from online communities to study how people use social media to seek
and share information, and how misinformation spreads through these
communities (Oh et al., 2010; Del Vicario et al., 2016). For example, Starbird
et al. investigate rumors that emerge during crisis events and report on the
diffusion patterns of both false content and its corrections (Starbird et al.,
2018b, 2014). Vosoughi et al. analyze the spread of rumor cascades on Twitter
and find that, among fact-checked articles, false content diffuses farther,
faster, deeper, and more broadly than the truth (Vosoughi et al., 2018). By
analyzing tweets during and following the US 2016 election, Shao et al. find
evidence that social bots played a role in spreading articles from low-
credibility sources. In addition, they identify common strategies used by such
bots, e.g., mentioning influential users (Shao et al., 2018). Other work has
examined echo chambers on social media and their selective exposure to
information, e.g., in communities that relate to certain conspiracy theory
narratives (Quattrociocchi et al., 2016; Del Vicario et al., 2016; Schmidt et
al., 2018; Mosleh et al., 2021b).
Another strand of research tries to understand how people’s sharing behavior
on social platforms is impacted by different aspects of the content or the
platform. For instance, Vosoughi et al. report that people are more likely to
share information that is novel, a characteristic that false content usually
has (Vosoughi et al., 2018). Pennycook et al. report that subtly priming
people to be mindful of content accuracy before deciding whether to share the
content helps lower their intention of sharing false information (Pennycook et
al., 2021, 2020b). They argue that this phenomenon happens because although
people are generally good at discerning accuracy, when deciding whether to
share content, they are less concerned with accuracy than other aspects of
sharing such as the amount of social feedback they receive. Focusing their
attention on accuracy can help alleviate the problem. This interpretation is
consistent with prior work that reports people who rely more on their
intuition and engage in less critical thinking are more susceptible to
believing political fake news in survey experiments (Pennycook and Rand,
2019b) and in fact share news from lower quality sources on Twitter (Mosleh et
al., 2021c).
We extend this body of work by studying the effects of asking people to
provide accuracy assessments as well as their reasoning for why a news story
is or is not accurate on their intentions of sharing the content, quantifying
the degree to which they reduce sharing of false (and true) content. If these
nudges prove to be effective at curbing the sharing of inaccurate content,
news sharing platforms and social media can benefit from implementing them.
### 2.2. Measures to Counter Misinformation
Another body of work has been investigating how to combat misinformation. For
example, Bode and Vraga investigate the roles that authoritative and expert
sources, peers of social media users, and platforms play in correcting
misinformation (Vraga and Bode, 2017; Bode and Vraga, 2018). Other such
studies investigate the impact of the wording of corrections (Bode and Vraga,
2018; Martel et al., 2021) and explore identifying misinformation in its early
stages using previously known rumors (Wu et al., 2017), presenting linguistic
and social network information about new social media accounts to help users
differentiate between real and suspicious accounts (Karduni et al., 2019), and
increasing media literacy (Haigh et al., 2019).
A related thread of work studies how social media users engage with fact-
checking in the wild. Zubiaga et al. explore how rumors are diffused on social
media and how users respond to them before and after the veracity of a rumor
is resolved. They report that the types of tweets that are retweeted more are
early tweets supporting a rumor that is still unverified. Once a rumor has
been debunked however, users do not make the same effort to let others know
about its veracity (Zubiaga et al., 2016). In another study on rumors, Shin et
al. analyze rumors spread on Twitter during the 2012 election and find that
they mostly continued to propagate even when information by professional fact-
checking organizations had been made available (Shin et al., 2017). Shin and
Thorson analyze how Twitter users engage with fact-checking information and
find that such information is more likely to be retweeted if it is
advantageous to the user’s group (partisanship) (Shin and Thorson, 2017).
Other work has studied circumstances under which fact-checking is effective
and find that social media users are more willing to accept corrections from
friends than strangers (Margolin et al., 2018; Hannak et al., 2014). In
addition, a Twitter field experiment found that being corrected by a stranger
significantly reduced the quality of content users subsequently shared (Mosleh
et al., 2021a).
Social media platforms have also been exploring ways to ameliorate their
misinformation problem. Facebook for example, started showing red flags on
certain posts to signal their lack of credibility. Such flags however,
encouraged people to click on the content, causing Facebook to remove the
flags in favor of adding links to related articles underneath the posts (Qua,
[n.d.]). Facebook has also reported that it lowers the ranking of groups and
pages that spread misinformation about vaccination (fac, [n.d.]). Platforms
have recently turned to working with third-party fact-checkers to remove
content that violates their community policies (Fac, [n.d.]b). These measures
in general force the platforms into a truth-arbitration role which is
especially problematic in cases where policies have not had the foresight to
predict all accounts of problematic posts or in grey areas (Shrimsley, [n.d.];
Spray, [n.d.]). We are interested in exploring “friend-sourced” methods in
which the platforms are only responsible for delivering the assessments and
not for making them.
We study the effects of two behavioral nudges, requesting accuracy assessments
and rationales, on sharing false news as countermeasures that could be
incorporated into social media platforms. To best leverage them, we also study
how to capture people’s rationales in structured form. We hypothesize that
requesting users to assess accuracy of news headlines at sharing time acts as
a barrier to posting, reducing sharing of false content but also of true
content to a lesser degree. In addition, we hypothesize that further asking
users to provide rationales for their accuracy assessments will result in a
higher reduction in sharing of false headlines, and potentially of true
headlines although to a lesser degree.
### 2.3. Why People Believe News
A body of work has been investigating the characteristics of posts or of
people’s interaction with them that affect their perceived accuracy. For
instance, number of quoted sources (Sundar, 1998), prior exposure to a claim
(Pennycook et al., 2018), official-looking logos and domain names (Wineburg
and McGrew, 2017), and post topic and author username (Morris et al., 2012)
have been found to impact perceptions of news credibility, whereas the news
publisher has surprisingly little effect on news accuracy ratings (Dias et
al., 2020). Pennycook et al. find that attaching warning flags to a subset of
news stories increases the perceived credibility of those without flags
(Pennycook et al., 2020a). By conducting interviews with participants whose
newsfeeds were manipulated to contain false posts, Geeng et al. study why
people do not investigate content credibility, e.g., because they have
undergone political burnout (Geeng et al., 2020).
Our study builds upon prior work by investigating self-reported reasons why
people believe or disbelieve claims. We develop a taxonomy from these reasons
and revise it iteratively until people untrained in the taxonomy are able to
use it reliably to label their own rationales. We hypothesize that by
leveraging these structured rationales and deliberating on all the dimensions
of accuracy, users will be more discerning of false vs true content compared
to if they provide unguided free-form reasoning. In addition, structured
reasons have the added benefit that they could easily be integrated into
platforms as signals of content credibility.
## 3\. Terminology and Methods
In this section we introduce terminology that we will use to discuss and
evaluate our interventions, some of which are also used in the course of the
studies.
### 3.1. Performance Metrics for Sharing Interventions
Our overall goal is to study interventions at the moment of sharing content
online. To evaluate these interventions, we seek a meaningful performance
metric. Interventions that occur only when a user has decided to share can
only _prevent_ some sharing, thus reducing the amount of sharing overall. An
intervention that might be considered ideal would not prevent sharing of true
content but would prevent all sharing of false content. More generally, it is
useful to separately consider the degree to which an intervention reduces
sharing of true content and the degree to which it reduces sharing of false
content.
Previous work (Pennycook et al., 2021, 2020b; Pennycook et al., 2020a) often
assessed the performance of an intervention by comparing the change in the
_absolute difference_ between the rate at which true and false content was
shared. But here, we argue that an intervention which results in no change in
the _difference_ in sharing rates can still be highly beneficial if it changes
the _ratio_ of sharing rates.
Consider for example a user who shared 20% of the true content and 10% of the
false content they encountered. If the “input stream” of content they read
were balanced between true and false, then they would share twice as much true
content as false, meaning the “output stream” of content they shared would be
2/3 true to 1/3 false. Now suppose that an intervention decreases their
sharing rate on both true and false content by 5 percentage points, to 15% and
5% respectively. There is no change in the absolute difference in sharing
rates, but the user’s new output stream consists of 3/4 true content and 1/4
false. Going further, if both rates are decreased by 10 percentage points, the
output stream will contain only true content.
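The arithmetic in this example can be checked with a short calculation (a sketch; the function name is ours, and the 50/50 input stream is the example’s assumption):

```python
def true_fraction_of_output(t_rate, f_rate, input_true_frac=0.5):
    """Fraction of a user's shared ("output") stream that is true, given
    their sharing rates for true and false content and the fraction of
    their input stream that is true (0.5 = balanced, as in the example)."""
    shared_true = input_true_frac * t_rate
    shared_false = (1 - input_true_frac) * f_rate
    return shared_true / (shared_true + shared_false)

print(true_fraction_of_output(0.20, 0.10))  # baseline: 2/3 of shares are true
print(true_fraction_of_output(0.15, 0.05))  # after a 5-point drop: 3/4 true
print(true_fraction_of_output(0.10, 0.00))  # after a 10-point drop: all true
```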
Therefore, in assessing the effect of interventions, we focus on the (change
in the) _ratio_ of sharing rates rather than the difference. If a user shares
a fraction $f$ of false posts and a fraction $t$ of true posts, then an input
stream with a ratio $r$ of false posts to true posts will yield an output
stream with a ratio $fr/t$ of false to true. Thus, an intervention that
reduces the _discernment ratio_ $f/t$ will improve the output false-true ratio
regardless of the change in the _difference_ of sharing rates. Of course this
comes at a cost: a reduction in the overall sharing of true content. Different
platforms may have different cost-benefit analyses of this trade-off. Note
also that a portion of the benefit can be acquired at a portion of the cost by
invoking the intervention on only a certain fraction of the shares.
On many social platforms, content one user consumes and shares is content that
has been shared by others. If each user’s assessments improve the false-true
ratio by a factor $f/t$, then over a chain of $k$ sharing steps the ratio is
improved by a factor of $(f/t)^{k}$ overall; so even minor improvements
accumulate exponentially.
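The compounding effect can be sketched as follows (the function name is ours): an input false-to-true ratio $r$ becomes $fr/t$ after one sharing step and $r(f/t)^{k}$ after $k$ steps.

```python
def false_true_ratio_after(r, f, t, k):
    """False:true ratio of the output stream after k sharing steps,
    assuming every sharer has the same discernment ratio f/t."""
    return r * (f / t) ** k

# A discernment ratio of 1/3 (f=0.05, t=0.15) turns a balanced stream
# (r=1) into 1:3 false:true after one step and 1:27 after three steps.
print(false_true_ratio_after(1.0, 0.05, 0.15, 1))
print(false_true_ratio_after(1.0, 0.05, 0.15, 3))
```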
### 3.2. Veracity and Perceived Accuracy
We now introduce relevant terminology. Previous work has shown that personal
deliberation can improve people’s ability to distinguish accurate from
inaccurate information (Zhang et al., 2018). Our interventions seek to
engender different amounts of such deliberation. Initially, it is possible
that users might choose to share certain information without even considering
whether it is accurate. Our minimal intervention, simply asking a user whether
a claim is accurate, already forces users to deliberate at least enough to
answer the question. We expect that spending more time and effort in
deliberation will generally improve a user’s accuracy assessments. However,
this improvement is limited by a user’s available knowledge (and inaccurate
knowledge) and rationality, which will prevent them from ever assessing
perfectly. We use _veracity_ to refer to whether the claim is
accurate or not independent of the user’s knowledge. We use _perceived
accuracy_ to refer to the user’s initial answer to whether they consider the
claim accurate. We expect that subjective assessment of accuracy will
correlate with objective accuracy, but imperfectly. Finally, we define the
_average perceived accuracy_ of a claim as the fraction of users who perceive
the claim as accurate.
## 4\. Experimental and Study Design
The objective of our first study (Taxonomy study) was to develop a taxonomy of
self-reported reasons why people believe or disbelieve news claims. In the
second (Nudge study), we investigated whether asking people to provide
accuracy assessments and reasoning for the (in)accuracy of a news claim before
they share it on social media nudges them to be more mindful of its
credibility and if this nudge affects their sharing behavior. We further
examined the effects of different instruments (a free-format text-box or a
structured set of checkboxes) for capturing reasons on sharing news stories
that are not credible. Our study was approved by our Institutional Review
Board.
### 4.1. Claims
We collected most of the claims we used in our studies from Snopes
(https://snopes.com), with a few from mainstream media. Each claim was
presented with a picture, a lede sentence, and the source that had originally
published an article on the claim, similar to news stories on social media
(see Figure 1). We limited claims to headlines because users do not generally
read entire articles and mainly limit their attention to headlines (Manjoo,
2013). The claims
varied along different dimensions of veracity, domain, partisanship, and
original source. For the claims that we collected from Snopes, we had the
ground-truth that the website’s fact-checkers had provided. We fact-checked
the few that we collected from mainstream media by investigating the sources
to which they referred. For domain, we chose claims that were either political
or about science and technology, with claims in both domains covering a wide
range of issues. Some of the claims were pro-Republican, some pro-Democratic,
and others had no clear partisanship. The claims came from various sources
including mainstream media, conspiracy websites, and social media. For the
claims that had originally been circulated on social media such as Facebook,
we displayed the source as “Posted via Facebook.com”.
Because we intended for our political claims to be relevant at the time of the
study and not outdated, for each iteration of the study we collected new
political claims that had emerged or re-emerged within the few months prior to
that iteration. Selecting relevant headlines resulted in the
iterations of the Taxonomy study having different but overlapping sets of
claims, which supported our goal of a generalizable taxonomy. Another set of
claims was used for the Nudge study which was conducted in one iteration.
These claims had a large overlap with those used in the last iteration of the
Taxonomy study. We have provided this set in Appendix F.
In addition to relevance, we considered provenance when selecting claims for
the study. We chose claims for which Snopes had provided the originating
source or that it explicitly mentioned as being widespread rumors. Some
claims, for example those submitted for fact-checking by Snopes’s readership,
did not have a clear source or place of origin, and we did not select these.
In addition, we filtered out claims that were not factual (e.g., satire)
because including them would have required presenting the item at the article
rather than the headline level.
Figure 1. An example of a headline in the study. Headlines were shown with an
image, a lede sentence, and a source.
Figure 2. The survey interface for the Nudge study iteration 4, where for each
reason that a participant selected, they were required to fill in a text-box
explaining their rationale.
### 4.2. Participants
We recruited U.S. based participants from Amazon Mechanical Turk. Across the 4
iterations of the Taxonomy study, 317 participants provided at least one (non-
spam) answer to the news items presented to them. Of those, 305 completed the
full survey and provided demographic information. The number of (non-spam)
participants for the Nudge study was 1668, of whom 1502 provided demographic
information. Of the 1807 participants across both studies who provided
demographic information, 42% were female. 47% identified as Democratic, 25% as
Republican, and 26% as Independent. They were distributed across a wide age
range with a median of 35 years. The median income was $40,000 to $49,999.
The payment for the Taxonomy study was $3. We determined the pay for the Nudge
study ($3.75) based on a soft-launch of the HIT with 100 workers which had a
median completion time of 25 minutes. The soft launch also revealed a high
spam rate, leading us to limit the HIT to workers with a past HIT approval
rating higher than 95% across all requesters.
## 5\. RQ1: Developing a Taxonomy of Reasons People Believe or Disbelieve
Claims (Taxonomy study).
We developed a taxonomy with which people with no prior training can label
their own rationales for why they (dis)believed a claim. We therefore assigned
a descriptive label to each category from a first-person point of view (e.g.,
“The claim is not consistent with my past experience and observations.”).
goal was for this description to be one that subjects could use correctly—that
is, that participants would generally agree with each other, and with us,
about the meaning of a particular category. We used multiple iterations of our
study in order to achieve this goal, as described below.
### 5.1. Procedure
Through an online survey, participants were shown a series of 10 claims one at
a time, with the claims randomly chosen from a pool. For each claim, the
survey asked whether the claim was accurate or inaccurate, why, and how
confident the participant was in their belief (4 point scale). Participants
then answered another survey gauging their critical thinking (Frederick,
2005), statistical numeracy and risk literacy (Cokely et al., 2012), political
knowledge, and attitudes towards science. These questions were drawn from
research studying users’ judgements assessing claims (Pennycook and Rand,
2019b). Finally, participants answered demographics questions on political
preference and theistic ideologies, among others.
We performed this study in 2 stages. To develop a preliminary taxonomy, we ran
a first stage in which participants provided rationales for believing or
disbelieving each of the claims via free-text responses. A total of 50
participants completed this stage of the study. We first divided the free-form
responses that we collected into idea units, each being a coherent unit of
thought (Strauss, 1987). This resulted in 534 idea units. A member of the
research team then conducted a first pass over the idea units and assigned
preliminary categories to each using a grounded theory approach (Charmaz and
Belgrave, 2007).
In the second stage of the study, we worked iteratively to refine our
taxonomy, consolidating categories showing too much overlap and splitting some
that represented distinct ideas. A particular goal was for the categories and
their labels to align with how participants labeled their own responses. In
this stage, for each claim, participants were asked to mark checkboxes
corresponding to the reasoning categories they used and then to provide
elaboration in text. To measure the alignment between our intended use for the
categories and participants’ perception of them, a member of the research team
with no knowledge of the users’ checked categories assigned categories to the
elaborated reasons. We then measured Cohen’s Kappa as a measure of the
agreement between the categories selected by the research team coder and the
ones participants had selected for their own responses. We conducted 3 rounds
of this study, each time iterating over the categories.
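Cohen’s kappa corrects observed agreement for the agreement expected by chance given each rater’s label frequencies. A minimal single-label sketch (the labels below are made up, not the study’s data, and the study’s participants could mark multiple categories per item):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters who each assign one label per item."""
    n = len(rater_a)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[label] * cb[label] for label in ca) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical category labels from a research-team coder and a participant:
coder = ["plausible", "implausible", "plausible", "dont_know"]
participant = ["plausible", "implausible", "dont_know", "dont_know"]
print(round(cohen_kappa(coder, participant), 2))  # 0.64
```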
The Kappa score in our initial iterations of the study was low, which led us
to revise the reason categories. However, we discovered that the low score was
partly an artifact of the study design. In the initial iterations,
participants were asked to first mark their reasons using the checkboxes and
then to provide an aggregate explanation in one text-box. We noticed that the
explanations did not cover all the selected reasons, possibly because
participants did not feel impelled to provide comprehensive explanations or
because they deemed some checkboxes self-explanatory. We addressed this issue by
modifying the survey interface so that for each checkbox that a participant
selected, they were required to fill in a text-box explaining their reasoning
(see Figure 2).
Our attained agreement score in the 3rd iteration of this stage of the study
was 0.63, measured across 729 idea units collected for 48 news claims. The
score exceeded the recommended threshold for accepting the results (Landis and
Koch, 1977), although other scholars suggest higher thresholds for certain
tasks. While our agreement score may thus be lower than ideal, we deem it
sufficient for this type of task. The 4 iterations of the Taxonomy study
spanned 13 months.
### 5.2. Results
Some categories emerging from the study would definitively determine that a
claim was or was not accurate from the evaluator’s perspective. We signalled
the strength of these categories by grouping them under the names Accurate on
the evidence and Inaccurate by contrary knowledge. Some rationales, on the
other hand, were not conclusive but rendered a claim (im)plausible. For these,
we used the terms Plausible and Implausible to indicate the strength. Other
rationales were surmises and speculations. Although these rationales were not
informative, we nonetheless wanted people to have the means to provide such
rationales while indicating their lack of strength. We grouped these under the
term Don’t know.
We determined in the earlier iterations that the Don’t know categories were
self-explanatory and the elaborations often repeated the terms. We therefore
did not ask for elaborations when participants selected these categories in
the final iteration.
Two categories emerged in the taxonomy that participants used in the initial
stages as reasons a claim was inaccurate, but that we concluded were not
reliable for deducing accuracy and instead belonged to other dimensions of
credibility. One of these categories, The claim is misleading, could be used
for claims that for instance are accurate if taken literally, but are
intentionally worded in such a way to lead the reader to make inaccurate
deductions. Similarly, the category The claim is not from a trusted source
could be applicable to claims that are in fact accurate since sources of
unknown reputation and even malicious ones publish accurate information mixed
with false content or propaganda (Starbird et al., 2018a). Therefore, we
separated these two categories from the rest and requested that participants
evaluate each claim on these two signals regardless of whether they judged the
claim as accurate. Tables 1, 2, and 3 show the full taxonomy.
#### 5.2.1. Explanations of Certain Taxonomy Categories.
Here we elaborate on some of the categories that were not self-explanatory.
Throughout the paper, where we present participants’ free-text responses, we
identify them with a string of the form “p-” + a participant number to
preserve their anonymity. If a participant is from an iteration other than the
final, we have concatenated the following string to the end of their
identifier: “-” + iteration number.
##### 5.2.1.1 The Claim Is (Not) Consistent with My Past Experiences and
Observations.
One of the most cited rationales for a claim’s (in)accuracy, was that it was
(not) consistent with the participant’s past experience and observations. This
assessment at times resulted from the participant’s general knowledge of how
an authority, e.g., the law, operates: “I am pretty sure that to sign up for
these benefits you would need a social security number.” (p-9) —Claim: Seniors
on Social Security Have to Pay for Medicare While ‘Illegal Immigrants’ Get It
Free.
At other times, the rationale was based on whether the assertion in the claim
matched the subject of the claim’s past profile or pattern of behavior: “Joe
Biden has a history of gaffes that come off this silly.” (p-79) —Claim: Joe
Biden Said: ‘Poor Kids Are Just as Talented as White Kids’.
Sometimes the assessment referred to whether the claim confirmed or
contradicted the customary state of the world as the participant perceived it: “It
has been my experience that attitudes on sexual orientation have changed
considerably” (p-53) —Claim: Age Matters More than Sexual Orientation to U.S.
Presidential Voters, Poll Finds. This rationale also emerged in cases where
the participant had heard about similar claims before and although the claim’s
accuracy had not been established, the repeated encounter made it seem
plausible: “I’ve been hearing this statement since I was a child, snowflakes
are like fingerprints. There are none that are identical.” (p-32-3)—Claim: No
Two Snowflakes Are Exactly Alike.
This phenomenon has also been reported in (Pennycook et al., 2018), where
Pennycook et al. found that even a single prior exposure to a headline
increases subsequent perceptions of its accuracy. Surprisingly, the illusory
truth effect of repeated false statements influences people across the
political spectrum, i.e., even if their ideological beliefs disagree with the
statements (Murray et al., 2020).
In fact, in the earlier iterations of the taxonomy, Familiarity of Claim was a
separate category from Consistency with Past Experiences and Observations.
However, we merged the two because in many instances participants could not
make the distinction.
Table 1. Taxonomy of reasons why people believe news claims.

| Group | Category | Example |
|---|---|---|
| Accurate on the evidence | I have a high degree of knowledge on this topic that allows me to assess this claim myself (e.g., I teach/write about this topic or I use this in my work). (N=2) | “Global warming is really happening around us and we must stop it. I have researched it for some time now.” (p-3) Claim: Global Sea Ice is at a Record Breaking Low. |
| | I have firsthand knowledge of the subject or am an eyewitness. (N=23) | “My own dog does this when other dogs come into the picture, I can see her getting jealous for my attention.” (p-60) Claim: A Study Showed that Dogs Exhibit Jealousy. |
| | My other trusted sources (besides the source of this article) confirm the entire claim. (N=54) | “I’ve read numerous articles from Huffington Post, Buzzfeed, and Mashable about this happening.” (p-26) Claim: Some Phone Cameras Inadvertently Opened While Users Scrolled Facebook App. |
| | The claim is from a source I trust. (N=49) | “I do trust the Washington Post to report accurately.” (p-47) Claim: Gun Violence Killed More People in U.S. in 9 Weeks than U.S. Combatants Died in D-Day [source: WashingtonPost.com]. |
| | Evidence presented in the article corroborates the claim. (N=9) | “The mountain peaks in the background look the same. I think the claim is very likely to be true.” (p-23) Claim: These Photographs Show the Same Spot in Arctic 100 Years Apart. [The claim is presented with two juxtaposed photos, one showing mountains covered by glaciers, and in the other the glaciers have almost completely melted.] |
| Plausible | The claim is consistent with my past experience and observations. (N=120) | “This seems fairly consistent. The media seems to only report when Trump does wrong, even from the start.” (p-71) Claim: President Trump’s Awarding of a Purple Heart to a Wounded Vet Went Unreported by News Media. |
| Don’t know | I’m not sure, but I want the claim to be true. (N=32) | “It is an interesting claim so I would hope that it was true but I’ve never heard of YUP and Twitter isn’t very reliable.” (p-46-3) Claim: There is a Point in the Ocean Where the Closest Human Could Be an Astronaut. [The picture presented with the claim shows a tweet explaining the claim from the source YUP.] |
| | I was just guessing. (N=104) | “I have no knowledge of the headlines, but it seems plausible that it could be true based off just a guess.” (p-38-3) Claim: There Are More Trees on Earth Than Stars in the Milky Way. |
| | Other (N=14) | “It’s probably not the whole story, and is probably connected to the lack of federal recognition of same sex marriages. As in because the marriages aren’t recognized, the adoption of foreign national children by those couple as a couple aren’t [sic] recognized, so the child can’t be naturalized, etc.” (p-37) Claim: The Trump Administration Is Denying U.S. Citizenship to Children of Same-Sex Couples. |
Table 2. Taxonomy of reasons why people disbelieve news claims.

| Group | Category | Example |
|---|---|---|
| Inaccurate by contrary knowledge | I have a high degree of knowledge on this topic that allows me to assess this claim myself (e.g., I teach/write about this topic or I use this in my work). (N=3) | “Asylum seekers can only apply on US soil. I’ve worked with immigrants abroad for an extended period of time.” (p-2) Claim: Asylum-Seekers Can Apply at U.S. Embassies Abroad. |
| | I have firsthand knowledge on the subject or am an eyewitness. (N=7) | “I watched the news coverage.” (p-67) Claim: ABC, CBS, and NBC Blacked Out Pam Bondi’s Legal Defense of Trump during His Impeachment Trial. |
| | The claim contradicts some information related to the case that I know from trusted sources. (N=49) | “I think that number is almost equal to the total number of homicides in the US which is a ridiculous notion.” (p-80) Claim: 10,150 Americans Were Killed by Illegal Immigrants in 2018. |
| Implausible | The claim is not consistent with my past experience and observations. (N=46) | “The man is a showman, there’s no way he’d do something like this without letting anyone know about it.” (p-30) Claim: President Trump’s Awarding of a Purple Heart to a Wounded Vet Went Unreported by News Media. |
| | If this were true, I would have heard about it. (N=91) | “I feel [something] like this would have been a huge story that would have been carried on many national news networks.” (p-39) Claim: US Intelligence Eliminated a Requirement That Whistleblowers Provide Firsthand Knowledge. |
| | The claim appears to be inaccurate based on its presentation (its language, flawed logic, etc.). (N=14) | “This tweet follows the standard ‘ah! everybody panic!’ format you see for unsubstantiated information. Also, there is no link to a source.” (p-79) Claim: Presidential Alerts Give the Government Total Access to Your Phone. [The accompanying picture shows a tweet by John McAfee warning about the issue.] |
| | The claim appears motivated or biased. (N=90) | “Not saying which library or for what reason leads people to come up with their own conclusions.” (p-30) Claim: Biden’s Campaign Demanded an American Flag Be Removed from a Library. |
| | The claim references something that is impossible to prove. (N=12) | “‘Most sane of all’ is a hard metric to measure.” (p-29) Claim: A Scientific Study Proved that “Conspiracists” Are “The Most Sane of All”. |
| Don’t know | I’m not sure, but I do not want the claim to be true. (N=99) | “Knowing that I don’t want to believe but aware of Biden’s inaccurate pronouncements, I just chose not to give this any credence because, it is irrelevant.” (p-37-3) Claim: Joe Biden said the Mass Shootings in El Paso and Dayton Happened in ‘Houston’ and ‘Michigan’. |
| | I was just guessing. (N=74) | “I am purely guessing here because I don’t know. I’d rather not believe something that is true than believe something that is actually false.” (p-36-3) Claim: Mueller Concluded Trump Committed ‘No Obstruction’ in the 2016 Election Probe. |
| | Other (N=61) | “If migrants could leave at any time, why wouldn’t they leave. What would be the point in detaining them and sending them to a center if they were free to do as they please in the first place?” (p-9) Claim: Migrants ‘Are Free to Leave Detention Centers Any Time’. |
Table 3. Other signals of credibility that do not necessarily render a claim accurate or inaccurate.

| Category | Example |
|---|---|
| The claim is misleading. ($N_{\textit{Accurate}}=23$, $N_{\textit{Inaccurate}}=57$) | “It might be that women end up with DNA-containing fluid/skin cells during sex, but not permanently. Or during pregnancy some of DNA from the fetus (which would contain some of the male partner DNA) might end up in the woman’s blood. But not every partner’s.” (p-37) Claim: Women Retain DNA From Every Man They Have Ever Slept With. |
| The claim is not from a source I trust. (N=4) | “NBC has a habit of exaggerating to obtain ratings.” (p-88) Claim: Watchdog: ICE Doesn’t Know How Many Veterans It Has Deported [source: NBCNews.com]. |
##### 5.2.1.2 The Claim Appears to Be Inaccurate Based on Presentation.
Participants reported how a claim looks as a factor impacting their perception
of the claim’s accuracy. They referred to sensational language—“It’s too much
like a sensationalist tabloid headline.” (p-10), grammatical errors—“I really
have no idea if this is true or not but the language seems weird.” (p-46),
clickbait titles—“The title seems to be clickbait” (p-27), and quality of
presented artifacts—“The image looks like a generic image that is commonly
used with clickbait. The website referenced is ‘cityscrollz.com’ which has a
stylized spelling and does not seem to be a reliable news source.” (p-65) as
indicators of an article’s falsehood. Interestingly, some of these factors are
among the set of indicators for evaluating content credibility suggested by
Zhang et al. (Zhang et al., 2018). While Zhang et al.’s proposed indicators
are intended for evaluating both accurate and inaccurate content, in our
study, this type of rationale was cited only as an argument for refuting a
claim. One explanation is that because we only showed headlines and not full
articles, participants may have interpreted the presence of these factors as
red flags invalidating the claim, but their absence simply calling for more
investigation.
##### 5.2.1.3 The Claim Appears Motivated or Biased.
Sometimes participants determined that a claim was false because it seemed to
advance a particular agenda. In most cases, they did not know about the
accuracy of the particular claim, but based their assessment on their prior
familiarity with the general topic or the source of the claim: “Fox news is
biased and probably doesn’t like whatever the ”new way forward” is and wants
to focus on the negatives.” (p-45) —Claim: The New Way Forward Act Would
Protect Criminals from Deportation [source: FoxNews.com]. This type of
rationale generally surfaced for partisan issues and claims that were
associated with particular groups and movements: ”The source has a liberal
bias, and likely is including suicides as ‘violence’ which most will interpret
as person on person violence.” (p-24) —Claim: Gun Violence Killed More People
in U.S. in 9 Weeks Than U.S. Combatants Died in D-Day [source:
WashingtonPost.com].
#### 5.2.2. Common Misalignments and Transformation of the Categories.
The taxonomy categories underwent multiple iterations of transformation based
on the misalignment between our expected use of the categories and how
participants used them. Here we elaborate on some examples of this
misalignment as potential areas for future refinement.
##### 5.2.2.1 My Trusted Sources Confirm the Entire Claim.
This category was intended for cases where a participant had heard about the
claim from other sources or had learned about it from an authority (e.g., at
school). Sometimes, participants believed that they had heard about the claim
before but did not fully remember its particulars or from whom they had heard
it; having encountered it nonetheless made them more accepting of its
plausibility. In these cases, they often used the category The claim is
consistent with my past experience and observations, as seen in the following
example: “It seems like something I
have read/heard before in the past.” (p-16-3) —Claim: U.S Sen. Lindsey Graham
Once Said a Crime Isn’t Required for Impeachment. Therefore, it appears that
the degree of confidence in one’s rationale in addition to the type of the
rationale can shift the assigned label from one of these categories to the
other.
##### 5.2.2.2 I Have a High Degree of Knowledge on this Topic that Allows Me
To Assess the Claim Myself.
In the earlier iterations of the taxonomy, this category was referred to as I
Have Specific Expertise on the Subject. However, we discovered that the
definition of expertise varied across participants, as demonstrated by this
example that was labeled by the participant as belonging to the expertise
category: “I have seen this nearly exact headline on social media, and was
curious about its claims. Turns out that, from what I read on the government
site, that this headline is misleading.” (p-28-3) In the subsequent
iterations, in addition to refining the name of the category, we added the
example I teach/write about this topic or I use this in my work to better
convey the intended bar for expertise.
##### 5.2.2.3 I Have Firsthand Knowledge on the Subject or Am an Eyewitness.
Participants occasionally said they had investigated the sources of a claim or
had heard another source verify or refute the claim and thus had firsthand
knowledge: “I saw it from reliable sources, I witnessed the news articles
myself firsthand.” (p-67) However, our intended category for such occasions
would be My other trusted sources confirm the entire claim if considered accurate, and
The claim contradicts some information related to the case that I know from
trusted sources if considered inaccurate.
#### 5.2.3. Demographics.
We investigated whether participants’ demographics have an effect on the types
of rationales that they produce. We found that the distribution of rationales
differs statistically significantly across genders; we present details in
Appendix Section A.
### 5.3. Discussion of Taxonomy
Some categories in Table 3 may appear similar to other rationales that can
render a claim implausible and need to be further distinguished. One such pair
is a claim being misleading and its appearing to be inaccurate based on its
presentation (e.g., use of sensational language). In the absence of other
information, sensational language renders the claim implausible, i.e., fails
to convince the user that the claim is accurate. The claim being misleading
however, is not a reason why the claim should be accurate or inaccurate, but
exists in addition to the accuracy dimension. Users could consult other
rationales to determine the accuracy or plausibility of the claim, and
determine that for instance, although the claim is accurate, the accurate
information pieces are chosen and the article crafted in such a way as to
imply a particular inaccurate message. Another such pair is the claim
appearing motivated or biased and its source being one that the user does not
trust. To determine whether a claim is motivated or biased, users often resort
to such information as their familiarity with the subject matter or their
prior knowledge of the agenda of the source as well as their inferred bias of
the claim to determine whether the truth has been twisted in the claim.
Therefore, when the message of the claim agrees with the bias of the source,
users see it as an indication that the claim may in fact not be accurate. A
separate dimension of credibility is whether the source is not trusted by the
user. For instance, a user who has stated they do not trust Fox News may in
fact assess Fox’s claim that Biden won the presidential election in Arizona
(Steinhauser, [n.d.]) as accurate and not biased, while maintaining that the
claim is from a source they do not trust.
We now discuss some general issues that arose in this study.
There is an important logical asymmetry between being consistent and
inconsistent with past experience. Inconsistency with the past does offer some
level of indication that the claim is inaccurate. However, given the
tremendous number of things that _could_ happen consistent with the past,
consistency offers little evidence that a claim is accurate—instead, it only
_fails_ to provide evidence that the claim is not accurate. In general, many
participants seemed not to make this distinction, using consistency with the
past as sufficient evidence that a claim is true. Because participants used
this category often, system designers may feel compelled to make it available.
But those designers might want to consider treating this category as
indicating that the user _does not know_ whether the claim is accurate, rather
than indicating accuracy.
The confusion of lack of refutation with accuracy offers opportunities for
manipulation. It suggests that a subject might tend to believe that a
politician voted yes, and equally that a politician voted no, simply because
they saw one or the other headline, without any other evidence.
In a similar vein, some subjects treated _The claim is not from a source I
trust_ as a reason to consider a claim false. Related work shows that
alternative media sources borrow content from other sources including
mainstream media (Starbird et al., 2018a). It is therefore important to help
users realize that sources of unknown or low reputation may in fact publish
accurate content. While we observed that users can rather
reliably use the taxonomy in its current format to characterize their
rationales, the taxonomy can still benefit from further refinement, which we
leave to future work.
## 6\. RQ2: Effects of Providing Accuracy Reasons on Sharing Behavior (Nudge
study).
We hypothesized that asking people to reflect on the accuracy of a news story
before they share it on social media would help prevent sharing news stories
that are not credible. We additionally hypothesized that getting people to
consider their rationales would help with their deliberation. For these
purposes, we used the taxonomy that we developed in the Taxonomy study as one
option to nudge people to consider possible rationales, along with a second
free-text option.
### 6.1. Method
Table 4 summarizes our experimental conditions. Similar to the Taxonomy study,
participants were shown a series of 10 claims one at a time via an online
survey. Headlines were randomly drawn from a set of 54 headlines. Of the pool
of headlines, 24 were true, 25 false, and 5 were assessed as being a mixture
of true and false. If a participant was in any of the treatment conditions,
for each claim, the survey would ask whether the claim was accurate or
inaccurate and how confident the participant was in their belief (4 point
scale), displayed in Figure 3. If the participant was in one of the reasoning
conditions, the survey would additionally ask why they believed the claim was
(in)accurate. At the end of each item, all participants were asked if they
would consider sharing the article on social media, with options Yes, Maybe,
and No, displayed in Figure 4. We also followed up with another question
asking why they would (not) consider sharing the article. Following the
claims, participants answered a survey asking how they would verify the
accuracy of a headline like what they saw and how comfortable they were with
asserting their judgments publicly. Then they answered partisanship questions
for each claim that they had previously seen: “Assuming the above headline is
entirely accurate, how favorable would it be to Democrats versus Republicans?”
(5 point scale). The survey contained similar post-task questions as the
Taxonomy study. The full questionnaire is included in the Supplementary
Materials. The Nudge study was completed in 12 days.
Figure 3. The UI for how accuracy and confidence questions were presented to a
participant along with an example of a headline.
Figure 4. The UI for how we asked users whether they would consider sharing
the headline presented to them.
Table 4. Experimental conditions of the Nudge study. The table shows which conditions presented participants with questions to assess the accuracy of claims, provide their reasoning for why the claim is (not) accurate, and whether we used the taxonomy we developed in the Taxonomy study to capture their reasoning. Cond. | Assessed accuracy | Provided reasoning | Reasoning format
---|---|---|---
1 | – | – | –
2 | ✓ | – | –
3 | ✓ | ✓ | Free-form text
4 | ✓ | ✓ | Checkbox of taxonomy categories + text
### 6.2. Results
Our dataset from the Nudge study contained 21,113 datapoints, of which we
identified 5,403 as spam by investigating their associated free-text
responses. The responses that we labeled as spam were copy-pastes of the
headline title, sometimes with minor modifications, or unrelated answers to
the question (e.g., responding “good” to all the questions). Other responses
that we labeled as spam had glaring grammatical errors or a mismatch between
the participant’s accuracy assessment and their text. In these cases, we
deemed that the participant had not received the intended treatment and
therefore treated their response as spam. Exclusions were applied at the
participant level: whenever we determined that a datapoint was suspected of
being spam, we examined all the other datapoints submitted by the same
participant, as well as their responses to the survey that followed the
claims, described in Section 6.1. In almost all cases, the qualities that
disqualified a datapoint were present in all of that participant’s responses,
and we therefore labeled all of the participant’s submitted datapoints as spam.
A datapoint that we discarded as spam for example was the following: “the
claim will be dangerous.”—in response to Why do you think the claim is
inaccurate?; “it gives the fear about the treatment.”—in response to Why would
you not consider sharing it?; both from the same datapoint; Claim: Tibetan
Monks Can Raise Body Temperature With Their Minds. We also excluded 6
datapoints where participants had technical issues. The datapoints that we
included in the analyses were collected from 1,668 participants. Participants
did not always complete all 10 claims due to dropout or technical issues;
we also removed from our analyses the datapoints participants labeled for
headlines whose ground truth was neither completely true nor false (mixture),
which had been collected for exploratory purposes. From the datapoints that we
included in the analyses, 3,740 were in condition 1, 3,977 in condition 2,
3,405 in condition 3, and 3,118 in condition 4.
#### 6.2.1. Models.
To test the effect of the nudges and their interaction with objective or
subjective veracity of headlines, we fit two types of models to our dataset,
both with share intention as the dependent variable. One was a linear mixed
effect model for which we assigned values of 0, 0.5, and 1 to the share
decisions of “No”, “Maybe”, and “Yes” respectively. The other was a cumulative
link mixed model which treated the share decisions as ordinal. Results were
consistent between the two models. Because the linear model is more
straightforward to interpret, we discuss the results of this model below and
leave the results of the cumulative link model to the Appendix section C.
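The numeric coding of the dependent variable for the linear model can be sketched as follows (a minimal Python illustration; the function and variable names are our own, not from the study materials):

```python
# Map the 3-point share responses to the numeric dependent variable
# used by the linear mixed-effects model: "No" -> 0, "Maybe" -> 0.5, "Yes" -> 1.
SHARE_VALUES = {"No": 0.0, "Maybe": 0.5, "Yes": 1.0}

def encode_share_decisions(decisions):
    """Convert a list of share responses into numeric share intentions."""
    return [SHARE_VALUES[d] for d in decisions]

def share_rate(decisions):
    """Mean share intention over a set of responses (e.g., one condition)."""
    values = encode_share_decisions(decisions)
    return sum(values) / len(values)

# Example: four responses from one condition.
print(share_rate(["No", "Maybe", "Yes", "No"]))  # 0.375
```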
##### 6.2.1.1 Veracity Model.
To test the effect of nudges and their interaction with objective veracity of
headlines, we developed a linear mixed effect model with sharing intention as
the dependent variable and our study treatments as independent variables. The
treatments were whether participants were asked about accuracy, whether they
were asked about reasoning, and whether the reason checklist was presented to
them. The model also included the veracity of the headline and the interaction
between veracity and each of the treatments as independent variables. We fit
this model to the whole dataset. We included participant identifier and the
headline the participant had assessed as random effects in our models. The
inclusion of these random effects accounts for the non-independence between
the data points provided by one participant or for one headline and captures
the variance in the intercepts between participants or headlines. We used the
function “lmer” from the R package “lme4” to define the model. We refer to
this model as the veracity model:
(1) $\text{share}\sim\text{veracity}\times(\text{accuracy condition}+\text{reasoning condition}+\text{reasoning format})+(1|\text{participant})+(1|\text{claim})$
We also developed another more refined veracity model with the demographics of
participants as control variables, discussed in Appendix B. The effects we
observed for headline veracity as well as our treatments in the model
discussed in the Appendix were consistent with the results we observed for the
model outlined in this section.
##### 6.2.1.2 Perceived Accuracy Model.
Although the ultimate desired sharing behavior is for people to share content
that is in fact true and refrain from sharing objectively false stories, the
best achievable outcome for behavioral nudges that encourage deliberation is
to guide people to come to a better discernment _based on what they already
know_. One realization, for example, could be that they do not, after all,
know what they had previously taken for granted. Therefore, in addition to
examining how the nudges affect sharing of objectively true and false content,
we investigate how they interact with headlines that participants had
initially believed to be true or false, indicated by their accuracy
assessments.
Therefore, to test how the treatments and their interaction with a
participant’s initial accuracy assessment affect sharing intentions, we fit a
model similar to the one described in 6.2.1.1 but included perceived accuracy,
i.e., participant’s assessment of the accuracy of the headline, rather than
veracity as the independent variable. Because we did not have accuracy
assessments from participants in the control (condition 1), we fit this model
to the data from conditions 2, 3, and 4. The treatments that we included as
independent variables were whether participants were asked about reasoning
and whether we presented the reason checklist to them. We refer to this model
as the perceived accuracy model:
(2) $\text{share}\sim\text{perceived accuracy}\times(\text{reasoning condition}+\text{reasoning format})+(1|\text{participant})+(1|\text{claim})$
#### 6.2.2. Findings.
We performed a Wald Chi-Square test on each of the fitted models to determine
if our explanatory variables were significant. The tests revealed that the
effect of veracity in the veracity model and the effect of perceived accuracy
in the perceived accuracy model were both significant [$\chi^{2}(1)=34.03$,
$p<0.001$ for veracity, $\chi^{2}(1)=2025.12$, $p<0.001$ for perceived
accuracy]. Post-hoc Estimated Marginal Means tests revealed that participants
were more likely to share an objectively true rather than a false headline
[$z=4.98$, $p<0.001$]. Similarly, they were more likely to have the intention
to share a headline that they assessed as accurate [$z=37.70$, $p<0.001$]. We
report the result of the tests for each of our study interventions in sections
that follow.
Throughout the paper, we present figures showing the means of our outcome
measures across conditions. The error bars in these figures are standard
errors around the mean.
##### 6.2.2.1 Effect of Providing Accuracy Assessments.
We observed that providing accuracy assessment had a significant effect on
sharing intentions [$\chi^{2}(1)=38.05$, $p<0.001$]. Note that this variable
was included in the veracity model only. Figure 5 shows sharing likelihood for
condition 2, where we did request accuracy assessment, and condition 1, where
we did not, across both true and false headlines. The results suggest that
providing accuracy assessment about an article before deciding whether to
share it lowers the probability that one shares the article for both true and
false headlines. However, although this intervention results in an 18%
decrease in sharing of true headlines, the decrease in sharing of false
headlines is higher (37%), thereby reducing the ratio of shared false
headlines to true
ones. The effect of the interaction between providing accuracy and veracity
was not significant [$\chi^{2}(1)=3.22$, $p=0.07$]. Consistent with the
results that follow, this may be because the headlines that participants
perceived as accurate when prompted to deliberate were a mix of objectively
true and objectively false ones. Therefore, the sharing
of both objectively true as well as false headlines was reduced. However,
because sharing of objectively false headlines was less likely to begin with,
the drop in sharing of false headlines was higher.
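As a sanity check on how such percentage decreases are computed, consider the following sketch (the control and treatment share rates here are hypothetical, chosen only to reproduce the reported 18% and 37% decreases):

```python
def relative_decrease(before, after):
    """Percentage decrease in share rate from control to treatment."""
    return (before - after) / before * 100

# Hypothetical control/treatment share rates reproducing the reported
# decreases of 18% (true headlines) and 37% (false headlines).
true_ctrl, true_treat = 0.50, 0.41
false_ctrl, false_treat = 0.30, 0.189

print(round(relative_decrease(true_ctrl, true_treat)))    # 18
print(round(relative_decrease(false_ctrl, false_treat)))  # 37
```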
Figure 5. Share rate of true and false headlines across study conditions. The
results suggest that people are less likely to share both accurate and
inaccurate content if they are asked to assess the content’s accuracy
(condition 1 vs 2). We observe similar trends if they provide their reasoning
in addition to assessing accuracy, compared with if they only assess accuracy
(condition 2 vs 3). These interventions however, lower sharing of false
content to a greater degree. The means of sharing true and false content also
both decrease when people are asked to provide checkbox reasoning in addition
to free-text (condition 3 vs 4). The ratio of shared false to true content
however does not change.
##### 6.2.2.2 Effect of Providing Reasoning.
We saw that whether participants provided reasoning had a significant effect
on sharing intentions in both the veracity and perceived accuracy models
[$\chi^{2}(1)=8.45$, $p=0.004$ for the veracity model, $\chi^{2}(1)=10.33$,
$p=0.001$ for the perceived accuracy model].
Figure 5 shows sharing likelihood for condition 3, where we requested
participants’ rationale for why they believed a claim was or was not accurate,
and condition 2, where we did not, across both true and false headlines.
Similar to the results we observed for requesting accuracy assessments,
requesting reasoning reduces sharing of false headlines to a greater degree
(27%) compared to the decrease in sharing of true headlines (14%). Therefore,
the ratio of false shared headlines to shared true headlines is reduced.
Figure 6 shows that requesting reasoning resulted in less sharing of headlines
that participants initially believed to be true but did not have an impact on
sharing of perceived false content. The lack of reduction in sharing of
subjectively false headlines however, could be because their sharing rate was
very low to begin with and therefore there was not much room for improvement
(6% in condition 2, 5% in condition 3). As expected, because the sharing of
subjectively false headlines did not change to a great extent, but that
sharing of headlines initially perceived as true decreased, the interaction
between reasoning and perceived accuracy was significant in the perceived
accuracy model [$\chi^{2}(1)=19.38$, $p<0.001$]. However, because headlines
perceived as accurate were a mix of objectively true and objectively false,
the sharing of both objectively true as well as false headlines was decreased
(Figure 5). It is therefore reasonable that the interaction between reasoning
and veracity was not significant in the veracity model [$\chi^{2}(1)=0.06$,
$p=0.80$].
Figure 6. Share rate of headlines by their perceived accuracy, regardless of
actual veracity. Participants are less likely to share content they initially
perceived as true when they are asked about reasoning, or when they are
requested to work through the checklist of reason categories. Sharing does not
differ for headlines that were initially perceived as false, which could be
because sharing of these headlines is rare to begin with.
##### 6.2.2.3 Effect of Reasoning Format.
The effect of reasoning format on sharing was significant in both the veracity
and perceived accuracy models [$\chi^{2}(1)=4.97$, $p=0.03$ for the veracity
model, $\chi^{2}(1)=4.72$, $p=0.03$ for the perceived accuracy model].
Figure 5 shows that the mean sharing likelihood in the checkbox condition
decreases by 17% for false headlines and 18% for true headlines, compared to
the free-text condition. Figure 6 shows that similar to the results we
observed for requesting reasoning, presenting participants with reason
checkboxes resulted in less sharing of content that participants initially
perceived as true and did not lower sharing of headlines that were perceived
as false. Sharing of content perceived as false however, was already rare (5%
in condition 3, 4% in condition 4). This is why the interaction between
reasoning format and perceived accuracy was significant in
the perceived accuracy model [$\chi^{2}(1)=9.27$, $p=0.002$]. However, because
headlines perceived as accurate were in fact a mix of objectively true and
false, sharing of both objectively true and false headlines was reduced, so
the interaction between reasoning format and veracity was not significant in
the veracity model [$\chi^{2}(1)=3.48$, $p=0.06$].
#### 6.2.3. Reasons for Sharing Content Perceived as False.
We examined participants’ free-text responses to understand why they were
willing to share a fraction, albeit small, of the headlines they perceived as
false. One member of the research team used open coding to assign labels to
participant responses (a total of 427). Of the responses that provided a
reason, the most cited was that they believed the story was entertaining or
that they thought it amusing to see which of their social media friends would
believe the story: “If I felt in a playful mood I might post this just to see
how many people no longer recognize satire” (22%). Others stated they would
consider sharing the claim after fact-checking it: “If I could verify the
contents this would be worth sharing.” (20%). Some considered the claim a
debate starter: “I would share this because I think it would spark a good
debate between pro and anti gun members. It would be interesting to see if
people actually believe this information in the headline is true or not.”
(17%). Other reasons included because they wanted to let their social circle
know that the claim is false: “I would share this only to point out the
misinformation available on most anything.” (11%) or that they wished to fact-
check the claim: “to see if anyone can prove the authenticity of the image”
(10%). Some participants believed it was important to inform their friends
about the claim in case it turned out to be true: “Just in case it is accurate
and, in either case, would make people look into the claim.” (9%). Some
pointed out that they would share the article simply because it was
interesting “It is interesting, even though I am not sure if it is true or
not.” (4%). Others wished to share their emotions or frustration about the
article with their social circle: “Just to point out how absurd the title of
this article is.” (4%).
Interestingly, in a few instances, we saw that although a participant had
originally labelled a claim as inaccurate, they knowingly decided to share it
to help advance their view: “Because I am against its [Marijuana’s]
legalization, maybe this would help instill fear in its users.” (1%) aligned
with what was suggested in (Marwick, 2018). On a few other occasions, we
observed that participants had had a change of heart about the claim’s
accuracy: “I’d share because I know that it’s not something that would just be
out in the open like that and I know that he’s stolen from the government by
not paying his taxes, so I feel like it’s accurate in a sense.” (1%), or that
they wanted to provide an addendum to the claim to help rectify it: “So I
could type. ”…Inadvertently?” (cough) Because Facebook is evil, not Cenobite
evil, but corporation-evil. They did that on purpose, they know it, and I know
it. The trouble is how few OTHER people know it.”—Claim: Some Phone Cameras
Inadvertently Opened While Users Scrolled Facebook App.
#### 6.2.4. Factors Interacting with Findings
##### 6.2.4.1 Confidence.
Figure 7. Share rate of true and false headlines across different confidence
levels. For true headlines that they correctly assess as true, participants
are less likely to share content they are less confident about regardless of
whether they are asked for their reasoning. It is on those headlines about
which they are more confident that requesting reasoning plays a role. For
false headlines that they mistakenly assess as true, asking about reasoning
plays a role in sharing across all confidence levels.
We observed that asking people to provide reasoning inhibits sharing of true
as well as false content. We wished to see how sharing behavior across our
treatments differs with regards to how confident participants in each
condition are in their accuracy assessments. For instance, it is possible that
asking people for their rationales makes them reluctant to share headlines on
whose accuracy they report lower levels of confidence. Figure 7 shows that
participants are not likely to share headlines that they assess as false
regardless of whether they are confident about their assessment. For headlines
that they assess as true however, in all the treatment conditions, they are
less likely to share headlines they are less confident about.
We expect that participants who are asked to provide accuracy assessments
or reasons will be less willing to share headlines about which they are
initially less confident, compared to those who are not. This is what
we observe for false headlines that participants initially misjudged as true.
However, surprisingly, for true headlines that participants had correctly
judged as true, the intervention does not reduce sharing at lower confidence
levels. It is on those headlines about which participants report higher
confidence that the reasoning treatment and the reason checkboxes play a role.
Note that we asked participants to indicate their level of confidence before
they provided their reasons and they could not return to the confidence
question once they advanced to the question requesting reasoning. It is
possible that reflecting on their rationale has lowered their confidence.
##### 6.2.4.2 Average Perceived Accuracy.
It is conceivable that there are headlines that are in fact accurate but sound
too outlandish to be true. And conversely, actual false headlines can seem
reasonable. While we found that headline veracity does indeed have a strong
effect on whether people are willing to share the headline, we wanted to tease
apart the ground truth from how accurate a claim was perceived as according to
the wisdom of the crowd. Therefore, we assigned to each headline an average
perceived accuracy metric, which was the average of accuracy assessments from
all the participants who had provided accuracy assessments for the headline,
mapping accurate to 1, and inaccurate to 0. With this metric, a headline would
no longer have a dichotomous ground truth, and instead, would have a degree of
truth.
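The metric can be sketched as follows (a minimal Python illustration; the function name is ours):

```python
from statistics import mean

def average_perceived_accuracy(assessments):
    """Average the binary accuracy judgments for one headline, mapping
    "accurate" -> 1 and "inaccurate" -> 0, yielding a degree of truth."""
    return mean(1.0 if a == "accurate" else 0.0 for a in assessments)

# Example: 9 of 10 participants judged the headline accurate.
print(average_perceived_accuracy(["accurate"] * 9 + ["inaccurate"]))  # 0.9
```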
Overall, actual true headlines had a higher average perceived accuracy
compared to the false ones. Table 5 presents examples of headlines that were
judged by most participants correctly or incorrectly, and some in between.
Table 5. Examples of easy, medium, and hard calls around headline accuracy assessed by the perceived accuracy of the headline averaged over all participants who assessed the headline. The average perceived accuracy is on a scale of 0-1, with 0 indicating that the headline was perceived as inaccurate and 1, as accurate. Difficulty | Headline | Veracity | Avg. perceived accuracy
---|---|---|---
| A Study Showed That Dogs Exhibit Jealousy | True | 0.90
Easy | Sipping Water Every 15 Minutes Will Prevent a Coronavirus Infection | False | 0.05
| President Trump’s Awarding of a Purple Heart to a Wounded Vet Went Unreported by News Media | True | 0.49
Medium | Eric Trump Tweeted About Iran Strike Before It Was Made Public | False | 0.45
| There Are More Trees on Earth Than Stars in the Milky Way | True | 0.25
Hard | Rain That Falls in Smoky Areas After a Wildfire Is Likely to Be Extremely Toxic | False | 0.62
We built a linear model to explain share likelihood of a headline predicted by
its average perceived accuracy. The dependent variable of the model was the
average of share intentions for a headline from all the participants that had
been presented the headline, mapping the 3-item Likert outcomes “Yes”,
“Maybe”, and “No” to numeric values as explained in 6.2.1. The independent
variable was the headline’s average perceived accuracy (continuous). We fit
this model to the data from each of the control and treatment conditions
separately. Because the average perceived accuracy of each headline was
calculated over the whole dataset, it remained constant across conditions.
Share average however, varied in each condition.
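A per-condition fit of this kind can be sketched with a plain ordinary-least-squares slope (the per-headline averages below are hypothetical; only the qualitative pattern of a slope below 1 mirrors our Figure 8 observation):

```python
def ols_fit(xs, ys):
    """Ordinary least squares fit of y = intercept + slope * x, mirroring
    the per-condition model of average share intention on average
    perceived accuracy."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical per-headline averages for one condition:
# (average perceived accuracy, average share intention)
acc = [0.05, 0.25, 0.45, 0.49, 0.62, 0.90]
share = [0.04, 0.15, 0.25, 0.28, 0.33, 0.50]
intercept, slope = ols_fit(acc, share)
print(slope < 1)  # True: sharing rises with perceived accuracy, at a rate below 1
```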
Figure 8 shows how participants’ sharing intentions differ across conditions
as the average perceived accuracy of a headline increases. In the treatment
groups where we asked for accuracy assessment or reasoning in addition to
accuracy, the slopes are higher compared to the control condition. This
observation suggests that the interventions helped people be more
differentiating in sharing of headlines that they perceived as true vs false.
The confidence intervals around the mean also seem to be narrower for the
treatment conditions compared to the control. Given that the number of
participants in each treatment condition was similar to or smaller than in
the control, a narrower confidence interval around the treatment slopes suggests
less dispersion and uncertainty in sharing intentions.
Figure 8. Share likelihood of each headline by its perceived accuracy averaged
over all treatment conditions. The slopes in the treatment groups are higher
than the control, suggesting more sharing differentiation between headlines
that were perceived as true vs false. The fitted lines in the treatment groups
also have narrower confidence intervals, suggesting less dispersion and
uncertainty in sharing intentions.
##### 6.2.4.3 Demographics.
We conducted an exploratory analysis of the demographics of participants and
their effect on share likelihood. We have reported these analyses in Appendix
B.
### 6.3. Discussion of Nudge Study Findings
This study investigated the effects of two types of interventions on people’s
intention of sharing headlines. The interventions included asking people to
assess the accuracy of the headline and to provide their rationale for why
they believe the headline is or is not accurate. We observed that the
participants who assessed the accuracy of a headline were less likely to
indicate they are willing to share it, compared to those participants whom we
did not prompt about accuracy. Although the intervention resulted in curbing
of sharing both true and false content, the reduction in sharing of false
headlines was higher. Our results corroborate prior findings that nudging
people to be mindful of news accuracy increases sharing discernment (Pennycook
et al., 2020b), and enrich our understanding of how impactful different nudges
will be.
We found that asking people to reflect and elaborate on the reason for a
headline’s (in)accuracy further lowers the probability that they share either
true or false content, compared to if they only assess the headline’s
accuracy. The intervention however, reduces sharing of false content to a
greater degree.
We observed that the decrease in sharing as a result of requesting people to
provide structured reasoning was roughly the same across true and false content
(17% for false, 18% for true in our sample). Although this observation
suggests that selecting from a checklist of rationales in addition to
providing accuracy assessments and free-text reasoning does not seem to help
in further differentiating between false and true content, such a checklist
can still be used on platforms for the added benefit of capturing and
surfacing reasons in structured form.
In addition, we examined how sharing intentions differ across conditions as
the average perceived accuracy of headlines increases. We observed that the
interventions caused people to be more differentiating in sharing of content
that they perceived as true vs false compared to the control condition and
that the dispersion of sharing decisions in the treatment groups was also
lower. Interestingly, Figure 8 shows that the slopes of sharing by perceived
accuracy across all conditions are less than 1, indicating that as the
perceived accuracy of a headline increases, its share likelihood also
increases but at a lower rate. One possible explanation for this phenomenon is
that headlines that most people agree are true could appear as less
interesting and already believed to be known by the others in one’s social
circle.
One concern around the generalizability of our results to online platforms is
the possible existence of Hawthorne effect, under which participants change
their behavior due to an awareness of being studied (McCambridge et al., 2014;
Kreuter et al., 2008; Preist et al., 2014). We therefore need to understand if
participants would exhibit the same behavior as observed in our study if they
were placed in a different intervention condition. In a user study with users
recruited from worker platforms, Pennycook et al. investigated how self-
reported share likelihood of headlines was influenced by an accuracy nudge at
the onset of the study, in which, as a pre-task, they asked users to assess
the accuracy of a single news item. They found that the participants’
likelihood of sharing false headlines relative to true ones decreased with
the intervention,
similar to what we observed in our study. However, they reported that asking
users to instead assess the humorousness of a headline did not yield similar
results. In another study Pennycook et al. sent an unsolicited message to
Twitter users who had recently shared links to websites that produced false or
hyperpartisan content and asked them to assess the accuracy of a non-political
headline. They report that the quality of the news sites that these users
shared in the 24 hours after receiving the message was higher than that of
the sites shared by others who had not received the message (Pennycook et al.,
2021), suggesting that the effects of these behavioral nudges will indeed
generalize to social media platforms.
In addition, other prior work has reported that there is a correlation between
hypothetical sharing of news stories reported by survey respondents and actual
sharing on social media (Mosleh et al., 2020). With our nudges proving to be
effective and the results generalizing to actual social media platforms,
platform designers can implement these nudges to encourage more informed
propagation and consumption of news.
As we discuss in Appendix D, the difference in spam rates across conditions
may have influenced the makeup of the data. However, if these interventions
are implemented on social media, it is unlikely that we will observe similar
spamming behaviors as we do on a paid worker platform. Spamming in lieu of
providing legitimate rationales by itself could be a clear signal of the
sharer’s credibility to those who will encounter the shared news.
It is clear that factors beyond the perceived accuracy of a headline can
affect sharing intentions, in line with what is reported in prior work (Shin
and Thorson, 2017). Participants’ responses pointed to several such factors:
* Lack of interest in the topic: “It’s not a story that I would share with others, I’m not interested and my followers wouldn’t be either.”
* The topic being sensitive and likely to create controversy: “It is a sensitive topic that I feel strongly one way about and I don’t want to start disagreements on my page.”
* Not sharing on social media at all: “I don’t share anything on social media.”
* Not wanting to overburden their social media friends with information: “I would feel bad clogging up peoples timelines with useless information that is already well known.”
* A true claim shedding a good light on someone of the opposite party: “It is positive about Trump. I would never post anything positive about him.”
* A true claim putting someone affiliated with their own party in a bad light: “I wouldn’t bad mouth Joe Biden.”

Nevertheless, the primary reason cited for not sharing a headline was its
dubious credibility: “That is just an outright lie. I refuse to contribute to
the misinformation being shared on Facebook.”
Therefore, sharing intentions, or lack thereof, can be used as a proxy for
how accurate participants believed each headline to be after the
interventions. Although we did request accuracy assessments in the two
treatment groups where participants provided reasoning, these assessments
were made at the onset of each item, before participants were subjected to
the reasoning interventions. In addition, the control group did not provide
accuracy assessments. We therefore measure the effects of our interventions
by contrasting sharing intentions rather than accuracy assessments. Figure 8
supports this approach, showing that share probability increases with the
perceived accuracy of a claim.
## 7. Discussion
We have discussed the two studies in their respective sections; here we offer
more general observations.
The behavioral nudges that we tested in this study, providing an accuracy
assessment and a rationale for whether a news story is or is not accurate,
proved effective at reducing the ratio of false to true content that
participants indicated they were willing to share. The reduction in shared
false headlines came at the cost of also curbing the sharing of true ones.
However, we still believe that the outcome is preferable to leaving
misinformation unchecked and unchallenged. Indeed, it is conceivable that
platforms can benefit from an overall improved engagement if the feeds that
they present to their users become more reliable. In addition, even if
approaches similar to the nudges in our study result in some loss of profit,
the implications that unmuddied online information spaces have for society
may warrant persuading platforms, through activism or legislation, to adopt
them.
### 7.1. Design Implication
While platform moderators and fact-checkers play a valuable role in flagging
and removing content that has already spread and become visible, other
measures are needed to restrain sharing of misinformation as it is being
handed from user to user. In addition, although policy-driven platform
moderation is necessary in some contexts, communities should be wary of
relinquishing all the power of content filtering and highlighting to the
platforms whose incentives as for-profit entities running on ads do not
necessarily align with the users’ (Grygiel and Brown, 2019; Kelly, [n.d.]).
The challenge of moderation is exacerbated as not all accounts of problematic
behavior or posts can be provisioned a priori in platform policies, leading
moderators to make ad-hoc decisions in grey areas that sometimes draw
criticism (Shrimsley, [n.d.]; Ibrahim, 2017; Spray, [n.d.]; Fac, [n.d.]a).
These challenges suggest that the problem of misinformation could additionally
be tackled at the user level. The interventions that we studied in this work
address this problem by, first, introducing a barrier, albeit a low one, to
posting and sharing and, second, shifting users’ attention to accuracy and
away from the social feedback and engagement that they would receive at
posting time.
Our interventions can be used in the existing social media platforms such as
Twitter and Facebook and are aligned with the initiatives they have already
undertaken to combat the proliferation of misinformation. However, the
usefulness of these nudges is not limited to these platforms as they can be
leveraged in alternative platforms with different publishing models, such as
WikiTribune where users curate content collectively (O’Riordan et al., 2019).
We envision that requesting reasoning on social media or content sharing
platforms can be done in a similar fashion to how emotions and reactions are
currently captured on the existing social media or how users cite references
when developing content on wiki-based platforms. Structured reasons provided
by users for or against a post’s accuracy can serve as rich metadata based on
which other users can filter the posts they would want to view. In such a
scenario, a user might choose to view only those articles that have been
evaluated as true by a friend because the evaluator has asserted they have
domain knowledge on the subject. The taxonomy that we developed originates
from people untrained in credibility signifiers often developed by experts,
and we determined in our studies that other untrained people can reliably use
it to provide their rationales. Therefore, the adoption of requesting
rationales via a checklist similar to that of our study does not appear to
pose a barrier to entry for users on social media, except perhaps by forcing
some degree of deliberation, which is itself desirable. Prior work reports
that the effectiveness of fact-checking depends on the relationship between
the user offering the fact-checking information and the user requesting it or
the one who produced an inaccurate post (Margolin et al., 2018; Hannak et
al., 2014).
Therefore, by incorporating accuracy assessments of our study in platforms and
making them visible and accessible to users, we hope social media friends can
benefit from them.
## 8. Future Work
One direction for future work is to examine how users react to posts
accompanied by such rationale tags as the ones in our study and what factors
they consider when deciding the credibility of a post or the persuasiveness of
its tagged rationales. However, because who the sharer of a post is can also
impact perceived content credibility (Flintham et al., 2018), such a study
would be more informative if conducted as a field study on a social network,
rather than a controlled experiment.
Our work examined the effects of accuracy assessment and reasoning nudges on
content sharing when users are required to provide them. Future work can
investigate the effects of allowing users to optionally provide these signals,
similar to how “likes” and “upvotes” are captured on social media.
Although one reason people would share a story was to inform others, we found
that there are various reasons they might share even a headline they do not
necessarily believe to be accurate, such as because the article is
entertaining or to ask their social circle to help them fact-check the story.
Future work could investigate how providing a checklist of these sharing
intentions, in a fashion similar to our study, would affect sharing behaviors
on social media and how it would impact the consumption of posts on which
these signals are provided by the users who encounter them.
One interesting observation from our exploratory analyses in Appendix B was
that our Republican participants were more likely to share false claims
compared to our Democratic participants. While our analyses were not planned a
priori, they are bolstered by prior studies that have reported similar
findings (Guess et al., 2019). These observations give rise to interesting
questions that could be examined in future work, such as whether behavioral
nudges should be applied indiscriminately to all platform users or only those
who have been found to habitually share misinformation, or whether users
should be primed every time they intend to share posts or if it is sufficient
to apply the nudges only occasionally.
## 9. Limitations
In condition 4 of the Taxonomy study where we presented the checklist of
reason categories, we also asked participants to explain their choices in
free-text to examine how they had used the categories and if they had truly
understood their intended use. Comparing this condition with condition 3,
where participants provided their reasoning via free-text, gives us insight
into whether restricting people’s rationales to the taxonomy framework works
as well as allowing them to provide unstructured reasons.
However, our study did not include another condition where participants needed
to only work through the checklist without elaborating on their choices. The
inclusion of such a condition would have allowed us to determine whether the
checklist of reasons can replace free-text entirely and induce the same level
of discernment. Not requiring free-text but rather providing it as optional in
addition to the checklist could potentially be more desirable for the adoption
of this strategy on social media. Future work can include a condition of this
nature.
## 10. Conclusion
In this work, we explored how to alter social media platforms such that users
have content accuracy in mind when sharing posts. We explored nudges, usable
at scale, intended to encourage users to consider whether a post is accurate
and their rationales for believing so. To facilitate
capturing people’s rationales in a structured format and help the adoption of
the nudges on social media, we conducted a study where we developed a taxonomy
of reasons why people believe or disbelieve news claims. That study involved
presenting news claims to people as well as the taxonomy and asking them to
use the reason categories to provide their rationales for the (in)accuracy of
the claims. We conducted multiple iterations of the study while revising the
taxonomy until participants could reliably use it to label their responses.
We then examined the effects of two different nudges, accuracy assessment and
providing reasoning for why a news story is or is not accurate, on people’s
sharing intentions on social media. We found that both nudges reduce sharing
of true and false content, but the decrease in sharing of false content was
higher. Our findings on the effects of the accuracy and reasoning nudges offer
implications for social media platform designers on how to mitigate sharing of
false information. Furthermore, these platforms can ask their users to provide
accuracy assessments for the posts they share by guiding them through the
taxonomy categories that we developed. These structured reasons could
potentially help those who encounter a post, e.g., by enabling them to filter
their newsfeed based on the different reasons that sharers have specified for
or against a post’s accuracy.
## 11. Acknowledgments
We would like to thank Ezra Karger, Ali Kheradmand, and Mohammad Amin Nabian
for their valuable feedback regarding the statistical analyses.
## References
* fac ([n.d.]) [n.d.]. _Combatting Vaccine Misinformation - About Facebook_. https://about.fb.com/news/2019/03/combatting-vaccine-misinformation/
* Fac ([n.d.]a) [n.d.]a. _Facebook apologises for blocking Prager University’s videos_. https://www.bbc.com/news/technology-45247302
* Qua ([n.d.]) [n.d.]. _Facebook is ditching its own solution to fake news because it didn’t work_. https://qz.com/1162973/to-fight-fake-news-facebook-is-replacing-flagging-posts-as-disputed-with-related-articles/
* Fac ([n.d.]b) [n.d.]b. _https://www.facebook.com/journalismproject/programs/third-party-fact-checking_. https://www.facebook.com/journalismproject/programs/third-party-fact-checking
* Allen et al. (2020) Jennifer Nancy Lee Allen, Antonio Alonso Arechar, Gordon Pennycook, and David Rand. 2020. Scaling up fact-checking using the wisdom of crowds. (2020).
* Argentino ([n.d.]) Marc-André Argentino. [n.d.]. _QAnon and the storm of the U.S. Capitol: The offline effect of online conspiracy theories_. https://theconversation.com/qanon-and-the-storm-of-the-u-s-capitol-the-offline-effect-of-online-conspiracy-theories-152815
* Bazarova et al. (2015) Natalya N Bazarova, Yoon Hyung Choi, Victoria Schwanda Sosik, Dan Cosley, and Janis Whitlock. 2015\. Social sharing of emotions on Facebook: Channel differences, satisfaction, and replies. In _Proceedings of the 18th ACM conference on computer supported cooperative work & social computing_. 154–164.
* Bengali (2019) Shashank Bengali. 2019\. How WhatsApp is battling misinformation in India, where ’fake news is part of our culture’. _Los Angeles Times. https://www.latimes.com/world/la-fg-india-whatsapp-2019-story.html_ (2019).
* Bhuiyan et al. (2020) Md Momen Bhuiyan, Amy X Zhang, Connie Moon Sehat, and Tanushree Mitra. 2020. Investigating Differences in Crowdsourced News Credibility Assessment: Raters, Tasks, and Expert Criteria. _Proceedings of the ACM on Human-Computer Interaction_ 4, CSCW2 (2020), 1–26.
* Bode and Vraga (2018) Leticia Bode and Emily K Vraga. 2018. See something, say something: Correction of global health misinformation on social media. _Health communication_ 33, 9 (2018), 1131–1140.
* Bovet and Makse (2019) Alexandre Bovet and Hernán A Makse. 2019. Influence of fake news in Twitter during the 2016 US presidential election. _Nature communications_ 10, 1 (2019), 1–14.
* Castillo et al. (2011) Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011\. Information credibility on twitter. In _Proceedings of the 20th international conference on World wide web_. 675–684.
* Charmaz and Belgrave (2007) Kathy Charmaz and Linda Liska Belgrave. 2007. Grounded theory. _The Blackwell encyclopedia of sociology_ (2007).
* Cokely et al. (2012) Edward T Cokely, Mirta Galesic, Eric Schulz, Saima Ghazal, and Rocio Garcia-Retamero. 2012\. Measuring risk literacy: The Berlin numeracy test. _Judgment and Decision making_ (2012).
* Coleman ([n.d.]) Alistair Coleman. [n.d.]. _’Hundreds dead’ because of Covid-19 misinformation_. https://www.bbc.com/news/world-53755067
* Del Vicario et al. (2016) Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H Eugene Stanley, and Walter Quattrociocchi. 2016. The spreading of misinformation online. _Proceedings of the National Academy of Sciences_ 113, 3 (2016), 554–559.
* Dias et al. (2020) Nicholas Dias, Gordon Pennycook, and David G Rand. 2020\. Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. _Harvard Kennedy School Misinformation Review_ 1, 1 (2020).
* Dixit and Mac (2018) Pranav Dixit and Ryan Mac. 2018. How WhatsApp Destroyed A Village. _Buzzfeed News_ (2018).
* Epstein et al. (2020) Ziv Epstein, Gordon Pennycook, and David Rand. 2020\. Will the crowd game the algorithm? Using layperson judgments to combat misinformation on social media by downranking distrusted sources. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–11.
* Flintham et al. (2018) Martin Flintham, Christian Karner, Khaled Bachour, Helen Creswick, Neha Gupta, and Stuart Moran. 2018\. Falling for fake news: investigating the consumption of news via social media. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_. 1–10.
* Frederick (2005) Shane Frederick. 2005\. Cognitive reflection and decision making. _Journal of Economic perspectives_ 19, 4 (2005), 25–42.
* Geeng et al. (2020) Christine Geeng, Savanna Yee, and Franziska Roesner. 2020\. Fake News on Facebook and Twitter: Investigating How People (Don’t) Investigate. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–14.
* Graves (2016) Lucas Graves. 2016\. _Deciding what’s true: The rise of political fact-checking in American journalism_. Columbia University Press.
* Grinberg et al. (2019) Nir Grinberg, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, and David Lazer. 2019\. Fake news on Twitter during the 2016 US presidential election. _Science_ 363, 6425 (2019), 374–378.
* Grinberg et al. (2017) Nir Grinberg, Shankar Kalyanaraman, Lada A Adamic, and Mor Naaman. 2017. Understanding feedback expectations on Facebook. In _Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing_. 726–739.
* Grygiel and Brown (2019) Jennifer Grygiel and Nina Brown. 2019. Are social media companies motivated to be good corporate citizens? Examination of the connection between corporate social responsibility and social media safety. _Telecommunications Policy_ 43, 5 (2019), 445–460.
* Guess et al. (2019) Andrew Guess, Jonathan Nagler, and Joshua Tucker. 2019\. Less than you think: Prevalence and predictors of fake news dissemination on Facebook. _Science advances_ 5, 1 (2019), eaau4586.
* Haigh et al. (2019) Maria Haigh, Thomas Haigh, and Tetiana Matychak. 2019\. Information Literacy vs. Fake News: The Case of Ukraine. _Open Information Science_ 3, 1 (2019), 154–165.
* Hannak et al. (2014) Aniko Hannak, Drew Margolin, Brian Keegan, and Ingmar Weber. 2014. Get Back! You Don’t Know Me Like That: The Social Mediation of Fact Checking Interventions in Twitter Conversations.. In _ICWSM_.
* Ibrahim (2017) Yasmin Ibrahim. 2017\. Facebook and the Napalm Girl: reframing the iconic as pornographic. _Social Media+ Society_ 3, 4 (2017), 2056305117743140\.
* Karduni et al. (2019) Alireza Karduni, Isaac Cho, Ryan Wesslen, Sashank Santhanam, Svitlana Volkova, Dustin L Arendt, Samira Shaikh, and Wenwen Dou. 2019\. Vulnerable to misinformation? Verifi!. In _Proceedings of the 24th International Conference on Intelligent User Interfaces_. 312–323.
* Kelly ([n.d.]) Makena Kelly. [n.d.]. _Facebook proves Elizabeth Warren’s point by deleting her ads about breaking up Facebook_. https://www.theverge.com/2019/3/11/18260857/facebook-senator-elizabeth-warren-campaign-ads-removal-tech-break-up-regulation
* Kim et al. (2018) Jooyeon Kim, Behzad Tabibian, Alice Oh, Bernhard Schölkopf, and Manuel Gomez-Rodriguez. 2018\. Leveraging the crowd to detect and reduce the spread of fake news and misinformation. In _Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining_. 324–332.
* Kreuter et al. (2008) Frauke Kreuter, Stanley Presser, and Roger Tourangeau. 2008\. Social desirability bias in cati, ivr, and web surveysthe effects of mode and question sensitivity. _Public opinion quarterly_ 72, 5 (2008), 847–865.
* Kriplean et al. (2014) Travis Kriplean, Caitlin Bonnar, Alan Borning, Bo Kinney, and Brian Gill. 2014. Integrating on-demand fact-checking with public dialogue. In _Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing_. 1188–1199.
* Landis and Koch (1977) J Richard Landis and Gary G Koch. 1977. The measurement of observer agreement for categorical data. _biometrics_ (1977), 159–174.
* Manjoo (2013) Farhad Manjoo. 2013\. You won’t finish this article. _Why people online don’t read to the end: Slate_ (2013).
* Margolin et al. (2018) Drew B Margolin, Aniko Hannak, and Ingmar Weber. 2018\. Political fact-checking on Twitter: When do corrections have an effect? _Political Communication_ 35, 2 (2018), 196–219.
* Martel et al. (2021) Cameron Martel, Mohsen Mosleh, and David Gertler Rand. 2021\. You’re definitely wrong, maybe: Correction style has minimal effect on corrections of misinformation online. _Media and Communication_ 9, 1 (2021).
* Marwick (2018) Alice E Marwick. 2018\. Why do people share fake news? A sociotechnical model of media effects. _Georgetown Law Technology Review_ 2, 2 (2018), 474–512.
* McCambridge et al. (2014) Jim McCambridge, John Witton, and Diana R Elbourne. 2014\. Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects. _Journal of clinical epidemiology_ 67, 3 (2014), 267–277.
* Morris et al. (2012) Meredith Ringel Morris, Scott Counts, Asta Roseway, Aaron Hoff, and Julia Schwarz. 2012\. Tweeting is believing? Understanding microblog credibility perceptions. In _Proceedings of the ACM 2012 conference on computer supported cooperative work_. 441–450.
* Mosleh et al. (2021a) Mohsen Mosleh, Cameron Martel, Dean Eckles, and David G. Rand. 2021a. Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment. In _To appear in proceedings of the 2021 CHI Conference on Human Factors in Computing Systems_.
* Mosleh et al. (2021b) Mohsen Mosleh, Cameron Martel, Dean Eckles, and David G Rand. 2021b. Shared partisanship dramatically increases social tie formation in a Twitter field experiment. _Proceedings of the National Academy of Sciences_ 118, 7 (2021).
* Mosleh et al. (2021c) Mohsen Mosleh, Gordon Pennycook, Antonio A Arechar, and David G Rand. 2021c. Cognitive reflection correlates with behavior on Twitter. _Nature Communications_ 12, 1 (2021), 1–10.
* Mosleh et al. (2020) Mohsen Mosleh, Gordon Pennycook, and David G Rand. 2020\. Self-reported willingness to share political news articles in online surveys correlates with actual sharing on Twitter. _Plos one_ 15, 2 (2020), e0228882.
* Murray et al. (2020) Samuel Murray, Matthew Stanley, Jonathon McPhetres, Gordon Pennycook, and Paul Seli. 2020\. ” I’ve said it before and I will say it again”: Repeating statements made by Donald Trump increases perceived truthfulness for individuals across the political spectrum. (2020).
* Oh et al. (2010) Onook Oh, Kyounghee Hazel Kwon, and H Raghav Rao. 2010\. An Exploration of Social Media in Extreme Events: Rumor Theory and Twitter during the Haiti Earthquake 2010.. In _Icis_ , Vol. 231. 7332–7336.
* O’Riordan et al. (2019) Sheila O’Riordan, Gaye Kiely, Bill Emerson, and Joseph Feller. 2019. Do you have a source for that? Understanding the Challenges of Collaborative Evidence-based Journalism. In _Proceedings of the 15th International Symposium on Open Collaboration_. 1–10.
* Pennycook et al. (2020a) Gordon Pennycook, Adam Bear, Evan T Collins, and David G Rand. 2020a. The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. _Management Science_ (2020).
* Pennycook et al. (2018) Gordon Pennycook, Tyrone D Cannon, and David G Rand. 2018\. Prior exposure increases perceived accuracy of fake news. _Journal of experimental psychology: general_ 147, 12 (2018), 1865\.
* Pennycook et al. (2021) Gordon Pennycook, Ziv Epstein, Mohsen Mosleh, Antonio A Arechar, Dean Eckles, and David G Rand. 2021. Shifting attention to accuracy can reduce misinformation online. _Nature_ (2021).
* Pennycook et al. (2020b) Gordon Pennycook, Jonathon McPhetres, Yunhao Zhang, Jackson G Lu, and David G Rand. 2020b. Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. _Psychological science_ 31, 7 (2020), 770–780.
* Pennycook and Rand (2019a) Gordon Pennycook and David G Rand. 2019a. Fighting misinformation on social media using crowdsourced judgments of news source quality. _Proceedings of the National Academy of Sciences_ 116, 7 (2019), 2521–2526.
* Pennycook and Rand (2019b) Gordon Pennycook and David G Rand. 2019b. Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. _Cognition_ 188 (2019), 39–50.
* Posetti and Matthews (2018) Julie Posetti and Alice Matthews. 2018. A short guide to the history of ‘fake news’ and disinformation. _International Center For Journalists_ (2018), 2018–07.
* Potthast et al. (2016) Martin Potthast, Sebastian Köpsel, Benno Stein, and Matthias Hagen. 2016. Clickbait detection. In _European Conference on Information Retrieval_. Springer, 810–817.
* Preist et al. (2014) Chris Preist, Elaine Massung, and David Coyle. 2014\. Competing or aiming to be average? Normification as a means of engaging digital volunteers. In _Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing_. 1222–1233.
* Quattrociocchi et al. (2016) Walter Quattrociocchi, Antonio Scala, and Cass R Sunstein. 2016\. Echo chambers on Facebook. _Available at SSRN 2795110_ (2016).
* Reed (2018) John Reed. 2018\. Hate speech, atrocities and fake news: The crisis of democracy in Myanmar. _Financial Times. Retrieved from https://www. ft. com/content/2003d54e-169a-11e8-9376-4a6390addb44_ (2018).
* Schmidt et al. (2018) Ana Lucía Schmidt, Fabiana Zollo, Antonio Scala, Cornelia Betsch, and Walter Quattrociocchi. 2018. Polarization of the vaccination debate on Facebook. _Vaccine_ 36, 25 (2018), 3606–3612.
* Shane (2017) Scott Shane. 2017\. The fake Americans Russia created to influence the election. _The New York Times_ 7, 09 (2017).
* Shao et al. (2018) Chengcheng Shao, Giovanni Luca Ciampaglia, Onur Varol, Kai-Cheng Yang, Alessandro Flammini, and Filippo Menczer. 2018. The spread of low-credibility content by social bots. _Nature communications_ 9, 1 (2018), 1–9.
* Shin et al. (2017) Jieun Shin, Lian Jian, Kevin Driscoll, and François Bar. 2017. Political rumoring on Twitter during the 2012 US presidential election: Rumor diffusion and correction. _new media & society_ 19, 8 (2017), 1214–1235.
* Shin and Thorson (2017) Jieun Shin and Kjerstin Thorson. 2017. Partisan selective sharing: The biased diffusion of fact-checking messages on social media. _Journal of Communication_ 67, 2 (2017), 233–255.
* Shrimsley ([n.d.]) Robert Shrimsley. [n.d.]. _Facebook photos: snap judgments_. https://www.ft.com/content/dbcdf744-7ac6-11e6-b837-eb4b4333ee43
* Shu et al. (2017) Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social media: A data mining perspective. _ACM SIGKDD Explorations Newsletter_ 19, 1 (2017), 22–36.
* Spray ([n.d.]) Sara Spray. [n.d.]. _Facebook Is Embroiled In A Row With Activists Over “Censorship”_. https://www.buzzfeed.com/saraspary/facebook-in-dispute-with-pro-kurdish-activists-over-deleted
* Starbird et al. (2018a) Kate Starbird, Ahmer Arif, Tom Wilson, Katherine Van Koevering, Katya Yefimova, and Daniel Scarnecchia. 2018a. Ecosystem or Echo-System? Exploring Content Sharing across Alternative Media Domains.. In _ICWSM_. 365–374.
* Starbird et al. (2018b) Kate Starbird, Dharma Dailey, Owla Mohamed, Gina Lee, and Emma S Spiro. 2018b. Engage early, correct more: How journalists participate in false rumors online during crisis events. In _Proceedings of the 2018 CHI conference on human factors in computing systems_. 1–12.
* Starbird et al. (2014) Kate Starbird, Jim Maddock, Mania Orand, Peg Achterman, and Robert M Mason. 2014. Rumors, false flags, and digital vigilantes: Misinformation on twitter after the 2013 boston marathon bombing. _IConference 2014 Proceedings_ (2014).
* Steinhauser ([n.d.]) Paul Steinhauser. [n.d.]. _Arizona certifies Biden as election winner, with Wisconsin following hours later_. https://www.foxnews.com/politics/arizona-wisconsin-election-certification-biden-trump
* Strauss (1987) Anselm L Strauss. 1987\. _Qualitative analysis for social scientists_. Cambridge university press.
* Sundar (1998) S Shyam Sundar. 1998\. Effect of source attribution on perception of online news stories. _Journalism & Mass Communication Quarterly_ 75, 1 (1998), 55–68.
* Vosoughi et al. (2018) Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. _Science_ 359, 6380 (2018), 1146–1151.
* Vraga and Bode (2017) Emily K Vraga and Leticia Bode. 2017. Using expert sources to correct health misinformation in social media. _Science Communication_ 39, 5 (2017), 621–645.
* Vraga and Bode (2020) Emily K Vraga and Leticia Bode. 2020. Defining misinformation and understanding its bounded nature: using expertise and evidence for describing misinformation. _Political Communication_ 37, 1 (2020), 136–144.
* Wardle and Derakhshan (2017) Claire Wardle and Hossein Derakhshan. 2017. Information disorder: Toward an interdisciplinary framework for research and policy making. _Council of Europe report_ 27 (2017).
* Wineburg and McGrew (2017) Sam Wineburg and Sarah McGrew. 2017. Lateral reading: Reading less and learning more when evaluating digital information. (2017).
* Wu et al. (2017) Liang Wu, Jundong Li, Xia Hu, and Huan Liu. 2017\. Gleaning wisdom from the past: Early detection of emerging rumors in social media. In _Proceedings of the 2017 SIAM international conference on data mining_. SIAM, 99–107.
* Yaqub et al. (2020) Waheeb Yaqub, Otari Kakhidze, Morgan L Brockman, Nasir Memon, and Sameer Patil. 2020\. Effects of Credibility Indicators on Social Media News Sharing Intent. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–14.
* Zhang et al. (2018) Amy X Zhang, Aditya Ranganathan, Sarah Emlen Metz, Scott Appling, Connie Moon Sehat, Norman Gilmore, Nick B Adams, Emmanuel Vincent, Jennifer Lee, Martin Robbins, et al. 2018\. A structured response to misinformation: Defining and annotating credibility indicators in news articles. In _Companion Proceedings of the The Web Conference 2018_. 603–612.
* Zubiaga et al. (2016) Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. 2016\. Analysing how people orient to and spread rumours in social media by looking at conversational threads. _PloS one_ 11, 3 (2016), e0150989.
## Appendix A Association Between Participants’ Demographics and the Taxonomy Categories
We performed an exploratory analysis on the data from our Taxonomy study to
understand whether the demographics of participants influence the types of
rationales they give for why they believe a claim is or is not accurate. This
analysis was not part of our study design and was added as a stepping stone
for future work. We limited our analyses to the data obtained from the last
iteration of the study because the categories that the participants and the
research team coder had used in the prior iterations had changed. We further
excluded those datapoints for which we did not have the gender, party, and
ethnicity of the participant, resulting in 953 datapoints of which 645 had
free-text elaboration (did not belong to the Don’t know categories).
For party, each participant was labeled either Democratic or Republican. We
were able to place all participants, including those who identified as
Independent or other (e.g., Green), in one of the two categories because, in
addition to party, we had asked participants about their political preference
(strongly Republican, lean Republican, Republican, Democrat, lean Democrat,
strongly Democrat). Because the majority of our participants were White,
ethnicity was given the values White and Not White. With respect to
education, we categorized participants as having a college degree (including
an Associate’s degree) or not.
We then performed Chi-square tests of independence on the contingency tables
of rationales and each of the demographic factors of party, gender, and
ethnicity. We caution that these tests were underpowered given the number of
categories and our sample size, and future studies are needed to ascertain
whether our results hold with a larger sample. The tests did
not find a statistically significant association between rationales that
participants gave and their party or ethnicity [$\chi^{2}(22)=30.03$, $p=0.12$
for party; $\chi^{2}(22)=21.80$, $p=0.47$ for ethnicity]. However, the
rationales were not independent of the participants’ gender
[$\chi^{2}(22)=40.87$, $p=0.009$].
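The tests above can be sketched with SciPy's `chi2_contingency`. The contingency table below is hypothetical, with only three rationale categories for brevity (the study's tables had 23 categories against two groups, giving 22 degrees of freedom); the category labels are illustrative, not taken from our taxonomy.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rationale categories (rows) x party (columns: Dem, Rep).
table = np.array([
    [120, 95],  # e.g., claim matches prior knowledge
    [60, 70],   # e.g., source reputation
    [45, 55],   # e.g., claim seems implausible
])

# chi2_contingency computes the statistic, p-value, degrees of freedom,
# and the expected counts under independence.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```

With a 3x2 table the degrees of freedom are (3-1)(2-1) = 2; a p-value at or above the chosen alpha (0.05 in our analyses) indicates no significant association.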
Figures 9 and 10 show how the distributions of rationales across categories
vary by the user’s gender. Because the number of rationales provided by each
gender differs, the bar for each category and gender is normalized by the
total number of rationales provided by users of the same gender.
Figure 9. Distributions of rationales by the gender of the user for claims
perceived as accurate and inaccurate. Each bar shows the ratio of the
rationale category relative to all rationales provided by users of the same
gender.
Figure 10. Distributions of rationales by the gender of the user for other
signals of credibility besides accuracy. Each bar shows the ratio of the
rationale category relative to all rationales provided by users of the same
gender.
## Appendix B Effects of the Demographics of the Nudge Study Participants
We performed an exploratory analysis of the demographics of the Nudge study
participants to understand what role these factors play in participants’
decision of whether to share headlines. We had not planned for these analyses
a priori in our experiment design, but later included them for completeness.
Thus, this analysis should be considered exploratory, and $p$-values not
indicative of true significance. We developed a linear model with share
intention as the dependent variable, with outcomes “Yes”, “Maybe”, and “No”
mapped to numeric values as explained in 6.2, in addition to several
demographics factors:
(3) $\text{share}\sim\text{veracity}\times(\text{partisanship concordance}\times(\text{accuracy condition}+\text{reasoning condition}+\text{reasoning format})+\text{party}+\text{gender}+\text{age}+\text{ethnicity}+\text{education})+(1|\text{participant})+(1|\text{claim})$
Similar to the model in 6.2, we included the veracity of the headline and
treatment conditions as independent variables, and claim and participant as
random effects. We limited this analysis to that portion of the data for which
we had the complete demographics information required for our model, excluding
638 datapoints. In addition, we excluded 46 datapoints from the participants
who had identified as neither male nor female because these datapoints were
too few for fitting the model.
We treated party, ethnicity, and education in the same way as in Appendix A. We binned
age into 7 buckets. We did not include the interaction between the demographic
factors because given our sample size, we did not have enough power to do so.
In the model, we also included partisanship concordance which was a measure of
the alignment between the participants’ self-declared party and the
partisanship rating that they had given to the headline (measured on a 5-item
Likert scale). This value ranged from 1-5 with 1 indicating no alignment, and
5 complete alignment.
(4) $\text{concordance}=(\text{partisanship rating of the headline})\times(\text{rater party}==\text{Republican})+(6-\text{partisanship rating of the headline})\times(\text{rater party}==\text{Democratic})$
Because this model includes several demographic factors that may capture some
degree of variance in share likelihood, the effects of the treatments observed
in this more refined model can serve as a confirmation of the results in 6.2.
The results obtained for the demographic factors however, should be taken with
caution and further examined in future work, as these were not planned
analyses.
We performed a Wald Chi-Square test on the fitted model to determine which of
the factors had a significant effect. Consistent with the results in 6.2, the
effects of veracity, providing accuracy, providing reasoning, and reasoning
format were significant [$\chi^{2}(1)=33.04$, $p<0.001$ for veracity,
$\chi^{2}(1)=33.62$, $p<0.001$ for providing accuracy, $\chi^{2}(1)=7.59$,
$p<0.01$ for providing reasoning, $\chi^{2}(1)=4.88$, $p=0.03$ for reasoning
format]. The interaction between veracity and whether the participant was
asked about accuracy was also significant at the $\alpha=0.05$ level
[$\chi^{2}(1)=4.15$, $p=0.04$]. The sample means shown in Figure 5 for
conditions 1 and 2, suggest that although accuracy assessment reduces sharing
of both false and true content, when users are asked to assess accuracy, the
reduction in sharing affects false headlines more compared to true headlines.
In addition, we observed that the effects of a number of demographic factors
were significant as well.
##### B.0.0.1 Concordance
had a statistically significant effect on sharing intentions
[$\chi^{2}(1)=256.52$, $p<0.001$]. Figure 11 displays the predicted values
(marginal effects) for share likelihood as concordance increases. As the
alignment between a participant’s party and their perceived partisanship of a
headline increases, the probability that they share the headline increases as
well. This observation aligns with prior studies that have found people are
more likely to consider sharing politically concordant headlines than
discordant headlines (Pennycook et al., 2021; Shin and Thorson, 2017).
The interaction between concordance and veracity was also significant
[$\chi^{2}(1)=20.67$, $p<0.001$], indicating that the slope of share by
concordance is different for false and true headlines. Figure 12 indicates
that the slope is slightly higher for true headlines, suggesting that the
alignment of a headline’s partisanship with the participant’s increases
sharing likelihood more when the headline is true compared to when it is
false. Furthermore, we observed that the interaction between concordance and
whether the participant was asked to assess accuracy was significant as well
[$\chi^{2}(1)=8.23$, $p<0.01$]. As shown in Figure 13, asking users to assess
the accuracy of headlines restrains their sharing of headlines that are well-
aligned with their partisanship.
Figure 11. Predicted values (marginal effects) for share likelihood as
concordance (alignment between headline and participant partisanship)
increases obtained from the model with demographics included as independent
variables.
Figure 12. The alignment of a headline’s partisanship with the participant’s
increases the likelihood of sharing slightly more when the headline is true
compared to when it is false.
Figure 13. Asking users to assess the accuracy of headlines restrains their
sharing of headlines that are well-aligned with their partisanship.
##### B.0.0.2 Party
had a significant effect on sharing intentions [$\chi^{2}(1)=13.24$,
$p<0.001$]. Figure 14 shows sample means of share likelihood by party. The
figure suggests that Republicans share headlines more often than Democrats,
irrespective of veracity. However, the interaction between party and headline
veracity also had a significant effect [$\chi^{2}(1)=18.16$, $p<0.001$], with
the means displayed in Figure 15. The figure indicates that while Democratic
and Republican participants shared true headlines at a similar rate,
Democratic participants were less likely to share false headlines compared to
Republicans. This observation aligns with prior work that, among other
demographic factors, investigated the association between Facebook users’ party
identification and the number of fake news stories they had shared (Grinberg
et al., 2019).
Figure 14. Sample means of share likelihood by party. Republican participants
were more likely to share headlines.
Figure 15. Sample means of share likelihood by party and headline veracity.
Democratic participants were less likely to share false headlines compared to
Republicans.
##### B.0.0.3 Gender
had a significant effect on sharing intentions [$\chi^{2}(1)=11.23$,
$p<0.001$], with the sample means shown in Figure 16. The figure suggests that
males are more likely to share headlines compared to females.
##### B.0.0.4 Education
had a significant effect on likelihood of sharing at $\alpha=0.05$ level
[$\chi^{2}(1)=5.85$, $p=0.02$]. As shown in Figure 17, participants who held
an Associate’s degree or higher were more likely to share headlines compared
to those that did not have a college degree.
Figure 16. Sample means of share likelihood by gender. Males are slightly more
likely to share headlines compared to females.
Figure 17. Sample means of share likelihood by education. Participants who
held a college degree were more likely to share headlines. The difference in
share likelihood, however, is small.
## Appendix C Cumulative Link Models
We tested the effects of our interventions on share intentions using the same
formula as outlined in Section 6.2, but using cumulative link mixed models
instead of linear models. Cumulative link models are appropriate for fitting
ordinal values and model the cumulative probability of the $i$th rating
(datapoint) falling in the $j$th category or below. The categories in our data
are the ordered share decisions “No”, “Maybe”, and “Yes”. The cumulative link
model assumes that there is a continuous but unobservable variable $Y_{i}$
with a mean that depends on the predictors, and that this underlying
distribution has a set of cut-points $\theta_{1}$, $\theta_{2}$, …,
$\theta_{j}$ such that if $\theta_{k}<Y_{i}<\theta_{k+1}$, the manifest
response (share decision) takes the value $k$.
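The cut-point mechanism can be illustrated with a minimal Python sketch; the threshold values below are invented for illustration, not fitted estimates:

```python
import bisect

def ordinal_category(latent_y, cutpoints):
    """Map a latent continuous value to an ordinal category index (1..J)
    given sorted cut-points theta_1 < ... < theta_{J-1}. Under this
    convention, category k is returned when theta_{k-1} < y <= theta_k."""
    return bisect.bisect_left(cutpoints, latent_y) + 1

CATEGORIES = ["No", "Maybe", "Yes"]
cutpoints = [-0.5, 0.8]  # illustrative thresholds only

# A latent value between the two cut-points maps to the middle category.
decision = CATEGORIES[ordinal_category(0.2, cutpoints) - 1]  # "Maybe"
```

The fitted model estimates both the cut-points and the predictor effects on the latent mean; this sketch only shows how a latent value is discretized.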
Similar to the linear models from 6.2, we developed a veracity model in which
the independent variables were the main effects of the objective veracity of
the headlines and our treatments (asking about accuracy, asking about
reasoning, presenting checkboxes vs free-text to capture reasons), as well as
the interaction between veracity and the treatments. In addition, we developed
the cumulative link counterpart to the perceived accuracy model in 6.2, which
was fit to the data from the treatment conditions. In this model, the
independent variables were participant’s assessment of the accuracy of the
headline, whether participants were asked to provide reasoning and whether
they were presented with checkboxes, as well as the interaction between these
treatments and accuracy assessment. In both models we included participant and
claim as random effects.
To fit these models, we used the function “clmm” with a “logit” link from the
package “ordinal” in R and set the threshold as symmetric. We then performed
Likelihood Ratio Chi-Square tests (function “Anova” from package
“RVAideMemoire”) on each of the fitted models to determine whether the effects
of the independent variables were significant. If we determined a factor was
significant, we then performed a post-hoc Estimated Marginal Means (EMMeans)
test across the levels of the factor of interest averaging over all other
factors. We used the function “emmeans” from the R package “emmeans” with mode
“mean.class” to obtain and compare the expected values of the ordinal response
on a scale of 1 to 3 (the number of categories) for each of the levels of the
factor of interest. P-values were adjusted with the Tukey method to account for
multiple comparisons. The results of the cumulative link models were
consistent with the results obtained from the linear models in 6.2.
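The “mean.class” summary used above is simply the expected value of the ordinal response given the model's category probabilities. A minimal sketch, with made-up probabilities for illustration:

```python
def expected_ordinal(probs):
    """Expected value of an ordinal response on a 1..J scale given
    category probabilities (the kind of summary emmeans reports with
    mode 'mean.class')."""
    return sum(k * p for k, p in enumerate(probs, start=1))

# Illustrative probabilities for "No", "Maybe", "Yes"
e = expected_ordinal([0.70, 0.20, 0.10])  # approximately 1.4
```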
Similar to the results we observed for the linear model counterparts, the
effects of veracity in the veracity model and perceived accuracy in the
perceived accuracy model were both significant [$\chi^{2}(1)=29.80$, $p<0.001$
for veracity model, $\chi^{2}(1)=1851.56$, $p<0.001$ for perceived accuracy
model]. The EMMeans showed that participants were more likely to have the
intention of sharing objectively true rather than false headlines [$z=5.03$,
$p<0.001$, $E(False)=1.23$, $E(True)=1.48$]. Similarly, they were more likely
to share headlines that they perceived as true [$z=13.49$, $p<0.001$,
$E(\textit{Perceived as false})=1.05$, $E(\textit{Perceived as true})=1.58$].
Similarly, providing accuracy assessments had a significant effect on
participants’ likelihood of sharing either false or true headlines
[$\chi^{2}(1)=33.83$, $p<0.001$]. The EMMeans test revealed that participants
were more likely to share headlines if they were not asked about their
accuracy [$z=4.89$, $p<0.001$, $E(\textit{Accuracy not provided})=1.44$,
$E(\textit{Accuracy provided})=1.28$].
In addition, the effects of providing reasoning and the format of reasoning
were significant in both the veracity and the perceived accuracy models
[providing reasoning: $\chi^{2}(1)=13.05$, $p<0.001$ for veracity,
$\chi^{2}(1)=13.38$, $p<0.001$ for perceived accuracy; reasoning format:
$\chi^{2}(1)=6.21$, $p=0.01$ for veracity, $\chi^{2}(1)=5.14$, $p=0.02$ for
perceived accuracy]. Participants were more likely to share headlines if they
were not asked to provide their reasoning about why the claim was or was not
accurate [$z=3.52$, $p<0.001$, $E(\textit{Reasoning not provided})=1.41$,
$E(\textit{Reasoning provided})=1.30$ for veracity; $z=3.38$, $p<0.001$,
$E(\textit{Reasoning not provided})=1.36$, $E(\textit{Reasoning
provided})=1.26$ for perceived accuracy]. Providing reasons via the checkbox
set of reason categories also lowered their likelihood of sharing content
[$z=2.60$, $p=0.01$, $E(\textit{Free-text})=1.40$, $E(\textit{Checkbox})=1.32$
for veracity; $z=2.58$, $p=0.01$, $E(\textit{Free-text})=1.35$,
$E(\textit{Checkbox})=1.27$ for perceived accuracy].
The veracity model also indicated that the interaction between veracity and
providing accuracy is statistically significant [$\chi^{2}(1)=10.98$,
$p<0.001$]. The interaction however, was not practically meaningful
[$E(\textit{False, Accuracy not provided})=1.32$, $E(\textit{False, Accuracy
provided})=1.15$, $E(\textit{True, Accuracy not provided})=1.57$,
$E(\textit{True, Accuracy provided})=1.40$].
## Appendix D Investigation of Potential Confounds in the Nudge Study
### D.1. Makeup of Data and Impact of Removing Spams.
The task in the Nudge study presented 10 claims to each participant but
because some participants abandoned the task before its conclusion, we had
fewer data from them. It is conceivable that if the attrition rate is
different across conditions, then the conditions differ not only in what
treatment they received, but also in what type of people contributed more data
to each condition. Therefore, we probed how many participants per condition
did not finish all the 10 headlines. This number across all conditions was in
the range of 30-50, suggesting that the dropout rate was similar. We then
analyzed the spam rate across conditions, which was more variable (condition 1:
145, condition 2: 114, condition 3: 136, condition 4: 174). It is possible
that different interventions result in different spam rates and that those
participants who stay and work through a more laborious condition, are in
fact, characteristically different from those who finish the task by spamming.
We performed a Pearson’s Chi Square test to investigate if the distribution of
spams was different from a uniform distribution. The difference was
statistically significant [$\chi^{2}(3)=13.02$, $p=0.005$], suggesting that
the conditions may have had a role in different numbers of users becoming
spammers across conditions. We then analyzed the share rate in spams across
different conditions which was similar (see Table 6), with share mean of
approximately 0.72 across all conditions regardless of headline veracity.
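The reported uniformity test can be reproduced directly from the per-condition spam counts with a few lines of stdlib Python:

```python
spam_counts = [145, 114, 136, 174]  # spam counts in conditions 1-4
expected = sum(spam_counts) / len(spam_counts)  # 142.25 under uniformity
chi2 = sum((obs - expected) ** 2 / expected for obs in spam_counts)
df = len(spam_counts) - 1

# round(chi2, 2) gives 13.02 with df = 3, matching the reported statistic.
```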
In the main section of the paper, we have presented our findings above
excluding the spams. However, we perform the same analyses including the spam
datapoints in Appendix E. Some of our results that pertain to
conditions that have heavier interventions are no longer statistically
significant when spams are included. The reason is that in these conditions,
the share rates are low and therefore the difference between conditions is
smaller but detectable in the absence of noise. Including noise, i.e., spams,
in a condition, increases the sample size while adding a relatively large
number of positive datapoints, or datapoints that indicate a positive
intention of sharing, for both true and false headlines. The difference that
existed before will now be diluted.
### D.2. Deliberation Priming.
Although in the control condition for the Taxonomy study we did not have any
of the accuracy and reasoning nudges, after each sharing decision, we asked
participants why they would or would not share the article. This question by
itself may have acted as a deliberation prime on subsequent sharing decisions.
To test this hypothesis, we developed a model with share intention as the
dependent variable and veracity and whether the item was the first item
presented to the user as independent variables and included participant
identifier as a random effect. We fit the model to the first and last
datapoints that participants in the control condition provided. We found that
as expected, veracity was positively correlated with sharing intentions and
the correlation was statistically significant [$\beta=0.16,p<0.001$]. Being
the first decision by the participant also had a positive albeit
nonsignificant correlation with sharing [$\beta=0.49,p=0.23$]. The interaction
between the two had a negative and nonsignificant correlation
[$\beta=-0.08,p=0.20$]. Despite the lack of significance, we observe that the
effect of veracity on the last item presented to the user is twice as large as
that of the first item [$0.16$ for the last decision, $0.16-0.08=0.08$ for the
first]. This observation gives some degree of support to the hypothesis that
simply asking users to ponder over their sharing decision may have primed them
to be mindful of the headline’s accuracy over time.
### D.3. Investigating Potential Learning Effects of Repeated Accuracy
Assessments
We wished to investigate whether making repeated judgements about accuracy had
a learning effect on participants leading their subsequent accuracy
assessments to be closer to the headline’s actual veracity. Therefore, we fit
the following model to the first and last datapoints that participants in the
treatment conditions had provided:
(5) $\text{accuracy assessment}==\text{veracity}\sim\text{is first question}\times(\text{reasoning condition}+\text{reasoning format})+(1|\text{participant})+(1|\text{claim})$
Because the outcome of the model was dichotomous (1 if veracity and perceived
accuracy matched, 0 if they did not), we used the function “glmer” with link
“logit” from the R package “lme4” to fit the data. A Wald Chi-Square test on
the model revealed that the effect of whether the participant’s judgement was
the first or the last was in fact not significant [$\chi^{2}(1)=0.07$,
$p=0.80$], indicating that repeated judgements had not been a significant
confounder in the study.
## Appendix E Spams
In this section, we include the spam datapoints of the Nudge study in the
dataset and perform the same analyses that we conducted in the Results
section.
The Wald Chi-Square tests applied to our fitted linear models revealed that the effect
of veracity in the veracity model and the effect of perceived accuracy in the
perceived accuracy model on sharing intention were both significant
[$\chi^{2}(1)=30.13$, $p<0.001$ for veracity, $\chi^{2}(1)=2645.61$, $p<0.001$
for perceived accuracy]. Post-hoc Estimated Marginal Means tests revealed that
participants were more likely to share an objectively true headline compared
to a false one [$z=4.60,p<0.001$]. Similarly, they were more likely to share a
headline that they perceived as true rather than one they perceived as false
[$z=42.04$, $p<0.001$].
### E.1. Effect of Providing Accuracy Assessments
Consistent with the results we observed when excluding spams, providing
accuracy assessment had a significant effect on sharing intentions for the
veracity model [$\chi^{2}(1)=32.04$, $p<0.001$].
Figure 18 shows how sharing rates differ in conditions 1 and 2 by whether
participants were asked about accuracy and headline veracity. Asking people to
provide accuracy assessments decreases sharing of false headlines by 29% while
the reduction in sharing of true content is 17%.
### E.2. Effect of Providing Reasoning
The effect of reasoning on sharing intention when including spams was not
significant in either the veracity or the perceived accuracy models
[$\chi^{2}(1)=0.40$, $p=0.53$ for the veracity model, $\chi^{2}(1)=0.50$,
$p=0.48$ for the perceived accuracy model]. Similarly, the interaction effect
of reasoning and veracity was not significant in the veracity model
[$\chi^{2}(1)=1.13$, $p=0.29$]. However, the effect of interaction between
reasoning and perceived accuracy was significant [$\chi^{2}(1)=7.25$,
$p=0.007$].
Figure 18 shows sharing rate means across true and false headlines for
conditions 2 and 3 which differ in whether participants provided reasoning.
Figure 19 shows that the sharing of headlines that are perceived as true is
reduced when reasoning is requested. The means in sharing rate of headlines
perceived as false however, do not vary much across the 2 conditions.
### E.3. Effect of Reasoning Format
We observed that the effect of reasoning format when including spams was not
significant in either the veracity or the perceived accuracy models
[$\chi^{2}(1)=0.61$, $p=0.43$ in the veracity model, $\chi^{2}(1)=0.52$,
$p=0.47$ in the perceived accuracy model]. The effect of the interaction
between veracity and reasoning format was not significant either
[$\chi^{2}(1)=2.92$, $p=0.09$]. However, the interaction between reasoning
format and perceived accuracy was significant [$\chi^{2}(1)=14.81$, $p<0.001$].
Figure 18 shows the means across the conditions with different instruments for
capturing reasoning. Figure 19 shows that in condition 4 where checkboxes were
presented, people shared headlines that they initially perceived as false at a
higher rate compared to condition 3.
Table 6. Share means in spam entries across experimental conditions and headline veracity.
Veracity | Cond. 1 | Cond. 2 | Cond. 3 | Cond. 4
---|---|---|---|---
True | 0.76 | 0.70 | 0.67 | 0.70
False | 0.74 | 0.72 | 0.71 | 0.73
Figure 18. Share rate of true and false headlines across study conditions
including spam datapoints. The results suggest that people are less likely to
share both accurate and inaccurate content if they are asked to assess the
content’s accuracy although the reduction in shared false content is higher
(condition 1 vs 2). However, asking people to provide their reasoning in
addition to assessing accuracy does not result in a statistically significant
difference compared to if they only assess accuracy (condition 2 vs 3).
Similarly, there does not exist a statistically significant difference in
means of sharing true and false content across the reasoning format conditions
(condition 3 vs 4).
Figure 19. Share rate of headlines across study conditions including spam
datapoints for headlines that were perceived as true or false. In condition 3,
where participants were asked about their rationales, the share mean for
headlines perceived as true was decreased compared to condition 2 (condition 2
vs 3). However, asking people about their rationales via a checkbox
increases sharing of content that they initially perceived as false (condition
3 vs 4).
## Appendix F Headlines Used in the Study of Behavioral Nudges
We present the headlines that we used in the user study of behavioral nudges
along with their veracity and partisanship. Partisanship of a headline was
rated by each participant that was presented the headline in the study. The
partisanship measure in the table is an average over all these ratings on
scale of -2 (more favorable for Democrats) to 2 (more favorable for
Republicans).
# Adaptive Decision Forest: An Incremental Machine Learning Framework
Md Geaur Rahman <EMAIL_ADDRESS> Md Zahidul Islam <EMAIL_ADDRESS>
School of Computing and Mathematics, Charles Sturt University, Australia
###### Abstract
In this study, we present an incremental machine learning framework called
Adaptive Decision Forest (ADF), which produces a decision forest to classify
new records. Based on our two novel theorems, we introduce a new splitting
strategy called iSAT, which allows ADF to classify new records even if they
are associated with previously unseen classes. ADF is capable of identifying
and handling concept drift; it, however, does not forget previously gained
knowledge. Moreover, ADF is capable of handling big data if the data can be
divided into batches. We evaluate ADF on five publicly available natural data
sets and one synthetic data set, and compare the performance of ADF against
the performance of eight state-of-the-art techniques. Our experimental
results, including statistical sign test and Nemenyi test analyses, indicate a
clear superiority of the proposed framework over the state-of-the-art
techniques.
###### keywords:
Incremental learning, Decision forest algorithm, Concept drift, Big data,
Online learning
††journal: Pattern Recognition
## 1 Introduction
Nowadays information is considered the backbone of all organizations and is
critical for their success. In real applications, big data often arrive as
batches over time [1]. Let us consider a scenario of an undergraduate
admission system of a university where the admission authority can identify
those applicants who have a high chance of completing the degree. The
likelihood of success of an applicant can be determined by comparing the
applicant’s information (such as academic record, age, gender, nationality,
etc.) with similar information from successful graduates. The yearly list of
both successful and unsuccessful students can be considered as the batch
training data and the yearly undergraduate admission applicants can be
considered as the batch testing data, as shown in Fig. 1. Moreover, the batch
testing data may follow not only the distribution of current batch training
data but also the distributions of some previous batches of training data. In
Fig. 1 we can see that a group of students obtained their qualification (i.e.
completed Year 12 degree) in the current year and another group of students
obtained the qualification in previous years. Both groups of students are
generally eligible to apply for the admission. The batch testing data of the
year 2019 (i.e. Test batch 3) may have the students who obtained the admission
qualification in the years 2017, 2018 and 2019.
Figure 1: An example of an undergraduate admission system of a university
where both training and testing data arrive yearly as batches and the
incremental classifier is updated (marked with the dotted box) each year based
on the batch training data.
Traditional machine learning algorithms such as Random Forest (RF) [2] build
recognition models based on training data, which are available at a time, to
classify test data. Since all the training data in many real applications may
not be available at a time, traditional machine learning algorithms are unable
to build good models [3]. Accurate recommendations and recognition are also an
important challenge for the algorithms since they are not capable of adapting
to dynamic changes in real applications [4, 3]. Dynamic changes in data are
also known as concept drifts. To adapt to the concept drifts, the models of
traditional algorithms need to be retrained from scratch, which leads to large
waste of time and memory [4, 3]. Also, the data can be so big that it may not
fit in a memory to be processed by traditional machine learning algorithms
[3].
Thus, to adapt to concept drifts and to handle big data that often arrive as
batches over time, it is crucial to have learning algorithms that are capable
of learning incrementally and building a knowledge base over time for ensuring
accurate classification of test batches that follow the distributions of
current and previous batches [3]. In Fig. 1 we can see that an incremental
classifier is built by the authority to assess the possibility of applicants
to be successful graduates. Moreover, the classifier is updated incrementally
based on the yearly batch training data.
A number of methods have been proposed recently in the literature for
incremental learning [5, 3, 6, 7, 8]. An existing method called Nearest Class
Mean Classifier (NCMC) [6] classifies a new record based on the centroids. It
computes the centroid of the records labeled with a class and for a new
unlabeled record it assigns the class value for the centroid of which is the
closest to the record among all centroids. It updates the centroids based on
the records that arrive over time. Although the method requires a low
execution time for adapting the model with new records, it suffers from low
classification accuracy if the initial training data set is small [5, 9].
The classification accuracy is increased significantly in a recent technique
called CIRF [3], which is capable of updating an incremental classifier based
on the current batch data only. CIRF first builds a decision forest by
applying the RF [2] on the initial batch data. It also identifies the boundary
of the decision forest by considering the records as a box, which is also
known as the Axis Aligned Minimum Bounding Box (AABB) [10] (see Definition 1).
A sample AABB of a set of data points is illustrated in Fig. 2a. Using the
AABB, CIRF calculates two vectors called the minimum and maximum vectors, where
the $j$th element of the minimum vector is the minimum value of the $j$th
attribute and the $j$th element of the maximum vector is the maximum value of
the $j$th attribute.
Figure 2: (a) An Axis Aligned Minimum Bounding Box (AABB) of a set of data
points represented by black stars. (b) Two AABBs are illustrated in a
2-dimensional form where attributes $A_{i}$ and $A_{j}$ are projected on
x-axis and y-axis, respectively. (c) ${AABB}_{2}$ contains records having
multiple class values. (d) There is an overlap between ${AABB}_{1}$ and
${AABB}_{2}$ due to the mixture of known (black stars) and unknown (red
triangles) classes.
For a new batch of data having known class values (i.e., class values that
appeared in previous batches, so the previous forests were built from batches
containing them), CIRF updates a decision tree of the decision forest simply by
assigning the records to the leaves they belong to. However, for a new batch of
data having unknown class values (i.e., class values that did not appear in any
previous batch, so no previous forest was built from them), CIRF updates a
decision tree by adding a new root: the previous tree becomes one child of the
root and the new batch data are used to create another child. CIRF uses an
existing splitting strategy called the
Separating Axis Theorem (SAT) [11] (see Theorem 1) to determine the split
attribute as follows. Let ${AABB}_{1}$ and ${AABB}_{2}$ be two boxes that are
represented by the records of previous batches and the records of current
batch, respectively (as shown in Fig. 2b). CIRF calculates the minimum and
maximum vectors for both ${AABB}_{1}$ and ${AABB}_{2}$. Using the vectors, the
method then calculates the distance between ${AABB}_{1}$ and ${AABB}_{2}$ for
all numerical attributes. For example, in Fig. 2b $d_{1}$ is the distance
between AABB1 and AABB2 for the attribute $A_{i}$ which is projected on the
x-axis. The attribute which has the maximum distance is chosen as the split
attribute (see Eq. (19) for details). It is reported that CIRF outperforms
state-of-the-art incremental learning algorithms in terms of both
classification accuracy and execution time [3].
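CIRF's AABB-based split selection for numerical attributes, as described above, can be sketched in a few lines of Python. This is a simplified illustrative reading of the method with made-up data, not the authors' implementation:

```python
def aabb(records):
    """Min and max vectors of the axis-aligned minimum bounding box (AABB)
    of a set of records, each a list of numerical attribute values."""
    mins = [min(col) for col in zip(*records)]
    maxs = [max(col) for col in zip(*records)]
    return mins, maxs

def sat_split_attribute(batch_a, batch_b):
    """Index of the numerical attribute with the largest gap between the
    two batches' AABBs (the SAT-style choice of split attribute)."""
    min_a, max_a = aabb(batch_a)
    min_b, max_b = aabb(batch_b)
    gaps = []
    for j in range(len(min_a)):
        # distance between the boxes along axis j (0 when they overlap)
        gaps.append(max(min_b[j] - max_a[j], min_a[j] - max_b[j], 0.0))
    return max(range(len(gaps)), key=gaps.__getitem__)

old = [[1.0, 1.0], [2.0, 3.0]]  # records of previous batches
new = [[8.0, 2.0], [9.0, 3.5]]  # current batch, separated along axis 0
```

Here the boxes overlap on the second axis but are separated by a gap of 6 on the first, so attribute 0 would be chosen as the split attribute.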
We argue that CIRF has three main shortcomings. First, CIRF considers only the
numerical attributes while calculating the minimum and maximum vectors and
thereby, categorical attributes are never considered as the split attributes.
Second, for a new batch having multiple unknown class values (such as
${AABB}_{2}$ in Fig. 2c), CIRF adapts the decision tree by creating a new leaf
under the root of the tree. Due to the heterogeneity in the new leaf, CIRF
yields a low classification accuracy. Third, if a new batch contains records
with a mix of known and unknown classes (such as ${AABB}_{2}$ in Fig. 2d), there
will be an overlap between the AABBs (as shown in Fig. 2d), resulting in high
heterogeneity in the newly created leaf. Thus, CIRF suffers from low
classification accuracy (see Fig. 5).
The existing methods therefore have room for further improvement. We propose a
novel incremental learning framework called Adaptive Decision Forest (ADF) in
this paper. In the ADF framework, we build decision forests by using one of
the three techniques: RF [2], HT [12] and SysFor [13], and thereby obtain
three variants called ADF-R, ADF-H and ADF-S, respectively. The ADF framework
is adapted for the training batch data that arrive over time. During
adaptation, ADF makes use of our proposed novel splitting strategy called
improved Separating Axis Theorem (iSAT) to find the best split attribute. In
iSAT, we use both SAT and entropy-based strategies. For a new batch having a
mix of known and unknown classes, ADF can find two sub-AABBs (as shown in Fig.
2d and Fig. 8) based on our proposed theorems (see Theorem 2 and Theorem 3),
where one sub-AABB contains a minimal number of records with a mix of known and
unknown classes and the other sub-AABB contains records with new class values. For
the first sub-AABB, we use the entropy-based splitting strategy, and for the
second sub-AABB, we use the SAT-based splitting strategy.
The main contributions of this paper can be summarized as follows. 1) We
present a novel splitting strategy called improved separating axis theorem
(iSAT), which makes use of the separating axis theorem (SAT) [11] and our
proposed two theorems (Theorem 2 and Theorem 3) to find the best split
attribute which can be either numerical or categorical. 2) We present a
framework called Adaptive Decision Forest (ADF) which can identify and handle
concept drifts and preserve previously acquired knowledge by introducing a set
of forests (see Section 3.1.3). 3) ADF can handle all scenarios (as shown in
Fig. 3) that may occur for the new batches that arrive over time. 4) ADF is
also applicable to big data applications where the data can be divided into
batches (see Section 4.6).
We evaluate ADF variants on five real data sets [14] and one synthetic data
set by comparing their performance with that of eight high-quality
existing techniques including HT [12], CIRF [3], and ARF [7]. We also compare
the performance of ADF variants with that of two non-incremental
existing algorithms, namely SysFor [13] and RF [2]. Our experimental results
indicate that ADF variants achieve a higher classification accuracy than the
existing methods.
The rest of the paper is organized as follows. Section 2 presents the problem
formulation and assumptions and a background study on incremental learning
methods. Our proposed incremental framework is presented in Section 3. Section
4 presents empirical evaluations, and Section 5 gives concluding remarks.
## 2 Problem Formulation and Related Work
### 2.1 Problem Formulation and Assumptions
Let $D$ be an input data set which we consider as a two dimensional table,
where rows represent records $X=\\{X_{1},X_{2},\ldots X_{n}\\}$ and columns
represent attributes $A=\\{A_{1},A_{2},\ldots A_{m}\\}$. There are $m$
attributes i.e. $|A|=m$ and $n$ records i.e. $|D|=|X|=n$. $X_{ij}$ represents
the $j$th attribute value of the $i$th record. An attribute $A_{j}\in A$ can
be categorical or numerical. The domain of a categorical attribute $A_{j}$ can
be $\\{a_{j}^{1},a_{j}^{2},\ldots a_{j}^{k}\\}$ meaning the domain size of
$A_{j}$ is $|A_{j}|=k$. Similarly, the domain of a numerical attribute $A_{p}$
can be $[a_{p}^{low},a_{p}^{up}]$, where $a_{p}^{low}$ and $a_{p}^{up}$ are
the lower and upper limits of the domain, respectively. Let $Y\in A$ be the
class attribute having a set of class values $C=\\{C_{1},C_{2},\ldots
C_{k}\\}$. A classifier $T$ is a function $T\leftarrow f(X):D\rightarrow Y$
that maps records $X\in D$ to the class values. The notations that we use in
this paper are presented in Table 1.
Table 1: Notations and their meanings. Notations | Meaning | Notations | Meaning
---|---|---|---
$D$ | Data set | $P(X)$ | Probability of $X$
$D_{train}$ | Training data set | $P(Y|X)$ | Probability of $Y$ given $X$
$D_{test}$ | Test data set | $P(Y|\neg X)$ | Probability of $Y$ after a change in $X$
$D^{t^{i}}_{B}$ | Batch data set at time $t^{i}$ | $P(X,Y)$ | Probability distribution of $X$ and $Y$
${D^{t^{i}}_{B_{train}}}$ | Training batch data set at time $t^{i}$ | $\Psi$ | Set of parameters
$D^{t^{i}}_{B_{test}}$ | Test batch data set at time $t^{i}$ | $\theta$ | Repairable threshold
$X$ | Set of records in $D$ | $\lambda$ | Concept drift threshold
$X_{i}$ | $i$-th record in $D$ | $\gamma$ | Reserve window threshold
$|D|=|X|=n$ | Number of records in $D$ | $T$ | Classifier/decision forest
$|{D^{t^{i}}_{B_{train}}}|=n_{B}$ | Number of records in ${D^{t^{i}}_{B_{train}}}$ | $l$ | Number of leaves of a tree
$A$ | Set of attributes in $D$ | $T^{A}$ | Active Forest (AF)
$|A|=m$ | Number of attributes in $D$ | $T^{P}$ | Permanent Forest (PF)
$A_{j}$ | $j$-th attribute in $D$ | $T^{T}$ | Temporary Forest (TF)
$|A_{j}|$ | Domain size of $j$-th attribute in $D$ | $L^{A}$ | Leaves statistics of $T^{A}$
$a_{j}^{low}$ | Lower limit of $j$-th numerical attribute | $L^{P}$ | Leaves statistics of $T^{P}$
$a_{j}^{up}$ | Upper limit of $j$-th numerical attribute | $L^{T}$ | Leaves statistics of $T^{T}$
$Y$ | Class attribute in $D$ | $D^{W}$ | Reserve window data set
$C$ | Set of class values in $D$ | $P^{A}$ | Perturbed leaves of $T^{A}$
$C_{k}$ | $k$-th class value in $C$ | $P^{U}$ | Perturbed leaves of $T^{P}$
$M$ | Number of trees | $P^{R}$ | Perturbed leaves of $T^{T}$
$\epsilon$ | Error tolerance threshold | |
Let $D_{train}=\\{X_{i},Y_{i}\\},X_{i}\in X,Y_{i}\in C$ and
$D_{test}=\\{X^{\prime}_{i}\\},X^{\prime}_{i}\in X^{\prime}$, where
$X\cap X^{\prime}=\emptyset$, be the training data set
and test data set, respectively, where the test data set does not have the
class attribute. Traditional supervised learning algorithms assume that the
whole training data set $D_{train}$ is available during training, and both
$D_{train}$ and $D_{test}$ follow the same probability distribution
$P(X,Y)=P(Y|X).P(X)$. The goal of the supervised learning algorithms is to
build a classifier $T$, from $D_{train}$, which is capable of classifying the
records of the test data set $D_{test}$ with a high classification accuracy.
The scenario can be more complex when the whole training data set
$D_{train}$ is not available while building the classifier $T$, or when the
whole data set is available but too big to fit in memory.
Many incremental learning algorithms assume that data
arrive as batches over time [3, 5]. Moreover, the distribution
$P(X,Y)=P(Y|X).P(X)$ of a training data set may change over time and cause
concept drift, i.e. $P(Y|\neg X)$, which can seriously degrade
classification accuracy. However, the test data sets may follow both current
and previous distributions. Let $D^{t^{0}}_{B_{train}}$,
$D^{t^{1}}_{B_{train}}$ and $D^{t^{2}}_{B_{train}}$ be three batches of
training data with the class attribute that arrive at times $t^{0}$, $t^{1}$, and
$t^{2}$, respectively and $D^{t^{0}}_{B_{test}}$, $D^{t^{1}}_{B_{test}}$ and
$D^{t^{2}}_{B_{test}}$ be three corresponding test batches at time $t^{0}$,
$t^{1}$, and $t^{2}$, respectively. The test batch $D^{t^{2}}_{B_{test}}$
contains data that may follow the distributions of $D^{t^{2}}_{B_{train}}$,
$D^{t^{1}}_{B_{train}}$ and $D^{t^{0}}_{B_{train}}$ (as discussed in Section 1
and demonstrated in Fig. 1). Thus, it is essential to adapt the classifier over
time to achieve a high classification accuracy. Our proposed incremental
framework is formulated under the following assumptions: 1) Data arrive as
batches over time. Let $D^{t^{i}}_{B_{train}}$ and $D^{t^{j}}_{B_{train}}$ be
the batches of data arriving at time $t^{i}$ and $t^{j}$, respectively. 2) The
set of class values $C$ may change over time. Let $C^{t^{i}}$ and $C^{t^{j}}$
be the sets of class values of $t^{i}$ and $t^{j}$, respectively, where
$C^{t^{i}}$ and $C^{t^{j}}$ may differ. 3) The class attribute ${Y}^{i}$ in
the training batch data set ${D^{t^{i}}_{B_{train}}}$ at $t^{i}$ is available,
thus, it is possible to adapt the classifier $T^{i}$ over time. 4)
$D^{t^{i}}_{B_{test}}$ follows the distributions of training batch data sets
arriving at times ${\\{t^{j}\\}}^{\gamma}_{j=i}$.
Under such assumptions, the goal of our proposed framework is to define an
incremental classifier $T^{i}\leftarrow
f^{i}(X^{i}|T^{i-1},D^{t^{i}}_{B_{train}},D^{t^{i}}_{B_{test}},\Psi)$ based on
a decision forest which will achieve an accurate classification for the test
batch data set $D^{t^{i}}_{B_{test}}$ at time $t^{i}$ by adapting the
classifier $T^{i-1}$ at time $t^{i-1}$ with the training batch data set
$D^{t^{i}}_{B_{train}}$ and the set of parameters $\Psi$.
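The incremental setting above can be sketched as a simple driver loop: the classifier is built on the first batch and adapted with each later batch, while accuracy is measured on the corresponding test batch. The sketch below is purely illustrative; the function names `initial_fit`, `update`, and `predict` are hypothetical placeholders, not part of the ADF framework.

```python
def run_incremental(initial_fit, update, predict, batches, psi):
    """Process (train, test) batch pairs arriving over time: build the
    classifier T^0 on the first batch, then adapt T^{i-1} into T^i with
    each later training batch, scoring on each test batch."""
    model, accuracies = None, []
    for train_batch, test_batch in batches:
        if model is None:
            model = initial_fit(train_batch, psi)   # T^0 from the first batch
        else:
            model = update(model, train_batch, psi)  # T^i from T^{i-1}
        X_test, y_test = test_batch
        predictions = [predict(model, x) for x in X_test]
        correct = sum(p == y for p, y in zip(predictions, y_test))
        accuracies.append(correct / len(y_test))
    return model, accuracies
```

Any concrete incremental learner can be plugged in through the three callables; the loop itself only encodes the batch-wise adaptation protocol.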
### 2.2 Related Work
A number of methods have been proposed for incremental learning [5, 3, 6].
Incremental learning methods process records that arrive over a period of
time, either one by one or batch by batch [1]. An important challenge for these
methods is to handle dynamic changes that may occur in real applications. Generally,
dynamic changes can happen in two ways: 1) changes in data distribution $\&$ feature
dimensions, and 2) changes in class values [3].
For handling the changes in data distribution $\&$ feature dimensions, a
number of feature incremental learning algorithms [4, 15] have been proposed.
An existing method called FLSSVM [15] makes use of a least squares support
vector machine algorithm to adapt a classifier with the new batch data set.
For the initial batch data set, the method builds a classifier using an
existing least squares support vector machine algorithm [16]. With the
previously learned structural parameters, FLSSVM then adapts to the data of the
new attributes. Although FLSSVM requires comparatively low training time and
memory, it is difficult to find a suitable kernel function for an SVM to
achieve good accuracy [16].
For handling the changes in class values, a number of class incremental
learning algorithms [5, 3, 17] have been proposed. An existing class
incremental learning method called NCMC [6] adapts the new batch records
$X_{j}\in D^{t^{j}}_{B}$ based on the mean vectors $V_{C_{k}}\forall k$ of
class values $C$. For each class value $C_{k}\in C$ of the initial data set,
NCMC first calculates the mean vector $V_{C_{k}}$ of the records $X_{C_{k}}$
belonging to the class $C_{k}\in C$ as follows.
$\displaystyle V_{C_{k}}={\frac{1}{|X_{C_{k}}|}}{\sum_{X_{i}\in
X_{C_{k}}}{X_{i}}}$ (1)
For a new record $X_{j}\in D^{t^{j}}_{B}$ the method then calculates distances
separately between the new record and the means of the class values. The
Euclidean distance $d^{j}_{k}$ between $X_{j}\in D^{t^{j}}_{B}$ and mean
vector $V_{C_{k}};C_{k}\in C$ is calculated as follows.
$\displaystyle d^{j}_{k}=\left\|X_{j}-V_{C_{k}}\right\|_{2}$ (2)
The record $X_{j}\in D^{t^{j}}_{B}$ is classified by assigning the class value
$C_{k}\in C$ which has the minimum distance. The method then recalculates
$V_{C_{k}}\forall k$ based on the records that arrive over time. Although the
method requires a low execution time for adapting the model with new records,
it suffers from low classification accuracy if the initial training data set
does not have enough records [5, 9].
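The nearest-class-mean rule of NCMC (Eqs. 1 and 2) can be sketched in a few lines. The code below is an illustrative NumPy reconstruction, not the authors' implementation; the incremental mean update is one standard way to realise "recalculates $V_{C_{k}}$ as records arrive".

```python
import numpy as np

def class_means(X, y):
    """Eq. (1): mean vector V_Ck of the records belonging to each class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def ncm_classify(x, means):
    """Eq. (2): assign the class whose mean has the minimum Euclidean
    distance to the record x."""
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

def ncm_update(means, counts, x, c):
    """Incrementally recalculate the mean of class c as a new labelled
    record x arrives (running-average form)."""
    n = counts.get(c, 0)
    means[c] = (means.get(c, np.zeros_like(x, dtype=float)) * n + x) / (n + 1)
    counts[c] = n + 1
```

Because classification depends only on the stored means, adapting to a new record is constant-time per class, which matches the low execution time noted above.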
This problem is addressed in an existing method called NCMF [9] which we
discussed in the Introduction section. NCMF first builds a decision forest $T$
by applying an existing decision forest algorithm called Random Forest (RF)
[2] on the initial batch data $D^{t^{i}}_{B}$. The decision forest $T$ is then
updated for the batch data sets that arrive over time. For a batch data set
$D^{t^{j}}_{B}$, NCMF updates the decision forest $T$ as follows. NCMF first
assigns the records $X_{j}\in D^{t^{j}}_{B}$ to the leaves of each tree $t\in T$
to which the records belong. If a leaf $l$ contains records having multiple class
values, NCMF calculates the mean vectors $V_{|C|}$ by using Eq. 1. NCMF
then translates the mean vectors into two labels $e\in\\{+ve,-ve\\}$ by
randomly assigning each mean vector to either Positive or Negative. The
splitting function $f(X_{j})$ for assigning a record $X_{j}\in D^{t^{j}}_{B}$
is defined as follows [9].
$\displaystyle
f(X_{j})=e_{C_{k}^{*}(X_{j})}~{}~{}~{}~{}~{}~{}~{}~{}where~{}~{}~{}C_{k}^{*}(X_{j})=\underset{C_{k}\in
C}{argmin}\left\|X_{j}-V_{C_{k}}\right\|_{2}$ (3)
Using the splitting function $f(X_{j})$, NCMF splits the leaf $l$ into two
sets where one set is considered as the left child and the other set is
considered as the right child. It continues this splitting process if the size
of any child is greater than a user-defined threshold. It is reported that
NCMF achieves a high accuracy over some existing methods including NCMC and
RF. However, during the splitting process the method requires all the previous
data, resulting in a high computational time [5].
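The leaf-splitting step of NCMF (Eq. 3) can be sketched as follows. This is a simplified illustration with hypothetical names (`split_leaf`, `min_size`), assuming numerical attributes stored in NumPy arrays; it forces both $+ve$ and $-ve$ labels to occur so that the split is non-trivial, which is one possible reading of the random assignment in the original method.

```python
import numpy as np
import random

def split_leaf(X, y, min_size=2, seed=0):
    """Sketch of NCMF-style leaf splitting (Eq. 3): class mean vectors are
    labelled +/-, and each record goes to the side of its nearest mean."""
    classes = list(np.unique(y))
    if len(classes) < 2 or len(X) <= min_size:
        return None  # leaf is pure or too small to split further
    rng = random.Random(seed)
    means = {c: X[y == c].mean(axis=0) for c in classes}          # Eq. (1)
    sign = {c: rng.choice((+1, -1)) for c in classes}             # random +/-
    sign[classes[0]], sign[classes[-1]] = +1, -1                  # both sides occur
    # Eq. (3): each record takes the label of its nearest class mean.
    side = np.array([sign[min(means, key=lambda c: np.linalg.norm(x - means[c]))]
                     for x in X])
    return X[side > 0], y[side > 0], X[side < 0], y[side < 0]
```

In the full method this split would be applied recursively to any child whose size exceeds the user-defined threshold.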
The computational time is reduced in a recent technique called CIRF [3] that
first builds a decision forest $T$ by applying the RF on the initial batch
data $D^{t^{i}}_{B}$. The decision forest $T$ is then updated by using only
the records of the current batch data set $D^{t^{j}}_{B}$. The procedure of
updating the decision forest $T$ is discussed in detail in the Introduction
section. Although CIRF outperforms state-of-the-art incremental learning
algorithms in terms of both classification accuracy and execution time, the
method has three main limitations (that are also mentioned in the Introduction
section) resulting in a low classification accuracy. Therefore, the existing
methods have room for further improvement.
## 3 Our Proposed Incremental Learning Framework: Adaptive Decision Forest
(ADF)
In this section, we present our proposed incremental learning framework called
Adaptive Decision Forest (ADF), which produces a decision forest to classify
new data. ADF takes as input the data that arrive as batches over time. We
consider a batch data set $D^{t^{i}}_{B}$ that arrives at time $t^{i}$. We
also consider that the batch data $D^{t^{i}}_{B}$ has a set of class values
$C^{t^{i}}=\\{C^{t^{i}}_{1},C^{t^{i}}_{2},\ldots C^{t^{i}}_{k}\\}$ where a
class value $C^{t^{i}}_{v}$ is associated with a record $X_{i}\in
D^{t^{i}}_{B}$. Let $T^{i}$ be a decision tree which is built on
$D^{t^{i}}_{B}$. Thus, the class values $C^{t^{i}}\in D^{t^{i}}_{B}$ are
considered to be known to $T^{i}$.
At time $t^{j}$, ADF receives another batch data set $D^{t^{j}}_{B}$ which we
assume has the same attributes $A=\\{A_{1},A_{2},\ldots A_{m}\\}$. However,
$D^{t^{j}}_{B}$ can have a different set of class values
$C^{t^{j}}=\\{C^{t^{j}}_{1},C^{t^{j}}_{2},\ldots C^{t^{j}}_{k}\\}$. The class
values $C^{t^{j}}$ are considered to be new (or unknown) to the decision tree,
$T^{i}$. The class values $C^{t^{j}}$ of a new batch $D^{t^{j}}_{B}$ can be
categorized into five scenarios, namely single known class (SKC), multiple
known class (MKC), single unknown class (SUC), multiple unknown class (MUC)
and a mixture of known and unknown classes (MKUC). The scenarios are defined
in Eq. 9.
$\displaystyle{Scenario(C^{t^{i}},C^{t^{j}})}_{i\neq
j}=\left\\{\begin{array}[]{l l}\text{SKC}&\quad\text{if $|C^{t^{j}}|=1$ \&
$C^{t^{j}}_{k}\in C^{t^{i}}$}\\\ \text{MKC}&\quad\text{if $|C^{t^{j}}|>1$ \&
$C^{t^{j}}_{k}\in C^{t^{i}}$;$\forall{k}$}\\\ \text{SUC}&\quad\text{if
$|C^{t^{j}}|=1$ \& $C^{t^{j}}_{k}\notin C^{t^{i}}$}\\\
\text{MUC}&\quad\text{if $|C^{t^{j}}|>1$ \& $C^{t^{j}}_{k}\notin
C^{t^{i}}$;$\forall{k}$}\\\ \text{MKUC}&\quad\text{if $|C^{t^{j}}|>1$ \&
$\exists{k}:C^{t^{j}}_{k}\in C^{t^{i}}$ \& $\exists{v}:C^{t^{j}}_{v}\notin
C^{t^{i}}$}\\\ \end{array}\right.;\forall{i,j}$ (9)
The categorization of class values of a new batch data is shown in Fig. 3. To
the best of our knowledge, traditional class incremental learning methods
including CIRF [3] usually focus on the SUC scenario and do not consider other
scenarios. ADF is capable of handling all five scenarios by integrating our
novel $iSAT$ splitting strategy with an existing decision forest algorithm
such as RF [2].
Figure 3: Categorization of class values of a new batch data.
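The categorization in Eq. 9 reduces to a small set-based test. A minimal sketch, assuming the class values of the known batches and of the new batch are given as Python sets:

```python
def scenario(known, new):
    """Categorize the class values of a new batch relative to the known
    class values, following Eq. (9)."""
    known, new = set(known), set(new)
    unseen = new - known
    if len(new) == 1:
        return "SKC" if not unseen else "SUC"  # single known / unknown class
    if not unseen:
        return "MKC"   # multiple classes, all already known
    if unseen == new:
        return "MUC"   # multiple classes, all unknown
    return "MKUC"      # a mix of known and unknown classes
```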
### 3.1 Basic Concept of ADF
Incremental decision forest algorithms (such as CIRF [3] and ARF [7]) can
adapt the decision trees continuously and perform better than the traditional
(non-incremental) decision forest algorithms (such as RF [2] and SysFor [13])
in terms of classification accuracy and training time [3]. We observe the
following three issues that play a vital role in enabling an incremental
decision forest algorithm to achieve high accuracy while handling all five
scenarios shown in Eq. 9 and Fig. 3.
The first issue is to identify the best split for a new batch data
$D^{t^{i}}_{B}$. The commonly used splitting strategies are entropy [18], Gini
index [19] and axis aligned [2]. An existing algorithm called CIRF [3] makes
use of an existing splitting strategy called Separating Axis Theorem (SAT)
[11] to find the best split. However, we argue that CIRF does not perform well
on $D^{t^{i}}_{B}$ if it contains records with a mix of known and unknown
classes (MKUC). To find the best split for a batch that follows the MKUC
scenario, we propose a novel splitting strategy called improved separating
axis theorem (iSAT) (see Section 3.1.1), with which we expect to achieve a
high classification accuracy.
We illustrate the argument by considering three toy batch data sets as shown
in Fig. 4a, Fig. 4b and Fig. 4c. Each batch data set contains 10 records and
three attributes, namely “Area of a House in square meter (Area)”, “number of
bedrooms (Beds)” and “House rent category (Rent)”. Figure 4d shows a decision
tree which is built from the batch data $B0$ (as shown in Fig. 4a). The
decision tree is repaired based on the SAT splitting strategy for the batch
data $B1$ (see Fig. 4b) which follows the MKUC scenario. The repaired decision
tree is shown in Fig. 4e. Figure 5 shows that the classification accuracy of
the repaired decision tree (as shown in Fig. 4e) is lower than the
classification accuracy of the decision trees shown in Fig. 4h and Fig. 4k,
which are repaired based on the entropy and iSAT splitting strategies
(see Section 3.1.1), respectively. Moreover, the decision tree with the iSAT
splitting strategy achieves a high classification accuracy for all batches.
Figure 4: Construction of trees on toy batches in terms of SAT, Entropy and
iSAT splitting strategies. Figure 5: Comparison of classification accuracies
on toy batches in terms of SAT, Entropy and iSAT splitting strategies.
The second issue is to decide whether the decision trees need to be
repaired. If they do, an important task is to decide how many of the trees
should be repaired by considering the cost of modification. To address this
issue, we introduce a repairable strategy in Section 3.1.2 to find an optimal
repairable threshold for achieving a high accuracy at a low cost.
The third issue is to handle concept drifts and to preserve the historical
information gained from the previous batches. In dynamic data, the
distribution may change over time; this situation is known as concept drift,
which may generally be categorized into two types, namely Temporary Concept
Drift (TCD), where the drift occurs in only one or a few consecutive batches,
and Sustainable Concept Drift (SCD), where the drift sustains over a period of
time. If a decision forest is rebuilt by considering only the current batch
$D^{t^{i}}_{B}$ to handle a TCD, then it is most likely that the historical
information of the forest will be lost. In that case, the decision forest is
unable to classify records drawn from the distributions of the previous
batches, which may lead to a low classification accuracy. We argue that the
classification accuracy may be increased if the decision forest is rebuilt
only for a SCD instead of a TCD. A strategy for identifying a SCD is
presented in Section 3.1.4. Moreover, to achieve a high accuracy, a number of
consecutive batches called a window (discussed in Section 3.1.5), instead of
just the current batch, can be used while rebuilding the decision forest.
We also argue that the historical information may be preserved by building a
set of forests called Permanent Forest ($PF$), Active Forest ($AF$), and
Temporary Forest ($TF$) which are discussed in Section 3.1.3. ADF can update
$PF$, $AF$ and $TF$ in parallel and achieves a high accuracy by keeping the
historical information.
To address the issues, in ADF we consider a number of strategies that are
summarized as follows: improved separating axis theorem (iSAT) based splitting
strategy (in Section 3.1.1), decision trees repairable strategy (in Section
3.1.2), the use of a set of decision forests (in Section 3.1.3),
identification of SCD strategy (in Section 3.1.4), and the use of a window of
batches strategy (in Section 3.1.5). We now discuss each of these strategies
in turn.
#### 3.1.1 Improved Separating Axis Theorem (iSAT) Splitting Strategy
We propose a novel splitting strategy called iSAT to find a splitting
attribute and value for a node of a decision tree. Our proposed splitting
strategy $iSAT$ addresses the problems of the existing method CIRF [3], which
makes use of an existing algorithm called Axis Aligned Minimum Bounding Box
(AABB) [10] and an existing theorem called Separating Axis Theorem (SAT) [11]
to find the best split attribute and value. We first briefly introduce AABB
and SAT before $iSAT$.
###### Definition 1.
In geometry, the minimum bounding box [10] of a set of data points is the
smallest box that encloses all the data points.
A sample AABB of a set of data points is illustrated in Fig. 2a. If two
minimum bounding boxes do not intersect, then it can be concluded that no
overlap exists between the corresponding sets of data points. For example, Fig.
2b shows two sets of data points: black stars and red triangles. The smallest
surrounding boxes for the black stars and red triangles are marked as
${AABB}_{1}$ and ${AABB}_{2}$, where the boxes overlap on axis $A_{j}$
but not on $A_{i}$. Thus, the black stars and red triangles are separated by
${AABB}_{1}$ and ${AABB}_{2}$ on axis $A_{i}$.
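The separation test above only needs the per-attribute minimum and maximum vectors of each box. A minimal NumPy illustration (the function names `aabb` and `separating_axes` are our own):

```python
import numpy as np

def aabb(points):
    """Axis-aligned minimum bounding box: per-attribute min and max vectors."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def separating_axes(box1, box2):
    """Attribute indices on which the two AABBs do not overlap, paired with
    the gap on each such axis; empty if the boxes intersect on every axis."""
    (min1, max1), (min2, max2) = box1, box2
    gaps = np.maximum(min2 - max1, min1 - max2)  # positive where disjoint
    return [(i, g) for i, g in enumerate(gaps) if g > 0]
```

On the example of Fig. 2b, only the axis of attribute $A_{i}$ would be returned, since the projections overlap on $A_{j}$.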
The strategy of separating two sets of data points can be used for the
incremental expansion of a decision tree by inserting a parent of a node. For
example, Fig. 4 demonstrates that the decision tree shown in Fig. 4d is
expanded by adding a parent node “Area” at the root node and the resulting
tree is shown in Fig. 4e. Using AABBs, a decision tree can be represented as a
hierarchical form of nested bounding boxes [3]. For $D^{t^{i}}_{B}$ with new
class values, it is crucial to identify the best split attribute and value.
The issue can be addressed by using the SAT [11] as follows.
###### Theorem 1.
Separating Axis Theorem (SAT) [11, 3, 20]: If two nonempty convex objects are
disjoint then there exists an axis on which the projection of the convex
objects will not overlap.
###### Proof.
Let $R$ be a set of real numbers and $R^{n}$ be a set of real n-vectors. Also
let $z\in R^{n}$ be a normal vector and $b\in R$ be a real number. We now
consider two nonempty convex objects Q and S where $Q\cap S=\emptyset$. Then
there exist $z\neq 0$ and $b$ such that $x|z^{T}x\leq b;\forall x\in Q$ and
$x|z^{T}x\geq b;\forall x\in S$. That is the affine function, $f(x)=z^{T}x-b$
is nonpositive on Q and nonnegative on S. The hyperplane, $h$,
$\\{x|z^{T}x=b\\}$ is considered as the separating hyperplane for the two
objects Q and S. In other words, the hyperplane, $h$, separates the convex
objects Q and S which is illustrated in Fig. 6.
Figure 6: Construction of a separating hyperplane, $h$, $\\{x|z^{T}x=b\\}$
between two disjoint convex objects Q and S [20]. (a) The hyperplane, $h$,
separates the convex objects Q and S. The affine function $z^{T}x-b$ is
nonpositive on Q and nonnegative on S. (b) The points $q\in Q$ and $s\in S$
are the pair points in the two sets that are closest to each other, where the
line segment $\overline{qs}$ between q and s is bisected by the separating
hyperplane $h$ and $h\perp z$. (c) For a point $u\in S$, the affine function
$f(u)$ is nonnegative on S and for a point $v\in Q$, the affine function
$f(v)$ is nonpositive on Q.
Let $u\in S$ and $v\in Q$ be two points. The Euclidean distance between the
points is calculated as
$\displaystyle dist(u,v)={\left\|u-v\right\|}_{2}$ (10)
We consider a vector $DIST$ that contains the distances between the points of
Q and S. There exists points $q\in Q$ and $s\in S$ which has the minimum
distance between Q and S that is
$\displaystyle mindist(Q,S)=\inf\\{DIST\\}$ (11)
We now define
$z=s-q$, $b=\frac{\left\|s\right\|_{2}^{2}-\left\|q\right\|_{2}^{2}}{2}$
Thus by substituting the value of $z$ and $b$, the affine function
$f(x)=z^{T}x-b$ can be expressed as
$\displaystyle f(x)=(s-q)^{T}(x-\frac{s+q}{2})$ (12)
The goal is to show that $f(x)$ is nonpositive on Q and nonnegative on S that
is the hyperplane, $h$, $\\{x|z^{T}x=b\\}$ is perpendicular to the normal
vector $z$ and bisects the line segment $\overline{qs}$ between q and s as
shown in Fig. 6.
We first prove that $f(x)$ is nonnegative on S. Suppose, for contradiction,
that there is a point $u\in S$ (as shown in Fig. 6) with
$\displaystyle f(u)=(s-q)^{T}(u-\frac{s+q}{2})<0$ (13)
The affine function $f(u)$ can also be expressed as
$\displaystyle
f(u)=(s-q)^{T}(u-s+\frac{s-q}{2})=(s-q)^{T}(u-s)+\frac{\left\|s-q\right\|_{2}^{2}}{2}$
(14)
Since $\left\|s-q\right\|_{2}^{2}>0$, Eq. (14) and the assumption $f(u)<0$
imply $(s-q)^{T}(u-s)<0$. Now the derivative of the squared Euclidean distance
between the point $s+t(u-s)$ and $q\in Q$ at $t=0$ is
$\displaystyle\frac{d}{dt}\left\|s+t(u-s)-q\right\|_{2}^{2}|_{t=0}=2(s-q)^{T}(u-s)<0$
(15)
So for some small $t~{}(0<t\leq 1)$, we have
$\displaystyle\left\|s+t(u-s)-q\right\|_{2}<\left\|s-q\right\|_{2}$ (16)
that is, the point $s+t(u-s)$ is closer to $q$ than $s$ is. Since S is convex and
contains s and u, we have $s+t(u-s)\in S$, which contradicts the minimality of
$mindist(Q,S)$ in Eq. (11). Thus, the affine function
$f(x)\forall x\in S$ is nonnegative on S. Similarly, for any point $x\in Q$,
the affine function $f(x)\forall x\in Q$ is nonpositive on Q. Hence, the
hyperplane, $h$ separates the convex objects S and Q. Therefore, there exists
an axis $z\neq 0$ on which the projection of two disjoint convex objects S and
Q does not overlap. ∎
To illustrate Theorem 1, we consider two dimensional AABBs as shown in Fig 2b
where two attributes $A_{i}$ and $A_{j}$ are projected on the x-axis and
y-axis, respectively. ${AABB}_{1}$ is created based on black stars and
${AABB}_{2}$ is created based on red triangles. From the figure we can see
that there is no overlap between ${AABB}_{1}$ and ${AABB}_{2}$ for the $A_{j}$
attribute. According to SAT, the x-axis is the separating axis between
${AABB}_{1}$ and ${AABB}_{2}$ and thus $A_{i}$ is the splitting attribute.
Moreover, if multiple attributes have no overlap then the attribute having the
maximum gap will be considered as the splitting attribute. Let $d_{1}$ be the
gap between ${AABB}_{1}$ and ${AABB}_{2}$ on the x-axis (i.e. attribute
$A_{i}$ ) and $d_{2}$ be the gap between ${AABB}_{1}$ and ${AABB}_{2}$ on the
y-axis (i.e. attribute $A_{j}$ ) then the split attribute $A_{s}$ is
determined based on Eq. (19) [3].
$\displaystyle A_{s}=\left\\{\begin{array}[]{l l}A_{i}&\quad\text{if
$d_{1}>d_{2}$ \& $d_{1}>0$}\\\ A_{j}&\quad\text{if $d_{2}>d_{1}$ \&
$d_{2}>0$}\\\ \end{array}\right.;\forall{i,j}$ (19)
For the x-axis (attribute $A_{i}$), let $V^{i,1}_{min}$ and $V^{i,1}_{max}$ be
the minimum and maximum values of ${AABB}_{1}$, and $V^{i,2}_{min}$ and
$V^{i,2}_{max}$ be the minimum and maximum values of ${AABB}_{2}$. Similarly,
for the y-axis (attribute $A_{j}$), let $V^{j,1}_{min}$ and $V^{j,1}_{max}$ be
the minimum and maximum values of ${AABB}_{1}$, and $V^{j,2}_{min}$ and
$V^{j,2}_{max}$ be the minimum and maximum values of ${AABB}_{2}$. Thus, the
split value $SV$ for $A_{s}$ is calculated by using Eq. (24) [3].
$\displaystyle SV=\left\\{\begin{array}[]{l
l}\frac{V^{i,1}_{max}+V^{i,2}_{min}}{2}&\quad\text{if $d_{1}>d_{2}$ \&
$V^{i,1}_{max}>V^{i,2}_{min}$}\\\
\frac{V^{i,2}_{max}+V^{i,1}_{min}}{2}&\quad\text{if $d_{1}>d_{2}$ \&
$V^{i,2}_{min}>V^{i,1}_{max}$}\\\
\frac{V^{j,1}_{max}+V^{j,2}_{min}}{2}&\quad\text{if $d_{2}>d_{1}$ \&
$V^{j,1}_{max}>V^{j,2}_{min}$}\\\
\frac{V^{j,2}_{max}+V^{j,1}_{min}}{2}&\quad\text{if $d_{2}>d_{1}$ \&
$V^{j,2}_{min}>V^{j,1}_{max}$}\\\ \end{array}\right.;\forall{i,j}$ (24)
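Eqs. (19) and (24) together amount to the following computation. A minimal NumPy sketch (our own illustration, not CIRF's code), where each box is a (min-vector, max-vector) pair over the numerical attributes:

```python
import numpy as np

def sat_split(box1, box2):
    """SAT-based split: choose the attribute with the maximum gap between
    the two AABBs (Eq. 19) and place the split value midway between the
    facing box faces (Eq. 24). Returns None if the boxes overlap on every
    axis, i.e. no separating axis exists."""
    (min1, max1), (min2, max2) = box1, box2
    gaps = np.maximum(min2 - max1, min1 - max2)  # per-attribute gap
    s = int(np.argmax(gaps))
    if gaps[s] <= 0:
        return None  # non-separable: SAT cannot provide a split
    if min2[s] > max1[s]:
        return s, (max1[s] + min2[s]) / 2  # box2 lies above box1 on axis s
    return s, (max2[s] + min1[s]) / 2      # box1 lies above box2 on axis s
```

The `None` branch is exactly the failure case discussed next: when the bounding boxes overlap on every axis, the SAT-based strategy alone cannot determine a split.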
Using the SAT-based splitting strategy, incremental decision forest algorithms
(such as CIRF [3]) find the split attribute which has the maximum distance
among all numerical attributes. For example, the two bounding boxes
${AABB}_{1}$ and ${AABB}_{2}$ are separated by the attribute $A_{i}$ in Fig.
2b. However, the algorithms do not perform well on $D^{t^{i}}_{B}$ with the MKUC
scenario (as discussed in Section 3.1). Moreover, the algorithms fail to find
a split if the bounding boxes are non-separable. For example, Fig. 2d shows
two bounding boxes ${AABB}_{1}$ and ${AABB}_{2}$ that are not separated
by any attribute.
We argue that the accuracy of such algorithms can be improved if it is
possible to find the best split in the case of overlapping bounding boxes. To
this end, we propose an improved separating axis theorem (iSAT), which relies
on a new definition and two theorems. In iSAT, we first define a sub-box
called Sub Axis-aligned minimum bounding box (Sub-AABB) as follows.
###### Definition 2.
Given a set of data points, a Sub-AABB is the axis-aligned minimum bounding
box of a subset of those data points.
Three sample Sub-AABBs of an AABB are illustrated in Fig. 7a where the Sub-
AABBs are marked with dotted red rectangles. According to Definition
2, two overlapping AABBs (as shown in Fig. 2d) can have a number of
overlapping and non-overlapping Sub-AABBs as illustrated in Fig. 7b.
Figure 7: (a) A 2-dimensional illustration of Sub-AABBs. (b) The ${AABB}_{1}$
and ${AABB}_{2}$ consist of sub-AABBs ${Sub-AABB}_{1}$ and ${Sub-AABB}_{2}$,
respectively. (c) The hyperplane separates the two sub-AABBs ${Sub-AABB}_{1}$
and ${Sub-AABB}_{2}$.
###### Theorem 2.
If there are two overlapping AABBs then it is possible to have non-overlapping
Sub-AABBs of the two AABBs.
###### Proof.
We consider two nonempty convex objects ${AABB}_{1}$ and ${AABB}_{2}$ where
${AABB}_{1}\cap{AABB}_{2}\neq\emptyset$ as illustrated in Fig. 2d. According
to Definition 2, both ${AABB}_{1}$ and ${AABB}_{2}$ may have a number of Sub-
AABBs. Let ${Sub-AABB}_{1}$ and ${Sub-AABB}_{2}$ be two Sub-AABBs where ${Sub-
AABB}_{1}\cap{Sub-AABB}_{2}=\emptyset$ as illustrated in Fig. 7b. Then there
exist $z\neq 0$ and $b$ such that $x|z^{T}x\leq b;\forall x\in{Sub-AABB}_{1}$
and $x|z^{T}x\geq b;\forall x\in{Sub-AABB}_{2}$. That is the affine function,
$f(x)=z^{T}x-b$ is nonpositive on ${Sub-AABB}_{1}$ and nonnegative on ${Sub-
AABB}_{2}$. The hyperplane, $h$, $\\{x|z^{T}x=b\\}$ is considered as the
separating hyperplane for the two objects ${Sub-AABB}_{1}$ and ${Sub-
AABB}_{2}$. According to Theorem 1, for any point $x\in{Sub-AABB}_{2}$ the
affine function $f(x)\forall x\in{Sub-AABB}_{2}$ is nonnegative on ${Sub-
AABB}_{2}$. Similarly, for any point $x\in{Sub-AABB}_{1}$, the affine function
$f(x)\forall x\in{Sub-AABB}_{1}$ is nonpositive on ${Sub-AABB}_{1}$. Hence,
the hyperplane, $h$, separates the sub-AABBs ${Sub-AABB}_{1}$ and ${Sub-
AABB}_{2}$. Therefore, there exists an axis $z\neq 0$ on which the projection
of two disjoint sub-AABBs ${Sub-AABB}_{1}$ and ${Sub-AABB}_{2}$ does not
overlap which is illustrated in Fig. 7c. ∎
###### Theorem 3.
If there are two overlapping AABBs, then there will be at least two
overlapping Sub-AABBs of the two AABBs.
###### Proof.
We again consider two nonempty convex objects ${AABB}_{1}$ and ${AABB}_{2}$
where ${AABB}_{1}\cap{AABB}_{2}\neq\emptyset$ as illustrated in Fig. 2d.
According to Definition 2, both ${AABB}_{1}$ and ${AABB}_{2}$ may have a
number of Sub-AABBs. Let ${Sub-AABB}_{3}$ and ${Sub-AABB}_{4}$ be two Sub-
AABBs where ${Sub-AABB}_{3}\cap{Sub-AABB}_{4}\neq\emptyset$ as illustrated in
Fig. 8a. For simplicity, we consider a two dimensional projection of ${Sub-
AABB}_{3}$ and ${Sub-AABB}_{4}$ on the x-axis and y-axis as shown in Fig. 8b.
On the x-axis, let $x_{1}$ and $x_{3}$ be the lower bound and upper bound,
respectively of ${Sub-AABB}_{3}$ and $x_{2}$ and $x_{4}$ be the lower bound
and upper bound, respectively of ${Sub-AABB}_{4}$ such that
$x_{1}<x_{2}<x_{3}<x_{4}$. On the y-axis, let $y_{1}$ and $y_{3}$ be the lower
bound and upper bound, respectively of ${Sub-AABB}_{4}$ and $y_{2}$ and
$y_{4}$ be the lower bound and upper bound, respectively of ${Sub-AABB}_{3}$
such that $y_{1}<y_{2}<y_{3}<y_{4}$. Let $u$ and $v$ be two points of ${Sub-
AABB}_{3}$ and ${Sub-AABB}_{4}$, respectively. If
$\left\|v\right\|_{2}^{2}<\left\|u\right\|_{2}^{2}$ for some points
$u\in{Sub-AABB}_{3}$ and $v\in{Sub-AABB}_{4}$ on a given axis, then the two
Sub-AABBs are not separable on that axis.
On the x-axis, let $u=x_{3}$ and $v=x_{2}$. Since $x_{2}<x_{3}$ then
$\left\|v\right\|_{2}^{2}<\left\|u\right\|_{2}^{2}$. Thus, ${Sub-AABB}_{3}$
and ${Sub-AABB}_{4}$ are not separable on the x-axis.
Similarly, on the y-axis, let $u=y_{3}$ and $v=y_{2}$. Since $y_{2}<y_{3}$
then $\left\|v\right\|_{2}^{2}<\left\|u\right\|_{2}^{2}$. Thus, ${Sub-
AABB}_{3}$ and ${Sub-AABB}_{4}$ are not separable on the y-axis. Therefore, it
can be concluded that two overlapping AABBs contain at least two overlapping
Sub-AABBs.
∎
Figure 8: (a) The ${AABB}_{1}$ and ${AABB}_{2}$ consist of overlapping Sub-
AABBs. (b) The boundaries of AABBs and Sub-AABBs are projected on the x-axis
and y-axis. (c) ${Sub-AABB}_{1}$ and ${Sub-AABB}_{2}$ are overlapping.
Following Theorems 2 and 3, we observe that the best split-attribute
($A_{s}$) and split-value ($SV$) can be determined by considering the
overlapping and non-overlapping Sub-AABBs. For non-overlapping Sub-AABBs, we
use the SAT based splitting strategy to find the best $A_{s}$ and $SV$. On the
other hand, for overlapping Sub-AABBs, we first identify the Sub-AABB that
has the minimum number of records with a mix of class values. For
example, in Fig. 8c, we can see that ${Sub-AABB}_{4}$ has the minimum number of
records with a mix of class values. For ${Sub-AABB}_{4}$ we use the
entropy based splitting strategy to find the best $A_{s}$ and $SV$. The steps
of iSAT are shown in Algorithm 3. The experimental results on the toy data
set (see Fig. 5) also justify the use of both SAT and entropy based splitting
strategies while modifying the classifier $T$ incrementally.
#### 3.1.2 Decision Trees Repairable Strategy
We propose a novel strategy in our framework ADF to decide whether a decision
forest, $T$ is repairable or not. We first calculate the confidence of all
leaves of the decision forest, $T$. Let $L^{c^{t^{i}}}_{pq}$ be the confidence
of the $q$-th leaf of the $p$-th tree for the batch data set $D^{t^{i}}_{B}$
at time $t^{i}$. For a new batch data set $D^{t^{j}}_{B}$ at time $t^{j}$, we
also calculate the confidence of all leaves of the decision forest, $T$. Let
$L^{c^{t^{j}}}_{pq}$ be the confidence of the $q$-th leaf of the $p$-th tree
for $D^{t^{j}}_{B}$. We then calculate a repairable matrix $F$ for $T$ by
comparing confidences that are calculated from the batch data sets
$D^{t^{i}}_{B}$ and $D^{t^{j}}_{B}$. A zero value in an element $F_{pq}\in F$
indicates that the $q$-th leaf of the $p$-th tree is not perturbed, whereas a
non-zero value indicates that the $q$-th leaf of the $p$-th tree is perturbed.
The value of the elements $F_{pq}\in F$ is calculated by using Eq. 27.
$\displaystyle F_{pq}=\left\\{\begin{array}[]{l l}1&\quad\text{if
$L^{c^{t^{i}}}_{pq}>(L^{c^{t^{j}}}_{pq}+\epsilon)$}\\\
0&\quad\text{Otherwise}\\\ \end{array}\right.;\forall{p,q}$ (27)
where $\epsilon$ is an error tolerance threshold.
We now calculate the ratio of perturbed leaves of the decision forest, $T$.
Let $l$ be the total number of leaves of a tree and $l_{total}$ be the total
number of leaves of the decision forest. Also, let
$F_{total}\leftarrow\sum_{\forall p,q}{F_{pq}}$ be the total number of
perturbed leaves in $T$. We calculate the ratio of perturbed leaves
$\vartheta\leftarrow F_{total}/l_{total}$ in $T$. If the value of $\vartheta$
is less than or equal to a user-defined threshold $\theta$ (also called the
repairable threshold), then $T$ is considered repairable.
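The repairable test of Eq. 27 and the ratio check above can be sketched as follows. This is a minimal illustration, not the ADF source; the class and method names are ours.

```java
// Sketch of the repairable test of Eq. 27: a leaf is flagged as perturbed
// when its confidence drops by more than epsilon between two batches, and
// the forest is repairable when the perturbed-leaf ratio stays within theta.
public class RepairableCheck {
    // confOld[p][q], confNew[p][q]: confidence of leaf q of tree p
    // at times t^i and t^j, respectively.
    public static int[][] perturbedLeaves(double[][] confOld, double[][] confNew,
                                          double epsilon) {
        int[][] f = new int[confOld.length][];
        for (int p = 0; p < confOld.length; p++) {
            f[p] = new int[confOld[p].length];
            for (int q = 0; q < confOld[p].length; q++) {
                // Eq. 27: F_pq = 1 iff the confidence drops by more than epsilon.
                f[p][q] = confOld[p][q] > confNew[p][q] + epsilon ? 1 : 0;
            }
        }
        return f;
    }

    // Repairable iff vartheta = F_total / l_total <= theta.
    public static boolean isRepairable(int[][] f, double theta) {
        int total = 0, perturbed = 0;
        for (int[] row : f) {
            for (int v : row) { total++; perturbed += v; }
        }
        return total > 0 && (double) perturbed / total <= theta;
    }
}
```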
(a) LDPA dataset
(b) Avila dataset
Figure 9: Justification of repairable threshold $\theta$ and error tolerance
threshold $\epsilon$ on LDPA and Avila data sets in terms of classification
accuracy.
We test the influences of different repairable threshold ($\theta$) and error
tolerance threshold ($\epsilon$) values on two real data sets, namely LDPA
[14] and Avila [14], in terms of classification accuracy. From the data sets,
we create 34 batches of training data and testing data (for details see
Section 4.2). We incrementally apply ADF on the training and testing batches
and calculate the classification accuracies. We use the user-defined $\theta$
that varies between 0 and 1, and $\epsilon$ that varies between 0.0 and 0.1.
For each combination of $\theta$ and $\epsilon$, we calculate average
classification accuracy (from all 34 batches). The average classification
accuracies on the two data sets are presented in Fig. 9. Note that a
well-chosen combination of $\theta$ and $\epsilon$ enables ADF to perform at
its best. It is clear from Fig. 9 that we get the best classification accuracy
(marked with circles) when $\theta$ is between 0.2 and 0.4 and $\epsilon$ is
between 0.01 and 0.03 for both data sets.
#### 3.1.3 Handling concept drifts using three parallel forests
We explained in the basic concept (see Section 3.1) that the distribution of
data in dynamic situations may change over time, and Temporary Concept Drift
(TCD) and Sustainable Concept Drift (SCD) may occur. In ADF, we handle both
TCD and SCD by introducing three parallel decision forests, namely Permanent
Forest (PF), $T^{P}$, Active Forest (AF), $T^{A}$, and Temporary Forest (TF),
$T^{T}$. The main motivation for the parallel forests is to retain historical
knowledge, leading to a high classification accuracy.
The mechanism of ADF in parallel learning $T^{P}$, $T^{A}$ and $T^{T}$ is
shown in Fig. 10. For every new batch, $T^{P}$ is adapted. In addition, if
$T^{A}$ is repairable, we adapt $T^{A}$ for the new batch; otherwise, we check
the adaptability of $T^{T}$: if $T^{T}$ is repairable, we adapt $T^{T}$ with
the new batch; otherwise, we rebuild $T^{T}$ using a window of batches (see
Section 3.1.5). Among the three forests, ADF recommends to the user the forest
that achieves the best classification accuracy.
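The per-batch adaptation policy just described can be sketched as follows. The enum and method names are illustrative placeholders, not the ADF/MOA API; $T^{P}$ is always adapted and is therefore omitted from the decision.

```java
// Sketch of the per-batch policy of Section 3.1.3: the active forest T^A is
// adapted when repairable; otherwise the temporary forest T^T is adapted
// when repairable, or rebuilt from the window of batches.
public class AdaptationPolicy {
    public enum Action { ADAPT_AF, ADAPT_TF, REBUILD_TF }

    public static Action decide(boolean afRepairable, boolean tfRepairable) {
        if (afRepairable) return Action.ADAPT_AF;
        if (tfRepairable) return Action.ADAPT_TF;
        return Action.REBUILD_TF;
    }
}
```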
Figure 10: Parallel incremental learning of $T^{P}$, $T^{A}$ and $T^{T}$.
We test the use of parallel forests on the LDPA data set. We create 34 batches
of training and test data as described in Section 4.2. The performances of PF,
AF, TF, and ADF on the LDPA data set are shown in Fig. 11. We can see that TF
achieves the highest accuracy for the batches where AF is not repairable.
Although AF is repaired for some batches (27 to 29), PF achieves the highest
accuracy. The experimental result indicates the importance of using parallel
forests in ADF.
Figure 11: Classification accuracies of PF, AF, TF and ADF on LDPA data set
and the identification of temporary concept drift (TCD) and sustainable
concept drift (SCD).
#### 3.1.4 Identification of Sustainable Concept Drift (SCD)
The repairable strategy of ADF (see Section 3.1.2) enables it to determine the
event of Sustainable Concept Drift (SCD) which is discussed in Section 3.1.
Let $cdf$ be the counter which counts the number of consecutive batches for
which the active forest $AF$ (see Section 3.1.3) is not repairable. The event
of SCD is determined by using Eq. 30.
$\displaystyle SCD=\left\\{\begin{array}[]{l l}true&\quad\text{if
$cdf>\lambda$}\\\ false&\quad\text{Otherwise}\\\ \end{array}\right.$ (30)
where $\lambda$ is the user-defined concept drift threshold. The default
value of $\lambda$ is set to 3. Once an SCD is detected, we replace $T^{A}$
with $T^{T}$. However, if $cdf\leq\lambda$ and $T^{A}$ is repairable, we reset
$cdf$ and $T^{T}$. Figure 11 shows the detection of SCD on the LDPA data set.
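The detection rule of Eq. 30, together with the counter reset, can be sketched as follows. This is an illustrative fragment; the class and method names are ours.

```java
// Sketch of the sustainable-concept-drift test of Eq. 30: SCD is declared
// after the active forest has not been repairable for more than lambda
// consecutive batches; a repairable batch resets the counter.
public class ScdDetector {
    private int cdf = 0;           // consecutive non-repairable batches
    private final int lambda;      // concept drift threshold (default 3)

    public ScdDetector(int lambda) { this.lambda = lambda; }

    // Call once per batch; returns true when SCD is detected,
    // i.e. when T^A should be replaced by T^T.
    public boolean update(boolean afRepairable) {
        if (afRepairable) { cdf = 0; return false; }
        cdf++;
        return cdf > lambda;       // Eq. 30
    }
}
```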
#### 3.1.5 Analysis of impact of using a window of batches
For handling SCD, we replace the active forest, $T^{A}$, with the temporary
forest, $T^{T}$, which is rebuilt in the case of TCD (see Section 3.1.3). We
argue that if the data of a current batch are drawn from the distribution of
the previous batches, a newly built $T^{T}$ may not classify the records
correctly and may lead to a low classification accuracy. Therefore, to keep
the accuracy high, we rebuild $T^{T}$ based on a window of batches. In ADF, we
introduce a user-defined threshold $\gamma$, which is the size of the window.
The default value of $\gamma$ is 3.
We test the influence of using a window of batches in ADF on a synthetic data
set (see Table 3) by creating 8 batches of training data and testing data (for
details see Section 4.2). For a batch $D^{t^{i}}_{B}$, we build a decision
forest by applying SysFor [13] on ${D^{t^{i}}_{B_{train}}}$ and note the
execution time. The accuracy of the forest is calculated by classifying the
records of $D^{t^{i}}_{B_{test}}$. Similarly, we apply SysFor [13] on a window
that contains the last $\gamma$ number of batches and calculate the
classification accuracy and execution time. Figure 12 shows that SysFor (with
a window of batches) achieves a higher classification accuracy at the expense
of a slightly higher execution time. The result indicates the importance of
using the window of batches while rebuilding $T^{T}$ in ADF.
(a) Classification accuracies of SysFor [13] and SysFor (Window) methods on
Synthetic data set
(b) Execution time of SysFor and SysFor (Window) methods on Synthetic data set
Figure 12: Classification accuracies and execution times of SysFor [13] and
SysFor (Window) methods on Synthetic data set.
### 3.2 Incremental Tree Growing Mechanism of ADF
To accommodate all scenarios as shown in Eq. 9 and Fig. 3, in ADF we propose
an incremental tree growing mechanism, which is presented in Algorithm 1.
At time $t^{0}$, ADF takes a batch data set ${D^{t^{0}}_{B_{train}}}$ (which
is renamed as $D^{B}$ for simplicity) as input. It also takes a number of
parameters as shown in Algorithm 1. Initially, the user-defined thresholds are
initialised with their default values (see Section 3.1.2, Section 3.1.4 and
Section 3.1.5) and the forests and the statistics are set to $null$. For
$D^{B}$, ADF makes use of Algorithm 1 to build decision forests and calculate
their statistics. ADF finds the best forest and recommends it to the user.
At time $t^{i}$, for $D^{B}$ (which is actually ${D^{t^{i}}_{B_{train}}}$),
ADF updates the forests and statistics by using Algorithm 1 and recommends the
best forest to the user. ADF repeats this process for the remaining batches.
We now present the main steps of Algorithm 1 as follows. In Step 1, we build
$T^{P}$ (if it is $null$) by applying an existing decision forest algorithm
such as RF [2] on $D^{B}$. We also make a copy of $T^{P}$ to $T^{A}$. In Step
2, we first identify the perturbed leaves $F^{P}$ by using Eq. (27). We then
repair $T^{P}$ by using a procedure called RepairForest() which is presented
in Algorithm 2. RepairForest() takes $T^{P}$, $D^{B}$, $L^{P}$, $F^{P}$ and
$\theta$ as input and returns updated permanent forest ($T^{P\prime}$) and
updated leaf statistics ($L^{P\prime}$) as output. In Step 3, we again find
the perturbed leaves $F^{A}$ for $T^{A}$ by using Eq. 27 and then calculate
the percentage of perturbed leaves $\vartheta^{A}$ based on the process
discussed in Section 3.1.2. If $\vartheta^{A}$ is less than or equal to
$\theta$, we consider that $T^{A}$ is repairable. We then repair $T^{A}$ by
using the procedure RepairForest() that returns updated active forest
($T^{A\prime}$) and updated leaf statistics ($L^{A\prime}$) as output.
In Step 4, we update the window of batches $D^{W}$. Let $w=|D^{W}|$ be the
number of batches that are currently stored in $D^{W}$. If $w$ is greater than
or equal to $\gamma$, we first remove the oldest batch from $D^{W}$ and then
add $D^{B}$ to $D^{W}$. In Step 5, we build $T^{T}$, i.e.
$T_{k}^{T}$, $k=1,2,\ldots,M$ (if $T^{T}$ is null), by applying RF on
$D^{W^{\prime}}$.
Similar to Step 3, in Step 6 we repair $T^{T}$ if it is repairable. We find
the perturbed leaves $F^{T}$ of $T^{T}$ and then calculate the percentage of
perturbed leaves $\vartheta^{T}$. If $\vartheta^{T}$ is less than or equal to
$\theta$, we consider that $T^{T}$ is repairable. We then repair $T^{T}$ by
using procedure RepairForest() that returns updated temporary forest
($T^{T\prime}$) and updated leaf statistics ($L^{T\prime}$) as output.
Finally, in Step 7, Algorithm 1 returns all updated classifiers, statistics
and parameters.
Input : Batch data set $D^{B}$, Permanent Forest (PF) $T^{P}$, Active Forest
(AF) $T^{A}$, Temporary Forest (TF) $T^{T}$, Statistics of PF leaves $L^{P}$,
Statistics of AF leaves $L^{A}$, Statistics of TF leaves $L^{T}$, Reserve
window $D^{W}$, Concept Drift Flag ($cdf$), Concept Drift Threshold
($\lambda$), Repairable Threshold ($\theta$), Error Tolerance Threshold
($\epsilon$) and Reserve Window Size ($\gamma$).
Output : Updated $T^{P^{\prime}}$, $T^{A^{\prime}}$, $T^{T^{\prime}}$,
$L^{P^{\prime}}$, $L^{A^{\prime}}$, $L^{T^{\prime}}$, $D^{W^{\prime}}$, and
$cdf$.
Step 1:
if _$T^{P}$ is $null$_ then
Apply an existing decision forest algorithm such as RF [2] on $D^{B}$ to build
PF, $T_{m}^{P}$,$m=1,2,\ldots,M$; /* PF is built with M trees */
Set $T^{A}\leftarrow T^{P}$;
$Goto~{}Step~{}7$;
end if
end
Step 2:
Find perturbed leaves, $F^{P}\leftarrow
FindPerturbedLeaves(D^{B},T^{P},\epsilon,L^{P})$; /* Find perturbed leaves in
$T^{P}$ */
$T^{P^{\prime}},L^{P^{\prime}}\leftarrow
RepairForest(T^{P},D^{B},L^{P},F^{P},\theta)$;
end
Step 3:
Find perturbed leaves, $F^{A}\leftarrow
FindPerturbedLeaves(D^{B},T^{A},\epsilon,L^{A})$; /* Find faulty leaves in
$T^{A}$ */
Calculate, $\vartheta^{A}\leftarrow CalculatePerturbedLeavesRatio(F^{A})$; /*
Calculate the percentage of faulty leaves in $T^{A}$ */
if _$\vartheta^{A}\leq\theta$_ then
$T^{A^{\prime}},L^{A^{\prime}}\leftarrow
RepairForest(T^{A},D^{B},L^{A},F^{A},\theta)$;
$Goto~{}Step~{}7$;
end if
end
Step 4:
if _$|D^{W}|\geq\gamma$_ then
Find the oldest batch in $D^{W}$ and set to $D^{O}$;
$D^{W}\leftarrow D^{W}\setminus D^{O}$; /* Remove the oldest batch */
end if
$D^{W^{\prime}}\leftarrow D^{W}\cup D^{B}$;
$cdf\leftarrow cdf+1$;
end
Step 5:
if _$T^{T}$ is $null$_ then
Apply RF on $D^{W^{\prime}}$ to build TF, $T_{m}^{T}$, $m=1,2,\ldots,M$;
$Goto~{}Step~{}7$;
end if
end
Step 6:
Find perturbed leaves, $F^{T}\leftarrow
FindPerturbedLeaves(D^{B},T^{T},\epsilon,L^{T})$; /* Find perturbed leaves in
$T^{T}$ */
Calculate, $\vartheta^{T}\leftarrow CalculatePerturbedLeavesRatio(F^{T})$; /*
Calculate the percentage of perturbed leaves in $T^{T}$ */
if _$\vartheta^{T}\leq\theta$_ then
$T^{T^{\prime}},L^{T^{\prime}}\leftarrow
RepairForest(T^{T},D^{B},L^{T},F^{T},\theta)$;
end if
else
Apply RF on $D^{W^{\prime}}$ to build TF, $T_{m}^{T}$, $m=1,2,\ldots,M$;
if _$cdf >\lambda$_ then
$T^{A^{\prime}}\leftarrow T^{T^{\prime}}$;
$T^{T^{\prime}}\leftarrow\emptyset$; $cdf\leftarrow 0$;
$L^{A^{\prime}}\leftarrow L^{T^{\prime}}$;
end if
end if
end
Step 7:
Return $T^{P^{\prime}}$, $T^{A^{\prime}}$, $T^{T^{\prime}}$, $L^{P^{\prime}}$,
$L^{A^{\prime}}$, $L^{T^{\prime}}$, $D^{W^{\prime}}$ and $cdf$.
end
Algorithm 1 LearnADF()
Input : Decision forest, $T$, Batch data set, $D^{B}$, Leaves statistics, $L$,
Perturbed leaves $F$ and Repairable threshold $\theta$.
Output : Updated decision forest $T^{\prime}$, updated leaves statistics,
$L^{\prime}$.
Step 1:
Set $T^{\prime}\leftarrow\emptyset$;
Set $L^{\prime}\leftarrow\emptyset$;
for _$k=1$ to $M$_ do
calculate, $\vartheta\leftarrow CalculatePerturbedLeavesRatio(F_{k})$;
/*Calculate the percentages of perturbed leaves of the $k^{th}$-tree,
$T_{k}$.*/
if _$\vartheta >\theta$_ then
$T^{\prime}_{k}\leftarrow iSAT(T_{k},D^{B})$; /*Repair $T_{k}$ by using our
proposed improved separating axis theorem (iSAT).*/
for _each leaf, $l\in T^{\prime}_{k}$_ do
Set $D^{l}\leftarrow FindRecords(T_{k},D^{B},l)$;
if _$isPure(D^{l})==false\;\&\;isPerturbed(l)\;\&\;SizeOf(D^{l})>MinLeafSize$_ then
Expand leaf $l$ using Entropy based splitting strategy;
end if
end for
end if
calculate, $L^{\prime}_{k}\leftarrow
UpdateLeafStatistics(T^{\prime}_{k},D^{B},L)$;
$L^{\prime}\leftarrow L^{\prime}\cup L^{\prime}_{k}$; $T^{\prime}\leftarrow
T^{\prime}\cup T^{\prime}_{k}$;
end for
end
Step 2:
Return $T^{\prime}$ and $L^{\prime}$;
end
Algorithm 2 Procedure RepairForest()
Input : Decision tree, $t$, Batch data set, $D^{B}$
Output : Updated decision tree, $t^{\prime}$
Step 1:
set $LB^{t}\leftarrow min(t)$;/*Find lower boundary of $t$ for all numerical
attributes. */
set $UB^{t}\leftarrow max(t)$; /*Find upper boundary of $t$ for all numerical
attributes. */
set $LB^{B}\leftarrow min(D^{B})$; /*Find lower boundary of $D^{B}$ for all
numerical attributes. */
set $UB^{B}\leftarrow max(D^{B})$; /*Find upper boundary of $D^{B}$ for all
numerical attributes. */
set $Dist_{1}\leftarrow LB^{B}-UB^{t}$; $Dist_{2}\leftarrow LB^{t}-UB^{B}$;
set $Max_{1}\leftarrow max(Dist_{1})$; $Max_{2}\leftarrow max(Dist_{2})$;
end
Step 2:
/*If there is no overlapping between $t$ and $D^{B}$. */
if _( $Max_{1}>0\&Max_{1}\geq Max_{2}$) OR ($Max_{2}>0\&Max_{2}\geq Max_{1}$)
_ then
if _$Max_{1} >0\&Max_{1}\geq Max_{2}$_ then
Set $splitAttr\leftarrow findAttributeWithMaxDistance(Dist_{1},Max_{1})$;
set $SplitValue\leftarrow(LB^{B}[splitAttr]+UB^{t}[splitAttr])/2$;
Create a node, $node\leftarrow CreateNode(splitAttr,SplitValue)$;
Add $t$ as the left child of $t^{\prime}$ ;
Add $node$ as the right child of $t^{\prime}$;
end if
else if _$Max_{2} >0\&Max_{2}\geq Max_{1}$_ then
Set $splitAttr\leftarrow findAttributeWithMaxDistance(Dist_{2},Max_{2})$;
set $SplitValue\leftarrow(LB^{t}[splitAttr]+UB^{B}[splitAttr])/2$;
Create a node, $node\leftarrow CreateNode(splitAttr,SplitValue)$;
Add $t$ as the right child of $t^{\prime}$ ;
Add $node$ as the left child of $t^{\prime}$;
end if
end if
/*If overlapping exists between $t$ and $D^{B}$. */
else
Update $Dist_{1}\leftarrow UB^{B}-UB^{t}$;
Update $Max_{1}\leftarrow max(Dist_{1})$;
if _$Max_{1} >0$_ then
Set $splitAttr\leftarrow findAttributeWithMaxDistance(Dist_{1},Max_{1})$;
set $SplitValue\leftarrow UB^{t}[splitAttr]$;
Create a node, $node\leftarrow CreateNode(splitAttr,SplitValue)$;
Add $t$ as the left child of $t^{\prime}$ ;
Add $node$ as the right child of $t^{\prime}$;
end if
Update $Dist_{2}\leftarrow LB^{t}-LB^{B}$;
Update $Max_{2}\leftarrow max(Dist_{2})$;
if _$Max_{2} >0$_ then
Set $splitAttr\leftarrow findAttributeWithMaxDistance(Dist_{2},Max_{2})$;
set $SplitValue\leftarrow LB^{t}[splitAttr]$;
Create a node, $node\leftarrow CreateNode(splitAttr,SplitValue)$;
Add $t$ as the right child of $t^{\prime}$ ;
Add $node$ as the left child of $t^{\prime}$;
end if
end if
end
Step 3:
Return updated decision tree, $t^{\prime}$;
end
Algorithm 3 Procedure iSAT()
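The non-overlapping case of Algorithm 3 can be sketched as follows: the split attribute is the axis with the largest gap between the boundaries of $t$ and $D^{B}$, and the split value is the midpoint of that gap. This is an illustrative fragment; the class and method names are ours.

```java
// Sketch of the non-overlapping case of Algorithm 3 (iSAT): pick the
// attribute (axis) with the maximum boundary gap and split at its midpoint.
public class IsatSplit {
    // gap[d] = lower bound of one object minus upper bound of the other,
    // per attribute d (i.e. Dist_1 or Dist_2 in Algorithm 3).
    public static int splitAttribute(double[] gap) {
        int best = 0;
        for (int d = 1; d < gap.length; d++) {
            if (gap[d] > gap[best]) best = d;
        }
        return best;
    }

    // SplitValue = (LB[splitAttr] + UB[splitAttr]) / 2 on the chosen axis.
    public static double splitValue(double lowerOfOne, double upperOfOther) {
        return (lowerOfOne + upperOfOther) / 2.0;
    }
}
```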
### 3.3 Complexity Analysis of ADF
We now analyse the computational complexity of ADF. We consider a batch data
set with $n$ records and $m$ attributes. We also consider that the ensemble
size is $M$, where a tree can have $l$ leaves. Although ADF iteratively
updates the trees on each training batch using Algorithm 1, we present the
complexity analysis of the algorithm for a single iteration as follows.
In Step 1 of Algorithm 1, we build a decision forest by using an existing
algorithm such as RF [2] which has a complexity $O(Mnm^{2})$ [21]. In Step 2,
we find the number of perturbed leaves using Eq. 27 which has a complexity
$O(Mnml)$. In this step, we also repair the trees one by one by using the
RepairForest procedure (see Algorithm 2). For $M$ trees, the complexity of the
RepairForest() is $O(M(nm+\theta nm^{2}))\approx O(Mnm^{2})$. Thus, the
complexity of Step 2 of Algorithm 1 is $O(Mnml+Mnm^{2})\approx O(Mnm^{2})$ for
a small $l$. Step 3 of the algorithm has the same complexity.
In Step 4 of Algorithm 1, we update the window of batches that has a
complexity $O(\gamma{n})$. Similar to Step 1, Step 5 has a complexity
$O(M{n}m^{2})$ for a small $\gamma$. In Step 6, we either repair trees or
build new trees. The complexity of these operations is $O(Mnm^{2})$. Thus, the
overall complexity of Algorithm 1 is $O(Mnm^{2})$. This variant of ADF is
known as ADF-R.
ADF can also build a decision forest by using a decision tree algorithm such
as the Hoeffding Tree (HT) algorithm [12]. The complexity of the HT algorithm
is $O(nmlcv)$ [7], where $c$ is the number of class values and $v$ is the
maximum domain size of an attribute. Thus, for $M$ trees, ADF requires a
complexity $O(Mnmlcv)$. This variant of ADF is known as ADF-H. Typically, $c$,
$v$ and $l$ values are very small, especially compared to $n$. Therefore, the
complexity of ADF-H is $O(Mnm)$. We present the complexities of ADF variants,
CIRF, RF and ARF in Table 2.
Table 2: Complexity analysis. Proposed Method | Computational complexity | Existing Method | Computational complexity
---|---|---|---
ADF-R | $O(Mnm^{2})$ | RF | $O(Mnm^{2})$ [21]
| | CIRF | $O(Mnm^{2})$ [3]
ADF-H | $O(Mnm)$ | ARF | $O(Mnm(log(m)))$ [7]
## 4 Experimental Results and Discussion
We carry out a set of experiments to evaluate our proposed framework ADF. We
compare the performance of ADF with eight state-of-the-art machine learning
and incremental learning techniques, comprising two non-incremental forest
algorithms (namely RF [2] and SysFor [13]), two incremental tree algorithms
(namely Hoeffding Tree (HT) [12] and Hoeffding Adaptive Tree (HAT) [22]), and
four incremental forest algorithms (namely LeveragingBag [23], OzaBag [8],
CIRF [3] and ARF [7]).
We implement ADF in the Java programming language using the Massive Online
Analysis (MOA) API [24]. The source code of ADF is available at GitHub
(https://github.com/grahman20/ADF). We test the ADF framework by using one of
the three techniques: RF [2], HT [12] and SysFor [13], and thereby obtain
three variants called ADF-R, ADF-H and ADF-S, respectively. We also implement
an existing technique called CIRF [3]. For implementing the RF and SysFor, we
use the Java code from the Weka platform [25]. All other methods are already
available in the MOA framework. We use the default settings of the methods
while running the experiment.
### 4.1 Data Sets
We apply the incremental learning techniques on five real data sets and one
synthetic data set, as shown in Table 3. The real data sets are available
publicly in the UCI Repository [14].
Table 3: Data sets at a glance. Data set | Records | Attributes | Number of class values | Classification accuracy | Data source
---|---|---|---|---|---
LDPA | 164,860 | 6 | 11 | 56% | UCI
UIFWA | 149,332 | 5 | 22 | 40% | UCI
EB | 45,781 | 5 | 31 | 66% | UCI
AReM | 42,239 | 7 | 7 | 63% | UCI
Avila | 20,867 | 11 | 12 | 51% | UCI
House | 300,000 | 10 | 7 | 80% | Synthetic
### 4.2 Simulation of Training and Test Batch Data Sets
For each data set, we artificially create 34 training and test batch data sets
in which we implement the five scenarios, namely SKC, MKC, SUC, MUC and MKUC
as discussed in Section 3 and presented in Eq. (9) and in Fig. 3. The process
of simulating scenarios and batches is summarized in Table 4.
We first divide a data set into two sub data sets, where each sub data set
contains approximately half of the total records of the data set and
approximately half of the class values. From the first sub data set, we create
3 batches (numbered 9 to 11) that follow the SKC scenario, in which each batch
contains records having a single class value. Similarly, from the second sub
data set, we create 3 batches (numbered 12 to 14) that follow the SUC
scenario, as shown in Table 4.
We then build two decision trees: $T_{1}$ and $T_{2}$ by applying SysFor [13]
on the first and second sub data sets, respectively. Using $T_{1}$ and
$T_{2}$, we then create the remaining 28 batches where the source of records
of each batch is described in the third column of Table 4. For each batch, we
create training and test batch data sets, where the test batch contains 20% of
records in which 10% of records are taken from the current batch and the
remaining 10% of records are taken from the previous two batches. Note that
all records in the training and test batches are chosen randomly.
Table 4: Simulation of batch data sets. Scenario | Batches | Source of records
---|---|---
MKC-1 | 1-4 | | 25% of records are taken from the largest two leaves of $T_{1}$ and
---
75% of records are taken from the remaining leaves of $T_{1}$
MKC-2 | 5-8 | | 75% of records are taken from the largest two leaves of $T_{1}$ and
---
25% of records are taken from the remaining leaves of $T_{1}$
SKC | 9-11 | Each batch contains records (from the first sub data set) having a single class value
SUC | 12-14 | Each batch contains records (from the second sub data set) having a single class value
MUC-1 | 15-18 | | 25% of records are taken from the largest two leaves of $T_{2}$ and
---
75% of records are taken from the remaining leaves of $T_{2}$
MUC-2 | 19-22 | | 75% of records are taken from the largest two leaves of $T_{2}$ and
---
25% of records are taken from the remaining leaves of $T_{2}$
MKUC-1 | 23-26 | 75% of records are taken from $T_{1}$ and 25% of records are taken from $T_{2}$
MKUC-2 | 27-30 | 25% of records are taken from $T_{1}$ and 75% of records are taken from $T_{2}$
MKUC-3 | 31-34 | 50% of records are taken from $T_{1}$ and 50% of records are taken from $T_{2}$
### 4.3 Experimental Settings
In our experiment, we use a number of parameters for ADF variants and existing
techniques. Most of the techniques use two common parameters, namely ensemble
size and minimum leaf size (or grace period). The ensemble size is set to 10
and the minimum leaf size for large data sets (i.e. the size of the data set
is greater than 100000) is set to 100; otherwise it is set to 20. For ADF, we
set the default values for $\lambda$, $\theta$, $\epsilon$, and $\gamma$ at
3, 0.4, 0.02, and 3, respectively. Moreover, we use the majority voting [26]
method to calculate the final output of a decision forest.
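The majority voting [26] aggregation used to combine the trees of a forest can be sketched as follows; majority voting itself is standard, but the class and method names here are ours.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of majority voting: the final prediction of the forest is the
// class value that receives the most votes from the M trees.
public class MajorityVote {
    public static String predict(String[] treeVotes) {
        Map<String, Integer> counts = new HashMap<>();
        String best = treeVotes[0];
        for (String v : treeVotes) {
            int c = counts.merge(v, 1, Integer::sum);  // increment vote count
            if (c > counts.get(best)) best = v;        // track current winner
        }
        return best;
    }
}
```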
### 4.4 Detailed Experimental Results and Performance Analysis
We present the performances of ADF variants (ADF-H, ADF-R and ADF-S) and eight
existing techniques on 34 batches that are created from the LDPA data set in
Table 5. In the case of incremental methods, a classifier is built by applying
a method on the first batch training data and the classifier is repaired and
updated incrementally for the remaining batches. However, for non-incremental
methods, we rebuild the classifier for each batch training data. For both
types of methods, we obtain a classifier for batch training data
${D^{t^{i}}_{B_{train}}}$. We then use the classifier to classify batch test
data $D^{t^{i}}_{B_{test}}$ and obtain classification accuracy. Bold values in
the table indicate the best results. Out of 34 batches, ADF-S performs the
best in 25 batches.
For each method, we also calculate the average classification accuracy which
is shown in the last row of Table 5. The average classification accuracies of
ADF-R and ADF-S are 0.640 and 0.682, respectively. From the experimental
results, it is clear that ADF-R and ADF-S outperform other methods on the LDPA
data set.
Table 5: Classification performances of ADF variants and existing methods on 34 batches of LDPA data set. Scenario | Batch No. | Batch Size | Non-incremental Forest | Incremental Tree | Incremental Forest
---|---|---|---|---|---
Training Data | Test Data | SysFor | RF | HT | HAT | LeveragingBag | OzaBag | CIRF | ARF | ADF-H | ADF-R | ADF-S
MKC-1 | 1 | 2613 | 803 | 0.707 | 0.690 | 0.247 | 0.247 | 0.247 | 0.247 | 0.707 | 0.262 | 0.690 | 0.690 | 0.707
2 | 3216 | 802 | 0.766 | 0.717 | 0.069 | 0.069 | 0.070 | 0.069 | 0.743 | 0.080 | 0.721 | 0.722 | 0.749
3 | 3205 | 801 | 0.758 | 0.730 | 0.213 | 0.705 | 0.664 | 0.642 | 0.742 | 0.690 | 0.720 | 0.727 | 0.740
4 | 3194 | 801 | 0.782 | 0.754 | 0.738 | 0.738 | 0.705 | 0.705 | 0.744 | 0.729 | 0.738 | 0.742 | 0.770
MKC-2 | 5 | 3391 | 845 | 0.783 | 0.773 | 0.749 | 0.749 | 0.737 | 0.753 | 0.756 | 0.753 | 0.749 | 0.748 | 0.750
6 | 3391 | 845 | 0.807 | 0.806 | 0.263 | 0.175 | 0.174 | 0.392 | 0.793 | 0.180 | 0.775 | 0.776 | 0.786
7 | 3360 | 845 | 0.793 | 0.786 | 0.199 | 0.176 | 0.180 | 0.299 | 0.780 | 0.196 | 0.759 | 0.761 | 0.759
8 | 3329 | 845 | 0.808 | 0.792 | 0.683 | 0.142 | 0.062 | 0.657 | 0.798 | 0.063 | 0.765 | 0.765 | 0.774
SKC | 9 | 4001 | 968 | 0.885 | 0.885 | 0.885 | 0.885 | 0.885 | 0.861 | 0.882 | 0.885 | 0.885 | 0.886 | 0.883
10 | 4001 | 968 | 0.551 | 0.551 | 0.551 | 0.551 | 0.551 | 0.550 | 0.614 | 0.551 | 0.476 | 0.699 | 0.745
11 | 1898 | 484 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.502 | 0.310 | 0.500 | 0.625 | 0.635 | 0.644
SUC | 12 | 4080 | 968 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.210 | 0.500 | 0.458 | 0.600 | 0.655
13 | 296 | 169 | 0.503 | 0.503 | 0.503 | 0.503 | 0.503 | 0.503 | 0.271 | 0.503 | 0.541 | 0.611 | 0.612
14 | 3904 | 968 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.500 | 0.276 | 0.500 | 0.692 | 0.590 | 0.590
MUC-1 | 15 | 3685 | 920 | 0.450 | 0.340 | 0.180 | 0.251 | 0.274 | 0.180 | 0.213 | 0.302 | 0.398 | 0.416 | 0.489
16 | 3685 | 920 | 0.549 | 0.505 | 0.495 | 0.297 | 0.297 | 0.490 | 0.340 | 0.272 | 0.495 | 0.641 | 0.660
17 | 3682 | 920 | 0.572 | 0.502 | 0.067 | 0.096 | 0.105 | 0.072 | 0.404 | 0.097 | 0.386 | 0.514 | 0.638
18 | 3679 | 920 | 0.524 | 0.501 | 0.316 | 0.172 | 0.150 | 0.316 | 0.395 | 0.155 | 0.368 | 0.517 | 0.629
MUC-2 | 19 | 3746 | 933 | 0.660 | 0.548 | 0.403 | 0.251 | 0.170 | 0.403 | 0.467 | 0.195 | 0.432 | 0.592 | 0.702
20 | 3746 | 933 | 0.662 | 0.606 | 0.027 | 0.098 | 0.099 | 0.027 | 0.525 | 0.107 | 0.414 | 0.599 | 0.712
21 | 3685 | 933 | 0.663 | 0.581 | 0.323 | 0.297 | 0.273 | 0.323 | 0.531 | 0.287 | 0.402 | 0.640 | 0.735
22 | 3624 | 933 | 0.554 | 0.544 | 0.066 | 0.194 | 0.136 | 0.066 | 0.466 | 0.134 | 0.407 | 0.583 | 0.680
MKUC-1 | 23 | 4714 | 1177 | 0.466 | 0.371 | 0.295 | 0.295 | 0.302 | 0.295 | 0.453 | 0.299 | 0.394 | 0.604 | 0.641
24 | 4714 | 1177 | 0.556 | 0.472 | 0.427 | 0.427 | 0.130 | 0.427 | 0.452 | 0.134 | 0.469 | 0.637 | 0.659
25 | 4714 | 1177 | 0.520 | 0.577 | 0.104 | 0.104 | 0.110 | 0.104 | 0.500 | 0.098 | 0.579 | 0.663 | 0.693
26 | 4714 | 1177 | 0.605 | 0.578 | 0.557 | 0.557 | 0.524 | 0.557 | 0.488 | 0.555 | 0.590 | 0.664 | 0.692
MKUC-2 | 27 | 4704 | 1176 | 0.554 | 0.418 | 0.177 | 0.177 | 0.253 | 0.177 | 0.508 | 0.221 | 0.413 | 0.603 | 0.671
28 | 4704 | 1176 | 0.561 | 0.429 | 0.026 | 0.026 | 0.103 | 0.026 | 0.530 | 0.131 | 0.420 | 0.599 | 0.655
29 | 4691 | 1176 | 0.558 | 0.442 | 0.016 | 0.110 | 0.119 | 0.016 | 0.431 | 0.102 | 0.329 | 0.557 | 0.625
30 | 4683 | 1176 | 0.554 | 0.389 | 0.168 | 0.201 | 0.159 | 0.168 | 0.500 | 0.143 | 0.317 | 0.566 | 0.607
MKUC-3 | 31 | 4920 | 1229 | 0.579 | 0.407 | 0.234 | 0.150 | 0.136 | 0.234 | 0.530 | 0.138 | 0.362 | 0.599 | 0.632
32 | 4811 | 1209 | 0.560 | 0.392 | 0.285 | 0.037 | 0.070 | 0.285 | 0.526 | 0.074 | 0.383 | 0.596 | 0.634
33 | 5338 | 1256 | 0.569 | 0.450 | 0.171 | 0.160 | 0.104 | 0.171 | 0.522 | 0.102 | 0.410 | 0.629 | 0.651
34 | 5814 | 1292 | 0.557 | 0.484 | 0.106 | 0.135 | 0.098 | 0.106 | 0.480 | 0.101 | 0.385 | 0.594 | 0.629
Average | 0.615 | 0.559 | 0.325 | 0.308 | 0.291 | 0.342 | 0.526 | 0.295 | 0.528 | 0.640 | 0.682
Similar to LDPA, for the remaining five data sets the ADF variants achieve
higher classification accuracies than the existing techniques. Due to space
limitations, we present the average classification accuracies (see Table 6) and
execution times (see Table 7) of the ADF variants and eight existing
techniques on six data sets. Bold values in the tables indicate the best
results. From Table 6, we can see that ADF-R and ADF-S outperform the other
techniques in terms of classification accuracy for all data sets. However,
while the ADF variants perform better than the other techniques, they take
slightly more time than ARF, CIRF, RF and SysFor. As accurate classification
is the primary goal in real applications involving dynamic changes in both
class and data distributions, ADF-R can be recommended over the eight
state-of-the-art techniques when considering both classification accuracy and
execution time.
Table 6: Overall classification accuracy of ADF variants and other existing methods on all data sets. Data set | Non-incremental Forest | Incremental Tree | Incremental Forest
---|---|---|---
SysFor | RF | HT | HAT | LeveragingBag | OzaBag | CIRF | ARF | ADF-H | ADF-R | ADF-S
LDPA | 0.615 | 0.559 | 0.325 | 0.308 | 0.291 | 0.342 | 0.526 | 0.295 | 0.528 | 0.640 | 0.682
UIFWA | 0.484 | 0.504 | 0.186 | 0.191 | 0.195 | 0.182 | 0.393 | 0.205 | 0.416 | 0.591 | 0.550
EB | 0.627 | 0.619 | 0.407 | 0.315 | 0.343 | 0.409 | 0.629 | 0.516 | 0.626 | 0.722 | 0.705
AReM | 0.714 | 0.729 | 0.327 | 0.311 | 0.285 | 0.346 | 0.650 | 0.289 | 0.522 | 0.838 | 0.818
Avila | 0.685 | 0.719 | 0.382 | 0.376 | 0.328 | 0.395 | 0.643 | 0.350 | 0.582 | 0.788 | 0.825
House | 0.801 | 0.802 | 0.390 | 0.339 | 0.354 | 0.402 | 0.720 | 0.351 | 0.853 | 0.860 | 0.867
Table 7: Overall execution time (ms) of ADF variants and other existing methods on all data sets. Data set | Non-incremental Forest | Incremental Tree | Incremental Forest
---|---|---|---
SysFor | RF | HT | HAT | LeveragingBag | OzaBag | CIRF | ARF | ADF-H | ADF-R | ADF-S
LDPA | 2651 | 1791 | 20 | 30 | 281 | 103 | 2062 | 938 | 1415 | 2901 | 3589
UIFWA | 1591 | 1584 | 70 | 185 | 369 | 657 | 1785 | 1149 | 1226 | 3287 | 3528
EB | 511 | 500 | 44 | 45 | 102 | 82 | 444 | 710 | 960 | 1357 | 1426
AReM | 790 | 427 | 62 | 103 | 363 | 137 | 929 | 477 | 654 | 959 | 1011
Avila | 288 | 222 | 28 | 37 | 157 | 70 | 808 | 424 | 506 | 643 | 917
House | 2673 | 2829 | 27 | 65 | 350 | 167 | 2292 | 1211 | 2135 | 3184 | 3395
To demonstrate the effectiveness of the ADF framework, we also carry out
experiments in which the scenarios of the batches on LDPA and UIFWA are
rearranged. In the new arrangement, there are 28 batches, with the training
and test batches created randomly. Table 8 presents the details of the
rearranged scenarios and the classification accuracies of the ADF variants and
existing techniques on the LDPA data set. Bold values in the table indicate
the best results; ADF-S performs the best in all batches.
Table 8: Accuracies of ADF variants and existing methods by rearranging the scenarios of the batches on LDPA data set. Scenario | Batch No. | Batch Size | Non-incremental Forest | Incremental Tree | Incremental Forest
---|---|---|---|---|---
Training Data | Test Data | SysFor | RF | HT | HAT | LeveragingBag | OzaBag | CIRF | ARF | ADF-H | ADF-R | ADF-S
MKC-1 | 1 | 2981 | 917 | 0.662 | 0.614 | 0.325 | 0.325 | 0.320 | 0.310 | 0.662 | 0.323 | 0.578 | 0.614 | 0.662
2 | 3669 | 916 | 0.689 | 0.624 | 0.107 | 0.107 | 0.180 | 0.218 | 0.664 | 0.175 | 0.592 | 0.652 | 0.689
3 | 3641 | 916 | 0.663 | 0.634 | 0.162 | 0.097 | 0.121 | 0.154 | 0.669 | 0.119 | 0.615 | 0.652 | 0.691
4 | 3613 | 916 | 0.638 | 0.549 | 0.393 | 0.540 | 0.512 | 0.400 | 0.632 | 0.550 | 0.541 | 0.607 | 0.662
MKC-2 | 5 | 4130 | 1029 | 0.739 | 0.704 | 0.335 | 0.227 | 0.230 | 0.333 | 0.731 | 0.225 | 0.702 | 0.725 | 0.758
6 | 4130 | 1029 | 0.765 | 0.758 | 0.732 | 0.756 | 0.756 | 0.703 | 0.781 | 0.750 | 0.756 | 0.765 | 0.786
7 | 4047 | 1029 | 0.794 | 0.777 | 0.511 | 0.052 | 0.056 | 0.511 | 0.811 | 0.055 | 0.777 | 0.791 | 0.814
8 | 3964 | 1029 | 0.794 | 0.779 | 0.751 | 0.779 | 0.779 | 0.724 | 0.796 | 0.779 | 0.780 | 0.787 | 0.810
MKUC-1 | 9 | 5457 | 1361 | 0.700 | 0.670 | 0.654 | 0.654 | 0.647 | 0.642 | 0.691 | 0.630 | 0.656 | 0.695 | 0.724
10 | 5457 | 1361 | 0.652 | 0.604 | 0.079 | 0.061 | 0.062 | 0.063 | 0.622 | 0.065 | 0.588 | 0.646 | 0.660
11 | 5457 | 1361 | 0.616 | 0.555 | 0.151 | 0.096 | 0.096 | 0.140 | 0.552 | 0.086 | 0.531 | 0.589 | 0.644
12 | 5457 | 1361 | 0.531 | 0.588 | 0.042 | 0.191 | 0.181 | 0.150 | 0.597 | 0.182 | 0.553 | 0.620 | 0.652
MKUC-2 | 13 | 5456 | 1361 | 0.526 | 0.482 | 0.172 | 0.159 | 0.080 | 0.133 | 0.491 | 0.097 | 0.357 | 0.478 | 0.589
14 | 5456 | 1361 | 0.582 | 0.445 | 0.258 | 0.202 | 0.217 | 0.247 | 0.453 | 0.238 | 0.266 | 0.525 | 0.600
15 | 5534 | 1361 | 0.580 | 0.519 | 0.155 | 0.244 | 0.055 | 0.149 | 0.425 | 0.265 | 0.285 | 0.525 | 0.609
16 | 5612 | 1361 | 0.589 | 0.506 | 0.179 | 0.245 | 0.051 | 0.195 | 0.405 | 0.259 | 0.300 | 0.544 | 0.598
MUC-1 | 17 | 4207 | 1049 | 0.557 | 0.459 | 0.214 | 0.046 | 0.079 | 0.187 | 0.410 | 0.269 | 0.341 | 0.582 | 0.605
18 | 4207 | 1049 | 0.594 | 0.470 | 0.331 | 0.097 | 0.152 | 0.291 | 0.396 | 0.131 | 0.380 | 0.583 | 0.605
19 | 4205 | 1049 | 0.520 | 0.501 | 0.362 | 0.238 | 0.310 | 0.308 | 0.418 | 0.384 | 0.425 | 0.609 | 0.619
20 | 4203 | 1049 | 0.575 | 0.447 | 0.417 | 0.421 | 0.409 | 0.416 | 0.404 | 0.407 | 0.410 | 0.624 | 0.628
MUC-2 | 21 | 4230 | 1056 | 0.631 | 0.618 | 0.342 | 0.303 | 0.086 | 0.331 | 0.517 | 0.080 | 0.398 | 0.634 | 0.667
22 | 4230 | 1056 | 0.696 | 0.599 | 0.277 | 0.325 | 0.147 | 0.270 | 0.564 | 0.149 | 0.402 | 0.666 | 0.726
23 | 4136 | 1056 | 0.632 | 0.563 | 0.334 | 0.186 | 0.252 | 0.363 | 0.578 | 0.272 | 0.386 | 0.677 | 0.728
24 | 4047 | 1056 | 0.635 | 0.656 | 0.349 | 0.194 | 0.325 | 0.323 | 0.580 | 0.281 | 0.366 | 0.671 | 0.728
MKUC-3 | 25 | 5740 | 1433 | 0.581 | 0.479 | 0.266 | 0.278 | 0.227 | 0.262 | 0.518 | 0.207 | 0.293 | 0.617 | 0.639
26 | 5618 | 1413 | 0.580 | 0.402 | 0.233 | 0.246 | 0.137 | 0.238 | 0.524 | 0.118 | 0.346 | 0.620 | 0.640
27 | 6242 | 1469 | 0.543 | 0.447 | 0.280 | 0.212 | 0.170 | 0.251 | 0.500 | 0.191 | 0.372 | 0.611 | 0.630
28 | 6817 | 1513 | 0.538 | 0.484 | 0.231 | 0.295 | 0.264 | 0.293 | 0.499 | 0.249 | 0.398 | 0.601 | 0.606
Average | 0.629 | 0.569 | 0.309 | 0.271 | 0.246 | 0.307 | 0.568 | 0.269 | 0.478 | 0.633 | 0.670
Table 9 presents the overall accuracies of the ADF variants and other existing
methods on the LDPA and UIFWA data sets after rearranging the scenarios of the
batches. It is clear that ADF-R and ADF-S also outperform the other techniques
in terms of classification accuracy under the rearranged scenarios.
Table 9: Overall accuracies of ADF variants and other methods on LDPA and UIFWA data sets by rearranging the scenarios. Data set | Non-incremental Forest | Incremental Tree | Incremental Forest
---|---|---|---
SysFor | RF | HT | HAT | LeveragingBag | OzaBag | CIRF | ARF | ADF-H | ADF-R | ADF-S
LDPA | 0.629 | 0.569 | 0.309 | 0.271 | 0.246 | 0.307 | 0.568 | 0.269 | 0.478 | 0.633 | 0.670
UIFWA | 0.493 | 0.518 | 0.114 | 0.092 | 0.112 | 0.119 | 0.409 | 0.117 | 0.413 | 0.565 | 0.540
### 4.5 Statistical Analysis of the Experimental Results
We now analyze the results using the non-parametric sign test [27] and the
Nemenyi test [28] over all 34 batches to evaluate the statistical significance
of the superiority of the ADF variants over the existing methods.
We carry out a right-tailed sign test comparing ADF-S with each of the other
methods, one by one, at significance level $\alpha=0.025$ (i.e. 97.5%
confidence), as shown in Fig. 13. In the figure, each bar represents the
z-value of the comparison between ADF-S and an existing method, and the
horizontal line represents the reference z-value obtained from a standard
table [27]. The sign test results (see Fig. 13) indicate that ADF-S performs
significantly better than the other methods (at $z>1.96$, $p<0.025$) on all
data sets. The cases where the performance of ADF-S is not significantly
superior are marked with down arrows on top of the bars. We obtain similar
results for ADF-R.
Figure 13: Statistical analysis based on sign test on all datasets.
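The z-values plotted in Fig. 13 follow the sign test's normal approximation.
A minimal sketch of that computation is given below; the win count of 33 out
of 34 batches is a hypothetical illustration, not a result reported here:

```python
import math

def sign_test_z(wins, n):
    """Right-tailed sign test z-value under the normal approximation,
    with a continuity correction; wins = number of batches in which one
    method beats the other out of n paired comparisons."""
    return (wins - 0.5 - n / 2.0) / (0.5 * math.sqrt(n))

# hypothetical example: one method wins 33 of the 34 batches
z = sign_test_z(33, 34)
significant = z > 1.96  # alpha = 0.025, right-tailed
```

The reference value 1.96 corresponds to $\alpha=0.025$ in the right tail of
the standard normal distribution.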
We also carry out the right-tailed Nemenyi test comparing ADF-S with each of
the other methods, one by one, at significance level $\alpha=0.025$ (i.e.
97.5% confidence). The Nemenyi test results indicate that both ADF-S and ADF-R
perform significantly better than the other techniques (at $p<0.025$) on all
data sets.
### 4.6 Experimentation in Handling Big data using ADF
ADF can handle big data when the data can be divided into batches. For this
task, we consider two big data sets, LDPA and UIFWA (see Table 3).
For the LDPA data set, we create a training set containing 80% of the records,
selected randomly, and a test set containing the remaining records. To
calculate the accuracy of the benchmark method, we build a decision tree by
applying SysFor [13] to the training set and use the tree to classify the test
records. The accuracy of the benchmark method on the LDPA data set is 65.86%,
as shown in Table 10. For ADF, we create 34 equal-sized batches by randomly
drawing records from the LDPA training set. We build a decision tree by
applying SysFor to Batch 1; the tree is then updated incrementally over the
remaining 33 batches, and the final tree is used to classify the test records.
The classification accuracy of ADF on the LDPA data set is 61.88% (see Table
10). We also calculate the accuracy of CIRF on the LDPA data set, which is
53.77%. Similarly, for the UIFWA data set, the accuracies of the benchmark
method, CIRF and ADF are 45.90%, 35.77% and 40.91%, respectively, as reported
in Table 10. For both data sets, ADF performs better than CIRF and comes close
to the benchmark method. These results on two data sets indicate the
effectiveness of ADF over CIRF for handling big data.
Table 10: Classification accuracy of ADF, CIRF and Benchmark methods on two big data sets. Data set | Benchmark (SysFor) | CIRF | ADF | Data set | Benchmark (SysFor) | CIRF | ADF
---|---|---|---|---|---|---|---
LDPA | 65.86% | 53.77% | 61.88% | UIFWA | 45.09% | 35.77% | 40.91%
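The batch-wise protocol above (initial training on Batch 1, incremental
updates on the remaining batches, final classification of the held-out test
records) can be sketched with a toy incremental learner. The majority-class
model here is a hypothetical stand-in for ADF, used only to show the structure
of the harness:

```python
from collections import Counter

class MajorityClassModel:
    """Toy incremental learner: predicts the most frequent class seen so far."""
    def __init__(self):
        self.counts = Counter()

    def partial_fit(self, labels):
        self.counts.update(labels)

    def predict(self):
        return self.counts.most_common(1)[0][0]

def run_batches(batches, test_labels):
    model = MajorityClassModel()
    model.partial_fit(batches[0])      # initial training on Batch 1
    for batch in batches[1:]:          # incremental updates on later batches
        model.partial_fit(batch)
    pred = model.predict()
    return sum(1 for y in test_labels if y == pred) / len(test_labels)
```

The same harness shape applies to any learner with an incremental update step;
only the model class changes.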
## 5 Conclusion and future work
This paper introduced ADF, an incremental machine learning framework, which
produces a decision forest to classify new data. ADF can learn from new data
that arrive as batches over time and the data can have new classes. As
accurate classification is the primary goal in real applications, we argue
that an incremental decision forest algorithm can achieve high accuracy based
on three factors: identification of the best split, determination of trees
that need to be repaired, and identification and management of concept drift.
Based on our two novel theorems (see Theorem 2 and Theorem 3), we introduce a
novel splitting strategy called iSAT (see Section 3.1.1), which can find the
best split for new batch data. We also introduce a repairable strategy (see
Section 3.1.2) to find trees that need to be repaired. Moreover, we build a
set of forests (see Section 3.1.3) to identify and handle concept drift while
preserving previously acquired knowledge.
The effectiveness of ADF is also reflected in the experimental results. In the
ADF framework, we build decision forests by using one of three techniques:
RF [2], HT [12] and SysFor [13], thereby obtaining three variants called
ADF-R, ADF-H and ADF-S, respectively. We evaluate the ADF variants on five
publicly available natural data sets [14] and one synthetic data set by
comparing their performance with that of eight state-of-the-art techniques
including HT [12], CIRF [3], and ARF [7]. From Table 6, we can see that ADF-R
and ADF-S outperform the other techniques in terms of classification accuracy
on all data sets, while requiring comparable execution time. Statistical sign
test and Nemenyi test results (see Fig. 13) indicate that ADF-R and ADF-S
perform significantly better than the other methods (at $z>1.96$, $p<0.025$)
on all data sets, with one exception.
Our initial experiments on two data sets (see Table 10) indicate that ADF is
also applicable to big data applications where the data can be divided into
batches. In future work, we plan to explore the applicability of ADF to big
data applications where the data cannot be divided into batches.
## References
* [1] J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, A. Bouchachia, A survey on concept drift adaptation, ACM computing surveys (CSUR) 46 (4) (2014) 44.
* [2] L. Breiman, Random forests, Machine learning 45 (1) (2001) 5–32.
* [3] C. Hu, Y. Chen, L. Hu, X. Peng, A novel random forests based class incremental learning method for activity recognition, Pattern Recognition 78 (2018) 277–290.
* [4] C. Hu, Y. Chen, X. Peng, H. Yu, C. Gao, L. Hu, A novel feature incremental learning method for sensor-based activity recognition, IEEE Transactions on Knowledge and Data Engineering 31 (6) (2018) 1038–1050.
* [5] M. Ristin, M. Guillaumin, J. Gall, L. Van Gool, Incremental learning of random forests for large-scale image classification, IEEE transactions on pattern analysis and machine intelligence 38 (3) (2015) 490–503.
* [6] T. Mensink, J. Verbeek, F. Perronnin, G. Csurka, Distance-based image classification: Generalizing to new classes at near-zero cost, IEEE transactions on pattern analysis and machine intelligence 35 (11) (2013) 2624–2637.
* [7] H. M. Gomes, A. Bifet, J. Read, J. P. Barddal, F. Enembreck, B. Pfharinger, G. Holmes, T. Abdessalem, Adaptive random forests for evolving data stream classification, Machine Learning 106 (9-10) (2017) 1469–1495.
* [8] N. C. Oza, Online bagging and boosting, in: 2005 IEEE international conference on systems, man and cybernetics, Vol. 3, IEEE, 2005, pp. 2340–2345.
* [9] M. Ristin, M. Guillaumin, J. Gall, L. Van Gool, Incremental learning of ncm forests for large-scale image classification, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 3654–3661.
* [10] J. Arvo, Transforming axis-aligned bounding boxes, in: Graphics gems, Academic Press Professional, Inc., 1990, pp. 548–550.
* [11] S. Gottschalk, Separating axis theorem, Tech. rep., Technical Report TR96-024, Department of Computer Science, UNC Chapel Hill (1996).
* [12] P. Domingos, G. Hulten, Mining high-speed data streams, in: Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining, 2000, pp. 71–80.
* [13] M. Z. Islam, H. Giggins, Knowledge discovery through sysfor: a systematically developed forest of multiple decision trees, in: Proceedings of the Ninth Australasian Data Mining Conference-Volume 121, Australian Computer Society, Inc., 2011, pp. 195–204.
* [14] A. Frank, A. Asuncion, UCI machine learning repository, accessed August 25, 2020 (2010).
* [15] X. Liu, G. Zhang, Y. Zhan, E. Zhu, An incremental feature learning algorithm based on least square support vector machine, in: International Workshop on Frontiers in Algorithmics, Springer, 2008, pp. 330–338.
* [16] J. A. Suykens, J. Vandewalle, Least squares support vector machine classifiers, Neural processing letters 9 (3) (1999) 293–300.
* [17] H. He, S. Chen, K. Li, X. Xu, Incremental learning from stream data, IEEE Transactions on Neural Networks 22 (12) (2011) 1901–1914.
* [18] J. R. Quinlan, Induction of decision trees, Machine learning 1 (1) (1986) 81–106.
* [19] L. Breiman, J. Friedman, R. Olshen, C. Stone, Classification and Regression Trees, CRC Press, Boca Raton, Florida, 1984.
* [20] S. Boyd, L. Vandenberghe, Convex optimization, Cambridge University Press, 2004.
* [21] J. Su, H. Zhang, A fast decision tree learning algorithm, in: Proceedings of the National Conference on Artificial Intelligence, Vol. 21, AAAI Press, 2006, p. 500.
* [22] A. Bifet, R. Gavaldà, Adaptive learning from evolving data streams, in: International Symposium on Intelligent Data Analysis, Springer, 2009, pp. 249–260.
* [23] A. Bifet, G. Holmes, B. Pfahringer, Leveraging bagging for evolving data streams, in: Joint European conference on machine learning and knowledge discovery in databases, Springer, 2010, pp. 135–150.
* [24] A. Bifet, G. Holmes, R. Kirkby, B. Pfahringer, MOA: massive online analysis, J. Mach. Learn. Res. 11 (2010) 1601–1604.
* [25] E. Frank, M. A. Hall, I. H. Witten, The weka workbench. online appendix for ”data mining: Practical machine learning tools and techniques” (2016).
* [26] N. Jamali, C. Sammut, Majority voting: Material classification by tactile sensing using surface texture, IEEE Transactions on Robotics 27 (3) (2011) 508–521.
* [27] R. D. Mason, D. A. Lind, W. G. Marchal, Statistics: an introduction, Duxbury Press, 1994.
* [28] J. Demšar, Statistical comparisons of classifiers over multiple data sets, Journal of Machine learning research 7 (Jan) (2006) 1–30.
# A holistic approach to computing first-arrival traveltimes using neural
networks
Umair bin Waheed<EMAIL_ADDRESS>(Department of Geosciences, King Fahd
University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia); Tariq
Alkhalifah (Physical Sciences and Engineering Division, King Abdullah
University of Science and Technology, Thuwal 23955, Saudi Arabia); Ehsan
Haghighat (Department of Civil Engineering, Massachusetts Institute of
Technology, MA 02139, USA); Chao Song
###### Abstract
Since the original algorithm by John Vidale in 1988 to numerically solve the
isotropic eikonal equation, there has been tremendous progress on the topic
addressing an array of challenges, including improvement of the solution
accuracy, incorporation of surface topography, adding more accurate physics by
accounting for anisotropy/attenuation in the medium, and speeding up
computations using multiple CPUs and GPUs. Despite these advances, these
algorithms have no mechanism to carry information gained by solving one
problem over to the next. Moreover, these approaches may break down for
certain complex forms of the eikonal equation, requiring simplification of the
equations to estimate approximate solutions. Therefore, we seek an alternate
approach that addresses the challenge in a holistic manner, i.e., a method
that not only makes it simpler to incorporate topography and account for any
level of complexity in the physics, while benefiting from the computational
speedup offered by multiple CPUs or GPUs, but is also able to transfer
knowledge gained from solving one problem to the next. We develop an algorithm
based on
the emerging paradigm of physics-informed neural network to solve various
forms of the eikonal equation. We show how transfer learning and surrogate
modeling can be used to speed up computations by utilizing information gained
from prior solutions. We also propose a two-stage optimization scheme to
expedite the training process in the presence of sharper heterogeneity in the
velocity model and recommend using a locally adaptive activation function for
faster convergence. Furthermore, we demonstrate how the proposed approach
makes it simpler to incorporate additional physics and other features in
contrast to conventional methods that took years and often decades to make
these advances. Such an approach not only makes the implementation of eikonal
solvers much simpler but also puts us on a much faster path to progress. The
method paves the pathway to solving complex forms of the eikonal equation that
have remained unsolved using conventional algorithms or solved using some
approximation techniques at best; thereby, creating new possibilities for
advancement in the field of numerical eikonal solvers.
###### keywords:
Eikonal equation , anisotropy , traveltimes , neural networks , scientific
machine learning
††journal: Elsevier
## 1 Introduction
The eikonal equation is a nonlinear partial differential equation (PDE)
obtained from the first term of the Wentzel-Kramers-Brillouin expansion of the
wave equation and represents a class of Hamilton-Jacobi equations [1]. It
finds applications in multiple domains of science and engineering, including
image processing [2], robotic path planning and navigation [3], computer
graphics [4], and semi-conductor manufacturing [5]. In seismology, it is used
to compute first-arrival traveltimes, which are necessary for the success of a
wide range of seismic processing and imaging tools including statics and
moveout correction [6], traveltime tomography for initial velocity model
building [7, 8], microseismic source localization [9], and ray-based migration
[10]. Ray tracing and finite-difference based solutions of the eikonal
equation are the most popular approaches for computing traveltimes.
Ray tracing methods compute traveltimes along the characteristics of the
eikonal equation by solving a system of ordinary differential equations [11].
The approach is generally efficient for a sparse source-receiver geometry, but
the computational cost increases dramatically with the increase in the number
of source-receiver pairs. Moreover, for practical applications such as imaging
and velocity model building, traveltime solutions need to be interpolated onto
a regular grid. This requirement not only adds to the computational cost of
the method but also poses a challenge, particularly in complex media where
rays may diverge from one another, leading to large spatial gaps between rays,
creating regions known as shadow zones [12]. Additionally, in strongly varying
velocity models, multiple ray-paths may connect a source-receiver pair, making
it easy to miss the path with the minimum traveltime. Therefore, the numerical
solution of the eikonal equation has been a topic of continued research
interest over the years.
Vidale [13] led the development of numerical eikonal solvers by proposing an
expanding box strategy to compute first-arrival traveltimes in heterogeneous
media. Subsequently, the method was improved and extended to three dimensions
[12], to incorporate anisotropy [14, 15], and to high-order accurate solutions
[16]. The instability of the expanding box method due to turning rays led to
the development of the expanding wavefront scheme [17]. This was further
improved to obtain maximum energy traveltimes [18], and to incorporate
anisotropy in the model [19].
Another algorithm that became popular during the late 1990s was the fast
marching method [20]. The popularity of the method was due to its accuracy,
stability, and efficiency properties. The fast marching method saw great
interest and development in the subsequent period. This included extension of
the method to improve traveltime accuracy [21, 22, 23], incorporating
anisotropy [24, 25, 26], parallelization for computational speedup using
multiple CPUs [27], and even acceleration using GPUs [28].
Despite its success, the fast marching method has been overtaken in popularity
by the fast sweeping method [29] since the mid-2000s. This was mainly due
flexibility and robustness of the fast sweeping method to various forms of the
eikonal equation. Numerous advances to the fast sweeping method have since
been proposed to improve the accuracy of the method [30, 31], to incorporate
anisotropy [32, 33, 34, 35], to account for attenuation [36], to tackle
surface topography [37], and parallelization for computational speedup [38,
39].
Several other hybrid strategies have also been proposed to solve the eikonal
equation. For a detailed review of these methods, we refer the interested
reader to [40].
In light of these developments, it is beyond doubt that there has been
tremendous progress since the original eikonal solver by Vidale [13]. This
huge and growing body of literature, spanning over three decades, on the
numerical solution of the eikonal equation, required significant research
efforts to address an array of challenges, including improvement of the
solution accuracy, incorporation of surface topography, adding more accurate
physics by accounting for anisotropy/attenuation in the medium, and speeding
up computations by using multiple CPUs and GPUs. Therefore, we seek an
alternate approach that addresses these challenges in a holistic manner: a
method that makes it simpler to incorporate topography, allows accounting for
more accurate physics, and benefits from computational speedup due to the
availability of multiple CPUs or GPUs. Such an approach would not only make
the implementation of eikonal solvers much simpler but also put us on a much
faster path to progress in solving complex forms of the eikonal equation.
Furthermore, a major drawback of the conventional eikonal solvers is that
there is no mechanism to utilize the information gained by solving one problem
to the next. Therefore, the same amount of computational effort is needed even
for a small perturbation in the source position and/or the velocity model.
This can lead to a computational bottleneck, particularly for
imaging/inversion applications that require repeated computations, often with
thousands of source positions and multiple updated velocity models. Therefore,
a method that could use information gained from one solution to the next to
speed up computations can potentially remedy this situation. With these
objectives in mind, we look into the machine learning literature for
inspiration.
Having shown remarkable success across multiple research domains [41], machine
learning has recently shown promise in tackling problems in scientific
computing. The idea to use an artificial neural network for solving PDEs has
been around since the 1990s [42]. However, due to recent advances in the
theory of deep learning coupled with a massive increase in computational power
and efficient graph-based implementation of new algorithms and automatic
differentiation, we are witnessing a resurgence of interest in using neural
networks to approximate solutions of PDEs.
Recently, Raissi et al. [43] developed a deep learning framework for the
solution and discovery of PDEs. The so-called physics-informed neural network
(PINN) leverages the capabilities of deep neural networks (DNNs) as universal
function approximators. Contrary to the conventional deep learning approaches,
PINNs restrict the space of admissible solutions by enforcing the validity of
the underlying PDE governing the actual physics of the problem. This is
achieved by using a simple feed-forward neural network leveraging automatic
differentiation (AD) to compute the differential variables in the PDE. It is
worth noting that PINNs do not require _labeled data_ to learn the mapping
between inputs and outputs; rather, learning is facilitated through a loss
function formed from the underlying PDE. PINNs have already demonstrated
success in solving forward and inverse problems in geophysics [44, 45, 46, 47,
48]. Unlike classical discretization-based methods, PINNs offer a unified
framework that can readily be used for both data-driven and PDE-based forward
and inverse solutions of eikonal equations by incorporating different terms in
the loss function.
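The automatic differentiation underlying PINNs can be illustrated with a
minimal forward-mode sketch using dual numbers. This toy is our own
illustration (deep learning frameworks use reverse-mode AD over computational
graphs), but it shows the key property: derivatives of the network output with
respect to its inputs are computed exactly, not by finite differences:

```python
import math

class Dual:
    """Minimal forward-mode dual number carrying a value and a derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(float(other))

    def __add__(self, other):
        o = self._wrap(other)
        return Dual(self.val + o.val, self.dot + o.dot)

    __radd__ = __add__

    def __mul__(self, other):
        o = self._wrap(other)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)

    __rmul__ = __mul__

    def sqrt(self):
        r = math.sqrt(self.val)
        return Dual(r, self.dot / (2.0 * r))

def traveltime(x, z, v):
    """T(x, z) = sqrt(x^2 + z^2) / v for a homogeneous medium, source at origin."""
    return (x * x + z * z).sqrt() * (1.0 / v)

# seed dx/dx = 1 to get dT/dx; analytically dT/dx = x / (v * sqrt(x^2 + z^2))
x, z = Dual(3.0, 1.0), Dual(4.0, 0.0)
T = traveltime(x, z, 2.0)  # T.val = 2.5, T.dot = 3 / (2 * 5) = 0.3
```

In a PINN, the same mechanism supplies the spatial derivatives of the
network's traveltime output that enter the eikonal residual in the loss.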
In this chapter, we present a neural network approach to solve various forms
of the eikonal equation. We use the PINN framework, where the governing
equation is incorporated into the loss function of the neural network. We also
show how the proposed method addresses the highlighted challenges compared to
conventional algorithms. Specifically, we show that by simply updating the
loss function of the neural network, we can account for more accurate physics
in the traveltime solution. Moreover, since the proposed method is mesh-free,
we will observe that to incorporate topography, no special treatment is needed
as opposed to conventional finite-difference methods. In addition, the use of
computational graphs allows us to run the same piece of code on different
platforms (CPUs, GPUs) and architectures (desktops, clusters) without worrying
about the implementation details. Most importantly, the proposed method allows
us to use information gained while solving for a particular source position
and velocity model to speed up computations for perturbations in the velocity
model and/or source position. We demonstrate this aspect through the use of
machine learning techniques like transfer learning and surrogate modeling.
The rest of the chapter is organized as follows: We begin by presenting the
theoretical foundations of the proposed method and discuss how it can be used
to solve more complex forms of the eikonal equation. Next, we test the method
on a diverse set of 2D and 3D benchmark synthetic models and compare its
performance with the popular fast sweeping method. Finally, we conclude the
chapter by discussing the strengths of the method and identifying future
research opportunities.
## 2 Theory
In this section, we describe how neural networks can be used to compute
traveltime solutions for eikonal equations corresponding to isotropic and
anisotropic media. We do so by first introducing the different forms of the
eikonal equation and the concept of factorization. Next, we outline the
general mechanism of a feed-forward neural network followed by its capability
as a function approximator. This is followed by a brief overview of the
concept of automatic differentiation, which is used to compute the derivative
of the networks’ output with respect to the inputs. Finally, putting these
concepts together, we will present the proposed algorithm for solving various
forms of the eikonal equation.
### 2.1 Eikonal equations
The eikonal equation is a non-linear first-order PDE that is, for an isotropic
medium, given as:
$\left(\frac{\partial T}{\partial x}\right)^{2}+\left(\frac{\partial
T}{\partial z}\right)^{2}=\frac{1}{v(x,z)^{2}},$ (1)
subject to a point-source boundary condition as:
$T(x_{s},z_{s})=0,$ (2)
where $T(x,z)$ is the traveltime from the source point $(x_{s},z_{s})$ to a
point $(x,z)$ in the computational domain, and $v(x,z)$ is the phase velocity
of the isotropic medium. Since the curvature of the wavefront near the point-
source is extremely large, previous studies [31, 49] have shown that it is
better to solve the factored eikonal equation instead of equation (1). The
idea is to factor the unknown traveltime into two multiplicative factors,
where one of the factors is specified analytically to capture the source-
singularity such that the other factor is gently varying in the source
neighborhood. Therefore, we factor $T(x,z)$ into two multiplicative functions:
$T(x,z)=T_{0}(x,z)\cdot\tau(x,z),$ (3)
where $T_{0}(x,z)$ is the known function and $\tau(x,z)$ is the unknown
function. Plugging this into equation (1), we get the factored eikonal
equation for an isotropic model as:
$\left(T_{0}\frac{\partial\tau}{\partial x}+\tau\frac{\partial T_{0}}{\partial
x}\right)^{2}+\left(T_{0}\frac{\partial\tau}{\partial z}+\tau\frac{\partial
T_{0}}{\partial z}\right)^{2}=\frac{1}{v^{2}},$ (4)
subject to the updated point-source condition:
$\tau(x_{s},z_{s})=1.$ (5)
The known factor $T_{0}$ is the traveltime solution in a homogeneous isotropic
model given as:
$T_{0}(x,z)=\frac{\sqrt{(x-x_{s})^{2}+(z-z_{s})^{2}}}{v_{s}},$ (6)
where $v_{s}$ is taken to be the velocity at the point-source location.
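Assuming a homogeneous velocity and a source at the origin (both illustrative
choices), equation (6) can be evaluated on a grid and checked against the
eikonal equation (1), since $|\nabla T_{0}|$ should equal $1/v$ away from the
source singularity. A minimal numerical sketch:

```python
import numpy as np

v = 2.0                          # assumed homogeneous velocity
xs, zs = 0.0, 0.0                # point source at the origin
x = np.linspace(0.0, 1.0, 101)
z = np.linspace(0.0, 1.0, 101)
X, Z = np.meshgrid(x, z, indexing="ij")

T0 = np.sqrt((X - xs) ** 2 + (Z - zs) ** 2) / v   # equation (6)

# the eikonal equation (1) requires |grad T| = 1/v
dTx, dTz = np.gradient(T0, x, z)
slowness = np.sqrt(dTx ** 2 + dTz ** 2)

# finite differences are inaccurate near the source singularity,
# so check only away from it
mask = np.sqrt(X ** 2 + Z ** 2) > 0.2
err = np.max(np.abs(slowness[mask] - 1.0 / v))
```

The large residual near the source is exactly the point-source singularity
that motivates the factorization in equation (3).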
Figure 1: Illustration of the different subsurface approximations: isotropic
(left), vertically transversely isotropic (VTI) (center), and tilted
transversely isotropic (TTI) (right). The solid lines indicate the direction
of the symmetry axis for the VTI and TTI cases.
Equation (4) is the factored eikonal equation for an isotropic medium;
however, sedimentary rocks exhibit at least some degree of anisotropy due to a
number of factors including thin layering and the preferential alignment of
grains and cracks [50]. This results in the velocity being a function of the wave
propagation direction, making the isotropic approximation of the Earth
invalid. Therefore, traveltime computation algorithms must honor the
anisotropic nature of the Earth for accurate subsurface imaging and other
applications. Thus, we consider a realistic approximation of the subsurface
anisotropy known as the tilted transverse isotropy (TTI) case. The factored
eikonal equation for a TTI medium is considerably more complex than the
isotropic case and is given, under the acoustic assumption, as [49]:
$\begin{aligned}
&(1+2\epsilon)\left(\cos\theta\left(T_{0}\frac{\partial\tau}{\partial x}+\tau\frac{\partial T_{0}}{\partial x}\right)+\sin\theta\left(T_{0}\frac{\partial\tau}{\partial z}+\tau\frac{\partial T_{0}}{\partial z}\right)\right)^{2}\\
+&\left(\cos\theta\left(T_{0}\frac{\partial\tau}{\partial z}+\tau\frac{\partial T_{0}}{\partial z}\right)-\sin\theta\left(T_{0}\frac{\partial\tau}{\partial x}+\tau\frac{\partial T_{0}}{\partial x}\right)\right)^{2}\\
\times&\left(1-\frac{2\eta v_{t}^{2}(1+2\epsilon)}{1+2\eta}\left(\cos\theta\left(T_{0}\frac{\partial\tau}{\partial x}+\tau\frac{\partial T_{0}}{\partial x}\right)+\sin\theta\left(T_{0}\frac{\partial\tau}{\partial z}+\tau\frac{\partial T_{0}}{\partial z}\right)\right)^{2}\right)=\frac{1}{v_{t}^{2}},
\end{aligned}$
(7)
where $v_{t}(x,z)$ is the velocity along the symmetry axis, $\epsilon(x,z)$
and $\eta(x,z)$ are the anisotropy parameters, and $\theta(x,z)$ is the tilt
angle that the symmetry axis makes with the vertical. The point-source
condition is the same as the one given in equation (5). Again $\tau(x,z)$ is
the unknown function we solve equation (7) for, whereas $T_{0}(x,z)$ is the
known function which may be taken as the solution of a homogeneous, tilted
elliptically isotropic medium, given as [32]:
$T_{0}(x,z)=\sqrt{\frac{b_{s}(x-x_{s})^{2}+2c_{s}(x-x_{s})(z-z_{s})+a_{s}(z-z_{s})^{2}}{a_{s}b_{s}-c_{s}^{2}}},$
(8)
with
$\displaystyle a_{s}$ $\displaystyle=v_{ts}^{2}(1+2\epsilon_{s})\cos^{2}\theta_{s}+v_{ts}^{2}\sin^{2}\theta_{s},$ (9)
$\displaystyle b_{s}$ $\displaystyle=v_{ts}^{2}\cos^{2}\theta_{s}+v_{ts}^{2}(1+2\epsilon_{s})\sin^{2}\theta_{s},$
$\displaystyle c_{s}$ $\displaystyle=\left(v_{ts}^{2}-v_{ts}^{2}(1+2\epsilon_{s})\right)\cos\theta_{s}\sin\theta_{s}.$
In the above expressions, $v_{ts}$ and $\epsilon_{s}$ are the velocity along
the symmetry axis and the anisotropy parameter, respectively, at the point-
source location. Similarly, $\theta_{s}$ is the tilt angle taken at the
source-point.
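The background traveltime of equations (8)-(9) can be coded directly; a NumPy sketch follows (identifier names are our own). Setting $\epsilon_{s}=0$ recovers the isotropic factor of equation (6) for any tilt angle, which serves as a useful sanity check:

```python
import numpy as np

def t0_tilted_elliptic(x, z, xs, zs, vts, eps_s, theta_s):
    """Traveltime in a homogeneous, tilted elliptically isotropic
    background, equations (8)-(9)."""
    c2, s2 = np.cos(theta_s)**2, np.sin(theta_s)**2
    cs = np.cos(theta_s) * np.sin(theta_s)
    # Coefficients of equation (9), evaluated at the source location
    a = vts**2 * (1 + 2*eps_s) * c2 + vts**2 * s2
    b = vts**2 * c2 + vts**2 * (1 + 2*eps_s) * s2
    c = (vts**2 - vts**2 * (1 + 2*eps_s)) * cs
    dx, dz = x - xs, z - zs
    # Equation (8)
    return np.sqrt((b*dx**2 + 2*c*dx*dz + a*dz**2) / (a*b - c**2))
```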
It is worth highlighting that the isotropic and TTI cases represent
mathematical approximations of the subsurface. An isotropic model considers
the velocity to be invariant with respect to the direction of propagation,
which is a crude representation of the Earth’s crust. The simplest practical
anisotropic symmetry system is axisymmetric anisotropy, commonly known as
transverse isotropy (TI). A TI medium with a vertical axis of symmetry (VTI)
is a good approximation for horizontally layered shale formations or thinly
layered sediments. The factored eikonal equation for VTI media can be
obtained by setting $\theta=0$ in equation (7). For more complex geology, such
as sediments near the flanks of salt domes and fold-and-thrust belts like the
Canadian foothills, a TTI model represents the best approximation. Figure 1
illustrates these approximations graphically.
The reason for considering eikonal equations corresponding to different media
is to highlight, in comparison with the conventional methods, how easy it is
to adapt the proposed method to solve a relatively more complex eikonal
equation (more on this in Section 2.4).
### 2.2 Approximation property of neural networks
A feed-forward neural network, also known as a multi-layer perceptron, is a
set of neurons organized in layers in which evaluations are performed
sequentially through the layers. It can be seen as a computational graph with
an input layer, an output layer, and an arbitrary number of hidden layers. In
a fully connected neural network, neurons in adjacent layers are connected
with each other, but neurons within a single layer share no connections. It is
called a feed-forward neural network because information flows from the input
through each successive layer to the output. Moreover, there are no feedback
or recursive loops in a feed-forward neural network.
Neural networks are well-known for their strong representational power. A
neural network with $n$ neurons in the input layer and $m$ neurons in the
output layer can be used to represent a function
$u:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$. In fact, it has been shown that a
neural network with a finite number of neurons in the hidden layer can be used
to approximate any continuous function on a compact domain to the desired accuracy. This is
also known as the universal approximation theorem [51, 52]. In addition, it
was later shown that by using a deep network with multiple hidden layers and a
nonlinear activation function, the total number of neurons needed to represent
a given function could be significantly reduced [53]. Therefore, our goal here
is to train a DNN that could represent the mapping between the spatial
coordinates $(x,z)$, as inputs to the network, and the unknown traveltime
function $\tau(x,z)$ representing the output of the DNN. Figure 2 illustrates
this idea pictorially showing a neural network with input neurons for the
spatial coordinates $(x,z)$ that are passed through the hidden layers to the
output layer for predicting the traveltime factor at the inputted spatial
location.
Figure 2: A feedforward neural network architecture containing an arbitrary
number of hidden layers/neurons that is used to approximate the traveltime
factor $\hat{\tau}$ at given spatial coordinates $(x,z)$ for a 2D
computational domain.
We formulate the problem here for the 2D case for simplicity of illustration. In a
3D model, one would need a neural network with three input neurons, one for
each spatial dimension. It must also be noted that while DNNs are, in theory,
capable of representing very complex functions, finding the actual parameters
(weights and biases) needed to solve a given PDE can be very challenging.
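A minimal illustration of such a mapping, stripped of any training logic, is a plain feed-forward pass in NumPy; the layer sizes and the inverse-tangent activation below are illustrative choices, not the tuned configuration used later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights and biases for a fully connected network."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, xz):
    """Feed-forward pass: information flows input -> hidden -> output."""
    h = xz
    for W, b in params[:-1]:
        h = np.arctan(h @ W + b)       # nonlinear hidden activations
    W, b = params[-1]
    return h @ W + b                   # linear output layer

# Two input neurons (x, z), one output neuron for tau_hat; a 3D model
# would simply use three input neurons instead.
params = init_mlp([2, 20, 20, 1])
tau_hat = forward(params, np.array([[0.5, 0.5]]))
```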
### 2.3 Automatic differentiation
Solving a PDE using neural networks requires a mechanism to accurately compute
derivatives of the network’s output(s) with respect to the input(s). There are
multiple ways to compute derivatives including hand-coded analytical
derivatives, symbolic differentiation, numerical approximation, and automatic
differentiation (AD) [54]. While manually working out the derivatives is
exact, it is often time-consuming to code and error-prone. Symbolic
differentiation, while also exact, may result in exponentially large
expressions and, therefore, can be prohibitively slow and memory intensive.
Numerical differentiation, on the other hand, is easy to implement but can be
highly inaccurate due to round-off errors. Contrary to these approaches, AD
uses exact expressions with floating-point values instead of symbolic strings
and it involves no approximation errors. This results in an accurate
evaluation of derivatives at machine precision. Therefore, we evaluate the
partial derivatives of the unknown traveltime factor $\tau$ with respect to
the inputs $(x,z)$ using AD.
However, it must be noted that an efficient implementation of the AD algorithm
can be non-trivial. Fortunately, many existing computational frameworks such
as Tensorflow [55] and PyTorch [56] have made available efficiently
implemented AD libraries.
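Deep learning frameworks implement reverse-mode AD, but the underlying principle, namely propagating exact derivative values alongside function values in floating point, is easy to see with forward-mode dual numbers. The sketch below is purely illustrative and is not how TensorFlow or PyTorch implement AD internally:

```python
class Dual:
    """Minimal forward-mode AD: each number carries (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)  # product rule
    __rmul__ = __mul__
    def __pow__(self, p):
        return Dual(self.val ** p, p * self.val ** (p - 1) * self.dot)  # power rule

def f(x, z):
    """Example function: squared distance from the origin."""
    return x ** 2 + z ** 2

# Seed dx/dx = 1, dz/dx = 0 to obtain the partial derivative w.r.t. x.
df_dx = f(Dual(3.0, 1.0), Dual(4.0, 0.0)).dot  # exact value: 2 * 3 = 6
df_dz = f(Dual(3.0, 0.0), Dual(4.0, 1.0)).dot  # exact value: 2 * 4 = 8
```

Reverse-mode AD, as used by the frameworks above, computes the same exact derivatives but propagates sensitivities backwards, which is more efficient when a scalar loss depends on many parameters.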
### 2.4 Solving eikonal equations
We begin by considering how the different pieces of the puzzle outlined in the
previous subsections can be combined to solve eikonal equations. First, we
illustrate this using the factored isotropic eikonal equation (4) and then
demonstrate how simple it is under the proposed framework to solve a more
complex eikonal equation, such as the factored TTI eikonal equation (7).
To solve equation (4), we leverage the capabilities of neural networks as
function approximators and define a loss function that minimizes the residual
of the underlying PDE for a chosen set of training (collocation) points. This
is achieved using the following components:
* i.
a DNN approximation of the unknown traveltime field variable $\tau(x,z)$,
* ii.
a differentiation algorithm, i.e., AD in this case, to evaluate partial
derivatives of $\tau(x,z)$ with respect to the spatial coordinates $(x,z)$,
* iii.
a loss function incorporating the underlying eikonal equation, sampled on a
collocation grid, and
* iv.
an optimizer to minimize the loss function by updating the neural network
parameters.
To illustrate the idea, let us consider a two-dimensional domain
$\Omega\subset\mathbb{R}^{2}$. A point-source is located at coordinates
$(x_{s},z_{s})$, where $\tau(x_{s},z_{s})=1$. The unknown traveltime factor
$\tau(x,z)$ is approximated using a DNN, $\mathcal{N}_{\tau}$, such that:
$\tau(x,z)\approx\hat{\tau}(x,z)=\mathcal{N}_{\tau}(x,z;\bm{\theta}),$ (10)
where $x,z$ are network inputs, $\hat{\tau}$ is the network output, and
$\bm{\theta}\in\mathbb{R}^{D}$ represents the set of all trainable parameters
of the network with $D$ as the total number of parameters.
The loss function is given by the mean-squared error norm as
$\mathfrak{J}=\frac{1}{N_{I}}\sum_{(x^{*},z^{*})\in I}\left\lVert\mathcal{L}\right\rVert^{2}+\frac{1}{N_{I}}\sum_{(x^{*},z^{*})\in I}\left\lVert\mathcal{H}(-\hat{\tau})\,|\hat{\tau}|\right\rVert^{2}+\left\lVert\hat{\tau}(x_{s},z_{s})-1\right\rVert^{2},$ (11)
where $\mathcal{L}$ represents the residual of the factored isotropic eikonal
equation (4), given by
$\displaystyle\mathcal{L}=\left(T_{0}\frac{\partial\hat{\tau}}{\partial
x}+\hat{\tau}\frac{\partial T_{0}}{\partial
x}\right)^{2}+\left(T_{0}\frac{\partial\hat{\tau}}{\partial
z}+\hat{\tau}\frac{\partial T_{0}}{\partial z}\right)^{2}-\frac{1}{v^{2}}.$
(12)
The first term on the right side of equation (11) imposes validity of the
factored eikonal equation (4) on a given set of training points
$(x^{*},z^{*})\in I$, where $N_{I}$
is the number of training samples. The second term forces the solution
$\hat{\tau}$ to be positive by penalizing negative solutions using the
Heaviside function $\mathcal{H}()$. The last term enforces the boundary
condition by imposing the solution $\hat{\tau}$ to be unity at the source
point $(x_{s},z_{s})$. The set of network parameters $\bm{\theta}^{*}$ that
minimizes the loss function (11) on this set of training points,
$(x^{*},z^{*})\in I$, is then
identified by solving the optimization problem:
$\bm{\theta}^{*}=\arg\min_{\bm{\theta}\in\mathbb{R}^{D}}\mathfrak{J}(x^{*},z^{*};\bm{\theta}).$
(13)
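Assuming the partial derivatives of $\hat{\tau}$ and $T_{0}$ at the collocation points are available (from AD and the analytic background expression, respectively), the loss of equation (11) can be assembled as follows; this NumPy sketch mirrors the three terms of the loss, and all names are our own:

```python
import numpy as np

def eikonal_loss(tau, tau_x, tau_z, T0, T0_x, T0_z, v, tau_src):
    """Mean-squared loss of equation (11): PDE residual + positivity
    penalty + point-source condition. Arrays are sampled on the
    collocation points; tau_src is the network output at the source."""
    # Residual of the factored isotropic eikonal equation, equation (12)
    L = (T0*tau_x + tau*T0_x)**2 + (T0*tau_z + tau*T0_z)**2 - 1.0/v**2
    pde_term = np.mean(L**2)
    # Heaviside penalty on negative solutions
    pos_term = np.mean((np.heaviside(-tau, 0.0) * np.abs(tau))**2)
    # Boundary condition: tau_hat = 1 at the point-source
    bc_term = (tau_src - 1.0)**2
    return pde_term + pos_term + bc_term
```

As a check, the exact solution of a homogeneous medium ($\hat{\tau}\equiv 1$, $|\nabla T_{0}|=1/v$) drives all three terms to zero.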
Once the DNN is trained, we evaluate the network on a set of regular grid-
points in the computational domain to obtain the unknown traveltime field. The
final traveltime solution is obtained by multiplying it with the known
traveltime part, i.e.,
$\hat{T}(x,z)=T_{0}(x,z)\cdot\hat{\tau}(x,z).$ (14)
Figure 3: A workflow of the proposed factored TTI eikonal solver in 2D: A
randomly initialized neural network is trained on a set of randomly selected
collocation points $(x^{*},z^{*})$ in the model space with given model
parameters $v_{t}(x^{*},z^{*})$, $\epsilon(x^{*},z^{*})$, $\eta(x^{*},z^{*})$,
$\theta(x^{*},z^{*})$, the known traveltime function $T_{0}(x^{*},z^{*})$, and
its spatial derivative $\nabla T_{0}(x^{*},z^{*})$ to minimize the loss
function given in equation 11. Once the network is trained, it is evaluated on
a regular grid of points $(x,z)$ to yield an estimate of the traveltime field
$\hat{\tau}$, which is then multiplied with the factored traveltime part
$T_{0}$ to yield the estimated first-arrival traveltime solution $\hat{T}$.
This yields traveltimes corresponding to an isotropic approximation of the
Earth. However, it is well-known that the subsurface is anisotropic in nature.
Therefore, a significant amount of research effort has been spent over the
years on extending numerical eikonal solvers to anisotropic media. The
complication in numerically solving the eikonal equation arises due to
anellipticity of the wavefront [57] resulting in high-order nonlinear terms in
the eikonal equation. These high-order terms dramatically increase the
complexity in solving the anisotropic eikonal equation and, therefore, have
been a topic of immense research interest. By contrast, the proposed
neural network formulation allows solving the anisotropic eikonal equation
by simply replacing the residual in equation (11) with the one corresponding
to the anisotropic eikonal equation. Therefore, to solve the factored TTI
eikonal equation (7), we would, instead of equation (12), use the following:
$\begin{aligned}
\mathcal{L}=&\,(1+2\epsilon)\left(\cos\theta\left(T_{0}\frac{\partial\hat{\tau}}{\partial x}+\hat{\tau}\frac{\partial T_{0}}{\partial x}\right)+\sin\theta\left(T_{0}\frac{\partial\hat{\tau}}{\partial z}+\hat{\tau}\frac{\partial T_{0}}{\partial z}\right)\right)^{2}\\
+&\,\left(\cos\theta\left(T_{0}\frac{\partial\hat{\tau}}{\partial z}+\hat{\tau}\frac{\partial T_{0}}{\partial z}\right)-\sin\theta\left(T_{0}\frac{\partial\hat{\tau}}{\partial x}+\hat{\tau}\frac{\partial T_{0}}{\partial x}\right)\right)^{2}\\
\times&\left(1-\frac{2\eta v_{t}^{2}(1+2\epsilon)}{1+2\eta}\left(\cos\theta\left(T_{0}\frac{\partial\hat{\tau}}{\partial x}+\hat{\tau}\frac{\partial T_{0}}{\partial x}\right)+\sin\theta\left(T_{0}\frac{\partial\hat{\tau}}{\partial z}+\hat{\tau}\frac{\partial T_{0}}{\partial z}\right)\right)^{2}\right)-\frac{1}{v_{t}^{2}}.
\end{aligned}$
(15)
This is a highly desirable feature of this approach because eikonal equations
corresponding to models with even lower symmetry than TTI can be easily solved
by simply using a different residual term. By contrast, conventional
algorithms such as fast marching or fast sweeping methods would require
significant effort to incorporate such changes, thereby resulting in much
slower scientific progress.
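Under the proposed framework, adapting the solver to TTI media amounts to swapping the residual function. A sketch of the residual corresponding to the factored TTI eikonal equation (7) follows (names are ours); setting $\epsilon=\eta=\theta=0$ recovers the isotropic residual of equation (12):

```python
import numpy as np

def tti_residual(tau, tau_x, tau_z, T0, T0_x, T0_z, vt, eps, eta, theta):
    """Residual of the factored TTI eikonal equation (7), a drop-in
    replacement for the isotropic residual in the loss function."""
    px = T0*tau_x + tau*T0_x               # derivative of T0*tau w.r.t. x
    pz = T0*tau_z + tau*T0_z               # derivative of T0*tau w.r.t. z
    pa = np.cos(theta)*px + np.sin(theta)*pz   # rotated components
    pb = np.cos(theta)*pz - np.sin(theta)*px
    anell = 1.0 - (2*eta*vt**2*(1 + 2*eps)/(1 + 2*eta)) * pa**2
    return (1 + 2*eps)*pa**2 + pb**2 * anell - 1.0/vt**2
```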
A workflow summarizing the proposed solver for the factored TTI eikonal
equation is shown in Figure 3.
## 3 Numerical Tests
In this section, we test the neural network based eikonal solver and compare
its performance with the first-order fast sweeping method, which is routinely
used in geophysical (and other) applications for traveltime computations. We
consider several isotropic and anisotropic 2D/3D models for these tests and
also include a model with topography to demonstrate the flexibility of the
proposed method.
For each of the examples presented below, we use a neural network having 20
hidden layers with 20 neurons in each layer and minimize the neural network’s
loss function using full-batch optimization. A locally adaptive inverse
tangent activation function is used for all hidden layers except the final
layer, which uses a linear activation function. Locally adaptive activation
functions have been shown to achieve superior optimization performance and
convergence speed over base methods. The introduction of a scalable parameter
in the activation function for each neuron changes the slope of the activation
function and, therefore, alters the loss landscape of the neural network for
improved performance. For more information on locally adaptive activation
functions, we refer the interested reader to [58].
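A minimal sketch of such an activation, with a trainable slope parameter `a` and a fixed scaling factor `n` (the value `n=10` below is an illustrative assumption; see [58] for the exact formulation), is:

```python
import numpy as np

def adaptive_atan(x, a, n=10.0):
    """Locally adaptive inverse tangent activation: the trainable
    slope parameter `a` (one per neuron), scaled by a fixed factor
    `n`, changes the activation slope and hence the loss landscape."""
    return np.arctan(n * a * x)
```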
We choose the aforementioned configuration of the neural network based on
some initial tests and keep it fixed for the entire study to minimize the need
for hyper-parameter tuning for each new velocity model.
The following examples are prepared and trained using the neural network
library, SciANN [59], a Keras/Tensorflow API that is designed and optimized
for physics-informed deep learning. SciANN leverages the latest advancements
of Tensorflow while keeping the interface close to the mathematical
description of the problem.
Figure 4: A vertically varying velocity model with a constant velocity
gradient of 1 $\text{s}^{-1}$. The velocity at zero depth is equal to 2 km/s
and it increases linearly to 3 km/s at a depth of 1 km. The black star
indicates the point-source location used for the test.
### Example 1: An isotropic model with constant vertically varying gradient
First, we consider a vertically varying $1\times 1$ km2 isotropic model. The
velocity at zero depth is 2 km/s and it increases linearly with a gradient of
1 s-1 to 3 km/s at a depth of 1 km. We compute traveltime solutions using the
neural network and first-order fast sweeping method by considering a point-
source located at $(0.5\leavevmode\nobreak\ \text{km},0.5\leavevmode\nobreak\
\text{km})$. We compare their performance with a reference solution computed
analytically [60]. The velocity model is shown in Figure 4 and is discretized
on a 101$\times$101 grid with a 10 m grid interval along both axes.
Figure 5: Absolute traveltime errors for the neural network solution (a) and
the first-order fast sweeping solution (b) for the isotropic model and the
source location shown in Figure 4. Figure 6: Traveltime contours for the
reference solution (solid red) computed analytically, neural network solution
(dashed black), and the first-order fast sweeping solution (dotted blue) for
the vertically varying isotropic model. The black star shows the location of
the point-source.
For training the neural network, we begin with randomly initialized parameters
and train on 50% of the total grid points selected randomly and use the Adam
optimizer [61] with 10,000 epochs. Once the network is trained, we evaluate
the trained network on the regularly sampled (101$\times$101) grid to obtain
the unknown traveltime field $\hat{\tau}$, which is then multiplied with the
corresponding factored traveltime field $T_{0}$ to obtain the final traveltime
solution. We compare the accuracy of the neural network solution and the
first-order fast sweeping solution, computed on the same regular grid in
Figure 5. We observe significantly better accuracy for the neural network
based solution despite using only half of the total grid points for training.
In Figure 6, we confirm this observation by plotting the corresponding
traveltime contours.
### Example 2: Smoothly varying TTI model
Figure 7: A velocity model for the parameter $v_{t}$ with a constant vertical
gradient of 1.5 $\text{s}^{-1}$ and a horizontal velocity gradient of 0.5
$\text{s}^{-1}$. A homogeneous model is used for the anisotropy parameters
($\epsilon$=0.2, $\eta$ = 0.083) and for the tilt angle ($\theta$ = 30∘). The
black star indicates the point-source location used for the test. Figure 8:
A comparison of the loss history for training of the TTI model in example 2
using pre-trained weights from example 1 (orange) and random initialization
(blue). An L-BFGS-B optimizer is used for both cases.
Next, we train the neural network to solve the TTI eikonal equation. Compared
to the fast sweeping method that requires significant modifications to the
isotropic eikonal solver, the neural network based approach requires only an
update to the loss function by incorporating the appropriate residual based on
the TTI eikonal equation. For the velocity parameter $v_{t}$, we consider a
linear velocity model with a vertical gradient of 1.5 $\text{s}^{-1}$ and a
horizontal gradient of 0.5 $\text{s}^{-1}$ as shown in Figure 7. We use
homogeneous models for the anisotropy parameters with $\epsilon$=0.2 and
$\eta$ = 0.083. We also consider a homogeneous tilt angle of $\theta$ = 30∘.
These models are also discretized on a 101$\times$101 grid with 10 m grid
interval along both axes.
Figure 9: The absolute traveltime errors for the neural network solution (a)
and the fast sweeping solution (b) for the TTI model considered in example 2.
Figure 10: The traveltime contours for the reference solution (solid red),
neural network solution (dashed black), and the first-order fast sweeping
solution (dotted blue) for the smoothly varying TTI model. The black star
shows the location of the point-source.
Instead of training the network from scratch, we use transfer learning, which
is a machine learning technique that relies on storing knowledge gained while
solving one problem and applying it to a different but related problem.
Starting with a pre-trained network from example 1, we fine-tune the neural
network parameters for the TTI model using 50% of the total grid points,
selected randomly, using the L-BFGS-B solver [62] for 100 epochs. Starting
with a pre-trained network allows us to use the L-BFGS-B method for faster
convergence as opposed to starting with the Adam optimizer and then switching
to L-BFGS-B as suggested by previous studies [43]. For comparison, we also
train a neural network from scratch and the convergence history, shown in
Figure 8, confirms that the solution converges much faster when using transfer
learning.
Figure 9 compares absolute traveltime errors computed using the neural network
and the first-order fast sweeping method using the iterative solver of Waheed
et al. [33]. The reference solution is obtained using a high-order fast
sweeping method on a finer grid. We observe that, despite using transfer
learning, the accuracy of the neural network solution is considerably better
than the fast sweeping method. We confirm this observation visually by
comparing the corresponding traveltime contours in Figure 10. One can also
observe the effect of the additional anisotropy parameters and the tilt angle
on the traveltime contours here. By comparing the shapes of the contours in
Figure 6 and 10, it is obvious that the wave propagation speed varies with the
direction of propagation. A faster propagation is observed orthogonal to the
symmetry direction, given by the tilt angle, compared to the propagation along
the symmetry axis.
It is worth noting that while the complexity of the fast sweeping solvers and
their computational cost increase dramatically when switching from an
isotropic to a TTI model, for the neural network both cases require similar
complexity and computational cost. Therefore, the proposed method is
particularly suited to model complex physics involving media with anisotropy,
attenuation, etc.
One of the main challenges in seismic imaging and inversion is the need for
repeated traveltime computations for thousands of source locations and
multiple updated velocity models. Unfortunately, conventional techniques do
not allow the transfer of information from one solution to the next and,
therefore, the same amount of computational effort is needed for even small
perturbations in the source location and/or the velocity model. We noted above
how transfer learning can be used to speed up convergence for a new velocity
model and source position. This could be further extended by adding source
location $(x_{s},z_{s})$ as input to the network and training a surrogate
model.
To do so, we train a neural network on solutions computed for 16 sources
located at regular intervals in the considered TTI model as shown in Figure
11. Through the training process, the network learns the mapping between a
given source location and the corresponding traveltime field. Once the
surrogate model is trained, traveltime fields for additional source locations
can be computed instantly by using a single evaluation of the trained network.
This is akin to having an analytic solver, as no further training is
needed for computing traveltimes corresponding to new source locations. This
feature is particularly advantageous for large 3D models that need thousands
of such computations.
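The surrogate training set can be assembled by pairing every evaluation point with every training source; the NumPy sketch below (grid extents and source positions are illustrative, not those of the actual experiment) shows how the network input grows from two to four features:

```python
import numpy as np

# Illustrative 101 x 101 evaluation grid and a 4 x 4 grid of sources
x = np.linspace(0.0, 1.0, 101)
z = np.linspace(0.0, 1.0, 101)
xs = np.linspace(0.125, 0.875, 4)
zs = np.linspace(0.125, 0.875, 4)

# Each training sample is (x, z, xs, zs); the input layer grows from
# two neurons to four, while the output is still the factor tau_hat.
X, Z, XS, ZS = np.meshgrid(x, z, xs, zs, indexing="ij")
inputs = np.stack([X, Z, XS, ZS], axis=-1).reshape(-1, 4)
```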
Figure 11: A velocity model for the parameter $v_{t}$ with a constant
vertical gradient of 1.5 $\text{s}^{-1}$ and a horizontal velocity gradient of
0.5 $\text{s}^{-1}$. A homogeneous model is used for the anisotropy parameters
($\epsilon$=0.2, $\eta$ = 0.083) and for the tilt angle ($\theta$ = 30∘).
Black stars indicate locations of sources used to train the network as a
surrogate model.
Figure 12: The absolute traveltime errors for the solution computed using the
surrogate model (a) and the fast sweeping solution (b) for the TTI model
considered in example 2 for a randomly chosen source location.
After training the surrogate model, we test its performance by computing the
traveltime field corresponding to a randomly chosen source location. Figure 12
compares the absolute traveltime errors for the solution predicted by the
surrogate model and the fast sweeping TTI solver. We observe that even without
any additional training for this new source position, we obtain remarkably
high accuracy compared to the fast sweeping method. This is also confirmed by
visually comparing the corresponding traveltime contours in Figure 13.
Figure 13: The traveltime contours for solutions obtained using the reference
solution (solid red), the neural network surrogate model (dashed black), and
the first-order fast sweeping solver (dotted blue) for a randomly chosen
source point in a smoothly varying TTI model.
### Example 3: VTI SEAM model
Figure 14: (a) The vertical velocity $v_{t}$, (b) the $\epsilon$ parameter,
and (c) the $\eta$ parameter for the considered portion of the VTI SEAM model.
The white star indicates the position of the point-source.
Next, we test the performance of the proposed method on a portion of the VTI
SEAM model, shown in Figure 14. The model parameters are extracted from the
3D SEG Advanced Modeling (SEAM) Phase I subsalt earth model [63]. This is a
particularly interesting example due to sharper variations in the velocity
model and the anisotropy parameters. The model is discretized on a
101$\times$101 grid with a grid spacing of 100 m along both axes. Based on our
recent efforts in using the neural network solver [47, 44], we observe that
the convergence of the neural network approach slows down considerably in the
presence of sharp variations in the velocity model. We have already seen above
that using a pre-trained neural network yields faster convergence by allowing
the use of the second-order optimization method (L-BFGS-B) directly.
Therefore, in this example, we use the pre-trained network obtained from the
TTI eikonal solver in example 2.
For further speedup, we propose a two-stage training scheme to obtain an
accurate traveltime solution. In the first stage, when the neural network is
learning a smooth representation of the underlying function, we use only a
small percentage of grid points for training. In this case, we use only 1% of
the total grid points chosen randomly for the first 200 epochs and then switch
to using 50% of the total grid points in stage 2 for another 1000 epochs to
update the learned function in better approximating sharp features in the
resulting traveltime field. Again, since we start training with a pre-trained
network, we use the L-BFGS-B optimizer for faster convergence.
Figure 15: The traveltime contours for the reference solution (solid red),
neural network solution (dashed black), and the first-order fast sweeping
solution (dotted blue) for the VTI SEAM model. Red arrows indicate the
improvement of accuracy due to the additional training points used in the
neural network solution in going from stage 1 (a) to stage 2 (b) of the
proposed training process. The black star shows the location of the point-
source.
Figure 16: The absolute traveltime errors for the neural network solution (a)
and the fast sweeping solution (b) for the SEAM VTI model.
Figure 15 shows traveltime contours for a point-source located at (5 km, 1 km)
for the reference solution, the neural network solution, and the first-order
fast sweeping solution. In Figure 15(a), we show the neural network solution at
the end of stage 1. It can be seen that the solution is quite smooth and
misses sharp features visible in the reference solution. In Figure 15(b), we
observe that the neural network solution captures these features as additional
training points are added in the second stage of training. Therefore, using a
small number of training points in stage 1 reduces the training cost without
compromising on solution accuracy. By comparing the absolute traveltime errors
in Figure 16, we observe that the neural network solution after stage 2 is
considerably more accurate than the first-order fast sweeping method, even for
a realistic VTI model.
### Example 4: BP TTI model
Figure 17: (a) The velocity along the symmetry axis $v_{t}$, (b) the
$\epsilon$ parameter, (c) the $\eta$ parameter, and (d) the tilt angle
$\theta$ for the considered portion of the BP TTI model with a layer of non-
flat topography.
Figure 18: The absolute traveltime errors for the neural network solution (a)
and the fast sweeping solution (b) for the BP TTI model.
We have already seen how the neural network based approach is flexible in
incorporating complex physics compared to conventional techniques. In this
example, we will see how incorporating irregular topography is straightforward
using the proposed approach. The free-surface encountered in land seismic
surveys is often non-flat and requires taking this into account for accurate
traveltime computation. One approach to tackle this is to transform from
Cartesian to curvilinear coordinate system to mathematically flatten the free-
surface and solve the resulting topography-dependent eikonal equation [37].
This adds additional computational cost and may result in instabilities when
topography varies sharply. By contrast, the neural network approach
outlined here is mesh-free and does not require any modification to the
algorithm. We demonstrate this through a test on a portion of the BP TTI
model, shown in Figure 17. The model was developed by Shah [64] and publicly
provided courtesy of the BP Exploration Operating Company. The considered
portion of the model is discretized on a 161$\times$161 grid using a grid
spacing of 6.25 m along both axes. Points above the considered topography
layer are then removed from the model.
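Because the method is mesh-free, honoring topography reduces to discarding collocation points above the free surface. The sketch below uses an illustrative sinusoidal topography profile (not the actual BP TTI surface), with depth $z$ increasing downward:

```python
import numpy as np

# Illustrative grid and free-surface depth profile as a function of x
x = np.linspace(0.0, 1.0, 161)
z = np.linspace(0.0, 1.0, 161)
X, Z = np.meshgrid(x, z, indexing="ij")
surface = 0.1 + 0.05 * np.sin(2 * np.pi * X)   # depth of the free surface

# Keep only collocation points at or below the topography layer
below = Z >= surface
collocation = np.column_stack([X[below], Z[below]])
```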
Figure 19: The traveltime contours for the reference solution (solid red),
neural network solution (dashed black), and the first-order fast sweeping
solution (dotted blue) for the BP TTI model. The black star shows the location
of the point-source and the solid black curve indicates the topography layer.
For a point-source located at (5 km, 0.5 km), we compare the neural network
and fast sweeping traveltime solutions. For training the neural network, we
take into account about 50% of the grid points below the topography and once
the network is trained, we evaluate the solution on the regular grid points
that fall below the topography. We start training using pre-trained parameters
from example 2 and train for 10,000 epochs using an L-BFGS-B optimizer. Figure
18 shows absolute traveltime errors for the two cases, indicating that the
neural network solution is again considerably more accurate than fast
sweeping. We confirm this observation visually through traveltime contours
plotted in Figure 19.
### Example 5: 3D TTI model
Figure 20: A velocity model for the parameter $v_{t}$ for the 3D TTI model
test. A homogeneous model is used for the anisotropy parameters
($\epsilon$=0.2, $\eta$ = 0.1) and for the tilt angle ($\theta$ = 30∘).
Finally, we show an example of extending the proposed method to a 3D TTI
model. The model for the velocity parameter $v_{t}$ is shown in Figure 20. A
homogeneous model is used for the anisotropy parameters with $\epsilon$=0.2
and $\eta$ = 0.1. We also consider a homogeneous tilt angle of 30∘. The model
is discretized on a 101$\times$101$\times$51 grid with a grid spacing of 50 m
along each axis. The workflow for obtaining the neural network solution is
essentially the same. In this case, the neural network takes three input
parameters corresponding to the spatial axes $(x,y,z)$ and outputs the
corresponding unknown traveltime field $\hat{\tau}(x,y,z)$. Similar to before,
the neural network output is multiplied by the known traveltime factor
$T_{0}(x,y,z)$ to obtain the final traveltime solution.
Starting with a pre-trained neural network on a smoothly varying TTI model, we
train the network using 50% of the total grid points chosen randomly. We use
20,000 L-BFGS-B epochs during the training process. For a point-source located
at $(x,y,z)$=(2 km, 2 km, 1 km), we compare the accuracy of the neural network
solution and the first-order fast sweeping solution in Figure 21. We observe
that the proposed method is capable of computing accurate traveltimes for 3D
TTI models as well without requiring any major alteration to the underlying
algorithm.
Figure 21: The absolute traveltime errors for the neural network solution
(a,b) and the fast sweeping solution (c,d) for the 3D TTI model.
## 4 Discussion and conclusions
We proposed a neural network approach to computing first-arrival traveltimes
based on the framework of physics-informed neural networks. By leveraging the
capabilities of neural networks as universal function approximators, we define
a loss function to minimize the residual of the governing eikonal equation at
a chosen set of training points. This is achieved with a simple feed-forward
neural network using automatic differentiation.
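As a rough illustration of this loss construction, the sketch below evaluates the residual of the simple isotropic eikonal equation $|\nabla T|^{2}=1/v^{2}$ for the analytic homogeneous-medium solution and forms the mean-squared loss. Note that the paper minimizes the residual of the network output via automatic differentiation; here finite differences on a grid stand in for the derivatives, so this is only a conceptual sketch with arbitrarily chosen velocity and grid.

```python
import numpy as np

# Analytic traveltime in a homogeneous medium (isotropic eikonal |grad T| = 1/v),
# with the point source at the origin, kept off the grid to avoid the singularity.
v = 2.0  # km/s, an arbitrary choice for the illustration
xs = np.linspace(0.5, 4.5, 81)
zs = np.linspace(0.5, 4.5, 81)
x, z = np.meshgrid(xs, zs, indexing='ij')
T = np.sqrt(x**2 + z**2) / v

# Eikonal residual at every grid ("training") point; the method evaluates the
# analogous residual of the network output via automatic differentiation.
Tx = np.gradient(T, xs, axis=0)
Tz = np.gradient(T, zs, axis=1)
residual = Tx**2 + Tz**2 - 1.0/v**2

# The training loss is the mean squared residual over the training points.
loss = np.mean(residual**2)
```

Because the analytic field satisfies the eikonal equation, the loss is near zero up to finite-difference truncation error; a candidate field that violates the PDE would produce a large loss, which is exactly the signal the training minimizes.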
We demonstrated the flexibility of the proposed framework in incorporating
anisotropy in the model simply by updating the loss function of the neural
network according to the underlying PDE. Since the method is mesh-free, we
also saw how easy it is to incorporate non-flat topography into the solver
compared to conventional methods. Another attractive feature, due to this
mesh-free nature of the algorithm, is that sources and receivers do not have
to lie on a regular grid as required by conventional finite-difference
techniques. We also showed that by using machine learning techniques like
transfer learning and surrogate modeling, we could transfer information gained
by solving one problem to the next – a key feature missing in conventional
numerical algorithms. This would be crucial in achieving computational
efficiency beyond conventional methods on models of practical interest.
Furthermore, the neural network based eikonal solver uses Tensorflow at the
backend, which allows for easy deployment of computations across a variety of
platforms (CPUs, GPUs) and architectures (desktops, clusters). By contrast,
significant effort is needed to adapt conventional algorithms to benefit from
different computational platforms or architectures.
In short, the approach tackles, in a holistic manner, many problems associated
with obtaining fast and accurate traveltimes that have required decades of
research. In fact, it opens new possibilities in solving complex forms of the
eikonal equation that have remained unsolved by conventional algorithms or
solved through some approximation techniques at best. Our recent tests have
demonstrated success in solving such equations using neural networks without
approximations. This includes solutions to the qSV eikonal equation for
anisotropic media [65] and the attenuating VTI eikonal equation [66].
It is also worth emphasizing that the actual computational advantage of the
proposed method, compared to conventional numerical solvers, depends on many
factors including the network architecture, optimization hyper-parameters, and
sampling techniques. If the initialization of the network and the optimizer
learning rate are chosen carefully, the training can be completed quite
efficiently. Furthermore, the activation function used, the adaptive weighting
of the loss function terms, and the availability of second-order optimization
techniques can accelerate the training significantly. Therefore, a detailed
study is needed to quantify the computational gains afforded by the proposed
neural network-based solver compared to conventional algorithms by considering
the aforementioned factors. A rudimentary analysis performed in [44]
indicates that once the surrogate model is obtained, the PINN eikonal solver
is computationally faster by more than an order of magnitude than the fast
sweeping method. Since the computational complexity of solving the anisotropic
eikonal equation using conventional methods increases dramatically compared to
the isotropic case [33], the proposed framework is computationally more
attractive for traveltime modeling in anisotropic media.
Nevertheless, there are a few challenges associated with the method that
require further research. Chief among them is the slow convergence of the
solution in the presence of sharp heterogeneity in the velocity and/or
anisotropy models. We proposed a two-stage optimization process in this
chapter that alleviates part of the problem by using only a small fraction of
the training points during the initial training phase. Since at this stage,
the network is learning a smooth representation of the underlying function, we
could save some computational cost by using a small number of training points
initially. We also used a locally adaptive activation function that has been
shown to achieve faster convergence. Other possible solutions may include an
adaptive sampling of the velocity model by using denser sampling of training
points around parts of the model with large velocity gradients. Another
challenge concerns the optimal choice of the neural networks’ hyper-
parameters. In this study, we alleviated the problem by choosing them through
some quick initial tests and keeping them fixed for all the test examples.
Recent advances in the field of meta-learning may, potentially, enable
automated selection of these parameters in the future.
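The adaptive sampling idea suggested above can be sketched as follows. This is a hypothetical illustration, not part of the proposed workflow: the toy velocity profile, the 5% gradient floor, and the sample count are all arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy 1D velocity profile with a sharp interface at x = 2 km.
x = np.linspace(0.0, 4.0, 401)
v = np.where(x < 2.0, 2.0, 3.5) + 0.1*x

# Sampling weights proportional to the local velocity gradient, with a small
# floor so that smooth regions still receive some training points.
grad = np.abs(np.gradient(v, x))
weights = grad + 0.05*grad.max()
p = weights / weights.sum()

# Draw training points: dense near the interface, sparse elsewhere.
train_idx = rng.choice(x.size, size=200, replace=False, p=p)
```

The sampling probability spikes at the interface, so training points concentrate exactly where the eikonal residual is hardest to drive down.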
## References
* Crandall and Lions [1983] M. G. Crandall, P.-L. Lions, Viscosity solutions of Hamilton-Jacobi equations, Transactions of the American Mathematical Society 277 (1983) 1–42.
* Alvino et al. [2007] C. Alvino, G. Unal, G. Slabaugh, B. Peny, T. Fang, Efficient segmentation based on eikonal and diffusion equations, International Journal of Computer Mathematics 84 (2007) 1309–1324.
* Garrido et al. [2016] S. Garrido, D. Álvarez, L. Moreno, Path planning for mars rovers using the fast marching method, in: Robot 2015: Second Iberian Robotics Conference, Springer, 2016, pp. 93–105.
* Raviv et al. [2011] D. Raviv, A. M. Bronstein, M. M. Bronstein, R. Kimmel, N. Sochen, Affine-invariant geodesic geometry of deformable 3D shapes, Computers & Graphics 35 (2011) 692–697.
* Helmsen et al. [1996] J. J. Helmsen, E. G. Puckett, P. Colella, M. Dorr, Two new methods for simulating photolithography development in 3D, in: Optical Microlithography IX, volume 2726, International Society for Optics and Photonics, 1996, pp. 253–261.
* Lawton [1989] D. C. Lawton, Computation of refraction static corrections using first-break traveltime differences, Geophysics 54 (1989) 1289–1296.
* Hole and Zelt [1995] J. Hole, B. Zelt, 3-D finite-difference reflection traveltimes, Geophysical Journal International 121 (1995) 427–434.
* Taillandier et al. [2009] C. Taillandier, M. Noble, H. Chauris, H. Calandra, First-arrival traveltime tomography based on the adjoint-state method, Geophysics 74 (2009) WCB1–WCB10.
* Grechka et al. [2015] V. Grechka, A. De La Pena, E. Schisselé-Rebel, E. Auger, P.-F. Roux, Relative location of microseismicity, Geophysics 80 (2015) WC1–WC9.
* Lambare et al. [2003] G. Lambare, S. Operto, P. Podvin, P. Thierry, 3D ray+ born migration/inversion—Part 1: Theory, Geophysics 68 (2003) 1348–1356.
* Cerveny [2001] V. Cerveny, Seismic ray theory, Cambridge University Press, 2001.
* Vidale [1990] J. E. Vidale, Finite-difference calculation of traveltimes in three dimensions, Geophysics 55 (1990) 521–526.
* Vidale [1988] J. Vidale, Finite-difference calculation of travel times, Bulletin of the Seismological Society of America 78 (1988) 2062–2076.
* Dellinger [1991] J. Dellinger, Anisotropic finite-difference traveltimes, in: SEG Technical Program Expanded Abstracts 1991, Society of Exploration Geophysicists, 1991, pp. 1530–1533.
* Dellinger and Symes [1997] J. Dellinger, W. Symes, Anisotropic finite-difference traveltimes using a Hamilton-Jacobi solver, in: SEG Technical Program Expanded Abstracts 1997, Society of Exploration Geophysicists, 1997, pp. 1786–1789.
* Kim and Cook [1999] S. Kim, R. Cook, 3-D traveltime computation using second-order ENO scheme, Geophysics 64 (1999) 1867–1876.
* Podvin and Lecomte [1991] P. Podvin, I. Lecomte, Finite difference computation of traveltimes in very contrasted velocity models: a massively parallel approach and its associated tools, Geophysical Journal International 105 (1991) 271–284.
* Nichols [1996] D. E. Nichols, Maximum energy traveltimes calculated in the seismic frequency band, Geophysics 61 (1996) 253–263.
* Wang et al. [2006] Y. Wang, T. Nemeth, R. T. Langan, An expanding-wavefront method for solving the eikonal equations in general anisotropic media, Geophysics 71 (2006) T129–T135.
* Sethian and Popovici [1999] J. A. Sethian, A. M. Popovici, 3-D traveltime computation using the fast marching method, Geophysics 64 (1999) 516–523.
* Rickett and Fomel [1999] J. Rickett, S. Fomel, A second-order fast marching eikonal solver, Stanford Exploration Project Report 100 (1999) 287–293.
* Alkhalifah and Fomel [2001] T. Alkhalifah, S. Fomel, Implementing the fast marching eikonal solver: spherical versus cartesian coordinates, Geophysical Prospecting 49 (2001) 165–178.
* Popovici and Sethian [2002] A. M. Popovici, J. A. Sethian, 3-D imaging using higher order fast marching traveltimes, Geophysics 67 (2002) 604–609.
* Sethian and Vladimirsky [2003] J. A. Sethian, A. Vladimirsky, Ordered upwind methods for static Hamilton–Jacobi equations: Theory and algorithms, SIAM Journal on Numerical Analysis 41 (2003) 325–363.
* Cristiani [2009] E. Cristiani, A fast marching method for Hamilton-Jacobi equations modeling monotone front propagations, Journal of Scientific Computing 39 (2009) 189–205.
* bin Waheed et al. [2015] U. bin Waheed, T. Alkhalifah, H. Wang, Efficient traveltime solutions of the acoustic TI eikonal equation, Journal of Computational Physics 282 (2015) 62–76.
* Breuß et al. [2011] M. Breuß, E. Cristiani, P. Gwosdek, O. Vogel, An adaptive domain-decomposition technique for parallelization of the fast marching method, Applied Mathematics and Computation 218 (2011) 32–44.
* Monsegny et al. [2018] J. Monsegny, J. Monsalve, K. León, M. Duarte, S. Becerra, W. Agudelo, H. Arguello, Fast marching method in seismic ray tracing on parallel GPU devices, in: Latin American High Performance Computing Conference, Springer, 2018, pp. 101–111.
* Zhao [2005] H. Zhao, A fast sweeping method for eikonal equations, Mathematics of computation 74 (2005) 603–627.
* Zhang et al. [2006] Y.-T. Zhang, H.-K. Zhao, J. Qian, High order fast sweeping methods for static Hamilton–Jacobi equations, Journal of Scientific Computing 29 (2006) 25–56.
* Fomel et al. [2009] S. Fomel, S. Luo, H. Zhao, Fast sweeping method for the factored eikonal equation, Journal of Computational Physics 228 (2009) 6440–6455.
* Luo and Qian [2012] S. Luo, J. Qian, Fast sweeping methods for factored anisotropic eikonal equations: multiplicative and additive factors, Journal of Scientific Computing 52 (2012) 360–382.
* Waheed et al. [2015] U. B. Waheed, C. E. Yarman, G. Flagg, An iterative, fast-sweeping-based eikonal solver for 3D tilted anisotropic media, Geophysics 80 (2015) C49–C58.
* Han et al. [2017] S. Han, W. Zhang, J. Zhang, Calculating qP-wave traveltimes in 2-D TTI media by high-order fast sweeping methods with a numerical quartic equation solver, Geophysical Journal International 210 (2017) 1560–1569.
* Le Bouteiller et al. [2019] P. Le Bouteiller, M. Benjemaa, L. Métivier, J. Virieux, A discontinuous galerkin fast-sweeping eikonal solver for fast and accurate traveltime computation in 3D tilted anisotropic media, Geophysics 84 (2019) C107–C118.
* Hao et al. [2018] Q. Hao, U. Waheed, T. Alkhalifah, A fast sweeping scheme for P-wave traveltimes in attenuating VTI media, in: 80th EAGE Conference and Exhibition 2018, volume 2018, European Association of Geoscientists & Engineers, 2018, pp. 1–5.
* Lan and Zhang [2013] H. Lan, Z. Zhang, A high-order fast-sweeping scheme for calculating first-arrival travel times with an irregular surface, Bulletin of the Seismological Society of America 103 (2013) 2070–2082.
* Zhao [2007] H. Zhao, Parallel implementations of the fast sweeping method, Journal of Computational Mathematics (2007) 421–429.
* Detrixhe et al. [2013] M. Detrixhe, F. Gibou, C. Min, A parallel fast sweeping method for the eikonal equation, Journal of Computational Physics 237 (2013) 46–55.
* Gómez et al. [2019] J. V. Gómez, D. Álvarez, S. Garrido, L. Moreno, Fast methods for eikonal equations: an experimental survey, IEEE Access 7 (2019) 39005–39029.
* Jordan and Mitchell [2015] M. I. Jordan, T. M. Mitchell, Machine learning: Trends, perspectives, and prospects, Science 349 (2015) 255–260.
* Lagaris et al. [1998] I. E. Lagaris, A. Likas, D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE transactions on neural networks 9 (1998) 987–1000.
* Raissi et al. [2019] M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics 378 (2019) 686–707.
* Waheed et al. [2020] U. Waheed, E. Haghighat, T. Alkhalifah, C. Song, Q. Hao, Eikonal solution using physics-informed neural networks, arXiv preprint arXiv:2007.08330 (2020).
* Smith et al. [2020] J. D. Smith, K. Azizzadenesheli, Z. E. Ross, Eikonet: Solving the eikonal equation with deep neural networks, IEEE Transactions on Geoscience and Remote Sensing (2020).
* Moseley et al. [2020] B. Moseley, A. Markham, T. Nissen-Meyer, Solving the wave equation with physics-informed deep learning, arXiv preprint arXiv:2006.11894 (2020).
* Song et al. [2021] C. Song, T. Alkhalifah, U. Waheed, Solving the frequency-domain acoustic VTI wave equation using physics-informed neural networks, Geophysical Journal International (2021).
* Waheed et al. [2021] U. b. Waheed, T. Alkhalifah, E. Haghighat, C. Song, J. Virieux, PINNtomo: Seismic tomography using physics-informed neural networks, arXiv preprint arXiv:2104.01588 (2021).
* Waheed and Alkhalifah [2017] U. Waheed, T. Alkhalifah, A fast sweeping algorithm for accurate solution of the tilted transversely isotropic eikonal equation using factorization, Geophysics 82 (2017) WB1–WB8.
* Thomsen [1986] L. Thomsen, Weak elastic anisotropy, Geophysics 51 (1986) 1954–1966.
* Cybenko [1989] G. Cybenko, Approximation by superpositions of a sigmoidal function, Mathematics of control, signals and systems 2 (1989) 303–314.
* Hornik et al. [1989] K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural networks 2 (1989) 359–366.
* Lu et al. [2017] Z. Lu, H. Pu, F. Wang, Z. Hu, L. Wang, The expressive power of neural networks: A view from the width, in: Advances in neural information processing systems, 2017, pp. 6231–6239.
* Baydin et al. [2017] A. G. Baydin, B. A. Pearlmutter, A. A. Radul, J. M. Siskind, Automatic differentiation in machine learning: a survey, The Journal of Machine Learning Research 18 (2017) 5595–5637.
* Abadi et al. [2015] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL: https://www.tensorflow.org/, software available from tensorflow.org.
* Paszke et al. [2017] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, A. Lerer, Automatic differentiation in PyTorch, in: Proceedings of Neural Information Processing Systems, 2017.
* Alkhalifah [2000] T. Alkhalifah, An acoustic wave equation for anisotropic media, Geophysics 65 (2000) 1239–1250.
* Jagtap et al. [2020] A. D. Jagtap, K. Kawaguchi, G. Em Karniadakis, Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks, Proceedings of the Royal Society A 476 (2020) 20200334.
* Haghighat and Juanes [2021] E. Haghighat, R. Juanes, SciANN: A Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks, Computer Methods in Applied Mechanics and Engineering 373 (2021) 113552.
* Slotnick [1959] M. Slotnick, Lessons in seismic computing, Soc. Expl. Geophys 268 (1959).
* Kingma and Ba [2014] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014).
* Zhu et al. [1997] C. Zhu, R. H. Byrd, P. Lu, J. Nocedal, Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization, ACM Transactions on Mathematical Software (TOMS) 23 (1997) 550–560.
* Fehler and Keliher [2011] M. Fehler, P. J. Keliher, SEAM Phase 1: Challenges of subsalt imaging in tertiary basins, with emphasis on deepwater Gulf of Mexico, Society of Exploration Geophysicists, 2011.
* Shah [2007] H. Shah, The 2007 BP anisotropic velocity-analysis benchmark, in: 70th Annual EAGE meeting, workshop, 2007.
* Waheed et al. [2021] U. Waheed, T. Alkhalifah, B. Li, E. Haghighat, A. Stovas, J. Virieux, Traveltime computation for qSV waves in TI media using physics-informed neural networks, in: 82nd EAGE Conference and Exhibition, Submitted, 2021.
* Taufik et al. [2021] M. H. Taufik, U. Waheed, Q. Hao, T. Alkhalifah, The eikonal solution for attenuating VTI media using physics-informed neural networks, in: 82nd EAGE Conference and Exhibition, Submitted, 2021.
# Lie symmetry analysis and similarity solutions for the Camassa-Choi
equations
Andronikos Paliathanasis
Institute of Systems Science, Durban University of Technology
PO Box 1334, Durban 4000, Republic of South Africa Email<EMAIL_ADDRESS>
###### Abstract
The method of Lie symmetry analysis of differential equations is applied to
determine exact solutions for the Camassa-Choi equation and its
generalization. We prove that the Camassa-Choi equation is invariant under an
infinite-dimensional Lie algebra, with an essential five-dimensional Lie
algebra. The application of the Lie point symmetries leads to the construction
of exact similarity solutions.
Keywords: Lie symmetries; Similarity solutions; Camassa-Choi; Long waves
## 1 Introduction
The Camassa-Choi (CC) equation
$\left(u_{t}+\alpha u_{x}-uu_{x}+u_{xx}\right)_{x}+u_{yy}=0$ (1)
was derived by Choi and Camassa in [1] in order to describe weakly nonlinear
internal waves in a two-fluid system. The parameter $\alpha=h^{-1}$ describes
the depth in the two-fluid system. The CC equation can be seen as the
two-dimensional extension of the Benjamin-Ono equation; indeed, when
$u_{yy}=0$, the Benjamin-Ono equation is recovered from (1). Because of the
nonlinearity of (1) there are no known exact solutions in the literature. Only
recently was the existence of small data global solutions proven by Harrop and
Marzula in [2].
In this work, we apply the theory of Lie point symmetries in order to
determine similarity solutions for the CC equation. The theory of Lie
symmetries of differential equations is a standard technique for the
computation of solutions of nonlinear differential equations and for the
description of their algebraic properties. The novelty of Lie symmetries is
that invariant transformations can be found which simplify the given
differential equation [3, 4, 5, 6, 7, 8, 9]. There is a plethora of
applications of Lie symmetries in fluid dynamics, with important results which
have been used to understand the physical properties of the models.
The Lie symmetry analysis of the Camassa-Holm equation has been previously
performed in [10]. Similarity solutions to the shallow water equations with a
variable bottom were found in [11], while the Lie symmetry analysis of the
rotating shallow-water equation was performed in [12]. The algebraic
properties of the Benjamin-Ono equation were studied in [13]. For other
applications in the point symmetries in fluid dynamics we refer the reader to
[14, 15, 16, 18] and references therein.
We extend our analysis to the generalized Camassa-Choi (GCC) equation
$\left(u_{t}+\alpha u_{x}-u^{n}u_{x}+\beta u_{xx}\right)_{x}+u_{yy}=0,$ (2)
which is the natural generalization of the generalized Benjamin-Ono equation
[19]. For the two equations (1), (2) we determine the Lie point symmetries,
and we prove the existence of travelling-wave similarity solutions for every
value of the parameter $n\geq 1$ and arbitrary depth $\alpha$. The plan of the
paper is as follows.
In Section 2, for the convenience of the reader, we briefly discuss the basic
properties and definitions of the theory of Lie point symmetries. Sections 3
and 4 include the main new material of our analysis, where we present the
algebraic properties of equations (1) and (2). Finally, in Section 5 we draw
our conclusions.
## 2 Preliminaries
In this section, we briefly discuss the theory of Lie point symmetries of
differential equations which is the main mathematical tool that we apply in
the following.
Consider a function $\Phi$ which describes the map of a one-parameter point
transformation, such that
$u^{\prime}\left(t^{\prime},x^{\prime},y^{\prime}\right)=\Phi\left(u\left(t,x,y\right);\varepsilon\right),$
where the infinitesimal transformation is expressed as follows
$\displaystyle t^{\prime}$ $\displaystyle=$ $\displaystyle
t+\varepsilon\xi^{t}\left(t,x,y,u\right)$ (3) $\displaystyle x^{\prime}$
$\displaystyle=$ $\displaystyle x+\varepsilon\xi^{x}\left(t,x,y,u\right)$ (4)
$\displaystyle y^{\prime}$ $\displaystyle=$ $\displaystyle
y+\varepsilon\xi^{y}\left(t,x,y,u\right)$ (5) $\displaystyle u^{\prime}$
$\displaystyle=$ $\displaystyle u+\varepsilon\eta\left(t,x,y,u\right)$ (6)
where $\varepsilon$ is the infinitesimal parameter, that is,
$\varepsilon^{2}\rightarrow 0.$
From the latter one-parameter point transformation we can define the
infinitesimal generator
$X=\frac{\partial t^{\prime}}{\partial\varepsilon}\partial_{t}+\frac{\partial
x^{\prime}}{\partial\varepsilon}\partial_{x}+\frac{\partial
y^{\prime}}{\partial\varepsilon}\partial_{y}+\frac{\partial
u^{\prime}}{\partial\varepsilon}\partial_{u},$ (7)
from where the map $\Phi$ can be written as follows
$\Phi\left(u\left(t,x,y\right);\varepsilon\right)=u\left(t,x,y\right)+\varepsilon
X\left(u\left(t,x,y\right)\right),$ (8)
that is,
$X\left(u\left(t,x,y\right)\right)=\lim_{\varepsilon\rightarrow
0}\frac{\Phi\left(u\left(t,x,y\right);\varepsilon\right)-u\left(t,x,y\right)}{\varepsilon}.$
(9)
The latter expression defines the Lie derivative of the function
$u\left(t,x,y\right)$ with respect to the vector field $X,$ also noted as
$L_{X}u.$
When
$L_{X}u=0$ (10)
then we shall say that $u\left(t,x,y\right)$ is invariant under the action of
the one-parameter point transformation with generator the vector field $X.$
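As a minimal illustration of this invariance condition, consider the translation generator $X=\partial_{t}+c\partial_{x}$ (a simple example chosen for the sketch, not one of the symmetries analyzed below). With $\xi^{t}=1$, $\xi^{x}=c$, $\eta=0$, the condition $L_{X}u=0$ reduces to $u_{t}+cu_{x}=0$, satisfied by any travelling wave $u=f(x-ct)$. A short sympy check, assuming sympy is available:

```python
import sympy as sp

t, x, c = sp.symbols('t x c')
f = sp.Function('f')

# Generator X = d_t + c d_x, i.e. xi^t = 1, xi^x = c, eta = 0.
# Invariance of u requires eta - xi^t u_t - xi^x u_x = 0, i.e. u_t + c u_x = 0,
# which is satisfied by any travelling wave u = f(x - c t).
u = f(x - c*t)
Lxu = sp.diff(u, t) + c*sp.diff(u, x)
print(sp.simplify(Lxu))
```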
For a differential equation
$\mathcal{H}\left(u,u_{t},u_{x},u_{y},...\right)=0,$ (11)
the symmetry condition reads
$L_{X}\left(\mathcal{H}\right)=0\text{ or
}X^{\left[n\right]}\left(\mathcal{H}\right)=0,$ (12)
where $X^{\left[n\right]}$ describes the $n$th prolongation/extension of the
symmetry vector in the jet-space of variables
$\left\\{u,u_{t},u_{x},...\right\\}$ defined as
$X^{\left[n\right]}=X+\eta_{i}^{\left[1\right]}\partial_{u_{i}}+...+\eta_{i_{1}i_{2}...i_{n}}^{\left[n\right]}\partial_{u_{i_{1}i_{2}...i_{n}}},$
where $u_{i}=\frac{\partial u}{\partial z^{i}},~{}z^{i}=\left(t,x,y\right)$
and
$\eta_{i_{1}i_{2}...i_{n}}^{\left[n\right]}=D_{i_{n}}\eta_{i_{1}i_{2}...i_{n-1}}^{\left[n-1\right]}-u_{i_{1}i_{2}...i_{n-1}j}D_{i_{n}}\left(\xi^{j}\right)~{},~{}n\geq
1.$ (13)
The main application of the Lie point symmetries is based on the determination
of the Lie invariants which are used to define similarity transformations and
simplify the given differential equation. The exact solutions which follow by
the application of the Lie point symmetries are called similarity solutions.
If $X$ is an admitted Lie point symmetry, the solution of the associated
Lagrange’s system,
$\frac{dt}{\xi^{t}}=\frac{dx}{\xi^{x}}=\frac{dy}{\xi^{y}}=\frac{du}{\eta},$
(14)
provides the zeroth-order invariants, $U^{A\left[0\right]}\left(t,x,y,u\right)$
which are applied to reduce the number of independent variables in partial
differential equations, or the order in the case of ordinary differential
equations.
For more details on the symmetry analysis of differential equations we refer
the reader to the standard references [20, 21, 22].
## 3 Point symmetries of the Camassa-Choi equation
From the symmetry condition (12) for the CC equation (1) and for the one-
parameter point transformation with generator
$X=\xi^{1}\left(t,x,y,u\right)\partial_{t}+\xi^{2}\left(t,x,y,u\right)\partial_{x}+\xi^{3}\left(t,x,y,u\right)\partial_{y}+\eta\left(t,x,y,u\right)\partial_{u},$
we find the following system of differential equations
$\xi_{,u}^{1}=0~{},~{}\xi_{,u}^{2}=0~{},~{}\xi_{,u}^{3}=0~{},~{}\xi_{,x}^{1}=0~{},~{}\xi_{,x}^{3}=0~{},~{}\eta_{,uu}=0~{},$
(15)
$\xi_{,x}^{3}+2\xi_{y}^{1}=0~{},~{}\xi_{,y}^{3}+3\xi_{,x}^{2}=0~{},~{}2\xi_{,x}^{2}-\xi_{,t}^{1}=0~{},~{}~{}2\xi_{,y}^{2}-\xi_{,t}^{3}=0,$
(16) $\eta_{,xxx}+\left(\alpha-u\right)\eta_{,xx}+\eta_{,yy}+\eta_{,tx}=0~{},$
(17) $\eta_{,xu}-\xi_{,yy}^{1}=0~{},~{}2\eta_{,yu}-\xi_{,yy}^{3}=0~{},~{}$
(18)
$3\eta_{,xu}-3\xi_{,xx}^{2}+\left(\alpha-u\right)\xi_{,x}^{2}-\xi_{,t}^{2}-\eta=0~{},$
(19)
$3\eta_{,xuu}+\left(\alpha-u\right)\eta_{,uu}-\xi_{,x}^{2}-\eta_{,u}=0~{},$
(20)
$3\eta_{,xxu}-\xi_{,xxx}^{2}+\left(\alpha-u\right)\left(2\eta_{,xu}-\xi_{,xx}^{2}\right)+\eta_{,tu}-\xi_{,yy}^{2}-2\eta_{,x}-\xi_{,tx}^{2}=0.$
(21)
The generic solution of the latter system is
$X=\left(c_{1}+c_{2}\left(2t\right)\right)\partial_{t}+\left(c_{2}x+c_{3}\phi\left(t\right)-\frac{1}{2}c_{4}\psi_{t}\left(t\right)y\right)\partial_{x}+\left(\frac{3}{2}c_{2}y+c_{4}\psi\left(t\right)\right)\partial_{y}+\left(c_{2}\left(\alpha-u\right)-c_{3}\phi_{t}\left(t\right)+c_{4}\frac{1}{2}\psi_{tt}\left(t\right)y\right)\partial_{u}\,,$
where $c_{1},c_{2},c_{3},c_{4}$ are constants of integration and
$\phi\left(t\right),~{}\psi\left(t\right)$ are arbitrary functions.
Therefore, the Lie point symmetries of the CC equation (1) are
$X_{1}=\partial_{t}~{},~{}X_{2}=2t\partial_{t}+x\partial_{x}+\frac{3}{2}y\partial_{y}-\left(u-\alpha\right)\partial_{u}~{},$
(22)
$X_{3}\left(\phi\right)=\phi\left(t\right)\partial_{x}-\phi_{t}\left(t\right)\partial_{u}~{},~{}X_{4}\left(\psi\right)=\psi\left(t\right)\partial_{y}-\frac{1}{2}\psi_{t}\left(t\right)y\partial_{x}+\frac{1}{2}\psi_{tt}\left(t\right)y\partial_{u}~{}.$
(23)
Surprisingly, the CC equation admits infinitely many Lie point symmetries.
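The scaling symmetry $X_{2}$ can be checked directly. The finite transformation it generates multiplies the left-hand side of (1) by an overall factor $\lambda^{-4}$ for any smooth function $u$, solution or not, so the solution set is preserved. A sympy sketch of this check follows; the test function and the values chosen for $\alpha$ and $\lambda$ are arbitrary.

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
alpha = sp.Rational(1, 2)  # arbitrary depth parameter, chosen for the check
lam = sp.Integer(4)        # arbitrary group parameter, lam = exp(epsilon)

def cc_residual(w):
    # Left-hand side of the CC equation (1).
    inner = sp.diff(w, t) + alpha*sp.diff(w, x) - w*sp.diff(w, x) + sp.diff(w, x, 2)
    return sp.diff(inner, x) + sp.diff(w, y, 2)

# Any smooth test function; it need not solve the equation.
u = sp.sin(t)*sp.cos(x)*sp.exp(-y**2) + x*y

# Finite transformation generated by X2:
#   t -> lam^2 t, x -> lam x, y -> lam^(3/2) y, u - alpha -> lam^(-1) (u - alpha).
scale = {t: t/lam**2, x: x/lam, y: y/lam**sp.Rational(3, 2)}
v = alpha + (u.subs(scale, simultaneous=True) - alpha)/lam

# The residual picks up exactly the overall factor lam^(-4):
discrepancy = cc_residual(v) - cc_residual(u).subs(scale, simultaneous=True)/lam**4
print(sp.simplify(discrepancy))
```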
The commutators of the Lie point symmetries are
$\displaystyle\left[X_{1},X_{2}\right]$ $\displaystyle=$ $\displaystyle
2X_{1}~{},~{}\left[X_{1},X_{3}\left(\phi\right)\right]=\left(X_{3}\left(\phi_{t}\right)\right)~{},~{}\left[X_{1},X_{4}\left(\psi\right)\right]=\left(X_{4}\left(\psi_{t}\right)\right)~{},~{}$
(24) $\displaystyle\left[X_{2},X_{3}\left(\phi\right)\right]$ $\displaystyle=$
$\displaystyle
X_{3}\left(\phi-2t\phi_{t}\right)~{},~{}\left[X_{2},X_{4}\left(\psi\right)\right]=X_{4}\left(\frac{3}{4}\psi-2t\psi_{t}\right)~{},~{}\left[X_{3}\left(\phi\right),X_{4}\left(\psi\right)\right]=0~{},$
(25)
$\displaystyle\left[X_{3}\left(\phi\right),X_{3}^{\prime}\left(\chi\right)\right]$
$\displaystyle=$ $\displaystyle 0~{}\
,~{}\left[X_{4}\left(\psi\right),X_{4}\left(\xi\right)\right]=\frac{1}{2}\left(X_{3}\left(\xi\psi_{t}-\psi\xi_{t}\right)\right).$
(26)
from where we observe that they form an infinite-dimensional Lie algebra. The
existence of an infinite number of symmetries is not a real surprise. From
$X_{3}$ we determine the similarity transformation $u=$
$-\left(\ln\phi\left(t\right)\right)_{,t}x+U\left(t,y\right)$, where
$U\left(t,y\right)=\frac{1}{2}\frac{\phi_{,tt}}{\phi}y^{2}+U_{1}\left(t\right)y+U_{0}\left(t\right)$
solves the reduced equation and $U_{1}\left(t\right),~{}U_{0}\left(t\right)$
are arbitrary functions.
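This similarity solution can be verified by substituting it into equation (1): the residual reduces to $A_{,t}-A^{2}+\phi_{,tt}/\phi$ with $A=-\phi_{,t}/\phi$, which vanishes identically. A short sympy check, assuming sympy is available:

```python
import sympy as sp

t, x, y, alpha = sp.symbols('t x y alpha')
phi, U1, U0 = sp.Function('phi'), sp.Function('U1'), sp.Function('U0')

# Similarity solution generated by X3(phi):
#   u = -(ln phi)_{,t} x + (phi_{,tt}/(2 phi)) y^2 + U1(t) y + U0(t)
u = (-sp.diff(sp.log(phi(t)), t)*x
     + sp.diff(phi(t), t, 2)/(2*phi(t))*y**2
     + U1(t)*y + U0(t))

# Substitute into the CC equation (1) and check that the residual vanishes.
inner = sp.diff(u, t) + alpha*sp.diff(u, x) - u*sp.diff(u, x) + sp.diff(u, x, 2)
residual = sp.diff(inner, x) + sp.diff(u, y, 2)
print(sp.simplify(residual))
```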
In the special case where $\phi\left(t\right)$ and $\psi\left(t\right)$ are
constants (without loss of generality we assume that
$\phi\left(t\right)=\psi\left(t\right)=1$), the Lie point symmetries simplify
to
$X_{1}^{\prime}=\partial_{t}~{},~{}X_{2}^{\prime}=2t\partial_{t}+x\partial_{x}+\frac{3}{2}y\partial_{y}-\left(u-\alpha\right)\partial_{u}~{},~{}X_{3}^{\prime}=\partial_{x}~{},~{}X_{4}^{\prime}=\partial_{y}$
(27)
with commutators
$\displaystyle\left[X_{1}^{\prime},X_{2}^{\prime}\right]$ $\displaystyle=$
$\displaystyle
2X_{1}^{\prime}~{},\left[X_{1}^{\prime},X_{3}^{\prime}\right]=0~{},~{}\left[X_{1}^{\prime},X_{4}^{\prime}\right]=0~{},~{}$
(28) $\displaystyle\left[X_{2}^{\prime},X_{3}^{\prime}\right]$
$\displaystyle=$ $\displaystyle
X_{3}^{\prime}~{},~{}\left[X_{2}^{\prime},X_{4}^{\prime}\right]=\frac{3}{2}X_{3}^{\prime}~{},~{}\left[X_{3}^{\prime},X_{4}^{\prime}\right]=0.$
(29)
However, there are no finite-dimensional closed Lie algebras for arbitrary
functions $\phi\left(t\right)$ and $\psi\left(t\right)$. The commutators of
the latter finite-dimensional Lie algebra are presented in Table 1.
Table 1: Commutators for Lie point symmetries of CC which form a finite-dimensional Lie algebra $\left[~{},~{}\right]$ | $X_{1}^{\prime}$ | $X_{2}^{\prime}$ | $X_{3}^{\prime}$ | $X_{4}^{\prime}$
---|---|---|---|---
$X_{1}^{\prime}$ | $0$ | $2X_{1}^{\prime}$ | $0$ | $0$
$X_{2}^{\prime}$ | $-2X_{1}^{\prime}$ | $0$ | $X_{3}^{\prime}$ | $\frac{3}{2}X_{3}^{\prime}$
$X_{3}^{\prime}$ | $0$ | $-X_{3}^{\prime}$ | $0$ | $0$
$X_{4}^{\prime}$ | $0$ | $-\frac{3}{2}X_{3}^{\prime}$ | $0$ | $0$
Let us demonstrate this by assuming
$\phi\left(t\right)=\phi_{1}+\phi_{2}e^{\omega_{1}t}~{}$and
$\psi\left(t\right)=\psi_{1}+\psi_{2}e^{\omega_{2}t}.$ Then from (22), (23) it
follows that the CC equation admits six Lie point symmetries which are the
vector fields
$X_{1}^{\prime}~{},~{}X_{2}^{\prime}~{},~{}X_{3}^{\prime}~{},~{}X_{4}^{\prime}~{},~{}X_{5}^{\prime}=e^{\omega_{1}t}\left(\partial_{x}-\omega_{1}\partial_{u}\right)~{},~{}X_{6}^{{}^{\prime}}=e^{\omega_{2}t}\left(\partial_{y}-\frac{\omega_{2}}{2}y\partial_{x}+\frac{\omega_{2}^{2}}{2}y\partial_{u}\right)$
(30)
with commutators (28), (29) and
$\displaystyle\left[X_{1}^{\prime},X_{5}^{\prime}\right]$ $\displaystyle=$
$\displaystyle\omega_{1}X_{5}^{\prime}~{},~{}\left[X_{1}^{\prime},X_{6}^{\prime}\right]=\omega_{2}X_{6}^{\prime}~{},~{}\left[X_{2}^{\prime},X_{5}^{\prime}\right]=e^{\omega_{1}t}\left(\left(1-\omega_{1}t\right)\partial_{x}+\omega_{1}\left(1+2\omega_{1}t\right)\partial_{u}\right)~{},$
(31) $\displaystyle\left[X_{2}^{\prime},X_{6}^{\prime}\right]$
$\displaystyle=$ $\displaystyle
e^{\omega_{2}t}\left(\frac{\left(3-4\omega_{2}t\right)}{2}\partial_{y}+\frac{\left(1+4\omega_{2}t\right)}{4}\omega_{2}y\partial_{x}-\frac{\left(5+4\omega_{2}t\right)}{4}\omega_{2}^{2}y\partial_{u}\right)~{},$
(32) $\displaystyle~{}\left[X_{3}^{\prime},X_{5}^{\prime}\right]$
$\displaystyle=$ $\displaystyle
0~{},~{}\left[X_{3}^{{}^{\prime}},X_{6}^{{}^{\prime}}\right]=0~{},~{}\left[X_{4}^{\prime},X_{6}^{\prime}\right]=-\frac{\omega_{2}}{2}e^{\omega_{2}t}\left(\partial_{x}-\omega_{2}\partial_{u}\right).$
(33)
from where it is clear that the symmetry vectors (30) do not form a closed Lie
algebra.
We want to constrain the functions $\phi\left(t\right)~{}$and
$\psi\left(t\right)$ such that the admitted Lie symmetries form a closed
five-dimensional Lie algebra with a different basis. In particular, we focus
on the case where the coefficients of the commutators (24)-(26) are constants.
Thus we end up with the system of equations
$\left\\{\phi=c_{1}\phi_{t}~{},~{}\phi=c_{2}\phi-2t\phi_{t}\right\\}~{}~{}\text{or~{}}\left\\{\phi=c_{1}^{\prime}\phi_{t}~{},~{}\phi_{t}=c_{2}^{\prime}\left(\phi-2t\phi_{t}\right)\right\\}~{},$
(34)
and
$\left\\{\psi=c_{3}\psi_{t}~{},~{}\psi=c_{4}\left(\frac{3}{4}\psi-2t\psi_{t}\right)\right\\}\text{
or~{}}\left\\{\psi^{\prime}=c_{3}\psi_{t}~{},~{}\psi_{t}^{\prime}=c_{4}\left(\frac{3}{4}\psi-2t\psi_{t}\right)\right\\},$
(35)
with constraint equations
$\xi=\xi\psi_{t}-\psi\xi_{t}\text{, where }\xi=\phi,~{}\text{or
~{}}\xi=\phi_{t}\text{ or }\xi=\left(\phi-2t\phi_{t}\right).$ (36)
Therefore, from (34), (35) and (36) it follows that the unique possible
admitted five-dimensional Lie algebra is that of (27) for
$\phi\left(t\right)=\text{const.}$ and $\psi\left(t\right)=\psi_{0}+\psi_{1}t$. Of
course there are additional finite-dimensional Lie algebras; for instance, any
set of generators constructed from $X_{3}$ forms a Lie algebra. However, this
specific five-dimensional Lie algebra has the novelty that it can provide a
plethora of different similarity transformations, while, for instance, the
similarity transformations which follow from $X_{3}$ are all of the same family.
###### Proposition 1
The CC equation (1) is invariant under infinitely many Lie point symmetries,
which form the Lie algebra
$\left\\{A_{2,1}\otimes_{s}A_{\infty}\otimes_{s}A_{\infty}\right\\}$ in the
Morozov-Mubarakzyanov classification scheme [25, 26, 27, 28]. However, there
exists a five-dimensional subalgebra consisting of the vector fields
$\left\\{X_{1},X_{2},X_{3}^{\prime},X_{4}^{\prime},X_{5}=t\partial_{y}-\frac{1}{2}y\partial_{x}\right\\}$,
which forms the Lie algebra $A_{5,19}^{ab}$ in the Patera-Winternitz
classification scheme [29, 30]. This five-dimensional Lie algebra provides the
maximum number of alternative families of similarity transformations.
As we shall see in the following, this five-dimensional Lie algebra plays a
significant role in the study of the Lie point symmetries for the GCC equation
(2). We proceed with the application of the Lie point symmetries for the
derivation of similarity solutions.
### 3.1 Similarity Solutions for the Camassa-Choi equation
Let us now apply the Lie point symmetries found in the previous section in
order to find similarity solutions for the CC equation (1). The CC equation is
a third-order equation in three independent variables. By applying Lie point
symmetries to a partial differential equation we reduce the number of
independent variables. Hence, in order to reduce the CC equation to an
ordinary differential equation we should apply two symmetry vectors. However,
not all the symmetry vectors survive through the reduction process. In
particular, if a given differential equation admits the two symmetry vectors
$\Gamma_{1},\Gamma_{2}$ with commutator
$\left[\Gamma_{1},\Gamma_{2}\right]=c\Gamma_{2}$, where $c$ may be zero, then
reduction of the differential equation with respect to the symmetry vector
$\Gamma_{2}$ yields a reduced equation which inherits the symmetry vector
$\Gamma_{1}$, while reduction with $\Gamma_{1}$ yields a differential
equation for which $\Gamma_{2}$ is not a point symmetry when $c\neq 0~{}$[24]. It
is clear that if we want to perform a second reduction of the differential
equation we should start by considering the symmetry vector $\Gamma_{2}$.
Therefore, by using the results of Table 1 we find that reduction with the
symmetry vectors
$\left\\{X_{1}^{\prime},X_{3}^{\prime},X_{4}^{\prime},X_{3}^{\prime}+X_{4}^{\prime}\right\\}$
gives reduced equations which inherit some of the symmetries of the original
equation. However, the application of the symmetry vectors
$\left\\{X_{1}^{\prime},X_{3}^{\prime},X_{4}^{\prime}\right\\}$ gives time-
independent or static solutions, which are not of special interest. Hence, we
focus on the reduction which follows from the symmetry vector
$X_{3}^{\prime}+X_{4}^{\prime}$.
From the Lie point symmetry $X_{34}=X_{3}^{\prime}+X_{4}^{\prime}$ we
calculate the invariants
$t~{},~{}w=y-x~{},~{}u=U\left(t,w\right).$ (37)
By using the latter invariant functions equation (1) is reduced to the
following partial differential equation
$U_{www}+\left(U_{w}\right)^{2}-\left(1-U+h_{0}\right)U_{ww}+U_{wt}=0.$ (38)
In order to proceed with the reduction we should derive the Lie point
symmetries of (38). Hence, by applying the Lie symmetry condition we find that
equation (38) is invariant under the Lie point symmetries
$\displaystyle Z_{1}$ $\displaystyle=$
$\displaystyle\partial_{t},~{}Z_{2}=2t\partial_{t}+w\partial_{w}+\left(h_{0}+1-U\right)\partial_{U}~{},$
(39) $\displaystyle Z_{3}$ $\displaystyle=$ $\displaystyle
t^{2}\partial_{t}+tw\partial_{w}+\left[\left(h_{0}+1-U\right)t+w\right]\partial_{U}~{},$
(40) $\displaystyle Z_{4}$ $\displaystyle=$
$\displaystyle\phi\left(t\right)\partial_{w}+\phi_{t}\partial_{U}.$ (41)
Vector fields $Z_{1},~{}Z_{2}$ and $Z_{4}$ are reduced symmetries, while
$Z_{3}$ is a new symmetry for the reduced equation (38). It is important to
mention that $Z_{4}$ describes an infinite number of symmetries; hence the
reduced equation (38) admits infinitely many Lie point symmetries, as does the
“mother” equation (1). On the other hand, the Lie point symmetries
$\left\\{Z_{1},Z_{2},Z_{3}\right\\}$ form a closed Lie algebra, known as
$SL\left(2,R\right)$.
The application of $Z_{4}$ to (38) provides the linear second-order ODE
$\phi_{tt}=0$, with
$U\left(t,w\right)=U_{0}\left(t\right)+\frac{\phi_{t}}{\phi}w$, where
$U_{0}\left(t\right)$ is an arbitrary function. Therefore, the similarity
solution is derived to be
$U\left(t,w\right)=U_{0}\left(t\right)+\frac{\phi_{1}}{\phi_{1}t+\phi_{0}}w.$
(42)
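The solution (42) can be checked by direct substitution into (38); a minimal sympy sketch (illustrative only — the third and second $w$-derivatives vanish, and the remaining two terms cancel):

```python
import sympy as sp

t, w, h0, phi0, phi1 = sp.symbols('t w h0 phi0 phi1')
U0 = sp.Function('U0')

# Similarity solution (42): U = U0(t) + phi1*w/(phi1*t + phi0).
U = U0(t) + phi1 * w / (phi1 * t + phi0)

# Left-hand side of the reduced equation (38).
lhs = (sp.diff(U, w, 3) + sp.diff(U, w)**2
       - (1 - U + h0) * sp.diff(U, w, 2) + sp.diff(U, w, t))
print(sp.simplify(lhs))  # 0: (42) solves (38) identically
```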
Reduction with respect to the symmetry vector $Z_{1}$ of equation (38) provides
the third-order ODE
$Y_{www}+\left(Y_{w}\right)^{2}-YY_{ww}=0~{},~{}U\left(t,w\right)=Y\left(w\right)+1+h_{0}\,~{},w=x$
(43)
which admits two point symmetries, the reduced symmetries $Z_{2}$ and
$Z_{3}$. Equation (43) can be integrated as follows
$Y_{ww}+YY_{w}+Y_{0}=0,$ (44)
where the latter equation can be solved in terms of quadratures. Indeed, for
the integration constant $Y_{0}=0$, the general solution is
$Y\left(w\right)=Y_{0}\tanh\left(\frac{w-w_{0}}{2c}\right),$ (45)
while in general equation (44) becomes
$Y_{w}+\frac{1}{2}Y^{2}+Y_{0}w+Y_{1}=0.$ (46)
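The kink profile (45) can likewise be verified symbolically. Below is a minimal sympy sketch, assuming the one-parameter form $Y(w)=a\tanh\left(aw/2\right)$ — a hypothetical parametrisation of (45) with $w_{0}=0$ in which the amplitude and width are linked:

```python
import sympy as sp

w, a = sp.symbols('w a', positive=True)

# Candidate kink profile: amplitude a, width 2/a (hypothetical
# parametrisation of the tanh solution (45) with w0 = 0).
Y = a * sp.tanh(a * w / 2)

# Residual of eq. (44) with integration constant Y0 = 0: Y_ww + Y*Y_w.
residual = sp.diff(Y, w, 2) + Y * sp.diff(Y, w)
print(sp.simplify(residual))  # 0: the profile solves Y_ww + Y*Y_w = 0
```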
The application of the Lie symmetry vector $Z_{2}$ provides the reduced third-
order ODE
$2\bar{Y}_{\sigma\sigma\sigma}+\left(\sigma-2\left(1+\bar{Y}\right)\right)\bar{Y}_{\sigma\sigma}-2\bar{Y}=0~{},~{}$
(47)
where now
$U\left(t,w\right)=1+h_{0}+\frac{\bar{Y}\left(\sigma\right)}{\sqrt{t}}~{},~{}\sigma=\frac{w}{\sqrt{t}}$.
The latter equation can be easily integrated as follows
$2\bar{Y}_{\sigma\sigma}-\bar{Y}-\left(2\bar{Y}-\sigma\right)\bar{Y}+\bar{Y}_{0}=0$
(48)
or
$2\bar{Y}_{\sigma}+\bar{Y}^{2}-\sigma\bar{Y}+\bar{Y}_{0}\sigma+\bar{Y}_{1}=0.$
(49)
In a similar way, the reduction with respect to the Lie symmetry vector
$Z_{3}$ gives the solution
$U\left(t,w\right)=\frac{w}{t}+h_{0}+1+\frac{Y^{\prime}\left(\lambda\right)}{t}~{},~{}\lambda=\frac{w}{t},$
(50)
where $Y^{\prime}\left(\lambda\right)$ is given by the following first-order
ODE
$Y_{\lambda}^{\prime}+\frac{1}{2}\left(Y^{\prime}\right)^{2}+Y_{0}\lambda+Y_{1}=0.$
(51)
It comes as no surprise that reduction with the three elements of
$SL\left(2,R\right)$ provides similar reduced equations. That is because the
three symmetry vectors are related by similarity transformations, and
consequently the reduced equations are also related; for more details we refer
the reader to [23].
Finally, reduction with the vector field $Z_{1}+Z_{4}$, for
$\phi\left(t\right)=1$, provides a travelling-wave solution, and the reduced
equation is that of (43) with $w=t-x$.
Similarly, the reduction of the CC equation (1) with respect to the symmetry
vector $X_{14}=X_{1}^{\prime}+X_{4}^{\prime}$ provides a travelling-wave
solution, as before. Therefore, we conclude that travelling-wave solutions
exist for the CC equation.
We continue our analysis by studying the invariant point transformations for
the GCC equation (2).
## 4 Point symmetries of the generalized Camassa-Choi equation
The Lie point symmetries of the GCC equation (2) are
$Y_{1}=\partial_{t}~{},~{}Y_{2}=2t\partial_{t}+x\partial_{x}+\frac{3}{2}y\partial_{y}-\frac{1}{n}u\partial_{u}$
(52)
$Y_{3}=\partial_{x}~{},~{}Y_{4}=\partial_{y}~{},~{}Y_{5}=2t\partial_{y}-y\partial_{x}~{},$
(53)
when $\alpha=0$ and
$\bar{Y}_{1}=\partial_{t}~{},~{}\bar{Y}_{2}=2t\partial_{t}+\left(x+\alpha
t\right)\partial_{x}+\frac{3}{2}y\partial_{y}-\frac{1}{n}u\partial_{u}$ (54)
$\bar{Y}_{3}=\partial_{x}~{},~{}\bar{Y}_{4}=\partial_{y}~{},~{}\bar{Y}_{5}=2t\partial_{y}-y\partial_{x}~{},$
(55)
for $\alpha\neq 0$.
The corresponding commutators for the admitted Lie symmetries are presented in
Table 2. We observe that the two admitted Lie algebras are different. For
$\alpha\neq 0$ the Lie symmetries form the Lie algebra $A_{5,23}^{b}$ and for
$\alpha=0$, the Lie symmetries form the Lie algebra $A_{5,19}^{ab}$ in the
Patera-Winternitz classification scheme [29, 30].
Table 2: Commutators of the admitted Lie point symmetries by the GCC $\left[~{},~{}\right]$ | $\bar{Y}_{1}$ | $\bar{Y}_{2}$ | $\bar{Y}_{3}$ | $\bar{Y}_{4}$ | $\bar{Y}_{5}$
---|---|---|---|---|---
$\bar{Y}_{1}$ | $0$ | $2\bar{Y}_{1}+\alpha\bar{Y}_{3}$ | $0$ | $0$ | $2\bar{Y}_{4}$
$\bar{Y}_{2}$ | $-\left(2\bar{Y}_{1}+\alpha\bar{Y}_{3}\right)$ | $0$ | $-\bar{Y}_{3}$ | $-\frac{3}{2}\bar{Y}_{4}$ | $\frac{1}{2}\bar{Y}_{5}$
$\bar{Y}_{3}$ | $0$ | $\bar{Y}_{3}$ | $0$ | $0$ | $0$
$\bar{Y}_{4}$ | $0$ | $\frac{3}{2}\bar{Y}_{4}$ | $0$ | $0$ | $-\bar{Y}_{3}$
$\bar{Y}_{5}$ | $-2\bar{Y}_{4}$ | $-\frac{1}{2}\bar{Y}_{5}$ | $0$ | $\bar{Y}_{3}$ | $0$
When the parameter $\alpha$ is zero, the Lie point symmetries
$\left\\{Y_{1},Y_{2},Y_{3},Y_{4}\right\\}$ are those which form a finite-
dimensional Lie algebra for the CC equation (1), that is, the vector fields (27).
However, when $\alpha\neq 0$ things are different. The fifth symmetry $Y_{5}$
is a case of $X_{4}\left(\psi\right)$ with $\psi\left(t\right)=t$. Indeed, the
Lie point symmetries admitted by the GCC are those which form the maximum
finite-dimensional Lie algebra for the CC equation.
We continue our analysis by applying the Lie point symmetries to determine
similarity solutions for the GCC equation.
### 4.1 Similarity Solutions for the generalized Camassa-Choi equation
As in the case of the CC equation, we consider the similarity transformation
provided by the vector field $Y_{34}=Y_{3}+Y_{4}$, because it is the similarity
transformation which provides a reduced equation that inherits symmetry
vectors. Hence, we find that the GCC equation (2) is reduced to
$U_{www}+U_{wt}+nU^{n-1}\left(U_{w}\right)^{2}+\left(U^{n}+1-\alpha\right)U_{ww}=0,$
(56)
where $u\left(t,x,y\right)=U\left(t,w\right)$ and $w=x-y$. We observe that
equation (56) reduces to (38) when $n=1$.
For $n\neq 1,$ we calculate the Lie point symmetries of (56), which are found
to be
$\bar{Z}_{1}=\partial_{t},~{}\bar{Z}_{2}=\partial_{w}~{}\text{and
}\bar{Z}_{3}=2t\partial_{t}+\left(t\left(1+\alpha\right)-w\right)\partial_{w}-\frac{1}{n}U\partial_{u}.$
The application of the Lie symmetry vector
$\bar{Z}_{12}=\partial_{t}+\partial_{w}$ in (56) provides the travelling-wave
solution
$Y_{\sigma\sigma\sigma}+nY^{n-1}\left(Y_{\sigma}\right)^{2}+\left(Y^{n}-\alpha-2\right)Y_{\sigma\sigma}=0~{},~{}U\left(t,w\right)=Y\left(\sigma\right)~{},~{}\sigma=w-t.$
(57)
The latter equation can be easily integrated by quadratures as follows
$Y_{\sigma}+\frac{1}{n+1}Y^{n+1}-\left(2+\alpha\right)Y+Y_{1}\sigma+Y_{0}=0.$ (58)
On the other hand, the reduction of (56) with respect to the similarity
transformation provided by the vector field $\bar{Z}_{3}$ provides
$U\left(t,w\right)=H\left(\zeta\right)t^{-\frac{1}{2n}}~{}\
,~{}\zeta=\frac{w+t\left(1+\alpha\right)}{\sqrt{t}}$ (59)
where$~{}H\left(\zeta\right)$ satisfies the third-order ordinary differential
equation
$2nHH_{\zeta\zeta\zeta}+nH\left(2H^{n}-\zeta\right)H_{\zeta\zeta}-\left(\left(n+1\right)H-2n^{2}H^{n}H_{\zeta}\right)H_{\zeta}=0.$
(60)
Equation (60) can be integrated as follows
$H_{\zeta\zeta}-\frac{1}{2n}H+\left(H^{n}-\frac{\zeta}{2}H\right)H_{\zeta}+H_{1}=0.$
(61)
The latter equation does not admit any point symmetry, so we cannot perform
further reduction. However, in Figure 1 we present some numerical solutions. It
is also important to mention that in equation (61) the parameter $\alpha$ plays
no role. Hence the same reduction also holds for the case $\alpha=0$.
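The qualitative behaviour of Figure 1 can be reproduced by direct numerical integration of (61); the following is a minimal scipy sketch, where the integration window $\left[0,3\right]$ and the step control are assumptions (the figure's plotting range is not stated):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(zeta, state, n, H1=0.0):
    """Eq. (61) as a first-order system: H'' = H/(2n) - (H**n - zeta*H/2)*H' - H1."""
    H, Hp = state
    return [Hp, H / (2 * n) - (H**n - zeta * H / 2) * Hp - H1]

# Initial conditions of Figure 1: H(0) = 1, H'(0) = -0.5, with H1 = 0.
for n in (2, 3, 5):
    sol = solve_ivp(rhs, (0.0, 3.0), [1.0, -0.5], args=(n,), max_step=0.01)
    print(n, sol.y[0, -1])
```

Plotting `sol.t` against `sol.y[0]` for each $n$ then gives the curves of Figure 1.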
Figure 1: Qualitative evolution of $H(\zeta)$ as given by the differential
equation (61) for initial conditions $H\left(0\right)=1$ and
$H_{\zeta}\left(0\right)=-0.5$. The plots are for $H_{1}=0$ and $n=2$ (red
line), $n=3$ (blue line) and $n=5$ (yellow line).
## 5 Conclusions
In this work, we applied the theory of symmetries of differential equations in
order to determine exact similarity solutions for the Camassa-Choi equation
(1) and its generalization (2). The CC equation describes weakly nonlinear
internal waves in a two-fluid system and can be seen as the two-dimensional
generalization of the Benjamin-Ono equation.
For the CC equation we found that it is invariant under an infinite-
dimensional Lie algebra, whose maximal finite Lie subalgebra is of dimension
five. That five-dimensional subalgebra is the one which forms the complete
group of invariant one-parameter point transformations for the GCC equation.
We applied the Lie point symmetries and proved the existence of similarity
solutions in the two-dimensional plane $\left\\{x,y\right\\}$. Specifically,
we found that the similarity solutions can be expressed in terms of
quadratures.
Surprisingly, under the application of similarity transformations the CC
equation can be reduced to an ordinary differential equation which is
invariant under the three-dimensional $SL\left(2,R\right)$, where all the
possible reductions provide similarity solutions related under point
transformations.
In future work we plan to study the physical properties of these new
similarity solutions.
## References
* [1] W. Choi and R. Camassa. J. Fluid Mech. 313, 83 (1996)
* [2] B. Harrop-Griffiths and J.L. Marzula, Nonlinearity 31, 1868 (2018)
* [3] A. Paliathanasis, K. Krishnakumar, K.M. Tamizhmani and P.G.L. Leach, Mathematics 4, 28 (2016)
* [4] X. Xin, Appl. Math. Lett. 55, 63 (2016)
* [5] X. Xin, Acta Phys. Sin. 65, 240202 (2016)
* [6] N. Kallinikos and E. Meletlidou, J. Phys. A: Math. Theor. 46, 305202 (2013)
* [7] S. Jamal and A. Paliathanasis, J. Geom. Phys. 117, 50 (2017)
* [8] G.M. Webb, J. Phys A: Math. Gen. 23, 3885 (1990)
* [9] P.G.L. Leach, J. Math. Anal. Appl. 348, 487 (2008)
* [10] M.S. Velan and M. Lakshmanan, Int. J. Non-linear Mech. 31, 339 (1996)
* [11] M. Pandey, Int. J. Nonl. Sc. Num. Sim. 16, 337 (2015)
* [12] A. Paliathanasis, Zeitschrift für Naturforschung A, in press [DOI:10.1515/zna-2019-0063]
* [13] V.N. Chetverikov, Acta Appl. Math. 56, 121 (1999)
* [14] S. Szatmari and A. Bihlo, Comm. Nonl. Sci. Num. Sim. 19, 530 (2014)
* [15] A.A. Chesnokov, J. Appl. Mech. Techn. Phys. 49, 737 (2008)
* [16] A.A. Chesnokov, Eur. J. Appl. Math. 20, 461 (2009)
* [17] X. Xin, L. Zhang, Y. Xia and H. Liu, Appl. Math. Lett. 94, 112 (2019)
* [18] A. Paliathanasis, Physica Scripta, in press [DOI: 10.1088/1402-4896/ab32ad]
* [19] C.E. Kenig, G. Ponce and L. Vega, Transactions of the American Mathematical Society 342, 155 (1994)
* [20] G.W. Bluman and S. Kumei, Symmetries and Differential Equations, Springer-Verlag, New York, (1989)
* [21] P.J. Olver, Applications of Lie Groups to Differential Equations, Springer-Verlag, New York, (1993)
* [22] N.H. Ibragimov, CRC Handbook of Lie Group Analysis of Differential Equations, Volume I: Symmetries, Exact Solutions, and Conservation Laws, CRS Press LLC, Florida (2000)
* [23] S. Jamal, P.G.L. Leach and A. Paliathanasis, Quaestiones Mathematicae, 42, 125 (2018)
* [24] K.S. Govinder, J. Math. Anal. Appl. 258, 720 (2001)
* [25] V.V. Morozov, Classification of six-dimensional nilpotent Lie algebras, Izvestia Vysshikh Uchebn Zavendeniĭ Matematika, 5, 161 (1958)
* [26] G.M Mubarakzyanov, Izvestia Vysshikh Uchebn Zavendeniĭ Matematika, 32, 114 (1963)
* [27] G.M Mubarakzyanov Izvestia Vysshikh Uchebn Zavendeniĭ Matematika, 34, 99 (1963)
* [28] G.M Mubarakzyanov Izvestia Vysshikh Uchebn Zavendeniĭ Matematika, 35, 104 (1963)
* [29] J. Patera, R.T. Sharp, P. Winternitz and H. Zassenhaus, J. Math. Phys. 17, 986 (1976)
* [30] J. Patera and P. Winternitz, J. Math. Phys. 18, 1449 (1977)
Derks I.P., de Waal A. (2020) A Taxonomy of Explainable
Bayesian Networks. In: Gerber A. (eds) Artificial Intelligence Research.
SACAIR 2021. Communications in Computer and Information Science, vol 1342.
Springer, Cham.
https://doi.org/10.1007/978-3-030-66151-9_14
11institutetext: Department of Statistics, University of Pretoria
22institutetext: Center for Artificial Intelligence Research (CAIR)
# A Taxonomy of Explainable Bayesian Networks
Iena Petronella Derks 11 0000-0002-7070-5036 Alta de Waal 1122
0000-0001-8121-6249
###### Abstract
Artificial Intelligence (AI), and in particular, the explainability thereof,
has gained phenomenal attention over the last few years. Whilst we usually do
not question the decision-making process of these systems in situations where
only the outcome is of interest, we do however pay close attention when these
systems are applied in areas where the decisions directly influence the lives
of humans. In particular, noisy and uncertain observations close to the
decision boundary can result in predictions that cannot readily be explained,
which may foster mistrust among end-users. This drew attention to AI
methods for which the outcomes can be explained. Bayesian networks are
probabilistic graphical models that can be used as a tool to manage
uncertainty. The probabilistic framework of a Bayesian network allows for
explainability in the model, reasoning and evidence. The use of these methods
is mostly ad hoc and not as well organised as explainability methods in the
wider AI research field. As such, we introduce a taxonomy of explainability in
Bayesian networks. We extend the existing categorisation of explainability in
the model, reasoning or evidence to include explanation of decisions. The
explanations obtained from the explainability methods are illustrated by means
of a simple medical diagnostic scenario. The taxonomy introduced in this paper
has the potential not only to encourage end-users to efficiently communicate
outcomes obtained, but also support their understanding of how and, more
importantly, why certain predictions were made.
###### Keywords:
Bayesian network Reasoning Explainability.
## 1 Introduction
Advances in technology have contributed to the generation of big data in
nearly all fields of science, giving rise to new challenges with respect to
explainability of models and techniques used to analyse such data. These
models and techniques are often too complex; concealing the knowledge within
the machine, hence decreasing the extent of interpretability of results.
Subsequently, the lack of explainable models and techniques contribute to
mistrust among users in fields of science where interpretability and
explainability are indispensable.
To elucidate the need for explainable models, consider the following three
scenarios. Firstly, suppose a medical diagnosis system is used to determine
whether a tumour sample is malignant or benign. Here, the medical practitioner
must be able to understand how and why the system reached the decision, and,
if necessary, inspect whether the decision is supported by medical knowledge
[2]. Next, consider self-driving cars. In this context, the self-driving car
must be able to process information faster than a human, such that accidents
and fatalities can be avoided [21]. Suppose a self-driving car is involved in
an accident, then the system must be able to explain that in order to avoid
hitting a pedestrian, the only option was to swerve out of the way and, by
coincidence, into another vehicle. Lastly, consider an online restaurant
review system, where reviews are classified as positive or negative based on
the words contained in the review. Here, the classifier simply returns whether
a review is positive or negative, without explaining which words contributed
to the classification. As such, negative reviews that are expressed in, for
example, a sarcastic manner, might be classified as positive, resulting in a
restaurant receiving a higher rating and more diners – who might experience
bad service (or even food poisoning) as a result of mislabelled reviews.
Given its relevance in many application areas, the explainability problem has
attracted a great deal of attention in recent years, and as such, is an open
research area [24]. The manifestation of explainable systems in high-risk
areas has influenced the development of explainable artificial intelligence
(XAI) in the sense of prescriptions or taxonomies of explanation. These
include fairness, accountability, transparency and ethicality [3, 11, 23]. The
foundation of such a system should include these prescriptions such that a
level of usable intelligence is reached to not only understand model behaviour
[1] but also understand the context of an application task [14]. Bayesian
networks (BNs) – which lie at the intersection of AI, machine learning, and
statistics – are probabilistic graphical models that can be used as a tool to
manage uncertainty. These graphical models allow the user to reason about
uncertainty in the problem domain by updating one's beliefs, whether this
reasoning occurs from cause to effect, or from effect to cause. Reasoning in
Bayesian networks is often referred to as what-if questions. The flexibility
of a Bayesian network allows for these questions to be predictive, diagnostic
and inter-causal. Some what-if questions might be intuitive to formulate, but
this is not always the case, especially on a diagnostic and inter-causal level.
This might result in sub-optimal use of explainability in BNs, especially at
an end-user level. Apart from well-established reasoning methods, the
probabilistic framework of a Bayesian network also allows for explainability
in evidence. These include most probable explanation and most relevant
explanation. To extend the existing explainability methods, we propose an
additional approach which considers explanations concerned with decisions.
In this paper, we research the current state of explainable models in AI and
machine learning tasks, where the domain of interest is BNs. In the current
research, explanation is often done by principled approaches to finding
explanations for models, reasoning, and evidence. Using this, we are able to
formulate a taxonomy of explainable BNs. We extend this taxonomy to include
explanation of decisions. This taxonomy will provide end-users with a set of
tools to better understand predictions made by BNs and will therefore
encourage efficient communication between end-users. The paper is structured
as follows. We first investigate the community and scope of explainability
methods in Section 2. Thereafter, we introduce explanation in BNs, which
includes the formulation of principled approaches, the theoretical properties
associated therewith and a hands-on medical diagnosis example. Section 4
presents our newly formulated taxonomy of explainable BNs. The final section
concludes the paper and includes a short discussion of future work.
## 2 Related Work
In application areas where erroneous decisions have a direct impact on
livelihood, relying on systems where the predictions cannot be explained may
not be an option. Explainability in such systems aids in establishing trust in
not only circumstances where the system is used as a primary decision tool,
but also cases where the system takes on a supportive role [28].
Over the past few years, explainability in AI systems has gained immense
attention from the research community. This is reflected in the launch of
various events and organisations. The Defense Advanced Research Projects
Agency (DARPA) launched the Explainable Artificial Intelligence (XAI)
initiative in 2016. The XAI program's intention is to encourage the production
of AI techniques where emphasis is placed on developing more accurate and
precise models, while still maintaining a high level of explainability.
Ultimately, XAI systems must be able to explain their rationale and enable
understanding [12]. Conferences such as the International Joint Conference on
Artificial Intelligence (IJCAI) conduct workshops specifically focusing on
XAI [26]. This topic has also made a noticeable appearance at the Neural
Information Processing Systems (NeurIPS) conference, with panel discussions
solely concentrating on XAI.
The scope of explainability is inherently linked to the complexity of the
model, as well as the goal thereof. Usually, but not necessarily, there is a
trade-off between model accuracy and explainability – the higher the accuracy,
the lower the explainability [31]. For example, decision trees provide a clear
explanation but are often less accurate than deep learning models, which are
less transparent. It should be mentioned that this trade-off is also connected
to the quality of data. AI and machine learning models that are transparent by
design, such as linear regression, decision trees and k-nearest neighbours,
convey a degree of explainability [1]. However, when AI and machine learning
models do not provide clear explanations, separate explainability methods are
applied to the model to gain meaningful explanations. Methods of
explainability are not limited to the behaviour of the model or decision-
making process as a whole, and may be applied to single instances, predictions
or decisions [6]. These explanations can be in the form of visualisations or
natural language [10]. Some of the existing explainability methods are layer-
wise relevance propagation (LRP), which are often used in deep neural networks
where the prediction made by the network is propagated back into the neural
network using a set of predefined rules [27]. Another explainability method is
local interpretable model-agnostic explanations (LIME). LIME methods can be
used to explain prediction instances by attempting to understand the behaviour
of the prediction function in the context of the prediction. Here, the user is
able to obtain a local explanation for that particular instance. LIME can also
be used to obtain explanations for the entire model by generating multiple
instances [17]. Methods of explainability are also extended to document
classifiers, where documents are classified based on predicted likelihood.
Here, explanations can be produced based on a search through the text-space of
possible word combinations – starting with a single word and expanding the
number of words until an explanation is found [25].
Uncertainty is present in the majority of AI fields, such as knowledge
representation, learning and reasoning [22]. Real-world data often contain
noisy and uncertain observations close to the decision boundary, which may
result in predictions that cannot be explained [7]. Probabilistic graphical
models can be seen as uncertainty management tools as they are able to
represent and reason with uncertainty. These probabilistic models are often
employed to support decision making in various application fields, including
legal and medical applications [29]. One such probabilistic model is the BN,
which is capable of combining expert knowledge and statistical data, thereby
allowing complex scenarios to be modelled. However, not only are the inner
workings of Bayesian networks complicated to most end-users [15], the
explanation of probabilistic reasoning is challenging, and as a result outcomes
may appear to be counter-intuitive or wrong [16]. Therefore, there exists a demand
for explanation in Bayesian networks.
Explanation methods for Bayesian networks can be divided into three broad
approaches. The first approach consists of presenting information contained in
the knowledge base and is known as explanation of the model. There are two
objectives associated with this type of explanation. Firstly, explanation of
the model is used to assist application experts in the model-construction
phase. Secondly, it is used for instructional purposes to offer knowledge
about the domain [19]. The objective of the second approach is to justify the
conclusion and how it was obtained. This approach is referred to as
explanation of reasoning [9]. The final approach, explanation of evidence, is
concerned with the treatment of the variables in the Bayesian network [13]. In
explanation of evidence, also referred to as abduction, an explanation is seen
as the configuration of a portion of the variables present in the Bayesian
network, given evidence. Not included in the aforementioned explanation
methods are techniques that describe whether the end-user is ready to make a
decision, and if not, what additional information is required to better
prepare for decision making. Techniques such as sensitivity analysis [4] and
same-decision probability [5] provide the end-user with insight on decisions.
We group these methods into a fourth approach, explanation of decisions. For
the purpose of this paper, and accordingly, the formulation of the explainable
taxonomy, we only consider explanation of reasoning, evidence, and decisions.
Explanation of the model is excluded from this taxonomy – at the time being –
as the intent of the taxonomy is to support understanding of how and why
predictions were made and not on the model-construction itself.
## 3 Explainable Bayesian networks
We adapt the XAI terminology to the scope of BNs by defining the term XBN and
thereby referring to explainable BNs. To illustrate XBN in Bayesian networks,
consider the Asia network from Lauritzen and Spiegelhalter (1988) [20] as an
example.
Example statement: Suppose a patient visits a doctor, complaining about
shortness of breath (dyspnoea) (D). The patient is worried he might have lung
cancer. The doctor knows that lung cancer is only one of the possible causes
for dyspnoea, and other causes include bronchitis (B) and tuberculosis (T).
From her training, the doctor knows that smoking increases the probability of
lung cancer (C) and bronchitis. Both tuberculosis and lung cancer would result
in an abnormal X-ray (X) result. Lastly, a recent visit to Asia might increase
the probability of tuberculosis, as the disease is more prevalent there than
the patient’s country of origin.
From this example statement, the nodes and values are defined and then the
graphical structure of the BN is constructed. This is followed by the
quantification of the conditional probability tables (CPTs) for each node [18]
(all BN models are constructed in BayesiaLab, www.bayesialab.com). The final
BN is illustrated in Figure 1. Now that the domain and uncertainty are
represented in the BN, we will look into how to use the BN. Reasoning in BNs
takes place once we observe the value of one or more variables and we want to
condition on this new information [18]. It is important to note that this
information need not necessarily flow in the direction of the arcs, and
therefore, reasoning can occur in the opposite direction of the arcs.
Figure 1: Asia Bayesian network
### 3.1 Explanation of reasoning
Suppose during the doctor’s appointment, the patient tells the doctor he is a
smoker before any symptoms are assessed. As mentioned earlier, the doctor
knows smoking increases the probability of the patient having lung cancer and
bronchitis. This will, in turn, also influence the expectation of other
symptoms, such as the result of the chest X-Ray and shortness of breath. Here,
our reasoning is performed from new information about the causes to new
beliefs of the effects. This type of reasoning is referred to as predictive
reasoning and follows the direction of the arcs in the network. Through
predictive reasoning, we are interested in questions concerning what will
happen. In some cases, predictive reasoning is not of great insight and it is
often required to reason from symptoms (effect) to cause, which entails
information flow in the opposite direction to the network arcs. For example,
bronchitis can be seen as an effect of smoking. Accordingly, we are interested
in computing $P(S|B)$. This is referred to as diagnostic reasoning and is
typically used in situations where we want to determine what went wrong. The
final type of probabilistic reasoning in BNs is inter-causal reasoning, which
relates to mutual causes of a common effect – typically indicated by a
v-structure in the network. In other words, inference is performed on the
parent nodes of a shared child node. Note that the parent nodes are
independent of one another unless the shared child node is observed, a concept
known as d-separation. From the Asia network, we observe a v-structure between
Tuberculosis, Lung Cancer and Tuberculosis or Cancer (see Figure 2(a)). Here,
Tuberculosis is independent from Lung cancer. Suppose we observe the patient
has either Tuberculosis or Cancer – indicated by the green (or light grey if
viewed in grey-scale) bar in Figure 2(b) – then this observation increases the
probabilities of the parent nodes, Tuberculosis and Lung Cancer. However, if
it is then revealed that the patient does, in fact, have Tuberculosis it, in
turn, lowers the probability of a patient having Lung Cancer (see Figure
2(c)). We can then say Lung Cancer has been explained away. It should be noted
that the probabilistic reasoning methods discussed above can be used as is, or
can be combined to accommodate the problem at hand.
(a) Joint Probability Tables for T, C and P
(b) Adding evidence to P
(c) Adding evidence to T
Figure 2: Belief Updating for T, C and P
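The reasoning patterns above can be reproduced with a brute-force computation on the joint distribution. The sketch below uses a small fragment of the Asia network with illustrative, made-up CPT values (not the published Asia parameters): Smoker influences Lung Cancer, Tuberculosis is independent of Smoker, and "Tuberculosis or Cancer" is treated as the deterministic disjunction of its parents.

```python
from itertools import product

# Hypothetical CPTs for a fragment of the Asia network (illustrative numbers,
# not the published parameters). "Tuberculosis or Cancer" is the deterministic
# OR of T and C, so it never appears explicitly in the joint below.
p_s = {True: 0.5, False: 0.5}            # P(Smoker)
p_t = {True: 0.02, False: 0.98}          # P(Tuberculosis), independent of S
p_c_given_s = {True: 0.10, False: 0.01}  # P(LungCancer=True | Smoker)

def joint(s, t, c):
    return p_s[s] * p_t[t] * (p_c_given_s[s] if c else 1 - p_c_given_s[s])

def prob(query, evidence=lambda s, t, c: True):
    """P(query | evidence) by brute-force enumeration of the joint."""
    num = den = 0.0
    for s, t, c in product([True, False], repeat=3):
        if evidence(s, t, c):
            w = joint(s, t, c)
            den += w
            num += w if query(s, t, c) else 0.0
    return num / den

# Diagnostic reasoning (against the arcs): observing Lung Cancer raises
# the belief that the patient is a smoker.
assert prob(lambda s, t, c: s, lambda s, t, c: c) > prob(lambda s, t, c: s)

# Inter-causal reasoning: observing "Tuberculosis or Cancer" raises the
# belief in Lung Cancer, but additionally observing Tuberculosis lowers it
# again; Lung Cancer has been "explained away".
prior     = prob(lambda s, t, c: c)
given_p   = prob(lambda s, t, c: c, lambda s, t, c: t or c)
given_p_t = prob(lambda s, t, c: c, lambda s, t, c: (t or c) and t)
assert prior < given_p and given_p_t < given_p
```

Enumeration is exponential in the number of variables, but for a network of this size it makes the three reasoning directions explicit without any inference machinery.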
### 3.2 Explanation of evidence
Sometimes, users of the system find the results of reasoning unclear or
questionable. One way to address this is to provide scenarios for which the
reasoning outcomes are upheld. A fully specified scenario is easier to
understand than a set of reasoning outcomes. Explanation of evidence methods
are useful in specifying these scenarios. They are based on the posterior
probability and the generalised Bayes factor. Firstly, we focus on methods
that aim to find a configuration of variables such that the posterior
probability is maximised given the evidence. Here, we consider the Most
Probable Explanation (MPE), which is a special case of the Maximum A
Posteriori (MAP). The MAP in a BN is a variable configuration which includes a
subset of unobserved variables in the explanation set such that the posterior
probability – given evidence – is maximised. Similarly, if the variable
configuration consists of all variables present in the explanation set, we
have an MPE solution [13]. However, in many real-world applications the
variable set contains a large number of variables, which may result in over-
or under-specified explanations from the MPE; in fact, only a few variables
may be relevant in explaining the evidence. The
next approach finds a single instantiation that maximises the generalised
Bayes factor in a trans-dimensional space containing all possible partial
instantiations. In other words, this approach aims to obtain an explanation
only consisting of the most relevant variables in the BN, given the evidence.
This approach is known as the Most Relevant Explanation (MRE) [34, 32, 33].
#### 3.2.1 Most Probable Explanation
Let’s first consider the MPE method. Recall that the MPE finds the complete
instantiation of the target variables – which are defined to be unobserved –
such that the joint posterior probability is maximised given evidence. Figure
3 shows the scenario (or case) that has the highest joint probability in the
Asia network. Note here that the probabilities are replaced by the likelihood
of each variable state belonging to the most probable scenario. For example,
if we look at the two possible states for Bronchitis, we see that ‘False’, i.e., the
patient does not have bronchitis, is more probable. Suppose we discover that
the patient suffers from shortness of breath; we can then set the evidence for
Dyspnoea to ‘True’ (illustrated in Figure 4). By introducing this new
evidence, we now observe a slightly different scenario, where it is more
probable for the patient to be a smoker and have bronchitis. Notice here that
variables that seem irrelevant to the evidence explanation, such as Visit to
Asia and XRay, are included in the explanation. This could lead to
overspecified hypotheses, especially in larger networks.
Figure 3: Initial MPE for Asia Bayesian network
Figure 4: Updated MPE for Asia Bayesian network
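The MPE can be computed directly by enumerating all completions of the unobserved variables and keeping the configuration with the highest joint probability. The sketch below uses a hypothetical three-variable chain Smoker → Bronchitis → Dyspnoea with illustrative CPT values (an assumption for this example, not the actual Asia parameters); with these numbers it reproduces the qualitative behaviour described above, where adding the Dyspnoea evidence flips Bronchitis in the most probable scenario.

```python
from itertools import product

# Toy chain Smoker -> Bronchitis -> Dyspnoea with illustrative CPTs.
p_s = {True: 0.5, False: 0.5}
p_b = {True: 0.6, False: 0.3}   # P(Bronchitis=True | Smoker)
p_d = {True: 0.8, False: 0.1}   # P(Dyspnoea=True | Bronchitis)

def joint(s, b, d):
    return (p_s[s] * (p_b[s] if b else 1 - p_b[s])
                   * (p_d[b] if d else 1 - p_d[b]))

def mpe(evidence):
    """Most Probable Explanation: the complete instantiation of the
    unobserved variables maximising the joint probability with the evidence."""
    best, best_p = None, -1.0
    for s, b, d in product([True, False], repeat=3):
        cfg = {"Smoker": s, "Bronchitis": b, "Dyspnoea": d}
        if all(cfg[k] == v for k, v in evidence.items()):
            p = joint(s, b, d)
            if p > best_p:
                best, best_p = cfg, p
    return best, best_p

initial, _ = mpe({})                   # no evidence yet
updated, _ = mpe({"Dyspnoea": True})   # patient reports shortness of breath
assert initial["Bronchitis"] is False and updated["Bronchitis"] is True
```

Note that the MPE still assigns a value to every variable, including ones irrelevant to the evidence, which is exactly the over-specification problem the MRE addresses next.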
#### 3.2.2 Most Relevant Explanation
To avoid overspecified hypotheses, one approach is to trim or prune less
relevant variables from the explanation. That is, instead of finding the
complete instantiation of the target variables, a partial instantiation of the
target variables is found such that the generalised Bayes factor is maximised.
Let’s first consider the explanations obtained from the generalised Bayes
factor. Again, suppose the patient suffers from shortness of breath
(evidence). We are then interested in finding only those variables that are
relevant in explaining why the patient has shortness of breath. Table 1
contains the set of explanations obtained from the generalised Bayes factor.
For example, the last entry shows that a possible explanation for shortness of
breath is a trip to Asia and an abnormal X-ray; this explanation thus includes
only 2 of the remaining 7 variables (excluding Dyspnoea). As mentioned,
the MRE is the explanation that maximises the generalised Bayes factor. From
Table 1 we see that having Bronchitis best explains the shortness of breath.
Notice that this explanation does not include Smoking, as opposed to the MPE
which included Smoking. Thus, although smoking is a probable cause for
shortness of breath, it is not the most relevant cause. An interesting
characteristic of the MRE is its ability to capture the explaining away
phenomenon [33].
Table 1: Explanations of GBF scores for Asia network

| Explanation | Generalised Bayes Factor |
|---|---|
| (Bronchitis) | 6.1391 |
| (Smoker, Tuberculosis or Cancer) | 1.9818 |
| (Tuberculosis or Cancer) | 1.9771 |
| (Lung Cancer, Smoker) | 1.9723 |
| (Lung Cancer) | 1.9678 |
| (Smoker, Tuberculosis) | 1.8896 |
| (Tuberculosis) | 1.8276 |
| (Smoker, XRay) | 1.7779 |
| (Smoker) | 1.7322 |
| (Visit to Asia, XRay) | 1.5635 |
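The generalised Bayes factor behind a table like the one above can be evaluated by brute force on a toy network. The sketch below (hypothetical CPTs on an illustrative Smoker → Bronchitis → Dyspnoea chain, not the published Asia parameters) scores each candidate partial instantiation x by GBF(x; e) = P(e | x) / P(e | not-x) and takes the maximiser over the candidates as the MRE.

```python
from itertools import product

# Illustrative chain Smoker -> Bronchitis -> Dyspnoea (hypothetical CPTs).
p_s = {True: 0.5, False: 0.5}
p_b = {True: 0.6, False: 0.3}   # P(Bronchitis=True | Smoker)
p_d = {True: 0.8, False: 0.1}   # P(Dyspnoea=True | Bronchitis)

def joint(s, b, d):
    return (p_s[s] * (p_b[s] if b else 1 - p_b[s])
                   * (p_d[b] if d else 1 - p_d[b]))

def gbf(partial, evidence_d=True):
    """GBF(x; e) = P(e | x) / P(e | not-x) for a partial instantiation x
    of the target variables {Smoker, Bronchitis}."""
    pe_x = p_x = pe_nx = p_nx = 0.0
    for s, b, d in product([True, False], repeat=3):
        cfg = {"Smoker": s, "Bronchitis": b}
        w = joint(s, b, d)
        if all(cfg[k] == v for k, v in partial.items()):
            p_x += w
            if d == evidence_d:
                pe_x += w
        else:
            p_nx += w
            if d == evidence_d:
                pe_nx += w
    return (pe_x / p_x) / (pe_nx / p_nx)

candidates = [{"Bronchitis": True}, {"Smoker": True},
              {"Smoker": True, "Bronchitis": True}]
mre = max(candidates, key=gbf)
# Bronchitis alone maximises the Bayes factor; including Smoker as well
# yields a lower score, mirroring the pruning behaviour of the MRE.
assert mre == {"Bronchitis": True}
```

In practice the space of partial instantiations grows combinatorially, which is why dedicated MRE algorithms [34, 32, 33] are used instead of enumeration.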
### 3.3 Explanation of decisions
Hidden or unobserved variables appear in most application fields, especially
in areas where decisions made by the end-user directly influence human lives.
For example, when first examining a patient, the health-state of the patient
is unknown. In these situations, one would typically ask two questions:
first, given the available information, are we ready to make a decision? And
second, if not, what additional information do we require to make an informed
decision? To answer these questions, the authors of [5] propose a threshold-
based notion, named same-decision probability, which provides the user with a
confidence measure representing the probability that the same decision would
be made had information pertaining to the unknown variables been available.
Another possible
threshold-based solution to this is sensitivity analysis [30]. In sensitivity
analysis, the assessments for the conditional probabilities in the BN are
systematically changed to study the effect on the output produced by the
network. The idea is that some conditional probabilities will hardly influence
the decisions, while others will have significant impact.
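A minimal one-way sensitivity analysis can be sketched by sweeping a single conditional probability over a range and recording the effect on a posterior of interest. The network (the same illustrative Smoker → Bronchitis → Dyspnoea chain) and all numbers below are assumptions for the example, not parameters from the paper.

```python
def posterior_b(p_b_given_smoker):
    """P(Bronchitis=True | Dyspnoea=True) as a function of one CPT entry."""
    p_s = 0.5
    p_b = {True: p_b_given_smoker, False: 0.3}  # P(B=True | Smoker)
    p_d = {True: 0.8, False: 0.1}               # P(D=True | B)
    num = den = 0.0
    for s in (True, False):
        for b in (True, False):
            # weight of each configuration consistent with evidence D=True
            w = p_s * (p_b[s] if b else 1 - p_b[s]) * p_d[b]
            den += w
            num += w if b else 0.0
    return num / den

# Sweep the parameter: entries that barely move the posterior need little
# elicitation effort, while entries that move it a lot deserve careful
# assessment by the domain expert.
sweep = [posterior_b(x) for x in (0.2, 0.4, 0.6, 0.8)]
assert sweep == sorted(sweep)   # here the posterior increases monotonically
```

Full sensitivity analysis as in [30] varies every assessment systematically; this one-parameter sweep only illustrates the underlying idea.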
#### 3.3.1 Same-decision Probability
Suppose we are interested in making a decision on whether the patient is a
smoker (Smoking), which is conditioned on evidence Tuberculosis or Cancer. We
can then use the BN such that our decision pertaining to the hypothesis is
supported on the basis that the belief in the hypothesis given some evidence
exceeds a given threshold. Now, the patient may have access to information
that is unknown to us, for example, the patient recently visited Asia and
chose not to disclose this information. Therefore, we do not have access to
the true state of this variable. Knowledge of the true state may confirm or
contradict our decision, since it changes the probability of smoking given
the evidence. By comparing, for each possible state of the undisclosed
variable, the resulting probability with the threshold, we obtain a degree of
confidence in our original decision: if the decision would rarely change had
the patient disclosed his trip to Asia, our confidence is high. This is
quantified by the same-decision probability (SDP).
Consider now the BN given in Figure 5. Notice the addition of three nodes:
P(Smoker=True), Decision Threshold and Decision, where P(Smoker=True)
represents the hypothesis probability and the decision threshold is set to
55%. Suppose now we update our network such that Tuberculosis or Cancer (P)
is True – to reflect the scenario discussed above. The hypothesis probability
then increases from 50.00% to 84.35% (see Figure 6). Our decision is
confirmed since the hypothesis probability now exceeds the given threshold.
From Table 2, the SDP before adding evidence for the ‘True’ state is 0.00%.
After adding evidence, the SDP for our decision is 83.88%, indicating that
our decision confidence is 83.88%. (The SDP scenario was constructed using
the decision node functionality in BayesiaLab; the decision nodes are
indicated in green, or dark grey if viewed in grey-scale.)
Figure 5: Addition of decision node in Asia network
Figure 6: Updated decision for Asia network
Table 2: Decision Confidence for Asia network

| | States | Minimum | Maximum | Mean | Standard Deviation |
|---|---|---|---|---|---|
| No evidence | False | 100.00% | 100.00% | 100.00% | 0.00% |
| | True | 0.00% | 0.00% | 0.00% | 0.00% |
| Evidence | False | 0.00% | 100.00% | 16.12% | 36.77% |
| | True | 0.00% | 100.00% | 83.88% | 36.77% |
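The SDP itself can be computed by enumeration: for each state of the hidden variable, check whether the thresholded decision would change, and weight the unchanged cases by the posterior of that state. The sketch below uses a hypothetical three-variable model (hypothesis S, hidden H such as an undisclosed trip, observed evidence E) with made-up numbers; it is not the BayesiaLab scenario above.

```python
from itertools import product

# Hypothetical model: hypothesis S (smoker), hidden H (e.g. an undisclosed
# trip), observed evidence E = True. All numbers are illustrative.
p_s = {True: 0.5, False: 0.5}
p_h = {True: 0.2, False: 0.8}
p_e = {(True, True): 0.6, (True, False): 0.8,   # P(E=True | S, H)
       (False, True): 0.7, (False, False): 0.2}
THRESHOLD = 0.55

def joint(s, h):
    """Joint probability of (S=s, H=h, E=True)."""
    return p_s[s] * p_h[h] * p_e[(s, h)]

def posterior_s(h=None):
    """P(S=True | E=True [, H=h])."""
    num = den = 0.0
    for s, hh in product([True, False], repeat=2):
        if h is None or hh == h:
            w = joint(s, hh)
            den += w
            num += w if s else 0.0
    return num / den

decision = posterior_s() >= THRESHOLD   # decision from the evidence alone
p_e_total = sum(joint(s, h) for s, h in product([True, False], repeat=2))

# SDP: probability, over the unseen states of H, that observing H would
# lead to the same decision we have already made.
sdp = sum((joint(True, h) + joint(False, h)) / p_e_total
          for h in (True, False)
          if (posterior_s(h) >= THRESHOLD) == decision)
assert decision and 0.7 < sdp < 0.8
```

With these numbers, one state of H would flip the decision, so the confidence in the decision is the posterior weight of the other state; exact SDP computation in general networks is the subject of [5].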
## 4 XBN in Action
The point of XBN is to explain the AI task at hand – that is, the question
the decision-maker seeks to answer – rather than the technique in principle.
Therefore, we need to be able to freely ask ‘why’ or ‘what’ and
from this select a method that would best address the AI task. In Figure 7 we
present a taxonomy of XBN. The purpose of this taxonomy is to categorise XBN
methods into four phases of BNs: The first phase involves the construction of
the BN model. Explanation in the ‘model’ phase is critical when the model is
based on expert knowledge. The second phase is reasoning, the third phase
evidence, and the fourth decision. Explanation of the model and sensitivity
analysis are illustrated in grey as they are out of scope for this paper.
Although we define the taxonomy along these phases, we do acknowledge that not
all phases are necessarily utilised by the decision-maker. For example, when
using BNs to facilitate participatory modelling [8], the main emphasis is on
explaining the model. Or, when using BNs as a classifier, the emphasis is on
explaining the decisions. In this section, we present typical questions of
interest to the decision-maker in each category of the XBN taxonomy.
Figure 7: A schematic view of XBN
### 4.1 Reasoning
Reasoning in the XBN taxonomy is concerned with the justification of a
conclusion. Returning to our Asia example, the end-user might ask the
following question,
1. “Given the patient recently visited Asia, how likely is an abnormal chest X-Ray?”
Here, we are concerned with a single outcome: the X-Ray result. On the other
hand, the doctor may have knowledge about symptoms presented by the patient
and ask,
1. “What is the probability of a patient being a smoker, given that he presented shortness of breath?”
We can extend this to a forensic context. Suppose a crime scene is
investigated where a severely burned body is found. The forensic analyst can
then ask,
1. “The burn victim is found with a protruded tongue, was the victim exposed to fire before death or after?”
Consider now a financial service context where a young prospective home owner
is declined a loan. The service provider can then ask,
1. “Did the prospective owner not qualify for the home loan because of his age?”
From these examples, we see that explanation of reasoning is used where
questions are asked in the context of single variable outcomes for diagnosis.
### 4.2 Evidence
When we are interested in the subset of variables that describes specific
scenarios, we use explanation of evidence methods. For example, in our Asia
example the doctor may ask,
1. “Which diseases are most probable to the symptoms presented by the patient?”
or
1. “Which diseases are most relevant to the symptoms presented by the patient?”
In a forensic context, the forensic analyst investigating a crime scene may
ask the following question,
1. “What are the most relevant causes of death, given the victim is found with a severely burned body and protruded tongue?”
Similarly this can be applied to fraud detection. Suppose the analyst
investigates the credit card transactions of a consumer. The analyst can then
ask,
1. “What are the most probable transaction features that contributed to the flagging of this consumer?”
Explanation of evidence can also be used to provide explanations for financial
service circumstances. For example, if a prospective home owner is turned down
for a loan, he may ask the service provider which features in his risk profile
are more relevant (contributed most) to being turned down.
### 4.3 Decisions
Explanation of decisions typically asks the following questions “Do we have
enough evidence to make a decision?”, and if not, “what additional evidence is
required to make a decision?”. For example, in our Asia example we can ask,
1. “Do we have enough evidence on the symptoms presented to make a decision on the disease?”
or
1. “Since we are not yet able to determine the disease, what additional information – test, underlying symptoms, comorbidities – is required to make a decision?”
Applied to forensic investigations, this can be used to answer questions
relating to crime scene investigations. The analyst may ask questions
regarding the actual evidence collected from the crime scene, i.e., if enough
evidence is collected to rule a crime as a homicide or what additional
evidence is required to rule the crime as a homicide. Should they investigate
further or is the evidence that is already collected enough to make an
informed decision?
## 5 Conclusion
The development of AI systems has seen incredible advances in recent years.
We are now exposed to these systems on a daily basis, such as product
recommendation systems used by online retailers. However, these systems are
also being implemented by medical practitioners, forensic analysts and
financial services – application areas where decisions directly influence the
lives of humans. It is because of these high-risk application areas that
progressively more interest is given to the explainability of these systems.
This paper addresses the problem of explainability in BNs. We first explored
the state of explainable AI and in particular BNs, which serves as a
foundation for our XBN framework. We then presented a taxonomy to categorise
XBN methods in order to emphasise the benefits of each method given a specific
usage of the BN model. This XBN taxonomy will serve as a guideline that
enables end-users to understand how and why predictions were made, and
therefore to better communicate how outcomes were obtained from these
predictions.
The XBN taxonomy consists of explanation of reasoning, evidence and decisions.
Explanation of the model is reserved for future work, since the taxonomy
described in this paper is focused on how and why predictions were made and
not on the model-construction phase. Other future research endeavours include
the addition of more dimensions and methods to the XBN taxonomy – this
involves more statistical-based methods and the incorporation of causability
(which also addresses the quality of explanations) – as well as applying this
taxonomy to real-world applications.
## References
* [1] Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., Herrera, F.: Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58, 82–115 (2020). https://doi.org/10.1016/j.inffus.2019.12.012
* [2] Brito-Sarracino, T., dos Santos, M.R., Antunes, E.F., de Andrade Santos, I.B., Kasmanas, J.C., de Leon Ferreira, A.C.P., et al.: Explainable Machine Learning for Breast Cancer Diagnosis. In: 2019 8th Brazilian Conference on Intelligent Systems (BRACIS). pp. 681–686. IEEE (2019)
* [3] Cath, C.: Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376(2133) (2018). https://doi.org/10.1098/rsta.2018.0080
* [4] Chan, H., Darwiche, A.: On the Robustness of Most Probable Explanations. Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, UAI 2006 (06 2012)
* [5] Choi, A., Xue, Y., Darwiche, A.: Same-decision probability: A confidence measure for threshold-based decisions. International Journal of Approximate Reasoning 53(9), 1415–1428 (2012)
* [6] Das, A., Rad, P.: Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. arXiv preprint arXiv:2006.11371 (2020)
* [7] De Waal, A., Steyn, C.: Uncertainty measurements in neural network predictions for classification tasks. In: 2020 IEEE 23rd International Conference on Information Fusion (FUSION). pp. 1–7. IEEE (2020)
* [8] Düspohl, M., Frank, S., Döll, P.: A review of Bayesian networks as a participatory modeling approach in support of sustainable environmental management. Journal of Sustainable Development 5(12) (2012). https://doi.org/10.5539/jsd.v5n12p1
* [9] Gallego, M.J.F.: Bayesian networks inference: Advanced algorithms for triangulation and partial abduction (2005)
* [10] Goebel, R., Chander, A., Holzinger, K., Lecue, F., Akata, Z., Stumpf, S., Kieseberg, P., Holzinger, A.: Explainable AI: the new 42? In: International cross-domain conference for machine learning and knowledge extraction. pp. 295–303. Springer (2018)
* [11] Greene, D., Hoffmann, A.L., Stark, L.: Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. Proceedings of the 52nd Hawaii International Conference on System Sciences pp. 2122–2131 (2019). https://doi.org/10.24251/hicss.2019.258
* [12] Gunning, D., Aha, D.W.: DARPA’s explainable artificial intelligence program. AI Magazine 40(2), 44–58 (2019)
* [13] Helldin, T., Riveiro, M.: Explanation Methods for Bayesian Networks: review and application to a maritime scenario. In: Proc. of The 3rd Annual Skövde Workshop on Information Fusion Topics, SWIFT. pp. 11–16 (2009)
* [14] Holzinger, A., Malle, B., Kieseberg, P., Roth, P.M., Müller, H., Reihs, R., Zatloukal, K.: Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology. arXiv preprint arXiv:1712.06657 pp. 1–34 (2017), http://arxiv.org/abs/1712.06657
* [15] Keppens, J.: Explaining Bayesian Belief Revision for Legal Applications. In: JURIX. pp. 63–72 (2016)
* [16] Keppens, J.: Explainable Bayesian network query results via natural language generation systems. In: Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law. pp. 42–51 (2019)
* [17] Khedkar, S., Subramanian, V., Shinde, G., Gandhi, P.: Explainable AI in Healthcare. In: Healthcare (April 8, 2019). 2nd International Conference on Advances in Science & Technology (ICAST) (2019)
* [18] Korb, K.B., Nicholson, A.E.: Bayesian Artificial Intelligence. CRC press (2010)
* [19] Lacave, C., Díez, F.J.: A review of explanation methods for Bayesian networks. Knowledge Engineering Review 17(2), 107–127 (2002). https://doi.org/10.1017/S026988890200019X
* [20] Lauritzen, S.L., Spiegelhalter, D.J.: Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society: Series B (Methodological) 50(2), 157–194 (1988)
* [21] Lawless, W.F., Mittu, R., Sofge, D., Hiatt, L.: Artificial Intelligence, Autonomy, and Human-Machine Teams: Interdependence, Context, and Explainable AI. AI Magazine 40(3), 5–13 (2019)
* [22] Lecue, F.: On the role of knowledge graphs in explainable AI. Semantic Web 11(1), 41–51 (2020)
* [23] Leslie, D.: Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector (Jun 2019). https://doi.org/10.5281/zenodo.3240529
* [24] Lipton, Z.C.: The mythos of model interpretability. Queue 16(3), 31–57 (2018)
* [25] Martens, D., Provost, F.: Explaining data-driven document classifications. MIS Quarterly 38(1), 73–100 (2014)
* [26] Miller, T., Weber, R., Magazzeni, D.: Proceedings of the IJCAI 2019 Workshop on Explainable AI (2019)
* [27] Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.R.: Layer-wise relevance propagation: an overview. In: Explainable AI: interpreting, explaining and visualizing deep learning, pp. 193–209. Springer (2019)
* [28] Samek, W., Müller, K.R.: Towards explainable artificial intelligence. In: Explainable AI: interpreting, explaining and visualizing deep learning, pp. 5–22. Springer (2019)
* [29] Timmer, S.T., Meyer, J.J.C., Prakken, H., Renooij, S., Verheij, B.: A two-phase method for extracting explanatory arguments from Bayesian networks. International Journal of Approximate Reasoning 80, 475–494 (2017)
* [30] Van Der Gaag, L.C., Coupé, V.M.: Sensitivity analysis for threshold decision making with bayesian belief networks. In: Congress of the Italian Association for Artificial Intelligence. pp. 37–48. Springer (1999)
* [31] Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., Zhu, J.: Explainable AI: A Brief Survey on History, Research Areas, Approaches and Challenges. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 11839 LNAI, 563–574 (2019). https://doi.org/10.1007/978-3-030-32236-6_51
* [32] Yuan, C.: Some properties of most relevant explanation. In: ExaCt. pp. 118–126 (2009)
* [33] Yuan, C., Lim, H., Lu, T.C.: Most relevant explanation in bayesian networks. Journal of Artificial Intelligence Research 42, 309–352 (2011). https://doi.org/10.1613/jair.3301
* [34] Yuan, C., Liu, X., Lu, T.C., Lim, H.: Most relevant explanation: Properties, algorithms, and evaluations. Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009 pp. 631–638 (2009)
# Integral Representation of Hydraulic Permeability
Chuan Bi, M. Yvonne Ou, Shangyou Zhang
Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, USA
###### Abstract
In this paper, we show that the permeability of a porous material [40] and
that of a bubbly fluid [29] are limiting cases of the complexified version of
the two-fluid models posed in [29]. We assume the viscosity of the inclusion
fluid is $z\mu_{1}$ and the viscosity of the hosting fluid is
$\mu_{1}\in\mathbb{R}^{+}$, $z\in\mathbb{C}$. The proof is carried out by the
construction of solutions for large $|z|$ and small $|z|$ with an iteration
process similar to the one used in [16, 21] and the analytic continuation.
Moreover, we also show that for a fixed microstructure, the permeabilities of
these three cases share the same integral representation formula (IRF) (99)
with different values of contrast parameter $s:=1/(z-1)$, as long as $s$ is
outside the interval
$[-\frac{2E_{2}^{2}}{1+2E_{2}^{2}},-\frac{1}{1+2E_{1}^{2}}]$, where the
positive constants $E_{1}$ and $E_{2}$ are the extension constants that depend
only on the geometry of the periodic pore space of the material.
Version of:
Keywords: hydraulic permeability, Stokes equations, Composite materials,
Integral representation formula, Stieltjes class
Classification: 35Q35, 35Q70
## 1 Introduction
Darcy’s law, which was first proposed by H. Darcy in 1856 [19] based on
experimental observation of water flowing through beds of sand, describes the
relationship between the steady-state discharge rate of flow through a
porous medium, the viscosity of the fluid, and the pressure drop
over a distance. Later, theoretical/mathematical derivations of Darcy’s law
were presented in many works, e.g. M. Poreh et al. [36], S.P. Neuman [32], E.
Sanchez-Palencia [38], J.L. Lions [28], J.B. Keller [25] and J-L Auriault et al.
[5], just to name a few.
In the setting of a periodic pore microstructure, as the period goes to zero,
the convergence to the Darcy’s law of the Stokes system with no-slip boundary
condition posed on the boundary of the pore space was proved by L. Tartar
using the energy method [40]. G. Allaire implemented the two-scale convergence
method introduced by G. Nguetseng[33] to derive the Darcy’s law and show the
convergence [3, 4]. Prior to the proof of Darcy’s law in the ’80s, H.
Brinkman [14] studied the viscous force exerted by a flowing fluid on a dense
swarm of particles by adding a diffusion term to Darcy’s law so as to take
into account the transitional flow between boundaries. H. Brinkman’s method
was further studied in [39, 37, 30]. In the case of a porous material where
the solid region is much smaller than the fluid part, T. Levy [26] and E.
Sanchez-Palencia [38] proposed the same form of Darcy’s law but with a
different representation of the permeability tensor $K$. Later on, G.
Allaire[2] showed the continuity of the transition between the two forms of
Darcy’s laws by considering various ratios between the size of the solid
inclusion and the size of the separation. Moreover, instead of considering the
porous materials as a periodic structured material, A. Beliaev [9] considered
the porous materials as a random and stochastically homogeneous material and
deduced the same Darcy’s law. G. Allaire [1] generalized the homogenization to
handle the more realistic micro-geometries of the porous medium where both the
solid part and the fluid part are connected. Furthermore, in terms of the the
fluid-solid interface conditions, a slip boundary condition is considered by
G. Allaire in [1, 17]. In the case of the fluid flow through a porous medium
subject to a time-harmonic pressure gradient, the permeability depends on the
frequency and is referred to as the dynamic permeability. The theory of
dynamic permeability is established [8, 22, 5, 12] and further developed by
M.-Y. Ou [35].
The goal of this paper is to study how the permeability tensor derived from
the homogenization approach for porous materials [40, 41] depends on the
microstructure of the pore space. Details of this will be presented in Section
1.1.
The main tool we use will be the integral representation formula (IRF) for
composite materials. Composite materials are made from two or more
constituent materials with different physical or chemical properties. The
effective properties of composites, such as elasticity, conductivity and
permeability are of great interest in different application fields.
Homogenization theory for composite materials has been extensively studied in
[10, 31, 34]. Mathematically, for a two-component composite material,
the microstructural information is carried into the analytical formulation of
the effective properties of the composite. Bergman pioneered the study of
analyticity of the effective dielectric constant [11], and in terms of
integral representation of effective material properties, a rigorous basis of
integral representation of the effective conductivity is established by K.
Golden and G. Papanicolaou [21], the effective elastic constants by Y. Kantor
and D. Bergman [23], the effective diffusivity in convection-enhanced
diffusion was derived by M. Avellaneda and A. Majda [6, 7]. Further
enlargements of the domain of analyticity of the IRF of the elasticity tensor
to the case where one phase is a void or a hard inclusion are studied by Bruno and
Leo [15, 16].
Unlike the problem setup for calculating effective material properties such as
effective conductivity, elasticity, and diffusivity, where the physical
property of interest is well defined both in the micro-scale and the
macro(homogenized)-scale, the permeability of porous material is by definition
an effective property and hence it makes sense only in the macro-scale. To
overcome this difficulty, we consider a porous material as the limit case of a
two-fluid mixture.
Specifically, we will start with the two-fluid mixture problem studied in
[29], where the effective property is called the _self-permeability_. We will
derive the IRF for the self-permeability and show that the permeability for a
porous material is equal to the limit of the self-permeability when the
viscosity of one phase becomes infinite. Similar to the hard/soft inclusion
case studied in [15, 16], we will extend the domain of analyticity of the IRF
to $\infty$ and to $0$ by an iterative process. As a result, the IRF derived
here is valid for porous materials with a solid skeleton as well as for fluid-
bubble mixtures. Hence it provides a theoretical connection between the
permeability defined in [40] and the self-permeability for the bubbly fluid
studied in [29] and any mixture in between these two limiting cases.
The paper is organized as follows. The permeability of a porous material is
defined in Section 1.1. Section 2 starts with the definition of the _self-
permeability_ $K$ of a two-fluid mixture and the corresponding cell problem,
followed by an analysis of the cell problem and the construction of the
solution in the vicinity of the two limiting cases of $z=\infty$ and $z=0$. In
Section 3, the IRF of $K$ is obtained by applying the theory of matrix-valued
Stieltjes functions. In this section, the spectral representation of $K$ is
also derived. The relationships between the moments of the measure in the IRF
and the geometry of the pore space are derived by comparing these two
representations. Section 4 presents the numerical solutions of the cell
problem of a special pore structure, which validate the theoretical results
given in Section 3.
Einstein summation convention is applied unless stated otherwise.
### 1.1 Definition of Permeability from Homogenization
Following the convention of homogenization, the space coordinates for the cell
problem in the open unit cell $Q=(0,1)^{n}$ for $n=2,3$, are denoted by
$\mathbf{y}=(y_{1},y_{2},y_{3})$. Let $\Omega$ be a smooth bounded open set
and $Q$ an open unit cube made of two open sets $Q_{1}$, $Q_{2}$ and the
interface $\Gamma=\mbox{cl}(Q_{1})\cap\mbox{cl}(Q_{2})$ with cl($A$) being the
closure of a set $A$. Moreover, $\widetilde{Q_{i}}$ denotes the $Q$-periodic
extension of $Q_{i}$, $i=1,2$. Following [1], we assume that (1) $Q_{1}$ and
$Q_{2}$ have strictly positive measures in cl($Q$). (2) The set
$\widetilde{Q_{i}}$ is open with $C^{1}$ boundary and is locally located on
one side of its boundary, $i=1,2$, and $\widetilde{Q_{1}}$ is connected. (3)
$Q_{1}$ is connected with a Lipschitz boundary. In addition, we consider the
case of inclusion, i.e. $Q_{2}\cap\partial Q=\emptyset$.
Consider $\epsilon>0$ much smaller than the size of $\Omega$ and $\epsilon
Q$-periodically extend $\epsilon Q_{1}$ in the entire space.
$\Omega_{\epsilon}$ denotes the intersection of $\Omega$ and this $\epsilon
Q$-periodically extended structure. In [40], the permeability is derived from
the Stokes equation in $\Omega_{\epsilon}$, which reads: find
$\mathbf{u}^{\epsilon}\in H_{0}^{1}(\Omega_{\epsilon})^{n}$ and
$p^{\epsilon}\in L^{2}(\Omega_{\epsilon})/\mathbb{R}$ such that
$\left\\{\begin{split}-\mu\triangle\mathbf{u}^{\epsilon}+\nabla
p^{\epsilon}&=\mathbf{f}\quad\text{ in }\Omega_{\epsilon}.\\\
\text{div}\mathbf{u}^{\epsilon}&=0\quad\text{ in }\Omega_{\epsilon}\\\
\end{split}\right.$ (1)
where $\mathbf{f}\in L^{2}(\Omega)$ is independent of $\epsilon$ and the
viscosity $\mu$ is a constant ($\mu$ is set to 1 in [40, 41]). See Figure 1
for an example of the unit cube. Note that the superscript $\epsilon$ is used
to signify that the solutions $\mathbf{u}^{\epsilon}$ and $p^{\epsilon}$
depend on $\epsilon$.
Figure 1: A sample illustration of a periodic cell.
To be able to prove the convergence of $(\mathbf{u}^{\epsilon},p^{\epsilon})$
as $\epsilon\rightarrow 0$, it is necessary to extend these solutions from
$\Omega_{\epsilon}$ to $\Omega$ so they are defined in the same spatial
domain. In [40, 41], $\mathbf{u}^{\epsilon}$ was extended by zero and
$p^{\epsilon}$ by a properly defined extension operator with their extensions
denoted by $\hat{\mathbf{u}^{\epsilon}}$ and $\hat{p^{\epsilon}}$,
respectively. As $\epsilon\rightarrow 0$,
$\frac{\hat{\mathbf{u}^{\epsilon}}}{\epsilon^{2}}\rightharpoonup\mathbf{U}\mbox{
weakly in
}L^{2}(\Omega)^{n},\,\text{div}\,\mathbf{U}=0,\mathbf{U}\cdot\mathbf{n}=0\mbox{
on }\Gamma,\mbox{ and }\hat{p^{\epsilon}}\rightarrow p\mbox{ in
}L^{2}(\Omega)/\mathbb{R}$
and the limit functions satisfy the following Darcy’s law [40]
$\mathbf{U}=\frac{\mbox{\boldmath$K$}^{(D)}}{\mu}(\mathbf{f}-\nabla p)$ (2)
where the permeability tensor $\mbox{\boldmath$K$}^{(D)}$ is defined as
$K_{ij}^{(D)}=\int_{Q_{1}}\mathbf{u}_{D}^{j}\cdot\mathbf{e}_{i}d\mathbf{y},\,i,j=1,\dots,n$ (3)
with $\mathbf{e}_{i}$ denoting the unit vector in the $i$-th direction and
$\mathbf{u}_{D}^{j}$ the unique solution of the following boundary value
problem
$\left\\{\begin{split}\mu\Delta_{\mathbf{y}}\mathbf{u}_{D}^{j}-\nabla_{\mathbf{y}}p^{j}=-\mathbf{e}_{j}\quad&\text{in
}Q_{1}\\\ \text{div}_{\mathbf{y}}\mathbf{u}_{D}^{j}=0\quad&\text{in }Q_{1}\\\
\mathbf{u}_{D}^{j}=\mathbf{0}\quad&\text{on }\Gamma\end{split}\right.$ (4)
in the space $\mathring{H}(Q_{1}):=\left\\{\mathbf{v}:\mathbf{v}\in
H^{1}(Q_{1})^{n}\biggr{|}\;\text{div}_{\mathbf{y}}\mathbf{v}=0,\mathbf{v}|_{\Gamma}=\mathbf{0},Q\text{-periodic}\right\\}$.
Note that the superscript $j$ of $\mathbf{u}_{D}$ and $p$ signifies the
solution corresponding to the force term $\mathbf{e}_{j}$.
Since $\mu$ is set to 1 in [40, 41], the permeability $K$ presented there is
related to $\mbox{\boldmath$K$}^{(D)}$ by
$\mbox{\boldmath$K$}^{(D)}={\mbox{\boldmath$K$}}/{\mu}$. For future analysis,
we will derive here the quadratic-form representation of the permeability. We
start by observing that, for an incompressible fluid,
$\triangle\mathbf{u}=\mbox{div}(\nabla\mathbf{u}+\nabla^{T}\mathbf{u})$
Therefore, (3) can be expressed as
$K_{ij}^{(D)}=\int_{Q_{1}}\mathbf{u}^{j}_{{D}}\cdot\left(\nabla_{\mathbf{y}}p^{i}-{\mu}\Delta_{\mathbf{y}}\mathbf{u}^{i}_{{D}}\right)d\mathbf{y}=\int_{Q_{1}}{\mu}\nabla_{\mathbf{y}}\mathbf{u}^{j}_{{D}}:\left(\nabla_{\mathbf{y}}\mathbf{u}^{i}_{{D}}+\nabla_{\mathbf{y}}^{T}\mathbf{u}^{i}_{{D}}\right)d\mathbf{y}$
(5)
after applying the divergence theorem, the periodicity of $\mathbf{u}$ and $p$,
and the no-slip condition on $\Gamma$. Here we have used the Frobenius inner
product of matrices $\mathbf{A}:\mathbf{B}=\sum_{i,j=1}^{n}A_{ij}B_{ij}$. In
terms of the usual symmetric and antisymmetric parts of the gradient
$\nabla\mathbf{u}$
$\displaystyle
e(\mathbf{u}):=\frac{1}{2}\left(\nabla\mathbf{u}+\nabla^{T}\mathbf{u}\right),\,\,\tilde{e}(\mathbf{u}):=\frac{1}{2}\left(\nabla\mathbf{u}-\nabla^{T}\mathbf{u}\right){,}$
(6)
the right-hand side of equation (5) becomes
$\int_{Q_{1}}2\mu(e(\mathbf{u}_{D}^{j})+\tilde{e}(\mathbf{u}_{D}^{j})):e(\mathbf{u}_{D}^{i})d\mathbf{y}=\int_{Q_{1}}2\mu
e(\mathbf{u}_{D}^{j}):e(\mathbf{u}_{D}^{i})d\mathbf{y}$ because the Frobenius
product of a symmetric matrix and an antisymmetric matrix must be 0. Therefore
we have the quadratic form of permeability tensor $\mbox{\boldmath$K$}^{(D)}$
$K_{ij}^{(D)}=\int_{Q_{1}}2\mu
e(\mathbf{u}_{D}^{j}):e(\mathbf{u}_{D}^{i})d\mathbf{y}.$ (7)
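The algebraic fact used above, that the Frobenius product of a symmetric and an antisymmetric matrix vanishes, is easy to verify numerically. A minimal sketch, in which the random matrix `G` is only a stand-in for a sampled velocity gradient $\nabla\mathbf{u}$:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3))       # stand-in for a sampled velocity gradient
E = 0.5 * (G + G.T)                   # symmetric part e(u), cf. (6)
A = 0.5 * (G - G.T)                   # antisymmetric part

frob = lambda X, Y: float(np.sum(X * Y))      # Frobenius inner product A:B

assert abs(frob(E, A)) < 1e-12                # sym : antisym = 0
assert abs(frob(G, E) - frob(E, E)) < 1e-12   # so grad(u) : e(u) = e(u) : e(u), as used in (7)
```

The second assertion is exactly the cancellation that turns the right-hand side of (5) into the quadratic form (7).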
## 2 Approximation of flow in porous medium by a two-phase Stokes flow
In this section, we consider the system for porous materials (4) as one of the
limiting cases of the two-fluid problem described below, which is the same as
the one studied in [29] except that the fluid viscosity here can be
complex-valued. It is easy to check that the homogenization process in [29]
remains valid after small modifications to accommodate the complex-valued
viscosity described below.
Let $\Omega$, $Q$ and $\epsilon$ be the same as in Section 1.1. $Q_{2}$ is
still the inclusion in the periodic cell. Consider the ${\epsilon Q}$-periodic
extension of $\epsilon Q_{1}$ (${\epsilon}Q_{2}$) and denote by
$\Omega_{1{\epsilon}}$ ($\Omega_{2\epsilon}$) its intersection with $\Omega$.
We note that $\Omega_{1{\epsilon}}$ (region of the hosting fluid) is the same
as $\Omega_{\epsilon}$ in the previous section. Suppose $\Omega_{1{\epsilon}}$
is occupied by a fluid with viscosity $\mu_{1}>0$ and $\Omega_{2\epsilon}$ by a
fluid with viscosity $z\mu_{1}$, $z\in{\mathbb{C}}$. The interface
$\tilde{\Gamma}=\partial\Omega_{1{\epsilon}}\cap\partial\Omega_{2{\epsilon}}$
is such that $\Omega_{1{\epsilon}}\cup{\tilde{\Gamma}\ \cup\
}{\Omega_{2{\epsilon}}}=\Omega$. For ease of notation, we define the
stress tensor $\boldsymbol{\tau}(\mathbf{u},p,\mu)$ of a fluid
with viscosity $\mu$, velocity field $\mathbf{u}$ and pressure field $p$ as
$\boldsymbol{\tau}(\mathbf{u},p,\mu)=2{\mu}e(\mathbf{u})-p\mathbf{I},\quad\mbox{$\mathbf{I}$
is the identity matrix.}$ (8)
Let $\chi_{i}$ be the characteristic function of $\Omega_{i\epsilon}$,
$i=1,2$, and consider the viscosity function
${\xi}^{{\epsilon}}(\mathbf{x};z)=(\chi_{2}(\mathbf{x})z\mu_{1}+\chi_{1}(\mathbf{x})\mu_{1}),\quad
z\in{\mathbb{C}}.$ (9)
The two-fluid problem is given by the following Stokes system
$\left\\{\begin{split}\text{div}\left(2{{\xi}}^{\epsilon}(\mathbf{x};z)e(\mathbf{u}^{\epsilon})\right)-\nabla
p^{\epsilon}&=-\mathbf{f}\quad\text{ in }\Omega\backslash\tilde{\Gamma}\\\
\text{div}\mathbf{u}^{\epsilon}&=0\quad\text{ in }\Omega\\\
\mathbf{u}^{\epsilon}&=\mathbf{0}\quad\text{ on }\partial\Omega\\\
\llbracket\mathbf{u}^{\epsilon}\rrbracket=0,\,\mathbf{u}^{\epsilon}\cdot\mathbf{n}&=0\quad\text{on
}\tilde{\Gamma}\\\
\llbracket\mbox{\boldmath$\pi$}\rrbracket\cdot\mathbf{n}=\left(\llbracket\mbox{\boldmath$\pi$}\cdot\mathbf{n}\rrbracket\cdot\mathbf{n}\right)\mathbf{n}&\equiv\llbracket\mbox{\boldmath$\pi$}\cdot\mathbf{n}\rrbracket-\mathbf{n}\times\mathbf{n}\times\llbracket\mbox{\boldmath$\pi$}\cdot\mathbf{n}\rrbracket\quad\text{on
}\tilde{\Gamma}\end{split}\right.$ (10)
where
$\mbox{\boldmath$\pi$}=\mbox{\boldmath$\tau$}(\mathbf{u}^{\epsilon},p^{\epsilon},{\xi}^{\epsilon})$,
$\mathbf{f}$ is a square-integrable momentum source independent of $\epsilon$,
$\llbracket\cdot\rrbracket$ denotes the jump across the interface $\tilde{\Gamma}$,
and $\mathbf{n}$ is the outward unit normal of $\partial\Omega_{2\epsilon}$.
The second jump condition in (10) means the traction can only jump in the
normal direction. Also note that the superscript $\epsilon$ is used to signify
that the solutions $\mathbf{u}^{\epsilon}$ and $p^{\epsilon}$ depend on
$\epsilon$.
It is shown in [29] that as $\epsilon\to 0$, $\mathbf{u}^{\epsilon}$ and the
properly normalized $p^{\epsilon}$, which is denoted by $\hat{p}^{\epsilon}$,
converge as follows
$\frac{{\mathbf{u}^{\epsilon}}}{\epsilon^{2}}\rightharpoonup\mathbf{u}^{0}\quad\text{weakly
in }L^{2}(\Omega)^{n},\;\hat{p^{\epsilon}}\to P\quad\text{strongly in
}L^{2}(\Omega)/\mathbb{R}$
where $\mathbf{u}^{0}$ and $P$ satisfy the homogenized system:
$\left\\{\begin{split}\mathbf{u}^{0}&=-\mbox{\boldmath$K$}(\nabla
P-\mathbf{f})\quad\text{in }\Omega\\\ \text{div
}\mathbf{u}^{0}&=0\quad\text{in }\Omega\\\ \end{split}\right.$ (11)
where the components of $K$, which is referred to as the _self-permeability_
in [29], are defined as
$K_{ij}(z)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\int_{Q}{\mathbf{u}^{j}}\cdot\mathbf{e}_{i}d\mathbf{y},\,{i,j=1,\dots,n}$
(12)
with $\mathbf{u}^{{i}}$ being the unique solution to the cell problem posed in
the function space $H(Q)$, which is defined in (14),
$\left\\{\begin{split}\text{div}_{\mathbf{y}}\left(2{\mbox{\boldmath$\mu$}}(\mathbf{y};z)e(\mathbf{u}^{i})-p^{i}\mathbf{I}\right)+\mathbf{e}_{i}&=\mathbf{0}\quad\text{in
}Q_{1}\cup Q_{2}\\\
\llbracket\mbox{\boldmath$\pi$}\rrbracket\cdot\mathbf{n}&=\left(\llbracket\mbox{\boldmath$\pi$}\cdot\mathbf{n}\rrbracket\cdot\mathbf{n}\right)\mathbf{n}\text{
on }\Gamma\end{split}\right.$ (13)
where
$\mbox{\boldmath$\mu$}(\mathbf{y};z)=\mu_{1}\chi_{1}(\mathbf{y})+z\mu_{1}\chi_{2}(\mathbf{y})$
with $\chi_{m}$ being the characteristic functions of $Q_{m}$, $m=1,2,$ and
$\mbox{\boldmath$\pi$}=\mbox{\boldmath$\tau$}(\mathbf{u}^{i},p^{i},\mbox{\boldmath$\mu$})$,
cf. (8). Note that the superscript $i$ is used to signify that
$\mathbf{u}^{i}$ and $p^{i}$ are the solutions to the cell problem (13) with
the force term $-\mathbf{e}_{i}$, $i=1,\dots,n$.
### 2.1 Function Spaces
Let $\mathcal{R}(Q_{2})$ denote the space of rigid body displacements in
$Q_{2}$, i.e. $\mathbf{u}=\mathbf{A}\mathbf{y}+\mathbf{b}$ with a constant
skew-symmetric matrix $\mathbf{A}$ and a constant vector $\mathbf{b}$ in
$Q_{2}$. We start with the space of admissible functions for the velocity
$\displaystyle H(Q)$ $\displaystyle:=\left\\{\mathbf{v}:\mathbf{v}\in
H^{1}(Q_{1}\cup
Q_{2})^{n}\biggr{|}\;\text{div}_{\mathbf{y}}\mathbf{v}=0,\;\mathbf{v}\cdot\mathbf{n}=0\text{
in }H^{-\frac{1}{2}}(\Gamma),\right.$
$\displaystyle\qquad\left.{}\llbracket\mathbf{v}\rrbracket_{\Gamma}=\mathbf{0},\;(\mathbf{v},\mathbf{\eta})_{H^{1}(Q_{2})}=0,\forall\mathbf{\eta}\in\mathcal{R}(Q_{2}),\,\mathbf{v}\text{
is }Q\text{-periodic}\right\\}$ (14)
where $\mathbf{n}$ is the outward unit normal of $\partial Q_{2}$. $H(Q)$ is
endowed with the inner product
$(\mathbf{u},\mathbf{v})_{Q}=\int_{Q}2\mu_{1}e(\mathbf{u}):\overline{e(\mathbf{v})}d\mathbf{y}.$
(15)
The induced norm is denoted by
$\left\lVert\mathbf{u}\right\rVert_{Q}^{2}:=(\mathbf{u},\mathbf{u})_{Q}$. Note
that $H(Q)\cap\mathcal{R}(Q)=\\{\mathbf{0}\\}$: the $Q$-periodicity forces
$\mathbf{A}=0$, and then $\mathbf{u}\cdot\mathbf{n}=0$ on $\Gamma$ forces
$\mathbf{b}=\mathbf{0}$. We observe that if $\mathbf{u}\in H(Q)$ then
$\mathbf{u}\in H^{1}(Q)^{n}$ by the following argument. Obviously,
$\mathbf{u}\in L^{2}(Q)^{n}$. To prove $\frac{\partial u_{i}}{\partial
y_{j}}\in L^{2}(Q)$ for $i,j=1,\dots,n$, let $\phi$ be any $C^{\infty}$ test
function compactly supported in $Q$ and let $h$ denote the $i$-th component
$u_{i}$ of $\mathbf{u}$, for an arbitrary $i$. Then
$\int_{Q}h\nabla\phi
d\mathbf{y}=-\left(\int_{Q_{1}\cap\text{Supp}(\phi)}\phi\nabla
hd\mathbf{y}+\int_{Q_{2}\cap\text{Supp}(\phi)}\phi\nabla hd\mathbf{y}\right)$
here we have used $\llbracket h\rrbracket=0$ across $\Gamma$. Now we can define a
candidate function $\mathbf{g}$ such that
$\begin{split}\mathbf{g}|_{Q_{i}}&:=\nabla h|_{Q_{i}},\,i=1,2\end{split}$ (16)
then clearly $\mathbf{g}\in L^{2}(Q)^{n}$ and
$\left<h,\nabla\phi\right>=-\left<\mathbf{g},\phi\right>$, where
$\left<\cdot,\cdot\right>$ denotes the usual $L^{2}$ inner product. Therefore
${h}\in H^{1}(Q)$ and hence $u_{i}\in H^{1}(Q),\ i=1,\dots,n$.
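The jump argument above has a familiar one-dimensional analogue: a continuous, piecewise-smooth function whose weak derivative is just its piecewise classical derivative. A minimal numerical sketch of the integration-by-parts identity (the specific `h` and test function `phi` are illustrative choices, not from the text):

```python
import numpy as np

# 1D analogue: h is continuous with a kink at x = 1/2 (zero jump), so its
# distributional derivative equals the piecewise classical derivative g.
x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
h = np.minimum(x, 1.0 - x)           # continuous, kink at x = 1/2
g = np.where(x < 0.5, 1.0, -1.0)     # piecewise derivative of h
phi = (x * (1.0 - x)) ** 2           # smooth test function vanishing at 0 and 1
dphi = np.gradient(phi, x)

# <h, phi'> should equal -<g, phi> because the jump of h across the kink is 0
lhs = np.sum(h * dphi) * dx
rhs = -np.sum(g * phi) * dx
assert abs(lhs - rhs) < 1e-5
```

The same bookkeeping, component by component, is what shows $\mathbf{g}$ is the weak gradient of $h$ in the argument above.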
Next, we show that $\|\cdot\|_{Q}$ is equivalent to the usual $H^{1}$ norm,
i.e., there exist positive constants $B_{1}$ and $B_{2}$ such that
$B_{1}\left\lVert\mathbf{u}\right\rVert_{H^{1}(Q)}\leq\left\lVert\mathbf{u}\right\rVert_{Q}\leq
B_{2}\left\lVert\mathbf{u}\right\rVert_{H^{1}(Q)}$ (17)
Because $H(Q)\cap\mathcal{R}(Q)=\\{\mathbf{0}\\}$, by Theorem 2.5 in [34], there
exists a Korn constant $C_{1}$ such that
$C_{1}\left\lVert\mathbf{u}\right\rVert_{H^{1}(Q)}\leq\frac{1}{\sqrt{2\mu_{1}}}\left\lVert\mathbf{u}\right\rVert_{Q}$
(18)
where $C_{1}$ depends only on $Q$. Therefore, we can take
$B_{1}=\sqrt{2\mu_{1}}C_{1}$. To emphasize the dependence on $Q$, we will
write it as $B_{1}(Q)$. On the other hand, according to the orthogonal
decomposition that $\nabla\mathbf{u}=e(\mathbf{u})+\tilde{e}(\mathbf{u})$, see
(6),
$\left\lVert\mathbf{u}\right\rVert_{H^{1}(Q)}^{2}\geq\left\lVert\nabla\mathbf{u}\right\rVert_{L^{2}(Q)}^{2}=\left\lVert
e(\mathbf{u})\right\rVert_{L^{2}(Q)}^{2}+\left\lVert\tilde{e}(\mathbf{u})\right\rVert_{L^{2}(Q)}^{2}\geq\left\lVert
e(\mathbf{u})\right\rVert_{L^{2}(Q)}^{2}=\frac{1}{2\mu_{1}}\|\mathbf{u}\|_{Q}^{2}$
therefore $B_{2}=\sqrt{2\mu_{1}}$. The reason for introducing the $H(Q)$-norm
is that the self-permeability in (12) can be represented in terms of the inner
product. More specifically, by a calculation similar to (5) that uses the fact
that $\overline{\mathbf{e}_{i}}=\mathbf{e}_{i}$, the interface condition
$\mathbf{u}\cdot\mathbf{n}=0$ and the jump conditions in (13), the
self-permeability (12) can be expressed in the form
$\begin{split}K_{ij}(z)=\int_{Q}2{\mu}(\mathbf{y};\overline{z})\overline{e(\mathbf{u}^{{i}}(z))}:{e(\mathbf{u}^{{j}}(z))}d\mathbf{y}\end{split}$
(19)
and its conjugate transpose
$\mbox{\boldmath$K$}^{*}:=\overline{\mbox{\boldmath$K$}^{T}}$ is
${(K^{*})_{ij}}(z)=\int_{Q}2{\mu}(\mathbf{y};{z}){e(\mathbf{u}^{{j}}(z))}:\overline{e(\mathbf{u}^{{i}}(z))}d\mathbf{y}$
(20)
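The relation between (19) and (20) can be checked on a discrete analogue, replacing the integrals over $Q$ by finite sums over sample points. A hedged sketch, in which every array is a random stand-in (the samples `E[i]` play the role of the strain fields $e(\mathbf{u}^{i}(z))$; nothing here solves the cell problem):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 50                          # n force directions, m sample points standing in for Q
z, mu1 = 2.0 + 1.5j, 1.0
chi2 = rng.integers(0, 2, m)          # indicator of the inclusion Q2 at each sample point
mu_z    = mu1 * (1 - chi2) + z * mu1 * chi2            # mu(y; z)
mu_zbar = mu1 * (1 - chi2) + np.conj(z) * mu1 * chi2   # mu(y; conj(z))

# random complex samples standing in for the strain fields e(u^i)(z)
E = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

# discrete analogues of (19) and (20)
K  = np.array([[np.sum(2 * mu_zbar * np.conj(E[i]) * E[j]) for j in range(n)]
               for i in range(n)])
Ks = np.array([[np.sum(2 * mu_z * E[j] * np.conj(E[i])) for j in range(n)]
               for i in range(n)])

assert np.allclose(Ks, K.conj().T)    # (20) is indeed the conjugate transpose of (19)
```

The check only exercises the algebra $\overline{\mu(\mathbf{y};\overline{z})}=\mu(\mathbf{y};z)$; the analytic content of (19) lies in the cell problem itself.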
### 2.2 Weak solution of the Cell Problem (13)
The weak formulation of the cell problem (13) is
$\int_{Q_{1}\cup
Q_{2}}2\mu(\mathbf{y};z)e(\mathbf{u}^{k}):\overline{e(\mathbf{v})}d\mathbf{y}=\int_{Q_{1}\cup
Q_{2}}\mathbf{e}_{k}\cdot\bar{\mathbf{v}}d\mathbf{y},\qquad\forall\mathbf{v}\in
H(Q)$ (21)
From this, we see that the solutions satisfy the following symmetry
$\mathbf{u}^{k}(\mathbf{y};\overline{z})=\overline{\mathbf{u}^{k}(\mathbf{y};z)}$
Define the sesquilinear form on $H(Q)$
$a(\mathbf{u},\mathbf{v})=\int_{Q_{1}\cup
Q_{2}}2\mu(\mathbf{y};z)e(\mathbf{u}):\overline{e(\mathbf{v})}d\mathbf{y}$
(22)
It is clear that $a(\mathbf{u},\mathbf{v})$ is bounded on $H(Q)$. To check the
coercivity, take any $\mathbf{u}\in H(Q)$ with $\mathbf{u}\neq 0$ and define the parameter
$\lambda:=\frac{\int_{Q}2\mu_{1}\chi_{2}e(\mathbf{u}):\overline{e(\mathbf{u})}d\mathbf{y}}{\int_{Q}2\mu_{1}e(\mathbf{u}):\overline{e(\mathbf{u})}d\mathbf{y}}$
(23)
then $0\leq\lambda\leq 1$. We note that
$\displaystyle\frac{a(\mathbf{u},\mathbf{u})}{\int_{Q}2\mu_{1}e(\mathbf{u}):\overline{e(\mathbf{u})}d\mathbf{y}}=\lambda
z+(1-\lambda)\cdot 1$ (24)
and hence, as long as 0 is not on the line segment joining $z$ and 1, there
exists $\alpha(z):=\min_{0\leq\lambda\leq 1}|\lambda z+1-\lambda|>0$ such that
$\left|a(\mathbf{u},\mathbf{u})\right|\geq\alpha(z)\int_{Q}2\mu_{1}e(\mathbf{u}):\overline{e(\mathbf{u})}d\mathbf{y}=\alpha(z)\|\mathbf{u}\|_{Q}^{2}$
(25)
Therefore for $z\in{\mathbb{C}}\backslash\left\\{\Re{z}\leq 0\right\\}$, by
the Lax-Milgram Lemma [13, Chapter 2], there exists a unique weak solution
$\mathbf{u}^{k}\in H(Q)$ to the cell problem (13) and with the solution
$\mathbf{u}^{k}$, we can construct $p^{k}\in L^{2}(Q)/\mathbb{C}$.
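Geometrically, $\alpha(z)=\min_{0\leq\lambda\leq 1}|\lambda z+1-\lambda|$ is the distance from the origin to the line segment joining $1$ and $z$. A quick numerical sketch of its positivity (the sample values of $z$ are illustrative):

```python
import numpy as np

def alpha(z, num=200001):
    """alpha(z) = min over lambda in [0,1] of |lambda*z + (1-lambda)|,
    i.e. the distance from 0 to the line segment joining 1 and z."""
    lam = np.linspace(0.0, 1.0, num)
    return float(np.min(np.abs(lam * z + (1.0 - lam))))

# positive whenever 0 is off the segment [1, z], e.g. for Re z > 0
for z in (2.0 + 1.0j, 0.5 - 3.0j, 10.0):
    assert alpha(z) > 0

# degenerate case: z on the negative real axis puts 0 on the segment
assert alpha(-1.0) < 1e-12
```

The degenerate case shows why the analyticity region must exclude the negative real axis: there the coercivity constant collapses to zero.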
Since $\alpha(z)$ is a continuous function of $z$, the coercivity of the
sesquilinear form implies that $\mathbf{u}^{k}$ is analytic in $z$ and that
its $m$-th derivative, $m\geq 1$, satisfies the following recursive
equation
$\int_{Q_{1}\cup
Q_{2}}2\mu(\mathbf{y};z)e\left(\frac{d^{m}\mathbf{u}^{k}}{dz^{m}}\right):\overline{e(\mathbf{v})}d\mathbf{y}=-\int_{Q_{2}}2{m}\mu_{1}e\left(\frac{d^{m-1}\mathbf{u}^{k}}{dz^{m-1}}\right):e(\overline{\mathbf{v}})d\mathbf{y},\,\forall\mathbf{v}\in
H(Q)$ (26)
As a result, $\mbox{\boldmath$K$}(z)$ is also analytic for
$z\in{\mathbb{C}}\backslash\left\\{\Re{z}\leq 0\right\\}$. To relate the
two-fluid problem to $\mbox{\boldmath$K$}^{(D)}$, we adapt the method used in
[16] to study the behavior of $\mbox{\boldmath$K$}(z)$ near $z=\infty$ in the
following section.
### 2.3 Analyticity of the Solution for large $|z|$
Let $w:=\frac{1}{z}$ and consider a $Q$-periodic solution in series form
near $w=0$
$\mathbf{u}_{\infty}(\mathbf{y};\mathbf{e},w):=\sum_{k=0}^{\infty}\mathbf{u}_{k}(\mathbf{y};\mathbf{e})w^{k}\mbox{
and
}p_{\infty}(\mathbf{y};\mathbf{e},w):=\sum_{k=0}^{\infty}p_{k}(\mathbf{y};\mathbf{e})w^{k}$
(27)
where $\mathbf{e}$ is an arbitrary constant unit vector. To set up the
notation, we denote the restrictions of $\mathbf{u}_{k}$, $p_{k}$ in $Q_{2}$
(inclusion) and $Q_{1}$ as $\mathbf{u}^{in}_{k}$, $p^{in}_{k}$ and
$\mathbf{u}^{out}_{k}$, $p^{out}_{k}$ respectively and define
$\displaystyle\mathbf{u}_{\infty}^{in}(\mathbf{y};\mathbf{e},w):=\sum_{k=0}^{\infty}\mathbf{u}^{in}_{k}(\mathbf{y},\mathbf{e})w^{k},\qquad\mathbf{u}_{\infty}^{out}(\mathbf{y};\mathbf{e},w):=\sum_{k=0}^{\infty}\mathbf{u}^{out}_{k}(\mathbf{y},\mathbf{e})w^{k}$
(28)
By substituting (27) into (13) with the viscosity defined in (9), taking into
account the additional two interface conditions $\mathbf{u}\cdot\mathbf{n}=0$
and $\llbracket\mathbf{u}\rrbracket=0$, and then equating terms of the same
order in $w$, we arrive at the following equations in $Q_{1}$:
$\displaystyle
O(w^{0}):\qquad\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{0}^{out})-p_{0}^{out}\mathbf{I}\right)=-\mathbf{e}$
(29) $\displaystyle
O(w^{k}):\qquad\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{k}^{out})-p_{k}^{out}\mathbf{I}\right)=\mathbf{0}\mbox{
for }k\geq 1$ (30)
and in $Q_{2}$:
$\displaystyle
O(w^{-1}):\qquad\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{0}^{in})\right)=\mathbf{0}$
(31) $\displaystyle
O(w^{0}):\qquad\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{1}^{in})-p_{0}^{in}\mathbf{I}\right)=-\mathbf{e}$
(32) $\displaystyle
O(w^{k}):\qquad\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{k+1}^{in})-p_{k}^{in}\mathbf{I}\right)=\mathbf{0}\mbox{
for }k\geq 1$ (33)
and the following interface conditions on $\Gamma$
$\displaystyle O(w^{-1}):$ $\displaystyle\
2\mu_{1}(e(\mathbf{u}_{0}^{in})\cdot\mathbf{n})|_{\Gamma}=C(\mathbf{y})\mathbf{n}\text{
for some function }C(\mathbf{y})$ (34) $\displaystyle O(w^{k}),\,k\geq 0:$
$\displaystyle\
\left(\left(2\mu_{1}e(\mathbf{u}_{k}^{out})-p_{k}^{out}\mathbf{I}\right)-\left(2\mu_{1}e(\mathbf{u}_{k+1}^{in})-p_{k}^{in}\mathbf{I}\right)\right)\mathbf{n}$
(36)
$\displaystyle=\left\\{\left[\left(\left(2\mu_{1}e(\mathbf{u}_{k}^{out})-p_{k}^{out}\mathbf{I}\right)-\left(2\mu_{1}e(\mathbf{u}_{k}^{in})-p_{k}^{in}\mathbf{I}\right)\right)\mathbf{n}\right]\cdot\mathbf{n}\right\\}\mathbf{n},$
$\displaystyle\mathbf{u}^{in}_{k}\cdot\mathbf{n}=\mathbf{u}^{out}_{k}\cdot\mathbf{n}=0\mbox{
and }\mathbf{u}^{in}_{k}=\mathbf{u}^{out}_{k}$
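The order matching above is pure bookkeeping: in $Q_{2}$ the viscosity is $z\mu_{1}=\mu_{1}/w$, so each term $w^{k}$ of the velocity series contributes at order $w^{k-1}$ to the viscous stress while the pressure series stays at order $w^{k}$. A minimal sketch of this Laurent-series bookkeeping, with symbolic string labels standing in for the differential expressions:

```python
from collections import defaultdict

# a_k stands in for div_y(2*mu1*e(u_k^in)), p_k for div_y(p_k^in * I)
def series(prefix, powers):
    return {k: f"{prefix}{k}" for k in powers}

N = 4
a = series("a", range(N + 1))
p = series("p", range(N))

lhs = defaultdict(list)
# in Q2 the viscosity is z*mu1 = mu1/w, so each a_k w^k is shifted to order k-1
for k, name in a.items():
    lhs[k - 1].append(f"+{name}")
for k, name in p.items():
    lhs[k].append(f"-{name}")

assert lhs[-1] == ["+a0"]          # O(w^-1): div(2 mu1 e(u_0^in)) = 0, cf. (31)
assert lhs[0] == ["+a1", "-p0"]    # O(w^0):  a_1 - p_0 balances -e, cf. (32)
assert lhs[2] == ["+a3", "-p2"]    # O(w^k), k >= 1: a_{k+1} - p_k = 0, cf. (33)
```

The index shift `k - 1` is exactly why the velocity index runs one ahead of the pressure index in (32) and (33).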
We introduce the following spaces for $i=1,2$:
$\displaystyle H(Q_{1})=\left\\{\mathbf{v}:\mathbf{v}\in
H^{1}(Q_{1})^{n}\biggr{|}\;\text{div}_{\mathbf{y}}\mathbf{v}=0,\;\mathbf{v}\cdot\mathbf{n}=0\text{
on }\Gamma,\;\mathbf{v}\text{ is }Q\text{-periodic}\right\\}$
$\displaystyle H(Q_{2})=\left\\{\mathbf{v}:\mathbf{v}\in
H^{1}(Q_{2})^{n}\biggr{|}\;\text{div}_{\mathbf{y}}\mathbf{v}=0,\;\mathbf{v}\cdot\mathbf{n}=0\text{
on }\Gamma,\;(\mathbf{v},\mathcal{R}(Q_{2}))_{H^{1}(Q_{2})}=0,\;\mathbf{v}\text{
is }Q\text{-periodic}\right\\}$
$\displaystyle\mathring{H}(Q_{i})=\left\\{\mathbf{v}:\mathbf{v}\in
H^{1}(Q_{i})^{n}\biggr{|}\;\text{div}_{\mathbf{y}}\mathbf{v}=0,\;\mathbf{v}|_{\Gamma}=\mathbf{0},\;\mathbf{v}\text{
is }Q\text{-periodic}\right\\}\subset H(Q_{i})$
$\displaystyle L(Q_{i})/\mathbb{C}=\left\\{p:p\in
L^{2}(Q_{i}),\;\int_{Q_{i}}p(\mathbf{y})d\mathbf{y}=0,\;p\text{ is }Q\text{-periodic}\right\\}$
Note that $H(Q_{1})\cap\mathcal{R}(Q_{1})=\\{\mathbf{0}\\}$ because $\partial
Q\subset\partial Q_{1}$. For $H(Q_{2})$, the orthogonality condition
$(\mathbf{v},\mathcal{R}(Q_{2}))_{H^{1}(Q_{2})}=0$ ensures
$H(Q_{2})\cap\mathcal{R}(Q_{2})=\\{\mathbf{0}\\}$ [34]. Therefore,
$H(Q_{i})$ and $\mathring{H}(Q_{i})$ can be equipped with the inner product
$(\mathbf{u},\mathbf{v})_{Q_{i}}=\int_{Q_{i}}2\mu_{1}e(\mathbf{u}):\overline{e(\mathbf{v})}d\mathbf{y}$
and Korn’s inequalities are valid in $H(Q_{i})$, $i=1,2$.
###### Lemma 2.1.
Let $Q_{2}$ be a connected, open bounded set such that $\partial
Q_{2}\cap\partial Q=\emptyset$ and $\partial Q_{2}$ is in
$\mathcal{C}^{k,\sigma}$ , $k,\sigma\geq 0$, $k+\sigma\geq 2$. For any vector
field $\mathbf{u}^{in}\in H(Q_{2})$, there exists a unique weak solution
$\mathbf{u}^{out}(\mathbf{y};\mathbf{f}^{out})\in H(Q_{1})$ that satisfies the
following system
$\left\\{\begin{split}\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}^{out})-p^{out}\mathbf{I}\right)&=\mathbf{f}^{out}\quad\text{
in }Q_{1}\\\ \mathbf{u}^{out}&=\mathbf{u}^{in}\quad\text{ on
}\Gamma\end{split}\right.$ (37)
where in our context, $\mathbf{f}^{out}=\mathbf{0}\text{ or
}\mathbf{f}^{out}=-\mathbf{e}$, a constant unit vector. Moreover,
$\left\lVert\mathbf{u}^{out}\right\rVert_{Q_{1}}\leq\frac{1}{B_{1}(Q_{1})}\left\lVert\mathbf{f}^{out}\right\rVert_{L^{2}(Q_{1})}+2E_{1}\left\lVert\mathbf{u}^{in}\right\rVert_{Q_{2}}.$
(38)
where the positive constant $B_{1}(Q_{1})$ is the analogue of $B_{1}$ in (17)
with $Q$ replaced by $Q_{1}$, and $E_{1}\geq 1$ depends only on $Q_{1}$ and $Q_{2}$.
###### Proof.
To handle the inhomogeneous boundary condition, we proceed as follows. By [24,
Corollary 3.2], there exists a bounded, divergence-free extension
$T\left(\mathbf{u}^{in}\right)$ of $\mathbf{u}^{in}$ to a small neighborhood
$O$ of $Q_{2}$ that vanishes on $\partial O\subset Q_{1}$ and satisfies
$\left\lVert T\left(\mathbf{u}^{in}\right)\right\rVert_{Q}\leq
E_{1}\left\lVert\mathbf{u}^{in}\right\rVert_{Q_{2}}$ (39)
where $E_{1}\geq 1$ depends only on $Q_{1}$ and $Q_{2}$. Furthermore, since
$T\left(\mathbf{u}^{in}\right)$ vanishes on $\partial O$, it can be extended
by zero to the rest of $Q$ and then $Q$-periodically to $\mathbb{R}^{n}$ [18].
We denote the restriction
$T(\mathbf{u}^{in})|_{Q_{1}}$ as $\tilde{\mathbf{u}}^{out}\in H(Q_{1})$ and
$\mathring{\mathbf{u}}^{out}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\mathbf{u}^{out}-\tilde{\mathbf{u}}^{out}\in\mathring{H}(Q_{1})$
and (37) becomes
$\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathring{\mathbf{u}}^{out})-p^{out}\mathbf{I}\right)=\mathbf{f}^{out}-\mu_{1}\Delta\tilde{\mathbf{u}}^{out}\quad\text{
in }Q_{1}$ (40)
Consider the variational formulation: find
$\mathring{\mathbf{u}}^{out}\in\mathring{H}(Q_{1})$ such that, for all
$\Phi\in\mathring{H}(Q_{1})$,
$\int_{Q_{1}}2\mu_{1}e(\mathring{\mathbf{u}}^{out}):\overline{e(\Phi)}d\mathbf{y}=-\int_{Q_{1}}\mathbf{f}^{out}\cdot\overline{\Phi}d\mathbf{y}-\int_{Q_{1}}2\mu_{1}e(\tilde{\mathbf{u}}^{out}):\overline{e(\Phi)}d\mathbf{y}.$
(41)
The right hand side of (41) can be bounded as follows
$\displaystyle\left|\int_{Q_{1}}\mathbf{f}^{out}\cdot\overline{\Phi}d\mathbf{y}+\int_{Q_{1}}2\mu_{1}e(\tilde{\mathbf{u}}^{out}):\overline{e(\Phi)}d\mathbf{y}\right|\leq$
$\displaystyle\left(\frac{\left\lVert\mathbf{f}^{out}\right\rVert_{L^{2}(Q_{1})}}{B_{1}(Q_{{1}})}+\left\lVert\tilde{\mathbf{u}}^{out}\right\rVert_{Q_{1}}\right)\left\lVert\Phi\right\rVert_{Q_{1}}$
The sesquilinear form
$\int_{Q_{1}}2\mu_{1}e(\mathring{\mathbf{u}}^{out}):\overline{e(\Phi)}d\mathbf{y}$
is clearly bounded and coercive with constant 1. Hence, by the Lax-Milgram
Lemma, there exists a unique weak solution
$\mathring{\mathbf{u}}^{out}\in\mathring{H}(Q_{1})$ to (40) such that
$\left\lVert\mathring{\mathbf{u}}^{out}\right\rVert_{Q_{1}}\leq\frac{1}{B_{1}(Q_{{1}})}\left\lVert\mathbf{f}^{out}\right\rVert_{L^{2}(Q_{1})}+\left\lVert\tilde{\mathbf{u}}^{out}\right\rVert_{Q_{1}}.$
In terms of $\mathring{\mathbf{u}}^{out}$, $\mathbf{u}^{out}$ can be expressed
as $\mathbf{u}^{out}=\tilde{\mathbf{u}}^{out}+\mathring{\mathbf{u}}^{out}$ and
satisfies the estimate
$\left\lVert\mathbf{u}^{out}\right\rVert_{Q_{1}}\leq\left\lVert\mathring{\mathbf{u}}^{out}\right\rVert_{Q_{1}}+\left\lVert\tilde{\mathbf{u}}^{out}\right\rVert_{Q_{1}}\leq\frac{1}{B_{1}(Q_{{1}})}\left\lVert\mathbf{f}^{out}\right\rVert_{L^{2}(Q_{1})}+2E_{1}\left\lVert\mathbf{u}^{in}\right\rVert_{Q_{2}}$
(42)
To show the solution $\mathbf{u}^{out}$ is unique, suppose
$\mathbf{u}^{out}_{1}$ and $\mathbf{u}^{out}_{2}$ both solve (37) then the
difference
${\mathbf{w}}^{\text{diff}}=\mathbf{u}^{out}_{1}-\mathbf{u}^{out}_{2}\in\mathring{H}(Q_{1})$
must satisfy
$\int_{Q_{1}}2\mu_{1}e({\mathbf{w}}^{\text{diff}}):\overline{e(\Phi)}d\mathbf{y}=0,\qquad\forall\Phi\in\mathring{H}(Q_{1})$
Taking $\Phi={\mathbf{w}}^{\text{diff}}$ gives
$\|{\mathbf{w}}^{\text{diff}}\|_{Q_{1}}=0$ and hence
${\mathbf{w}}^{\text{diff}}=\mathbf{0}$ in $Q_{1}$. We note the two special cases:
$\displaystyle\left\lVert\mathbf{u}^{out}\right\rVert_{Q_{1}}\leq
2E_{1}\left\lVert\mathbf{u}^{in}\right\rVert_{Q_{2}}\mbox{ for
}\mathbf{f}^{out}=\mathbf{0}$ (43)
$\displaystyle\left\lVert\mathbf{u}^{out}\right\rVert_{Q_{1}}\leq\frac{1}{B_{1}(Q_{{1}})}\sqrt{|Q_{1}|}+2E_{1}\left\lVert\mathbf{u}^{in}\right\rVert_{Q_{2}}\mbox{
for }\mathbf{f}^{out}=-\mathbf{e}_{{j}},\,{j=1,\dots,n,}$ (44)
where $|Q_{1}|$ is the volume of $Q_{1}$.∎
###### Lemma 2.2.
Let $Q_{2}$ satisfy the same assumptions as those in Lemma 2.1. For any pair
of $\left(\mathbf{u}^{out},p^{out}\right)\in H(Q_{1})\times
L^{2}(Q_{1})/\mathbb{C}$ that satisfies (37), there exists a unique vector
field $\mathbf{u}^{in}(\mathbf{y};\mathbf{f}^{in})\in H(Q_{2})$ that satisfies
the Stokes equations with continuity of the tangential traction on $\Gamma$
$\left\\{\begin{split}\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}^{in})-p^{in}\mathbf{I}\right)&=\mathbf{f}^{in}\quad\text{in
}Q_{2},\\\
\mathbf{n}\times\mathbf{n}\times\left[\left(\left(2\mu_{1}e(\mathbf{u}^{out})-p^{out}\mathbf{I}\right)-\left(2\mu_{1}e(\mathbf{u}^{in})-p^{in}\mathbf{I}\right)\right)\cdot\mathbf{n}\right]&=\mathbf{0}\quad\text{on
}\Gamma,\end{split}\right.$ (45)
where in our context, $\mathbf{f}^{in}=\mathbf{0}\text{ or
}\mathbf{f}^{in}=-\mathbf{e}$. Moreover,
$\left\lVert\mathbf{u}^{in}\right\rVert_{Q_{2}}\leq\frac{E_{1}}{B_{1}(Q_{2})}\left\lVert\mathbf{f}^{in}\right\rVert_{L^{2}(Q_{2})}+\frac{E_{1}}{B_{1}(Q_{1})}\left\lVert\mathbf{f}^{out}\right\rVert_{L^{2}(Q_{1})}+E_{1}\left\lVert\mathbf{u}^{out}\right\rVert_{Q_{1}}$
where $E_{1}$, $B_{1}(Q_{1})$ and $B_{1}(Q_{2})$ depend only on $Q_{1}$ and
$Q_{2}$.
###### Proof.
Take ${\Phi}\in H(Q_{2})$; the variational formulation for the PDE is
$\int_{Q_{2}}\left(\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}^{in})-p^{in}\mathbf{I}\right)\right)\cdot\bar{\Phi}d\mathbf{y}=\int_{Q_{2}}\mathbf{f}^{in}\cdot\bar{\Phi}d\mathbf{y}.$
For ease of notation, let
$\mbox{\boldmath$\pi$}^{in}:=\mbox{\boldmath$\tau$}(\mathbf{u}^{in},p^{in},\mu_{1})$
and
$\mbox{\boldmath$\pi$}^{out}:=\mbox{\boldmath$\tau$}(\mathbf{u}^{out},p^{out},\mu_{1})$
where the stress tensor $\boldsymbol{\tau}$ is defined in (8). Applying
integration by parts to the left-hand side, followed by the divergence
theorem, leads to
$-\int_{\Gamma}({\mbox{\boldmath$\pi$}^{in}\cdot\bar{\Phi}})\cdot\mathbf{n}dS-\int_{Q_{2}}2\mu_{1}e(\mathbf{u}^{in}):\overline{e(\Phi)}d\mathbf{y}=\int_{Q_{2}}\mathbf{f}^{in}\cdot\bar{\Phi}d\mathbf{y}$
(46)
Let $\mathbf{t}$ denote a unit vector in the tangent plane of $\Gamma$, so
that $\mathbf{t}\cdot\mathbf{n}=0$. The membership $\Phi\in H(Q_{2})$ and the
conditions on $\Gamma$ in (45) imply
$\left\\{\begin{split}&\Phi\cdot\mathbf{n}=0\Rightarrow\Phi=d(\mathbf{y})\mathbf{t}\mbox{
for some function }d(\mathbf{y}),\\\
&\mathbf{n}\times\mathbf{n}\times\left[\left(\mbox{\boldmath$\pi$}^{out}-\mbox{\boldmath$\pi$}^{in}\right)\cdot\mathbf{n}\right]=\mathbf{0}\Rightarrow\left(\mbox{\boldmath$\pi$}^{out}-\mbox{\boldmath$\pi$}^{in}\right)\cdot\mathbf{n}=C(\mathbf{y})\mathbf{n}\mbox{
for some function }C(\mathbf{y})\end{split}\right.$
With these observations and the fact that $\mbox{\boldmath$\pi$}^{in}$ is
symmetric, the first term in (46) can be expressed as
$\displaystyle-\int_{\Gamma}{\bar{\Phi}\cdot\mbox{\boldmath$\pi$}^{in}}\cdot\mathbf{n}\,dS$
$\displaystyle=\int_{\Gamma}\overline{d(\mathbf{y})}\mathbf{t}\cdot\left[\left(\mbox{\boldmath$\pi$}^{out}-\mbox{\boldmath$\pi$}^{in}\right)\cdot\mathbf{n}\right]-\int_{\Gamma}\overline{d(\mathbf{y})}\mathbf{t}\cdot(\mbox{\boldmath$\pi$}^{out}\cdot\mathbf{n})\,dS$
$\displaystyle=$
$\displaystyle-\int_{\Gamma}\bar{\Phi}\cdot\mbox{\boldmath$\pi$}^{out}\cdot\mathbf{n}\,dS$
and hence the variational form (46) becomes for all $\Phi\in H(Q_{2})$
$-\int_{Q_{2}}2\mu_{1}e(\mathbf{u}^{in}):\overline{e(\Phi)}d\mathbf{y}=\int_{Q_{2}}\mathbf{f}^{in}\cdot\bar{\Phi}d\mathbf{y}+\int_{\Gamma}\bar{\Phi}\cdot\mbox{\boldmath$\pi$}^{out}\cdot\mathbf{n}dS$
(47)
To bound the right hand side of (47), we first extend $\Phi\in H(Q_{2})$ by
the operator $T$ described in (39)
$\left\lVert T(\Phi)\right\rVert_{Q}\leq
E_{1}\left\lVert\Phi\right\rVert_{Q_{2}}$ (48)
$T(\Phi)$ vanishes outside a small neighborhood of $Q_{2}$ and is $0$ in the
rest of $Q_{1}$. The restriction of $T(\Phi)$ to $Q_{1}$, denoted
by $\Phi^{out}$, has the following estimate
$\left\lVert\Phi^{out}\right\rVert_{Q_{1}}\leq\left\lVert
T(\Phi)\right\rVert_{Q}\leq E_{1}\left\lVert\Phi\right\rVert_{Q_{2}}$ (49)
Hence
$\begin{split}&\left|\int_{\Gamma}\bar{\Phi}\cdot\mbox{\boldmath$\pi$}^{out}\cdot\mathbf{n}dS\right|=\left|\int_{\Gamma}\bar{\Phi}^{out}\cdot\mbox{\boldmath$\pi$}^{out}\cdot\mathbf{n}dS\right|\\\
=&\left|\int_{Q_{1}}\left[\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}^{out})-p^{out}\mathbf{I}\right)\right]\cdot\overline{\Phi^{out}}d\mathbf{y}+\int_{Q_{1}}2\mu_{1}e(\mathbf{u}^{out}):\overline{e(\Phi^{out})}d\mathbf{y}\right|\\\
\leq&\left|\int_{Q_{1}}\mathbf{f}^{out}\cdot\overline{\Phi^{out}}d\mathbf{y}\right|+\left|\int_{Q_{1}}2\mu_{1}e(\mathbf{u}^{out}):\overline{e(\Phi^{out})}d\mathbf{y}\right|\\\
\leq&\frac{E_{1}}{B_{1}(Q_{{1}})}\left\lVert\mathbf{f}^{out}\right\rVert_{L^{2}(Q_{1})}\left\lVert\Phi\right\rVert_{Q_{2}}+E_{1}\left\lVert\mathbf{u}^{out}\right\rVert_{Q_{1}}\left\lVert\Phi\right\rVert_{Q_{2}}\end{split}$
Therefore the right hand side of (47) is bounded by
$\left\lVert\Phi\right\rVert_{Q_{2}}\left(\frac{E_{1}\left\lVert\mathbf{f}^{in}\right\rVert_{L^{2}(Q_{2})}}{B_{1}(Q_{{2}})}+\frac{E_{1}\left\lVert\mathbf{f}^{out}\right\rVert_{L^{2}(Q_{1})}}{B_{1}(Q_{{1}})}+E_{1}\left\lVert\mathbf{u}^{out}\right\rVert_{Q_{1}}\right)$
Finally, by the Lax-Milgram Lemma a unique solution $\mathbf{u}^{in}$ exists
such that
$\left\lVert\mathbf{u}^{in}\right\rVert_{Q_{2}}\leq\frac{E_{1}}{B_{1}(Q_{{2}})}\left\lVert\mathbf{f}^{in}\right\rVert_{L^{2}(Q_{2})}+\frac{E_{1}}{B_{1}(Q_{{1}})}\left\lVert\mathbf{f}^{out}\right\rVert_{L^{2}(Q_{1})}+E_{1}\left\lVert\mathbf{u}^{out}\right\rVert_{Q_{1}}$
(50)
∎
The construction of the solution for $z$ with large magnitude will be carried
out using the following steps.
1. $O(w^{-1})$: Consider the system of (31) and (34) for
$\mathbf{u}_{0}^{in}(\mathbf{y};\mathbf{e})\in H(Q_{2})$.
$\left\\{\begin{split}\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{0}^{in})\right)&=\mathbf{0}\quad\text{in
}Q_{2}\\\
2\mu_{1}e(\mathbf{u}_{0}^{in})\cdot\mathbf{n}&=C(\mathbf{y})\mathbf{n}\quad\text{on
}\Gamma\end{split}\right.$ (51)
The variational formulation is
$-\int_{\Gamma}\bar{\mathbf{v}}\cdot
2\mu_{1}e(\mathbf{u}_{0}^{in})\cdot\mathbf{n}-\int_{Q_{2}}2\mu_{1}e(\mathbf{u}_{0}^{in}):\overline{e(\mathbf{v})}=0\quad\forall\mathbf{v}\in
H(Q_{2})$ (52)
The first term vanishes because of the boundary condition and
$\mathbf{v}\cdot\mathbf{n}=0$ on $\Gamma$. Hence $e(\mathbf{u}_{0}^{in})=\mathbf{0}$,
so $\mathbf{u}_{0}^{in}\in\mathcal{R}(Q_{2})$ and therefore
$\mathbf{u}_{0}^{in}(\mathbf{y};\mathbf{e})=\mathbf{0}$ in $Q_{2}$ because
$H(Q_{2})\perp\mathcal{R}(Q_{2})$.
2. $O(w^{0})$ in $Q_{1}$: Solve the system of (36) and (29) for
$\mathbf{u}_{0}^{out}(\mathbf{y};\mathbf{e})\in H(Q_{1})$.
$\left\\{\begin{split}\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{0}^{out})-p_{0}^{out}\mathbf{I}\right)&=-\mathbf{e}\quad\text{in
}Q_{1}\\\ \mathbf{u}_{0}^{out}=\mathbf{u}_{0}^{in}&=\mathbf{0}\quad\text{on
}\Gamma\end{split}\right.$ (53)
An application of Lemma 2.1 and (44) leads to the following result
$\left\lVert\mathbf{u}_{0}^{out}\right\rVert_{Q_{1}}\leq\frac{\sqrt{|Q_{1}|}}{B_{1}(Q_{{1}})}+2E_{1}\left\lVert\mathbf{u}^{in}_{0}\right\rVert_{Q_{2}}=\frac{\sqrt{|Q_{1}|}}{B_{1}(Q_{{1}})}$
(54)
3. $O(w^{0})$ in $Q_{2}$: Consider the system of (32) and (36) for
$\mathbf{u}_{1}^{in}\in H(Q_{2})$:
$\left\\{\begin{split}&\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{1}^{in})-p_{0}^{in}\mathbf{I}\right)=-\mathbf{e}\quad\text{in
}Q_{2}\\\
&\mathbf{n}\times\mathbf{n}\times\left[\left(\left(2\mu_{1}e(\mathbf{u}_{0}^{out})-p_{0}^{out}\mathbf{I}\right)-\left(2\mu_{1}e(\mathbf{u}_{1}^{in})-p_{0}^{in}\mathbf{I}\right)\right)\cdot\mathbf{n}\right]=\mathbf{0}\quad\text{on
}\Gamma\end{split}\right.$
By applying Lemma 2.2 and (50) with $\mathbf{f}^{out}=-\mathbf{e}$ and
$\mathbf{f}^{in}=-\mathbf{e}$, we obtain
$\left\lVert\mathbf{u}_{1}^{in}\right\rVert_{Q_{2}}\leq{C_{1}E_{1}},\quad{C_{1}}:=\frac{\sqrt{|Q_{2}|}}{B_{1}(Q_{{2}})}+2\frac{\sqrt{|Q_{1}|}}{B_{1}(Q_{{1}})}$
(55)
4. 4.
Induction step, $k\geq 1$: Given $\mathbf{u}_{k}^{in}\in H(Q_{2})$ and
$\mathbf{u}_{k-1}^{out}\in H(Q_{1})$, find
$\mathbf{u}_{k+1}^{in}(\mathbf{y};\mathbf{0})\in H(Q_{2})$ and
$\mathbf{u}_{k}^{out}(\mathbf{y};\mathbf{0})\in H(Q_{1})$.
1. (a)
Applying Lemma 2.1 with $\mathbf{f}=\mathbf{0}$, we conclude that for a given
$\mathbf{u}_{k}^{in}\in H(Q_{2})$, $k\geq 1$, there exists a unique
$\mathbf{u}_{k}^{out}\in H(Q_{1})$ that solves the system of (30) and (36) and
assumes the estimate
$\left\\{\begin{split}\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{k}^{out})-p_{k}^{out}\mathbf{I}\right)&=\mathbf{0}\quad\text{in
}Q_{1}\\\ \mathbf{u}_{k}^{out}&=\mathbf{u}_{k}^{in}\quad\text{on
}\Gamma,\end{split}\right.$ (56)
$\left\lVert\mathbf{u}_{k}^{out}\right\rVert_{Q_{1}}\leq
2E_{1}\left\lVert\mathbf{u}_{k}^{in}\right\rVert_{Q_{2}}$ (57)
2. (b)
By applying Lemma 2.2 with $\mathbf{f}^{in}=\mathbf{0}=\mathbf{f}^{out}$, we
see that for any given $\mathbf{u}_{k}^{out}\in H(Q_{1})$, $k\geq 1$ that
satisfies (56), there exists a unique solution $\mathbf{u}_{k+1}^{in}\in
H(Q_{2})$ to the system of equations (33) and (36)
$\left\\{\begin{split}\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{k+1}^{in})-p_{k}^{in}\mathbf{I}\right)&=\mathbf{0}\quad\text{in
}Q_{2}\\\
\mathbf{n}\times\mathbf{n}\times\left[\left(\left(2\mu_{1}e(\mathbf{u}_{k}^{out})-p_{k}^{out}\mathbf{I}\right)-\left(2\mu_{1}e(\mathbf{u}_{k+1}^{in})-p_{k}^{in}\mathbf{I}\right)\right)\mathbf{n}\right]&=\mathbf{0}\quad\text{on
}\Gamma\end{split}\right.$ (58)
Moreover,
$\left\lVert\mathbf{u}_{k+1}^{in}\right\rVert_{Q_{2}}\leq
E_{1}\left\lVert\mathbf{u}_{k}^{out}\right\rVert_{Q_{1}}$ (59)
Now we have found the coefficients
$\mathbf{u}^{in}_{n}(\mathbf{y};\mathbf{e})$ and
$\mathbf{u}^{out}_{n}(\mathbf{y};\mathbf{e})$ in (28) iteratively. We prove
the convergence of the series in the following theorem by taking into account
the fact that $\mathbf{u}_{0}^{in}=\mathbf{0}$.
###### Theorem 2.1.
Define the partial sums
$\mathbf{S}_{q}^{in}(\mathbf{y};\mathbf{e},w):=\sum_{k=0}^{q}\mathbf{u}^{in}_{k+1}(\mathbf{y},\mathbf{e})w^{k+1},\,\mathbf{S}_{q}^{out}(\mathbf{y};\mathbf{e},w):=\sum_{k=0}^{q}\mathbf{u}^{out}_{k}(\mathbf{y};\mathbf{e})w^{k}.$
Let $R\in(0,1)$. In the disk $|w|\leq\frac{R}{2E_{1}^{2}}$, the partial sums
$\mathbf{S}_{q}^{in}(\mathbf{y};\mathbf{e},w)$ and
$\mathbf{S}_{q}^{out}(\mathbf{y};\mathbf{e},w)$ converge uniformly to
$\mathbf{u}^{in}_{\infty}(\mathbf{y};\mathbf{e},w)\in H(Q_{2})$ and
$\mathbf{u}^{out}_{\infty}(\mathbf{y};\mathbf{e},w)\in H(Q_{1})$,
respectively. Therefore,
$\mathbf{u}_{\infty}(\mathbf{y};\mathbf{e},w):=\mathbf{u}^{in}_{\infty}(\mathbf{y};\mathbf{e},w)\chi_{2}+\mathbf{u}^{out}_{\infty}(\mathbf{y};\mathbf{e},w)\chi_{1}\in
H(Q)$ solves the cell problem (13) and is analytic for
$|w|<\frac{1}{2E_{1}^{2}}$.
###### Proof.
For each $q\in\mathbb{N}$, $\mathbf{S}_{q}^{in}(\mathbf{y};\mathbf{e},w)$ is a
polynomial function of $w$ and maps from $\mathbb{C}$ to the Hilbert space
$H(Q_{2})$. Similarly, $\mathbf{S}_{q}^{out}(\mathbf{y};\mathbf{e},w)$ maps
from $\mathbb{C}$ to $H(Q_{1})$. To show uniform convergence, we note that
(57) and (59) imply
$\left\lVert\mathbf{u}^{in}_{k+1}\right\rVert_{Q_{2}}\leq
E_{1}\left\lVert\mathbf{u}^{out}_{k}\right\rVert_{Q_{1}}\leq
2E_{1}^{2}\left\lVert\mathbf{u}^{in}_{k}\right\rVert_{Q_{2}}$, where the
constant $E_{1}>0$ depends only on $Q_{1}$ and $Q_{2}$. Therefore,
$\left\lVert\mathbf{u}_{k}^{in}\right\rVert_{Q_{2}}\leq\left(2E_{1}^{2}\right)^{k-1}\left\lVert\mathbf{u}_{1}^{in}\right\rVert_{Q_{2}},k\geq
1$ (60)
Let $m>q>N$, and define $r:=2E_{1}^{2}\left|w\right|$. Then (55) implies
$\displaystyle\left\lVert\mathbf{S}_{m}^{in}(w)-\mathbf{S}_{q}^{in}(w)\right\rVert_{Q_{2}}\leq\left\lVert\mathbf{u}_{1}^{in}\right\rVert_{Q_{2}}\left((2E_{1}^{2})^{q}|w|^{q+1}+\cdots+(2E_{1}^{2})^{m-1}\left|w\right|^{m}\right)$
$\displaystyle\leq\frac{r^{q+1}-r^{m+1}}{1-r}\frac{\left\lVert\mathbf{u}_{1}^{in}\right\rVert_{Q_{2}}}{2E_{1}^{2}}\leq\frac{r^{q+1}-r^{m+1}}{1-r}\left({\frac{C_{1}}{2E_{1}^{2}}}\right),$
where $C_{1}$ is defined in (55).
Therefore, for $r\leq R<1$, i.e. $|w|\leq\frac{R}{2E_{1}^{2}}$, where $R$ is
any fixed number in $(0,1)$,
$\left\lVert\mathbf{S}_{m}^{in}(w)-\mathbf{S}_{q}^{in}(w)\right\rVert_{Q_{2}}\leq\left({\frac{C_{1}}{2E_{1}^{2}}}\right)\left(\frac{R^{N+1}}{1-R}\right),\,\,\forall
m>q>N.$ (61)
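The tail estimate (61) rests on the closed form of a finite geometric sum. As a numerical aside (an illustration, not part of the proof; the particular values of $r$, $R$, $N$ below are arbitrary choices), both the closed form and the uniform bound can be checked directly:

```python
# Sanity check of the geometric tail bound behind (61): for 0 < r <= R < 1 and
# m > q > N, the tail sum_{k=q+1}^{m} r^k equals (r^{q+1} - r^{m+1}) / (1 - r)
# and is dominated by the uniform bound R^{N+1} / (1 - R).
import numpy as np

def geometric_tail(r, q, m):
    """Closed form of the finite geometric tail sum_{k=q+1}^{m} r^k."""
    return (r**(q + 1) - r**(m + 1)) / (1 - r)

r, R, N = 0.4, 0.5, 3                      # arbitrary illustrative values
for q, m in [(4, 7), (5, 20), (10, 11)]:   # each pair satisfies m > q > N
    direct = sum(r**k for k in range(q + 1, m + 1))
    closed = geometric_tail(r, q, m)
    assert abs(direct - closed) < 1e-12            # closed form matches the sum
    assert closed <= R**(N + 1) / (1 - R) + 1e-12  # uniform bound for q > N
print("geometric tail bound verified")
```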
For $\mathbf{S}_{q}^{out}(\mathbf{y};\mathbf{e},w)$ we have
$\left\lVert\mathbf{u}_{k}^{out}\right\rVert_{Q_{1}}\leq\left(2E_{1}^{2}\right)^{k-1}\left\lVert\mathbf{u}_{1}^{out}\right\rVert_{Q_{1}}$.
By a similar procedure, for $m>q>N$ and $|w|\leq\frac{R}{2E_{1}^{2}}$ the
following estimate is valid
$\left\lVert\mathbf{S}_{m}^{out}(w)-\mathbf{S}_{q}^{out}(w)\right\rVert_{Q_{1}}\leq{C_{1}}\left(\frac{R^{N+1}}{1-R}\right)$
Therefore, for every fixed $w$ satisfying $|w|\leq\frac{R}{2E_{1}^{2}}$ for
any $0<R<1$, $\mathbf{S}_{q}^{in}(\mathbf{y};w)$ and
$\mathbf{S}_{q}^{out}(\mathbf{y};w)$ converge uniformly to
$\mathbf{u}^{in}_{\infty}(\mathbf{y};w)\in H(Q_{2})$ and
$\mathbf{u}^{out}_{\infty}(\mathbf{y};w)\in H(Q_{1})$, respectively. Since for
each $q$, $\mathbf{S}_{q}^{in}(\mathbf{y};w)$ and
$\mathbf{S}_{q}^{out}(\mathbf{y};w)$ are polynomials in $w$, hence analytic,
the uniform convergence implies that the limit functions
$\mathbf{u}^{in}_{\infty}(\mathbf{y};w)$ and
$\mathbf{u}^{out}_{\infty}(\mathbf{y};w)$ are also analytic in
$|w|<\frac{1}{2E_{1}^{2}}$ with values in $H(Q_{2})$ and $H(Q_{1})$,
respectively, by Morera’s theorem for Banach-space-valued analytic
functions [27] applied to the uniformly convergent sequences. By construction,
the function $\mathbf{u}_{\infty}(\mathbf{y};\mathbf{e},w)$ defined in (27)
solves the cell problem (13) for all $w$ in the disk
$\\{w:|w|<\frac{1}{2E_{1}^{2}}\\}=:B_{0}(\frac{1}{2E_{1}^{2}})$. Moreover, the
uniqueness of the solution implies that
$\mathbf{u}_{\infty}(\mathbf{y};\mathbf{e}_{k},w)=\mathbf{u}^{k}(\mathbf{y};\frac{1}{w})\text{
in }H(Q)$ for $w\in
B_{0}(\frac{1}{2E_{1}^{2}})\cap\\{w\in\mathbb{C}\setminus(-\infty,0]\\}$. ∎
The following theorem shows the relation between the two-fluid self-
permeability $K$ in (12) and the Darcy permeability
$\mbox{\boldmath$K$}^{(D)}$ in (5).
###### Theorem 2.2.
In the case of large viscosity $|z|>2E_{1}^{2}$ (or
$|w|<\frac{1}{2E_{1}^{2}}$), we have
1. 1.
$\mathbf{u}_{\infty}^{in}(\mathbf{y};\mathbf{e}_{i},0)=\mathbf{0}$ in $Q_{2}$
2. 2.
As $w\rightarrow 0$, the solution
$\mathbf{u}_{\infty}^{out}(\mathbf{y};\mathbf{e}_{i},w)$ converges uniformly
in $\mathring{H}(Q_{1})$ to the solution $\mathbf{u}_{D}^{i}(\mathbf{y})$ of
the classical cell problem (4).
3. 3.
For $w\in B_{0}(\frac{1}{2E_{1}^{2}})$, the difference between the self-
permeability $\mathbf{K}(\mathbf{y};\mathbf{e}_{i},w)$ and the classical
permeability tensor $\mathbf{K}^{(D)}(\mathbf{y};\mathbf{e}_{i})$ satisfies
$\lvert K_{ij}-(K^{(D)})_{ij}\rvert=O(|w|)$, hence
$\mathbf{K}\to\mathbf{K}^{(D)}$ uniformly as $|w|\to 0$.
###### Proof.
The uniform convergence allows passing the limit $w\rightarrow 0$ inside the
summation of (28) to obtain
$\mathbf{u}_{\infty}^{in}(\mathbf{y};\mathbf{e}_{i},0)=\mathbf{0}$. Similarly,
the uniform convergence allows passing the limit $w\rightarrow 0$ inside the
summation of (28) to obtain
$\mathbf{u}^{out}_{\infty}(\mathbf{y};\mathbf{e}_{{i}},0)=\mathbf{u}_{0}^{out}(\mathbf{y};\mathbf{e}_{i})$
Furthermore, $\mathbf{u}_{0}^{out}(\mathbf{y};\mathbf{e}_{i})\in H(Q_{1})$
satisfies (53) and in fact
$\mathbf{u}_{0}^{out}(\mathbf{y};\mathbf{e}_{i})\in\mathring{H}(Q_{1})$ since
$\mathbf{u}_{0}^{out}(\mathbf{y};\mathbf{e}_{i})|_{\Gamma}=\mathbf{0}$, which
is identical to the equation for $\mathbf{u}_{D}$ (4). The uniqueness of the
solution then ensures that
$\mathbf{u}_{0}^{out}(\mathbf{y};\mathbf{e}_{i})=\mathbf{u}_{D}^{i}(\mathbf{y})$.
Therefore the series
$\mathbf{u}_{\infty}^{out}(\mathbf{y};\mathbf{e}_{i},w)\to\mathbf{u}_{D}^{i}(\mathbf{y})$
uniformly as $|w|\to 0$ in $\mathring{H}(Q_{1})$. For (iii), we note that
$\begin{split}\left|K_{ij}(w)-K^{(D)}_{ij}\right|&=\left|\int_{Q}\left(\mathbf{u}^{i}-\chi_{1}\mathbf{u}_{D}^{i}\right)\cdot\mathbf{e}_{j}d\mathbf{y}\right|\leq\left\lVert\mathbf{u}^{i}-\chi_{1}\mathbf{u}^{i}_{D}\right\rVert_{L^{2}(Q)}\\\
&\leq\frac{1}{B_{1}(Q)}\left\lVert\sum_{k=1}^{\infty}\left(\mathbf{u}_{k}^{in}(\mathbf{y};\mathbf{e}_{i})\chi_{2}+\mathbf{u}_{k}^{out}(\mathbf{y};\mathbf{e}_{i})\chi_{1}\right)w^{k}\right\rVert_{Q}\end{split}$
From (57), (60) and (55), we have for $|w|<\frac{1}{2E_{1}^{2}}$, or
equivalently $|z|>2E_{1}^{2}$,
$\left|K_{ij}(w)-K^{(D)}_{ij}\right|\leq{C_{1}\left(\frac{E_{1}+1}{2E_{1}B_{1}(Q)}\right)}\frac{2E_{1}^{2}|w|}{1-2E_{1}^{2}|w|}$
(62)
∎
In the following section, we study the behavior of $\mbox{\boldmath$K$}(z)$
near $z=0$, i.e., when the inclusion is an air bubble.
### 2.4 Analyticity of the solution for small $|z|$
Let $\mathbf{e}$ be a constant unit vector in $\mathbb{R}^{n}$. We seek
solutions of the following form
$\displaystyle\mathbf{u}_{null}^{in}(\mathbf{y};\mathbf{e},z)$
$\displaystyle=\sum_{k=0}^{\infty}\mathbf{u}^{in}_{k}(\mathbf{y};\mathbf{e})z^{k},\quad
p^{in}(\mathbf{y};\mathbf{e},z)=\sum_{k=0}^{\infty}p^{in}_{k}(\mathbf{y};\mathbf{e})z^{k}\text{
in }Q_{2},$ (63)
$\displaystyle\mathbf{u}_{null}^{out}(\mathbf{y};\mathbf{e},z)$
$\displaystyle=\sum_{k=0}^{\infty}\mathbf{u}^{out}_{k}(\mathbf{y};\mathbf{e})z^{k},\quad
p^{out}(\mathbf{y};\mathbf{e},z)=\sum_{k=0}^{\infty}p^{out}_{k}(\mathbf{y};\mathbf{e})z^{k}\text{
in }Q_{1}$ (64)
By a procedure similar to that in Section 2.3, the following equations are
obtained via collecting terms with respect to the order of $z$. The PDEs for
$Q_{1}$ are as follows.
$\displaystyle O(1)$ $\displaystyle:$
$\displaystyle\qquad\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{0}^{out})-p_{0}^{out}\mathbf{I}\right)=-\mathbf{e}$
(65) $\displaystyle O(z^{k}),\,k\geq 1$ $\displaystyle:$
$\displaystyle\qquad\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{k}^{out})-p_{k}^{out}\mathbf{I}\right)=\mathbf{0}$
(66)
Similarly, the PDEs for $Q_{2}$ are
$\displaystyle O(1)$ $\displaystyle:$ $\displaystyle\qquad-\nabla
p_{0}^{in}=-\mathbf{e}$ (67) $\displaystyle O(z^{k}),\,k\geq 1$
$\displaystyle:$
$\displaystyle\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}_{k-1}^{in})-p_{k}^{in}\mathbf{I}\right)=\mathbf{0}$
(68)
The interface condition (36) remains the same for the small $|z|$ case while
(34) and (36) now read
$\displaystyle\mathbf{n}\times\mathbf{n}\times\left[\left(\left(2\mu_{1}e(\mathbf{u}_{0}^{out})-p_{0}^{out}\mathbf{I}\right)-\left(-p_{0}^{in}\mathbf{I}\right)\right)\cdot\mathbf{n}\right]=\mathbf{0}$
(69)
$\displaystyle\mathbf{n}\times\mathbf{n}\times\left[\left(\left(2\mu_{1}e(\mathbf{u}_{k}^{out})-p_{k}^{out}\mathbf{I}\right)-\left(2\mu_{1}e(\mathbf{u}_{k-1}^{in})-p_{k}^{in}\mathbf{I}\right)\right)\cdot\mathbf{n}\right]=\mathbf{0},\,k\geq
1$ (70)
The first equation to be solved is (67), whose solution is simply
$p_{0}^{in}(\mathbf{y})=\mathbf{e}\cdot\mathbf{y}+{c}-\int_{Q_{2}}(\mathbf{e}\cdot\mathbf{y}+{c})d\mathbf{y}\mbox{
in }Q_{2}$ (71)
where ${c}$ is a constant. The next problem is the system of (65) and (69).
Similar to the calculation in Lemma 2.2, the weak formulation of this system
is: Find $\mathbf{u}_{0}^{out}\in H(Q_{1})$ such that for all
$\mbox{\boldmath$\Phi$}\in H(Q_{1})$, with
$\pi_{0}^{out}:=2\mu_{1}e(\mathbf{u}_{0}^{out})-p_{0}^{out}\mathbf{I}$,
$-\int_{\Gamma}\bar{\mbox{\boldmath$\Phi$}}\cdot\left(\left(\pi_{0}^{out}+p_{0}^{in}\mathbf{I}\right)-p_{0}^{in}\mathbf{I}\right)\cdot\mathbf{n}\,dS-\int_{Q_{1}}2\mu_{1}e(\mathbf{u}_{0}^{out}):\overline{e(\mbox{\boldmath$\Phi$})}\,d\mathbf{y}=\int_{Q_{1}}-\mathbf{e}\cdot\bar{\mbox{\boldmath$\Phi$}}\,d\mathbf{y}$
Since $\mbox{\boldmath$\Phi$}\cdot\mathbf{n}=0$ and
$p_{0}^{in}\mathbf{I}\cdot\mathbf{n}$ is parallel to $\mathbf{n}$, (69)
implies the integral on $\Gamma$ vanishes. Hence by the Lax-Milgram lemma, we
have
$\|\mathbf{u}_{0}^{out}\|_{Q_{1}}\leq\frac{\sqrt{|Q_{1}|}}{B_{1}(Q_{{1}})}$
(72)
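As a quick numerical aside (an illustration, not part of the derivation), the explicit pressure (71) can be checked on a grid; the vector $\mathbf{e}$, the constant $c$, and the choice of $Q_{2}$ as the unit square below are assumptions made only for the sketch:

```python
# Finite-difference check that p(y) = e.y + c minus its average over Q2 has
# zero mean and gradient e, as the explicit solution (71) claims.
import numpy as np

e = np.array([1.0, 0.5])            # illustrative vector (assumption)
c = 0.3                             # illustrative constant (assumption)
n = 201
y1, y2 = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
p = e[0] * y1 + e[1] * y2 + c
p -= p.mean()                       # subtract the average over Q2

assert abs(p.mean()) < 1e-10        # zero-mean normalization
dp1 = np.gradient(p, y1[:, 0], axis=0)
dp2 = np.gradient(p, y2[0, :], axis=1)
assert np.allclose(dp1, e[0]) and np.allclose(dp2, e[1])  # grad p = e exactly
print("pressure check passed")
```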
The system for $\mathbf{u}_{k-1}^{in}$, $k\geq 1$ (inner problem) is to find
$\mathbf{u}_{k-1}^{in}\in H(Q_{2})$ with given $\mathbf{u}_{k-1}^{out}\in
H(Q_{1})$ such that
$\left\\{\begin{split}\text{div}\left(2\mu_{1}e(\mathbf{u}_{k-1}^{in})-p_{k}^{in}\mathbf{I}\right)=\mathbf{0}\mbox{
in }Q_{2}\\\
\mathbf{u}_{k-1}^{in}|_{\Gamma}=\mathbf{u}_{k-1}^{out}|_{\Gamma}\end{split}\right.$
(73)
With an argument similar to the derivation of Lemma 2.1, the following
estimate can be derived for system (73)
###### Lemma 2.3.
Let $Q_{2}$ satisfy the same assumptions as in Lemma 2.1. For any given vector
field $\mathbf{u}^{out}\in H(Q_{1})$, there exists a unique weak solution
$\mathbf{u}^{in}(\mathbf{y})\in H(Q_{2})$ such that
$\displaystyle\left\\{\begin{split}\text{div}_{\mathbf{y}}\left(2\mu_{1}e(\mathbf{u}^{in})-p^{in}\mathbf{I}\right)&=\mathbf{f}^{in}\quad\text{
in }Q_{2}\\\ \mathbf{u}^{in}&=\mathbf{u}^{out}\quad\text{ on
}\Gamma\end{split}\right.$ (74)
$\displaystyle\left\lVert\mathbf{u}^{in}\right\rVert_{Q_{2}}\leq\frac{1}{B_{1}(Q_{{2}})}\left\lVert\mathbf{f}^{in}\right\rVert_{L^{2}(Q_{2})}+2E_{2}\left\lVert\mathbf{u}^{out}\right\rVert_{Q_{1}}.$
(75)
where $E_{2}>1$ is the constant associated with the extension operator $T$,
$\|T(\mbox{\boldmath$\Phi$})\|_{Q}\leq
E_{2}\|\mbox{\boldmath$\Phi$}\|_{Q_{{1}}}$ for all $\mbox{\boldmath$\Phi$}\in
H(Q_{{1}})$ and $T(\mbox{\boldmath$\Phi$})$ decays rapidly to 0 inside
$Q_{2}$. Note that the periodic condition of space $H(Q_{1})$ implies
$\int_{\Gamma}\mathbf{u}^{out}\cdot\mathbf{n}\,dS=0$.
The system for $\mathbf{u}_{k}^{out}$ and $p_{k}^{out}$ with given
$\mathbf{u}_{k-1}^{in}\in H(Q_{2})$ and $p_{k}^{in}$, $k\geq 1$ is
$\left\\{\begin{split}\text{div}\left(2\mu_{1}e(\mathbf{u}_{k}^{out})-p_{k}^{out}\mathbf{I}\right)=\mathbf{0}\\\
\mathbf{n}\times\mathbf{n}\times\left[\left(\left(2\mu_{1}e(\mathbf{u}_{k}^{out})-p_{k}^{out}\mathbf{I}\right)-\left(2\mu_{1}e(\mathbf{u}_{k-1}^{in})-p_{k}^{in}\mathbf{I}\right)\right)\cdot\mathbf{n}\right]=\mathbf{0}\end{split}\right.$
(76)
By an argument similar to the one for Lemma 2.2, the system above can be shown
to satisfy the following estimate.
###### Lemma 2.4.
Let $Q_{2}$ satisfy the same assumptions as in Lemma 2.1. For any given pair
$\left(\mathbf{u}^{in},p^{in}\right)\in H(Q_{2})\times
L^{2}(Q_{2})/\mathbb{C}$ that satisfies (74), there exists a unique vector
$\mathbf{u}^{out}(\mathbf{y};\mathbf{f}^{out})\in H(Q_{1})$ solving the
following system
$\displaystyle\left\\{\begin{split}&\text{div}\left(2\mu_{1}e(\mathbf{u}^{out})-p^{out}\mathbf{I}\right)=\mathbf{f}^{out}\text{
in }Q_{1}\\\
&\mathbf{n}\times\mathbf{n}\times\left[\left(\left(2\mu_{1}e(\mathbf{u}^{in})-p^{in}\mathbf{I}\right)-\left(2\mu_{1}e(\mathbf{u}^{out})-p^{out}\mathbf{I}\right)\right)\cdot\mathbf{n}\right]=\mathbf{0}\text{
on }\Gamma\end{split}\right.$ (77)
$\displaystyle\left\lVert\mathbf{u}^{out}\right\rVert_{Q_{1}}\leq\frac{E_{2}}{B_{1}(Q_{{1}})}\left\lVert\mathbf{f}^{out}\right\rVert_{L^{2}(Q_{1})}+\frac{E_{2}}{B_{1}(Q_{{2}})}\left\lVert\mathbf{f}^{in}\right\rVert_{L^{2}(Q_{2})}+E_{2}\left\lVert\mathbf{u}^{in}\right\rVert_{Q_{2}}$
(78)
where $E_{2}$, $B_{1}$ depend only on $Q$ and $\Gamma$.
Equation (76), Lemma 2.3, Equation (73) and Lemma 2.4 imply that for all
$k\geq 0$, we have $\|\mathbf{u}_{k}^{in}\|_{Q_{2}}\leq
2E_{2}\|\mathbf{u}_{k}^{out}\|_{Q_{1}}$ and
$\|\mathbf{u}_{k+1}^{out}\|_{Q_{1}}\leq E_{2}\|\mathbf{u}_{k}^{in}\|_{Q_{2}}$.
Therefore,
$\displaystyle\|\mathbf{u}^{in}_{k}\|_{Q_{2}}\leq\frac{(2E_{2}^{2})^{k+1}}{E_{2}}\|\mathbf{u}_{0}^{out}\|_{Q_{1}}\leq{(2E_{2}^{2})^{k+1}}\left(\frac{\sqrt{|Q_{1}|}}{E_{2}B_{1}(Q_{{1}})}\right)$
(79)
$\displaystyle\|\mathbf{u}^{out}_{k}\|_{Q_{1}}\leq(2E_{2}^{2})^{k}\|\mathbf{u}_{0}^{out}\|_{Q_{1}}\leq(2E_{2}^{2})^{k}\left(\frac{\sqrt{|Q_{1}|}}{B_{1}(Q_{1})}\right)$
(80)
Therefore, the series in (63) and (64) converge uniformly in the disk
$|z|<\frac{1}{2E_{2}^{2}}$ to analytic functions with values in $H(Q_{2})$ and
$H(Q_{1})$, respectively. The limit functions
$\mathbf{u}_{null}^{in}(\mathbf{y},\mathbf{e},z)$,
$\mathbf{u}_{null}^{out}(\mathbf{y},\mathbf{e},z)$ and the corresponding
permeability $K_{ij}(z)$ in (19) are analytic at $z=0$. Define the
permeability ('B' for 'bubbles')
$K_{ij}^{(B)}:=\int_{Q}[\chi_{1}\mathbf{u}_{0}^{out}(\mathbf{y};\mathbf{e}_{i})+\chi_{2}\mathbf{u}_{0}^{in}(\mathbf{y};\mathbf{e}_{i})]\cdot\mathbf{e}_{j}\,d\mathbf{y}$
(81)
then the following estimate, valid for $|z|<\frac{1}{2E_{2}^{2}}$, holds
$\displaystyle|K_{ij}(z)-K^{B}_{ij}|\leq\frac{\sqrt{|Q_{1}|}(1+2E_{2})}{{B_{1}(Q)B_{1}(Q_{1})}}\left(\frac{2E_{2}^{2}|z|}{1-2E_{2}^{2}|z|}\right)=O(|z|).$
(82)
In conclusion, $\mbox{\boldmath$K$}(z)$ in (12) is analytic for
$z\in\mathbb{C}\setminus[-2E_{1}^{2},-\frac{1}{2E_{2}^{2}}]$, where
$E_{1},E_{2}\geq 1$. In the next section, an integral representation formula
(IRF) for $\mbox{\boldmath$K$}(z)$ will be derived in two different ways.
## 3 Integral representation of permeability $\mbox{\boldmath$K$}(z)$
We first observe two properties of $K$ implied by (19).
###### Proposition 3.1.
$\displaystyle\frac{\mbox{\boldmath$K$}(z)-\mbox{\boldmath$K$}^{*}(z)}{z-\bar{z}}\leq
0\mbox{ if }Im(z)\neq 0$ (83) $\displaystyle\mbox{\boldmath$K$}(x)\geq 0\mbox{
for }x>0$ (84)
###### Proof.
Note that
$K_{ij}(z)-(K^{*})_{ij}(z)={2\mu_{1}}(\overline{z}-z)\int_{Q_{2}}{e(\mathbf{u}^{j}(z))}:\overline{e(\mathbf{u}^{i}(z))}\,d\mathbf{y}$.
Hence
$\frac{K_{ij}(z)-K^{*}_{ij}(z)}{z-\overline{z}}={-}{2\mu_{1}}\int_{Q_{2}}{e(\mathbf{u}^{j}(z))}:\overline{e(\mathbf{u}^{i}(z))}\,d\mathbf{y}={-}(\mathbf{u}^{j},\mathbf{u}^{i})_{Q_{2}}=:-A_{ij}$
The matrix $\boldsymbol{A}$ is obviously Hermitian and for any
$\boldsymbol{\xi}\in\mathbb{C}^{n}$, we have
$\overline{\xi_{i}}A_{ij}{\xi_{j}}=(\xi_{j}\mathbf{u}^{j},\xi_{i}\mathbf{u}^{i})_{Q_{2}}\geq
0$. This proves (83). Recall that
$K_{ij}(x)=\left((\mathbf{u}^{j},\mathbf{u}^{i})_{Q_{1}}+x(\mathbf{u}^{j},\mathbf{u}^{i})_{Q_{2}}\right)$.
With a similar argument, (84) follows. ∎
With these two properties and the fact that $K$ is holomorphic in
$\mathbb{C}\setminus(-\infty,0]$, the characterization theorem for matrix-
valued functions belonging to the Stieltjes class [20] implies that there
exists a monotonically increasing matrix-valued function
$\boldsymbol{\sigma}(t)$ such that the following integral representation
formula (IRF) holds for $z\in\mathbb{C}\setminus(-\infty,0]$
$\mbox{\boldmath$K$}(z)=\boldsymbol{A}+\frac{\boldsymbol{C}}{z}+\int_{+0}^{\infty}\frac{1}{z+t}d\boldsymbol{\sigma}(t)$
where $\boldsymbol{A}\geq 0$, $\boldsymbol{C}\geq 0$,
$\int_{+0}^{\infty}\frac{1}{1+t}d\boldsymbol{\sigma}(t){:=\displaystyle{\lim_{\epsilon\downarrow
0}\int_{\epsilon}^{\infty}}\frac{1}{1+t}d\boldsymbol{\sigma}(t)}<\infty$ and
$\boldsymbol{A}+\boldsymbol{C}+\int_{+0}^{\infty}\frac{1}{1+t}d\boldsymbol{\sigma}(t)>0$.
Since $\mbox{\boldmath$K$}(0)=\mbox{\boldmath$K$}^{(B)}$ is finite, we must
have $\boldsymbol{C}=\boldsymbol{0}$. Also,
$\boldsymbol{K}(\infty)=\boldsymbol{K}^{(D)}$ implies
$\boldsymbol{A}=\boldsymbol{K}^{(D)}$, so
$\mbox{\boldmath$K$}(z)=\boldsymbol{K}^{(D)}+\int_{\frac{1}{2E_{2}^{2}}}^{2E_{1}^{2}}\frac{1}{z+t}d\boldsymbol{\sigma}(t)$
Therefore, for real valued $z$, $\mbox{\boldmath$K$}(z)$ is decreasing as $z$
increases, i.e., $\mbox{\boldmath$K$}(x_{1})-\mbox{\boldmath$K$}(x_{2})$ is
negative semidefinite if $x_{1}>x_{2}$. To study how the measure
$d\boldsymbol{\sigma}$ is related to the microstructure, we derive the
spectral representation of $\mbox{\boldmath$K$}(z)$ by using the underlying
system (13).
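Before turning to the spectral representation, the structure of the Stieltjes representation above can be made concrete with a scalar toy model; the discrete measure, weights, and the value playing the role of $\boldsymbol{A}$ below are assumptions chosen only for illustration, not quantities from this paper:

```python
# Scalar toy model of a Stieltjes function K(z) = A + sum_j w_j / (z + t_j):
# it exhibits the sign property (83), positivity (84), and K(z) -> A at infinity.
import numpy as np

A = 0.2                                # plays the role of K^(D) (assumption)
t = np.array([0.5, 1.0, 2.5])          # support of d(sigma) (assumption)
w = np.array([0.1, 0.3, 0.05])         # nonnegative weights (assumption)

def K(z):
    return A + np.sum(w / (z + t))

for z in [1 + 2j, -0.3 + 0.7j, 5 - 1j]:
    # scalar form of (83): (K - conj K)/(z - conj z) = Im K / Im z <= 0
    assert K(z).imag / z.imag <= 0
for x in [0.1, 1.0, 10.0]:
    assert K(x) >= 0                   # property (84) on the positive axis
assert abs(K(1e9) - A) < 1e-8          # K(z) -> A as z -> infinity
print("Stieltjes-class properties verified")
```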
### 3.1 Spectral representation of $\mbox{\boldmath$K$}(z)$
Adding
$\int_{Q_{2}}2\mu_{1}e(\mathbf{u}^{k}):\overline{e(\mathbf{v})}d\mathbf{y}$ to
both sides of (21), we have
$\int_{Q}2\mu_{1}e(\mathbf{u}^{k}):\overline{e(\mathbf{v})}d\mathbf{y}=-\frac{1}{s}\int_{Q}2\mu_{1}\chi_{2}e(\mathbf{u}^{k}):\overline{e(\mathbf{v})}d\mathbf{y}+\int_{Q}\mathbf{e}_{k}\cdot\bar{\mathbf{v}}d\mathbf{y}$
(85)
where the new variable $s$ is defined as
$s:=\frac{1}{z-1}$
Let $\Delta_{\\#}^{-1}$ be the operator that solves for
$\mathbf{w}(\mathbf{y};\mathbf{f})\in H(Q)$ in the following variational
formulation
$\int_{Q}2\mu_{1}e({\mathbf{w}}):\overline{e(\mathbf{v})}d\mathbf{y}=\int_{Q}\mathbf{f}\cdot\bar{\mathbf{v}}d\mathbf{y}$
(86)
where $\mathbf{f}\in L^{2}(Q)$ is $Q$-periodic. In other words, the solution
${\mathbf{w}}(\mathbf{y})=\Delta_{\\#}^{-1}\mathbf{f}\in H(Q)$ is a weak
solution to the cell problem
$\left\\{\begin{split}-\mu_{1}\Delta{\mathbf{w}}&=\mathbf{f}\quad\text{in
}Q_{1}\cup Q_{2}\\\
\llbracket\mbox{\boldmath$\pi$}\rrbracket\mathbf{n}&=\left(\llbracket\mbox{\boldmath$\pi$}\mathbf{n}\rrbracket\cdot\mathbf{n}\right)\mathbf{n}\text{
on }\Gamma\end{split}\right.$ (87)
In order to obtain the spectral representation, we apply $\Delta_{\\#}^{-1}$
to both sides of (85) and symbolically represent the resulting equation as
${\mathbf{w}}_{1}={-\frac{1}{s}}{\mathbf{w}}_{2}+{\mathbf{w}}_{3}$
Then clearly, we have ${\mathbf{w}}_{1}=\mathbf{u}^{k}$ and
${\mathbf{w}}_{3}=\Delta_{\\#}^{-1}\mathbf{e}_{k}$. Observe that
${\mathbf{w}}_{2}$ solves
$\int_{Q}2\mu_{1}e({\mathbf{w}}_{2}):\overline{e(\mathbf{v})}d\mathbf{y}=\int_{Q}2\mu_{1}\chi_{2}e(\mathbf{u}^{k}):\overline{e(\mathbf{v})}d\mathbf{y}\mbox{
for all }\mathbf{v}\in H(Q)$ (88)
Define the operator $\Gamma_{\chi}$ such that
${\mathbf{w}}_{2}=\Gamma_{\chi}\mathbf{u}^{k}$ and (88) can be expressed as
$(\Gamma_{\chi}\mathbf{u}^{{k}},\mathbf{v})_{Q}=\int_{Q}2\mu_{1}\chi_{2}e(\mathbf{u}^{{k}}):\overline{e(\mathbf{v})}d\mathbf{y}\mbox{
for all }\mathbf{v}\in H(Q).$ (89)
The subscript $\chi$ is used to signify the dependence of $\Gamma_{\chi}$ on
$\chi_{2}$, the characteristic function of $Q_{2}$. Clearly, $\Gamma_{\chi}$
is self-adjoint with respect to the inner product $(\cdot,\cdot)_{Q}$ because
$(\Gamma_{\chi}\mathbf{u},\mathbf{v})_{Q}=\overline{\int_{Q}2\mu_{1}\chi_{2}e(\mathbf{v}):\overline{e(\mathbf{u})}d\mathbf{y}}=\overline{(\Gamma_{\chi}\mathbf{v},\mathbf{u})_{Q}}={(\mathbf{u},\Gamma_{\chi}\mathbf{v})_{Q}.}$
Formally, we have
$\Gamma_{\chi}\mathbf{u}=\Delta_{\\#}^{-1}(\nabla\cdot\chi_{2}e(\mathbf{u}))$.
Now (85) becomes
$\mathbf{u}^{k}=-\frac{1}{s}\Gamma_{\chi}\mathbf{u}^{k}+\Delta_{\\#}^{-1}\mathbf{e}_{k}\Leftrightarrow\left(I+\frac{\Gamma_{\chi}}{s}\right)\mathbf{u}^{k}=\Delta_{\\#}^{-1}\mathbf{e}_{k}$
(90)
###### Proposition 3.2.
The self-adjoint operator $\Gamma_{\chi}$ defined in (89) is positive and
bounded with $\left\lVert\Gamma_{\chi}\right\rVert\leq 1$.
###### Proof.
This follows by choosing $\mathbf{v}=\mathbf{u}$ in (89) and observing that
$0\leq\int_{Q_{2}}2\mu_{1}e(\mathbf{u}):\overline{e(\mathbf{u})}d\mathbf{y}\leq\int_{Q}2\mu_{1}e(\mathbf{u}):\overline{e(\mathbf{u})}d\mathbf{y}=(\mathbf{u},\mathbf{u})_{Q}$.
∎
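A finite-dimensional analogue makes both claims of Proposition 3.2 concrete; the Gram matrices below are stand-ins (assumptions for illustration, not the paper's operator): with a full-domain Gram $G$ playing $(\cdot,\cdot)_{Q}$ and a restricted Gram $G_{2}\preceq G$ playing the energy over $Q_{2}$, the matrix $\Gamma=G^{-1}G_{2}$ is self-adjoint in the $G$-inner product and has spectrum in $[0,1]$:

```python
# Finite-dimensional analogue of Gamma_chi: self-adjointness w.r.t. the
# G-inner product and spectrum in [0, 1].
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))     # 8 "quadrature points", 4 basis functions
G = X.T @ X                         # full-domain Gram (plays (.,.)_Q)
G2 = X[:5].T @ X[:5]                # Gram restricted to a sub-domain (plays Q2)

Gamma = np.linalg.solve(G, G2)
# self-adjointness w.r.t. (u, v)_G = u^T G v  <=>  G @ Gamma is symmetric
assert np.allclose(G @ Gamma, (G @ Gamma).T)

# spectrum in [0, 1]: eigenvalues of G^{-1/2} G2 G^{-1/2}
lam_G, V = np.linalg.eigh(G)
Gmhalf = V @ np.diag(lam_G**-0.5) @ V.T
lam = np.linalg.eigvalsh(Gmhalf @ G2 @ Gmhalf)
assert lam.min() >= -1e-8 and lam.max() <= 1 + 1e-8
print("Gamma is G-self-adjoint with spectrum in [0, 1]")
```

The containment $0\preceq G_{2}\preceq G$ holds here because $G-G_{2}$ is itself a Gram matrix of the remaining rows of $X$.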
###### Theorem 3.1.
For $|s|>1$, the solution $\mathbf{u}^{k}\in H(Q)$ admits a series
representation
$\mathbf{u}^{k}(\mathbf{y};s)=\sum_{m=0}^{\infty}\left(-\frac{1}{s}\right)^{m}\left(\Gamma_{\chi}\right)^{m}\Delta_{\\#}^{-1}\mathbf{e}_{k}$
(91)
and the components of $K$ can be represented by the following IRF
$K_{kl}(s)=s\int_{0}^{1}\int_{Q}\frac{\left(\tilde{M}(d\lambda)\Delta_{\\#}^{-1}\mathbf{e}_{k}\right)_{l}}{s+\lambda}\,d\mathbf{y},\quad
k,l=1,\dots,n,$ (92)
for some projection-valued measure $\tilde{M}(d\lambda)$, and a series
representation
$K_{kl}(s)=\int_{Q}\left(\Delta_{\\#}^{-1}\mathbf{e}_{k}\right)_{l}d\mathbf{y}+\sum_{m=1}^{\infty}\frac{\tilde{\lambda}_{kl}^{m}}{(-s)^{m}}\quad\mbox{with
}\tilde{\lambda}_{kl}^{m}:=\int_{Q}\left(\left(\Gamma_{\chi}\right)^{m}\Delta_{\\#}^{-1}\mathbf{e}_{k}\right)_{l}\,d\mathbf{y}.$
###### Proof.
From (90), since $\Gamma_{\chi}$ is self-adjoint with norm bounded by 1, for
$|s|>1$ the spectral theorem for self-adjoint operators implies the existence
of a projection-valued measure $\tilde{M}$ such that
$\mathbf{u}^{k}(\mathbf{y};s)=\left(I+\frac{\Gamma_{\chi}}{s}\right)^{-1}\Delta_{\\#}^{-1}\mathbf{e}_{k}={s}\int_{0}^{1}\frac{\tilde{M}(d\lambda)\left(\Delta_{\\#}^{-1}\mathbf{e}_{k}\right)}{s+\lambda}$
(93)
Hence the $kl$-th entry of the permeability $K$ has the following IRF
$K_{kl}(s)=\int_{Q}(\mathbf{u}^{k})_{l}d\mathbf{y}=s\int_{0}^{1}\int_{Q}\frac{\left(\tilde{M}(d\lambda)\Delta_{\\#}^{-1}\mathbf{e}_{k}\right)_{l}}{s+\lambda}d\mathbf{y}$
(94)
On the other hand, for $|s|>1$, the geometric expansion of the middle term
near $s=\infty$ in (93) results in the following expression
$K_{kl}(s)=\int_{Q}\left[\sum_{m=0}^{\infty}\left(-\frac{1}{s}\right)^{m}\left(\Gamma_{\chi}\right)^{m}\Delta_{\\#}^{-1}\mathbf{e}_{k}\right]\cdot\mathbf{e}_{l}d\mathbf{y}=\sum_{m=0}^{\infty}\frac{\tilde{\lambda}_{kl}^{m}}{(-s)^{m}}$
(95)
where $\tilde{\lambda}_{kl}^{m}$ is defined as
$\tilde{\lambda}_{kl}^{m}:=\int_{Q}\left(\left(\Gamma_{\chi}\right)^{m}\Delta_{\\#}^{-1}\mathbf{e}_{k}\right)_{l}d\mathbf{y}.$
∎
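The Neumann series (91) can be checked against a direct resolvent solve in a finite-dimensional toy model; the matrix playing $\Gamma_{\chi}$ and the vector playing $\Delta_{\\#}^{-1}\mathbf{e}_{k}$ below are assumptions made only for this sketch:

```python
# Matrix check of the Neumann series: for self-adjoint Gamma with norm < 1
# and |s| > 1, (I + Gamma/s)^{-1} b = sum_m (-1/s)^m Gamma^m b.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
Gamma = M + M.T
Gamma /= np.linalg.norm(Gamma, 2) * 1.01   # enforce spectral norm < 1
b = rng.standard_normal(5)                 # plays Delta_#^{-1} e_k
s = 1.5                                    # any |s| > 1 works

exact = np.linalg.solve(np.eye(5) + Gamma / s, b)
series = sum((-1 / s) ** m * np.linalg.matrix_power(Gamma, m) @ b
             for m in range(60))           # ratio ||Gamma||/|s| < 1, so 60 terms suffice
assert np.allclose(exact, series)
print("Neumann series matches the resolvent")
```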
For the three-dimensional case $n=3$, the expansion (95) can be cast in the
matrix form
$\mbox{\boldmath$K$}(s)=\sum_{m=0}^{\infty}\frac{\tilde{\mbox{\boldmath$\Lambda$}}_{m}}{(-s)^{m}}$
(96)
with the matrix-valued moments defined as
$\tilde{\mbox{\boldmath$\Lambda$}}_{m}:=\begin{pmatrix}\int_{Q}\left(\Gamma_{\chi}\right)^{m}\Delta_{\\#}^{-1}\mathbf{e}_{1}d\mathbf{y}&\int_{Q}\left(\Gamma_{\chi}\right)^{m}\Delta_{\\#}^{-1}\mathbf{e}_{2}d\mathbf{y}&\int_{Q}\left(\Gamma_{\chi}\right)^{m}\Delta_{\\#}^{-1}\mathbf{e}_{3}d\mathbf{y}\end{pmatrix}$
(97)
### 3.2 Relationships between the two representations and characterization of
the microstructural information on permeability
The calculations in the previous section reveal that the variable
$s:=\frac{1}{z-1}$ is the natural one to use. Because of this, we will
consider $K$ as a function of $s$. Note that $s$ maps $(-\infty,0]$ on the
$z$-plane to $[-1,0)$ on the $s$-plane. The following properties of
$\mbox{\boldmath$K$}(s)$ can be easily deduced from the results in Proposition
3.1.
1. 1.
$\mbox{\boldmath$K$}(s)$ is holomorphic in
$\mathbb{C}\setminus{[-\frac{2E_{2}^{2}}{1+2E_{2}^{2}},-\frac{1}{1+2E_{1}^{2}}]}.$
2. 2.
$\frac{\mbox{\boldmath$K$}(s)-(\mbox{\boldmath$K$}(s))^{*}}{s-\overline{s}}\geq
0$ for all $Im{(s)}\neq 0$
3. 3.
$\mbox{\boldmath$K$}(s)\geq 0\mbox{ for }\mathbb{R}\ni s>0$ because $s>0$ iff
$\mathbb{R}\ni z>1$.
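The change of variable $s=\frac{1}{z-1}$ can be sanity-checked numerically (an illustration only; the sample points below are arbitrary):

```python
# Check the mapping properties used above: z in (-inf, 0] lands in [-1, 0),
# the endpoint z = 0 maps to s = -1, and s > 0 exactly when z > 1.
import numpy as np

z_cut = -np.geomspace(1e-6, 1e6, 50)        # samples of the negative real axis
z_cut = np.append(z_cut, 0.0)               # include the endpoint z = 0
s = 1.0 / (z_cut - 1.0)
assert np.all(s >= -1.0) and np.all(s < 0.0)
assert np.isclose(1.0 / (0.0 - 1.0), -1.0)  # z = 0 maps to s = -1

for z in [1.5, 3.0, 100.0]:
    assert 1.0 / (z - 1.0) > 0              # s > 0 iff z > 1
print("s = 1/(z-1) mapping verified")
```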
Then by the representation theorem in [20, Theorem 3.1], there exists a
monotonically increasing matrix-valued function $\boldsymbol{\sigma}(t)$,
matrices $\boldsymbol{A}\geq 0$ and $\boldsymbol{C}\geq 0$ such that
$\int_{+0}^{\infty}\frac{d\boldsymbol{\sigma}}{1+t}<\infty$,
$\boldsymbol{A}+\boldsymbol{C}+\int_{+0}^{\infty}\frac{d\boldsymbol{\sigma}}{1+t}>0$
and
$\mbox{\boldmath$K$}(s)=\boldsymbol{A}+{\boldsymbol{C}}{s}+\int_{+0}^{\infty}\frac{s}{s+t}d\boldsymbol{\sigma}(t),$
(98)
As $s\rightarrow\infty$, $z\rightarrow 1$ and hence
$\mbox{\boldmath$K$}\rightarrow\mbox{\boldmath$K$}(z=1)$. Therefore, we must
have $\boldsymbol{C}=\boldsymbol{0}$. Moreover,
$\boldsymbol{A}=\mbox{\boldmath$K$}(s=0)=\mbox{\boldmath$K$}^{(D)}$. Also,
since $\mbox{\boldmath$K$}(s)$ is holomorphic in
$\mathbb{C}\setminus{[-\frac{2E_{2}^{2}}{1+2E_{2}^{2}},-\frac{1}{1+2E_{1}^{2}}]}$,
we have
$\mbox{\boldmath$K$}(s)=\boldsymbol{K}^{(D)}+\int_{\frac{1}{1+2E_{1}^{2}}}^{\frac{2E_{2}^{2}}{1+2E_{2}^{2}}}\frac{s}{s+t}d\boldsymbol{\sigma}(t),$
(99)
which is valid for all
$s\in\mathbb{C}\setminus{[-\frac{2E_{2}^{2}}{1+2E_{2}^{2}},-\frac{1}{1+2E_{1}^{2}}]}$;
note that this excluded interval is contained in $(-1,0)$.
To compare with (96), which is valid only for $|s|>1$, we expand (99) near
$s=\infty$ to obtain the following series expansion
$\mbox{\boldmath$K$}(s)=\mbox{\boldmath$K$}^{(D)}+\sum_{m=0}^{\infty}(-1)^{m}\left(\frac{1}{s}\right)^{m}\boldsymbol{\mu}^{\sigma}_{m}$
(100)
where $\boldsymbol{\mu}^{\sigma}_{m}$ is the $m$-th moment of the measure
$d\boldsymbol{\sigma}$. Equating the coefficients term by term with (96) leads
to the following relation between $\boldsymbol{\mu}^{\sigma}_{m}$ and the
’geometrical information’ coefficients in (97)
$\displaystyle\mbox{\boldmath$K$}^{(D)}+\boldsymbol{\mu}^{\sigma}_{0}=\tilde{\mbox{\boldmath$\Lambda$}}_{0}={\mbox{\boldmath$K$}(s=\infty)},\mbox{
i.e. }$
$\displaystyle\boldsymbol{\mu}^{\sigma}_{0}=\mbox{\boldmath$K$}(z=1)-\mbox{\boldmath$K$}^{(D)}$
(102)
$\displaystyle\boldsymbol{\mu}^{\sigma}_{m}=\tilde{\mbox{\boldmath$\Lambda$}}_{m},\,m\geq
1$
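These matching relations can be verified with a discrete toy measure (the support points, weights, and the scalar playing $\mbox{\boldmath$K$}^{(D)}$ below are assumptions for illustration): expanding $\frac{s}{s+t}$ in powers of $\frac{1}{s}$ reproduces the moments of $d\boldsymbol{\sigma}$ term by term, and the constant term is $\boldsymbol{\mu}^{\sigma}_{0}$, consistent with (102):

```python
# Scalar discrete-measure check: K(s) = KD + sum_j w_j s/(s + t_j) agrees with
# the moment expansion KD + sum_m (-1)^m mu_m / s^m, and K(inf) = KD + mu_0.
import numpy as np

KD = 0.15                               # plays K^(D) (assumption)
t = np.array([0.3, 0.45, 0.6])          # support of d(sigma) (assumption)
w = np.array([0.02, 0.05, 0.01])        # weights of d(sigma) (assumption)
mu = lambda m: np.sum(w * t**m)         # m-th moment of the measure

def K(s):
    return KD + np.sum(w * s / (s + t))

s = 4.0                                 # |s| > max(t), so the expansion converges
series = KD + sum((-1) ** m * mu(m) / s**m for m in range(40))
assert np.isclose(K(s), series)
assert np.isclose(K(1e10), KD + mu(0), atol=1e-8)   # K(s=inf) = K^(D) + mu_0
print("moment expansion consistent with (102)")
```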
Recall that $K$ can be regarded as a function of $s$ as well as a function of
$z$, $s:=\frac{1}{z-1}$. In particular, the first moment
$\boldsymbol{\mu}^{\sigma}_{1}$ can be calculated explicitly as follows
$\displaystyle\tilde{\lambda}_{kl}^{1}=(\Gamma_{\chi}\mathbf{u}^{k}(\mathbf{y};1),\mathbf{u}^{l}(\mathbf{y};1))_{Q}=2\mu_{1}\int_{Q}\chi_{2}e(\mathbf{u}^{k}(\mathbf{y};1)):\overline{e(\mathbf{u}^{l}(\mathbf{y};1))}\,d\mathbf{y}$
(103)
## 4 Numerical verification
The computational domain, with $Q=(0,1)^{2}$, $Q_{2}=[1/4,3/4]^{2}$ and
$Q_{1}=Q\setminus Q_{2}$, is illustrated in Figure 2; it is used in the first
two numerical examples, (104) and (105).
Figure 2: Computational domain $Q$ with inner region $Q_{2}$, outer region
$Q_{1}$, interface $\tilde{\Gamma}$, and forcing direction ${\bf{e}}^{(1)}$.
We consider three cases: (1) $Q_{2}$ is a solid obstacle, (2) $Q_{2}$ is a
bubble, and (3) $Q_{2}$ is another fluid.
For case (1), we find $({\bf{u}}_{1},p_{1})\in{\bf{V}}_{1}\times P_{1}$, such
that
$\displaystyle\left\\{\begin{aligned}
(e({\bf{u}}_{1}),e({\bf{v}}))-(p_{1},\text{div}{\bf{v}})&=({\bf{e}}_{1},{\bf{v}})\quad\forall{\bf{v}}\in{\bf{V}}_{1},\\\
(q,\text{div}{\bf{u}}_{1})&=0\qquad\forall q\in P_{1},\end{aligned}\right.$
(104)
where
$\displaystyle{\bf{V}}_{1}$ $\displaystyle=\\{{\bf{v}}\in H^{1}(Q_{1})^{2}\
\mid{\bf{v}}|_{\partial Q_{2}}={\bf{0}},\text{\ ${\bf{v}}$ is
}Q\text{-periodic}\\},$ $\displaystyle P_{1}$ $\displaystyle=\\{q\in
L^{2}_{0}(Q_{1})\ \mid q=\text{div}{\bf{v}}\text{ \ for some
${\bf{v}}\in{\bf{V}}_{1}$ }\\}.$
For case (2), we find $({\bf{u}}_{2},p_{2})\in{\bf{V}}_{2}\times P_{2}$, such
that
$\displaystyle\left\\{\begin{aligned}
(e({\bf{u}}_{2}),e({\bf{v}}))-(p_{2},\text{div}{\bf{v}})&=({\bf{e}}_{1},{\bf{v}})\quad\forall{\bf{v}}\in{\bf{V}}_{2},\\\
(q,\text{div}{\bf{u}}_{2})&=0\qquad\forall q\in P_{2},\end{aligned}\right.$
(105)
where
$\displaystyle{\bf{V}}_{2}$ $\displaystyle=\\{{\bf{v}}\in H^{1}(Q_{1})^{2}\
\mid{\bf{v}}\cdot{\bf{n}}|_{\partial Q_{2}}=0,\text{\ ${\bf{v}}$ is
}Q-\text{periodic}\\},$ $\displaystyle P_{2}$ $\displaystyle=\\{q\in
L^{2}_{0}(Q_{1})\ \mid q=\text{div}{\bf{v}}\text{ \ for some
${\bf{v}}\in{\bf{V}}_{2}$ }\\}.$
For case (3), we set $\mu_{1}=1$ and $\mu_{2}=\mu$. We find
$({\bf{u}}_{3},p_{3})\in{\bf{V}}_{3}\times P_{3}$, such that
$\displaystyle\left\\{\begin{aligned} (\mu
e({\bf{u}}_{3}),e({\bf{v}}))-(p_{3},\text{div}{\bf{v}})&=({\bf{e}}_{1},{\bf{v}})\quad\forall{\bf{v}}\in{\bf{V}}_{3},\\\
(q,\text{div}{\bf{u}}_{3})&=0\qquad\forall q\in P_{3},\end{aligned}\right.$
(106)
where
$\displaystyle{\bf{V}}_{3}$ $\displaystyle=\\{{\bf{v}}\in H^{1}(Q)^{2}\
\mid{\bf{v}}\cdot{\bf{n}}|_{\partial Q_{2}}=0,\text{\ ${\bf{v}}$ is
}Q-\text{periodic}\\},$ $\displaystyle P_{3}$ $\displaystyle=\\{q\in
L^{2}_{0}(Q)\ \mid q=\text{div}{\bf{v}}\text{ \ for some
${\bf{v}}\in{\bf{V}}_{3}$ }\\}.$
The computation is done on square grids. The first level grid consists of 12
squares, for the first two cases. Each square is subdivided into 4 sub-squares
to get the next level grid, $\mathcal{T}_{h}=\\{T\\}$. We use the
$Q_{5,4}^{1,0}\times Q_{4,5}^{0,1}$ velocity finite element space with the
$Q_{4,4}^{0,0}$ pressure finite element space. Here $Q_{5,4}^{1,0}$ means the
space of polynomials of degree at most 5 in $y_{1}$ and of degree at most $4$
in $y_{2}$ which is $C^{1}$ in $y_{1}$-direction and $C^{0}$ in
$y_{2}$-direction. That is,
$\displaystyle Q_{5,4}^{1,0}$
$\displaystyle=\Big{\\{}u_{1}|_{T}=\sum_{i=0}^{5}\sum_{j=0}^{4}c_{ij}y_{1}^{i}y_{2}^{j}\
\Big{|}\ u_{1}\hbox{ and }\partial_{y_{1}}u_{1}\in C^{0}(Q_{1}),\hbox{and
}Q\hbox{-periodic}\Big{\\}},$ $\displaystyle Q_{4,4}^{0,0}$
$\displaystyle=\Big{\\{}p|_{T}=\sum_{i=0}^{4}\sum_{j=0}^{4}c_{ij}y_{1}^{i}y_{2}^{j}\
\Big{|}\ p\in C^{0}_{0}(Q),\hbox{and }Q\hbox{-periodic}\Big{\\}}.$
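As a numerical aside (not part of the computation in the paper), the bidegree bookkeeping behind these tensor-product spaces can be checked directly: differentiating a $Q_{5,4}$ coefficient array in $y_{1}$, or a $Q_{4,5}$ array in $y_{2}$, lands in bidegree $(4,4)$. The sketch below uses random coefficient arrays and ignores the continuity and periodicity constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coefficient arrays c[i, j] for u = sum_{i,j} c[i,j] y1^i y2^j.
c1 = rng.standard_normal((6, 5))  # u_1: degree <= 5 in y1, <= 4 in y2  (Q_{5,4})
c2 = rng.standard_normal((5, 6))  # u_2: degree <= 4 in y1, <= 5 in y2  (Q_{4,5})

def d_dy1(c):
    # d/dy1: the coefficient of y1^i picks up (i+1) * c[i+1, :].
    i = np.arange(1, c.shape[0])
    return c[1:, :] * i[:, None]

def d_dy2(c):
    j = np.arange(1, c.shape[1])
    return c[:, 1:] * j[None, :]

div = d_dy1(c1) + d_dy2(c2)   # both terms have bidegree (4, 4)
print(div.shape)              # (5, 5): the divergence lies in Q_{4,4}
```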
We note that $\text{div}(Q_{5,4}^{1,0}\times Q_{4,5}^{0,1})=Q_{4,4}^{0,0}$.
Therefore, the finite element velocity is also pointwise divergence-free. We
plot the velocity field of these two problems in Figure 3. We can see the
magnitude of the latter is much bigger, as the resistance from a slippery
bubble is much less.
Figure 3: The velocity field ${\bf{u}}_{1}$ for a solid obstacle $Q_{2}$
(104), and ${\bf{u}}_{2}$ for a slippery bubble $Q_{2}$ (105).
In Figures 4 and 5, we plot the two velocity fields of two-fluid flow (106)
for two viscosity coefficients $\mu_{2}$. When $\mu_{2}$ is large, the sticky
inner fluid flows less and drags the outer fluid near the interface. When
$\mu_{2}$ approaches infinity, the inner fluid stops and imposes a zero
Dirichlet boundary condition on the tangential velocity of the outer fluid at
the inner boundary $\tilde{\Gamma}=\partial Q_{2}$. The model of a solid
obstacle (104) is a limiting case of the two-fluid model (106) as
$\mu_{2}\to\infty$; compare the left chart of Figure 3 with the left chart of
Figure 4.
Figure 4: The velocity field ${\bf{u}}_{3}$ for two-fluid flow (106) with
$\mu_{2}=10^{2}$ on $Q$ (left), on $Q_{2}$ (right, scaled by 200).
When $\mu_{2}$ approaches zero, the inner fluid flows freely and produces
little drag on the outer fluid. In theory, the force inside the fluid in
$Q_{2}$ may even push the outer fluid somewhat, but due to the zero outflow
boundary condition on the velocity at $\partial Q_{2}$, such a force is
balanced between the left and right portions of each edge of $\partial Q_{2}$.
This is equivalent to a zero tangential-stress boundary condition on the outer
flow. That is, the model of a slippery bubble (105) is a limiting case of the
two-fluid model (106) as $\mu_{2}\to 0$; compare the right chart of Figure 3
with the left chart of Figure 5.
Figure 5: The velocity field ${\bf{u}}_{3}$ for two-fluid flow (106) with
$\mu_{2}=10^{-2}$ on $Q$ (left), on $Q_{2}$ (right, scaled by 2).
The homogenized permeability tensor
$\mbox{\boldmath$K$}=\begin{pmatrix}k_{11}&k_{12}\\\
k_{21}&k_{22}\end{pmatrix}$ is computed by
$\displaystyle k_{11}$ $\displaystyle=\frac{1}{|Q|}\int_{Q\setminus
Q_{2}}{\bf{u}}_{1}\cdot{\bf{e}}_{1}d{\bf{y}}\quad\text{ for }\eqref{rough},$
(107) $\displaystyle k_{11}$ $\displaystyle=\frac{1}{|Q|}\int_{Q\setminus
Q_{2}}{\bf{u}}_{2}\cdot{\bf{e}}_{1}d{\bf{y}}\quad\text{ for
}\eqref{slippery},$ (108) $\displaystyle k_{11}$
$\displaystyle=\frac{1}{|Q|}\int_{Q}{\bf{u}}_{3}\cdot{\bf{e}}_{1}d{\bf{y}}\quad\text{
for }\eqref{porous}.$ (109)
Due to the symmetry, in all our examples we have $k_{11}=k_{22}$ and
$k_{12}=k_{21}=0.$
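The quadrature behind (107)-(109) is a plain cell average. The following midpoint-rule sketch illustrates it with a synthetic velocity field standing in for the cell-problem solution, and with the obstacle $Q_{2}$ assumed (for illustration only) to be a centred disk; it is not the paper's computation.

```python
import numpy as np

# Midpoint-rule quadrature for k11 = (1/|Q|) * ∫_{Q\Q2} u · e1 dy on the unit
# cell Q = [0,1]^2 with |Q| = 1. The velocity below is a synthetic placeholder;
# in the paper u_1 solves the cell problem (104).
n = 400
h = 1.0 / n
y1, y2 = np.meshgrid((np.arange(n) + 0.5) * h, (np.arange(n) + 0.5) * h,
                     indexing="ij")

in_Q2 = (y1 - 0.5) ** 2 + (y2 - 0.5) ** 2 < 0.25 ** 2   # assumed disk obstacle
u1 = np.sin(np.pi * y2) ** 2 * (~in_Q2)                 # synthetic (u · e1)

k11 = np.sum(u1) * h * h   # midpoint rule: sum of cell values times cell area
print(k11)
```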
Table 1: Computed permeability $k_{11}$ by (107)-(109). level | (104) | (106), $\mu_{2}=10^{4}$ | (106), $\mu_{2}=1$ | (106), $\mu_{2}=10^{-4}$ | (105)
---|---|---|---|---|---
1 | 0.0105 | 0.0105 | 0.0122 | 0.0140 | 0.0140
2 | 0.0119 | 0.0119 | 0.0144 | 0.0181 | 0.0181
3 | 0.0125 | 0.0125 | 0.0154 | 0.0209 | 0.0209
4 | 0.0128 | 0.0128 | 0.0159 | 0.0228 | 0.0228
5 | 0.0129 | 0.0129 | 0.0161 | 0.0240 | 0.0240
To verify the convergence results stated in (62) and (82), we solve the two-
fluid problem (106) with $\mu_{1}=1$ and $\mu_{2}=10^{-4},\,1,\,10^{4}$. As
Table 1 shows, this model lies between the two limiting models (104) and (105).
To see how viscosity $\mu_{2}$ influences the flow, we plot
$({\bf{u}}_{3})_{1}$ in Figure 6 for two different $\mu_{2}$ with $\mu_{1}=1$.
Figure 6: The first component of velocity ${\bf{u}}_{3}$, from (106), for
$\mu_{2}=10^{2}$ and $\mu_{2}=10^{-2}$.
Though the magnitude of $({\bf{u}}_{3})_{1}$ is much larger than that of
$({\bf{u}}_{3})_{2}$, their corresponding stresses are about the same size. In
Figure 7, we plot them for comparison. We plot the stress intensity
$|e(({\bf{u}}_{3})_{1})|$ in Figure 8.
Figure 7: The stress $\nabla(u_{3})_{1}$, and $\nabla(u_{3})_{2}$ for (106)
with $\mu_{2}=10^{2}$. Figure 8: The stress intensity
$|e(({\bf{u}}_{3})_{1})|$ in (106) with $\mu_{2}=10^{2}$, $\mu_{2}=1$,
$\mu_{2}=10^{-2}$.
Finally we compute the energy of the two-fluid flow,
$\displaystyle E(Q_{2})$
$\displaystyle=\int_{Q_{2}}\mu({\bf{y}})e({\bf{u}}_{3}):e({\bf{u}}_{3})d{\bf{y}},$
(110) $\displaystyle E(Q)$
$\displaystyle=\int_{Q}\mu({\bf{y}})e({\bf{u}}_{3}):e({\bf{u}}_{3})d{\bf{y}}.$
(111)
The homogenized permeability can also be computed by the energy,
$\displaystyle k_{ij}$
$\displaystyle=\frac{1}{|Q|}\int_{Q}\mu({\bf{y}})e({\bf{u}}_{3}):e({\bf{u}}_{3})d{\bf{y}}.$
(112)
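The energy integrals (110)-(112) can likewise be approximated on a grid once the symmetric gradient $e({\bf{u}})$ is formed. The sketch below uses centred finite differences, a synthetic divergence-free velocity field, and a piecewise viscosity with an assumed disk-shaped $Q_{2}$; in the paper ${\bf{u}}_{3}$ solves (106).

```python
import numpy as np

# Finite-difference sketch of E(Q) = ∫_Q μ(y) e(u):e(u) dy on Q = [0,1]^2,
# where e(u) is the symmetric gradient and e:e = e11^2 + 2 e12^2 + e22^2.
n = 200
h = 1.0 / n
y1, y2 = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing="ij")

u = np.sin(np.pi * y1) * np.cos(np.pi * y2)    # synthetic u_1
v = -np.cos(np.pi * y1) * np.sin(np.pi * y2)   # synthetic u_2 (div u = 0)

du1 = np.gradient(u, h, axis=0)   # ∂u/∂y1
du2 = np.gradient(u, h, axis=1)   # ∂u/∂y2
dv1 = np.gradient(v, h, axis=0)
dv2 = np.gradient(v, h, axis=1)

e11, e22 = du1, dv2
e12 = 0.5 * (du2 + dv1)

# Piecewise viscosity: μ_2 = 100 inside an assumed disk Q2, μ_1 = 1 outside.
mu = np.where((y1 - 0.5) ** 2 + (y2 - 0.5) ** 2 < 0.25 ** 2, 100.0, 1.0)

E = np.sum(mu * (e11 ** 2 + 2 * e12 ** 2 + e22 ** 2)) * h * h
print(E)
```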
In Table 2, we demonstrate the equivalence of these two definitions for
$k_{11}$.
Table 2: Computed permeability $k_{11}$ both ways and energy. level | $k_{11}$ (109) | $k_{11}$ (112) | $E(Q_{2})$ (110) | $E(Q_{2})/E(Q)$ (111)
---|---|---|---|---
| For ${\bf{u}}_{3}$ in (106) with $\mu_{2}=10^{2}$
1 | 0.107E-01 | 0.107E-01 | 0.952E-04 | 0.888E-02
2 | 0.121E-01 | 0.121E-01 | 0.811E-04 | 0.670E-02
3 | 0.127E-01 | 0.126E-01 | 0.676E-04 | 0.534E-02
4 | 0.129E-01 | 0.129E-01 | 0.604E-04 | 0.468E-02
5 | 0.130E-01 | 0.130E-01 | 0.570E-04 | 0.438E-02
| For ${\bf{u}}_{3}$ in (106) with $\mu_{2}=1$
1 | 0.122E-01 | 0.122E-01 | 0.752E-03 | 0.615E-01
2 | 0.144E-01 | 0.144E-01 | 0.135E-02 | 0.936E-01
3 | 0.154E-01 | 0.154E-01 | 0.175E-02 | 0.113E+00
4 | 0.159E-01 | 0.159E-01 | 0.200E-02 | 0.125E+00
5 | 0.162E-01 | 0.162E-01 | 0.215E-02 | 0.132E+00
| For ${\bf{u}}_{3}$ in (106) with $\mu_{2}=10^{-4}$
1 | 0.140E-01 | 0.140E-01 | 0.432E-06 | 0.308E-04
2 | 0.181E-01 | 0.181E-01 | 0.109E-05 | 0.602E-04
3 | 0.209E-01 | 0.209E-01 | 0.183E-05 | 0.875E-04
4 | 0.228E-01 | 0.228E-01 | 0.254E-05 | 0.111E-03
5 | 0.240E-01 | 0.240E-01 | 0.316E-05 | 0.131E-03
## 5 Conclusion and future work
In this paper, we show that the permeability of a porous material [40] and
that of a bubbly fluid [29] are limiting cases of the complexified version of
the two-fluid models posed in [29]. We assume the viscosity of the inclusion
fluid is $z\mu_{1}$ and the viscosity of the hosting fluid is $\mu_{1}$,
$z\in\mathbb{C}$. The proof is carried out by constructing solutions for
large $|z|$ and small $|z|$ via an iteration process similar to the one used
in [16, 21] and by analytic continuation. Moreover, we also show that for a fixed
microstructure, the permeabilities of these three cases share the same
integral representation formula (IRF) (99) with different values of $s$, as
long as the ‘contrast parameter’ $s:=\frac{1}{z-1}$ is not in the interval
$[-\frac{2E_{2}^{2}}{1+2E_{2}^{2}},-\frac{1}{1+2E_{1}^{2}}]$, where the
constants $E_{1}$ and $E_{2}$ are the extension constants that depend on the
geometry of $Q_{1}$, $Q_{2}$ and $Q$. For the mixture with bubbles, $s=-1$ and
thus
$K^{(B)}=\mbox{\boldmath$K$}^{(D)}+\int_{\frac{1}{1+2E_{1}^{2}}}^{\frac{2E_{2}^{2}}{1+2E_{2}^{2}}}\frac{1}{1-t}d\boldsymbol{\sigma}(t)$
(113)
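Formula (113) is a plain Stieltjes integral, so it is cheap to evaluate once the measure is known. The toy sketch below replaces $d\boldsymbol{\sigma}$ by a few Dirac masses; the extension constants, the masses $m_{k}$, their locations $t_{k}$, and the value of $\mbox{\boldmath$K$}^{(D)}$ are all synthetic placeholders, since in the paper $d\boldsymbol{\sigma}$ is determined by the projection measure of $\Gamma_{\chi}$ via its moments (102).

```python
import numpy as np

# Toy evaluation of (113) with dσ replaced by Dirac masses m_k δ(t - t_k):
#     K^(B) = K^(D) + Σ_k m_k / (1 - t_k),
# with t_k inside the integration interval determined by E_1^2 and E_2^2.
E1_sq, E2_sq = 1.0, 1.0                       # assumed extension constants
lo = 1.0 / (1.0 + 2.0 * E1_sq)                # lower integration limit
hi = 2.0 * E2_sq / (1.0 + 2.0 * E2_sq)        # upper integration limit

t = np.linspace(lo + 0.05, hi - 0.05, 5)      # synthetic mass locations
m = [0.002 * np.eye(2) for _ in t]            # synthetic PSD matrix weights

K_D = np.diag([0.0129, 0.0129])               # placeholder Darcy permeability
K_B = K_D + sum(mk / (1.0 - tk) for mk, tk in zip(m, t))
print(K_B)
```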
Also, we note that the matrix-valued measure in (92) has a Dirac measure
sitting at $\lambda=0$ with strength equal to $\mbox{\boldmath$K$}^{(D)}$. The
permeability $\mbox{\boldmath$K$}^{(D)}$ is related to the measure in the
sense that the zero-th moment of the measure is equal to
$\mbox{\boldmath$K$}(z=1)-\mbox{\boldmath$K$}^{(D)}$.
Clearly, the positive matrix-valued measure $d\boldsymbol{\sigma}$ is
independent of $s$ and it characterizes how the geometry influences the
permeability. We have shown that this measure is related to the projection
measure of the self-adjoint operator $\Gamma_{\chi}$ and its moments can be
computed by equation (102).
Because the IRF is valid for most values of $s$ in the complex plane, it will
be useful in the study of two-fluid mixtures with complex viscosities, such as
dehomogenization for these fluids. Also, the integration limits in the IRF
should imply bounds on the permeability tensors. We will explore the results
of this paper in these directions in the future.
Acknowledgement The work of CB and MYO was partially sponsored by the US
National Science Foundation via grants NSF-DMS-1413039 and NSF-DMS-1821857.
## References
* [1] Allaire, G. Homogenization of the Stokes flow in a connected porous medium. Asymptotic Analysis 2, 3 (1989), 203–222.
* [2] Allaire, G. Continuity of the Darcy's law in the low-volume fraction limit. Annali della Scuola Normale Superiore di Pisa-Classe di Scienze 18, 4 (1991), 475–499.
* [3] Allaire, G. Homogenization and two-scale convergence. SIAM Journal on Mathematical Analysis 23, 6 (1992), 1482–1518.
* [4] Allaire, G. Homogenization in porous media. CEA-EDF-INRIA school on homogenization, 2010.
* [5] Auriault, J., Borne, L., and Chambon, R. Dynamics of porous saturated media, checking of the generalized law of Darcy. The Journal of the Acoustical Society of America 77 (1985), 1641\.
* [6] Avellaneda, M., and Majda, A. J. Stieltjes integral representation and effective diffusivity bounds for turbulent transport. Physical review letters 62, 7 (1989), 753.
* [7] Avellaneda, M., and Majda, A. J. An integral representation and bounds on the effective diffusivity in passive advection by laminar and turbulent flows. Communications in Mathematical Physics 138, 2 (1991), 339–391.
* [8] Avellaneda, M., and Torquato, S. Rigorous link between fluid permeability, electrical conductivity, and relaxation times for transport in porous media. Physics of Fluids A: Fluid Dynamics 3, 11 (1991), 2529–2540.
* [9] Beliaev, A. Y., and Kozlov, S. Darcy equation for random porous media. Communications on pure and applied mathematics 49, 1 (1996), 1–34.
* [10] Bensoussan, A., Lions, J.-L., and Papanicolaou, G. Asymptotic analysis for periodic structures, vol. 374. American Mathematical Soc., 2011.
* [11] Bergman, D. J. The dielectric constant of a composite material—a problem in classical physics. Physics Reports 43, 9 (1978), 377–407.
* [12] Biot, M. A. Mechanics of deformation and acoustic propagation in porous media. Journal of applied physics 33, 4 (1962), 1482–1498.
* [13] Brenner, S., and Scott, R. The mathematical theory of finite element methods, vol. 15. Springer Science & Business Media, 2007.
* [14] Brinkman, H. A calculation of the viscous force exerted by a flowing fluid on a dense swarm of particles. Flow, Turbulence and Combustion 1, 1 (1949), 27.
* [15] Bruno, O. P. The effective conductivity of strongly heterogeneous composites. Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences 433, 1888 (1991), 353–381.
* [16] Bruno, O. P., and Leo, P. H. On the stiffness of materials containing a disordered array of microscopic holes or hard inclusions. Archive for rational mechanics and analysis 121, 4 (1993), 303–338.
* [17] Cioranescu, D., Donato, P., and Ene, H. I. Homogenization of the Stokes problem with non-homogeneous slip boundary conditions. Mathematical Methods in the Applied Sciences 19, 11 (1996), 857–881.
* [18] Conca, C. On the application of the homogenization theory to a class of problems arising in fluid mechanics. J. Math. Pures Appl 64, 1 (1985), 31–75.
* [19] Darcy, H. P. G. Les Fontaines publiques de la ville de Dijon. Exposition et application des principes à suivre et des formules à employer dans les questions de distribution d’eau, etc. V. Dalamont, 1856.
* [20] Dyukarev, Y., and Katsnelson, V. Multiplicative and additive classes of Stieltjes analytic matrix valued functions, and interpolation problems associated with them. American Mathematical Society Translations 131 (1986), 55–70.
* [21] Golden, K., and Papanicolaou, G. Bounds for effective parameters of heterogeneous media by analytic continuation. Communications in Mathematical Physics 90, 4 (1983), 473–491.
* [22] Johnson, D. L., Koplik, J., and Dashen, R. Theory of dynamic permeability and tortuosity in fluid-saturated porous media. Journal of fluid mechanics 176 (1987), 379–402.
* [23] Kantor, Y., and Bergman, D. J. Elastostatic resonances—a new approach to the calculation of the effective elastic constants of composites. Journal of the Mechanics and Physics of Solids 30, 5 (1982), 355–376.
* [24] Kato, T., Mitrea, M., Ponce, G., and Taylor, M. Extension and representation of divergence-free vector fields on bounded domains. Mathematical Research Letters 7, 5 (2000), 643–650.
* [25] Keller, J. B. Darcy’s law for flow in porous media and the two-space method. Tech. rep., STANFORD UNIV CA, 1980.
* [26] Lévy, T. Fluid flow through an array of fixed particles. International Journal of Engineering Science 21, 1 (1983), 11–23.
* [27] Limaye, B. V. Banach space-valued analytic functions. In Spectral perturbation and approximation with numerical experiements (Canberra AUS, 1987), Centre for Mathematical Analysis, The Australian National University, pp. 44–60.
* [28] Lions, J.-L. Some methods in the mathematical analysis of systems and their control. Science Press, Beijing, 1981.
* [29] Lipton, R., and Avellaneda, M. Darcy’s law for slow viscous flow past a stationary array of bubbles. Proceedings of the Royal Society of Edinburgh Section A: Mathematics 114, 1–2 (1990), 71–79.
* [30] Lundgren, T. S. Slow flow through stationary random beds and suspensions of spheres. Journal of Fluid Mechanics 51, 2 (1972), 273–299.
* [31] Milton, G. W. The Theory of Composites. Cambridge University Press, Cambridge, UK, 2002.
* [32] Neuman, S. P. Theoretical derivation of Darcy's law. Acta Mechanica 25, 3-4 (1977), 153–170.
* [33] Nguetseng, G. A general convergence result for a functional related to the theory of homogenization. SIAM Journal on Mathematical Analysis 20, 3 (1989), 608–623.
* [34] Oleinik, O. A., Shamaev, A. S., and Yosifian, G. A. Mathematical Problems in Elasticity and Homogenization, 1st ed., vol. 26 of Studies in Mathematics and its Applications. North-Holland, 1992.
* [35] Ou, M.-J. Y. On reconstruction of dynamic permeability and tortuosity from data at distinct frequencies. Inverse Problems 30, 9 (2014), 095002.
* [36] Poreh, M., and Elata, C. An Analytical Derivation of Darcy Law. Publication (Tekhniyon - Makhon tekhnologi le-Yisra’el. ha-Fakultah le-handasah ezrahit). Technion-I.I.T., Faculty of Civil Engineering, 1965.
* [37] Saffman, P. G. On the boundary condition at the surface of a porous medium. Studies in applied mathematics 50, 2 (1971), 93–101.
* [38] Sanchez-Palencia, E. Non-homogeneous media and vibration theory. Lecture notes in physics. Springer, 1980.
* [39] Tam, C. K. The drag on a cloud of spherical particles in low reynolds number flow. Journal of Fluid Mechanics 38, 3 (1969), 537–546.
* [40] Tartar, L. Incompressible fluid flow in a porous medium-convergence of the homogenization process. Appendix of Non-homogeneous media and vibration theory (1980).
* [41] Tice, I. From stokes flow to darcy’s law. CNA Working Group on Homogenization, 2014.
# On solving classes of positive-definite quantum linear systems with
quadratically improved runtime in the condition number
Davide Orsucci Institut für Kommunikation und Navigation, Deutsches Zentrum
für Luft- und Raumfahrt (DLR), Münchener Str. 20, 82234 Weßling, Germany
Vedran Dunjko Leiden University, Niels Bohrweg 1, 2333 CA Leiden, The
Netherlands
(November 1, 2021)
###### Abstract
Quantum algorithms for solving the Quantum Linear System (QLS) problem are
among the most investigated quantum algorithms of recent times, with potential
applications including the solution of computationally intractable
differential equations and speed-ups in machine learning. A fundamental
parameter governing the efficiency of QLS solvers is $\kappa$, the condition
number of the coefficient matrix $A$, as it has been known since the inception
of the QLS problem that for worst-case instances the runtime scales at least
linearly in $\kappa$ [1]. However, for the case of positive-definite matrices
classical algorithms can solve linear systems with a runtime scaling as
$\sqrt{\kappa}$, a quadratic improvement compared to the indefinite case.
It is then natural to ask whether QLS solvers may hold an analogous
improvement. In this work we answer the question in the negative, showing that
solving a QLS entails a runtime linear in $\kappa$ also when $A$ is positive
definite. We then identify broad classes of positive-definite QLS where this
lower bound can be circumvented and present two new quantum algorithms
featuring a quadratic speed-up in $\kappa$: the first is based on efficiently
implementing a matrix-block-encoding of $A^{-1}$, the second constructs a
decomposition of the form $A=LL^{\dagger}$ to precondition the system. These
methods are widely applicable and both allow one to efficiently solve
BQP-complete problems.
## 1 Introduction
Quantum computation is described using the formalism of linear algebra,
suggesting that quantum methods may be intrinsically well-suited to perform
linear algebraic tasks. Algorithms solving linear systems of equations, in
particular, are a cornerstone of linear algebra [2, Chapter 2], having
many direct applications and playing a pivotal role in several computational
methods [3]. In the seminal work of Harrow, Hassidim, and Lloyd (HHL) the so-
called Quantum Linear System (QLS) problem was introduced and a quantum
algorithm was presented that allows solving the QLS exponentially faster than
classical algorithms solving classical linear systems [1]. In subsequent
works, several new algorithms have been put forward that solve QLS with
further increased efficiency in comparison to the original HHL algorithm,
improving the runtime dependence on the condition number [4], on the precision
[5] and on the sparsity [6]. Recently, a new approach inspired by adiabatic
quantum computation has introduced a significantly simpler quasi-optimal
solving algorithm [7, 8], narrowing the gap with experimental
implementations [9], making the algorithm compatible with Near-term
Intermediate Scale Quantum (NISQ) devices [10, 11] and leading to the
development of the presently most efficient QLS solvers [12].
A key idea underpinning the possibility of achieving large quantum speed-ups
in linear algebra tasks is the fact that an exponentially large complex vector
can be compactly encoded in the amplitudes of a pure quantum state; e.g., a
$n$-qubit state is described via $2^{n}$ amplitudes. This intuition is indeed
correct for the QLS problem, which has been proven to be BQP-complete [1]: any
quantum computation can be re-formulated as a QLS with only a polynomial
overhead and therefore there exist families of QLS problems that afford super-
polynomial speed-ups compared to classical solution methods (unless
$\textsf{BPP}=\textsf{BQP}$111BPP is the class of decision problems that can
be solved in bounded-error probabilistic polynomial time, BQP are those that
can be solved with bounded-error polynomial time quantum computations. Loosely
speaking, $\textsf{BPP}=\textsf{BQP}$ would mean that the power of quantum
computers is equal to that of classical computers.). While this reduction
shows that almost certainly there exist families of QLS problems that allow an
exponential speed-up compared to all classical methods, the crucial question
is whether there are _natural_ problems that can be directly formulated and
solved as QLS. The ubiquity of linear systems seems to suggest that their
quantum variant should be broadly applicable as well, but this is not
guaranteed,
since in the QLS setting further significant constraints have to be met to
obtain exponential speed-ups [13].
A prominent fundamental bottleneck of QLS solvers is that, to efficiently
obtain the solution, it is not sufficient that the coefficient matrix $A$ of
the system $A\textbf{x}=\textbf{b}$ is invertible, but it also has to be
_robustly_ invertible, that is, _well-conditioned_ : it is required that the
_condition number_ of the matrix $A$, defined as the ratio between the largest
and the smallest singular values, is small. In fact, solving a QLS necessarily
entails a runtime scaling at least linearly in the condition number (unless
$\textsf{BQP}=\textsf{PSPACE}$222PSPACE is the class of decision problems that
can be solved in classical polynomial space. With a Feynman sum-over-paths
approach one can show that any quantum computation can be classically
simulated in exponential time but with only polynomial space, thus
$\textsf{BQP}\subseteq\textsf{PSPACE}$. It is widely believed that
$\textsf{BQP}\neq\textsf{PSPACE}$.) as was already proven in Ref. [1].
Therefore, an exponential quantum speed-up for QLS solving is achievable only
when the condition number scales polylogarithmically with the system size and,
unfortunately, it seems rather difficult to find natural examples of matrix
families that exhibit such mild growth of the condition number [14]. However,
polynomial quantum speed-ups for linear system solving should be rather
broadly achievable and could still provide a quantum advantage if the degree
of the polynomial is large enough [15]. In this view, obtaining a further
quadratic improvement in the dependence on the condition number for some
restricted classes of matrices, that are however of wide practical interest,
could be of the utmost importance for obtaining a broader impact of QLS
solving algorithms. A previous publication showed that in the context of
quantum algorithms for solving certain Markov chain problems the use of
specialised QLS solvers for positive-definite matrices provides an improvement
in the condition number dependence [16]. Exploring the general positive-
definite QLS problem is the main focus of this work.
In the rest of the Introduction we motivate why quadratic speed-ups in the
condition number in positive-definite QLS may be expected (Section 1.1),
review some related results present in the literature (Section 1.2) and then
proceed to give high-level overviews of our two algorithms (Section 1.3 and
Section 1.4). In Section 2 we fix the notation and give the main definitions.
In Section 3 we prove that QLS solvers require, even when restricting to
positive-definite matrices, a runtime scaling linearly in the condition
number. We then move to the main results of this work, that is, achieving
improved runtime scaling for solving certain classes of positive-definite QLS:
in Section 4 we show a method based on an efficient implementation of a
matrix-block-encoding of $A^{-1}$ and in Section 5 a method based on
decomposing the coefficient matrix as $A=LL^{\dagger}$ to effectively
precondition the system. Finally, an outlook of possible future research
directions is given in Section 6.
### 1.1 Positive definite linear systems and quadratic speed-ups
In this work, we investigate the efficiency of QLS solving algorithms
specialised to the case where the coefficient matrix $A\in\mathds{C}^{N\times
N}$ is Hermitian and positive definite (PD). We will call this sub-class the
positive-definite Quantum Linear System (PD-QLS) problem.
A first reason to focus specifically on the PD case is that in the classical
setting several problems of practical relevance are formulated as linear
systems involving PD coefficient matrices. A second important reason is that
the few fully worked-out examples in the literature providing strong evidence
of polynomial quantum speed-ups for “natural” QLS problems involve PD
coefficient matrices [14, 17]. In particular, the discretization of a partial
differential equation (PDE) having a positive-definite kernel (such as, e.g.,
Poisson’s equation) results in a large linear system where the coefficient
matrix is positive definite. Montanaro and Pallister perform in Ref. [14] a
detailed analysis of how QLS may be employed to solve the finite-element
formulation of a PDE and show that the linear dependence on the condition
number is the main bottleneck to obtaining large quantum speed-ups. In fact,
the discretization of PDEs for functions defined in $\mathds{R}^{D}$ typically
results in positive-definite linear systems where the condition number scales
as ${\mathcal{O}}\\!\left(N^{2/D}\right)$ [14].
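This $\kappa$ growth is easy to reproduce for the simplest PD example: the one-dimensional Dirichlet Laplacian ($D=1$), whose condition number grows like $N^{2}$, matching the ${\mathcal{O}}(N^{2/D})$ scaling quoted above. A small numpy check (an illustrative sketch, not the finite-element setup of [14]):

```python
import numpy as np

# The tridiagonal Dirichlet Laplacian has eigenvalues 4 sin^2(kπ/(2(N+1))),
# so its condition number grows like N^2 (D = 1 in the O(N^{2/D}) scaling).
def laplacian_cond(N):
    A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]

for N in (16, 32, 64):
    print(N, laplacian_cond(N))   # roughly quadruples each time N doubles
```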
We highlight that one might have reasonably conjectured that it is possible to
have a quadratically better scaling in $\kappa$, the condition number of $A$,
in the PD case. First, note that the runtime lower bound of Ref. [1] is proven
using a special family of matrices that are indefinite (neither positive nor
negative definite) by construction, hence it is not directly applicable to the
PD case. A standard method allows to transform an indefinite linear system
into a PD one, but having a quadratically larger condition number333Namely,
for a given indefinite matrix $A$, the systems $A\textbf{x}=\textbf{b}$ and
$A^{\prime}\textbf{x}=\textbf{b}^{\prime}$, with
$\textbf{b}^{\prime}:=A^{\dagger}\textbf{b}$ and $A^{\prime}:=A^{\dagger}A$,
have the same solution. The matrix $A^{\prime}$ is positive definite and
$\kappa(A^{\prime})=\kappa(A)^{2}$.; hence, the lower bound in Ref. [1]
directly yields a $\sqrt{\kappa}$ runtime lower bound for PD-QLS solvers.
Second, the conjugate gradient (CG) descent method is the most efficient
classical algorithm for solving PD linear systems and requires only
${\mathcal{O}}\big{(}\sqrt{\kappa}\log(1/\varepsilon)\big{)}$ iterations to
converge to the correct solution, up to $\varepsilon$ approximation. Each
iteration consists in the update of some vectors having $N$ entries, where $N$
is the dimension of the linear system, thus resulting in a total runtime in
${\mathcal{O}}\big{(}N\sqrt{\kappa}\log(1/\varepsilon)\big{)}$ [18]. Then, it
might seem plausible that a quantization of the CG method could yield a
quantum algorithm having runtime in
${\mathcal{O}}\big{(}\mathrm{polylog}(N)\sqrt{\kappa}\log(1/\varepsilon)\big{)}$.
This conjectured quadratic speed-up is however not always achievable since, as
we prove in Section 3, PD-QLS solvers have runtime scaling linearly in
$\kappa$ in worst-case problem instances. But this no-go result can be used as
guidance to understand what is preventing us from achieving a better runtime
scaling and, conversely, what additional conditions have to be imposed in
order to achieve a quadratic speed-up in $\kappa$ for PD-QLS solvers.
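Both classical observations above are easy to check numerically. The sketch below (plain numpy on a small dense toy matrix; not a quantum algorithm) verifies the condition-number squaring of the normal-equations trick and runs a textbook conjugate-gradient loop, whose iteration count is governed by $\sqrt{\kappa}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1) The normal-equations trick squares the condition number: κ(A†A) = κ(A)^2.
A = rng.standard_normal((50, 50))
print(np.linalg.cond(A.T @ A) / np.linalg.cond(A) ** 2)   # ≈ 1

# 2) Textbook conjugate gradient on a PD system; the classical bound says
#    O(√κ log(1/ε)) iterations suffice.
def cg(A, b, tol=1e-8, maxit=10_000):
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    its = 0
    while np.linalg.norm(r) > tol * np.linalg.norm(b) and its < maxit:
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        its += 1
    return x, its

M = A.T @ A + 0.1 * np.eye(50)   # a PD test matrix
b = rng.standard_normal(50)
x, its = cg(M, b)
print(its, np.linalg.cond(M) ** 0.5)   # iteration count tracks √κ
```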
Method | Result | Requirements
---|---|---
Reduction to majority problem | $\Omega(\kappa)$ query complexity lower bound Proposition 6 | Access to $A$ via a matrix-block-encoding or via a sparse-oracle access
Block-encoding of $A^{-1}$ | ${\mathcal{O}}(\sqrt{\kappa})$ query and gate complexity Proposition 12 and Proposition 13 | Access to normalised matrix-block-encoding of $B=(I-\eta A)$ and $\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|$ is large
Decomposition as $A=LL^{\dagger}$ | ${\mathcal{O}}(\sqrt{\kappa})$ gate complexity Proposition 16 | $A$ is the sum of PD local Hamiltonians, b is sparse and $1/\gamma$ in Eq. (112) is small
Table 1: Summary of the main results of this work. The results provided in the
second column of the table hold under the conditions specified in the third
column. The big-$\Omega$ notation is used for runtime lower bounds (in query
complexity), the big-${\mathcal{O}}$ notation for runtime upper bounds
(exhibiting an explicit solving algorithm).
### 1.2 Previous related results
In the context of general QLS solvers, i.e. solvers applicable also to indefinite
or non-Hermitian matrices, the best algorithms have a runtime scaling quasi-
linearly in $\kappa$, i.e. scaling as
${\mathcal{O}}\big{(}\kappa\,\mathrm{polylog}(\kappa)\big{)}$, thus almost
saturating the linear lower bound [1]. Note that the original HHL algorithm
has a worse performance, with a runtime scaling as
${\mathcal{O}}(\kappa^{2}/\varepsilon)$ where $\varepsilon$ is the target
precision. The first algorithm to achieve a quasi-linear scaling in $\kappa$
was proposed by Ambainis in Ref. [4], which introduces a technique called
Variable-Time Amplitude Amplification (VTAA) and employs it to optimize the
HHL algorithm. Subsequently, Childs, Kothari and Somma [5] introduced
polynomial approximations of $A^{-1}$ to exponentially improve the runtime
dependence on the approximation error to
${\mathcal{O}}\big{(}\kappa^{2}\,\mathrm{polylog}(\kappa/\varepsilon)\big{)}$;
they show, furthermore, that the VTAA-based optimization can be used also for
this algorithm, thus yielding a
${\mathcal{O}}\big{(}\kappa\,\mathrm{polylog}(\kappa/\varepsilon)\big{)}$
runtime. Later, Chakraborty et al. showed that also the pseudo-inversion
problem, whereby the matrix $A$ may be non-invertible and even non-square, can
be solved with a runtime in
${\mathcal{O}}\big{(}\kappa\,\sqrt{\gamma}\,\mathrm{polylog}(\kappa/\varepsilon)\big{)}$,
where $\gamma$ parametrises the overlap of b with the subspace where $A$ is
non-singular [19]. Finally, the current state-of-the-art method for general
QLS solving is given in Ref. [12], which does not rely on VTAA but instead is
based on ideas stemming from adiabatic quantum computation [7], which result
in conceptually simpler algorithms and in a significant improvement of the
polylogarithmic factors.
Furthermore, several specialized quantum algorithms have been introduced with
the scope of more efficiently solving QLS for particular sub-classes of
matrices. A few works, e.g. [17, 20, 21], have investigated the use of
preconditioning to speed up the solution. The main idea, which is well-known
in classical linear system solving methods, is to look for an invertible
matrix $B$, a so-called _preconditioner_ , such that the matrix $BA$ has a
smaller condition number than $A$, and subsequently solve the equivalent
linear system $BA\,\textbf{x}=B\,\textbf{b}$. An algorithm based on a sparse
preconditioning matrix was introduced in Ref. [17], but it offers few
formal guarantees of performance improvement. Another method based on
circulant preconditioners was presented in Ref. [20], for which it is clearer
how to assess when a runtime improvement can be achieved. Runtime improvements
have been obtained in Ref. [21] applying new preconditioning methods to
Hamiltonians arising in many-body physics. An entirely different approach,
based on hybrid classical-quantum algorithms, has been explored in Ref. [22],
which yields runtime speed-ups for the case where the rows or columns of the
coefficient matrix can be prepared with polylogarithmic-depth quantum
circuits. We also mention the result of Ref. [23], showing that it is possible
to solve QLS for the special class of tridiagonal Toeplitz matrices with a
runtime that scales polylogarithmically in all parameters, condition number
included; note, however, that matrices of this class can be fully specified
with just two real parameters. Finally, the PD-QLS problem has been previously
considered in Ref. [16], where it is suggested that positive-definite systems
could be solved more efficiently than indefinite ones.
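A minimal classical illustration of the preconditioning idea discussed above (symmetric Jacobi scaling; this is deliberately much simpler than any of the quantum preconditioners cited):

```python
import numpy as np

# Symmetric Jacobi preconditioning: for a PD matrix A with wildly varying
# row/column scales, B = diag(A)^{-1/2} yields a preconditioned matrix BAB
# (with unit diagonal) whose condition number is far smaller; one solves
# BAB y = B b and recovers x = B y.
rng = np.random.default_rng(2)
n = 40
D = np.diag(10.0 ** rng.uniform(-3, 3, size=n))   # bad diagonal scaling
M = rng.standard_normal((n, n))
A = D @ (M @ M.T + n * np.eye(n)) @ D             # PD, very ill-conditioned

B = np.diag(np.diag(A) ** -0.5)                   # Jacobi preconditioner
print(np.linalg.cond(A), np.linalg.cond(B @ A @ B))   # second is far smaller
```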
### 1.3 Overview of the method based on a matrix-block-encoding of $A^{-1}$
Our first method for solving PD-QLS with improved runtime is based on
implementing as a quantum circuit a unitary matrix ${\mathcal{U}}_{A^{-1}}$
that encodes in a sub-block a matrix proportional to $A^{-1}$, i.e. a so-
called matrix-block-encoding [24, 25]. The solution state
$\left|{A^{-1}\textbf{b}}\right\rangle$ can be subsequently obtained via
matrix-vector multiplication, achieved by applying the circuit encoding
$A^{-1}$ and projecting onto the correct sub-block. This method is analogous
to the one introduced by Childs, Kothari and Somma in Ref. [5], where it is
shown that exponentially precise polynomial approximations of the inverse
function can be constructed, which then allow to implement a matrix-block-
encoding of $A^{-1}$ up to exponentially small error and finally solve the QLS
problem via matrix-vector multiplication. Here, we show that if $A$ is a PD
matrix, an equally good approximation of $A^{-1}$ can be obtained with
polynomials having a quadratically smaller degree, leading to the possibility
of a quadratic speed-up in PD-QLS solving.
In more detail, Ref. [5] considers a Hermitian matrix $A$ whose spectrum is by
assumption contained in the domain
$\mathcal{D}_{\Delta}:=[-1,-\Delta]\cup[+\Delta,+1]$, with $\Delta\leq
1/\kappa$. Then, families of real polynomials $P(x)$ are constructed such that
$|P(x)-1/x|\leq\varepsilon$ on the domain $\mathcal{D}_{\Delta}$ and have a
degree $\ell\in{\mathcal{O}}\big{(}\kappa\log(1/\varepsilon)\big{)}$, see
panel $(a)$ of Figure 1 for an illustrative example. Using either the Linear
Combination of Unitaries (LCU) [26, 27] or the Quantum Signal Processing (QSP)
[28, 24, 25] method, it is possible to implement a matrix-block-encoding of
$P(A)$, up to some rescaling factor $K>0$ such that $P(A)/K$ can be encoded as a
sub-block of a unitary matrix; by construction we then have
$\left|\left|{P(A)-A^{-1}}\right|\right|\,\leq\,\varepsilon$ in operator norm.
The query complexities of LCU and QSP scale at least linearly in the degree of
the polynomial, hence the use of polynomials of low degree is crucial to
construct efficient QLS solvers. Moreover, the normalisation factor $K$ of the
matrix-block-encoding of $P(A)$ scales linearly in $\kappa$ and enters
multiplicatively in the cost of the matrix-vector multiplication step
necessary to produce $\left|{A^{-1}\textbf{b}}\right\rangle$.
In case $A$ is a PD matrix, we can exploit the knowledge that its spectrum is
contained in $\mathcal{D}_{\Delta}^{+}:=[\Delta,1]$ to perform the following
trick: we assume access to a _normalised_ matrix-block-encoding of
$B:=I-\eta\,A$ for some constant $\eta\in(0,2]$ (we show in Section 4.3 that
$B$ can be efficiently constructed for $\eta=1$ when $A$ is diagonally dominant
and for $\eta=1/J$ when $A$ is the sum of $J$ positive semi-definite local
Hamiltonian terms), so that the spectrum of $B$ is always contained in the
interval $\mathcal{D}_{B}=[-1,1-\eta\Delta]$. We then construct a polynomial
$P(x)$ that approximates the function $f(x):=1/(1-x)$ on the domain
${\mathcal{D}}_{B}$ up to $\varepsilon$ distance. Using the QSP method to
implement a matrix-block-encoding of the matrix $P(B)$ we then have
$\displaystyle P(B)\approx
f(B)=\frac{1}{I-B}=\frac{1}{I-(I-\eta\,A)}=\frac{1}{\eta}A^{-1}$ (1)
which is the required matrix inverse, up to a $1/\eta$ rescaling factor.
Importantly, our polynomial approximation of $1/(1-x)$ has a degree
$\ell\in{\mathcal{O}}\big{(}\sqrt{\kappa/\eta}\,\log(\kappa/(\eta\varepsilon))\big{)}$,
a quadratically better dependence on $\kappa$ with respect to the
approximation of $1/x$ given in Ref. [5], see panel $(b)$ of Figure 1.
Moreover, the normalisation factor $K$ scales again linearly in $\kappa$,
which is the best dependence achievable.
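As a concrete numerical check of this claim, the following Python sketch fits a polynomial to $f(x)=1/(1-x)$ on $\mathcal{D}_{B}$ by least squares (an illustration only; this is not the explicit polynomial family of Eq. (28)) and verifies that a degree well below $\kappa$ already yields a good uniform approximation:

```python
import numpy as np

# Numerical sanity check: a polynomial of degree well below kappa suffices
# to approximate f(x) = 1/(1-x) on D_B = [-1, 1 - Delta] with Delta = 1/kappa.
# (Least-squares Chebyshev fit; the paper constructs an explicit family.)
kappa = 400
delta = 1.0 / kappa                 # eta = 1 for simplicity
xs = np.linspace(-1.0, 1.0 - delta, 4000)
f = 1.0 / (1.0 - xs)                # grows up to kappa at the right endpoint

deg = 250                           # well below kappa = 400
P = np.polynomial.Chebyshev.fit(xs, f, deg)
max_err = np.max(np.abs(P(xs) - f))
assert max_err < 0.05               # uniformly accurate at deg << kappa
```

By contrast, approximating $1/x$ on the symmetric domain $\mathcal{D}_{\Delta}$ to comparable accuracy requires a degree of order $\kappa$.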
Figure 1: In panel $(a)$ are shown two polynomials approximating the function
$1/x$ (in black) chosen from the family of polynomials given in Ref. [5, Lemma
14], using as parameters $b=200,j_{0}=10$ (red curve) and $b=200,j_{0}=20$
(blue curve). In panel $(b)$ are shown two polynomials approximating the
function $1/(1-x)$ (in black) chosen from the family given in Eq. (28), using
as parameters $\ell=6,\kappa=15$ (red curve) and $\ell=10,\kappa=9$ (blue
curve). The shaded regions in each panel indicate the intervals where the
polynomial approximation is not accurate.
From a mathematical standpoint, the possibility of a quadratic reduction of
the degree of the approximating polynomial can be interpreted as a consequence
of Bernstein’s inequality [29]. This inequality states that in the class of
real polynomials $P(x)$ of degree $\ell$ such that $|P(x)|\leq 1$ for all
$x\in[-1,+1]$ the derivative $P^{\prime}(x)$ satisfies
$|P^{\prime}(x)|\leq(1-x^{2})^{-1/2}\ell$ for all $-1<x<1$, while we have
$|P^{\prime}(x)|\leq\ell^{2}$ for $x=1$ and $x=-1$ (and these last bounds are
saturated by Chebyshev polynomials). Note that polynomial approximations of
$1/x$ and of $1/(1-x)$ have large derivatives close to the respective
singularities $x=0$ and $x=1$, and because of Bernstein’s inequality the
latter case admits good polynomial approximations of quadratically lower
degree.
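This endpoint behaviour can be checked directly: the derivative of the Chebyshev polynomial $T_{\ell}$ at $x=\pm 1$ equals $\ell^{2}$, saturating Bernstein's endpoint bound, while in the interior it obeys the weaker $\ell/\sqrt{1-x^{2}}$ bound. A small verification using NumPy's Chebyshev routines:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Bernstein's endpoint bound |P'(x)| <= ell^2 (for |P| <= 1 on [-1,1]) is
# saturated by T_ell: its derivative at x = +-1 equals exactly ell^2.
for ell in (5, 10, 20):
    coeffs = np.zeros(ell + 1)
    coeffs[ell] = 1.0                      # T_ell in the Chebyshev basis
    d = C.chebder(coeffs)                  # coefficients of T_ell'
    assert np.isclose(C.chebval(1.0, d), ell**2)
    assert np.isclose(abs(C.chebval(-1.0, d)), ell**2)
    # In the interior the derivative obeys the weaker bound ell/sqrt(1-x^2):
    x = 0.5
    assert abs(C.chebval(x, d)) <= ell / np.sqrt(1 - x**2) + 1e-9
```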
We need next to perform matrix-vector multiplication to obtain the state
$\left|{A^{-1}\textbf{b}}\right\rangle$ and thus solve the QLS; this is
obtained by applying the quantum circuit that encodes the matrix $P(B)\propto
A^{-1}$ to a quantum state of the form
$\left|{0}\right\rangle\left|{\textbf{b}}\right\rangle$ and then post-
selecting the ancilla register to be in $\left|{0}\right\rangle$ or, more
efficiently, using amplitude amplification [30]. The amplitude amplification
step implies a multiplicative overhead of order $\kappa$ in the worst case,
yielding a total runtime in
${\mathcal{O}}\big{(}\kappa^{3/2}\log(\kappa/\varepsilon)\big{)}$ for this PD-
QLS solver. However, the efficiency of the matrix-vector multiplication depends
on the post-selection success probability and thus on the specific choice for
the vector b. In a best-case scenario the post-selection success probability
is constant and thus the overall runtime of the PD-QLS solver is
${\mathcal{O}}\big{(}\sqrt{\kappa}\log(\kappa/\varepsilon)\big{)}$, while in
the same high-success-probability scenario the QLS solver of Ref. [5] has a
runtime in ${\mathcal{O}}\big{(}\kappa\log(\kappa/\varepsilon)\big{)}$, since
this is the cost of implementing a polynomial approximation of $A^{-1}$ for
indefinite matrices.
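The mechanics of matrix-vector multiplication by post-selection can be illustrated with a toy, exact block-encoding (this dilation is for illustration only and is unrelated to the QSP/LCU constructions; for simplicity we block-encode $A$ itself rather than a polynomial of $B$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy exact block-encoding of a real symmetric A with ||A|| < 1:
# U = [[A, S], [S, -A]] with S = sqrt(I - A^2) is unitary, since A and S
# commute. Applying U to (b, 0) and post-selecting the first block yields
# A b / ||A b||, with success probability ||A b||^2.
N = 8
M = rng.standard_normal((N, N))
A = M + M.T
A /= np.linalg.norm(A, 2) * 1.1          # ensure ||A|| < 1

w, V = np.linalg.eigh(A)
S = V @ np.diag(np.sqrt(1 - w**2)) @ V.T
U = np.block([[A, S], [S, -A]])
assert np.allclose(U @ U.T, np.eye(2 * N))   # U is unitary

b = rng.standard_normal(N)
b /= np.linalg.norm(b)
out = U @ np.concatenate([b, np.zeros(N)])
top = out[:N]                             # post-select ancilla on |0>
p_success = np.linalg.norm(top) ** 2      # = ||A b||^2
assert np.allclose(top / np.linalg.norm(top), A @ b / np.linalg.norm(A @ b))
```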
We remark that there is a technical assumption that has to be satisfied to
allow the realisation of our method, namely, that it is possible to implement
a _normalised_ matrix block encoding of $B=I-\eta\,A$. This is a non-trivial
task: standard methods could allow us to prepare, for instance, a normalised
matrix-block-encoding of $B/d$ where $d\geq 2$ is the _sparsity_ of $B$, i.e.
the maximum number of non-zero entries in each column of $B$ [5]. Note,
however, that we would then need to implement an approximation of the function
$g(x):=1/(1-xd)$ to obtain $g(B/d)=\frac{1}{\eta}A^{-1}$; the function $g(x)$
has a singularity in $x=1/d$, which is in the interior of the domain
$[-1,+1]$, and thus Bernstein’s inequality precludes us from achieving good
approximations with low-degree polynomials for this function.
We will show that it is possible to implement normalised matrix-block-encodings
of $B$ for two special classes of QLS problems: the first is when
$A$ is a diagonally dominant matrix; the second, when $A$ is given as the sum
of local PD Hamiltonian terms, where by “local” we mean that it acts non-
trivially only on a small number of qubits. In these two cases it is therefore
possible to implement our improved PD-QLS solver. Whether it is possible in
broader generality to prepare normalised block-encodings is left as an open
question.
### 1.4 Overview of the method based on the $A=LL^{\dagger}$ decomposition
Our second method for solving PD-QLS with improved runtime is based on finding
a decomposition $A=LL^{\dagger}$, akin to the classical Cholesky decomposition
[31] which exists for all PD matrices, and then using $L^{-1}$ as an efficient
and effective preconditioner. Note, in fact, that
$L^{\dagger}\textbf{x}=\textbf{b}^{\prime}$ for
$\textbf{b}^{\prime}=L^{-1}\textbf{b}$ is a linear system equivalent to the
original one, but the decomposition $A=LL^{\dagger}$ immediately implies
$\kappa(L)=\kappa(L^{\dagger})=\sqrt{\kappa(A)}$ and thus the new system
provably has a quadratically smaller condition number. This decomposition is
similar to a _spectral gap amplification_ of $A$ [32].
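The quadratic reduction of the condition number is immediate to verify numerically, since the singular values of $L$ are the square roots of the eigenvalues of $A$:

```python
import numpy as np

rng = np.random.default_rng(1)

# kappa(L) = sqrt(kappa(A)) for any decomposition A = L L^dagger:
# the singular values of L are the square roots of the eigenvalues of A.
N = 16
M = rng.standard_normal((N, N))
A = M @ M.T + 0.1 * np.eye(N)            # random positive-definite matrix
L = np.linalg.cholesky(A)                # A = L L^T

assert np.allclose(L @ L.T, A)
assert np.isclose(np.linalg.cond(L), np.sqrt(np.linalg.cond(A)))
```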
The method thus involves two main steps: in the first one we use classical
computation to efficiently obtain a description of a matrix $L$ such that
$LL^{\dagger}=A$ and such that it is possible to efficiently find, using only
classical computation, a description of the vector
$\left|{\textbf{b}^{\prime}}\right\rangle:=\left|{L^{-1}\textbf{b}}\right\rangle$;
in the second one, we use an efficient quantum algorithm, having runtime
quasi-linear in $\kappa$, to solve
$\left|{L^{\dagger}\textbf{x}}\right\rangle=\left|{\textbf{b}^{\prime}}\right\rangle$,
which thus gives
$\left|{\textbf{x}}\right\rangle=\left|{(L^{\dagger})^{-1}L^{-1}\textbf{b}}\right\rangle=\left|{A^{-1}\textbf{b}}\right\rangle$.
Note that the classical descriptions of $L^{\dagger}$ and of
$\left|{\textbf{b}^{\prime}}\right\rangle$ should also allow the efficient
compilation of the quantum algorithm used to solve the preconditioned QLS.
The picture is not yet complete, since we actually use a matrix $L$ that is
non-square and thus singular. As a result, the inversion operation has to be
substituted by a pseudo-inversion and the condition number by the _effective_
condition number, the ratio between the largest and the smallest _non-zero_
singular value; when $A$ is invertible the effective condition number of $L$
is quadratically smaller than the condition number of $A$. We also use two
different pseudo-inverses in the classical and in the quantum part of the
computation: in the quantum step the Moore-Penrose pseudo-inverse
$(L^{\dagger})^{+}$ is employed and in the classical preconditioning a
_generalised_ pseudo-inverse $L^{g}$ chosen such that
$(L^{\dagger})^{+}L^{g}=A^{-1}$; thus, they yield the desired solution
$\left|{\textbf{x}}\right\rangle=\left|{A^{-1}\textbf{b}}\right\rangle$ when
applied in sequence. These extensions to non-square matrices and to different
pseudo-inverses are made to give leeway in the design of the classical part of
the algorithm, allowing us to meet the efficiency requirements mentioned
above. Finally, we will employ the QLS solver of [19], which can tackle
pseudo-inversion problems and has a runtime quasi-linear in the effective
condition number.
We show that a fully suitable decomposition of the form $A=LL^{\dagger}$ can
be constructed for the Sum-QLS problem, i.e., when $A$ is a sum of local PD
Hamiltonian terms. In this case, the matrix $L$ is formed by several blocks,
each constructed from a single Hamiltonian term, while the pseudo-inverse
$L^{g}$ is obtained inverting the individual blocks, operations involving only
small matrices and thus classically feasible. We also require that the vector
b is sparse, containing only polynomially many non-zero entries, thus allowing
us to efficiently compute the description of
$\left|{\textbf{b}^{\prime}}\right\rangle=\left|{L^{g}\,\textbf{b}}\right\rangle$.
As a final technical caveat, we note that because of the mismatch between the
pseudo-inverse used in the classical and in the quantum part of the algorithm
($L^{g}$ and $(L^{\dagger})^{+}$), the vector $\textbf{b}^{\prime}$ is not
entirely contained in the support of $(L^{\dagger})^{+}$ and thus amplitude
amplification of the component in the correct subspace is required. This
incurs a multiplicative overhead of a factor $1/\sqrt{\gamma}$, where
$\sqrt{\gamma}>0$ is a known bound for the amplitude of the “good” component
of $\textbf{b}^{\prime}$. This method thus has a provable runtime improvement
over competing QLS solvers whenever $1/\sqrt{\gamma}$ is sufficiently small.
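The following sketch illustrates the decomposition on a toy Sum-QLS instance with two dense terms. For simplicity it uses the Moore-Penrose pseudo-inverse for both factors, whereas, as discussed above, the classical step of the algorithm actually employs a generalised pseudo-inverse $L^{g}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: if A = H1 + H2 with Hj = Lj Lj^dagger, then L = [L1 | L2]
# (non-square) satisfies L L^dagger = A, its nonzero singular values are the
# square roots of the eigenvalues of A (so kappa_eff(L) = sqrt(kappa(A))),
# and Moore-Penrose pseudo-inverses recover A^{-1} = (L^dagger)^+ L^+.
N = 8
L1 = rng.standard_normal((N, N))
L2 = rng.standard_normal((N, N))
A = L1 @ L1.T + L2 @ L2.T                # sum of two PSD terms
L = np.hstack([L1, L2])                  # shape (N, 2N), singular as a map

assert np.allclose(L @ L.T, A)
s = np.linalg.svd(L, compute_uv=False)   # nonzero singular values of L
assert np.isclose(s[0] / s[-1], np.sqrt(np.linalg.cond(A)))
# applying the two pseudo-inverses in sequence yields A^{-1}
assert np.allclose(np.linalg.pinv(L.T) @ np.linalg.pinv(L), np.linalg.inv(A))
```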
## 2 Notation and definitions
We assume knowledge of the main quantum computation concepts, as given for
instance in Ref. [33]. A quantum computation is described using a Hilbert
space of dimension $2^{n}$ for some $n$, corresponding to a system of $n$
qubits with a distinguished computational basis.
### 2.1 Linear algebra and asymptotic notation
We consistently denote with $N\in\mathds{N}$ the dimension of the QLS we aim
to solve and we define $n:=\lceil\log_{2}N\rceil$, so that a vector in
$\mathds{C}^{N}$ (possibly padded with zeroes at the end if $N$ is not a power
of two) can be described as a pure state of $n$ qubits. For any complex matrix
$A\in\mathds{C}^{N\times M}$ having $N$ rows and $M$ columns we write its
Singular Value Decomposition (SVD) as $A=V\Sigma W^{\dagger}$ where $V$ and
$W$ are unitary matrices of size $N\times N$ and $M\times M$ respectively and
$\Sigma$ is a real non-negative matrix of size $N\times M$ which is uniquely
determined up to reordering of the diagonal entries and contains the singular
values of $A$ on the main diagonal. A Hermitian matrix $A$ is positive
definite if
$\left\langle{\textbf{v}}\right|A\left|{\textbf{v}}\right\rangle>0$ for all
$\left|{\textbf{v}}\right\rangle\neq 0$ and is positive semi-definite if
$\left\langle{\textbf{v}}\right|A\left|{\textbf{v}}\right\rangle\geq 0$. The
eigenvalues of $A$ are real, positive (in the definite case) or non-negative
(in the semi-definite case) and coincide with its singular values. For a
general $A$, $\varsigma_{\min},\varsigma_{\max}$ and
$\lambda_{\min},\lambda_{\max}$ denote the minimum and maximum singular values
and eigenvalues, respectively. The Moore-Penrose pseudo-inverse of $A$ is
obtained by applying to the singular values $\varsigma_{i}$ of $A$ the
function $f:\mathds{R}\to\mathds{R}$ defined as $f(x)=1/x$ for $x\neq 0$
and $f(0)=0$. More precisely, for a diagonal matrix $\Sigma$ we define
$\Sigma^{+}=f(\Sigma)$, while for a general matrix $A=V\Sigma W^{\dagger}$ the
pseudo-inverse is given as $A^{+}:=W\Sigma^{+}V^{\dagger}$. Given
$A\in\mathds{C}^{N\times M}$ a _generalised_ pseudo-inverse
$A^{g}\in\mathds{C}^{M\times N}$ is any matrix satisfying the equation
$A\,A^{g}A=A$.
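In code, the pseudo-inverse can be computed exactly as defined, by applying $f$ to the singular values (a sketch; the cutoff below is an arbitrary numerical threshold standing in for "nonzero"):

```python
import numpy as np

rng = np.random.default_rng(3)

# Moore-Penrose pseudo-inverse via the SVD: apply f(x) = 1/x to the nonzero
# singular values and f(0) = 0 to the rest; agrees with np.linalg.pinv.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))   # rank <= 3
V, s, Wh = np.linalg.svd(A, full_matrices=False)
inv_s = np.zeros_like(s)
mask = s > 1e-12 * s[0]                  # numerical cutoff for "nonzero"
inv_s[mask] = 1.0 / s[mask]
A_pinv = Wh.conj().T @ np.diag(inv_s) @ V.conj().T

assert np.allclose(A_pinv, np.linalg.pinv(A, rcond=1e-12))
# defining property of any generalised pseudo-inverse: A A^g A = A
assert np.allclose(A @ A_pinv @ A, A)
```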
In this work we employ the $\ell^{2}$-norm for vectors
$\left|\left|{\textbf{v}}\right|\right|^{2}:=\sum_{i=1}^{N}|\text{v}_{i}|^{2}$
and the induced operator norm for matrices
$\left|\left|{A}\right|\right|:=\max_{\textbf{v}\neq
0}\left|\left|{A\textbf{v}}\right|\right|/\left|\left|{\textbf{v}}\right|\right|$.
The condition number of a matrix is given by
$\kappa(A):=\left|\left|{A}\right|\right|||A^{-1}||$. Since we have
$\left|\left|{A}\right|\right|=\varsigma_{\max}(A)$ and
$||A^{-1}||=1/\varsigma_{\min}(A)$, the condition number can be also written
as $\kappa(A)=\frac{\varsigma_{\max}(A)}{\varsigma_{\min}(A)}$. For a singular
matrix $A$ we define the _effective_ condition number to be
$\kappa_{\mathrm{eff}}(A):=\left|\left|{A}\right|\right|||A^{+}||$, which is
equal to the ratio between the largest and the smallest non-zero singular
value of $A$. A Hermitian matrix $A$ is diagonally dominant if $\sum_{j:j\neq
i}|A_{ij}|\leq|A_{ii}|$ for all $i$; note that $|A_{ii}|=A_{ii}>0$ when $A$ is
positive definite.
We use the standard big-${\mathcal{O}}$ and small-$\mathfrak{o}$ notations for
asymptotic scaling, together with the following definitions:
$f(x)\in\Omega(g(x))$ if and only if $g(x)\in{\mathcal{O}}(f(x))$, which is
used to give lower bounds, and
$\Theta(g(x))={\mathcal{O}}(g(x))\,\cap\,\Omega(g(x))$. We also use the
soft-${\mathcal{O}}$ and soft-$\Omega$ notations where
$f(x)\in\widetilde{O}(g(x))$ means
$f(x)\in{\mathcal{O}}\big{(}g(x)\,\text{polylog}\,[g(x)]\big{)}$, and
similarly for $\widetilde{\Omega}(g(x))$, which are used to give more coarse-
grained bounds.
### 2.2 Definition of the Quantum Linear System problem
In this section we introduce the main definitions that are relevant in the
context of the QLS problem, which is a quantum analogue of the classical
linear algebra problem of solving the system of equations
$A\textbf{x}=\textbf{b}$, having solution $\textbf{x}=A^{-1}\textbf{b}$ when
$A$ is invertible.
We use pure quantum states to encode $N$-dimensional complex vectors. A vector
v enclosed in a bra or in a ket is always assumed to be normalised,
$\big{|}\big{|}\left|{\textbf{v}}\right\rangle\big{|}\big{|}=1$. In particular
we have:
$\displaystyle\left|{\textbf{b}}\right\rangle=\frac{\textbf{b}}{\left|\left|{\textbf{b}}\right|\right|}=\frac{\sum_{i=1}^{N}b_{i}\left|{i}\right\rangle}{\left(\sum_{i=1}^{N}|b_{i}|^{2}\right)^{\\!1/2}}$
(2)
$\displaystyle\left|{A^{-1}\textbf{b}}\right\rangle=\frac{A^{-1}\left|{\textbf{b}}\right\rangle}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|}\;.$
(3)
We now give a general definition of the QLS problem. The formulation is
similar to the one provided in Ref. [7] and, for the sake of generality,
intentionally does not specify the access models employed for the coefficient
matrix $A$ and the known-term vector b.
###### Definition 1 (Quantum Linear System).
Suppose we have access to a vector
$\textbf{b}\in\mathds{C}^{N}\setminus\\{0\\}$ and to a non-singular matrix
$A\in\mathds{C}^{N\times N}$ (access is given via quantum oracles, or some
kind of implicit or explicit description). We are given two real positive
parameters $\varsigma_{*}$ and $\varsigma^{*}$ such that
$\varsigma_{*}\leq\varsigma_{\min}(A)$ and
$\varsigma_{\max}(A)\leq\varsigma^{*}$, i.e. the singular values of $A$ are
contained in the interval
${\mathcal{D}}_{A}=\big{[}\varsigma_{*},\varsigma^{*}\big{]}$; equivalently,
we are given two parameters $\kappa>1$ and $\alpha>0$ that provide the upper
bounds $\kappa(A)\leq\kappa$ and $\left|\left|{A}\right|\right|\leq\alpha$. We
are also given a target precision $\varepsilon>0$.
The QLS problem then consists in preparing a density matrix
$\rho_{\textbf{x}}$ which is $\varepsilon$-close in trace distance to the
solution vector
$\left|{\textbf{x}}\right\rangle=\left|{A^{-1}\textbf{b}}\right\rangle$; that
is, we require that
$\displaystyle\big{|}\big{|}\,\rho_{\textbf{x}}-\left|{\textbf{x}}\right\rangle\\!\\!\left\langle{\textbf{x}}\right|\big{|}\big{|}_{\mathrm{Tr}}\leq\varepsilon$ (4)
where the trace norm is defined as
$\left|\left|{X}\right|\right|_{\mathrm{Tr}}:=\frac{1}{2}\mathrm{Tr}\left(\sqrt{XX^{\dagger}}\right)$.
###### Definition 2 (Positive-definite Quantum Linear System).
A PD-QLS problem is a QLS problem, as given in Definition 1, where the
coefficient matrix $A$ is Hermitian and positive definite.
We note that a more commonly employed definition requires that the QLS solver
outputs a state $|\widetilde{\textbf{x}}\rangle$ such that
$\big{|}\big{|}\,|\widetilde{\textbf{x}}\rangle-\left|{\textbf{x}}\right\rangle\big{|}\big{|}\leq\varepsilon$.
We prefer to use the trace distance since it is operationally motivated, as it
equals the probability of distinguishing two copies of two quantum states when
optimizing over all possible measurements, see [33, Section 9.2.1]. The trace
distance then also bounds the maximum relative error that can be introduced
when estimating the expectation value
$\mathrm{Tr}(\left|{\textbf{x}}\right\rangle\\!\\!\left\langle{\textbf{x}}\right|M)$
for a given observable $M$; computing such expectation values was the end goal
of Ref. [1]. Finally, for pure normalised states $\left|{\psi}\right\rangle$
and $\left|{\phi}\right\rangle$ the trace distance simplifies as
$d_{\mathrm{Tr}}(\psi,\phi):=\big{|}\big{|}\,\left|{\psi}\right\rangle\\!\\!\left\langle{\psi}\right|-\left|{\phi}\right\rangle\\!\\!\left\langle{\phi}\right|\big{|}\big{|}_{\mathrm{Tr}}=\sqrt{1-|\langle{\psi}|{\phi}\rangle|^{2}}$
and using $|\langle{\psi}|{\phi}\rangle|\geq
1-\frac{1}{2}\big{|}\big{|}\left|{\psi}\right\rangle-\left|{\phi}\right\rangle\big{|}\big{|}^{2}$
we obtain the inequality
$\displaystyle
d_{\mathrm{Tr}}(\psi,\phi)\leq\big{|}\big{|}\left|{\psi}\right\rangle-\left|{\phi}\right\rangle\big{|}\big{|}\,,$
(5)
thus the trace distance between $|\widetilde{\textbf{x}}\rangle$ and
$\left|{\textbf{x}}\right\rangle$ is at least as small as their $\ell^{2}$
distance.
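Inequality (5) is easy to confirm numerically on random pure states:

```python
import numpy as np

rng = np.random.default_rng(4)

# Check of Eq. (5): for normalised pure states, the trace distance
# sqrt(1 - |<psi|phi>|^2) never exceeds the l2 distance || |psi> - |phi> ||.
for _ in range(100):
    psi = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    phi = rng.standard_normal(8) + 1j * rng.standard_normal(8)
    psi /= np.linalg.norm(psi)
    phi /= np.linalg.norm(phi)
    overlap = abs(np.vdot(psi, phi))
    d_tr = np.sqrt(max(0.0, 1 - overlap**2))
    assert d_tr <= np.linalg.norm(psi - phi) + 1e-12
```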
It is customary to assume that $A$ has been rescaled by a factor
$\alpha\geq\left|\left|{A}\right|\right|$, so that we take, without loss of
generality, $\left|\left|{A}\right|\right|\leq 1$ and the only parameter that
needs to be specified is an upper bound to the condition number. This will
also be our convention, unless otherwise specified.
Figure 2: Different access models for QLS algorithms. Panel (a) illustrates
the case where a classical description of $A$ and b (and the target precision
$\varepsilon$) is provided and then used to compile a quantum algorithm
$\mathcal{A}$, solving the QLS for the given $A$ and b. The description of $A$
and b does not need to be fully explicit: it is sufficient that it allows to
efficiently compute the sequence of elementary quantum gates of $\mathcal{A}$.
Panel (b) illustrates the case of a _relativising_ quantum algorithm; in this
case, the sequence of elementary quantum gates only depends on a few
parameters (e.g., $\alpha,\kappa,\varepsilon$ as in Definition 1) while two
fixed sub-routines specify $A$ and b; these sub-routines can be treated as
black-boxes.
### 2.3 Oracles for quantum linear system solving
We now define and discuss a few different access models for $A$ and b, since
the results we present for the PD-QLS solvers crucially depend on which access
model is assumed. A fundamental distinction is between oracular and non-
oracular (a.k.a. relativising and non-relativising) algorithms, see Figure 2.
For instance, in oracular settings it is often possible to establish
unconditional query complexity lower bounds, while in non-oracular settings
non-trivial runtime lower bounds typically can be proven only under some
(reasonable) complexity theory assumptions, such as
$\textsf{P}\neq\textsf{NP}$. Note that most of the literature on QLS assumes
oracular access to $A$ and b [1, 4, 5, 6, 7, 12, 19].
We start by defining the access model for b that we assume throughout this
paper, except where differently specified.
###### Definition 3 (State preparation oracle, as in [1]).
Given a vector $\textbf{b}\in\mathds{C}^{N}$ we say that we have quantum
access to a _state preparation oracle_ for b if there is a unitary operator
${\mathcal{U}}_{\textbf{b}}$ such that
${\mathcal{U}}_{\textbf{b}}\left|{0^{n}}\right\rangle=\left|{\textbf{b}^{\prime}}\right\rangle$,
where $\textbf{b}^{\prime}$ is obtained by padding b with zeroes until its
size is a power of $2$.
As already noted by HHL, a general setting where it is possible to efficiently
implement ${\mathcal{U}}_{\textbf{b}}$ is the one presented by Grover and Rudolph [34];
another possibility is that ${\mathcal{U}}_{\textbf{b}}$ is directly encoded
in a qRAM [35], as may be required in quantum machine learning contexts.
Next, we define two models for access to $A$, which we denote as
${\mathcal{P}}_{A}$ and ${\mathcal{U}}_{A}$ and which correspond to a sparse-
matrix-access and matrix-block-encoding, respectively.
###### Definition 4 (Sparse-matrix-access, as in [5]).
Given a Hermitian matrix $A$ which is $d$-sparse (i.e., has at most $d$
non-zero entries in each row and column), a quantum _sparse-matrix-access_
${\mathcal{P}}_{A}$ is given by a pair of oracular functions (since $A$ is
Hermitian, access by rows and by columns are equivalent; the definition can be
extended to non-Hermitian matrices, but we then need to assume access both by
rows and by columns)
$\displaystyle{\mathcal{P}}_{A}:=({\mathcal{P}}_{A}^{pos},{\mathcal{P}}_{A}^{val})$
(6)
where ${\mathcal{P}}_{A}^{pos}$ and ${\mathcal{P}}_{A}^{val}$ specify the
positions of the (potentially) non-zero entries of $A$ and the values of those
entries, respectively, i.e.
$\displaystyle{\mathcal{P}}_{A}^{pos}:\left|{i,\nu}\right\rangle\ \mapsto\
\left|{i,j(i,\nu)}\right\rangle$
$\displaystyle\qquad\forall~{}i,j\in\\{1,\ldots,N\\}~{}\mathrm{and}~{}\nu\in\\{1,\ldots,d\\}$
(7)
$\displaystyle{\mathcal{P}}_{A}^{val}:\left|{i,j,z}\right\rangle\mapsto\left|{i,j,A_{i,j}\\!\oplus\\!z}\right\rangle$
$\displaystyle\qquad\forall~{}z\in\\{0,1\\}^{*}$ (8)
where $A_{i,j}\\!\oplus\\!z\in\\{0,1\\}^{*}$ denotes the bitwise XOR of $z$
with a bit string of arbitrary length that encodes the value $A_{i,j}\in\mathds{C}$.
In order to keep the presentation simple, we assume here and throughout the
paper that numeric representations of complex numbers can be specified exactly
or with a sufficiently high number of digits of precision.
###### Definition 5 (Matrix block encoding, as in [28, 25]).
A unitary operator ${\mathcal{U}}_{A}$ acting on $n+a$ qubits is called an
$(\alpha,a,\varepsilon)$-matrix-block-encoding of an $n$-qubit operator $A$
(the circuit ${\mathcal{U}}_{A}$ may also act on other ancilla qubits, which
are in $\left|{0}\right\rangle$ both before and after the application of
${\mathcal{U}}_{A}$; for a given $(\cdot,a,\cdot)$-matrix-block-encoding we
only count the $a$ ancilla qubits that require post-selection to
$\left\langle{0^{a}}\right|$ to obtain the encoding of $A$) if
$\displaystyle\big{|}\big{|}\,A-\alpha\,(\left\langle{0^{a}}\right|\otimes
I)\,{\mathcal{U}}_{A}\,(\left|{0^{a}}\right\rangle\otimes
I)\,\big{|}\big{|}\leq\varepsilon$ (9)
which can be expressed also as:
$\displaystyle{\mathcal{U}}_{A}=\left(\begin{array}[]{cc}\widetilde{A}/\alpha&*\\\
*&*\end{array}\right)\quad\mathrm{with}~{}\left|\left|{\widetilde{A}-A}\right|\right|\leq\varepsilon\,,$
(12)
where the asterisks ($*$) denote arbitrary matrix blocks of appropriate
dimensions.
We call $\alpha$ the _normalization factor_ of the matrix-block-encoding and
we say in the special case where $\alpha=1$ that ${\mathcal{U}}_{A}$ is a
_normalised_ matrix-block-encoding of $A$.
A technique introduced by Childs allows one to implement a $(d,1,0)$-matrix-block-
encoding of $A$, where the normalisation constant $d$ is equal to the sparsity
of $A$, using only a constant number of accesses to ${\mathcal{P}}_{A}$ and
${\mathcal{O}}\big{(}\mathrm{poly}(n)\big{)}$ extra elementary gates, see Ref.
[5] and references therein. In short, we have the reduction:
$\displaystyle{\mathcal{P}}_{A}\ \Longrightarrow\ {\mathcal{U}}_{A}$ (13)
where the arrow means that having access to an oracle of the first type allows
one to efficiently implement an oracle of the second type.
We also note that other access models to $A$ have been considered in the
literature; for example in Ref. [36] it is assumed that it is possible to
efficiently prepare quantum states that are proportional to each one of the
columns of $A$.
### 2.4 Quantum linear systems in non-oracular settings
We consider in Section 5 (and also briefly in Section 4.3) a case that we call
the Sum-of-Hamiltonians QLS (Sum-QLS) problem, which is not formulated in an
oracular setting but is based, instead, on a classical description of $A$
and b. In order to obtain efficient Sum-QLS solving algorithms, it is thus
necessary that the descriptions of $A$ and b are compact, depending at most on
${\mathcal{O}}\big{(}\mathrm{poly}(n)\big{)}$ real or complex parameters.
For the known term vector b, we will simply assume that it is a sparse vector
in the computational basis, with at most
${\mathcal{O}}\big{(}\mathrm{poly}(n)\big{)}$ non-zero entries, whose positions
are also known. Hence, a fully explicit classical description of b can be
provided and this also enables efficiently implementing a state preparation
circuit ${\mathcal{U}}_{\textbf{b}}$.
For the coefficient matrix $A$ we give a more implicit description: the
entries of $A$ are not specified one-by-one (which would be inefficient, as
the matrix size $N\in\Theta(2^{n})$ is assumed to be very large) but rather
can be computed from only a relatively small set of parameters, scaling
polynomially in the number of qubits. Specifically, we assume that $A$ is
given as the sum of polynomially many local Hamiltonian terms:
$\displaystyle A=\sum_{j=1}^{J}H_{(j)}\qquad\forall j\ H_{(j)}\ \textup{is
positive (semi)-definite,}$ (14)
where the number of terms is $J\in{\mathcal{O}}\big{(}\mathrm{poly}(n)\big{)}$
and each Hamiltonian $H_{(j)}$ acts on a small number of qubits, namely, on at
most ${\mathcal{O}}\big{(}\log(n)\big{)}$ qubits. This case has been
previously considered in Ref. [16].
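A minimal illustration of such an implicit description, with 2-qubit terms $H_{(j)}=g_{j}g_{j}^{\dagger}$ embedded at consecutive positions (dense matrices are formed here only for checking; in the actual setting $N\in\Theta(2^{n})$ and $A$ is never constructed explicitly):

```python
import numpy as np

rng = np.random.default_rng(5)

def embed(h, pos, n):
    """Embed a local term h acting on qubits [pos, pos+k) into n qubits."""
    k = int(np.log2(h.shape[0]))
    return np.kron(np.kron(np.eye(2**pos), h), np.eye(2**(n - pos - k)))

# A described implicitly as a sum of J local PSD terms, as in Eq. (14):
# here 2-qubit terms h_j = g_j g_j^T embedded at consecutive positions.
n, J = 4, 3
terms = []
for j in range(J):
    g = rng.standard_normal((4, 4))
    terms.append(embed(g @ g.T, j, n))   # each term is PSD and local

A = sum(terms)
assert A.shape == (2**n, 2**n)
assert np.all(np.linalg.eigvalsh(A) >= -1e-10)   # A is positive semi-definite
```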
## 3 Query complexity lower bound
In this section we prove an $\Omega(\kappa)$ lower bound on the runtime of QLS
which is alternative to the ones given by HHL in Ref. [1]. The main innovation
we introduce is that our lower bound applies also to the PD-QLS case, while
the proofs given by HHL only yield a $\widetilde{\Omega}(\sqrt{\kappa})$ lower
bound when specialised to PD matrices. More precisely, we have the following
result.
###### Proposition 6 (Query complexity lower bound).
Consider oracular quantum algorithms that solve the PD-QLS problem as
presented in Definition 2 for different access models to $A$ and b. Namely,
access to b is given via a state preparation oracle
${\mathcal{U}}_{\textbf{b}}$ (Definition 3), while access to $A$ is given
either via a sparse-matrix oracle ${\mathcal{P}}_{A}$ (Definition 4) or via a
matrix-block-encoding ${\mathcal{U}}_{A}$ (Definition 5). Then, PD-QLS solving
algorithms reaching a constant precision $\varepsilon\in{\mathcal{O}}(1)$ have
query complexities
$Q[{\mathcal{U}}_{\textbf{b}}],Q[{\mathcal{U}}_{A}],Q[{\mathcal{P}}_{A}]$ all
in $\Omega\big{(}\min(\kappa,N)\big{)}$.
The proof of these lower bounds is rather technical and can be found in
Appendix A. We now present a weaker result that, however, can be easily proven
as a consequence of the optimality of Grover search [37]; namely, we show that
PD-QLS solving has a linear scaling in $\kappa$ for all $\kappa\leq\sqrt{N}$.
###### Proposition 7.
Consider a PD-QLS problem as presented in Definition 2 and suppose that access
to $A$ is given by a sparse-matrix oracle ${\mathcal{P}}_{A}$ (Definition 4),
with no assumption on the access model for b. Then, a quantum algorithm that
solves the QLS up to any constant precision $\varepsilon\in[0,1)$ must make
$\Omega\big{(}\min(\kappa,\sqrt{N})\big{)}$ accesses to ${\mathcal{P}}_{A}$.
###### Proof.
Consider the search problem of finding an element $j\in\mathcal{S}$, where
${\mathcal{S}}\subseteq\\{1,\ldots,N\\}$ is a subset containing $M$ elements.
The membership of an element $j$ in $\mathcal{S}$ is encoded as a quantum
oracle ${\mathcal{P}}_{\mathcal{S}}$ which flips an ancilla qubit if
$j\in{\mathcal{S}}$ and leaves the ancilla unchanged if
$j\notin{\mathcal{S}}$.
Consider, next, a matrix $A$ that is diagonal (and thus 1-sparse) having
entries
$\displaystyle
A_{j,j}\;=\;\begin{cases}\alpha=\sqrt{\frac{N-M}{N}}&\mathrm{if}~{}j\notin{\mathcal{S}}\\\
\beta=\sqrt{\frac{M}{N}}&\mathrm{if}~{}j\in{\mathcal{S}}\,.\end{cases}$ (15)
The sparse-matrix oracle
${\mathcal{P}}_{A}=({\mathcal{P}}_{A}^{pos},{\mathcal{P}}_{A}^{val})$ for this
diagonal matrix $A$ can be implemented with exactly one access to the
membership oracle ${\mathcal{P}}_{\mathcal{S}}$. In fact,
${\mathcal{P}}_{A}^{pos}$ can be implemented without any access to
${\mathcal{P}}_{\mathcal{S}}$, since it is known that the non-zero entries are
on the diagonal, while ${\mathcal{P}}_{A}^{val}$ can be implemented with a
single access to ${\mathcal{P}}_{\mathcal{S}}$, assuming that $M$ and $N$ are
known: by definition we have
${\mathcal{P}}_{\mathcal{S}}\left|{j,0}\right\rangle=\left|{j,0}\right\rangle$
if $j\notin{\mathcal{S}}$ and
${\mathcal{P}}_{\mathcal{S}}\left|{j,0}\right\rangle=\left|{j,1}\right\rangle$
if $j\in{\mathcal{S}}$, thus it is sufficient to apply on the ancilla system
the transformation $\left|{0}\right\rangle\mapsto\left|{\alpha}\right\rangle$
and $\left|{1}\right\rangle\mapsto\left|{\beta}\right\rangle$, where the
quantum register contains a binary representation of the numbers $\alpha$ and
$\beta$, to implement ${\mathcal{P}}_{A}^{val}$.
Now notice that $A$ is a matrix having condition number
$\kappa=\frac{\alpha}{\beta}=\sqrt{\frac{N-M}{M}}\in\Theta\Big{(}\sqrt{\frac{N}{M}}\,\Big{)}$
assuming $M\leq N/2$. Moreover, $A^{-1}$ is also diagonal, with entries
$\displaystyle(A^{-1})_{j,j}\;=\;\begin{cases}\sqrt{\frac{N}{N-M}}&\text{if}~{}j\notin{\mathcal{S}}\\\
\sqrt{\frac{N}{M}}&\text{if}~{}j\in{\mathcal{S}}\,.\end{cases}$ (16)
Solving the QLS for the known-term vector
$\left|{\textbf{b}}\right\rangle=\left|{\boldsymbol{1}_{N}}\right\rangle=\frac{1}{\sqrt{N}}\sum_{j=1}^{N}\left|{j}\right\rangle$
yields
$\displaystyle\left|{\textbf{x}}\right\rangle\;=\;\frac{A^{-1}\left|{\boldsymbol{1}_{N}}\right\rangle}{\left|\left|{A^{-1}\left|{\boldsymbol{1}_{N}}\right\rangle}\right|\right|}$ (17)
$\displaystyle\hphantom{\left|{\textbf{x}}\right\rangle}\;=\;\frac{1}{\sqrt{2N}}\bigg{(}\sqrt{\frac{N}{N-M}}\sum_{j\notin{\mathcal{S}}}\left|{j}\right\rangle+\sqrt{\frac{N}{M}}\sum_{j\in{\mathcal{S}}}\left|{j}\right\rangle\bigg{)}$ (18)
$\displaystyle\hphantom{\left|{\textbf{x}}\right\rangle}\;\equiv\;\frac{1}{\sqrt{2}}\Big{(}\left|{j\notin{\mathcal{S}}}\right\rangle+\left|{j\in{\mathcal{S}}}\right\rangle\Big{)},$ (19)
where in the last line we have introduced the normalised vectors
$\left|{j\notin{\mathcal{S}}}\right\rangle:=\frac{1}{\sqrt{N-M}}\sum_{j\notin{\mathcal{S}}}\left|{j}\right\rangle$
and
$\left|{j\in{\mathcal{S}}}\right\rangle:=\frac{1}{\sqrt{M}}\sum_{j\in{\mathcal{S}}}\left|{j}\right\rangle$.
Measuring $\left|{\textbf{x}}\right\rangle$ in the computational basis
therefore solves the search problem with probability $1/2$.
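As a classical sanity check of this reduction, one can build the diagonal matrix $A$ of Eq. (15) for small illustrative values of $N$, $M$ and a chosen marked set ${\mathcal{S}}$ (all parameters below are assumptions of the sketch, not values from the text), and verify both the condition number and the $1/2$ success probability:

```python
import numpy as np

# Illustrative sizes and marked set (assumed for this sketch)
N, M = 8, 2
S = {2, 5}                      # the marked set, |S| = M
alpha = np.sqrt((N - M) / N)    # diagonal entry for j not in S
beta = np.sqrt(M / N)           # diagonal entry for j in S

d = np.array([beta if j in S else alpha for j in range(N)])
A = np.diag(d)

# Condition number kappa = alpha/beta = sqrt((N - M)/M)
kappa = np.linalg.cond(A)
assert np.isclose(kappa, np.sqrt((N - M) / M))

# Solve A x = 1_N / sqrt(N) and normalise: this is the solution state |x>
b = np.ones(N) / np.sqrt(N)
x = np.linalg.solve(A, b)
x /= np.linalg.norm(x)

# Probability of sampling an index in S in the computational basis
p_S = sum(x[j] ** 2 for j in S)
assert np.isclose(p_S, 0.5)
```

The same computation goes through for any $M\leq N/2$, with the marked indices always carrying exactly half of the probability mass.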
Suppose now that there exists a quantum algorithm $\mathcal{A}$ that solves the QLS
problem exactly ($\varepsilon=0$) for PD matrices using
$\mathfrak{o}\big{(}\kappa\big{)}$ queries to the oracle ${\mathcal{P}}_{A}$.
Applying $\mathcal{A}$ to the diagonal matrix $A$ and
$\left|{\textbf{b}}\right\rangle=\left|{\boldsymbol{1}_{N}}\right\rangle$
would then require only $\mathfrak{o}(\sqrt{N/M}\,)$ calls to
${\mathcal{P}}_{A}$, and hence to ${\mathcal{P}}_{\mathcal{S}}$, in order to
produce $\left|{\textbf{x}}\right\rangle$. This means that using $\mathcal{A}$
we can solve an unstructured search problem using $\mathfrak{o}(\sqrt{N/M}\,)$
queries to ${\mathcal{P}}_{\mathcal{S}}$, violating the optimality of Grover
search.
Next, suppose that the algorithm $\mathcal{A}$ is an approximate PD-QLS
solver, i.e. that it produces a state $\rho_{\textbf{x}}$ such that
$\big{|}\big{|}\,\rho_{\textbf{x}}-\left|{\textbf{x}}\right\rangle\\!\\!\left\langle{\textbf{x}}\right|\big{|}\big{|}_{\mathrm{Tr}}\leq\varepsilon$ for some
constant $\varepsilon<1/2$. Apply a projective measurement to
$\rho_{\textbf{x}}$ that projects it on the space spanned by
$\\{\left|{j}\right\rangle\\}_{j\in{\mathcal{S}}}$ with probability $p$ and
projects it onto the orthogonal subspace with probability $q=1-p$; in the
ideal case ($\varepsilon=0$) we would have $p=q=1/2$. By the operational
definition of the trace distance, the probability distribution $(p,q)$ must be
at most $\varepsilon$-distinguishable from $(1/2,1/2)$, i.e.
$\max\\{|p-1/2|,|q-1/2|\\}\leq\varepsilon$. Thus, the success probability is a
constant $p\geq 1/2-\varepsilon>0$.
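The robustness step can also be checked numerically: any density matrix within trace distance $\varepsilon$ of $\left|{\textbf{x}}\right\rangle\\!\left\langle{\textbf{x}}\right|$ yields measurement probabilities within $\varepsilon$ of the ideal ones. A minimal numpy sketch, where the sizes and the particular perturbation are illustrative assumptions:

```python
import numpy as np

# Ideal state |x> for N = 8, M = 2, S = {2, 5} (illustrative values)
N, M = 8, 2
S = [2, 5]
x = np.zeros(N)
x[S] = 1 / np.sqrt(2 * M)
mask = np.ones(N, bool); mask[S] = False
x[mask] = 1 / np.sqrt(2 * (N - M))
rho_ideal = np.outer(x, x)

# Perturb towards the maximally mixed state; compute the trace distance
eps_mix = 0.3
rho = (1 - eps_mix) * rho_ideal + eps_mix * np.eye(N) / N
tr_dist = 0.5 * np.abs(np.linalg.eigvalsh(rho - rho_ideal)).sum()

# Projector onto span{|j> : j in S}; success probability p = Tr(P rho)
P = np.zeros((N, N)); P[S, S] = 1
p = np.trace(P @ rho).real

# Measurement statistics can differ by at most the trace distance
assert abs(p - 0.5) <= tr_dist + 1e-12
```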
Finally, this argument can be extended to any constant precision
$\varepsilon<1$. It is sufficient to define a new diagonal matrix
$\widehat{A}$ by changing the values $\alpha$ and $\beta$ in Eq. (15), so that
the probability $\widehat{p}$ of finding an element $j\in{\mathcal{S}}$ when
measuring
$|\widehat{\textbf{x}}\rangle=|\widehat{A}^{-1}\boldsymbol{1}_{N}\rangle$
satisfies $\widehat{p}>1-\varepsilon$.
∎
Notice that in the proof the vector
$\left|{\textbf{b}}\right\rangle=\left|{\boldsymbol{1}_{N}}\right\rangle$ is
fixed and easy to produce and hence the access model for
$\left|{\textbf{b}}\right\rangle$ plays no role in our reduction. We also
remark that this proof can be straightforwardly modified to prove that the
operation of quantum matrix-vector multiplication (i.e., obtaining a state
proportional to $A\left|{\textbf{b}}\right\rangle$) must also have a linear
cost in $\kappa$. Moreover, since sparse oracle access ${\mathcal{P}}_{A}$
also allows one to efficiently implement a matrix-block-encoding
${\mathcal{U}}_{A}$ [5], the same reduction immediately rules out oracular
algorithms that use $\mathfrak{o}(\kappa)$ accesses to ${\mathcal{U}}_{A}$
(for $\kappa\leq\sqrt{N}$). Finally, a simple argument shows that a
$\Omega(\kappa)$ lower bound holds also for the
${\mathcal{U}}_{\textbf{b}}$-query complexity: an initial small difference
between two input states b and $\textbf{b}^{\prime}$ can be magnified
$\kappa$-fold in the corresponding outputs
$\left|{A^{-1}\textbf{b}}\right\rangle$ and
$\left|{A^{-1}\textbf{b}^{\prime}}\right\rangle$, which is impossible unless
one uses at least $\kappa$ accesses to ${\mathcal{U}}_{\textbf{b}}$ [38].
## 4 Method based on low-degree polynomial approximations of $A^{-1}$
We start this section by giving a few details on how to use the Quantum Signal
Processing (QSP) method to implement polynomial functions of a matrix that we
can access through a matrix-block-encoding (Section 4.1), and then we provide
the explicit definition of the polynomials approximating the inverse
function (Section 4.2). Next (Section 4.3) we discuss two cases where we can
implement a matrix-block-encoding of $B=I-\eta\,A$, as required to achieve a
quadratic reduction in the degree of the approximating polynomials. Finally
(Section 4.4) we discuss the cost of matrix-vector multiplication and
summarise the cost of PD-QLS solving via this approach.
### 4.1 The quantum signal processing method
We employ QSP as a tool to implement ${\mathcal{U}}_{A^{-1}}$, a matrix-block-
encoding of $A^{-1}$, assuming that we have access to a normalised matrix-
block-encoding ${\mathcal{U}}_{B}$ of
$\displaystyle B:=I-\eta\,A\,$ (20)
for some $\eta>0$. We assume that the spectrum of $B$ is contained in the
interval $\mathcal{D}_{B}=[-1,1-\eta/\kappa]$, where $\kappa$ is an upper
bound to the condition number of $A$. The QSP method can be stated, already
specialised to our case of interest, as follows [25, Theorem 56].
###### Theorem 8.
Consider a $(\beta,b,\epsilon)$-block-encoding ${\mathcal{U}}_{B}$ of a
Hermitian matrix $B$ and let $P(x)$ be a degree-$\ell$ real polynomial with
$|P(x)|\leq 1/2$ for all $x\in[-1,1]$. Then there is a quantum circuit
${\mathcal{U}}_{P(B/\beta)}$ which is a
$(1,b+2,4\ell\sqrt{\epsilon/\beta})$-block-encoding of $P(B/\beta)$, and
requires $\ell$ applications of $U_{B}$ and $U_{B}^{\dagger}$, a single
application of controlled-$U_{B}$ and ${\mathcal{O}}\big{(}(n+b)\ell\big{)}$
elementary quantum gates. Moreover, the same result holds for polynomials
satisfying $|P(x)|\leq 1$ if $P(x)$ has defined parity, i.e., $P(-x)=P(x)$ or
$P(-x)=-P(x)$.
Importantly, there are explicit classical algorithms that can efficiently
compute a parametrization of $\,{\mathcal{U}}_{P(B/\beta)}$ for any polynomial
$P$ and then compile an explicit quantum circuit that implements it, see Refs.
[39, 40] for the current state-of-the-art.
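Classically, the effect that QSP achieves, applying a polynomial $P$ to a block-encoded Hermitian matrix, is a spectral mapping, $P(B)=V\,P(\Lambda)\,V^{\dagger}$. The following minimal numerical illustration (the matrix and the even-parity polynomial are chosen arbitrarily) checks this identity:

```python
import numpy as np

# Random Hermitian (here real symmetric) matrix with spectrum in [-1, 1]
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 6))
B = (H + H.T) / 2
B /= np.linalg.norm(B, 2)            # rescale by the spectral norm

# P(x) = 0.1 + 0.3 x^2, an even-parity polynomial as in Theorem 8
P = np.polynomial.polynomial.Polynomial([0.1, 0.0, 0.3])

# Matrix polynomial two ways: directly on B, and via the spectral mapping
PB_direct = 0.1 * np.eye(6) + 0.3 * (B @ B)
lam, V = np.linalg.eigh(B)
PB_spectral = V @ np.diag(P(lam)) @ V.T
assert np.allclose(PB_direct, PB_spectral)
```

QSP realises this same spectral mapping coherently, without ever diagonalising $B$.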
We now further motivate the need to choose $\beta=1$, i.e., that the matrix-
block-encoding of $B$ is normalised; equivalently, the part proportional to
the identity in the definition (20) must not be rescaled. We remind that, as
presented in Section 1.3, our goal is to implement a polynomial $P(B/\beta)$
approximating $A^{-1}$, that is
$\displaystyle P(B/\beta)\approx
f(B)=\frac{1}{I-B}=\frac{1}{I-(I-\eta\,A)}=\frac{1}{\eta}A^{-1}.$ (21)
Note, however, that in this expression we actually have
$P(x)\approx\frac{1}{1-\beta x}$, a function with a singularity at
$x=1/\beta\leq 1$. As a consequence of Bernstein’s inequality [29], this
function admits polynomial approximations with quadratically better degree
only when the singularity is at $x=1$, i.e. when we have $\beta=1$.
### 4.2 Polynomial approximation of $1/(1-x)$
In this section we show analytical polynomial approximations of the function
$f(x)=1/(1-x)$ so that we can use the QSP method to implement it for the
matrix $B=I-\eta\,A$ as in Eq. (20). To keep the notation simple we assume
$\eta=1$ and that the spectrum of $A$ is contained in
$\mathcal{D}_{A}=\left[\,\frac{1}{\kappa},\,2\right]$, while we can account
for any value $\eta<1$ simply by rescaling $\kappa$ to $\kappa/\eta$.
Consequently, it is only necessary for the polynomial $P(x)$ to be a good
approximation of our target function in the domain
${\mathcal{D}}_{B}=\left[\,-1,\,1\\!-\\!\frac{1}{\kappa}\,\right]$.
Our starting point will be the polynomial
$\hat{{\mathcal{T}}}_{\ell,\kappa}(x)$, a shifted and rescaled version of
${\mathcal{T}}_{\ell}(x)$, the $\ell$-degree Chebyshev polynomial of the first
kind,
$\displaystyle\hat{{\mathcal{T}}}_{\ell,\kappa}(x):=\frac{{\mathcal{T}}_{\ell}\left(\frac{x+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}\right)}{{\mathcal{T}}_{\ell}\left(\frac{1+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}\right)}\,,$
(22)
which is the solution of the following minimax problem (see Ref. [3, Theorem
6.25]):
$\displaystyle\hat{{\mathcal{T}}}_{\ell,\kappa}(x)=\underset{\begin{subarray}{c}P\in\mathds{R}_{\ell},\\\
P(1)=1\end{subarray}}{\mathrm{argmin}}\max_{x\in[-1,1-\frac{1}{\kappa}]}\big{|}P(x)\big{|}\,.$
(23)
Chebyshev polynomials satisfy the property that
$|{\mathcal{T}}_{\ell}(x)|\leq 1$ for all $\ell\in\mathds{N}$ and all
$x\in[-1,+1]$, while
${\mathcal{T}}_{\ell}(1+\delta)\geq\frac{1}{2}e^{\ell\sqrt{\delta}}$ for
$0\leq\delta\leq 1/6$, see e.g. [12, Lemma 13]. Using the changes of variables
$\displaystyle
y(x):=\frac{x+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}\qquad\quad\delta:=\frac{1+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}-1=\frac{1}{\kappa-1/2}$
(24)
we can rewrite the definition in Eq. (22) as
$\displaystyle\hat{{\mathcal{T}}}_{\ell,\kappa}(x)=\frac{{\mathcal{T}}_{\ell}\big{(}y(x)\big{)}}{{\mathcal{T}}_{\ell}\big{(}1+\delta\big{)}}\,.$
(25)
Then, the numerator satisfies
$\big{|}{\mathcal{T}}_{\ell}\big{(}y(x)\big{)}\big{|}\leq 1$ for all
$x\in{\mathcal{D}}_{B}=[-1,1-\frac{1}{\kappa}]$, while the denominator is
${\mathcal{T}}_{\ell}\big{(}1+\delta\big{)}\geq\frac{1}{2}e^{\ell\sqrt{\delta}}$.
This means that it is sufficient to choose
$\ell\geq\frac{1}{\sqrt{\delta}}\log\\!\left(\frac{2}{\epsilon}\right)$, i.e.
$\ell\in\Theta\big{(}\sqrt{\kappa}\log(1/\epsilon)\big{)}$, to obtain
$\big{|}\hat{{\mathcal{T}}}_{\ell,\kappa}(x)\big{|}\leq\epsilon$ on the
interval ${\mathcal{D}}_{B}$.
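These bounds are easy to verify numerically. The sketch below (with illustrative values of $\kappa$ and $\epsilon$, chosen only for the test) builds $\hat{{\mathcal{T}}}_{\ell,\kappa}$ with $\ell=\big{\lceil}\log(2/\epsilon)/\sqrt{\delta}\,\big{\rceil}$ and checks that it stays below $\epsilon$ on the whole interval ${\mathcal{D}}_{B}$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative parameters (assumed for this sketch)
kappa, eps = 50.0, 1e-3
delta = 1.0 / (kappa - 0.5)
ell = int(np.ceil(np.log(2 / eps) / np.sqrt(delta)))

def T(ell, x):
    # Chebyshev polynomial of the first kind, degree ell
    return C.chebval(x, [0] * ell + [1])

def T_hat(x):
    # Shifted and rescaled Chebyshev polynomial of Eq. (22)
    y = (x + 1 / (2 * kappa)) / (1 - 1 / (2 * kappa))
    y1 = (1 + 1 / (2 * kappa)) / (1 - 1 / (2 * kappa))
    return T(ell, y) / T(ell, y1)

# |T_hat| <= eps on D_B = [-1, 1 - 1/kappa]
xs = np.linspace(-1, 1 - 1 / kappa, 2001)
assert np.max(np.abs(T_hat(xs))) <= eps
```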
We then use the following $(2\ell-1)$-degree polynomial as our approximation
of $f(x)=\frac{1}{1-x}$:
$\displaystyle
P_{2\ell-1,\kappa}(x):=\frac{1}{1-x}\left[1-\hat{{\mathcal{T}}}_{\ell,\kappa}(x)\right]^{2}.$
(26)
To see that $P_{2\ell-1,\kappa}(x)$ is indeed a polynomial, note that
$[1-\hat{{\mathcal{T}}}_{\ell,\kappa}(x)]$ has a root at $x=1$, so that its
square is exactly divisible by $1-x$ and moreover $P_{2\ell-1,\kappa}(1)=0$.
This last property is useful because it allows us to implement the
pseudo-inverse $A^{+}$ for a singular matrix $A$; i.e., in the case in which
$A$ has some eigenvalues that are equal to $0$ (equivalently, $B=I-A$ has
eigenvalues equal to $1$) these will be mapped to $0$ and, if moreover all the
non-zero eigenvalues are separated from zero by a gap $1/\kappa$, then the
polynomial in Eq. (26) is a close approximation of the mapping
$(1-\lambda)\mapsto 1/\lambda$ for all $\lambda\neq 0$ in the spectrum of $A$.
We choose the
degree $\ell\in\Theta\big{(}\sqrt{\kappa}\log(\kappa/\varepsilon)\big{)}$ in
such a way that
$\big{|}\hat{{\mathcal{T}}}_{\ell,\kappa}(x)\big{|}\leq\varepsilon/(3\kappa)$
for all $x\in{\mathcal{D}}_{B}$ and thus we obtain from Eq. (26)
$\displaystyle\left|P_{2\ell-1,\kappa}(x)-\frac{1}{1-x}\right|\;\leq\;\kappa\left(2\,\frac{\varepsilon}{3\kappa}+\frac{\varepsilon^{2}}{9\kappa^{2}}\right)\;\leq\;\varepsilon\qquad\quad\forall
x\in{\mathcal{D}}_{B}\,,$ (27)
that is, we have an $\varepsilon$-close polynomial approximation on the
interval ${\mathcal{D}}_{B}=[-1,1-\frac{1}{\kappa}]$. This directly implies
$\left|\left|{P_{2\ell-1,\kappa}(B)-A^{-1}}\right|\right|\leq\varepsilon$ in
operator norm.
Finally, the polynomial in Eq. (26) has to be normalised so that it becomes
compatible with the QSP method. Therefore we define
$\displaystyle\hat{P}_{2\ell-1,\kappa}(x):=\frac{P_{2\ell-1,\kappa}(x)}{K}\,,\qquad\mathrm{where}~{}K:=\,2\\!\\!\max_{x\in[-1,+1]}|P_{2\ell-1,\kappa}(x)|\;.$
(28)
With this definition we have $|\hat{P}_{2\ell-1,\kappa}(x)|\leq\frac{1}{2}$ for
$x\in[-1,+1]$, as required. The normalisation constant satisfies
$K\in\Theta(\kappa)$ for $\ell\in\Omega(\sqrt{\kappa})$, see Appendix B for
the proof of this bound. In conclusion, the QSP method allows us to implement
a $(K,b+2,\varepsilon)$-matrix-block-encoding of $A^{-1}$, where $b$ is the
number of ancilla qubits required in the block-encoding of $B$.
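The approximation quality of $P_{2\ell-1,\kappa}$ and the size of the normalisation constant $K$ can be probed numerically. The sketch below uses illustrative values of $\kappa$ and of the target error; the loose numerical bracket on $K$ is an assumption made only for the test, not the Appendix B bound:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative parameters (assumed for this sketch)
kappa, eps = 30.0, 1e-2
delta = 1.0 / (kappa - 0.5)
# Degree chosen so that |T_hat| <= eps/(3 kappa) on the domain
ell = int(np.ceil(np.log(6 * kappa / eps) / np.sqrt(delta)))

def T_hat(x):
    # Shifted and rescaled Chebyshev polynomial of Eq. (22)
    y = (x + 1 / (2 * kappa)) / (1 - 1 / (2 * kappa))
    y1 = (1 + 1 / (2 * kappa)) / (1 - 1 / (2 * kappa))
    c = [0] * ell + [1]
    return C.chebval(y, c) / C.chebval(y1, c)

def P(x):
    # P_{2l-1,kappa}(x) of Eq. (26); the singularity at x = 1 is removable
    return (1 - T_hat(x)) ** 2 / (1 - x)

# eps-close approximation of 1/(1-x) on D_B = [-1, 1 - 1/kappa]
xs = np.linspace(-1, 1 - 1 / kappa, 4001)
assert np.max(np.abs(P(xs) - 1 / (1 - xs))) <= eps

# Normalisation constant K = 2 max |P| over [-1, 1] is of order kappa
xs_full = np.linspace(-1, 1 - 1e-9, 4001)
K = 2 * np.max(np.abs(P(xs_full)))
assert kappa <= K <= 50 * kappa   # loose numerical bracket, for this test only
```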
### 4.3 Implementing normalised matrix-block-encodings of $B=I-\eta\,A$
In this subsection we show two methods that, under different assumptions,
allow us to efficiently implement a normalised matrix-block-encoding of
$B=I-\eta\,A$. Preliminarily, we remark that this is a non-trivial task, as we
argue with the following three considerations.
First, any Hermitian matrix $M\in\mathds{C}^{N\times N}$ satisfying
$\left|\left|{M}\right|\right|\leq 1$ can be implemented as a normalised sub-
block of a unitary, since an explicit construction is given by [28]
$\displaystyle{\mathcal{U}}_{M}=\left(\begin{array}[]{cc}M&-\sqrt{I-M^{2}}\\\
\sqrt{I-M^{2}}&M\end{array}\right).$ (31)
Implementing the circuit corresponding to ${\mathcal{U}}_{M}$ in general
requires up to ${\mathcal{O}}(N^{2})$ elementary quantum gates [42] and is
thus inefficient; however, efficient constructions are possible in specialised
cases.
Second, standard quantum methods allow us to efficiently implement _sub-
normalised_ matrix-block-encodings of $B$. As a first example, Childs’ quantum
walk operator uses ${\mathcal{O}}(1)$ accesses to a sparse-matrix oracle
${\mathcal{P}}_{B}$ to implement ${\mathcal{U}}_{M}$ for $M=B/d$, where $d$ is
the column-sparsity of $B$ [5]. A second example is to assume that we have
access to ${\mathcal{U}}_{A}$, a block-encoding of $A$ with
$\left|\left|{A}\right|\right|\leq 1$, and then use the LCU lemma [26] to
implement a linear combination of $I$ and $-{\mathcal{U}}_{A}$, yielding a
normalised block-encoding of $p\,I-(1-p)A\equiv p\,B$, for some $p\in(0,1)$.
However, any “black-box” method that aims at amplifying a block-encoding of
$B/\beta$ (with $\beta>1$) to a normalised block-encoding of $B$ is in general
inefficient. This can be proven, for instance, applying the lower bound in
Ref. [25, Theorem 73] to the function $f(x)=\beta\,x$.
Third, it is currently an open problem whether it is possible to
implement a normalised matrix-block-encoding ${\mathcal{U}}_{M}$ given access
to a sparse-matrix oracle ${\mathcal{P}}_{M}$ with
$\left|\left|{M}\right|\right|\leq 1$. In the absence of general results of
this kind, we turn to developing specialised methods to efficiently
implement a normalised block-encoding ${\mathcal{U}}_{B}$, with
$B=I-\eta\,A$, in the cases where $(i)$ $A$ is diagonally dominant or $(ii)$
$A$ is the sum of positive semi-definite local Hamiltonians.
#### 4.3.1 Diagonally-dominant coefficient matrix
In this section, we implement a normalised block-encoding of $B=I-A$ employing
the method described in Ref. [25, Lemma 47], which we report here in Lemma 9.
Our construction requires the preparation of some families of states
$\\{\left|{\psi_{i}}\right\rangle\\}_{i},\\{\left|{\phi_{j}}\right\rangle\\}_{j}$
that are well-defined only when $A$ is diagonally-dominant, while attempts at
extending the method to $B=I-\eta\,A$ for non-diagonally-dominant PD matrices
result in non-normalisable states for any $\eta>0$. We also remark that the
problem of solving linear systems involving diagonally dominant PD matrices
(which includes the noteworthy class of Laplacian matrices [43]) is well
studied in classical linear algebra: for these classes of matrices there are
classical algorithms that can solve a linear system substantially faster than
what is possible for more general matrices [44].
###### Lemma 9.
Suppose that we have access to two “state preparation” unitaries $U_{L}$ and
$U_{R}$ (left and right) acting on $a+s$ qubits such that
$\displaystyle U_{L}:\left|{0^{a}}\right\rangle\left|{i}\right\rangle\,$
$\displaystyle\mapsto\left|{\psi_{i}}\right\rangle$ (32) $\displaystyle
U_{R}:\left|{0^{a}}\right\rangle\left|{j}\right\rangle$
$\displaystyle\mapsto\left|{\phi_{j}}\right\rangle,$ (33)
for any $i,j\in\\{1,\ldots,2^{s}\\}$ and for some families of states
$\\{\left|{\psi_{i}}\right\rangle\\}_{i}$ and
$\\{\left|{\phi_{j}}\right\rangle\\}_{j}$. Then, it is immediate to see that
$U_{L}^{\dagger}U_{R}$ is a $(1,a,0)$-matrix-block-encoding of the Gram matrix
$H$ such that $H_{ij}=\langle{\psi_{i}}|{\phi_{j}}\rangle$.
Let $A$ be a Hermitian $d$-sparse diagonally-dominant matrix, i.e.
$\sum_{j\neq i}|A_{ij}|\leq A_{ii}\leq 1$ for all $i$. By the Gershgorin circle
theorem [41], diagonal dominance of a Hermitian matrix is sufficient to
guarantee positive semi-definiteness, i.e. $\lambda_{\min}(A)\geq 0$, and both
$\lambda_{\min}(A)=0$ and $\lambda_{\min}(A)\neq 0$ are possible. (If $A$ is
_strictly_ diagonally dominant we have
$\lambda_{\min}(A)\geq\min_{i}\big{\\{}A_{ii}-\sum_{j\neq
i}|A_{ij}|\big{\\}}>0$ and then the condition number is immediately bounded by
$\kappa(A)\leq 1/\lambda_{\min}(A)$ when $\left|\left|{A}\right|\right|\leq
1$.) Consider then the states
$\displaystyle\left|{\psi_{i}}\right\rangle:=\sum_{l\in\mathrm{supp}(A_{i})}\sqrt{\delta_{il}-A_{il}}\left|{l}\right\rangle\;+\;\sqrt{r_{i}}\left|{N+1}\right\rangle$
(34)
$\displaystyle\left|{\phi_{j}}\right\rangle:=\sum_{k\in\mathrm{supp}(A_{j})}\sqrt{\vphantom{A_{il}}\smash[b]{\delta_{jk}-A_{jk}^{*}}}\left|{k}\right\rangle\;+\;\sqrt{r_{j}}\left|{N+1}\right\rangle$
(35)
where $\mathrm{supp}(A_{i})$ denotes the positions of the (at most) $d$ non-zero
entries of the column vector $A_{i}$ and the value $0\leq r_{i}\leq 1$ can be
computed so that the states are normalised, since we have
$\displaystyle|r_{i}|=1-\sum_{l\in\mathrm{supp}(A_{i})}\left|\sqrt{\delta_{il}-A_{il}}\right|^{2}=A_{ii}-\sum_{l\neq
i}\big{|}A_{il}\big{|}\geq 0$ (36)
where we have used $A_{ii}\leq 1$ and the diagonal dominance of $A$. We then
define the following state-preparation unitaries:
$\displaystyle U_{L}:\left|{0^{a}}\right\rangle\left|{i}\right\rangle$
$\displaystyle\;\mapsto\;|0^{b}\rangle\left|{i}\right\rangle\left|{\psi_{i}^{*}}\right\rangle$
(37) $\displaystyle U_{R}:\left|{0^{a}}\right\rangle\left|{j}\right\rangle$
$\displaystyle\;\mapsto\;|0^{b}\rangle\left|{\phi_{j}}\right\rangle\left|{j}\right\rangle,$
(38)
for certain numbers $a$ and $b$ of ancilla qubits initialised in
$\left|{0}\right\rangle$, and where $\left|{\psi_{i}^{*}}\right\rangle$ is the
complex conjugate of the state $\left|{\psi_{i}}\right\rangle$ w.r.t. the
computational basis. Then, $U_{L}^{\dagger}U_{R}$ is a normalised encoding of
the matrix $B=I-A$, as one can verify using $A_{ji}^{*}=A_{ij}$:
$\displaystyle
B_{ij}=\langle{0^{b},i,\psi_{i}^{*}}|{0^{b},\phi_{j},j}\rangle=\underbrace{\sqrt{\vphantom{A_{il}}\smash[b]{\delta_{ji}-A_{ji}^{*}}}}_{\langle{i}|{\phi_{j}}\rangle}\underbrace{\sqrt{\delta_{ij}-A_{ij}}}_{\langle{\psi_{i}^{*}}|{j}\rangle}=\delta_{ij}-A_{ij}\;.$
(39)
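The construction of Eqs. (34)–(39) can be verified classically on a small real symmetric diagonally-dominant matrix (the example entries below are illustrative), checking that the states are normalised and that the surviving overlap term reproduces $B_{ij}=\delta_{ij}-A_{ij}$:

```python
import numpy as np

# Small illustrative real symmetric diagonally-dominant matrix
A = np.array([[0.9, 0.2, 0.1],
              [0.2, 0.8, 0.3],
              [0.1, 0.3, 0.7]])
N = A.shape[0]
assert np.linalg.eigvalsh(A).min() >= 0        # Gershgorin: dominance => PSD

# Residuals r_i of Eq. (36) are non-negative by diagonal dominance
r = np.diag(A) - (np.abs(A).sum(axis=1) - np.diag(A))
assert np.all(r >= 0)

# Amplitudes of psi_i (Eq. (34)); off-diagonal entries become imaginary
D = (np.eye(N) - A).astype(complex)            # entries delta_il - A_il
psi = np.zeros((N, N + 1), dtype=complex)
psi[:, :N] = np.sqrt(D)                        # row i holds psi_i
psi[:, N] = np.sqrt(r)                         # extra coordinate |N+1>
phi = psi.copy()                               # A real symmetric: phi_j = psi_j

assert np.allclose(np.linalg.norm(psi, axis=1), 1.0)   # normalised states

# Only one term survives in Eq. (39): B_ij = <i|phi_j> <psi_i^*|j>
B = np.array([[phi[j, i] * psi[i, j] for j in range(N)] for i in range(N)])
assert np.allclose(B, np.eye(N) - A)
```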
The quantum circuit $U_{L}$ can be implemented efficiently (and similarly for
$U_{R}$), as we now show. We use $d$ calls to ${\mathcal{P}}_{A}^{pos}$ to
recover the values $j_{\nu}\equiv j(i,\nu)$ for $\nu\in\\{1,\ldots,d\\}$,
i.e., the positions of all the (potentially) non-zero entries of $A_{i}$. This
corresponds to implementing the following isometry (i.e., a unitary circuit
plus the possibility of adding ancillas):
$\displaystyle\left|{i}\right\rangle\overset{d\times{\mathcal{P}}_{\\!A}^{pos}}{\mapsto}\left|{i}\right\rangle\left|{j_{1},\ldots,j_{d}}\right\rangle.$
(40)
Next, we use $d$ calls to ${\mathcal{P}}_{A}^{val}$ to recover all the values
$A_{i,j}$, i.e.:
$\displaystyle\eqref{eq:comp1}\overset{d\times{\mathcal{P}}_{\\!A}^{val}}{\mapsto}\left|{i}\right\rangle\left|{j_{1},\ldots,j_{d}}\right\rangle\left|{A_{ij_{1}},\ldots,A_{ij_{d}}}\right\rangle.$
(41)
We then use reversible (classical) computation to calculate the numerical
values of all the amplitudes
$\boldsymbol{\psi}^{(i)}:=(\sqrt{-A_{ij_{1}}}^{\,*},\ldots,\sqrt{1-A_{ii}}^{\,*},\ldots,\sqrt{-A_{ij_{d}}}^{\,*},\sqrt{r_{i}}^{\,*})^{T}$:
$\displaystyle\eqref{eq:comp2}\overset{\mathrm{compute}}{\mapsto}\left|{i}\right\rangle\left|{j_{1},\ldots,j_{d}}\right\rangle\left|{A_{ij_{1}},\ldots,A_{ij_{d}}}\right\rangle|\boldsymbol{\psi}^{(i)}\rangle\,.$
(42)
Then, we use a general state preparation quantum circuit which, given a
classical description of the amplitudes of a $(d+1)$-dimensional quantum
state, effectively prepares the corresponding state:
$\displaystyle\eqref{eq:comp3}\overset{\mathrm{prepare}}{\mapsto}\left|{i}\right\rangle\left|{j_{1},\ldots,j_{d}}\right\rangle\left|{A_{ij_{1}},\ldots,A_{ij_{d}}}\right\rangle|\boldsymbol{\psi}^{(i)}\rangle\sum_{\nu=1}^{d+1}\psi_{\nu}^{(i)}\left|{\nu}\right\rangle.$
(43)
Next, we use a single call to ${\mathcal{P}}_{A}^{pos}$ in quantum
superposition to map
$\left|{i,\nu}\right\rangle\mapsto\left|{i,j(i,\nu)}\right\rangle=\left|{i,j_{\nu}}\right\rangle$
and thus we obtain
$\displaystyle\eqref{eq:comp4}\overset{{\mathcal{P}}_{\\!A}^{pos}}{\mapsto}\left|{i}\right\rangle\left|{j_{1},\ldots,j_{d}}\right\rangle\left|{A_{ij_{1}},\ldots,A_{ij_{d}}}\right\rangle|\boldsymbol{\psi}^{(i)}\rangle\sum_{\nu=1}^{d+1}\psi_{\nu}^{(i)}\left|{j_{\nu}}\right\rangle$
(44)
and now note that, adopting the definition $j(i,d+1):=N+1$, on the right-hand
side we have obtained
$\sum_{\nu=1}^{d+1}\psi_{\nu}^{(i)}\left|{j_{\nu}}\right\rangle=\left|{\psi_{i}^{*}}\right\rangle$,
the complex conjugate (w.r.t. the computational basis) of the state defined in
Eq. (34). Finally, we “uncompute” the intermediate registers
$\left|{j_{1},\ldots,j_{d}}\right\rangle$,
$\left|{A_{ij_{1}},\ldots,A_{ij_{d}}}\right\rangle$ and
$|\boldsymbol{\psi}^{(i)}\rangle$, mapping them to
$\left|{0^{b}}\right\rangle$ (where $b$ is the number of left-over ancillas),
performing the steps (40) to (43) in reverse. With a final swapping of
$\left|{i}\right\rangle$ and $\left|{0^{b}}\right\rangle$, we have implemented
the circuit $U_{L}$ given in Eq. (37).
We can now estimate the query and gate cost of implementing $U_{L}$ (and the
cost of implementing $U_{R}$ is the same). Going through the derivation, we
see that $4d+1$ oracle calls to
${\mathcal{P}}_{A}=({\mathcal{P}}_{A}^{pos},{\mathcal{P}}_{A}^{val})$ are
required, that is, the query complexity is
$Q[{\mathcal{P}}_{A}]\in{\mathcal{O}}(d)$. Regarding gate complexity, step
(42) requires ${\mathcal{O}}\big{(}d\,\mathrm{poly}(p)\big{)}$ Toffoli gates
(which are universal for classical reversible computation) to perform the
computation up to $p$ digits of precision; moreover, step (43) can be
performed using ${\mathcal{O}}(d)$ controlled-NOT gates and single-qubit rotations
employing the methods of Ref. [42]. In conclusion, treating the number of
digits of precision as a constant and the single-qubit rotations as exact,
both the query and the gate complexities are in ${\mathcal{O}}(d)$ and we thus
obtain the following result.
###### Proposition 10 (Normalised matrix-block-encoding, diagonally-dominant
case).
Suppose that we have access to a $d$-sparse diagonally-dominant PD matrix
$A\in\mathds{C}^{N\times N}$ via ${\mathcal{P}}_{A}$ as in Definition 4. Then
it is possible to implement a normalised block-encoding of $B=I-A$ with
${\mathcal{O}}(d)$ query and gate complexity, assuming exact single-qubit
rotations and that all arithmetic operations are performed with a constant
number of digits of precision.
We finally remark that, in some cases, it might be possible to implement
$U_{L}$ and $U_{R}$ with a query and gate complexity in
${\mathcal{O}}(\sqrt{d})$ instead of linear in $d$. First, we may assume
direct access to an oracle ${\mathcal{P}}_{\psi}$ that
returns the amplitudes $\psi_{\nu}^{(i)}$ (including the value
$\sqrt{r_{i}}$), instead of needing to compute these values from the non-zero
entries of $A_{i}$. Second, one can use an algorithm that generalises Grover
search to prepare the state $\left|{\psi_{i}^{*}}\right\rangle$ using
${\mathcal{O}}(\sqrt{d})$ accesses to ${\mathcal{P}}_{\psi}$ [45], and an even
more efficient implementation can be realised using the method of Ref. [46],
which avoids synthesising arithmetical operations and brings about an
improvement of two orders of magnitude over prior works for realistic levels
of precision.
#### 4.3.2 Sum of positive semi-definite Hamiltonians
We now consider the case where $A\in\mathds{C}^{N\times N}$ is given by the
sum of positive semi-definite Hamiltonian terms; i.e., we consider the case
$\displaystyle A=\sum_{j=1}^{J}H_{(j)}\qquad\forall j\ H_{(j)}\ \textup{is
positive semi-definite,}$ (45)
which is similar to the case presented in Section 5, but here we allow the
Hamiltonian terms to have eigenvalue zero. We assume that the number $J$ of
Hamiltonian terms scales polynomially in $n=\lceil\log_{2}N\rceil$ and that
each Hamiltonian term $H_{(j)}$ is local, i.e., that it acts upon a small
number of qubits; specifically, we require that the set
${\mathcal{S}}_{j}\subseteq\\{1,\ldots,n\\}$ of qubits upon which $H_{(j)}$ acts
non-trivially satisfies $|{\mathcal{S}}_{j}|\leq s\in{\mathcal{O}}(\log n)$
for all $j$. Each $H_{(j)}$ can thus be expressed as
$\displaystyle H_{(j)}=h_{(j)}\otimes I_{\neg{\mathcal{S}}_{j}}$ (47)
where $h_{(j)}$ is a positive semi-definite matrix of dimension $2^{s}\times
2^{s}$, which can be fully specified with $2^{{\mathcal{O}}(\log
n)}={\mathcal{O}}(\mathrm{poly}\,n)$ parameters. (More precisely, an operator
$H_{(j)}$ acting on a set ${\mathcal{S}}_{j}\subseteq\\{1,\ldots,n\\}$ of $s$
qubits reads
$\displaystyle
H_{(j)}=\sum_{\begin{subarray}{c}a_{1}\cdots a_{s}\in\\{0,1\\}^{s}\\\
b_{1}\cdots b_{s}\in\\{0,1\\}^{s}\end{subarray}}h^{(j)}_{a_{1}\cdots
a_{s},b_{1}\cdots
b_{s}}\bigotimes_{r=1}^{n}\;\mathfrak{O}_{r}\qquad\mathrm{with}~{}\mathfrak{O}_{r}=\begin{cases}I=\left|{0}\right\rangle\\!\\!\left\langle{0}\right|\\!+\\!\left|{1}\right\rangle\\!\\!\left\langle{1}\right|&\mathrm{if}~{}r\notin{\mathcal{S}}_{j}\\\
\left|{a_{r}}\right\rangle\\!\left\langle{b_{r}}\right|&\mathrm{if}~{}r\in{\mathcal{S}}_{j}\end{cases}\;.$
(46))
We also impose $\left|\left|{h_{(j)}}\right|\right|\leq 2$ for all $j$.
Now we define $w_{(j)}:=I-h_{(j)}$ and note that it is a small matrix, of at
most ${\mathcal{O}}(\mathrm{poly}\,n)$ size, with
$\left|\left|{w_{(j)}}\right|\right|\leq 1$. Then, we can rapidly compute,
with classical algorithms, a unitary extension $u_{(j)}$ for each $w_{(j)}$,
each requiring only one ancilla qubit. Specifically, we define
$\displaystyle
u_{(j)}:=\left(\begin{array}[]{cc}w_{(j)}&-\sqrt{I-w_{(j)}^{2}}\\\
\sqrt{I-w_{(j)}^{2}}&w_{(j)}\end{array}\right)$ (50)
and then we (implicitly) define $W_{(j)}\in\mathds{C}^{N\times N}$ and the
unitary $U_{(j)}\in\mathds{C}^{2N\times 2N}$
$\displaystyle W_{(j)}$ $\displaystyle=w_{(j)}\otimes
I_{\neg{\mathcal{S}}_{j}}$ (51) $\displaystyle U_{(j)}$
$\displaystyle=u_{(j)}\otimes I_{\neg{\mathcal{S}}_{j}}$ (52)
where the interpretations are the same as in Eq. (47). Note that each
$U_{(j)}$ can be efficiently compiled as a quantum circuit: it is sufficient
to determine the gate decomposition of the $(s+1)$-qubit matrix $u_{(j)}$ and
then embed the circuit in an $(n+1)$-qubit quantum register. We assume that a
single ancilla qubit, used for the extension given in Eq. (50), is shared
across all circuits $U_{(j)}$.
We then employ the LCU lemma [26] as follows. Given access to the controlled
version of each circuit $U_{(j)}$, it is possible to efficiently implement the
multi-controlled unitary
$\displaystyle
U_{\textsc{Select}}=\sum_{j=1}^{J}\left|{j}\right\rangle\\!\\!\left\langle{j}\right|_{c}\otimes
U_{(j)}$ (53)
where the subscript $c$ denotes the control register. Then, defining a unitary
$\mathrm{Had}$ that acts as
$\mathrm{Had}\left|{0}\right\rangle_{c}=\sum_{j=1}^{J}\frac{1}{\sqrt{J}}\left|{j}\right\rangle_{c}$,
we obtain:
$\displaystyle\big{(}\left\langle{0}\right|_{c}\mathrm{Had}^{\dagger}\otimes
I\big{)}\,U_{\textsc{Select}}\,\big{(}\mathrm{Had}\left|{0}\right\rangle_{c}\otimes
I\big{)}$ $\displaystyle=\sum_{j=1}^{J}\frac{1}{J}U_{(j)}$ (54)
and further post-selecting to the “top-left” corner of each $U_{(j)}$,
according to Eq. (50) we obtain:
$\displaystyle\big{(}\left\langle{0}\right|\otimes
I\big{)}\,\sum_{j=1}^{J}\frac{1}{J}U_{(j)}\,\big{(}\left|{0}\right\rangle\otimes
I\big{)}\,$
$\displaystyle=\sum_{j=1}^{J}\frac{1}{J}W_{(j)}=\sum_{j=1}^{J}\frac{1}{J}\left(I-H_{(j)}\right)=I-\frac{1}{J}A\,.$
(55)
In conclusion, ${\mathcal{U}}_{B}:=(\mathrm{Had}^{\dagger}\otimes
I)\,U_{\textsc{Select}}\,(\mathrm{Had}\otimes I)$ is a normalised matrix-
block-encoding of $B=I-\eta A$, where $1/\eta=J$ scales, by
assumption, polynomially in $n$. The gate complexity of ${\mathcal{U}}_{B}$
can be estimated as follows. Each $u_{(j)}$ requires ${\mathcal{O}}(2^{2s})$
elementary gates to be implemented [42] and thus the gate complexity of
$U_{(j)}$ is also in ${\mathcal{O}}(2^{2s})$, assuming that two-qubit gates
can be applied among arbitrary pairs of qubits. Then, $U_{\textsc{Select}}$
has gate complexity scaling as the sum of the complexities of the individual
$U_{(j)}$ [27] and is thus in ${\mathcal{O}}(J2^{2s})$. The complexity of
$\mathrm{Had}$ is ${\mathcal{O}}(\log J)$, a sub-leading additive term that
can be neglected in the asymptotic gate complexity of ${\mathcal{U}}_{B}$.
Hence we have the following result.
###### Proposition 11 (Normalised matrix-block-encoding, Sum-of-Hamiltonians
case).
Suppose that we have an explicit classical description of positive semi-
definite matrices $h_{(j)}\in\mathds{C}^{2^{s}\times 2^{s}}$ with
$j\in\\{1,\ldots,J\\}$ and consider the positive semi-definite Hamiltonian
terms $H_{(j)}$ each obtained by applying $h_{(j)}$ to a subset of qubits
${\mathcal{S}}_{j}\subseteq\\{1,\ldots,n\\}$, as given in Eq. (47). Consider
then a coefficient matrix $A\in\mathds{C}^{2^{n}\times 2^{n}}$ as given in Eq.
(45). Then it is possible to implement a normalised block-encoding of
$B=I-\frac{1}{J}A$ with a gate complexity in ${\mathcal{O}}(J2^{2s})$, assuming
exact single-qubit rotations.
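A classical numerical analogue of the construction of Eqs. (50)–(55) is easy to check: extend each $w_{(j)}=I-h_{(j)}$ to a unitary and verify that the uniform LCU average of the top-left blocks equals $I-\frac{1}{J}A$. For simplicity each illustrative $h_{(j)}$ below acts on the full space (i.e., ${\mathcal{S}}_{j}=\\{1,\ldots,n\\}$), an assumption made only for this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d, J = 4, 3                                   # illustrative dimension, J terms

def psd_sqrt(M):
    # Unique PSD square root via the spectral decomposition
    lam, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ V.T

# Random PSD terms h_(j) with ||h_(j)|| <= 2
hs = []
for _ in range(J):
    M = rng.standard_normal((d, d))
    H = M @ M.T
    hs.append(2 * H / np.linalg.norm(H, 2))

# Unitary extension of each w_(j) = I - h_(j), as in Eq. (50)
us = []
for h in hs:
    w = np.eye(d) - h
    s = psd_sqrt(np.eye(d) - w @ w)           # sqrt(I - w^2), commutes with w
    u = np.block([[w, -s], [s, w]])
    assert np.allclose(u.T @ u, np.eye(2 * d))
    us.append(u)

# LCU average of the top-left blocks, Eqs. (54)-(55)
A = sum(hs)
top_left = sum(us)[:d, :d] / J
assert np.allclose(top_left, np.eye(d) - A / J)
```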
### 4.4 From matrix inversion to solving the quantum linear system problem
Suppose now that we have a matrix-block-encoding of
$\displaystyle\hat{P}_{2\ell-1,\kappa}(B)\approx\frac{1}{K}\frac{1}{I-B}=\frac{A^{-1}}{\eta\,K}$
(56)
where $K\in\Theta(\kappa/\eta)$ is the normalization factor of the matrix-
block-encoding (recall the rescaling of $\kappa$ to $\kappa/\eta$), upper
bounded by ${\mathcal{O}}(\kappa/\eta)$ as we show in Appendix B. This is
equivalent to saying that we have implemented the unitary
$\displaystyle{\mathcal{U}}_{A^{-1}}\approx\left(\begin{array}[]{cc}A^{-1}/(\eta\,K)&*\\\
*&*\end{array}\right)$ (59)
where the left-upper block corresponds to having $a$ ancilla qubits in
$\left|{0^{a}}\right\rangle$. Then, one can directly solve a QLS by applying
the unitary quantum circuit ${\mathcal{U}}_{A^{-1}}$ to the vector
$\left|{0^{a}}\right\rangle\left|{\textbf{b}}\right\rangle$ and then post-
select the outcome $\left|{0^{a}}\right\rangle$ on the ancilla system;
however, post-selection might introduce large overheads, since we have
$\displaystyle{\mathcal{U}}_{A^{-1}}:\left|{0^{a}}\right\rangle\left|{\textbf{b}}\right\rangle\;\mapsto\;\frac{1}{\eta\,K}\left|{0^{a}}\right\rangle A^{-1}\left|{\textbf{b}}\right\rangle+\sqrt{1-\frac{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|^{2}}{\eta^{2}K^{2}}}\,\big{|}\Psi^{\perp}\big{\rangle}$
(60)
$\displaystyle\hphantom{{\mathcal{U}}_{A^{-1}}:\left|{0^{a}}\right\rangle\left|{\textbf{b}}\right\rangle}\;=\;\frac{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|}{\eta\,K}\left|{0^{a}}\right\rangle\left|{A^{-1}\textbf{b}}\right\rangle+\sqrt{1-\frac{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|^{2}}{\eta^{2}K^{2}}}\,\big{|}\Psi^{\perp}\big{\rangle}$
(61)
where $\big{|}\Psi^{\perp}\big{\rangle}$ is a state perpendicular to all
states of the form $\left|{0^{a},\psi}\right\rangle$. Therefore, the
probability of successfully obtaining the state
$\left|{A^{-1}\textbf{b}}\right\rangle$ when post-selecting on the ancilla
measurement is
$\displaystyle
p_{\mathrm{succ}}=\frac{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|^{2}}{\eta^{2}K^{2}}\,.$
(62)
A PD-QLS solver that prepares a matrix-encoding of $A^{-1}$ and obtains
$\left|{A^{-1}\textbf{b}}\right\rangle$ via post-selection requires
${\mathcal{O}}(1/p_{\mathrm{succ}})$ accesses to ${\mathcal{U}}_{\textbf{b}}$
and ${\mathcal{O}}((2\ell-1)/p_{\mathrm{succ}})$ accesses to
${\mathcal{U}}_{B}$. Recall that $2\ell-1$ is the degree of
$\hat{P}_{2\ell-1,\kappa}(B)$ and
$\ell\in\Theta\\!\left(\sqrt{\frac{\kappa}{\eta}}\log\frac{\kappa}{\eta\varepsilon}\right)$.
The query complexities can be quadratically improved to
${\mathcal{O}}(1/\sqrt{p_{\mathrm{succ}}})$ and
${\mathcal{O}}((2\ell-1)/\sqrt{p_{\mathrm{succ}}})$, respectively, using
amplitude amplification [30]. Having implemented an approximate matrix-
encoding of $\widetilde{A}^{-1}$ that is $\varepsilon$-close to $A^{-1}$
(using Definition 5), the output state
$|{\widetilde{A}^{-1}\textbf{b}}\rangle=\widetilde{A}^{-1}\left|{\textbf{b}}\right\rangle/||\widetilde{A}^{-1}\left|{\textbf{b}}\right\rangle\\!||$
satisfies [5, Proposition 9]
$\displaystyle\left|\left|{\big{|}\widetilde{A}^{-1}\textbf{b}\big{\rangle}-\left|{A^{-1}\textbf{b}}\right\rangle}\right|\right|\leq
4\,\varepsilon\,.$ (63)
The same inequality holds in trace distance because of (5) and it is true both
when using post-selection and when using amplitude amplification. In
conclusion, we have the following results.
###### Proposition 12 (Complexity of the PD-QLS solver).
Suppose that we have access to a $(1,b,0)$-matrix-block-encoding
${\mathcal{U}}_{B}$ of $B=I-\eta\,A$, where $\eta\in(0,1]$ and
$A\in\mathds{C}^{N\times N}$ is a PD matrix with eigenvalues contained in the
interval ${\mathcal{D}}_{A}=\big{[}\frac{1}{\kappa},2\big{]}$ for some known
value $\kappa>1$; see e.g. Proposition 10 and Proposition 11 for explicit
constructions.
First, using the QSP method of Theorem 8 and the polynomial approximation
given in Eq. (26), it is possible to implement a $(K,b+2,\varepsilon)$-matrix-
block-encoding of $A^{-1}$, where $K\in\Theta(\kappa/\eta)$, and the method
has a query complexity
$\displaystyle Q[{\mathcal{U}}_{B}]$
$\displaystyle\in{\mathcal{O}}\\!\left(\sqrt{\frac{\kappa}{\eta}}\,\log\frac{\kappa}{\eta\,\varepsilon}\right),$
(64)
where $Q[{\mathcal{U}}_{B}]$ denotes the number of accesses to
${\mathcal{U}}_{B}$. Moreover, the algorithm is gate-efficient, i.e. it
requires ${\mathcal{O}}\big{(}\mathrm{poly}(n)\,Q[{\mathcal{U}}_{B}]\big{)}$
extra elementary quantum gates.
Second, suppose that we want to solve a PD-QLS as in Definition 2, where
access to $A$ is given (indirectly) by ${\mathcal{U}}_{B}$ and access to b via
a state preparation oracle ${\mathcal{U}}_{\textbf{b}}$ as in Definition 3.
Then, the QLS can be solved up to precision ${\mathcal{O}}(\varepsilon)$ using
the $(K,b+2,\varepsilon)$-matrix-block-encoding of $A^{-1}$ and employing
amplitude amplification to perform matrix-vector multiplication with constant
success probability. The total query complexities, in terms of accesses to
${\mathcal{U}}_{B}$ and ${\mathcal{U}}_{\textbf{b}}$, are
$\displaystyle Q[{\mathcal{U}}_{\textbf{b}}]$
$\displaystyle\in{\mathcal{O}}\\!\left(\frac{\kappa}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|}\right)$
(65) $\displaystyle Q[{\mathcal{U}}_{B}]$
$\displaystyle\in{\mathcal{O}}\\!\left(\sqrt{\frac{\kappa}{\eta}}\frac{\kappa}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|}\log\frac{\kappa}{\eta\,\varepsilon}\right)$
(66)
and the algorithm is gate efficient, that is, the gate complexity is in
${\mathcal{O}}\big{(}Q\,\mathrm{poly}(\log Q,\log N)\big{)}$. A quadratic
speed-up in $\kappa$ (up to polylogarithmic factors) is achieved over general
QLS solvers when
$\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|\in{\mathcal{O}}(\kappa)$.
We now proceed to a worst-case, average-case, and best-case scenario analysis
of a PD-QLS solver as given in the previous Proposition.
##### Worst-case scenario:
In the worst case we have
$\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|\in{\mathcal{O}}(1)$
and consequently the post-selection success probability is
$p_{\mathrm{succ}}\in\Omega(1/\kappa^{2})$. One can alternatively use
${\mathcal{O}}(\kappa)$ rounds of amplitude amplification to reach a constant
success probability. Using amplitude amplification, the overall query
complexity is in ${\mathcal{O}}(\kappa^{3/2})$, which is an improvement
compared to the ${\mathcal{O}}(\kappa^{2})$ runtime achieved by the HHL
algorithm, but still falls short of the $\widetilde{{\mathcal{O}}}(\kappa)$
runtime that can be achieved using more advanced methods such as Variable-Time
Amplitude Amplification (VTAA) [4] or eigenpath transversal [7].
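The quadratic gain from amplitude amplification quoted above can be made concrete with a small classical sketch: after $k$ Grover-type rounds the success amplitude is $\sin((2k+1)\theta)$ with $\theta=\arcsin(\sqrt{p_{\mathrm{succ}}})$, so ${\mathcal{O}}(1/\sqrt{p_{\mathrm{succ}}})$ rounds suffice for constant success probability, versus ${\mathcal{O}}(1/p_{\mathrm{succ}})$ plain repetitions:

```python
import math

def rounds_to_constant_success(p_succ):
    """Smallest k with sin((2k+1)*theta)^2 >= 1/2, theta = arcsin(sqrt(p_succ))."""
    theta = math.asin(math.sqrt(p_succ))
    k = 0
    while math.sin((2 * k + 1) * theta) ** 2 < 0.5:
        k += 1
    return k

# O(1/sqrt(p)) amplification rounds versus O(1/p) plain repetitions.
for p in [1e-2, 1e-4, 1e-6]:
    assert rounds_to_constant_success(p) <= math.ceil(math.pi / (4 * math.sqrt(p)))
```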
##### Average-case scenario:
We now look at the distribution of runtimes that arises when using a randomly
chosen vector $\left|{\textbf{b}}\right\rangle$. As observed in the work by
Subaşı and Somma [47, Section III.B] one could model the eigenvalues of a
positive-definite matrix $A$ as uniformly distributed over the interval
$[1/\kappa,1]$ while if $\left|{\textbf{b}}\right\rangle$ is chosen from the
outcome of a random quantum circuit its amplitudes are sampled according to a
Porter-Thomas distribution; then,
$\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|\in{\mathcal{O}}(\sqrt{\kappa})$
almost surely in the regime $1\ll\kappa\ll N$. This implies that a randomly
sampled PD-QLS problem (according to the specified distribution) can be solved
almost surely with a query complexity in ${\mathcal{O}}(\kappa)$, employing
amplitude amplification in the post-selection step. This method then matches
(actually, improves by a polylog($\kappa$) factor) the asymptotic runtime of
more sophisticated methods (such as those that employ VTAA or adiabatic
evolution) when considering “typical” instances of PD-QLS.
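The scaling claimed for this probabilistic model is easy to reproduce classically. The sketch below samples eigenvalues uniformly in $[1/\kappa,1]$ and uses Gaussian amplitudes for b (a stand-in for Porter-Thomas statistics), checking that $\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|$ concentrates around $\sqrt{\kappa}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, kappa = 4096, 64.0

# Model of Ref. [47]: eigenvalues uniform in [1/kappa, 1]; Gaussian amplitudes
# for |b> approximate Porter-Thomas statistics after normalisation.
lam = rng.uniform(1 / kappa, 1, size=N)
b = rng.normal(size=N)
b /= np.linalg.norm(b)

# In the eigenbasis of A, ||A^{-1}|b>||^2 = sum_i |b_i|^2 / lam_i^2,
# and E[1/lam^2] = kappa for this eigenvalue distribution.
norm2 = np.sum(b ** 2 / lam ** 2)
assert kappa / 4 < norm2 < 4 * kappa    # ||A^{-1}|b>|| is close to sqrt(kappa)
```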
##### Best-case scenario:
The largest value that
$\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|$ can reach
is $\kappa$. In that case the post-selection probability is constant and the
overall query complexity is in ${\mathcal{O}}(\sqrt{\kappa})$, even without
employing amplitude-amplification. This is a quadratic improvement for PD-QLS
solving over competing methods working for indefinite QLS, since implementing
a block-encoding of $A^{-1}$ for indefinite matrices already requires
${\mathcal{O}}(\kappa)$ oracle calls to ${\mathcal{U}}_{A}$ [5]. We note that these
best-case problems almost never occur under the probabilistic model described
before, but real-world problems have intrinsic structure that could make them
depart from the Porter-Thomas distribution and thus have
$\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|\gg\sqrt{\kappa}$:
for instance, this is the case if $\left|{\textbf{b}}\right\rangle$ has
constant overlap with the eigenvector relative to the largest eigenvalue of
$A^{-1}$. Note, finally, that it is not required that we know in advance how
large the success probability is, since by definition
$\left|{0^{a}}\right\rangle$ heralds the success.
Figure 3: Bi-logarithmic plots of the asymptotic runtimes of general QLS
solvers and of our PD-QLS solvers. The plot on the left represents the scaling
of the query complexity for methods based on direct matrix-vector
multiplication (together with amplitude amplification) in terms of the
variable
$\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|^{-1}\in[\,1,\kappa\,]$.
The plot on the right represents the scaling of the query complexity for
methods based on VTAA in terms of the variable
$\Gamma_{A,\textbf{b}}:=\sqrt{\kappa}\,\frac{\left|\left|{A^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|}\in[\,1,\sqrt{\kappa}\,]$.
In both plots we ignore poly-logarithmic multiplicative factors and we express
all the variables in units of powers of $\kappa$, treating $\kappa\gg 1$ as a
constant. See the main text for details.
### 4.5 Optimization using Variable-Time Amplitude Amplification
In Appendix C we show how to use VTAA to obtain a PD-QLS solver having
improved asymptotic query complexities. We proceed as in Ref. [5, Section 5]
and Ref. [19, Section 3]: first we reformulate our algorithm as a variable-
stopping-time quantum algorithm and afterwards we apply the VTAA optimisation
to improve its runtime. There is, however, a technical hurdle to overcome: all
previous VTAA-based QLS solvers use a phase estimation subroutine having a
${\mathcal{O}}(\kappa)$ runtime; its use would then preclude us from achieving
a runtime sub-linear in $\kappa$. The main new idea we introduce is to replace
phase estimation with efficient “windowing functions” whose implementation via
QSP requires only $\widetilde{{\mathcal{O}}}(\sqrt{\kappa})$ accesses to
${\mathcal{U}}_{B}$.
More precisely, in Ref. [5] the so-called Gapped Phase Estimation (GPE) method
is introduced, with the purpose of reliably selecting eigenvalues of $A$ that
are larger than some value $\delta$. For these eigenvalues it is possible to
implement an approximate inverse at a reduced cost, scaling as
${\mathcal{O}}(1/\delta)$ instead of ${\mathcal{O}}(\kappa)$. Then, a sequence
of increasingly small values of $\delta$ is considered, until $\delta\leq
1/\kappa$, and VTAA is employed to enhance the success probability. Since GPE
has a query complexity in $\Omega(1/\delta)$ and the required precision is
$\delta\leq 1/\kappa$, its complexity is in $\Omega(\kappa)$.
Instead, we introduce a “windowing function” $W_{\epsilon,\delta}(\lambda)$ to
select eigenvalues of $B=I-\eta A$ that satisfy
$\lambda\in[-1+2\delta,1-2\delta]$ and to reject eigenvalues
$\lambda\in[-1,-1+\delta]\cup[1-\delta,+1]$, except for a small error
probability $\epsilon$. Thus, $W_{\epsilon,\delta}(x)$ is a polynomial
$\epsilon$-close to 1 in the center of the interval $[-1,+1]$ and
$\epsilon$-close to 0 near the edges of the interval, with a steep fall around
the points $\pm(1-1.5\delta)$. The intervals where the function derivative is
large (of order $1/\delta$) are very close to the extrema of the interval
$[-1,+1]$. According to Bernstein’s inequality [29], it is not ruled out that
a windowing function could be implemented with a polynomial of degree
$\ell\in\widetilde{O}(1/\sqrt{\delta})$, quadratically smaller than in the
case where the large derivative lies near the center of the interval. In
Appendix C we then show, with an explicit construction, that a windowing
polynomial of degree $\ell\in\widetilde{O}(1/\sqrt{\delta})$ can indeed be
implemented; we defer further details to the Appendix.
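As a rough quantitative illustration of the accept/reject behaviour (using a smooth erf-based stand-in rather than the explicit polynomial construction of Appendix C, and with a hypothetical transition-width parameter):

```python
import math

def window(x, delta, width_factor=20.0):
    """Smooth stand-in for W_{eps,delta}: close to 1 on [-1+2d, 1-2d], close to 0
    on [-1,-1+d] and [1-d,1], with steep falls around +-(1 - 1.5 d).
    width_factor is a hypothetical sharpness parameter, not from the paper."""
    w = delta / width_factor
    rise = 0.5 * (1 + math.erf((x - (-1 + 1.5 * delta)) / w))
    fall = 0.5 * (1 + math.erf(((1 - 1.5 * delta) - x) / w))
    return rise * fall

delta = 0.1
assert abs(window(0.0, delta) - 1) < 1e-6       # accept region
assert window(0.95, delta) < 1e-6               # reject region
assert window(-0.95, delta) < 1e-6
```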
The end result is summarized in the following Proposition.
###### Proposition 13 (Complexity of PD-QLS with VTAA).
Consider a PD-QLS where we have access to a normalised matrix-encoding of
$B=I-\eta\,A$ and to a state preparation unitary for b. Then, there is a VTAA-
based solver having target precision $\varepsilon$, constant success
probability, and query complexities given by
$\displaystyle
Q[{\mathcal{U}}_{\textbf{b}}]\in{\mathcal{O}}\\!\left(\sqrt{\log(\kappa)}+\frac{\kappa}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|}\right)$
(67) $\displaystyle
Q[{\mathcal{U}}_{B}]\in{\mathcal{O}}\\!\left(\sqrt{\frac{\kappa}{\eta}}\,\Gamma_{A,\textbf{b}}\,\mathrm{polylog}(\kappa,{\tilde{\epsilon}}^{-1},\eta^{-1})\right)$
(68) $\displaystyle\mathrm{with}\quad$
$\displaystyle\Gamma_{A,\textbf{b}}:=\sqrt{\kappa}\,\frac{\left|\left|{A^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|},\qquad{\tilde{\epsilon}}\in{\mathcal{O}}\\!\left(\frac{\varepsilon}{\kappa\sqrt{\log\kappa}}\right)$
(69) $\displaystyle\mathrm{and}\quad$
$\displaystyle\mathrm{polylog}(\kappa,{\tilde{\epsilon}}^{-1},\eta^{-1})=\log^{2}(\kappa)\log^{7/4}({\tilde{\epsilon}}^{-1})\log^{3/2}(\eta^{-1}).$
(70)
Moreover, the algorithm is gate efficient.
Now we discuss the runtime of this VTAA PD-QLS solver.
* •
We always have $Q[{\mathcal{U}}_{B}]\geq Q[{\mathcal{U}}_{\textbf{b}}]$, thus
the ${\mathcal{U}}_{B}$-complexity is the dominant factor.
* •
Compared to Proposition 12, the ${\mathcal{U}}_{\textbf{b}}$ query complexity
increases here only by an additive $\sqrt{\log(\kappa)}$ term, while the
${\mathcal{U}}_{B}$ complexity typically (i.e., for almost all values of
$\Gamma_{A,\textbf{b}}$) has a polynomial improvement. To prove it, note that
$\left|\left|{A^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|^{2}=\left\langle{\textbf{b}}\right|A^{-1}\left|{\textbf{b}}\right\rangle\leq\kappa$
and thus
$\Gamma_{A,\textbf{b}}=\sqrt{\kappa}\,\frac{\left|\left|{A^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|}\leq\frac{\displaystyle\kappa}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|}$.
* •
Compared to the general VTAA-based QLS solver of Ref. [5], our PD-QLS solver
has a polynomial speed-up for almost all values of $\Gamma_{A,\textbf{b}}$,
see the right plot in Figure 3. This is a consequence of Lemma 20, where we
prove that $\Gamma_{A,\textbf{b}}\in[\,1,\sqrt{\kappa}\,]$.
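Both bounds on $\Gamma_{A,\textbf{b}}$ used above can be verified numerically on a random instance (illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
N, kappa = 64, 50.0

lam = rng.uniform(1 / kappa, 1, size=N)   # spectrum of A inside [1/kappa, 1]
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))
A = Q @ np.diag(lam) @ Q.T
b = rng.normal(size=N)
b /= np.linalg.norm(b)

inv_b = np.linalg.solve(A, b)                 # A^{-1}|b>
half_inv_b = Q @ ((Q.T @ b) / np.sqrt(lam))   # A^{-1/2}|b> via the eigenbasis
Gamma = np.sqrt(kappa) * np.linalg.norm(half_inv_b) / np.linalg.norm(inv_b)

assert 1 <= Gamma <= np.sqrt(kappa) + 1e-9              # range from Lemma 20
assert Gamma <= kappa / np.linalg.norm(inv_b) + 1e-9    # bound used above
```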
## 5 Method based on a quadratic reduction of the condition number via matrix
decomposition
In this section we start giving some preliminary considerations on the
approach that we are going to present (Section 5.1) and then give a formal
statement of the Sum-QLS problem that we solve (Section 5.2). Next, we
describe a classical pre-processing step that quadratically reduces the
condition number (Section 5.3) and then present the quantum algorithm solving
the pseudo-inversion problem that originates from the preconditioning (Section
5.4). Finally, we estimate the gate complexity of the resulting Sum-QLS solver
(Section 5.5).
### 5.1 General considerations
We now present the general features of any algorithm that solves PD-QLS
exploiting a decomposition of the form $A=LL^{\dagger}$, as already summarised
in Section 1.4. Note that such $L$ exists for any PD matrix $A$ and that it
may not be unique, especially if we allow $L$ to be non-square. The key
property is that any $L$ such that $A=LL^{\dagger}$ satisfies
$\kappa_{\mathrm{eff}}(L)=\sqrt{\kappa(A)}$, hence a system of the form
$L^{\dagger}\textbf{x}=\textbf{b}^{\prime}$ (for any $\textbf{b}^{\prime}$) is
quadratically better conditioned than the original system
$A\textbf{x}=\textbf{b}$.
The method we introduce is based on finding matrices $L\in\mathds{C}^{N\times
M}$ and $L^{g}\in\mathds{C}^{M\times N}$, with $M>N$, such that the
decomposition $A=LL^{\dagger}$ holds and $L^{g}$ satisfies $LL^{g}=I$, i.e. it
is a right pseudo-inverse; moreover, $L^{g}$ is rectangular with more rows
than columns and thus $L^{g}L\neq I$, i.e. $L^{g}$ cannot be a left pseudo-
inverse. We then make the following observations.
1. 1.
$L^{g}LL^{\dagger}\textbf{x}=L^{g}\textbf{b}$ is a linear system equivalent to
the original one, having the vector $\textbf{x}=A^{-1}\textbf{b}$ as the
unique solution, but with no guarantee that the condition number of
$L^{g}LL^{\dagger}$ is small.
2. 2.
$L^{\dagger}\textbf{x}=L^{g}\textbf{b}$ is an over-constrained linear system
that may be inequivalent to the original one (since $L^{g}L\neq I$) and
typically has no proper solution x.
3. 3.
Finding
$\mathrm{argmin}_{\textbf{x}}\left|\left|{\,L^{\dagger}\textbf{x}-L^{g}\textbf{b}\,}\right|\right|$
is a problem equivalent to the original system. The unique solution is
$\textbf{x}=(L^{\dagger})^{+}L^{g}\textbf{b}$; therefore, using
$(L^{\dagger})^{+}=(LL^{\dagger})^{-1}L=A^{-1}L$ and $LL^{g}=I$, we get the
required result $\textbf{x}=A^{-1}LL^{g}\textbf{b}=A^{-1}\textbf{b}$. (By
assumption $A$ is invertible, hence $L^{\dagger}\in\mathds{C}^{M\times N}$ is
full-rank; thus, using the SVD $L^{\dagger}=W\Sigma^{\dagger}V^{\dagger}$, we
get $(LL^{\dagger})^{-1}L=(V\Sigma\Sigma^{\dagger}V^{\dagger})^{-1}V\Sigma
W^{\dagger}=V(\Sigma^{\dagger})^{+}W^{\dagger}=(L^{\dagger})^{+}$.)
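A minimal numerical sketch of observation 3 (using a square Cholesky factor for simplicity, so that $L^{g}=L^{-1}$): the least-squares solution of $L^{\dagger}\textbf{x}=L^{g}\textbf{b}$ coincides with $A^{-1}\textbf{b}$, and the factor is quadratically better conditioned than $A$.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
m = rng.normal(size=(N, N))
A = m @ m.T + np.eye(N)             # a PD matrix
b = rng.normal(size=N)

L = np.linalg.cholesky(A)           # A = L L^T, square factor for simplicity
# Singular values of L are square roots of eigenvalues of A, hence
# kappa_eff(L) = sqrt(kappa(A)).
assert np.isclose(np.linalg.cond(L), np.sqrt(np.linalg.cond(A)))

Lg = np.linalg.inv(L)               # a right pseudo-inverse (here the exact inverse)
x_ls, *_ = np.linalg.lstsq(L.T, Lg @ b, rcond=None)   # argmin_x ||L^T x - L^g b||
assert np.allclose(x_ls, np.linalg.solve(A, b))       # equals A^{-1} b
```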
The goal is thus to convert the linear system $A\textbf{x}=\textbf{b}$ into
the linear regression problem
$\mathrm{argmin}_{\textbf{x}}\left|\left|{\,L^{\dagger}\textbf{x}-L^{g}\textbf{b}\,}\right|\right|$,
having solution $\textbf{x}=(L^{\dagger})^{+}L^{g}\textbf{b}$. This is a non-
trivial task, as we need, given access to $A$ via sparse-matrix oracle or via
some succinct description, to construct a suitable access to $L^{\dagger}$
and, moreover, given access to b, to construct a suitable access to
$\textbf{b}^{\prime}:=L^{g}\textbf{b}$. The latter requirement seems
particularly worrisome, since it involves a pseudo-inversion of the
exponentially large matrix $L$. We show, however, that for the Sum-QLS
problem, whereby $A$ is provided as a sum of local PD terms, one can find a
suitable $L^{g}$ for which a compact classical description can be efficiently
computed.
We also remark that a pseudo-inversion problem can be interpreted as a regular
matrix inversion on the subspace where the matrix $L^{\dagger}$ is full-rank;
thus, solving pseudo-inversion entails a larger runtime compared to solving a
standard QLS, since the appropriate subspace has to be selected via projection
or via amplitude amplification. More formally, the operator
$(L^{\dagger})^{+}\\!:\mathds{C}^{M}\rightarrow\mathds{C}^{N}$ (with $M>N$)
has rank equal to $N$ and thus we have the orthogonal decomposition
$\displaystyle\mathds{C}^{M}=\mathrm{supp}\big{(}(L^{\dagger})^{+}\big{)}+\mathrm{ker}\big{(}(L^{\dagger})^{+}\big{)}\qquad\begin{cases}\dim\mathrm{supp}\big{(}(L^{\dagger})^{+}\big{)}=N\\ \dim\mathrm{ker}\big{(}(L^{\dagger})^{+}\big{)}=M-N\end{cases}$ (71)
where the support is by definition the subspace orthogonal to the kernel.
Then, calling $\Pi$ and $\Pi^{\perp}=I-\Pi$ the orthogonal projectors on the
support and on the kernel of $(L^{\dagger})^{+}$, respectively (using the
identity $(L^{\dagger})^{+}=A^{-1}L$ one obtains
$\mathrm{supp}((L^{\dagger})^{+})=\mathrm{supp}(L)$ and
$\mathrm{ker}((L^{\dagger})^{+})=\mathrm{ker}(L)$), we obtain the identity
$\displaystyle(L^{\dagger})^{+}\left|{\textbf{b}^{\prime}}\right\rangle=(L^{\dagger})^{+}\,\Pi\left|{\textbf{b}^{\prime}}\right\rangle+(L^{\dagger})^{+}\,\Pi^{\perp}\left|{\textbf{b}^{\prime}}\right\rangle=(L^{\dagger})^{+}\,\Pi\left|{\textbf{b}^{\prime}}\right\rangle$
(72)
since by definition we have $(L^{\dagger})^{+}\,\Pi^{\perp}=0$. It is then
evident that only the component $\Pi\left|{\textbf{b}^{\prime}}\right\rangle$,
which is in general a sub-normalised quantum state, plays a role in the
pseudo-inversion algorithm, while the orthogonal component
$\Pi^{\perp}\left|{\textbf{b}^{\prime}}\right\rangle$ can be arbitrary.
Therefore, any quantum pseudo-inversion algorithm implicitly requires the
amplification of the $\Pi\left|{\textbf{b}^{\prime}}\right\rangle$ component,
which therefore entails a gate complexity in ${\mathcal{O}}(1/\sqrt{\gamma})$,
for some known lower bound
$\sqrt{\gamma}\leq\left|\left|{\Pi\left|{\textbf{b}^{\prime}}\right\rangle}\right|\right|$.
### 5.2 Problem statement
In the Sum-QLS we assume that the coefficient matrix $A\in\mathds{C}^{N\times
N}$ is given by an _explicit classical description_ , rather than via oracular
access. This allows us to evade the lower bounds given in Proposition 6, since
those bounds are formulated for relativising (i.e. oracular) algorithms. We
assume, specifically, that $A$ has the form [16]
$\displaystyle A=\sum_{j=1}^{J}H_{(j)}\qquad\forall j\ H_{(j)}\ \textup{is
positive definite.}$ (73)
Here we impose that each $H_{(j)}$ is strictly positive definite (rather than
semi-definite) because of a technical condition that will become clear later;
in essence, the expression in Eq. (112) can diverge if any $H_{(j)}$ is
singular, resulting in an infinite runtime. Each term $H_{(j)}$ is a local
Hamiltonian, i.e. it can be expressed as [see also Eq. (46) in the footnote]
$\displaystyle H_{(j)}=h_{(j)}\otimes I_{\neg{\mathcal{S}}_{j}}$ (74)
where each $h_{(j)}$ is an operator acting on a subset
${\mathcal{S}}_{j}\subseteq\\{1,\ldots,n\\}$ of at most $s$ qubits,
corresponding to a matrix of size at most $2^{s}\times 2^{s}$. We assume that
$J$, the number of Hamiltonian terms, and $s$ are “small”, i.e. we take
$J\in{\mathcal{O}}(\mathrm{poly}\,n)$ and $s\in{\mathcal{O}}(\log n)$, and
that we have a complete classical description of each operator $h_{(j)}$ and
of each subset ${\mathcal{S}}_{j}$. The matrix $A$ is fully specified with at
most $J2^{2s}$ real parameters and with $Jn$ boolean values (defining the sets
${\mathcal{S}}_{j}$), i.e. the number of parameters is
$J2^{2s}+Jn\in{\mathcal{O}}(\mathrm{poly}\,n)$.
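A small sketch of how such a sum of local Hamiltonian terms assembles into $A$ (restricting, for brevity, to terms acting on contiguous qubits; the instance below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4                  # qubits, so N = 2^n = 16

def random_pd(dim):
    m = rng.normal(size=(dim, dim))
    return m @ m.T + 0.1 * np.eye(dim)    # strictly positive definite

def embed(h, qubits, n):
    """H_(j) = h_(j) tensored with identity on the complement, as in Eq. (74);
    contiguous qubit sets only, for simplicity."""
    left, right = qubits[0], n - 1 - qubits[-1]
    return np.kron(np.kron(np.eye(2 ** left), h), np.eye(2 ** right))

# J = 3 PD terms, each acting on at most s = 2 qubits.
terms = [(random_pd(4), [0, 1]), (random_pd(4), [1, 2]), (random_pd(2), [3])]
A = sum(embed(h, S, n) for h, S in terms)         # Eq. (73)

assert A.shape == (2 ** n, 2 ** n)
assert np.all(np.linalg.eigvalsh(A) > 0)          # A inherits positive definiteness
```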
We require moreover that the known-term vector b is sparse, containing at most
$d_{\textbf{b}}$ non-zero entries in the computational basis, where
$d_{\textbf{b}}$ also scales polynomially in $n$. This implies that a
preparation circuit ${\mathcal{U}}_{\textbf{b}}$ can be given as an explicit
small quantum circuit. This leads to the following definition for the Sum-QLS
problem.
###### Definition 14 (Sum-of-Hamiltonians Quantum Linear System).
A Sum-QLS problem is a PD-QLS as in Definition 2 with the following
restrictions. The coefficient matrix $A\in\mathds{C}^{N\times N}$, for
$N=2^{n}$, is provided as the sum of PD Hamiltonian terms
$A=\sum_{j=1}^{J}H_{(j)}$, where each $H_{(j)}$ acts on at most $s$ qubits;
each $H_{(j)}$ is fully specified as a PD matrix $h_{(j)}$ of size
$2^{s_{j}}\times 2^{s_{j}}$ with $s_{j}\leq s$, together with the set
${\mathcal{S}}_{j}$ of $s_{j}$ qubits on which $H_{(j)}$ acts upon. The vector
$\textbf{b}\in\mathds{C}^{N}$ is $d_{\textbf{b}}$-sparse and the value and
position of each of the $d_{\textbf{b}}$ non-zero entries is provided.
In Appendix D we show that the Sum-QLS problem is $\mathsf{BQP}$-hard, by
adapting a proof given in HHL [1]. That is, we show that any polynomial-time
quantum computation (in the BQP class) can be re-formulated as a Sum-QLS
problem for some artfully constructed coefficient matrix $A$ and known-term
vector b; therefore no polynomial-time classical probabilistic algorithm (in
the BPP class) can solve the Sum-QLS problem, unless
$\textsf{BPP}=\textsf{BQP}$. In this context, a classical probabilistic
algorithm “solves” a QLS problem if it outputs a value
$n\in\\{1,\ldots,N\\}$ with probability approximately equal to $|x_{n}|^{2}$,
the square of the $n$-th amplitude of the quantum state
$\left|{\textbf{x}}\right\rangle=\left|{A^{-1}\textbf{b}}\right\rangle$.
### 5.3 Classical pre-processing step
In this section we describe the classical pre-processing step, providing a
quadratic improvement of the condition number. We decompose each matrix
$h_{(j)}$ as
$\displaystyle h_{(j)}=l_{(j)}l_{(j)}^{\dagger}\,,$ (75)
which can be accomplished for example via the Cholesky decomposition [31], in
which case $l_{(j)}$ is a lower triangular matrix. Notice that each matrix
$h_{(j)}$ is a small matrix of size $2^{s}\times
2^{s}\in{\mathcal{O}}(\mathrm{poly}\,n)$ and thus the Cholesky decomposition
can be performed numerically on a classical computer using
${\mathcal{O}}(\mathrm{poly}\,n)$ operations; specifically, Cholesky
factorisation of an $m\times m$ matrix requires $m^{3}/3$ arithmetic
operations, $m^{3}/6$ additions and $m^{3}/6$ multiplications [31]. The total
number of Hamiltonian terms is $J\in{\mathcal{O}}(\mathrm{poly}\,n)$, implying
that the total runtime for performing the Cholesky decomposition for all
Hamiltonian terms is ${\mathcal{O}}(2^{3s}J)$, which is also polynomial in $n$
under our assumptions.
We save the Cholesky decompositions $l_{(j)}$ in a classical memory, storing
${\mathcal{O}}(J2^{2s})={\mathcal{O}}(\mathrm{poly}\,n)$ complex values, for
later use. These decompositions implicitly define the operators
$\displaystyle L_{(j)}=l_{(j)}\otimes I_{\neg{\mathcal{S}}_{j}}$ (76)
where each $L_{(j)}$ is of size $2^{n}\times 2^{n}$ and where the
interpretation of this equation is the same as in Eq. (74). We now introduce
the rectangular matrix $L\in\mathds{C}^{N\times JN}$, with $N=2^{n}$, given by
$\displaystyle L:=\left(L_{(1)}\ \bigg{|}\ \cdots\ \bigg{|}\ L_{(J)}\right)$
(77)
and thus we have the required decomposition
$\displaystyle
LL^{\dagger}=\sum_{j=1}^{J}L_{(j)}L_{(j)}^{\dagger}=\sum_{j=1}^{J}H_{(j)}=A\,.$
(78)
We can efficiently implement quantum circuits ${\mathcal{P}}_{L}$
(${\mathcal{P}}_{L^{\dagger}}$) that provide sparse-matrix access to $L$
($L^{\dagger}$). To this end, it is sufficient to convert the classical random
access memory that stores the positions and values of the entries of $L$
($L^{\dagger}$) into a qRAM [35]. The scheme presented in Ref. [48, Section
6.3.5] implements this qRAM with a gate complexity in
${\mathcal{O}}(nJ2^{2s})$ and a circuit depth in
${\mathcal{O}}\big{(}\log(J2^{2s})\big{)}={\mathcal{O}}\big{(}s+\log
J\big{)}$.
Since $h_{(j)}$ is by assumption non-singular, each $l_{(j)}$ is a non-
singular lower-triangular matrix and its inverse $l_{(j)}^{-1}$ is an upper-
triangular matrix which we can efficiently compute and store in a classical
memory using a polynomial amount of space. These matrices implicitly define
operators $L_{(j)}^{-1}=l_{(j)}^{-1}\otimes I_{\neg{\mathcal{S}}_{j}}$ such
that $L_{(j)}L_{(j)}^{-1}=I$. We can then define the matrix
$\displaystyle L^{g}:=\frac{1}{J}\left(\begin{array}{c}L_{(1)}^{-1}\\ \vdots\\ L_{(J)}^{-1}\end{array}\right)$ (82)
which is a generalised right pseudo-inverse of $L$, i.e. $L^{g}$ satisfies the
equation
$\displaystyle L\,L^{g}=\frac{1}{J}\sum_{j=1}^{J}I=I$ (83)
and thus using $(L^{\dagger})^{+}=A^{-1}L$ we get the required relation
$(L^{\dagger})^{+}L^{g}=A^{-1}L\,L^{g}=A^{-1}$. Using a qRAM the gate
complexity of ${\mathcal{P}}_{L^{g}}$ is equal to that of ${\mathcal{P}}_{L}$
and is thus in ${\mathcal{O}}(nJ2^{2s})$.
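The identities (78) and (83) for the concatenated factor $L$ and its right pseudo-inverse $L^{g}$ can be checked directly (using dense full-space PD terms, ignoring the tensor-product structure for brevity):

```python
import numpy as np

rng = np.random.default_rng(5)
N, J = 8, 3

Hs = []
for _ in range(J):
    m = rng.normal(size=(N, N))
    Hs.append(m @ m.T + 0.1 * np.eye(N))   # strictly PD terms
A = sum(Hs)

ls = [np.linalg.cholesky(H) for H in Hs]             # H_(j) = l_(j) l_(j)^T, Eq. (75)
L = np.hstack(ls)                                    # Eq. (77): N x JN
Lg = np.vstack([np.linalg.inv(l) for l in ls]) / J   # Eq. (82): JN x N

assert np.allclose(L @ L.T, A)                   # Eq. (78)
assert np.allclose(L @ Lg, np.eye(N))            # Eq. (83): right pseudo-inverse
assert not np.allclose(Lg @ L, np.eye(J * N))    # but not a left pseudo-inverse
```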
We finally introduce the quantum state
$\displaystyle\left|{\textbf{b}^{\prime}}\right\rangle:=\left|{L^{g}\,\textbf{b}}\right\rangle.$
(84)
By assumption, b is sparse and has
$d_{\textbf{b}}\in{\mathcal{O}}(\mathrm{poly}\,n)$ non-zero entries, hence the
vector $\textbf{b}^{\prime}$ has sparsity $d_{\textbf{b}^{\prime}}\leq
d_{\textbf{b}}J2^{s}\in{\mathcal{O}}(\mathrm{poly}\,n)$. It is then possible
to efficiently compute classically the positions and the values of all the non-
zero entries of $\textbf{b}^{\prime}$, with a gate complexity in
${\mathcal{O}}(n\,d_{\textbf{b}^{\prime}})$, and thus also compute the
normalisation factor $\left|\left|{\textbf{b}^{\prime}}\right|\right|$, with a
gate complexity in ${\mathcal{O}}(d_{\textbf{b}^{\prime}})$. Using the method
described in Ref. [49], we can then efficiently compile a quantum circuit
${\mathcal{U}}_{\textbf{b}^{\prime}}$ that prepares
$\left|{\textbf{b}^{\prime}}\right\rangle$ and has a gate complexity in
${\mathcal{O}}(n\,d_{\textbf{b}^{\prime}})={\mathcal{O}}(n\,d_{\textbf{b}}J2^{s})$,
assuming that all single qubit rotations are performed exactly.
Employing the classical pre-processing described up to now, we can efficiently
implement quantum circuits ${\mathcal{P}}_{L^{\dagger}}$ and
${\mathcal{U}}_{\textbf{b}^{\prime}}$ that act as a sparse access to
$L^{\dagger}$ and state preparation circuit for $\textbf{b}^{\prime}$,
respectively; importantly, the gate complexities of these unitaries are
independent of $\kappa$. We thus have at hand the necessary tools to
implement the quantum pseudo-inversion algorithm that we present in the
upcoming Section 5.4.
### 5.4 Efficient pseudo-inversion quantum algorithm
In this section we look into a quantum algorithm for the linear regression
problem
$\displaystyle\underset{\textbf{x}}{\mathrm{argmin}}\left|\left|{\,L^{\dagger}\textbf{x}-\textbf{b}^{\prime}\,}\right|\right|$
(85)
having solution
$\left|{\textbf{x}}\right\rangle=\left|{(L^{\dagger})^{+}\textbf{b}^{\prime}}\right\rangle=\left|{A^{-1}\textbf{b}}\right\rangle$.
We then employ the quantum pseudo-inversion algorithm of [19, Corollary 31]
which we report here for completeness.
###### Proposition 15 (Complexity of pseudo-inverse state preparation).
Suppose $\widetilde{\kappa}\geq 2$, $\mathcal{L}\in\mathds{C}^{N\times N}$ is
a Hermitian matrix whose non-zero eigenvalues are contained in the domain
$[-1,-1/\widetilde{\kappa}]\cup[1/\widetilde{\kappa},1]$, and $\varepsilon$ is
the target precision. Assume that we have access to
$\mathcal{U}_{\mathcal{L}}$, an $(\alpha,a,\delta)$-matrix-block-encoding of
$\mathcal{L}$ with
$\delta\in\mathfrak{o}\left(\varepsilon/(\widetilde{\kappa}^{2}\log^{3}\frac{\widetilde{\kappa}}{\varepsilon})\right)$
and $a\in\Omega(\log N)$, and to a state preparation oracle
$\mathcal{U}_{\textbf{v}}$ for a vector v such that
$\left|\left|{\Pi_{\mathcal{L}}\left|{\textbf{v}}\right\rangle}\right|\right|\geq\sqrt{\gamma}$,
where $\Pi_{\mathcal{L}}$ is the orthogonal projector onto the support of
$\mathcal{L}^{+}$ and $\gamma$ is a known positive parameter. Then, there is a
(VTAA-based) quantum algorithm that produces a state $\varepsilon$-close to
$\left|{\mathcal{L}^{+}\textbf{v}}\right\rangle$ and has:
$\displaystyle Q\left[{\mathcal{U}}_{\mathcal{L}}\right]\;$
$\displaystyle\in\;{\mathcal{O}}\\!\left(\frac{\alpha}{\sqrt{\gamma}}\,\widetilde{\kappa}\log^{3}(\widetilde{\kappa})\log^{2}(1/\varepsilon)\right)$
(86) $\displaystyle Q\left[{\mathcal{U}}_{\textbf{v}}\right]\;$
$\displaystyle\in\;{\mathcal{O}}\\!\left(\frac{1}{\sqrt{\gamma}}\,\widetilde{\kappa}\log(\widetilde{\kappa})\right)$
(87)
where $Q[{\mathcal{U}}_{\mathcal{L}}]$ and $Q[{\mathcal{U}}_{\textbf{v}}]$ are
the query complexities in terms of accesses to ${\mathcal{U}}_{\mathcal{L}}$ and
${\mathcal{U}}_{\textbf{v}}$, respectively. The algorithm is gate-efficient, only requiring
${\mathcal{O}}\big{(}a\,Q\left[\mathcal{U}_{\mathcal{L}}\right]\big{)}$ extra
elementary gates.
We will apply Proposition 15 using as coefficient matrix $\mathcal{L}$ the
Hermitian extension of $L^{\dagger}$ and
$\widetilde{\kappa}\equiv\sqrt{\kappa}$. That is, we consider the Hermitian
matrix $\mathcal{L}\in\mathds{C}^{(J+1)N\times(J+1)N}$ and vector
$\textbf{v}\in\mathds{C}^{(J+1)N}$
$\displaystyle\mathcal{L}=\left(\begin{array}{cc}0&L^{\dagger}\\ L&0\end{array}\right)\qquad\textbf{v}=\left(\begin{array}{c}\textbf{b}^{\prime}\\ 0\end{array}\right)$ (92)
which, after pseudo-inversion, results in the vector
$\displaystyle\textbf{x}=\mathcal{L}^{+}\textbf{v}=\left(\begin{array}{cc}0&L^{+}\\ (L^{\dagger})^{+}&0\end{array}\right)\left(\begin{array}{c}\textbf{b}^{\prime}\\ 0\end{array}\right)=\left(\begin{array}{c}0\\ (L^{\dagger})^{+}\,\textbf{b}^{\prime}\end{array}\right)$ (99)
which encodes the quantum state
$\left|{J}\right\rangle\otimes\left|{(L^{\dagger})^{+}\,\textbf{b}^{\prime}}\right\rangle$
and the solution is obtained discarding the $(J+1)$-level ancilla system in
the state $\left|{J}\right\rangle$.
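The block structure of Eqs. (92) and (99) can be verified on a toy instance: the pseudo-inverse of the Hermitian extension applied to v leaves the top block empty and places $(L^{\dagger})^{+}\textbf{b}^{\prime}$ in the bottom block.

```python
import numpy as np

rng = np.random.default_rng(6)
N, J = 4, 2
M = J * N

Ls = []
for _ in range(J):
    w = rng.normal(size=(N, N))
    Ls.append(np.linalg.cholesky(w @ w.T + np.eye(N)))
L = np.hstack(Ls)                     # N x M factor with L L^T positive definite
bprime = rng.normal(size=M)

# Hermitian extension and extended vector, Eq. (92).
calL = np.block([[np.zeros((M, M)), L.T],
                 [L, np.zeros((N, N))]])
v = np.concatenate([bprime, np.zeros(N)])

x = np.linalg.pinv(calL) @ v          # Eq. (99)
assert np.allclose(x[:M], 0)                              # top block vanishes
assert np.allclose(x[M:], np.linalg.pinv(L.T) @ bprime)   # (L^dag)^+ b'
```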
### 5.5 Runtime estimation
In this section we look into methods for estimating the parameters
$\alpha,\kappa$ and $\gamma$ (i.e., the normalisation of the block-encoding of
$L^{\dagger}$, condition number, and overlap with the support space) that
determine the runtime of the pseudo-inversion algorithm of Proposition 15, and
thus determine the overall complexity of the Sum-QLS solver.
First, we can implement sparse-matrix-accesses ${\mathcal{P}}_{L}$ and
${\mathcal{P}}_{L^{\dagger}}$ using the information stored in a qRAM, as
previously explained. Childs’ walk operator [5] then allows us to realise a
block-encoding ${\mathcal{U}}_{\mathcal{L}}$ of $\mathcal{L}$ using
${\mathcal{O}}(1)$ accesses to ${\mathcal{P}}_{L}$ and
${\mathcal{P}}_{L^{\dagger}}$. Note that ${\mathcal{U}}_{\mathcal{L}}$ is also
a block-encoding of $L^{\dagger}$ (after a swap of the position of the block).
The normalisation factor of this block-encoding is
$\alpha=J2^{s}\in{\mathcal{O}}(\mathrm{poly}\,n)$, equal to the sparsity of
$\mathcal{L}$.
Second, we can explicitly bound the condition number of $A$ as follows.
Positive-definiteness of the Hamiltonian terms $H_{(j)}$ implies positive-
definiteness of $A$ and, moreover, the smallest and largest eigenvalues of $A$
satisfy $\lambda_{\min}(A)\geq\sum_{j=1}^{J}\lambda_{\min}(h_{(j)})$ and
$\lambda_{\max}(A)\leq\sum_{j=1}^{J}\lambda_{\max}(h_{(j)})$. Since each
$h_{(j)}$ can be efficiently diagonalised, this means that it is possible to
classically compute these values, which then yield the explicit upper bound
$\displaystyle\kappa(A)\leq\frac{\sum_{j=1}^{J}\lambda_{\max}(h_{(j)})}{\sum_{j=1}^{J}\lambda_{\min}(h_{(j)})}\equiv\kappa\,.$
(100)
Tighter bounds on $\kappa(A)$ could also be obtained via more computationally
intensive numerical methods, e.g. by first summing together groups of
Hamiltonian terms and then diagonalising each sum.
We now move on to lower-bounding the value of the overlap parameter
$\displaystyle\big{|}\big{|}\,\Pi_{\mathcal{L}}\left|{\textbf{v}}\right\rangle\big{|}\big{|}=\big{|}\big{|}\,\Pi_{L}\left|{\textbf{b}^{\prime}}\right\rangle\big{|}\big{|},$
(101)
where $\Pi_{\mathcal{L}}$ and $\Pi_{L}$ are the projectors on the supports of
$\mathcal{L}^{+}$ and of $(L^{\dagger})^{+}$, respectively; note that the
supports of $L$ and $(L^{\dagger})^{+}$ are equal. Using the identity
$\Pi_{L}=L^{+}L$ we thus obtain
$\displaystyle\Pi_{L}=L^{\dagger}A^{-1}L$
$\displaystyle=\sum_{i,j=1}^{J}\left|{i}\right\rangle\\!\left\langle{j}\right|\otimes
L_{(i)}^{\dagger}A^{-1}L_{(j)}\,.$ (102)
Moreover, we have:
$\displaystyle\left|{\textbf{b}^{\prime}}\right\rangle=\left|{L^{g}\,\textbf{b}}\right\rangle=\frac{1}{\sqrt{{\mathcal{N}}}}\sum_{j=1}^{J}\left|{j}\right\rangle\otimes
L_{(j)}^{-1}\left|{\textbf{b}}\right\rangle$ (103)
where the normalisation factor ${\mathcal{N}}$ is given by
$\displaystyle{\mathcal{N}}=\sum_{j=1}^{J}\left|\left|{L_{(j)}^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|^{2}=\sum_{j=1}^{J}\left\langle{\textbf{b}}\right|H_{(j)}^{-1}\left|{\textbf{b}}\right\rangle.$
(104)
Then we can compute
$\displaystyle\Pi_{L}\left|{\textbf{b}^{\prime}}\right\rangle$
$\displaystyle=\frac{1}{\sqrt{{\mathcal{N}}}}\sum_{i,j=1}^{J}\left|{i}\right\rangle\otimes
L_{(i)}^{\dagger}A^{-1}L_{(j)}L_{(j)}^{-1}\left|{\textbf{b}}\right\rangle$
(105)
$\displaystyle=\frac{J}{\sqrt{{\mathcal{N}}}}\sum_{i=1}^{J}\left|{i}\right\rangle\otimes
L_{(i)}^{\dagger}A^{-1}\left|{\textbf{b}}\right\rangle$ (106)
and finally we obtain
$\displaystyle\left|\left|{\Pi_{L}\left|{\textbf{b}^{\prime}}\right\rangle}\right|\right|^{-1}$
$\displaystyle=\frac{\sqrt{{\mathcal{N}}}}{J}\left[\left\langle{\textbf{b}}\right|A^{-1}{\textstyle\sum_{i=1}^{J}}\big{(}L_{(i)}L_{(i)}^{\dagger}\big{)}A^{-1}\left|{\textbf{b}}\right\rangle\right]^{-1/2}$
(107)
$\displaystyle=\frac{1}{J}\,\sqrt{\frac{\sum_{j=1}^{J}\left\langle{\textbf{b}}\right|H_{(j)}^{-1}\left|{\textbf{b}}\right\rangle}{\left\langle{\textbf{b}}\right|A^{-1}\left|{\textbf{b}}\right\rangle}}\;.$
(108)
We now suppose that a value $\gamma>0$ such that
$\left|\left|{\Pi_{\mathcal{L}}\left|{\textbf{v}}\right\rangle}\right|\right|\geq\sqrt{\gamma}$
is known; we recall that the quantum pseudo-inversion algorithm has a
runtime quasi-linear in $1/\sqrt{\gamma}$. We comment extensively on the
values that $\gamma$ can take in order to understand in which cases the Sum-
QLS solver yields an advantage over competing methods.
1. We have
$\left|\left|{\Pi_{L}\left|{\textbf{b}^{\prime}}\right\rangle}\right|\right|\leq
1$, and the inequality is saturated when $H_{(j)}=A/J$ for all $j$.
2. The bound
$\sum_{j=1}^{J}\left\langle{\textbf{b}}\right|H_{(j)}^{-1}\left|{\textbf{b}}\right\rangle\leq
J\,\lambda_{*}^{-1}$ holds, where
$\lambda_{*}:=\min_{j}\lambda_{\min}(H_{(j)})$. Assuming that
$\lambda_{*}\in\Omega\big{(}\lambda_{\min}(A)/J\big{)}$, i.e. there is no
Hamiltonian term having a minimum eigenvalue significantly smaller than the
average minimum eigenvalue, we obtain:
$\displaystyle\left|\left|{\Pi_{L}\left|{\textbf{b}^{\prime}}\right\rangle}\right|\right|^{-1}$
$\displaystyle\in{\mathcal{O}}\bigg{(}\frac{1}{J}\,\frac{\sqrt{J\lambda_{*}^{-1}}}{\sqrt{\left\langle{\textbf{b}}\right|A^{-1}\left|{\textbf{b}}\right\rangle}}\bigg{)}={\mathcal{O}}\bigg{(}\frac{\sqrt{\kappa(A)}}{\sqrt{\left\langle{\textbf{b}}\right|A^{-1}\left|{\textbf{b}}\right\rangle}}\bigg{)}\;.$
(109)
3. The numerator in Eq. (108) can be explicitly calculated, while the denominator
is in general difficult to compute. (One could use techniques related to
amplitude estimation to bound
$\big{|}\big{|}A^{-1/2}\left|{\textbf{b}}\right\rangle\big{|}\big{|}$, but
this operation could be as difficult as solving the QLS in the first place.)
However, assuming $\left|\left|{A}\right|\right|\leq 1$, we have the bounds
$\displaystyle\sqrt{\left\langle{\textbf{b}}\right|A^{-1}\left|{\textbf{b}}\right\rangle}=\left|\left|{A^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|\in\big{[}\,1,\sqrt{\kappa}\,\big{]}$
(110)
4. Importantly, the expression
$\left\langle{\textbf{b}}\right|A^{-1}\left|{\textbf{b}}\right\rangle$ appears
in the denominator, so that a more “ill-conditioned” vector b results in a
larger overlap and thus in a faster Sum-QLS solver.
5. As in the analysis of Section 4.4 we can study the runtime in an average-case
scenario. For randomly chosen $A$ and b (sampled according to suitable
probability distributions) we have
$\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|\in\Theta(\sqrt{\kappa(A)})$
almost surely; under the same assumptions, we also have
$\left|\left|{A^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|\in\Theta(\kappa(A)^{1/4})$
almost surely. Inserting this estimate into Eq. (109) we have that
$\displaystyle\left|\left|{\Pi_{L}\left|{\textbf{b}^{\prime}}\right\rangle}\right|\right|^{-1}$
$\displaystyle\;\in\;\Theta\Big{(}\kappa(A)^{1/4}\Big{)}$ (111)
holds almost surely. We conclude that for an average-case Sum-QLS problem the
runtime is in
$\widetilde{{\mathcal{O}}}(\sqrt{\kappa/\gamma})=\widetilde{{\mathcal{O}}}(\kappa^{3/4})$,
if $\sqrt{\gamma}$ is a tight lower bound for
$\left|\left|{\Pi_{L}\left|{\textbf{b}^{\prime}}\right\rangle}\right|\right|$.
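The interval in Eq. (110) is straightforward to confirm numerically for a positive-definite $A$ rescaled so that $\left|\left|{A}\right|\right|=1$; the minimal sketch below (our own check on a random instance) verifies it for a batch of random unit vectors b:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32

# Random positive-definite A, rescaled so that ||A|| = 1.
G = rng.standard_normal((N, N))
A = G @ G.T + 0.05 * np.eye(N)
A /= np.linalg.eigvalsh(A)[-1]

vals, vecs = np.linalg.eigh(A)
kappa = vals[-1] / vals[0]
Ainv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T   # A^{-1/2} via eigendecomposition

# For any unit vector |b>:  1 <= || A^{-1/2} |b> || <= sqrt(kappa), Eq. (110).
for _ in range(20):
    b = rng.standard_normal(N)
    b /= np.linalg.norm(b)
    nrm = np.linalg.norm(Ainv_sqrt @ b)
    assert 1 - 1e-9 <= nrm <= np.sqrt(kappa) + 1e-9
```

The two endpoints correspond to b aligned with the top and bottom eigenvectors of $A$, respectively.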
Summarising, we have the following result.
###### Proposition 16 (Complexity of the Sum-QLS solver).
Consider the Sum-QLS problem with parameters $N,J,s,\kappa$ and
$d_{\textbf{b}}$ as in Definition 14. There is a classical-quantum algorithm
$\mathcal{A}$ solving the Sum-QLS that has the following features.
The first part of $\mathcal{A}$ consists of an efficient classical pre-
processing algorithm, which outputs a description of the quantum circuits
implementing ${\mathcal{U}}_{L^{\dagger}}$ and
${\mathcal{U}}_{\textbf{b}^{\prime}}$; here ${\mathcal{U}}_{L^{\dagger}}$ is a
$(2^{s}J,1,0)$-matrix-block-encoding of $L^{\dagger}$ [as given in Eq. (77)]
with gate complexity in ${\mathcal{O}}(nJ2^{2s})$ and circuit depth in
${\mathcal{O}}\big{(}s+\log J\big{)}$; while
${\mathcal{U}}_{\textbf{b}^{\prime}}$ is a state preparation unitary for
$\textbf{b}^{\prime}$ [as given in Eq. (82)] with gate complexity in
${\mathcal{O}}(n\,d_{\textbf{b}}\,J2^{s})$. By definition, $L^{\dagger}$ and
$\textbf{b}^{\prime}$ satisfy
$(L^{\dagger})^{+}\textbf{b}^{\prime}=A^{-1}\textbf{b}$.
The second part of $\mathcal{A}$ consists of running the quantum
pseudo-inversion algorithm given in Proposition 15, with the unitaries
${\mathcal{U}}_{L^{\dagger}}$ and ${\mathcal{U}}_{\textbf{b}^{\prime}}$ as
sub-routines, in order to produce an $\varepsilon$-close approximation of the
ideal output
$\left|{\textbf{x}}\right\rangle=\left|{A^{-1}\textbf{b}}\right\rangle$. This
algorithm requires the knowledge of a value $\sqrt{\gamma}$ that satisfies
$\sqrt{\gamma}\leq\left|\left|{\Pi_{L}\textbf{b}^{\prime}}\right|\right|$,
where $\Pi_{L}$ is the projector onto the support of $L$, equivalently:
$\displaystyle\frac{1}{J^{2}}\frac{\sum_{j=1}^{J}\left\langle{\textbf{b}}\right|H_{(j)}^{-1}\left|{\textbf{b}}\right\rangle}{\left\langle{\textbf{b}}\right|A^{-1}\left|{\textbf{b}}\right\rangle}\leq\frac{1}{\gamma}\,.$
(112)
Inserting the previously given expressions for the relevant parameters in Eqs.
(86-87) results in
gate complexity
$\displaystyle\in{\mathcal{O}}\\!\left(\big{(}n\,J2^{2s}\big{)}\,\frac{J2^{s}}{\sqrt{\gamma}}\,\kappa\log^{3}(\kappa)\log^{2}(1/\varepsilon)\,+\,\big{(}n\,d_{\textbf{b}}J2^{s}\big{)}\frac{1}{\sqrt{\gamma}}\,\kappa\log(\kappa)\right)$
(113)
$\displaystyle={\mathcal{O}}\\!\left(\sqrt{\frac{\kappa}{\gamma}}\;\mathrm{poly}\Big{(}n,\log(\kappa/\varepsilon)\Big{)}\right)$
(114)
assuming that $J,2^{s}$ and $d_{\textbf{b}}$ have polynomial dependence on
$n=\log_{2}N$. A quadratic speed-up in $\kappa$ (up to polylogarithmic
factors) is achieved over general QLS solvers when $\gamma\in\Omega(1)$.
We remark that the family of Sum-QLS instances where all the parameters in
Proposition 16 scale polynomially (with the promise, in particular, that Eq.
(112) holds for some $\gamma\in{\mathcal{O}}(\mathrm{poly}\,n)$), can be
solved in polynomial time on a quantum computer, as was already shown in Ref.
[16]. This means that the subset of problems having a polynomial scaling of
the parameters, which we denote Sum-QLSpoly, is contained in BQP. Moreover,
the reduction in Appendix D can map polynomial-sized quantum circuits onto an
instance of Sum-QLSpoly, thus showing that Sum-QLSpoly is also BQP-hard. These
two inclusions then show that Sum-QLSpoly is BQP-complete. (While the
original QLS problem was proven to be BQP-complete already in Ref. [1], our
contribution is to show that adding the constraint that $A$ is the sum of
positive-definite local Hamiltonians does not change the complexity class of
the problem.)
## 6 Discussion and outlook
In this work we have presented two algorithms for solving QLS problems
in the case where the coefficient matrix is positive definite, which (for
certain problem instances) achieve a runtime in ${\mathcal{O}}(\sqrt{\kappa})$, a
quadratic improvement compared to what can be obtained using general QLS
solvers. This improvement has the potential to greatly expand the classes
of problems where quantum computation can provide a speed-up. For
instance, the discretization of partial differential equations in $D$
dimensions results in a PD linear system with $\kappa\in{\mathcal{O}}(N^{2/D})$
[14] and thus having a runtime improvement from ${\mathcal{O}}(\kappa)$ to
${\mathcal{O}}(\sqrt{\kappa})$ is crucial to yield a quantum speed-up in the
physically relevant cases $D=2$ and $D=3$. As a second example, it is possible
to estimate the hitting time of a Markov chain by solving a QLS for the matrix
$A=I-S$, where $S$ is related to the discriminant matrix of the Markov chain
[16]; since $A$ is positive definite and decomposable as a sum of PD local
Hamiltonian terms [16, Appendix A], our second algorithm could be applicable to
this problem.
In the spirit of finding in the near future real-world applications of quantum
algorithms, we note that there is considerable interest in the possibility of
realising QLS solvers in Noisy Intermediate-scale Quantum (NISQ) devices [10,
11] and we argue that some of our results might be implementable in NISQ
devices too. In particular, the crux of our first algorithm is to find a good
polynomial approximation for $A^{-1}$ with a degree in
${\mathcal{O}}(\sqrt{\kappa})$ and then implement it with the quantum signal
processing method [28, 25]. The quadratic reduction in the degree of the
polynomials renders their realisation more easily compatible with the next
generation of quantum processors.
We note that many further improvements and extensions to our algorithms may be
possible. Regarding the first algorithm (Section 4), it would be important to
extend the classes of matrices for which a normalised matrix-block-encoding of
$B=I-\eta\,A$ can be efficiently implemented to make the method more generally
applicable. Regarding the second algorithm (Section 5) we note that the
specific choice of the generalised pseudo-inverse $L^{g}$ in the classical
step results in a ${\mathcal{O}}(1/\sqrt{\gamma})$ multiplicative overhead in
the runtime, where $\gamma$ is given in Eq. (112). An open question is whether
a different choice of the pseudo-inverse could improve, or eliminate
altogether, this overhead. We also mention the possibility that the
decomposition of $A$ as a sum of local PD Hamiltonians could be computed on-
the-fly by the solver, instead of being given as an external input. We note
that if the sparsity pattern satisfies certain conditions (it is a _chordal_
graph) a decomposition $A=LL^{\dagger}$ that does not increase the sparsity of
$A$ exists, and the characterisation given in Ref. [50, Theorem 2.6] could be
employed to compute it.
We finally mention an open research idea that may be worth investigating. The
eigenpath transversal method has been used in some new algorithms to solve the
QLS problem with time complexity in $\widetilde{{\mathcal{O}}}(\kappa)$ [7, 8,
10, 11, 12]; this method is simpler than the Variable-Time Amplitude
Amplification (VTAA) method and also results in (marginally) improved
runtimes, however, it is not directly applicable to solve a pseudo-inversion
problem. It would be interesting to find a way to adapt the eigenpath
transversal method to make it work also in the case where the coefficient
matrix is singular. As a by-product, it could replace the algorithm given in
Ref. [19] as the sub-routine used in our second algorithm to solve the pseudo-
inversion problem, therefore making it more practical.
## Acknowledgments
This work was supported by the Dutch Research Council (NWO/OCW), as part of
the Quantum Software Consortium programme (project number 024.003.037). Some
ideas present in the paper originated from discussions with Anirban Chowdhury
while DO was visiting the Center for Quantum Information and Control (CQuIC).
The authors acknowledge discussions with Markus Mieth and Andreas Spörl, and thank
András Gilyén for reading the manuscript and for giving us several insightful
comments and suggestions.
## Appendices
## Appendix A Proof of the query complexity lower bound
In this Appendix we give the proof of the query complexity lower bound
presented in Section 3, stated here in a more extended form.
###### Proposition 17 (Query complexity lower bound).
Consider oracular quantum algorithms that solve the PD-QLS problem as
presented in Definition 2 for different access models to $A$ and b. Namely,
access to b is given via a state preparation oracle
${\mathcal{U}}_{\textbf{b}}$ (Definition 3), while access to $A$ is given
either via a sparse-matrix oracle ${\mathcal{P}}_{A}$ (Definition 4) or via a
matrix-block-encoding ${\mathcal{U}}_{A}$ (Definition 5). Then, PD-QLS solving
algorithms reaching a constant precision $\varepsilon\in{\mathcal{O}}(1)$ have
query complexities
$Q[{\mathcal{U}}_{\textbf{b}}],Q[{\mathcal{U}}_{A}],Q[{\mathcal{P}}_{A}]$ all
in $\Omega\big{(}\min(\kappa,N)\big{)}$. More precisely, we have:
1. $Q[{\mathcal{U}}_{\textbf{b}}]\in\Omega\big{(}\min(\kappa,N)\big{)}$, independently
from the access model for $A$;
2. $Q[{\mathcal{U}}_{A}]\in\Omega\big{(}\min(\kappa,N)\big{)}$, independently
from the access model for b, when ${\mathcal{U}}_{A}$ is a normalised matrix-
block-encoding;
3. $Q[{\mathcal{P}}_{A}]\in\Omega\big{(}\min(\kappa,N)\big{)}$, independently
from the access model for b, when $A$ is a matrix with constant sparsity.
In the main text we prove a weaker result using a reduction to the quantum
search problem, which has a query complexity in $\Omega(\sqrt{N/M})$, where
$M\in[N]:=\\{1,\ldots,N\\}$ is the number of marked elements; here, we use
instead a reduction to a “promise majority” problem, which has a query
complexity in $\Omega(N/M)$, where $M\in[N]$ is the margin of the majority. As
a result, we can prove that solving a PD-QLS has linear scaling of the query
complexity in the condition number for all $\kappa\in{\mathcal{O}}(N)$. To
prove Proposition 17, we first introduce the PromiseMajorityM problem as
follows.
###### Definition 18 (PromiseMajorityM).
Given a vector $y\in\\{0,1\\}^{N}$, a value $M\in[N]$ (we assume for
simplicity that $N+M$ is even) and given the promise that we either have
* (Case 0)
$y_{i}=0$ for $N/2+M/2$ of the entries
* (Case 1)
$y_{i}=1$ for $N/2+M/2$ of the entries
the PromiseMajorityM problem consists in determining which of the two is the
case.
We assume that we have access to $y$ via a quantum oracle ${\mathcal{P}}_{y}$
that acts as ${\mathcal{P}}_{y}\left|{i,z}\right\rangle=\left|{i,z\oplus
y_{i}}\right\rangle$ for all $i\in[N]$ and for $z\in\\{0,1\\}$. We also recall
that the two-sided bounded-error quantum query complexity $\mathcal{Q}_{2}$ of
a boolean function is defined as the minimum number of accesses to the input
of the function (i.e., to ${\mathcal{P}}_{y}$) that are necessary to correctly
output the value of the function with probability at least $2/3$, both for the
positive and for the negative instances. Then we have the following Lemma:
###### Lemma 19.
The two-sided bounded-error quantum query complexity $\mathcal{Q}_{2}$ of
PromiseMajorityM, in terms of accesses to ${\mathcal{P}}_{y}$, is
$\mathcal{Q}_{2}(\textsc{PromiseMajority}_{M})\in\Omega(N/M)$.
###### Proof.
This follows immediately from Ref. [51, Corollary 1.2]. ∎
We now show that (relativising) PD-QLS solving algorithms can be used to
compute PromiseMajorityM; the lower bound on the query complexity of
PromiseMajorityM directly translates into a lower bound on the query
complexity of the PD-QLS solvers. We will prove separately the three cases of
Proposition 17, with each proof building upon the previous ones.
###### Proof.
_Case 1._
We assume that $y\in\\{0,1\\}^{N}$ is in the domain of PromiseMajorityM, i.e.
$y$ either contains exactly $N/2+M/2$ zeros or $N/2+M/2$ ones, and we define
$\textbf{b}\in\mathds{C}^{N+1}$:
$\displaystyle\left\\{\begin{array}[]{l}b_{i}=(-1)^{y_{i}}\qquad\qquad\mathrm{for}\
i\in[N]\\\\[5.69054pt] b_{N+1}=\sqrt{N+M}\end{array}\right.$ (A.3)
where the value $b_{N+1}$ is fixed to provide a “phase reference” and avoid
ambiguity on the global sign. We have
$\left|{\textbf{b}}\right\rangle=\textbf{b}/\sqrt{2N+M}$ and
$\left|{\textbf{b}}\right\rangle$ can be implemented by first preparing a
state proportional to $(1,\ldots,1,\sqrt{N+M})$ and then applying the correct
phases; this can be done with one oracle call to each of ${\mathcal{P}}_{y}$
and ${\mathcal{P}}_{y}^{\dagger}$, via the transformations
$\displaystyle\left|{i}\right\rangle\ \overset{\mathrm{ancilla}}{\mapsto}\
\left|{i,0}\right\rangle\ \overset{{\mathcal{P}}_{y}}{\mapsto}\
\left|{i,y_{i}}\right\rangle\ \overset{I\otimes Z}{\mapsto}\
(-1)^{y_{i}}\left|{i,y_{i}}\right\rangle\
\overset{{\mathcal{P}}_{y}^{\dagger}}{\mapsto}\
(-1)^{y_{i}}\left|{i,0}\right\rangle\ \overset{\mathrm{discard}}{\mapsto}\
(-1)^{y_{i}}\left|{i}\right\rangle$ (A.4)
and extended by linearity to superpositions. Next, we introduce the vector
$\boldsymbol{1}_{N}:=(1,\ldots,1)^{T}$ containing $N$ ones and then define
$\displaystyle K^{\prime}\equiv K\oplus
0:=\frac{1}{N}\boldsymbol{1}_{N}\boldsymbol{1}_{N}^{\,T}\oplus 0$ (A.5)
as a matrix of size $(N+1)\times(N+1)$, so that $K^{\prime\,2}=K^{\prime}$,
and finally
$\displaystyle A:=I-(1-\epsilon)K^{\prime}$ (A.6)
where $\epsilon$ is a small parameter that we will define shortly. The matrix
$A$ can be used as a coefficient matrix for a PD-QLS solver since $A$ is
positive definite and $\left|\left|{A}\right|\right|=1$. Moreover, the
condition number of $A$ is exactly $\kappa(A)=1/\epsilon$.
Next we have:
$\displaystyle A^{-1}$
$\displaystyle=I+\sum_{t=1}^{\infty}\big{(}(1-\epsilon)K^{\prime}\big{)}^{t}$
(A.7) $\displaystyle=I+\frac{1-\epsilon}{\epsilon}K^{\prime},$ (A.8)
where the summation converges since
$\left|\left|{(1-\epsilon)K^{\prime}}\right|\right|<1$. Let us apply $A^{-1}$ to b:
$\displaystyle A^{-1}\textbf{b}$
$\displaystyle=\begin{cases}\textbf{b}+\frac{1-\epsilon}{\epsilon}\frac{M}{N}\,\boldsymbol{1}_{N}^{\prime}&\mathrm{if~{}}y\mathrm{~{}has~{}a~{}majority~{}of~{}0}\\\
\textbf{b}-\frac{1-\epsilon}{\epsilon}\frac{M}{N}\,\boldsymbol{1}_{N}^{\prime}&\mathrm{if~{}}y\mathrm{~{}has~{}a~{}majority~{}of~{}1}\end{cases}$
(A.9)
$\displaystyle=\begin{cases}\textbf{b}+\boldsymbol{1}_{N}^{\prime}&\qquad\ \
\,\mathrm{if~{}}y\mathrm{~{}has~{}a~{}majority~{}of~{}0}\\\
\textbf{b}-\boldsymbol{1}_{N}^{\prime}&\qquad\ \
\,\mathrm{if~{}}y\mathrm{~{}has~{}a~{}majority~{}of~{}1}\end{cases}$ (A.10)
where we have introduced $\boldsymbol{1}_{N}^{\prime}:=(1,\ldots,1,0)^{T}$ and
we have chosen $\epsilon$ so that $\frac{1-\epsilon}{\epsilon}\frac{M}{N}=1$,
giving $\kappa(A)=\frac{1}{\epsilon}=\frac{N+M}{M}$. Introducing a boolean
value $f=\textsc{PromiseMajority}_{M}(y)$, i.e. $f\in\\{0,1\\}$ is equal to
the majority of $y$, we can rewrite the vector $A^{-1}\textbf{b}$ entry-wise
as
$\displaystyle\left\\{\begin{array}[]{llc}{[A^{-1}\textbf{b}]_{i}}&=(-1)^{f}\cdot
2&\qquad\mathrm{if~{}}i\mathrm{~{}is~{}such~{}that~{}}y_{i}=f\\\
{[A^{-1}\textbf{b}]_{i}}&=0&\qquad\mathrm{if~{}}i\mathrm{~{}is~{}such~{}that~{}}y_{i}\neq
f\\\ {[A^{-1}\textbf{b}]_{N+1}}&=\sqrt{N+M}&\end{array}\right.$ (A.14)
Then we have
$\left|{A^{-1}\textbf{b}}\right\rangle=A^{-1}\textbf{b}/\left|\left|{A^{-1}\textbf{b}}\right|\right|$,
with
$\displaystyle\left|\left|{A^{-1}\textbf{b}}\right|\right|^{2}=2^{2}\,\frac{N+M}{2}+\sqrt{N+M}^{\,2}=3(N+M)$
(A.15)
so that we get
$\displaystyle\left|{A^{-1}\textbf{b}}\right\rangle=\sqrt{\frac{1}{3}}\left|{N+1}\right\rangle\,+\,(-1)^{f}\sqrt{\frac{2}{3}}\sum_{i:y_{i}=f}\sqrt{\frac{2}{N+M}}\left|{i}\right\rangle.$
(A.16)
We then perform a projective measurement where one of the possible measurement
outcomes is
$\displaystyle\left|{``+"}\right\rangle:=\sqrt{\frac{1}{2}}\left|{N+1}\right\rangle\,+\,\sqrt{\frac{1}{2}}\sum_{i=1}^{N}\frac{1}{\sqrt{N}}\left|{i}\right\rangle.$
(A.17)
The cases $f=0$ and $f=1$ in Eq. (A.16) can be distinguished with constant
advantage, since:
$\displaystyle\langle{``+"}|{A^{-1}\textbf{b}}\rangle$
$\displaystyle=\sqrt{\frac{1}{6}}\,+\,(-1)^{f}\frac{N+M}{2}\sqrt{\frac{1}{3}}\sqrt{\frac{2}{N(N+M)}}$
(A.18) $\displaystyle=\frac{1}{\sqrt{6}}\left(1+(-1)^{f}\sqrt{1+M/N}\right).$
(A.19)
Note that the two cases can still be distinguished with constant probability
if we replace
$\left|{\textbf{x}}\right\rangle=\left|{A^{-1}\textbf{b}}\right\rangle$ with
any approximation $\rho_{\textbf{x}}$ which is sufficiently close to it.
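The reduction above can be replayed classically on a small instance. The sketch below (with illustrative parameters $N=64$, $M=8$, chosen by us) builds b from Eq. (A.3) and $A$ from Eq. (A.6), solves the system exactly, and checks the overlap against Eq. (A.19):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 64, 8          # both even, so N/2 + M/2 is an integer
f = 1                 # the hidden majority bit

# Instance y with exactly N/2 + M/2 entries equal to f.
y = np.full(N, 1 - f)
y[: N // 2 + M // 2] = f
rng.shuffle(y)

# Known term b, Eq. (A.3), and coefficient matrix A = I - (1-eps) K', Eq. (A.6).
b = np.append((-1.0) ** y, np.sqrt(N + M))
K = np.zeros((N + 1, N + 1))
K[:N, :N] = np.ones((N, N)) / N
eps = M / (N + M)     # so that (1-eps)/eps * M/N = 1, i.e. kappa(A) = (N+M)/M
A = np.eye(N + 1) - (1 - eps) * K

x = np.linalg.solve(A, b)
x_state = x / np.linalg.norm(x)

# Reference state |"+">, Eq. (A.17).
plus = np.append(np.ones(N) / np.sqrt(2 * N), np.sqrt(0.5))

overlap = plus @ x_state
predicted = (1 + (-1) ** f * np.sqrt(1 + M / N)) / np.sqrt(6)   # Eq. (A.19)
assert abs(overlap - predicted) < 1e-9
```

Rerunning with `f = 0` flips the sign of the second term in `predicted`, which is exactly the constant distinguishing advantage the proof exploits.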
To summarise, suppose we have to solve a PromiseMajorityM problem and that we
can exploit as a subroutine an oracular quantum algorithm $\mathcal{A}$ that,
given access to ${\mathcal{U}}_{\textbf{b}}$, prepares the state
$\left|{A^{-1}\textbf{b}}\right\rangle$ with sufficiently high precision;
suppose moreover that $\mathcal{A}$ has a query complexity
$Q[{\mathcal{U}}_{\textbf{b}}]=g(\kappa)$, for some function
$g:\mathds{R}^{+}\rightarrow\mathds{N}$. Then, $\mathcal{A}$ can be used to
prepare the state in Eq. (A.16) and solve PromiseMajorityM with constant
distinguishing advantage. The ${\mathcal{P}}_{y}$-query complexity of
$\mathcal{A}$ is
$Q[{\mathcal{P}}_{y}]=2\,Q[{\mathcal{U}}_{\textbf{b}}]=2\,g\big{(}\kappa(A)\big{)}=2\,g\\!\left(\frac{N+M}{M}\right)$.
The lower bound $\mathcal{Q}_{2}(\textsc{PromiseMajority}_{M})\in\Omega(N/M)$
then directly implies $g(\kappa)\in\Omega\big{(}\min(\kappa,N)\big{)}$.
∎
###### Proof.
_Case 2._
We modify the construction given in the previous proof and encode the input
$y$ in the entries of the coefficient matrix, with the goal of showing that
$Q[{\mathcal{U}}_{A}]\in\Omega\big{(}\min(\kappa,N)\big{)}$. To this end, we
define the vector $\textbf{u}\in\mathds{R}^{N+1}$ and a diagonal matrix
$D\in\mathds{R}^{(N+1)\times(N+1)}$
$\displaystyle\left\\{\begin{array}[]{ll}u_{i}=1&\mathrm{for}\
i\in[N]\\\\[5.69054pt] u_{N+1}=\sqrt{N+M}&\end{array}\right.\hskip
28.45274pt\left\\{\begin{array}[]{ll}D_{i,i}=(-1)^{y_{i}}&\mathrm{for}\
i\in[N]\\\\[5.69054pt] D_{N+1,N+1}=1&\end{array}\right.$ (A.24)
and notice the vector b in Eq. (A.3) satisfies $\textbf{b}=D\textbf{u}$. We
also define the coefficient matrix $A^{\prime}$
$\displaystyle A^{\prime}:=DAD$ (A.25)
where $A$ is given in Eq. (A.6). Note that $D$ is unitary and self-inverse,
hence $A^{\prime}$ is positive definite, $\kappa(A^{\prime})=\kappa(A)$, and
moreover $A^{\prime-1}=D^{-1}A^{-1}D^{-1}=DA^{-1}D$.
It is possible to implement exactly (i.e., ideally with zero error) a
normalised matrix block of $A^{\prime}$ using at most $4$ calls to
${\mathcal{P}}_{y}$. First, we consider the unitary $\mathrm{Had}$
that prepares the state
$\left|{\boldsymbol{1}^{\prime}}\right\rangle=\left|{(1,1,\ldots,1,0)^{T}}\right\rangle$,
that is,
$\mathrm{Had}\left|{0}\right\rangle=\left|{\boldsymbol{1}^{\prime}}\right\rangle$.
Then, the matrix
$\displaystyle{\mathcal{U}}_{A}:=\left(\begin{array}[]{cc}\\!\mathrm{Had}\\!&0\\\
0&\\!\mathrm{Had}\\!\end{array}\right)\left(\begin{array}[]{cc}I-(1-\epsilon)\left|{0}\right\rangle\\!\\!\left\langle{0}\right|&-\sqrt{1-\epsilon^{2}}\left|{0}\right\rangle\\!\\!\left\langle{0}\right|\\\
\sqrt{1-\epsilon^{2}}\left|{0}\right\rangle\\!\\!\left\langle{0}\right|&I-(1-\epsilon)\left|{0}\right\rangle\\!\\!\left\langle{0}\right|\end{array}\right)\left(\begin{array}[]{cc}\\!\mathrm{Had}^{\dagger}\\!&0\\\
0&\\!\mathrm{Had}^{\dagger}\\!\end{array}\right)$ (A.32)
is a normalised matrix-block-encoding of $A=I-(1-\epsilon)K^{\prime}$. Note
that the matrix in the centre can be interpreted as the
$\left|{0}\right\rangle\\!\\!\left\langle{0}\right|$-controlled version of the
Pauli-$Y$ rotation $e^{-i\theta Y}$ (with $\cos\theta=\epsilon$) and is
thus efficiently implementable. The operations in Eq. (A.4) correspond to a
unitary quantum circuit that can be written as $D\oplus{\mathcal{U}}$, for
some unitary ${\mathcal{U}}$, and finally we obtain that
${\mathcal{U}}_{A^{\prime}}:=(D\oplus{\mathcal{U}})\,{\mathcal{U}}_{A}\,(D\oplus{\mathcal{U}}^{\dagger})$
is a matrix-block-encoding of $A^{\prime}$.
We now consider the linear system $A^{\prime}\textbf{x}=\textbf{u}$ and a
quantum algorithm that prepares the corresponding solution state
$\displaystyle\left|{A^{\prime-1}\,\textbf{u}}\right\rangle=\left|{DA^{-1}D\,\textbf{u}}\right\rangle=D\left|{A^{-1}\,\textbf{b}}\right\rangle.$
(A.33)
The state $D\left|{A^{-1}\,\textbf{b}}\right\rangle$ can be transformed into
$\left|{A^{-1}\,\textbf{b}}\right\rangle$ using the steps given in Eq. (A.4),
which only requires two extra accesses to ${\mathcal{P}}_{y}$. This state
allows us to solve PromiseMajorityM with constant probability and thus the same
considerations made in the preceding proof yield the result
$Q[{\mathcal{U}}_{A}]\in\Omega\big{(}\min(\kappa,N)\big{)}$.
∎
###### Proof.
_Case 3._
We start by again proving a lower bound on the query complexity
$Q[{\mathcal{U}}_{\textbf{b}}]$, as in Case 1., but for a PD-QLS where the
coefficient matrix $A$ is sparse; then, we use the method used in the proof of
Case 2. to convert it into a lower bound on $Q[{\mathcal{P}}_{A}]$.
Given $y\in\\{0,1\\}^{N}$ satisfying the PromiseMajorityM condition, we
introduce the known term vector $\textbf{b}\in\mathds{R}^{N+1}$
$\displaystyle\left\\{\begin{array}[]{l}b_{i}=(-1)^{y_{i}}\qquad\qquad\qquad\mathrm{for}\
i\in[N]\\\\[5.69054pt] b_{N+1}=\sqrt{N}\,c_{0}\end{array}\right.$ (A.36)
where $c_{0}$ is a positive constant that we will fix later, and we have
$\left|{\textbf{b}}\right\rangle=\textbf{b}\Big{/}\sqrt{N(1+c_{0}^{2})}$.
Next, we define $B^{\prime}\in\mathds{R}^{(N+1)\times(N+1)}$ as
$\displaystyle B^{\prime}=B\oplus 0$ (A.37)
where $B\in\mathds{R}^{N\times N}$ is a symmetric sparse matrix, having $d$
non-zero entries in each row and column which are all equal to $\frac{1}{d}$,
for some constant $d$. Since each row and column of $B$ sums to one, $B$ can
be interpreted as the transition matrix of a Markov chain on a $d$-sparse
graph. We require that the graph corresponding to $B$ is not bipartite and
that the spectral gap of $B$ is large (i.e., the Markov chain is ergodic and
rapidly mixing). These properties guarantee that $B^{t}$ quickly converges to
$K=\frac{1}{N}\boldsymbol{1}_{N}\boldsymbol{1}_{N}^{\,T}$ for
$t\rightarrow\infty$. Since $B$ is symmetric, its spectrum is real and,
because of the Perron–Frobenius theorem [52], the spectrum is contained in
the interval $[-1,+1]$ and includes an eigenvalue $\lambda=1$ with
multiplicity one; the fact that $B^{t}$ converges to
$\frac{1}{N}\boldsymbol{1}_{N}\boldsymbol{1}_{N}^{\,T}$ implies that $-1$
cannot be an eigenvalue of $B$. We then define the spectral gap $\delta(B)$ as
the positive parameter $\delta(B):=\min_{\lambda\neq 1}\\{1-|\lambda|\\}$,
where the minimum is taken over the eigenvalues of $B$.
There are families of so-called _expander graphs_ such that both the sparsity
and the spectral gap are constant, see [53, Chapter 21]. We assume that $B$
belongs to one of these expander families and thus, in particular, there is a
(known) positive constant $c_{1}$ such that
$\displaystyle\frac{1}{\delta(B)}\leq c_{1}$ (A.38)
for all sizes $N\in\mathds{N}$. Moreover, $\boldsymbol{1}_{N}$ is the unique
$+1$ eigenvector and hence, from the spectral decomposition of $B$, we can
write
$\displaystyle
B=\frac{1}{N}\boldsymbol{1}_{N}\boldsymbol{1}_{N}^{\,T}+\sum_{\lambda\neq
1}\lambda\,\textbf{v}_{\lambda}\textbf{v}_{\lambda}^{\,{\dagger}}=K+R\,.$
(A.39)
Here $K=\frac{1}{N}\boldsymbol{1}_{N}\boldsymbol{1}_{N}^{\,T}$,
$\textbf{v}_{\lambda}$ are normalised eigenvectors of $B$, and thus
$R\in\mathds{R}^{N\times N}$ is a matrix of rank $N-1$ with
$\left|\left|{R}\right|\right|=1-\delta(B)$ and $KR=RK=0$. Then we define:
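The decomposition $B=K+R$ and the convergence of $B^{t}$ to $K$ can be illustrated with the complete-graph walk matrix as a stand-in (our own illustration: this choice has constant spectral gap but not constant sparsity, so it is not an expander family in the sense required above):

```python
import numpy as np

N = 50
ones = np.ones((N, 1))
K = ones @ ones.T / N                       # K = (1/N) 1_N 1_N^T

# Walk matrix of the complete graph: symmetric, doubly stochastic, non-bipartite.
B = (np.ones((N, N)) - np.eye(N)) / (N - 1)

# Spectral decomposition B = K + R, Eq. (A.39).
R = B - K
delta = 1 - np.max(np.abs(np.linalg.eigvalsh(B)[:-1]))  # gap, excluding lam = 1

assert np.allclose(K @ R, 0) and np.allclose(R @ K, 0)
assert np.isclose(np.linalg.norm(R, 2), 1 - delta)
# B^t converges to K at rate (1 - delta)^t, since B^t - K = R^t:
Bt = np.linalg.matrix_power(B, 10)
assert np.linalg.norm(Bt - K, 2) <= (1 - delta) ** 10 + 1e-12
```

Here $\delta(B)=1-1/(N-1)$, so the convergence to $K$ is extremely fast; a genuine constant-degree expander trades this for constant sparsity while keeping $\delta$ bounded below.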
$\displaystyle A:=I-(1-\epsilon)B^{\prime}$ (A.40)
for some $\epsilon>0$ that we will define shortly. The matrix $A$ can be used
as a coefficient matrix in a PD-QLS solver since it is positive definite and
with norm one. Moreover, $A$ is $(d+1)$-sparse (where $d$ is the sparsity of
$B$, which is constant) and the condition number of $A$ is exactly
$\kappa(A)=1/\epsilon$.
Next we have
$\displaystyle A^{-1}$
$\displaystyle=I+\sum_{t=1}^{\infty}\big{(}(1-\epsilon)B^{\prime}\big{)}^{t}\equiv
I+\mathcal{B}$ (A.41)
and then defining
$\mathcal{K}:=\sum_{t=1}^{\infty}\big{(}(1-\epsilon)K^{\prime}\big{)}^{t}=\frac{1-\epsilon}{\epsilon}K^{\prime}$
we get
$\displaystyle\left|\left|{\mathcal{B}-\mathcal{K}}\right|\right|$
$\displaystyle\leq\sum_{t=1}^{\infty}\left|\left|{\big{[}(1-\epsilon)B\big{]}^{t}-\big{[}(1-\epsilon)K\big{]}^{t}}\right|\right|$
(A.42)
$\displaystyle=\sum_{t=1}^{\infty}(1-\epsilon)^{t}\left|\left|{(K+R)^{t}-K^{t}}\right|\right|$
(A.43)
$\displaystyle=\sum_{t=1}^{\infty}(1-\epsilon)^{t}\left|\left|{(K+R^{t})-K}\right|\right|$
(A.44) $\displaystyle\leq\sum_{t=1}^{\infty}(1-\delta(B))^{t}$ (A.45)
$\displaystyle\leq\frac{1}{\delta(B)}\leq c_{1}$ (A.46)
where we have used $\left|\left|{R}\right|\right|=1-\delta(B)$, $K^{2}=K$ and
$KR=RK=0$.
Applying $A^{-1}=I+\mathcal{B}$ to b we thus obtain:
$\displaystyle A^{-1}\textbf{b}$
$\displaystyle=\big{[}(I+\mathcal{B}-\mathcal{K})+\mathcal{K}\big{]}\textbf{b}$
(A.47)
$\displaystyle=(I+\mathcal{B}-\mathcal{K})\textbf{b}^{\prime}+\begin{cases}b_{N+1}\textbf{e}_{N+1}+\frac{1-\epsilon}{\epsilon}\frac{M}{N}\,\boldsymbol{1}_{N}^{\prime}&\
\mathrm{if~{}}y\mathrm{~{}has~{}a~{}majority~{}of~{}0}\\\
b_{N+1}\textbf{e}_{N+1}-\frac{1-\epsilon}{\epsilon}\frac{M}{N}\,\boldsymbol{1}_{N}^{\prime}&\
\mathrm{if~{}}y\mathrm{~{}has~{}a~{}majority~{}of~{}1}\end{cases}$ (A.48)
$\displaystyle=(I+\mathcal{B}-\mathcal{K})\textbf{b}^{\prime}+\begin{cases}b_{N+1}\textbf{e}_{N+1}+c_{0}\boldsymbol{1}_{N}^{\prime}&\qquad\;\mathrm{if~{}}y\mathrm{~{}has~{}a~{}majority~{}of~{}0}\\\
b_{N+1}\textbf{e}_{N+1}-c_{0}\boldsymbol{1}_{N}^{\prime}&\qquad\;\mathrm{if~{}}y\mathrm{~{}has~{}a~{}majority~{}of~{}1}\end{cases}$
(A.49)
where $\textbf{b}^{\prime}$ is a vector equal to the first $N$ entries of b
(and $b^{\prime}_{N+1}=0$), $\boldsymbol{1}_{N}^{\prime}:=(1,\ldots,1,0)^{T}$
while $\textbf{e}_{N+1}$ is the vector having a one in position $N+1$;
moreover, we choose $\epsilon$ such that
$\frac{1-\epsilon}{\epsilon}\frac{M}{N}=c_{0}$, where $c_{0}$ was introduced
in the definition of b in Eq. (A.36). Then, fixing the constant $c_{0}$ as
$c_{0}=100c_{1}$ and using the triangle inequality, we obtain the following
upper bound
$\displaystyle\sqrt{\mathcal{N}}=\left|\left|{A^{-1}\textbf{b}}\right|\right|$
$\displaystyle\leq\Big{(}\overbrace{c_{0}^{2}N}^{|b_{N+1}|^{2}}+\\!\\!\\!\overbrace{c_{0}^{2}N}^{\left|\left|{c_{0}\boldsymbol{1}_{N}^{\prime}}\right|\right|^{2}}\Big{)}^{1/2}+\overbrace{\sqrt{N}\,[1+c_{1}]}^{\geq\,\left|\left|{(I+\mathcal{B}-\mathcal{K})\textbf{b}^{\prime}}\right|\right|}$
(A.50)
$\displaystyle\leq\left(\sqrt{c_{0}^{2}+c_{0}^{2}}+2\,c_{1}\right)\sqrt{N}\ =\
\left(100\sqrt{2}+2\right)c_{1}\,\sqrt{N}$ (A.51) $\displaystyle\leq
144\,c_{1}\,\sqrt{N},$ (A.52)
where we have assumed $c_{1}\geq 1$. We also have the lower bound
$\displaystyle\sqrt{\mathcal{N}}=\left|\left|{A^{-1}\textbf{b}}\right|\right|$
$\displaystyle\geq\Big{(}\overbrace{c_{0}^{2}N}^{|b_{N+1}|^{2}}+\\!\\!\\!\overbrace{c_{0}^{2}N}^{\left|\left|{c_{0}\boldsymbol{1}_{N}^{\prime}}\right|\right|^{2}}\Big{)}^{1/2}-\overbrace{\sqrt{N}\,[1+c_{1}]}^{\geq\,\left|\left|{(I+\mathcal{B}-\mathcal{K})\textbf{b}^{\prime}}\right|\right|}$
(A.53) $\displaystyle\geq\left(100\sqrt{2}-2\right)c_{1}\,\sqrt{N}$ (A.54)
$\displaystyle\geq 139\,c_{1}\,\sqrt{N}.$ (A.55)
Then we have, using $f=\textsc{PromiseMajority}_{M}(y)$,
$\displaystyle\left|{A^{-1}\textbf{b}}\right\rangle=\frac{A^{-1}\textbf{b}}{\left|\left|{A^{-1}\textbf{b}}\right|\right|}$
$\displaystyle=\frac{c_{0}\,\sqrt{N}}{\sqrt{{\mathcal{N}}}}\left|{N+1}\right\rangle\,+\,(-1)^{f}\frac{c_{0}\,\sqrt{N}}{\sqrt{{\mathcal{N}}}}\sum_{i=1}^{N}\frac{1}{\sqrt{N}}\left|{i}\right\rangle+\left|{\psi}\right\rangle$
(A.56)
$\displaystyle=\frac{100}{144}\left|{N+1}\right\rangle\,+\,(-1)^{f}\frac{100}{144}\sum_{i=1}^{N}\frac{1}{\sqrt{N}}\left|{i}\right\rangle+\left|{\psi^{\prime}}\right\rangle,$
(A.57)
where
$\left|{\psi}\right\rangle:=(I+\mathcal{B}-\mathcal{K})\textbf{b}^{\prime}/\sqrt{\mathcal{N}}$
is a sub-normalised perturbation vector with
$\big{|}\big{|}\left|{\psi}\right\rangle\big{|}\big{|}\leq\frac{2}{139}$,
while subtracting Eq. (A.57) from Eq. (A.56) one obtains
$\displaystyle\Big{|}\Big{|}\,\left|{\psi^{\prime}}\right\rangle-\left|{\psi}\right\rangle\,\Big{|}\Big{|}\leq\sqrt{2}\left(\frac{100}{139}-\frac{100}{144}\right)\leq
0.04$ (A.58)
and then we have:
$\displaystyle\Big{|}\Big{|}\,\left|{\psi^{\prime}}\right\rangle\,\Big{|}\Big{|}\leq
0.04+\frac{2}{139}\leq 0.06\,.$ (A.59)
The cases $f=0$ and $f=1$ in Eq. (A.57) can be distinguished with constant
advantage using the swap test with the state $\left|{``+"}\right\rangle$
defined in Eq. (A.17), since we have:
$\displaystyle~{}~{}\;\langle{``+"}|{A^{-1}\textbf{b}}\rangle\,=\,\frac{1}{\sqrt{2}}\frac{100}{144}\,+\,(-1)^{f}\frac{1}{\sqrt{2}}\frac{100}{144}\,+\,\langle{``+"}|{\,\psi^{\prime}\,}\rangle$
(A.60) $\displaystyle\Longleftrightarrow\quad$
$\displaystyle\begin{cases}\big{|}\,\langle{``+"}|{A^{-1}\textbf{b}}\rangle\,\big{|}\,\geq\,0.92&\mathrm{if~{}}y\mathrm{~{}has~{}a~{}majority~{}of~{}0}\\\
\big{|}\,\langle{``+"}|{A^{-1}\textbf{b}}\rangle\,\big{|}\,\leq\,0.06&\mathrm{if~{}}y\mathrm{~{}has~{}a~{}majority~{}of~{}1}\,.\end{cases}$
(A.61)
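The constants in the chain of bounds (A.50)–(A.61) can be replayed with a few lines of arithmetic; this is a plain sanity check of the numbers above (no quantum simulation involved), with the perturbation magnitudes taken at their worst-case values:

```python
import math

# Eqs. (A.50)-(A.55): with c0 = 100*c1 and c1 >= 1,
# sqrt(c0^2*N + c0^2*N) = 100*sqrt(2)*c1*sqrt(N).
upper = 100 * math.sqrt(2) + 2   # coefficient of c1*sqrt(N) in (A.51)
lower = 100 * math.sqrt(2) - 2   # coefficient of c1*sqrt(N) in (A.54)

# Eq. (A.58): renormalisation shift between |psi> and |psi'>.
shift = math.sqrt(2) * (100 / 139 - 100 / 144)
# Eq. (A.59): worst-case norm of |psi'>.
psi_prime = shift + 2 / 139

# Eq. (A.61): swap-test overlaps in the two majority cases.
overlap_maj0 = 2 * (1 / math.sqrt(2)) * (100 / 144) - psi_prime  # terms add
overlap_maj1 = psi_prime                                          # terms cancel
```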
Next, we proceed as in the proof of Case 2 and define an equivalent PD-QLS
where the vector $y$ is encoded in the entries of the coefficient matrix. We
thus define the vector $\textbf{u}\in\mathds{R}^{N+1}$ and the diagonal matrix
$D\in\mathds{R}^{(N+1)\times(N+1)}$
$\displaystyle\left\{\begin{array}{ll}u_{i}=1&\mathrm{for}\ i\in[N]\\ u_{N+1}=\sqrt{N}\,c_{0}&\end{array}\right.\qquad\left\{\begin{array}{ll}D_{i,i}=(-1)^{y_{i}}&\mathrm{for}\ i\in[N]\\ D_{N+1,N+1}=1&\end{array}\right.$ (A.66)
and thus the identity $\textbf{b}=D\textbf{u}$ holds. We then introduce
$A^{\prime}$, given by
$\displaystyle A^{\prime}:=DAD$ (A.67)
where $A$ is as in Eq. (A.40). The matrix $D$ is unitary and self-inverse,
hence $\kappa(A^{\prime})=\kappa(A)$, and
$A^{\prime-1}=D^{-1}A^{-1}D^{-1}=DA^{-1}D$.
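Since $D$ is a diagonal $\pm1$ matrix, conjugating by it preserves the singular values and hence the condition number. A minimal numerical sketch of these facts (the matrix size, seed, and diagonal shift below are arbitrary choices of ours, only used to get a generic invertible symmetric matrix):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 8
# A random symmetric invertible matrix standing in for A of Eq. (A.40).
A = rng.standard_normal((N + 1, N + 1))
A = A + A.T + (N + 1) * np.eye(N + 1)    # diagonal shift keeps it invertible
# Diagonal sign matrix D built from a random bit-string y, as in Eq. (A.66).
y = rng.integers(0, 2, N)
d = np.append((-1.0) ** y, 1.0)          # D_{N+1,N+1} = 1
D = np.diag(d)

A_prime = D @ A @ D                      # Eq. (A.67)
cond_ratio = np.linalg.cond(A_prime) / np.linalg.cond(A)
inv_check = np.max(np.abs(np.linalg.inv(A_prime) - D @ np.linalg.inv(A) @ D))
self_inverse = np.max(np.abs(D @ D - np.eye(N + 1)))
```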
Notice that the positions of the non-zero entries of $A^{\prime}$ are the same
as in $A$ and thus independent of $y$, while the sign of a non-zero entry
$A^{\prime}_{i,j}=(-1)^{y_{i}+y_{j}}$ can be obtained querying
${\mathcal{P}}_{y}$ once with input $\left|{i}\right\rangle$ and once with
input $\left|{j}\right\rangle$. These queries can be performed in quantum
superposition and thus two accesses to ${\mathcal{P}}_{y}$ are sufficient to
implement a quantum sparse-matrix-access ${\mathcal{P}}_{A^{\prime}}$.
Therefore it is possible to prepare, with the same
${\mathcal{P}}_{y}$-complexity as discussed previously, the state
$\displaystyle\left|{A^{\prime-1}\,\textbf{u}}\right\rangle=\left|{DA^{-1}D\,\textbf{u}}\right\rangle=D\left|{A^{-1}\,\textbf{b}}\right\rangle\,.$
(A.68)
Finally, we can obtain $\left|{A^{-1}\,\textbf{b}}\right\rangle$ using two
extra accesses to ${\mathcal{P}}_{y}$ by applying the transformations given in
Eq. (A.4). This proves that a quantum algorithm that solves the PD-QLS having
access to ${\mathcal{P}}_{A^{\prime}}$ necessarily has a query complexity
$Q[{\mathcal{P}}_{A^{\prime}}]\in\Omega\big{(}\min(\kappa,N)\big{)}$.
∎
## Appendix B Scaling of the normalisation factor of the matrix-block-encoding
In this Appendix, we consider the polynomial
$\displaystyle
P_{2\ell-1,\kappa}(x):=\frac{1}{1-x}\left[1-\hat{{\mathcal{T}}}_{\ell,\kappa}(x)\right]^{2}$
(B.1)
as was defined in Eq. (28) and where we have
$\displaystyle\hat{{\mathcal{T}}}_{\ell,\kappa}(x):=\frac{{\mathcal{T}}_{\ell}\left(\frac{x+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}\right)}{{\mathcal{T}}_{\ell}\left(\frac{1+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}\right)}\;.$
(B.2)
We prove that the normalisation factor
$K:=2\,\max_{x\in[-1,+1]}P_{2\ell-1,\kappa}(x)$ satisfies
$K\in\Theta(\kappa)$, provided that $\ell\geq c\sqrt{\kappa}$ for some
constant $c$ that we will determine later.
Notice that by construction $P_{2\ell-1,\kappa}(1)=0$ and the polynomial is
positive for $x\in[-1,+1)$. To study the properties of the local maxima of
$P_{2\ell-1,\kappa}(x)$ in the interval $[-1,+1]$ we compute the derivative of
$P_{2\ell-1,\kappa}(x)$ using the property $\frac{\partial}{\partial
x}{\mathcal{T}}_{\ell}(x)=\ell\,{\mathcal{U}}_{\ell-1}(x)$, where
${\mathcal{U}}_{\ell}(x)\in\mathds{R}_{\ell}[x]$ is a Chebyshev polynomial of
the second kind. We have:
$\displaystyle\frac{\partial P_{2\ell-1,\kappa}(x)}{\partial
x}=\frac{\left[1-\hat{{\mathcal{T}}}_{\ell,\kappa}(x)\right]^{2}}{(1-x)^{2}}\;-\;2\ell\,\frac{1-\hat{{\mathcal{T}}}_{\ell,\kappa}(x)}{1-x}\frac{{\mathcal{U}}_{\ell-1}\left(\frac{x+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}\right)}{\left(1-\frac{1}{2\kappa}\right){\mathcal{T}}_{\ell}\left(\frac{1+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}\right)}\;.$
(B.3)
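The identity $\frac{\partial}{\partial x}{\mathcal{T}}_{\ell}(x)=\ell\,{\mathcal{U}}_{\ell-1}(x)$ used in (B.3) is easy to confirm numerically, representing ${\mathcal{U}}_{\ell-1}$ via ${\mathcal{U}}_{\ell-1}(\cos\theta)=\sin(\ell\theta)/\sin\theta$; the degree and grid below are arbitrary test choices:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

l = 9
T_l = C.Chebyshev.basis(l)          # Chebyshev polynomial of the first kind
x = np.linspace(-0.999, 0.999, 2001)

deriv = T_l.deriv()(x)              # dT_l/dx on the grid
theta = np.arccos(x)
U_lm1 = np.sin(l * theta) / np.sin(theta)   # U_{l-1}(x) for |x| < 1

max_dev = np.max(np.abs(deriv - l * U_lm1))
```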
We set the derivative equal to $0$ and simplify the expression assuming
$1-x\neq 0$ and $1-\hat{{\mathcal{T}}}_{\ell,\kappa}(x)\neq 0$:
$\displaystyle\left(1-\frac{1}{2\kappa}\right)\left[{\mathcal{T}}_{\ell}\\!\left(\frac{1+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}\right)-{\mathcal{T}}_{\ell}\\!\left(\frac{x+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}\right)\right]\;-\;2\ell\,(1-x)\,{\mathcal{U}}_{\ell-1}\\!\left(\frac{x+\frac{1}{2\kappa}}{1-\frac{1}{2\kappa}}\right)=0$
(B.4)
Then we use the change of variables $y(x)$ and $\delta(\kappa)$ given in Eqs.
(24) and their inverses $\kappa(\delta)=\frac{1}{\delta}+\frac{1}{2}$,
$x(y)=\frac{y-\delta/2}{1+\delta/2}$ to rewrite the previous equation as
$\displaystyle\left(\frac{1}{1+\delta/2}\right)\big{[}{\mathcal{T}}_{\ell}(1+\delta)-{\mathcal{T}}_{\ell}(y)\big{]}\;-\;2\ell\,\left(\frac{1+\delta-y}{1+\delta/2}\right)\,{\mathcal{U}}_{\ell-1}(y)=0$
(B.5)
which is equivalent to:
$\displaystyle\boxed{{\mathcal{T}}_{\ell}(1+\delta)-{\mathcal{T}}_{\ell}(y)\;=\;2\ell\,(1+\delta-y)\,{\mathcal{U}}_{\ell-1}(y)}$
(B.6)
for $x\in[-1,+1]$ or, equivalently, $y\in[-1,1+\delta]$.
The polynomial $P_{2\ell-1,\kappa}(x)$ is by construction non-negative on the
domain $x\in[-1,+1]$ and its derivative in $x=1-\frac{1}{\kappa}$
(corresponding to $y=1$) is positive, provided that
$\ell\in\Omega(1/\sqrt{\delta})$. In fact, the derivative of
$P_{2\ell-1,\kappa}(x)$ is positive if and only if the left hand side of Eq.
(B.6) is larger than the right hand side; using
${\mathcal{T}}_{\ell}(1+\delta)\geq\frac{1}{2}e^{\ell\sqrt{\delta}}$ for
$0\leq\delta\leq 3-2\sqrt{2}$, ${\mathcal{T}}_{\ell}(1)=1$,
${\mathcal{U}}_{\ell-1}(1)=\ell$ we obtain from (B.6) the inequality
$\frac{1}{2}e^{\ell\sqrt{\delta}}-1\overset{!}{>}2\,\ell^{2}\delta$, which is
satisfied for $\ell\geq 4.36/\sqrt{\delta}$. Since we have, moreover,
$P_{2\ell-1,\kappa}(1)=0$, the function $P_{2\ell-1,\kappa}(x)$ does not have
any local maximum in $\left[\,-1,1-\frac{1}{\kappa}\,\right]$ and must have one
or more local maxima $x_{*}\in\left(\,1\\!-\\!\frac{1}{\kappa},+1\,\right]$;
equivalently, Eq. (B.6) must have at least one solution
$y_{*}\in(1,1+\delta]$.
Any local maximum $y_{*}=y(x_{*})$ satisfies:
$\displaystyle{\mathcal{T}}_{\ell}(y_{*})={\mathcal{T}}_{\ell}(1+\delta)-2\ell\,(1+\delta-
y_{*})\,{\mathcal{U}}_{\ell-1}(y_{*})$ (B.7)
and substituting ${\mathcal{T}}_{\ell}(y_{*})$ in the definition (B.1) gives
$\displaystyle P_{2\ell-1,\kappa}(x_{*})$
$\displaystyle=\frac{1+\delta/2}{1+\delta-
y_{*}}\left[1-\frac{{\mathcal{T}}_{\ell}(y_{*})}{{\mathcal{T}}_{\ell}(1+\delta)}\right]^{2}$
(B.8) $\displaystyle=4\ell^{2}(1+\delta/2)(1+\delta-
y_{*})\,\frac{{\mathcal{U}}_{\ell-1}(y_{*})^{2}}{{\mathcal{T}}_{\ell}(1+\delta)^{2}}\;.$
(B.9)
From Eq. (B.6) we directly have
$\displaystyle{\mathcal{U}}_{\ell-1}(y_{*})\leq\frac{{\mathcal{T}}_{\ell}(1+\delta)}{2\ell(1+\delta-
y_{*})}$ (B.10)
and inserting this inequality in Eq. (B.9) we have that any local maximum
$x_{*}$ satisfies
$\displaystyle P_{2\ell-1,\kappa}(x_{*})$
$\displaystyle\leq\frac{1+\delta/2}{1+\delta-y_{*}}$ (B.11)
$\displaystyle\leq\frac{3/2}{1+\delta-y_{*}}\;.$ (B.12)
In summary, it will be sufficient to show $1+\delta-y_{*}\in\Omega(\delta)$ to
prove that the normalisation constant satisfies
$K=2\,\max_{\\{x_{*}\\}}|P_{2\ell-1,\kappa}(x_{*})|\in{\mathcal{O}}(\kappa)$,
where the maximisation is over the set of (potentially multiple) local maxima
$x_{*}\in\left[\,1\\!-\\!\frac{1}{\kappa},+1\,\right]$.
To this end, we rewrite Eq. (B.6) as:
$\displaystyle\frac{{\mathcal{T}}_{\ell}(1+\delta)-{\mathcal{T}}_{\ell}(y_{*})}{(1+\delta)-y_{*}}\;=\;2\,\frac{\partial{\mathcal{T}}_{\ell}}{\partial
y}(y_{*})\;.$ (B.13)
Since both the first and the second derivative of ${\mathcal{T}}_{\ell}(y)$
are positive for $y\geq 1$ (i.e., the derivative of ${\mathcal{T}}_{\ell}(y)$
is monotonically increasing), this equation can be satisfied only if (throughout
this appendix we employ the notation $\overset{!}{\leq}$ to mark inequalities
that still need to be proven, and $\leq$ for inequalities that have already
been proven):
$\displaystyle 2\frac{\partial{\mathcal{T}}_{\ell}}{\partial
y}(y_{*})\overset{!}{\leq}\frac{\partial{\mathcal{T}}_{\ell}}{\partial
y}(1+\delta)$ (B.14)
or equivalently, defining $1+\delta_{*}:=y_{*}$
$\displaystyle
2\,{\mathcal{U}}_{\ell-1}(1+\delta_{*})\,\overset{!}{\leq}\,{\mathcal{U}}_{\ell-1}(1+\delta)\;.$
(B.15)
A Chebyshev polynomial of the second kind can be written as
$\displaystyle{\mathcal{U}}_{\ell-1}(y)=\frac{(y+\sqrt{y^{2}-1})^{\ell}-(y+\sqrt{y^{2}-1})^{-\ell}}{2\sqrt{y^{2}-1}}$
(B.16)
hence we have
$\displaystyle{\mathcal{U}}_{\ell-1}(1+\delta_{*})$
$\displaystyle\leq\frac{(1+\delta_{*}+\sqrt{2\delta_{*}+\delta_{*}^{2}})^{\ell}}{2\sqrt{2\delta_{*}+\delta_{*}^{2}}}$
(B.17)
$\displaystyle\leq\frac{(1+1.1\sqrt{2\,\delta_{*}})^{\ell}}{2\sqrt{2\delta_{*}}}$
(B.18)
for sufficiently small $\delta_{*}$ (the constant $1.1$ is somewhat
arbitrary), while we have
$\displaystyle{\mathcal{U}}_{\ell-1}(1+\delta)$
$\displaystyle=\frac{(1+\delta+\sqrt{2\delta+\delta^{2}})^{\ell}\big{[}1-(1+\delta+\sqrt{2\delta+\delta^{2}})^{-2\ell}\big{]}}{2\sqrt{2\delta+\delta^{2}}}$
(B.19) $\displaystyle\geq\frac{(1+\sqrt{2\delta})^{\ell}\times
0.8}{2.4\sqrt{2\delta}}$ (B.20)
$\displaystyle=\frac{2}{3}\frac{(1+\sqrt{2\delta})^{\ell}}{2\sqrt{2\delta}}$
(B.21)
$\displaystyle\geq\frac{2}{3}\frac{(1+\sqrt{2\delta})^{\ell}}{2\sqrt{3\delta_{*}}}$
(B.22)
which holds for $\ell\geq 1/\sqrt{2\delta}$; in the last step we have
assumed $\delta_{*}\geq\frac{2}{3}\delta$ (notice that in the opposite case
$\delta_{*}<\frac{2}{3}\delta$ we would already have $1+\delta-
y_{*}=\delta-\delta_{*}>\frac{1}{3}\delta$). Thus, the inequality in (B.15) is
implied by
$\displaystyle
2\,\frac{(1+1.1\sqrt{2\,\delta_{*}})^{\ell}}{2\sqrt{2\delta_{*}}}\;$
$\displaystyle\overset{!}{\leq}\;\frac{2}{3}\frac{(1+\sqrt{2\delta})^{\ell}}{2\sqrt{3\delta_{*}}}$
(B.23) $\displaystyle\Longleftrightarrow\quad(1+1.1\sqrt{2\,\delta_{*}})^{\ell}\;$
$\displaystyle\overset{!}{\leq}\;\frac{\sqrt{2}}{3\sqrt{3}}\,(1+\sqrt{2\delta})^{\ell}$
(B.24) $\displaystyle\Longleftrightarrow\quad 1+1.1\sqrt{2\,\delta_{*}}\;$
$\displaystyle\overset{!}{\leq}\;e^{-\frac{1}{\ell}\log(3\sqrt{3/2})}\,(1+\sqrt{2\delta})\;.$
(B.25)
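The closed form in Eq. (B.16), which drives the bounds (B.17)–(B.22) and the chain above, can be checked against the standard three-term recurrence ${\mathcal{U}}_{k}(y)=2y\,{\mathcal{U}}_{k-1}(y)-{\mathcal{U}}_{k-2}(y)$ for arguments $y>1$; the degree and sample point below are arbitrary:

```python
import math

def U(n, y):
    """Chebyshev polynomial of the second kind via the recurrence."""
    prev, curr = 1.0, 2.0 * y       # U_0, U_1
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, curr = curr, 2.0 * y * curr - prev
    return curr

l, y = 12, 1.05
s = y + math.sqrt(y * y - 1)
closed_form = (s ** l - s ** (-l)) / (2 * math.sqrt(y * y - 1))  # Eq. (B.16)
rel_err = abs(U(l - 1, y) - closed_form) / closed_form
```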
From the inequality $e^{-x}\geq 1-x$ for $x\geq 0$ we see that the inequality
above is implied by
$\displaystyle 1.1\sqrt{2\,\delta_{*}}\;$
$\displaystyle\overset{!}{\leq}\;\big{(}1-1.31/\ell\big{)}\,(1+\sqrt{2\delta})-1$
(B.26) $\displaystyle=\;\sqrt{2\delta}-\frac{1}{\ell}1.31\,(1+\sqrt{2\delta})$
(B.27)
with $1.31\geq\log(3\sqrt{3/2})$. We now make the assumption that
$\frac{1}{\ell}1.31\,(1+\sqrt{2\delta})\leq\frac{1}{10}\sqrt{2\delta}$, which
is implied by $\ell\geq 13.1+9.27/\sqrt{\delta}$ and is compatible with the
requirement $\ell\in\Omega(\delta^{-1/2})$. Thus it is sufficient to impose:
$\displaystyle 1.1\sqrt{2\,\delta_{*}}\;$
$\displaystyle\overset{!}{\leq}\;\frac{9}{10}\sqrt{2\delta}$ (B.28)
$\displaystyle\Longleftrightarrow\qquad\qquad\delta_{*}\;$
$\displaystyle\overset{!}{\leq}\;\left(\frac{9}{11}\right)^{2}\delta\;.$
(B.29)
We also remark that $(9/11)^{2}\geq 2/3$, so that this last inequality is
compatible with the assumption $\delta_{*}\geq\frac{2}{3}\delta$ we made
earlier.
Plugging the bound in Eq. (B.29) into (B.11), together with the definition of
$\delta(\kappa)$, results in the explicit bound $K\leq 6.05\,\kappa$, i.e.
$K\in{\mathcal{O}}(\kappa)$, provided that $\ell\geq
13.1+9.27\sqrt{\kappa-1/2}$, i.e. $\ell\in\Omega(\sqrt{\kappa})$.
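The scaling can also be probed numerically. The sketch below evaluates $P_{2\ell-1,\kappa}$ of Eq. (B.1) on a grid concentrated near $x=1$ (where the maximum lives) at the threshold degree $\ell=\lceil 13.1+9.27\sqrt{\kappa-1/2}\,\rceil$; in our runs the ratio $K/\kappa$ settles at a constant of order six, consistent with $K\in\Theta(\kappa)$. The grid resolution and the loose cap of $8$ in the check are pragmatic choices of ours, not constants from the proof:

```python
import numpy as np

def cheb_T(l, z):
    """Chebyshev T_l, valid inside and outside [-1, 1]."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    inside = np.abs(z) <= 1.0
    out[inside] = np.cos(l * np.arccos(z[inside]))
    zo = z[~inside]
    out[~inside] = np.sign(zo) ** l * np.cosh(l * np.arccosh(np.abs(zo)))
    return out

ratios = []
for kappa in (64.0, 256.0, 1024.0):
    delta = 1.0 / (kappa - 0.5)
    l = int(np.ceil(13.1 + 9.27 / np.sqrt(delta)))
    x = 1.0 - np.geomspace(1e-9, 2.0, 400_000)      # dense near x = 1
    shift = 1.0 / (2.0 * kappa)
    top = cheb_T(l, np.array([(1 + shift) / (1 - shift)]))[0]
    T_hat = cheb_T(l, (x + shift) / (1 - shift)) / top
    P = (1.0 - T_hat) ** 2 / (1.0 - x)              # Eq. (B.1)
    ratios.append(2.0 * P.max() / kappa)            # K / kappa
```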
## Appendix C VTAA optimization improving the runtime values of the first
algorithm
In this Appendix we use VTAA to speed up the asymptotic runtime of the
algorithm based on polynomial approximations of $1/(1-x)$. Preliminarily, we
introduce a Lemma showing that this VTAA algorithm, as summarised in
Proposition 13, does provide an asymptotic query complexity improvement
(compared to general QLS solvers) for most values of $\Gamma_{A,\textbf{b}}$,
see the right plot in Figure 3.
###### Lemma 20.
Defining
$\Gamma_{A,\textbf{b}}:=\sqrt{\kappa}\,\frac{\left|\left|{A^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|}$
we have $\Gamma_{A,\textbf{b}}\in[1,\sqrt{\kappa}\,]$, under the usual
assumption that the spectrum of $A$ is contained in $[1/\kappa,1]$.
###### Proof.
We expand $\left|{\textbf{b}}\right\rangle$ in a basis of eigenvectors of $A$,
that is
$\left|{\textbf{b}}\right\rangle=\sum_{\lambda}\beta_{\lambda}\left|{\lambda}\right\rangle$
with $A\left|{\lambda}\right\rangle=\lambda\left|{\lambda}\right\rangle$,
$\langle{\lambda}|{\lambda^{\prime}}\rangle=\delta_{\lambda,\lambda^{\prime}}$,
and $\sum_{\lambda}|\beta_{\lambda}|^{2}=1$. Then, we can write
$\displaystyle\frac{\left|\left|{A^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|^{2}}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|^{2}}=\frac{\sum_{\lambda}|\beta_{\lambda}|^{2}f_{\lambda}}{\sum_{\lambda}|\beta_{\lambda}|^{2}g_{\lambda}}\
$ $\displaystyle\in\
\big{[}\min_{\lambda}\\{f_{\lambda}/g_{\lambda}\\}\,,\,\max_{\lambda}\\{f_{\lambda}/g_{\lambda}\\}\big{]}\
\subseteq\ \big{[}\kappa^{-1},1\big{]}$ (C.1)
for $f_{\lambda}=\lambda^{-1}$, $g_{\lambda}=\lambda^{-2}$ and thus
$f_{\lambda}/g_{\lambda}=\lambda\in[\kappa^{-1},1]$. We take the square root
of both extrema and multiply by $\sqrt{\kappa}$ to conclude.
∎
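Lemma 20 is easy to verify numerically, building $A$ from a random orthogonal basis with the spectral edges pinned at $1/\kappa$ and $1$; the size, condition number, and seed below are arbitrary test choices:

```python
import numpy as np

rng = np.random.default_rng(3)
N, kappa = 40, 100.0
# Random symmetric A with spectrum in [1/kappa, 1].
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
eigs = rng.uniform(1.0 / kappa, 1.0, N)
eigs[0], eigs[-1] = 1.0 / kappa, 1.0        # pin both spectral edges
b = rng.standard_normal(N)
b /= np.linalg.norm(b)

A_inv_half_b = (Q * eigs ** -0.5) @ (Q.T @ b)   # A^{-1/2} |b>
A_inv_b = (Q * eigs ** -1.0) @ (Q.T @ b)        # A^{-1}  |b>
Gamma = np.sqrt(kappa) * np.linalg.norm(A_inv_half_b) / np.linalg.norm(A_inv_b)
```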
### C.1 Variable-time amplitude amplification review
In this Section we review the general VTAA method, mainly following Ref. [19,
Section 3].
###### Definition 21 (Variable-stopping-time quantum algorithm).
A _variable-stopping-time quantum algorithm_
${\mathcal{A}}={\mathcal{A}}_{m}\cdot\ldots\cdot{\mathcal{A}}_{1}\cdot{\mathcal{A}}_{0}$
is given by the application of $m+1$ sub-algorithms ${\mathcal{A}}_{j}$ in
sequence, acting on the Hilbert space
$\mathcal{H}=\mathcal{H}_{C}\otimes\mathcal{H}_{F}\otimes\mathcal{H}_{S}$
where $S$ is a “system register” of arbitrary size, $F$ is a “flag qubit” that
heralds success and $\mathcal{H}_{C}=\bigotimes_{j=0}^{m}\mathcal{H}_{C_{j}}$
is a “clock register” containing $m+1$ qubits $C_{0},C_{1},\ldots,C_{m}$.
$\mathcal{H}$ is initially prepared in the all-zero state
$\left|{0}\right\rangle_{\mathrm{all}}=\left|{0,0,0}\right\rangle_{C,F,S}$.
Each ${\mathcal{A}}_{j}$ acts on
$\mathcal{H}_{C_{j}}\otimes\mathcal{H}_{F}\otimes\mathcal{H}_{S}$ and
$\left|{1}\right\rangle_{C_{j}}$ indicates that the algorithm stops after the
application of ${\mathcal{A}}_{j}$; i.e., each ${\mathcal{A}}_{j}$ is a
controlled algorithm that acts if and only if the previous $j$ qubits in the
clock register are in the state $\left|{0}\right\rangle^{\otimes
j}\in\bigotimes_{i=0}^{j-1}\mathcal{H}_{C_{i}}$. We assume that all branches
of the computation end by step $m$. The successful branches of the algorithm
are those where the flag is in the state $\left|{1}\right\rangle_{F}$ and thus
we define
$\displaystyle
p_{\mathrm{succ}}:=\big{|}\big{|}\,\Pi_{\checkmark}{\mathcal{A}}\left|{0}\right\rangle_{\mathrm{all}}\big{|}\big{|}^{2}\qquad\mathrm{and}\qquad\left|{\psi_{\mathrm{succ}}}\right\rangle:=\frac{\Pi_{\checkmark}{\mathcal{A}}\left|{0}\right\rangle_{\mathrm{all}}}{\big{|}\big{|}\,\Pi_{\checkmark}{\mathcal{A}}\left|{0}\right\rangle_{\mathrm{all}}\big{|}\big{|}}$
(C.2)
where
$\Pi_{\checkmark}:=I_{C}\otimes\left|{1}\right\rangle\\!\\!\left\langle{1}\right|_{F}\otimes
I_{S}$.
By construction, a variable-stopping-time quantum algorithm produces a quantum
state of the form
${\mathcal{A}}\left|{0}\right\rangle_{\mathrm{all}}=\sum_{j=0}^{m}\alpha_{j}\left|{1_{j}}\right\rangle_{C}\left|{\Psi_{j}}\right\rangle_{F,S}$,
where
$\displaystyle\left|{1_{j}}\right\rangle_{C}:=\left|{0}\right\rangle^{\otimes
m-j}\left|{1}\right\rangle\left|{0}\right\rangle^{\otimes j}$ (C.3)
is a state having the qubit $C_{j}$ in $\left|{1}\right\rangle$ and the other
clock qubits in $\left|{0}\right\rangle$, while
$\left|{\Psi_{j}}\right\rangle_{F,S}$ are normalised quantum states in
$\mathcal{H}_{F}\otimes\mathcal{H}_{S}$ with
$\sum_{j=0}^{m}|\alpha_{j}|^{2}=1$.
###### Definition 22 (Stopping times).
We introduce a sequence of _stopping times_ $t_{\mathrm{min}}\equiv
t_{0}<t_{1}<t_{2}<\ldots<t_{m}\equiv t_{\mathrm{max}}$, where each
$t_{j}\in\mathds{N}$ is the complexity of the sub-algorithm
${\mathcal{A}}_{\leq j}:={\mathcal{A}}_{j}\cdot\ldots\cdot{\mathcal{A}}_{0}$.
The probability $p_{j}$ of stopping at time $t_{j}$ (i.e., after the execution
of algorithm ${\mathcal{A}}_{\leq j}$) and the $\ell_{2}$-_average stopping
time_ are defined as
$\displaystyle p_{j}:=\big{|}\big{|}\,\Pi_{C_{j}}{\mathcal{A}}_{\leq
j}\left|{0}\right\rangle_{\mathrm{all}}\big{|}\big{|}^{2}\qquad\quad
t_{\mathrm{avg}}:=\sqrt{\sum_{j=0}^{m}p_{j}\,t_{j}^{2}}$ (C.4)
with
$\Pi_{C_{j}}:=\left|{1_{j}}\right\rangle\\!\\!\left\langle{1_{j}}\right|_{C}\otimes
I_{F}\otimes I_{S}$. We define $\Pi_{\mathrm{stop}\leq
j}:=\sum_{i=0}^{j}\Pi_{C_{i}}$ and
$\displaystyle\Pi_{\mathrm{bad}}^{j}$ $\displaystyle:=\Pi_{\mathrm{stop}\leq
j}\cdot\big{(}I_{C}\otimes\left|{0}\right\rangle\\!\\!\left\langle{0}\right|_{F}\otimes
I_{S}\big{)}$ (C.5) $\displaystyle\Pi_{\mathrm{mg}}^{j}$
$\displaystyle:=I-\Pi_{\mathrm{bad}}^{j}$ (C.6)
which project onto “bad” and “maybe good” subspaces after the application of
${\mathcal{A}}_{\leq j}$. The “maybe good” subspace at step $j+1$ is contained
in the “maybe good” subspace at step $j$ and, by construction,
$\Pi_{\mathrm{mg}}^{m}{\mathcal{A}}\left|{0}\right\rangle_{\mathrm{all}}=\Pi_{\checkmark}{\mathcal{A}}\left|{0}\right\rangle_{\mathrm{all}}$.
In the definition above, the $t_{j}$ can represent any complexity measure
(e.g., query complexity, gate complexity, circuit depth). In this Appendix we
only compute the query complexity $Q$ of the sub-algorithm
${\mathcal{A}}_{\leq j}$, but we remark that all the algorithms we consider
here are gate-efficient, i.e., the gate complexity is in
${\mathcal{O}}\big{(}Q\,\mathrm{poly}(\log Q,\log N)\big{)}$.
###### Definition 23 (Variable-Time Amplitude Amplification).
Given a variable-stopping-time algorithm
${\mathcal{A}}={\mathcal{A}}_{m}\cdot\ldots\cdot{\mathcal{A}}_{0}$ as in
Definition 21 and given a sequence of $m+1$ non-negative integers
$(k_{0},k_{1},\ldots,k_{m})$, we recursively define a variable-time
amplification
${\mathcal{A}}^{\prime}={\mathcal{A}}_{m}^{\prime}\cdot\ldots\cdot{\mathcal{A}}_{0}^{\prime}$
as follows. Setting ${\mathcal{A}}_{-1}:=I$, each ${\mathcal{A}}_{j}^{\prime}$
implements a standard $k_{j}$-step amplitude amplification that uses
${\mathcal{A}}_{j}{\mathcal{A}}_{j-1}^{\prime}$ and its inverse a total of
$2k_{j}+1$ times, where the input state is
$|{\psi_{\mathrm{in}}^{j}}\rangle={\mathcal{A}}_{j}{\mathcal{A}}_{j-1}^{\prime}\left|{0}\right\rangle_{\mathrm{all}}$
and the target state is
$\Pi_{\mathrm{mg}}^{j}|{\psi_{\mathrm{in}}^{j}}\rangle$. That is,
${\mathcal{A}}_{j}^{\prime}$ begins by preparing the input state
$\displaystyle~{}|{\psi_{\mathrm{in}}^{j}}\rangle=\
{\mathcal{A}}_{j}{\mathcal{A}}_{j-1}^{\prime}\left|{0}\right\rangle_{\mathrm{all}}\,=\,\sin(\theta_{j})\,|{\psi_{\mathrm{mg}}^{j}}\rangle+\cos(\theta_{j})\,|{\psi_{\mathrm{bad}}^{j}}\rangle$
(C.7) $\displaystyle\mathrm{with}~{}$
$\displaystyle\begin{cases}|{\psi_{\mathrm{mg}}^{j}}\rangle\,\propto\;\Pi_{\mathrm{mg}}^{j}|{\psi_{\mathrm{in}}^{j}}\rangle\\\
|{\psi_{\mathrm{bad}}^{j}}\rangle\propto\;\Pi_{\mathrm{bad}}^{j}|{\psi_{\mathrm{in}}^{j}}\rangle\end{cases}\qquad\theta_{j}:=\arcsin\\!\big{(}\left|\left|{\Pi_{\mathrm{mg}}^{j}{\mathcal{A}}_{j}{\mathcal{A}}_{j-1}^{\prime}\left|{0}\right\rangle_{\mathrm{all}}}\right|\right|\big{)}\;\in\;[0,\pi/2]$
(C.8)
and then uses $k_{j}$ accesses to the reflections
$R_{\mathrm{out}}^{j}:=I-2\,\Pi_{\mathrm{mg}}^{j}$ and
$R_{\mathrm{in}}^{j}:=2\,|{\psi_{\mathrm{in}}^{j}}\rangle\langle{\psi_{\mathrm{in}}^{j}}|-I$,
where $R_{\mathrm{in}}^{j}$ is implemented using
${\mathcal{A}}_{j}{\mathcal{A}}_{j-1}^{\prime}$ and
$({\mathcal{A}}_{j}{\mathcal{A}}_{j-1}^{\prime})^{\dagger}$ once, to obtain
the output state
$\displaystyle|{\psi_{\mathrm{out}}^{j}}\rangle=\ $
$\displaystyle{\mathcal{A}}_{j}^{\prime}\left|{0}\right\rangle_{\mathrm{all}}\,=\,\sin\\!\big{[}(2k_{j}+1)\theta_{j}\big{]}\,|{\psi_{\mathrm{mg}}^{j}}\rangle+\cos\\!\big{[}(2k_{j}+1)\theta_{j}\big{]}\,|{\psi_{\mathrm{bad}}^{j}}\rangle\,.$
(C.9)
Note that if we set $k_{0}=\ldots=k_{m}=0$ we have
${\mathcal{A}}^{\prime}\equiv{\mathcal{A}}$. Also, by the recursive structure
of VTAA the first sub-algorithm ${\mathcal{A}}_{0}$ is used a total of
$\prod_{j=0}^{m}(2k_{j}+1)$ times in ${\mathcal{A}}^{\prime}$, which grows
exponentially in $m$ if $k_{j}\geq 1$. Nonetheless, VTAA can provide a
speed-up when the amplification parameters $(k_{0},k_{1},\ldots,k_{m})$ are chosen
appropriately. More precisely, the following result can be derived from Ref.
[19, Lemma 22].
###### Proposition 24 (Result of VTAA).
Using the notation of the previous definitions, let ${\mathcal{A}}^{\prime}$
be a variable-time amplification such that each ${\mathcal{A}}^{\prime}_{j}$
uses $k_{j}$ steps of amplitude amplification, where
$\displaystyle\frac{\pi}{8\,\theta_{j}}-\frac{1}{2}\,\leq\,k_{j}\,\leq\,\frac{\pi}{4\,\theta_{j}}-\frac{1}{2}$
(C.10)
Then, with the definitions in Eq. (C.2), ${\mathcal{A}}^{\prime}$ outputs the
state
$\displaystyle{\mathcal{A}}^{\prime}\left|{0}\right\rangle_{\mathrm{all}}\,=\,\sqrt{p_{\mathrm{succ}}^{\prime}}\,|{\psi_{\mathrm{succ}}}\rangle_{C,F,S}\,+\,\sqrt{1-p_{\mathrm{succ}}^{\prime}}\,|{\psi^{\perp}}\rangle_{C,F,S}$
(C.11)
where success is heralded by $\left|{1}\right\rangle_{F}$, the success
probability satisfies $p_{\mathrm{succ}}^{\prime}\in\Theta(1)$, and the global
query complexity is
$\displaystyle
Q^{\prime}\in{\mathcal{O}}\\!\left(t_{\mathrm{max}}\sqrt{m}+\frac{t_{\mathrm{avg}}}{\sqrt{p_{\mathrm{succ}}}}\sqrt{m\,\log(t_{\mathrm{max}}/t_{\mathrm{min}})}\right).$
(C.12)
We can compare Eq. (C.12) with standard amplitude amplification, which has a
query complexity
$Q\in{\mathcal{O}}(t_{\mathrm{max}}/\sqrt{p_{\mathrm{succ}}})$. In our
algorithm, $m$ will scale logarithmically in $t_{\mathrm{max}}$, so the total
runtime is
$\widetilde{{\mathcal{O}}}(t_{\mathrm{max}}+t_{\mathrm{avg}}/\sqrt{p_{\mathrm{succ}}})$.
Hence, VTAA can provide a speed-up when the average runtime is much shorter
than the maximum runtime.
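For small angles the window in Eq. (C.10) always contains an integer, and any such $k_{j}$ places the amplified angle $(2k_{j}+1)\theta_{j}$ in $[\pi/4,\pi/2]$, which gives the $\Theta(1)$ success probability of Proposition 24. A small sketch (the sample angles are arbitrary, all below $\pi/8$ so that the window is non-empty):

```python
import math

def pick_k(theta):
    """Smallest integer k in the window of Eq. (C.10)."""
    k = math.ceil(math.pi / (8.0 * theta) - 0.5)
    assert k <= math.pi / (4.0 * theta) - 0.5, "window contains no integer"
    return k

amplified = []
for theta in (0.01, 0.05, 0.1, 0.3):
    k = pick_k(theta)
    angle = (2 * k + 1) * theta             # amplified angle, Eq. (C.9)
    amplified.append((angle, math.sin(angle) ** 2))
```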
We remark that a sequence of good values $(k_{0},\ldots,k_{m})$ such that
conditions in Eq. (C.10) are satisfied can be obtained efficiently by means of
an iterative classical-quantum pre-processing algorithm. Specifically, suppose
that some values $(k_{0},\ldots,k_{j})$ satisfying (C.10) have been already
found; then, one can compile the corresponding VTAA algorithm
${\mathcal{A}}_{j}^{\prime}$ and use phase estimation on the state
$|{\psi_{\mathrm{in}}^{j+1}}\rangle={\mathcal{A}}_{j+1}{\mathcal{A}}_{j}^{\prime}\left|{0}\right\rangle_{\mathrm{all}}$
to obtain an estimate of $\sin(\theta_{j+1})$ up to constant multiplicative
precision; this allows one to find a value $k_{j+1}$ that satisfies (C.10) with
probability $1-p_{\mathrm{fail}}$ at a cost
${\mathcal{O}}\big{(}\frac{1}{\theta_{j+1}}\log(1/p_{\mathrm{fail}})\big{)}$
[30], which is asymptotically equal to the query complexity of
${\mathcal{A}}_{j+1}^{\prime}$, apart from a multiplicative
$\log(1/p_{\mathrm{fail}})$ overhead. Once $k_{j+1}$ has been precomputed one
can directly compile ${\mathcal{A}}_{j+1}^{\prime}$, without needing to
perform phase estimation “online”. The total failure probability is
$p_{\mathrm{fail}}^{\mathrm{tot}}\leq m\,p_{\mathrm{fail}}$ and the query cost
of the hybrid classical-quantum pre-processing algorithm has only a
${\mathcal{O}}\big{(}\log(1/p_{\mathrm{fail}})\big{)}$ multiplicative overhead
compared to the expression in Eq. (C.12).
### C.2 Efficient implementation of windowing polynomials
In this Section we introduce the “windowing functions” that we use to replace
the Gapped Phase Estimation (GPE) subroutine, introduced in Ref. [5]. Using
GPE gives an additive ${\mathcal{O}}(\kappa)$ runtime overhead, which is too
costly for our purposes.
###### Lemma 25 (Efficient windowing polynomials).
Given $\epsilon,\delta\in(0,1/2]$, there exists an even polynomial
$W_{\epsilon,\delta}(x)=W_{\epsilon,\delta}(-x)$ of degree
$\ell\in{\mathcal{O}}\Big{(}\frac{1}{\sqrt{\delta}}\,\mathrm{polylog}(\delta^{-1},\epsilon^{-1})\Big{)}$,
where
$\mathrm{polylog}(\delta^{-1},\epsilon^{-1})\equiv\log^{1/4}(\epsilon^{-1})\big{[}\log(\epsilon^{-1})+\log(\delta^{-1})\big{]}$,
satisfying the inequalities
$\displaystyle
W_{\epsilon,\delta}(x)\in\begin{cases}[1-\epsilon,1]&\mathrm{if}~{}x\in[0,1-2\delta]\\\
[-1,+1]&\mathrm{if}~{}x\in(1-2\delta,1-\delta)\\\
[-\epsilon,+\epsilon]&\mathrm{if}~{}x\in[1-\delta,1]\;.\end{cases}$ (C.13)
The windowing function $W_{\epsilon,\delta}(x)$ can be computed efficiently
with classical algorithms.
A family of polynomials satisfying the inequalities in (C.13) is already given
in Ref. [25, Lemma 29], but they have a degree
$\ell\in\widetilde{{\mathcal{O}}}(\delta^{-1})$. We seek to achieve the same
result with a quadratically smaller degree,
$\ell\in\widetilde{{\mathcal{O}}}(\delta^{-1/2})$. Also, it is crucial that
the polynomial approximation has even parity: using QSP one can implement
matrix polynomials when the polynomial $P$ satisfies $|P(x)|\leq 1$ for
$|x|\leq 1$ and $P$ has definite parity, while for a polynomial $P^{\prime}$
without definite parity the more restrictive constraint $|P^{\prime}(x)|\leq
1/2$ has to be satisfied (see Theorem 8).
###### Proof.
The proof proceeds in two steps: first, we find an analytic function $f(x)$
that satisfies the inequalities in (C.13) within error $\epsilon/2$, and then
show that the Chebyshev expansion converges to it very quickly, so that
choosing a degree
$\ell\in\widetilde{{\mathcal{O}}}\big{(}\delta^{-1/2}\big{)}$ is sufficient to
be within $\epsilon/2$-distance from $f(x)$ for all $x$.
_First part:_ We introduce the normal distribution cumulative function
$\displaystyle\Phi(x):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{t^{2}}{2}}\,dt\qquad
x\in\mathds{R}$ (C.14)
normalised so that $0\leq\Phi(x)\leq 1$. We then define
$\displaystyle\mathcal{W}_{\sigma,\delta}(x):=\Phi\\!\left(\frac{x+1-1.5\,\delta}{\sigma}\right)\Phi\\!\left(\frac{-x+1-1.5\,\delta}{\sigma}\right)$
(C.15)
where $\sigma>0$ has to be chosen so that
$\mathcal{W}_{\sigma,\delta}(1-2\delta)\geq 1-\epsilon/2$ and
$\mathcal{W}_{\sigma,\delta}(1-\delta)\leq\epsilon/2$. Using
$\Phi(-x)=1-\Phi(x)$ and the monotonicity of $\Phi$, both inequalities are
implied by $\Phi\\!\left(-\frac{0.5\,\delta}{\sigma}\right)\leq\epsilon/4$.
Using the bound
$\Phi(-x)\leq\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{-x}\frac{-t}{x}e^{-t^{2}/2}\,dt=\frac{e^{-x^{2}/2}}{\sqrt{2\pi}x}\leq\frac{e^{-x^{2}/2}}{\sqrt{2\pi}}$
for $x\geq 1$, it is sufficient to choose
$\sigma\in\Theta\big{(}\delta/\sqrt{\log(\epsilon^{-1})}\big{)}$ to obtain the
desired result.
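The construction above is easy to check numerically with the error function. Here we take $\delta=0.05$, $\epsilon=10^{-2}$ and $\sigma=\delta/6$ (so that $0.5\,\delta/\sigma=3$ and $\Phi(-3)\approx 1.35\times 10^{-3}\leq\epsilon/4$); these concrete values are illustrative choices:

```python
import math

def Phi(x):
    """Standard normal CDF, Eq. (C.14)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

delta, eps = 0.05, 1e-2
sigma = delta / 6.0                   # makes Phi(-0.5*delta/sigma) = Phi(-3)

def W(x):
    """Gaussian window of Eq. (C.15)."""
    return Phi((x + 1 - 1.5 * delta) / sigma) * Phi((-x + 1 - 1.5 * delta) / sigma)

plateau = [W(i / 1000 * (1 - 2 * delta)) for i in range(1001)]  # x in [0, 1-2d]
tail = [W(1 - delta + i / 1000 * delta) for i in range(1001)]   # x in [1-d, 1]
parity_gap = max(abs(W(x) - W(-x)) for x in (0.2, 0.5, 0.93))
```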
_Second part:_ We consider the Chebyshev series associated to
$\mathcal{W}_{\sigma,\delta}(x)$. Note that the function
$\mathcal{W}_{\sigma,\delta}(x)$ has even parity, hence the associated
Chebyshev series also has even parity [54]. Truncating the Chebyshev series at
degree $\ell$, we get a sequence of polynomials
$[S_{\ell}\mathcal{W}_{\sigma,\delta}](x)$ converging uniformly to
$\mathcal{W}_{\sigma,\delta}(x)$ for $\ell\rightarrow\infty$. More precisely,
we have the following result [54, Theorem 5.16].
###### Lemma 26.
Suppose $f(x)$ can be extended to an analytic function on the ellipse
$E_{r}\subseteq\mathds{C}$
$\displaystyle E_{r}=\Big{\\{}\ a\cos\theta+i\,b\sin\theta\ \Big{|}\
a=\tfrac{1}{2}(r+r^{-1}),\ b=\tfrac{1}{2}(r-r^{-1}),\ \theta\in[0,2\pi)\
\Big{\\}}$ (C.16)
for some $r>1$; then, the $\ell$-th degree truncation of the Chebyshev series
is a polynomial $[S_{\ell}f](x)$ that satisfies for all $x\in[-1,+1]$
$\displaystyle\Big{|}f(x)-[S_{\ell}f](x)\Big{|}\leq\frac{M}{r^{\ell}(r-1)}\qquad
with~{}M=\sup\big{\\{}|f(z)|:z\in E_{r}\big{\\}}.$ (C.17)
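The geometric convergence promised by Lemma 26 can be observed directly with NumPy's Chebyshev tools, using the same Gaussian window $\mathcal{W}_{\sigma,\delta}$ with $\delta=0.05$ and $\sigma=\delta/6$ as illustrative values; we use Chebyshev interpolation as a stand-in for series truncation, and the degrees and tolerance below are our test choices, not the constants of the proof:

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as C

delta = 0.05
sigma = delta / 6.0
erf = np.vectorize(math.erf)

def W(x):
    """Gaussian window of Eq. (C.15), vectorised."""
    Phi = lambda t: 0.5 * (1.0 + erf(t / math.sqrt(2.0)))
    return Phi((x + 1 - 1.5 * delta) / sigma) * Phi((-x + 1 - 1.5 * delta) / sigma)

xs = np.linspace(-1.0, 1.0, 20001)
errors = []
for deg in (100, 200, 400):
    coef = C.chebinterpolate(W, deg)
    errors.append(np.max(np.abs(W(xs) - C.chebval(xs, coef))))
# Even parity: odd Chebyshev coefficients should vanish up to roundoff.
odd_mass = np.max(np.abs(C.chebinterpolate(W, 400)[1::2]))
```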
We apply this theorem to $f(x)=\mathcal{W}_{\sigma,\delta}(x)$, which is
analytic in the whole complex plane. The main technical hurdle is to upper
bound $M$. We choose $r=\frac{1+\sqrt{\sigma}}{1-\sqrt{\sigma}}\geq
1+2\sqrt{\sigma}$, so that we have $a=\frac{1+\sigma}{1-\sigma}$ and
$b=\frac{2\sqrt{\sigma}}{1-\sigma}$, where $a$ and $b$ are the semi-axes of
the ellipse $E_{r}$ along the real and imaginary axis, respectively. We have,
for $x,y\in\mathds{R}$:
$\displaystyle|\Phi(x+iy)|$
$\displaystyle=\tfrac{1}{\sqrt{2\pi}}\Big{|}\int_{-\infty}^{x}e^{-\frac{t^{2}}{2}}dt+\int_{x}^{x+iy}e^{-\frac{t^{2}}{2}}dt\Big{|}$
(C.18) $\displaystyle\leq
1+\tfrac{1}{\sqrt{2\pi}}\Big{|}\int_{0}^{iy}e^{-\frac{(x+i\tau)^{2}}{2}}d\tau\Big{|}$
(C.19) $\displaystyle\leq
1+\tfrac{1}{\sqrt{2\pi}}\,e^{-x^{2}/2}\int_{0}^{|y|}e^{\frac{\tau^{2}}{2}}d\tau$
(C.20) $\displaystyle\leq
1+\tfrac{1}{\sqrt{2\pi}}\,e^{-x^{2}/2}\,e^{y^{2}/2}\,|y|$ (C.21)
$\displaystyle\leq 1+\tfrac{1}{\sqrt{2\pi}}e^{(-x^{2}+y^{2})/2}$ (C.22)
where the last inequality holds for $|y|\leq 1$. We now upper bound
$\displaystyle
M_{\mathrm{half}}:=\sup\left\\{\left|\Phi\\!\left(\tfrac{z+1-1.5\,\delta}{\sigma}\right)\right|:z\in
E_{r}\right\\}$ (C.23)
so that we will have $M=\sup_{E_{r}}|\mathcal{W}_{\sigma,\delta}(z)|\leq
M_{\mathrm{half}}^{2}$. From (C.22) we only need to maximize the expression
$1+\frac{1}{\sqrt{2\pi}}e^{(-x^{2}+y^{2})/2}$ for
$\displaystyle x=\frac{a}{\sigma}\cos\theta+\frac{1-1.5\delta}{\sigma}\qquad
y=\frac{b}{\sigma}\sin\theta\,.$ (C.24)
Equivalently, we can maximize $-x^{2}+y^{2}$. Substituting $\cos\theta=u$ and
$\sin^{2}\theta=1-u^{2}$ we get
$\displaystyle-x^{2}+y^{2}$
$\displaystyle=\frac{1}{\sigma^{2}}\big{[}-(au+1-1.5\delta)^{2}+b^{2}(1-u^{2})\big{]}$
(C.25)
$\displaystyle\leq\frac{b^{2}}{2\sigma^{2}}\left[1-\frac{(1-1.5\delta)^{2}}{a^{2}+b^{2}}\right].$
(C.26)
We now have $b\in{\mathcal{O}}(\sqrt{\sigma})$,
$\frac{1}{a^{2}+b^{2}}=\frac{1-2\sigma+\sigma^{2}}{1+6\sigma+\sigma^{2}}=1-{\mathcal{O}}(\sigma)$
and thus
$\displaystyle\sup\,\\{-x^{2}+y^{2}\\}\ $
$\displaystyle\in{\mathcal{O}}\Big{(}\frac{\sigma}{\sigma^{2}}\Big{)}\Big{(}1-[1-{\mathcal{O}}(\delta)][1-{\mathcal{O}}(\sigma)]\Big{)}$
(C.27)
$\displaystyle={\mathcal{O}}\big{(}\sigma^{-1}\big{)}{\mathcal{O}}\big{(}\delta+\sigma\big{)}={\mathcal{O}}\big{(}\sqrt{\log\epsilon^{-1}}\,\big{)}$
(C.28)
where we have used
$\delta\in\Theta\big{(}\sigma\sqrt{\log\epsilon^{-1}}\big{)}$. We therefore
have $M_{\mathrm{half}}\in
e^{{\mathcal{O}}(\sqrt{\log\epsilon^{-1}}\,)}\subset{\mathcal{O}}\big{(}\epsilon^{-1}\big{)}$
and thus also $M\leq M_{\mathrm{half}}^{2}\in{\mathcal{O}}(\epsilon^{-2}\,)$.
We thus set
$\ell\in\Theta\big{(}\frac{1}{\sqrt{\sigma}}\log(\epsilon^{-3}\sigma^{-1/2})\big{)}$
and compute
$\displaystyle\Big{|}\mathcal{W}_{\sigma,\delta}(x)-[S_{\ell}\mathcal{W}_{\sigma,\delta}](x)\Big{|}$
$\displaystyle\leq\frac{M}{r^{\ell+1}(r-1)}$ (C.29)
$\displaystyle\leq\frac{C\,\epsilon^{-2}}{(1+2\sqrt{\sigma})^{\frac{D}{\sqrt{\sigma}}\log(\epsilon^{-3}\sigma^{-1/2})}\,2\sqrt{\sigma}}$
(C.30)
$\displaystyle\leq\frac{C\,\epsilon^{-2}}{e^{\log(\epsilon^{-3}\sigma^{-1/2})}\,2\sqrt{\sigma}}=(\epsilon^{3}\,\sigma^{1/2})\;\frac{C\,\epsilon^{-2}}{2\sqrt{\sigma}}$
(C.31) $\displaystyle\leq\epsilon/5$ (C.32)
where $C$ and $D$ are positive constants. We recall that
$\sigma\in\Theta\big{(}\delta/\sqrt{\log(\epsilon^{-1})}\big{)}$ and thus
$\displaystyle\ell$
$\displaystyle\in\Theta\Big{(}\frac{1}{\sqrt{\sigma}}\big{[}3\log(\epsilon^{-1})+\frac{1}{2}\log(\sigma^{-1})\big{]}\Big{)}$
(C.33)
$\displaystyle=\Theta\Big{(}\frac{1}{\sqrt{\delta}}\log^{1/4}(\epsilon^{-1})\big{[}\log(\epsilon^{-1})+\log(\delta^{-1})\big{]}\Big{)}$
(C.34)
is sufficient to guarantee that $[S_{\ell}\mathcal{W}_{\sigma,\delta}](x)$ is
within $\epsilon/5$ distance from $\mathcal{W}_{\sigma,\delta}(x)$.
Finally, we renormalise $[S_{\ell}\mathcal{W}_{\sigma,\delta}](x)$ dividing it
by the maximum value attained for $x\in[-1,1]$; by construction, this value is
smaller than $1+\epsilon/5$, thus the maximum distance from
$\mathcal{W}_{\sigma,\delta}$ after normalisation is bounded by
$1-\frac{1-\epsilon/5}{1+\epsilon/5}<\epsilon/2$.
∎
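The geometric decay in the degree promised by Eq. (C.17) is easy to observe numerically. The sketch below is an illustration only: instead of the exact $\mathcal{W}_{\sigma,\delta}$ (defined in the main text), it truncates the Chebyshev series of the analytic sigmoid $\Phi(x/\sigma)$, which has the same kind of Gaussian growth in the complex plane, and checks that the maximum error on $[-1,1]$ shrinks geometrically as the degree grows.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from math import erf, sqrt

def Phi(x):
    # standard normal CDF, written via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

sigma = 0.3
f = np.vectorize(lambda x: Phi(x / sigma))  # analytic sigmoid; sharper as sigma -> 0

xs = np.linspace(-1.0, 1.0, 2001)

def max_err(deg):
    # chebinterpolate returns the Chebyshev coefficients of the degree-`deg`
    # interpolant at Chebyshev points, a near-best uniform approximation
    coeffs = C.chebinterpolate(f, deg)
    return float(np.max(np.abs(C.chebval(xs, coeffs) - f(xs))))

errs = {d: max_err(d) for d in (10, 20, 40)}
assert errs[40] < 1e-6          # geometric decay in the degree, as in (C.17)
assert errs[40] < errs[20] < errs[10]
```

Chebyshev interpolation at Chebyshev points is used here as a stand-in for the truncated Chebyshev series; the two have errors of the same order for analytic functions.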
### C.3 Polynomial approximations on increasingly larger domains
Now we explain how the windowing functions can be used to select a domain
where a low-degree polynomial is a good approximation of the function
$1/(1-x)$, when applied to eigenvalues contained in that domain.
As usual, the spectrum of $A$ is contained in
${\mathcal{D}}_{A}=\left[\,\frac{1}{\kappa},\,1\right]$ and, correspondingly,
the spectrum of $B=I-\eta\,A$ is contained in
${\mathcal{D}}_{B}=\left[1-\eta,1-\frac{\eta}{\kappa}\,\right]\subseteq\left[0,1-\frac{\eta}{\kappa}\,\right]$.
We set $m:=\lceil\log_{2}\kappa\rceil+1$, $\delta_{j}:=\eta\,2^{-j}$ for
$j\in\\{1,\ldots,m\\}$ and fix ${\tilde{\epsilon}}>0$, a parameter related to
the target precision $\varepsilon$. According to Eq. (27) and Eq. (C.13) we
can find polynomials $P_{{\tilde{\epsilon}},\delta_{j}}(x)$ and
$W_{{\tilde{\epsilon}},\delta_{j}}(x)$ such that
$\displaystyle\Big{|}P_{{\tilde{\epsilon}},\delta_{j}}(x)-\frac{1}{1-x}\Big{|}\leq{\tilde{\epsilon}}\qquad\forall
x\in[-1,1-\delta_{j}]$ (C.35) $\displaystyle
W_{{\tilde{\epsilon}},\delta_{j}}(x)\qquad\quad\mathrm{satisfies~{}Eq.}~{}\eqref{eq:W_inequalities}~{}\mathrm{for~{}each~{}}\delta_{j}$
(C.36)
with degrees in
${\mathcal{O}}\big{(}\delta^{-1/2}\log({\tilde{\epsilon}}^{-1}\delta^{-1})\big{)}$
and
${\mathcal{O}}\big{(}\delta^{-1/2}\log^{1/4}({\tilde{\epsilon}}^{-1})\log({\tilde{\epsilon}}^{-1}\delta^{-1})\big{)}$,
respectively. We also define a normalisation factor
$\displaystyle
K:=2\max_{j}\max_{x\in[-1,1]}|P_{{\tilde{\epsilon}},\delta_{j}}(x)|$ (C.37)
which can be set to coincide with the factor $K$ defined in Eq. (28) and
introduce the shorthands
$\displaystyle
P_{j}(x):=P_{{\tilde{\epsilon}},\delta_{j}}(x)/K\quad\mathrm{and}\quad
W_{j}(x):=W_{{\tilde{\epsilon}},\delta_{j}}(x).$ (C.38)
The windowing function $W_{j}(x)$ is used to select an interval
$[-1+2\delta_{j},1-2\delta_{j}]$ where $P_{j}(x)$ is a good approximation of
the inverse. For any eigenvalue $\lambda$ of $B$ we have
$P_{j}(\lambda)\approx\frac{1}{K}\frac{1}{1-\lambda}$ when $\lambda\leq
1-\delta_{j}$ and $W_{j}(\lambda)\approx 0$ when $\lambda\geq 1-\delta_{j}$.
More precisely, we have
$\displaystyle W_{j}(B)P_{j}(B)$
$\displaystyle=W_{j}(B)\frac{1}{K}\frac{1}{I-B}+\Delta_{j}^{\tilde{\epsilon}}$
(C.39)
$\displaystyle=W_{j}(B)\,\frac{A^{-1}}{\eta\,K}+\Delta_{j}^{\tilde{\epsilon}}$
(C.40)
where $\Delta_{j}^{\tilde{\epsilon}}$ are arbitrary matrices with operator
norm at most ${\tilde{\epsilon}}$. Note that $W_{j}(B)$ and
$P_{j}(B)$ commute, since they are both polynomial functions of $B$. Moreover,
we can set without loss of generality $W_{m}(B)\equiv I$, since we already
have $P_{m}(B)\approx A^{-1}/(\eta\,K)$ on the entire domain of $A$. Finally,
we have $K\in\Theta(\kappa/\eta)$, as derived in the main text.
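As a simplified illustration of the role of $P_{{\tilde{\epsilon}},\delta}$, the truncated geometric series $\sum_{k=0}^{d}x^{k}$ already gives a uniform ${\tilde{\epsilon}}$-approximation of $1/(1-x)$, with the quadratically worse degree $d\in{\mathcal{O}}\big(\delta^{-1}\log({\tilde{\epsilon}}^{-1}\delta^{-1})\big)$ instead of the Chebyshev-based ${\mathcal{O}}\big(\delta^{-1/2}\log({\tilde{\epsilon}}^{-1}\delta^{-1})\big)$ used in the text, and only on $[-(1-\delta),1-\delta]$ rather than $[-1,1-\delta]$; this restricted domain still covers the spectrum of $B\subseteq[0,1-\eta/\kappa]$.

```python
import numpy as np

delta, eps = 0.05, 1e-6
# degree ~ delta^{-1} log(1/(eps*delta)) suffices for the geometric series, since
# |sum_{k<=d} x^k - 1/(1-x)| = |x|^{d+1}/(1-x) <= (1-delta)^{d+1}/delta for |x| <= 1-delta
d = int(np.ceil(np.log(1.0 / (eps * delta)) / delta))

xs = np.linspace(-(1.0 - delta), 1.0 - delta, 4001)
P = np.polynomial.polynomial.polyval(xs, np.ones(d + 1))  # truncated geometric series
err = float(np.max(np.abs(P - 1.0 / (1.0 - xs))))
assert err <= eps
```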
### C.4 Variable-stopping-time PD-QLS solver: definition
We now proceed to reformulate our PD-QLS solver as a variable-stopping-time
algorithm. For notational convenience, from now on we often drop the
dependency of a polynomial on its variable and simply write $P$ instead of $P(x)$.
###### Algorithm 27 (Variable-stopping-time linear system solving).
We preliminarily define two variable-stopping-time algorithms ${\mathcal{A}}$
and ${\mathcal{B}}$. ${\mathcal{A}}$ is the core PD-QLS solving function and
${\mathcal{B}}$ is used to un-compute the clock registers at the end of the
process.
The algorithm
${\mathcal{A}}={\mathcal{A}}_{m}\cdot\ldots\cdot{\mathcal{A}}_{1}\cdot{\mathcal{A}}_{0}$,
with $m:=\lceil\log_{2}\kappa\rceil+1$, acts on the Hilbert space
$\mathcal{H}=\mathcal{H}_{C}\otimes\mathcal{H}_{F}\otimes\mathcal{H}_{S}$
(respectively clock register, flag qubit, system register with
$\dim\mathcal{H}_{S}=N$), plus ancillary registers if needed. We set
${\mathcal{A}}_{0}=I_{C}\otimes I_{F}\otimes{\mathcal{U}}_{\textbf{b}}$ (i.e.,
${\mathcal{A}}_{0}$ prepares $\left|{\textbf{b}}\right\rangle$ in register
$S$) while the other sub-algorithms ${\mathcal{A}}_{j}$ use oracular access
only to ${\mathcal{U}}_{B}$. Each ${\mathcal{A}}_{j}$, for $j\geq 1$, consists
of the following two steps:
1. 1.
Conditioned on the qubits $C_{1},\ldots,C_{j-1}$ being in
$\left|{0}\right\rangle^{\otimes j-1}$, use QSP and a single Pauli $X$ gate to
implement the unitary
$\displaystyle{\mathcal{U}}_{j}^{\prime}:=\left(\begin{array}[]{cc}\sqrt{1-W_{j}^{2}}&-W_{j}\\\
W_{j}&\sqrt{1-W_{j}^{2}}\end{array}\right)\quad\mathrm{acting~{}on}\
\mathcal{H}_{C_{j}}\otimes\mathcal{H}_{S},$ (C.43)
with the window function $W_{j}(B)$ given in Eq. (C.38).
2. 2.
Conditioned on $C_{j}$ being in $\left|{1}\right\rangle_{C_{j}}$, use QSP and
a single Pauli $X$ gate to implement the unitary
$\displaystyle{\mathcal{U}}_{j}^{\prime\prime}:=\left(\begin{array}[]{cc}\sqrt{I-P_{j}^{2}}&-P_{j}\\\
P_{j}&\sqrt{I-P_{j}^{2}}\end{array}\right)\quad\mathrm{acting~{}on}\
\mathcal{H}_{F}\otimes\mathcal{H}_{S},$ (C.46)
with the polynomial approximations $P_{j}(B)$ given in Eq. (C.38).
The algorithm
${\mathcal{B}}={\mathcal{B}}_{m}\cdot\ldots\cdot{\mathcal{B}}_{1}\cdot{\mathcal{B}}_{0}$
acts on the same Hilbert space as ${\mathcal{A}}$. The initialisation step is
skipped, i.e., we set ${\mathcal{B}}_{0}=I$, and $\forall j\geq 1$ we define
${\mathcal{B}}_{j}$ as the application of the unitary
${\mathcal{U}}_{j}^{\prime}$ given in Eq. (C.43) controlled on
$C_{1},\ldots,C_{j-1}$ being in $\left|{0}\right\rangle^{\otimes j-1}$. The
unitaries ${\mathcal{U}}_{j}^{\prime\prime}$ are not applied.
The complete PD-QLS solving algorithm is then defined as follows.
1. 1.
In an initial pre-processing step (that needs to be run only once), a sequence
of integers $(k_{0},k_{1},\ldots,k_{m})$ that satisfy Eq. (C.10) is determined
using a phase estimation algorithm.
2. 2.
The main part of the algorithm consists in running ${\mathcal{A}}^{\prime}$,
which is a VTAA version of ${\mathcal{A}}$ where each
${\mathcal{A}}_{j}^{\prime}$ implements a $k_{j}$-step amplification of
${\mathcal{A}}_{j}{\mathcal{A}}_{j-1}^{\prime}$.
3. 3.
The final post-processing consists in applying ${\mathcal{B}}^{\dagger}$ to
the output of ${\mathcal{A}}^{\prime}$ in order to un-compute the clock
register. We then measure the flag qubit and when the result is
$\left|{1}\right\rangle_{F}$ (which happens with constant probability) we
output the $S$ register (which contains a state close to
$\left|{A^{-1}\textbf{b}}\right\rangle$).
### C.5 Variable-stopping-time PD-QLS solver: correctness
We now analyse the correctness of Algorithm 27, beginning with a lemma.
###### Lemma 28.
The state after the application of ${\mathcal{A}}_{k}$ defined in Algorithm 27
is given by
$\displaystyle{\mathcal{A}}_{\leq k}\left|{0}\right\rangle_{\mathrm{all}}=$
$\displaystyle\sum_{j=1}^{k}\left|{1_{j}}\right\rangle_{C}\left(\left|{0}\right\rangle_{F}M_{j-1}W_{j}\sqrt{I-P_{j}^{2}}\left|{\textbf{b}}\right\rangle_{S}+\left|{1}\right\rangle_{F}M_{j-1}W_{j}P_{j}\left|{\textbf{b}}\right\rangle_{S}\right)$
(C.47) $\displaystyle+M_{k}\left|{0,0,\textbf{b}}\right\rangle_{C,F,S}$ (C.48)
with $M_{0}:=I$, $M_{j}:=\prod_{i=1}^{j}\sqrt{I-W_{i}^{2}}$ and, using
$W_{m}=I$, at the last step we have $M_{m}=0$. Moreover, the algorithm
${\mathcal{B}}$ defined in Algorithm 27 acts as
$\displaystyle{\mathcal{B}}\left|{0,f,\phi}\right\rangle_{C,F,S}=$
$\displaystyle\sum_{j=1}^{m}\left|{1_{j}}\right\rangle_{C}\left|{f}\right\rangle_{F}M_{j-1}W_{j}\left|{\phi}\right\rangle_{S}$
(C.49)
for any input state $\left|{\phi}\right\rangle_{S}$ and $f\in\\{0,1\\}$.
###### Proof.
We proceed by induction over $k$. For $k=0$ we have ${\mathcal{A}}_{\leq
0}\left|{0}\right\rangle_{\mathrm{all}}={\mathcal{A}}_{0}\left|{0}\right\rangle_{\mathrm{all}}=\left|{0,0,\textbf{b}}\right\rangle_{C,F,S}$.
Using that $P_{j},M_{j},W_{j}$ are functions of $B$ and thus commute, we have
at step $k+1$
$\displaystyle\eqref{eq:induction1}\
\overset{{\mathcal{U}}_{k+1}^{\prime}}{\mapsto}\ $
$\displaystyle\sum_{j=1}^{k}\left|{1_{j}}\right\rangle_{C}\left(\left|{0}\right\rangle_{F}M_{j-1}W_{j}\sqrt{I-P_{j}^{2}}\left|{\textbf{b}}\right\rangle_{S}+\left|{1}\right\rangle_{F}M_{j-1}W_{j}P_{j}\left|{\textbf{b}}\right\rangle_{S}\right)$
(C.50)
$\displaystyle+\left|{1_{k+1}}\right\rangle_{C}\left|{0}\right\rangle_{F}M_{k}W_{k+1}\left|{\textbf{b}}\right\rangle_{S}+\left|{0,0}\right\rangle_{C,F}M_{k}\sqrt{I-W_{k+1}^{2}}\left|{\textbf{b}}\right\rangle_{S}$
(C.51) $\displaystyle\eqref{eq:induction2}\
\overset{{\mathcal{U}}_{k+1}^{\prime\prime}}{\mapsto}\ $
$\displaystyle\sum_{j=1}^{k}\left|{1_{j}}\right\rangle_{C}\left(\left|{0}\right\rangle_{F}M_{j-1}W_{j}\sqrt{I-P_{j}^{2}}\left|{\textbf{b}}\right\rangle_{S}+\left|{1}\right\rangle_{F}M_{j-1}W_{j}P_{j}\left|{\textbf{b}}\right\rangle_{S}\right)$
(C.52)
$\displaystyle+\left|{1_{k+1}}\right\rangle_{C}\left(\left|{0}\right\rangle_{F}M_{k}W_{k+1}\sqrt{I-P_{k+1}^{2}}\left|{\textbf{b}}\right\rangle_{S}+\left|{1}\right\rangle_{F}M_{k}W_{k+1}P_{k+1}\left|{\textbf{b}}\right\rangle_{S}\right)$
(C.53) $\displaystyle+M_{k+1}\left|{0,0,\textbf{b}}\right\rangle_{C,F,S}.$
(C.54)
Equation (C.49) can be verified similarly. ∎
Next, we consider the output state of ${\mathcal{A}}^{\prime}$, the VTAA
version of ${\mathcal{A}}$, which features an amplification of the amplitude
of the $\left|{1}\right\rangle_{F}$ component, i.e., it is a state of the
form
$\displaystyle{\mathcal{A}}^{\prime}\left|{0}\right\rangle_{\mathrm{all}}$
$\displaystyle=\sqrt{p_{\mathrm{succ}}^{\prime}}\,|{1}\rangle_{F}|{\psi_{\mathrm{succ}}}\rangle_{C,S}\,+\,\sqrt{1-p_{\mathrm{succ}}^{\prime}}\,|{0}\rangle_{F}|{\psi_{\mathrm{fail}}}\rangle_{C,S}$
(C.55)
where the success probability is constant,
$p_{\mathrm{succ}}^{\prime}\in\Theta(1)$, and where we have
$\displaystyle|{\psi_{\mathrm{succ}}}\rangle_{C,S}$
$\displaystyle=\frac{1}{\sqrt{{\mathcal{N}}}}\sum_{j=1}^{m}\left|{1_{j}}\right\rangle_{C}M_{j-1}W_{j}P_{j}\left|{\textbf{b}}\right\rangle_{S}$
(C.56) $\displaystyle{\mathcal{N}}$
$\displaystyle=\sum_{j=1}^{m}\big{|}\big{|}M_{j-1}W_{j}P_{j}\left|{\textbf{b}}\right\rangle\big{|}\big{|}^{2}\,.$
(C.57)
We now claim that, for an appropriate choice of the algorithm parameters, the
error in $\left|{\psi_{\mathrm{succ}}}\right\rangle_{C,S}$ is upper bounded by
${\mathcal{O}}(\varepsilon)$, and we will prove this claim in the next
section. Thus we have:
$\displaystyle|{\psi_{\mathrm{succ}}}\rangle_{C,S}$
$\displaystyle=\sum_{j=1}^{m}\left|{1_{j}}\right\rangle_{C}M_{j-1}W_{j}\left|{A^{-1}\textbf{b}}\right\rangle_{S}+{\mathcal{O}}(\varepsilon)$
(C.58)
where ${\mathcal{O}}(x)$ here denotes an arbitrary vector with 2-norm bounded
by ${\mathcal{O}}(x)$. Note that Eq. (C.49) for
$\left|{\phi}\right\rangle_{S}=\left|{A^{-1}\textbf{b}}\right\rangle_{S}$
implies that
$\sum_{j}\left|{1_{j}}\right\rangle_{C}M_{j-1}W_{j}\left|{A^{-1}\textbf{b}}\right\rangle_{S}$
is a normalised state.
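The normalisation claim follows from a telescoping identity: since $M_{j}^{2}=M_{j-1}^{2}(I-W_{j}^{2})$, we have $\sum_{j=1}^{m}M_{j-1}^{2}W_{j}^{2}=M_{0}^{2}-M_{m}^{2}=I$ whenever $W_{m}=I$. A quick numerical check on the eigenvalues, with arbitrary values in $[0,1]$ standing in for $W_{j}(\lambda)$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, npts = 6, 100
W = rng.uniform(0.0, 1.0, size=(m, npts))  # stand-in window values W_j(lambda) in [0,1]
W[-1] = 1.0                                # W_m = I on the whole spectrum

M = np.ones(npts)                          # M_0 = I
total = np.zeros(npts)
for j in range(m):
    total += (M * W[j]) ** 2               # per-eigenvalue contribution ||M_{j-1} W_j |phi>||^2
    M *= np.sqrt(1.0 - W[j] ** 2)          # M_j = M_{j-1} sqrt(I - W_j^2)

assert np.allclose(total, 1.0)             # telescoping: sum_j M_{j-1}^2 W_j^2 = I - M_m^2 = I
assert np.allclose(M, 0.0)                 # W_m = I forces M_m = 0
```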
The final step of the PD-QLS algorithm consists in applying
${\mathcal{B}}^{\dagger}$ to the state in Eq. (C.55). Using again Eq. (C.49)
we obtain:
$\displaystyle{\mathcal{B}}^{\dagger}{\mathcal{A}}^{\prime}\left|{0}\right\rangle_{\mathrm{all}}=\sqrt{p_{\mathrm{succ}}^{\prime}}\left|{0,1,A^{-1}\textbf{b}}\right\rangle_{C,F,S}+\sqrt{1-p_{\mathrm{succ}}^{\prime}}\left|{0,0,\psi^{\prime}}\right\rangle_{C,F,S}+{\mathcal{O}}(\varepsilon)\,.$
(C.59)
Measuring the flag in $\left|{1}\right\rangle_{F}$ then yields, with constant
success probability, a vector that has ${\mathcal{O}}(\varepsilon)$
distance from the ideal output $\left|{A^{-1}\textbf{b}}\right\rangle$.
### C.6 Variable-stopping-time PD-QLS solver: error bound
In order to derive Eq. (C.58) we use Eq. (C.40) and write
$\displaystyle|{\psi_{\mathrm{succ}}}\rangle_{C,S}$
$\displaystyle=\frac{1}{\sqrt{{\mathcal{N}}}}\sum_{j=1}^{m}\left|{1_{j}}\right\rangle_{C}M_{j-1}W_{j}\frac{A^{-1}}{\eta\,K}\left|{\textbf{b}}\right\rangle_{S}+\frac{1}{\sqrt{{\mathcal{N}}}}\sum_{j=1}^{m}\left|{1_{j}}\right\rangle_{C}\Delta_{j}^{\tilde{\epsilon}}\left|{\textbf{b}}\right\rangle_{S}$
(C.60)
$\displaystyle=\frac{\sqrt{\widetilde{{\mathcal{N}}}}}{\sqrt{{\mathcal{N}}}}\sum_{j=1}^{m}\left|{1_{j}}\right\rangle_{C}M_{j-1}W_{j}\left|{A^{-1}\textbf{b}}\right\rangle_{S}+{\mathcal{O}}({\tilde{\epsilon}}\sqrt{m/{\mathcal{N}}})$
(C.61) $\displaystyle=\hskip
22.76219pt\sum_{j=1}^{m}\left|{1_{j}}\right\rangle_{C}M_{j-1}W_{j}\left|{A^{-1}\textbf{b}}\right\rangle_{S}+{\mathcal{O}}({\tilde{\epsilon}}\sqrt{m/{\mathcal{N}}})$
(C.62)
and $\widetilde{{\mathcal{N}}}$ is defined below. To prove the last step,
recall that
$\sum_{j}\left|{1_{j}}\right\rangle_{C}M_{j-1}W_{j}\left|{A^{-1}\textbf{b}}\right\rangle_{S}$
is a normalised state, hence Eq. (C.61) implies
$\big{|}\sqrt{\widetilde{{\mathcal{N}}}/{\mathcal{N}}}-1\big{|}\in{\mathcal{O}}({\tilde{\epsilon}}\sqrt{m/{\mathcal{N}}})$.
Now we estimate ${\mathcal{N}}$ by using
$\big{|}\sqrt{\widetilde{{\mathcal{N}}}}-\sqrt{{\mathcal{N}}}\big{|}\in{\mathcal{O}}({\tilde{\epsilon}}\sqrt{m})$
and computing
$\displaystyle\widetilde{{\mathcal{N}}}$
$\displaystyle:=\sum_{j=1}^{m}\left|\left|{M_{j-1}W_{j}\frac{A^{-1}}{\eta\,K}\left|{\textbf{b}}\right\rangle}\right|\right|^{2}$
(C.63)
$\displaystyle=\frac{||A^{-1}\left|{\textbf{b}}\right\rangle||^{2}}{\eta^{2}K^{2}}\sum_{j=1}^{m}\left|\left|{M_{j-1}W_{j}\left|{A^{-1}\textbf{b}}\right\rangle}\right|\right|^{2}$
(C.64)
$\displaystyle=\frac{||A^{-1}\left|{\textbf{b}}\right\rangle||^{2}}{\eta^{2}K^{2}}.$
(C.65)
As proven in the main text, $K\in{\mathcal{O}}(\kappa/\eta)$. Thus we have
$\widetilde{{\mathcal{N}}}\in\Omega(1/\kappa^{2})$ and choosing
$\displaystyle{\tilde{\epsilon}}\in{\mathcal{O}}\\!\left(\frac{\varepsilon}{\kappa\,\sqrt{\log\kappa}}\right)$
(C.66)
we also have ${\mathcal{N}}\in\Omega(1/\kappa^{2})$ and
${\tilde{\epsilon}}\sqrt{m/{\mathcal{N}}}\in{\mathcal{O}}(\varepsilon)$. This
condition is sufficient to guarantee that the output state
$\left|{\psi_{\mathrm{succ}}}\right\rangle$ is within $\varepsilon$-distance
from the ideal output.
### C.7 Variable-stopping-time PD-QLS solver: query complexity
We can now upper bound the query complexity of the VTAA algorithm using
Eq. (C.12):
$\displaystyle
Q^{\prime}\in{\mathcal{O}}\\!\left(t_{\mathrm{max}}\sqrt{m}+\frac{t_{\mathrm{avg}}}{\sqrt{p_{\mathrm{succ}}}}\sqrt{m\,\log(t_{\mathrm{max}}/t_{\mathrm{min}})}\right).$
We first compute the ${\mathcal{U}}_{\textbf{b}}$-complexity. Since the non-
amplified algorithm ${\mathcal{A}}$ accesses ${\mathcal{U}}_{\textbf{b}}$ only
in the first step, we have
$t_{\mathrm{min}}=t_{\mathrm{max}}=t_{\mathrm{avg}}=1$. Using
$m\,{\tilde{\epsilon}}^{2}\in{\mathcal{O}}\big{(}\varepsilon^{2}/\kappa^{2}\big{)}$
we get
$\displaystyle p_{\mathrm{succ}}$ $\displaystyle={\mathcal{N}}\ \in\
\Theta\\!\left(\widetilde{{\mathcal{N}}}+m\,{\tilde{\epsilon}}^{2}\right)=\Theta\\!\left(\frac{||A^{-1}\left|{\textbf{b}}\right\rangle||^{2}}{\kappa^{2}}\right)$
(C.67)
and then, using $m=\lceil\log_{2}\kappa\rceil+1$, we have
$\displaystyle
Q[{\mathcal{U}}_{\textbf{b}}]\in{\mathcal{O}}\\!\left(\sqrt{\log(\kappa)}+\frac{\kappa}{\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|}\right).$
(C.68)
We now compute the ${\mathcal{U}}_{B}$-complexity. We have:
$\displaystyle t_{j}$ $\displaystyle=\sum_{i=1}^{j}[\deg(P_{i})+\deg(W_{i})]\
\in\
{\mathcal{O}}\Big{(}\delta_{j}^{-1/2}\log^{1/4}({\tilde{\epsilon}}^{-1})\log({\tilde{\epsilon}}^{-1}\delta_{j}^{-1})\Big{)}$
(C.69) $\displaystyle t_{\mathrm{max}}$
$\displaystyle=\sum_{i=1}^{m}[\deg(P_{i})+\deg(W_{i})]\ \in\
{\mathcal{O}}\Big{(}\sqrt{\kappa/\eta}\,\log^{1/4}({\tilde{\epsilon}}^{-1})\log({\tilde{\epsilon}}^{-1}\kappa/\eta)\Big{)}$
(C.70) $\displaystyle t_{\mathrm{min}}$ $\displaystyle=\hskip
14.22636pt\deg(P_{1})+\deg(W_{1})\ \ \in\
{\mathcal{O}}\Big{(}\sqrt{1/\eta}\,\log^{1/4}({\tilde{\epsilon}}^{-1})\log({\tilde{\epsilon}}^{-1}/\eta)\Big{)}.$
(C.71)
Writing
$\left|{\textbf{b}}\right\rangle\equiv\sum_{\lambda}\beta_{\lambda}\left|{\lambda}\right\rangle$,
where $\left|{\lambda}\right\rangle$ denotes an eigenvector of $B$ relative to
the eigenvalue $\lambda$ and $\sum_{\lambda}|\beta_{\lambda}|^{2}=1$, the
probability of stopping at time $t_{j}$ is
$\displaystyle p_{j}$
$\displaystyle=\big{|}\big{|}\,\Pi_{C_{j}}{\mathcal{A}}_{\leq
j}\left|{0}\right\rangle_{\mathrm{all}}\big{|}\big{|}^{2}=\big{|}\big{|}\,M_{j-1}W_{j}\left|{\textbf{b}}\right\rangle\big{|}\big{|}^{2}$
(C.72)
$\displaystyle=\sum_{\lambda}|M_{j-1}(\lambda)W_{j}(\lambda)|^{2}\,|\beta_{\lambda}|^{2}$
(C.73) $\displaystyle\in\
{\mathcal{O}}\Big{(}{\tilde{\epsilon}}+\sum\nolimits_{\lambda\in(1-4\delta_{j},1-\delta_{j}]}|\beta_{\lambda}|^{2}\Big{)}.$
(C.74)
Here, we have used $W_{j}(\lambda)\leq{\tilde{\epsilon}}$ for $\lambda\geq
1-\delta_{j}$, while for $\lambda<1-2\delta_{j-1}=1-4\delta_{j}$ we have
$\displaystyle
M_{j-1}(\lambda)\leq\sqrt{1-W_{j-1}^{2}(\lambda)}\leq\sqrt{1-(1-{\tilde{\epsilon}})^{2}}\leq\sqrt{2{\tilde{\epsilon}}}\,,$
(C.75)
i.e., the expression $|M_{j-1}(\lambda)W_{j}(\lambda)|^{2}$ is non-negligible
only for $\lambda\in(1-4\delta_{j},1-\delta_{j}]$. Now we estimate the
$\ell_{2}$-average runtime
$t_{\mathrm{avg}}=\sqrt{\sum_{j=1}^{m}p_{j}t_{j}^{2}}$ with
$\displaystyle t_{\mathrm{avg}}^{2}$
$\displaystyle\in{\mathcal{O}}\\!\left(\sum_{j=1}^{m}\Big{[}{\tilde{\epsilon}}+\sum\nolimits_{\lambda\in(1-4\delta_{j},1-\delta_{j}]}|\beta_{\lambda}|^{2}\Big{]}\Big{[}\delta_{j}^{-1/2}\log^{1/4}({\tilde{\epsilon}}^{-1})\log({\tilde{\epsilon}}^{-1}\delta_{j}^{-1})\Big{]}^{2}\right)$
(C.76)
$\displaystyle\subseteq{\mathcal{O}}\\!\left({\tilde{\epsilon}}\,t_{\mathrm{max}}^{2}+\sum\nolimits_{\lambda}\frac{|\beta_{\lambda}|^{2}}{1-\lambda}\,\log^{1/2}({\tilde{\epsilon}}^{-1})\log^{2}({\tilde{\epsilon}}^{-1}\kappa/\eta)\right)$
(C.77)
where we have used that for any positive function $f(\lambda)$
$\displaystyle\sum_{j=1}^{m}\sum_{\lambda\in(1-4\delta_{j},1-\delta_{j}]}f(\lambda)\,\frac{1}{\delta_{j}}\
\in\ \Theta\bigg{(}\sum_{\lambda}f(\lambda)\,\frac{1}{1-\lambda}\bigg{)}.$
(C.78)
We then write
$\sum_{\lambda}\frac{|\beta_{\lambda}|^{2}}{1-\lambda}=\left|\left|{\sum_{\lambda}\frac{\beta_{\lambda}}{\sqrt{I-B}}\left|{\lambda}\right\rangle}\right|\right|^{2}=\left|\left|{(\eta\,A)^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|^{2}$
and obtain
$\displaystyle t_{\mathrm{avg}}$
$\displaystyle\in{\mathcal{O}}\\!\left(\frac{1}{\sqrt{\eta}}\big{|}\big{|}A^{-1/2}\left|{\textbf{b}}\right\rangle\big{|}\big{|}\,\log^{1/4}({\tilde{\epsilon}}^{-1})\log({\tilde{\epsilon}}^{-1}\kappa/\eta)\right)$
(C.79)
where we used
$\left|\left|{A^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|\geq 1$
and thus
$\sqrt{{\tilde{\epsilon}}}\,t_{\mathrm{max}}\in{\mathcal{O}}\left(\frac{1}{\sqrt{\eta}\log^{1/4}\kappa}\log^{1/4}({\tilde{\epsilon}}^{-1})\log({\tilde{\epsilon}}^{-1}\kappa/\eta)\right)$
is a sub-leading contribution to $t_{\mathrm{avg}}$. Then, using
$1/\sqrt{p_{\mathrm{succ}}}\in{\mathcal{O}}\big{(}\kappa/\left|\left|{A^{-1}\left|{\textbf{b}}\right\rangle}\right|\right|\big{)}$
and
$\displaystyle\log(t_{\mathrm{max}}/t_{\mathrm{min}})\ \in\
{\mathcal{O}}\big{(}\log(\kappa)+\log\log({\tilde{\epsilon}}^{-1}\kappa/\eta)\big{)}\
\subseteq\
{\mathcal{O}}\big{(}\log({\tilde{\epsilon}}^{-1}\kappa/\eta)\big{)}$ (C.80)
we finally obtain
$\displaystyle Q[{\mathcal{U}}_{B}]$
$\displaystyle\in{\mathcal{O}}\Bigg{(}\sqrt{\frac{\kappa}{\eta}}\,\underbrace{\sqrt{\kappa}\frac{\left|\left|{A^{-1/2}\left|{\textbf{b}}\right\rangle}\right|\right|}{\big{|}\big{|}A^{-1}\left|{\textbf{b}}\right\rangle\big{|}\big{|}}}_{:=\Gamma_{A,\textbf{b}}}\underbrace{\log^{1/4}({\tilde{\epsilon}}^{-1})\log({\tilde{\epsilon}}^{-1}\kappa/\eta)\sqrt{\log(\kappa)\log({\tilde{\epsilon}}^{-1}\kappa/\eta)}}_{\mathrm{polylog}(\kappa,{\tilde{\epsilon}}^{-1},\eta^{-1})}\Bigg{)}.$
(C.81)
## Appendix D Proof that the Sum-QLS problem is BQP-hard
In this Appendix we prove that the Sum-QLS problem is $\mathsf{BQP}$-hard,
hence no efficient classical algorithm can solve it (unless
$\mathsf{BPP}=\mathsf{BQP}$). The initial part of the proof follows the
reduction presented by HHL: it is possible to construct a QLS problem
$M\textbf{x}=\textbf{e}_{1}$ (where $\textbf{e}_{1}$ is a canonical vector
with a one in the first position) which is BQP-hard for a class of sparse
indefinite matrices $M$ that are easily constructible. This can be converted
to the equivalent QLS problem $M\textbf{x}=M^{\dagger}\textbf{e}_{1}$, where
the matrix $A=M^{\dagger}M$ is by construction PD and
$\left|{M^{\dagger}\textbf{e}_{1}}\right\rangle$ is easy to prepare. What
remains to be proven is that $A$ admits an explicit Sum-QLS structure, i.e.
that one can construct a decomposition as a sum of PD local Hamiltonian terms
and that the overlap with the support (bounded by the parameter $\gamma$)
scales polynomially.
As a preliminary, we introduce two useful definitions: the specific
quantum circuit model (universal for quantum computation) that we aim to
“simulate” as a Sum-QLS, and the Feynman-Kitaev clock construction.
##### Quantum circuit model:
We describe an arbitrary quantum computation $\mathcal{C}$ on $n$ qubits as
the application of $T$ elementary gates $\\{U_{0},U_{1}\ldots,U_{T-1}\\}$ to
an initial state $\left|{0}\right\rangle^{\otimes n}$ and the output of the
computation is the quantum state $U_{T-1}\cdots
U_{1}U_{0}\left|{0}\right\rangle^{\otimes n}$. We assume that the elementary
gate set consists of gates acting either on one or two qubits, e.g. arbitrary
single-qubit rotations and controlled-NOTs. Hence, the $t$-th elementary gate is
described by a unitary $U_{t}\in\mathds{C}^{N\times N}$ with $N=2^{n}$ which
can be written as $U_{t}=u_{t}\otimes I_{\neg{\mathcal{S}}_{t}}$, where
$u_{t}$ acts on a subset ${\mathcal{S}}_{t}$ of the qubits and is either a
$2\times 2$ ($|{\mathcal{S}}_{t}|=1$) or a $4\times 4$
($|{\mathcal{S}}_{t}|=2)$ unitary matrix.
##### Feynman-Kitaev clock:
We introduce the following unitary matrix acting on
$\mathds{C}^{3T}\otimes\mathds{C}^{n}$, which is based on the _Feynman-Kitaev
clock_ construction (see e.g. [55] and references therein):
$\displaystyle U:=$
$\displaystyle\sum_{t=0}^{3T-1}\left|{t\\!+\\!1}\right\rangle\\!\left\langle{t}\right|_{c}\otimes
U_{t}$ (D.1) $\displaystyle=$ $\displaystyle\
\sum_{t=0}^{T-1}\,\left|{t\\!+\\!1}\right\rangle\\!\left\langle{t}\right|_{c}\otimes
U_{t}\,+\,\left|{T\\!+\\!t\\!+\\!1}\right\rangle\\!\left\langle{T\\!+\\!t}\right|_{c}\otimes
I\,+\,\left|{2T\\!+\\!t\\!+\\!1}\right\rangle\\!\left\langle{2T\\!+\\!t}\right|_{c}\otimes
U_{T-t-1}^{\dagger}$ (D.2)
where we have defined $U_{t}=I$ for $T\leq t\leq 2T-1$ and
$U_{t}=U^{\dagger}_{3T-t-1}$ for $2T\leq t\leq 3T-1$ and the sums in the clock
register (denoted by the subscript $c$) are taken modulo $3T$.
###### Proposition 29 (Sum-QLS is BQP-hard).
Let $\mathcal{C}$ be a given $n$-qubit $T$-gate quantum circuit. Then, there
is a Sum-QLS problem (as in Definition 14) with
1. 1.
$n^{\prime}\in{\mathcal{O}}(n+\log T)$
[$N^{\prime}\in{\mathcal{O}}(2^{n^{\prime}})$ is the size of $A$]
2. 2.
$\kappa\in{\mathcal{O}}(T^{2})$ [condition number]
3. 3.
$J\in{\mathcal{O}}(T)$ [number of PD Hamiltonian terms]
4. 4.
$s\in{\mathcal{O}}(\log T)$ [locality of the Hamiltonian terms]
5. 5.
$d_{\textbf{b}}\in{\mathcal{O}}(1)$ [sparsity of the known-term vector b]
6. 6.
$\gamma\in\Omega(T^{-2})$ [overlap with the support, Eq. (112)]
that is equivalent to $\mathcal{C}$, i.e., solving this Sum-QLS problem (up to
a small constant error $\varepsilon$) allows one to obtain the output state of
$\mathcal{C}$ with constant probability and constant precision. According to
Eq. (113), the algorithm presented in Section 5 can solve this Sum-QLS problem
with a gate complexity in ${\mathcal{O}}\big{(}\mathrm{poly}(n,T)\big{)}$.
When $T\in{\mathcal{O}}(\mathrm{poly}\,n)$, the gate complexity of the Sum-QLS
solver is also in ${\mathcal{O}}(\mathrm{poly}\,n)$, i.e., the Sum-QLS problem is
BQP-hard. Moreover, Sum-QLS${}_{\mathrm{poly}}$, defined as the subclass of
problems where the six parameters listed above all scale polynomially in $n$,
is BQP-complete.
###### Proof.
Starting from the Feynman-Kitaev unitary $U$ encoding a quantum circuit
$\mathcal{C}$ as in Eq. (D.1), we introduce the matrix
$\displaystyle M:=I-U\,e^{-1/T}$ (D.3)
which can be written in block form (where each block has size $2^{n}\times
2^{n}$) as
$\displaystyle M=$
$\displaystyle\left(\begin{array}[]{ccccc}\ddots&~{}~{}\ddots{}{}{}{}&0&&\\\
\ddots&I&-U_{t}e^{-1/T}&0&\\\ &0&I&-U_{t+1}e^{-1/T}&\\\
&&0&I&~{}~{}~{}\ddots{}{}{}\\\ -U_{3T-1}e^{-1/T}&&&\ddots&\ddots\\\
\end{array}\right)$ (D.9)
and moreover we introduce:
$\displaystyle A$
$\displaystyle:=M^{\dagger}M=(1+e^{-2/T})I-e^{-1/T}(U+U^{\dagger})$ (D.10)
where $A$ is by construction Hermitian positive definite and the singular
values of $M$ are the square root of the eigenvalues of $A$. From the
eigenvalue inequality $-2\leq\lambda(U+U^{\dagger})\leq 2$ we obtain
$\displaystyle\begin{cases}\lambda_{\max}(A)\leq 4\\\
\lambda_{\min}(A)\geq\big{(}1-e^{-1/T}\big{)}^{2}\geq\frac{(1-e^{-1})^{2}}{T^{2}}\end{cases}\Longrightarrow\quad\kappa(A)\leq\frac{4}{(1-e^{-1})^{2}}\,T^{2}$
(D.11)
hence $\kappa(M)=\sqrt{\kappa(A)}\in{\mathcal{O}}(T)$. Using the identity
$U^{3T}=I$, the inverse of $M$ can be expanded as
$\displaystyle M^{-1}$
$\displaystyle=\sum_{t^{\prime}=0}^{\infty}U^{t^{\prime}}e^{-t^{\prime}/T}=\frac{e^{3}}{e^{3}-1}\sum_{t^{\prime}=0}^{3T-1}U^{t^{\prime}}e^{-t^{\prime}/T}$
(D.12)
$\displaystyle=\frac{e^{3}}{e^{3}-1}\sum_{t^{\prime}=0}^{3T-1}e^{-t^{\prime}/T}\sum_{t=0}^{3T-1}\left|{t\\!+\\!t^{\prime}}\right\rangle\\!\left\langle{t}\right|\otimes
U_{[t:t+t^{\prime}]}$ (D.13)
where the notation $U_{[a:b]}$ indicates the ordered product of unitary
operators from $a$ to $b-1$, $U_{[a:b]}:=U_{b-1}U_{b-2}\cdots U_{a}$, with
$U_{[t:t]}=I$.
As already done by HHL in Ref. [1], we consider the QLS problem
$\displaystyle M\textbf{x}=\textbf{e}_{1}$ (D.14)
where $\textbf{e}_{1}\in\mathds{C}^{3T2^{n}}$ is the unit vector with a one in
the first position (i.e., it is the vector representation of the state
$\left|{0}\right\rangle_{c}\left|{0}\right\rangle^{\otimes n}$). Using the
state
$\left|{\textbf{x}}\right\rangle=\left|{M^{-1}\textbf{e}_{1}}\right\rangle$
the output state of the quantum computation is obtained whenever a
measurement of the clock register returns a time $t\in[T:2T-1]$, which
happens with probability $e^{-2}/(1+e^{-2}+e^{-4})\geq 0.11$. Therefore, any
quantum circuit $\mathcal{C}$ having $T$ gates can be restated as a QLS and
solved (using e.g. the HHL algorithm) with a gate complexity scaling as
$\mathrm{poly}(n,T)$. If the circuit $\mathcal{C}$ has
$T\in{\mathcal{O}}(\mathrm{poly}\,n)$, then also the QLS solver has gate
complexity in ${\mathcal{O}}(\mathrm{poly}\,n)$, showing that the QLS problem
is BQP-hard.
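Since the squared amplitude of clock value $t$ in $M^{-1}\textbf{e}_{1}$ is proportional to $e^{-2t/T}$, the success probability is a ratio of two geometric sums and is in fact independent of $T$, as the following check confirms:

```python
import numpy as np

for T in (1, 2, 5, 50):
    q = np.exp(-2.0 / T)               # squared amplitude of clock value t scales as q^t
    w = q ** np.arange(3 * T)
    p = w[T:2 * T].sum() / w.sum()     # probability of measuring t in [T : 2T-1]
    assert np.isclose(p, np.exp(-2) / (1 + np.exp(-2) + np.exp(-4)))
    assert p >= 0.11
```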
We now consider the PD linear system
$\displaystyle A\textbf{x}=\textbf{b}\qquad\mathrm{for}\ A=M^{\dagger}M\,,\
\;\textbf{b}$ $\displaystyle=M^{\dagger}\textbf{e}_{1}$ (D.15)
which is equivalent to the system in Eq. (D.14) and can be cast as a Sum-QLS
problem.
Notice, first, that b is a sparse vector, with $d_{\textbf{b}}\leq 3$: if the
elementary gate set consists of controlled-NOTs and single-qubit rotations, then
the first row of $M$ has at most $3$ non-zero entries.
The matrix $A$ has size $3T2^{n}\times 3T2^{n}$ and can be written as
$\displaystyle A=\sum_{t=0}^{3T-1}H_{(t)}\;.$ (D.16)
where each Hamiltonian term $H_{(t)}$ is positive definite and only acts on
the clock register plus either one or two extra qubits (i.e., the same number
of qubits on which the gates from the elementary set act). We start writing
$A$ in block form
$\displaystyle A=\left(\begin{array}[]{ccccc}\ddots&\ddots&0&&\\\
\ddots&I(1+e^{-2/T})&-U_{t}e^{-1/T}&0&\\\
~{}~{}~{}0{}{}{}&-U_{t}^{\dagger}e^{-1/T}&I(1+e^{-2/T})&-U_{t+1}e^{-1/T}&~{}~{}~{}0{}{}{}\\\
&0&-U_{t+1}^{\dagger}e^{-1/T}&I(1+e^{-2/T})&\ddots\\\ &&0&\ddots&\ddots\\\
\end{array}\right)$ (D.22)
and then introduce, for each $t\in\\{0,\ldots,3T-1\\}$, the Hamiltonian term
$\displaystyle
H_{(t)}\;=\;I\,\delta\;+\;e^{-1/T}\left(\begin{array}[]{c|cc|c}~{}0{}&0&0&~{}0{}\\\
\hline\cr 0&I&-U_{t}&0\\\ 0&-U_{t}^{\dagger}&I&0\\\ \hline\cr 0&0&0&0\\\
\end{array}\right)\begin{array}[]{l}\hskip 1.0pt\\}~{}\mathrm{size}~{}t\cdot
2^{n}\\\\[6.25958pt] \Big{\\}}~{}\mathrm{size}~{}2\cdot 2^{n}\\\\[7.11317pt]
\hskip 1.0pt\\}~{}\mathrm{size}~{}(3T-t-2)\cdot 2^{n}\end{array}$ (D.30)
where we have defined
$\displaystyle\delta\;:=\;\frac{1}{3T}\big{(}1+e^{-2/T}-2e^{-1/T}\big{)}\;\geq\;\frac{(1-e^{-1})^{2}}{3T^{3}}\;\geq\;\frac{1}{7.51\,T^{3}}\;,$
(D.31)
and thus the decomposition in Eq. (D.16) holds. Note that the matrix
$\left(\begin{smallmatrix}I&-U_{t}\\\
-U_{t}^{\dagger}&I\end{smallmatrix}\right)$ has eigenvalues $\lambda=0$ and
$\lambda=2$, hence is positive semi-definite, while each $H_{(t)}$ has the
smallest eigenvalue equal to $\delta$. Moreover, each $H_{(t)}$ is a local
Hamiltonian. Specifically, we have $H_{(t)}=h_{(t)}\otimes
I_{\neg{\mathcal{S}}_{t}}$, where $h_{(t)}$ acts on a set ${\mathcal{S}}_{t}$
consisting of $s=\lceil\log_{2}3T\rceil+q$ qubits, where $q=1$ if $U_{t}$
corresponds to a single qubit gate and $q=2$ if it corresponds to a two-qubit
gate. Explicitly, we have
$\displaystyle h_{(t)}\;=\;\left(\begin{array}{c|cc|c}I\,\delta&0&0&0\\ \hline 0&I(e^{-1/T}+\delta)&-u_{t}e^{-1/T}&0\\ 0&-u_{t}^{\dagger}e^{-1/T}&I(e^{-1/T}+\delta)&0\\ \hline 0&0&0&I\,\delta\end{array}\right)$ (D.39)
with diagonal blocks of sizes $t\cdot 2^{q}$, $2\cdot 2^{q}$ and $(3T-t-2)\cdot 2^{q}$, respectively,
where each $u_{t}$ is a matrix specifying the $t$-th elementary quantum gate,
with $U_{t}=u_{t}\otimes I_{\neg{\mathcal{S}}_{t}}$.
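The structure of $h_{(t)}$ can be checked numerically. The following sketch (our illustration, not part of the paper) builds the dense matrix of Eq. (D.39) for a hypothetical single-qubit Hadamard gate and confirms that the smallest eigenvalue equals $\delta$, so each term is positive definite as claimed:

```python
import numpy as np

def h_t(u, t, T):
    """Build the local term h_(t) of Eq. (D.39) for an elementary gate u.

    u is a (2^q, 2^q) unitary; the clock register has 3T slots, so the
    matrix acts on a space of dimension 3T * 2^q.
    """
    q = int(round(np.log2(u.shape[0])))
    dim = 3 * T * 2 ** q
    delta = (1 + np.exp(-2 / T) - 2 * np.exp(-1 / T)) / (3 * T)
    e = np.exp(-1 / T)
    h = delta * np.eye(dim, dtype=complex)
    a = t * 2 ** q                       # start of the clock-t block
    b = a + 2 ** q                       # start of the clock-(t+1) block
    h[a:b, a:b] += e * np.eye(2 ** q)    # diagonal of the middle 2x2 block
    h[b:b + 2 ** q, b:b + 2 ** q] += e * np.eye(2 ** q)
    h[a:b, b:b + 2 ** q] -= e * u        # off-diagonal coupling through u
    h[b:b + 2 ** q, a:b] -= e * u.conj().T
    return h, delta

# Hypothetical example: Hadamard gate applied at clock time t = 1, with T = 2.
u = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
h, delta = h_t(u, t=1, T=2)
eigs = np.linalg.eigvalsh(h)
```

The spectrum consists of $\delta$ (from the outer blocks and the null space of the middle block) and $\delta+2e^{-1/T}$, matching the eigenvalue discussion above.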
We now need to estimate a lower bound $\gamma$ for the overlap parameter as in
Eq. (112). We have
$\displaystyle\langle\textbf{b}|A^{-1}|\textbf{b}\rangle$ $\displaystyle=\frac{\langle M^{\dagger}\textbf{e}_{1}|M^{-1}M^{-\dagger}|M^{\dagger}\textbf{e}_{1}\rangle}{\|M^{\dagger}\textbf{e}_{1}\|^{2}}$ (D.40)
$\displaystyle=\frac{\langle\textbf{e}_{1}|MM^{-1}M^{-\dagger}M^{\dagger}|\textbf{e}_{1}\rangle}{\|M^{\dagger}\textbf{e}_{1}\|^{2}}$ (D.41)
$\displaystyle=\frac{1}{\|M^{\dagger}\textbf{e}_{1}\|^{2}}=\frac{1}{1+e^{-2/T}}\geq\frac{1}{2}$ (D.42)
while from (D.30) and (D.31) we obtain
$\displaystyle\langle\textbf{b}|H_{(j)}^{-1}|\textbf{b}\rangle\,\leq\,\delta^{-1}\,\leq\,7.51\,T^{3}\,.$ (D.43)
Using $J=3T$ we finally get
$\displaystyle\frac{1}{J^{2}}\,\frac{\sum_{j=1}^{J}\langle\textbf{b}|H_{(j)}^{-1}|\textbf{b}\rangle}{\langle\textbf{b}|A^{-1}|\textbf{b}\rangle}\,\leq\,\frac{1}{(3T)^{2}}\,\frac{3T\times 7.51\,T^{3}}{1/2}\,\leq\,5.01\,T^{2}\,=:\,\frac{1}{\gamma}\,.$ (D.44)
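The constants in the bounds above can be verified numerically. A quick check (our addition, not part of the paper), using $\langle\textbf{b}|H_{(j)}^{-1}|\textbf{b}\rangle\leq\delta^{-1}$ and $\langle\textbf{b}|A^{-1}|\textbf{b}\rangle\geq 1/2$:

```python
import numpy as np

def overlap_bound_ok(T):
    """Numerically verify the chain of bounds behind Eqs. (D.31) and (D.44)."""
    # delta as defined in Eq. (D.31), with its claimed lower bound 1/(7.51 T^3).
    delta = (1 + np.exp(-2 / T) - 2 * np.exp(-1 / T)) / (3 * T)
    if delta < 1 / (7.51 * T ** 3):
        return False
    # Eq. (D.44): J = 3T terms, each <b|H_j^-1|b> <= 1/delta, <b|A^-1|b> >= 1/2.
    inv_gamma = (1 / (3 * T) ** 2) * (3 * T / delta) / (1 / 2)
    return inv_gamma <= 5.01 * T ** 2

checks = [overlap_bound_ok(T) for T in range(1, 100)]
```

Note that $T=1$ is the tightest case: there $\delta\approx 0.13319$ against the bound $1/7.51\approx 0.13316$.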
In conclusion, given the sequence of gates $u_{t}$ and the set of qubits
${\mathcal{S}}_{t}$ to which they are applied in the quantum circuit
$\mathcal{C}$, we can efficiently compute the values and the positions of the
non-zero entries of $H_{(t)}$, as given in Eq. (D.30). This then explicitly
describes $A$ as a sum of PD local Hamiltonian terms, as required by the
definition of the Sum-QLS problem. Going through the derivation, we see that
the relevant parameters scale as stated in points (1) to (6) in the statement
of the Proposition. Using Eq. (113), it follows that the algorithm presented
in Section 5 solves this problem with a gate complexity in
${\mathcal{O}}\big{(}\mathrm{poly}(n,T)\big{)}={\mathcal{O}}(\mathrm{poly}\,n)$,
in the case where the original circuit is itself a polynomial time quantum
circuit (i.e. $T\in{\mathcal{O}}(\mathrm{poly}\,n)$). Finally, note that the $\varepsilon$-error in the precision of the Sum-QLS solver is amplified by at most a constant factor when post-selecting the clock register to show a time $t\in[T:2T-1]$. Thus, it is possible to obtain the output of $\mathcal{C}$ with an error that is bounded by a constant, as required by the definition of the BQP class. This proves that the Sum-QLS problem is BQP-hard; at the same time, it proves that Sum-QLS$_{\mathrm{poly}}$, the sub-class of problems having polynomially scaling parameters, is solvable in quantum polynomial time and is thus BQP-complete.
∎
# Robust Extrinsic Regression Analysis for Manifold Valued Data
Hwiyoung Lee
<EMAIL_ADDRESS>
(Department of Statistics, Florida State University
January 28, 2021)
###### Abstract
Recently, there has been a growing need for analyzing data on manifolds owing to their important role in diverse fields of science and engineering. To date, however, only a few works in the manifold-valued data analysis literature have addressed the robustness of estimation against noise, outliers, and other sources of perturbation. In this regard, we introduce a novel extrinsic framework for analyzing manifold-valued data in a robust manner. First, by extending the notion of the geometric median, we propose a new robust location parameter on manifolds, which we call the extrinsic median. A robust extrinsic regression method is also developed by incorporating the conditional extrinsic median into the classical local polynomial regression method. We present Weiszfeld's algorithm for implementing the proposed methods. The promising performance of our approach against existing methods is illustrated through simulation studies.
Key words: Robust statistics, Extrinsic median, Nonparametric regression,
Riemannian manifolds
## 1 Introduction
Over the past few decades, analyzing data taking values in non-Euclidean spaces, mostly nonlinear manifolds, has attracted increasing attention in a wide range of applications, because exploiting the geometric properties of the underlying data space allows richer and more accurate statistical inference. Examples of such data types, which especially lie on
Riemannian manifolds, include directions of points on a sphere (Fisher et al.,
1987; Mardia and Jupp, 1999), shapes of configurations extracted from images
(Bhattacharya and Bhattacharya, 2012), data sitting on Stiefel and Grassmann
manifolds (Chikuse, 2003), symmetric positive definite matrices arising as
observations in diffusion tensor magnetic resonance imaging (DT-MRI) (Zhu et
al., 2009; Yuan et al., 2012), and other types of medical images.
In common with the traditional Euclidean case, statistical inference on the
aforementioned manifolds begins by defining the notion of the mean on a
certain metric space where data resides. Suppose a random object $\mathbb{X}$
is defined on a metric space $(\mathcal{M},\rho)$, and let
$\mathcal{Q}(\cdot)$ be the probability measure of $\mathbb{X}$. Then one may
consider adopting the traditional definition of the mean to generalize the
notion of the mean on an arbitrary metric space, i.e.,
$\mathbb{E}(\mathbb{X})=\int_{\mathcal{M}}\mathbb{x}\mathcal{Q}(d\mathbb{x})$.
Unfortunately, this attempt at generalization is not directly applicable in the non-Euclidean setting, because the integrand is not vector valued and the integral is therefore not well defined on a general metric space. The conventional definition of the mean thus needs to be adapted so that it can go beyond Euclidean spaces. The most commonly used notion in the literature is the
Fréchet mean (Fréchet, 1948), in which the mean is defined as a minimizer of
the real valued function defined on metric spaces. To be more specific, for
any $\mathbb{q}\in\mathcal{M}$, consider the Fréchet function of the following
form
$\displaystyle\mathcal{F}:\mathcal{M}\rightarrow\mathbb{R},\qquad\mathbb{q}\mapsto\mathcal{F}(\mathbb{q}):=\mathbb{E}\left(\rho^{2}(\mathbb{X},\mathbb{q})\right)=\int_{\mathcal{M}}\rho^{2}(\mathbb{x},\mathbb{q})\mathcal{Q}(d\mathbb{x}),$ (1)
where $\rho$ denotes generic metric on $\mathcal{M}$. Then the Fréchet mean is
defined as the minimizer of the Fréchet function above, i.e.,
$\displaystyle\boldsymbol{\mu}_{F}=\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathcal{M}}\int_{\mathcal{M}}\rho^{2}(\mathbb{x},\mathbb{q})\mathcal{Q}(d\mathbb{x}).$
(2)
Similarly, for given observations $\mathbb{x}_{1},\cdots,\mathbb{x}_{n}\in\mathcal{M}$, consisting of $n$ independent realizations of $\mathbb{X}$, the sample Fréchet mean is defined as
$\overline{\mathbb{X}}=\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathcal{M}}\sum_{i=1}^{n}\rho^{2}(\mathbb{x}_{i},\mathbb{q})$.
Given the relation between the mean and the variance, the above generalization of the mean makes intuitive sense, because it is analogous to the characterization of the Euclidean mean as the minimizer of the sum of squared deviations. In this regard, the Fréchet function itself is commonly referred to as the Fréchet variance.
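For a concrete instance of the sample Fréchet mean, consider the unit circle with the arc-length (geodesic) metric. The following sketch (our illustration with synthetic data, not from the paper) approximates the minimizer of the Fréchet function by a simple grid search:

```python
import numpy as np

def geodesic_dist_circle(a, b):
    """Arc-length distance between angles a and b on the unit circle."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def sample_frechet_mean_circle(angles, grid_size=10000):
    """argmin_q sum_i rho^2(x_i, q), approximated over a dense angular grid."""
    grid = np.linspace(0, 2 * np.pi, grid_size, endpoint=False)
    costs = np.array([np.sum(geodesic_dist_circle(angles, q) ** 2) for q in grid])
    return grid[np.argmin(costs)]

# Synthetic angles concentrated around pi/4.
rng = np.random.default_rng(0)
angles = (np.pi / 4 + 0.1 * rng.standard_normal(50)) % (2 * np.pi)
mu = sample_frechet_mean_circle(angles)
```

For data this concentrated the Fréchet mean is close to the cluster center; in practice gradient-based optimizers replace the grid search.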
But, first and foremost, what needs to be emphasized is that a metric $\rho$
is not unique for any particular manifold, and there are many possible
choices. Regarding this issue, two different types of distance functions have
been typically considered in the literature of manifold valued data analysis.
The first possible choice is the intrinsic distance, that is the geodesic
distance associated with the Riemannian structure $\mathrm{\mathbf{g}}$ on
$\mathcal{M}$. The other type of distance is the Euclidean distance induced by
the embedding $J:\mathcal{M}\rightarrow E^{d}$, which is also referred to as
the extrinsic distance. The former and the latter distances lead to the
intrinsic and the extrinsic data analysis, respectively.
Most of the previous works on analyzing manifold valued data have mainly
focused on developing statistical methods based on variants of the Fréchet
mean. For instance, by introducing the conditional Fréchet mean, Petersen and
Müller (2019) developed a regression model having a random object in a metric
space as a response variable. However, it is well known that the least squares
based methods are severely degraded when there exist outliers in the data or the underlying data distribution is heavy tailed. Thus, the lack of statistical robustness, incurred by the squared distance involved in (2), is apparent in the Fréchet mean as well. Whereas in the Euclidean setting considerable effort has been devoted to improving the robustness of estimators (see Huber, 1964; Huber and Ronchetti, 2009; Hampel et al., 1986, and references therein for a review), far less attention has been paid to manifolds. Indeed, one simple way to enhance the robustness of an estimator is
replacing the squared distance by the unsquared distance. In the case of
Euclidean space, where
$\mathcal{M}=\mathbb{R}^{d},\rho(\mathbb{x},\mathbb{x}^{\prime})=\|\mathbb{x}-\mathbb{x}^{\prime}\|$,
this approach yields the geometric median (Haldane, 1948) as a special case. Along
the same line, the intrinsic geometric median on Riemannian manifolds,
obtained by minimizing the Fréchet function associated with the unsquared
geodesic distance, has been proposed by Fletcher et al. (2009). Motivated by
the success of the intrinsic geometric median, the primary contribution of
this paper is to develop a novel robust location parameter within an extrinsic
framework, which entails a computationally efficient algorithm. Moreover,
adopting the concept of the classical local polynomial modeling, we implement
the robust extrinsic local regression model for manifold valued response and
Euclidean predictor. This can be accomplished by extending the concept of the
proposed extrinsic median to the notion of the conditional extrinsic median.
The rest of this paper is organized as follows. The proposed extrinsic median
is introduced in Section 2, along with a brief review of the extrinsic
framework for manifold data analysis. Application of the extrinsic median to
two different manifolds is also demonstrated in Section 3. In Section 4, we
develop the robust extrinsic local regression (RELR) model, with algorithmic
details. In Section 5, the proposed RELR is implemented in the Kendall’s
planar shape space and its promising properties are illustrated through
simulation studies. Finally, we conclude the paper in Section 6 with a short
discussion and possible directions for the future study.
## 2 Extrinsic Median
In this section, we develop the extrinsic median which provides a
statistically robust and computationally efficient way of estimating the
center point of the data residing on manifolds. Before describing our method,
it is useful to begin with a brief review of the extrinsic framework for
manifold valued data analysis on which the proposed scheme is based, and the
motivation that initiated this study.
### 2.1 Extrinsic framework on manifold valued data analysis
The essential idea behind the extrinsic analysis is that any $d$-dimensional
manifold $\mathcal{M}$ can be embedded in a higher-dimensional Euclidean space
$\mathbb{R}^{D}$, where $d<D$ (Whitney, 1944) via an embedding $J$. Thus, to
understand the extrinsic approach, it is necessary to recall the definition of
embedding $J$. First consider a differentiable map
$J:\mathcal{M}\rightarrow\mathbb{R}^{D}$, whose differential
$d_{\mathbb{p}}J:T_{\mathbb{p}}\mathcal{M}\rightarrow
$T_{J({\mathbb{p}})}\mathbb{R}^{D}$ is one-to-one, where
$T_{\mathbb{p}}\mathcal{M}$ and $T_{J({\mathbb{p}})}\mathbb{R}^{D}$ denote the
tangent space at $\mathbb{p}\in\mathcal{M}$ and the tangent space at
$J(\mathbb{p})$ on $\mathbb{R}^{D}$, respectively. Note that the class of
differentiable maps specified above is called an immersion. A one-to-one immersion is called an embedding if it is a homeomorphism from $\mathcal{M}$ onto $J(\mathcal{M})$ with the induced topology. Also of note is
that the embedding is unfortunately not unique in general, and not all choices
of embedding lead to a good estimation result. In this context, the extrinsic
approach has been carried out under the premise that the selected embedding
preserves intrinsic geometry of the original manifold. Therefore, the
embedding satisfying the following condition is typically preferred within
extrinsic framework. For a Lie group $G$ acting on $\mathcal{M}$, the
embedding $J:\mathcal{M}\rightarrow\mathbb{R}^{D}$ is referred to as a $G$-equivariant embedding if there exists a group homomorphism
$\phi:G\rightarrow\operatorname{GL}_{D}(\mathbb{R}\ \text{or}\ \mathbb{C})$
satisfying
$J(g\mathbb{p})=\phi(g)J(\mathbb{p}),\forall\mathbb{p}\in\mathcal{M},g\in G$,
where $\operatorname{GL}_{D}(\mathbb{R}\ \text{or}\ \mathbb{C})$ denotes the
general linear group which is the group of $D\times D$ invertible real, or
complex matrices. This definition indicates that the group action of $G$ can
be recovered in the embedded space $J(\mathcal{M})$ through $\phi$. Therefore, a great deal of the geometric structure of the manifold is preserved in the embedded Euclidean space via an equivariant embedding. Moreover, the extrinsic distance between two points on the manifold can be computed straightforwardly in the embedded space via the Euclidean norm.
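As a minimal illustration of this definition (our example, not from the paper): the inclusion $J:S^{2}\rightarrow\mathbb{R}^{3}$ is equivariant under the rotation group $G=\mathrm{SO}(3)$ with $\phi$ the identity homomorphism, i.e. $J(g\mathbb{p})=\phi(g)J(\mathbb{p})$, which can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation(rng):
    """A random element of SO(3) via QR decomposition of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q *= np.sign(np.diag(R))       # fix column signs to make the factor unique
    if np.linalg.det(Q) < 0:       # enforce det = +1 (a proper rotation)
        Q[:, 0] = -Q[:, 0]
    return Q

J = lambda p: p                    # inclusion embedding S^2 -> R^3
phi = lambda g: g                  # identity group homomorphism

p = rng.standard_normal(3)
p /= np.linalg.norm(p)             # a random point on the sphere
g = random_rotation(rng)
lhs = J(g @ p)                     # act on the manifold, then embed
rhs = phi(g) @ J(p)                # embed, then act in the embedded space
```

Here equivariance holds trivially by construction; for non-trivial embeddings (e.g. the Veronese-Whitney embedding of shape spaces) $\phi$ is a genuine matrix representation of $G$.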
Considering all the notions described above, the extrinsic mean
$\boldsymbol{\mu}_{E}$ on a manifold is defined as the minimizer of the
Fréchet function associated with the extrinsic distance via an embedding
$J:\mathcal{M}\rightarrow\mathbb{R}^{D}$
$\displaystyle\boldsymbol{\mu}_{E}=\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathcal{M}}\int_{\mathcal{M}}\|J(\mathbb{x})-J(\mathbb{q})\|^{2}\mathcal{Q}(d\mathbb{x}).$
(3)
As compared to the intrinsic mean, the use of the extrinsic approach has
several advantages, including (1) computational efficiency (Bhattacharya et
al., 2011), (2) milder conditions for existence and uniqueness of the
solution. Moreover, the sample extrinsic mean often has a closed form
solution. Thus, we here derive the extrinsic mean in an explicit form. To do
this the following definition which gives the uniqueness condition of the
extrinsic mean should be noted first. A point $\mathbb{y}\in\mathbb{R}^{D}$ is
said to be $J$-nonfocal if there exists a unique point
$\mathbb{p}\in\mathcal{M}$ satisfying
$\inf_{\mathbb{x}\in\mathcal{M}}\|\mathbb{y}-J(\mathbb{x})\|=\|\mathbb{y}-J(\mathbb{p})\|$.
Then we let
$\boldsymbol{\mu}=\int_{\mathbb{R}^{D}}\mathbb{u}\widetilde{\mathcal{Q}}(d\mathbb{u})$
be the mean vector of the induced probability measure
$\widetilde{\mathcal{Q}}=\mathcal{Q}\circ J^{-1}$, which is the image of
$\mathcal{Q}$ in $\mathbb{R}^{D}$. Then the Fréchet function associated with
the extrinsic distance, the right hand side of (3), also can be written as
$\displaystyle\mathcal{F}(\mathbb{q})=\|J(\mathbb{q})-\boldsymbol{\mu}\|^{2}+\int_{\mathbb{R}^{D}}\|\mathbb{x}-\boldsymbol{\mu}\|^{2}\widetilde{\mathcal{Q}}(d\mathbb{x}).$
(4)
Hence, we have
$\inf_{\mathbb{q}\in\mathcal{M}}\mathcal{F}(\mathbb{q})=\inf_{J(\mathbb{q})\in\widetilde{\mathcal{M}}}\|J(\mathbb{q})-\boldsymbol{\mu}\|^{2}+\int_{\mathbb{R}^{D}}\|\mathbb{x}-\boldsymbol{\mu}\|^{2}\widetilde{\mathcal{Q}}(d\mathbb{x})$,
where $\widetilde{\mathcal{M}}=J(\mathcal{M})$ denotes the image of the
embedding. This indicates that the set of points $\mathbb{x}\in\mathcal{M}$ satisfying
$\inf_{J(\mathbb{q})\in\widetilde{\mathcal{M}}}\|J(\mathbb{q})-\boldsymbol{\mu}\|=\|J(\mathbb{x})-\boldsymbol{\mu}\|$
constitutes the extrinsic mean set. Since (4) is minimized on $\widetilde{\mathcal{M}}$ by $J(\mathbb{q})=\mathcal{P}(\boldsymbol{\mu})$, where $\mathcal{P}:\mathbb{R}^{D}\rightarrow\widetilde{\mathcal{M}}$ is the projection map defined, for all $\mathbb{u}^{\prime}\in\widetilde{\mathcal{M}}$, by
$\mathcal{P}(\mathbb{y})=\{\mathbb{u}\in\widetilde{\mathcal{M}}:\|\mathbb{u}-\mathbb{y}\|\leq\|\mathbb{u}^{\prime}-\mathbb{y}\|\}$,
the extrinsic mean uniquely exists if and only if the mean vector
$\boldsymbol{\mu}$ is a $J$-nonfocal point. In that case, the extrinsic mean
is obtained by taking the inverse of the embedding, i.e.,
$\boldsymbol{\mu}_{E}=J^{-1}(\mathcal{P}(\boldsymbol{\mu}))$. Following from
the above, the sample extrinsic mean is obtained in a straightforward manner.
Suppose we observe $\mathbb{x}_{1},\dots,\mathbb{x}_{n}\in\mathcal{M}$,
consisting of independent and identically distributed copies of $\mathbb{X}$,
then the sample extrinsic mean is given by
$\overline{\mathbb{X}}_{E}=J^{-1}\{\mathcal{P}(\overline{J(\mathbb{X})})\},$
where $\overline{J(\mathbb{X})}=\sum_{i=1}^{n}J(\mathbb{x}_{i})/n$.
Theoretical properties of the sample extrinsic mean, including asymptotic
distribution, consistency, and the uniqueness conditions are well established
in Bhattacharya and Patrangenaru (2003, 2005).
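As a concrete sketch of this closed form (our illustration with synthetic data, not from the paper), take $\mathcal{M}=S^{2}$ with the inclusion embedding, for which $\mathcal{P}$ is radial projection and $J^{-1}$ is trivial:

```python
import numpy as np

def sample_extrinsic_mean_sphere(X):
    """Sample extrinsic mean on S^{d-1} with the inclusion embedding:
    average the embedded points in R^d, then project back to the sphere.
    Assumes the Euclidean mean is nonzero (i.e. a J-nonfocal point)."""
    mu = X.mean(axis=0)
    norm = np.linalg.norm(mu)
    if norm == 0:
        raise ValueError("Euclidean mean is a focal point; extrinsic mean undefined")
    return mu / norm

# Synthetic data: points scattered around the north pole, pushed onto the sphere.
rng = np.random.default_rng(2)
north = np.array([0.0, 0.0, 1.0])
X = north + 0.1 * rng.standard_normal((100, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
mu_E = sample_extrinsic_mean_sphere(X)
```

The focal set here is just the origin, illustrating the mild uniqueness condition compared with the intrinsic mean.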
### 2.2 Extrinsic Median
Before proceeding to present our proposed method, we begin by giving a quick
overview of the existing Euclidean geometric median. In the Euclidean
multivariate setting, a large body of research has been devoted to developing
the robust estimation of the central point (see for a review Small, 1990),
among which the geometric median, initially proposed by Haldane (1948), has
received the greatest attention over the last decades due both to its nice
robustness properties and computational efficiency (Cardot et al., 2013,
2017). The geometric median of a random variable $\mathbb{X}\in\mathbb{R}^{k}$
is defined by
$\mathbb{m}=\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathbb{R}^{k}}\mathbb{E}\|\mathbb{X}-\mathbb{q}\|$,
or alternatively but equivalently, is obtained by minimizing
$\int_{\mathbb{R}^{k}}\left(\|\mathbb{x}-\mathbb{q}\|-\|\mathbb{x}\|\right)\mathcal{Q}(d\mathbb{x}).$
Note that the latter expression has been more commonly adopted in practice,
since no assumption regarding the first order moment of $\mathbb{X}$ needs to
be imposed. Moreover, when $k=1$ the above definition coincides with the
classical notion of the median which is defined in terms of the cumulative
distribution function. In this sense, the geometric median plays a role of the
multivariate generalization of the univariate median. Now suppose that we
observe $\mathcal{X}=\{\mathbb{x}_{1},\cdots,\mathbb{x}_{n}\}$ consisting of
$n$ independent and identically distributed realizations of $\mathbb{X}$, then
the sample geometric median $\widehat{\mathbb{m}}$, which provides the natural
estimation of $\mathbb{m}$, is obtained by finding the optimal value that
minimizes the sum of Euclidean distances to the given data points, i.e.,
$\displaystyle\widehat{\mathbb{m}}=\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathbb{R}^{k}}\sum_{i=1}^{n}\left(\|\mathbb{x}_{i}-\mathbb{q}\|-\|\mathbb{x}_{i}\|\right).$
The above optimization problem is also known as the Fermat-Weber problem
(Weber, 1929), and the numerical algorithm for solving the geometric median
problem was first introduced by Weiszfeld (1937). It is shown in Kemperman (1987) that the sample geometric median is uniquely determined unless all the given observations lie on the same line.
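Weiszfeld's iteration for the Euclidean sample geometric median can be sketched as follows (our illustration with synthetic data; the single contaminated point drags the mean far away while the median barely moves, previewing the robustness discussed in this section):

```python
import numpy as np

def weiszfeld(X, tol=1e-8, max_iter=1000):
    """Sample geometric median of the rows of X via Weiszfeld's algorithm."""
    m = X.mean(axis=0)                       # initialize at the sample mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.maximum(d, 1e-12)             # guard against division by zero
        w = 1.0 / d
        # One Weiszfeld step: inverse-distance weighted average of the data.
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# Synthetic data with one gross outlier.
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 2))
X[0] = [1e6, 1e6]                            # contaminate a single observation
mean_est = X.mean(axis=0)
median_est = weiszfeld(X)
```

The mean is pulled on the order of $10^{4}$ away from the origin, while the geometric median stays inside the uncontaminated cloud.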
Many nice properties of the geometric median have been investigated, including invariance under rotation and translation, asymptotic behavior (Möttönen et al., 2010), and concentration (Minsker, 2015); however, its most notable advantage over the mean is that it provides a robust estimate of centrality in the presence of noise in the data. The robustness of an
estimator is usually measured by the breakdown point. For a given data
$\mathcal{X}$ in the above, we further consider the outlier-contaminated data
$\mathcal{X}^{\ast}_{m}=\{\mathbb{x}_{1}^{\ast},\cdots,\mathbb{x}_{m}^{\ast},\mathbb{x}_{m+1},\cdots,\mathbb{x}_{n}\}$,
where the first $m$ elements are replaced by extreme noises, then the
breakdown point of the estimator $T_{n}$ is defined by
$\displaystyle B(T_{n})=\min_{1\leq m\leq n}\left\{\frac{m}{n}\ \left|\ \sup_{\mathcal{X}^{\ast}_{m}}\|T_{n}(\mathcal{X}^{\ast}_{m})-T_{n}(\mathcal{X})\|=\infty\right.\right\}.$
In an intuitive sense, the breakdown point can be interpreted as the highest
proportion of contamination that the estimator can tolerate before the
difference between the estimated result obtained from the contaminated data
and the initial result goes to infinity. Being less affected by outliers, the
geometric median achieves the asymptotic breakdown point of $0.5$ (Lopuhaä and
Rousseeuw, 1991). This indicates that the geometric median can provide a good
estimation result even when up to half of the data is corrupted. Note that the breakdown point of the sample mean is $1/n$, meaning that a single extreme value can change the estimate arbitrarily.
We now turn our attention to manifolds. Much of the research regarding
estimation of the central location of data on manifolds has focused on the
variants of the Fréchet mean including the intrinsic, extrinsic mean. However,
the common drawback of least squares based methods is their lack of
robustness to extreme values, which makes the Fréchet mean inevitably
sensitive to heavy tailed distributions and outlying values. Nevertheless,
even though there has been a considerable increase in the need for robust
statistical methods on manifolds, less has been done on this issue. The
pioneering attempt to address this is seen in the work of Fletcher et al.
(2009), where they proposed the intrinsic median by substituting the squared
geodesic distance employed in the Fréchet function with the unsquared one,
i.e.,
$\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathcal{M}}\int_{\mathcal{M}}\rho(\mathbb{x},\mathbb{q})\mathcal{Q}(d\mathbb{x})$.
Although their approach has had a great deal of success in generalizing the
notion of the median to Riemannian manifolds by attaining the same breakdown
point as in the Euclidean case, it has some inherent drawbacks that may limit
its application: (1) it is often difficult to derive conditions for the uniqueness and the existence of the intrinsic median without restrictions on its support; (2) even when the intrinsic median exists, computing it requires iterative algorithms on manifolds, which may incur a large computational overhead. These drawbacks highlight the need for the
development of novel approaches aimed at giving a more computationally
efficient method in which the existence and uniqueness conditions are well
established and easy to understand.
In an attempt to address the methodological shortcomings of the intrinsic
median described above, we propose the following new robust location parameter
by making use of the unsquared extrinsic distance,
$\displaystyle\mathbb{m}_{E}=\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathcal{M}}\int_{\mathcal{M}}\|J(\mathbb{x})-J(\mathbb{q})\|\mathcal{Q}(d\mathbb{x}).$
(5)
Given observations $\mathbb{x}_{1},\cdots,\mathbb{x}_{n}$ consisting of
independent and identically distributed copies of a manifold-valued random
variable $\mathbb{X}$, the above location parameter, which we call the
population extrinsic median, can be estimated by replacing $\mathcal{Q}$ with
the empirical measure
$\widehat{\mathcal{Q}}=1/n\sum_{i=1}^{n}\delta_{\mathbb{x}_{i}}$, i.e.,
$\displaystyle\widehat{\mathbf{m}}_{E}$
$\displaystyle=\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathcal{M}}\sum_{i=1}^{n}\|J(\mathbb{x}_{i})-J(\mathbb{q})\|$
$\displaystyle=J^{-1}\left(\mathcal{P}\Big{(}\operatornamewithlimits{argmin}_{\mathbb{m}\in\mathbb{R}^{D}}\sum_{i=1}^{n}\|J(\mathbb{x}_{i})-\mathbb{m}\|\Big{)}\right).$
(6)
Unlike the sample extrinsic mean, which has a closed-form expression depending
on the projection map, the sample extrinsic median requires an iterative
procedure, Weiszfeld's algorithm, for solving the inner minimization problem.
By taking advantage of the Euclidean geometry of the ambient space, however,
the proposed extrinsic approach allows us to apply the original Weiszfeld
algorithm without any further modification. Indeed, Algorithm 1 for solving
(6) follows directly from the convexity of the objective function associated
with the Euclidean norm,
$f(\boldsymbol{m})=\sum_{i=1}^{n}\|J(\mathbb{x}_{i})-\boldsymbol{m}\|$.
Algorithm 1 Extrinsic Median
1:$n$ observations $\mathcal{X}=\\{\mathbb{x}_{1},\cdots,\mathbb{x}_{n}\\}$
2:$t=0,\boldsymbol{m}^{0}\ \text{and}\ \varepsilon$
3:while $\|\boldsymbol{m}^{t+1}-\boldsymbol{m}^{t}\|>\varepsilon$ do
4: Compute the gradient direction $\nabla f(\boldsymbol{m}^{t})$
$\displaystyle\sum_{i=1}^{n}\frac{\boldsymbol{m}^{t}-J(\mathbb{x}_{i})}{\|\boldsymbol{m}^{t}-J(\mathbb{x}_{i})\|}$
5: Compute the step size $\displaystyle
s^{t}=\left(\sum_{i=1}^{n}\frac{1}{\|\boldsymbol{m}^{t}-J(\mathbb{x}_{i})\|}\right)^{-1}$
6: Update $\boldsymbol{m}^{t+1}$
$\displaystyle\boldsymbol{m}^{t+1}=\boldsymbol{m}^{t}-s^{t}\cdot\nabla
f(\boldsymbol{m}^{t})$
7: $t\leftarrow t+1$
8:end while
9:Estimated robust estimator
$\widehat{\mathbf{m}}_{E}=J^{-1}(\mathcal{P}(\boldsymbol{m}^{\ast}))$
$\triangleright$ $\boldsymbol{m}^{\ast}$ denotes the optimal value.
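To make the procedure concrete, the following is a minimal NumPy sketch of Algorithm 1 for the unit sphere $\mathcal{S}^{d}$, where $J$ is the inclusion map and $\mathcal{P}(\boldsymbol{\mu})=\boldsymbol{\mu}/\|\boldsymbol{\mu}\|$; the function name and the safeguard against division by zero are our own additions.

```python
import numpy as np

def extrinsic_median_sphere(X, eps=1e-8, max_iter=1000):
    """Weiszfeld iteration (Algorithm 1) for data X (n x (d+1)) on the unit
    sphere, where J is the inclusion map and P(mu) = mu / ||mu||."""
    m = X.mean(axis=0)                                  # ambient starting value m^0
    for _ in range(max_iter):
        diff = m - X                                    # m^t - J(x_i)
        dist = np.maximum(np.linalg.norm(diff, axis=1), 1e-12)
        grad = (diff / dist[:, None]).sum(axis=0)       # gradient direction
        step = 1.0 / (1.0 / dist).sum()                 # step size s^t
        m_new = m - step * grad
        if np.linalg.norm(m_new - m) < eps:
            m = m_new
            break
        m = m_new
    return m / np.linalg.norm(m)                        # P, then J^{-1} (inclusion)
```

On contaminated circular data the returned direction stays close to the inlier cluster, in line with the simulations of Section 3.1.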
When incorporated into the extrinsic framework, our approach avoids the
computational overhead encountered in optimization on Riemannian manifolds,
and thus possesses a practical advantage over the intrinsic geometric median
algorithm. Specifically, in contrast with the intrinsic geometric median
algorithm (Fletcher et al., 2009),
$\displaystyle\boldsymbol{m}^{t+1}=\texttt{Exp}_{\boldsymbol{m}^{t}}(\alpha\mathbf{v}^{t}),\
\mathbf{v}^{t}=\sum_{i=1}^{n}\frac{\texttt{Log}_{\boldsymbol{m}^{t}}(\mathbb{x}_{i})}{\rho(\boldsymbol{m}^{t},\mathbb{x}_{i})}\cdot\left(\sum_{i=1}^{n}\frac{1}{\rho(\boldsymbol{m}^{t},\mathbb{x}_{i})}\right)^{-1},$
in which $\texttt{Exp}:T_{\mathbb{m}^{t}}\rightarrow\mathcal{M}$,
$\texttt{Log}:\mathcal{M}\rightarrow T_{\mathbb{m}^{t}}$ have to be repeatedly
evaluated at each iteration, our method avoids the additional computational
cost incurred by these exponential and logarithmic maps. As
indicated above, it should be emphasized that although the data lie on
manifolds, the proposed algorithm itself operates in Euclidean space without
suffering from geometrical restrictions and constraints posed by non-Euclidean
data domains.
Also of importance is that when the given data points are not all collinear
(i.e., there do not exist $\mathbb{y},\mathbb{z}\in\mathbb{R}^{p}$ and
$\alpha_{1},\cdots,\alpha_{n}\in\mathbb{R}$ such that
$\mathbb{x}_{i}=\mathbb{y}+\alpha_{i}\mathbb{z}$ for all $i=1,\cdots,n$),
Weiszfeld's algorithm always converges to the unique optimal solution (Kuhn,
1973). For each
iteration step,
$\boldsymbol{m}^{t}\not\in\\{\mathbb{x}_{1},\cdots,\mathbb{x}_{n}\\}$ is
typically assumed in order to ensure that the proposed algorithm converges to
the global optimal solution. Details of the algorithm including derivation and
their convergence analysis are deferred to Section 4, in which we discuss the
algorithm for solving the robust extrinsic local regression (Algorithm 2) of
which Algorithm 1 is a special case.
## 3 Applications of Extrinsic Median
In this section, the practical applicability and robustness of the extrinsic
median are examined through simulation studies on two important manifolds. To
gain further insight, the experiments were carried out under different
conditions that may be encountered in practice. Results are compared with
competing methods, including the extrinsic mean.
### 3.1 Unit sphere
The first and simplest application is the unit sphere
$\mathcal{S}^{d}=\\{\mathbb{x}\in\mathbb{R}^{d+1}:\|\mathbb{x}\|=1\\}$, which
is a $d$-dimensional submanifold of $\mathbb{R}^{d+1}$. It can be embedded
into $\mathbb{R}^{d+1}$ through the inclusion map
$\iota:\mathcal{S}^{d}\rightarrow\mathbb{R}^{d+1}$,
$\iota(\mathbb{x})=\mathbb{x}$. The projection map
$\mathcal{P}:\mathbb{R}^{d+1}\rightarrow\mathcal{S}^{d}$ is defined by
$\mathcal{P}(\boldsymbol{\mu})=\boldsymbol{\mu}/\|\boldsymbol{\mu}\|$, where
$\boldsymbol{\mu}=\int_{\mathbb{R}^{d+1}}\mathbb{x}\widetilde{\mathcal{Q}}(d\mathbb{x})$
is the mean vector calculated in the ambient space of $\mathbb{R}^{{d+1}}$ and
$\widetilde{\mathcal{Q}}=\mathcal{Q}\circ\iota^{-1}$ denotes the induced
probability measure. Note that $\boldsymbol{\mu}$ is $\iota$-nonfocal unless
$\boldsymbol{\mu}=\bf{0}$. For further details about statistical analysis on
$\mathcal{S}^{d}$, we refer to Fisher et al. (1987), Mardia and Jupp (1999)
and references therein.
In the following, the performance of the extrinsic median on
$\mathcal{S}^{d}$ is illustrated by simulation studies. To ease visualization
of the generated data and of how the extrinsic median provides more robust
estimation than the extrinsic mean, we consider the simplest case
$\mathcal{S}^{1}$, in which data are observed as directions on the unit
circle in the Euclidean plane $\mathbb{R}^{2}$. Note that data on
$\mathcal{S}^{1}$ are typically represented either by an angle
$\theta\in[0,2\pi)$ measured in radians, or by the unit vector
$\mathbb{x}=(\cos\theta,\sin\theta)^{\top}$ from the origin. The performance
of the extrinsic median is compared under two simulation scenarios. In the
first, outliers are artificially added to data from the von Mises
($\operatorname{VM}$) distribution, whereas in the second, heavy-tailed
random observations are generated from the general wrapped $\alpha$-stable
($\operatorname{WS}$) distribution. A detailed description of each scenario
follows.
Scenario 1): The von Mises distribution with Normal outliers.
We suppose $\theta$ follows the von Mises distribution
$\operatorname{VM}(\mu,\kappa)$ with density function
$\displaystyle f(\theta)=\frac{e^{\kappa\cos(\theta-\mu)}}{2\pi
I_{0}(\kappa)},$
where $\mu,\kappa$ denote the mean direction and concentration parameter,
respectively and $I_{0}(\kappa)$ is the modified Bessel function of order $0$.
Note that a larger value of $\kappa$ means higher concentration around
$\mu$. We first generate random data $\\{\theta_{i}\\}_{i=1}^{n}$ consisting
of independent and identically distributed copies of
$\theta\sim\operatorname{VM}(\mu,\kappa)$; then $n_{\text{cont}}$ outliers
$o_{j}^{\prime}\stackrel{{\scriptstyle
iid}}{{\sim}}\operatorname{Normal}(\mu_{\textbf{out}},\sigma^{2})$, where
$\mu_{\textbf{out}}\neq\mu$, are added to the initial data set so that the
contamination level attains the prespecified value $r=n_{\text{cont}}/n$. The
generated outliers are then normalized, $o_{j}=o_{j}^{\prime}\pmod{2\pi}$, to
ensure $0\leq o_{j}<2\pi$.
Scenario 2): The wrapped $\alpha$-stable random variable.
The density function of a wrapped $\alpha$-stable random variable $\theta$ is
given by
$\displaystyle
f(\theta)=\frac{1}{2\pi}+\frac{1}{\pi}\sum_{k=1}^{\infty}\exp(-\tau^{\alpha}k^{\alpha})\cos\left(k(\theta-\mu)-\tau^{\alpha}k^{\alpha}\beta\tan\frac{\alpha\pi}{2}\right),$
(7)
where $0<\alpha\leq 2$, $\tau\geq 0$ and $|\beta|\leq 1$ denote the shape,
dispersion and skewness parameters, respectively. Note that small values of
$\alpha$ yield heavy-tailed distributions, while larger values of $\tau$
yield more dispersed distributions. The benefit of using the wrapped
$\alpha$-stable distribution is that it provides a high degree of flexibility
in modeling directional data, in the sense that it contains many popular
circular distributions as special cases, including the wrapped normal
distribution ($\alpha=2$) and the wrapped Cauchy distribution
($\alpha=1,\beta=0$); see Jammalamadaka and SenGupta (2001) for further
details.
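Sampling from the wrapped $\alpha$-stable law can be sketched by drawing from the linear $\alpha$-stable distribution and wrapping modulo $2\pi$. The snippet below uses SciPy's `levy_stable`, whose `scale` argument plays the role of the dispersion $\tau$ up to parametrization conventions; the function name is our own.

```python
import numpy as np
from scipy.stats import levy_stable

def sample_wrapped_stable(n, alpha, beta=0.0, mu=0.0, tau=0.2, seed=0):
    """Sample a wrapped alpha-stable variable: draw from the linear
    alpha-stable law and wrap modulo 2*pi.  SciPy's `scale` plays the role
    of the dispersion tau (up to parametrization conventions)."""
    x = levy_stable.rvs(alpha, beta, loc=mu, scale=tau, size=n,
                        random_state=seed)
    return np.asarray(x) % (2 * np.pi)
```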
Representative illustrations of simulation scenario 1 and scenario 2 are
displayed in Figure 1 (left and right panel, respectively), together with
the estimated values. In both scenarios, we observed that the extrinsic mean
estimates were pulled far away from the true mean direction ($\mu=0$, i.e.,
$\mathbb{x}=(1,0)$ in Cartesian coordinates) by outliers. In Table 1, the
extrinsic median is compared with the extrinsic mean in terms of the norm of
the difference between the true mean direction and the estimated direction.
The results are averaged over 20 replications. In the first scenario, four
settings are considered according to the level of contamination,
$r\in\\{0,0.1,0.2,0.4\\}$, where $r=0$ represents the case with no outliers.
As would be expected, the results of the first scenario indicate that as the
contamination level increases, the extrinsic mean is far more vulnerable to
the presence of outliers than the extrinsic median. The bottom panel of the
table shows the results of Scenario 2, in which we fix $\beta=0$ for symmetry
of the distribution, vary the tail heaviness by adjusting $\alpha$ from $0.1$
to $2$, and control the dispersion of the data by taking $\tau=0.2,2$. It is
observed that the extrinsic median not only has better predictive ability for
heavy-tailed data (small values of $\alpha$), but also achieves comparable
performance for non-heavy-tailed data generated from the wrapped normal
distribution ($\alpha=2$).
Figure 1: Left: an example of Scenario 1), consisting of observations drawn
from the VM distribution and outliers, displayed as light grey circles and
asterisks, respectively. Right: Scenario 2). In both settings, the fits of
the extrinsic mean and median are displayed together with the true mean value.
Scenario 1 : The outlier contaminated case
---
| Ratio of outliers
| No outlier | 0.1 | 0.2 | 0.4
$N$ | E. Mean | E. Med | E. Mean | E. Med | E. Mean | E. Med | E. Mean | E. Med
10 | 0.0100 | 0.0090 | 0.0550 | 0.0129 | 0.1376 | 0.0132 | 0.2778 | 0.0169
50 | 0.0064 | 0.0066 | 0.0663 | 0.0106 | 0.1311 | 0.0115 | 0.2702 | 0.0122
100 | 0.0032 | 0.0033 | 0.0624 | 0.0101 | 0.1315 | 0.0087 | 0.2732 | 0.0118
200 | 0.0025 | 0.0028 | 0.0655 | 0.0099 | 0.1331 | 0.0101 | 0.2730 | 0.0136
Scenario 2 : The heavy tailed distribution |
---|---
| | $\alpha$
| | 0.1 | 0.5 | 1 | 2
$\tau$ | $N$ | E. Mean | E. Med | E. Mean | E. Med | E. Mean | E. Med | E. Mean | E. Med
0.2 | 10 | 0.3411 | 0.1658 | 0.1971 | 0.0540 | 0.1044 | 0.0505 | 0.0376 | 0.0393
50 | 0.1053 | 0.0011 | 0.0709 | 0.0135 | 0.0454 | 0.0199 | 0.0188 | 0.0197
100 | 0.0653 | 0.0003 | 0.0432 | 0.0095 | 0.0364 | 0.0141 | 0.0123 | 0.0194
200 | 0.0582 | 0.0003 | 0.0211 | 0.0062 | 0.0198 | 0.0091 | 0.0086 | 0.0095
2 | 10 | 0.3105 | 0.1294 | 0.4291 | 0.4054 | 0.4446 | 0.3963 | 0.3876 | 0.4501
50 | 0.1471 | 0.0014 | 0.2221 | 0.1112 | 0.1619 | 0.1550 | 0.1824 | 0.2097
100 | 0.1312 | 0.0023 | 0.1434 | 0.0858 | 0.1415 | 0.1339 | 0.2075 | 0.2641
200 | 0.0867 | 0.0005 | 0.0894 | 0.0567 | 0.0967 | 0.0845 | 0.1161 | 0.1421
Table 1: Results of the experiment described in Section 3.1. The result of
scenario 1 and 2 are presented in the top and bottom panels of the table,
respectively
### 3.2 Planar Shape
For the second application of the extrinsic median, we consider Kendall's
planar shape space of $k$-ads, denoted by $\Sigma_{2}^{k}$ (Kendall, 1984),
which is the most popular manifold in the landmark-based shape analysis
literature. Before proceeding to the simulation study on the planar shape
space, we give the necessary preliminaries about this space.
The planar shape can be defined as a random object that is invariant under the
Euclidean similarity transformation. Therefore, the planar shape is identified
as the remaining geometric information after filtering out the effect of
translation, scaling, and rotation. To aid understanding of this nonlinear
manifold, let us begin by describing the geometry of the planar shape space.
First, an unregistered $k$-ad, which is a landmark configuration describing
the shape of an object, can be conveniently placed on the complex plane as a
set of $k$ complex numbers, i.e., $\mathbb{z}=(z_{1},\cdots,z_{k})$, where
$z_{j}=x_{j}+iy_{j}\in\mathbb{C}$. One then obtains the preshape of
$\mathbb{z}$ by quotienting out the effect of translation and scale
$\displaystyle\mathbb{u}=\frac{\mathbb{z}-\langle\mathbb{z}\rangle}{\|\mathbb{z}-\langle\mathbb{z}\rangle\|},$
where $\langle\mathbb{z}\rangle=(\bar{z},\cdots,\bar{z})$, and
$\bar{z}=\frac{1}{k}\sum_{j=1}^{k}z_{j}$. This indicates that the preshape
space is equivalent to a complex hypersphere,
$\mathbb{C}S^{k-1}=\left\\{\mathbb{u}\in\mathbb{C}^{k}|\sum_{i=1}^{k}\mathbb{u}_{j}=0,\|\mathbb{u}\|=1\right\\}$.
Then the shape $[\mathbb{z}]$ of $\mathbb{z}$, the geometric object invariant
under rotation, is obtained by considering all rotated versions of
$\mathbb{u}$, i.e.,
$[\mathbb{z}]=\left\\{e^{i\theta}\mathbb{u}:0\leq\theta<2\pi\right\\}$.
As the shape is defined as the orbit of $\mathbb{u}\in\mathbb{C}S^{k-1}$,
Kendall’s planar shape space $\Sigma_{2}^{k}=\mathbb{C}S^{k-1}/SO(2)$ is the
quotient space of the preshape space under the action of special orthogonal
group of dimension $2$,
$SO(2)=\\{\bf{A}\in\operatorname{GL}_{2}|\bf{A}^{-1}=\bf{A}^{\top},\operatorname{det}(\bf{A})=1\\}$.
Alternatively, the effects of scaling by a scalar $r>0$ and rotating by an
angle $0\leq\theta<2\pi$ can be filtered out simultaneously by multiplying
the centered $k$-ad configuration $\mathbb{z}-\langle\mathbb{z}\rangle$ by a
complex number $\lambda=re^{i\theta}$, i.e.,
$[\mathbb{z}]=\\{\lambda(\mathbb{z}-\langle\mathbb{z}\rangle):\lambda\in\mathbb{C}\setminus\\{0\\}\\}$.
Due to this algebraically simpler characterization, the planar shape space is
equivalently identified with the complex projective space
$\Sigma_{2}^{k}\simeq\mathbb{C}P^{k-2}$, the space of all complex lines
through the origin in $\mathbb{C}^{k-1}$. A more detailed explanation of the
geometric structure of the shape manifold is provided in Dryden and Mardia
(1998) and Bhattacharya and Bhattacharya (2012).
We now describe the extrinsic approach on $\Sigma_{2}^{k}$. Following Kent
(1992), the Veronese–Whitney embedding is typically used on Kendall's planar
shape space; it maps $\Sigma_{2}^{k}$ into the space of $k\times k$ complex
Hermitian matrices $\mathcal{S}(k,\mathbb{C})$ by
$\displaystyle J:\ $
$\displaystyle\Sigma_{2}^{k}\rightarrow\mathcal{S}(k,\mathbb{C})$
$\displaystyle[\mathbb{z}]\mapsto
J([\mathbb{z}])=\mathbb{u}\mathbb{u}^{\ast},$ (8)
where $\mathbb{u}^{\ast}$ denotes the complex conjugate transpose of
$\mathbb{u}$. Furthermore, since
$J(\bf{A}[\mathbb{z}])=\bf{A}\mathbb{u}\mathbb{u}^{\ast}\bf{A}^{\ast}$ holds
for any ${\bf{A}}\in SU(k)$, where
$SU(k)=\left\\{\bf{A}\in\operatorname{GL}_{k}(\mathbb{C})\ |\
\bf{AA}^{\ast}=\bf{I},\det(\bf{A})=1\right\\}$ denotes the special unitary
group, the Veronese–Whitney embedding is shown to be $SU(k)$-equivariant,
i.e., $J({\bf{A}}[\mathbb{z}])=\phi({\bf{A}})J([\mathbb{z}])$. This follows
directly by taking the Lie group homomorphism
$\phi:\operatorname{SU}(k)\rightarrow\operatorname{GL}_{k}(\mathbb{C})$ such
that $\phi({\bf{A}}){\bf{B}}=\bf{ABA}^{\ast}$, where
${\bf{B}}\in\mathcal{S}(k,\mathbb{C})$. It should also be noted that the
squared extrinsic distance of two planar shapes is defined in terms of the
Frobenius norm of a complex matrix
$\displaystyle\rho_{E}^{2}([\mathbb{z}_{1}],[\mathbb{z}_{2}])$
$\displaystyle=\|J([\mathbb{z}_{1}])-J([\mathbb{z}_{2}])\|_{F}^{2}$
$\displaystyle=\text{Trace}\Big{(}\left\\{J([\mathbb{z}_{1}])-J([\mathbb{z}_{2}])\right\\}\left\\{J([\mathbb{z}_{1}])-J([\mathbb{z}_{2}])\right\\}^{\ast}\Big{)}$
$\displaystyle=\sum_{j=1}^{k}\sum_{i=1}^{k}\left|\\{J([\mathbb{z}_{1}])-J([\mathbb{z}_{2}])\\}_{i,j}\right|^{2}.$
(9)
Since the above extrinsic distance takes into account all $k^{2}$ elements of
$J([\mathbb{z}_{1}])-J([\mathbb{z}_{2}])$, it can be viewed as the natural
Euclidean distance between the two embedded shapes $J([\mathbb{z}_{1}])$ and
$J([\mathbb{z}_{2}])$. Lastly, the projection and inverse maps of the
embedding, $J^{-1}(\mathcal{P}(\cdot))$ in (6), remain to be identified. Let
$\widetilde{\mathbb{X}}$ be an arbitrary point in the ambient Euclidean space
$\mathbb{R}^{D}$; then the projection of $\widetilde{\mathbb{X}}$ onto the
image of the embedding $\widetilde{\mathcal{M}}=J(\Sigma_{2}^{k})$ is given
by $\boldsymbol{\gamma}\boldsymbol{\gamma}^{\ast}$, where
$\boldsymbol{\gamma}$ is the unit eigenvector of $\widetilde{\mathbb{X}}$
corresponding to the largest eigenvalue. Subsequently, the inverse map
$J^{-1}(\boldsymbol{\gamma}\boldsymbol{\gamma}^{\ast})=[\boldsymbol{\gamma}]$
can be obtained directly from (8) without extra computation.
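The embedding, projection, and inverse map just described can be sketched in NumPy as follows; function names are ours, and `eigh` returns eigenvalues in ascending order, so the last column holds the leading eigenvector.

```python
import numpy as np

def preshape(z):
    """Remove translation and scale from a complex k-ad z."""
    u = z - z.mean()
    return u / np.linalg.norm(u)

def vw_embed(z):
    """Veronese-Whitney embedding J([z]) = u u^* (a k x k Hermitian matrix)."""
    u = preshape(z)
    return np.outer(u, u.conj())

def vw_project(X):
    """Projection onto J(Sigma_2^k) followed by the inverse map: return the
    unit eigenvector gamma for the largest eigenvalue, so the projection is
    gamma gamma^* and J^{-1}(gamma gamma^*) = [gamma]."""
    _, V = np.linalg.eigh(X)   # eigh: ascending eigenvalues for Hermitian X
    return V[:, -1]
```

A quick check of similarity invariance: rotating, scaling, and translating a configuration leaves its embedded image unchanged.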
Now, in order to gauge the performance of the proposed method, we perform
simulation experiments on the planar shape space by investigating the corpus
callosum (CC) data extracted from the subset of ADHD-200 dataset
(http://fcon_1000.projects.nitrc.org/indi/adhd200/). The original dataset
includes functional magnetic resonance imaging (fMRI) scans of subjects
categorized into four different groups based on their symptoms and conditions;
(1) Typically developing children, (2) ADHD-Hyperactive, (3) ADHD-Inattentive,
and (4) ADHD-Combined. The CC shapes of 647 subjects which consist of 50
landmarks were preprocessed and analyzed by Huang et al. (2015) to illustrate
their clustering method. In this experiment, however, only a subset of the
data (the CC shapes extracted from 404 typically developing children) was
utilized. Since the main aim of this simulation study is to examine how
robustly the extrinsic median behaves in a noisy environment in which a
number of landmarks are contaminated by outliers, we further manipulated the
data by adding random noise generated from
$\operatorname{Normal}(\mu=1000,\sigma=5)$ to the real parts (the $x$
coordinates) of the $10$th $\sim$ $15$th landmarks. The number of outliers
was varied according to the noise level $r$, ranging from $0$ to $0.4$.
Additionally, the extrinsic median was compared to several competing methods
including the maximum likelihood estimator of the isotropic offset Gaussian
distribution (Mardia and Dryden, 1989; Dryden and Mardia, 1991) and different
variants of the Fréchet mean, such as the intrinsic mean, the Fréchet mean
associated with the partial Procrustes distance, and the extrinsic mean.
Figure 2: Examples of normal C.C. data and estimated shapes. Each individual
shape is displayed in light grey solid line, and the results of different
methods are represented by different colors and line styles.
Figure 2 shows the CC shapes obtained from several simulated data with
different noise level $r=\\{0,0.2,0.4\\}$. As shown in the left panel ($r=0$),
no remarkable difference was observed between the shapes estimated by the
various methods. The middle and right panels, however, show that with the
exception of the extrinsic median, the methods were affected by outliers,
leading to distortion in the estimated shapes. Importantly, although some
deformation of the estimated shape also occurs for the extrinsic median at
the highest noise level tested, its estimate stays much closer to its
uncontaminated result than those of the other methods.
We now introduce the measure that quantifies the robustness of estimators on
the planar shape space. To do this, we let $\overline{\mathbb{x}}$,
$\widehat{\mathbb{x}}^{\ast}$ denote the estimated shape obtained from the
uncontaminated and contaminated data, respectively. Then the full Procrustes
distance between $\overline{\mathbb{x}}$ and $\widehat{\mathbb{x}}^{\ast}$,
i.e.,
$\rho_{FP}(\widehat{\mathbb{x}}^{\ast},\overline{\mathbb{x}})=\sqrt{1-|\langle\widehat{\mathbb{x}}^{\ast},\overline{\mathbb{x}}\rangle|^{2}}$,
is considered to assess whether each method can provide robust estimation
without being influenced by outlying values. This is analogous to the
quantity used in defining the breakdown point, where the Euclidean version,
$\|T_{n}(\mathcal{X}^{\ast}_{m})-T_{n}(\mathcal{X})\|$, is exploited.
However, unlike the breakdown point, which gives the highest fraction of
gross outliers in the data that an estimator can handle, a smaller value of
$\rho_{FP}(\widehat{\mathbb{x}}^{\ast},\overline{\mathbb{x}})$ implies that a
method has more resistance to outliers. Results of this simulation, averaged
over 20 replications for the different contamination levels, are presented in
Figure 3. They illustrate that the proposed extrinsic median has a remarkable
ability to resist outliers when the data are contaminated with significant
levels of noise. All the other methods, however, show a deterioration in
performance, mainly caused by the squared distance term employed in their
models.
Figure 3: Graphical result of the simulation study. Line plots give the full
Procrustes distances,
$\rho_{FP}(\widehat{\mathbb{x}}^{\ast},\overline{\mathbb{x}})$ for different
methods as a function of the contamination level $r$.
## 4 Robust Extrinsic Local Regression
In this section, we present the robust extrinsic local regression. To do this,
we first consider a nonparametric regression model
$\mathbb{Y}=f_{0}(\mathbb{X})+\boldsymbol{\varepsilon}$ with a response
$\mathbb{Y}$ taking value in $\mathcal{M}$, a Euclidean predictor
$\mathbb{X}\in\mathbb{R}^{p}$, and $f_{0}$ an unknown regression function of
interest. Suppose we observe
$\mathcal{D}=\\{(\mathbb{x}_{1},\mathbb{y}_{1}),\cdots,(\mathbb{x}_{n},\mathbb{y}_{n})\\}$
consisting of independent and identically distributed copies of
$(\mathbb{X},\mathbb{Y})$. One of the major challenges in developing a
regression model with a manifold-valued response lies in the lack of a vector
space structure on $\mathcal{M}$, which renders traditional Euclidean
approaches, including the least squares method, not directly applicable. For
example, since linear operations are not available on $(\mathcal{M},\rho)$,
evaluating the difference between the estimated and observed values, i.e.,
$\mathbb{y}_{i}-\widehat{f}(\mathbb{x}_{i})$, is not meaningful. Moreover,
the geometric feasibility of the estimate, i.e.,
$\widehat{f}(\mathbb{x}_{i})\in\mathcal{M}$, cannot be guaranteed unless
additional restrictions are imposed on typical regression models. For the
reasons outlined above, there has been a great demand for the development of a
regression model having a manifold valued response, and a large body of
literature addressing this problem has accumulated over the past two decades
(Shi et al., 2009; Yuan et al., 2012; Cornea et al., 2017). In particular,
the extrinsic local regression (ELR) method was initially established by Lin
et al. (2017). More recently, Petersen and Müller (2019) proposed the Fréchet
regression on general metric spaces by considering the following conditional
Fréchet mean
$\displaystyle
F(\boldsymbol{x})=\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathcal{M}}\int_{\mathcal{M}}\rho^{2}(\mathbb{q},\mathbb{y})\mathcal{Q}(d\mathbb{y}|\boldsymbol{x}),$
where $\mathcal{Q}(\mathbb{y}|\boldsymbol{x})$ denotes the conditional
distribution of $\mathbb{Y}$ given $\mathbb{X}=\boldsymbol{x}$. Applications
of the above framework are very broad, as its usage is not limited to
manifolds.
However, despite promising progress in developing regression models for a non-
Euclidean-valued response, all the aforementioned methods commonly suffer
from a lack of robustness caused by the use of squared distances. To remedy
this problem, we
propose the robust extrinsic local regression (RELR), which can be
accomplished easily by linking the extrinsic median to a classical
nonparametric local kernel regression. The remainder of this section is
dedicated to presenting the details of RELR, together with the proposed
numerical algorithm.
We begin by introducing the following population robust extrinsic regression
function, which extends the notion of the conditional median to manifolds,
$\displaystyle F_{RE}(\boldsymbol{x})$
$\displaystyle=\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathcal{M}}\int_{\mathcal{M}}\|J(\mathbb{q})-J(\mathbb{y})\|\mathcal{Q}(d\mathbb{y}|\boldsymbol{x})$
(10)
$\displaystyle=\operatornamewithlimits{argmin}_{\mathbb{q}\in\mathcal{M}}\int_{\mathcal{\widetilde{\mathcal{M}}}}\|J(\mathbb{q})-\mathbb{z}\|\widetilde{\mathcal{Q}}(d\mathbb{z}|\boldsymbol{x}),$
where
$\widetilde{\mathcal{Q}}(\cdot|\boldsymbol{x})=\mathcal{Q}(\cdot|\boldsymbol{x})\circ
J^{-1}$ is the induced conditional probability measure of $\mathbb{Y}$ given
$\mathbb{X}=\boldsymbol{x}$ defined on $J(\mathcal{M})$. While the proposed
extrinsic approach is similar in spirit to that developed in Lin et al.
(2017), our work differs in that it makes use of the unsquared extrinsic
distance rather than the squared one. The unknown regression function
$F_{RE}(\cdot)$ can be estimated at the evaluation point $\boldsymbol{x}$ by
classical local polynomial fitting (Fan and Gijbels, 1996)
$\displaystyle\widehat{F}_{RE}(\boldsymbol{x})=J^{-1}\left(\mathcal{P}\bigg{(}\operatornamewithlimits{argmin}_{\boldsymbol{y}\in\mathbb{R}^{D}}\sum_{i=1}^{n}\dfrac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})\|\boldsymbol{y}-J(\mathbb{y}_{i})\|}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\bigg{)}\right).$
(11)
In the above notation, $K_{\mathbb{H}}:\mathbb{R}^{p}\rightarrow\mathbb{R}$
denotes the multivariate kernel function which is defined as
$K_{\mathbb{H}}(\mathbb{u})=\frac{1}{\det{(\mathbb{H})}}K(\mathbb{H}^{-1}\mathbb{u})$,
where $\mathbb{u}=(u_{1},\cdots,u_{p})^{\top}\in\mathbb{R}^{p}$, $\mathbb{H}$
is a $p\times p$ symmetric and positive definite smoothing matrix, and
$K:\mathbb{R}^{p}\rightarrow\mathbb{R}$ satisfies
$\int_{\mathbb{R}^{p}}K(\mathbb{u})d\mathbb{u}=1,\int_{\mathbb{R}^{p}}\mathbb{u}K(\mathbb{u})d\mathbb{u}=0$,
and $\int_{\mathbb{R}^{p}}\mathbb{u}^{2}K(\mathbb{u})d\mathbb{u}<\infty$. Note
that the case $\mathbb{H}=\operatorname{Diag}(h_{1},\cdots,h_{p})$ corresponds
to using a product kernel obtained by multiplying $p$ univariate kernels with
different bandwidths, i.e.,
$K_{\mathbb{H}}(\mathbb{u})=\prod_{i=1}^{p}\frac{1}{h_{i}}{\mathbf{k}}_{i}\left(u_{i}/h_{i}\right)$.
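For instance, a Gaussian product kernel (one admissible choice for the univariate kernels $\mathbf{k}_{i}$, not the only one) can be sketched as:

```python
import numpy as np

def product_gaussian_kernel(u, h):
    """Product kernel K_H(u) = prod_i (1/h_i) k(u_i / h_i) with Gaussian
    univariate kernels; u and h are length-p arrays."""
    z = u / h
    return np.prod(np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * h))
```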
Regarding the inner optimization problem in (11), note that it takes the form
of a weighted Fermat–Weber problem, where the weight on the $i$th observation
is given by the kernel function
$w_{i}=K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})/\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})$;
hence the robust extrinsic local regression can be readily solved by the
generalized Weiszfeld algorithm.
We now describe the numerical algorithm for obtaining the solution of the
localized robust regression estimator. As in the development of the extrinsic
median, non-collinearity of the embedded responses
$J(\mathbb{y}_{1}),\cdots,J(\mathbb{y}_{n})$ is required to ensure the
convergence of the algorithm. Let
$f(\boldsymbol{y})=\sum_{i=1}^{n}w_{i}\|\boldsymbol{y}-J(\mathbb{y}_{i})\|$ be
the objective function; then, by the strict convexity of $f$, the optimal
solution is attained at the stationary point $\nabla
f(\boldsymbol{y})=\sum_{i=1}^{n}w_{i}\frac{\boldsymbol{y}-J(\mathbb{y}_{i})}{\|\boldsymbol{y}-J(\mathbb{y}_{i})\|}\equiv
0$. Since the optimal $\boldsymbol{y}^{\ast}$ satisfies the following
equation
$\left(\sum_{i=1}^{n}w_{i}/\|\boldsymbol{y}^{\ast}-J(\mathbb{y}_{i})\|\right)\boldsymbol{y}^{\ast}=\sum_{i=1}^{n}w_{i}J(\mathbb{y}_{i})/\|\boldsymbol{y}^{\ast}-J(\mathbb{y}_{i})\|$,
the iterative algorithm for updating $\boldsymbol{y}$ on the embedded space
has the following form
$\displaystyle\boldsymbol{y}^{t+1}=\left(\sum_{i=1}^{n}\frac{w_{i}}{\|\boldsymbol{y}^{t}-J(\mathbb{y}_{i})\|}\right)^{-1}\sum_{i=1}^{n}\frac{w_{i}J(\mathbb{y}_{i})}{\|\boldsymbol{y}^{t}-J(\mathbb{y}_{i})\|}.$
(12)
In Algorithm 2, after a simple algebraic calculation, we can reformulate the
update rule in (12) into the form of the gradient descent with the step size
$s^{t}=\left(\sum_{i=1}^{n}w_{i}/\|\boldsymbol{y}^{t}-J(\mathbb{y}_{i})\|\right)^{-1}$.
Once the optimal solution $\boldsymbol{y}^{\ast}$ of the inner minimization
problem in the ambient space $\mathbb{R}^{D}$ is found, the final regression
estimator $\widehat{F}_{RE}(\boldsymbol{x})$ on $\mathcal{M}$ is obtained
straightforwardly by projecting $\boldsymbol{y}^{\ast}$ onto the image
$J(\mathcal{M})$ and taking the inverse of the embedding. Note that to
prevent the algorithm from getting stuck at non-optimal embedded points,
$\\{\boldsymbol{y}^{t}\\}_{t\geq
0}\not\in\\{J(\mathbb{y}_{1}),\cdots,J(\mathbb{y}_{n})\\}$ needs to be
assumed.
Algorithm 2 Robust extrinsic local regression (RELR)
1:$n$ observations
$\mathcal{D}=\\{(\mathbb{x}_{1},\mathbb{y}_{1}),\cdots,(\mathbb{x}_{n},\mathbb{y}_{n})\\}$,
evaluation point $\boldsymbol{x}$
2:$t=0,\boldsymbol{y}^{0}\ \text{and}\ \varepsilon$
3:while $\|\boldsymbol{y}^{t+1}-\boldsymbol{y}^{t}\|>\varepsilon$ do
4: Compute the gradient direction $\nabla f(\boldsymbol{y}^{t})$
$\displaystyle\sum_{i=1}^{n}\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\frac{\boldsymbol{y}^{t}-J(\mathbb{y}_{i})}{\|\boldsymbol{y}^{t}-J(\mathbb{y}_{i})\|}$
5: Compute the step size $\displaystyle
s^{t}=\left(\sum_{i=1}^{n}\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\middle/\|\boldsymbol{y}^{t}-J(\mathbb{y}_{i})\|\right)^{-1}$
6: Update $\boldsymbol{y}^{t+1}$
$\displaystyle\boldsymbol{y}^{t+1}=\boldsymbol{y}^{t}-s^{t}\cdot\nabla
f(\boldsymbol{y}^{t})$
7: $t\leftarrow t+1$
8:end while
9:Estimated robust estimator
$\widehat{F}_{RE}(\boldsymbol{x})=J^{-1}(\mathcal{P}(\boldsymbol{y}^{\ast}))$
$\triangleright$ $\boldsymbol{y}^{\ast}$ denotes the optimal value.
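Putting the pieces together, the following is a minimal sketch of Algorithm 2 for sphere-valued responses with scalar predictors; the Gaussian kernel, bandwidth, and starting value are illustrative assumptions rather than prescriptions of the text, and the function name is ours.

```python
import numpy as np

def relr_sphere(x, Y, x0, h=0.2, eps=1e-8, max_iter=500):
    """Kernel-weighted Weiszfeld iteration (a sketch of Algorithm 2) for
    responses Y (n x (d+1)) on the unit sphere, scalar predictors x, and
    evaluation point x0, using a Gaussian kernel with bandwidth h."""
    u = (x - x0) / h
    w = np.exp(-0.5 * u**2)            # unnormalized kernel weights K_H
    w = w / w.sum()                    # normalized weights w_i
    y = (w[:, None] * Y).sum(axis=0)   # weighted mean as starting value y^0
    for _ in range(max_iter):
        dist = np.maximum(np.linalg.norm(y - Y, axis=1), 1e-12)
        grad = ((w / dist)[:, None] * (y - Y)).sum(axis=0)   # gradient of f
        step = 1.0 / (w / dist).sum()                        # step size s^t
        y_new = y - step * grad
        if np.linalg.norm(y_new - y) < eps:
            y = y_new
            break
        y = y_new
    return y / np.linalg.norm(y)       # P and J^{-1}: project back to sphere
```

On circular responses whose angle varies smoothly with the predictor, the estimate at $x_0$ tracks the local conditional center of the data.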
We are now in a position to present the convergence analysis of the proposed
algorithm. Before proceeding, the following results related to convergence of
the classical Weiszfeld’s algorithm should be noted. For nonsmooth convex
optimization problems, most existing gradient descent type algorithms are
known to converge to the optimal solution at a rate of
$\mathcal{O}(1/\sqrt{t})$, however, with the initial value proposed by Vardi
and Zhang (2001), one can show that the Weiszfeld’s algorithm attains a
sublinear convergence rate $\mathcal{O}(1/t)$ (see Beck and Sabach, 2015).
More surprisingly, using a smooth approximation of the objective function
enables the algorithm to achieve an accelerated convergence rate of
$\mathcal{O}(1/t^{2})$. Additionally, since the proposed algorithm for
solving RELR is performed in Euclidean space, the algorithmic techniques and
convergence properties previously established for the classical Weiszfeld's
algorithm can be utilized immediately.
First, we will henceforth make use of the following initial point for the
RELR algorithm, which extends the scheme of Vardi and Zhang (2001) to a
nonparametric regression setting. For
$p\in\operatornamewithlimits{argmin}_{i\in\\{1,\cdots,n\\}}f(J(\mathbb{y}_{i}))$,
we set the initial value for Algorithm 2 to
$\boldsymbol{y}^{0}=J(\mathbb{y}_{p})+t_{p}\mathbb{d}_{p}$, where
$\displaystyle\begin{dcases}R_{p}&:=\sum_{i\neq
p}\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\frac{J(\mathbb{y}_{p})-J(\mathbb{y}_{i})}{\|J(\mathbb{y}_{i})-J(\mathbb{y}_{p})\|}\\\
\mathbb{d}_{p}&=-\frac{R_{p}}{\|R_{p}\|}\\\
t_{p}&=\frac{\|R_{p}\|-K_{\mathbb{H}}(\mathbb{x}_{p}-\boldsymbol{x})/\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}{L(J(\mathbb{y}_{p}))}.\end{dcases}$
The operator $L$ is given by
$\displaystyle
L(\boldsymbol{y})=\begin{dcases}\sum_{i=1}^{n}\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})/\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}{\|\boldsymbol{y}-J(\mathbb{y}_{i})\|},&\boldsymbol{y}\not\in\\{J(\mathbb{y}_{1}),\cdots,J(\mathbb{y}_{n})\\}\\\
\sum_{i\neq
p}\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})/\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}{\|J(\mathbb{y}_{p})-J(\mathbb{y}_{i})\|},&\boldsymbol{y}=J(\mathbb{y}_{p})\
(1\leq p\leq n)\ .\end{dcases}$
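In code, the Vardi–Zhang starting point can be computed directly from the formulas above. The sketch below (names ours) returns $\boldsymbol{y}^{0}=\mathbb{y}_{p}+t_{p}\mathbb{d}_{p}$ together with the index $p$; `w[i]` again denotes the normalized kernel weight and the rows of `Y` the embedded responses $J(\mathbb{y}_{i})$.

```python
import numpy as np

def vardi_zhang_init(Y, w):
    """Initial point y^0 = Y[p] + t_p * d_p for the RELR algorithm."""
    f = lambda y: (w * np.linalg.norm(y - Y, axis=1)).sum()
    p = int(np.argmin([f(Y[i]) for i in range(len(Y))]))
    mask = np.arange(len(Y)) != p
    diffs = Y[p] - Y[mask]                               # J(y_p) - J(y_i), i != p
    dists = np.linalg.norm(diffs, axis=1)
    R = ((w[mask] / dists)[:, None] * diffs).sum(axis=0)  # R_p
    Lp = (w[mask] / dists).sum()                          # L(J(y_p))
    d = -R / np.linalg.norm(R)                            # descent direction d_p
    t = (np.linalg.norm(R) - w[p]) / Lp                   # step length t_p
    return Y[p] + t * d, p
```

When $\|R_{p}\|\leq w_{p}$, the point $J(\mathbb{y}_{p})$ already satisfies the optimality condition and no move is needed; the sketch assumes the nondegenerate case $\|R_{p}\|>w_{p}$, in which the step is guaranteed to decrease $f$.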
Using the above initial point, we obtain the following result. Proposition 1
is of interest in its own right, since, without any further modification of
the algorithm, the carefully chosen starting value enables Algorithm 2 to
achieve the sublinear convergence rate $\mathcal{O}(1/t)$.
###### Proposition 1.
Suppose that the embedded response values are not all collinear. Then, for any
$t\geq 1$, we have
$\displaystyle
f(\boldsymbol{y}^{t})-f^{\ast}\leq\frac{L(J(\mathbb{y}_{p}))\|\boldsymbol{y}^{0}-\boldsymbol{y}^{\ast}\|^{2}}{t\left(\|R_{p}\|-\frac{K_{\mathbb{H}}(\mathbb{x}_{p}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\right)^{2}}\
,$ (13)
where $f^{\ast}$ is the minimum of $f$.
###### Proof.
To prove the sublinear convergence rate of Algorithm 2, we use a collection of
results from Beck and Sabach (2015). We first need to
derive an upper bound on the sequence $\\{L(\boldsymbol{y}^{t})\\}_{t\geq
0}$, where $\\{\boldsymbol{y}^{t}\\}_{t\geq 0}$ is the sequence generated by
the algorithm. For any $i=1,\cdots,n$ and any $\boldsymbol{y}$ satisfying
$f(\boldsymbol{y})\leq f(\boldsymbol{y}^{0})$, the following inequality
$\|\boldsymbol{y}-J(\mathbb{y}_{i})\|\geq
f(J(\mathbb{y}_{i}))-f(\boldsymbol{y}^{0})$ is satisfied (Lemma 8.1 in Beck
and Sabach, 2015). Combining this with the monotonicity
$f(\boldsymbol{y}^{t})\leq f(\boldsymbol{y}^{0})$ (Corollary 3.1 in Beck and
Sabach, 2015) and $f(J(\mathbb{y}_{p}))\leq f(J(\mathbb{y}_{i}))$, we have
$\|\boldsymbol{y}^{t}-J(\mathbb{y}_{i})\|\geq
f(J(\mathbb{y}_{p}))-f(\boldsymbol{y}^{0})$. Then, using the
definition of the operator $L$, we obtain the following result
$\displaystyle
L(\boldsymbol{y}^{t})=\sum_{i=1}^{n}\frac{\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}}{\|\boldsymbol{y}^{t}-J(\mathbb{y}_{i})\|}\leq\frac{1}{f(J(\mathbb{y}_{p}))-f(\boldsymbol{y}^{0})}\leq\frac{2L(J(\mathbb{y}_{p}))}{\left(\|R_{p}\|-\frac{K_{\mathbb{H}}(\mathbb{x}_{p}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\right)^{2}}.$
(14)
The last inequality uses Lemma 7.1 in Beck and Sabach (2015), which states
that for some $j\in\\{1,\cdots,n\\}$,
$f(J(\mathbb{y}_{j}))-f(J(\mathbb{y}_{j})+t_{j}\mathbb{d}_{j})\geq\left(\|R_{j}\|-K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})/\sum_{i=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})\right)^{2}/\left(2L(J(\mathbb{y}_{j}))\right)$.
Finally, from Lemma 5.2 in Beck and Sabach (2015), which states
$f(\boldsymbol{y}^{n+1})-f^{\ast}\leq
L(\boldsymbol{y}^{n})\left(\|\boldsymbol{y}^{n}-\boldsymbol{y}^{\ast}\|^{2}-\|\boldsymbol{y}^{n+1}-\boldsymbol{y}^{\ast}\|^{2}\right)/2$,
and the Fejér monotonicity of the sequence generated by Weiszfeld's
algorithm (i.e.,
$\|\boldsymbol{y}^{t+1}-\boldsymbol{y}\|\leq\|\boldsymbol{y}^{t}-\boldsymbol{y}\|$),
the upper bound on $f(\boldsymbol{y}^{t})-f^{\ast}$ can be derived as
follows.
$\displaystyle\sum_{n=0}^{t-1}\left(f(\boldsymbol{y}^{n+1})-f^{\ast}\right)$
$\displaystyle\leq\sum_{n=0}^{t-1}\frac{L(\boldsymbol{y}^{n})}{2}\left(\|\boldsymbol{y}^{n}-\boldsymbol{y}^{\ast}\|^{2}-\|\boldsymbol{y}^{n+1}-\boldsymbol{y}^{\ast}\|^{2}\right)$
$\displaystyle\leq\frac{L(J(\mathbb{y}_{p}))}{\left(\|R_{p}\|-\frac{K_{\mathbb{H}}(\mathbb{x}_{p}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\right)^{2}}\left(\|\boldsymbol{y}^{0}-\boldsymbol{y}^{\ast}\|^{2}-\|\boldsymbol{y}^{t}-\boldsymbol{y}^{\ast}\|^{2}\right)$
$\displaystyle\leq\frac{L(J(\mathbb{y}_{p}))\|\boldsymbol{y}^{0}-\boldsymbol{y}^{\ast}\|^{2}}{\left(\|R_{p}\|-\frac{K_{\mathbb{H}}(\mathbb{x}_{p}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\right)^{2}}.$
(15)
Since the sequence $\\{f(\boldsymbol{y}^{t})\\}_{t\geq 0}$ is nonincreasing,
$t(f(\boldsymbol{y}^{t})-f^{\ast})\leq\sum_{n=0}^{t-1}\left(f(\boldsymbol{y}^{n+1})-f^{\ast}\right)$,
which completes the proof. ∎
Even though we have achieved the improved convergence rate
$\mathcal{O}(1/t)$, there still exists a gap relative to the rate
$\mathcal{O}(1/t^{2})$ commonly attained by accelerated gradient-based
methods for smooth convex optimization problems. To further enhance the
convergence rate, we introduce a modified version of the algorithm for RELR.
Since the slow convergence is essentially caused by the inherent
nonsmoothness of the objective function $f$, it can be resolved by using a
smooth surrogate of $f$ whose minimizer coincides exactly with that of $f$.
Besides the improved convergence rate, what makes this approach even more
appealing is that for $t\geq 1$ the assumption
$\boldsymbol{y}^{t}\not\in\\{J(\mathbb{y}_{1}),\cdots,J(\mathbb{y}_{n})\\}$
on the sequence generated by Algorithm 2 is no longer required. Given the
lack of knowledge on conditions under which this assumption can be
guaranteed, the smooth approximation method has a clear practical advantage.
We finish this section by describing the derivation of the modified
Weiszfeld algorithm for RELR along with its convergence analysis. To this
end, we begin by considering the following smooth function
$\widetilde{f}_{s}(\boldsymbol{y}):\mathbb{R}^{D}\rightarrow\mathbb{R}$,
$\displaystyle\widetilde{f}_{s}(\boldsymbol{y})=\sum_{i=1}^{n}\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}g_{b_{i}}(\boldsymbol{y}-J(\mathbb{y}_{i})),$
(16)
where
$\displaystyle
g_{b_{i}}(\boldsymbol{y}-J(\mathbb{y}_{i}))=\begin{dcases}\|\boldsymbol{y}-J(\mathbb{y}_{i})\|,&\|\boldsymbol{y}-J(\mathbb{y}_{i})\|\geq{b_{i}}\\\
\frac{\|\boldsymbol{y}-J(\mathbb{y}_{i})\|^{2}}{2{b_{i}}}+\frac{{b_{i}}}{2},&\|\boldsymbol{y}-J(\mathbb{y}_{i})\|<{b_{i}}\
,\end{dcases}$ (17)
and $b_{i}=f(J(\mathbb{y}_{i}))-f(\boldsymbol{y}^{0})$. Note first that the
function $\widetilde{f}_{s}(\boldsymbol{y})$ is convex and continuously
differentiable over $\mathbb{R}^{D}$, with a Lipschitz continuous gradient,
i.e.,
$\|\nabla\widetilde{f}_{s}(\boldsymbol{y})-\nabla\widetilde{f}_{s}(\boldsymbol{z})\|\leq
L_{s}\|\boldsymbol{y}-\boldsymbol{z}\|$ for all
$\boldsymbol{y},\boldsymbol{z}\in\mathbb{R}^{D}$, with the Lipschitz
constant
$\displaystyle
L_{s}=\sum_{i=1}^{n}\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})/\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}{f(J(\mathbb{y}_{i}))-f(\boldsymbol{y}^{0})}\
.$
Also, observing that
$g_{b_{i}}(\boldsymbol{y}-J(\mathbb{y}_{i}))\geq\|\boldsymbol{y}-J(\mathbb{y}_{i})\|$
for all $\boldsymbol{y}\in\mathbb{R}^{D}$, it follows that
$\widetilde{f}_{s}(\boldsymbol{y})\geq f(\boldsymbol{y})$; that is,
$\widetilde{f}_{s}$ is an upper bound of
$f$. Moreover, it follows from Lemma 8.1 in Beck and Sabach
(2015) that $\|\boldsymbol{y}^{\ast}-J(\mathbb{y}_{i})\|\geq
f(J(\mathbb{y}_{i}))-f(\boldsymbol{y}^{0})=b_{i}$ holds for $i=1,\cdots,n$,
where $\boldsymbol{y}^{\ast}$ is the strict global minimizer of the original
objective function $f$. Then, by the construction in (17),
$g_{b_{i}}(\boldsymbol{y}^{\ast}-J(\mathbb{y}_{i}))=\|\boldsymbol{y}^{\ast}-J(\mathbb{y}_{i})\|$
is always satisfied, which implies
$\widetilde{f}_{s}(\boldsymbol{y}^{\ast})=f(\boldsymbol{y}^{\ast})<f(\boldsymbol{y})\leq\widetilde{f}_{s}(\boldsymbol{y})$
for any $\boldsymbol{y}\neq\boldsymbol{y}^{\ast}$.
Thus, the minimizer of the inner optimization problem
in (11) must also be the global minimizer of (16), i.e.,
$\boldsymbol{y}^{\ast}=\operatornamewithlimits{argmin}_{\boldsymbol{y}}\widetilde{f}_{s}(\boldsymbol{y})=\operatornamewithlimits{argmin}_{\boldsymbol{y}}f(\boldsymbol{y})$.
In this sense, $\widetilde{f}_{s}(\boldsymbol{y})$ smoothly
approximates the original objective function $f(\boldsymbol{y})$. Therefore,
rather than working directly with $f(\boldsymbol{y})$, we aim to
minimize $\widetilde{f}_{s}(\boldsymbol{y})$, which leads to Algorithm 3.
Algorithm 3 Fast Weiszfeld algorithm for RELR
1:$n$ observations
$(X,Y)=\\{(\mathbb{x}_{1},\mathbb{y}_{1}),\cdots,(\mathbb{x}_{n},\mathbb{y}_{n})\\}$,
evaluation point $\boldsymbol{x}$
2:$s_{1}=1,\boldsymbol{u}^{1}=\boldsymbol{y}^{0}\in\mathbb{R}^{D}\ \text{and}\
\varepsilon$
3:while $\|\boldsymbol{y}^{t}-\boldsymbol{y}^{t-1}\|>\varepsilon$ do
4:For $t=1,2,\cdots$
5: Update
$\displaystyle\boldsymbol{y}^{t}=\boldsymbol{u}^{t}-\frac{1}{L_{s}}\nabla\widetilde{f}_{s}(\boldsymbol{u}^{t})$,
where
$\displaystyle\nabla\widetilde{f}_{s}(\boldsymbol{u}^{t})=\sum_{i=1}^{n}\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\times\begin{cases}\dfrac{\boldsymbol{u}^{t}-J(\mathbb{y}_{i})}{\|\boldsymbol{u}^{t}-J(\mathbb{y}_{i})\|},\
&\text{if}\ \|\boldsymbol{u}^{t}-J(\mathbb{y}_{i})\|\geq b_{i}\\\ \\\
\dfrac{\boldsymbol{u}^{t}-J(\mathbb{y}_{i})}{b_{i}},\ &\text{if}\
\|\boldsymbol{u}^{t}-J(\mathbb{y}_{i})\|<b_{i}\end{cases}$
6: Update $\displaystyle s_{t+1}=\frac{1+\sqrt{1+4s_{t}^{2}}}{2}$
7: Update
$\displaystyle\boldsymbol{u}^{t+1}=\boldsymbol{y}^{t}+\left(\frac{s_{t}-1}{s_{t+1}}\right)(\boldsymbol{y}^{t}-\boldsymbol{y}^{t-1})$
8:end while
9:Estimated robust estimator
$\widehat{F}_{RE}(\boldsymbol{x})=J^{-1}(\mathcal{P}(\boldsymbol{y}^{\ast}))$
$\triangleright$ $\boldsymbol{y}^{\ast}$ the optimal value
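For concreteness, the accelerated iteration of Algorithm 3 can be sketched as follows (names ours); `b` holds the smoothing thresholds $b_{i}=f(J(\mathbb{y}_{i}))-f(\boldsymbol{y}^{0})$, and the two branches of $\nabla\widetilde{f}_{s}$ in step 5 collapse into a single `np.maximum` over the denominators.

```python
import numpy as np

def smoothed_grad(u, Y, w, b):
    """Gradient of the smoothed objective f_s of (16)-(17)."""
    diffs = u - Y
    d = np.linalg.norm(diffs, axis=1)
    denom = np.maximum(d, b)            # ||u - J(y_i)|| if >= b_i, else b_i
    return ((w / denom)[:, None] * diffs).sum(axis=0)

def fast_weiszfeld(Y, w, y0, b, n_iter=2000):
    """FISTA-type iteration of Algorithm 3 with fixed step 1/L_s."""
    Ls = (w / b).sum()                  # Lipschitz constant L_s
    u = np.asarray(y0, dtype=float)
    y_prev = u.copy()
    s = 1.0
    for _ in range(n_iter):
        y = u - smoothed_grad(u, Y, w, b) / Ls
        s_next = (1.0 + np.sqrt(1.0 + 4.0 * s**2)) / 2.0
        u = y + ((s - 1.0) / s_next) * (y - y_prev)
        y_prev, s = y, s_next
    return y_prev
```

The sketch assumes a starting value $\boldsymbol{y}^{0}$ with $b_{i}>0$ for all $i$ (as guaranteed by the Vardi–Zhang initial point), so that $L_{s}$ is finite.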
Now let $\\{\boldsymbol{y}^{t}\\}_{t\geq 0}$ be the sequence generated by
Algorithm 3; then we obtain
$\displaystyle\widetilde{f}_{s}(\boldsymbol{y}^{t})-f^{\ast}\leq\frac{2L_{s}\|\boldsymbol{y}^{0}-\boldsymbol{y}^{\ast}\|^{2}}{(t+1)^{2}}.$
(18)
The above convergence result follows immediately from Theorem 9.1 in Beck and
Sabach (2015); see Beck and Teboulle (2009) for a proof. Furthermore, in
our RELR setting, $L_{s}$ is bounded from above by
$\displaystyle L_{s}$
$\displaystyle=\sum_{i=1}^{n}\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})/\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}{f(J(\mathbb{y}_{i}))-f(\boldsymbol{y}^{0})}\leq\sum_{i=1}^{n}\frac{K_{\mathbb{H}}(\mathbb{x}_{i}-\boldsymbol{x})/\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}{f(J(\mathbb{y}_{p}))-f(\boldsymbol{y}^{0})}$
$\displaystyle=\frac{1}{f(J(\mathbb{y}_{p}))-f(\boldsymbol{y}^{0})}\leq\frac{2L(J(\mathbb{y}_{p}))}{\left(\|R_{p}\|-\frac{K_{\mathbb{H}}(\mathbb{x}_{p}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\right)^{2}}\
,$
where the last inequality uses (14). Putting all the pieces together and using
$\widetilde{f}_{s}(\boldsymbol{y})-f^{\ast}\geq f(\boldsymbol{y})-f^{\ast}$,
we see that the fast Weiszfeld algorithm for RELR attains the following
$\mathcal{O}(1/t^{2})$ convergence rate:
$\displaystyle
f(\boldsymbol{y}^{t})-f^{\ast}\leq\frac{4\|\boldsymbol{y}^{0}-\boldsymbol{y}^{\ast}\|^{2}L(J(\mathbb{y}_{p}))}{\left\\{(t+1)\left(\|R_{p}\|-\frac{K_{\mathbb{H}}(\mathbb{x}_{p}-\boldsymbol{x})}{\sum_{j=1}^{n}K_{\mathbb{H}}(\mathbb{x}_{j}-\boldsymbol{x})}\right)\right\\}^{2}}\
,$
which is a substantial improvement on the convergence rate in (13).
## 5 Application of RELR to the Planar Shape Space
In this section, the benefit of the robust extrinsic local regression over
extrinsic regression is demonstrated through simulation studies. While the
simulations conducted in this paper focus only on the case of the planar
shape, the proposed method can be applied to other manifolds in a
straightforward manner. Recalling from Section 3.2, we let
$\mathbb{Y}=(z_{1},\cdots,z_{k})$ be the response variable, a planar
shape of $k$-ads defined on $\Sigma_{2}^{k}$, and
$\mathbb{X}\in\mathbb{R}^{p}$ a Euclidean predictor. By slightly modifying
the polar-coordinate-based scheme in Lin et al. (2017), we generate synthetic
planar shape data in the following manner:
$\displaystyle\textbf{Generate Covariate :}\ \mathbb{X}=(X_{1},\cdots,X_{p})\
,\ \text{where}\ X_{i}\sim\text{Uniform}(a,b)$
$\displaystyle\textbf{Coefficient :}\
\boldsymbol{\beta}=(\beta_{1},\cdots,\beta_{k})=(1/k^{2},\cdots,k/k^{2})\in\mathbb{R}^{k}$
$\displaystyle\textbf{Generate Intercept angles :}\
{\boldsymbol{\phi_{0}}}=({\boldsymbol{\phi_{0}}}_{1},\cdots,{\boldsymbol{\phi_{0}}}_{k})=(1/2,\cdots,k/2)\in\mathbb{R}^{k}$
$\displaystyle\textbf{Generate Intercept radius :}\
{\boldsymbol{\gamma_{0}}}=({\boldsymbol{\gamma_{0}}}_{1},\cdots,{\boldsymbol{\gamma_{0}}}_{k})=(0.1,\cdots,0.1)\in\mathbb{R}^{k}$
$\displaystyle\textbf{Generate Shape angles :}\
\phi_{j}^{\prime}\sim\operatorname{Normal}\left({\boldsymbol{\phi_{0}}}_{j}+\beta_{j}\sum_{i=1}^{p}X_{i},\sigma_{\phi}^{2}\right)$
$\displaystyle\textbf{Standardize angles :}\
{\boldsymbol{\phi}}=(\phi_{1},\cdots,\phi_{k}),\ \text{where}\
\phi_{j}={\phi_{j}^{\prime}}\pmod{2\pi}$ $\displaystyle\textbf{Generate Shape
radius :}\ {\boldsymbol{\gamma}}=(\gamma_{1},\cdots,\gamma_{k}),\
\text{where}\
\gamma_{j}\sim\operatorname{Normal}\left({\boldsymbol{\gamma_{0}}}_{j}+\beta_{j}\sum_{i=1}^{p}X_{i},\sigma_{\gamma}^{2}\right)$
$\displaystyle\textbf{Convert to complex form for the
landmark}:z_{j}=\gamma_{j}(\cos(\phi_{j})+i\sin(\phi_{j})).$
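The generation scheme above can be sketched as a minimal NumPy routine. The function name, the default noise levels $\sigma_{\phi},\sigma_{\gamma}$, and the seed are ours, and the subsequent Procrustes step removing translation and scale effects is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_planar_shapes(n, k, p, a=0.0, b=1.0,
                           sigma_phi=0.1, sigma_gamma=0.05):
    """Draw n synthetic planar shapes of k landmarks via the scheme above
    (translation/scale effects are not yet removed here)."""
    X = rng.uniform(a, b, size=(n, p))            # covariates X_i ~ Uniform(a, b)
    beta = np.arange(1, k + 1) / k**2             # (1/k^2, ..., k/k^2)
    phi0 = np.arange(1, k + 1) / 2                # intercept angles (1/2, ..., k/2)
    gamma0 = np.full(k, 0.1)                      # intercept radii
    t = X.sum(axis=1)[:, None]                    # sum_i X_i, one per sample
    phi = rng.normal(phi0 + beta * t, sigma_phi) % (2 * np.pi)   # standardized angles
    gamma = rng.normal(gamma0 + beta * t, sigma_gamma)           # radii
    return X, gamma * (np.cos(phi) + 1j * np.sin(phi))           # complex landmarks
```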
Further, in order to conduct simulation studies under an outlier-contaminated
setting, we randomly add a fixed number of outliers to the response variable,
i.e.,
$\mathbb{Y}^{\ast}=\mathbb{Y}+{\bf{\Psi}}=(z_{1}^{\ast},\cdots,z_{k}^{\ast})$.
The outliers ${\bf\Psi}\in\mathbb{C}^{k}$ were generated from
the $k$-dimensional complex normal distribution
$\mathbb{C}N(\boldsymbol{\mu},\bf{\Gamma})$, where
$\boldsymbol{\mu}=E({\bf\Psi})$ and
${\bf\Gamma}=E\left(({\bf\Psi}-\boldsymbol{\mu})({\bf\Psi}-\boldsymbol{\mu})^{H}\right)$.
We note that since the contaminated response $\mathbb{Y}^{\ast}$ is no longer
an element of $\Sigma_{2}^{k}$, both translation and scale effects have to be
filtered out. To help understand the data-generating process, a
representative illustration of the simulated data is presented in the left
panel of Figure 4. For illustrative purposes, the planar shape data are
generated under the univariate setting, and only the first landmark is
contaminated. The right panel of the same figure shows the estimated curves
along with the true underlying function. As shown in the figure, the curve
obtained from the usual ELR (red) deviates substantially from the true curve
(blue), while the curve corresponding to RELR (green) almost overlaps the
true curve.
Figure 4: Left : the example of the simulated data (univariate case) with
$r=0.2$, each point is colored according to the value of the predictor
$X_{1}$. Right : The result of estimations. The optimal bandwidths of RELR and
ELR, selected by the 5-fold cross validation are $h_{\text{Med}}=1.37$ and
$h_{\text{Mean}}=2.27$, respectively.
One important remaining issue of the proposed RELR model not highlighted in
the previous section is bandwidth selection. It is well known that the
performance of local-polynomial-type methods relies significantly on a tuning
parameter $h$, called the bandwidth, which plays a crucial role in
controlling the degree of smoothing. Specifically, a large $h$ leads to a
smooth estimate but, by failing to account for local variation, may introduce
significant estimation bias, whereas a small $h$ produces a jagged estimate
with large variance. Thus $h$ should be selected to balance the trade-off
between variance and squared bias. Throughout the simulations, we consider a
smoothing matrix that gives the same bandwidth in all $p$ dimensions, i.e.,
$\mathbb{H}=h{\bf{I}}_{p}$. Although bandwidth selection methods were
extensively studied in the early days of nonparametric regression modeling,
in this paper we adopt 5-fold cross-validation for the sake of simplicity.
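As an illustration, 5-fold cross-validation over a bandwidth grid can be sketched as below; `fit_predict` stands for any local estimator (a plain Nadaraya–Watson smoother is used in the test as a stand-in) and `dist` for the discrepancy used to score predictions, e.g. the full Procrustes distance in our setting. All names are ours.

```python
import numpy as np

def cv_bandwidth(X, Y, fit_predict, h_grid, dist, n_folds=5, seed=0):
    """Pick the bandwidth minimizing the n_folds-fold cross-validated error.

    fit_predict(X_tr, Y_tr, X_te, h) -> predictions for X_te;
    dist(y, y_hat)                   -> scalar discrepancy between responses.
    """
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, n_folds)
    scores = []
    for h in h_grid:
        err = 0.0
        for fold in folds:
            train = np.setdiff1d(idx, fold)
            preds = fit_predict(X[train], Y[train], X[fold], h)
            err += sum(dist(y, y_hat) for y, y_hat in zip(Y[fold], preds))
        scores.append(err / len(X))
    return h_grid[int(np.argmin(scores))], scores
```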
The performance of RELR is evaluated by comparing its results with those of
ELR in terms of two measures associated with the full Procrustes distance,
$\rho_{\text{FP}}=\sqrt{1-|\langle\mathbb{z}_{1},\mathbb{z}_{2}\rangle|^{2}}$,
where $\mathbb{z}_{1},\mathbb{z}_{2}\in\Sigma_{2}^{k}$. First, we consider
$\operatorname{MD}_{\text{obs}}=\sum_{i=1}^{n}\rho_{\text{FP}}(\mathbb{y}_{i},\widehat{f}(\mathbb{x}_{i}))/n$,
which measures the difference between the estimated value
$\widehat{f}(\mathbb{x}_{i})$ and the observed value $\mathbb{y}_{i}$.
Moreover, to assess whether the estimator captures the
true signal, it is more appropriate to examine the following root-mean-squared-error-type
measure,
$\operatorname{RMSE}_{\text{true}}=\sqrt{\sum_{i=1}^{n}\rho_{\text{FP}}(f_{0}(\mathbb{x}_{i}),\widehat{f}(\mathbb{x}_{i}))^{2}/n}$,
which quantifies the difference between the prediction and the true value
$f_{0}(\mathbb{x}_{i})$. To investigate how the methods are affected by
outliers, the contamination level $r$ was varied on an evenly spaced grid
over $[0,0.3]$. The values (averaged over 20 replications for each setting)
of $\operatorname{MD}_{\text{obs}}$ and $\operatorname{RMSE}_{\text{true}}$
with $n=200,p=1$ are presented in Figure 5 (left and right panels,
respectively).
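The two measures are straightforward to compute once the full Procrustes distance is implemented. The sketch below (names ours) assumes shapes are represented as centered, unit-norm complex landmark vectors (preshapes):

```python
import numpy as np

def preshape(z):
    """Center and scale a configuration of complex landmarks."""
    z = z - z.mean()
    return z / np.linalg.norm(z)

def rho_fp(z1, z2):
    """Full Procrustes distance: sqrt(1 - |<z1, z2>|^2)."""
    inner = np.vdot(z1, z2)                 # Hermitian inner product
    return np.sqrt(max(0.0, 1.0 - abs(inner) ** 2))

def md_obs(Y, Y_hat):
    """Mean distance between observed and fitted shapes."""
    return np.mean([rho_fp(y, yh) for y, yh in zip(Y, Y_hat)])

def rmse_true(F0, Y_hat):
    """RMSE-type measure against the true regression function values."""
    return np.sqrt(np.mean([rho_fp(f, yh) ** 2 for f, yh in zip(F0, Y_hat)]))
```

Since $\rho_{\text{FP}}$ depends on $z_{1},z_{2}$ only through $|\langle z_{1},z_{2}\rangle|$, it is invariant to rotations $z\mapsto e^{i\theta}z$, as a distance on $\Sigma_{2}^{k}$ should be.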
Figure 5: Results of the univariate case, as a function of the contamination
level $r=[0,0.3]$. Left : Averaged value of $\operatorname{MD}_{\text{obs}}$.
Right: Averaged value of RMSE against underlying true regression function
$f_{0}(x_{i})$. To assist in the visualization, we also give a loess smooth
with Monte-Carlo standard error.
Overall, the performance curves obtained across the experimental conditions
are roughly linear and slope upward from left to right as the contamination
rate runs from 0 to 0.3, indicating that the performance of both RELR and ELR
degrades as $r$ increases. We also found that the values of both
$\operatorname{MD}_{\text{obs}}$ and $\operatorname{RMSE}_{\text{true}}$
corresponding to RELR are consistently lower than those of ELR, except at
$r=0$. The result in the left panel makes intuitive sense, as RELR minimizes
the empirical risk function associated with the unsquared extrinsic distance,
which differs from the full Procrustes distance only by a multiplicative
constant. In the right panel, however, the slope obtained for ELR,
inclined toward the upper-right point $(0.3,0.2)$ starting from the
origin, is 1.67, approximately four times greater than that of RELR.
This clearly suggests that RELR outperforms ELR by a wide margin in
identifying true patterns of shape change that are masked by noise. As shown
in Table 2, similar results were obtained from further experiments in a
multivariate setting ($p=3$) with different sample sizes. In this simulation
study, the proposed RELR performs uniformly well across all
examined scenarios. In contrast, the performance of ELR appears
prone to being adversely affected by the presence of outliers and noise,
which is consistent with what we would expect from the location estimation
performed in Section 3.2.
| $N$ | Measure | Method | $r=0$ | $r=0.05$ | $r=0.1$ | $r=0.2$ | $r=0.3$ |
|---|---|---|---|---|---|---|---|
| 50 | $\operatorname{MD}_{\text{obs}}$ | ELR | 0.0323 | 0.0716 | 0.1389 | 0.2481 | 0.3505 |
| | | RELR | 0.0320 | 0.0662 | 0.1192 | 0.2052 | 0.2806 |
| | $\operatorname{RMSE}_{\text{true}}$ | ELR | 0.0233 | 0.0348 | 0.0692 | 0.1446 | 0.2326 |
| | | RELR | 0.0242 | 0.0257 | 0.0400 | 0.0669 | 0.0781 |
| 100 | $\operatorname{MD}_{\text{obs}}$ | ELR | 0.0333 | 0.0810 | 0.1306 | 0.2441 | 0.3524 |
| | | RELR | 0.0338 | 0.0758 | 0.1200 | 0.2098 | 0.2884 |
| | $\operatorname{RMSE}_{\text{true}}$ | ELR | 0.0242 | 0.0358 | 0.0550 | 0.1279 | 0.2219 |
| | | RELR | 0.0255 | 0.0274 | 0.0414 | 0.0617 | 0.0698 |
| 200 | $\operatorname{MD}_{\text{obs}}$ | ELR | 0.0336 | 0.0802 | 0.1351 | 0.2486 | 0.3516 |
| | | RELR | 0.0341 | 0.0768 | 0.1235 | 0.2126 | 0.2880 |
| | $\operatorname{RMSE}_{\text{true}}$ | ELR | 0.0237 | 0.0327 | 0.0566 | 0.1226 | 0.2093 |
| | | RELR | 0.0248 | 0.0277 | 0.0386 | 0.0586 | 0.0720 |

(Columns $r=0$ through $r=0.3$ give the ratio of outliers.)
Table 2: Comparisons of RELR and ELR in terms of
$\operatorname{MD}_{\text{obs}}$ and $\operatorname{RMSE}_{\text{true}}$.
## 6 Discussion
In this paper, we have proposed robust statistical methods on manifolds
and demonstrated that, when outliers exist in the dataset, they achieve
considerable improvements over existing methods. Building upon the
idea of the geometric median and the extrinsic framework, our method takes
full advantage of what the two approaches provide: (i) the robustness
property is attained by employing the unsquared extrinsic distance induced by
the Euclidean embedding, which prevents the estimator from amplifying the
effects of noise and outliers; (ii) our approach can be universally
adapted to any manifold on which a proper Euclidean embedding is available.
For example, RELR can be straightforwardly implemented in the space of
$p\times p$ symmetric positive definite (SPD) matrices, which has
emerged as a common data form in neuroimaging. Specifically, $3\times 3$ SPD
matrices arise as data elements in diffusion tensor imaging (DTI). In
this space, the equivariant embedding is given by the matrix logarithm
from the SPD matrices to the symmetric matrices,
$\log:\text{SPD}(3)\rightarrow\text{Sym}(3);\ \mathbb{X}=\mathbb{U}{\bf\Lambda}\mathbb{U}^{-1}\in\text{SPD}(3)\mapsto\text{log}(\mathbb{X})=\mathbb{U}\log{({\bf\Lambda})}\mathbb{U}^{-1}$,
and the extrinsic distance is defined via the Frobenius norm, i.e.,
$\rho_{E}(\mathbb{X}_{1},\mathbb{X}_{2})=\|\log{(\mathbb{X}_{1})}-\log{(\mathbb{X}_{2})}\|_{F}$.
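As a minimal sketch (function names ours), the log embedding and the resulting extrinsic distance on SPD matrices can be written via the eigendecomposition:

```python
import numpy as np

def spd_log(M):
    """Log embedding SPD -> Sym via the eigendecomposition
    M = U diag(lam) U^T  ->  log(M) = U diag(log lam) U^T."""
    lam, U = np.linalg.eigh(M)          # eigh: for symmetric/SPD matrices
    return U @ np.diag(np.log(lam)) @ U.T

def extrinsic_dist_spd(M1, M2):
    """Frobenius distance between the matrix logarithms."""
    return np.linalg.norm(spd_log(M1) - spd_log(M2), ord="fro")
```

For general (non-symmetric) matrices one would instead use `scipy.linalg.logm`, but for SPD inputs the eigendecomposition route is standard and keeps the output exactly symmetric.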
While our main focus has been on developing robust statistical methods for
estimating the central location and for the regression problem on manifolds,
some important problems remain to be investigated in future
work. We end the paper by outlining some promising directions for
research that may immediately benefit from our proposed framework. First,
borrowing ideas from $K$-medians algorithms (Cardot et al.,
2012), clustering on manifolds can be carried out in a more robust manner.
Second, an intriguing research question raised by our study is the possible
extension of the notion of the quantile to manifolds. Unlike measures of
central tendency (the mean and median), the generalization of the quantile to
non-Euclidean spaces is nontrivial, because it has to take into account the
direction and magnitude of changes from the central location. To the best of
our knowledge, no research has been reported in this context yet, which
may be due in part to the difficulty of defining directions on the surface
of a manifold. This challenge may be tackled by combining the geometric
quantile (Chaudhuri, 1996), which has the geometric median as a special case,
with our extrinsic framework. We expect that it will be especially useful in
medical imaging analysis: because quantile information enables us to
quantify the pathological state or abnormality of a certain organ shape, it
will be a useful tool for diagnosing and prognosticating critically ill
patients.
## References
* Beck and Sabach (2015) Beck, A. and Sabach, S. (2015). Weiszfeld’s method: Old and new results. Journal of Optimization Theory and Applications, 164:1–40.
* Beck and Teboulle (2009) Beck, A. and Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 2:183–202.
* Bhattacharya and Bhattacharya (2012) Bhattacharya, A. and Bhattacharya, R. (2012). Nonparametric Inference on Manifolds : With Applications to Shape Spaces. IMS Monograph #2. Cambridge University Press.
* Bhattacharya et al. (2011) Bhattacharya, R. N., Ellingson, L., Liu, X., Patrangenaru, V., and Crane, M. (2011). Extrinsic analysis on manifolds is computationally faster than intrinsic analysis with applications to quality control by machine vision. Applied Stochastic Models in Business and Industry, 28:222–235.
* Bhattacharya and Patrangenaru (2003) Bhattacharya, R. N. and Patrangenaru, V. (2003). Large sample theory of intrinsic and extrinsic sample means on manifolds-part i. Annals of Statistics, 31:1–29.
* Bhattacharya and Patrangenaru (2005) Bhattacharya, R. N. and Patrangenaru, V. (2005). Large sample theory of intrinsic and extrinsic sample means on manifolds- part ii. Annals of Statistics, 33:1211–1245.
* Cardot et al. (2017) Cardot, H., Cénac, P., and Godichon-Baggioni, A. (2017). Online estimation of the geometric median in hilbert spaces: Nonasymptotic confidence balls. Ann. Statist., 45(2):591–614.
* Cardot et al. (2012) Cardot, H., Cénac, P., and Monnez, J. M. (2012). A fast and recursive algorithm for clustering large datasets with k-medians. Computational Statistics & Data Analysis, 56(6):1434 – 1449.
* Cardot et al. (2013) Cardot, H., Cénac, P., and Zitt, P.-A. (2013). Efficient and fast estimation of the geometric median in hilbert spaces with an averaged stochastic gradient algorithm. Bernoulli, 19(1):18–43.
* Chaudhuri (1996) Chaudhuri, P. (1996). On a geometric notion of quantiles for multivariate data. Journal of the American Statistical Association, 91(434):862–872.
* Chikuse (2003) Chikuse, Y. (2003). Statistics on Special Manifolds. Springer-Verlag New York.
* Cornea et al. (2017) Cornea, E., Zhu, H., Kim, P., Ibrahim, J. G., and the Alzheimer’s Disease Neuroimaging Initiative (2017). Regression models on riemannian symmetric spaces. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(2):463–482.
* Dryden and Mardia (1991) Dryden, I. L. and Mardia, K. V. (1991). General shape distributions in a plane. Advances in Applied Probability, 23:259–276.
* Dryden and Mardia (1998) Dryden, I. L. and Mardia, K. V. (1998). Statistical Shape Analysis. Wiley.
* Fan and Gijbels (1996) Fan, J. and Gijbels, I. (1996). Local Polynomial Modelling and Its Applications. Chapman & Hall, London.
* Fisher et al. (1987) Fisher, N. I., Lewis, T., and Embleton, B. J. J. (1987). Statistical Analysis of Spherical Data. Cambridge University Press.
* Fletcher et al. (2009) Fletcher, P. T., Venkatasubramanian, S., and Joshi, S. (2009). The geometric median on riemannian manifolds with application to robust atlas estimation. NeuroImage, 45:S143–S152.
* Fréchet (1948) Fréchet, M. (1948). Les éléments aléatoires de nature quelconque dans un espace distancié. Annales de l’Institut Henri Poincaré, 10:215–310.
* Haldane (1948) Haldane, J. B. S. (1948). Note on the median of a multivariate distribution. Biometrika, 35(3-4):414–417.
* Hampel et al. (1986) Hampel, F., Ronchetti, E., Rousseeuw, P., and Stahel, W. (1986). Robust Statistics: The Approach Based on Influence Functions. Wiley.
* Huang et al. (2015) Huang, C., Styner, M., and Zhu, H. (2015). Clustering high-dimensional landmark-based two-dimensional shape data. Journal of the American Statistical Association, 110(511):946–961.
* Huber (1964) Huber, P. J. (1964). Robust estimation of a location parameter. Ann. Math. Statist., 35(1):73–101.
* Huber and Ronchetti (2009) Huber, P. J. and Ronchetti, E. M. (2009). Robust Statistics. Wiley, 2 edition.
* Jammalamadaka and SenGupta (2001) Jammalamadaka, S. R. and SenGupta, A. (2001). Topics in Circular Statistics. Series on Multivariate analysis Vol.5. World Scientific Press, Singapore.
* Kemperman (1987) Kemperman, J. (1987). The median of a finite measure on a banach space. In Statistical Data Analysis Based on the $L_{1}$-norm and Related Methods. Amsterdam : North-Holland.
* Kendall (1984) Kendall, D. G. (1984). Shape manifolds, procrustean metrics, and complex projective spaces. Bulletin of the London Mathematical Society, 16:81–121.
* Kent (1992) Kent, J. T. (1992). New directions in shape analysis. In: The Art of Statistical Science. John Wiley & Sons, Ltd, Chichester.
* Kuhn (1973) Kuhn, H. W. (1973). A note on fermat’s problem. Math. Program., 4:98–107.
* Lin et al. (2017) Lin, L., Thomas, B. S., Zhu, H., and Dunson, D. B. (2017). Extrinsic local regression on manifold-valued data. Journal of the American Statistical Association, 112(519):1261–1273.
* Lopuhaä and Rousseeuw (1991) Lopuhaä, H. P. and Rousseeuw, P. J. (1991). Breakdown points of affine equivariant estimators of multivariate location and covariance matrices. Ann. Statist., 19(1):229–248.
* Mardia and Dryden (1989) Mardia, K. V. and Dryden, I. L. (1989). The statistical analysis of shape data. Biometrika, 76:271–281.
* Mardia and Jupp (1999) Mardia, K. V. and Jupp, P. E. (1999). Directional Statistics. Wiley.
* Minsker (2015) Minsker, S. (2015). Geometric median and robust estimation in banach spaces. Bernoulli, 21(4):2308–2335.
* Möttönen et al. (2010) Möttönen, J., Nordhausen, K., and Oja, H. (2010). Asymptotic theory of the spatial median. In Nonparametrics and Robustness in Modern Statistical Inference and Time Series Analysis: A Festschrift in honor of Professor Jana Jurečková, volume 7 of IMS Collections, pages 182–193. Institute of Mathematical Statistics, Beachwood, Ohio, USA.
* Petersen and Müller (2019) Petersen, A. and Müller, H.-G. (2019). Fréchet regression for random objects with euclidean predictors. Ann. Statist., 47(2):691–719.
* Shi et al. (2009) Shi, X., Styner, M., Lieberman, J., Ibrahim, J. G., Lin, W., and Zhu, H. (2009). Intrinsic regression models for manifold-valued data. In Yang, G.-Z., Hawkes, D., Rueckert, D., Noble, A., and Taylor, C., editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009, pages 192–199, Berlin, Heidelberg. Springer Berlin Heidelberg.
* Small (1990) Small, C. G. (1990). A survey of multidimensional medians. International Statistical Review, 58(3):263–277.
* Vardi and Zhang (2001) Vardi, Y. and Zhang, C. H. (2001). A modified Weiszfeld algorithm for the Fermat-Weber location problem. Mathematical Programming, 90:559–566.
* Weber (1929) Weber, A. (1929). Über den Standort der Industrien (Alfred Weber’s Theory of the Location of Industries). Univ. Chicago Press.
* Weiszfeld (1937) Weiszfeld, E. (1937). Sur le point pour lequel la somme des distances de n points donnés est minimum. Tohoku Mathematical Journal, 43:355–386.
* Whitney (1944) Whitney, H. (1944). The self-intersections of a smooth n-manifold in 2n-space. The Annals of Mathematics, 45:220–246.
* Yuan et al. (2012) Yuan, Y., Zhu, H., Lin, W., and Marron, J. S. (2012). Local polynomial regression for symmetric positive definite matrices. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 74(4):697–719.
* Zhu et al. (2009) Zhu, H., Chen, Y., Ibrahim, J. G., Li, Y., Hall, C., and Lin, W. (2009). Intrinsic regression models for positive-definite matrices with applications to diffusion tensor imaging. Journal of the American Statistical Association, 104(487):1203–1212. PMID: 20174601.
# Revisiting the four-quark operator matrix elements for the lifetime of
$\Lambda_{b}$
Zhen-Xing Zhao<EMAIL_ADDRESS>
School of Physical Science and Technology,
Inner Mongolia University, Hohhot 010021, China
###### Abstract
Heavy quark expansion can nicely explain the lifetime of $\Lambda_{b}$.
However, there still exist sizable uncertainties from the four-quark operator
matrix elements at order $1/m_{b}^{3}$. In this work, the leading order
results of the four-quark operator matrix elements
$\langle\Lambda_{b}|(\bar{b}q)_{V-A}(\bar{q}b)_{V-A}|\Lambda_{b}\rangle$ and
$\langle\Lambda_{b}|(\bar{b}b)_{V-A}(\bar{q}q)_{V-A}|\Lambda_{b}\rangle$ are
obtained using full QCD sum rules. Contributions from dimension-0, 3, and 5
are considered. It turns out that the dimension-3 and dimension-5
contributions are proportional to the mass of the $u/d$ quark. A stable Borel
region can be found, and for this reason the uncertainties caused by the QCD
sum rule parameters are small. The leading logarithmic corrections are also
considered, which turn out
to be a little destructive. Our results are close to the lower bound of the
existing theoretical predictions.
## I Introduction
Recently, LHCb updated a measurement of the $\Omega_{c}$ lifetime with
$\tau(\Omega_{c}^{0})=268\pm 24\pm 10\pm 2$ fs Aaij:2018dso , which is nearly
four times larger than the current world-average value
$\tau(\Omega_{c}^{0})=69\pm 12$ fs Tanabashi:2018oca . On the one hand, the
experimental collaborations are re-examining their measurements; on the other
hand, a theoretical explanation is urgently needed. Some attempts have been made
to solve this puzzle Cheng:2018rkz . However, there is still a lack of more
reliable calculation based on QCD for the hadronic matrix elements.
In fact, there was also a conflict between theory and experiment for the
lifetime ratio $\tau(\Lambda_{b})/\tau(B_{d})$ as early as 1996. Taking
$\tau(B^{0})=(1.519\pm 0.004)$ ps in PDG2020 Zyla:2020zbs as a benchmark, for
$\tau(\Lambda_{b})=(1.14\pm 0.08)$ ps in PDG1996 Barnett:1996hr , one can find
the ratio:
$\tau(\Lambda_{b}^{0})/\tau(B^{0})=0.75\pm 0.05.$ (1)
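As a quick cross-check of Eq. (1), the quoted central value and uncertainty follow from the two lifetimes; a minimal sketch, assuming standard uncorrelated error propagation (the text does not state how the uncertainty was obtained):

```python
import math

# Lifetimes quoted in the text: tau(Lambda_b) from PDG1996, tau(B0) from PDG2020.
tau_Lb, dtau_Lb = 1.14, 0.08    # ps
tau_B0, dtau_B0 = 1.519, 0.004  # ps

ratio = tau_Lb / tau_B0
# uncorrelated error propagation for a ratio (our assumption)
dratio = ratio * math.sqrt((dtau_Lb / tau_Lb) ** 2 + (dtau_B0 / tau_B0) ** 2)
print(f"{ratio:.2f} +/- {dratio:.2f}")  # -> 0.75 +/- 0.05
```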
Theoretically, a deviation of the ratio from unity at the level of $20\%$ is
considered too large. Nowadays we know that the low value of
$\tau(\Lambda_{b})/\tau(B_{d})$ or the short $\Lambda_{b}$ lifetime was a
purely experimental issue. The world averages in PDG2020 are
$\tau(\Lambda_{b})=(1.471\pm 0.009)\ {\rm
ps},\qquad\tau(\Lambda_{b}^{0})/\tau(B^{0})=0.964\pm 0.007.$ (2)
The new measurements of the $\Lambda_{b}$ lifetime are in good agreement with
the Heavy Quark Expansion (HQE) result Lenz:2014jha :
$\displaystyle\frac{\tau(\Lambda_{b})}{\tau(B_{d})}^{{\rm HQE}\ 2014}$
$\displaystyle=1-(0.8\pm 0.5)\%_{1/m_{b}^{2}}-(4.2\pm
3.3)\%_{1/m_{b}^{3}}^{\Lambda_{b}}-(0.0\pm
0.5)\%_{1/m_{b}^{3}}^{B_{d}}-(1.6\pm 1.2)\%_{1/m_{b}^{4}}$
$\displaystyle=0.935\pm 0.054.$ (3)
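The central value of Eq. (3) is just one minus the sum of the quoted corrections; a sketch with the central values only (how the uncertainties are combined into $\pm 0.054$ is not reproduced here):

```python
# HQE corrections from Eq. (3), central values in percent.
corrections = {"1/mb^2": 0.8, "1/mb^3, Lambda_b": 4.2,
               "1/mb^3, B_d": 0.0, "1/mb^4": 1.6}
ratio_hqe = 1.0 - sum(corrections.values()) / 100.0
print(f"{ratio_hqe:.3f}")  # -> 0.934, i.e. the quoted 0.935 up to rounding
```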
The heavy quark expansion describes inclusive weak decays of hadrons
containing heavy quarks, and in particular their lifetimes. It is a
generalization of the operator product expansion (OPE) in $1/m_{Q}$ in
Minkowski space, within which nonperturbative effects can be studied
systematically.
The starting point of HQE is the following transition operator
${\cal T}=i\int d^{4}x\ T[{\cal L}_{W}(x){\cal L}_{W}^{\dagger}(0)],$ (4)
where ${\cal L}_{W}$ is the effective weak Lagrangian governing the decay
$Q\to X_{f}$. With the help of the optical theorem the total decay width of
$H_{Q}$ can be given as
$\Gamma(H_{Q})=\frac{2\ {\rm Im}\langle H_{Q}|{\cal T}|H_{Q}\rangle}{2M_{H}},$
(5)
where $H_{Q}$ denotes a hadron containing a heavy quark $Q$, and $M_{H}$ is
its mass. The right hand side of Eq. (5) is then calculated using OPE for the
transition operator ${\cal T}$ Cheng:2018rkz ; Lenz:2014jha
$2\ {\rm Im}{\cal
T}=\frac{G_{F}^{2}m_{Q}^{5}}{192\pi^{3}}\xi\left(c_{3,Q}\bar{Q}Q+\frac{c_{5,Q}}{m_{Q}^{2}}\bar{Q}\sigma\cdot
GQ+\frac{c_{6,Q}}{m_{Q}^{3}}T_{6}+\frac{c_{7,Q}}{m_{Q}^{4}}T_{7}+\cdots\right),$
(6)
where $\xi$ is the relevant CKM matrix element, $T_{6}$ consists of the four-
quark operators $(\bar{Q}\Gamma q)(\bar{q}\Gamma Q)$ with $\Gamma$
representing a combination of the Dirac and color matrices, and a subset of
$T_{7}$ is the four-quark operators containing derivative insertions.
However, it can also be seen from Eq. (3) that the main uncertainty of the
lifetime ratio comes from the $1/m_{b}^{3}$ corrections from the $\Lambda_{b}$
matrix elements. The relevant baryon matrix elements can be parameterized in a
model-independent way Cheng:2018rkz :
$\displaystyle\langle\Lambda_{b}|(\bar{b}q)_{V-A}(\bar{q}b)_{V-A}|\Lambda_{b}\rangle$
$\displaystyle=f_{B_{q}}^{2}m_{B_{q}}m_{\Lambda_{b}}L_{1},$
$\displaystyle\langle\Lambda_{b}|(\bar{b}q)_{S-P}(\bar{q}b)_{S+P}|\Lambda_{b}\rangle$
$\displaystyle=f_{B_{q}}^{2}m_{B_{q}}m_{\Lambda_{b}}L_{2},$
$\displaystyle\langle\Lambda_{b}|(\bar{b}b)_{V-A}(\bar{q}q)_{V-A}|\Lambda_{b}\rangle$
$\displaystyle=f_{B_{q}}^{2}m_{B_{q}}m_{\Lambda_{b}}L_{3},$
$\displaystyle\langle\Lambda_{b}|(\bar{b}^{\alpha}q^{\beta})_{S-P}(\bar{q}^{\beta}b^{\alpha})_{S+P}|\Lambda_{b}\rangle$
$\displaystyle=f_{B_{q}}^{2}m_{B_{q}}m_{\Lambda_{b}}L_{4},$ (7)
where $V-A$ denotes the weak current and $S\pm P$ denote $1\pm\gamma_{5}$.
These matrix elements are not all independent Cheng:2018rkz , and in this work
we will only consider the parameters $L_{1}$ and $L_{3}$, which are related
through the parameter $\tilde{B}$:
$L_{3}=-\tilde{B}\ L_{1}.$ (8)
One can see from Table 1 that $L_{1}$ ranges from $-0.60$ to $-0.03$. This
article intends to contribute a more reliable QCD-based determination of these
parameters.
Table 1: $L_{1}$ predicted by different theoretical methods. This table is copied from Lenz:2014jha .

$L_{1}$ | $\tilde{B}$ | Method
---|---|---
$-0.103(10)$ | $1$ | 2014 Spectroscopy update Rosner:1996fy
$-0.22(4)$ | $1.21(34)$ | 1999 Exploratory Lattice DiPierro:1999tb
$-0.22(5)$ | $1$ | 1999 QCDSR v1 Huang:1999xj
$-0.60(15)$ | $1$ | 1999 QCDSR v2 Huang:1999xj
$-0.033(17)$ | $1$ | 1996 QCDSR Colangelo:1996ta
$\approx-0.03$ | $1$ | 1979 Bag model Guberina:1979xw
$\approx-0.08$ | $1$ | 1979 NRQM Guberina:1979xw
In Shi:2019hbf , we derived the transition form factors from doubly heavy
baryons to singly heavy baryons using QCD sum rules for the first time.
However, considering there were few theoretical results and experimental data
available to compare with, we then applied our calculation method to the
semileptonic decay of $\Lambda_{b}\to\Lambda_{c}l\bar{\nu}$ Zhao:2020mod . Our
predictions for the form factors and decay widths are comparable with those of
HQET and Lattice QCD. Similar to Zhao:2020mod , in this work, we will also
consider the leading order contributions from the perturbative, quark
condensate and quark-gluon mixed condensate diagrams.
The authors of Colangelo:1996ta also adopted Cutkosky cutting rules to obtain
the spectral density of the correlation function. The main difference between
Colangelo:1996ta and this work is that the former performed the analysis
within the framework of HQET, while here we handle the hadronic matrix
elements in full QCD. As can be seen in Shuryak:1981fza ;
Zhao:2020wbw , our results can reduce to those of Colangelo:1996ta in the
heavy quark limit. In addition, we will also consider the leading logarithmic
corrections in this work.
The rest of this paper is arranged as follows. In Sec. II, we will show the
main steps of calculating the hadronic matrix elements. Numerical analysis
will be performed in Sec. III, and some discussions will also be given. We
conclude our paper in the last section.
## II QCD sum rule calculation
The following interpolating current is adopted for $\Lambda_{b}$:
$J=\epsilon_{abc}(u_{a}^{T}C\gamma_{5}d_{b})Q_{c},$ (9)
where $Q$ denotes the bottom quark, $a,b,c$ are the color indices and $C$ is
the charge conjugate matrix. The correlation function is defined as
$\Pi(p_{1},p_{2})=i^{2}\int d^{4}xd^{4}y\ e^{-ip_{1}\cdot x+ip_{2}\cdot
y}\langle 0|T\{J(y)\Gamma_{6}(0)\bar{J}(x)\}|0\rangle$ (10)
with $\Gamma_{6}$ being one four-quark operator.
Following the standard procedure of QCD sum rules, the correlation function
will be calculated at the hadronic level and the QCD level. At the hadronic level,
after inserting the complete set of baryon states, the correlation function
can be written as
$\Pi^{{\rm
had}}(p_{1},p_{2})=\lambda_{H}^{2}\frac{(\not{p}_{2}+M)(a+b\gamma_{5})(\not{p}_{1}+M)}{(p_{2}^{2}-M^{2})(p_{1}^{2}-M^{2})}+\cdots,$
(11)
where $\lambda_{H}=\lambda_{\Lambda_{b}}$, $M=m_{\Lambda_{b}}$ are
respectively the pole residue and mass of $\Lambda_{b}$, $a$ and $b$ are used
to parameterize the hadronic matrix element of interest
$\langle\Lambda_{b}(q^{\prime},s^{\prime})|\Gamma_{6}|\Lambda_{b}(q,s)\rangle=\bar{u}(q^{\prime},s^{\prime})(a+b\gamma_{5})u(q,s)$,
and the ellipsis stands for the contribution from higher resonances and
continuum spectra.
It can be seen from Eq. (11) that there are 8 Dirac structures, but only 2
parameters need to be determined. Using a prescription similar to that used
in Shi:2019hbf ; Zhao:2020wbw ; Zhao:2020mod , that is, by considering the
contributions from the negative-parity baryons, Eq. (11) is updated to
$\displaystyle\Pi^{{\rm had}}(p_{1},p_{2})$ $\displaystyle=$
$\displaystyle\lambda_{+}\lambda_{+}\frac{(\not{p}_{2}+M_{+})(a^{++}+b^{++}\gamma_{5})(\not{p}_{1}+M_{+})}{(p_{2}^{2}-M_{+}^{2})(p_{1}^{2}-M_{+}^{2})}$
(12) $\displaystyle+$
$\displaystyle\lambda_{+}\lambda_{-}\frac{(\not{p}_{2}+M_{+})(a^{+-}+b^{+-}\gamma_{5})(\not{p}_{1}-M_{-})}{(p_{2}^{2}-M_{+}^{2})(p_{1}^{2}-M_{-}^{2})}$
$\displaystyle+$
$\displaystyle\lambda_{-}\lambda_{+}\frac{(\not{p}_{2}-M_{-})(a^{-+}+b^{-+}\gamma_{5})(\not{p}_{1}+M_{+})}{(p_{2}^{2}-M_{-}^{2})(p_{1}^{2}-M_{+}^{2})}$
$\displaystyle+$
$\displaystyle\lambda_{-}\lambda_{-}\frac{(\not{p}_{2}-M_{-})(a^{--}+b^{--}\gamma_{5})(\not{p}_{1}-M_{-})}{(p_{2}^{2}-M_{-}^{2})(p_{1}^{2}-M_{-}^{2})}$
$\displaystyle+$ $\displaystyle\cdots.$
In Eq. (12), $M_{+(-)}$ and $\lambda_{+(-)}$ respectively denote the mass and
pole residue of $\Lambda_{b}(\frac{1}{2}^{+(-)})$, and $a^{-+}$ is the
parameter $a$ with the negative-parity final state
$\Lambda_{b}(\frac{1}{2}^{-})$, and the positive-parity initial state
$\Lambda_{b}(\frac{1}{2}^{+})$, and so forth. To arrive at Eq. (12), we have
also adopted the following definitions:
$\displaystyle\langle\Lambda_{b+}(q^{\prime},s^{\prime})|\Gamma_{6}|\Lambda_{b+}(q,s)\rangle$
$\displaystyle=\bar{u}_{+}(q^{\prime},s^{\prime})(a^{++}+b^{++}\gamma_{5})u_{+}(q,s),$
$\displaystyle\langle\Lambda_{b+}(q^{\prime},s^{\prime})|\Gamma_{6}|\Lambda_{b-}(q,s)\rangle$
$\displaystyle=\bar{u}_{+}(q^{\prime},s^{\prime})(a^{+-}+b^{+-}\gamma_{5})(i\gamma_{5})u_{-}(q,s),$
$\displaystyle\langle\Lambda_{b-}(q^{\prime},s^{\prime})|\Gamma_{6}|\Lambda_{b+}(q,s)\rangle$
$\displaystyle=\bar{u}_{-}(q^{\prime},s^{\prime})(i\gamma_{5})(a^{-+}+b^{-+}\gamma_{5})u_{+}(q,s),$
$\displaystyle\langle\Lambda_{b-}(q^{\prime},s^{\prime})|\Gamma_{6}|\Lambda_{b-}(q,s)\rangle$
$\displaystyle=\bar{u}_{-}(q^{\prime},s^{\prime})(i\gamma_{5})(a^{--}+b^{--}\gamma_{5})(i\gamma_{5})u_{-}(q,s).$
(13)
In the above equations, it is not strictly necessary to introduce the factors
$(i\gamma_{5})$, but they are convenient in the calculation. One can then
determine $a^{++}\equiv a$ and $b^{++}\equiv b$ unambiguously, and thereby the
following forward scattering matrix element:
$\displaystyle\langle\Lambda_{b}(q,s)|\Gamma_{6}|\Lambda_{b}(q,s)\rangle$
$\displaystyle=\bar{u}(q,s)(a+b\gamma_{5})u(q,s)$ $\displaystyle=2\ a\
m_{\Lambda_{b}},$ (14)
where we have used $\bar{u}(q,s)u(q,s)=2\ m_{\Lambda_{b}}$ and
$\bar{u}(q,s)\gamma_{5}u(q,s)=0$. It can be seen that to obtain the matrix
element, we only need to obtain $a^{++}$ in Eq. (13).
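The two spinor identities used in Eq. (14) can be verified numerically; a sketch in the Weyl (chiral) basis with the relativistic normalization $\bar{u}u=2m$ (the basis and the mass value are illustrative choices, not taken from the text):

```python
import numpy as np

m = 5.620  # GeV, roughly m_{Lambda_b}; any positive mass works here
I2, Z2 = np.eye(2), np.zeros((2, 2))
gamma0 = np.block([[Z2, I2], [I2, Z2]])    # Weyl-basis gamma^0
gamma5 = np.block([[-I2, Z2], [Z2, I2]])   # Weyl-basis gamma_5

xi = np.array([1.0, 0.0])                  # spin-up two-spinor
u = np.sqrt(m) * np.concatenate([xi, xi])  # rest-frame Dirac spinor u(q, s)
ubar = u.conj() @ gamma0

print(ubar @ u)           # -> 2 m
print(ubar @ gamma5 @ u)  # -> 0
```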
At the QCD level, the correlation function is written as a double dispersion
relation
$\Pi^{{\rm
QCD}}(p_{1}^{2},p_{2}^{2},q^{2})=\int^{\infty}ds_{1}\int^{\infty}ds_{2}\frac{\rho^{{\rm
QCD}}(s_{1},s_{2},q^{2})}{(s_{1}-p_{1}^{2})(s_{2}-p_{2}^{2})},$ (15)
with $\rho^{{\rm QCD}}(s_{1},s_{2},q^{2})$ being the spectral function, which
can be obtained by applying Cutkosky cutting rules. The sum rule is given by
equating the four pole terms in Eq. (12) to
$\int^{s_{0}}ds_{1}\int^{s_{0}}ds_{2}\frac{\rho^{{\rm
QCD}}(s_{1},s_{2},q^{2})}{(s_{1}-p_{1}^{2})(s_{2}-p_{2}^{2})},$ (16)
where $s_{0}$ is the continuum threshold parameter. By comparing the
coefficients of different Dirac structures on both sides of the equation, one
obtains 8 equations for the 8 unknown parameters $a^{\pm\pm}$ and
$b^{\pm\pm}$; in particular, one arrives at
$a^{++}=\frac{\{M_{-}^{2},M_{-},M_{-},1\}\cdot\{{\cal B}A_{1},{\cal
B}A_{2},{\cal B}A_{3},{\cal
B}A_{4}\}}{\lambda_{+}^{2}(M_{+}+M_{-})^{2}}\exp\left(\frac{2M_{+}^{2}}{T^{2}}\right),$
(17)
where $A_{1,2,3,4}$ are coefficients of $\not{p}_{2}\not{p}_{1}$,
$\not{p}_{2}$, $\not{p}_{1}$ and $1$ in Eq. (16), ${\cal B}A_{i}\equiv{\cal
B}_{T^{2},T^{2}}A_{i}$ are doubly Borel transformed coefficients, and $T^{2}$
is the Borel mass parameter. Shi:2019hbf ; Zhao:2020mod contain more details
for obtaining $A_{i}$.
At the QCD level, we consider the leading order contributions from the
perturbative (dimension-0), quark condensate (dimension-3) and quark-gluon
mixed condensate (dimension-5) diagrams. For
$\langle\Lambda_{b}|(\bar{b}q)_{V-A}(\bar{q}b)_{V-A}|\Lambda_{b}\rangle$, we
find that the latter two contributions are proportional to the mass of the
$u/d$ quark and thus vanish in the chiral limit, so only the perturbative
contribution survives, as can be seen in Fig. 1. The same situation occurs for
the two-point correlation function of $\Lambda_{b}$ and the three-point
correlation function of the $\Lambda_{b}\to\Lambda_{c}$ transition form
factors Zhao:2020mod . Since the
only difference between $(\bar{b}q)_{V-A}(\bar{q}b)_{V-A}$ and
$(\bar{b}b)_{V-A}(\bar{q}q)_{V-A}$ is in the color space, the same situation
occurs for
$\langle\Lambda_{b}|(\bar{b}b)_{V-A}(\bar{q}q)_{V-A}|\Lambda_{b}\rangle$,
except that there is one sign difference between the perturbative
contributions of
$\langle\Lambda_{b}|(\bar{b}b)_{V-A}(\bar{q}q)_{V-A}|\Lambda_{b}\rangle$ and
$\langle\Lambda_{b}|(\bar{b}q)_{V-A}(\bar{q}b)_{V-A}|\Lambda_{b}\rangle$.
Therefore,
$\langle\Lambda_{b}|(\bar{b}b)_{V-A}(\bar{q}q)_{V-A}|\Lambda_{b}\rangle=-\langle\Lambda_{b}|(\bar{b}q)_{V-A}(\bar{q}b)_{V-A}|\Lambda_{b}\rangle$
at the order considered here. As can be seen in Shi:2019hbf , the contributions
from gluon condensate are small, therefore we do not consider them in this
work.
Figure 1: Perturbative contribution to the correlation function. Cutting
rules are also shown.
### II.1 The leading logarithmic corrections
In this work, we will also consider the leading logarithmic (LL) corrections
following Ioffe:1981kw . For convenience, we define
$\displaystyle\Gamma_{6}^{(1)}$
$\displaystyle=(\bar{b}q)_{V-A}(\bar{q}b)_{V-A},$
$\displaystyle\Gamma_{6}^{(2)}$
$\displaystyle=(\bar{b}b)_{V-A}(\bar{q}q)_{V-A},$
$\displaystyle\Gamma_{6}^{(\pm)}$
$\displaystyle=\frac{1}{2}(\Gamma_{6}^{(1)}\pm\Gamma_{6}^{(2)})$ (18)
with $q=u/d$ and denote
$\langle\Lambda_{b}|\Gamma_{6}|\Lambda_{b}\rangle\equiv\langle\Gamma_{6}\rangle_{\Lambda_{b}}$
(19)
for short. The operators in the new basis $\Gamma_{6}^{(\pm)}$ renormalize
multiplicatively, without mixing.
The following steps are adopted to obtain
$\langle\Gamma_{6}^{(1,2)}\rangle_{\Lambda_{b}}^{{\rm LL}}$ – the LL corrected
version of $\langle\Gamma_{6}^{(1,2)}\rangle_{\Lambda_{b}}$:
* •
Calculate $\langle\Gamma_{6}^{(1)}\rangle_{\Lambda_{b}}$ and
$\langle\Gamma_{6}^{(2)}\rangle_{\Lambda_{b}}$ to arrive at
$\langle\Gamma_{6}^{(\pm)}\rangle_{\Lambda_{b}}$. Since
$\langle\Gamma_{6}^{(2)}\rangle_{\Lambda_{b}}=-\langle\Gamma_{6}^{(1)}\rangle_{\Lambda_{b}}$
as far as dimension-0,3,5 are concerned, one has
$\langle\Gamma_{6}^{(+)}\rangle_{\Lambda_{b}}=0$ and
$\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}=\langle\Gamma_{6}^{(1)}\rangle_{\Lambda_{b}}$.
* •
Multiply $\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}$ by
$\left(\frac{\log(\mu_{0}/\Lambda_{{\rm
QCD}}^{(n_{f})})}{\log(\mu/\Lambda_{{\rm
QCD}}^{(n_{f})})}\right)^{2\gamma_{J}+\gamma_{\Gamma_{6}^{(-)}}}$ to arrive at
the LL corrected $\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}$
given that only the perturbative contribution survives, where $\mu_{0}\approx
1\ {\rm GeV}$, $\mu\approx m_{b}$, $\gamma_{J}=-1/\beta_{0}$
Ovchinnikov:1991mu and $\gamma_{\Gamma_{6}^{(-)}}=4/\beta_{0}$ Peskin:1995ev
. $\langle\Gamma_{6}^{(+)}\rangle_{\Lambda_{b}}^{{\rm LL}}=0$ trivially.
* •
$\langle\Gamma_{6}^{(1)}\rangle_{\Lambda_{b}}^{{\rm
LL}}=\langle\Gamma_{6}^{(+)}\rangle_{\Lambda_{b}}^{{\rm
LL}}+\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm
LL}}=\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}$ and
$\langle\Gamma_{6}^{(2)}\rangle_{\Lambda_{b}}^{{\rm
LL}}=\langle\Gamma_{6}^{(+)}\rangle_{\Lambda_{b}}^{{\rm
LL}}-\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm
LL}}=-\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}$.
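The size of the LL rescaling factor above can be estimated numerically; a rough sketch in which $\Lambda_{\rm QCD}^{(4)}$ and the one-loop convention $\beta_{0}=11-2n_{f}/3$ are our assumptions (only the exponents $\gamma_{J}=-1/\beta_{0}$ and $\gamma_{\Gamma_{6}^{(-)}}=4/\beta_{0}$ are taken from the text):

```python
import math

n_f = 4
beta0 = 11.0 - 2.0 * n_f / 3.0                 # assumed one-loop convention
exponent = 2.0 * (-1.0 / beta0) + 4.0 / beta0  # 2*gamma_J + gamma_{Gamma6^-} = 2/beta0
mu0, mu, lam = 1.0, 4.18, 0.2                  # GeV; lam = Lambda_QCD^(4) is assumed
factor = (math.log(mu0 / lam) / math.log(mu / lam)) ** exponent
print(f"{factor:.2f}")  # a mild suppression, of the size of the quoted ~13%
```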
## III Numerical results
As shown at the end of the last section,
$\langle\Gamma_{6}^{(+)}\rangle_{\Lambda_{b}}^{{\rm LL}}=0$, so in this
section we concentrate on determining
$\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}$. Also, because of Eq.
(14), we only need to determine the $a^{++}$ of
$\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}$.
The renormalization scale is taken as $\mu=m_{b}$, and the bottom quark mass
is taken as Zyla:2020zbs : $m_{b}(m_{b})=4.18\pm 0.03\ {\rm GeV}$.
$\lambda_{+}=0.0432\pm 0.0022\ {\rm GeV}^{3}$ is taken from Zhao:2020mod .
The threshold parameter $s_{0}$ and the Borel parameters $T^{2}$ are
determined in a standard way: we adjust $s_{0}$ to find the flattest curve of
$a^{++}$ as a function of $T^{2}$, on which a stable extremum should exist at
some $T^{2}$. The corresponding $s_{0}$ and $T^{2}$ are considered
as the optimal choices for these parameters, as shown by the blue curve in
Fig. 2. In this figure, we have also plotted the suboptimal choices for error
estimation, as shown by the red curve. The corresponding numerical results are
listed in Table 2.
Figure 2: $a^{++}$ of $\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}$ as a function of the Borel parameter $T^{2}$. The blue and red curves correspond to the optimal and suboptimal choices for $s_{0}$. The values of $a^{++}$ at the extreme points of these curves are taken as the optimal and suboptimal evaluations. The explicit values of $s_{0}$ and $T^{2}$ can be found in Table 2.

Table 2: The predictions of $a^{++}[\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}]$.

 | $s_{0}/{\rm GeV}^{2}$ | $T^{2}/{\rm GeV}^{2}$ | $a^{++}[\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}]/(10^{-4}\ {\rm GeV}^{3})$
---|---|---|---
Optimal | $6.01^{2}$ | $20$ | $-23.8$
Suboptimal | $6.02^{2}$ | $22$ | $-24.9$
Our prediction for $a^{++}$ of
$\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}$ is
$a^{++}[\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}]=(-23.8\pm
1.1\pm 3.4\pm 2.2)\times 10^{-4}\ {\rm GeV}^{3},$ (20)
where the uncertainties respectively come from the QCDSR parameters $s_{0}$
and $T^{2}$, the bottom quark mass $m_{b}$, and the pole residue $\lambda_{+}$.
As shown in the last section,
$\langle\Gamma_{6}^{(1)}\rangle_{\Lambda_{b}}^{{\rm
LL}}=\langle\Gamma_{6}^{(-)}\rangle_{\Lambda_{b}}^{{\rm LL}}$, so our
prediction for $L_{1}$ is given by
$L_{1}=-0.0260\pm 0.0012\pm 0.0037\pm 0.0025,$ (21)
where we have used $m_{B_{q}}=5.280\ {\rm GeV}$ and the same decay constant
$f_{B_{q}}=186\ {\rm MeV}$ as in Cheng:2018rkz . It can be seen from Table 1
that, our result is close to that of HQET sum rules in Colangelo:1996ta and
that of the bag model in Guberina:1979xw .
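Eq. (21) follows from combining the parameterization in Eq. (7) with Eq. (14): since $\langle\Gamma_{6}^{(1)}\rangle_{\Lambda_{b}}=2a^{++}m_{\Lambda_{b}}=f_{B_{q}}^{2}m_{B_{q}}m_{\Lambda_{b}}L_{1}$, the baryon mass cancels and $L_{1}=2a^{++}/(f_{B_{q}}^{2}m_{B_{q}})$. A sketch with the central values:

```python
a_pp = -23.8e-4  # GeV^3, central value of Eq. (20)
f_Bq = 0.186     # GeV, decay constant used in the text
m_Bq = 5.280     # GeV
L1 = 2.0 * a_pp / (f_Bq ** 2 * m_Bq)
print(f"{L1:.4f}")  # -> -0.0261, reproducing Eq. (21) up to rounding
```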
Some comments are in order.
* •
The spectral density in Eq. (15) also depends on
$q^{2}\equiv(p_{1}-p_{2})^{2}$. For the forward scattering matrix elements of
interest, $q^{2}$ is taken to be 0.
* •
The optimal choice for the threshold parameter $s_{0}=(6.01\ {\rm GeV})^{2}$
is close to that of the two-point correlation function of $\Lambda_{b}$ with
$s_{0}=(5.95\ {\rm GeV})^{2}$ Zhao:2020mod , as expected.
* •
The optimal $T^{2}$ is taken around $T^{2}=20\ {\rm GeV}^{2}$, which is
$\sim\mu_{b}^{2}$, as expected.
* •
For $\tilde{B}$ defined in Eq. (8), $\tilde{B}=1$ is a good approximation: at
least at the dimension-0, 3, and 5 level considered in this work, the equality
holds exactly.
* •
It turns out that the LL corrections are destructive.
$\langle\Gamma_{6}^{(1)}\rangle_{\Lambda_{b}}^{{\rm LL}}$ is about $13\%$
smaller than $\langle\Gamma_{6}^{(1)}\rangle_{\Lambda_{b}}$.
## IV Conclusions
Heavy quark expansion can nicely explain the lifetime of $\Lambda_{b}$.
However, there still exist sizable uncertainties in the $1/m_{b}^{3}$
corrections from the $\Lambda_{b}$ matrix elements. In this work, the leading
order results of the four-quark operator matrix elements
$\langle\Lambda_{b}|(\bar{b}q)_{V-A}(\bar{q}b)_{V-A}|\Lambda_{b}\rangle$ and
$\langle\Lambda_{b}|(\bar{b}b)_{V-A}(\bar{q}q)_{V-A}|\Lambda_{b}\rangle$ are
obtained using full QCD sum rules. Contributions from dimension-0, 3, and 5
are considered. It turns out that the dimension-3 and dimension-5
contributions are proportional to the mass of the $u/d$ quark. A stable Borel
region can be found, and for this reason the uncertainties caused by the
threshold parameter $s_{0}$ and the Borel parameter $T^{2}$ are small. We have
also considered the leading
logarithmic corrections, which turn out to be a little destructive. Our
results are close to those of HQET sum rules in Colangelo:1996ta and those of
the bag model in Guberina:1979xw .
In Zhao:2020mod , we investigated the semileptonic form factors of
$\Lambda_{b}\to\Lambda_{c}$ and our results are comparable to those of HQET
and Lattice QCD. Using the same techniques, in this work, we investigate the
four-quark operator matrix elements for the lifetime of $\Lambda_{b}$. This
work can be viewed as the first one in a series of papers towards deciphering
the $\Omega_{c}$ lifetime puzzle.
## Acknowledgements
The author is grateful to Yue-Long Shen, Yu-Ji Shi, Wei Wang, Zhi-Gang Wang
and Fu-Sheng Yu for valuable discussions, and to Wei Wang for constant
encouragement. This work is supported in part by the National Natural Science
Foundation of China under Grant No. 12065020.
## References
* (1) R. Aaij et al. [LHCb], Phys. Rev. Lett. 121, no.9, 092003 (2018) doi:10.1103/PhysRevLett.121.092003 [arXiv:1807.02024 [hep-ex]].
* (2) M. Tanabashi et al. [Particle Data Group], Phys. Rev. D 98, no.3, 030001 (2018) doi:10.1103/PhysRevD.98.030001
* (3) H. Y. Cheng, JHEP 11, 014 (2018) doi:10.1007/JHEP11(2018)014 [arXiv:1807.00916 [hep-ph]].
* (4) P. A. Zyla et al. [Particle Data Group], PTEP 2020, no.8, 083C01 (2020) doi:10.1093/ptep/ptaa104
* (5) R. M. Barnett et al. [Particle Data Group], Phys. Rev. D 54, 1-720 (1996) doi:10.1103/PhysRevD.54.1
* (6) A. Lenz, Int. J. Mod. Phys. A 30, no.10, 1543005 (2015) doi:10.1142/S0217751X15430058 [arXiv:1405.3601 [hep-ph]].
* (7) J. L. Rosner, Phys. Lett. B 379, 267-271 (1996) doi:10.1016/0370-2693(96)00352-8 [arXiv:hep-ph/9602265 [hep-ph]].
* (8) M. Di Pierro et al. [UKQCD], Phys. Lett. B 468, 143 (1999) [erratum: Phys. Lett. B 525, 360-360 (2002)] doi:10.1016/S0370-2693(99)01166-1 [arXiv:hep-lat/9906031 [hep-lat]].
* (9) C. S. Huang, C. Liu and S. L. Zhu, Phys. Rev. D 61, 054004 (2000) doi:10.1103/PhysRevD.61.054004 [arXiv:hep-ph/9906300 [hep-ph]].
* (10) P. Colangelo and F. De Fazio, Phys. Lett. B 387, 371-378 (1996) doi:10.1016/0370-2693(96)01049-0 [arXiv:hep-ph/9604425 [hep-ph]].
* (11) B. Guberina, S. Nussinov, R. D. Peccei and R. Ruckl, Phys. Lett. B 89, 111-115 (1979) doi:10.1016/0370-2693(79)90086-8
* (12) Y. J. Shi, W. Wang and Z. X. Zhao, Eur. Phys. J. C 80, no.6, 568 (2020) doi:10.1140/epjc/s10052-020-8096-2 [arXiv:1902.01092 [hep-ph]].
* (13) Z. X. Zhao, R. H. Li, Y. L. Shen, Y. J. Shi and Y. S. Yang, Eur. Phys. J. C 80, no.12, 1181 (2020) doi:10.1140/epjc/s10052-020-08767-1 [arXiv:2010.07150 [hep-ph]].
* (14) E. V. Shuryak, Nucl. Phys. B 198, 83-101 (1982) doi:10.1016/0550-3213(82)90546-6
* (15) Z. X. Zhao, R. H. Li, Y. J. Shi and S. H. Zhou, [arXiv:2005.05279 [hep-ph]].
* (16) B. L. Ioffe, Nucl. Phys. B 188, 317-341 (1981) [erratum: Nucl. Phys. B 191, 591-592 (1981)] doi:10.1016/0550-3213(81)90259-5
* (17) A. A. Ovchinnikov, A. A. Pivovarov and L. R. Surguladze, Int. J. Mod. Phys. A 6, 2025-2034 (1991) doi:10.1142/S0217751X91001015
* (18) M. E. Peskin and D. V. Schroeder, An Introduction to Quantum Field Theory, Addison-Wesley, Reading, USA (1995).
# A class of Finsler metrics admitting first integrals
Ioan Bucataru, Faculty of Mathematics, Alexandru Ioan Cuza University, Iaşi, Romania<EMAIL_ADDRESS>http://www.math.uaic.ro/~bucataru/
Oana Constantinescu, Faculty of Mathematics, Alexandru Ioan Cuza University, Iaşi, Romania<EMAIL_ADDRESS>
Georgeta Creţu, Department of Mathematics, Gheorghe Asachi Technical University, Iaşi, Romania<EMAIL_ADDRESS>
###### Abstract.
We use two non-Riemannian curvature tensors, the $\chi$-curvature and the
mean Berwald curvature, to characterise a class of Finsler metrics admitting first
integrals.
###### Key words and phrases:
Finsler metric, $\chi$-curvature, scalar mean Berwald curvature, first
integral
###### 2000 Mathematics Subject Classification:
53C60, 53B40, 58E30, 49N45
## 1\. Introduction
Finsler geometry is a natural extension of Riemannian geometry and, while many
geometric structures can be extended from the Riemannian to the Finslerian
setting, within the Finslerian context there are many non-Riemannian geometric
quantities, [17, Ch. 6].
The existence of first integrals is of great importance: they provide a great
deal of information about the corresponding geometry, including some rigidity
results, [6], [7], [21].
In Riemannian geometry, Topalov and Matveev obtained in [21], for two
projectively equivalent metrics on an $n$-dimensional manifold, a set of $n$
first integrals. An extension of this result to the Finslerian context has
been proposed by Sarlet in [16]. In [7], Foulon and Ruggiero have shown the
existence of a first integral for $k$-basic (of isotropic curvature) Finsler
surfaces.
It has been proven by Li and Shen in [11], that Finsler metrics of isotropic
curvature can be characterised using the $\chi$-curvature tensor. The
$\chi$-curvature has been introduced by Shen in [18], in terms of another
important non-Riemannian quantity, the Shen-function ($S$-function) [17,
§5.2]. Since then, a lot of effort has been devoted to study the
$\chi$-curvature, [10, 14, 19].
In this work we extend the result of Foulon and Ruggiero from [7] to Finsler
manifolds of arbitrary dimension, by providing a class of Finsler metrics that
admit first integrals. This class of Finsler metrics is characterised using
the $\chi$-curvature tensor and the mean Berwald curvature,
$E_{jk}=\frac{1}{2}B^{i}_{ijk}$, where $B^{i}_{jkl}$ is the Berwald curvature,
[17, §6.1]. A key fact for our work is that the mean Berwald curvature can
also be expressed in terms of the $S$-function. The $S$-function is a Finsler
function if and only if the mean Berwald curvature has the maximal possible
rank, $n-1$. For a Finsler function $F$, we denote by $\det g$ the
determinant of its metric tensor
$g_{ij}=\frac{1}{2}\frac{\partial^{2}F^{2}}{\partial y^{i}\partial y^{j}}$,
where $y^{i}$ are the fiber coordinates in the tangent bundle $TM$.
The main result of this paper provides a class of Finsler metrics that admit a
first integral.
###### Theorem 1.1.
Consider $F$ a Finsler metric that satisfies the following two conditions:
* i)
the $\chi$-curvature vanishes;
* ii)
the mean Berwald curvature has rank $n-1$.
Then,
(1.1) $\displaystyle\lambda=\frac{-1}{\det
g}\begin{vmatrix}2FE_{ij}&\displaystyle\frac{\partial F}{\partial
y^{i}}\vspace{2mm}\\\ \displaystyle\frac{\partial F}{\partial
y^{j}}&0\end{vmatrix}$
is a first integral for the geodesic spray $G$ of the Finsler metric $F$,
which means that $G(\lambda)=0$.
For a Finsler surface, the first condition of Theorem 1.1, $\chi=0$, is
equivalent to the fact that the Finsler metric has isotropic curvature (it is
a $k$-basic Finsler metric). Also, in dimension $2$, the mean Berwald
curvature is proportional to the vertical Hessian of the Finsler metric; the
proportionality factor, the function $\lambda$, has been known since Berwald, [1,
(8.7)]. Hence, for Finsler surfaces, the second condition of Theorem 1.1 is
automatically satisfied. Moreover, the first integral $\lambda$, given by
formula (1.1), reduces in the $2$-dimensional case to the first integral $f$
obtained by Foulon and Ruggiero in [7, Theorem B].
For the proof of Theorem 1.1, the two conditions $\chi=0$ and
$\operatorname{rank}(E_{ij})=n-1$ tell us that the $S$-function is a Finsler
metric, projectively related to $F$. Then, we obtain the first integral
$\lambda$, given by (1.1), using the Painlevé first integral associated with
the two projectively related Finsler metrics $F$ and $S$.
The next theorem deals with a concrete class of Finsler metrics that satisfy the
second assumption of Theorem 1.1. We say that a Finsler metric $F$ has _scalar
mean Berwald curvature_ $f$ if the mean Berwald curvature is proportional to
the vertical Hessian of $F$, $2E_{ij}=fF_{y^{i}y^{j}}$.
###### Theorem 1.2.
Consider $F$ a Finsler metric that satisfies the following two conditions:
* i)
the $\chi$-curvature vanishes;
* ii)
the Finsler metric has scalar mean Berwald curvature $f$.
Then, the scalar mean Berwald curvature satisfies:
* 1)
$f$ is a first integral of the Finsler metric $F$.
* 2)
If $\dim M>2$ then the first integral $f$ is constant.
* 3)
If $M$ is compact and $\dim M>2$ then the first integral $f$ vanishes
identically.
The proof of Theorem 1.2 is a direct extension, to the $n$-dimensional case,
of the techniques used by Foulon and Ruggiero in [7] to prove the existence of
a first integral for $k$-basic Finsler surfaces. These techniques allow us to
provide more information about the first integral, and one can further use [6]
to obtain a rigidity result for the class of Finsler metrics with vanishing
$\chi$-curvature and scalar mean Berwald curvature.
## 2\. Finsler metrics: a geometric setting and some non-Riemannian
quantities
In this work, we assume that $M$ is an $n$-dimensional $C^{\infty}$-manifold,
with $n>1$. We consider $TM$ its tangent bundle and
$T_{0}M=TM\setminus\\{0\\}$ the tangent bundle with the zero section removed.
Local coordinates on $M$, denoted by $(x^{i})$, induce canonical coordinates
on $TM$ (and $T_{0}M$), denoted by $(x^{i},y^{i})$. On $TM$ there are two
canonical structures that we will use: the Liouville (dilation) vector field,
${\mathcal{C}}=y^{i}\frac{\partial}{\partial y^{i}}$, and the tangent
structure (vertical endomorphism), $J=\frac{\partial}{\partial y^{i}}\otimes
dx^{i}$.
### 2.1. A geometric setting for Finsler metrics
We will use the Frölicher-Nijenhuis theory to describe the geometric setting
we follow in this work. For a vector-valued $l$-form $L$ on $T_{0}M$, we
denote by $i_{L}$ the induced derivation of type $i_{\ast}$ and degree $(l-1)$,
and by $d_{L}:=[i_{L},d]$ the derivation of type $d_{\ast}$ and degree $l$,
[3, 8, 9, 20, 22].
For two vector valued forms, an $l$-form $L$ and a $k$-form $K$, consider the
$(k+l)$-form $[K,L]$, uniquely determined by
$\displaystyle d_{[K,L]}=d_{K}d_{L}-(-1)^{kl}d_{L}d_{K}.$
A _spray_ is a second order vector field $G\in{\mathfrak{X}}(T_{0}M)$ such
that $JG={\mathcal{C}}$ and $[{\mathcal{C}},G]=G$. Locally, a spray can be
expressed as
$\displaystyle G=y^{i}\frac{\partial}{\partial
x^{i}}-2G^{i}\frac{\partial}{\partial y^{i}},$
with the functions $G^{i}(x,y)$ positively $2$-homogeneous in $y$
($2^{+}$-homogeneous). A _geodesic_ of a spray $G$ is a smooth curve $c$ on
$M$ whose velocity $\dot{c}$ is an integral curve of $G$,
$G(\dot{c}(t))=\ddot{c}(t)$.
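As a standard illustration (not taken from the text): for a Riemannian metric $F=\sqrt{g_{ij}(x)y^{i}y^{j}}$, the geodesic spray has the $2^{+}$-homogeneous coefficients
$\displaystyle G^{i}(x,y)=\frac{1}{2}\gamma^{i}_{jk}(x)y^{j}y^{k},\qquad\gamma^{i}_{jk}=\frac{1}{2}g^{il}\left(\frac{\partial g_{lj}}{\partial x^{k}}+\frac{\partial g_{lk}}{\partial x^{j}}-\frac{\partial g_{jk}}{\partial x^{l}}\right),$
with $\gamma^{i}_{jk}$ the Christoffel symbols of $g$, and its geodesics are precisely the Riemannian geodesics.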
For a given spray $G$, an orientation-preserving reparameterization
$t\to\tilde{t}(t)$ of its geodesics leads to a new spray
$\widetilde{G}=G-2P{\mathcal{C}}$, where $P$ is a $1^{+}$-homogeneous function
on $T_{0}M$. We say that the two sprays $G$ and $\widetilde{G}$ are
_projectively related_, while $P$ is called the _projective factor_.
###### Definition 2.1.
A _Finsler structure_ on $M$ is a continuous function $F:TM\to[0,+\infty)$
that satisfies the following conditions:
* i)
$F$ is smooth on $T_{0}M$;
* ii)
$F$ is $1^{+}$-homogeneous, $F(x,\lambda y)=\lambda F(x,y)$,
$\forall\lambda>0$, $\forall(x,y)\in T_{0}M$;
* iii)
the metric tensor
$\displaystyle g_{ij}=\frac{1}{2}\frac{\partial^{2}F^{2}}{\partial
y^{i}\partial y^{j}}$
is non-degenerate on $T_{0}M$.
A Finsler manifold is a pair $(M,F)$, with $F$ a Finsler structure on the
manifold $M$. For a Finsler manifold, one can identify the sphere bundle $SM$
with the indicatrix bundle $IM=\\{(x,y)\in TM,F(x,y)=1\\}$. Geometric objects
on $T_{0}M$ that are invariant under positive rescaling ($0^{+}$-homogeneous)
can be restricted to the sphere bundle $SM$.
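For illustration, we recall two standard examples. Every Riemannian metric
$a=a_{ij}(x)dx^{i}\otimes dx^{j}$ defines a Finsler structure
$\displaystyle F(x,y)=\sqrt{a_{ij}(x)y^{i}y^{j}},\quad\textrm{with metric
tensor }g_{ij}=a_{ij}(x),$
independent of $y$. A Randers metric $F=\alpha+\beta$, with
$\alpha=\sqrt{a_{ij}(x)y^{i}y^{j}}$ and $\beta=b_{i}(x)y^{i}$ satisfying
$\|b\|_{\alpha}<1$, is a genuinely non-Riemannian Finsler structure.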
For a Finsler structure $F$, the metric tensor $g_{ij}$ can be expressed in
terms of the angular metric $h_{ij}$ as follows:
$\displaystyle g_{ij}=h_{ij}+\frac{1}{F^{2}}y_{i}y_{j}=h_{ij}+\frac{\partial
F}{\partial y^{i}}\frac{\partial F}{\partial y^{j}},\quad
h_{ij}=F\frac{\partial^{2}F}{\partial y^{i}\partial y^{j}}=FF_{y^{i}y^{j}},$
where $y_{i}=g_{ik}y^{k}=F\frac{\partial F}{\partial y^{i}}.$ The regularity
condition iii) from Definition 2.1 is equivalent to the fact that the angular
metric $h_{ij}$ has rank $n-1$, [12, Proposition 16.2].
For a spray $G$ and a function $L$ on $T_{0}M$, we consider the _Euler-
Lagrange_ $1$-form
(2.1)
$\displaystyle\delta_{G}L:={\mathcal{L}}_{G}d_{J}L-dL=\left\\{G\left(\frac{\partial
L}{\partial y^{i}}\right)-\frac{\partial L}{\partial x^{i}}\right\\}dx^{i}.$
Every Finsler metric uniquely determines a _geodesic spray_ , the solution of
the Euler-Lagrange equation $\delta_{G}F^{2}=0$.
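Solving the Euler-Lagrange equation $\delta_{G}F^{2}=0$ for $G$ gives the
classical formula for the spray coefficients,
$\displaystyle G^{i}=\frac{1}{4}g^{il}\left(\frac{\partial^{2}F^{2}}{\partial
y^{l}\partial x^{k}}y^{k}-\frac{\partial F^{2}}{\partial x^{l}}\right),$
where $g^{il}$ denotes the inverse of the metric tensor $g_{il}$.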
We recall now the geometric structures induced by a Finsler metric, and its
geodesic spray $G$. We first have the _canonical nonlinear connection_ ,
characterised by a horizontal and a vertical projector on $T_{0}M$
$h=\frac{1}{2}(\operatorname{Id}-[G,J]),\quad
v=\frac{1}{2}(\operatorname{Id}+[G,J]).$
In induced local charts on $T_{0}M$, the two projectors can be expressed as:
$\displaystyle h=\frac{\delta}{\delta x^{i}}\otimes dx^{i},\
v=\frac{\partial}{\partial y^{i}}\otimes\delta y^{i},\ \textrm{ where }\
\frac{\delta}{\delta x^{i}}=\frac{\partial}{\partial
x^{i}}-N_{i}^{j}\frac{\partial}{\partial y^{j}},\ \delta
y^{i}=dy^{i}+N^{i}_{j}dx^{j}\ \textrm{ and }N^{i}_{j}=\frac{\partial
G^{i}}{\partial y^{j}}.$
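In the Riemannian case, the spray coefficients are
$G^{i}=\frac{1}{2}\gamma^{i}_{jk}(x)y^{j}y^{k}$, with $\gamma^{i}_{jk}$ the
Christoffel symbols, and the nonlinear connection reduces to the linear one,
$\displaystyle N^{i}_{j}=\frac{\partial
G^{i}}{\partial y^{j}}=\gamma^{i}_{jk}(x)y^{k}.$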
###### Lemma 2.2.
Consider $F$ a Finsler metric and $\widetilde{F}$ a $1^{+}$-homogeneous
function, nowhere vanishing on $T_{0}M$. Then, we can express the determinant
of the metric tensor $g_{ij}$ as follows:
(2.2) $\displaystyle\det
g=-\frac{F^{n+1}}{\widetilde{F}^{2}}\begin{vmatrix}\displaystyle\frac{\partial^{2}F}{\partial
y^{i}\partial y^{j}}&\displaystyle\frac{\partial\widetilde{F}}{\partial
y^{i}}\vspace{2mm}\\\ \displaystyle\frac{\partial\widetilde{F}}{\partial
y^{j}}&0\end{vmatrix}.$
###### Proof.
First, we recall a formula that connects the determinant of the metric tensor
$g_{ij}$ in terms of the angular metric $h_{ij}$, [15, (1.26)]:
(2.3) $\displaystyle\det
g=-F^{n-1}\begin{vmatrix}\displaystyle\frac{\partial^{2}F}{\partial
y^{i}\partial y^{j}}&\displaystyle\frac{\partial F}{\partial
y^{i}}\vspace{2mm}\\\ \displaystyle\frac{\partial F}{\partial
y^{j}}&0\end{vmatrix}.$
For the metric tensor $g_{ij}$, consider $\\{h_{1}=\frac{G}{F}$,
$h_{2},...,h_{n}\\}$ an orthonormal horizontal basis and
$\\{h^{i},i=\overline{1,n}\\}$, the dual frame. Since, for $\alpha\geq 2$,
$h^{\alpha}(h_{1})=0$, and
$\displaystyle h^{\alpha}=h^{\alpha}_{i}dx^{i},\quad
h_{1}=\frac{y^{i}}{F}\frac{\delta}{\delta x^{i}},$
we obtain $h^{\alpha}_{i}y^{i}=0$. Using also $h_{ij}y^{j}=0$, we obtain, for
each $\alpha\geq 2$, that on $T_{0}M$,
$\displaystyle\begin{pmatrix}h_{ij}&h^{\alpha}_{i}\\\
h^{\alpha}_{j}&0\end{pmatrix}\begin{pmatrix}y^{1}\\\ \vdots\\\ y^{n}\\\
0\end{pmatrix}=0$
and consequently,
(2.4) $\displaystyle\begin{vmatrix}h_{ij}&h^{\alpha}_{i}\\\
h^{\alpha}_{j}&0\end{vmatrix}=0.$
The semi-basic $1$-form $d_{J}\widetilde{F}$ can be expressed as follows
$\displaystyle d_{J}\widetilde{F}$ $\displaystyle=$
$\displaystyle\frac{\partial\widetilde{F}}{\partial
y^{i}}dx^{i}=d_{J}\widetilde{F}(h_{i})h^{i}=\frac{\widetilde{F}}{F}h^{1}+\sum_{\alpha=2}^{n}d_{J}\widetilde{F}(h_{\alpha})h^{\alpha}$
$\displaystyle=$ $\displaystyle\left\\{\frac{\widetilde{F}}{F}\frac{\partial
F}{\partial
y^{i}}+\sum^{n}_{\alpha=2}J(h_{\alpha})(\widetilde{F})h^{\alpha}_{i}\right\\}dx^{i}.$
In the determinant from (2.3), we substitute $\frac{\partial F}{\partial
y^{i}}$, using the above expression of $d_{J}\widetilde{F}$, and obtain
$\displaystyle\begin{vmatrix}\displaystyle\frac{\partial^{2}F}{\partial
y^{i}\partial y^{j}}&\dfrac{\partial F}{\partial y^{i}}\vspace{2mm}\\\
\dfrac{\partial F}{\partial
y^{j}}&0\end{vmatrix}=\left(\frac{F}{\widetilde{F}}\right)^{2}\begin{vmatrix}\displaystyle\frac{\partial^{2}F}{\partial
y^{i}\partial y^{j}}&\dfrac{\widetilde{F}}{F}\dfrac{\partial F}{\partial
y^{i}}\vspace{2mm}\\\ \dfrac{\widetilde{F}}{F}\dfrac{\partial F}{\partial
y^{j}}&0\end{vmatrix}=\left(\frac{F}{\widetilde{F}}\right)^{2}\left(\begin{vmatrix}\displaystyle\frac{\partial^{2}F}{\partial
y^{i}\partial y^{j}}&\dfrac{\partial\widetilde{F}}{\partial
y^{i}}\vspace{2mm}\\\ \dfrac{\partial\widetilde{F}}{\partial
y^{j}}&0\end{vmatrix}\right.$ $\displaystyle-$
$\displaystyle\sum_{\alpha=2}^{n}\left.\begin{vmatrix}\displaystyle\frac{\partial^{2}F}{\partial
y^{i}\partial y^{j}}&J(h_{\alpha})(\widetilde{F})h^{\alpha}_{i}\vspace{2mm}\\\
J(h_{\alpha})(\widetilde{F})h^{\alpha}_{j}&0\end{vmatrix}\right)\stackrel{{\scriptstyle(2.4)}}{{=}}\left(\frac{F}{\widetilde{F}}\right)^{2}\begin{vmatrix}\displaystyle\frac{\partial^{2}F}{\partial
y^{i}\partial y^{j}}&\dfrac{\partial\widetilde{F}}{\partial
y^{i}}\vspace{2mm}\\\ \dfrac{\partial\widetilde{F}}{\partial
y^{j}}&0\end{vmatrix}.$
We replace this in formula (2.3) to obtain (2.2) and complete the proof. ∎
For a Finsler metric $F$, the regularity condition iii) of Definition 2.1 can
be reformulated in terms of the Hilbert $1$-form $d_{J}F$. Since $d_{J}F$ is
$0^{+}$-homogeneous, we can view it as a $1$-form on $SM$. Due to the fact
that the Hilbert $2$-form can be expressed as
(2.5) $\displaystyle dd_{J}F=\frac{\partial^{2}F}{\partial y^{i}\partial
y^{j}}\delta y^{j}\wedge dx^{i}=\frac{1}{F}h_{ij}\delta y^{j}\wedge dx^{i},$
it follows that $d_{J}F$ is a contact form on $SM$ and hence the
$(2n-1)$-form $\omega_{SM}=d_{J}F\wedge(dd_{J}F)^{n-1}$ is a volume form on
$SM$.
### 2.2. Non-Riemannian structures in the Finslerian setting
The first non-Riemannian structures, associated to a Finsler metric $F$, are
the _Cartan torsion_ and the _mean Cartan torsion_ ,
$\displaystyle C_{ijk}=\frac{1}{4}\frac{\partial^{3}F^{2}}{\partial
y^{i}\partial y^{j}\partial y^{k}}=\frac{1}{2}\frac{\partial g_{ij}}{\partial
y^{k}},\quad I_{k}=g^{ij}C_{ijk}.$
A Finsler metric reduces to a Riemannian metric if and only if the (mean)
Cartan torsion vanishes.
The _curvature_ of the nonlinear connection determined by the geodesic spray
$G$ is defined by
$\displaystyle
R=\frac{1}{2}[h,h]=\frac{1}{2}R^{i}_{jk}\frac{\partial}{\partial y^{i}}\otimes
dx^{j}\wedge dx^{k}=\frac{1}{2}\left(\frac{\delta N^{i}_{j}}{\delta
x^{k}}-\frac{\delta N^{i}_{k}}{\delta x^{j}}\right)\frac{\partial}{\partial
y^{i}}\otimes dx^{j}\wedge dx^{k}.$
The canonical nonlinear connection provides a tensor derivation on $T_{0}M$,
the _dynamical covariant derivative_ $\nabla$, whose action on functions and
vector fields is given by [3, (21)]:
$\displaystyle\nabla(f)=G(f),\forall f\in C^{\infty}(TM),\quad\nabla
X=h[G,hX]+v[G,vX],\forall X\in{\mathfrak{X}}(TM).$
The geodesic spray $G$ of a Finsler metric induces a linear connection on
$T_{0}M$, the Berwald connection, [20, §8.1.1], with two curvature components,
the _Berwald curvature_ and the _Riemannian curvature_ , [17, §6.1, §8.1]:
$\displaystyle B^{i}_{jkl}=\frac{\partial^{3}G^{i}}{\partial y^{j}\partial
y^{k}\partial y^{l}},\quad R^{i}_{jkl}=\frac{\partial R^{i}_{kl}}{\partial
y^{j}}.$
The _mean Berwald curvature_ of a spray $G$ is defined as [17, Def. 6.1.2]
(2.6) $\displaystyle
E_{jk}=\frac{1}{2}B^{i}_{ijk}=\frac{1}{2}\frac{\partial^{3}G^{i}}{\partial
y^{i}\partial y^{j}\partial y^{k}}.$
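For a Riemannian metric, the spray coefficients
$G^{i}=\frac{1}{2}\gamma^{i}_{jk}(x)y^{j}y^{k}$ are quadratic in $y$, hence
$\displaystyle B^{i}_{jkl}=\frac{\partial^{3}G^{i}}{\partial y^{j}\partial
y^{k}\partial y^{l}}=0\quad\textrm{and}\quad E_{jk}=0,$
so the (mean) Berwald curvature measures the failure of the spray coefficients
to be quadratic in the fibre coordinates.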
###### Definition 2.3.
A Finsler metric has _scalar mean Berwald curvature_ if the mean Berwald
curvature is proportional to the angular metric, which means that there exists
a $0^{+}$-homogeneous function $f$ on $T_{0}M$ such that
(2.7) $\displaystyle
E_{ij}=\frac{1}{2}\frac{f}{F}h_{ij}=\frac{f}{2}\frac{\partial^{2}F}{\partial
y^{i}\partial y^{j}}.$
In [5], Chen and Shen study Finsler metrics of _isotropic mean Berwald
curvature_ , defined as in (2.7) but with $f$ a scalar function on the base
manifold $M$.
In the next lemma we prove that in dimension $n>2$, Finsler metrics of _scalar
mean Berwald curvature_ have isotropic mean Berwald curvature. In other words,
the scalar mean Berwald curvature $f$ is constant along the fibres of
$T_{0}M$.
###### Lemma 2.4.
Consider $F$ a Finsler metric of _scalar mean Berwald curvature_ $f$. If
$n>2$, then $f$ is constant along the fibres of $T_{0}M$.
###### Proof.
From the definition of the mean Berwald curvature (2.6) we obtain that its
vertical derivative is a $(0,3)$-type tensor symmetric in all three arguments,
therefore for a Finsler metric of _scalar mean Berwald curvature_ we have
$\displaystyle\frac{\partial E_{ij}}{\partial y^{k}}=\frac{\partial
E_{ik}}{\partial y^{j}}$
$\displaystyle\stackrel{{\scriptstyle(2.7)}}{{\Longrightarrow}}$
$\displaystyle\frac{\partial f}{\partial y^{k}}\frac{\partial^{2}F}{\partial
y^{i}\partial y^{j}}=\frac{\partial f}{\partial
y^{j}}\frac{\partial^{2}F}{\partial y^{i}\partial
y^{k}}\Longrightarrow\frac{\partial f}{\partial y^{k}}h_{ij}=\frac{\partial
f}{\partial y^{j}}h_{ik}$ $\displaystyle\Longrightarrow$
$\displaystyle\frac{\partial f}{\partial
y^{k}}\left(g_{ij}-\frac{1}{F^{2}}y_{i}y_{j}\right)=\frac{\partial f}{\partial
y^{j}}\left(g_{ik}-\frac{1}{F^{2}}y_{i}y_{k}\right).$
In the last formula above, we multiply with $g^{il}$, the inverse of the
metric tensor and obtain:
$\displaystyle\frac{\partial f}{\partial
y^{k}}\left(\delta^{l}_{j}-\frac{1}{F^{2}}y^{l}y_{j}\right)=\frac{\partial
f}{\partial y^{j}}\left(\delta^{l}_{k}-\frac{1}{F^{2}}y^{l}y_{k}\right).$
Now, we take the trace $j=l$ in the last formula: since $y^{l}y_{l}=F^{2}$,
the left hand side becomes $(n-1){\partial f}/{\partial y^{k}}$, while, by
Euler's theorem for the $0^{+}$-homogeneous function $f$,
$y^{j}{\partial f}/{\partial y^{j}}=0$, the right hand side reduces to
${\partial f}/{\partial y^{k}}$. Hence $(n-2){\partial f}/{\partial y^{k}}=0$
and, since $n>2$, the function $f$ is constant along the fibres of $T_{0}M$. ∎
Due to the $2^{+}$-homogeneity of the spray coefficients $G^{i}$, it follows
that $E_{ij}y^{j}=0$, hence $\operatorname{rank}(E_{ij})\leq n-1$. In the
$2$-dimensional case, we obtain that the mean Berwald curvature has rank $1$,
it is proportional to the angular metric $h_{ij}$ (of rank $1$ as well), and
hence all $2$-dimensional Finsler manifolds have scalar mean Berwald
curvature. The proportionality factor has been known since Berwald, [1,
(8.7)], but it has been shown only recently that it is a first integral for
$k$-basic Finsler surfaces, [7, Theorem B].
The Berwald connection is not a metric connection with respect to the metric
tensor of a Finsler structure. Due to this non-metricity property of the
Berwald connection, it follows that the $(0,4)$-type Riemann curvature tensor
$R_{ijkl}=g_{is}R^{s}_{jkl}$ is not skew-symmetric in the first two indices,
[17, (10.6)], and hence, in general, $R^{i}_{ikl}\neq 0$. A measure of this
failure is given by the $\chi$-curvature, [19, Lemma 3.1]:
$\displaystyle\chi_{j}=-\frac{1}{2}R^{i}_{ijk}y^{k}.$
This non-Riemannian quantity was introduced by Shen in [18].
The key ingredients we will use in this work are the $\chi$-curvature, the
mean Berwald curvature, and the fact that both curvature tensors can be
expressed in terms of yet another non-Riemannian quantity, the $S$-function.
For a fixed vertically invariant volume form $\sigma(x)dx^{1}\wedge
dx^{2}\wedge\cdots\wedge dx^{n}\wedge dy^{1}\wedge dy^{2}\wedge\cdots\wedge
dy^{n}$ on $TM$, [20, p. 490], we consider the Shen-function ($S$-_function_)
and the _distortion_ $\tau$, [17, §5.2],
(2.8) $\displaystyle S=G\left(\tau\right),\quad\tau=\frac{1}{2}\ln{\frac{\det
g}{\sigma}}.$
From the various expressions of the $\chi$-curvature, we will use its
expression in terms of the $S$-function, [18, (1.10)],
(2.9)
$\displaystyle\chi=\frac{1}{2}\delta_{G}S=\frac{1}{2}\left\\{\nabla\left(\frac{\partial
S}{\partial y^{i}}\right)-\frac{\delta S}{\delta x^{i}}\right\\}dx^{i}.$
The mean Berwald curvature can also be expressed in terms of the $S$-function
as follows, [17, (6.13)]:
(2.10) $\displaystyle E_{ij}=\frac{1}{2}\frac{\partial^{2}S}{\partial
y^{i}\partial y^{j}}.$
In view of formula (2.10), the second assumption of Theorem 1.1 or 1.2
ensures that the vertical Hessian of the $S$-function has maximal rank
$n-1$. Therefore, we can interpret the $S$-function as a Finsler metric in
its own right.
## 3\. Proof of Theorem 1.1
For the proof of Theorem 1.1, we proceed with the following steps. We show
first that the two assumptions of Theorem 1.1 assure that the $S$-function is
a Finsler metric, projectively related to $F$. Then, we obtain the first
integral (1.1) using the Painlevé first integral associated to the two
projectively related Finsler metrics $S$ and $F$.
Two Finsler metrics $F$ and $\widetilde{F}$ are projectively related if their
geodesic sprays $G$ and $\widetilde{G}$ are projectively related. One can
characterise projective equivalence of two Finsler metrics $F$ and
$\widetilde{F}$ using the following equivalent forms of Rapcsák equations,
[20, §9.2.3]:
* ($R_{1}$)
$\delta_{G}\widetilde{F}=0$;
* ($R_{2}$)
$d_{h}d_{J}\widetilde{F}=0$.
In Riemannian geometry, Topalov and Matveev [21, Theorem 1] associate to each
pair of geodesically equivalent metrics a set of $n$ first integrals. An
extension of this result, to the Finslerian setting, has been proposed by
Sarlet in [16] and his Ph.D. student Vermeire [23].
In the next lemma, we show that two projectively related Finsler metrics $F$
and $\widetilde{F}$ induce a first integral (Painlevé first integral). This
first integral, given by formula (3.1), is the Finslerian extension of the
first integral determined by two projectively equivalent Riemannian metrics,
[13, Theorem 2].
###### Lemma 3.1.
Consider $F$ and $\widetilde{F}$, two projectively related Finsler metrics.
Then,
(3.1) $\displaystyle I_{0}=\frac{\widetilde{F}}{F}\left(\frac{\det
g}{\det\widetilde{g}}\right)^{\frac{1}{n+1}}$
is a first integral for $F$.
###### Proof.
For a Finsler metric $F$, the dynamical covariant derivative of its metric
tensor vanishes, [2], hence:
$\displaystyle\nabla(g_{ij})=G(g_{ij})-g_{im}N_{j}^{m}-g_{mj}N_{i}^{m}=0.$
Contracting with $g^{ij}$, we obtain
$\displaystyle
g^{ij}G(g_{ij})=g^{ij}(g_{im}N_{j}^{m}+g_{mj}N_{i}^{m})=2N_{i}^{i},\quad\textrm{and
\ hence \ }N_{i}^{i}=\frac{1}{2}G(\ln(\det g)).$
The two Finsler metrics $F$ and $\widetilde{F}$ being projectively related,
their geodesic sprays and nonlinear connections are connected through
$\widetilde{G}=G-2P\mathcal{C},\quad\widetilde{G}^{i}=G^{i}+Py^{i},\quad\widetilde{N}^{i}_{j}=N^{i}_{j}+\frac{\partial
P}{\partial y^{j}}y^{i}+P\delta^{i}_{j}.$
If in the last formula above we take the trace $i=j$, it follows that the
projective factor $P$ is given by
$\displaystyle
P=\frac{1}{n+1}(\widetilde{N}^{i}_{i}-N_{i}^{i})=\frac{1}{2(n+1)}G\left(\ln\left(\frac{\det\widetilde{g}}{\det
g}\right)\right).$
We also use the alternative expression of the projective factor $P$,
$\displaystyle
P=\frac{G(\widetilde{F})}{2\widetilde{F}}=\frac{1}{2}G\left(\ln\widetilde{F}\right).$
By comparing the two expressions of the projective factor $P$, and using that
$G(F)=0$, we obtain
$\displaystyle G(\ln I_{0})=G(\ln\widetilde{F})-G(\ln
F)+\frac{1}{n+1}G\left(\ln\frac{\det g}{\det\widetilde{g}}\right)=2P-0-2P=0,$
hence $G(I_{0})=0$, which concludes the proof of our lemma. ∎
We now give the proof of Theorem 1.1. The second assumption ii) of Theorem
1.1, together with formula (2.10), assures that the angular metric of the
$S$-function has rank $n-1$ and therefore $S$ is a Finsler metric. The
vanishing of the $\chi$-curvature (2.9) assures that the Finsler metric $S$ is
projectively related to $F$. In view of Lemma 3.1 we obtain that
$\displaystyle I_{0}=\frac{S}{F}\left(\frac{\det g}{\det
s}\right)^{\frac{1}{n+1}}$
is a first integral for the Finsler metric $F$.
We will use Lemma 2.2 for the Finsler metric $S$ and the $1^{+}$-homogeneous
function $F$. According to formula (2.2), we can express the determinant of
the metric tensor $s_{ij}$ as follows:
$\displaystyle\det
s=-\frac{S^{n+1}}{F^{2}}\begin{vmatrix}\displaystyle\frac{\partial^{2}S}{\partial
y^{i}\partial y^{j}}&\dfrac{\partial F}{\partial y^{i}}\vspace{2mm}\\\
\dfrac{\partial F}{\partial
y^{j}}&0\end{vmatrix}\stackrel{{\scriptstyle(2.10)}}{{=}}-\frac{S^{n+1}}{F^{2}}\begin{vmatrix}2E_{ij}&\dfrac{\partial
F}{\partial y^{i}}\vspace{2mm}\\\ \dfrac{\partial F}{\partial
y^{j}}&0\end{vmatrix}=-\frac{S^{n+1}}{F^{n+1}}\begin{vmatrix}2FE_{ij}&\dfrac{\partial
F}{\partial y^{i}}\vspace{2mm}\\\ \dfrac{\partial F}{\partial
y^{j}}&0\end{vmatrix}..$
Since $I_{0}$ is a first integral for the Finsler metric $F$, it follows that
$\displaystyle\frac{1}{I_{0}^{n+1}}=\frac{F^{n+1}}{S^{n+1}}\frac{\det s}{\det
g}=\frac{-1}{\det g}\begin{vmatrix}2FE_{ij}&\dfrac{\partial F}{\partial
y^{i}}\vspace{2mm}\\\ \dfrac{\partial F}{\partial y^{j}}&0\end{vmatrix}$
is also a first integral for $F$ that coincides with $\lambda$ given by
formula (1.1).
The first two assumptions of Theorem 1.1 tell us that $S$ is a Finsler metric
projectively related to $F$. One can use this, together with [21, Theorem 1]
and [16, Theorem 2], to provide a set of $n$ first integrals for Finsler
metrics with vanishing $\chi$-curvature and mean Berwald curvature of maximal
rank.
## 4\. Proof of Theorem 1.2
### 4.1. Partial proof of Theorem 1.2
First we prove the first two conclusions of Theorem 1.2, using Theorem 1.1.
For this proof it is essential that the scalar mean Berwald curvature $f$ is
nowhere vanishing, hence we cannot reach the third conclusion of Theorem 1.2
using these techniques.
In view of the equivalence of the two Rapcsák equations $R_{1}$ and $R_{2}$,
we can reformulate the vanishing of the $\chi$-curvature (2.9) as
$d_{h}d_{J}S=0$. Using also the assumption that $F$ has scalar mean Berwald
curvature, we obtain that the Hilbert $2$-form of the $S$-function can be
written as follows
(4.1) $\displaystyle dd_{J}S$ $\displaystyle=$ $\displaystyle
d_{v}d_{J}S=\dfrac{\partial^{2}S}{\partial y^{i}\partial y^{j}}\delta
y^{i}\wedge dx^{j}=2E_{ij}\delta y^{i}\wedge dx^{j}$ $\displaystyle=$
$\displaystyle f\dfrac{\partial^{2}F}{\partial y^{i}\partial y^{j}}\delta
y^{i}\wedge dx^{j}=fdd_{J}F.$
For a non-vanishing scalar mean Berwald curvature $f$, it follows from (4.1)
that $\operatorname{rank}\left(\dfrac{\partial^{2}S}{\partial y^{i}\partial
y^{j}}\right)=n-1$ and hence $S$ is a Finsler metric.
We now express the first integral $\lambda$ of (1.1) using the assumption
that $F$ has scalar mean Berwald curvature $f$. We have
(4.2) $\displaystyle\lambda=\frac{-1}{\det
g}\begin{vmatrix}2FE_{ij}&\displaystyle\frac{\partial F}{\partial
y^{i}}\vspace{2mm}\\\ \displaystyle\frac{\partial F}{\partial
y^{j}}&0\end{vmatrix}=\frac{-F^{n-1}}{\det
g}\begin{vmatrix}f\dfrac{\partial^{2}F}{\partial y^{i}\partial
y^{j}}&\displaystyle\frac{\partial F}{\partial y^{i}}\vspace{2mm}\\\
\displaystyle\frac{\partial F}{\partial
y^{j}}&0\end{vmatrix}\stackrel{{\scriptstyle(2.3)}}{{=}}f^{n-1}.$
Since $\lambda$ is a first integral, it follows that $f$ is also a first
integral for $F$, and this proves the first conclusion of Theorem 1.2.
According to Lemma 2.4, the scalar mean Berwald curvature $f$ is a scalar
function on $M$, which means that $d_{J}f=0$. We use now that $G(f)=0$, which
can be written as $\nabla f=0$. If we apply $d_{J}$ to this formula and use
the commutation rule for $\nabla$ and $d_{J}$, [4, (2.11)], we obtain
$\displaystyle 0=d_{J}\nabla f=\nabla d_{J}f+d_{h}f.$
Therefore, $d_{h}f=0$ and hence $f$ is a constant, which proves the second
conclusion of Theorem 1.2.
### 4.2. Complete proof of Theorem 1.2
In this section we present a proof of Theorem 1.2, independent of the results
of Theorem 1.1, obtained by extending the techniques of [7] to the
$n$-dimensional case. This method allows us to obtain more information about
the first integral when the base manifold is compact.
The mean Cartan torsion can be expressed in terms of the distortion $\tau$,
and it does not depend on the fixed volume form on $M$,
$\displaystyle I_{k}=\frac{1}{2}g^{ij}\frac{\partial g_{ij}}{\partial
y^{k}}=\frac{\partial}{\partial y^{k}}(\ln\sqrt{\det
g})=\frac{\partial\tau}{\partial y^{k}},\ I=I_{k}dx^{k}=d_{J}(\ln\sqrt{\det
g})=d_{J}\tau.$
The key ingredient in this proof is the following $1$-form
(4.3) $\displaystyle\alpha$ $\displaystyle=$ $\displaystyle
i_{[J,G]}{\mathcal{L}}_{G}I=\nabla I_{k}dx^{k}-I_{k}\delta y^{k}=\nabla
d_{J}\tau-d_{v}\tau$ $\displaystyle=$ $\displaystyle d_{J}\nabla\tau-
d_{h}\tau-d_{v}\tau=d_{J}\nabla\tau-d\tau=d_{J}S-d\tau.$
In the $2$-dimensional case, this form reduces to the form $\alpha$ from [7,
§2].
We will use the last expression of the form $\alpha$ above to connect it
with the $\chi$-curvature:
(4.4)
$\displaystyle{\mathcal{L}}_{G}\alpha={\mathcal{L}}_{G}d_{J}S-{\mathcal{L}}_{G}d\tau={\mathcal{L}}_{G}d_{J}S-dS=\delta_{G}S=2\chi.$
In view of this formula, the $\chi$-curvature vanishes if and only if the form
$\alpha$ is invariant by the geodesic flow. Moreover, the $\chi$-curvature
vanishes if and only if the $S$-function satisfies the Rapcsák equation
$\delta_{G}S=0$, which is equivalent to $d_{h}d_{J}S=0$.
Therefore, we can express the $2$-form $d\alpha$ as follows:
$\displaystyle
d\alpha=dd_{J}S=d_{h}d_{J}S+d_{v}d_{J}S=\frac{\partial^{2}S}{\partial
y^{i}\partial y^{j}}\delta y^{i}\wedge dx^{j}=2E_{ij}\delta y^{i}\wedge
dx^{j}.$
If we consider now the assumption that the Finsler metric has scalar mean
Berwald curvature, then the $2$-form $d\alpha$ is proportional to the Hilbert
$2$-form $dd_{J}F$:
(4.5) $\displaystyle d\alpha=2E_{ij}\delta y^{i}\wedge
dx^{j}=\frac{f}{F}h_{ij}\delta y^{i}\wedge dx^{j}=fdd_{J}F.$
From formula (4.4) we obtain that $\chi=0$ implies ${\mathcal{L}}_{G}\alpha=0$
and therefore ${\mathcal{L}}_{G}d\alpha=0$. In view of formula (4.5) and using
the fact that ${\mathcal{L}}_{G}dd_{J}F=0$ we obtain $G(f)=0$, which means
that the scalar mean Berwald curvature $f$ is a first integral for the
geodesic flow $G$.
Using Lemma 2.4 we obtain that the scalar mean Berwald curvature $f$ is a
scalar function on $M$, hence $df=d_{h}f$. From formula (4.5), we obtain
$\displaystyle 0=d^{2}\alpha=df\wedge dd_{J}F=d_{h}f\wedge
d_{v}d_{J}F=\frac{1}{2}\left(\frac{\partial f}{\partial
x^{i}}h_{kj}-\frac{\partial f}{\partial x^{j}}h_{ki}\right)dx^{i}\wedge
dx^{j}\wedge\delta y^{k}.$
It follows that
$\displaystyle\frac{\partial f}{\partial x^{i}}h_{kj}=\frac{\partial
f}{\partial x^{j}}h_{ki}\Longrightarrow\frac{\partial f}{\partial
x^{i}}\left(g_{kj}-\frac{1}{F^{2}}y_{k}y_{j}\right)=\frac{\partial f}{\partial
x^{j}}\left(g_{ki}-\frac{1}{F^{2}}y_{k}y_{i}\right).$
In the last formula above, we multiply with $g^{il}$ and obtain:
$\displaystyle\frac{\partial f}{\partial
x^{i}}\left(\delta^{l}_{j}-\frac{1}{F^{2}}y^{l}y_{j}\right)=\frac{\partial
f}{\partial x^{j}}\left(\delta^{l}_{i}-\frac{1}{F^{2}}y^{l}y_{i}\right).$
If we take the trace $l=j$, we obtain
$\displaystyle(n-2)\frac{\partial f}{\partial
x^{i}}=-\frac{1}{F^{2}}G(f)y_{i}.$
Now using that $G(f)=0$, we obtain that the scalar function $f$ is constant if
$\dim M>2$.
To complete the proof of Theorem 1.2, we need the following lemma, which
gives new properties of the first integral $f$ and can be useful for
rigidity results.
###### Lemma 4.1.
Let $(M,F)$ be a compact Finsler manifold with vanishing $\chi$-curvature and
of scalar mean Berwald curvature $f$. Then,
(4.6) $\displaystyle\int_{SM}f\omega_{SM}=0.$
###### Proof.
By Stokes' theorem we have that
$\displaystyle 0=\int_{SM}d\left(\alpha\wedge
d_{J}F\wedge(dd_{J}F)^{n-2}\right)=\int_{SM}d\alpha\wedge
d_{J}F\wedge(dd_{J}F)^{n-2}-\int_{SM}\alpha\wedge(dd_{J}F)^{n-1}.$
We will prove now that on $SM$, $\alpha\wedge(dd_{J}F)^{n-1}=0$.
Let $\lambda_{1},...,\lambda_{n-1}$ be the non-zero eigenvalues of the angular
metric $h_{ij}$, $h_{1},...,h_{n-1}$ the corresponding horizontal eigenvectors
and $v_{i}=Jh_{i}$, $i\in\\{1,...,n-1\\}$. Then,
$\\{h_{1},...,h_{n-1},v_{1},...,v_{n-1}\\}$ is a local frame of the
$(2n-2)$-dimensional distribution $\operatorname{Ker}(d_{J}F)$ on $SM$. We
consider also the local co-frame $\\{h^{1},...,h^{n-1},v^{1},...,v^{n-1}\\}$.
Using the expression (2.5) of the Hilbert $2$-form, $dd_{J}F$, it follows that
$\displaystyle(dd_{J}F)^{n-1}=\lambda_{1}\cdots\lambda_{n-1}\,h^{1}\wedge\cdots\wedge
h^{n-1}\wedge v^{1}\wedge\cdots\wedge v^{n-1}.$
Since $i_{G}\alpha=0$, the $1$-form $\alpha$ lies in the span of the co-frame
$\\{h^{1},...,h^{n-1},v^{1},...,v^{n-1}\\}$, and hence
$\alpha\wedge(dd_{J}F)^{n-1}=0$ on $SM$. Now, using (4.5), we
obtain
$\displaystyle 0=\int_{SM}d\alpha\wedge
d_{J}F\wedge(dd_{J}F)^{n-2}=\int_{SM}fdd_{J}F\wedge
d_{J}F\wedge(dd_{J}F)^{n-2}=\int_{SM}f\omega_{SM}.$
∎
If $\dim M>2$ then $f$ is constant and using formula (4.6) we obtain that
$f=0$, which completes the proof of Theorem 1.2.
Existence of first integrals for Finsler manifolds can be used to provide
rigidity results under some topological restrictions:
* •
compact surface, without conjugate points and of genus greater than one, [7,
Theorem A];
* •
compact manifold, without conjugate points and of uniform visibility, for
dimension $n>2$, [6, Theorem A].
If $M$ is a compact manifold of dimension $n>2$, with vanishing
$\chi$-curvature and of scalar mean Berwald curvature $f$, we obtain that
$f=0$. Using formula (4.5), it follows that the form $\alpha$ is closed.
Under the assumptions of [6, Theorem A] we can conclude that the form
$\alpha$ is exact. Assume $\alpha=dh$, for some function $h$ on $T_{0}M$.
Since $i_{G}\alpha=0$, it follows that $G(h)=0$, so $h$ is a first integral
for the geodesic flow. Using again [6, Theorem A], we obtain that the
function $h$ is constant, and hence $\alpha=0$. The expression of $\alpha$ in
terms of the mean Cartan torsion then allows us to conclude that $I=0$, and
hence $(M,F)$ is a Riemannian manifold.
### Acknowledgments
We express our thanks to József Szilasi for his comments and suggestions on
this work.
## References
* [1] Berwald, L.: _On Finsler and Cartan geometries. III. Two-dimensional Finsler spaces with rectilinear extremals_ , Ann. of Math., 42 (1) (1941), 84–112.
* [2] Bucataru, I.: _Metric nonlinear connections_ , Differential Geom. Appl., 25(3) (2007), 335–343.
* [3] Bucataru, I., Dahl, M.: _Semi-basic 1-forms and Helmholtz conditions for the inverse problem of the calculus of variations_ , J. Geom. Mech., 1(2) (2009), 159–180.
* [4] Bucataru, I., Constantinescu, O.: _Generalized Helmholtz conditions for non-conservative Lagrangian systems_ , Math. Phys. Anal. Geom., 18 (1) (2015), Art. 25, 24 pp.
* [5] Chen, X., Shen, Z.: _On Douglas metrics_ , Publ. Math. Debrecen, 66(3-4) (2005), 503–512.
* [6] Chimentona, A.G., Gomes, J.B., Ruggiero, R.O.: _Gromov-hyperbolicity and transitivity of geodesic flows in n-dimensional Finsler manifolds_ , Differential Geom. Appl., 68 (2020), 101588.
* [7] Foulon, P., Ruggiero, R.O.: _A first integral for $C^{\infty}$, $k$-basic Finsler surfaces and applications to rigidity_, Proc. Amer. Math. Soc.,144 (9) (2016), 3847–3858.
* [8] Grifone, J.: _Structure presque-tangente et connexions I_ , Ann. Inst. Fourier, 22 (1972), 287–334.
* [9] Grifone, J., Muzsnay, Z.: _Variational Principles For Second-Order Differential Equations_ , World Scientific, 2000.
* [10] Li, B., Shen, Z.: _Ricci curvature tensor and non-Riemannian quantities_ , Canad. Math. Bull. 58(3) (2015), 530–537.
* [11] Li, B., Shen, Z.: _Sprays of isotropic curvature_ , Int. J. Math., 29 (1) (2018), 1850003, 12pp.
* [12] Matsumoto, M.: _Foundations of Finsler geometry and special Finsler spaces_ , Kaiseisha Press, 1986.
* [13] Matveev, V.: _Geometric explanation of the Beltrami theorem_ , Int. J. Geom. Methods Mod. Phys., 3 (3) (2006), 623–629.
* [14] Mo, X.: _On the non-Riemannian quantity H of a Finsler metric_ , Differential Geom. Appl., 27 (1) (2009), 7–14.
* [15] Rund, H.: _The differential geometry of Finsler spaces_ , Springer, 1959.
* [16] Sarlet, W.: _A recursive scheme of first integrals of the geodesic flow of a Finsler manifold_ , SIGMA Symmetry Integrability Geom. Methods Appl., 3 (2007), Paper 024, 9 pp.
* [17] Shen, Z.: _Differential geometry of spray and Finsler spaces_ , Springer, 2001.
* [18] Shen, Z.: _On some non-Riemannian quantities in Finsler geometry_ , Canad. Math. Bull., 56 (1) (2013), 184–193.
* [19] Shen, Z.: _On sprays with vanishing $\chi$-curvature_ , arXiv:2008.07732.
* [20] Szilasi, J., Lovas, R., Kertész, D.: _Connections, sprays and Finsler structures_ , World Scientific, 2014.
* [21] Topalov, P., Matveev, V.S.: _Geodesic equivalence via integrability_ , Geom. Dedicata, 96 (2003), 91–115.
* [22] Youssef, N.L.: _Semi-projective changes_ , Tensor (N.S.), 55 (1994), 131–141.
* [23] Vermeire, F.: _A class of recursion operators on a tangent bundle_ , Ph.D thesis, University of Gent, 2006.
# Proper Biharmonic Maps and $(2,1)$-Harmonic Morphisms from Some Wild
Geometries
Elsa Ghandour, Mathematics, Faculty of Science, University of Lund, Box 118,
Lund 221, Sweden,<EMAIL_ADDRESS>
Sigmundur Gudmundsson, Mathematics, Faculty of Science, University of Lund,
Box 118, Lund 221, Sweden,<EMAIL_ADDRESS>
###### Abstract.
In this work we construct a variety of new complex-valued proper biharmonic
maps and $(2,1)$-harmonic morphisms on Riemannian manifolds with non-trivial
geometry. These are solutions to a non-linear system of partial differential
equations depending on the geometric data of the manifolds involved.
###### Key words and phrases:
Lie groups, conformal foliations, minimal foliations, harmonic morphisms
###### 2020 Mathematics Subject Classification:
53C30, 53C43, 58E20
## 1\. Introduction
The concept of a harmonic morphism $\phi:(M,g)\to(N,h)$, between Riemannian
manifolds, was introduced by Fuglede and Ishihara in the late 1970s
independently, see [2] and [6]. These are maps pulling back local real-valued
harmonic functions on $N$ to harmonic functions on $M$. These objects have an
interesting connection to the geometry of the manifolds involved and have led
to vibrant research activities, as can be traced in the excellent work [1] by
Baird and Wood, and the regularly updated online bibliography [5], maintained
by the second author.
Recently, the notion was generalised to $(p,q)$-harmonic morphisms, pulling
back real-valued $q$-harmonic functions on $N$ to $p$-harmonic functions on
$M$, see [3]. The case $(p,q)=(2,1)$ had earlier been studied in [4] under the
name of generalised harmonic morphisms. In [3], the authors characterise complex-
valued $(p,q)$-harmonic morphisms $\phi:(M,g)\to{\mathbb{C}}$ in terms of a
heavily non-linear system of partial differential equations. They also provide
methods for producing explicit solutions in the case when the domain $(M,g)$
is the $m$-dimensional Euclidean space.
The principal aim of this work is to extend the study to complex-valued
$(2,1)$-harmonic morphisms from Riemannian manifolds $(M,g)$. We model our
manifolds $M$ as open subsets of ${\mathbb{R}}^{m}$, equipped with a
Riemannian metric $g$ of a particular form, see Section 3. We then investigate
when the natural projection $\Phi:(M,g)\to{\mathbb{C}}$ onto the first two
coordinates is horizontally conformal, harmonic and even biharmonic. This
leads to a non-linear system of partial differential equations involving the
geometric data on $(M,g)$. We then find several explicit solutions and thereby
construct metrics $g$ turning the projection $\Phi$ into a proper biharmonic
map and even a proper $(2,1)$-harmonic morphism. By this we provide the first
known complex-valued $(2,1)$-harmonic morphisms from Riemannian manifolds with
non-trivial geometry.
## 2\. Preliminaries
Let $(M,g)$ be an $m$-dimensional Riemannian manifold and $T^{{\mathbb{C}}}M$
be the complexification of the tangent bundle $TM$ of $M$. We extend the
metric $g$ to a complex-bilinear form on $T^{{\mathbb{C}}}M$. Then the
gradient $\nabla\phi$ of a complex-valued function $\phi:(M,g)\to{\mathbb{C}}$
is a section of $T^{{\mathbb{C}}}M$. In this situation, the well-known complex
linear Laplace-Beltrami operator (alt. tension field) $\tau$ on $(M,g)$ acts
locally on $\phi$ as follows
$\tau(\phi)=\operatorname{div}(\nabla\phi)=\sum_{i,j=1}^{m}\frac{1}{\sqrt{|g|}}\frac{\partial}{\partial
x_{j}}\left(g^{ij}\,\sqrt{|g|}\,\frac{\partial\phi}{\partial x_{i}}\right).$
For two complex-valued functions $\phi,\psi:(M,g)\to{\mathbb{C}}$ we have the
following well-known relation
$\tau(\phi\cdot\psi)=\tau(\phi)\cdot\psi+2\cdot\kappa(\phi,\psi)+\phi\cdot\tau(\psi),$ (2.1)
where the complex bilinear conformality operator $\kappa$ is given by
$\kappa(\phi,\psi)=g(\nabla\phi,\nabla\psi)$. Locally this satisfies
$\kappa(\phi,\psi)=\sum_{i,j=1}^{m}g^{ij}\cdot\frac{\partial\phi}{\partial
x_{i}}\frac{\partial\psi}{\partial x_{j}}.$
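As a small illustration, the relation (2.1) can be verified symbolically in the simplest setting, the Euclidean plane, where $\tau$ is the flat Laplacian and $\kappa(\phi,\psi)=\phi_{x}\psi_{x}+\phi_{y}\psi_{y}$. The sketch below assumes the SymPy library; the test functions are arbitrary choices, not taken from the text.

```python
# Symbolic sanity check of the product rule (2.1) in the Euclidean plane,
# where tau is the flat Laplacian and kappa is the usual gradient pairing.
import sympy as sp

x, y = sp.symbols('x y', real=True)

def tau(u):
    return sp.diff(u, x, 2) + sp.diff(u, y, 2)

def kappa(u, v):
    return sp.diff(u, x) * sp.diff(v, x) + sp.diff(u, y) * sp.diff(v, y)

# two arbitrary complex-valued test functions
phi = sp.exp(x) * sp.cos(y) + sp.I * x * y
psi = x**3 - 3 * x * y**2 + sp.I * sp.sin(x + y)

identity = tau(phi * psi) - (tau(phi) * psi + 2 * kappa(phi, psi) + phi * tau(psi))
assert sp.simplify(identity) == 0
```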
We are now ready to define the complex-valued proper $p$-harmonic functions.
###### Definition 2.1.
For a positive integer $p$, the iterated Laplace-Beltrami operator $\tau^{p}$
is given by
$\tau^{0}(\phi)=\phi\ \ \text{and}\ \
\tau^{p}(\phi)=\tau(\tau^{(p-1)}(\phi)).$
We say that a complex-valued function $\phi:(M,g)\to{\mathbb{C}}$ is
1. (a)
$p$-harmonic if $\tau^{p}(\phi)=0$, and
2. (b)
proper $p$-harmonic if $\tau^{p}(\phi)=0$ and $\tau^{(p-1)}(\phi)$ does not
vanish identically.
We now introduce the natural notion of a $(p,q)$-harmonic morphism. For
$(p,q)=(1,1)$ this is the classical case of harmonic morphisms introduced by
Fuglede and Ishihara, in [2] and [6], independently.
###### Definition 2.2.
A map $\phi:(M,g)\to(N,h)$ between Riemannian manifolds is said to be a
$(p,q)$-harmonic morphism if, for any $q$-harmonic function $f:U\subset
N\to{\mathbb{R}}$, defined on an open subset $U$ such that $\phi^{-1}(U)$ is
not empty, the composition $f\circ\phi:\phi^{-1}(U)\subset M\to{\mathbb{R}}$
is $p$-harmonic.
As an immediate consequence of Definition 2.2 we have the following natural
composition law.
###### Lemma 2.3.
Let $\phi:(M,g)\to(\bar{N},\bar{h})$ be a $(p,r)$-harmonic morphism between
Riemannian manifolds. If $\psi:(\bar{N},\bar{h})\to(N,h)$ is an
$(r,q)$-harmonic morphism then the composition $\psi\circ\phi:(M,g)\to(N,h)$
is a $(p,q)$-harmonic morphism.
## 3\. Some Rather Wild Geometries
The aim of this section is to describe a particular collection of Riemannian
manifolds $(M,g)$ investigated in this work. We present formulae for their
sectional curvatures to show that their geometry is far from trivial.
We then give two concrete examples that turn out to be useful later on.
For an open subset $M$ of ${\mathbb{R}}^{m}$, let
$\lambda,\lambda_{1},\dots,\lambda_{m}:M\to{\mathbb{R}}$ be $C^{3}$-functions
such that $\lambda=\lambda_{1}=\lambda_{2}$ and equip the manifold $M$ with
the Riemannian metric $g$ of the special form
$g=e^{-2\lambda}(dx^{2}+dy^{2})+e^{-2\lambda_{3}}dx_{3}^{2}+\cdots+e^{-2\lambda_{m}}dx_{m}^{2}.$
For our purposes it is practical to introduce the function
$f:M\to{\mathbb{R}}$ with
$f({\bf x})=\sum_{k=3}^{m}\lambda_{k}({\bf x}).$
For the tangent bundle $TM$ of $(M,g)$ we have the following global
orthonormal frame
$\left\\{X_{1}=e^{\lambda_{1}}\cdot\frac{\partial}{\partial
x_{1}},\,\dots\,,X_{m}=e^{\lambda_{m}}\cdot\frac{\partial}{\partial
x_{m}}\right\\}$
When appropriate, we shall by ${\bf x}=(x,y,x_{3},\dots,x_{m})$ denote the
canonical coordinates $(x_{1},\dots,x_{m})$ on ${\mathbb{R}}^{m}$ and set
$X=X_{1}$, $Y=X_{2}$. The Lie brackets for $TM$ satisfy
$[X_{j},X_{k}]=e^{\lambda_{j}}\,(\lambda_{k})_{x_{j}}X_{k}-e^{\lambda_{k}}\,(\lambda_{j})_{x_{k}}X_{j},$
where the subscript $x_{j}$ means the partial derivative with respect to the
j-th coordinate function. A standard computation shows that for the sectional
curvature $K(X_{j}\wedge X_{k})$ of the $2$-plane $X_{j}\wedge X_{k}$ we have
$\displaystyle K(X_{j}\wedge X_{k})$ $\displaystyle=$ $\displaystyle
e^{2\lambda_{j}}[(\lambda_{j})_{x_{j}}(\lambda_{k})_{x_{j}}+(\lambda_{k})_{x_{j}x_{j}}-(\lambda_{k})_{x_{j}}^{2}]$
$\displaystyle+\,e^{2\lambda_{k}}[(\lambda_{k})_{x_{k}}(\lambda_{j})_{x_{k}}+(\lambda_{j})_{x_{k}x_{k}}-(\lambda_{j})_{x_{k}}^{2}]$
$\displaystyle-\sum_{r\notin\\{j,k\\}}e^{2\lambda_{r}}(\lambda_{k})_{x_{r}}(\lambda_{j})_{x_{r}}.$
In particular, for the horizontal section $X\wedge Y$ we have
$\displaystyle K(X\wedge Y)$ $\displaystyle=$ $\displaystyle
e^{2\lambda}(\lambda_{xx}+\lambda_{yy})-\sum_{k=3}^{m}e^{2\lambda_{k}}\lambda_{x_{k}}^{2}.$
Let $\Phi:({\mathbb{R}}^{m},g)\to{\mathbb{C}}$ be the horizontally conformal
submersion
$\Phi:{\bf x}\mapsto(x+iy)\cong(x\cdot\text{\bf e}_{1}+y\cdot\text{\bf
e}_{2})$
with dilation $e^{\lambda}:{\mathbb{R}}^{m}\to{\mathbb{R}}^{+}$. For the
tangent bundle $TM$ we have the following orthogonal decomposition
$TM=\mathcal{H}\oplus\mathcal{V}$ into its horizontal and vertical subbundles
$\mathcal{H}$ and $\mathcal{V}$, respectively, where
$\mathcal{H}=\text{span}\\{X,Y\\}\ \ \text{and}\ \
\mathcal{V}=\text{span}\\{X_{3},\dots,X_{m}\\}.$
###### Definition 3.1.
For an open subset $M$ of ${\mathbb{R}}^{m}$ we denote by $\Omega(M)$ the set
of $C^{3}$-functions $\omega:M\to{\mathbb{R}}$ which are independent of the
first two coordinates $(x,y)$ of ${\bf x}=(x,y,x_{3},\dots,x_{m})$, i.e.
$\Omega(M)=\\{\omega\in C^{3}(M,{\mathbb{R}})|\ \omega_{x}=\omega_{y}=0\\}.$
We now present two Riemannian manifolds $(M,g)$ with non-trivial geometry.
Later in this work, we show that the complex-valued function
$\Phi:(M,g)\to{\mathbb{C}}$ is proper biharmonic in these and other similar
cases.
###### Example 3.2.
For an open subset $M$ of ${\mathbb{R}}^{3}$, constants
$A,B,\theta\in{\mathbb{R}}$ and $\alpha\in\Omega(M)$ let the functions
$\lambda,f:M\to{\mathbb{R}}$ be defined by
$\lambda(x,y,z)=\alpha(z),\ \ f(x,y,z)=\log(1+\tan(\Theta)^{2}),$
where
$\Theta(x,y)=A\cdot(\cos\theta\cdot x+\sin\theta\cdot y)+B.$
Then equip $M$ with the Riemannian metric $g$ given by
$g=e^{-2\lambda}(dx^{2}+dy^{2})+e^{-2f}dz^{2}.$
Then the sectional curvature function $K$ of the manifold $(M^{3},g)$
satisfies
$K(X\wedge Y)=-\frac{\lambda_{z}^{2}}{\cos^{4}(\Theta)},$ $K(X\wedge
Z)=\frac{\lambda_{zz}-\lambda_{z}^{2}+2A^{2}e^{2\lambda}\cos^{2}(\theta)\cdot(2\cos^{4}(\Theta)-\cos^{2}(\Theta))}{\cos^{4}(\Theta)},$
$K(Y\wedge
Z)=\frac{\lambda_{zz}-\lambda_{z}^{2}+2A^{2}e^{2\lambda}\sin^{2}(\theta)\cdot(2\cos^{4}(\Theta)-\cos^{2}(\Theta))}{\cos^{4}(\Theta)}.$
If we assume that $A,B,\theta\in\Omega(M)$, rather than
$A,B,\theta\in{\mathbb{R}}$, then the geometry of $(M,g)$ runs rather wild.
The formulae for the sectional curvature $K$ become far too extensive to be
included in this work. For explicit proper biharmonic maps in that general
case, see Example 5.2.
###### Example 3.3.
For an open subset $M$ of ${\mathbb{R}}^{4}$, constants
$A,B,\Psi\in{\mathbb{R}}$ and $\alpha\in\Omega(M)$, let the functions
$\lambda,f:M\to{\mathbb{R}}$ be defined by
$\lambda({\bf x})=\alpha(z,w),$ $f({\bf x})=\lambda_{3}({\bf
x})+\lambda_{4}({\bf x})=-2\log(A\cdot(\cos(t)\cdot x+\sin(t)\cdot y)+B),$
where
$\lambda_{3}({\bf x})=-\log(A\cdot(\cos(t)\cdot x+\sin(t)\cdot y)+B)+\Psi,$
$\lambda_{4}({\bf x})=-\log(A\cdot(\cos(t)\cdot x+\sin(t)\cdot y)+B)-\Psi.$
Then equip $M$ with the Riemannian metric $g$ satisfying
$g=e^{-2\lambda}(dx^{2}+dy^{2})+e^{-2\lambda_{3}}dz^{2}+e^{-2\lambda_{4}}dw^{2}.$
Then a standard computation shows that the sectional curvatures of $(M,g)$
fulfill
$K(X\wedge
Y)=\frac{e^{2\Psi}\cdot\lambda^{2}_{z}+e^{-2\Psi}\cdot\lambda^{2}_{w}}{(A\cdot(\cos(t)\cdot
x+\sin(t)\cdot y)+B)^{2}},$ $K(X\wedge Z)=K(Y\wedge
Z)=\frac{e^{2\Psi}\cdot(\lambda_{zz}-\lambda_{z}^{2})}{(A\cdot(\cos(t)\cdot
x+\sin(t)\cdot y)+B)^{2}},$ $K(X\wedge W)=K(Y\wedge
W)=\frac{e^{-2\Psi}\cdot(\lambda_{ww}-\lambda_{w}^{2})}{(A\cdot(\cos(t)\cdot
x+\sin(t)\cdot y)+B)^{2}},$ $K(Z\wedge W)=-\frac{e^{2\lambda}\cdot
A^{2}}{(A\cdot(\cos(t)\cdot x+\sin(t)\cdot y)+B)^{2}}.$
If we assume that $A,B,\Psi\in\Omega(M)$, rather than
$A,B,\Psi\in{\mathbb{R}}$, then the formulae for the sectional curvature $K$
become very complicated, including partial derivatives of these functions. For
explicit proper $(2,1)$-harmonic morphisms in that general case, see Example
7.1.
## 4\. The tension Fields $\tau(\Phi)$ and $\tau^{2}(\Phi)$
Our first principal aim is to construct Riemannian manifolds $(M,g)$, of the
form introduced in Section 3, such that the horizontally conformal submersion
$\Phi:(M,g)\to{\mathbb{C}}$ with
$\Phi:{\bf x}\mapsto(x+iy)\cong(x\cdot\text{\bf e}_{1}+y\cdot\text{\bf
e}_{2})$
is a proper biharmonic map. For this purpose we now want to determine the
tension field $\tau(\Phi)$ and the bitension field $\tau^{2}(\Phi)$ of $\Phi$,
respectively.
###### Lemma 4.1.
Let $(M^{m},g)$ be a Riemannian manifold, as defined above, with the
orthonormal basis $\\{X_{1},\dots,X_{m}\\}$ for the tangent bundle $TM$. Then
its Levi-Civita connection satisfies
$\sum_{k=1}^{m}\nabla_{X_{k}}{X_{k}}=\sum_{j=1}^{m}\sum_{k\neq
j}^{m}e^{\lambda_{j}}\,(\lambda_{k})_{x_{j}}X_{j}.$
###### Proof.
The statement follows from the following computation
$\displaystyle\sum_{k=1}^{m}\nabla_{X_{k}}{X_{k}}$ $\displaystyle=$
$\displaystyle\sum_{j,k=1}^{m}g(\nabla_{X_{k}}{X_{k}},X_{j})\,X_{j}$
$\displaystyle=$ $\displaystyle\sum_{j,k=1}^{m}g([X_{j},X_{k}],X_{k})\,X_{j}$
$\displaystyle=$
$\displaystyle\sum_{j,k=1}^{m}g(e^{\lambda_{j}}\,(\lambda_{k})_{x_{j}}X_{k}-e^{\lambda_{k}}\,(\lambda_{j})_{x_{k}}X_{j},X_{k})\,X_{j}$
$\displaystyle=$
$\displaystyle\sum_{j,k=1}^{m}e^{\lambda_{j}}\,(\lambda_{k})_{x_{j}}\,X_{j}-\sum_{j=1}^{m}e^{\lambda_{j}}\,(\lambda_{j})_{x_{j}}\,X_{j}$
$\displaystyle=$ $\displaystyle\sum_{j=1}^{m}\sum_{k\neq
j}^{m}e^{\lambda_{j}}\,(\lambda_{k})_{x_{j}}X_{j}.$
∎
###### Lemma 4.2.
Let $\Phi:(M^{m},g)\to{\mathbb{C}}$ be the horizontally conformal submersion
$\Phi:{\bf x}\mapsto(x+iy)\cong(x\cdot\text{\bf e}_{1}+y\cdot\text{\bf
e}_{2})$
with dilation $e^{\lambda}:M\to{\mathbb{R}}^{+}$. Then we have the following
relation
$\sum_{k=1}^{m}d\Phi(\nabla_{X_{k}}{X_{k}})=e^{2\lambda}\,(\lambda_{x}+f_{x})\cdot\text{\bf
e}_{1}+e^{2\lambda}\,(\lambda_{y}+f_{y})\cdot\text{\bf e}_{2}.$
###### Proof.
It follows from Lemma 4.1 and the fact that the differential $d\Phi$ satisfies
$d\Phi(X_{3})=\cdots=d\Phi(X_{m})=0$ that
$\displaystyle\sum_{k=1}^{m}d\Phi(\nabla_{X_{k}}{X_{k}})$ $\displaystyle=$
$\displaystyle\sum_{j=1}^{m}\sum_{k\neq
j}^{m}e^{\lambda_{j}}\,(\lambda_{k})_{x_{j}}d\Phi(X_{j})$ $\displaystyle=$
$\displaystyle
e^{\lambda_{1}}\,(\lambda_{2})_{x_{1}}d\Phi(X_{1})+e^{\lambda_{2}}\,(\lambda_{1})_{x_{2}}d\Phi(X_{2})+\sum_{j=1}^{2}\sum_{k=3}^{m}e^{\lambda_{j}}\,(\lambda_{k})_{x_{j}}d\Phi(X_{j})$
$\displaystyle=$ $\displaystyle e^{2\lambda}\,\lambda_{x_{1}}\text{\bf
e}_{1}+e^{2\lambda}\,\lambda_{x_{2}}\text{\bf
e}_{2}+\sum_{j=1}^{2}e^{\lambda_{j}}\,f_{x_{j}}d\Phi(X_{j})$ $\displaystyle=$
$\displaystyle e^{2\lambda}\,(\lambda_{x}+f_{x})\,\text{\bf
e}_{1}+e^{2\lambda}\,(\lambda_{y}+f_{y})\,\text{\bf e}_{2}.$
∎
With the next result we provide a formula for the tension field $\tau(\Phi)$
of the horizontally conformal submersion $\Phi$.
###### Proposition 4.3.
Let $\Phi:(M,g)\to{\mathbb{C}}$ be the horizontally conformal submersion
$\Phi:{\bf x}\mapsto(x+iy)\cong(x\cdot\text{\bf e}_{1}+y\cdot\text{\bf
e}_{2})$
with dilation $e^{\lambda}:M\to{\mathbb{R}}^{+}$. Then the tension field
$\tau(\Phi)$ of $\Phi$ satisfies
$\displaystyle\tau(\Phi)$ $\displaystyle=$
$\displaystyle-\,e^{2\lambda}\,(f_{x}\cdot\text{\bf e}_{1}+f_{y}\cdot\text{\bf
e}_{2}).$
###### Proof.
The two vector fields $X_{1}$ and $X_{2}$ generate the horizontal distribution
$\mathcal{H}$ so the horizontal conformality of $\Phi$ is a direct consequence
of the fact that
$d\Phi(X_{1})=e^{\lambda}\cdot\text{\bf e}_{1},\
d\Phi(X_{2})=e^{\lambda}\cdot\text{\bf e}_{2},\ d\Phi(X_{3})=0,\ \dots\
,d\Phi(X_{m})=0.$
The tension field $\tau(\Phi)$ of $\Phi$ is defined by the well-known formula
$\displaystyle\tau(\Phi)$ $\displaystyle=$
$\displaystyle\sum_{k=1}^{m}\left\\{\nabla_{X_{k}}^{\Phi}d\Phi(X_{k})-d\Phi(\nabla_{X_{k}}X_{k})\right\\}.$
For the first part, we have
$\displaystyle\sum_{k=1}^{m}\nabla_{X_{k}}^{\Phi}d\Phi(X_{k})$
$\displaystyle=$
$\displaystyle\nabla^{\Phi}_{X_{1}}d\Phi(X_{1})+\nabla^{\Phi}_{X_{2}}d\Phi(X_{2})$
$\displaystyle=$ $\displaystyle
e^{\lambda}\cdot\frac{\partial}{\partial_{x_{1}}}(e^{\lambda}\,\text{\bf
e}_{1})+e^{\lambda}\cdot\frac{\partial}{\partial_{x_{2}}}(e^{\lambda}\,\text{\bf
e}_{2})$ $\displaystyle=$ $\displaystyle
e^{2\lambda}\,(\lambda_{x}\cdot\text{\bf e}_{1}+\lambda_{y}\cdot\text{\bf
e}_{2}).$
For the second part, we now employ Lemma 4.2 and obtain
$\sum_{k=1}^{m}d\Phi(\nabla_{X_{k}}{X_{k}})=e^{2\lambda}\,(\lambda_{x}+f_{x})\cdot\text{\bf
e}_{1}+e^{2\lambda}\,(\lambda_{y}+f_{y})\cdot\text{\bf e}_{2}.$
The statement is now an immediate consequence of the above calculations. ∎
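Proposition 4.3 can also be confirmed directly from the coordinate formula for the Laplace-Beltrami operator given in Section 2. The SymPy sketch below (an independent check, not part of the argument above) does this in dimension $m=3$, where $f=\lambda_{3}$:

```python
# Compute tau(Phi) from the coordinate Laplace-Beltrami formula for the
# diagonal metric g = e^{-2 lam}(dx^2 + dy^2) + e^{-2 lam3} dz^2 and
# compare with the claimed formula -e^{2 lam} (f_x + i f_y), where f = lam3.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
lam = sp.Function('lam')(x, y, z)
lam3 = sp.Function('lam3')(x, y, z)

g_inv = [sp.exp(2 * lam), sp.exp(2 * lam), sp.exp(2 * lam3)]  # inverse metric
sqrt_det = sp.exp(-2 * lam - lam3)                            # sqrt(|g|)
coords = (x, y, z)

def tau(u):
    return sum(sp.diff(g_inv[i] * sqrt_det * sp.diff(u, coords[i]), coords[i])
               for i in range(3)) / sqrt_det

Phi = x + sp.I * y
expected = -sp.exp(2 * lam) * (sp.diff(lam3, x) + sp.I * sp.diff(lam3, y))
assert sp.simplify(tau(Phi) - expected) == 0
```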
###### Corollary 4.4.
The horizontally conformal submersion $\Phi:(M,g)\to{\mathbb{C}}$, with
$\Phi:{\bf x}\mapsto(x+iy)\cong(x\cdot\text{\bf e}_{1}+y\cdot\text{\bf
e}_{2}),$
is harmonic and hence a harmonic morphism if and only if $(f_{x},f_{y})=0$.
###### Proof.
This is an immediate consequence of Proposition 4.3 and the characterisation
of harmonic morphisms, proven by Fuglede and Ishihara in [2] and [6],
respectively. ∎
After determining the tension field $\tau(\Phi)$, we now turn our attention to
the bitension field $\tau^{2}(\Phi)$.
###### Definition 4.5.
For an open subset $M$ of ${\mathbb{R}}^{m}$, let $\lambda,f:M\to{\mathbb{R}}$
be differentiable functions on $M$ with coordinates ${\bf
x}=(x,y,x_{3},\dots,x_{m})$. Then we define the non-linear partial
differential operators $D_{1},D_{2}$ by
$\displaystyle D_{1}(\lambda,f)$ $\displaystyle=$
$\displaystyle\\{\,(\lambda_{x}+f_{x})\cdot(2\lambda_{x}f_{x}+f_{xx})+\,(\lambda_{y}+f_{y})\cdot(2\lambda_{y}f_{x}+f_{xy})$
$\displaystyle\qquad\qquad-(6\,\lambda_{x}^{2}\,f_{x}+2\,\lambda_{xx}\,f_{x}+5\,\lambda_{x}\,f_{xx}+f_{xxx})$
$\displaystyle\qquad\qquad-(6\,\lambda_{y}^{2}\,f_{x}+2\,\lambda_{yy}\,f_{x}+5\,\lambda_{y}\,f_{xy}+f_{xyy})\,\\},$
$\displaystyle D_{2}(\lambda,f)$ $\displaystyle=$
$\displaystyle\\{\,(\lambda_{x}+f_{x})\cdot(2\lambda_{x}f_{y}+f_{yx})+\,(\lambda_{y}+f_{y})\cdot(2\lambda_{y}f_{y}+f_{yy})$
$\displaystyle\qquad\qquad-(6\,\lambda_{x}^{2}f_{y}+2\,\lambda_{xx}f_{y}+5\,\lambda_{x}\,f_{yx}+f_{yxx})$
$\displaystyle\qquad\qquad-(6\,\lambda_{y}^{2}f_{y}+2\,\lambda_{yy}f_{y}+5\,\lambda_{y}\,f_{yy}+f_{yyy})\,\\}.$
With the next result we present a formula for the bitension field
$\tau^{2}(\Phi)$ of the horizontally conformal submersion $\Phi$.
###### Theorem 4.6.
Let $\Phi:(M,g)\to{\mathbb{C}}$ be the horizontally conformal submersion
$\Phi:{\bf x}\mapsto(x+iy)\cong(x\cdot\text{\bf e}_{1}+y\cdot\text{\bf
e}_{2})$
with dilation $e^{\lambda}:M\to{\mathbb{R}}^{+}$. Then the bitension field
$\tau^{2}(\Phi)$ of $\Phi$ satisfies
$\displaystyle\tau^{2}(\Phi)$ $\displaystyle=$ $\displaystyle
e^{4\lambda}\cdot D_{1}(\lambda,f)\cdot\text{\bf e}_{1}+e^{4\lambda}\cdot
D_{2}(\lambda,f)\cdot\text{\bf e}_{2}.$
###### Proof.
The bitension field $\tau^{2}(\Phi)$ of the $C^{4}$-map $\Phi$ is given by
$\displaystyle\tau^{2}(\Phi)$ $\displaystyle=$
$\displaystyle\sum_{k=1}^{m}\\{\nabla_{X_{k}}^{\Phi}\nabla_{X_{k}}^{\Phi}\tau(\Phi)-\nabla^{\Phi}_{\nabla_{X_{k}}X_{k}}\tau(\Phi)\\}.$
First we notice that for $k=1,2$ we have
$\displaystyle\nabla_{X_{k}}^{\Phi}\tau(\Phi)$ $\displaystyle=$
$\displaystyle-\,e^{\lambda}\cdot\frac{\partial}{\partial
x_{k}}\big{(}e^{2\lambda}\,\\{f_{x_{1}}\,\text{\bf e}_{1}+f_{x_{2}}\,\text{\bf
e}_{2}\\}\big{)}$ $\displaystyle=$
$\displaystyle-\,e^{3\lambda}\\{(2\,\lambda_{x_{k}}f_{x_{1}}+f_{x_{1}x_{k}})\,\text{\bf
e}_{1}$
$\displaystyle\qquad+\,(2\,\lambda_{x_{k}}f_{x_{2}}+f_{x_{2}x_{k}})\,\text{\bf
e}_{2}\\}$
Differentiating once more gives
$\displaystyle\sum_{k=1}^{m}\nabla_{X_{k}}^{\Phi}\nabla_{X_{k}}^{\Phi}\tau(\Phi)$
$\displaystyle=$
$\displaystyle-\,\sum_{k=1}^{2}e^{\lambda}\cdot\frac{\partial}{\partial
x_{k}}(e^{3\lambda}(2\lambda_{x_{k}}f_{x_{1}}+f_{x_{1}x_{k}}))\,\text{\bf
e}_{1}$
$\displaystyle-\,\sum_{k=1}^{2}e^{\lambda}\cdot\frac{\partial}{\partial
x_{k}}(e^{3\lambda}(2\,\lambda_{x_{k}}f_{x_{2}}+f_{x_{2}x_{k}}))\,\text{\bf
e}_{2}$ $\displaystyle=$
$\displaystyle-\,e^{4\lambda}\cdot\sum_{k=1}^{2}(6\,\lambda_{x_{k}}^{2}f_{x_{1}}+2\,\lambda_{x_{k}x_{k}}f_{x_{1}}+5\,\lambda_{x_{k}}\,f_{x_{1}x_{k}}+f_{x_{1}x_{k}x_{k}})\,\text{\bf
e}_{1}$
$\displaystyle-\,e^{4\lambda}\cdot\sum_{k=1}^{2}(6\,\lambda_{x_{k}}^{2}f_{x_{2}}+2\,\lambda_{x_{k}x_{k}}f_{x_{2}}+5\,\lambda_{x_{k}}\,f_{x_{2}x_{k}}+f_{x_{2}x_{k}x_{k}})\,\text{\bf
e}_{2}.$
For the second part of the bitension field $\tau^{2}(\Phi)$ of $\Phi$ we now
obtain
$\displaystyle-\sum_{k=1}^{m}\nabla^{\Phi}_{\nabla_{X_{k}}X_{k}}(\tau(\Phi))$
$\displaystyle=$
$\displaystyle-e^{2\lambda}\,(\lambda_{x_{1}}+f_{x_{1}})\cdot\frac{\partial}{\partial
x_{1}}\,\tau(\Phi)-e^{2\lambda}\,(\lambda_{x_{2}}+f_{x_{2}})\cdot\frac{\partial}{\partial
x_{2}}\,\tau(\Phi)$ $\displaystyle=$ $\displaystyle
e^{2\lambda}\,(\lambda_{x_{1}}+f_{x_{1}})\cdot\frac{\partial}{\partial
x_{1}}(\,e^{2\lambda}\cdot\\{f_{x_{1}}\,\text{\bf e}_{1}+f_{x_{2}}\,\text{\bf
e}_{2}\\})$
$\displaystyle+\,e^{2\lambda}\,(\lambda_{x_{2}}+f_{x_{2}})\cdot\frac{\partial}{\partial
x_{2}}(e^{2\lambda}\cdot\\{f_{x_{1}}\,\text{\bf e}_{1}+f_{x_{2}}\,\text{\bf
e}_{2}\\})$ $\displaystyle=$ $\displaystyle
e^{4\lambda}\,(\lambda_{x_{1}}+f_{x_{1}})\cdot\\{(2\lambda_{x_{1}}f_{x_{1}}+f_{x_{1}x_{1}})\,\text{\bf
e}_{1}+(2\lambda_{x_{1}}f_{x_{2}}+f_{x_{2}x_{1}})\,\text{\bf e}_{2}\\}$
$\displaystyle+\,e^{4\lambda}\,(\lambda_{x_{2}}+f_{x_{2}})\cdot\\{(2\lambda_{x_{2}}f_{x_{1}}+f_{x_{1}x_{2}})\,\text{\bf
e}_{1}+(2\lambda_{x_{2}}f_{x_{2}}+f_{x_{2}x_{2}})\,\text{\bf e}_{2}\\}$
We now easily obtain the stated result by adding the terms. ∎
## 5\. Explicit Proper Biharmonic Submersions
In Section 4 we derived explicit formulae for the tension fields $\tau(\Phi)$
and $\tau^{2}(\Phi)$. This leads to a system of non-linear partial
differential equations for the pair of functions $(\lambda,f)$. We are now
interested in constructing Riemannian metrics $g$, on open subsets $M$ of
${\mathbb{R}}^{m}$, turning the horizontally conformal submersion
$\Phi:(M,g)\to{\mathbb{C}}$ into a proper biharmonic map, i.e. finding
explicit solutions $(\lambda,f)$ to the system
$(f_{x},f_{y})\neq 0,\ \ D_{1}(\lambda,f)=0\ \ \text{and}\ \
D_{2}(\lambda,f)=0.$
Let us first consider the case when $\lambda=\alpha\in\Omega(M)$, i.e.
independent of the first two coordinates $x$ and $y$. Then the differential
operators $D_{1}$ and $D_{2}$ simplify to
$\displaystyle D_{1}(\alpha,f)$ $\displaystyle=$ $\displaystyle(f_{x}\cdot
f_{xx}+f_{y}\cdot f_{xy}-f_{xxx}-f_{xyy})$ $\displaystyle=$
$\displaystyle\tfrac{1}{2}(f_{x}^{2}+f_{y}^{2}-2\,(f_{xx}+f_{yy}))_{x},$
$\displaystyle D_{2}(\alpha,f)$ $\displaystyle=$ $\displaystyle(f_{x}\cdot
f_{yx}+f_{y}\cdot f_{yy}-f_{yxx}-f_{yyy})$ $\displaystyle=$
$\displaystyle\tfrac{1}{2}(f_{x}^{2}+f_{y}^{2}-2\,(f_{xx}+f_{yy}))_{y}.$
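This collapse of $D_{1}$ and $D_{2}$ can be checked mechanically. The sketch below transcribes $D_{1}$ from Definition 4.5 into SymPy (the helpers `D1`, `dx`, `dy` are ad-hoc names), sets $\lambda_{x}=\lambda_{y}=0$, and confirms the claimed total $x$-derivative; the computation for $D_{2}$ is identical with the outer derivative taken in $y$.

```python
# Transcription of D1 from Definition 4.5; with lam independent of x and y
# it reduces to (1/2) d/dx (f_x^2 + f_y^2 - 2 (f_xx + f_yy)).
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.Function('f')(x, y)
lam = sp.Integer(0)          # any lam with lam_x = lam_y = 0 behaves the same

dx = lambda u: sp.diff(u, x)
dy = lambda u: sp.diff(u, y)

def D1(l, h):
    return ((dx(l) + dx(h)) * (2 * dx(l) * dx(h) + dx(dx(h)))
            + (dy(l) + dy(h)) * (2 * dy(l) * dx(h) + dx(dy(h)))
            - (6 * dx(l)**2 * dx(h) + 2 * dx(dx(l)) * dx(h)
               + 5 * dx(l) * dx(dx(h)) + dx(dx(dx(h))))
            - (6 * dy(l)**2 * dx(h) + 2 * dy(dy(l)) * dx(h)
               + 5 * dy(l) * dx(dy(h)) + dy(dy(dx(h)))))

target = sp.diff(dx(f)**2 + dy(f)**2 - 2 * (dx(dx(f)) + dy(dy(f))), x) / 2
assert sp.simplify(D1(lam, f) - target) == 0
```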
As a first example we have the following which clearly gives solutions to the
system under consideration.
###### Example 5.1.
For an open subset $M$ of ${\mathbb{R}}^{m}$ and
$A,B,\alpha,\beta\in\Omega(M)$ let the functions $\lambda,f:M\to{\mathbb{R}}$
be defined by
$\lambda({\bf x})=\alpha,\ \ f({\bf x})=A\cdot x+B\cdot y+\beta.$
If $A^{2}+B^{2}\neq 0$, then the associated map $\Phi:(M,g)\to{\mathbb{C}}$ is
horizontally conformal and proper biharmonic i.e.
$(f_{x},f_{y})\neq 0,\ \ D_{1}(\alpha,f)=0\ \ \text{and}\ \
D_{2}(\alpha,f)=0.$
It is clear that the choice of $M$ and $A,B,\alpha,\beta\in\Omega(M)$ can lead
to rather non-trivial geometries $(M,g)$.
If we now assume that the function $f:M\to{\mathbb{R}}$ is independent of the
coordinate $y$ then we have that
$f_{x}\neq 0,\ \ D_{1}(\alpha,f)=f_{x}\cdot f_{xx}-f_{xxx}=0\ \ \text{and}\ \
D_{2}(\alpha,f)=0.$
The ordinary differential equation
$D_{1}(\alpha,f)=f_{x}\cdot
f_{xx}-f_{xxx}=\frac{1}{2}(f_{x}^{2}-2f_{xx})_{x}=0$
can easily be integrated to
$f_{x}({\bf x})=A\cdot\tan(\frac{A\cdot x}{2}+B),$
for some $A,B\in\Omega(M)$. Integrating yet again, we finally obtain
$f({\bf x})=\log(1+\tan(\frac{A\cdot x}{2}+B)^{2})+\beta,$
defined on the appropriate open subset $M$ of ${\mathbb{R}}^{m}$ and with
$\beta\in\Omega(M)$. From this we see that, under the above-mentioned
assumptions and modulo the functions $A,B,\beta\in\Omega(M)$, the solution is
uniquely determined. This leads to the following.
###### Example 5.2.
For an open subset $M$ of ${\mathbb{R}}^{m}$ and
$A,B,\alpha,\beta,\theta\in\Omega(M)$ let the functions
$\lambda,f:M\to{\mathbb{R}}$ be defined by
$\lambda({\bf x})=\alpha,\ \ f({\bf x})=\log(1+\tan({A\cdot(\cos\theta\cdot
x+\sin\theta\cdot y)}+B)^{2})+\beta.$
If $A\neq 0$, then the associated horizontally conformal map
$\Phi:M\to{\mathbb{C}}$ is proper biharmonic.
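For the function $f$ of Example 5.2 (with $A,B,\theta$ treated as constants, as elements of $\Omega(M)$ are in the variables $x$ and $y$), a direct computation shows that $f_{x}^{2}+f_{y}^{2}-2(f_{xx}+f_{yy})$ is the constant $-4A^{2}$, so its $x$- and $y$-derivatives, i.e. the reduced operators $D_{1}$ and $D_{2}$, vanish. A SymPy sketch of this check:

```python
# For f = log(1 + tan(A (cos(th) x + sin(th) y) + B)^2) the quantity
# f_x^2 + f_y^2 - 2 (f_xx + f_yy) is the constant -4 A^2, hence
# D1(alpha, f) = D2(alpha, f) = 0 for any alpha in Omega(M).
import sympy as sp

x, y, A, B, th = sp.symbols('x y A B theta', real=True)
f = sp.log(1 + sp.tan(A * (sp.cos(th) * x + sp.sin(th) * y) + B)**2)

expr = (sp.diff(f, x)**2 + sp.diff(f, y)**2
        - 2 * (sp.diff(f, x, 2) + sp.diff(f, y, 2)))
assert sp.simplify(expr + 4 * A**2) == 0
```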
By separation of variables, one easily obtains the next two families of
solutions.
###### Example 5.3.
For an open subset $M$ of ${\mathbb{R}}^{m}$ and
$A,B,C,D,\alpha,\beta\in\Omega(M)$ let the functions
$\lambda,f:M\to{\mathbb{R}}$ be defined by
$\lambda({\bf x})=\alpha,\ \ f({\bf x})=\log((1+\tan({A\cdot
x}+C)^{2})\cdot(1+\tan({B\cdot y}+D)^{2}))+\beta.$
If $A^{2}+B^{2}\neq 0$ then the associated horizontally conformal map
$\Phi:(M,g)\to{\mathbb{C}}$ is proper biharmonic.
###### Example 5.4.
For an open subset $M$ of ${\mathbb{R}}^{m}$ and
$A,B,C,D,\alpha,\beta\in\Omega(M)$ let the functions
$\lambda,f:M\to{\mathbb{R}}$ be defined by
$\lambda({\bf x})=\alpha,\ \ f({\bf x})=-2\,\log((A\cdot x+C)(B\cdot
y+D))+\beta.$
If $A^{2}+B^{2}\neq 0$ then the associated horizontally conformal map
$\Phi:(M,g)\to{\mathbb{C}}$ is proper biharmonic.
We also obtain the following examples without assuming the condition
$\lambda\in\Omega(M)$.
###### Example 5.5.
For an open subset $M$ of ${\mathbb{R}}^{m}$ and
$A,B,C,D,\alpha,\beta\in\Omega(M)$ let the functions
$\lambda,f:M\to{\mathbb{R}}$ be defined by
$\lambda({\bf x})=A\cdot x+B\cdot y+\alpha,\ \ f({\bf x})=C\cdot x+D\cdot
y+\beta.$
If $A\,C+B\,D=2\,(A^{2}+B^{2})\neq 0$ then the associated horizontally
conformal map $\Phi:(M,g)\to{\mathbb{C}}$ is proper biharmonic.
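The origin of the condition in Example 5.5 can be made explicit: substituting $\lambda=A\,x+B\,y$ and $f=C\,x+D\,y$ into Definition 4.5 gives $D_{1}=2C\,(A\,C+B\,D-2\,(A^{2}+B^{2}))$ and $D_{2}=2D\,(A\,C+B\,D-2\,(A^{2}+B^{2}))$. The SymPy sketch below transcribes both operators and confirms these closed forms:

```python
# D1 and D2 of Definition 4.5 evaluated on lam = A x + B y, f = C x + D y.
import sympy as sp

x, y, A, B, C, D = sp.symbols('x y A B C D', real=True)
lam = A * x + B * y
f = C * x + D * y

dx = lambda u: sp.diff(u, x)
dy = lambda u: sp.diff(u, y)

def D1(l, h):
    return ((dx(l) + dx(h)) * (2 * dx(l) * dx(h) + dx(dx(h)))
            + (dy(l) + dy(h)) * (2 * dy(l) * dx(h) + dx(dy(h)))
            - (6 * dx(l)**2 * dx(h) + 2 * dx(dx(l)) * dx(h)
               + 5 * dx(l) * dx(dx(h)) + dx(dx(dx(h))))
            - (6 * dy(l)**2 * dx(h) + 2 * dy(dy(l)) * dx(h)
               + 5 * dy(l) * dx(dy(h)) + dy(dy(dx(h)))))

def D2(l, h):
    return ((dx(l) + dx(h)) * (2 * dx(l) * dy(h) + dx(dy(h)))
            + (dy(l) + dy(h)) * (2 * dy(l) * dy(h) + dy(dy(h)))
            - (6 * dx(l)**2 * dy(h) + 2 * dx(dx(l)) * dy(h)
               + 5 * dx(l) * dx(dy(h)) + dx(dx(dy(h))))
            - (6 * dy(l)**2 * dy(h) + 2 * dy(dy(l)) * dy(h)
               + 5 * dy(l) * dy(dy(h)) + dy(dy(dy(h)))))

cond = A * C + B * D - 2 * (A**2 + B**2)
assert sp.simplify(D1(lam, f) - 2 * C * cond) == 0
assert sp.simplify(D2(lam, f) - 2 * D * cond) == 0
```

Both operators thus vanish exactly when $A\,C+B\,D=2\,(A^{2}+B^{2})$, and the non-vanishing of this common value forces $(f_{x},f_{y})=(C,D)\neq 0$.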
###### Example 5.6.
For an open subset $M$ of ${\mathbb{R}}^{m}$ and
$A,B,r,\alpha,\beta,\theta\in\Omega(M)$ let the functions
$\lambda,f:M\to{\mathbb{R}}$ be defined by
$\lambda({\bf x})=-r\cdot(\cos\theta\cdot x+\sin\theta\cdot y)+\alpha,$
$f({\bf x})=A\cdot\exp(2\,r\cdot(\cos\theta\cdot x+\sin\theta\cdot
y)+B)+\beta.$
If $r\neq 0$, then the associated horizontally conformal map
$\Phi:(M,g)\to{\mathbb{C}}$ is proper biharmonic.
###### Example 5.7.
For an open subset $M$ of ${\mathbb{R}}^{m}$ and
$A,B,r,\alpha,\beta,\theta\in\Omega(M)$ let the functions
$\lambda,f:M\to{\mathbb{R}}$ be defined by
$\lambda({\bf x})=\tfrac{1}{2}\cdot\log(A\cdot\exp(r\cdot(\cos\theta\cdot
x+\sin\theta\cdot y))+B)+\alpha,$ $f({\bf x})=r\cdot(\cos\theta\cdot
x+\sin\theta\cdot y)+\beta.$
If $r\neq 0$, then the associated horizontally conformal map
$\Phi:(M,g)\to{\mathbb{C}}$ is proper biharmonic.
###### Example 5.8.
For an open subset $M$ of ${\mathbb{R}}^{m}$ and
$A,\alpha,\beta,\theta\in\Omega(M)$ let the functions
$\lambda,f:M\to{\mathbb{R}}$ be defined by
$\lambda({\bf x})=\tfrac{1}{2}\cdot\log(\cos\theta\cdot x+\sin\theta\cdot
y)+\alpha,$ $f({\bf x})=A\cdot\log(\cos\theta\cdot x+\sin\theta\cdot
y)+\beta.$
Then the associated horizontally conformal map $\Phi:(M,g)\to{\mathbb{C}}$ is
proper biharmonic.
###### Example 5.9.
For an open subset $M$ of ${\mathbb{R}}^{m}$ and
$A,\alpha,\beta,\theta\in\Omega(M)$ let the functions
$\lambda,f:M\to{\mathbb{R}}$ be defined by
$\lambda({\bf x})=A\cdot\log(\cos\theta\cdot x+\sin\theta\cdot y)+\alpha,$
$f({\bf x})=2\,(A-1)\cdot\log(\cos\theta\cdot x+\sin\theta\cdot y)+\beta.$
Then the associated horizontally conformal map $\Phi:(M,g)\to{\mathbb{C}}$ is
proper biharmonic if and only if $A\neq 1$.
## 6\. The tension Fields $\tau(\Phi^{2})$ and $\tau^{2}(\Phi^{2})$
Our second principal aim is to construct Riemannian manifolds $(M,g)$, of the
form introduced in Section 3, such that the horizontally conformal submersion
$\Phi:(M,g)\to{\mathbb{C}}$ with
$\Phi:{\bf x}\mapsto(x+iy)\cong(x\cdot\text{\bf e}_{1}+y\cdot\text{\bf
e}_{2})$
is a proper $(2,1)$-harmonic morphism. For this purpose we now want to
determine the tension field $\tau(\Phi^{2})$ and the bitension field
$\tau^{2}(\Phi^{2})$ of $\Phi^{2}$, respectively. For this we have the
following useful result.
###### Theorem 6.1.
[3] A complex-valued function $\phi:(M,g)\to{\mathbb{C}}$ from a Riemannian
manifold is a $(2,1)$-harmonic morphism if and only if
$\kappa(\phi,\phi)=0,\ \ \tau^{2}(\phi)=0\ \ \text{and}\ \
\tau^{2}(\phi^{2})=0.$
###### Lemma 6.2.
Let $\phi:(M,g)\to{\mathbb{C}}$ be a horizontally conformal proper biharmonic
function. Then the tension field $\tau(\phi^{2})$ and the bitension field
$\tau^{2}(\phi^{2})$ of $\phi^{2}$ satisfy
* (a)
$\tau(\phi^{2})=2\cdot\tau(\phi)\,\phi\neq 0$,
* (b)
$\tau^{2}(\phi^{2})=2\,(\tau(\phi)^{2}+2\cdot\kappa(\tau(\phi),\phi))$.
###### Proof.
Since the complex-valued function $\phi$ is horizontally conformal and
biharmonic we know that $\kappa(\phi,\phi)=0$ and $\tau^{2}(\phi)=0$. Then the
result follows from the following elementary calculations.
$\displaystyle\tau^{2}(\phi^{2})$ $\displaystyle=$
$\displaystyle\tau(\tau(\phi^{2}))$ $\displaystyle=$
$\displaystyle\tau(\tau(\phi)\,\phi+2\,\kappa(\phi,\phi)+\phi\,\tau(\phi))$
$\displaystyle=$ $\displaystyle 2\cdot\tau(\tau(\phi)\,\phi)$ $\displaystyle=$
$\displaystyle
2\cdot\big{\\{}\tau^{2}(\phi)\,\phi+2\,\kappa(\tau(\phi),\phi)+\tau(\phi)^{2}\big{\\}}.$
∎
###### Definition 6.3.
Let $\lambda,f:M\to{\mathbb{R}}$ be differentiable functions on an open subset
$M$ of ${\mathbb{R}}^{m}$ with coordinates ${\bf x}=(x,y,x_{3},\dots,x_{m})$.
Then we define the non-linear partial differential operators $D_{3},D_{4}$ by
$\displaystyle D_{3}(\lambda,f)$ $\displaystyle=$ $\displaystyle
2\cdot(f_{x}^{2}-f_{y}^{2})-8\cdot(\lambda_{x}\,f_{x}-\lambda_{y}\,f_{y})-4\cdot(f_{xx}-f_{yy}),$
$\displaystyle D_{4}(\lambda,f)$ $\displaystyle=$ $\displaystyle 4\cdot
f_{x}f_{y}-8\cdot(\lambda_{x}\,f_{y}+\lambda_{y}\,f_{x})-8\cdot f_{xy}.$
###### Proposition 6.4.
Let $\Phi:({\mathbb{R}}^{m},g)\to{\mathbb{C}}$ be the horizontally conformal
submersion
$\Phi:{\bf x}\mapsto(x+iy)\cong(x\cdot\text{\bf e}_{1}+y\cdot\text{\bf
e}_{2})$
with dilation $e^{\lambda}:{\mathbb{R}}^{m}\to{\mathbb{R}}^{+}$. If $\Phi$ is
proper biharmonic then $\tau(\Phi^{2})\neq 0$ and the bitension field
$\tau^{2}(\Phi^{2})$ of $\Phi^{2}$ satisfies
$\displaystyle\tau^{2}(\Phi^{2})$ $\displaystyle=$ $\displaystyle
e^{4\lambda}\cdot D_{3}(\lambda,f)\cdot\text{\bf e}_{1}+e^{4\lambda}\cdot
D_{4}(\lambda,f)\cdot\text{\bf e}_{2}.$
###### Proof.
It follows from Lemma 6.2 that $\tau(\Phi^{2})\neq 0$ and
$\tau^{2}(\Phi^{2})=2\,\tau(\Phi)^{2}+4\,\kappa(\tau(\Phi),\Phi).$
Then the following computations provide the result.
$\displaystyle\tau(\Phi)^{2}$ $\displaystyle=$
$\displaystyle\,e^{4\lambda}\cdot\\{((f_{x_{1}})^{2}-(f_{x_{2}})^{2})\,\text{\bf
e}_{1}+2\,f_{x_{1}}f_{x_{2}}\,\text{\bf e}_{2}\\}.$
$\displaystyle\kappa(\tau(\Phi),\Phi)$ $\displaystyle=$
$\displaystyle\sum_{k=1}^{m}X_{k}(\tau(\Phi))\cdot X_{k}(\Phi)$
$\displaystyle=$
$\displaystyle-\sum_{k=1}^{2}X_{k}(e^{2\lambda}\,\\{f_{x_{1}}\,\text{\bf
e}_{1}+f_{x_{2}}\,\text{\bf e}_{2}\\})\cdot e^{\lambda}\,\text{\bf e}_{k}$
$\displaystyle=$
$\displaystyle-\,e^{4\lambda}\big{\\{}\sum_{k=1}^{2}(2\,\lambda_{x_{k}}\,f_{x_{1}}+f_{x_{1}x_{k}})\,\text{\bf
e}_{1}\cdot\text{\bf e}_{k}$
$\displaystyle\qquad+\sum_{k=1}^{2}(2\,\lambda_{x_{k}}\,f_{x_{2}}+f_{x_{2}x_{k}})\,\text{\bf
e}_{2}\cdot\text{\bf e}_{k}\big{\\}}$ $\displaystyle=$
$\displaystyle-\,e^{4\lambda}(2\,\lambda_{x_{1}}\,f_{x_{1}}+f_{x_{1}x_{1}}-2\,\lambda_{x_{2}}\,f_{x_{2}}-f_{x_{2}x_{2}})\,\text{\bf
e}_{1}$
$\displaystyle-\,e^{4\lambda}\,(2\,\lambda_{x_{2}}\,f_{x_{1}}+f_{x_{1}x_{2}}+2\,\lambda_{x_{1}}\,f_{x_{2}}+f_{x_{2}x_{1}})\,\text{\bf
e}_{2}.$
∎
## 7\. Explicit $(2,1)$-Harmonic Morphisms
In Sections 4 and 6, we have defined the partial differential operators
$D_{1},D_{2},D_{3}$ and $D_{4}$. We will now use these to construct explicit
proper $(2,1)$-harmonic morphisms $\Phi:(M,g)\to{\mathbb{C}}$. We will then
show how these can be employed to produce a large variety of concrete proper
biharmonic maps.
###### Example 7.1.
For an open subset $M$ of ${\mathbb{R}}^{m}$ and
$A,B,\alpha,\beta\in\Omega(M)$ let the functions $\lambda,f:M\to{\mathbb{R}}$
be defined by
$\lambda({\bf x})=\alpha,\ \ f({\bf x})=-2\log(A\cdot(\cos(t)\cdot
x+\sin(t)\cdot y)+B)+\beta.$
If $A\neq 0$, then the associated horizontally conformal map
$\Phi:M\to{\mathbb{C}}$ is a proper $(2,1)$-harmonic morphism i.e.
$(f_{x},f_{y})\neq 0,\ D_{1}(\lambda,f)=0,\ D_{2}(\lambda,f)=0,\
D_{3}(\lambda,f)=0,\ D_{4}(\lambda,f)=0.$
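Example 7.1 can be verified symbolically as well. With $\lambda$ constant in $(x,y)$, the operators of Definitions 4.5 and 6.3 reduce as in Section 5, and the sketch below (with $A,B,t$ treated as constants in the variables $x$ and $y$) checks that all four vanish for the given $f$:

```python
# With lam_x = lam_y = 0 and f = -2 log(A (cos(t) x + sin(t) y) + B),
# the reduced operators D1, D2 (Definition 4.5) and D3, D4 (Definition 6.3)
# all vanish identically.
import sympy as sp

x, y, A, B, t = sp.symbols('x y A B t', real=True)
f = -2 * sp.log(A * (sp.cos(t) * x + sp.sin(t) * y) + B)

dx = lambda u: sp.diff(u, x)
dy = lambda u: sp.diff(u, y)

D1 = dx(f) * dx(dx(f)) + dy(f) * dx(dy(f)) - dx(dx(dx(f))) - dy(dy(dx(f)))
D2 = dx(f) * dx(dy(f)) + dy(f) * dy(dy(f)) - dx(dx(dy(f))) - dy(dy(dy(f)))
D3 = 2 * (dx(f)**2 - dy(f)**2) - 4 * (dx(dx(f)) - dy(dy(f)))
D4 = 4 * dx(f) * dy(f) - 8 * dx(dy(f))
assert all(sp.simplify(expr) == 0 for expr in (D1, D2, D3, D4))
```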
The next result is a reformulation of Proposition 3.9 of [3], see also
Corollary 3.1 of [4]. Together with Example 7.1 it is a useful tool for
manufacturing a large variety of proper $(2,1)$-harmonic morphisms
$(M,g)\to{\mathbb{C}}$ on the non-trivial manifolds constructed there.
###### Proposition 7.2.
Let $(M,g)$ be a Riemannian manifold and $\phi:M\to{\mathbb{C}}$ be a proper
$(2,1)$-harmonic morphism. Further, let $F:U\to{\mathbb{C}}$ be a non-constant
holomorphic function defined on an open subset of ${\mathbb{C}}$ containing
$\phi(M)$. Then the composition $F\circ\phi:(M,g)\to{\mathbb{C}}$ is a proper
$(2,1)$-harmonic morphism, in particular a proper biharmonic map.
Every complex-valued harmonic function, locally defined in the plane
${\mathbb{C}}$, is the sum of a holomorphic and an anti-holomorphic one. This
leads us to the next statement.
###### Proposition 7.3.
Let $(M,g)$ be a Riemannian manifold and $\phi:M\to{\mathbb{C}}$ be a
submersive $(2,1)$-harmonic morphism. Further, let $F,G:U\to{\mathbb{C}}$ be
holomorphic functions defined on an open subset $U$ of ${\mathbb{C}}$
containing $\phi(M)$ and $\psi=F+\bar{G}$. Then the composition
$\psi\circ\phi:(M,g)\to{\mathbb{C}}$ is a biharmonic map. It is proper if and
only if
$F_{z}(\tau(\phi))+\overline{G_{z}(\tau(\phi))}\neq 0.$
###### Proof.
The tension field $\tau$ is linear, so it follows from Proposition 7.2 that
the composition $\psi\circ\phi:(M,g)\to{\mathbb{C}}$ is biharmonic. Following
the
well-known composition law, see Corollary 3.3.13 of [1], we have
$\displaystyle\tau(\psi\circ\phi)$ $\displaystyle=$ $\displaystyle
d\psi(\tau(\phi))+\operatorname{trace}\nabla d\psi(d\phi,d\phi).$
The map $\phi:(M,g)\to{\mathbb{C}}$ is horizontally conformal with dilation of
the form $e^{\lambda}:M\to{\mathbb{R}}^{+}$. The standard basis vectors ${\bf
e}_{1}$ and ${\bf e}_{2}$ form a global orthonormal frame on the open subset
$U$ of ${\mathbb{C}}$. Let $X$ and $Y$ be their horizontal lifts via $\phi$ so
that the vector fields $e^{-\lambda}X$ and $e^{-\lambda}Y$ form an orthonormal
frame for the horizontal distribution $\mathcal{H}$ of the tangent bundle
$TM$. Then
$\displaystyle\operatorname{trace}\nabla d\psi(d\phi,d\phi)$ $\displaystyle=$
$\displaystyle\nabla d\psi(d\phi(e^{-\lambda}X),d\phi(e^{-\lambda}X))$
$\displaystyle\quad\quad+\nabla
d\psi(d\phi(e^{-\lambda}Y),d\phi(e^{-\lambda}Y))$ $\displaystyle=$
$\displaystyle e^{-2\lambda}\cdot(\nabla d\psi({\bf e}_{1},{\bf e}_{1})+\nabla
d\psi({\bf e}_{2},{\bf e}_{2}))$ $\displaystyle=$ $\displaystyle
e^{-2\lambda}(\tau(F+\bar{G}))$ $\displaystyle=$ $\displaystyle 0.$
Furthermore we have
$d\psi(\tau(\phi))=dF(\tau(\phi))+d\bar{G}(\tau(\phi))=F_{z}(\tau(\phi))+\overline{G_{z}(\tau(\phi))}.$
The stated result now follows from these calculations. ∎
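The classical fact used in the proof, that a holomorphic plus an anti-holomorphic function is harmonic, admits a quick numerical sanity check. The sketch below (a plain Python illustration, not part of the original argument) verifies that $\psi=F+\bar{G}$ with the sample choices $F(z)=z^{2}$ and $G(z)=z^{3}$ has vanishing Laplacian, using a five-point finite-difference stencil that is exact for polynomials of this degree.

```python
def psi(x, y):
    # psi = F(z) + conj(G(z)) with F(z) = z^2 and G(z) = z^3, both holomorphic,
    # so conj(G(z)) is anti-holomorphic and psi should be harmonic.
    z = complex(x, y)
    return z**2 + (z**3).conjugate()

def laplacian(f, x, y, h=1e-2):
    # Five-point finite-difference stencil; exact (up to float rounding)
    # for polynomials of degree <= 3 such as psi.
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

# The Laplacian vanishes at arbitrary sample points:
samples = [(0.3, -1.2), (2.0, 0.7), (-1.5, 1.1)]
max_err = max(abs(laplacian(psi, x, y)) for x, y in samples)
```

The same check fails, as it should, for a non-harmonic function such as $|z|^{2}=z\bar{z}$, whose Laplacian is the constant 4.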
## 8\. Acknowledgements
The first author would like to thank the Department of Mathematics at Lund
University for its great hospitality during her time there as a postdoc.
## References
* [1] P. Baird, J.C. Wood, Harmonic morphisms between Riemannian manifolds, The London Mathematical Society Monographs 29, Oxford University Press (2003).
* [2] B. Fuglede, Harmonic morphisms between Riemannian manifolds, Ann. Inst. Fourier 28 (1978), 107-144.
* [3] E. Ghandour, S. Gudmundsson, Complex-valued $(p,q)$-harmonic morphisms from Riemannian manifolds, preprint (2020).
* [4] E. Ghandour, Y.-L. Ou, Generalised harmonic morphisms and horizontally weakly conformal biharmonic maps, J. Math. Anal. Appl. 464 (2018), 924-938.
* [5] S. Gudmundsson, The Bibliography of Harmonic Morphisms, www.matematik.lu.se/ matematiklu/personal/sigma/harmonic/bibliography.html
* [6] T. Ishihara, A mapping of Riemannian manifolds which preserves harmonic functions, J. Math. Kyoto Univ. 19 (1979), 215-229.
# Cosmic Gamma Ray Bursts
A. Janiuk1<EMAIL_ADDRESS>B. James1 K. Sapountzis1
1Center for Theoretical Physics, Polish Academy of Sciences, Al. Lotników
32/46, 02-668 Warsaw, Poland
(Key words: accretion, accretion disks – black hole physics –
Magnetohydrodynamics (MHD))
Gamma ray bursts (GRBs) are astronomical phenomena detected at the highest
energies. The gamma-ray photons carry energies of the order of mega-
electronvolts and reach us from point-like sources distributed uniformly on
the sky. A typical burst takes the form of a pulse that lasts for about a
minute. As the Earth’s atmosphere is not transparent to such high-energy
radiation, the bursts are detected by telescopes onboard orbiting satellites
(Gehrels et al., 2004). The total energetics of GRB events, given by the
energy flux integrated over the detector unit area, implies that we are
witnessing extremely powerful explosions, in which an enormous amount of
energy is released within a short time. There is only one way to obtain such
huge energies in the cosmos: the disruption of a star (Paczynski, 1986).
## 1 Introduction
Multi-wavelength and multi-messenger observations show that black holes are
the central engines responsible for the most violent astrophysical events,
such as active galactic nuclei, X-ray binaries, core-collapse supernovae, and
gamma-ray bursts. This central engine is subject to strong gravity, strong
electromagnetic fields, and rotation. The governing physical laws of such
engines are well known (General Relativistic MagnetoHydroDynamics, hereafter
GRMHD) but are nonlinear, time-dependent, and multidimensional. It is
therefore necessary to develop a numerical approach to simulate their
evolution and observational appearance where a first-principles theory cannot
be achieved.
In order to produce a gamma ray burst, the black hole launches a jet of
plasma that expands with ultra-relativistic velocities. Particles are
accelerated in the strong magnetic field and produce high-energy radiation.
The jet is powered by the Blandford-Znajek mechanism, which extracts energy
from a rotating black hole (Blandford and Znajek, 1977). This process requires
a magnetized accretion disk with a strong poloidal magnetic field around a
spinning black hole. The gamma-ray emission produced in the jet at large
distances is not uniform, and its short time-scale variability suggests that
it originates close to the black hole.
## 2 Two classes of gamma ray bursts
Two distinct classes of gamma ray bursts have been identified and are known
to constitute statistically distinct populations of sources (Kouveliotou et
al., 1993). The long bursts cluster around a few tens of seconds of gamma-ray
emission, while the short bursts typically last only a fraction of a second.
Their characteristic spectral energy distributions also differ (see Table 1).
Type | Mean Duration | Peak in Energy Spectrum | Origin
---|---|---|---
Long | 25 [s] | $\log(E_{\rm peak})=2.2$ [keV] | Collapse of massive star
Short | 0.7 [s] | $\log(E_{\rm peak})=2.7$ [keV] | Merger of two compact stars
Table 1: Properties of short and long GRBs
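The peak energies in Table 1 are quoted as base-10 logarithms; a one-line conversion (illustrative only, using the tabulated values) recovers the linear peak energies in keV:

```python
# Convert the log10 peak energies quoted in Table 1 back to linear units [keV].
E_peak_long = 10 ** 2.2    # long GRBs: roughly 160 keV
E_peak_short = 10 ** 2.7   # short GRBs: roughly 500 keV
```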
### 2.1 Long gamma ray bursts
Since the 1990s we have known that GRBs originate mostly in distant galaxies,
and many of them are associated with supernovae. They must in fact be special
supernova types (only about 10 per cent meet the criteria), because the core
of the collapsing star needs to form a black hole, surrounded by a disk
composed of the remnant matter from the stellar envelope. It is the accretion
of matter onto a rotating black hole that can provide energy large enough to
account for the observed properties of the GRB phenomenon. This process is
relatively long (several tens to hundreds of seconds).
### 2.2 Short gamma ray bursts
Another mechanism, producing a shorter GRB, is the coalescence of two neutron
stars. A transient structure is then formed, which collapses to a black hole.
The surrounding remnants of dense matter form a disk composed of elementary
particles and neutrinos. The process of disk accretion, mediated by magnetic
fields, provides power to extract the rotational energy of the black hole and
launch a relativistic jet (Janiuk and Yuan, 2010). This collimated outflow is
where the gamma rays are produced.
## 3 Central engine
### 3.1 Progenitors
The collapsar scenario is able to explain the long-duration GRBs, while the
short GRBs are associated with mergers of compact objects. In the long GRB
case, the energetics of the explosion is consistent with the gravitational
mass of the progenitor star: $E=GM_{\star}^{2}/R\approx 10^{54}$ [erg]. The
jet has to have a highly relativistic speed, with a bulk Lorentz factor of
$\Gamma\sim 100$, in order to break through the stellar envelope. Also, the
duration, time variability, and subsequent afterglow emission of these GRBs
at lower energies are consistent with collapsing massive stars that reside in
star-forming host galaxies. In the short GRB case, the compact binary mergers
are almost equally energetic, while the duration of the burst is set by the
viscous timescale of an accretion disk formed from the tidally-disrupted
remnant, $t_{vis}\approx\alpha_{\rm vis}^{-1}(R_{\rm
disk}^{3}/GM_{NS})^{1/2}(H/R_{\rm disk})^{-2}$, and is about 0.5 [s] for
typical parameters of such a system. The highly relativistic speeds of the
jets are also supported by the observation of non-thermal energy spectra,
which otherwise would not be possible because of the large optical depth due
to electron-positron pair production (see Thompson (1994)). Recently, the
discovery of binary neutron star (NS-NS) mergers in the gravitational wave
observations made by LIGO (GW170817 and GW190425), as well as the detection
of associated electromagnetic counterparts, provided a direct proof of NS-NS
systems being sources of short GRBs (see the review by Janiuk and Sapountzis
(2018)). A schematic view of the central engine of a GRB, in a unified
approach, is depicted in Fig. 1.
Figure 1: Schematic idea of a GRB. The relativistic jet is ejected (Lorentz
factor about 100). Its origin is the “central engine” where the black hole
sits. The jet emits gamma-ray radiation, collimated towards the observer.
### 3.2 Numerical modeling of the engine and jet structure
We compute the structure and evolution of black hole accretion disks using
numerical simulations governed by the equations of general relativistic
magnetohydrodynamics (GRMHD). Such disks and outflows can be found at the
base of relativistic jets in extragalactic sources, like blazars, or in gamma
ray bursts. Long-lasting, detailed computations are essential to properly
determine the physics of these jets and to confront the theoretical models
with observables available from astrophysical observatories in space and from
ground-based detectors.
Our numerical scheme works in a conservative manner, solving a set of non-
linear equations at each time-step to advance the conserved quantities from
one time step to the next. The efficiency of the computations is enhanced by
code parallelisation. We use the Message Passing Interface (MPI) techniques
to distribute the computational grid over the threads. Such calculations are
computationally demanding and also require special fine-tuning of the code
algorithm.
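As a minimal illustration of the conservative update just described (a toy sketch, not the actual GRMHD code), the snippet below advances the conserved variable of a 1-D linear advection problem by one step with a first-order upwind flux; the conservative form guarantees that the total of the conserved quantity is preserved exactly.

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """Advance conserved variable u one step for du/dt + a du/dx = 0 (a > 0).

    Conservative form: u_i^{n+1} = u_i^n - dt/dx * (F_{i+1/2} - F_{i-1/2}),
    with upwind flux F_{i+1/2} = a * u_i and periodic boundaries.
    """
    flux = a * u                          # flux leaving cell i through face i+1/2
    return u - dt / dx * (flux - np.roll(flux, 1))

# One step on a smooth profile; total "mass" is conserved by construction.
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.exp(-100.0 * (x - 0.5) ** 2)
u1 = upwind_step(u0, a=1.0, dx=x[1] - x[0], dt=0.005)  # CFL number = 0.5
mass_change = abs(u1.sum() - u0.sum())
```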
Our calculations start from a steady-state model of the flow, based on the
analytical equilibrium solution determined by the main physical parameters of
the black hole accretion disk (namely, the BH mass, its spin, and the size
and density of the accreting torus). Properties of the pressure-equilibrium
torus around a black hole, supported by a constant specific angular momentum,
were found by Fishbone and Moncrief (1976) and Chakrabarti (1985). In the
latter model, the angular momentum distribution is chosen to follow a power
law, which differs from the default disk model of Fishbone and Moncrief
(1976), where the angular momentum is assumed to be constant in the disk.
This model allows us to create an initial torus with a large amount of
poloidal magnetic flux.
Starting from the Chakrabarti solution, implemented as the initial condition
for our simulations, we use a dynamical scheme, and we follow the flow
evolution. This is achieved by solving numerically the continuity and
momentum-energy conservation equations in GR MHD framework:
$(\rho u^{\mu})_{;\mu}=0;\hskip
28.45274ptT^{\mu\nu}{}_{;\nu}=0,\qquad T^{\mu\nu}=T^{\mu\nu}_{gas}+T^{\mu\nu}_{EM}$ (1)
$T^{\mu\nu}_{gas}=\rho
hu^{\mu}u^{\nu}+pg^{\mu\nu}=(\rho+u+p)u^{\mu}u^{\nu}+pg^{\mu\nu}$ (2)
$T^{\mu\nu}_{EM}=b^{2}u^{\mu}u^{\nu}+\frac{1}{2}b^{2}g^{\mu\nu}-b^{\mu}b^{\nu};\qquad b^{\mu}=u_{\nu}\,{}^{*}F^{\mu\nu}$
(3)
Here we account for both the gas and electromagnetic components of the stress
tensor: $u^{\mu}$ is the four-velocity of the gas, $u$ is the internal
energy, $\rho$ is the density, $p$ denotes the pressure, and $b^{\mu}$ is the
magnetic four-vector. $F$ is the Faraday tensor, and in the force-free
approximation the Lorentz force vanishes, $E^{\mu}=u_{\nu}F^{\mu\nu}=0$.
The model system of equations is supplemented with an equation of state, in
the polytropic form
$p_{\rm g}=K\rho^{\gamma}$ (4)
where $p_{\rm g}$ is the gas pressure, $\rho$ is density, $K$ is constant
specific entropy, and the polytropic index $\gamma=4/3$ is typically used in
the context of gamma ray bursts. The plasma $\beta-$parameter is defined as
the ratio of the fluid’s thermal to the magnetic pressure, $\beta\equiv p_{\rm
g}/p_{\rm mag}$. We normalize the magnetic field in the torus to have
$\beta=(\gamma-1)u_{\rm max}/(b^{2}_{\rm max}/2)$, where $u_{max}$ is the
internal energy at the torus pressure maximum radius, $r_{\rm max}$.
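The normalization just described can be sketched in a few lines. The torus values used below ($u_{\rm max}$ and the target $\beta$) are placeholders, since the actual numbers depend on the simulation setup; the point is only to show how the field strength follows from the chosen $\beta(r_{\rm max})$.

```python
# Normalize the magnetic field so that the plasma beta at the pressure
# maximum takes a target value (placeholder numbers, not from the paper).
gamma = 4.0 / 3.0      # polytropic index used in the GRB context
u_max = 1.0e-2         # internal energy at the pressure-maximum radius (assumed)
beta_target = 60.0     # desired beta(r_max)

# beta = (gamma - 1) * u_max / (b^2_max / 2)  =>  solve for b^2_max:
b2_max = 2.0 * (gamma - 1.0) * u_max / beta_target

# Consistency check: gas pressure over magnetic pressure at the maximum.
p_gas_max = (gamma - 1.0) * u_max
beta_check = p_gas_max / (b2_max / 2.0)
```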
The resulting evolved structure of the GRB central engine (disk in the upper
panel, jet in the bottom panel) is shown in Figure 2. The snapshot is taken
at time $t=2000~{}[t_{\rm g}]$, where $t_{\rm g}=GM/c^{3}$ denotes the
geometric time unit, which scales with the black hole mass. For a GRB engine,
the typical mass of the central black hole is in the range from $M\sim 3$ up
to $M\sim 30$ Solar masses. Note that in the evolved state we are able to
obtain the physically motivated structure of the accretion torus in the
equatorial plane of the rotating black hole, and low-density polar funnels
with a dominant magnetic field. These funnels carry both thermal and Poynting
energy along the jet axis. The ultimate bulk Lorentz factor in the jet,
achieved at ’infinity’ (outside the computational domain), will possibly
reach the order of magnitude of this total energetics parameter, so
$\Gamma\sim$ a few hundred.
Figure 2: Top: Distribution of mass density and configuration of magnetic
field lines in the region close to the black hole, at the base of gamma ray
burst. Bottom: Structure of jet launched from rotating black hole, in terms of
its total energetics (thermal plus Poynting energy). The results from
axisymmetric numerical simulation are shown, for black hole spin $a=J/M^{2}$ =
0.7. The snapshot is taken at time $t=2000~{}[t_{\rm g}]$.
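The conversion from geometric to physical time units works out as follows; the 10 $M_{\odot}$ black hole mass below is a representative choice within the 3-30 $M_{\odot}$ range quoted above, not a value fixed by the text.

```python
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8        # speed of light [m/s]
M_sun = 1.989e30   # solar mass [kg]

M_BH = 10.0 * M_sun          # representative GRB engine mass (assumed)
t_g = G * M_BH / c**3        # geometric time unit [s], ~5e-5 s here
t_snapshot = 2000.0 * t_g    # physical time of the t = 2000 t_g snapshot, ~0.1 s
```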
## 4 Discussion
Figure 3: Time variability of the jet in terms of its total energetics
(thermal plus Poynting energy), as measured during the simulation at a chosen
point in the jet (located at an angle $\theta=5^{\circ}$ from the jet axis).
The result of numerical 2-dimensional simulation is shown, for black hole spin
$a=J/M^{2}=0.7$, and magnetic field normalized in torus with $\beta(r_{\rm
max})=60$. The dashed lines show the characteristic timescale of the MRI
instability, measured as the maximum growth rate timescale (Gammie 2004) which
correlates with the pulse width.
Our numerical simulations, performed by the computational astrophysics group
at the Center for Theoretical Physics, PAS, provide working tools to model
the central engine of both long and short GRBs. The unified approach is
possible because we scale the size and timescale of the system by converting
the geometric units to physical ones, as appropriate in GRMHD modeling. We
also notice that the variability timescales, modeled in our simulations as
the time variability of the jet energetics (see Fig. 3), are governed by the
timescale of the magneto-rotational instability in the accretion disk. In
order to properly capture the regime of long-duration GRBs, with sustained
magnetic turbulence in the engine, full 3-dimensional simulations are needed.
This is work in progress carried out by our group (Fig. 4).
Figure 4: Structure of the accretion flow and large scale magnetic field in
the jet. Figure shows the result of 3-dimensional numerical simulation.
Exemplary field line is depicted as launched from the black hole horizon, with
dominant toroidal field component.
Such models, with a spatial resolution good enough to resolve physical
processes close to the black hole, need large computational resources. We
have already run several series of calculations on the OKEANOS supercomputer
of the Warsaw ICM supercomputing center, and on the Prometheus supercomputer
within the PL-Grid. The purpose is to study the polarimetric signatures of
strong magnetic fields near the event horizon of the black hole in the
Galactic center (Moscibrodzka et al., 2020, in preparation).
## Acknowledgement
The authors acknowledge the financial support by the grant No.
2016/23/B/ST9/03114 and 2019/35/B/ST9/04000 from the Polish National Science
Center. We have used the computational resources of the Warsaw ICM through
grant Gb79-9, and the PL-Grid through the grant grb3.
## References
* Gehrels et al. [2004] N. Gehrels, G. Chincarini, P. Giommi, K. O. Mason, J. A. Nousek, A. A. Wells, N. E. White, S. D. Barthelmy, D. N. Burrows, L. R. Cominsky, K. C. Hurley, F. E. Marshall, P. Mészáros, P. W. A. Roming, L. Angelini, L. M. Barbier, T. Belloni, S. Campana, P. A. Caraveo, M. M. Chester, O. Citterio, T. L. Cline, M. S. Cropper, J. R. Cummings, A. J. Dean, E. D. Feigelson, E. E. Fenimore, D. A. Frail, A. S. Fruchter, G. P. Garmire, K. Gendreau, G. Ghisellini, J. Greiner, J. E. Hill, S. D. Hunsberger, H. A. Krimm, S. R. Kulkarni, P. Kumar, F. Lebrun, N. M. Lloyd-Ronning, C. B. Markwardt, B. J. Mattson, R. F. Mushotzky, J. P. Norris, J. Osborne, B. Paczynski, D. M. Palmer, H. S. Park, A. M. Parsons, J. Paul, M. J. Rees, C. S. Reynolds, J. E. Rhoads, T. P. Sasseen, B. E. Schaefer, A. T. Short, A. P. Smale, I. A. Smith, L. Stella, G. Tagliaferri, T. Takahashi, M. Tashiro, L. K. Townsley, J. Tueller, M. J. L. Turner, M. Vietri, W. Voges, M. J. Ward, R. Willingale, F. M. Zerbi, and W. W. Zhang. The Swift Gamma-Ray Burst Mission. _$\rm ApJ$_ , 611(2):1005–1020, August 2004. doi:10.1086/422091.
* Paczynski [1986] B. Paczynski. Gamma-ray bursters at cosmological distances. _$\rm ApJL$_ , 308:L43–L46, September 1986. doi:10.1086/184740.
* Blandford and Znajek [1977] R. D. Blandford and R. L. Znajek. Electromagnetic extraction of energy from Kerr black holes. _$\rm MNRAS$_ , 179:433–456, May 1977. doi:10.1093/mnras/179.3.433.
* Kouveliotou et al. [1993] C. Kouveliotou, C. A. Meegan, G. J. Fishman, N. P. Bhat, M. S. Briggs, T. M. Koshut, W. S. Paciesas, and G. N. Pendleton. Identification of Two Classes of Gamma-Ray Bursts. _$\rm ApJL$_ , 413:L101, August 1993. doi:10.1086/186969.
* Janiuk and Yuan [2010] A. Janiuk and Y. F. Yuan. The role of black hole spin and magnetic field threading the unstable neutrino disk in gamma ray bursts. _$\rm A\ &A$_, 509:A55, January 2010. doi:10.1051/0004-6361/200912725.
* Thompson [1994] C. Thompson. A model of gamma-ray bursts. _$\rm MNRAS$_ , 270:480–498, October 1994. doi:10.1093/mnras/270.3.480.
* Janiuk and Sapountzis [2018] Agnieszka Janiuk and Konstantinos Sapountzis. Gamma ray bursts: Progenitors, accretion in the central engine, jet acceleration mechanisms. In Zbigniew Szadkowski, editor, _Cosmic Rays_ , chapter 2. IntechOpen, Rijeka, 2018. doi:10.5772/intechopen.76283. URL https://doi.org/10.5772/intechopen.76283.
* Fishbone and Moncrief [1976] L. G. Fishbone and V. Moncrief. Relativistic fluid disks in orbit around Kerr black holes. _$\rm ApJ$_ , 207:962–976, August 1976. doi:10.1086/154565.
* Chakrabarti [1985] S. K. Chakrabarti. The natural angular momentum distribution in the study of thick disks around black holes. _$\rm ApJ$_ , 288:1–6, January 1985. doi:10.1086/162755.
Deep learning via LSTM models for COVID-19 infection forecasting in India
Rohitash Chandra 1 *, Ayush Jain 2 ‡, Divyanshu Singh Chauhan 3 ‡
1\. UNSW Data Science Hub & School of Mathematics and Statistics, University
of New South Wales, Sydney, Australia
2\. Department of Electronics and Electrical Engineering, Indian Institute of
Technology Guwahati, Assam, India
3\. Department of Mechanical Engineering, Indian Institute of Technology
Guwahati, Assam, India
‡These authors contributed equally to this work.
* Corresponding author
E-mail<EMAIL_ADDRESS>(RC)
## Abstract
The COVID-19 pandemic continues to have a major impact on health and medical
infrastructure, the economy, and agriculture. Prominent computational and
mathematical models have been unreliable due to the complexity of the spread
of infections. Moreover, the lack of data collection and reporting makes
modelling attempts difficult and unreliable. Hence, we need to re-examine the
situation with reliable data sources and innovative forecasting models. Deep
learning models such as recurrent neural networks are well suited for
modelling spatiotemporal sequences. In this paper, we apply recurrent neural
networks such as long short-term memory (LSTM), bidirectional LSTM, and
encoder-decoder LSTM models for multi-step (short-term) COVID-19 infection
forecasting. We select Indian states with COVID-19 hotspots, capture the
first (2020) and second (2021) waves of infections, and provide a two-month-
ahead forecast. Our model predicts that the likelihood of another wave of
infections in October and November 2021 is low; however, the authorities need
to be vigilant given emerging variants of the virus. The accuracy of the
predictions motivates the application of the method to other countries and
regions. Nevertheless, challenges in modelling remain due to the reliability
of data and difficulties in capturing factors such as population density,
logistics, and social aspects such as culture and lifestyle.
## 1 Introduction
The coronavirus disease 2019 (COVID-19) is an infectious disease caused by
severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1, 2, 3] which
became a global pandemic [4]. COVID-19 was first identified in December 2019
in Wuhan, Hubei, China, with the first confirmed (index) case traced back to
17th November 2019 [5]. The COVID-19 pandemic forced many countries to close
their borders and enforce a partial or full lockdown, which had a devastating
impact on the world economy [6, 7, 8]. Another major impact has been on
agriculture [9, 10], which is a major source of income for the population in
rural areas, especially in the developing world. The sudden lockdown in some
countries created a number of problems, especially for low-income communities
[11, 12] and migrant workers [13]. Currently (8th October 2021) [14], more
than 238 million cases have been reported across the world, which have
resulted in more than 4.8 million deaths, and about 215 million people have
recovered [15, 16]. A significant portion of the recovered suffer from “long
COVID” [17, 18], a term which refers to prolonged health problems that can
last from months to an entire lifetime. In terms of vaccinations (8th October
2021) [14], 46.3 % of the world population has received at least one dose and
6.4 billion doses have been administered globally. Furthermore, only about
2.4 % of the population in low-income countries has received at least one
dose.
The case of India has been unique when it comes to the management of the
COVID-19 pandemic [13]. The first COVID-19 case in India was reported on 30
January 2020. India had two major waves of infections, where the first wave
was from April to October 2020 and the second wave was from February to June
2021 [19]. India currently (8th October 2021) [14, 15] has 33,935,309
confirmed cases with 450,408 deaths, which is the largest number in Asia and
the second highest in the world after the United States. The fatality rate of
COVID-19 in India is among the lowest in the world, as it ranks 124th with
322 deaths per million people. In comparison, the United States has 2,197 and
the United Kingdom 2,013 deaths per million people. The second wave (2021)
had a devastating effect on the Indian health system, and around its peak in
daily cases, India had the highest daily infection count in the world [14,
15]. On the bright side, India also has one of the fastest recovery rates in
the world with 236,610 active cases, ranking 9th in the world in active cases
although 2nd in total cases.
In terms of COVID-19 forecasting, prominent computational and statistical
models have been unreliable due to the complexity of the spread of infections
[20, 21, 22]. Given the lack of data, it is challenging to develop a model
that takes into consideration population density, the effect of lockdowns,
the effect of viral mutations and variants such as the delta variant [23],
logistics and travel, and qualitative social aspects such as culture and
lifestyle [24]. Culture and lifestyle are examples of variables of interest
that cannot be measured quantitatively. Due to the qualitative nature of such
variables and the lack of data collection, modelling attempts have been
mostly unreliable [25]. We need to re-examine the situation with the latest
data sources and the most comprehensive forecasting models [26, 27, 28].
Moreover, a number of other limitations exist, such as noisy or unreliable
data on active cases [29], the changing mortality rate given different
variants, and asymptomatic carriers [30, 31]. There have been reports that
the models have a number of limitations and have failed in several situations
[25]. Despite these challenges, it has been shown that country-based
mitigation factors, in terms of lockdown level and monitoring, have a major
impact on the rate of infection [32]. We note that limited work has been done
using deep learning-based forecasting models for COVID-19 in India, although
the country has one of the world’s largest populations, with highly populated
and dense cities. There is a need to evaluate the latest deep learning models
for forecasting COVID-19 in India, taking into account both the first (2020)
and the second wave (2021) of infections.
Deep learning models such as recurrent neural networks (RNNs) are well suited
for modelling spatiotemporal sequences [33, 34, 35, 36, 37] and dynamical
systems, when compared to simple neural networks [38, 39, 40]. The limitation
of training RNNs on long-term dependencies, where data sequences span
hundreds or thousands of time-steps [41, 42], has been addressed by long
short-term memory networks (LSTMs) [35]. LSTMs have been used for COVID-19
forecasting in China [43] with good performance when compared to epidemic
models. LSTMs have also been used for COVID-19 forecasting in Canada [44].
Other deep learning models such as convolutional neural networks (CNNs) have
recently shown promising performance for time series forecasting [45, 46].
Hence, they would also be suited for capturing the spatiotemporal
relationship of COVID-19 transmission between neighbouring states in India.
In this paper, we employ LSTM models in order to forecast the spread of
COVID-19 infections among selected states in India. We select Indian states
with COVID-19 hotspots and capture the first and second waves of infections
in order to later provide a two-month-ahead forecast. We first employ
univariate and multivariate time series forecasting approaches and compare
their performance for short-term (4 days ahead) forecasting. We also present
visualisation and analysis of the COVID-19 infections and provide an open
source software framework that can provide robust predictions as more data
becomes available. The software framework can also be applied to different
countries and regions.
The rest of the paper is organised as follows. Section 2 presents a background
and literature review of related work. Section 3 presents the proposed
methodology with data analysis and Section 4 presents experiments and results.
Section 5 provides a discussion with discussion of future work and Section 6
concludes the paper.
## 2 Related Work
The pandemic has greatly affected and transformed work environments and
lifestyles. COVID-19 lockdowns and restrictions on movement have given rise
to e-learning [47, 48, 49] and telemedicine [50], and created opportunities
in applications for geographical information systems [51]. The lockdowns
showed a positive impact on the environment [52, 53], especially for highly
populated and industrial nations with high air pollution rates [54].
Zambrano-Monserrate et al. highlighted that the positive indirect effects
revolve around the reduction of air pollutants in China, France, Germany,
Spain, and Italy [55]. However, the way medical pollutants and domestic waste
were discarded during lockdowns has been an issue [55]. COVID-19 lockdowns
and infection management raised concerns about prejudices against minorities
and people of colour in developed countries such as the United States [56].
Furthermore, there has been a significant impact on mental health across the
globe [57, 58].
It has been shown that in some countries, comprehensive identification and
isolation policies have effectively suppressed the spread of COVID-19. Huang
et al. [59] presented an evaluation of identification and isolation policies
that effectively suppressed the spread of COVID-19, which further contributed
to reducing casualties during the phase of a dramatic increase in diagnosed
cases in Wuhan, China. The authors recommended that governments swiftly
execute forceful public health interventions in the initial stage of the
pandemic. However, such policies have not been as effective in other
countries, as seen in the first wave of infections and the associated
lockdowns in India [26].
### 2.1 Modelling and forecasting COVID-19
A number of machine learning and statistical models have been used for
modelling and forecasting COVID-19 in different parts of the world. Saba and
Elsheikh presented simple autoregressive neural networks for forecasting the
prevalence of the COVID-19 outbreak in Egypt, which showed relatively good
performance when compared to officially reported cases [22]. Yousaf et al.
used an auto-regressive integrated moving average (ARIMA) model for
forecasting COVID-19 in Pakistan [21]. The model predicted that the number of
confirmed cases would increase by a factor of 2.7, with a 95 % prediction
interval of 5681 - 33079 cases by the end of May 2020. However, Pakistan
reported around 70,000 cases by the end of May 2020 [14]; hence, the model's
prediction was poor. Velásquez and Lara used a Gaussian process regression
model for forecasting COVID-19 infections in the United States [20]. The
authors showed that COVID-19 would peak in the United States around July 14th
2020, with about 132,074 deaths and 1,157,796 infected individuals at the
peak stage. However, the actual cases by July 14th reached more than 3.5
million with more than 139 thousand deaths [14, 15], which shows that the
model was close in forecasting deaths but poor in forecasting total cases.
Chimmula and Zhang used LSTM neural networks for time series forecasting of
COVID-19 transmission in Canada [44]. The authors predicted the possible
ending point of the outbreak around June 2020 and compared the transmission
rate of Canada with those of Italy and the United States. Canada reached its
daily new cases peak by 2nd May 2020 [14, 15], and since then new cases have
been drastically decreasing; therefore, we can say that the authors' approach
was somewhat close in reporting the peak for COVID-19 in Canada. Chakraborty
and Ghosh [27] used a hybrid ARIMA and wavelet-based forecasting model for
short-term (ten days ahead) forecasts of daily confirmed cases for Canada,
France, India, South Korea, and the United Kingdom. The authors also applied
an optimal regression tree algorithm to find essential causal variables that
significantly affect the case fatality rates for different countries. Maleki
et al. [28] used autoregressive time series models based on mixtures of
normal distributions for confirmed and recovered COVID-19 cases worldwide.
Ren et al. [24] analysed spatiotemporal variations of the epidemics before
utilizing ecological niche models with nine socioeconomic variables to
identify the potential risk zones for megacities such as Beijing, Guangzhou
and Shenzhen. The results demonstrated that the method was capable of being
employed as an early forecasting tool for identifying potential COVID-19
infection risk zones. Alzahrani et al. [60] used autoregressive and ARIMA
models for COVID-19 in Saudi Arabia with data up to 20th April 2020 and
predicted 7668 daily new cases by 21st May 2020 if stringent precautionary
control measures were not implemented. However, Saudi Arabia reported 2532
actual cases on 21st May 2020 [14, 15]; hence, the model showed poor
performance. Singh et al. [61] presented a hybrid of discrete wavelet
decomposition and ARIMA models for one-month-ahead forecasts of COVID-19
casualties in the most affected countries at the time, which included France,
Italy, Spain, the United Kingdom and the United States. The study found that
the hybrid model was better than the standalone models. Dasilva et al. [26]
employed machine learning methods such as Bayesian regression neural
networks, cubist regression, k-nearest neighbours, quantile random forest,
and support vector regression, with pre-processing based on variational mode
decomposition, for forecasting the cumulative COVID-19 cases one, three, and
six days ahead in five Brazilian and American states up to April 28th, 2020.
Yang et al. [43] presented an epidemiological model that incorporated
domestic migration data and the most recent COVID-19 epidemiological data to
predict the epidemic progression. The model predicted a peak by late
February, with a gradual decline by the end of April 2020. This was one of
the few attempts at predicting the COVID-19 infection trend in China [14,
15]; however, the actual peak was observed in early February 2020 and the
spread of infections ended by the middle of March 2020.
Next, we review key studies of COVID-19 forecasting with deep learning models
in India. Anand et al. [62] focused on forecasting of COVID-19 cases in India
using RNNs such as LSTM and gated-recurrent units (GRU) with the dataset from
30th January 2020 to 21st July 2020. Bhimala et al. [63] incorporated the
weather conditions of different states to improve forecasting of COVID-19
cases across India. The authors assumed that different humidity levels in
different states would lead to varying transmission of infection within the
population. They demonstrated that the LSTM
model performed better in the medium and long range forecasting scale when
integrated with the weather data. Shetty [64] presented real-time forecasting
using a simple neural network for the COVID-19 cases in the state of Karnataka
in India where parameter selection for the model was based on cuckoo search
algorithm. The study reported that the mean-absolute percentage error (MAPE)
was reduced from 20.73 % to 7.03 % and the proposed model was further tested
on the Hungary COVID-19 dataset and reported promising results. Tomar and
Gupta [65] developed LSTM model for 30-day ahead prediction of COVID-19
positive cases in India where they also studied the effect of preventive
measures on the spread of COVID-19. They showed that with preventive measures
and lower transmission rate, the spread can be reduced significantly. Gupta et
al. [66] forecasted COVID-19 cases of India using support vector machines,
prophet, and linear regression models. Similarly, Bodapati et al. [67]
forecasted the COVID-19 daily cases, deaths caused and recovered cases with
the help of LSTM networks for the whole world. Chaurasia and Pal [68] used
several forecasting models such as simple average, single exponential
smoothing, Holt winter method, and ARIMA models for COVID-19 pandemic.
A number of machine learning methods have been used in conjunction with deep
learning models for COVID-19 forecasting in the rest of the world. Battineni et al.
[69] forecasted COVID-19 cases using a machine learning method known as
prophet logistic growth model which estimated that by late September 2020, the
outbreak can reach 7.56, 4.65, 3.01 and 1.22 million cases in the United
States, Brazil, India and Russia, respectively. Nadler et al. [70] used a
model embedded in a Bayesian framework coupled with a LSTM network to forecast
cases of COVID-19 in developed and developing countries. Istaiteh et al. [71]
compared the performance of ARIMA, LSTM, multilayer perceptron and
convolutional neural network (CNN) models for prediction of COVID-19 cases all
over the world. They reported that deep learning models outperformed ARIMA
model, and furthermore CNN outperformed LSTM networks and multi-layer
perceptron. Pinter et al. [72] used hybrid machine learning methods consisting
of adaptive network-based fuzzy inference systems (ANFIS) and multilayer
perceptron (simple neural network) for forecasting COVID-19 infections and mortality rate
in Hungary.
## 3 Methodology: Forecasting COVID-19 novel infections with deep learning
models
We need to reconstruct the original time series into a state-space vector in
order to train deep learning models for multi-step-ahead prediction. Takens'
theorem states that the reconstruction can reproduce important features of
the original time series [73]. Hence, an embedded phase space
$Y(t)=[x(t),x(t-T),...,x(t-(D-1)T)]$ can be generated given an observed time
series $x(t)$; where $T$ is the time delay, $D$ is the embedding dimension
(window size), $t=0,1,2,...,N-D-1$, and $N$ is the length of the original time
series. Appropriate values for $D$ and $T$ need to be selected to efficiently
apply Takens' theorem for reconstruction [74]. Takens proved that if the
original attractor is of dimension $d$, then $D=2d+1$ is sufficient
[73].
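As an illustrative sketch (not the authors' code), the delay embedding can be built in a few lines of numpy; the forward-window form below is equivalent to the backward-delay form above up to re-indexing:

```python
import numpy as np

def embed(x, D, T):
    """Reconstruct a state-space (delay-embedding) matrix from a 1-D series.

    Each row is a window [x(t), x(t+T), ..., x(t+(D-1)T)] of D values
    separated by lag T; there are N - (D-1)T such windows.
    """
    x = np.asarray(x)
    n_rows = len(x) - (D - 1) * T
    return np.stack([x[i: i + (D - 1) * T + 1: T] for i in range(n_rows)])

series = np.arange(10)          # toy series 0..9
Y = embed(series, D=3, T=2)     # windows of 3 values, lag 2
```

With $D=3$ and $T=2$, each row picks every second value, so the first window is $[0, 2, 4]$.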
### 3.1 LSTM network models
Recurrent neural networks (RNNs) have been prominent for modelling temporal
sequences. RNNs feature a context layer to act as memory in order to project
information from current state into future states, and eventually the output
layer. Although a number of different RNN architectures exist, the Elman RNN
[33, 75] is one of the earliest which has been prominent for modelling
temporal sequences and dynamical systems [76, 39, 77].
Training RNNs was a challenging task in the early days. Backpropagation-through-time
(BPTT), which is an extension of the backpropagation algorithm, has
been prominent in training RNNs [34]. BPTT features gradient descent where the
error is backpropagated for a deeper network architecture that features states
defined by time. The RNN unfolded in time is similar to a multilayer
perceptron that features multiple hidden layers. A major limitation of BPTT
for simple RNNs has been the problem of learning long-term dependencies given
vanishing and exploding gradients [78]. The LSTM network addressed this
limitation with better capabilities in remembering the long-term dependencies
using memory cells in the hidden layer [35]. The memory cells help in
remembering the long-term dependencies in data as shown in Figure 1.
Fig 1: LSTM memory cell in the LSTM recurrent neural network.
The LSTM network model calculates a hidden state output $h_{t}$ by

$\displaystyle i_{t}=\sigma\big{(}x_{t}U^{i}+h_{t-1}W^{i}+b^{i}\big{)}$ (1)

$\displaystyle f_{t}=\sigma\big{(}x_{t}U^{f}+h_{t-1}W^{f}+b^{f}\big{)}$

$\displaystyle o_{t}=\sigma\big{(}x_{t}U^{o}+h_{t-1}W^{o}+b^{o}\big{)}$

$\displaystyle\tilde{C}_{t}=\tanh\big{(}x_{t}U^{c}+h_{t-1}W^{c}+b^{c}\big{)}$

$\displaystyle C_{t}=f_{t}\ast C_{t-1}+i_{t}\ast\tilde{C}_{t}$

$\displaystyle h_{t}=\tanh(C_{t})\ast o_{t}$

where $i_{t}$, $f_{t}$ and $o_{t}$ refer to the input, forget and output
gates at time $t$, respectively, and $c$ refers to the memory cell. $x_{t}$ and
$h_{t}$ denote the input and hidden state at time $t$, respectively. $W$ and
$U$ are the weight matrices adjusted during learning, along with the bias $b$.
Note that all the gates have the same dimension $d_{h}$, given by the size of
the hidden state. $\tilde{C}_{t}$ is the intermediate cell state, and $C_{t}$
is the current cell memory. The initial values at $t=0$ are given by $C_{0}=0$
and $h_{0}=0$. We denote $\ast$ as element-wise multiplication.
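A minimal numpy sketch of a single LSTM step may help make the gate equations concrete. The weights here are random placeholders rather than trained parameters, and the cell-state update uses the standard form $C_t = f_t \ast C_{t-1} + i_t \ast \tilde{C}_t$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, U, W, b):
    """One LSTM step: input/forget/output gates, candidate cell, state update.

    U, W, b are dicts keyed by gate name ('i', 'f', 'o', 'c'); shapes are
    illustrative, not the paper's trained parameters.
    """
    i_t = sigmoid(x_t @ U['i'] + h_prev @ W['i'] + b['i'])
    f_t = sigmoid(x_t @ U['f'] + h_prev @ W['f'] + b['f'])
    o_t = sigmoid(x_t @ U['o'] + h_prev @ W['o'] + b['o'])
    c_tilde = np.tanh(x_t @ U['c'] + h_prev @ W['c'] + b['c'])
    c_t = f_t * c_prev + i_t * c_tilde      # cell state update
    h_t = np.tanh(c_t) * o_t                # hidden state output
    return h_t, c_t

rng = np.random.default_rng(0)
d_in, d_h = 1, 4
U = {k: rng.standard_normal((d_in, d_h)) for k in 'ifoc'}
W = {k: rng.standard_normal((d_h, d_h)) for k in 'ifoc'}
b = {k: np.zeros(d_h) for k in 'ifoc'}
h, c = np.zeros(d_h), np.zeros(d_h)
for x_t in np.array([[0.1], [0.2], [0.3]]):   # a 3-step toy sequence
    h, c = lstm_step(x_t, h, c, U, W, b)
```

Since $h_t = \tanh(C_t) \ast o_t$ with $o_t \in (0,1)$, every hidden unit stays in $(-1, 1)$.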
### 3.2 Bi-directional LSTM networks
Conventional RNN and LSTM networks only make use of previous context state for
determining future states. Bidirectional RNNs (BD-RNNs) [79] on the other
hand, process information in both directions with two separate hidden layers
which are then propagated forward to the same output layer. Hence, two
independent RNNs are placed together to allow both backward and forward
information about the sequence at every time step. The forward hidden sequence
$h_{f}$, the backward hidden sequence $h_{b}$, and the output sequence $y$ are
computed by iterating the backward layer from $t=T$ to $t=1$, and the forward
layer from $t=1$ to $t=T$.
Bi-directional LSTM networks (BD-LSTM) [80] can access longer-range context or
state in both directions similar to BD-RNNs. BD-LSTM networks were originally
proposed for word-embedding in natural language processing [80] tasks and have
been used in several real-world sequence processing problems such as phoneme
classification [80], continuous speech recognition [81], and speech synthesis
[82].
BD-LSTM networks process inputs in two directions: one from past to future, and
another from future to past, running information backwards so that state
information from the future is preserved. Given the two hidden states combined at
any point in time, the network can preserve information from both past and
future as shown in Figure 2.
Fig 2: Bi-directional LSTM recurrent neural network.
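The bidirectional scheme can be sketched with a toy recurrent step (a stand-in for the LSTM cell, not the paper's model): run the sequence forward and backward, then combine the two hidden sequences at each time step:

```python
import numpy as np

def run_direction(xs, step, state0):
    """Run a recurrent step over a sequence; return all hidden states."""
    h, out = state0, []
    for x in xs:
        h = step(x, h)
        out.append(h)
    return out

# Toy recurrent step (illustrative stand-in for an LSTM cell).
step = lambda x, h: np.tanh(0.5 * h + x)

xs = [np.array([0.1]), np.array([0.2]), np.array([0.3])]
h0 = np.zeros(1)
h_fwd = run_direction(xs, step, h0)                 # iterate t = 1 .. T
h_bwd = run_direction(xs[::-1], step, h0)[::-1]     # iterate t = T .. 1, re-align
# At each time step the output sees both past and future context:
y = [np.concatenate([hf, hb]) for hf, hb in zip(h_fwd, h_bwd)]
```

Each output vector concatenates the forward and backward hidden states for that time step, mirroring how the two hidden layers feed the same output layer.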
### 3.3 Encoder-Decoder LSTM networks
The encoder-decoder LSTM network (ED-LSTM) [83] was introduced as a
sequence-to-sequence model for mapping a fixed-length input to a fixed-length output.
The length of the input and output may differ, which makes such models applicable
to automatic language translation tasks (English to French, for example); this can
be extended to multi-step series prediction where both the input and output
are of variable lengths. A latent vector representation is used to handle the
variable-length input and output by first encoding the input sequences, one
at a time and then decoding it. We consider the input sequence
$(x_{1},...,x_{n})$ with corresponding output sequence $(y_{1},...,y_{m})$,
and estimate the conditional probability of the output sequence given an input
sequence, i.e. $p(y_{1},...,y_{m}|x_{1},...,x_{n})$. In the encoding phase,
given an input sequence, the ED-LSTM network computes a sequence of hidden
states. In the decoding phase, it defines a distribution over the output
sequence given the input sequence as shown in Figure 3.
Fig 3: Encoder-Decoder LSTM recurrent neural network. Note that the outputs
$(y_{1},...,y_{m})$ with corresponding input $(x_{1},...,x_{n})$ are not
explicitly shown.
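The encoding and decoding phases can be sketched with the same kind of toy recurrent step (illustrative only; the actual ED-LSTM uses trained LSTM cells and a learned output projection):

```python
import numpy as np

# Toy recurrent step (stand-in for an LSTM cell); weights are illustrative.
step = lambda x, h: np.tanh(0.6 * h + x)

def encode(xs, h0):
    """Encoding phase: compress the input sequence into a latent vector."""
    h = h0
    for x in xs:
        h = step(x, h)
    return h                       # latent representation of (x_1, ..., x_n)

def decode(h, m):
    """Decoding phase: unroll the latent vector into m output steps."""
    ys, y = [], np.zeros_like(h)
    for _ in range(m):
        h = step(y, h)
        y = 0.8 * h                # toy output projection
        ys.append(y)
    return ys

latent = encode([np.array([0.2])] * 6, np.zeros(1))   # n = 6 input steps
outputs = decode(latent, m=4)                          # m = 4 output steps
```

Note that $n \neq m$ here, illustrating how the latent vector decouples input length from output length.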
### 3.4 India: Situation Report: 8th October, 2021
We provide a visual representation of the total number of COVID-19 infections
for different states and union territories in India based on data till 8th
October, 2021.
Tables 1, 2, 3, and 4 rank the top ten Indian states by total cases on the
1st of every month. We see that highly populated states such as Maharashtra
(population estimate of 123 million [84]) led India in the number of total
cases throughout 2020. We note that the state of Uttar Pradesh, with an
estimated population of 238 million, has managed better. Delhi has a relatively
smaller population (estimated 19 million [84]) but a high population density,
and hence has been one of the leading states with COVID-19 infections (in the
top 6 throughout 2020). Tables 3 and 4 show that in 2021, Maharashtra continued leading;
however, the second position was overtaken by Kerala from February, which has
maintained that position since. We note that from February to June 2021,
India experienced the second-wave of infections from the delta-variant of the
virus, with Maharashtra and Kerala leading most of the time in terms of
monthly infections. The first peak for novel cases in India was reached on
September 16th 2020 with 97,894 daily and 93,199 weekly average novel cases
[15]. The daily novel cases were steady for several months and then rose
again from February 2021 with the second wave of infections. The peak of the
second wave was reached around 7th May 2021, with 401,078 daily and weekly
average of 389,672 novel infections [15]. The peak of deaths was reached
around 21st May 2021 with 4194 deaths and 4188 weekly average.
Figures 4 and 5 present the total number of novel weekly cases for different
groups of Indian states and union territories, which covers both the first and
second wave of infections. We notice that the number of cases significantly
increased after May 2020, which marks the first wave, and then declined. Figure
4 (Panel a), focusing on the major affected states, shows that Maharashtra led the
first and second waves of infections, followed by Karnataka. In Figure 4 Panel
(b), considering the Eastern states, we find that new cases in West Bengal
drastically increased for the first wave of infections and it took Bihar
longer to reach the peak when compared to Odisha and the others. In the second
wave, West Bengal led the other states by a large margin. In the Northern
states, shown in Figure 5 (Panel a), we find that Uttar Pradesh led the
first and second waves of infections, which is not surprising since it is the
most populous state of India. In the case of the relatively lowly populated
states (small states) shown in Figure 5 (Panel b), we find that Goa and
Tripura led the first wave of infections, and later in the second wave, Goa
significantly overtook the rest of the states.
---
(a) Major affected states
(b) Eastern states
Fig 4: Weekly average of new cases for groups of Indian states and union
territories [15].
---
(a) Northern states
(b) Small states
Fig 5: Weekly average of new cases for groups of Indian states and union
territories [15].
Figure 6 presents daily active cases and cumulative (total) deaths for key
Indian states for 2021 [15]. We notice that different states, such as
Maharashtra and Tamil Nadu, have a few to several weeks of lag in reaching the
peak of novel daily cases. In terms of deaths, we do not see a sharp increase
after July 2021 in most of states. Note that we chose not to show daily deaths
in the same graphs since the scales between active cases and deaths are quite
different.
Fig 6: Daily active cases and cumulative deaths in key states of India for 2021 [15].

Rank | April | May | June | July | August
---|---|---|---|---|---
1 | Maharashtra | Maharashtra | Maharashtra | Maharashtra | Maharashtra
| (302) | (10498) | (67655) | (174761) | (422118)
2 | Kerala | Gujarat | Tamil Nadu | Tamil Nadu | Tamil Nadu
| (241) | (4395) | (22333) | (90167) | (245859)
3 | Tamil Nadu | Delhi | Delhi | Delhi | Andhra Pradesh
| (234) | (3515) | (19844) | (87360) | (140933)
4 | Delhi | Madhya Pradesh | Gujarat | Gujarat | Delhi
| (152) | (2719) | (16779) | (32557) | (135598)
5 | Uttar Pradesh | Rajasthan | Rajasthan | Uttar Pradesh | Karnataka
| (103) | (2584) | (8831) | (23492) | (124115)
6 | Karnataka | Tamil Nadu | Madhya Pradesh | West Bengal | Uttar Pradesh
| (101) | (2323) | (8089) | (18559) | (85461)
7 | Telengana | Uttar Pradesh | Uttar Pradesh | Rajasthan | West Bengal
| (96) | (2281) | (7823) | (18014) | (70188)
8 | Rajasthan | Andhra Pradesh | West Bengal | Telengana | Telengana
| (93) | (1463) | (5501) | (16339) | (62703)
9 | Andhra Pradesh | Telengana | Bihar | Karnataka | Gujarat
| (83) | (1039) | (3815) | (15242) | (61438)
10 | Gujarat | West Bengal | Andhra Pradesh | Andhra Pradesh | Bihar
| (82) | (795) | (3679) | (14595) | (51233)
Table 1: Rank of states by number of novel total cases taken at the 1st of every month for April to August 2020 [15]. The number of novel cases are shown in brackets.

Rank | September | October | November | December
---|---|---|---|---
1 | Maharashtra | Maharashtra | Maharashtra | Maharashtra
| (792541) | (1384446) | (1678406) | (1823896)
2 | Andhra Pradesh | Andhra Pradesh | Karnataka | Karnataka
| (434771) | (693484) | (823412) | (884897)
3 | Tamil Nadu | Karnataka | Andhra Pradesh | Andhra Pradesh
| (428041) | (601767) | (823348) | (868064)
4 | Karnataka | Tamil Nadu | Tamil Nadu | Tamil Nadu
| (342423) | (597602) | (724522) | (781915)
5 | Uttar Pradesh | Uttar Pradesh | Uttar Pradesh | Kerala
| (230414) | (399082) | (481863) | (602982)
6 | Delhi | Delhi | Kerala | Delhi
| (174748) | (279715) | (43310) | (570374)
7 | West Bengal | West Bengal | Delhi | Uttar Pradesh
| (162778) | (257049) | (386706) | (543888)
8 | Bihar | Odisha | West Bengal | West Bengal
| (136457) | (219119) | (373664) | (483484)
9 | Telengana | Kerala | Odisha | Odisha
| (127697) | (196106) | (290116) | (318725)
10 | Assam | Telengana | Telengana | Telengana
| (109040) | (193600) | (240048) | (270318)
Table 2: Rank of states by number of novel total cases taken at the 1st of every month for September to December 2020 [15]. The number of novel cases are shown in brackets.

Rank | January | February | March | April | May
---|---|---|---|---|---
1 | Maharashtra | Maharashtra | Maharashtra | Maharashtra | Maharashtra
| (2026399) | (2155070) | (2812980) | (4602472) | (5746892)
2 | Karnataka | Kerala | Kerala | Kerala | Karnataka
| (939387) | (1059403) | (1124584) | (1571183) | (2604431)
3 | Kerala | Karnataka | Karnataka | Karnataka | Kerala
| (929178) | (951251) | (997004) | (1523142) | (2526579)
4 | Andhra Pradesh | Andhra Pradesh | Andhra Pradesh | Uttar Pradesh | Tamil Nadu
| (887836) | (889916) | (901989) | (1252324) | (2096516)
5 | Tamil Nadu | Tamil Nadu | Tamil Nadu | Tamil Nadu | Andhra Pradesh
| (838340) | (851542) | (886673) | (1166756) | (1693085)
6 | Delhi | Delhi | Delhi | Delhi | Uttar Pradesh
| (635096) | (639289) | (662430) | (1149333) | (1691488)
7 | Uttar Pradesh | Uttar Pradesh | Uttar Pradesh | Andhra Pradesh | Delhi
| (600299) | (603527) | (617194) | (1101690) | (1426240)
8 | West Bengal | West Bengal | West Bengal | West Bengal | West Bengal
| (569998) | (575118) | (586915) | (828366) | (1376377)
9 | Odisha | Odisha | Chhattisgarh | Chhattisgarh | Chhattisgarh
| (335072) | (337191) | (349187) | (728700) | (971463)
10 | Rajasthan | Rajasthan | Odisha | Rajasthan | Rajasthan
| (317491) | (320336) | (340917) | (598001) | (939958)
Table 3: Rank of states by number of novel total cases taken at the 1st of every month for January to May 2021 [15]. The number of novel cases are shown in brackets.

Rank | June | July | August | September
---|---|---|---|---
1 | Maharashtra | Maharashtra | Maharashtra | Maharashtra
| (6061404) | (6303715) | (6464876) | (6541119)
2 | Kerala | Kerala | Kerala | Kerala
| (2924165) | (3390761) | (4057233) | (4613937)
3 | Karnataka | Karnataka | Karnataka | Karnataka
| (2843810) | (2905124) | (2949445) | (2972620)
4 | Tamil Nadu | Tamil Nadu | Tamil Nadu | Tamil Nadu
| (2479696) | (2559597) | (2614872) | (2655572)
5 | Andhra Pradesh | Andhra Pradesh | Andhra Pradesh | Andhra Pradesh
| (1889513) | (1966175) | (2014116) | (2045657)
6 | Uttar Pradesh | Uttar Pradesh | Uttar Pradesh | Uttar Pradesh
| (1706107) | (1708441) | (1709335) | (1709761)
7 | West Bengal | West Bengal | West Bengal | West Bengal
| (1499783) | (1528019) | (1548604) | (1565645)
8 | Delhi | Delhi | Delhi | Delhi
| (1434188) | (1436265) | (1437764) | (1438685)
9 | Chhattisgarh | Chhattisgarh | Odisha | Odisha
| (994480) | (1002008) | (1007750) | (1023735)
10 | Rajasthan | Odisha | Chhattisgarh | Chhattisgarh
| (952422) | (977268) | (1004451) | (1005229)
Table 4: Rank of states by number of novel total cases taken at the 1st of
every month for June to September 2021 [15]. The number of novel cases are
shown in brackets.
## 4 Results
In this section, we present results of prediction of COVID-19 daily cases in
India using prominent LSTM neural network models that includes, BD-LSTM and
ED-LSTM with architectural details given earlier (Section 3).
### 4.1 Experimental Design
Our experiments consider the evaluation of the respective models for
univariate and multivariate prediction tasks. The data has been accessed from
Indian Institute of Statistical Science - Bangalore [85], which was originally
sourced from Ministry of Health and Family Welfare, Government of India
website [86]. The dataset is based on daily novel cases, normalised by the
maximum number of daily cases over the entire data. We start our analysis from
15th April 2020 and use a rolling mean of 3 days to smoothen the original
data. We reconstruct the univariate and multivariate time series into a
state-space vector using Takens' theorem [73] with selected values for the
embedding dimension window ($D=6$) and time lag ($T=2$) for multi-step-ahead
(MSA) prediction. We consider four prediction horizons; i.e. $MSA=4$, where
each step is a prediction horizon.
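A hedged sketch of this preprocessing pipeline (max-normalisation, 3-day rolling mean, and delay embedding with $D$ lagged inputs and $MSA$ targets); the exact smoothing and window alignment below are assumptions, not the authors' code:

```python
import numpy as np

def preprocess(daily_cases, D=6, T=2, msa=4):
    """Normalise, smooth, and embed a daily-case series for MSA prediction.

    Returns input windows X (D lagged values) and targets Y (next msa values).
    """
    x = np.asarray(daily_cases, dtype=float)
    x = x / x.max()                                   # max-normalisation
    x = np.convolve(x, np.ones(3) / 3, mode='valid')  # 3-day rolling mean
    span = (D - 1) * T
    X, Y = [], []
    for t in range(len(x) - span - msa):
        X.append(x[t: t + span + 1: T])               # D lagged inputs
        Y.append(x[t + span + 1: t + span + 1 + msa]) # next msa values
    return np.array(X), np.array(Y)

X, Y = preprocess(np.arange(1, 31))   # toy 30-day series
```

For a 30-day toy series this yields 14 (window, target) pairs, each with 6 inputs and 4 targets.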
The Adam optimizer is used for training the respective LSTM models. Tables 5
and 6 describe the topology of the respective LSTM models for the univariate
and multivariate cases, respectively. In the case of the multivariate model,
the input contains four features which represent the adjacent states of the
state taken into account; i.e. the case of Maharashtra (Maharashtra, Gujarat,
Madhya Pradesh, Uttar Pradesh) and the case of Delhi (Delhi, Rajasthan, Uttar
Pradesh, Haryana). In the multivariate model of India, we take all the states
as input features. We note that, similar to the univariate model, the
multivariate model considers a selected embedding dimension window ($D=6$) for
multi-step-ahead prediction ($MSA=4$). Tables 5 and 6 provide the details of
the respective LSTM model topologies in terms of the hidden neurons and layers.
Method | Input | Hidden layer 1 | Hidden layer 2 | Output
---|---|---|---|---
LSTM | (6,1) | 32 | 32 | (1,4)
BD-LSTM | (6,1) | 32 | 16 | (1,4)
ED-LSTM | (6,1) | 32 | - | (1,4)
Table 5: Respective LSTM model topologies for the univariate case.

Method | Input | Hidden layer | Output
---|---|---|---
LSTM | (6,4) | 32 | (1,4)
BD-LSTM | (6,4) | 32 | (1,4)
ED-LSTM | (6,4) | 32 | (1,4)
Table 6: Respective LSTM model topologies for the multivariate case.
We review the performance of the respective methods in terms of scalability
and robustness, which refer to the ability to maintain consistent prediction
performance as the prediction horizon increases. We use the root mean squared
error (RMSE) in Equation 2 as the main performance measure for prediction
accuracy
$RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2}}$ (2)
where $y_{i}$ and $\hat{y}_{i}$ are the observed and predicted values,
respectively, and $N$ is the length of the observed data. We compute the RMSE
for each prediction horizon and, for each problem, report the mean error over
the respective prediction horizons.
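Equation 2, applied per prediction horizon and then averaged as described, can be written as:

```python
import numpy as np

def rmse_per_horizon(y_true, y_pred):
    """RMSE of Equation 2, computed separately for each prediction horizon.

    y_true, y_pred: arrays of shape (n_samples, n_horizons).
    """
    err = np.asarray(y_true) - np.asarray(y_pred)
    return np.sqrt(np.mean(err ** 2, axis=0))

y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
y_pred = np.array([[1.0, 2.0], [3.0, 2.0]])
per_step = rmse_per_horizon(y_true, y_pred)   # one RMSE per horizon
mean_rmse = per_step.mean()                   # reported summary error
```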
We present the mean and 95 % confidence interval for 30 experiment runs with
different initialisation of model parameter space (weights and biases) in all
the experiments. We use a dropout rate of 0.2 for the respective models in all
the experiments.
### 4.2 Prediction performance
We first evaluate the optimal strategy for creating training and testing
datasets. We use static-split of training samples from 15th April 2020 to 15th
May 2021. Our test set features data from 16th May 2021 to 27th September
2021; hence, the training data covers half of the second wave of the cases. In
random-split, we create the train and test sets by randomly shuffling the
dataset with the same size of the dataset as done for the static-split. We
show results for entire case of India, and two leading states of COVID-19
infections, i.e. Maharashtra and Delhi. We investigate the effect of the
univariate and multivariate approaches on the three models, (LSTM, BD-LSTM,
ED-LSTM). Finally, using the best model, we provide a two month outlook for
novel daily cases with a recursive approach, i.e. by feeding back the
predictions into the trained models.
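The two splitting strategies can be sketched as follows; the 70/30 fraction and the seed are illustrative choices, not the paper's settings:

```python
import numpy as np

def split(X, Y, frac=0.7, random=False, seed=0):
    """Static (chronological) vs random (shuffled) split of sample pairs."""
    n = len(X)
    idx = np.arange(n)
    if random:
        np.random.default_rng(seed).shuffle(idx)
    cut = int(frac * n)
    tr, te = idx[:cut], idx[cut:]
    return X[tr], Y[tr], X[te], Y[te]

X = np.arange(20).reshape(10, 2)
Y = np.arange(10)
Xtr_s, Ytr_s, Xte_s, Yte_s = split(X, Y)              # static split
Xtr_r, Ytr_r, Xte_r, Yte_r = split(X, Y, random=True) # random split
```

In the static split the test set is always the most recent samples, whereas the random split mixes samples from across the whole period into both sets.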
Figure 7 shows univariate LSTM, BD-LSTM and ED-LSTM models with a static-split
of train/test dataset. Figure 8 shows univariate random splitting of
train/test datasets using the same models. We observe that the prediction for
the India dataset has a unique trend in which the model improves as the
prediction horizon (steps) increases, when compared to the Maharashtra and
Delhi cases (Panels d and f). The corresponding cases in random-split given in
Figure 8 show a different trend and better accuracy (with lower RMSE) where
ED-LSTM provides the best test accuracy. In general, we find that random-split
with ED-LSTM provides the best performance accuracy for the given univariate
models.
Figure 9 shows results for the multivariate approach, where we use the same
methods used in the univariate approach (LSTM, ED-LSTM, BD-LSTM). We find that
ED-LSTM model gives best performance for the test datasets for all the
respective datasets. Figure 10 shows results for the case of random shuffling
of train/test dataset using the respective methods. We notice that ED-LSTM at
times, provides slightly worse performance when compared to BD-LSTM. Moreover,
the performance is not as good when compared to static-split (Figure 9) and in
general, we find that ED-LSTM with static-split provides the best performance
accuracy.
Tables 7 and 8 provide a summary of results in terms of the test dataset
performance accuracy (RMSE) by the respective models for random-split and
static-split, which have been given in Figures 7, 8, 9 and 10. In the
univariate models, ED-LSTM provides the best performance accuracy across most
of the three different datasets while LSTM performs best only for a single
case (Figure 7, Panel a). In multivariate models, BD-LSTM and ED-LSTM provide
the best performance accuracy for most cases while LSTM performs best only for
a single case (Figure 10, Panel a).
---
(a) India (train and test)
(b) India (test horizon)
(c) Maharashtra (train and test)
(d) Maharashtra (test horizon)
(e) Delhi (train and test)
(f) Delhi (test horizon)
Fig 7: Univariate LSTM, BD-LSTM and ED-LSTM model performance accuracy (RMSE)
with a static-split for the train and test datasets, and test prediction
horizons (steps). The error bars represent the mean and 95 % confidence
interval for 30 experiment runs.
---
(a) India (train and test)
(b) India (test horizon)
(c) Maharashtra (train and test)
(d) Maharashtra (test horizon)
(e) Delhi (train and test)
(f) Delhi (test horizon)
Fig 8: Univariate LSTM, BD-LSTM and ED-LSTM model performance accuracy (RMSE)
with a random-split for the train and test datasets, and the test prediction
horizons (steps). The error bars represent the mean and 95 % confidence
interval for 30 experiment runs.
---
(a) India (train and test)
(b) India (test horizon)
(c) Maharashtra (train and test)
(d) Maharashtra (test horizon)
(e) Delhi (train and test)
(f) Delhi (test horizon)
Fig 9: Multivariate model performance accuracy (RMSE) using LSTM, BD-LSTM and
ED-LSTM with a static-split for the train and test datasets, and test
prediction horizon (steps). The error bars represent the mean and 95 %
confidence interval for 30 experiment runs.
---
(a) India (train and test)
(b) India (test horizon)
(c) Maharashtra (train and test)
(d) Maharashtra (test horizon)
(e) Delhi (train and test)
(f) Delhi (test horizon)
Fig 10: Multivariate model performance accuracy (RMSE) using LSTM, BD-LSTM and
ED-LSTM with a random-split for the train and test datasets, and test
prediction horizons (steps). The error bars represent the mean and 95 %
confidence interval for 30 experiment runs.
Model | India | Delhi | Maharashtra
---|---|---|---
| Random Split | Static Split | Random Split | Static Split | Random Split | Static Split
| RMSE | Std. Dev. | RMSE | Std. Dev. | RMSE | Std. Dev. | RMSE | Std. Dev. | RMSE | Std. Dev. | RMSE | Std. Dev.
LSTM | 19734 | 1703 | 35540 | 6728 | 1568 | 34 | 266 | 30 | 4762 | 195 | 1925 | 160
BD-LSTM | 21058 | 1746 | 37948 | 6301 | 1628 | 75 | 242 | 22 | 4653 | 214 | 1748 | 195
ED-LSTM | 10732 | 1038 | 62595 | 8932 | 825 | 2 | 435 | 185 | 2365 | 146 | 1594 | 543
Table 7: Univariate model performance accuracy on test dataset (RMSE mean and
standard deviation for 30 experimental runs across 4 prediction horizons).
Model | India | Delhi | Maharashtra
---|---|---|---
| Random Split | Static Split | Random Split | Static Split | Random Split | Static Split
| RMSE | Std. Dev. | RMSE | Std. Dev. | RMSE | Std. Dev. | RMSE | Std. Dev. | RMSE | Std. Dev. | RMSE | Std. Dev.
LSTM | 19524 | 496 | 13325 | 2145 | 1413 | 38 | 1693 | 650 | 3981 | 162 | 1983 | 414
BD-LSTM | 18677 | 521 | 9271 | 1438 | 1449 | 43 | 2021 | 259 | 3888 | 108 | 1886 | 389
ED-LSTM | 22274 | 2174 | 15250 | 6356 | 1621 | 163 | 801 | 394 | 4925 | 503 | 2211 | 478
Table 8: Multivariate model prediction accuracy on the test dataset (RMSE mean
and standard deviation for 30 experimental runs across 4 prediction horizons).
---
(a) India LSTM
(b) India BD-LSTM
(c) Maharashtra LSTM
(d) Maharashtra BD-LSTM
(e) Delhi LSTM
(f) Delhi BD-LSTM
Fig 11: Recursive univariate LSTM and BD-LSTM models predictions for next 60
days (October and November 2021). The uncertainty (95 % confidence interval
shaded in green) and mean prediction is shown in solid black line.
Next, we select two univariate recursive models using random-split for the
three datasets to provide a two months outlook for COVID-19 daily infections.
In this approach, we use the predictions using the test dataset and extend it
further for two months (October and November 2021), recursively. Figure 11
presents results for univariate LSTM and BD-LSTM models. The uncertainty (95 %
confidence interval shaded in green) and mean prediction is shown in solid
black line for 30 experiment runs. We notice that there is a trend of general
decline in cases and we also find that the LSTM models well capture the spike
and fall in cases every few days.
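The recursive outlook can be sketched generically; `model` below is a toy persistence stand-in for the trained LSTM, used only to show how predictions are fed back into the input window:

```python
import numpy as np

def recursive_forecast(model, last_window, n_steps, msa=4):
    """Recursive multi-step outlook: feed predictions back as inputs.

    `model` maps a window of D past values to `msa` future values.
    """
    window = list(last_window)
    out = []
    while len(out) < n_steps:
        pred = model(np.array(window))                # msa-step prediction
        out.extend(pred.tolist())
        window = window[len(pred):] + pred.tolist()   # slide the window
    return np.array(out[:n_steps])

persistence = lambda w: np.repeat(w[-1], 4)   # toy model: repeat last value
forecast = recursive_forecast(persistence, [1.0] * 6, n_steps=10)
```

Because each prediction becomes an input for the next step, errors compound, which is why the uncertainty bands in Figure 11 widen with the forecast horizon.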
## 5 Discussion
India was hit by two major COVID-19 peaks, one in May-October 2020 and the
other, more deadly, in April-June 2021. Surprisingly, the
first Indian peak in new cases was reached around the time when the government
began lifting nationwide lockdown and focused more on state-level and hot-spot
based lockdowns [87, 88]; however, there were strict restrictions, such as
maintaining social distance and use of face-masks [89]. The second wave struck
due to multiple factors, including the highly infectious variant of concern
known as the SARS-CoV-2 delta variant [23]. A lack of preparation by the
authorities in setting up temporary hospitals, shortages of resources such as
oxygen, and poor management of lockdowns led to a major rise in cases.
There are a number of challenges in COVID-19 forecasting due to the nature of
the infections, the reporting of cases, and the effect of lockdowns.
Nevertheless, despite these challenges and the limited dataset, we have been
successful in developing LSTM models for forecasting the trend of daily new
cases. Our long-term
forecasts for two months (October and November 2021) show a steady decline in
new cases in India and the respective states. We find that Delhi's two-month
forecasts (Figure 11) show more uncertainty when compared to the Maharashtra
and India datasets. We also notice a similar level of uncertainty in the LSTM
and BD-LSTM models for the India and Delhi datasets. The major reason the
uncertainties differ when comparing Delhi and Maharashtra with the rest of
India is the difference in the trend of cases. In Delhi [15], multiple peaks
were observed since the first wave of infections in 2020, whereas in the India
dataset, there were only two major peaks. In Maharashtra, a minor peak was
observed in November 2020 (Figure 4, Panel a) after the first wave of
infections. Hence, the trends captured in the training datasets of the states
(Maharashtra and Delhi) were relatively different when compared to the India
dataset. This suggests that the predictions for Maharashtra and Delhi are less
certain since they had multiple peaks and outbreaks in the past. The
second wave of infections in 2021 began in Maharashtra [90, 91], which has
been the state with one of the highest numbers of COVID-19 cases and novel
daily infections (Tables 1 - 4) [15].
The model uncertainty increases due to the limitations in the dataset and
models. Our framework has been limited in capturing social-cultural aspects,
population density, and level of lockdowns due to missing information and
data. Moreover, inter-state travels and the chaotic nature of spread of
COVID-19 infections makes it increasingly harder to provide reliable long-term
forecasts. In order to improve forecasting results, the models need to
incorporate more features in the data. The model needs to capture features
such as travel behaviour, level of lockdowns, compliance in masks and other
restrictions, social and cultural lifestyle, local area population density,
work and income thresholds, state-wise vaccination rate, and accessibility to
information.
We take population density into account, as five Indian cities are among the
top 50 most densely populated cities in the world [92], where Mumbai ranks 5th
and Delhi 40th. The impact of COVID-19 on Indian gross domestic product (GDP)
is significant, but not as severe as in some of the developed western nations
[93, 94]. One of the most crucial aspects of managing the spread of COVID-19
infections is the role governments played in closing their international
borders in a timely manner and enforcing lockdowns to various degrees. We note
that different countries have different geographical and population dynamics,
such as population density and culture. It is misleading to compare cities
that differ in population density even when their overall populations are
similar. It is also important to look at cultural factors such as rituals
[95], and the role of nuclear and extended families [96]. In countries such as
India, there is a large portion of inter-state migrant workers [97], and a
large portion of the population lives in rural areas [98] with extended
families. These factors pose further challenges in containing the spread of
COVID-19 infections and are hard to capture in computational and mathematical
models.
In future work, it is important to incorporate robust uncertainty
quantification in data collection and sampling, model training, and model
parameters; hence, a Bayesian deep learning framework for COVID-19 forecasting
would be needed [99, 100, 101, 102]. Moreover, ensemble-based learning methods
can be used to combine the different types of LSTM models used in this study.
We could also develop similar models for the death rate and other trends
related to COVID-19. Furthermore, deep learning models could be used to
jointly model the rise and fall of cases and the effect this has on the
economy.
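The ensemble idea above can be sketched with simple equal-weight forecast averaging. The three forecast arrays below are hypothetical outputs of trained model variants, not results from this study, and the member spread is only a rough (non-Bayesian) indication of model uncertainty:

```python
import numpy as np

# Hypothetical 7-day-ahead forecasts of daily new cases from three
# trained model variants (e.g. LSTM, BD-LSTM, ED-LSTM); illustrative only.
forecast_lstm = np.array([41200, 40110, 39050, 38900, 37800, 36950, 36100])
forecast_bdlstm = np.array([42000, 41300, 40150, 39500, 38600, 37700, 36800])
forecast_edlstm = np.array([40500, 39800, 39200, 38400, 37500, 36600, 35900])

stack = np.stack([forecast_lstm, forecast_bdlstm, forecast_edlstm])

# Equal-weight ensemble mean across members, and the standard deviation
# across members as a crude per-day uncertainty band.
ensemble_mean = stack.mean(axis=0)
ensemble_std = stack.std(axis=0)

for day, (m, s) in enumerate(zip(ensemble_mean, ensemble_std), start=1):
    print(f"day {day}: {m:8.0f} +/- {s:6.0f}")
```

More sophisticated combinations (e.g. weighting members by validation error) follow the same pattern, with the weights replacing the plain mean.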
## 6 Conclusions
We presented a framework for employing LSTM-based models for COVID-19 daily
novel infection forecasting for India. Our research incorporated some of the
latest and most prominent forecasting tools via deep learning, and highlighted
the challenges given limited data and the nature of the spread of infections.
Our results demonstrate the challenge of forecasting from limited data that is
highly biased by the two major peaks of the pandemic in India. We found that
the India and Maharashtra datasets had similar trends in novel cases and model
performance. We evaluated univariate and multivariate LSTM-based models with
different ways of creating training and test data. The LSTM model variants
showed certain strengths and limitations in different scenarios, which made it
difficult to choose a single model. Generally, we found that the univariate
random-split ED-LSTM model provides the best test performance in comparison to
the rest of the models. Hence, data from adjacent states did not have much
effect in the multivariate model, since it could not outperform the univariate
model. The two-month-ahead forecast showed a general decline in new cases;
however, the authorities need to remain vigilant.
## Software and Data
Data and open-source code in Python are available for further analysis:
https://github.com/sydney-machine-learning/LSTM-COVID-19-India.
## References
* 1. A. E. Gorbalenya, S. C. Baker, R. S. Baric, R. J. de Groot, C. Drosten, A. A. Gulyaeva, B. L. Haagmans, C. Lauber, A. M. Leontovich, B. W. Neuman, D. Penzar, L. L. M. Poon, S. Perlman, D. V. Samborskiy, I. A. Sidorov, I. Sola, and J. Ziebuhr, “The species Severe acute respiratory syndrome-related coronavirus: classifying 2019-nCoV and naming it SARS-CoV-2,” _Nature Microbiology_ , vol. 5, no. 4, p. 536, 2020.
* 2. V. Monteil, H. Kwon, P. Prado, A. Hagelkrüys, R. A. Wimmer, M. Stahl, A. Leopoldi, E. Garreta, C. H. Del Pozo, F. Prosper _et al._ , “Inhibition of SARS-CoV-2 infections in engineered human tissues using clinical-grade soluble human ACE2,” _Cell_ , vol. 181, no. 4, pp. 905–913, 2020.
* 3. WHO, “Coronavirus disease 2019 (COVID-19): situation report, 73,” pp. 1–13, 2020. [Online]. Available: https://apps.who.int/iris/handle/10665/331686
* 4. D. Cucinotta and M. Vanelli, “WHO declares COVID-19 a pandemic.” _Acta bio-medica: Atenei Parmensis_ , vol. 91, no. 1, pp. 157–160, 2020.
* 5. K. G. Andersen, A. Rambaut, W. I. Lipkin, E. C. Holmes, and R. F. Garry, “The proximal origin of SARS-CoV-2,” _Nature medicine_ , vol. 26, no. 4, pp. 450–452, 2020.
* 6. A. Atkeson, “What will be the economic impact of COVID-19 in the US? Rough estimates of disease scenarios,” National Bureau of Economic Research, Tech. Rep., 2020.
* 7. N. Fernandes, “Economic effects of coronavirus outbreak (COVID-19) on the world economy,” _Available at SSRN 3557504_ , 2020.
* 8. M. Maliszewska, A. Mattoo, and V. D. M. Dominique, “The potential impact of COVID-19 on GDP and trade: A preliminary assessment,” _The World Bank_ , vol. 9211, pp. 1–26, 2020.
* 9. C. E. Hart, D. J. Hayes, K. L. Jacobs, L. L. Schulz, and J. M. Crespi, “The impact of COVID-19 on Iowa’s corn, soybean, ethanol, pork, and beef sectors,” _Center for Agricultural and Rural Development, Iowa State University. CARD Policy Brief_ , 2020.
* 10. R. Siche, “What is the impact of COVID-19 disease on agriculture?” _Scientia Agropecuaria_ , vol. 11, no. 1, pp. 3–6, 2020.
* 11. A. Liem, C. Wang, Y. Wariyanti, C. A. Latkin, and B. J. Hall, “The neglected health of international migrant workers in the COVID-19 epidemic,” _The Lancet Psychiatry_ , vol. 7, no. 4, p. e20, 2020.
* 12. H. H. P. Kluge, Z. Jakab, J. Bartovic, V. D’Anna, and S. Severoni, “Refugee and migrant health in the COVID-19 response,” _The Lancet_ , vol. 395, no. 10232, pp. 1237–1239, 2020.
* 13. T. Lancet, “India under COVID-19 lockdown,” _Lancet_ , vol. 395, no. 10233, p. 1315, 2020.
* 14. H. Ritchie, E. Mathieu, L. Rodes-Guirao, C. Appel, C. Giattino, E. Ortiz-Ospina, J. Hasell, B. Macdonald, D. Beltekian, and M. Roser, “Coronavirus pandemic (COVID-19),” _Our World in Data_ , 2020, https://ourworldindata.org/coronavirus.
* 15. E. Dong, H. Du, and L. Gardner, “An interactive web-based dashboard to track COVID-19 in real time,” _The Lancet infectious diseases_ , vol. 20, no. 5, pp. 533–534, 2020.
* 16. Johns Hopkins Coronavirus Resource Center (CRC), “COVID-19 Dashboard, Center for Systems Science and Engineering (CSSE), Johns Hopkins University.” [Online]. Available: https://coronavirus.jhu.edu/map.html
* 17. F. Callard and E. Perego, “How and why patients made long covid,” _Social Science & Medicine_, vol. 268, p. 113426, 2021.
* 18. E. Mahase, “Covid-19: What do we know about “long covid”?” _BMJ_ , vol. 370, 2020.
* 19. V. K. Jain, K. P. Iyengar, and R. Vaishya, “Differences between first wave and second wave of COVID-19 in India,” _Diabetes & Metabolic Syndrome_, 2021.
* 20. R. M. Arias Velásquez and J. V. Mejía Lara, “Forecast and evaluation of COVID-19 spreading in USA with reduced-space gaussian process regression,” _Chaos, Solitons & Fractals_, vol. 136, p. 109924, 2020.
* 21. M. Yousaf, S. Zahir, M. Riaz, S. M. Hussain, and K. Shah, “Statistical analysis of forecasting COVID-19 for upcoming month in Pakistan,” _Chaos, Solitons & Fractals_, vol. 138, p. 109926, 2020.
* 22. A. I. Saba and A. H. Elsheikh, “Forecasting the prevalence of COVID-19 outbreak in egypt using nonlinear autoregressive artificial neural networks,” _Process Safety and Environmental Protection_ , vol. 141, pp. 1 – 8, 2020.
* 23. C. Del Rio, P. N. Malani, and S. B. Omer, “Confronting the delta variant of SARS-CoV-2, summer 2021,” _JAMA_ , vol. 326, no. 11, pp. 1001–1002, 2021.
* 24. H. Ren, L. Zhao, A. Zhang, L. Song, Y. Liao, W. Lu, and C. Cui, “Early forecasting of the potential risk zones of COVID-19 in China’s megacities,” _Science of The Total Environment_ , vol. 729, p. 138995, 2020.
* 25. V. Chin, N. I. Samia, R. Marchant, O. Rosen, J. P. Ioannidis, M. A. Tanner, and S. Cripps, “A case study in model failure? COVID-19 daily deaths and ICU bed utilisation predictions in New York state,” _European Journal of Epidemiology_ , vol. 35, no. 8, pp. 733–742, 2020.
* 26. R. G. da Silva, M. H. D. M. Ribeiro, V. C. Mariani, and L. dos Santos Coelho, “Forecasting Brazilian and American COVID-19 cases based on artificial intelligence coupled with climatic exogenous variables,” _Chaos, Solitons & Fractals_, vol. 139, p. 110027, 2020.
* 27. T. Chakraborty and I. Ghosh, “Real-time forecasts and risk assessment of novel coronavirus (COVID-19) cases: A data-driven analysis,” _Chaos, Solitons & Fractals_, vol. 135, p. 109850, 2020.
* 28. M. Maleki, M. R. Mahmoudi, D. Wraith, and K.-H. Pho, “Time series modelling to forecast the confirmed and recovered cases of COVID-19,” _Travel Medicine and Infectious Disease_ , p. 101742, 2020.
* 29. A. S. Fauci, H. C. Lane, and R. R. Redfield, “Covid-19—navigating the uncharted,” _New England Journal of Medicine_ , vol. 382, no. 13, pp. 1268–1269, 2020.
* 30. F. Ye, S. Xu, Z. Rong, R. Xu, X. Liu, P. Deng, H. Liu, and X. Xu, “Delivery of infection from asymptomatic carriers of COVID-19 in a familial cluster,” _International Journal of Infectious Diseases_ , vol. 94, pp. 133–138, 2020.
* 31. Y. Bai, L. Yao, T. Wei, F. Tian, D.-Y. Jin, L. Chen, and M. Wang, “Presumed asymptomatic carrier transmission of COVID-19,” _JAMA_ , vol. 323, no. 14, pp. 1406–1407, 2020.
* 32. R. M. Anderson, H. Heesterbeek, D. Klinkenberg, and T. D. Hollingsworth, “How will country-based mitigation measures influence the course of the COVID-19 epidemic?” _The Lancet_ , vol. 395, no. 10228, pp. 931–934, 2020.
* 33. J. L. Elman and D. Zipser, “Learning the hidden structure of speech,” _The Journal of the Acoustical Society of America_ , vol. 83, no. 4, pp. 1615–1626, 1988.
* 34. P. J. Werbos, “Backpropagation through time: what it does and how to do it,” _Proceedings of the IEEE_ , vol. 78, no. 10, pp. 1550–1560, 1990.
* 35. S. Hochreiter and J. Schmidhuber, “Long short-term memory,” _Neural computation_ , vol. 9, no. 8, pp. 1735–1780, 1997.
* 36. J. Schmidhuber, “Deep learning in neural networks: An overview,” _Neural networks_ , vol. 61, pp. 85–117, 2015.
* 37. J. T. Connor, R. D. Martin, and L. E. Atlas, “Recurrent neural networks and robust time series prediction,” _IEEE transactions on neural networks_ , vol. 5, no. 2, pp. 240–254, 1994.
* 38. C. W. Omlin, K. K. Thornber, and C. L. Giles, “Fuzzy finite state automata can be deterministically encoded into recurrent neural networks,” _IEEE Trans. Fuzzy Syst._ , vol. 6, pp. 76–89, 1998.
* 39. C. W. Omlin and C. L. Giles, “Training second-order recurrent neural networks using hints,” in _Proceedings of the Ninth International Conference on Machine Learning_. Morgan Kaufmann, 1992, pp. 363–368.
* 40. C. L. Giles, C. Omlin, and K. K. Thornber, “Equivalence in knowledge representation: Automata, recurrent neural networks, and dynamical fuzzy systems,” _Proceedings of the IEEE_ , vol. 87, no. 9, pp. 1623–1640, 1999.
* 41. S. Hochreiter, “The vanishing gradient problem during learning recurrent neural nets and problem solutions,” _International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems_ , vol. 6, no. 02, pp. 107–116, 1998.
* 42. Y. Bengio, P. Simard, P. Frasconi _et al._ , “Learning long-term dependencies with gradient descent is difficult,” _IEEE transactions on neural networks_ , vol. 5, no. 2, pp. 157–166, 1994.
* 43. Z. Yang, Z. Zeng, K. Wang, S.-S. Wong, W. Liang, M. Zanin, P. Liu, X. Cao, Z. Gao, Z. Mai _et al._ , “Modified seir and ai prediction of the epidemics trend of COVID-19 in china under public health interventions,” _Journal of Thoracic Disease_ , vol. 12, no. 3, p. 165, 2020.
* 44. V. K. R. Chimmula and L. Zhang, “Time series forecasting of COVID-19 transmission in canada using lstm networks,” _Chaos, Solitons & Fractals_, vol. 135, p. 109864, 2020.
* 45. S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-c. Woo, “Convolutional lstm network: A machine learning approach for precipitation nowcasting,” in _Advances in neural information processing systems_ , 2015, pp. 802–810.
* 46. H.-z. Wang, G.-q. Li, G.-b. Wang, J.-c. Peng, H. Jiang, and Y.-t. Liu, “Deep learning based ensemble approach for probabilistic wind power forecasting,” _Applied energy_ , vol. 188, pp. 56–70, 2017.
* 47. C. Owusu-Fordjour, C. Koomson, and D. Hanson, “The impact of COVID-19 on learning-the perspective of the Ghanaian student,” _European Journal of Education Studies_ , vol. 7, no. 3, pp. 88–101, 2020.
* 48. D. S. W. Ting, L. Carin, V. Dzau, and T. Y. Wong, “Digital technology and COVID-19,” _Nature medicine_ , vol. 26, no. 4, pp. 459–461, 2020.
* 49. N. G. Biavardi, “Being an Italian medical student during the COVID-19 outbreak,” _International Journal of Medical Students_ , vol. 8, no. 1, pp. 49–50, 2020.
* 50. H. Leite, I. R. Hodgkinson, and T. Gruber, “New development:‘Healing at a distance’—telemedicine and COVID-19,” _Public Money & Management_, pp. 1–3, 2020.
* 51. C. Zhou, F. Su, T. Pei, A. Zhang, Y. Du, B. Luo, Z. Cao, J. Wang, W. Yuan, Y. Zhu _et al._ , “COVID-19: Challenges to GIS with big data,” _Geography and Sustainability_ , vol. 1, no. 1, pp. 77–87, 2020.
* 52. M. A. Zambrano-Monserrate, M. A. Ruano, and L. Sanchez-Alcalde, “Indirect effects of COVID-19 on the environment,” _Science of the Total Environment_ , vol. 728, p. 138813, 2020.
* 53. S. Muhammad, X. Long, and M. Salman, “COVID-19 pandemic and environmental pollution: a blessing in disguise?” _Science of The Total Environment_ , vol. 728, p. 138820, 2020.
* 54. A. Kerimray, N. Baimatova, O. P. Ibragimova, B. Bukenov, B. Kenessov, P. Plotitsyn, and F. Karaca, “Assessing air quality changes in large cities during COVID-19 lockdowns: The impacts of traffic-free urban conditions in almaty, kazakhstan,” _Science of The Total Environment_ , vol. 730, p. 139179, 2020.
* 55. M. A. Zambrano-Monserrate, M. A. Ruano, and L. Sanchez-Alcalde, “Indirect effects of COVID-19 on the environment,” _Science of The Total Environment_ , vol. 728, p. 138813, 2020.
* 56. G. A. Millett, A. T. Jones, D. Benkeser, S. Baral, L. Mercer, C. Beyrer, B. Honermann, E. Lankiewicz, L. Mena, J. S. Crowley _et al._ , “Assessing differential impacts of COVID-19 on Black communities,” _Annals of Epidemiology_ , 2020.
* 57. R. P. Rajkumar, “COVID-19 and mental health: A review of the existing literature,” _Asian journal of psychiatry_ , vol. 52, p. 102066, 2020.
* 58. J. Gao, P. Zheng, Y. Jia, H. Chen, Y. Mao, S. Chen, Y. Wang, H. Fu, and J. Dai, “Mental health problems and social media exposure during COVID-19 outbreak,” _PLOS One_ , vol. 15, no. 4, p. e0231924, 2020.
* 59. Y. Huang, Y. Wu, and W. Zhang, “Comprehensive identification and isolation policies have effectively suppressed the spread of COVID-19,” _Chaos, Solitons & Fractals_, vol. 139, p. 110041, 2020.
* 60. S. I. Alzahrani, I. A. Aljamaan, and E. A. Al-Fakih, “Forecasting the spread of the COVID-19 pandemic in saudi arabia using arima prediction model under current public health interventions,” _Journal of Infection and Public Health_ , vol. 13, no. 7, pp. 914 – 919, 2020.
* 61. S. Singh, K. S. Parmar, J. Kumar, and S. J. S. Makkhan, “Development of new hybrid model of discrete wavelet decomposition and autoregressive integrated moving average (ARIMA) models in application to one month forecast the casualties cases of COVID-19,” _Chaos, Solitons & Fractals_, vol. 135, p. 109866, 2020.
* 62. A. Anand, Y. Lamba, and A. Roy, “Forecasting COVID-19 transmission in India using deep learning models,” _Letters in Applied NanoBioScience_ , vol. 10, no. 2, pp. 2044–2055.
* 63. K. R. Bhimala, G. K. PATRA, R. Mopuri, and S. R. Mutheneni, “A deep learning approach for prediction of SARS-CoV-2 cases using the weather factors in India,” _Authorea Preprints_ , 2020. [Online]. Available: https://doi.org/10.22541/au.160275979.91541585/v1
* 64. R. P. Shetty and P. S. Pai, “Forecasting of COVID 19 cases in Karnataka state using artificial neural network (ANN),” _Journal of The Institution of Engineers (India): Series B_ , pp. 1–11, 2021.
* 65. A. Tomar and N. Gupta, “Prediction for the spread of COVID-19 in India and effectiveness of preventive measures,” _Science of The Total Environment_ , vol. 728, p. 138762, 2020.
* 66. A. K. Gupta, V. Singh, P. Mathur, and C. M. Travieso-Gonzalez, “Prediction of COVID-19 pandemic measuring criteria using support vector machine, prophet and linear regression models in Indian scenario,” _Journal of Interdisciplinary Mathematics_ , pp. 1–20, 2020.
* 67. S. Bodapati, H. Bandarupally, and M. Trupthi, “COVID-19 time series forecasting of daily cases, deaths caused and recovered cases using long short term memory networks,” in _2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA)_. IEEE, 2020, pp. 525–530.
* 68. V. Chaurasia and S. Pal, “Application of machine learning time series analysis for prediction COVID-19 pandemic,” _Research on Biomedical Engineering_ , pp. 1–13, 2020.
* 69. G. Battineni, N. Chintalapudi, and F. Amenta, “Forecasting of COVID-19 epidemic size in four high hitting nations (USA, Brazil, India and Russia) by Fb-Prophet machine learning model,” _Applied Computing and Informatics_ , pp. 1–10, 2020.
* 70. P. Nadler, R. Arcucci, and Y. Guo, “A neural sir model for global forecasting,” in _Machine Learning for Health_. PMLR, 2020, pp. 254–266.
* 71. O. Istaiteh, T. Owais, N. Al-Madi, and S. Abu-Soud, “Machine learning approaches for covid-19 forecasting,” in _2020 International Conference on Intelligent Data Science Technologies and Applications (IDSTA)_. IEEE, 2020, pp. 50–57.
* 72. G. Pinter, I. Felde, A. Mosavi, P. Ghamisi, and R. Gloaguen, “COVID-19 pandemic prediction for hungary; a hybrid machine learning approach,” _Mathematics_ , vol. 8, no. 6, 2020. [Online]. Available: https://doi.org/10.3390/math8060890
* 73. F. Takens, “Detecting strange attractors in turbulence,” in _Dynamical Systems and Turbulence, Warwick 1980_ , ser. Lecture Notes in Mathematics, 1981, pp. 366–381.
* 74. C. Frazier and K. Kockelman, “Chaos theory and transportation systems: Instructive example,” _Transportation Research Record: Journal of the Transportation Research Board_ , vol. 20, pp. 9–17, 2004.
* 75. J. L. Elman, “Finding structure in time,” _Cognitive Science_ , vol. 14, pp. 179–211, 1990.
* 76. C. W. Omlin and C. L. Giles, “Constructing deterministic finite-state automata in recurrent neural networks,” _J. ACM_ , vol. 43, no. 6, pp. 937–972, 1996.
* 77. R. Chandra, “Competition and collaboration in cooperative coevolution of Elman recurrent neural networks for time-series prediction,” _Neural Networks and Learning Systems, IEEE Transactions on_ , vol. 26, pp. 3123–3136, 2015.
* 78. S. Hochreiter, “The vanishing gradient problem during learning recurrent neural nets and problem solutions,” _Int. J. Uncertain. Fuzziness Knowl.-Based Syst._ , vol. 6, no. 2, pp. 107–116, 1998.
* 79. M. Schuster and K. Paliwal, “Bidirectional recurrent neural networks,” _Signal Processing, IEEE Transactions on_ , vol. 45, pp. 2673–2681, 1997.
* 80. A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional lstm and other neural network architectures,” _Neural networks : the official journal of the International Neural Network Society_ , vol. 18, pp. 602–10, 07 2005.
* 81. Y. Fan, Y. Qian, F.-L. Xie, and F. K. Soong, “Tts synthesis with bidirectional lstm based recurrent neural networks,” in _INTERSPEECH_ , 2014.
* 82. A. Graves, N. Jaitly, and A.-r. Mohamed, “Hybrid speech recognition with deep bidirectional lstm,” in _2013 IEEE workshop on automatic speech recognition and understanding_. IEEE, 2013, pp. 273–278.
* 83. I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in _Advances in Neural Information Processing Systems 27_ , Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Eds., 2014, pp. 3104–3112.
* 84. “Unique identification authority of India,” May 2020, [Online; accessed 16-July-2020]. [Online]. Available: https://uidai.gov.in/images/state-wise-aadhaar-saturation.pdf
* 85. S. Athreya, N. Gadhiwala, and A. Mishra, “COVID-19 India-Timeline an understanding across states and union territories.” 2020, ongoing Study at http://www.isibang.ac.in/~athreya/incovid19.
* 86. “COVID-19 India data,” 2020, Ministry of Health and Welfare, Government of India: https://www.mohfw.gov.in/.
* 87. R. Debnath and R. Bardhan, “India nudges to contain COVID-19 pandemic: A reactive public policy analysis using machine-learning based topic modelling,” _PLOS One_ , vol. 15, no. 9, pp. 1–25, 09 2020.
* 88. B. Rai, A. Shukla, and L. K. Dwivedi, “Dynamics of COVID-19 in India: A review of different phases of lockdown,” _Population Medicine_ , vol. 2, no. July, 2020.
* 89. S. Rab, M. Javaid, A. Haleem, and R. Vaishya, “Face masks are new normal after COVID-19 pandemic,” _Diabetes & Metabolic Syndrome: Clinical Research & Reviews_, vol. 14, no. 6, pp. 1617–1619, 2020.
* 90. M. Joshi, M. Kumar, V. Srivastava, D. Kumar, D. Rathore, R. Pandit, and C. G. Joshi, “First detection of SARS-CoV-2 Delta variant (B. 1.617. 2) in the wastewater of (Ahmedabad), India,” _medRxiv_ , 2021. [Online]. Available: https://doi.org/10.1101/2021.07.07.21260142
* 91. W. Yang and J. Shaman, “COVID-19 pandemic dynamics in India and impact of the SARS-CoV-2 Delta (B. 1.617. 2) variant,” _medRxiv_ , 2021. [Online]. Available: https://doi.org/10.1101/2021.06.21.21259268
* 92. Wikipedia contributors, “List of cities proper by population density, Wikipedia,” 2020, [Online; accessed 16-July-2020]. [Online]. Available: https://en.wikipedia.org/wiki/List_of_cities_proper_by_population_density
* 93. S. M. Dev, R. Sengupta _et al._ , “COVID-19: impact on the indian economy,” _Indira Gandhi Institute of Development Research, Mumbai Working Papers, April_ , 2020. [Online]. Available: https://ideas.repec.org/p/ind/igiwpp/2020-013.html
* 94. “The World Economic Outlook April 2020: The Great Lockdown, International Monetary Fund,” April 2020, [Online; accessed 16-July-2020]. [Online]. Available: https://www.imf.org/en/Publications/WEO/Issues/2020/04/14/weo-april-2020
* 95. E. Imber-Black, “Rituals in the time of COVID-19: imagination, responsiveness, and the human spirit,” _Family process_ , vol. 59, no. 3, pp. 912–921, 2020.
* 96. J. L. Lebow, “Family in the age of COVID-19,” _Family process_ , vol. 59, no. 2, pp. 309–312, June 2020. [Online]. Available: https://europepmc.org/articles/PMC7273068
* 97. A. Dandekar and R. Ghai, “Migration and reverse migration in the age of covid-19,” _Economic and Political Weekly_ , vol. 55, no. 19, pp. 28–31, 2020.
* 98. A. Kumar, K. R. Nayar, and S. F. Koya, “COVID-19: challenges and its consequences for rural health care in india,” _Public Health in Practice_ , p. 100009, 2020.
* 99. R. M. Neal, “Bayesian learning via stochastic dynamics,” in _Advances in Neural Information Processing Systems 5_. Morgan-Kaufmann, 1993, pp. 475–482.
* 100. J. Pall, R. Chandra, D. Azam, T. Salles, J. M. Webster, R. Scalzo, and S. Cripps, “Bayesreef: A bayesian inference framework for modelling reef growth in response to environmental change and biological dynamics,” _Environmental Modelling & Software_, p. 104610, 2020.
* 101. R. Chandra, K. Jain, R. V. Deo, and S. Cripps, “Langevin-gradient parallel tempering for Bayesian neural learning,” _Neurocomputing_ , vol. 359, pp. 315 – 326, 2019.
* 102. R. Chandra and A. Kapoor, “Bayesian neural multi-source transfer learning,” _Neurocomputing_ , vol. 378, pp. 54 – 64, 2020.
# Forecast for cosmological parameter estimation with gravitational-wave
standard sirens from the LISA-Taiji network
Ling-Feng Wang Department of Physics, College of Sciences, Northeastern
University, Shenyang 110819, China Shang-Jie Jin Department of Physics,
College of Sciences, Northeastern University, Shenyang 110819, China Jing-Fei
Zhang Department of Physics, College of Sciences, Northeastern University,
Shenyang 110819, China Xin Zhang (corresponding author,
<EMAIL_ADDRESS>) Department of Physics, College of Sciences,
Northeastern University, Shenyang 110819, China Key Laboratory of Data
Analytics and Optimization for Smart Industry (Northeastern University),
Ministry of Education, Shenyang 110819, China
###### Abstract
LISA and Taiji are expected to form a space-based gravitational-wave (GW)
detection network in the future. In this work, we make a forecast for the
cosmological parameter estimation with the standard siren observation from the
LISA-Taiji network. We simulate the standard siren data based on a scenario
with a configuration angle of $40^{\circ}$ between LISA and Taiji. Three models
for the population of massive black hole binary (MBHB), i.e., pop III, Q3d,
and Q3nod, are considered to predict the events of MBHB mergers. We find that,
based on the LISA-Taiji network, the number of electromagnetic (EM)
counterparts detected is almost doubled compared with the case of single Taiji
mission. Therefore, the LISA-Taiji network’s standard siren observation could
provide much tighter constraints on cosmological parameters. For example,
solely using the standard sirens from the LISA-Taiji network, the constraint
precision of $H_{0}$ could reach $1.3\%$. Moreover, combined with the CMB
data, the GW-EM observation based on the LISA-Taiji network could also tightly
constrain the equation of state of dark energy, e.g., the constraint precision
of $w$ reaches about $4\%$, which is comparable with the result of CMB+BAO+SN.
It is concluded that the GW standard sirens from the LISA-Taiji network will
become a useful cosmological probe in understanding the nature of dark energy
in the future.
gravitational wave, space-based, network
LISA-Taiji network, gravitational-wave standard siren, cosmological parameter
estimation, massive black hole binary, electromagnetic counterpart
###### pacs:
95.36.+x, 98.80.Es, 98.80.-k, 04.80.Nn, 95.55.Ym
## I Introduction
The precise measurement of the cosmic microwave background (CMB) anisotropies
initiated the era of precision cosmology Spergel _et al._ (2003); Bennett
_et al._ (2003). Constraining the standard cosmological model, i.e., the
$\Lambda$CDM model, with the high-precision CMB observations enables
cosmologists to have a comprehensive understanding of the evolution history of
the universe. However, the accurate measurements also led to some puzzling
issues. For example, there is a 4.4$\sigma$ tension between the $H_{0}$ values
inferred from the CMB observation Aghanim _et al._ (2020) and the distance
ladder measurement Riess _et al._ (2019). Essentially, the $H_{0}$ tension
reflects an inconsistency of measurements between the early universe and the
late universe Verde _et al._ (2019); Riess (2019). In addition to facing the
challenge of the Hubble tension, the $\Lambda$CDM model also has some
theoretical problems, such as the “fine-tuning” and “cosmic coincidence”
problems Weinberg (1989); Sahni and Starobinsky (2000); Bean _et al._ (2005),
which implies that the $\Lambda$CDM model needs to be further adjusted.
Therefore, the current development of cosmology can be divided into two main
aspects: (i) further extending the standard $\Lambda$CDM model Guo _et al._
(2019a); Zhao _et al._ (2020a); Di Valentino _et al._ (2021); Li _et al._
(2019); Vagnozzi (2020); Guo _et al._ (2019b); Li _et al._ (2020a); Feng
_et al._ (2020a); Zhang _et al._ (2020a); Feng _et al._ (2020b); Lin _et
al._ (2020); Guo _et al._ (2020); Hryczuk and Jodłowski (2020); Zhang _et
al._ (2020b); Gao _et al._ (2021), and (ii) developing more low-redshift
observation projects aimed at precisely measuring the late universe Zhang
(2019); Xu and Zhang (2020); Li and Zhang (2020); Zhang _et al._ (2021); Wang
_et al._ (2021). For the second aspect, the gravitational-wave (GW) standard
siren method Schutz (1986); Holz and Hughes (2005) is one of the most
promising options and has been widely discussed Holz and Hughes (2005); Cai
_et al._ (2018a); Di Valentino and Melchiorri (2018); Wei (2018); Zhao _et
al._ (2018); Di Valentino _et al._ (2018); Du _et al._ (2019); Mifsud and
van de Bruck (2019); Wei (2019); Gray _et al._ (2020); Howlett and Davis
(2020); Chassande-Mottin _et al._ (2019); Doctor (2020); Fu _et al._ (2019);
Mukherjee _et al._ (2021); Abbott _et al._ (2021); Palmese _et al._ (2020);
Chen (2020); Wang _et al._ (2020a); Chen _et al._ (2021); Zhao _et al._
(2011); Cai and Yang (2017); Cai _et al._ (2018b); Yang _et al._ (2019a);
Cai and Yang (2018); Wang _et al._ (2018); Zhang _et al._ (2019a); Li _et
al._ (2020b); Zhang _et al._ (2019b, 2020c); Yan _et al._ (2019); Yang _et
al._ (2019b); Jin _et al._ (2020); Qi _et al._ (2021).
The amplitude of a GW generated by the merger of a compact binary encodes the
luminosity distance and chirp mass of the source. The absolute luminosity
distance can be obtained if the amplitude is measured precisely, and the
chirp mass can be inferred from the variation of the GW frequency. The relation
between luminosity distance $d_{\rm L}$ and redshift $z$ can be established,
once the electromagnetic (EM) counterpart of a GW source is detected by
optical observatories. Since the $d_{\rm L}$–$z$ relation is determined by the
expansion history of the universe, cosmological models could be constrained by
this relation. This method is usually referred to as the “standard siren”
method Schutz (1986); Holz and Hughes (2005). GW170817 Abbott _et al._
(2017a, b, c), the first detected binary neutron star (BNS) merger event with
an EM counterpart (GRB 170817A), has provided an independent measurement of
the Hubble constant, giving the result of $H_{0}=70^{+12}_{-8}~{}{\rm
km}~{}{\rm s}^{-1}~{}{\rm Mpc}^{-1}$ Abbott _et al._ (2017d). Recently, the
event ZTF19abanrhr Graham _et al._ (2020) reported by the Zwicky Transient
Facility is regarded as a candidate of the first plausible optical EM
counterpart to the binary black hole (BBH) merger event GW190521 Abbott _et
al._ (2020). Chen _et al._ Chen _et al._ (2020) and Mukherjee _et al._
Mukherjee _et al._ (2020) have given the constraints on the $\Lambda$CDM and
$w$CDM models, assuming ZTF19abanrhr is the actual EM counterpart to GW190521.
Even if BBH merger events are expected to have no EM counterparts, these “dark
sirens” could also be used in cosmological fits using the statistical method
discussed in Refs. Del Pozzo (2012); Chen _et al._ (2018); Fishbach _et al._
(2019); Feeney _et al._ (2019); Ding _et al._ (2019); Soares-Santos _et
al._ (2019); Lagos _et al._ (2019); Yu _et al._ (2020).
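Underlying all of these approaches is the $d_{\rm L}$–$z$ relation described above. For a flat $\Lambda$CDM universe it reads $d_{\rm L}(z)=(1+z)\,(c/H_0)\int_0^z dz^{\prime}/E(z^{\prime})$ with $E(z)=\sqrt{\Omega_m(1+z)^3+1-\Omega_m}$, which can be evaluated with a short numerical sketch. The parameter values below are illustrative, not those adopted in this work:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def luminosity_distance(z, h0=70.0, omega_m=0.3):
    """Luminosity distance in Mpc for a flat LambdaCDM universe.

    Integrates dz'/E(z') with the trapezoidal rule, then applies the
    (1 + z) factor relating comoving and luminosity distance.
    """
    zs = np.linspace(0.0, z, 2000)
    inv_e = 1.0 / np.sqrt(omega_m * (1.0 + zs) ** 3 + (1.0 - omega_m))
    dz = zs[1] - zs[0]
    integral = dz * (0.5 * inv_e[0] + inv_e[1:-1].sum() + 0.5 * inv_e[-1])
    comoving = (C_KM_S / h0) * integral  # comoving distance in Mpc
    return (1.0 + z) * comoving

# A merger at z = 1 with these illustrative parameters:
print(f"d_L(z=1) = {luminosity_distance(1.0):.0f} Mpc")
```

In a cosmological fit this relation is inverted: the GW measurement supplies $d_{\rm L}$, the EM counterpart supplies $z$, and the cosmological parameters inside $E(z)$ are constrained.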
The potential of standard siren method in constraining cosmological parameters
has been forecasted in Refs. Zhao _et al._ (2011); Cai and Yang (2017); Cai
_et al._ (2018b); Yang _et al._ (2019a); Cai and Yang (2018); Wang _et al._
(2018); Zhang _et al._ (2019a); Li _et al._ (2020b); Zhang _et al._ (2019b,
2020c); Yan _et al._ (2019); Yang _et al._ (2019b); Jin _et al._ (2020); Qi
_et al._ (2021), based on future ground-based GW detectors, e.g., Einstein
Telescope Punturo _et al._ (2010) and Cosmic Explorer Abbott _et al._
(2017e). Several mechanisms for producing fast radio bursts (FRBs) by
the mergers of binaries, such as charged black holes or neutron stars, are
proposed in Refs. Totani (2013); Mingarelli _et al._ (2015); Wang _et al._
(2016); Liu _et al._ (2016); Zhang (2016); Yamasaki _et al._ (2018), and
subsequently, GW/FRB association systems as a complementary cosmological probe
are discussed in Refs. Wei _et al._ (2018); Cai _et al._ (2019) (see also
Ref. Zhao _et al._ (2020b)). The GW sources detected by future ground-based
GW detectors are BNSs or stellar-mass BBHs, which are mainly distributed at
$z<3$. In addition, the GWs produced by the massive black hole binaries
(MBHBs) with EM counterparts are also expected to serve as standard sirens
Holz and Hughes (2005). The MBHB standard sirens may be detected at the
redshift up to $z\simeq 10$ in the future Caldwell _et al._ (2019), providing
a promising method of measuring the expansion history of the universe back to
a much earlier time. Space-based GW detection missions have been proposed and
implemented to detect the GWs produced by MBHBs.
The Laser Interferometer Space Antenna (LISA) LIS ; Armano _et al._ (2016);
Amaro-Seoane _et al._ (2017); Armano _et al._ (2018); Abich _et al._
(2019); Speri _et al._ (2021) is a European space-based GW observatory, with
three identical drag-free spacecraft that form an equilateral triangle with
arm length of $2.5\times 10^{6}$ km. Taiji Wu (2018); Ruan _et al._ (2020a);
Hu and Wu (2017) and TianQin Luo _et al._ (2020); Wang _et al._ (2020b); Liu
_et al._ (2020); Milyukov (2020); Mei _et al._ (2020); Fan _et al._ (2020)
are two space-based GW observatories proposed by Chinese researchers. Taiji is
a LISA-like space-based GW observatory proposed by the Chinese Academy of
Sciences, also with a triangle of three satellites but with arm length of
$3\times 10^{6}$ km. Some forecasts for the capability of LISA and Taiji in
cosmological parameter estimation have been discussed in Refs. Tamanini _et
al._ (2016); Belgacem _et al._ (2019); Zhao _et al._ (2020c). Multiple GW
observatories can constitute a network to improve the measurement precision of
source parameters Abbott _et al._ (2017f); Fan _et al._ (2019), by measuring
phase differences and amplitude ratios of GWs in different detectors. LISA’s
orbit is proposed to be at the ecliptic plane behind the Earth with a
$20^{\circ}$ trailing angle, and Taiji is planned to be located in front of
the Earth with a $20^{\circ}$ leading angle, so that LISA and Taiji could form
a space-based network aimed at detecting GW signals in the mHz range (i.e.,
$10^{-4}$ Hz to $10^{-1}$ Hz) Ruan _et al._ (2020b); Wang _et al._ (2020c);
Hu _et al._ (2021); Wang _et al._ (2020d); Omiya and Seto (2020); Orlando
_et al._ (2021); Wang and Han (2021).
Recently, the capability of the LISA-Taiji network to localize GW sources was
discussed in detail in Refs. Ruan _et al._ (2020b); Wang _et al._ (2020c).
Wang _et al._ (2020d) showed that within 5-year operation time,
the LISA-Taiji network is able to constrain the Hubble constant within $1\%$
accuracy via dark sirens. Omiya and Seto (2020) explored the
detectability of vector and scalar polarization modes in a stochastic
gravitational wave background (SGWB) around 1 mHz with the LISA-Taiji network.
Orlando _et al._ (2021) proposed that the chirality of an
isotropic SGWB can be detected by cross-correlating the data streams of LISA
and Taiji. Wang and Han (2021) showed that the LISA-Taiji network
could improve the observations on the anomalous polarization predicted by the
theories beyond general relativity.
In this work, we focus on the LISA-Taiji network’s capability of improving the
constraint accuracies of cosmological parameters. We first use the Fisher
information matrix method to estimate the uncertainty of luminosity distance
$d_{\rm L}$, and then simulate 5-year standard siren data based on the LISA-
Taiji network. Then, we constrain three typical dark-energy cosmological
models, i.e., the $\Lambda$CDM, $w$CDM, and CPL models, using the standard
siren (joint GW-EM detection) mock data. We mainly analyze the improvement on
the constraint accuracy of equation of state (EoS) of dark energy, and show
that the LISA-Taiji network will play an important role in exploring the
properties of dark energy in the future.
The rest of this paper is organized as follows. In Sec. II.1, we describe the
GW waveform and the detector response. In Sec. II.2, we describe the Fisher
matrix analysis for GW parameter estimation. In Sec. II.3, we discuss the
identifications of EM counterparts. In Sec. II.4, we introduce the methods of
simulating the standard siren catalog. In Sec. III, we display the constraint
results and make some discussions. The conclusion is given in Sec. IV. Unless
otherwise specified, we adopt the system of units in which $c=G=1$ throughout
this paper.
## II Simulation of standard siren observation
### II.1 GW waveform and detector response
The GW signal from the inspiral of a non-spinning MBHB can be modeled by the
restricted post-Newtonian (PN) waveform. The GW strain $h(t)$ can be described
by two independent polarizations ${h_{+,\times}}(t)$ in the transverse-
traceless gauge,
$\displaystyle
h(t)={F_{+}}(t;\theta,\phi,\psi){h_{+}}(t)+{F_{\times}}(t;\theta,\phi,\psi){h_{\times}}(t)\,,$
(1)
where $F_{+,\times}$ are antenna pattern functions, $(\theta,\phi)$ denote the
source’s polar angle and azimuthal angle in the ecliptic frame, and $\psi$ is
the polarization angle of GW. We can separate the antenna pattern function
into a polarization angle part and a $D_{+,\times}$ part that describes the
dependence of time,
$\displaystyle F_{+}(t)=$ $\displaystyle\frac{1}{2}\Big{(}{\rm
cos}(2\psi)D_{+}(t)-{\rm sin}(2\psi)D_{\times}(t)\Big{)},$ (2) $\displaystyle
F_{\times}(t)=$ $\displaystyle\frac{1}{2}\Big{(}{\rm sin}(2\psi)D_{+}(t)+{\rm
cos}(2\psi)D_{\times}(t)\Big{)}.$ (3)
For the inspiral process, the specific forms of $D_{+,\times}$ with the low-
frequency approximation are given in Ref. Ruan _et al._ (2020b),
$\displaystyle D_{+}(t)=$ $\displaystyle\frac{\sqrt{3}}{64}\bigg{[}-36{\rm
sin}^{2}\theta\,{\rm sin}\big{(}2\alpha(t)-2\beta\big{)}+\big{(}3+{\rm
cos(2\theta)}\big{)}$ $\displaystyle\times\bigg{(}{\rm
cos}(2\phi)\Big{(}9\sin(2\beta)-{\rm
sin}\big{(}4\alpha(t)-2\beta\big{)}\Big{)}$ $\displaystyle+{\rm
sin}(2\phi)\Big{(}{\rm
cos}\big{(}4\alpha(t)-2\beta\big{)}-9\cos(2\beta)\Big{)}\bigg{)}$
$\displaystyle-4\sqrt{3}{\rm sin}(2\theta)\Big{(}{\rm
sin}\big{(}3\alpha(t)-2\beta-\phi\big{)}-3{\rm sin}\big{(}\alpha(t)$
$\displaystyle-2\beta+\phi\big{)}\Big{)}\bigg{]}\,,$ (4) $\displaystyle
D_{\times}(t)=$ $\displaystyle\frac{1}{16}\bigg{[}\sqrt{3}{\rm
cos}\theta\Big{(}9{\rm cos}(2\phi-2\beta)-{\rm cos}\big{(}4\alpha(t)-2\beta$
$\displaystyle-2\phi\big{)}\Big{)}-6{\rm sin}\theta\Big{(}{\rm
cos}\big{(}3\alpha(t)-2\beta-\phi\big{)}$ $\displaystyle+3{\rm
cos}\big{(}\alpha(t)-2\beta+\phi\big{)}\Big{)}\bigg{]}\,,$ (5)
where $\alpha=2\pi f_{m}t+\kappa$ is the orbital phase of the guiding center,
and $\beta$ is the initial orientation of the constellation. We simply set
$\beta=0$ in our simulation. A triangular GW detector with three arms, such
as LISA or Taiji, is equivalent to two independent
$90^{\circ}$-interferometers (i.e., “L-shaped” interferometers). The second
interferometer is equivalent to the first one rotated by $\pi/4$ radians. The
response functions of the two interferometers are
${F_{+,\times}}(t;\theta,\phi,\psi)$ and
${F_{+,\times}}(t;\theta,\phi-\pi/4,\psi)$ Cutler (1998). Here $\kappa$ is the
initial ecliptic longitude of the guiding center and $f_{m}=1/{\rm yr}$. We
assume that $\kappa=0$ for LISA and $\kappa=40^{\circ}$ for Taiji, so that the
separation angle between LISA and Taiji is $40^{\circ}$.
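For illustration, the low-frequency antenna response of Eqs. (2)–(5) can be evaluated numerically as in the following sketch. The function names and the SI value of $f_m=1/{\rm yr}$ are our choices for the example; the expressions themselves are direct transcriptions of the equations above.

```python
import numpy as np

F_M = 1.0 / 3.156e7  # modulation frequency f_m = 1/yr, expressed in Hz

def d_plus(t, theta, phi, kappa=0.0, beta=0.0):
    """Eq. (4): low-frequency D_+(t) for a LISA-like detector."""
    a = 2.0 * np.pi * F_M * t + kappa  # orbital phase alpha(t) of the guiding center
    return (np.sqrt(3.0) / 64.0) * (
        -36.0 * np.sin(theta) ** 2 * np.sin(2 * a - 2 * beta)
        + (3.0 + np.cos(2 * theta))
        * (np.cos(2 * phi) * (9.0 * np.sin(2 * beta) - np.sin(4 * a - 2 * beta))
           + np.sin(2 * phi) * (np.cos(4 * a - 2 * beta) - 9.0 * np.cos(2 * beta)))
        - 4.0 * np.sqrt(3.0) * np.sin(2 * theta)
        * (np.sin(3 * a - 2 * beta - phi) - 3.0 * np.sin(a - 2 * beta + phi)))

def d_cross(t, theta, phi, kappa=0.0, beta=0.0):
    """Eq. (5): low-frequency D_x(t)."""
    a = 2.0 * np.pi * F_M * t + kappa
    return (1.0 / 16.0) * (
        np.sqrt(3.0) * np.cos(theta)
        * (9.0 * np.cos(2 * phi - 2 * beta) - np.cos(4 * a - 2 * beta - 2 * phi))
        - 6.0 * np.sin(theta)
        * (np.cos(3 * a - 2 * beta - phi) + 3.0 * np.cos(a - 2 * beta + phi)))

def antenna_patterns(t, theta, phi, psi, kappa=0.0, beta=0.0):
    """Eqs. (2)-(3): fold the polarization angle into F_+ and F_x."""
    dp = d_plus(t, theta, phi, kappa, beta)
    dc = d_cross(t, theta, phi, kappa, beta)
    f_plus = 0.5 * (np.cos(2 * psi) * dp - np.sin(2 * psi) * dc)
    f_cross = 0.5 * (np.sin(2 * psi) * dp + np.cos(2 * psi) * dc)
    return f_plus, f_cross
```

Passing $\kappa=0$ or $\kappa=40^{\circ}$ (in radians) reproduces the LISA and Taiji configurations adopted above; the second equivalent interferometer of each mission is obtained by calling the same functions with $\phi\to\phi-\pi/4$.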
For the sake of describing GW signals in Fourier space, the observation
time $t$ in Eqs. (4) and (5) is replaced by Krolak _et al._ (1995); Buonanno
_et al._ (2009)
$t(f)=t_{\rm c}-\frac{5}{256}M_{\rm c}^{-5/3}(\pi f)^{-8/3},$ (6)
where $t_{\rm c}$ is the coalescence time of MBHB. The Fourier transformation
of the strain can be obtained, i.e.,
$\displaystyle\tilde{h}(f)=-\left(\frac{5\pi}{24}\right)^{1/2}M_{\rm
c}^{5/6}\left[\frac{(\pi f)^{-7/6}}{D_{{\rm eff}}}\right]e^{-i\Psi}.$ (7)
The effective luminosity distance, $D_{\rm eff}$, is defined as
$D_{{\rm eff}}=d_{\rm L}\left[F^{2}_{+}\left(\frac{1+{\rm
cos}^{2}\iota}{2}\right)^{2}+F^{2}_{\times}{\rm cos}^{2}\iota\right]^{-1/2},$
(8)
where $d_{\rm L}$ is the luminosity distance to a GW source and $\iota$ is the
inclination angle between the orbital angular momentum and the line of sight.
$\Psi$ can be written to the second PN order as
$\displaystyle\Psi(f;M_{c},\eta)=$ $\displaystyle 2\pi
ft_{0}-2\phi_{0}-\frac{\pi}{4}+\frac{3}{128\eta}\bigg{[}\nu^{-5}+\Big{(}\frac{3715}{756}$
$\displaystyle+\frac{55}{9}\eta\Big{)}\nu^{-3}-16\pi\nu^{-2}+\Big{(}\frac{15293365}{508032}$
$\displaystyle+\frac{27145}{504}\eta+\frac{3085}{72}{\eta}^{2}\Big{)}\nu^{-1}\bigg{]},$
(9) $\displaystyle\nu=$ $\displaystyle\Big{(}\frac{G\pi
M}{c^{3}}f\Big{)}^{1/3},$ (10)
where $t_{0}=t_{c}+\tau(t)$ is the coalescence time at a detector. According
to the forward modeling of LISA described in Ref. Rubbo _et al._ (2004), to
linear order in eccentricity, the time delay $\tau(t)$ and the phase
$\phi_{0}$ take the forms
$\displaystyle\tau(t)=$ $\displaystyle-\frac{R}{c}{\rm sin}\theta\,{\rm
cos}(\alpha-\phi)-\frac{1}{2}e\frac{R}{c}{\rm
sin}\theta\big{[}\cos(2\alpha-\phi-\beta)$
$\displaystyle-3\cos(\phi-\beta)\big{]},$ (11) $\displaystyle
2\phi_{0}=2\phi_{c}-{\rm
arctan}\left(\frac{F_{\times}(\theta,\phi,\iota,\psi;t)}{F_{+}(\theta,\phi,\iota,\psi;t)}\frac{2{\rm
cos}\iota}{1+{\rm cos}^{2}\iota}\right),$ (12)
where $R=1\ {\rm AU}$, and $e$ is the eccentricity of detector’s orbit. In the
observer’s reference frame, $M_{\rm c}=(1+z)\eta^{3/5}M$ is the redshifted
chirp mass. $M={M_{1}}+{M_{2}}$ is the total mass of MBHB with
${M_{1}}>{M_{2}}$, and ${\eta=M_{1}M_{2}/M^{2}}$ is the symmetric mass ratio.
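The frequency-domain waveform ingredients above can be sketched as follows. This is a minimal transcription of Eqs. (6), (7), and (9) with all dimensionful quantities expressed in geometrized (seconds) units; the coalescence time $t_0$ and phase $\phi_0$ are taken as inputs here rather than evaluated from Eqs. (11)–(12).

```python
import numpy as np

G_SI, C_SI, M_SUN = 6.674e-11, 2.998e8, 1.989e30  # SI constants

def t_of_f(f, mc_sec, t_c):
    """Eq. (6): time at GW frequency f, with chirp mass M_c in seconds."""
    return t_c - (5.0 / 256.0) * mc_sec ** (-5.0 / 3.0) * (np.pi * f) ** (-8.0 / 3.0)

def pn_phase(f, m_sec, eta, t0, phi0):
    """Eq. (9): 2PN stationary-phase approximation phase Psi(f)."""
    nu = (np.pi * m_sec * f) ** (1.0 / 3.0)  # Eq. (10) with G = c = 1
    return (2.0 * np.pi * f * t0 - 2.0 * phi0 - np.pi / 4.0
            + (3.0 / (128.0 * eta))
            * (nu ** -5
               + (3715.0 / 756.0 + 55.0 / 9.0 * eta) * nu ** -3
               - 16.0 * np.pi * nu ** -2
               + (15293365.0 / 508032.0 + 27145.0 / 504.0 * eta
                  + 3085.0 / 72.0 * eta ** 2) * nu ** -1))

def htilde_amp(f, mc_sec, d_eff_sec):
    """Amplitude of Eq. (7): |h(f)|, with D_eff also in seconds."""
    return (np.sqrt(5.0 * np.pi / 24.0) * mc_sec ** (5.0 / 6.0)
            * (np.pi * f) ** (-7.0 / 6.0) / d_eff_sec)
```

A solar mass converts to seconds via $M_\odot G/c^3\simeq 4.9\times 10^{-6}$ s, so a $10^{6}M_\odot$ chirp mass is about $4.9$ s in these units.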
### II.2 Fisher matrix analysis for GW parameter estimation
For a network including $N$ independent detectors, the Fisher information
matrix can be written as
$\bm{F}_{ij}=\left(\frac{\partial\bm{h}(f)}{\partial\theta_{i}}\bigg{|}\frac{\partial\bm{h}(f)}{\partial\theta_{j}}\right),$
(13)
with $\bm{h}$ being given by
$\bm{h}(f)=\left[\frac{\tilde{h}_{1}(f)}{\sqrt{S_{\rm
n}(f)}},\frac{\tilde{h}_{2}(f)}{\sqrt{S_{\rm
n}(f)}},\cdots,\frac{\tilde{h}_{N}(f)}{\sqrt{S_{\rm n}(f)}}\right]^{\rm T},$
(14)
where $\theta_{i}$ denotes nine parameters ($d_{L}$, $M_{c}$, $\eta$,
$\theta$, $\phi$, $\iota$, $t_{c}$, $\phi_{c}$, $\psi$) for a GW event. Here,
$S_{\rm n}(f)$ is the noise power spectral density. The specific forms of
$S_{\rm n}(f)$ for Taiji and LISA are obtained from Refs. Ruan _et al._
(2020a); Klein _et al._ (2016). The bracket in Eq. (13) for two functions
$a(t)$ and $b(t)$ is defined as
$(a|b)=4\int_{f_{\rm low}}^{f_{\rm
up}}\frac{\tilde{a}(f)\tilde{b}^{*}(f)+\tilde{a}^{*}(f)\tilde{b}(f)}{2}{\rm
d}f,$ (15)
where “$\sim$” above a function denotes the Fourier transform of the function.
The upper limit of the integral is set to the innermost stable circular orbit
(ISCO) frequency $f_{\rm ISCO}=c^{3}/(6\sqrt{6}\pi GM)$ Feng _et al._ (2019),
and the lower frequency cutoff is set to $f_{\rm low}=10^{-4}\,\rm Hz$. The
signal-to-noise ratio (SNR) of a GW event is given by
$\rho^{2}=(\bm{h}|\bm{h}),$ (16)
and we consider the SNR threshold of 8 in our simulation.
The Fisher matrix of the LISA-Taiji network is the sum of the Fisher matrix of
LISA and the one of Taiji, which can be expressed as
$F_{{\rm network}}=F_{{\rm LISA}}+F_{{\rm Taiji}}.$ (17)
Then, the errors of GW parameters can be estimated by the Fisher information
matrix,
$\Delta\theta_{i}=\sqrt{(F^{-1})_{ii}}.$ (18)
In our analysis, we take into account nine parameters ($d_{L}$, $M_{c}$,
$\eta$, $\theta$, $\phi$, $\iota$, $t_{c}$, $\phi_{c}$, $\psi$) in the Fisher
matrix. The error of luminosity distance, $\Delta d_{\rm L}$, and the angular
resolution, $\Delta\Omega$, could be calculated by the Fisher matrix. Here,
$\Delta\Omega$ is given by
$\Delta\Omega=2\pi|\textrm{sin}\theta|\sqrt{\langle\Delta\theta^{2}\rangle\langle\Delta\phi^{2}\rangle-\langle\Delta\theta\Delta\phi\rangle^{2}}$,
with $\langle\Delta\theta^{2}\rangle$, $\langle\Delta\phi^{2}\rangle$, and
$\langle\Delta\theta\Delta\phi\rangle$ being given by the inverse of the
9-parameter Fisher matrix Zhao and Wen (2018). Improving the angular
resolutions of GW sources is helpful to identify EM counterparts, so
$\Delta\Omega$ is important for determining the number of GW-EM events. In
addition, $\Delta d_{\rm L}$ could directly affect the constraint accuracies
of cosmological parameters. Therefore, before making a cosmological analysis,
it is necessary to study the reductions of $\Delta\Omega$ and $\Delta d_{\rm
L}$ made by the LISA-Taiji network.
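The Fisher-matrix pipeline of Eqs. (13)–(18) can be sketched with a toy two-parameter signal as below. The model `toy_h`, the flat noise PSD, and the finite-difference step are our illustrative assumptions, standing in for the full 9-parameter waveform and the Taiji/LISA sensitivity curves.

```python
import numpy as np

def inner_product(a, b, psd, df):
    """Eq. (15): noise-weighted inner product on a uniform frequency grid."""
    return 4.0 * np.sum((a * np.conj(b)).real / psd) * df

def fisher_matrix(h_model, params, freqs, psd, eps=1e-6):
    """Eq. (13): Fisher matrix via central finite differences of h(f; params)."""
    df = freqs[1] - freqs[0]
    derivs = []
    for i in range(len(params)):
        step = eps * abs(params[i]) if params[i] != 0 else eps
        p_hi, p_lo = list(params), list(params)
        p_hi[i] += step
        p_lo[i] -= step
        derivs.append((h_model(freqs, p_hi) - h_model(freqs, p_lo)) / (2.0 * step))
    n = len(params)
    return np.array([[inner_product(derivs[i], derivs[j], psd, df)
                      for j in range(n)] for i in range(n)])

# Toy two-parameter signal (illustrative only, not the full 9-parameter waveform).
def toy_h(f, p):
    amp, slope = p
    return amp * f ** (-7.0 / 6.0) * np.exp(1j * slope * f ** (-5.0 / 3.0))

freqs = np.linspace(1e-4, 1e-2, 2000)
psd = np.full_like(freqs, 1e-40)            # flat noise PSD stand-in
params = [1e-19, 1e-4]
F_lisa = fisher_matrix(toy_h, params, freqs, psd)
F_taiji = fisher_matrix(toy_h, params, freqs, psd)
F_net = F_lisa + F_taiji                     # Eq. (17): network Fisher matrix
errors = np.sqrt(np.diag(np.linalg.inv(F_net)))  # Eq. (18)
snr = np.sqrt(inner_product(toy_h(freqs, params), toy_h(freqs, params),
                            psd, df=freqs[1] - freqs[0]))  # Eq. (16)
```

Because the network Fisher matrix is the sum of the individual ones, the parameter errors obtained from $F_{\rm network}^{-1}$ are always no larger than those from either detector alone, which is the mechanism behind the reductions shown in Figure 1.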
For the sake of clearly showing the reductions of $\Delta\Omega$ and $\Delta
d_{\rm L}$, we plot $\Delta\Omega$ and $\Delta d_{\rm L}$ as functions of
redshift in Figure 1. In this figure, we simulate 500 GW events to show
statistical distributions of $\Delta\Omega$ and $\Delta d_{\rm L}$. We choose
$\kappa=0$ for LISA and $\kappa=40^{\circ}$ for Taiji. The mass of MBHs, the
sky location ($\theta$, $\phi$), the binary inclination $\iota$, the
polarization angle $\psi$, and the coalescence phase $\phi_{c}$ are randomly
chosen in the ranges of $[10^{4},10^{7}]M_{\odot}$, $[0,\pi]$, $[0,2\pi]$,
$[0,\pi]$, $[0,2\pi]$, and $[0,2\pi]$, respectively. We can clearly see that
the LISA-Taiji network could reduce $\Delta\Omega$ by several orders of
magnitude compared with the single Taiji mission, which implies that the LISA-
Taiji network could greatly improve the capability of locating GW sources, and
thus could increase the detection number of GW-EM events. For the uncertainty
of luminosity distance, $\Delta d_{\rm L}$, it is not reduced as much as
$\Delta\Omega$, but is still reduced by a factor of a few. We have a more
specific analysis on these aspects in Sec. II.4.
Figure 1: The uncertainties of angular resolutions and luminosity distances as
functions of redshift. Here we choose $\kappa=0$ for LISA, and
$\kappa=40^{\circ}$ for Taiji.
Actually, $\Delta d_{\rm L}$ estimated by the Fisher matrix is the
instrumental error, $\sigma_{d_{\rm L}}^{\rm inst}$. The total measurement
error of luminosity distance $\sigma_{d_{\rm L}}$ also consists of the lensing
error, the peculiar velocity error, and the redshift measurement error, which
can be expressed as Wang _et al._ (2020b)
$\displaystyle(\sigma_{d_{\rm L}})^{2}=(\sigma_{d_{\rm L}}^{\rm
inst})^{2}+(\sigma_{d_{\rm L}}^{\rm lens})^{2}+(\sigma_{d_{\rm L}}^{\rm
pv})^{2}+(\sigma_{d_{\rm L}}^{\rm reds})^{2}.$ (19)
The main systematic error caused by weak lensing is adopted from the fitting
formula Tamanini _et al._ (2016),
$\displaystyle\sigma_{d_{\rm L}}^{\rm lens}(z)=d_{\rm L}(z)\times
0.066\bigg{[}\frac{1-(1+z)^{-0.25}}{0.25}\bigg{]}^{1.8},$ (20)
and the error caused by the peculiar velocity of a source should also be
included Kocsis _et al._ (2006),
$\displaystyle\sigma_{d_{\rm L}}^{\rm pv}(z)=d_{\rm
L}(z)\times\bigg{[}1+\frac{c(1+z)^{2}}{H(z)d_{\rm
L}(z)}\bigg{]}\frac{\sqrt{\langle v^{2}\rangle}}{c},$ (21)
where the peculiar velocity $\sqrt{\langle v^{2}\rangle}$ of the source with
respect to the Hubble flow is roughly set to $500\,\mathrm{km\,s^{-1}}$.
The error from the redshift measurement of the EM counterpart could be ignored
if the redshift is measured spectroscopically. But when using photometric
redshift for a distant source, this factor should be taken into account. We
estimate the error on the redshift measurement as
$\displaystyle\sigma_{d_{\rm L}}^{\rm reds}=\frac{\partial d_{\rm L}}{\partial
z}(\Delta z)_{n},$ (22)
with $(\Delta z)_{n}\simeq 0.03(1+z_{n})$ Ilbert _et al._ (2013).
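The error budget of Eqs. (19)–(22) combines in quadrature and can be written as a single helper, sketched below. The derivative $\partial d_{\rm L}/\partial z$ is passed in as an argument, and the `photometric` flag reflects the convention (adopted later in Sec. II.3) of including the redshift term only for photometric measurements.

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def sigma_dl_total(z, d_l, h_z, sigma_inst, dd_l_dz,
                   v_pec=500.0, photometric=False):
    """Eq. (19): total luminosity-distance error, in the same units as d_l.

    z: redshift; d_l: luminosity distance in Mpc; h_z: H(z) in km/s/Mpc;
    sigma_inst: Fisher-matrix instrumental error; dd_l_dz: d(d_L)/dz.
    """
    # Eq. (20): weak-lensing error (fitting formula)
    sig_lens = d_l * 0.066 * ((1.0 - (1.0 + z) ** -0.25) / 0.25) ** 1.8
    # Eq. (21): peculiar-velocity error, with sqrt(<v^2>) = 500 km/s by default
    sig_pv = d_l * (1.0 + C_KMS * (1.0 + z) ** 2 / (h_z * d_l)) * v_pec / C_KMS
    # Eq. (22): photometric-redshift error with (Delta z)_n = 0.03 (1 + z_n)
    sig_red = dd_l_dz * 0.03 * (1.0 + z) if photometric else 0.0
    return np.sqrt(sigma_inst ** 2 + sig_lens ** 2 + sig_pv ** 2 + sig_red ** 2)
```

At high redshift the lensing term typically dominates this budget, which is why reducing $\sigma_{d_{\rm L}}^{\rm inst}$ with the network helps most for nearby, loud events.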
### II.3 Identifications of electromagnetic counterparts
In addition to the luminosity distance obtained from the GW waveform, the
realization of standard siren also requires the redshift information.
Moreover, the number of GW-EM events has a great impact on the estimations of
cosmological parameters, so we give an analysis on the EM counterpart below.
During the merger of an MBHB with external magnetic fields, it is
assumed that EM radiation could be emitted in both the radio and optical
bands Palenzuela _et al._ (2010); O’Shaughnessy _et al._ (2011); Moesta _et
al._ (2012); Kaplan _et al._ (2011); Shi _et al._ (2012); Blandford and
Znajek (1977); Meier (2001); Dotti _et al._ (2012). The radiation in the
radio frequency band can be detected using SKA to identify the host
galaxy of the GW source. Subsequently, the radiation in the optical band can
be measured spectroscopically or photometrically through optical/IR projects,
such as the Vera C. Rubin Observatory Ivezić _et al._ (2019) and the European
Extremely Large Telescope (E-ELT), to obtain the redshift information.
In practice, before the EM projects make observations, it is necessary to
accurately locate the GW source. Because the fields of view of the EM
projects, such as SKA and ELT, are about 10 ${\rm deg}^{2}$, in our
simulation, we choose those GW events with $\Delta\Omega<10~{}{\rm deg}^{2}$.
From our analyses in Sec. II.2, one can see that the most obvious advantage of
a GW detection network over a single detector is that it can greatly improve
the capability of locating GW sources, which is also discussed in Ref. Ruan
_et al._ (2020b). It should be mentioned that we use a conservative scenario
to estimate the location parameter by using only the inspiral phase Tamanini
_et al._ (2016). A more optimistic scenario is to include the merger and
ringdown phases, which could lead to more GW-EM events. We leave this issue
for future research.
After locating the GW source within $10~{}{\rm deg}^{2}$ by the GW detectors,
we need to further uniquely identify the host galaxy by the EM counterpart. Of
course, if the host galaxy cannot be uniquely identified, statistical methods
can also be applied in estimating cosmological parameters. We leave the
relevant discussion to future work. To simulate the EM counterpart, it is
necessary to understand the formation mechanism of MBHB and its external
environment. Regarding these aspects, different theoretical models have been
proposed Klein _et al._ (2016). In this paper, we use an analysis method
similar to that used in Refs. Tamanini _et al._ (2016); Yang (2021). Next, we
briefly introduce this method. More details can be found in Ref. Tamanini _et
al._ (2016).
We first simulate the EM radiation in the radio band, which will be used to
uniquely identify the host galaxy. The total luminosity $L_{\rm radio}$ in the
radio band consists of two parts. One part is the dual jet $L_{\rm flare}$
emitted when a binary is close to merging, which is caused by the twisting of
external magnetic field lines by the rapidly inspiralling MBHB Palenzuela _et
al._ (2010); O’Shaughnessy _et al._ (2011); Moesta _et al._ (2012); Kaplan
_et al._ (2011); Shi _et al._ (2012), being given by
$\displaystyle L_{\rm flare}=\epsilon_{\rm edd}\epsilon_{\rm radio}(v/v_{\rm
max})^{2}q^{2}L_{\rm edd}.$ (23)
The factor $(v/v_{\rm max})^{2}$ describes the luminosity evolution as the
MBHB inspirals ($v_{\rm max}=c/\sqrt{3}$ is the circular speed at the
innermost stable circular orbit for a binary of BHs, and $v$ is the binary’s
coordinate relative circular velocity O’Shaughnessy _et al._ (2011)).
$\epsilon_{\rm radio}$ is the fraction of EM radiations emitted in the radio
band (i.e., a radio-to-bolometric luminosity correction), and is set to a
fiducial value of 0.1. $\epsilon_{\rm edd}$ is the Eddington ratio, which is
calculated according to the formulas shown in Appendix A of Ref. Tamanini _et
al._ (2016). $q=M_{2}/M_{1}\leq 1$ is the binary’s mass ratio. The other part
of $L_{\rm radio}$ is the standard radio jet due to the Blandford-Znajeck
effect Blandford and Znajek (1977); Meier (2001), with luminosity dependent on
the mass accretion rate. Following Ref. Tamanini _et al._ (2016), we use the
jet luminosity
$\displaystyle L_{\rm jet}=\begin{cases}10^{42.7}{\rm erg}~{}{\rm
s}^{-1}(\frac{\alpha}{0.01})^{-0.1}m_{9}^{0.9}(\frac{\dot{m}}{0.1})^{6/5}(1+1.1a_{1}+0.29a_{1}^{2}),~{}{\rm
if}~{}10^{-2}\leq\epsilon_{\rm edd}\leq 0.3,\\\ 10^{45.7}{\rm erg}~{}{\rm
s}^{-1}(\frac{\alpha}{0.3})^{-0.1}m_{9}(\frac{\dot{m}}{0.1})g^{2}(0.55f^{2}+1.5fa_{1}+a_{1}^{2}),~{}{\rm
otherwise.}\end{cases}$ (24)
We assume the Shakura-Sunyev viscosity parameter $\alpha=0.1$; $m_{9}$ is
defined as $m_{9}=M_{1}/(10^{9}M_{\odot})$; $\dot{m}$ is the central accretion
rate, which is calculated from Appendix A of Ref. Tamanini _et al._ (2016);
$a_{1}$ is the spin parameter of the BH with the mass of $M_{1}$; $f$ and $g$
are dimensionless quantities regulating the angular velocity and the azimuthal
magnetic field, respectively, and are set to $f=1$ and $g=2.3$ Meier (2001).
The total luminosity in the radio band is given by $L_{\rm radio}=L_{\rm
flare}+L_{\rm jet}$. For the GW event satisfying $L_{\rm radio}\geq 4\pi
d_{\rm L}^{2}F^{\rm SKA}_{\rm min}$ O’Shaughnessy _et al._ (2011), its EM
radiation in the radio band is expected to be detected by SKA, thus its host
galaxy could be uniquely identified. Here, $F^{\rm SKA}_{\rm min}=\nu_{\rm
SKA}F^{\rm SKA}_{\nu,{\rm min}}$ is the detector’s flux limit, with $\nu_{\rm
SKA}\simeq 1.4~{}{\rm GHz}$ and $F^{\rm SKA}_{\nu,{\rm min}}\simeq 1~{}\mu{\rm
Jy}$. It should be noted that we assume that the radio radiation is isotropic,
according to Ref. Tamanini _et al._ (2016). Actually, the synchrotron
emission within the jet is beamed along the jet, which makes it impossible to
detect events whose jets do not point towards the Earth, thus reducing the
number of GW-EM events. However, collimation also implies a larger flux for a
given source luminosity, which makes some intrinsically fainter sources
observable, thus increasing the number of GW-EM events. Because these two
factors have opposite effects on the number of GW-EM events, and may
counteract each other, we assume that the radiation is isotropic for the
purpose of simplification.
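The flare-luminosity and SKA-detectability criteria above can be sketched as follows. The Eddington-luminosity normalization $1.26\times 10^{38}\,(M/M_\odot)$ erg/s and the cgs conversion $1\,\mu{\rm Jy}=10^{-29}$ erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$ are standard values we assume for the example; $\epsilon_{\rm edd}$ is taken as an input rather than computed from the appendix formulas of Tamanini _et al._ (2016).

```python
import numpy as np

def l_flare(eps_edd, q, m1_msun, v_over_vmax, eps_radio=0.1):
    """Eq. (23): dual-jet flare luminosity in erg/s."""
    l_edd = 1.26e38 * m1_msun  # Eddington luminosity of the primary (assumed)
    return eps_edd * eps_radio * v_over_vmax ** 2 * q ** 2 * l_edd

def ska_detectable(l_radio, d_l_cm, f_min=1.4e9 * 1e-29):
    """SKA criterion: L_radio >= 4 pi d_L^2 F_min.

    F_min = nu_SKA * F_nu,min with nu_SKA = 1.4 GHz and F_nu,min = 1 uJy (cgs).
    """
    return l_radio >= 4.0 * np.pi * d_l_cm ** 2 * f_min
```

In the full simulation the same criterion is applied to $L_{\rm radio}=L_{\rm flare}+L_{\rm jet}$, with $L_{\rm jet}$ from Eq. (24).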
Radio identification alone cannot provide the redshift measurement, so
optical/IR facilities are needed to observe the spectral features to
obtain the redshift. The host galaxy’s luminosity in the $K$-band, $L_{k}$, is
computed by converting the host total stellar mass into luminosity Tamanini
_et al._ (2016). According to the results of Ref. Bruzual and Charlot (2003),
for young stellar populations at moderate redshift, the mass-to-light ratio
$M/L_{k}$ falls in the range $0.01-0.05$. We assume a fiducial $M/L_{k}=0.03$
in the simulation. By converting $L_{k}$ into apparent magnitude $m_{\rm
gal}$, we assume that the redshifts of MBHB merger events that satisfy the
following relationship can be measured by ELT,
$\displaystyle m_{\rm gal}=82.5-\frac{5}{2}{\rm
log}_{10}\left(\frac{L_{k}}{3.02}\frac{{\rm s}}{{\rm erg}}\right)+5{\rm
log}_{10}\left(\frac{d_{\rm L}}{{\rm pc}}\right)\leq m_{{\rm ELT}},$ (25)
with the detection threshold $m_{{\rm ELT}}$ being set to 31.3, which is the
photometric limiting magnitude of ELT corresponding to $J$-band and $H$-band
Davies _et al._ (2010). In principle, the detection threshold should be set
to 30.2, which is the limiting magnitude of the $K$-band, because the host galaxy's
luminosity in $K$-band is used to calculate apparent magnitude. Actually,
MICADO (Multi-AO Imaging Camera for Deep Observations) on ELT will cover the
wavelength range of 1000–2400 nm ($J$-band to $K$-band), so we simply choose
the highest limiting magnitude, 31.3, as the detection threshold. This
simplification may lead to a very small overestimation of the detection
threshold, but will have no obvious effect on the number of GW-EM events
and the cosmological analysis. Based on the above criterion, we can pick out
the GW events whose redshifts can be determined. For the redshift error
$\sigma_{d_{\rm L}}^{\rm reds}$, the authors of Ref. Tamanini _et al._ (2016)
take it into account for the GW-EM events satisfying $27.2<m_{\rm gal}<31.3$,
with 27.2 being the spectroscopy limiting magnitude of ELT. In Ref. Speri _et
al._ (2021), the authors use a simpler method, namely taking into account
the redshift error for all the GW-EM events of $z>2$. Namely, it is assumed
that the redshifts of GW events with $z<2$ are measured by spectroscopy, while
those with $z>2$ are measured by photometry, because the spectroscopic
redshift in the range of $z>2$ is usually unavailable Dahlen _et al._ (2013);
Speri _et al._ (2021). In this work, we adopt the simplified method in Ref.
Speri _et al._ (2021). Actually, one can see from Figure 2 that most GW-EM
events are at $z>2$. Thus, in our analysis, the redshift error is actually
considered for most data points.
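The redshift-measurability criterion of Eq. (25) reduces to a short magnitude check, sketched below. The function names are ours; the constants follow Eq. (25) and the ELT threshold discussed above.

```python
import numpy as np

def apparent_k_magnitude(l_k_erg_s, d_l_pc):
    """Eq. (25): apparent magnitude of the host from its K-band luminosity.

    l_k_erg_s: host K-band luminosity in erg/s; d_l_pc: luminosity distance in pc.
    """
    return 82.5 - 2.5 * np.log10(l_k_erg_s / 3.02) + 5.0 * np.log10(d_l_pc)

def redshift_measurable(l_k_erg_s, d_l_pc, m_elt=31.3):
    """Host redshift obtainable with ELT if m_gal <= the detection threshold."""
    return apparent_k_magnitude(l_k_erg_s, d_l_pc) <= m_elt
```

For instance, a host with stellar mass $10^{10}M_\odot$ and the fiducial $M/L_k=0.03$ has $L_k\simeq 1.3\times 10^{45}$ erg/s and, at $z\sim 3$ ($d_{\rm L}\sim 2.6\times 10^{10}$ pc), an apparent magnitude of about 23, comfortably below the 31.3 threshold.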
### II.4 Standard siren catalog
Based on the methods discussed in the previous sections, we construct the
standard siren catalogs in preparation for cosmological parameter constraints.
In this work, we discuss three population models of MBHB Madau and Rees
(2001); Volonteri _et al._ (2008), i.e., pop III, Q3d, and Q3nod, which were
proposed in Ref. Klein _et al._ (2016) for the simulation of standard sirens,
according to the birth mechanism of MBHs and whether there exists a delay
between the mergers of MBHB and their host galaxies. For the total numbers of
the MBHB merger events within 5 years, we estimate them based on the event
rates given in Table I of Ref. Klein _et al._ (2016). Specifically, the
numbers are 877, 41, and 610 for pop III, Q3d, and Q3nod, respectively. For
the redshift distribution and the mass distribution of MBHBs, we give
numerical fitting formulas of the curves shown in Figure 3 of Ref. Klein _et
al._ (2016). The predicted MBHB merger rates as functions of redshift are
given by
$\displaystyle R(z)_{{\rm pop~{}III}}=$
$\displaystyle\begin{cases}2.11z,&0\leq z\leq 9,\\\ -1.8z+35.2,&9<z\leq 19,\\\
\end{cases}$ (26) $\displaystyle R(z)_{\rm{Q3d}}=$
$\displaystyle\begin{cases}0.43z,&0\leq z\leq 3.5,\\\ -0.18z+2.12,&3.5<z\leq
12,\\\ \end{cases}$ (27) $\displaystyle R(z)_{\rm{Q3nod}}=$
$\displaystyle\begin{cases}1.67z,&0\leq z\leq 6,\\\ -0.69z+14.62,&6<z\leq
19.\end{cases}$ (28)
The predicted MBHB merger rates as functions of the total redshifted mass,
$M_{z}$, are given by
$\displaystyle R(M_{z})_{{\rm pop~{}III}}=$
$\displaystyle\begin{cases}10^{-10.6}\left(\frac{M_{z}}{M_{\odot}}\right)^{3.2},&10^{3}M_{\odot}\leq
M_{z}\leq 10^{4}M_{\odot},\\\
10^{5.32}\left(\frac{M_{z}}{M_{\odot}}\right)^{-0.78},&10^{4}M_{\odot}<M_{z}\leq
10^{8}M_{\odot},\\\ \end{cases}$ (29) $\displaystyle R(M_{z})_{\rm{Q3d}}=$
$\displaystyle\begin{cases}10^{-4.4}\left(\frac{M_{z}}{M_{\odot}}\right)^{{0.81}},&10^{4}M_{\odot}\leq
M_{z}\leq 10^{6.3}M_{\odot},\\\
10^{6.65}\left(\frac{M_{z}}{M_{\odot}}\right)^{-0.94},&10^{6.3}M_{\odot}<M_{z}\leq
10^{8}M_{\odot},\end{cases}$ (30) $\displaystyle R(M_{z})_{\rm{Q3nod}}=$
$\displaystyle\begin{cases}10^{-5.44}\left(\frac{M_{z}}{M_{\odot}}\right)^{1.2},&10^{4}M_{\odot}\leq
M_{z}\leq 10^{6.2}M_{\odot},\\\
10^{9.75}\left(\frac{M_{z}}{M_{\odot}}\right)^{-1.25},&10^{6.2}M_{\odot}<M_{z}\leq
10^{8}M_{\odot},\end{cases}$ (31)
with $M_{z}=M(1+z)$. $R(z)$ and $R(M_{z})$ are in units of ${\rm yr}^{-1}$.
The sky location ($\theta$, $\phi$), the binary inclination $\iota$, the
polarization angle $\psi$, and the coalescence phase $\phi_{c}$ are randomly
chosen in the ranges of $[0,\pi]$, $[0,2\pi]$, $[0,\pi]$, $[0,2\pi]$, and
$[0,2\pi]$, respectively.
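Event redshifts can be drawn from these piecewise-linear rates by simple rejection sampling, sketched below for the pop III model of Eq. (26); the same helper applies to Eqs. (27), (28), and the mass distributions of Eqs. (29)–(31).

```python
import numpy as np

def rate_popIII(z):
    """Eq. (26): pop III merger rate in yr^-1 as a function of redshift."""
    z = np.asarray(z, float)
    return np.where(z <= 9.0, 2.11 * z,
                    np.where(z <= 19.0, -1.8 * z + 35.2, 0.0))

def sample_redshifts(rate_fn, n, z_max, rng=None):
    """Rejection-sample n redshifts with probability proportional to rate_fn."""
    rng = rng or np.random.default_rng(0)
    z_grid = np.linspace(0.0, z_max, 1000)
    r_max = rate_fn(z_grid).max()      # envelope for the rejection step
    out = []
    while len(out) < n:
        z = rng.uniform(0.0, z_max)
        if rng.uniform(0.0, r_max) < rate_fn(z):
            out.append(z)
    return np.array(out)
```

For the pop III rate the sampled distribution peaks near $z=9$, consistent with the shape of Eq. (26).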
Figure 2: The standard siren catalogs simulated from Taiji and the LISA-Taiji
network within 5-year operation time based on the pop III, Q3d, and Q3nod
models.
After selecting the GW events satisfying SNR $>$ 8 and $\Delta\Omega<10~{}{\rm
deg}^{2}$, and further selecting the GW events whose redshifts can be detected
by SKA and ELT, we obtain the useful standard sirens. For each standard siren,
we calculate its luminosity distance $d_{\rm L}$ and the error of luminosity
distance $\sigma_{d_{\rm L}}$. The fiducial values of cosmological parameters
are set to the best-fit values of the Planck 2018 results Aghanim _et al._
(2020). Then, for each MBHB model, we can construct a standard siren catalog
including the redshift $z$, luminosity distance $d_{\rm L}$, and error of
luminosity distance $\sigma_{d_{\rm L}}$ of MBHBs.
We show the simulated standard sirens in Figure 2. Firstly, it is found that
the numbers of standard sirens detected by the LISA-Taiji network are much
larger than those detected by the single Taiji mission, which can also be seen
in Table 1. For each MBHB model, the LISA-Taiji network approximately doubles
the detection number compared to the single Taiji mission. This is due to the
fact that the network could improve SNR and the location accuracy of the GW
event. Secondly, the network can detect the GW events at higher redshifts due
to the improvement of SNR. For example, from the middle panel of Figure 2, we
can see that for the Q3d model, the redshifts of the standard sirens detected
by the network could reach $z\sim 9$. Thirdly, the LISA-Taiji network can
improve the measurement accuracy of luminosity distance to some extent, which
can be seen from the error bars in Figure 2. These improvements make us expect
that the LISA-Taiji network can greatly improve the capability of constraining
cosmological parameters.
Table 1: The numbers of the standard sirens simulated from Taiji and the LISA-Taiji network within 5-year operation time, based on the pop III, Q3d, and Q3nod models of MBHB population.

Model | Taiji | network
---|---|---
pop III | 25 | 50
Q3d | 12 | 20
Q3nod | 24 | 44
## III Cosmological parameter estimation
In this section, we shall report the constraint results of cosmological
parameters. In theory, the luminosity distance $d_{\rm L}$ of a GW source at
redshift $z$ is determined by a specific cosmological model. The $\Lambda$CDM
model [$w(z)=-1$], the $w$CDM model [$w(z)=\rm{constant}$], and the
Chevallier–Polarski–Linder (CPL) model [$w(z)=w_{\rm{0}}+w_{\rm{a}}z/(1+z)$]
Chevallier and Polarski (2001); Linder (2003) are considered in this paper.
For the CMB data, we employ the “Planck distance priors” from the Planck 2018
observation Chen _et al._ (2019). We use $\sigma(\xi)$ and $\varepsilon(\xi)$
to represent the absolute error and the relative error of the parameter $\xi$,
respectively, with $\varepsilon(\xi)$ defined as
$\varepsilon(\xi)=\sigma(\xi)/\xi$.
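The theoretical $d_{\rm L}(z)$ entering this comparison, and a simple chi-squared over the mock catalog, can be sketched as follows. The CPL dark-energy density $\rho_{\rm DE}\propto(1+z)^{3(1+w_0+w_a)}e^{-3w_a z/(1+z)}$ is the standard form implied by $w(z)=w_0+w_a z/(1+z)$; setting $w_a=0$ gives the $w$CDM model and $w_0=-1$, $w_a=0$ gives $\Lambda$CDM. A flat universe and trapezoidal integration are our simplifying choices for the example.

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def e_of_z(z, om, w0=-1.0, wa=0.0):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for the CPL parametrization."""
    rho_de = (1.0 - om) * (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) \
        * np.exp(-3.0 * wa * z / (1.0 + z))
    return np.sqrt(om * (1.0 + z) ** 3 + rho_de)

def d_lum(z, h0, om, w0=-1.0, wa=0.0, n=2000):
    """Luminosity distance in Mpc for a flat universe (trapezoidal rule)."""
    zg = np.linspace(0.0, z, n)
    integrand = 1.0 / e_of_z(zg, om, w0, wa)
    dc = float(np.sum((integrand[:-1] + integrand[1:]) * np.diff(zg)) / 2.0)
    return (1.0 + z) * (C_KMS / h0) * dc

def chi2(catalog, h0, om, w0=-1.0, wa=0.0):
    """Standard-siren chi-squared over (z, d_L, sigma_dL) triples."""
    return sum(((dl - d_lum(z, h0, om, w0, wa)) / sig) ** 2
               for z, dl, sig in catalog)
```

Minimizing this chi-squared (or sampling the corresponding likelihood) over the mock catalogs of Sec. II.4 yields the parameter errors reported in the tables below.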
Table 2: The absolute errors ($1\sigma$) and the relative errors of the cosmological parameters in the $\Lambda$CDM, $w$CDM, and CPL models using the mock data of the LISA-Taiji network. Here, $\sigma(\xi)$ and $\varepsilon(\xi)$ represent the absolute and relative errors of the parameter $\xi$, respectively, with $\varepsilon(\xi)=\sigma(\xi)/\xi$, and $H_{0}$ is in units of ${\rm km\ s^{-1}\ Mpc^{-1}}$.

Error | $\Lambda$CDM (pop III) | $\Lambda$CDM (Q3d) | $\Lambda$CDM (Q3nod) | $w$CDM (pop III) | $w$CDM (Q3d) | $w$CDM (Q3nod) | CPL (pop III) | CPL (Q3d) | CPL (Q3nod)
---|---|---|---|---|---|---|---|---|---
$\sigma(\Omega_{\rm m})$ | $0.025$ | $0.073$ | $0.027$ | $0.036$ | $0.078$ | $0.037$ | $0.054$ | $0.099$ | $0.056$
$\sigma(H_{0})$ | $0.86$ | $3.25$ | $0.94$ | $1.85$ | $10.10$ | $1.75$ | $2.40$ | $10.00$ | $2.10$
$\sigma(w)$ | $-$ | $-$ | $-$ | $0.245$ | $0.735$ | $0.230$ | $-$ | $-$ | $-$
$\sigma(w_{0})$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $0.380$ | $0.965$ | $0.340$
$\sigma(w_{a})$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $1.95$ | $-$ | $2.05$
$\varepsilon(\Omega_{\rm m})$ | $0.078$ | $0.212$ | $0.084$ | $0.113$ | $0.258$ | $0.114$ | $0.165$ | $0.304$ | $0.171$
$\varepsilon(H_{0})$ | $0.013$ | $0.049$ | $0.014$ | $0.027$ | $0.141$ | $0.026$ | $0.036$ | $0.141$ | $0.031$
$\varepsilon(w)$ | $-$ | $-$ | $-$ | $0.229$ | $0.525$ | $0.215$ | $-$ | $-$ | $-$
$\varepsilon(w_{0})$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $0.409$ | $0.832$ | $0.378$
Table 3: The absolute errors ($1\sigma$) and the relative errors of the
cosmological parameters in the $\Lambda$CDM, $w$CDM, and CPL models using the
CMB, Taiji(Q3nod), network(Q3nod), CMB+Taiji(Q3nod), and CMB+network(Q3nod)
data. Here, $\sigma(\xi)$ and $\varepsilon(\xi)$ represent the absolute and
relative errors of the parameter $\xi$, respectively. Note also that
$\varepsilon(\xi)$ is defined as $\varepsilon(\xi)=\sigma(\xi)/\xi$, and
$H_{0}$ is in units of ${\rm km\ s^{-1}\ Mpc^{-1}}$.
Error | $\Lambda$CDM (CMB) | $\Lambda$CDM (Taiji) | $\Lambda$CDM (network) | $\Lambda$CDM (CMB+Taiji) | $\Lambda$CDM (CMB+network) | $w$CDM (CMB) | $w$CDM (Taiji) | $w$CDM (network) | $w$CDM (CMB+Taiji) | $w$CDM (CMB+network) | CPL (CMB) | CPL (Taiji) | CPL (network) | CPL (CMB+Taiji) | CPL (CMB+network)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\sigma(\Omega_{\rm m})$ | $0.009$ | $0.068$ | $0.027$ | $0.008$ | $0.007$ | $0.057$ | $0.087$ | $0.037$ | $0.026$ | $0.010$ | $0.059$ | $0.086$ | $0.056$ | $0.032$ | $0.018$
$\sigma(H_{0})$ | $0.61$ | $2.60$ | $0.94$ | $0.58$ | $0.46$ | $6.15$ | $3.90$ | $1.75$ | $2.80$ | $1.00$ | $6.25$ | $4.05$ | $2.10$ | $3.25$ | $1.90$
$\sigma(w)$ | $-$ | $-$ | $-$ | $-$ | $-$ | $0.215$ | $0.695$ | $0.230$ | $0.097$ | $0.042$ | $-$ | $-$ | $-$ | $-$ | $-$
$\sigma(w_{0})$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $0.575$ | $0.885$ | $0.340$ | $0.510$ | $0.230$
$\sigma(w_{a})$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $2.05$ | $1.80$ | $0.67$
$\varepsilon(\Omega_{\rm m})$ | $0.027$ | $0.201$ | $0.084$ | $0.026$ | $0.020$ | $0.178$ | $0.263$ | $0.114$ | $0.082$ | $0.030$ | $0.183$ | $0.246$ | $0.171$ | $0.097$ | $0.056$
$\varepsilon(H_{0})$ | $0.009$ | $0.039$ | $0.014$ | $0.009$ | $0.007$ | $0.090$ | $0.057$ | $0.026$ | $0.041$ | $0.015$ | $0.092$ | $0.059$ | $0.031$ | $0.049$ | $0.028$
$\varepsilon(w)$ | $-$ | $-$ | $-$ | $-$ | $-$ | $0.211$ | $0.489$ | $0.215$ | $0.097$ | $0.042$ | $-$ | $-$ | $-$ | $-$ | $-$
$\varepsilon(w_{0})$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $0.871$ | $0.651$ | $0.378$ | $0.823$ | $0.242$
Firstly, let us take the simplest $\Lambda$CDM model as an example to discuss
the constraint results for the three MBHB models. From Figure 3, we find that
the pop III model gives the best constraints because it has the largest number
of events among the three MBHB models. The Q3nod model is similar to the pop
III model, while the Q3d model gives the worst constraints because it has the
fewest events. Quantitatively, the pop III model gives the relative
errors $\varepsilon(\Omega_{\rm m})=7.8\%$ and $\varepsilon(H_{0})=1.3\%$, the
Q3nod model gives $\varepsilon(\Omega_{\rm m})=8.4\%$ and
$\varepsilon(H_{0})=1.4\%$, and the Q3d model gives $\varepsilon(\Omega_{\rm
m})=21.2\%$ and $\varepsilon(H_{0})=4.9\%$, as shown in Table 2. Here, we
notice that solely using the standard sirens from the LISA-Taiji network could
provide a tight constraint on $H_{0}$, with a precision close to 1%, which
is the standard of precision cosmology. Compared with the single Taiji
mission, the LISA-Taiji network reduces the absolute errors of $\Omega_{\rm
m}$ and $H_{0}$ by 60.3% and 63.8%, respectively, which can be seen from Table
3. This indicates that the GW detection network could provide a more accurate
measurement of $H_{0}$ than a single detector, which is helpful for solving
the Hubble constant tension.
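The quoted reductions follow directly from the $\sigma$ values in Table 3 via $1-\sigma_{\rm network}/\sigma_{\rm Taiji}$; the following minimal sanity check (the `reduction_pct` helper is ours, not from the paper) reproduces them:

```python
# Sanity check on the quoted error reductions, using the sigma values
# from Table 3 (Lambda-CDM, Q3nod-based mock data).

def reduction_pct(sigma_old, sigma_new):
    """Percentage reduction in absolute error when moving old -> new."""
    return 100.0 * (1.0 - sigma_new / sigma_old)

# sigma(Omega_m): Taiji 0.068 -> LISA-Taiji network 0.027
print(round(reduction_pct(0.068, 0.027), 1))  # 60.3
# sigma(H0): Taiji 2.60 -> LISA-Taiji network 0.94
print(round(reduction_pct(2.60, 0.94), 1))    # 63.8
```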
Now let us have a look at the improvements of the constraints on the EoS
parameter of dark energy. We first discuss the $w$CDM model that has only one
dark-energy EoS parameter. In Figure 4, we show the two-dimensional posterior
contours in the $\Omega_{\rm m}-w$ and $w-H_{0}$ planes using the data of
Taiji, network, CMB, CMB+Taiji, and CMB+network. From the purple contour and
the green contour, we can see that the network can give much better
constraints than the single Taiji mission. The gray contour represents the
constraint from the CMB data, and this contour is almost orthogonal to the
green contour, which indicates that the standard siren data could
significantly break the parameter degeneracies. By comparing the red contour
and the blue contour, we can see that, although both the single Taiji mission
and the network can break degeneracies, the network improves the constraint
accuracy much more. As mentioned above, this is because the LISA-Taiji
network can detect more GW-EM events than Taiji, and can detect the events at
higher redshifts. In addition, the network also improves SNR, thus reducing
the error of $d_{\rm L}$. Concretely, the LISA-Taiji network could reduce the
absolute error of $w$ by 66.9% compared with the single Taiji mission. The
data combination CMB+network could reduce the absolute error of $w$ by 56.7%
compared with CMB+Taiji. It is worth emphasizing that using the CMB+network
data, the constraint precision of $w$ could reach $4.2\%$, which is comparable
with the result of Planck 2018 TT,TE,EE+lowE+lensing+SNe+BAO Aghanim _et al._
(2020). Note also that here we use only the Planck distance priors, but not
the Planck full data of CMB power spectra.
For the CPL model that has two dark-energy EoS parameters $w_{0}$ and $w_{a}$,
we also find that the LISA-Taiji network data give better constraints compared
with the single Taiji mission. The detailed results are shown in Table 3. For
the parameter $w_{0}$, Taiji gives the relative error
$\varepsilon(w_{0})=65.1\%$, and the LISA-Taiji network gives the relative
error $\varepsilon(w_{0})=37.8\%$. Compared with the single Taiji mission, the
LISA-Taiji network can reduce the absolute error of $w_{0}$ by 61.6%. For the
parameter $w_{a}$, Taiji cannot constrain $w_{a}$ well, but the LISA-Taiji
network gives the absolute error $\sigma(w_{a})=2.05$. When the CMB data are
combined, CMB+Taiji gives $\varepsilon(w_{0})=82.3\%$ and
$\sigma(w_{a})=1.80$, and CMB+network gives $\varepsilon(w_{0})=24.2\%$ and
$\sigma(w_{a})=0.67$. Compared with CMB+Taiji, CMB+network could reduce the
absolute errors of $w_{0}$ and $w_{a}$ by 41.5% and 54.9%, respectively. This
indicates that the future LISA-Taiji network combined with the CMB observation
will play an important role in constraining the EoS parameters of dark energy.
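The dark-energy numbers above can be checked the same way from the Table 3 entries; a short sketch follows (the helper is illustrative, and the fiducial $|w|=1$ used for the 4.2% relative error is our assumption, consistent with $\varepsilon(\xi)=\sigma(\xi)/\xi$):

```python
# Reductions in the dark-energy EoS errors, from the Table 3 sigma values.

def reduction_pct(sigma_old, sigma_new):
    """Percentage reduction in absolute error when moving old -> new."""
    return 100.0 * (1.0 - sigma_new / sigma_old)

# wCDM, sigma(w): Taiji 0.695 -> network 0.230
print(round(reduction_pct(0.695, 0.230), 1))  # 66.9
# wCDM, sigma(w): CMB+Taiji 0.097 -> CMB+network 0.042
print(round(reduction_pct(0.097, 0.042), 1))  # 56.7
# CPL, sigma(w0): Taiji 0.885 -> network 0.340
print(round(reduction_pct(0.885, 0.340), 1))  # 61.6
# Relative error of w from CMB+network, assuming a fiducial |w| = 1
print(round(100.0 * 0.042 / 1.0, 1))          # 4.2
```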
Figure 3: The two-dimensional marginalized contours (68.3% and 95.4%
confidence levels) in the $\Omega_{\rm m}$–$H_{0}$ plane considering three MBHB
models for the $\Lambda$CDM model.
Figure 4: The two-dimensional marginalized contours (68.3% and 95.4%
confidence levels) in the $\Omega_{\rm m}-w$ and $w-H_{0}$ planes for the
$w$CDM model. The mock data of Taiji and the LISA-Taiji network are simulated
based on the Q3nod model.
It should be noted that we make some assumptions and approximations in this
work. For the redshift distribution and the mass distribution of MBHBs, we use
the numerical fitting formulas. For the redshift measurement, we assume that
the redshifts of GW events satisfying $z>2$ are measured by photometry, so we
take into account the redshift errors for these events. In addition, we simply
assume that the radio radiation in the EM counterpart is isotropic. We have
discussed the rationale for these assumptions in the previous sections. The
main purpose of this work is to make a preliminary forecast on the capability
of the LISA-Taiji network of improving the estimations of cosmological
parameters, compared with a single GW detector. For this purpose, these
assumptions do not affect our main conclusions. We leave a more detailed
investigation of these issues to future work.
## IV Conclusion
In this work, we forecast the capability of the future space-based GW
detection network to constrain cosmological parameters. We consider a
detection network composed of the European LISA mission and the Chinese Taiji
mission. The configuration angle between the LISA and Taiji is considered to
be $40^{\circ}$. Three models for MBHB, i.e., pop III, Q3d, and Q3nod, are
used to simulate the EM counterpart detection and estimate the number of GW-EM
events. Three typical cosmological models, i.e., the $\Lambda$CDM, $w$CDM, and
CPL models, are chosen as representatives.
We find that the LISA-Taiji network could significantly improve the constraint
accuracies of cosmological parameters compared with the single Taiji mission.
This is mainly due to three aspects: (i) the LISA-Taiji network could increase
the number of the GW-EM events; (ii) the LISA-Taiji network could detect the
GW events at higher redshift; and (iii) the LISA-Taiji network could reduce
the error of luminosity distance. Taking the Q3nod model as an example, the
LISA-Taiji network increases the number of GW-EM events from 24 to 44, compared
with the single Taiji mission. The redshifts of GW events detected by the
LISA-Taiji network can reach $z\sim 8$. The errors of luminosity distances are also
reduced to some extent, which can be seen from the error bars in Figure 2.
For the simplest $\Lambda$CDM model, we find that the pop III model gives the
best constraints because it has the largest number of events among the three
MBHB models. The Q3nod model is similar to pop III, while the Q3d model gives
the worst constraints because it has the fewest events. The pop III model gives the
relative errors $\varepsilon(\Omega_{\rm m})=7.8\%$ and
$\varepsilon(H_{0})=1.3\%$; the Q3nod model gives $\varepsilon(\Omega_{\rm
m})=8.4\%$ and $\varepsilon(H_{0})=1.4\%$; the Q3d model gives
$\varepsilon(\Omega_{\rm m})=21.2\%$ and $\varepsilon(H_{0})=4.9\%$. So, we
find that solely using the standard sirens from the LISA-Taiji network, the
constraint precision of $H_{0}$ could reach about 1%, which is the standard of
precision cosmology. In addition, compared with the single Taiji mission, the
LISA-Taiji network could reduce the relative error of $H_{0}$ by 63.8%. This
indicates that the LISA-Taiji network is helpful in addressing the $H_{0}$
tension.
For the dark-energy EoS parameter, we first discuss it in the $w$CDM model.
Taking the Q3nod model as an example, we find that the LISA-Taiji network
reduces the absolute error of $w$ by 66.9% compared with the single Taiji
mission. We also see that the LISA-Taiji network improves the capability of
breaking the parameter degeneracies inherent in the CMB data. Specifically,
the CMB+network data reduces the absolute error of $w$ by 56.7% compared with
CMB+Taiji. The constraint precision of $w$ could reach $4.2\%$ using the
CMB+network data, which is comparable with the result of Planck 2018
TT,TE,EE+lowE+lensing+SNe+BAO. For the CPL model that has two dark-energy EoS
parameters $w_{0}$ and $w_{a}$, the LISA-Taiji network can reduce the absolute
error of $w_{0}$ by 61.6%, compared with the single Taiji mission. For the
parameter $w_{a}$, Taiji cannot constrain $w_{a}$ well, but the LISA-Taiji
network can give the absolute error $\sigma(w_{a})=2.05$. Compared with
CMB+Taiji, CMB+network could reduce the absolute errors of $w_{0}$ and $w_{a}$
by 41.5% and 54.9%, respectively. This indicates that the future LISA-Taiji
network combined with the CMB observation will play an important role in
constraining the EoS parameters of dark energy.
In the next few decades, GW detectors are expected to form powerful detection
networks. This allows researchers to use multiple detectors to jointly detect
a GW source, thereby improving the measurement accuracies of source
parameters. At the same time, the detections of different GW sources in
multiple GW frequency bands can help us test the theory of gravity, and can
also lead to a more comprehensive understanding of the properties of compact
objects and the evolution history of the universe. We expect that the LISA-
Taiji network could play an important role in these aspects, and we will
further discuss them in depth in our future work.
###### Acknowledgements.
We are very grateful to Antoine Klein, Alberto Mangiagli, Alberto Sesana, and
Nicola Tamanini for fruitful discussions on the identifications of EM
counterparts, and also grateful to Tao Yang, Wen-Hong Ruan, and Ze-Wei Zhao
for discussions on the detections of GW sources. This work was supported by
the National Natural Science Foundation of China (Grants Nos. 11975072,
11835009, 11875102, and 11690021), the Liaoning Revitalization Talents Program
(Grant No. XLYC1905011), the Fundamental Research Funds for the Central
Universities (Grant No. N2005030), and the National Program for Support of
Top-Notch Young Professionals (Grant No. W02070050).
## References
* Spergel _et al._ (2003) D. N. Spergel _et al._ (WMAP), Astrophys. J. Suppl. 148, 175 (2003), arXiv:astro-ph/0302209 .
* Bennett _et al._ (2003) C. L. Bennett _et al._ (WMAP), Astrophys. J. Suppl. 148, 1 (2003), arXiv:astro-ph/0302207 .
* Aghanim _et al._ (2020) N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A6 (2020), arXiv:1807.06209 [astro-ph.CO] .
* Riess _et al._ (2019) A. G. Riess, S. Casertano, W. Yuan, L. M. Macri, and D. Scolnic, Astrophys. J. 876, 85 (2019), arXiv:1903.07603 [astro-ph.CO] .
* Verde _et al._ (2019) L. Verde, T. Treu, and A. G. Riess, Nature Astron. 3, 891 (2019), arXiv:1907.10625 [astro-ph.CO] .
* Riess (2019) A. G. Riess, Nature Rev. Phys. 2, 10 (2019), arXiv:2001.03624 [astro-ph.CO] .
* Weinberg (1989) S. Weinberg, Rev. Mod. Phys. 61, 1 (1989).
* Sahni and Starobinsky (2000) V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D 9, 373 (2000), arXiv:astro-ph/9904398 .
* Bean _et al._ (2005) R. Bean, S. M. Carroll, and M. Trodden, (2005), arXiv:astro-ph/0510059 .
* Guo _et al._ (2019a) R.-Y. Guo, J.-F. Zhang, and X. Zhang, JCAP 02, 054 (2019a), arXiv:1809.02340 [astro-ph.CO] .
* Zhao _et al._ (2020a) M. Zhao, R. Guo, D. He, J. Zhang, and X. Zhang, Sci. China Phys. Mech. Astron. 63, 230412 (2020a), arXiv:1810.11658 [astro-ph.CO] .
* Di Valentino _et al._ (2021) E. Di Valentino, O. Mena, S. Pan, L. Visinelli, W. Yang, A. Melchiorri, D. F. Mota, A. G. Riess, and J. Silk, (2021), 10.1088/1361-6382/ac086d, arXiv:2103.01183 [astro-ph.CO] .
* Li _et al._ (2019) H.-L. Li, L. Feng, J.-F. Zhang, and X. Zhang, Sci. China Phys. Mech. Astron. 62, 120411 (2019), arXiv:1812.00319 [astro-ph.CO] .
* Vagnozzi (2020) S. Vagnozzi, Phys. Rev. D 102, 023518 (2020), arXiv:1907.07569 [astro-ph.CO] .
* Guo _et al._ (2019b) R.-Y. Guo, L. Zhang, J.-F. Zhang, and X. Zhang, Sci. China Phys. Mech. Astron. 62, 30411 (2019b), arXiv:1801.02187 [astro-ph.CO] .
* Li _et al._ (2020a) H.-L. Li, J.-F. Zhang, and X. Zhang, Commun. Theor. Phys. 72, 125401 (2020a), arXiv:2005.12041 [astro-ph.CO] .
* Feng _et al._ (2020a) L. Feng, D.-Z. He, H.-L. Li, J.-F. Zhang, and X. Zhang, Sci. China Phys. Mech. Astron. 63, 290404 (2020a), arXiv:1910.03872 [astro-ph.CO] .
* Zhang _et al._ (2020a) M. Zhang, J.-F. Zhang, and X. Zhang, Commun. Theor. Phys. 72, 125402 (2020a), arXiv:2005.04647 [astro-ph.CO] .
* Feng _et al._ (2020b) L. Feng, H.-L. Li, J.-F. Zhang, and X. Zhang, Sci. China Phys. Mech. Astron. 63, 220401 (2020b), arXiv:1903.08848 [astro-ph.CO] .
* Lin _et al._ (2020) M.-X. Lin, W. Hu, and M. Raveri, Phys. Rev. D 102, 123523 (2020), arXiv:2009.08974 [astro-ph.CO] .
* Guo _et al._ (2020) R.-Y. Guo, J.-F. Zhang, and X. Zhang, Sci. China Phys. Mech. Astron. 63, 290406 (2020), arXiv:1910.13944 [astro-ph.CO] .
* Hryczuk and Jodłowski (2020) A. Hryczuk and K. Jodłowski, Phys. Rev. D 102, 043024 (2020), arXiv:2006.16139 [hep-ph] .
* Zhang _et al._ (2020b) J.-F. Zhang, B. Wang, and X. Zhang, Sci. China Phys. Mech. Astron. 63, 280411 (2020b), arXiv:1907.00179 [astro-ph.CO] .
* Gao _et al._ (2021) L.-Y. Gao, S.-S. Xue, and X. Zhang, (2021), arXiv:2101.10714 [astro-ph.CO] .
* Zhang (2019) X. Zhang, Sci. China Phys. Mech. Astron. 62, 110431 (2019), arXiv:1905.11122 [astro-ph.CO] .
* Xu and Zhang (2020) Y. Xu and X. Zhang, Sci. China Phys. Mech. Astron. 63, 270431 (2020), arXiv:2002.00572 [astro-ph.CO] .
* Li and Zhang (2020) H. Li and X. Zhang, Sci. Bull. 65, 1419 (2020), arXiv:2005.10458 [astro-ph.CO] .
* Zhang _et al._ (2021) M. Zhang, B. Wang, J.-Z. Qi, Y. Xu, J.-F. Zhang, and X. Zhang, (2021), arXiv:2102.03979 [astro-ph.CO] .
* Wang _et al._ (2021) L.-F. Wang, D.-Z. He, J.-F. Zhang, and X. Zhang, (2021), arXiv:2102.09331 [astro-ph.CO] .
* Schutz (1986) B. F. Schutz, Nature 323, 310 (1986).
* Holz and Hughes (2005) D. E. Holz and S. A. Hughes, Astrophys. J. 629, 15 (2005), arXiv:astro-ph/0504616 .
* Cai _et al._ (2018a) R.-G. Cai, T.-B. Liu, and S.-J. Wang, Phys. Rev. D 97, 023027 (2018a), arXiv:1710.02425 [hep-ph] .
* Di Valentino and Melchiorri (2018) E. Di Valentino and A. Melchiorri, Phys. Rev. D 97, 041301 (2018), arXiv:1710.06370 [astro-ph.CO] .
* Wei (2018) J.-J. Wei, Astrophys. J. 868, 29 (2018), arXiv:1806.09781 [astro-ph.CO] .
* Zhao _et al._ (2018) W. Zhao, B. S. Wright, and B. Li, JCAP 10, 052 (2018), arXiv:1804.03066 [astro-ph.CO] .
* Di Valentino _et al._ (2018) E. Di Valentino, D. E. Holz, A. Melchiorri, and F. Renzi, Phys. Rev. D 98, 083523 (2018), arXiv:1806.07463 [astro-ph.CO] .
* Du _et al._ (2019) M. Du, W. Yang, L. Xu, S. Pan, and D. F. Mota, Phys. Rev. D 100, 043535 (2019), arXiv:1812.01440 [astro-ph.CO] .
* Mifsud and van de Bruck (2019) J. Mifsud and C. van de Bruck, Mon. Not. Roy. Astron. Soc. 487, 900 (2019).
* Wei (2019) J.-J. Wei, Astrophys. J. 876, 66 (2019), arXiv:1902.00223 [astro-ph.CO] .
* Gray _et al._ (2020) R. Gray _et al._ , Phys. Rev. D 101, 122001 (2020), arXiv:1908.06050 [gr-qc] .
* Howlett and Davis (2020) C. Howlett and T. M. Davis, Mon. Not. Roy. Astron. Soc. 492, 3803 (2020), arXiv:1909.00587 [astro-ph.CO] .
* Chassande-Mottin _et al._ (2019) E. Chassande-Mottin, K. Leyde, S. Mastrogiovanni, and D. Steer, Phys. Rev. D 100, 083514 (2019), arXiv:1906.02670 [astro-ph.CO] .
* Doctor (2020) Z. Doctor, Astrophys. J. Lett. 892, L16 (2020), arXiv:1912.12218 [astro-ph.HE] .
* Fu _et al._ (2019) X. Fu, L. Zhou, and J. Chen, Phys. Rev. D 99, 083523 (2019), arXiv:1903.09913 [gr-qc] .
* Mukherjee _et al._ (2021) S. Mukherjee, G. Lavaux, F. R. Bouchet, J. Jasche, B. D. Wandelt, S. M. Nissanke, F. Leclercq, and K. Hotokezaka, Astron. Astrophys. 646, A65 (2021), arXiv:1909.08627 [astro-ph.CO] .
* Abbott _et al._ (2021) B. P. Abbott _et al._ (LIGO Scientific, Virgo), Astrophys. J. 909, 218 (2021), arXiv:1908.06060 [astro-ph.CO] .
* Palmese _et al._ (2020) A. Palmese _et al._ (DES), Astrophys. J. Lett. 900, L33 (2020), arXiv:2006.14961 [astro-ph.CO] .
* Chen (2020) H.-Y. Chen, Phys. Rev. Lett. 125, 201301 (2020), arXiv:2006.02779 [astro-ph.HE] .
* Wang _et al._ (2020a) B. Wang, Z. Zhu, A. Li, and W. Zhao, Astrophys. J. Suppl. 250, 6 (2020a), arXiv:2005.12875 [gr-qc] .
* Chen _et al._ (2021) H.-Y. Chen, P. S. Cowperthwaite, B. D. Metzger, and E. Berger, Astrophys. J. Lett. 908, L4 (2021), arXiv:2011.01211 [astro-ph.CO] .
* Zhao _et al._ (2011) W. Zhao, C. Van Den Broeck, D. Baskaran, and T. Li, Phys. Rev. D 83, 023005 (2011), arXiv:1009.0206 [astro-ph.CO] .
* Cai and Yang (2017) R.-G. Cai and T. Yang, Phys. Rev. D 95, 044024 (2017), arXiv:1608.08008 [astro-ph.CO] .
* Cai _et al._ (2018b) R.-G. Cai, T.-B. Liu, X.-W. Liu, S.-J. Wang, and T. Yang, Phys. Rev. D 97, 103005 (2018b), arXiv:1712.00952 [astro-ph.CO] .
* Yang _et al._ (2019a) T. Yang, R. Holanda, and B. Hu, Astropart. Phys. 108, 57 (2019a), arXiv:1710.10929 [astro-ph.CO] .
* Cai and Yang (2018) R.-G. Cai and T. Yang, EPJ Web Conf. 168, 01008 (2018), arXiv:1709.00837 [astro-ph.CO] .
* Wang _et al._ (2018) L.-F. Wang, X.-N. Zhang, J.-F. Zhang, and X. Zhang, Phys. Lett. B 782, 87 (2018), arXiv:1802.04720 [astro-ph.CO] .
* Zhang _et al._ (2019a) X.-N. Zhang, L.-F. Wang, J.-F. Zhang, and X. Zhang, Phys. Rev. D 99, 063510 (2019a), arXiv:1804.08379 [astro-ph.CO] .
* Li _et al._ (2020b) H.-L. Li, D.-Z. He, J.-F. Zhang, and X. Zhang, JCAP 06, 038 (2020b), arXiv:1908.03098 [astro-ph.CO] .
* Zhang _et al._ (2019b) J.-F. Zhang, M. Zhang, S.-J. Jin, J.-Z. Qi, and X. Zhang, JCAP 09, 068 (2019b), arXiv:1907.03238 [astro-ph.CO] .
* Zhang _et al._ (2020c) J.-F. Zhang, H.-Y. Dong, J.-Z. Qi, and X. Zhang, Eur. Phys. J. C 80, 217 (2020c), arXiv:1906.07504 [astro-ph.CO] .
* Yan _et al._ (2019) C. Yan, W. Zhao, and Y. Lu, (2019), 10.3847/1538-4357/ab60a6, arXiv:1912.04103 [astro-ph.GA] .
* Yang _et al._ (2019b) W. Yang, S. Vagnozzi, E. Di Valentino, R. C. Nunes, S. Pan, and D. F. Mota, JCAP 07, 037 (2019b), arXiv:1905.08286 [astro-ph.CO] .
* Jin _et al._ (2020) S.-J. Jin, D.-Z. He, Y. Xu, J.-F. Zhang, and X. Zhang, JCAP 03, 051 (2020), arXiv:2001.05393 [astro-ph.CO] .
* Qi _et al._ (2021) J.-Z. Qi, S.-J. Jin, X.-L. Fan, J.-F. Zhang, and X. Zhang, (2021), arXiv:2102.01292 [astro-ph.CO] .
* Abbott _et al._ (2017a) B. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 119, 161101 (2017a), arXiv:1710.05832 [gr-qc] .
* Abbott _et al._ (2017b) B. Abbott _et al._ (LIGO Scientific, Virgo, Fermi GBM, INTEGRAL, IceCube, AstroSat Cadmium Zinc Telluride Imager Team, IPN, Insight-Hxmt, ANTARES, Swift, AGILE Team, 1M2H Team, Dark Energy Camera GW-EM, DES, DLT40, GRAWITA, Fermi-LAT, ATCA, ASKAP, Las Cumbres Observatory Group, OzGrav, DWF (Deeper Wider Faster Program), AST3, CAASTRO, VINROUGE, MASTER, J-GEM, GROWTH, JAGWAR, CaltechNRAO, TTU-NRAO, NuSTAR, Pan-STARRS, MAXI Team, TZAC Consortium, KU, Nordic Optical Telescope, ePESSTO, GROND, Texas Tech University, SALT Group, TOROS, BOOTES, MWA, CALET, IKI-GW Follow-up, H.E.S.S., LOFAR, LWA, HAWC, Pierre Auger, ALMA, Euro VLBI Team, Pi of Sky, Chandra Team at McGill University, DFN, ATLAS Telescopes, High Time Resolution Universe Survey, RIMAS, RATIR, SKA South Africa/MeerKAT), Astrophys. J. Lett. 848, L12 (2017b), arXiv:1710.05833 [astro-ph.HE] .
* Abbott _et al._ (2017c) B. Abbott _et al._ (LIGO Scientific, Virgo, Fermi-GBM, INTEGRAL), Astrophys. J. Lett. 848, L13 (2017c), arXiv:1710.05834 [astro-ph.HE] .
* Abbott _et al._ (2017d) B. Abbott _et al._ (LIGO Scientific, Virgo, 1M2H, Dark Energy Camera GW-E, DES, DLT40, Las Cumbres Observatory, VINROUGE, MASTER), Nature 551, 85 (2017d), arXiv:1710.05835 [astro-ph.CO] .
* Graham _et al._ (2020) M. Graham _et al._ , Phys. Rev. Lett. 124, 251102 (2020), arXiv:2006.14122 [astro-ph.HE] .
* Abbott _et al._ (2020) R. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 125, 101102 (2020), arXiv:2009.01075 [gr-qc] .
* Chen _et al._ (2020) H.-Y. Chen, C.-J. Haster, S. Vitale, W. M. Farr, and M. Isi, (2020), arXiv:2009.14057 [astro-ph.CO] .
* Mukherjee _et al._ (2020) S. Mukherjee, A. Ghosh, M. J. Graham, C. Karathanasis, M. M. Kasliwal, I. Magaña Hernandez, S. M. Nissanke, A. Silvestri, and B. D. Wandelt, (2020), arXiv:2009.14199 [astro-ph.CO] .
* Del Pozzo (2012) W. Del Pozzo, Phys. Rev. D 86, 043011 (2012), arXiv:1108.1317 [astro-ph.CO] .
* Chen _et al._ (2018) H.-Y. Chen, M. Fishbach, and D. E. Holz, Nature 562, 545 (2018), arXiv:1712.06531 [astro-ph.CO] .
* Fishbach _et al._ (2019) M. Fishbach _et al._ (LIGO Scientific, Virgo), Astrophys. J. Lett. 871, L13 (2019), arXiv:1807.05667 [astro-ph.CO] .
* Feeney _et al._ (2019) S. M. Feeney, H. V. Peiris, A. R. Williamson, S. M. Nissanke, D. J. Mortlock, J. Alsing, and D. Scolnic, Phys. Rev. Lett. 122, 061105 (2019), arXiv:1802.03404 [astro-ph.CO] .
* Ding _et al._ (2019) X. Ding, M. Biesiada, X. Zheng, K. Liao, Z. Li, and Z.-H. Zhu, JCAP 04, 033 (2019), arXiv:1801.05073 [astro-ph.CO] .
* Soares-Santos _et al._ (2019) M. Soares-Santos _et al._ (DES, LIGO Scientific, Virgo), Astrophys. J. Lett. 876, L7 (2019), arXiv:1901.01540 [astro-ph.CO] .
* Lagos _et al._ (2019) M. Lagos, M. Fishbach, P. Landry, and D. E. Holz, Phys. Rev. D 99, 083504 (2019), arXiv:1901.03321 [astro-ph.CO] .
* Yu _et al._ (2020) J. Yu, Y. Wang, W. Zhao, and Y. Lu, Mon. Not. Roy. Astron. Soc. 498, 1786 (2020), arXiv:2003.06586 [astro-ph.CO] .
* Punturo _et al._ (2010) M. Punturo _et al._ , Class. Quant. Grav. 27, 194002 (2010).
* (82) “ET,” www.et-gw.eu/.
* Abbott _et al._ (2017e) B. P. Abbott _et al._ (LIGO Scientific), Class. Quant. Grav. 34, 044001 (2017e), arXiv:1607.08697 [astro-ph.IM] .
* (84) “CE,” https://cosmicexplorer.org/.
* Totani (2013) T. Totani, Pub. Astron. Soc. Jpn. 65, L12 (2013), arXiv:1307.4985 [astro-ph.HE] .
* Mingarelli _et al._ (2015) C. M. F. Mingarelli, J. Levin, and T. J. W. Lazio, Astrophys. J. Lett. 814, L20 (2015), arXiv:1511.02870 [astro-ph.HE] .
* Wang _et al._ (2016) J.-S. Wang, Y.-P. Yang, X.-F. Wu, Z.-G. Dai, and F.-Y. Wang, Astrophys. J. Lett. 822, L7 (2016), arXiv:1603.02014 [astro-ph.HE] .
* Liu _et al._ (2016) T. Liu, G. E. Romero, M.-L. Liu, and A. Li, Astrophys. J. 826, 82 (2016), arXiv:1602.06907 [astro-ph.HE] .
* Zhang (2016) B. Zhang, Astrophys. J. Lett. 827, L31 (2016), arXiv:1602.04542 [astro-ph.HE] .
* Yamasaki _et al._ (2018) S. Yamasaki, T. Totani, and K. Kiuchi, Publ. Astron. Soc. Jap. 70, 39 (2018), arXiv:1710.02302 [astro-ph.HE] .
* Wei _et al._ (2018) J.-J. Wei, X.-F. Wu, and H. Gao, Astrophys. J. Lett. 860, L7 (2018), arXiv:1805.12265 [astro-ph.CO] .
* Cai _et al._ (2019) R.-G. Cai, T.-B. Liu, S.-J. Wang, and W.-T. Xu, JCAP 09, 016 (2019), arXiv:1905.01803 [astro-ph.CO] .
* Zhao _et al._ (2020b) Z.-W. Zhao, Z.-X. Li, J.-Z. Qi, H. Gao, J.-F. Zhang, and X. Zhang, Astrophys. J. 903, 83 (2020b), arXiv:2006.01450 [astro-ph.CO] .
* Caldwell _et al._ (2019) R. Caldwell _et al._ , (2019), arXiv:1903.04657 [astro-ph.CO] .
* (95) “LISA,” lisa.nasa.gov/.
* Armano _et al._ (2016) M. Armano _et al._ , Phys. Rev. Lett. 116, 231101 (2016).
* Amaro-Seoane _et al._ (2017) P. Amaro-Seoane _et al._ (LISA), (2017), arXiv:1702.00786 [astro-ph.IM] .
* Armano _et al._ (2018) M. Armano _et al._ , Phys. Rev. Lett. 120, 061101 (2018).
* Abich _et al._ (2019) K. Abich _et al._ , Phys. Rev. Lett. 123, 031101 (2019), arXiv:1907.00104 [astro-ph.IM] .
* Speri _et al._ (2021) L. Speri, N. Tamanini, R. R. Caldwell, J. R. Gair, and B. Wang, Phys. Rev. D 103, 083526 (2021), arXiv:2010.09049 [astro-ph.CO] .
* Wu (2018) Y.-L. Wu, Int. J. Mod. Phys. A 33, 1844014 (2018), arXiv:1805.10119 [physics.gen-ph] .
* Ruan _et al._ (2020a) W.-H. Ruan, Z.-K. Guo, R.-G. Cai, and Y.-Z. Zhang, Int. J. Mod. Phys. A 35, 2050075 (2020a), arXiv:1807.09495 [gr-qc] .
* Hu and Wu (2017) W.-R. Hu and Y.-L. Wu, Natl. Sci. Rev. 4, 685 (2017).
* Luo _et al._ (2020) J. Luo _et al._ , Class. Quant. Grav. 37, 185013 (2020), arXiv:2008.09534 [physics.ins-det] .
* Wang _et al._ (2020b) L.-F. Wang, Z.-W. Zhao, J.-F. Zhang, and X. Zhang, JCAP 11, 012 (2020b), arXiv:1907.01838 [astro-ph.CO] .
* Liu _et al._ (2020) S. Liu, Y.-M. Hu, J.-d. Zhang, and J. Mei, Phys. Rev. D 101, 103027 (2020), arXiv:2004.14242 [astro-ph.HE] .
* Milyukov (2020) V. Milyukov, Astron. Rep. 64, 1067 (2020).
* Mei _et al._ (2020) J. Mei _et al._ (TianQin), (2020), 10.1093/ptep/ptaa114, arXiv:2008.10332 [gr-qc] .
* Fan _et al._ (2020) H.-M. Fan, Y.-M. Hu, E. Barausse, A. Sesana, J.-d. Zhang, X. Zhang, T.-G. Zi, and J. Mei, Phys. Rev. D 102, 063016 (2020), arXiv:2005.08212 [astro-ph.HE] .
* Tamanini _et al._ (2016) N. Tamanini, C. Caprini, E. Barausse, A. Sesana, A. Klein, and A. Petiteau, JCAP 04, 002 (2016), arXiv:1601.07112 [astro-ph.CO] .
* Belgacem _et al._ (2019) E. Belgacem _et al._ (LISA Cosmology Working Group), JCAP 07, 024 (2019), arXiv:1906.01593 [astro-ph.CO] .
* Zhao _et al._ (2020c) Z.-W. Zhao, L.-F. Wang, J.-F. Zhang, and X. Zhang, Sci. Bull. 65, 1340 (2020c), arXiv:1912.11629 [astro-ph.CO] .
* Abbott _et al._ (2017f) B. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 119, 141101 (2017f), arXiv:1709.09660 [gr-qc] .
* Fan _et al._ (2019) X. Fan, J. Li, X. Li, Y. Zhong, and J. Cao, Sci. China Phys. Mech. Astron. 62, 969512 (2019), arXiv:1811.01380 [astro-ph.IM] .
* Ruan _et al._ (2020b) W.-H. Ruan, C. Liu, Z.-K. Guo, Y.-L. Wu, and R.-G. Cai, Nature Astron. 4, 108 (2020b), arXiv:2002.03603 [gr-qc] .
* Wang _et al._ (2020c) G. Wang, W.-T. Ni, W.-B. Han, S.-C. Yang, and X.-Y. Zhong, Phys. Rev. D 102, 024089 (2020c), arXiv:2002.12628 [gr-qc] .
* Hu _et al._ (2021) Q. Hu, M. Li, R. Niu, and W. Zhao, Phys. Rev. D 103, 064057 (2021), arXiv:2006.05670 [gr-qc] .
* Wang _et al._ (2020d) R. Wang, W.-H. Ruan, Q. Yang, Z.-K. Guo, R.-G. Cai, and B. Hu, (2020d), arXiv:2010.14732 [astro-ph.CO] .
* Omiya and Seto (2020) H. Omiya and N. Seto, Phys. Rev. D 102, 084053 (2020), arXiv:2010.00771 [gr-qc] .
* Orlando _et al._ (2021) G. Orlando, M. Pieroni, and A. Ricciardone, JCAP 03, 069 (2021), arXiv:2011.07059 [astro-ph.CO] .
* Wang and Han (2021) G. Wang and W.-B. Han, Phys. Rev. D 103, 064021 (2021), arXiv:2101.01991 [gr-qc] .
* Cutler (1998) C. Cutler, Phys. Rev. D 57, 7089 (1998), arXiv:gr-qc/9703068 .
* Krolak _et al._ (1995) A. Krolak, K. D. Kokkotas, and G. Schaefer, Phys. Rev. D 52, 2089 (1995), arXiv:gr-qc/9503013 .
* Buonanno _et al._ (2009) A. Buonanno, B. Iyer, E. Ochsner, Y. Pan, and B. Sathyaprakash, Phys. Rev. D 80, 084043 (2009), arXiv:0907.0700 [gr-qc] .
* Rubbo _et al._ (2004) L. J. Rubbo, N. J. Cornish, and O. Poujade, Phys. Rev. D 69, 082003 (2004), arXiv:gr-qc/0311069 .
* Klein _et al._ (2016) A. Klein _et al._ , Phys. Rev. D 93, 024003 (2016), arXiv:1511.05581 [gr-qc] .
* Feng _et al._ (2019) W.-F. Feng, H.-T. Wang, X.-C. Hu, Y.-M. Hu, and Y. Wang, Phys. Rev. D 99, 123002 (2019), arXiv:1901.02159 [astro-ph.IM] .
* Zhao and Wen (2018) W. Zhao and L. Wen, Phys. Rev. D 97, 064031 (2018), arXiv:1710.05325 [astro-ph.CO] .
* Kocsis _et al._ (2006) B. Kocsis, Z. Frei, Z. Haiman, and K. Menou, Astrophys. J. 637, 27 (2006), arXiv:astro-ph/0505394 .
* Ilbert _et al._ (2013) O. Ilbert _et al._ , Astron. Astrophys. 556, A55 (2013), arXiv:1301.3157 [astro-ph.CO] .
* Palenzuela _et al._ (2010) C. Palenzuela, L. Lehner, and S. L. Liebling, Science 329, 927 (2010), arXiv:1005.1067 [astro-ph.HE] .
* O’Shaughnessy _et al._ (2011) R. O’Shaughnessy, D. L. Kaplan, A. Sesana, and A. Kamble, Astrophys. J. 743, 136 (2011), arXiv:1109.1050 [astro-ph.CO] .
* Moesta _et al._ (2012) P. Moesta, D. Alic, L. Rezzolla, O. Zanotti, and C. Palenzuela, Astrophys. J. Lett. 749, L32 (2012), arXiv:1109.1177 [gr-qc] .
* Kaplan _et al._ (2011) D. L. Kaplan, R. O’Shaughnessy, A. Sesana, and M. Volonteri, Astrophys. J. Lett. 734, L37 (2011), arXiv:1105.3653 [astro-ph.HE] .
* Shi _et al._ (2012) J.-M. Shi, J. H. Krolik, S. H. Lubow, and J. F. Hawley, Astrophys. J. 749, 118 (2012), arXiv:1110.4866 [astro-ph.HE] .
* Blandford and Znajek (1977) R. D. Blandford and R. L. Znajek, Mon. Not. Roy. Astron. Soc. 179, 433 (1977).
* Meier (2001) D. L. Meier, Astrophys. J. Lett. 548, L9 (2001), arXiv:astro-ph/0010231 .
* Dotti _et al._ (2012) M. Dotti, A. Sesana, and R. Decarli, Adv. Astron. 2012, 940568 (2012), arXiv:1111.0664 [astro-ph.CO] .
* (139) “SKA,” www.skatelescope.org.
* Ivezić _et al._ (2019) Ž. Ivezić _et al._ (LSST), Astrophys. J. 873, 111 (2019), arXiv:0805.2366 [astro-ph] .
* (141) “E-ELT,” www.eso.org/sci/facilities/eelt/.
* Yang (2021) T. Yang, JCAP 05, 044 (2021), arXiv:2103.01923 [astro-ph.CO] .
* Bruzual and Charlot (2003) G. Bruzual and S. Charlot, Mon. Not. Roy. Astron. Soc. 344, 1000 (2003), arXiv:astro-ph/0309134 .
* Davies _et al._ (2010) R. Davies _et al._ (MICADO Team), Proc. SPIE Int. Soc. Opt. Eng. 7735, 77352A (2010), arXiv:1005.5009 [astro-ph.IM] .
* Dahlen _et al._ (2013) T. Dahlen _et al._ , Astrophys. J. 775, 93 (2013), arXiv:1308.5353 [astro-ph.CO] .
* Madau and Rees (2001) P. Madau and M. J. Rees, Astrophys. J. Lett. 551, L27 (2001), arXiv:astro-ph/0101223 .
* Volonteri _et al._ (2008) M. Volonteri, G. Lodato, and P. Natarajan, Mon. Not. Roy. Astron. Soc. 383, 1079 (2008), arXiv:0709.0529 [astro-ph] .
* Chevallier and Polarski (2001) M. Chevallier and D. Polarski, Int. J. Mod. Phys. D 10, 213 (2001), arXiv:gr-qc/0009008 .
* Linder (2003) E. V. Linder, Phys. Rev. Lett. 90, 091301 (2003), arXiv:astro-ph/0208512 .
* Chen _et al._ (2019) L. Chen, Q.-G. Huang, and K. Wang, JCAP 02, 028 (2019), arXiv:1808.05724 [astro-ph.CO] .
# Does Typological Blinding Impede Cross-Lingual Sharing?
Johannes Bjerva
Department of Computer Science
Aalborg University
<EMAIL_ADDRESS>
Isabelle Augenstein
Department of Computer Science
University of Copenhagen
<EMAIL_ADDRESS>
###### Abstract
Bridging the performance gap between high- and low-resource languages has been
the focus of much previous work. Typological features from databases such as
the World Atlas of Language Structures (WALS) are a prime candidate for this,
as such data exists even for very low-resource languages. However, previous
work has only found minor benefits from using typological information. Our
hypothesis is that a model trained in a cross-lingual setting will pick up on
typological cues from the input data, thus overshadowing the utility of
explicitly using such features. We verify this hypothesis by blinding a model
to typological information, and investigate how cross-lingual sharing and
performance is impacted. Our model is based on a cross-lingual architecture in
which the latent weights governing the sharing between languages are learnt
during training. We show that (i) preventing this model from exploiting
typology severely reduces performance, while a control experiment reaffirms
that (ii) encouraging sharing according to typology somewhat improves
performance.
## 1 Introduction
Most languages in the world have little access to NLP technology due to data
scarcity (Joshi et al., 2020). Nonetheless, high-quality multilingual
representations can be obtained using only a raw text signal, e.g. via
multilingual language modelling (Devlin et al., 2019). Furthermore, structural
similarities of languages are to a large extent documented in typological
databases such as the World Atlas of Language Structures (WALS, Dryer and
Haspelmath (2013)). Hence, developing models which can make use of the
typological similarities of languages is an important direction for alleviating
language technology inequalities.
Figure 1: A PoS tagger is exposed (or blinded with gradient reversal,
$-\lambda$) to typological features. Observing $\alpha$ values tells us how
typology affects sharing.
While previous work has attempted to use typological information to inform NLP
models, our work differs significantly from such efforts in that we blind a
model to this information. Most previous work includes language information as
features, by using language IDs, or language embeddings (e.g. Ammar et al.
(2016); O’Horan et al. (2016); Östling and Tiedemann (2017); Ponti et al.
(2019); Oncevay et al. (2020)). Notably, limited effects are usually observed
from including typological features explicitly. For instance, de Lhoneux et
al. (2018) observe positive cross-lingual sharing effects only in a handful of
their settings. We therefore hypothesise that relevant typological information
is learned as a by-product of cross-lingual training. Hence, although models
do benefit from this information, it is not necessary to provide it explicitly
in a high-resource scenario, where there is abundant training data. This is
confirmed by Bjerva and Augenstein (2018a), who find that, e.g., language
embeddings trained on a morphological task can encode morphological features
from WALS.
In contrast with previous work, we blind a model to typological information,
by using adversarial techniques based on gradient reversal (Ganin and
Lempitsky, 2014). We evaluate on the structured prediction and classification
tasks in XTREME (Hu et al., 2020), yielding a total of 40 languages and 4
tasks. We show that when a model is blinded to typological signals relating to
syntax and morphology, performance on related NLP tasks drops significantly.
For instance, the mean accuracy across 40 languages for POS tagging drops by
1.8% when blinding the model to morphological features.
## 2 Model
An overview of the model is shown in Figure 1. We model each task in this
paper using the following steps. First, contextual representations are
extracted using multilingual BERT (m-BERT, Devlin et al. (2019)), a
transformer-based model (Vaswani et al., 2017), trained with shared word-
pieces across languages. We either blind m-BERT to typological features, with
an added adversarial component based on gradient reversal (Ganin and
Lempitsky, 2014), or expose it to them via multi-task learning (MTL, (Caruana,
1997)). Representations from m-BERT are fed to a latent multi-task
architecture learning network (Ruder et al., 2019), which includes $\alpha$
parameters we seek to investigate. The model learns which parameters to share
between languages (e.g. $\alpha_{es,fr}$ denotes sharing between Spanish and
French).
### 2.1 Sharing architecture
Our sharing architecture is based on that of Ruder et al. (2019), which has
latent variables learned during training, governing which layers and subspaces
are shared between tasks, to what extent, as well as the relative weighting of
different task losses. We are most interested in the parameters which control
the sharing between the hidden layers allocated to each task, referred to as
$\alpha$ parameters (Ruder et al., 2019). Consider a setting with two tasks
$A$ and $B$. The outputs $h_{A,k}$ and $h_{B,k}$ of the $k$-th layer for task
$A$ and $B$ interact through the $\alpha$ parameters, for which the output is
defined as:
$\begin{bmatrix}\widetilde{h}_{A,k}\\ \widetilde{h}_{B,k}\end{bmatrix}=\begin{bmatrix}\alpha_{AA}&\alpha_{AB}\\ \alpha_{BA}&\alpha_{BB}\end{bmatrix}\begin{bmatrix}h_{A,k}^{\top}\\ h_{B,k}^{\top}\end{bmatrix}$
(1)
where $\widetilde{h}_{A,k}$ is a linear combination of the activations for
task $A$ at layer $k$, weighted with the learned $\alpha$s. While their model
is an MTL model, we choose to interpret this differently by considering each
language as a task, yielding $\alpha\in\mathbf{R}^{l\times l}$, where $l$ is
the number of languages for the given task. Each activation
$\widetilde{h}_{A,k}$ is then a linear combination of the language specific
activations $h_{A,k}$. These are used for prediction in the downstream tasks,
as in the baselines from Hu et al. (2020).
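The mixing in Eq. 1 is simply a matrix of learned sharing weights applied to the per-language layer outputs. The toy sketch below (not the authors' code; the weights and hidden vectors are illustrative) shows the mechanics for two languages:

```python
# Toy sketch of the alpha-sharing mechanism in Eq. 1: each language's layer
# output is replaced by a linear combination of all languages' outputs,
# weighted by the learned alpha parameters.

def alpha_share(alpha, hidden):
    """alpha: l x l sharing weights; hidden: l x d layer outputs.
    Returns the mixed activations h~ = alpha @ hidden."""
    l, d = len(hidden), len(hidden[0])
    mixed = [[0.0] * d for _ in range(l)]
    for i in range(l):          # target language
        for j in range(l):      # source language
            for k in range(d):
                mixed[i][k] += alpha[i][j] * hidden[j][k]
    return mixed

# Two "languages" (e.g. es and fr), hidden size 2.
alpha = [[0.9, 0.1],   # es mostly keeps its own activations
         [0.5, 0.5]]   # fr shares equally with es
hidden = [[1.0, 0.0],  # h_es
          [0.0, 1.0]]  # h_fr
print(alpha_share(alpha, hidden))  # [[0.9, 0.1], [0.5, 0.5]]
```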
Crucially, this model allows us to draw conclusions about parameter sharing
between languages by observing the $\alpha$ parameters under the blinding and
prediction conditions. We will combine this insight with observing downstream
task performance in order to draw conclusions about the effects of typological
feature blinding and prediction.
### 2.2 Blinding/Exposing a Model to Typology
We introduce a component which can either blind or expose the model to
typological features. We implement this as a single task-specific layer per
feature, using the [CLS] token from the m-BERT model, without access to any of the
soft sharing between languages from $\alpha$-layers. Each layer optimises a
categorical cross-entropy loss function ($L_{typ}$).
For this task, we predict typological features drawn from WALS (Dryer and
Haspelmath, 2013), inspired by previous work (Bjerva and Augenstein, 2018a).
Unlike previous work, we also blind the model to such features by including a
gradient reversal layer (Ganin and Lempitsky, 2014), which multiplies the
gradient of the typological prediction task with a negative constant
($-\lambda$), inspired by previous work on adversarial learning (Goodfellow et
al., 2014; Zhang et al., 2019; Chen et al., 2019). We hypothesise that using a
gradient reversal layer for typology will yield typology-invariant features,
and that this will perform worse on tasks for which the typological feature at
hand is important. For instance, we expect that blinding a model to syntactic
features will severely reduce performance for tasks which rely heavily on
syntax, such as POS tagging.
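As a rough illustration of the gradient reversal idea (not the paper's framework-specific implementation): the layer is the identity in the forward pass and multiplies the incoming gradient by $-\lambda$ in the backward pass, so the shared encoder is pushed away from representations useful to the typology classifier. The class and values below are hypothetical:

```python
# Minimal sketch of a gradient reversal layer (Ganin and Lempitsky, 2014),
# written as a toy forward/backward pair. Forward is the identity; backward
# flips and scales the incoming gradient by -lambda.

class GradReverse:
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):      # activations pass through unchanged
        return x

    def backward(self, grad):  # gradient is reversed and scaled
        return -self.lam * grad

layer = GradReverse(lam=0.5)
assert layer.forward(3.0) == 3.0    # unchanged in the forward pass
assert layer.backward(2.0) == -1.0  # reversed and scaled in the backward pass
```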
## 3 Cross-Lingual Experiments
We investigate the effects of typological blinding, using typological
parameters as presented in WALS (Dryer and Haspelmath, 2013). The experiments
are run on XTREME (Hu et al., 2020), which includes up to 40 languages from 12
language families and two isolates. We experiment on the following languages
(ISO 639-1 codes): af, ar, bg, bn, de, el, en, es, et, eu, fa, fi, fr, he, hi,
hu, id, it, ja, jv, ka, kk, ko, ml, mr, ms, my, nl, pt, ru, sw, ta, te, th,
tl, tr, ur, vi, yo, and zh. We experiment on four tasks: POS (part of speech
tagging), NER (named entity recognition), XNLI (cross-lingual natural language
inference), and PAWS-X (paraphrase identification). Our general setup for the
structured prediction tasks (POS and NER) is that we train on all available
languages, and downsample to 1,000 samples per language. For the
classification tasks XNLI and PAWS-X, we train on the English training data
and fine-tune on the development sets, as no training data is available for
other languages. Hence, typological differences will be the main factor in our
results, rather than differences in dataset sizes.
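The per-language downsampling for the structured prediction tasks can be sketched as follows; `downsample` and the toy data are illustrative, not code from the XTREME setup:

```python
# Sketch of capping each language at 1,000 training samples, so that
# typological differences rather than dataset size drive the results.
import random

def downsample(samples_by_lang, cap=1000, seed=0):
    rng = random.Random(seed)
    out = {}
    for lang, samples in samples_by_lang.items():
        if len(samples) > cap:
            out[lang] = rng.sample(samples, cap)  # random subset of size cap
        else:
            out[lang] = list(samples)             # keep smaller sets intact
    return out

data = {"en": list(range(5000)), "yo": list(range(300))}
balanced = downsample(data)
assert len(balanced["en"]) == 1000 and len(balanced["yo"]) == 300
```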
### 3.1 Typological Prediction and Blinding
We first investigate whether prohibiting or allowing access to typological
features has an effect on model performance using our architecture. We
hypothesise that our multilingual model will leverage signals related to the
linguistic nature of a task when optimising its sharing parameters
$\alpha$.
There exists a growing body of work on prediction of typological features
(Daumé III and Campbell, 2007; Murawaki, 2017; Bjerva and Augenstein, 2018b;
Bjerva et al., 2019a, b), most notably in a recent shared task on the subject
(Bjerva et al., 2020). While we are inspired by this direction of research,
our contribution is not concerned with the accuracy of the prediction of such
features, and this is therefore not evaluated in detail in the paper.
Moreover, an increasing amount of work measures the correlation of predictive
performance of cross-lingual models with typological features as a way of
probing what a model has learned about typology Malaviya et al. (2017);
Choenni and Shutova (2020); Gerz et al. (2018); Nooralahzadeh et al. (2020);
Zhao et al. (2020). In contrast to such post-hoc approaches, our experimental
setting allows for measuring the impact of typology on cross-lingual sharing
performance in a direct manner as part of the model architecture.
#### Syntactic Features
We first blind/expose the model to syntactic features from WALS (Dryer and
Haspelmath, 2013). We take the set of word order features which are annotated
for all languages in our experiments, resulting in 33 features. This includes
features such as 81A: Order of Subject, Object and Verb, which encodes what
the preferred word ordering is (if any) in a transitive clause. For all
features, we exclude feature values which do not occur for our set of
languages. We hypothesise that performance will drop for all four tasks, as
they all require syntactic understanding.
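The feature-selection procedure described above (keep only features annotated for every language in the sample, then drop feature values that never occur) can be sketched as below; the WALS entries shown are placeholders, not real annotations:

```python
# Toy sketch of the WALS feature filtering described in the text.
wals = {
    "en": {"81A": "SVO", "26A": "suffixing"},
    "ja": {"81A": "SOV", "26A": "suffixing"},
    "sw": {"81A": "SVO"},                      # 26A missing for sw
}
langs = list(wals)

# Keep only features annotated for ALL languages in the sample.
features = [f for f in {f for v in wals.values() for f in v}
            if all(f in wals[l] for l in langs)]

# Restrict each kept feature to the values that actually occur.
values = {f: sorted({wals[l][f] for l in langs}) for f in features}

print(features)  # ['81A']
print(values)    # {'81A': ['SOV', 'SVO']}
```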
#### Morphological Features
We next attempt to blind/expose the model to the morphological features in
WALS. We use the same approach as above, resulting in a total of 8
morphological features. This includes features such as 26A: Prefixing vs.
Suffixing in Inflectional Morphology, indicating to what extent a language
uses prefixing or suffixing morphology. We hypothesise that mainly the POS
tagging task will suffer under this condition, whereas other tasks only to
some extent require morphology.
#### Phonological Features
We next consider a control experiment, in which we attempt to blind/expose the
model to phonological features in WALS. We arrive at a total of 15
phonological features, such as 1A: Consonant Inventories which indicates the
size of the consonant inventory of a language. We expect the performance to
remain relatively unaffected by this task, as phonology ought to have little
importance given a textual input.
#### Genealogical Features
Finally, we attempt to use what one might consider to be language meta-data.
We attempt to blind/expose the model to what language family a language
belongs to. This can be seen as a type of proxy to language similarity, and
correlates relatively strongly with structural similarities in languages.
Because of this correlation with structural similarities, we expect blinding
under this condition to only slightly reduce performance for all tasks, as
previous work has shown this type of relationship not to be central in
language representations (Bjerva et al., 2019c).
### 3.2 Results
In general, we observe a drop in performance when blinding the model to
relevant typological information, and an increase in performance when exposing
the model to it (Table 1). For phonological blinding or prediction, none of
the four tasks is noticeably affected. Although, e.g., both the syntactic and
morphological prediction tasks increase performance on POS tagging, it is not
straightforward to draw conclusions on which of these is the most efficient,
as there is a substantial correlation between syntactic and morphological
features. As for XNLI and PAWS-X, performance notably drops under both the
syntactic and genealogical blinding tasks.
Figure 2: PoS tagging results per language family across blinding and
prediction conditions
Figure 2 shows results for PoS tagging under prediction and blinding across
language families, following the same scheme as Hu et al. (2020).
Interestingly, the syntactic and morphological blinding settings are robust
across all language families, yielding a drop in accuracy across the board.
All other conditions yield mixed results. This further strengthens our
argument that preventing a model from learning syntactic and morphological
features can be severely detrimental.
Model | POS | NER | XNLI | PAWS-X
---|---|---|---|---
\+ Syntactic Blind. | 85.3- | 76.4 | 64.2- | 80.6-
\+ Morphological Blind. | 85.0- | 77.2 | 64.9 | 81.4
\+ Phonological Blind. | 86.7 | 77.1 | 65.0 | 81.6
\+ Genealogical Blind. | 86.1 | 77.0 | 64.7 | 81.1
m-BERT baseline | 86.8 | 77.3 | 65.1 | 81.7
\+ Syntactic Pred. | 87.0 | 77.5 | 65.3+ | 81.9+
\+ Morphological Pred. | 87.2+ | 77.3 | 65.2 | 81.7
\+ Phonological Pred. | 86.7 | 77.1 | 65.0 | 81.7
\+ Genealogical Pred. | 87.0 | 77.6 | 65.3+ | 81.8
Table 1: Typological Blinding and Prediction. Mean POS accuracy, NER F1
scores, XNLI accuracy and PAWS-X accuracy across all languages. + and -
indicate significantly better or worse performance respectively, as determined
by a one-tailed t-test ($p<0.01$).
### 3.3 The Effect of Typology on Latent Architecture Learning
The results show that preventing access to typological features hampers
performance, whereas providing access improves performance. We now turn to an
analysis of how the model shares parameters across languages in this setting.
Our hypothesis is that blinding will prevent models from sharing parameters
between similar languages, in spite of typological similarities. Concretely,
we expect that the drop in POS tagging performance under morphological
blinding is caused by lower $\alpha$ weights between languages which are
morphologically similar, and higher $\alpha$ weights between languages which
are dissimilar. Recall that these parameters are latent variables learned by
the model, regulating the amount of sharing between languages (see Eq. 1). We
investigate the correlations between the $\alpha$ sharing parameters, and two
proxies of language similarity. We focus on the POS task, as the results from
the typological blinding and prediction experiments were the most pronounced
here, as both morphological and syntactic blinding affected performance.
Our first measure of language similarity is based on Bjerva et al. (2019c),
who introduce what they refer to as structural similarity. This is based on
dependency statistics from the Universal Dependencies treebank (Zeman et al.,
2020), resulting in vectors which describe how different syntactic relations
are used in each language. Previous work has shown that this measure of
similarity correlates strongly with that learned in embedded language spaces
during multilingual training. In addition to considering these dependency
statistics, we also use language embeddings drawn from Östling and Tiedemann
(2017). For each language similarity measure we calculate its pairwise Pearson
correlation with the $\alpha$ values learned under each condition.
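The correlation analysis amounts to flattening the pairwise $\alpha$ weights and a pairwise similarity measure over the same language pairs, then computing Pearson's $r$; a minimal sketch with made-up values:

```python
# Pearson correlation between flattened alpha sharing weights and a
# language-similarity measure. The numbers below are illustrative only.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

alphas = [0.9, 0.1, 0.5, 0.5]      # alpha weight per language pair
similarity = [1.0, 0.2, 0.6, 0.7]  # similarity for the same pairs
r = pearson(alphas, similarity)
assert -1.0 <= r <= 1.0
```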
Table 2 shows that correlations between $\alpha$ weights and language
similarities increase when the model predicts typological features, and decrease when it is blinded to such
features. Hence, when the model has indirect access to, e.g., the SVO word
ordering features of languages, sharing also reflects this.
Model | Struct. | Lang. Emb.
---|---|---
Syntactic Blind. | 0.31 | 0.27
Morphological Blind. | 0.34 | 0.29
Phonological Blind. | 0.40 | 0.41
Genealogical Blind. | 0.29 | 0.31
No blind./pred. | 0.43 | 0.40
Syntactic Pred. | 0.52 | 0.53
Morphological Pred. | 0.49 | 0.56
Phonological Pred. | 0.41 | 0.39
Genealogical Pred. | 0.47 | 0.38
Table 2: Pearson correlations between $\alpha$ weights and language similarity
measures.
## 4 Discussion
We have shown that blinding a multilingual model to typological features
severely affects sharing across a relatively large language sample, and for
several NLP tasks. The effects on model performance, as evaluated over 40
languages and 4 tasks from XTREME (Hu et al., 2020), were the largest for POS
tagging. The fact that smaller effects were observed for NER could be because
this task relies more on memorising named entities than on using (morpho-)syntactic
cues (Augenstein et al., 2017). Furthermore, the relatively small effects on
XNLI and PAWS-X can also be interpreted as evidence that typology is less
important in these tasks than in more traditional linguistic analysis.
A potential critique of our approach is that it merely blinds the model to
language identities. This could be the case, if only some latent
representation of, e.g., “SVO” ordering is used to represent a language
identity. However, previous work has shown that morphological information is
encoded by the type of model we investigate. Hence, since we only blind
features in a single category at a time, we expect that the model’s
representation of language identities is unaffected.
Not only do we observe a drop in performance when blinding a model to
syntactic features, but we also observe that the $\alpha$ sharing weights in
our model do not appear to correlate with linguistic similarities in this
setting. Conversely, encouraging a model to consider typology, by jointly
optimising it for typological feature prediction, improves performance in
general. Furthermore, $\alpha$ weights in this scenario converge towards
correlating with structural similarities of languages. This is in line with
recent work which has found that m-BERT uses fine-grained syntactic
distinctions in its cross-lingual representation space (Chi et al., 2020).
We interpret this as evidence for the fact that typology can be a necessity
for modelling in NLP. Our results furthermore corroborate previous work in
that we only find moderate benefits from including typological information
explicitly. We expect that this is largely because the typological
similarities of languages are encoded implicitly, based on correlations
between patterns in the input data. As low-resource languages often do not
even have access to any substantial amount of raw text, but often do have
annotations in WALS, we expect that using typological information can go some
way towards building truly language-universal models.
## 5 Conclusions
We have shown that preventing access to typology can impede the performance of
cross-lingual sharing models. Investigating latent weights governing the
sharing between languages shows that this prevents the model from sharing
between typologically similar languages, which is otherwise learned based on
patterns in the input. We therefore expect that using typological information
can be of particular interest for building truly language-universal models for
low-resource languages.
## Acknowledgements
This research has received funding from the Swedish Research Council (grant No
2019-04129), and the NVIDIA Corporation (Titan Xp GPU).
## References
* Ammar et al. (2016) Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Many languages, one parser. _Transactions of the Association for Computational Linguistics_ , 4:431–444.
* Augenstein et al. (2017) Isabelle Augenstein, Leon Derczynski, and Kalina Bontcheva. 2017. Generalisation in named entity recognition: A quantitative analysis. _Computer Speech & Language_, 44:61–83.
* Bjerva and Augenstein (2018a) Johannes Bjerva and Isabelle Augenstein. 2018a. From phonology to syntax: Unsupervised linguistic typology at different levels with language embeddings. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 907–916, New Orleans, Louisiana. Association for Computational Linguistics.
* Bjerva and Augenstein (2018b) Johannes Bjerva and Isabelle Augenstein. 2018b. Tracking Typological Traits of Uralic Languages in Distributed Language Representations. In _Proceedings of the Fourth International Workshop on Computational Linguistics of Uralic Languages_ , pages 76–86, Helsinki, Finland. Association for Computational Linguistics.
* Bjerva et al. (2019a) Johannes Bjerva, Yova Kementchedjhieva, Ryan Cotterell, and Isabelle Augenstein. 2019a. A probabilistic generative model of linguistic typology. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 1529–1540, Minneapolis, Minnesota. Association for Computational Linguistics.
* Bjerva et al. (2019b) Johannes Bjerva, Yova Kementchedjhieva, Ryan Cotterell, and Isabelle Augenstein. 2019b. Uncovering probabilistic implications in typological knowledge bases. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3924–3930, Florence, Italy. Association for Computational Linguistics.
* Bjerva et al. (2020) Johannes Bjerva, Elizabeth Salesky, Sabrina J. Mielke, Aditi Chaudhary, Celano Giuseppe, Edoardo Maria Ponti, Ekaterina Vylomova, Ryan Cotterell, and Isabelle Augenstein. 2020. SIGTYP 2020 shared task: Prediction of typological features. In _Proceedings of the Second Workshop on Computational Research in Linguistic Typology_ , pages 1–11, Online. Association for Computational Linguistics.
* Bjerva et al. (2019c) Johannes Bjerva, Robert Östling, Maria Han Veiga, Jörg Tiedemann, and Isabelle Augenstein. 2019c. What Do Language Representations Really Represent? _Computational Linguistics_ , 45(2):381–389.
* Caruana (1997) Rich Caruana. 1997. Multitask learning. _Machine Learning_ , 28 (1):41–75.
* Chen et al. (2019) Steven Chen, Nicholas Carlini, and David Wagner. 2019. Stateful detection of black-box adversarial attacks. _arXiv preprint arXiv:1907.05587_.
* Chi et al. (2020) Ethan A Chi, John Hewitt, and Christopher D Manning. 2020. Finding universal grammatical relations in multilingual BERT. _arXiv preprint arXiv:2005.04511_.
* Choenni and Shutova (2020) Rochelle Choenni and Ekaterina Shutova. 2020. What does it mean to be language-agnostic? probing multilingual sentence encoders for typological properties. _CoRR_ , abs/2009.12862.
* Daumé III and Campbell (2007) Hal Daumé III and Lyle Campbell. 2007. A Bayesian Model for Discovering Typological Implications. In _Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics_ , pages 65–72, Prague, Czech Republic. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Dryer and Haspelmath (2013) Matthew S. Dryer and Martin Haspelmath, editors. 2013. _WALS Online_. Max Planck Institute for Evolutionary Anthropology, Leipzig.
* Ganin and Lempitsky (2014) Yaroslav Ganin and Victor Lempitsky. 2014. Unsupervised domain adaptation by backpropagation. _arXiv preprint arXiv:1409.7495_.
* Gerz et al. (2018) Daniela Gerz, Ivan Vulić, Edoardo Maria Ponti, Roi Reichart, and Anna Korhonen. 2018. On the relation between linguistic typology and (limitations of) multilingual language modeling. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 316–327, Brussels, Belgium. Association for Computational Linguistics.
* Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. _arXiv preprint arXiv:1412.6572_.
* Hu et al. (2020) Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. _arXiv preprint arXiv:2003.11080_.
* Joshi et al. (2020) Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. _arXiv preprint arXiv:2004.09095_.
* de Lhoneux et al. (2018) Miryam de Lhoneux, Johannes Bjerva, Isabelle Augenstein, and Anders Søgaard. 2018. Parameter sharing between dependency parsers for related languages. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 4992–4997.
* Malaviya et al. (2017) Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 2529–2535, Copenhagen, Denmark. Association for Computational Linguistics.
* Murawaki (2017) Yugo Murawaki. 2017. Diachrony-aware induction of binary latent representations from typological features. In _Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 451–461, Taipei, Taiwan. Asian Federation of Natural Language Processing.
* Nooralahzadeh et al. (2020) Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero-Shot Cross-Lingual Transfer with Meta Learning. In _Proceedings of EMNLP_. Association for Computational Linguistics.
* O’Horan et al. (2016) Helen O’Horan, Yevgeni Berzak, Ivan Vulić, Roi Reichart, and Anna Korhonen. 2016. Survey on the use of typological information in natural language processing. _arXiv preprint arXiv:1610.03349_.
* Oncevay et al. (2020) Arturo Oncevay, Barry Haddow, and Alexandra Birch. 2020. Bridging linguistic typology and multilingual machine translation with multi-view language representations. In _Proceedings of EMNLP_. Association for Computational Linguistics. ArXiv preprint arXiv:2004.14923.
* Östling and Tiedemann (2017) Robert Östling and Jörg Tiedemann. 2017. Continuous multilinguality with language vectors. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers_ , pages 644–649, Valencia, Spain. Association for Computational Linguistics.
* Ponti et al. (2019) Edoardo Maria Ponti, Helen O’horan, Yevgeni Berzak, Ivan Vulić, Roi Reichart, Thierry Poibeau, Ekaterina Shutova, and Anna Korhonen. 2019. Modeling language variation and universals: A survey on typological linguistics for natural language processing. _Computational Linguistics_ , 45(3):559–601.
* Ruder et al. (2019) Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 2019. Latent multi-task architecture learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pages 4822–4829.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in neural information processing systems_ , pages 5998–6008.
* Zeman et al. (2020) Daniel Zeman, Joakim Nivre, and Mitchell Abrams et al. 2020. Universal dependencies 2.6. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
* Zhang et al. (2019) Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S Dhillon, and Cho-Jui Hsieh. 2019. The limitations of adversarial training and the blind-spot attack. _arXiv preprint arXiv:1901.04684_.
* Zhao et al. (2020) Wei Zhao, Steffen Eger, Johannes Bjerva, and Isabelle Augenstein. 2020. Inducing Language-Agnostic Multilingual Representations. _arXiv preprint arXiv:2008.09112_.
|
# LESA: Linguistic Encapsulation and Semantic Amalgamation Based Generalised
Claim Detection from Online Content
Shreya Gupta${}^{\dagger}{}^{*}$, Parantak Singh‡ , Megha Sundriyal†,
Md Shad Akhtar†, Tanmoy Chakraborty†
† IIIT-Delhi, India. ‡ Birla Institute of Technology and Science, Pilani, Goa,
India.
{shreyag, meghas, shad.akhtar<EMAIL_ADDRESS>
<EMAIL_ADDRESS>∗ The first two authors contributed equally.
The work was done when Parantak was an intern at LCS2 Lab, IIIT-Delhi.
###### Abstract
The conceptualization of a claim lies at the core of argument mining. The
segregation of claims is complex, owing to the divergence in textual syntax
and context across different distributions. Another pressing issue is the
unavailability of labeled unstructured text for experimentation. In this
paper, we propose LESA, a framework which addresses the former issue by
assembling a source-independent generalized
model that captures syntactic features through part-of-speech and dependency
embeddings, as well as contextual features through a fine-tuned language
model. We resolve the latter issue by annotating a Twitter dataset which aims
at providing a testing ground on a large unstructured dataset. Experimental
results show that LESA improves upon the state-of-the-art performance across
six benchmark claim datasets by an average of 3 claim-F1 points for in-domain
experiments and by 2 claim-F1 points for general-domain experiments. On our
dataset too, LESA outperforms existing baselines by 1 claim-F1 point on the
in-domain experiments and 2 claim-F1 points on the general-domain experiments.
We also release comprehensive data annotation guidelines compiled during the
annotation phase (which was missing in the current literature).
## 1 Introduction
The concept of a claim lies at the core of the argument mining task. Toulmin
(2003), in his argumentation theory, described the term ‘claim’ as ‘an
assertion that deserves our attention’; albeit not very precise, it still
serves as an initial insight. In recent years, Govier (2013) described a
‘claim’ as ‘a disputed statement that we try to support with reasons.’
The difficulty of the claim detection task stems from the disparity in
conceptualization and the lack of a proper definition of a claim. Claim
detection across different domains has garnered tremendous attention
owing to the rise in social media consumption and, by extension, the
spread of fake news, online debates, widely read blogs, etc. As an
elementary example, claim detection can be used as a precursor to fact-
checking; wherein segregation of claims aids in restricting the corpus that
needs a fact-check. A few examples are shown in Table 1.
Most of the existing works are built upon two fundamental pillars - semantic
encapsulation (Daxenberger et al., 2017; Chakrabarty et al., 2019) and
syntactic encapsulation (Levy et al., 2014; Lippi and Torroni, 2015). They
mainly focus on adapting to texts from similar distributions or topics or
both. Secondly, they often exercise against well-structured and laboriously
pre-processed formal texts owing to the lack of a labeled corpus consisting of
unstructured texts. As a result, claim detection from unstructured raw data
still lies under a relatively less explored umbrella.
Text | Claim?
---|---
Alcohol cures corona. | Yes
Wearing mask can prevent corona. | Yes
Lord, please protect my family & the Philippines from the corona virus. | No
If this corona scare doesn’t end soon imma have to intervene | No
Table 1: A few examples of claim and non-claim.
Motivation: Claims can be sourced from a variety of sources, e.g., online
social media texts, microblogs, Wikipedia articles, etc. It is, however,
crucial to pay special attention to claims observed on online social media
(OSM) sites Baum et al. (2020); WHO (2020). Twitter, being a major OSM
platform, provides the perfect playground for different ideologies and
perspectives. Over time, Twitter has emerged as the hub for short,
unstructured pieces of text that describe anything from news to personal life.
Most individuals view and believe things that align with their compass and
prior knowledge, aka conformity bias Whalen and Laland (2015) – users tend to
make bold claims that usually create a clash between users of varied opinions.
At times, these claims incite a negative impact on individuals and society. As
an example, a tweet that reads “alcohol cures corona” can lead to massive
retweeting and consequential unrest, especially in times of a pandemic, when
people are more vulnerable to suggestions. In such cases, automated promotion
of claims for immediate further checks could prove to be of utmost importance.
An automated system is pivotal since OSM data is far too voluminous to allow
for manual checks, even by experts.
At the same time deploying separate systems contingent on the source of a text
is inefficient and moves away from the goal of attaining human intelligence in
natural language processing tasks. An ideal situation would be a framework
that can effectively detect claims in the general setting. However, a major
bottleneck towards this goal is the unavailability of an annotated dataset
from noisy platforms like Twitter. We acknowledge this bottleneck and, in
addition to proposing a generalised framework, we develop a qualitative
annotated resource and guidelines for claim detection in tweets.
Proposed Method: There exist several claim detection models; however, most of
them are trained on structured text from a specific domain. Therefore, in this
work, we propose LESA, a Linguistic Encapsulation and Semantic Amalgamation
based generalized claim detection model that is capable of accounting for
different text distributions simultaneously. To formalize this, we divide
texts, contingent upon their structure, into
three broad categories – noisy text (tweets), semi-noisy text (comments), and
non-noisy text (news, essays, etc.). We model each category separately in a
joint framework and fuse them together using attention layers.
Since the task of claim detection has a strong association with the structure
of the input, as argued by Lippi and Torroni (2015), we leverage two
linguistic properties – part-of-speech (POS) and dependency tree, to capture
the linguistic variations of each category. Subsequently, we amalgamate these
features with BERT (Devlin et al., 2019) for classification.
We evaluate LESA on seven different datasets (including our Twitter dataset)
and observe efficient performance in each case. Moreover, we compare LESA’s
performance against various state-of-the-art systems for all seven datasets in
the general and individual settings. The comparative study advocates the
superior performance of LESA.
Summary of the Contributions: We summarize our major contributions below:
* •
Twitter claim detection dataset and comprehensive annotation guidelines. To
mitigate the unavailability of an annotated dataset for claim detection in
Twitter, we develop a large COVID-19 Twitter dataset, the first of its kind,
with $\sim 10,000$ labeled tweets, following a comprehensive set of claim
annotation guidelines.
* •
LESA, a generalized claim detection system. We propose a generalized claim
detection model, LESA, that identifies the presence of claims in any online
text, without prior knowledge of the source and independent of the domain. To
the best of our knowledge, this is the first attempt to define a model that
handles claim detection from both structured and unstructured data in
conjunction.
* •
Exhaustive evaluation and superior results. We evaluate LESA against multiple
state-of-the-art models on six benchmark claim detection datasets and our own
Twitter dataset. Comparison suggests LESA’s superior performance across
datasets and the significance of each model component.
Reproducibility: Code and dataset are publicly available at
https://github.com/LCS2-IIITD/LESA-EACL-2021. The appendix comprises a detailed
dataset description, annotation guidelines, hyper-parameters, and additional
results.
## 2 Related Work
In the past decade, the task of claim detection has become a popular research
area in text processing with an initial pioneering attempt by Rosenthal and
McKeown (2012). They worked on mining claims from discussion forums and
employed a supervised approach with features based on sentiment and
word-grams. Levy et al. (2014) proposed a context-dependent claim detection
(CDCD) approach. They described a context-dependent claim (CDC) as ‘a general,
concise statement that directly supports or contests the given topic.’ Their
approach was evaluated over
Wikipedia articles; it detected sentences that include CDCs using context-
based and context-free features. This was followed by ranking and detecting
CDCs using logistic regression. Lippi and Torroni (2015) proposed context-
independent claim detection (CICD) using linguistic reasoning, and
encapsulated structural information to detect claims. They used constituency
parsed trees to extract structural information and predicted parts of the
sentence holding a claim using SVM. Although their approach achieved promising
results, they also used a Wikipedia dataset which was highly engineered and
domain dependent.
Daxenberger et al. (2017) used six disparate datasets and contrasted the
performance of several supervised models. They performed two sets of
experiments – in-domain CD (trained and tested on the same dataset) and cross-
domain CD (trained on one and tested on another unseen dataset). They learned
divergent conceptualisations of claims over cross-domain datasets. Levy et al.
(2017) proposed the first unsupervised approach for claim detection. They
hypothesised a “claim sentence query” as an ordered triplet: $\langle$that
$\rightarrow$ MC $\rightarrow$ CL$\rangle$. According to the authors, a claim
begins with the word ‘that’ and is followed by the main concept (MC) or topic
name which is further followed by words from a pre-defined claim lexicon (CL).
This approach would not fit well for text stemming from social media platforms
owing to a lack of structure and the use of ‘that’ as an offset for claim.
In recent years transformer-based language models have been employed for claim
detection. Chakrabarty et al. (2019) used over 5 million self-labeled Reddit
comments that contained the abbreviations IMO (In My Opinion) or IMHO (In My
Honest Opinion) to fine-tune their model. However, they made no attempt to
explicitly encapsulate the structure of a sentence.
Recently, the CLEF-2020 shared task (Barrón-Cedeño et al., 2020) attracted
multiple models tweaked specifically for claim detection. Williams
et al. (2020) bagged the first position in the task using a fine-tuned RoBERTa
Liu et al. (2019) model with mean pooling and dropout. First runner up of the
challenge, Nikolov et al. (2020) used logistic regression on various meta-data
tweet features and a RoBERTa-based prediction. Cheema et al. (2020), the
second runner up, incorporated pre-trained BERT embeddings along with POS and
dependency tags as features trained using SVM.
Traditional approaches focused primarily on the syntactic representations of
claims and textual feature generation, while recent neural methods leverage
transformer models. With LESA, we attempt to learn from the past while
building for the future – we propose encapsulating syntactic representations
in the form of POS tags and dependency sequences along with the semantics of
the input text using transformer-based BERT Devlin et al. (2019). Another key
observation has been the use of highly structured and domain-engineered
datasets for training the existing models in claim detection. In the current
age of alarming disinformation, we recognise the augmented need for a system
that can detect claims in online text independent of its origin, context or
domain. Therefore, in addition to considering texts from different online
mediums, we incorporate, for the first time, a self-annotated large Twitter
dataset to the relatively structured datasets that exist in this field.
Figure 1: Schematic diagram of our proposed LESA model. The structure on the
right is a high level schematic diagram. Structure on the left shows POS and
DEP for one viewpoint.
## 3 Proposed Methodology
Traditionally, the narrative on claim detection is built around either
syntactic Levy et al. (2017); Lippi and Torroni (2015) or semantic Daxenberger
et al. (2017); Chakrabarty et al. (2019) properties of the text. However,
given our purview on the integration of both, we propose a combined model,
LESA, that incorporates both explicit linguistic features, leveraged from
part-of-speech (POS) tags and dependency trees (DEP), and semantic features,
leveraged from the transformer-based model BERT Devlin et al. (2019).
By virtue of digital media, we generally deal with texts from three kinds of
environments: (a) a controlled platform where content is pre-reviewed (e.g.,
news, essays, etc.); (b) a free platform where authors have the freedom to
express themselves without any restrictions on length (e.g., online comments,
Wikipedia talk pages); and (c) a free platform with restrictions on text
length (e.g., tweets). Texts in the first category are usually free of
grammatical and typographical mistakes, and thus belong to the non-noisy
category. In the third case, by contrast, texts exhibit a significant amount
of noise – spelling variations, hashtags, emojis, emoticons, abbreviations,
etc. – used to express the desired information within the permissible limit,
and thus belong to the noisy class. The second case is a mixture of the two
extremes and hence constitutes the semi-noisy category. We employ three
pre-trained models, representing noisy, semi-noisy, and non-noisy data, for
both the POS- and dependency-based features. The intuition is to leverage
structure-specific linguistic features in a joint framework.
Domain adaptation from a structured environment to an unstructured one is non-
trivial and requires specific processing. Therefore, to ensure generalization,
we choose to process each input text from three different viewpoints
(structure-based segregation), and intelligently select the contributing
features among them through an attention mechanism. We use it to extract the
POS and DEP-based linguistic features. Subsequently, we fuse the linguistic
and semantic features using another attention layer before feeding it to a
multilayer perceptron (MLP) based classifier. The idea is to amalgamate a
diverse set of features from different perspectives and leverage them for the
final classification. A high-level architectural diagram is depicted in Figure
1. We design parallel pillars for each viewpoint (right side of Figure 1) such
that the noisy pillar contains pre-trained information from the noisy source,
and so on. When the common input is passed through the three pillars, we
hypothesize that each pillar’s contribution depends on the type of input data.
For example, if the data source is a noisy platform, we expect the noisy
pillar to have more significance than the other two viewpoints.
We demonstrate this effect in Table 5.
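The pillar-selection step can be sketched as a simple dot-product attention
over the three viewpoint outputs. This is a minimal illustration under our own
assumptions, not the exact architecture: the learned query vector, the
32-dimensional toy features, and the function names are ours.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                        # numerical stability
    e = np.exp(z)
    return e / e.sum()

def fuse_pillars(pillar_feats, query):
    """Attention-weighted fusion of per-viewpoint feature vectors.

    pillar_feats: (3, d) array -- outputs of the noisy, semi-noisy and
                  non-noisy pillars for the same input text.
    query:        (d,) learned query vector (randomly initialised here).
    Returns the fused (d,) representation and the attention weights.
    """
    scores = pillar_feats @ query          # one relevance score per pillar
    weights = softmax(scores)              # normalise to a distribution
    fused = weights @ pillar_feats         # convex combination of pillars
    return fused, weights

rng = np.random.default_rng(0)
pillars = rng.normal(size=(3, 32))         # toy 32-dim pillar outputs
q = rng.normal(size=32)
fused, w = fuse_pillars(pillars, q)
```

For a noisy input, a well-trained query would place most of its weight on the
noisy pillar, which is the effect Table 5 is meant to demonstrate.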
### A. Part-of-speech (POS) Module
The POS module consists of an embedding layer followed by a BiLSTM and an
attention layer to extract the syntactic formation of the input text. We pre-
train the POS module for each viewpoint, and later fine-tune them while
training the integrated model.
At first, each sequence of tokens $\\{x_{1},x_{2},...,x_{n}\\}$ is converted
to the corresponding sequence of POS tags $\\{p_{1},p_{2},...,p_{n}\\}$.
However, the foremost limitation of this modeling strategy is the small
vocabulary size of $19$, owing to the fixed number of POS tags. To tackle
this, we resort to using $k$-grams of the sequence. The sequence of POS tags
(with $k=3$) now becomes
$\\{(p_{0},p_{1},p_{2}),(p_{1},p_{2},p_{3}),(p_{2},p_{3},p_{4}),...,$
$(p_{n-2},p_{n-1},p_{n}),(p_{n-1},p_{n},p_{n+1})\\}$, where $p_{0}$ and
$p_{n+1}$ are dummy boundary tags. Subsequently, a skip-gram model Mikolov et
al. (2013) is trained on the POS-transformed corpus of each dataset, yielding
a POS embedding, $E_{P}$.
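The tri-gram construction can be sketched in a few lines; the example tags and
the PAD boundary symbol are illustrative only, and in the model the resulting
tuples are what the skip-gram model is trained on.

```python
def pos_trigrams(pos_tags, pad="PAD"):
    """Turn a POS-tag sequence into overlapping tri-grams with dummy
    boundary tags p_0 and p_{n+1}, as described above."""
    padded = [pad] + list(pos_tags) + [pad]
    return [tuple(padded[i:i + 3]) for i in range(len(padded) - 2)]

# Toy POS sequence for "Wearing mask can prevent corona"
tags = ["VERB", "NOUN", "AUX", "VERB", "NOUN"]
grams = pos_trigrams(tags)
# each original tag position yields one tri-gram, so len(grams) == len(tags)
```

The padding keeps the number of tri-grams equal to the sequence length, so
downstream layers see the same sequence length as the raw input.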
### B. Dependency Tree (DEP) Module
Dependency parsing is the function of abstracting the grammatical assembly of
a sequence of tokens $\\{x_{1},x_{2},...,x_{n}\\}$ such that there exists a
directed relation (dependency), $d(x_{i},x_{j})$, between any two tokens
$x_{i}$ and $x_{j}$, where $x_{i}$ is the headword and $x_{j}$ is modified by
the headword. We obtain these dependency relations through
spaCy111www.spacy.io which uses the clearNLP guidelines. Initially, each
sequence is rendered into a combination of the dependency-tag arrangement
$\\{d_{1},d_{2},...,d_{n}\\}$ and a parent-position arrangement
$\\{pp_{1},pp_{2},...pp_{n}\\}$. Here, each $d_{j}$ represents a dependency
tag, where $x_{j}$ is modified by $x_{i}$, and $pp_{j}$ is the index of the
modifier (headword) $x_{i}$.
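A minimal sketch of producing the two arrangements is shown below. With spaCy
installed, the pairs would come from something like `[(t.dep_, t.head.i) for t
in nlp(text)]`; we hard-code a toy parse here to keep the example
dependency-free, so both the tags and indices are illustrative.

```python
def dep_arrangement(parse):
    """Split a parse -- a list of (dep_tag, head_index) pairs -- into the
    dependency-tag arrangement {d_1..d_n} and the parent-position
    arrangement {pp_1..pp_n}."""
    dep_tags = [d for d, _ in parse]
    parent_pos = [h for _, h in parse]
    return dep_tags, parent_pos

# Toy parse of "Alcohol cures corona": "cures" (index 1) is the root and
# the headword of both "Alcohol" (nsubj) and "corona" (dobj).
toy_parse = [("nsubj", 1), ("ROOT", 1), ("dobj", 1)]
d_seq, pp_seq = dep_arrangement(toy_parse)
```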
We then leverage the transformer encoder Vaswani et al. (2017), where
traditionally, a position-based signal is added to each token’s embedding to
help encode the placement of tokens. In our modified version, the token
sequence is the dependency-tag sequence $d_{e}=\\{d_{1},d_{2},...,d_{n}\\}$,
wherein a parent-position based signal is additionally added to encode the
position of the modifier words.
$\displaystyle
d^{\prime}_{e}=d_{e}+[(E_{p_{1}},E_{pp_{1}}),...,(E_{p_{n}},E_{pp_{n}})]$ (1)
where $d^{\prime}_{e}\in\mathbf{R}^{d\times n}$ is the modified dependency
embedding of a sequence of length $n$, $E_{p_{i}}$ and $E_{pp_{i}}$ are the
encodings for the token-position and the parent-position (position of token’s
modifier), and $(,)$ represent tuple brackets.
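Eq. (1) can be sketched with standard transformer sinusoidal encodings. The
paper does not fully specify how the token-position and parent-position
signals are combined with the tag embedding, so the additive combination and
the sinusoidal form below are our assumptions, one plausible reading of the
tuple notation.

```python
import numpy as np

def sinusoid(pos, d_model=20):
    """Standard transformer positional encoding for a single position."""
    i = np.arange(d_model)
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def encode_dependencies(dep_emb, parent_pos):
    """Sketch of Eq. (1): add token-position and parent-position signals
    to the dependency-tag embeddings.

    dep_emb:    (n, d) embeddings of the dependency-tag sequence d_e.
    parent_pos: length-n list, pp_j = index of token j's headword.
    """
    n, d = dep_emb.shape
    tok_sig = np.stack([sinusoid(j, d) for j in range(n)])        # E_p
    par_sig = np.stack([sinusoid(p, d) for p in parent_pos])      # E_pp
    return dep_emb + tok_sig + par_sig

rng = np.random.default_rng(1)
emb = rng.normal(size=(3, 20))            # toy embeddings for 3 dep tags
out = encode_dependencies(emb, [1, 1, 1])
```

The parent-position signal is what lets a flat sequence encoder "see" the tree
structure without an explicit graph representation.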
This helps us create a flat representation of the dependency graph. The
transformer architecture that we employ comprises $5$ attention heads with an
embedding size of $20$. Given that there are only a handful of dependency
relations, this still poses the problem of a limited vocabulary size of $37$.
Having already accounted for the parent positions, we again employ tri-gram
sequences
$\\{(d_{0},d_{1},d_{2}),(d_{1},d_{2},d_{3}),(d_{2},d_{3},d_{4}),...,$
$(d_{n-2},d_{n-1},d_{n}),(d_{n-1},d_{n},d_{n+1})\\}$ in place of uni-grams.
## 4 Datasets
A number of datasets exist for the task of claim detection in online text
Peldszus and Stede (2015); Stab and Gurevych (2017); however, most of them are
formal and structured texts. As we discussed earlier, OSM platforms are
overwhelmed with various claim-ridden posts. Despite the abundance of tweets,
the literature does not suggest any significant effort for detecting claims in
Twitter; arguably, the prime reason is the lack of a large-scale dataset.
Recently, a workshop on claim detection and verification in Twitter was
organized under CLEF-2020 Barrón-Cedeño et al. (2020). It had two subtasks
related to claim identification with separate datasets. The first dataset
consists of $1,060$ COVID-19 tweets for claim detection, whereas the second
one comprises another $1,000$ tweets for claim retrieval. In total, there
were $2,069$ annotated tweets of which $1,704$ had claims and $365$ were non-
claims. Another recent in-progress dataset on claim detection, which currently
has only $305$ claim and $199$ non-claim tweets, was released by Alam et al.
(2020).
Unfortunately, the aforementioned limited instances are insufficient to
develop an efficient model. Therefore, we attempt to develop a new and
relatively larger dataset for claim detection in OSM platforms. We collected
$\sim 40,000$ tweets from various sources Carlson (2020); Smith (2020); Celin
(2020); Chen et al. (2020); Qazi et al. (2020) and manually annotated them. We
additionally included claim detection datasets of Alam et al. (2020) and
CLEF-2020 Barrón-Cedeño et al. (2020) and re-annotated them in accordance with
our guidelines. During the cleaning process, we filtered out a majority of the
tweets due to irrelevance and duplication. To ensure the removal of
duplicates, we performed manual checking and exhaustive preprocessing.
| Our Annotation
---|---
CLEF-2020 | Non-claim | Claim
Non-claim | 301 | 47
Claim | 64 | 550
Table 2: Confusion matrix highlighting the differences and similarities
between Alam et al. (2020) and our annotation guidelines for CLEF-2020 claim
dataset.
Dataset | Text
---|---
Noisy | TWR | @realDonaldTrump Does ingesting bleach and shining a bright light in the rectal area really cure #COVID19? Have you tried it? Is that what killed Kim Jong Un? #TrumpIsALaughingStock #TrumpIsALoser
Semi-noisy | OC | *smacks blonde wig on Axel* I think as far as DiZ is concerned, he is very smart but also in certain areas very dumb - - witness the fact that he didn’t notice his apprentices were going to turn on him, when some of them (cough Vexen cough) aren’t exactly subtle by nature.
WTP | Not to mention one without any anonymous users TALKING IN CAPITAL LETTERS !!!!!!!!
Non-noisy | MT | Tax data that are not made available for free should not be acquired by the state.
PE | I believe that education is the single most important factor in the development of a country.
VG | When’s the last time you slipped on the concept of truth?
WD | The public schools are a bad place to send a kid for a good education anymore.
Table 3: One example from each dataset. Underlined text highlights noisy and
semi-noisy phrases.
Data Annotation: To annotate the tweets, we extend and adapt the claim
annotation guidelines of Alam et al. (2020). The authors targeted and
annotated only a subset of claims, i.e., factually verifiable claims. They did
not consider personal opinions, sarcastic comments, implicit claims, or claims
existing in a sub-sentence or sub-clause level. Subsequently, we propose our
definition of claims and extrapolate the existing guidelines to be more
inclusive, nuanced and applicable to a diverse set of claims. Our official
definition for claims, adopted from Oxford
dictionary222https://www.lexico.com/definition/claim, is to state or assert
that something is the case, with or without providing evidence or proof.
We present the details of annotation guidelines in Gupta et al. (2021).
Following the guidelines, we annotated the collected tweets, and to ensure
coherence and conformity, we re-annotated the tweets of Alam et al. (2020) and
CLEF-2020 Barrón-Cedeño et al. (2020). It is intriguing to see the differences
and similarities of the two guidelines; therefore, we compile a confusion
matrix for CLEF-2020 claim dataset, as presented in Table 2. Each tweet in our
corpus of $9,894$ tweets has been annotated by at least two annotators, with
an average Cohen’s kappa inter-annotator agreement Cohen (1960) score of
$0.62$. In case of disagreement, a third annotator was consulted and a
majority vote determined the final label. All annotators were linguists.
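For reference, Cohen’s kappa over two annotators’ binary labels reduces to a
few lines; the toy label vectors below are illustrative, not taken from the
corpus.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' labels over the same items."""
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(a) | set(b)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two annotators agree on 5 of 6 toy claim labels (1 = claim).
kappa = cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0])
```

Here the raw agreement is 5/6 but kappa is lower (2/3) because it discounts
the agreement expected by chance.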
Dataset | Noisy | Semi-noisy | Non-noisy
---|---|---|---
TWR | OC | WTP | MT | PE | VG | WD
Tr | Cl | 7354 | 623 | 1030 | 100 | 1885 | 495 | 190
N-cl | 1055 | 7387 | 7174 | 301 | 4499 | 2012 | 3332
Ts | Cl | 1296 | 64 | 105 | 12 | 223 | 57 | 14
N-cl | 189 | 730 | 759 | 36 | 509 | 221 | 221
Tot | Cl | 8650 | 687 | 1,135 | 112 | 2,108 | 552 | 204
N-cl | 1244 | 8117 | 7933 | 337 | 5008 | 2233 | 3553
Table 4: Statistics of the datasets (Abbreviations: Cl: Claim, N-Cl: Non-
claim, Tr: Train set, Ts: Test set; Tot: Total).
Other Datasets: Since we attempt to create a generalized model that is able to
detect the presence of a claim in any online text, we accumulate, in addition
to the Twitter dataset, six publicly available benchmark datasets: (i) Online
Comments (OC) containing Blog threads of LiveJournal Biran and Rambow (2011),
(ii) Wiki Talk Pages (WTP) Biran and Rambow (2011), (iii) German Micro-text
(MT) Peldszus and Stede (2015), (iv) Persuasive Student Essay (PE) Stab and
Gurevych (2017), (v) Various Genres (VG) containing newspaper editorials,
parliamentary records and judicial summaries, and (vi) Web Discourse (WD)
containing blog posts or user comments Habernal and Gurevych (2015). All
datasets utilised in this paper contain English texts only. For German
Micro-text (MT), we used the publicly available English-translated version
published by MT’s original authors Peldszus and Stede (2015). The same was
utilized by Chakrabarty et al. (2019).
The datasets are formed by considering text at the sentence level. For
example, in Persuasive Essays (PE) dataset, each essay is broken into
sentences and each sentence is individually annotated for a claim. Considering
the structure of the input texts in these datasets, we group them into three
categories as follows: Noisy (Twitter), Semi-noisy (OC, WTP), Non-noisy (MT,
PE, VG, WD). We list one example from each dataset in Table 3. We also
highlight the noisy and semi-noisy phrases in Twitter, and OC and WTP datasets
respectively. Moreover, we present detailed statistics of all seven datasets
in Table 4.
## 5 Experimental Setup
For all datasets besides Twitter, we use the train, validation, and test
splits as provided by UKP Lab333https://tinyurl.com/yyckv29p. A mutually
exclusive 70:15:15 split was maintained for the Twitter dataset. We compute POS
embeddings by learning word2vec skip-gram model Mikolov et al. (2013) on the
tri-gram444Choice of $n=3$ is empirical. We report supporting experimental
results in Gupta et al. (2021). POS sequence. For the skip-gram model, we set
context window $=6$, embedding dimension $=20$, and discard the POS sequence
with frequency $\leq 2$. Subsequently, we compute dependency embeddings with
dimension $=20$ using Transformer Vaswani et al. (2017) encoder with $5$
attention heads. Please note that the choice of using Bi-LSTM, as opposed to
Transformers, for extracting the POS features is empirical555Gupta et al.
(2021) accompanies the supporting results..
Models | Noisy | Semi-Noisy | Non-Noisy | Wt Avg
---|---|---|---|---
Twitter | OC | WTP | MT | PE | VG | WD
$m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$
BERT | 0.60 | 0.83 | 0.52 | 0.24 | 0.53 | 0.32 | 0.70 | 0.63 | 0.69 | 0.64 | 0.58 | 0.43 | 0.48 | 0.22 | 0.58 | 0.73
BERT + POS | 0.61 | 0.84 | 0.53 | 0.24 | 0.54 | 0.31 | 0.75 | 0.69 | 0.72 | 0.64 | 0.59 | 0.43 | 0.51 | 0.24 | 0.60 | 0.74
BERT + Dependency | 0.59 | 0.82 | 0.51 | 0.23 | 0.52 | 0.30 | 0.79 | 0.73 | 0.69 | 0.62 | 0.56 | 0.41 | 0.48 | 0.22 | 0.57 | 0.72
POS + Dependency | 0.45 | 0.70 | 0.48 | 0.19 | 0.47 | 0.25 | 0.57 | 0.46 | 0.50 | 0.45 | 0.56 | 0.41 | 0.44 | 0.17 | 0.48 | 0.61
LESA (Combined-view) | 0.61 | 0.85 | 0.51 | 0.23 | 0.53 | 0.31 | 0.77 | 0.71 | 0.71 | 0.64 | 0.57 | 0.40 | 0.48 | 0.22 | 0.59 | 0.75
LESA(768dim) | 0.58 | 0.80 | 0.52 | 0.24 | 0.57 | 0.29 | 0.77 | 0.71 | 0.73 | 0.65 | 0.60 | 0.43 | 0.52 | 0.25 | 0.59 | 0.71
LESA(32dim) | 0.62 | 0.85 | 0.53 | 0.24 | 0.55 | 0.32 | 0.77 | 0.69 | 0.74 | 0.66 | 0.68 | 0.41 | 0.52 | 0.25 | 0.61 | 0.75
Table 5: Macro F1 ($m\text{-}F1$) and Claim-F1 ($c\text{-}F1$) for ablation
studies.
The outputs of the POS and dependency embedding layers are subsequently fed to
a BiLSTM and GlobalAveragePooling layers, respectively. Their respective
outputs are projected to a 32-dimensional representation for the fusion. We
employ HuggingFace’s BERT implementation for computing the tweet
representation. The 768-dimensional embedding is projected to a
32-dimensional representation using linear layers. We proceed with the
32-dimensional representation of BERT as we observe no improvement in results
when using the 768-dimensional representation, as can be seen in Table 5.
the latter results in $\sim 2$ million trainable parameters, whereas the
former requires $\sim 1.2$ million trainable parameters. We employ sparse
categorical cross-entropy loss with Adam optimizer and use softmax for the
final classification.666 Gupta et al. (2021) accompanies other
hyperparameters. For evaluation, we adopt macro-F1 ($m\text{-}F1$) and
claim-F1 ($c\text{-}F1$) scores used by the existing methods Daxenberger et
al. (2017); Chakrabarty et al. (2019).
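For binary claim labels (1 = claim, 0 = non-claim), the two reported metrics
can be sketched as below; this mirrors what sklearn’s `f1_score` computes, and
the toy labels are only for illustration.

```python
def f1(y_true, y_pred, positive):
    """F1 for one class treated as the positive label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def claim_and_macro_f1(y_true, y_pred):
    c_f1 = f1(y_true, y_pred, positive=1)               # claim-F1
    m_f1 = (c_f1 + f1(y_true, y_pred, positive=0)) / 2  # macro-F1
    return m_f1, c_f1

m, c = claim_and_macro_f1([1, 1, 0, 0], [1, 0, 0, 0])
```

Reporting both matters: on skewed test sets like Twitter (mostly claims), a
high claim-F1 can coexist with a mediocre macro-F1.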
We perform our experiments in two setups. In the first in-domain setup, we
train, validate and test on the same dataset and repeat it for all seven
datasets independently. In the second general-domain setup, we combine all
datasets and train a unified generic model. Subsequently, we evaluate the
trained model on all seven datasets individually.
Furthermore, for each experiment, we ensure a balanced training set by
down-sampling the dominant class to a $1:1$ ratio. However, we use the original test
set for a fair comparison against the existing baselines and state-of-the-art
models.
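The balancing step can be sketched as follows; the helper name and the fixed
seed are ours, added only so the sketch is reproducible.

```python
import random

def downsample(examples, labels, seed=42):
    """Down-sample the dominant class so all classes appear at a 1:1 ratio.
    Returns a shuffled list of (example, label) pairs."""
    random.seed(seed)
    by_label = {}
    for x, y in zip(examples, labels):
        by_label.setdefault(y, []).append(x)
    k = min(len(v) for v in by_label.values())   # size of the rarest class
    balanced = [(x, y) for y, xs in by_label.items()
                for x in random.sample(xs, k)]
    random.shuffle(balanced)
    return balanced

# 5 non-claims vs. 2 claims -> 2 of each after balancing
pairs = downsample(list(range(7)), [0, 0, 0, 0, 0, 1, 1])
```

Only the training set is balanced this way; the test set keeps its original
distribution so the comparison against baselines stays fair.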
## 6 Experimental Results
Table 5 shows $m\text{-}F1$ and $c\text{-}F1$ for different variants of LESA.
We begin with a fine-tuned BERT model and observe the performance on test sets
of all seven datasets. On the Twitter dataset, the BERT architecture yields
$m\text{-}F1$ score of 0.60 and $c\text{-}F1$ score of 0.83. We also report
the weighted-average score as 0.58 $m\text{-}F1$ and 0.73 $c\text{-}F1$, in
the last two columns of Table 5. Since we hypothesize that claim detection has
a strong association with the structure of the text, we amalgamate POS and
dependency (DEP) information with the BERT architecture in a step-wise manner.
The BERT+POS model reports an increase of 1% $m\text{-}F1$ and $c\text{-}F1$
scores on the Twitter dataset. We observe similar trends in other datasets and
the overall weighted-average score as well. We also perform experiments on
other permutations, and their results are listed in Table 5. Finally, we
combine both POS and DEP modules with the BERT architecture (aka. LESA). It
obtains improved results for most of the cases, as shown in the last row of
Table 5. The best result on average stands at 0.61 $m\text{-}F1$ and 0.75
$c\text{-}F1$ for the proposed LESA model. This validates our hypothesis that
combining syntactic and semantic representations leads to better detection of
claims.
Models | Noisy | Semi-Noisy | Non-Noisy | Wt Avg
---|---|---|---|---
Twitter | OC | WTP | MT | PE | VG | WD
$m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$
BERT | 0.50 | 0.67 | 0.50 | 0.24 | 0.36 | 0.27 | 0.75 | 0.69 | 0.73 | 0.67 | 0.61 | 0.48 | 0.48 | 0.23 | 0.52 | 0.62
XLNet | 0.52 | 0.70 | 0.45 | 0.24 | 0.55 | 0.30 | 0.49 | 0.43 | 0.71 | 0.64 | 0.53 | 0.43 | 0.51 | 0.12 | 0.54 | 0.64
Accenture | 0.48 | 0.15 | 0.44 | 0.16 | 0.50 | 0.23 | 0.48 | 0.28 | 0.45 | 0.11 | 0.39 | 0.27 | 0.34 | 0.11 | 0.46 | 0.15
Team Alex | 0.70 | 0.88 | 0.46 | 0.23 | 0.52 | 0.21 | 0.75 | 0.64 | 0.69 | 0.64 | 0.32 | 0.38 | 0.60 | 0.34 | 0.59 | 0.76
Check Square | 0.12 | 0.02 | 0.49 | 0.25 | 0.39 | 0.26 | 0.57 | 0.32 | 0.47 | 0.11 | 0.32 | 0.37 | 0.76 | 0.56 | 0.35 | 0.07
CrossDomain | 0.67 | 0.84 | 0.61* | 0.26* | 0.59* | 0.29* | 0.79* | 0.67* | 0.74* | 0.61* | 0.66* | 0.45* | 0.63* | 0.29* | 0.65 | 0.74
CrossDomain† | 0.67 | 0.84 | 0.50 | 0.24 | 0.52 | 0.27 | 0.85 | 0.79 | 0.71 | 0.63 | 0.60 | 0.46 | 0.59 | 0.31 | 0.61 | 0.74
LESA | 0.67 | 0.89 | 0.51 | 0.26 | 0.57 | 0.33 | 0.80 | 0.71 | 0.73 | 0.67 | 0.68 | 0.52 | 0.61 | 0.35 | 0.63 | 0.79
Table 6: Macro F1 ($m\text{-}F1$) and F1 for claims ($c\text{-}F1$) in the in-
domain setup. For CrossDomain, the asterisk (*) indicates results taken from
Daxenberger et al. (2017) and the dagger (${\dagger}$) represents the
reproduced results.
Model | Noisy | Semi-Noisy | Non-Noisy | Wt Avg
---|---|---|---|---
Twitter | OC | WTP | MT | PE | VG | WD
$m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$
BERT | 0.60 | 0.83 | 0.52 | 0.24 | 0.53 | 0.32 | 0.70 | 0.63 | 0.69 | 0.64 | 0.58 | 0.43 | 0.48 | 0.22 | 0.58 | 0.73
XLNet | 0.59 | 0.81 | 0.56 | 0.28 | 0.57 | 0.29 | 0.68 | 0.69 | 0.71 | 0.64 | 0.61 | 0.44 | 0.52 | 0.25 | 0.59 | 0.72
Accenture | 0.49 | 0.43 | 0.31 | 0.12 | 0.40 | 0.18 | 0.36 | 0.13 | 0.51 | 0.36 | 0.38 | 0.17 | 0.37 | 0.04 | 0.43 | 0.38
Team Alex | 0.54 | 0.75 | 0.54 | 0.25 | 0.54 | 0.30 | 0.71 | 0.65 | 0.71 | 0.63 | 0.61 | 0.43 | 0.48 | 0.19 | 0.57 | 0.67
Check Square | 0.58 | 0.82 | 0.51 | 0.23 | 0.48 | 0.28 | 0.56 | 0.53 | 0.68 | 0.59 | 0.56 | 0.38 | 0.47 | 0.21 | 0.56 | 0.72
CrossDomain | 0.65 | 0.82 | 0.57 | 0.27 | 0.53 | 0.28 | 0.71 | 0.63 | 0.66 | 0.57 | 0.61 | 0.43 | 0.52 | 0.25 | 0.60 | 0.71
LESA | 0.62 | 0.85 | 0.53 | 0.24 | 0.55 | 0.32 | 0.77 | 0.69 | 0.74 | 0.66 | 0.68 | 0.41 | 0.52 | 0.25 | 0.61 | 0.75
Table 7: Macro F1 ($m\text{-}F1$) and Claim-F1 ($c\text{-}F1$) in the general-
domain setup.
In all aforementioned experiments, we use our pre-defined concept of three
viewpoints, i.e., noisy, semi-noisy and non-noisy. Therefore, for
completeness, we also construct a combined viewpoint which does not contain
any structure-specific pillar in POS or DEP branches. The results from this
ablation experiment are reported in LESA (Combined-view) row. We observe that
the combined-view results are inferior to the variant with separate viewpoints
for each component (c.f. second last and last row of Table 5 respectively).
Thus, attending to datasets based on the noise in their content proves
beneficial, as demonstrated by a significant increase of $\sim 2\%$
$m\text{-}F1$ from the combined-viewpoint to the separate-viewpoints
experiment.
### A. Baselines and Comparative Analysis
We employ the following baselines (some of them being state-of-the-art systems
for claim detection and text classification): $\triangleright$ XLNet Yang et
al. (2019): It is similar to the BERT model, where we fine-tune XLNet for the
claim detection; $\triangleright$ Accenture Williams et al. (2020): A RoBERTa-
based system that ranked first in the CLEF-2020 claim detection task Barrón-
Cedeño et al. (2020); $\triangleright$ Team Alex Nikolov et al. (2020): The
second-ranked system at CLEF-2020 task that fused tweet meta-data into RoBERTa
for the final prediction; $\triangleright$ CheckSquare Cheema et al. (2020):
An SVM-based system designed on top of pre-trained BERT embeddings in addition
to incorporating POS and dependency tags as external features.
$\triangleright$ CrossDomain Daxenberger et al. (2017): Among several
variations reported in the paper, their best model incorporates CNN (random
initialization) for the detection. We reproduce the top submissions from
CLEF-2020 challenge using the best performing models mentioned in the
referenced papers. Code for CheckSquare was provided online. For Accenture and
Team Alex, we reproduce their methods using the hyper-parameters mentioned in
the respective papers. We evaluate all baselines using the same train and test
sets as for LESA.
Models | Noisy | Semi-Noisy | Non-Noisy
---|---|---|---
$m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$ | $m\text{-}F1$ | $c\text{-}F1$
BERT | 0.60 | 0.83 | 0.52 | 0.29 | 0.63 | 0.58
XLNet | 0.59 | 0.81 | 0.57 | 0.29 | 0.65 | 0.59
Accenture | 0.49 | 0.43 | 0.36 | 0.16 | 0.45 | 0.30
Team Alex | 0.54 | 0.75 | 0.54 | 0.28 | 0.65 | 0.57
CheckSquare | 0.58 | 0.82 | 0.49 | 0.26 | 0.61 | 0.53
CrossDomain | 0.65 | 0.82 | 0.55 | 0.28 | 0.63 | 0.53
LESA | 0.62 | 0.85 | 0.54 | 0.29 | 0.69 | 0.60
Table 8: Category-wise weighted-average F1 scores.
| Example | Gold | Prediction
---|---|---|---
| LESA | CrossDomain
TWR | $x_{1}$ | 28 coronaoutbreak cases thus far in india italian tourists 16 their driver 1 kerala 3 cureddischarged agra 6 delhi 1 noida school dad telangana 1 coronavirusindia | 1 | 0 | 0
$x_{2}$ | can we just call this a cure now | 0 | 0 | 1
MT | $x_{3}$ | Besides it should be in the interest of the health insurers to recognize alternative medicine as treatment, since there is a chance of recovery. | 0 | 1 | 1
PE | $x_{4}$ | On the other hand, fossil fuels are abundant and inexpensive in many areas | 0 | 1 | 1
$x_{5}$ | Daily exercise will help also to develop children’s brain function. | 1 | 1 | 0
OC | $x_{6}$ | Skinny Puppy is headlining Festival Kinetik ! | 0 | 1 | 1
$x_{7}$ | I guess I’m not desensitized enough to just forget about people being murdered in my neighborhood. | 1 | 1 | 0
WD | $x_{8}$ | No wonder 50 million babies have been aborted since 1973 . | 0 | 1 | 1
Table 9: Error analysis of the outputs. Red texts highlight errors.
We report our comparative analysis for the in-domain setup in Table 6. We
observe that LESA obtains the best $c\text{-}F1$ scores for six out of seven
datasets. Additionally, it achieves a weighted-average $c\text{-}F1$ of
$0.79$, a $3.95\%$ improvement over the best performing baseline. In terms of
$m\text{-}F1$ values, our weighted average ranks second, behind CrossDomain.
We reproduced the CrossDomain baseline using its GitHub code UKPLab (2017); if
the reproduced values are considered, our model outperforms all other models
in $m\text{-}F1$ as well.
Similarly, we compile the results for the general-domain setup in Table 7. In
the non-noisy category, LESA obtains better $m\text{-}F1$ scores than three of
the four state-of-the-art systems, i.e., it reports $0.77$, $0.74$, and $0.68$
$m\text{-}F1$ scores compared to $0.71$, $0.71$, and $0.61$ $m\text{-}F1$
scores of the comparative systems on MT, PE, and VG test sets, respectively.
On WD, we observe similar $m\text{-}F1$ and $c\text{-}F1$ scores for both the
best baseline and LESA. On the datasets in other categories, we observe
comparable $m\text{-}F1$ scores; however, none of the baselines is
consistent across all datasets – e.g., CrossDomain Daxenberger et al. (2017)
reports the best $m\text{-}F1$ scores on Twitter and OC, but yields (joint)
fourth-best performance on WTP. Moreover, LESA yields the best $m\text{-}F1$
score across the seven datasets on average with $\geq 1\%$ improvements. On
the other hand, we obtain best $c\text{-}F1$ scores for five out of seven
datasets. In addition, LESA reports overall $c\text{-}F1$ of 0.75 with a
significant improvement of $\geq 3\%$. Using a paired t-test, LESA showed a
statistically significant improvement over BERT in $m\text{-}F1$ and
$c\text{-}F1$ for the noisy dataset, with p-values of $0.00017$ and
$<0.00001$, respectively. Results were also significant for $m\text{-}F1$ and
$c\text{-}F1$ for PE and $m\text{-}F1$ for WD. The small sample size in some
datasets like MT and VG does not allow a reliable calculation of test
statistics.
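The paired t-test used for the comparison above can be reproduced with SciPy; this is a hedged sketch, not the authors' code, and the per-run F1 scores below are hypothetical, for illustration only.

```python
# Paired t-test over matched per-run (e.g., per-fold or per-seed) F1 scores
# of two models. All scores here are hypothetical.
from scipy import stats

lesa_f1 = [0.61, 0.63, 0.62, 0.65, 0.60]
bert_f1 = [0.58, 0.60, 0.59, 0.61, 0.57]

t_stat, p_value = stats.ttest_rel(lesa_f1, bert_f1)
# A small p-value (e.g., < 0.05) indicates the paired difference is
# statistically significant; very small samples (as for MT and VG)
# make this test unreliable, as noted above.
```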
Since our work aims to develop a model that is able to detect claims
irrespective of the source and origin of the text, we also analyse the
weighted-average scores for each category in Table 8. We observe that LESA
obtains the best $c\text{-}F1$ score in each category, in addition to the best
$m\text{-}F1$ score in the non-noisy category. For the other two categories,
LESA yields comparable performance. The results are better for noisy data than
for non-noisy data owing to the small size of the latter's test set and its
skewness against claims; misclassification of even a single claim therefore
severely penalizes $c\text{-}F1$.
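For concreteness, the two reported metrics can be sketched as follows: $c\text{-}F1$ is the F1 of the claim class and $m\text{-}F1$ the unweighted (macro) mean of per-class F1; we assume here that the category averages are weighted by test-set size. All labels, scores, and sizes below are hypothetical.

```python
import numpy as np

def class_f1(y_true, y_pred, label):
    """Precision, recall, and F1 for a single class label."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == label) & (y_true == label))
    fp = np.sum((y_pred == label) & (y_true != label))
    fn = np.sum((y_pred != label) & (y_true == label))
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0

y_true = [1, 0, 1, 1, 0, 0, 1]   # hypothetical gold labels (1 = claim)
y_pred = [1, 0, 0, 1, 0, 1, 1]

c_f1 = class_f1(y_true, y_pred, 1)                 # claim-F1
m_f1 = 0.5 * (c_f1 + class_f1(y_true, y_pred, 0))  # macro-F1

# Test-set-size-weighted average across the datasets in a category:
scores = np.array([0.62, 0.54, 0.69])   # per-dataset scores (hypothetical)
sizes = np.array([900, 300, 450])       # test-set sizes (hypothetical)
weighted_avg = float(np.average(scores, weights=sizes))
```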
### B. Error Analysis
It is apparent from the results that all systems (including LESA) committed
some errors in claim detection. Thus, in this section, we explore where our
system misclassified the inputs by analysing some examples. Table 9 presents a
few instances along with the gold labels and the predictions of the best-
performing baseline, CrossDomain Daxenberger et al. (2017), for comparison. In
some cases, both LESA and CrossDomain failed to classify the instances
correctly, whereas in others, LESA classified the instances correctly but
CrossDomain did not. We also report intuitions for the misclassifications by
LESA in some cases. The presence of numbers and statistics could be the reason
behind the misclassifications in examples $x_{1}$ and $x_{8}$. Example $x_{3}$
contains two weak phrases (‘alternative medicine as treatment’ and ‘there is a
chance of recovery’) which are most likely the cause of misclassification. The
former might have been interpreted as a suggestion backed up by some evidence,
while in the latter phrase, LESA might have misinterpreted the optimism as a
claim. Furthermore, the phrase ‘fossil fuels are abundant’ in example $x_{4}$
reflects world knowledge rather than a claim, contrary to LESA’s
interpretation.
## 7 Conclusion
In this paper, we addressed the task of claim detection from online posts. To
do this, we proposed a generic and novel deep neural framework, LESA, that
leverages a pre-trained language model and two linguistic features,
corresponding to the syntactic properties of the input text, for the final
classification. Additionally, we tackled texts from distinct sources for
the claim detection task in a novel way. In particular, we categorized the
input text as noisy, non-noisy, and semi-noisy based on the source, and
modeled them separately. Subsequently, we fused them together through an
attention module as the combined representation.
One of the major bottlenecks of claim detection in online social media
platforms is the lack of qualitative annotation guidelines and a sufficiently
large annotated dataset. Therefore, we developed a large Twitter dataset of
$\sim 10,000$ manually annotated tweets for claim detection. In addition to
our Twitter dataset, we employed six benchmark datasets (representing either
semi-noisy or non-noisy input channels) for evaluation of the proposed model.
We compared the performance of LESA against four state-of-the-art systems and
two pre-trained language models. Comparison showed the superiority of the
proposed model with $\geq 3\%$ claim-F1 and $\geq 1\%$ macro-F1 improvements
compared to the best performing baselines on average. As a by-product of the
study, we released a comprehensive guideline for claim annotation.
## Acknowledgement
We would like to thank Rituparna and LCS2 members for helping in data
annotation. The work was partially supported by Accenture Research Grant,
Ramanujan Fellowship, and CAI, IIIT-Delhi.
## References
* Alam et al. (2020) Firoj Alam, Shaden Shaar, Fahim Dalvi, Hassan Sajjad, Alex Nikolov, Hamdy Mubarak, Giovanni Da San Martino, Ahmed Abdelali, Nadir Durrani, Kareem Darwish, and Preslav Nakov. 2020. Fighting the covid-19 infodemic: Modeling the perspective of journalists, fact-checkers, social media platforms, policy makers, and the society. _arXiv preprint arXiv:2005.00033_.
* Barrón-Cedeño et al. (2020) Alberto Barrón-Cedeño, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, and Fatima Haouari. 2020. Checkthat! at clef 2020: Enabling the automatic identification and verification of claims in social media. In _Advances in Information Retrieval_ , pages 499–507, Cham. Springer International Publishing.
* Baum et al. (2020) Matthew A. Baum, Katherine Ognyanova, Hanyu Chwe, Alexi Quintana, Roy H. Perlis, David Lazer, James Druckman, Mauricio Santillana, Jennifer Lin, John Della Volpe, Matthew Simonson, and Jon Green. 2020. The state of the nation: A 50-state covid-19 survey report #14: Misinformation and vaccine acceptance.
* Biran and Rambow (2011) O. Biran and O. Rambow. 2011. Identifying justifications in written dialogs. In _2011 IEEE Fifth International Conference on Semantic Computing_ , pages 162–168.
* Biran and Rambow (2011) Or Biran and Owen Rambow. 2011. Identifying justifications in written dialogs by classifying text as argumentative. _International Journal of Semantic Computing_ , 05:363–381.
* Carlson (2020) Carlson. 2020. Coronavirus tweets.
* Celin (2020) Sven Celin. 2020. Covid-19 tweets afternoon 31.03.2020.
* Chakrabarty et al. (2019) Tuhin Chakrabarty, Christopher Hidey, and Kathleen McKeown. 2019. Imho fine-tuning improves claim detection. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 558–563.
* Cheema et al. (2020) Gullal S. Cheema, Sherzod Hakimov, and Ralph Ewerth. 2020. Check_square at checkthat! 2020: Claim detection in social media via fusion of transformer and syntactic features. _arXiv: 2007.10534_.
* Chen et al. (2020) Emily Chen, Kristina Lerman, and Emilio Ferrara. 2020. Tracking social media discourse about the covid-19 pandemic: Development of a public coronavirus twitter data set. _JMIR Public Health Surveill_ , 6(2):e19273.
* Cohen (1960) J. Cohen. 1960. A Coefficient of Agreement for Nominal Scales. _Educational and Psychological Measurement_ , 20(1):37.
* Daxenberger et al. (2017) Johannes Daxenberger, Steffen Eger, Ivan Habernal, Christian Stab, and Iryna Gurevych. 2017. What is the essence of a claim? cross-domain claim identification. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 2055–2066.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Govier (2013) Trudy Govier. 2013. _A practical study of argument_. Cengage Learning.
* Gupta et al. (2021) Shreya Gupta, Parantak Singh, Megha Sundriyal, Md Shad Akhtar, and Tanmoy Chakraborty. 2021. LESA: Linguistic Encapsulation and Semantic Amalgamation Based Generalised Claim Detection from Online Content (Supplementary).
* Habernal and Gurevych (2015) Ivan Habernal and Iryna Gurevych. 2015. Exploiting debate portals for semi-supervised argumentation mining in user-generated web discourse. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 2127–2137, Lisbon, Portugal. Association for Computational Linguistics.
* Levy et al. (2014) Ran Levy, Yonatan Bilu, Daniel Hershcovich, Ehud Aharoni, and Noam Slonim. 2014. Context dependent claim detection. In _Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers_ , pages 1489–1500.
* Levy et al. (2017) Ran Levy, Shai Gretz, Benjamin Sznajder, Shay Hummel, Ranit Aharonov, and Noam Slonim. 2017. Unsupervised corpus–wide claim detection. In _Proceedings of the 4th Workshop on Argument Mining_ , pages 79–84.
* Lippi and Torroni (2015) Marco Lippi and Paolo Torroni. 2015. Context-independent claim detection for argument mining. In _Twenty-Fourth International Joint Conference on Artificial Intelligence_ , pages 185–191.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_.
* Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. _arXiv: 1301.3781_.
* Nikolov et al. (2020) Alex Nikolov, Giovanni Da San Martino, Ivan Koychev, and Preslav Nakov. 2020. Team alex at clef checkthat! 2020: Identifying check-worthy tweets with transformer models. _arXiv:2009.02931_.
* Peldszus and Stede (2015) Andreas Peldszus and Manfred Stede. 2015. Joint prediction in MST-style discourse parsing for argumentation mining. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 938–948, Lisbon, Portugal. Association for Computational Linguistics.
* Qazi et al. (2020) Umair Qazi, Muhammad Imran, and Ferda Ofli. 2020. Geocov19: a dataset of hundreds of millions of multilingual covid-19 tweets with location information. _SIGSPATIAL Special_ , 12(1):6–15.
* Rosenthal and McKeown (2012) Sara Rosenthal and Kathleen McKeown. 2012. Detecting opinionated claims in online discussions. In _2012 IEEE sixth international conference on semantic computing_ , pages 30–37. IEEE.
* Smith (2020) Shane Smith. 2020. Coronavirus (covid19) tweets - early april.
* Stab and Gurevych (2017) Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. _Computational Linguistics_ , 43(3):619–659.
* Toulmin (2003) Stephen E Toulmin. 2003. _The uses of argument_. Cambridge university press.
* UKPLab (2017) UKPLab. 2017. Ukplab/emnlp2017-claim-identification.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in neural information processing systems_ , pages 5998–6008.
* Whalen and Laland (2015) Andrew Whalen and Kevin Laland. 2015. Conformity biased transmission in social networks. _Journal of Theoretical Biology_ , 380:542–549.
* WHO (2020) WHO. 2020. Immunizing the public against misinformation.
* Williams et al. (2020) Evan Williams, Paul Rodrigues, and Valerie Novak. 2020. Accenture at checkthat! 2020: If you say so: Post-hoc fact-checking of claims using transformer-based models. _arXiv: 2009.02431_.
* Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In _Advances in neural information processing systems_ , pages 5753–5763.
# Influence of Furnace Baking on Q-E Behavior of Superconducting Accelerating
Cavities
H. Ito<EMAIL_ADDRESS>High Energy Accelerator Research Organization (KEK),
305-0801 Tsukuba, Ibaraki, Japan H. Araki High Energy Accelerator Research
Organization (KEK), 305-0801 Tsukuba, Ibaraki, Japan K. Takahashi The
Graduate University for Advanced Studies, SOKENDAI, 305-0801 Tsukuba, Ibaraki,
Japan K. Umemori High Energy Accelerator Research Organization (KEK),
305-0801 Tsukuba, Ibaraki, Japan The Graduate University for Advanced
Studies, SOKENDAI, 305-0801 Tsukuba, Ibaraki, Japan
###### Abstract
The performance of superconducting radio-frequency (SRF) cavities depends on
the niobium surface condition. Recently, various heat-treatment methods have
been investigated to achieve unprecedented high quality factor (Q) and high
accelerating field (E). We report the influence of a new baking process called
furnace baking on the Q-E behavior of 1.3 GHz SRF cavities. Furnace baking is
performed as the final step of the cavity surface treatment; the cavities are
heated in a vacuum furnace for 3 h, followed by high-pressure rinsing and
radio-frequency measurement. This method is simpler and potentially more
reliable than previously reported heat-treatment methods, and it is therefore
easier to apply to SRF cavities. We find that the quality factor is
increased after furnace baking at temperatures ranging from 300∘C to 400∘C,
while a strong decrease in the quality factor at high accelerating fields is
observed after furnace baking at temperatures ranging from 600∘C to 800∘C. We
find significant differences in the surface resistance for various processing
temperatures.
††preprint: AIP/123-QED
A superconducting radio-frequency (SRF) cavity is a key component of particle
accelerators used to generate charged particle beams. An SRF cavity exhibits a
lower energy dissipation and a lower surface resistance ($R_{\rm s}$) under a
radio frequency (RF) field, compared to a normal-conducting accelerating
cavity, which enables continuous-wave operation at a high accelerating field
($E_{\rm acc}$). Owing to decades of research focused on the improvement of
SRF cavities Padamsee (2017); Gurevich (2017), various surface treatment
techniques have been established; thus SRF cavities with superior performance
in terms of the quality factor ($Q_{0}$) and $E_{\rm acc}$ have been developed
Geng _et al._ (2007); Watanabe _et al._ (2013); Kubo _et al._ (2014).
In recent years, further surface treatment techniques, such as nitrogen doping
Grassellino _et al._ (2013); Dhakal (2020), nitrogen infusion Grassellino
_et al._ (2017); Dhakal (2020), and two-step baking Grassellino _et al._
(2018), have been investigated to increase the $Q_{0}$ and $E_{\rm acc}$. The
nitrogen-doped cavities have an extremely high $Q_{0}$ and show an increase in
$Q_{0}$ as a function of $E_{\rm acc}$, which is referred to as the anti-Q
slope. However, the maximum $E_{\rm acc}$ obtained with nitrogen doping is
lower than that of the conventional surface-treated cavity. Moreover, the
nitrogen-doped cavities are highly sensitive to trapped magnetic flux compared
with standard treated cavities Martinello _et al._ (2016). The nitrogen
doping process has been applied in the Linac Coherent Light Source (LCLS$\rm-
II$) cavity fabrication process Bishop _et al._ (2015); Gonnella _et al._
(2018) because it is highly reproducible and has resulted in high $Q_{0}$ and
anti-Q slope in several studies Dhakal (2020). In the nitrogen infusion
technique, the Q-E behavior does not change significantly, and both $Q_{0}$
and $E_{\rm acc}$ are improved compared with those values obtained using the
standard surface treatment methods Dhakal _et al._ (2018); Umemori (2020);
Wenskat _et al._ (2020); however, the reproducibility has been limited to a
few laboratories. The two-step baking process developed at Fermi National
Accelerator Laboratory (FNAL) can produce cavities with a maximum $E_{\rm
acc}$ of approximately 50 MV/m Grassellino _et al._ (2018); Bafia _et al._
(2019), and other laboratories are currently verifying the effectiveness of
two-step baking.
In the typical surface treatment planned for the International Linear Collider
(ILC), the following procedure is implemented: after fabricating the SRF
cavity, a 100 $\mu$m layer of the cavity inner surface is removed by bulk
electropolishing; this results in the elimination of the surface layer damaged
during cavity fabrication. After electropolishing, the surface is thoroughly
rinsed with ultrapure water, and ultrasonic cleaning is performed by filling
the cavity with ultrapure water containing a surfactant, followed by
high-pressure ultrapure water
rinsing (HPR). Then, annealing is performed in a vacuum furnace at
approximately 800∘C to desorb the hydrogen that was absorbed on the niobium
surface during electropolishing. Subsequently, light electropolishing is used
to remove a layer of approximately 20 $\mu$m of the cavity inner surface to
eliminate dirt from the inner surface, followed by sufficient water rinsing,
ultrasonic cleaning, HPR, and assembly in a cleanroom. Next, as the final step
in the surface treatment process, the cavity is evacuated and heat-treated at
120∘C for 48 h.
In this study, a new heat-treatment method, which is simpler and more reliable
than the surface treatment method described above, is investigated from the
viewpoint of oxygen diffusion from the niobium oxide layer, and the effects on the
properties of $Q_{0}$, the Bardeen-Cooper-Schrieffer (BCS) resistance ($R_{\rm
BCS}$), and the residual resistance ($R_{\rm res}$) for each $E_{\rm acc}$ are
studied. In the 1980s and 1990s, SRF cavities that were heat-treated in vacuum
in the range of 250 to 300∘C and 1100 to 1400∘C were investigated to
understand the effect of an oxide layer on the SRF cavity performance Palmer
(1987); Palmer _et al._ (1990); Palmer and Tigner (1985). It was revealed
that the heat-treatment at 250∘C dissolves the oxide layer and decreases
$R_{\rm BCS}$. Therefore, it is expected that the heat treatment in this study
can be performed in the same temperature range to create a cavity with high
$Q_{0}$. We used several 1.3 GHz TESLA and STF (TESLA-like) single-cell
cavities that had undergone various surface treatments. As a first step, a 10
or 20 $\mu$m layer of the cavity inner surface was removed by the light
electropolishing to reset the surface conditions in the cavity, followed by
HPR to eliminate any remaining impurities on the surface. Subsequently, the
cavities were placed in a large vacuum furnace. The inner diameter of the
furnace chamber is $\phi$ 950 mm, and the length is 2080 mm Umemori _et al._
(2018). This vacuum furnace can be depressurized to $1\times 10^{-6}$ Pa at
room temperature using a cryopump. Due to the insufficient cooling capacity of
the cryopump, heat treatment at 800∘C increases the temperature of the
cryopump. In such a case, we switch from the cryopump to a turbomolecular
pump to achieve the desired level of vacuum Umemori _et al._ (2018). A
quadrupole mass spectrometer (Q-mass) was attached to the vacuum furnace to
monitor the partial pressure of each element during heat treatment. The
cavities were baked in a temperature range of 200 to 800∘C for 3 h in this
vacuum furnace. This baking process is referred to as “furnace baking”, which
is different from the “medium-temperature bake” (mid-T bake) that is performed
at FNAL and Institute of High Energy Physics (IHEP) Posen _et al._ (2020);
Zhou _et al._ (2020). The mid-T bake process requires special heat treatment
equipment to perform the RF measurement without exposing the inner surface of
the cavity to the air after heat treatment at 250 to 400∘C, whereas the
furnace baking process is a simple method that can be performed with existing
cavity treatment systems because the heat treatment is performed in a vacuum
furnace. In the first step, the baking time was fixed at 3 h to investigate
the optimum temperature and achieve high $Q_{0}$. To perform the furnace
baking, the cavity temperature was ramped up from room temperature at a ramp
rate of 200∘C/h to a target temperature of 200 to 800∘C. The vacuum was
1-2$\times 10^{-6}$ Pa at room temperature, and it was maintained at the order
of $10^{-4}$ Pa even during baking. After the furnace baking process and
cooling down to below 50∘C, the furnace was purged with $\rm N_{2}$ gas, and
the cavity was packed and placed on the HPR stand for final rinsing before
assembly. A new niobium oxide layer grows on the inner surface during this
step because of exposure to air and water; however, this is not expected to
affect the oxygen diffusion region formed by furnace baking. After assembly,
no further baking was performed, and the RF measurements were conducted.
The cavity was then cooled down by depressurizing liquid helium to 1.5 K,
which is the lowest temperature that can be achieved in the cryostat available
at High Energy Accelerator Research Organization (KEK). Then, the Q-E curve at
each temperature was obtained by measuring the input, reflected, and
transmitted RF powers for 0.1 K temperature increments. Finally, the RF
measurement was performed up to the quench field at 2 K. To minimize the
magnetic flux trapping during the cooling process, the magnetic field around
the cavity was reduced to less than $\sim$1 mG using the magnetic shield and
solenoid coil, and a heater placed at the top beam tube was used to provide a
temperature gradient in the cavity for flux expulsion Huang, Kubo, and Geng
(2016); Posen _et al._ (2016); Dhakal, Ciovati, and Gurevich (2020). Figure 1
shows the Q-E curve for the cavity that was furnace-baked at 350∘C. The cavity
was quenched once during the measurement at 1.5 K, which caused it to trap the
magnetic flux and subsequently decrease the $Q_{0}$. However, higher $Q_{0}$
and anti-Q slope were still clearly observed in the 2 K measurement results
compared to the standard treated cavity (see Fig. 3).
Figure 1: Q-E curve at each temperature for the cavity that was furnace-baked
at 350∘C. Colored circles show Q-E curves at each temperature, and purple
squares show radiation levels at 2 K measurement.
$R_{\rm s}$ at each temperature and $E_{\rm acc}$ is calculated using $R_{\rm
s}=G/Q_{0}$, where G is the geometric factor that is independent of material
properties Padamsee, Knobloch, and Hays (1998). $R_{\rm s}$ can be expressed
as the sum of $R_{\rm BCS}$, which decreases exponentially with temperature,
and $R_{\rm res}$, which is a weak temperature-dependent or temperature-
independent term that cannot be accounted for in $R_{\rm BCS}$. $R_{\rm s}$ is
decomposed into $R_{\rm BCS}$ and $R_{\rm res}$ at each $E_{\rm acc}$ using
the data set at the same $E_{\rm acc}$ from the Q-E curve. The decomposition
is performed using the following fitting equation:
$R_{\rm s}(T)=\frac{A\omega^{2}}{T}e^{-(\Delta/kT)}+R_{\rm res},$ (1)
where A is a fitting constant that depends on superconducting properties, T is
the temperature, $k$ is the Boltzmann constant, 2$\Delta$ is the energy gap of
the superconductor, which is treated as a fitting parameter, and $\omega$ is
the frequency of the cavity. The first term in this equation corresponds to
$R_{\rm BCS}$. The colored curves in the upper figure of Fig. 2 show fitting
parameters for $R_{\rm s}(T)$ at each $E_{\rm acc}$. The red closed circles in
the lower figure of Fig. 2 illustrate the behavior of $R_{\rm res}$ for
$E_{\rm acc}$. The blue closed circles depict the behavior of $R_{\rm BCS}$,
which decreases sharply as $E_{\rm acc}$ increases in the case of 350∘C
furnace baking. This behavior is considerably different from the behavior of
standard treated cavities (120∘C and 48 h baked cavities), and it is similar
to the behavior observed in a nitrogen-doped cavity Grassellino _et al._
(2013). Red open circles depict the estimated behavior of $R_{\rm res}$ before
the flux trapping; $R_{\rm res}$ is smaller than that of the standard treated
cavities for 350∘C furnace baking.
Figure 2: Temperature dependence of $R_{\rm s}$ for each $E_{\rm acc}$ (upper
figure) and behavior of $R_{\rm BCS}$ and $R_{\rm res}$ for $E_{\rm acc}$
(lower figure). $R_{\rm res}$ before the trap was obtained by subtracting
$R_{\rm BCS}$ obtained after the trap from the $R_{\rm s}$ at 1.5 K before the
trap.
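The decomposition in Eq. (1) amounts to a nonlinear least-squares fit. Below is a minimal sketch, not the authors' analysis code: all numbers are illustrative synthetic data, and the $A\omega^{2}$ prefactor is absorbed into a single amplitude $a$ since the cavity frequency is fixed.

```python
# Fit R_s(T) = (a / T) * exp(-delta / (k_B * T)) + R_res to synthetic data,
# separating the BCS term from the residual resistance.
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617333e-2  # Boltzmann constant [meV/K]

def surface_resistance(T, a, delta, r_res):
    """R_s in nOhm: BCS term (first summand) plus residual resistance."""
    return (a / T) * np.exp(-delta / (K_B * T)) + r_res

# Synthetic "measured" R_s(T) between 1.5 K and 2.1 K (illustrative values):
T_data = np.linspace(1.5, 2.1, 13)
true_params = (1.2e5, 1.5, 5.0)  # a [nOhm K], delta [meV], R_res [nOhm]
R_data = surface_resistance(T_data, *true_params)

popt, _ = curve_fit(surface_resistance, T_data, R_data, p0=(1e5, 1.4, 3.0))
a_fit, delta_fit, r_res_fit = popt
R_bcs_2K = surface_resistance(2.0, a_fit, delta_fit, 0.0)  # BCS term at 2 K
```

Repeating such a fit for the data set at each $E_{\rm acc}$ yields $R_{\rm BCS}$ and $R_{\rm res}$ curves of the kind shown in Fig. 2.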
Figure 3 shows a comparison of the Q-E curves measured at 2 K for cavities
that were furnace-baked at various temperatures (200 to 800∘C) and a standard
treated cavity (120∘C and 48 h baking under vacuum directly followed by RF
measurement, no exposure to air or water). The Q-E behavior of the 200∘C and 3
h furnace-baked cavity (purple points) is similar to that of the standard
treated cavity (black points). This indicates that the conventional
performance can be achieved by replacing the 120∘C and 48 h baking with 200∘C
and 3 h furnace baking, which may be highly beneficial for the mass
production of cavities. Four furnace-baked cavities, baked at 300 to 400∘C,
have high $Q_{0}$ and anti-Q slope, but a low $E_{\rm acc}$ compared with the
standard treated cavity. This behavior is typically associated with nitrogen-
doped cavities Grassellino _et al._ (2013). In particular, 300∘C furnace
baking produces an extremely high Q cavity, with a $Q_{0}$ of over 5$\times
10^{10}$ at 16 MV/m. Furthermore, 300∘C furnace baking has the same effect for
two different cavities, indicating good reproducibility. These results are in
good agreement with those obtained for single-cell cavities that are furnace-
baked in the temperature range of 250 to 400∘C at IHEP He _et al._ (2020).
The high-temperature furnace-baked cavities baked at 600∘C and 800∘C did not
reach high $Q_{0}$, and the Q values were comparable to those of the standard
treated cavity. A phenomenon called high field Q slope (HFQS), in which the
$Q_{0}$ decreases significantly at high $E_{\rm acc}$ safa (2001), was
observed in these cavities. This HFQS is considered to be related to the
diffusion of oxygen and hydrogen on the inner surface of the cavity Benvenuti,
Calatroni, and Ruzinov (2001); Ciovati (2006); Ciovati _et al._ (2010);
Romanenko _et al._ (2013); Checchin and Grassellino (2020), and the
HFQS-suppressing effect appears to diminish owing to oxygen diffusion at high
temperatures.
These results indicate that diverse Q-E behaviors were obtained when the
furnace baking temperature was changed from 200 to 800∘C. In particular,
furnace baking at 300 to 400∘C resulted in high $Q_{0}$ and anti-Q slope. The
low $E_{\rm acc}$ is similar to that of the nitrogen-doped cavity and may be
related to the low superheating field in the dirty limit Kubo (2020a).
Theoretical considerations suggest that the $Q_{0}$ varies depending on the
cavity surface condition Gurevich and Kubo (2017); Kubo and Gurevich (2019);
Kubo (2020b).
Figure 3: Comparison of Q-E behavior measured at 2 K for cavities that were
furnace-baked at various temperatures (200 to 800∘C) and a standard treated
cavity (120∘C and 48 h baking). Colored closed points show the Q-E curve for
each furnace-baked cavity, and the colored opened points show the radiation
levels corresponding to each colored closed point.
The upper panel of Fig. 4 shows the relationship between $R_{\rm BCS}$ at 2 K
and $E_{\rm acc}$ for each furnace-baked and standard treated cavity. $R_{\rm
BCS}$ behavior is classified into three types: one that increases with
increasing $E_{\rm acc}$, one that decreases with increasing $E_{\rm acc}$,
and one that does not increase as much as the first type. The 200∘C furnace-
baked cavity and the standard treated cavity correspond to the first type
mentioned above, and the slope of $R_{\rm BCS}$ is steep compared with those
obtained for other cavities. The 300 to 400∘C furnace-baked cavities
correspond to the second type. In these cavities, $R_{\rm BCS}$ decreases as
$E_{\rm acc}$ increases, which is the origin of the anti-Q slope. This
behavior is the most pronounced in 300∘C furnace-baked cavities. The 600∘C and
800∘C furnace-baked cavities correspond to the third type. These cavities
already have a high $R_{\rm BCS}$ at low $E_{\rm acc}$; however, the slope is
less steep compared with the first type. From these results, it was found that
$R_{\rm BCS}$ behavior varies significantly with differences in the baking
temperature, resulting in the variation of Q-E behavior. Further, it was
suggested that there is an inflection point between 200∘C and 300∘C, where the
behavior of $R_{\rm BCS}$ changes significantly, and that a similar inflection
point exists in the region between 400∘C and 600∘C. The lower panel of Fig. 4
shows the relationship between $R_{\rm res}$ and $E_{\rm acc}$ for each
furnace-baked cavity and standard treated cavity. It was found that $R_{\rm
res}$ is lower for all the furnace-baked cavities compared with that of the
standard treated cavity, and $R_{\rm res}$ behavior changes with differences
in baking temperature but not as drastically as $R_{\rm BCS}$ behavior.
Notably, the 600∘C furnace-baked cavity has an extremely low $R_{\rm res}$ of
0.2 $\rm n\Omega$, which corresponds to a $Q_{0}$ of over $1\times 10^{12}$.
Because $R_{\rm res}$ dominates $R_{\rm s}$ at temperatures of approximately 1
K, this 600∘C furnace baking has the potential to be a useful processing
method for superconducting devices, such as those used at cryogenic
temperatures in the mK region rather than the SRF accelerator application
operating at 2 K.
Figure 4: $R_{\rm BCS}$ behavior at 2 K for $E_{\rm acc}$ for each furnace-
baked cavity and standard treated cavity (upper panel). Relationship between
$R_{\rm res}$ and $E_{\rm acc}$ for each furnace-baked cavity and standard
treated cavity (lower panel).
The sensitivity of the mid-T (300 to 400∘C) furnace-baked cavity was estimated
by measuring the Q-E curve after cooling slowly in a 20 mG field. (Slow
cooling allowed 95% of the magnetic field to be trapped in the cavity.)
sensitivity $S$ describes the amount of increase in $R_{\rm s}$ per unit of
trapped field $B_{\rm trap}$ and can be expressed as
$S=\frac{\Delta R_{\rm s}}{B_{\rm trap}}.$ (2)
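Since $R_{\rm s}=G/Q_{0}$, the sensitivity follows directly from two $Q_{0}$ measurements, one after near-zero-field cooling and one after cooling in an ambient field. A sketch with hypothetical numbers (the geometric factor $G\approx 270~\Omega$ is typical for TESLA-shape cavities; the $Q_{0}$ values are illustrative only):

```python
# Sensitivity S = (R_s_trap - R_s_low) / B_trap, per Eq. (2).
G = 270.0                 # geometric factor [Ohm], typical TESLA shape
Q0_low_field = 3.0e10     # Q_0 with negligible trapped flux (hypothetical)
Q0_trapped = 1.5e10       # Q_0 after slow cooling in a 20 mG field (hypothetical)
B_trap = 0.95 * 20.0      # trapped field [mG]: 95% of the ambient field

R_s_low = G / Q0_low_field * 1e9    # surface resistance [nOhm]
R_s_trap = G / Q0_trapped * 1e9
S = (R_s_trap - R_s_low) / B_trap   # sensitivity [nOhm/mG]
```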
Figure 5 shows the measurement results of the sensitivity of the mid-T
furnace-baked cavity and the comparison to a standard treated cavity. The
mid-T furnace-baked cavities have a high sensitivity compared with the
standard treated cavity and resemble the nitrogen-doped cavity Martinello _et
al._ (2016). The sensitivity of the 300∘C furnace-baked cavity is higher than
that of the nitrogen-doped cavity. When such a high-sensitivity cavity is
installed as an accelerator component, it is necessary to consider the effect
of magnetic flux trapping due to the ambient magnetic field more carefully
than for the standard treated cavity, because the ideal magnetic shielding
achieved in the RF measurement of this study is difficult to realize in real
accelerator components.
Figure 5: Sensitivity of mid-T furnace-baked cavities and standard treated
cavity.
In this study, a new heat-treatment process called furnace baking has been
investigated at several baking temperatures ranging from 200 to 800∘C and a
baking time of 3 h. The behavior of Q-E is found to be sensitive to the baking
temperature. Furthermore, the quench field changes with the baking
temperature. The mid-T (300 to 400∘C) furnace baking produces a cavity with
high $Q_{0}$ and anti-Q slope. In particular, the 300∘C furnace-baked cavity
has an extremely high $Q_{0}$ of over 5$\times 10^{10}$ at 16 MV/m and 2 K.
The quench field is 20 to 25 MV/m for the mid-T furnace-baked cavities, which
is lower than the quench field of the standard treated cavity. Although the
achievable $E_{\rm acc}$ is low, its high $Q_{0}$ is very impressive, and
combined with the simplicity of the furnace baking procedure, it is clear that
the mid-T furnace baking can be successfully adapted to various SRF
applications in the future. $R_{\rm res}$ is lower for all the furnace-baked
cavities compared with that of the standard treated cavity. For the 600∘C
furnace-baked cavity, an extremely low $R_{\rm res}$ of 0.2 $\rm n\Omega$ is
obtained. The sensitivity of the mid-T furnace-baked cavity is higher than
that of the standard treated cavity and resembles that of the nitrogen-doped
cavity. The 300∘C furnace-baked cavity, which has the highest $Q_{0}$, has a
higher sensitivity than that of the nitrogen-doped cavity. Further studies
will focus on surface analysis of samples to reveal the relationship between
these behaviors and the cavity surface condition. In addition, process
optimization will be performed by varying the baking time.
This work was supported by JSPS Grant-in-Aid for Scientific Research(B) No.
19H04402.
## References
* Padamsee (2017) H. Padamsee, Supercond. Sci. Technol. 30, 053003 (2017).
* Gurevich (2017) A. Gurevich, Supercond. Sci. Technol. 30, 034004 (2017).
* Geng _et al._ (2007) R. L. Geng, G. V. Eremeev, H. Padamsee, and V. D. Shemelin, in _Proc. PAC2007_ (2007) pp. 2337–2339.
* Watanabe _et al._ (2013) K. Watanabe, S. Noguchi, E. Kako, K. Umemori, and T. Shishido, Nuclear Inst. and Methods in Physics Research, A 714, 67 – 82 (2013).
* Kubo _et al._ (2014) T. Kubo, Y. Ajima, H. Inoue, K. Umemori, Y. Watanabe, and M. Yamanaka, in _Proc. IPAC’14_ (2014) pp. 2519–2521.
* Grassellino _et al._ (2013) A. Grassellino, A. Romanenko, D. Sergatskov, O. Melnychuk, Y. Trenikhina, A. Crawford, A. Rowe, M. Wong, T. Khabiboulline, and F. Barkov, Supercond. Sci. Technol. 26, 102001 (2013).
* Dhakal (2020) P. Dhakal, Physics Open 5, 100034 (2020).
* Grassellino _et al._ (2017) A. Grassellino, A. Romanenko, Y. Trenikhina, M. Checchin, M. Martinello, O. S. Melnychuk, S. Chandrasekaran, D. A. Sergatskov, S. Posen, A. C. Crawford, S. Aderhold, and D. Bice, Supercond. Sci. Technol. 30, 094004 (2017).
* Grassellino _et al._ (2018) A. Grassellino, A. Romanenko, D. Bice, O. Melnychuk, A. C. Crawford, S. Chandrasekaran, Z. Sung, D. A. Sergatskov, M. Checchin, and S. Posen, arXiv e-prints , arXiv:1806.09824 (2018).
* Martinello _et al._ (2016) M. Martinello, A. Grassellino, M. Checchin, A. Romanenko, O. Melnychuk, D. A. Sergatskov, S. Posen, and J. F. Zasadzinski, Applied Physics Letters 109, 062601 (2016).
* Bishop _et al._ (2015) P. Bishop, M. Checchin, H. Conklin, A. Crawford, E. Daly, G. Davis, M. Drury, R. Eichhorn, J. Fischer, F. Furuta, G. M. Ge, D. Gonnella, A. Grassellino, C. Grimm, T. Gruber, D. Hall, A. Hocker, G. Hoffstaetter, J. Kaufman, G. Kulina, M. Liepe, J. Maniscalco, M. Martinello, O. Melnychuk, T. O’Connel, J. Ozelis, A. D. Palczewski, P. Quigley, C. Reece, A. Romanenko, M. Ross, A. Rowe, D. Sabol, J. Sears, D. A. Sergatskov, W. Soyars, R. Stanek, V. Veshcherevich, R. Wang, and G. Wu, in _Proc. SRF’15_ (2015) pp. 159–163.
* Gonnella _et al._ (2018) D. Gonnella, S. Aderhold, A. Burrill, E. Daly, K. Davis, A. Grassellino, C. Grimm, T. Khabiboulline, F. Marhauser, O. Melnychuk, A. Palczewski, S. Posen, M. Ross, D. Sergatskov, A. Sukhanov, Y. Trenikhina, and K. Wilson, Nuclear Inst. and Methods in Physics Research, A 883, 143 – 150 (2018).
* Dhakal _et al._ (2018) P. Dhakal, S. Chetri, S. Balachandran, P. J. Lee, and G. Ciovati, Phys. Rev. Accel. Beams 21, 032001 (2018).
* Umemori (2020) K. Umemori, “New results of kek infusion and mid-t bake,” in _TTC 2020_ (2020).
* Wenskat _et al._ (2020) M. Wenskat, C. Bate, A. D. Pandey, A. Jeromin, T. F. Keller, J. Knobloch, J. Köszegi, F. Kramer, O. Kugeler, S. Kulkarni, D. Reschke, J. Schaffran, G. D. L. Semione, S. Sievers, L. Steder, A. Stierle, and N. Walker, Supercond. Sci. Technol. 33, 115017 (2020).
* Bafia _et al._ (2019) D. Bafia, A. Grassellino, O. Melnychuk, A. Romanenko, Z.-H. Sung, and J. Zasadzinski, in _Proc. SRF’19_ , 19 (2019) pp. 586–591.
* Palmer (1987) F. Palmer, IEEE Transactions on Magnetics 23, 1617–1619 (1987).
* Palmer _et al._ (1990) F. Palmer, R. Kirby, F. King, and E. L. Garwin, Nuclear Inst. and Methods in Physics Research, A 297, 321 – 328 (1990).
* Palmer and Tigner (1985) F. Palmer and M. Tigner, IEEE Transactions on Magnetics 21, 1011–1013 (1985).
* Umemori _et al._ (2018) K. Umemori, M. Egi, E. Kako, T. Konomi, S. Michizono, and H. Sakai, in _Proc. 29th International Linear Accelerator Conference_ (2018).
* Posen _et al._ (2020) S. Posen, A. Romanenko, A. Grassellino, O. Melnychuk, and D. Sergatskov, Phys. Rev. Applied 13, 014024 (2020).
* Zhou _et al._ (2020) Q. Zhou, F.-S. He, W. Pan, P. Sha, Z. Mi, and B. Liu, Radiation Detection Technology and Methods (2020).
* Huang, Kubo, and Geng (2016) S. Huang, T. Kubo, and R. L. Geng, Phys. Rev. Accel. Beams 19, 082001 (2016).
* Posen _et al._ (2016) S. Posen, M. Checchin, A. C. Crawford, A. Grassellino, M. Martinello, O. S. Melnychuk, A. Romanenko, D. A. Sergatskov, and Y. Trenikhina, Journal of Applied Physics 119, 213903 (2016).
* Dhakal, Ciovati, and Gurevich (2020) P. Dhakal, G. Ciovati, and A. Gurevich, Phys. Rev. Accel. Beams 23, 023102 (2020).
* Padamsee, Knobloch, and Hays (1998) H. Padamsee, J. Knobloch, and T. Hays, _RF Superconductivity for Accelerators, Wiley-VCH_ (1998).
* He _et al._ (2020) F. He, W. Pan, P. Sha, J. Zhai, Z. Mi, X. Dai, S. Jin, Z. Zhang, C. Dong, B. Liu, H. Zhao, R. Ge, J. Zhao, Z. Mu, L. Du, L. Sun, L. Zhang, C. Yang, and X. Zheng, arXiv e-prints , arXiv:2012.04817 (2020).
  * Safa (2001) H. Safa, in _Proc. SRF’01_ (2001).
* Benvenuti, Calatroni, and Ruzinov (2001) C. Benvenuti, S. Calatroni, and V. Ruzinov, in _Proc. SRF’01_ (2001).
* Ciovati (2006) G. Ciovati, Applied Physics Letters 89, 022507 (2006).
* Ciovati _et al._ (2010) G. Ciovati, G. Myneni, F. Stevie, P. Maheshwari, and D. Griffis, Phys. Rev. ST Accel. Beams 13, 022002 (2010).
* Romanenko _et al._ (2013) A. Romanenko, F. Barkov, L. D. Cooley, and A. Grassellino, Supercond. Sci. Technol. 26, 035003 (2013).
* Checchin and Grassellino (2020) M. Checchin and A. Grassellino, Applied Physics Letters 117, 032601 (2020).
* Kubo (2020a) T. Kubo, Phys. Rev. Research 2, 033203 (2020a).
* Gurevich and Kubo (2017) A. Gurevich and T. Kubo, Phys. Rev. B 96, 184515 (2017).
* Kubo and Gurevich (2019) T. Kubo and A. Gurevich, Phys. Rev. B 100, 064522 (2019).
* Kubo (2020b) T. Kubo, Phys. Rev. Research 2, 013302 (2020b).
# Quasideterminant Solutions of Noncommutative Equation of Langmuir
Oscillations
Irfan Mahmood and Hira Sohail Centre for High Energy Physics, University of
the Punjab, 54590, Lahore, Pakistan<EMAIL_ADDRESS>
###### Abstract.
In this article we present a noncommutative analogue of the equation of
Langmuir oscillations and its Darboux solutions in additive form, together
with their $N$-th generalization in terms of quasideterminants. We also derive
the noncommutative Riccati equation from its linear system, which yields the
Bäcklund transformation for the commutative version of the equation of
Langmuir oscillations in the classical limit. The last section derives the
reduction of that Bäcklund transformation to the Darboux solutions under the
commutative limit.
Keywords: Noncommutative equation of Langmuir oscillations, Darboux
transformation, Quasideterminants, Riccati equation, Bäcklund transformation
## 1\. Introduction
The differential difference equation
$u_{nt}=u_{n}(u_{n-1}-u_{n+1})$ (1)
appears in the analysis of the spectral structure associated with Langmuir
oscillations in plasma and is completely integrable in the classical
framework, as it possesses a Lax representation [1]. The non-abelian analogue
of equation (1)
$u_{nt}=u_{n-1}u_{n}-u_{n}u_{n+1}$ (2)
has been obtained [2] from the compatibility of the following linear system
$u_{n}\psi_{n+1}=\lambda\psi_{n}-\psi_{n-1}$ (3)
$\psi_{(n)t}=-u_{n}u_{n+1}\psi_{n+2}.$ (4)
together with its connection to the discrete nonlinear Schrödinger equation
[3, 4]. Moreover, a Darboux transformation for this non-abelian version has
been presented in [2] in multiplicative form as
$u_{n}\left[1\right]=\varphi_{n-2}\varphi^{-1}_{n}u_{n}(\varphi_{n-1}\varphi^{-1}_{n+1})^{-1}.$
(5)
which generates solutions only from a non-zero seed solution, that is, $u_{n}\neq 0$.
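As a quick numerical check of equation (2), note that on a periodic lattice the site sum $\sum_{n}u_{n}$ is conserved even for noncommuting (matrix-valued) $u_{n}$, since shifting the summation index gives $\sum_{n}u_{n-1}u_{n}=\sum_{n}u_{n}u_{n+1}$. A minimal sketch with assumed small random matrix initial data:

```python
import numpy as np

def rhs(u):
    """u'_n = u_{n-1} u_n - u_n u_{n+1} on a periodic lattice; u has shape
    (N, d, d): N sites, each carrying a d x d matrix value."""
    return np.roll(u, 1, axis=0) @ u - u @ np.roll(u, -1, axis=0)

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
u = 0.1 * rng.normal(size=(8, 2, 2))  # assumed small random initial data
total0 = u.sum(axis=0)                # matrix-valued conserved quantity
for _ in range(1000):
    u = rk4_step(u, 1e-3)
print(np.abs(u.sum(axis=0) - total0).max() < 1e-8)  # True: sum is conserved
```

Because $\sum_{n}$rhs$(u)$ vanishes identically for any $u$, the Runge-Kutta step preserves the sum exactly up to floating-point roundoff.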
The Darboux method was originally developed in [5] to find transformations of
the potential of the Schrödinger equation that satisfy the Korteweg-de Vries
equation [6] in the framework of the Lax formalism. Later, several remarkable
results on Darboux transformations were analysed in [7], revealing their
importance in the theory of integrable systems, and more efficient
implementations for various nonlinear physical systems were developed by V.
Matveev [8] to construct exact solutions of those systems. These
transformations have been successfully applied in the analysis of various
mathematical features of graphene [9] and also have fruitful applications in
cavity quantum electrodynamics [10, 11] for the dynamical analysis of the
propagation of associated disturbances. Moreover, these transformations have
been significantly extended to construct determinantal solutions of
noncommutative integrable systems, such as the noncommutative Painlevé second
equation [12] and its associated noncommutative Toda equation [13].
In this article, we construct the Darboux transformation (DT) for the
noncommutative analogue of equation (1) in additive form as
$u_{n}\left[1\right]=u_{n}-\varphi^{\prime}_{n}\varphi^{-1}_{n-1}.$ (6)
which generates all possible solutions, even from the zero-background seed
solution $u_{n}=0$, the trivial solution of equation (2); here $\varphi_{n}$
and $\varphi_{n-1}$ are particular solutions at particular values of
$\lambda$. Further, the NC DT (6) is generalized to $N$-th form in terms of
quasideterminants, with an exact solution in the commutative case. The final
section encloses the derivation of the Riccati equation associated with the
noncommutative analogue of equation (1), which yields its Bäcklund
transformation, reducible to the NC DT (6).
## 2\. Noncommutative Darboux Solutions of Equations of Langmuir
Oscillations
For the noncommutative extension of equation (2), we consider $u_{i}$ and the
independent variable $t$ to be purely noncommuting objects, that is,
$[u_{i},t]\neq 0$, and the time derivative is defined as
$\partial_{t}f^{-1}(t)=-f^{-1}\partial_{t}ff^{-1}$; the fields and their
derivatives are also noncommuting elements. From the compatibility condition
of the linear systems (3) and (4) in the noncommutative framework, we obtain
$u_{nt}=u_{n-1}u_{n}-u_{n}u_{n+1}$ (7)
the NC version of equation (2). The Darboux solution of the above equation can
be derived through its associated linear systems with the Darboux
transformation [15] acting on an arbitrary function $\psi_{n}$, defined in the
NC framework as
Under the above transformation (8), the linear system (3) can be written in
the following form
$u_{n}\left[1\right]\psi_{n+1}\left[1\right]=\lambda\psi_{n}\left[1\right]-\psi_{n-1}\left[1\right].$
(9)
Substituting the transformed eigenfunctions from (8) into the above
expression and using system (4), the resulting expression yields the Darboux
transformation on $u_{n}$:
$u_{n}\left[1\right]=u_{n}-\lambda\varphi_{n}\varphi^{-1}_{n-1}.$ (10)
Taking $\lambda\varphi_{n}=\varphi^{\prime}_{n}$, we can also express this
result as
$u_{n}\left[1\right]=u_{n}-\varphi^{\prime}_{n}\varphi^{-1}_{n-1}.$ (11)
The above transformation involves the new solution $u_{n}\left[1\right]$, the
old solution $u_{n}$ of equation (2), also called the seed solution, and the
particular solutions of the linear systems (3) and (4). Comparing the Darboux
solution (11) with the result obtained in [2] reveals a difference: the
transformation (11) is additive and holds for every seed solution, even the
trivial solution $u_{n}=0$ of equation (7), in the NC frame as well as in the
non-abelian case, and also in the classical framework under the commutative
limit.
The one-fold Darboux transformation (8) and its second iteration can be
expressed in terms of quasideterminants. Setting $\psi_{n}=\psi_{0}$,
$\psi_{n+1}=\psi^{\prime}_{0}$, $\varphi_{n}=\psi_{1}$,
$\varphi_{n+1}=\psi^{\prime}_{1}$, and defining $\lambda\psi=\psi^{\prime}$,
we can present the one-fold Darboux transformation in terms of a
quasideterminant as
$\psi_{n}\left[1\right]=\begin{vmatrix}\psi_{0}&\psi_{1}\\\
\lambda_{0}\psi_{0}&{\boxed{\lambda_{1}\psi_{1}}}\end{vmatrix}$ (12)
and the two fold NC Darboux transformation can be evaluated as
$\psi_{n}\left[2\right]=\begin{vmatrix}\psi_{0}&\psi_{1}&\psi_{2}\\\
\lambda_{0}\psi_{0}&\lambda_{1}\psi_{1}&\lambda_{2}\psi_{2}\\\
\lambda^{2}_{0}\psi_{0}&\lambda^{2}_{1}\psi_{1}&{\boxed{\lambda^{2}_{2}\psi_{2}}}\end{vmatrix}$
(13)
which can be further generalized to $N$-th form as
$\psi_{n}\left[N\right]=\begin{vmatrix}\psi_{0}&\psi_{1}&\cdots&\psi_{N-1}&\psi_{N}\\\
\lambda_{0}\psi_{0}&\lambda_{1}\psi_{1}&\cdots&\lambda_{N-1}\psi_{N-1}&\lambda_{N}\psi_{N}\\\
\vdots&\vdots&\cdots&\vdots&\vdots\\\
\lambda^{N-1}_{0}\psi_{0}&\lambda^{N-1}_{1}\psi_{1}&\cdots&\lambda^{N-1}_{N-1}\psi_{N-1}&\lambda^{N-1}_{N}\psi_{N}\\\
\lambda^{N}_{0}\psi_{0}&\lambda^{N}_{1}\psi_{1}&\cdots&\lambda^{N}_{N-1}\psi_{N-1}&{\boxed{\lambda^{N}_{N}\psi_{N}}}\end{vmatrix}$
(14)
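For commuting (scalar) entries, a quasideterminant with boxed $(i,j)$ entry reduces to a ratio of ordinary determinants, $|A|_{ij}=(-1)^{i+j}\det A/\det A^{ij}$, where $A^{ij}$ is $A$ with row $i$ and column $j$ deleted. This gives a quick way to check expressions such as (12)-(14) numerically in the commutative limit; a minimal sketch (the function name and test matrix are our own, with 0-indexed rows and columns):

```python
import numpy as np

def quasideterminant(A, i, j):
    """|A|_{ij} = a_ij - r_i (A^{ij})^{-1} c_j, where A^{ij} is A with row i
    and column j removed, r_i is row i without entry j, and c_j is column j
    without entry i (0-indexed)."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    row = np.delete(A[i, :], j)
    col = np.delete(A[:, j], i)
    return A[i, j] - row @ np.linalg.solve(minor, col)

# Scalar check against the determinant-ratio identity.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
i, j = 2, 2
expected = (-1) ** (i + j) * np.linalg.det(A) / np.linalg.det(A[:2, :2])
print(abs(quasideterminant(A, i, j) - expected) < 1e-12)  # True
```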
In a similar way, the one-fold Darboux solution (6) can be generalized to
$N$-th form in terms of quasideterminants as
$u_{n}\left[N+1\right]=u_{n}\left[N\right]-\psi^{\prime}_{n}\left[N\right]\psi_{n-1}\left[N\right]^{-1}.$
(15)
Here, for $N=0$, $u_{n}\left[0\right]=u_{n}$ is the initial solution, and
$\psi_{n}\left[0\right]=\varphi_{n}$,
$\psi_{n-1}\left[0\right]=\varphi_{n-1}$ are the particular solutions.
Further, we may construct the $N$-fold expression for
$\psi_{n-1}\left[N\right]$ with the help of (8), replacing $n$ by $n-1$ and
setting $\psi_{n-1}=\lambda_{0}\psi_{0}$:
$\psi_{n-1}\left[N\right]=\begin{vmatrix}\psi_{N}&\psi_{N-1}&\cdots&\psi_{1}&\psi_{0}\\\
\lambda_{N}\psi_{N}&\lambda_{N-1}\psi_{N-1}&\cdots&\lambda_{1}\psi_{1}&\lambda_{0}\psi_{0}\\\
\vdots&\vdots&\cdots&\vdots&\vdots\\\
\lambda^{N-1}_{N}\psi_{N}&\lambda^{N-1}_{N-1}\psi_{N-1}&\cdots&\lambda^{N-1}_{1}\psi_{1}&\lambda^{N-1}_{0}\psi_{0}\\\
\lambda^{N}_{N}\psi_{N}&\lambda^{N}_{N-1}\psi_{N-1}&\cdots&\lambda^{N}_{1}\psi_{1}&{\boxed{\lambda^{N}_{0}\psi_{0}}}\end{vmatrix}$
(16)
Here $\psi_{n-1}\left[0\right]=\varphi_{n-1}$ is the initial untransformed
solution, that is, a particular solution of the linear system.
## 3\. NC Riccati equation
To construct the NC Riccati equation, let us start by setting
$R_{n}=\psi_{n}\psi^{-1}_{n-1}$ (17)
Taking the time derivative of the above expression (17), we obtain
$R^{\prime}_{n}=u_{n-1}R_{n}-\lambda u_{n}+\lambda^{2}R_{n}-\lambda
R^{2}_{n}-R_{n}u_{n}$ (18)
using the values from the linear systems (3) and (4), where
$R^{\prime}=R_{t}$.
Now, substituting the value of $(u_{n}-\lambda R_{n})$ from system (3), we
finally obtain the NC Riccati equation associated with equation (7):
$R^{\prime}_{n}=u_{n-1}R_{n}-R_{n}u_{n}+\lambda R_{n-1}-\lambda R^{2}_{n}$
(19)
In the next section, we show the reduction of the above NC Riccati equation
(19) to a Bäcklund transformation under the commutative limit, which can
further be simplified to the Darboux transformation for the classical
analogue of equation (7).
## 4\. NC Riccati equation with commutative limit
It can be shown that for the classical version (1) the Darboux solution (6)
takes the following form
$u_{n}\left[1\right]=u_{n}-\frac{\varphi^{\prime}_{n}}{\varphi_{n-1}}.$ (20)
where $u_{n}$ and $\varphi_{n}$ are scalars satisfying the same linear
systems (3) and (4). The NC Riccati equation under the commutative limit
becomes
$R^{\prime}_{n}=(u_{n-1}-u_{n})R_{n}+\lambda R_{n-1}-\lambda R^{2}_{n}.$ (21)
The above Riccati equation (21) under the charge-parity-time-reversal (CPT)
symmetry transformation [16] becomes
$-R^{\prime}_{n}=(u_{n-1}\left[1\right]-u_{n}\left[1\right])R_{n}+\lambda
R_{n-1}-\lambda R^{2}_{n}.$ (22)
where $u_{n}\left[1\right]$ is the new solution and $R^{\prime}_{n}$ is
replaced by $-R^{\prime}_{n}$. Now, subtracting (21) from (22), we get
$u_{n-1}\left[1\right]-u_{n-1}=(u_{n}\left[1\right]-u_{n})+(\lambda\left[1\right]-\lambda)\frac{R_{n-1}}{R_{n}}-(\lambda\left[1\right]-\lambda)R_{n}.$
(23)
This equation may be regarded as a Bäcklund transformation for equation (1)
with Darboux solution (20). Now we assume that
$\lambda\left[1\right]-\lambda=\epsilon$ is a very small difference; then the
above expression can be written as
$\frac{u_{n-1}\left[1\right]-u_{n}\left[1\right]}{\epsilon}=-(\frac{u_{n-1}-u_{n}}{\epsilon})+\frac{R_{n-1}}{R_{n}}-R_{n}$
(24)
and in the limiting case $\epsilon\to 0$ we finally get
$\frac{du_{n}\left[1\right]}{dt}=\frac{du_{n}}{dt}+\frac{R_{n-1}}{R_{n}}-R_{n}.$
(25)
The above expression becomes equivalent to the Darboux solution (20),
$u_{n}\left[1\right]=u_{n}-\frac{\varphi^{\prime}_{n}}{\varphi_{n-1}}$, under
the condition
$\frac{R_{n-1}}{R_{n}}-R_{n}=\frac{d}{dt}\frac{\varphi^{\prime}_{n}}{\varphi_{n-1}}$
and taking the constant of integration to be zero. It has thus been shown that
the commutative version of the Riccati equation yields the Darboux solution
through the Bäcklund transformation in the classical framework.
## 5\. Conclusion
In this paper, the noncommutative analogue of the equation of Langmuir
oscillations has been presented together with its Darboux transformation in
additive form, as is the case for most integrable systems in noncommutative
as well as classical frameworks. Further, $N$-fold Darboux solutions have
been presented in terms of quasideterminants, applicable to the
zero-background seed solution. Moreover, the associated NC Riccati equation
is presented, which reduces to the Darboux expression through the Bäcklund
transformation in the classical framework. Future work will address its
connection to the discrete noncommutative NLS equation, as in the classical
case, and will investigate its solutions in terms of quantum determinants for
its matrix version.
Acknowledgments: I am very thankful to the University of the Punjab, 54590,
for providing facilities. My special thanks to the Science and Technology
Commission of Shanghai, China, which recruited me to work on project No.
$20590742900$ as a Belt and Road Young Scientist from 2020 to 2023. This work
is completed as one of the elements of that project.
## References
  * [1] V. E. Zakharov, S. V. Manakov, S. P. Novikov, and L. P. Pitaevskii, The Theory of Solitons: The Inverse Problem Method [in Russian], Nauka, Moscow (1980).
  * [2] M. A. Salle, Darboux transformations for nonabelian and nonlocal equations of the Toda lattice type, Theor. Math. Phys. 53:2 (1982) 227–237.
  * [3] R. Hirota, Exact Solution of the Sine-Gordon Equation for Multiple Collisions of Solitons, J. Phys. Soc. Jpn. 35 (1973) 289.
  * [4] M. J. Ablowitz and J. F. Ladik, Nonlinear differential-difference equations, J. Math. Phys. 14 (1975) 594.
  * [5] V. Matveev and M. Salle, Darboux Transformations and Solitons, Springer Series in Nonlinear Dynamics, Springer (1990).
  * [6] L. Debnath, Nonlinear Partial Differential Equations for Scientists and Engineers, ISBN 978-0-8176-8264-4, Springer (2012).
  * [7] H. Wahlquist, Lect. Notes in Math. 515 (1972) 162.
  * [8] V. B. Matveev, Lett. Math. Phys. 3 (1979) 217.
  * [9] A. Trisetyarso, Quantum Information and Computation 12 (2012) 989.
  * [10] A. Trisetyarso, Journal of Mathematical Physics 51 (2010) 072103.
  * [11] A. Trisetyarso, Journal of Mathematical Physics 52 (2011) 019902.
  * [12] I. Mahmood, Lax pair representation and Darboux transformation of noncommutative Painlevé second equation, Journal of Geometry and Physics 62 (2012) 1575–1582.
  * [13] I. Mahmood, Quasideterminant solution of NC Painlevé II equation with the Toda solution at n=1 as a seed solution in its Darboux transformation, Journal of Geometry and Physics 95 (2015) 127–136.
  * [14] I. Mahmood and M. Waseem, Lax Representation and Darboux Solutions of the Classical Painlevé Second Equation, Advances in Mathematical Physics 2021 (2021) 8851043.
  * [15] V. Spiridonov and A. Zhedanov, Discrete Darboux transformations, the discrete-time Toda lattice, and the Askey-Wilson polynomials, Methods and Applications of Analysis 2(4) (1995) 369–398.
  * [16] H. J. Wospakrik and F. P. Zen, CPT Symmetries and the Bäcklund Transformations (1999), https://arxiv.org/abs/solv-int/9909007v1.
# A New Probabilistic Wave Breaking Model for Dominant Wind-sea Waves Based on
the Gaussian Field Theory
###### Abstract
This paper presents a novel method for obtaining the probability of wave
breaking ($P_{b}$) of deep-water, dominant wind-sea waves (that is, waves made
of the energy within $\pm$30% of the peak wave frequency) derived from
Gaussian wave field theory. For a given input wave spectrum we demonstrate how
it is possible to derive a joint probability density function between wave
phase speed ($c$) and horizontal orbital velocity at wave crest ($u$) from
which a model for $P_{b}$ can be obtained. A non-linear kinematic wave
breaking criterion consistent with the Gaussian framework is further proposed.
Our model would allow, therefore, for application of the classical wave
breaking criterion (that is, wave breaking occurs if $u/c>1$) in spectral wave
models which, to the authors’ knowledge, has not been done to date. Our
results show that the proposed theoretical model has errors in the same order
of magnitude as six other historical models when assessed using three field
datasets. With optimization of the proposed model’s single free parameter, it
can become the best performing model for specific datasets. Although our
results are promising, additional, more complete wave breaking datasets
collected in the field are needed to comprehensively assess the present model,
especially in regards to the dependence on phenomena such as direct wind
forcing, long wave modulation and wave directionality.
JGR: Oceans
France Energies Marines, Plouzané, France Institut Français de Recherche pour
l’Exploitation de la Mer, Plouzané, France PPGOceano, Federal University of
Santa Catarina, Florianópolis, 88040-900, Brazil
C.E<EMAIL_ADDRESS>J.F.
<EMAIL_ADDRESS>
A new probabilistic wave breaking model based on Gaussian field theory is
presented for dominant, wind-sea waves.
Wave breaking probabilities are modeled from the joint probability density
between wave phase speed and particle orbital velocity.
The proposed model performs well when compared to six other historical models
using three field datasets.
## Plain Language Summary
Waves will break if the speed of the water particles on the wave crest is
greater than the speed of the wave itself, causing the wave crest to overtake
the front part of the wave, leading to wave breaking. Precisely simulating
real ocean waves requires, therefore, a particle-by-particle description of
the water motion, which is too expensive for the current computers to handle
in real-world applications. Instead, wave models describe waves by means of
their statistical properties, that is, averaged over a large number of waves.
In this paper, we present a mathematical formulation that allows to calculate
the combined probability between the speed of particles on the wave crest and
the wave speed based only on statistical properties. From these combined
probabilities, we model the probability of wave breaking. Our results indicate
that our model performed relatively well when compared to six other models
using three historical datasets. Because of a lack of observed data to assess
our model, we recommend that future research should focus on collecting more
wave breaking data measured in the field. Future advances on this line of
research could lead, for example, to improvements on operational weather
forecast models.
## 1 Introduction
A robust description of wave breaking is a crucial aspect of wave modelling.
It is via wave breaking that most of the wave energy is dissipated and a
precise formulation of this phenomenon is required to obtain reliable models.
Despite its importance, energy dissipation due to wave breaking is still
modelled as a semi-empirical process because of the difficulty of representing
physically derived wave breaking criteria in phase-averaged wave models
[Battjes Janssen (1978), Thornton Guza (1983), Banner . (2000), Filipot .
(2010), Filipot Ardhuin (2012), Banner . (2002), Ardhuin . (2010), Banner .
(2014), Zieger . (2015), Ardag Resio (2020)]. The available probabilistic
(that is, parametric, or empirical) formulations included in these models have
been derived from limited datasets and without rigorous theoretical frameworks
and, therefore, they currently lack a solid physical background. While the
current operational (spectral) models are capable of reproducing field
observations of integrated spectral parameters (for example, significant wave
height, peak wave period and peak wave direction) with good accuracy, it
remains unclear if their wave breaking parameterizations are entirely
reliable. This knowledge gap partly occurs because limited research has
focused on wave breaking statistics derived from field data, especially when
it comes to wave breaking observations distributed as a function of wave
scales (for example, wave frequency or wave phase speed). The research
developed here has, therefore, important implications for air-sea flux
parameterizations [Kudryavtsev . (2014)], safety at sea [Kjeldsen . (1980)]
and design of offshore structures [Filipot . (2019)], all of which directly
rely on the properties of breaking waves.
Historically, parametric wave breaking models have been constructed from two
different approaches: the first approach considers wave statistics (wave
steepness, most frequently) derived from a wave-by-wave analysis of the
surface elevation timeseries collected at a single point location where wave
breaking occurrences are synchronously identified (using video data, most
frequently). The wave breaking probability (that is, the ratio between the
total number of breaking waves over the total number of waves during a given
period of time) can then be expressed as a bulk quantity [Thornton Guza
(1983), Chawla Kirby (2002), Alsina Baldock (2007), Janssen Battjes (2007)] or
can be distributed over wave frequency ($f$), wavenumber ($k$), or wave speed
($c$) ranges, referred as to “wave scales” by the wave modelling community
[Eldeberky Battjes (1996), Banner . (2002), Filipot . (2010)].
The second approach follows from Phillips1985 who defined the distribution
$\Lambda(c)dc$ as the “average total length per unit surface area of breaking
fronts that have velocities in the range $c$ to $c+dc$”. This approach
therefore relates to the analysis of sea surface images in which individual
wave breaking patches are tracked in space and time. The main motivation for
introducing this new concept was clearly stated in Phillips1985: “There is
clearly some association of the breaking events with waves of different
scales, but it is difficult to make the association in an unambiguous way if
we consider only the surface configuration at one given instant. A breaking
crest may indeed be a local maximum in the instantaneous surface configuration
but there is no guarantee that a local wavelength of the breaking wave can be
defined clearly. It seems more satisfactory to use the velocity $c$ of the
breaking front as a measure of the scale of the breaking”. This quotation
clearly identify the limitations of directly relying on the analysis of single
point elevation timeseries. Different parameterizations have been proposed to
quantity $\Lambda(c)dc$ from theoretical [Phillips (1985)] or empirical
considerations [Melville Matusov (2002), Sutherland Melville (2013), Romero
(2019)]. However, Phillips’ 1985 framework remains controversial, particularly
regarding its practical application, given that different interpretations of
his concepts can generate differences of several orders of magnitude in the
calculations of $\Lambda(c)dc$ and its moments Banner . (2014). For a detailed
review of commonly used parametric wave breaking models, please refer to Appendix A.
Interestingly, while the ratio of the horizontal orbital velocity at the
crest ($u$) to the wave phase speed ($c$) appears to be the most reliable
parameter to determine wave breaking occurrence Saket . (2017); Barthelemy . (2018);
Derakhti . (2020); Varing . (2020), it was not used by any of the approaches
mentioned above. This paper provides a new promising wave breaking model by
revisiting Rice1944 and Longuet-Higgins1957 statistical descriptions of
Gaussian processes (that is, for linear waves) to obtain the theoretical joint
probability density between $c$ and $u$ ($p(c,u)$). We then model $P_{b}$
assuming a kinematic wave breaking criterion consistent with non-linear waves,
that is, a wave breaks if the fluid velocity at the wave crest is greater than
the wave phase speed ($u$ $>$ $c$). This study focuses on analysing dominant
waves, defined as waves that have frequencies within $\pm$30% of the spectral
peak frequency of the wind-sea Banner . (2000). Future research will be
dedicated to extend our efforts to broader wave scales. This paper is
organized as follows: Section 2 describes the proposed model, Section 3
presents three historical datasets used to evaluate the model, Section 4
presents the results, Section 5 discusses and Section 6 concludes.
## 2 Definition of a Probabilistic Wave Breaking Model Based on Gaussian
Field Theory
The kinematic wave breaking criterion $u/c=1$ has been historically used as
the onset of wave breaking for non-linear, real waves (see Perlin2013 for a
review). Recently, Barthelemy2018 found and Derakhti2020 confirmed via
numerical simulations that waves will inevitably start to break shortly after
$u/c$ exceeds 0.85 in deep and shallow water. Further numerical simulations
showed that wave breaking occurs when the maximum orbital velocity ($u_{max}$)
equals $c$ somewhere along the wave profile and not necessarily at the wave
crest Varing . (2020). Although the relationship $u/c$ provides a solid
physical background to establish the onset of wave breaking, this approach has
never been applied to spectral wave models because it requires phase-resolving
the wave field. In the sections below, we circumvent this difficulty by
defining a wave breaking probability model using the joint probability density
between $c$ and $u$ corresponding to a given wave energy spectrum ($E(f)$).
The efforts in this paper are consistent with part of the recent work from
Ardag2020 in the sense that both works aim to solidify the use of the
kinematic wave breaking criterion as the standard approach for modelling wave
breaking.
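Once a joint density $p(c,u)$ is available, $P_{b}$ follows by integrating it over the breaking region $u>\alpha c$ (with $\alpha=1$ classically, or $\alpha=0.85$ following Barthelemy2018). As a schematic illustration only, using an assumed bivariate normal stand-in for $p(c,u)$ rather than the spectrum-derived density developed below (all parameter values are illustrative, not from data):

```python
import numpy as np

def breaking_probability(c, u, alpha=0.85):
    """Monte Carlo estimate of P_b = P(u > alpha * c) from joint samples."""
    return float(np.mean(u > alpha * c))

rng = np.random.default_rng(1)
n = 200_000
# Assumed stand-in joint density (illustrative parameters only):
# c ~ N(8, 1) m/s, u ~ N(5, 1.5) m/s, correlation 0.6.
sig_c, sig_u, rho = 1.0, 1.5, 0.6
cov = [[sig_c**2, rho * sig_c * sig_u],
       [rho * sig_c * sig_u, sig_u**2]]
c, u = rng.multivariate_normal([8.0, 5.0], cov, size=n).T
p_b = breaking_probability(c, u, alpha=0.85)
print(f"P_b = {p_b:.3f}")  # a few percent of waves exceed u/c = 0.85
```

Tightening the criterion from $\alpha=0.85$ to $\alpha=1$ necessarily shrinks the breaking region, so the classical criterion yields a smaller $P_{b}$ for the same joint density.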
### 2.1 Theoretical Derivation of the Joint Probability Density Distribution
of Orbital Velocity at the Wave Crest and Phase Speed
Longuet-Higgins1957 published a very complete work on the statistics of
Gaussian wave fields. In particular, Longuet-Higgins1957 studied the
probability density of the speed of zero-crossings along a given line, which
is of interest to us in this work. In his paper, the speed of zero-crossings
was applied in particular to the zero-crossings of the space derivative of a
Gaussian process, that is, the velocities of the local maxima in space
(Longuet-Higgins1957, pp. 356-357). The present work describes how the same
methodology can be extended to derive the joint density of the speed of space
local maxima (or local crests) and simultaneous wave horizontal orbital
velocity for a one-dimensional Gaussian sea state. For simplicity, this paper
follows the same notations as those of Longuet-Higgins1957 and the reader is
directed to Section 2.5 in Longuet-Higgins1957 for further details.
As explained in Longuet-Higgins1957, if $\xi_{1}\left(x,t\right)$ is a
stationary-homogeneous process and we are interested in the points (for
example, in space) where this process crosses a level $x_{1}$, the joint
distribution of the space derivative of $\xi_{1}$ noted $\xi_{2}$, with other
related processes $\xi_{3}$, $\xi_{4}$, $\ldots$ at $\xi_{1}=x_{1}$ is given
by:
$p\left(\xi_{2},\xi_{3},\xi_{4},...\right)_{x_{1}}=\frac{\left.\left|\xi_{2}\right|p\left(\xi_{1},\xi_{2},\xi_{3},\xi_{4},...\right)\right|_{\xi_{1}=x_{1}}}{N_{0}\left(x_{1}\right)}$
(1)
where $N_{0}\left(x_{1}\right)$ is the average number of crossings of the
level $x_{1}$ by $\xi_{1}$ per unit length (see Equation 2.2.5 in
Longuet-Higgins1957). In this
paper we are interested in joint distributions at the local maxima in space of
the wave elevation process $\xi_{0}$. Therefore, $\xi_{1}$ is the space
derivative of the wave process and local maxima correspond to down-crossings
of the zero level by $\xi_{1}=\partial\xi_{0}/\partial x$.
$\xi_{1}=\frac{\partial\xi_{0}}{\partial
x}\;,\;\xi_{2}=\frac{\partial^{2}\xi_{0}}{\partial
x^{2}}=\frac{\partial\xi_{1}}{\partial x}.$ (2)
In the case of Gaussian processes, $N_{0}^{-}\left(x_{1}\right)$ is:
$N_{0}^{-}\left(x_{1}\right)=\frac{1}{2\pi}\sqrt{\frac{m_{4}}{m_{2}}}\exp\left(-\frac{x_{1}^{2}}{2m_{2}}\right)\;,\;N_{0}^{-}=N_{0}^{-}\left(0\right)=\frac{1}{2\pi}\sqrt{\frac{m_{4}}{m_{2}}}$
(3)
where $m_{i}$ denotes the $i$-th wavenumber spectral moment and the minus
sign indicates that we consider only down-crossings.
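The moment and down-crossing definitions above can be sketched numerically as follows. This is a minimal illustration with a hypothetical Gaussian-shaped wavenumber spectrum (not one of the datasets used later); for a narrow spectrum, $N_{0}^{-}$ should approach one crest per wavelength, i.e. $k_{0}/2\pi$.

```python
import numpy as np

# Hypothetical narrow wavenumber spectrum E(k): a Gaussian bump at k0,
# used only to illustrate the moment definitions and Equation 3.
k = np.linspace(1e-3, 1.0, 8000)                  # wavenumber grid [rad/m]
k0, sk = 0.10, 0.005
E = np.exp(-0.5 * ((k - k0) / sk) ** 2)           # E(k), arbitrary units

def m(n):
    """n-th wavenumber spectral moment m_n = int k^n E(k) dk."""
    return np.trapz(k ** n * E, k)

# Spatial density of local maxima of the surface elevation, i.e. of
# down-crossings of the zero level by the slope process (Equation 3, x1 = 0).
N0_minus = np.sqrt(m(4) / m(2)) / (2 * np.pi)
print(N0_minus)   # ~ k0 / (2*pi) for a narrow spectrum
```

For this narrow spectrum, $\sqrt{m_{4}/m_{2}}\approx k_{0}$, so $N_{0}^{-}$ is close to $0.1/2\pi\approx 0.016$ crests per metre.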
#### 2.1.1 Speed of Local Maxima (Phase Speed)
Following Longuet-Higgins1957, if we are interested in the speed $c$ of the
local maxima in space, that is, the speed of the down-crossings of $\xi_{1}$,
we have:
$c=-\frac{\left.\partial\xi_{1}\right/\partial
t}{\left.\partial\xi_{1}\right/\partial
x}=-\frac{\xi_{3}}{\xi_{2}}\;\mathrm{with}\;\xi_{2}=\frac{\partial^{2}\xi_{0}}{\partial
x^{2}}\;\mathrm{and}\;\xi_{3}=\left.\partial\xi_{1}\right/\partial t.$ (4)
Using Equation 1,
$p\left(\xi_{2},\xi_{3}\right)_{0}=\frac{\left.\left|\xi_{2}\right|p\left(\xi_{1},\xi_{2},\xi_{3}\right)\right|_{\xi_{1}=0}}{N_{0}^{-}}$
(5)
where $p\left(\xi_{1},\xi_{2},\xi_{3}\right)$, the joint distribution of
the three Gaussian processes $\frac{\partial\xi_{0}}{\partial
x},\frac{\partial^{2}\xi_{0}}{\partial
x^{2}},\frac{\partial^{2}\xi_{0}}{\partial x\partial t}$, is:
$p\left(\xi_{1},\xi_{2},\xi_{3}\right)=p\left(\xi_{1}\right)p\left(\xi_{2},\xi_{3}\right)=\frac{{\rm
e}^{-\frac{\xi_{1}^{2}}{2m_{2}}}}{2\pi\sqrt{m_{2}}}\frac{{\rm
e}^{-\frac{1}{2}\left(\left[\xi_{2}\xi_{3}\right]Q_{c}^{-1}\left[\begin{array}[]{c}\xi_{2}\\\
\xi_{3}\end{array}\right]\right)}}{\sqrt{\left(2\pi\right)^{3}\det(Q_{c})}}$
(6)
with covariance matrix:
$Q=\left[\begin{array}[]{ccc}m_{2}&0&0\\\ 0&m_{4}&m_{3}^{{}^{\prime}}\\\
0&m_{3}^{{}^{\prime}}&m_{2}^{{}^{\prime\prime}}\end{array}\right]=\left[\begin{array}[]{cc}m_{2}&0\\\
0&Q_{c}\end{array}\right].$ (7)
Note that, following Longuet-Higgins1957's notations, $m_{i}^{{}^{\prime\prime}}$
indicates the mixed wavenumber-frequency $i$-th spectral moment, where the
number of primes indicates the order of the frequency spectral moment, for
example,
$m_{3}^{{}^{\prime}}=\int_{0}^{\infty}2\pi f^{1}k^{3}E(k)dk,$ (8)
where $E(k)$ is a given wavenumber spectrum.
Classically, to introduce $c$ in the joint density and obtain
$p\left(c,\xi_{3}\right)_{0}$, we apply a change of variables
$\xi_{2}=-\frac{\xi_{3}}{c}\;,\;\xi_{3}=\xi_{3}$ (9)
and after integrating $p\left(c,\xi_{3}\right)_{0}$ over the domain of
definition of $\xi_{3}$, we obtain the distribution of $c$
(Longuet-Higgins1957, Eq. 2.5.19):
$p\left(c\right)_{0}=\frac{1}{2}\frac{m_{4}m_{2}^{{}^{\prime\prime}}-m_{3}^{{}^{\prime}}{}^{2}}{\sqrt{m_{4}}\left(c^{2}m_{4}+2cm_{3}^{{}^{\prime}}+m_{2}^{{}^{\prime\prime}}\right)^{3/2}}$
(10)
Note that the sign on $c$ (or on ${\it m_{3}^{{}^{\prime}}}$) depends on the
convention on the wave propagation direction. We have kept the convention used
by Longuet-Higgins1957 here.
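Equation 10 can be sanity-checked numerically: it should integrate to one over all $c$. The sketch below assumes a hypothetical narrow Gaussian wavenumber spectrum with deep-water dispersion ($\omega=\sqrt{gk}$); with the Longuet-Higgins sign convention kept in the text, the density peaks at negative $c$, which does not affect the normalization.

```python
import numpy as np

g = 9.81
# Hypothetical narrow Gaussian wavenumber spectrum (illustrative only).
k = np.linspace(0.05, 0.15, 4001)
E = np.exp(-0.5 * ((k - 0.10) / 0.01) ** 2)
w = np.sqrt(g * k)                    # deep-water dispersion

m4 = np.trapz(k**4 * E, k)            # m_4
m3p = np.trapz(w * k**3 * E, k)       # m_3'
m2pp = np.trapz(w**2 * k**2 * E, k)   # m_2''

# Distribution of the speed of local maxima, Equation 10.
c = np.linspace(-80.0, 80.0, 160001)
pc = 0.5 * (m4 * m2pp - m3p**2) / (
    np.sqrt(m4) * (c**2 * m4 + 2 * c * m3p + m2pp) ** 1.5)
print(np.trapz(pc, c))                # close to 1: Equation 10 is a proper density
```

The $c^{-3}$ tails of Equation 10 carry a negligible fraction of the mass for a narrow spectrum, so a finite integration range suffices.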
#### 2.1.2 Introducing the Orbital Velocity
As indicated in Equation 1, we can introduce into the formula a variable
representing the horizontal orbital velocity. For Gaussian waves, the
horizontal orbital velocity $u$ is defined as
$u=\mathcal{H}_{t}\left(\frac{\partial\xi_{0}}{\partial t}\right)$ (11)
with $\mathcal{H}_{t}$ the Hilbert transform in the time domain. This means that
$\xi_{0}=\sum_{i}a_{i}\cos\left(k_{i}x-\omega_{i}t\right)$ (12)
is transformed in
$u=\sum_{i}a_{i}\omega_{i}\cos\left(k_{i}x-\omega_{i}t\right),$ (13)
with $a_{i}$ the wave amplitude, $k_{i}$ the wavenumber and $\omega_{i}$ the
angular wave frequency of the wave component $i$. As the Hilbert transform is
a linear operator, $u$ is also Gaussian. As previously, at the local maxima we
have:
$p\left(\xi_{2},\xi_{3},u\right)_{0}=\frac{\left.\left|\xi_{2}\right|p\left(\xi_{1},\xi_{2},\xi_{3},u\right)\right|_{\xi_{1}=0}}{N_{0}^{-}}$
(14)
with a new covariance matrix for $\xi_{1}$, $\xi_{2}$, $\xi_{3}$ and $u$:
$Q=\left[\begin{array}[]{cccc}m_{2}&0&0&0\\\
0&m_{4}&m_{3}^{{}^{\prime}}&m_{2}^{{}^{\prime}}\\\
0&m_{3}^{{}^{\prime}}&m_{2}^{{}^{\prime\prime}}&m_{1}^{{}^{\prime\prime}}\\\
0&m_{2}^{{}^{\prime}}&m_{1}^{{}^{\prime\prime}}&m_{0}^{{}^{\prime\prime}}\end{array}\right]=\left[\begin{array}[]{cc}m_{2}&0\\\
0&Q_{c}\end{array}\right].$ (15)
As previously, we can apply a similar change of variables
$\xi_{2}=-\frac{\xi_{3}}{c}\;,\;\xi_{3}=\xi_{3}\;,\;u=u,$ (16)
or, easier to handle,
$\xi_{3}=-c\xi_{2}\;,\;\xi_{2}=\xi_{2}\;,\;u=u$ (17)
and integrate $p\left(c,\xi_{2},u\right)_{0}$ over the domain of
definition of $\xi_{2}$. The result is more complicated but again
semi-analytical. The body of the integral has the form
${\rm
e}^{-\frac{1}{2}\left[\xi\left(c\right)\xi_{2}^{2}+\beta(c,u)\xi_{2}+\alpha(u)\right]}\xi_{2}^{2}$
(18)
and its integration in $\xi_{2}$ over the down-crossing domain $(-\infty,0]$
gives
$I\left(c,u\right)=\frac{\left(\left(2\phi^{2}+1\right)\sqrt{\pi}\left({\rm
erf}\left(\phi\right)+1\right){\rm
e}^{\phi^{2}}+2\phi\right)}{\sqrt{2}\xi^{3/2}}{\rm e}^{-\alpha/2}$ (19)
with
$\phi=\phi\left(c,u\right)=\frac{1}{2\sqrt{2}}\frac{\beta(c,u)}{\sqrt{\xi\left(c\right)}},\quad\alpha=\alpha(u),$
(20)
$\Delta=\det(Q_{c})=m_{3}^{{}^{\prime}}\left(m_{2}^{{}^{\prime}}m_{1}^{{}^{\prime\prime}}-m_{3}^{{}^{\prime}}m_{0}^{{}^{\prime\prime}}\right)+m_{4}\left(m_{0}^{{}^{\prime\prime}}m_{2}^{{}^{\prime\prime}}-m_{1}^{{}^{\prime\prime}}{}^{2}\right)+m_{2}^{{}^{\prime}}\left(m_{3}^{{}^{\prime}}m_{1}^{{}^{\prime\prime}}-m_{2}^{{}^{\prime}}m_{2}^{{}^{\prime\prime}}\right),$
(21)
$\alpha(u)=\frac{m_{4}m_{2}^{{}^{\prime\prime}}-m_{3}^{{}^{\prime}}{}^{2}}{\Delta}u^{2},$
(22)
$\beta(c,u)=2\frac{m_{3}^{{}^{\prime}}m_{1}^{{}^{\prime\prime}}-m_{2}^{{}^{\prime}}m_{2}^{{}^{\prime\prime}}}{\Delta}u+2\frac{m_{4}m_{1}^{{}^{\prime\prime}}-m_{2}^{{}^{\prime}}m_{3}^{{}^{\prime}}}{\Delta}uc$
(23)
and
$\xi\left(c\right)=\frac{m_{0}^{{}^{\prime\prime}}m_{2}^{{}^{\prime\prime}}-m_{1}^{{}^{\prime\prime}}{}^{2}}{\Delta}+2\frac{m_{3}^{{}^{\prime}}m_{0}^{{}^{\prime\prime}}-m_{2}^{{}^{\prime}}m_{1}^{{}^{\prime\prime}}}{\Delta}c+\frac{m_{4}m_{0}^{{}^{\prime\prime}}-m_{2}^{{}^{\prime}}{}^{2}}{\Delta}c^{2}.$
(24)
The joint probability density of $\left(c,u\right)$ is then:
$p\left(c,u\right)=\frac{1}{N_{0}^{-}}\frac{1}{(2\pi)^{2}\sqrt{m_{2}\Delta}}I\left(c,u\right)=\frac{I\left(c,u\right)}{2\pi\sqrt{m_{4}\Delta}}.$
(25)
Note again that the sign on $c$ and $u$ (or on $m_{2}^{{}^{\prime}}$ and
$m_{3}^{{}^{\prime}}$) depends on the convention on the wave propagation
direction and Longuet-Higgins1957’s convention is still used here. The
coefficients $\left(\alpha,\beta,\xi\right)$ can be calculated directly
numerically and $\Delta$ is the determinant of $Q_{c}$, the sub-matrix of $Q$,
and after the inverse of $Q_{c}$ is calculated:
$Q_{c}^{-1}=\left[\begin{array}[]{cc}R&\boldsymbol{s}\\\
\boldsymbol{s}^{t}&r\end{array}\right]$ (26)
we find
$\alpha(u)=ru^{2},$ (27)
$\beta(c,u)=2\left[\begin{array}[]{cc}1&c\end{array}\right]\boldsymbol{s}u,$
(28)
$\xi\left(c\right)=\left[\begin{array}[]{cc}1&c\end{array}\right]R\left[\begin{array}[]{c}1\\\
c\end{array}\right].$ (29)
An example of the joint density of the pair (phase speed, horizontal
particle velocity) at local maxima in space is shown in Figure 1-b for the
JONSWAP spectrum of Figure 1-a.
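The construction of $p(c,u)$ can be sketched numerically as below, assuming a hypothetical narrow Gaussian wavenumber spectrum with deep-water dispersion ($\omega=\sqrt{gk}$). Two deviations from the text are flagged in the code: the signs of $m_{3}^{{}^{\prime}}$ and $m_{2}^{{}^{\prime}}$ are flipped so that crests propagate toward $+x$ (the text keeps Longuet-Higgins' opposite convention), and the body of Equation 18 is integrated numerically over $\xi_{2}$ rather than via the closed form of Equation 19, which is more robust in the far tails where the erf term suffers cancellation.

```python
import numpy as np

g = 9.81
# Hypothetical narrow Gaussian wavenumber spectrum (Hm0 = 1 m, not one of
# the paper's datasets), with deep-water dispersion.
k = np.linspace(0.05, 0.15, 2001)
k0, sk = 0.10, 0.01
E = (1.0 / 16) * np.exp(-0.5 * ((k - k0) / sk) ** 2) / (sk * np.sqrt(2 * np.pi))
w = np.sqrt(g * k)

def mom(i, j):
    """Mixed moment: i wavenumber powers, j frequency powers (Equation 8)."""
    return np.trapz(w**j * k**i * E, k)

m2, m4 = mom(2, 0), mom(4, 0)
m3p, m2p = mom(3, 1), mom(2, 1)
m2pp, m1pp, m0pp = mom(2, 2), mom(1, 2), mom(0, 2)

# Covariance of (xi2, xi3, u), Equation 15, with the signs of m3' and m2'
# flipped so that crests travel toward +x.
Qc = np.array([[m4, -m3p, -m2p],
               [-m3p, m2pp, m1pp],
               [-m2p, m1pp, m0pp]])
Qci = np.linalg.inv(Qc)                       # block layout of Equation 26
R, s, r = Qci[:2, :2], Qci[:2, 2], Qci[2, 2]
Delta = np.linalg.det(Qc)

def p_cu(c, u, n=400):
    """Joint density p(c,u) at spatial maxima (Equation 25)."""
    v = np.array([1.0, -c])                   # xi3 = -c*xi2 (Equation 17)
    xi = v @ R @ v                            # cf. Equation 29
    beta = 2.0 * (v @ s) * u                  # cf. Equation 28
    alpha = r * u**2                          # Equation 27
    width = 1.0 / np.sqrt(xi)
    vert = -beta / (2.0 * xi)                 # peak of the xi2 integrand
    lo, hi = vert - 10 * width, min(vert + 10 * width, 0.0)
    if lo >= hi:                              # integrand lives at xi2 > 0: negligible
        return 0.0
    xx = np.linspace(lo, hi, n)
    body = xx**2 * np.exp(-0.5 * (xi * xx**2 + beta * xx + alpha))
    return np.trapz(body, xx) / (2.0 * np.pi * np.sqrt(m4 * Delta))

cp = np.sqrt(g / k0)                          # ~9.9 m/s for k0 = 0.1 rad/m
print(p_cu(cp, 0.25))                         # density near the (c, u) ridge
```

As a consistency check, integrating `p_cu` over a grid covering the ridge recovers a total mass close to one, as required for a density conditioned on the local maxima.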
### 2.2 Modelling $P_{b}$ from $p(c,u)$
By applying Equation 25 to the dominant spectral wave band (that is, the band
contained in the interval [$0.7f_{p}$, $1.3f_{p}$], where $f_{p}$ is the peak
wave frequency), the probability of dominant wave breaking can be computed by
integrating over all phase speeds and over orbital velocities above a
threshold $Ac$, with $A$ a constant that will be defined in the next section:
$P_{b}=\int_{u>Ac}\int_{0}^{\infty}p(c,u)dcdu.$ (30)
$P_{b}$ will be modelled following Equation 30 hereafter. Note that, from the
definitions in Equation 3, the proposed $P_{b}$ is defined as the number of
breaking local maxima over the total number of local maxima. From the analysis
of $p(c,u)$ we observed that spurious, non-moving local maxima may exist
around $c$ = $0$ and $u$ = $0$; therefore, to avoid artificially increasing
$P_{b}$, we adopted a practical integration range of $c,u\in[0.05,+\infty)$
here. Note that this range may, however, only be valid for very narrow
spectra. Further, we draw attention to the fact that, following from Equation
1, our $P_{b}$ model is defined in the space domain, whereas all the previous
$P_{b}$ models and data are (at least partially) defined in the time domain
(see A for details). For the very narrow spectral band used here, the
differences between temporal and spatial definitions of $P_{b}$ are
negligible. This is discussed further in Section 5.
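What Equation 30 counts can also be illustrated by brute force: synthesize a linear (Gaussian) wave field from a spectrum, locate its spatial maxima, and compute the fraction whose crest orbital velocity exceeds $A$ times the crest speed $c=-\xi_{3}/\xi_{2}$ (Equation 4). This Monte-Carlo sketch uses a hypothetical, deliberately steep narrow spectrum; it is an illustration of the definition, not the model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
g, A, k0 = 9.81, 0.382, 0.10

# Hypothetical narrow Gaussian spectrum scaled to Hm0 = 4 m (a toy steep sea).
k = np.linspace(0.05, 0.15, 512)
dk = k[1] - k[0]
S = np.exp(-0.5 * ((k - k0) / 0.01) ** 2)
S *= (4.0**2 / 16) / np.trapz(S, k)
w = np.sqrt(g * k)                           # deep-water dispersion
a = np.sqrt(2 * S * dk)                      # component amplitudes

x = np.linspace(0.0, 50 * 2 * np.pi / k0, 2048)   # ~50 dominant wavelengths
n_break = n_crest = 0
all_c = []
for _ in range(30):
    ph = rng.uniform(0, 2 * np.pi, k.size)
    th = np.outer(k, x) + ph[:, None]
    cth, sth = np.cos(th), np.sin(th)
    xi1 = -(a * k) @ sth                     # surface slope
    xi2 = -(a * k**2) @ cth                  # curvature
    xi3 = (a * k * w) @ cth                  # d2(xi0)/dx dt
    u = (a * w) @ cth                        # orbital velocity (Equation 13)
    j = np.where((xi1[:-1] > 0) & (xi1[1:] <= 0))[0]   # crests: slope down-crossings
    c = -xi3[j] / xi2[j]                     # crest speed (Equation 4)
    all_c.append(c)
    n_crest += j.size
    n_break += np.sum((c > 0) & (u[j] > A * c))
Pb = n_break / n_crest
print(Pb)
```

Since the surrogate field is linear, exceedances of $u>Ac$ are rare Gaussian-tail events; the point of the sketch is the counting, not the value of $P_{b}$.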
Finally, the proposed model can be extended to accommodate two-dimensional
spectra without changing how $p(c,u)$ is calculated. This is done by applying
an appropriate spreading function to any given one-dimensional spectrum (or by
directly inputting a directional spectrum) and by recalculating the moments in
Equation 8 to take directionality into account or, more explicitly,
$m_{i}=\int_{0}^{2\pi}\int_{0}^{\infty}\left(f\cos\theta\cos\alpha+f\sin\theta\sin\alpha\right)^{i}E(f,\theta)dfd\theta.$
(31)
An example considering the simplified cosine spreading law ($D(\theta)$ =
$\cos(\theta-\bar{\theta})^{2s}$) with $s$ = $20$, $\bar{\theta}$ = $0$ and
$\alpha$ = $0$ applied to the same JONSWAP spectrum shown in Figure 1-a is
shown in Figure 1-c. Note that the differences in $p(c,u)$ between the
one-dimensional (Figure 1-b) and the two-dimensional (Figure 1-d) spectra are
negligible for the present assumptions. This relatively simple extension
allows for the consideration of two-dimensional wave spectra, but we caution
the reader that it may not be fully complete. A follow-up publication will be
dedicated to including and assessing the effects of wave directionality in our
method more rigorously.
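The directional moment correction of Equation 31 can be sketched as follows, using a hypothetical one-dimensional spectrum $S(f)$ and the $\cos^{2s}$ spreading with $s=20$. For such a narrow spreading, the directional moments stay within a few percent of their unidirectional counterparts, consistent with the negligible differences reported above.

```python
import numpy as np

f = np.linspace(0.05, 0.5, 1000)
S = np.exp(-0.5 * ((f - 0.1) / 0.02) ** 2)        # hypothetical S(f), arbitrary units
th = np.linspace(-np.pi, np.pi, 2001)
s_par, th_bar, alpha = 20, 0.0, 0.0
D = np.cos(th - th_bar) ** (2 * s_par)            # even power: non-negative everywhere
D /= np.trapz(D, th)                              # normalized spreading function

F, TH = np.meshgrid(f, th, indexing="ij")
E2 = S[:, None] * D[None, :]                      # E(f, theta) = S(f) D(theta)

def m_dir(i):
    """Directional moment of Equation 31 for a fixed direction alpha."""
    proj = F * np.cos(TH) * np.cos(alpha) + F * np.sin(TH) * np.sin(alpha)
    return np.trapz(np.trapz(proj**i * E2, th, axis=1), f)

ratio = m_dir(2) / np.trapz(f**2 * S, f)
print(ratio)                                      # slightly below 1 for s = 20
```

For $s=20$ the reduction factor on the second moment is roughly $\mathrm{E}[\cos^{2}\theta]\approx 0.98$, so the one- and two-dimensional $p(c,u)$ remain close.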
Figure 1: Example of the application of the method. a) JONSWAP spectrum for
$H_{m_{0}}$=15m, $T_{p}$=10s and shape parameter $\gamma_{js}$=10. b) Obtained
joint probability density between the wave phase speed ($c$) and the
horizontal particle velocity at wave crest ($u$) calculated using Equation 25.
Note that the joint probability density was computed using only the spectral
energy between $0.7f_{p}$ and $1.3f_{p}$, that is, corresponding to the
dominant wave band only. c) Directional spectrum for the same parameters as in
a) and directional spreading $D(\theta)$ = $cos(\theta-\bar{\theta})^{2s}$
with $s$ = $20$ and $\bar{\theta}$ = $0$. d) Obtained $p(c,u)$ considering
only the spectral energy in the direction $\alpha$ = $0$.
### 2.3 Definition of a Gaussian-equivalent Non-linear Wave Breaking
Criterion
The previously introduced joint probability density distribution $p(c,u)$ is
based on Gaussian theory and therefore assumes that waves are linear. Breaking
waves are, however, highly non-linear. For real non-linear waves, as detailed
in the introduction, it is widely accepted that wave breaking starts when the
water particle horizontal velocity at its crest ($u_{nl}$) reaches the wave
phase speed ($c_{nl}$). A non-linear wave breaking criterion can thus be
defined as $A_{nl}$ = $u_{nl}/c_{nl}$ = $1$. Therefore, we assume that it is
possible to obtain an equivalent kinematic criterion, $A_{lin}$ = $constant$,
that relates Gaussian waves to non-linear waves.
Based on numerical experiments, Cokelet1977 provided the potential and kinetic
energy of a fully non-linear regular wave in deep-water at the onset of wave
breaking (see the last row of his Table A.0). Based on his results, we define
the kinematic criterion from the linear wave whose total energy equals that of
the nearly breaking non-linear regular wave computed by Cokelet1977. Following
Cokelet1977, where $k$, $g$ and $\rho$ are expressed as non-dimensional
variables, a deep-water wave at the breaking onset (see the last row of his
Table A.0) has kinetic energy $T$ = $3.827\times 10^{-2}$ and potential energy
$V$ = $3.457\times 10^{-2}$. The energy-equivalent linear wave (denoted with
the subscript $eq$) has, therefore, amplitude:
$a_{eq}=\sqrt{2\times E}=\sqrt{2\times(V+T)}=0.3817.$ (32)
For this particular case, the linear dispersion relation reads:
$\omega^{2}=gk=1,$ (33)
the fluid velocity at crest of the energy-equivalent linear wave is:
$u_{eq}=\omega a_{eq}=0.3817,$ (34)
and the phase speed of the linear wave is:
$c_{eq}=\sqrt{\frac{g}{k}}=1.$ (35)
Given these constants, we obtain:
$A_{lin}=\frac{u_{eq}}{c_{eq}}=\frac{0.3817}{1}=0.3817.$ (36)
Following this approach, we define the correction coefficient $A$ = $A_{lin}$
= $0.382$ that will be used as the reference value hereafter for our tests.
This result is consistent with recent findings from Ardag2020 who, from the
re-analysis of Duncan's 1981 experimental results, reported a wave breaking
threshold between 0.75 and 1.02 (see their Figure 1). Note, however, that
these authors defined their wave breaking threshold as $u/c_{g}$, where
$c_{g}$ is the group velocity and $u$ was obtained from linear wave theory.
Replacing the wave group velocity ($c_{g}$) by the wave phase speed ($c$)
yields a range of possible values between 0.35 and 0.50, which is consistent
with $A_{lin}$.
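The arithmetic of Equations 32-36 is short enough to verify directly, using the non-dimensional energies quoted above from Cokelet1977:

```python
import math

# Non-dimensional kinetic and potential energies of the nearly breaking
# regular deep-water wave (last row of Cokelet's Table A.0), as quoted above.
T, V = 3.827e-2, 3.457e-2
g = k = omega = 1.0                     # non-dimensional units, Equation 33

a_eq = math.sqrt(2 * (V + T))           # Equation 32
u_eq = omega * a_eq                     # Equation 34
c_eq = math.sqrt(g / k)                 # Equation 35
A_lin = u_eq / c_eq                     # Equation 36
print(round(A_lin, 4))                  # 0.3817
```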
Figure 2 illustrates the sensitivity of the wave breaking probability to
changes in the wave breaking threshold $A$. For the given $p(c,u)$ in Figure
2-a, letting $A$ vary from $0$ to $1$ resulted in an exponential increase in
$P_{b}$ for $A\leq 0.2$ (Figure 2-b), which may be unrealistic. When setting
$A$=$A_{lin}$=0.382 and letting the significant wave height ($H_{m_{0}}$) and
wave peak period ($T_{p}$) vary in the definition of the JONSWAP spectrum, the
results indicate that steeper waves are more likely to break, which is
expected (Figure 2-c). Finally, note that the wave breaking threshold $A$
might be sensitive to other wave and atmospheric parameters such as wave
directionality or direct wind forcing (or, equivalently, wave age). In the
next sections, the accuracy of our model is assessed using field observations
and our results are compared with other parametric wave breaking formulations.
Figure 2: a) Example of the joint probability density between $u$ and $c$
obtained from Equation 25. The colored lines indicate different values of $A$
and the
red dashed line shows $A$=$A_{lin}$=0.382. b) Possible values of $P_{b}$ for
varying $A$ calculated using the joint PDF from a). The vertical dashed line
shows $A$=$A_{lin}$=0.382. c) Obtained $P_{b}$ for varying $H_{m_{0}}$ and
$T_{p}$ and fixed $A$ (0.382) and $\gamma_{js}$ (10). The dashed blue lines
and marker indicate the $H_{m_{0}}$ and $T_{p}$ values used in a) and b). Note
that as in Figure 1, these results only consider dominant waves, that is, they
were calculated from the spectrum between $0.7f_{p}$ and $1.3f_{p}$.
## 3 Field Data
Three historical datasets were used to evaluate the present model. Further,
six historical models (detailed in A) were chosen to contextualize our model
in relation to the state-of-the-art. These historical models range from
baseline models in which the only inputs are known environmental parameters
(wind speed in Melville2002 or wave steepness in Banner2000, for example) to
fairly complex models that account for combinations of several phenomena
(Romero2019, for example).
### 3.1 Thomson (2012) and Schwendeman et al. (2014) dataset (TSG14)
The first data are from Thomson2012 and Schwendeman2014, hereafter TSG14, and
were collected in the Strait of Juan de Fuca, Washington. These data were
collected by a gray-scale video camera with a resolution of $640\times 480$
pixels installed above the wheelhouse of the R/V Robertson, which recorded at
an acquisition rate of $30$ Hz (Schwendeman et al., 2014). These data
were then projected into a metric coordinate grid with resolution of 0.25m
(cross wave) and 0.075m (along wave) using the method proposed by Holland1997
and were then used to obtain $\Lambda(c)$ using the spectral approach of
Thomson2009a. The data were collected in a (usually) fetch-limited region and
for a young sea state; note, however, that the particular sea-states analyzed
here may not be fetch-limited. Figure 3-a shows the measured wave spectra,
Figure 3-b shows $\Lambda(c)$ distributions, and Table 1 shows a summary of
these data. For these data, $P_{b}$ was calculated using the measured
$\Lambda(c)$ distributions combined with the method described below in
Equation 37. Additional information regarding the data collection is available
from Thomson2012 and Schwendeman2014.
### 3.2 Sutherland and Melville (2013) dataset (SM13)
The second dataset is from Sutherland2013, hereafter SM13, and was collected
using the Research Platform R/P FLIP during a two-day field campaign in the
Southern California Bight under the scope of the SoCal 2010 experiment
(Sutherland and Melville, 2013). Here, we focus only on the visible imagery
collected by these authors for consistency with the previously presented
data. Stereo video data were collected by a pair of video cameras mounted on
the R/P FLIP for 10 minutes at the start of each hour and $\Lambda(c)$ was
obtained using a variation of the method of Kleiss2011, that is, tracking the
temporal evolution of breakers obtained via pixel intensity threshold. Figure
3-c shows the measured wave spectra, Figure 3-d shows $\Lambda(c)$
distributions, and Table 1 shows a summary of these data. Note that wave
breaking was not observed for frequencies below $0.2$ $Hz$ and that, from
numerical simulations (not shown), these waves corresponded to a cross-swell
not forced by the wind; our analyses therefore only consider waves in the
frequency range $0.2<f<0.8$ $Hz$. Additional information regarding the data
collection is
available from Sutherland2013. For these and TSG14 data, $P_{b}$ was
calculated using the measured $\Lambda(c)$ distributions combined with the
formulas from Banner2010:
$P_{b}=\frac{\int_{c_{0}}^{c_{1}}c\Lambda(c)dc}{\int_{c_{0}}^{c_{1}}c\Pi(c)dc}$
(37)
where $c_{0}=\frac{g}{2\pi}\frac{1}{1.3f_{p}}$,
$c_{1}=\frac{g}{2\pi}\frac{1}{0.7f_{p}}$, $\Pi(c)=\chi g/(2\pi c^{3})$ and
$\chi=0.6$. The implication of this choice is discussed in further detail in
Section 5.
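Equation 37 can be sketched numerically as below. The $\Lambda(c)$ used here is a hypothetical Phillips-like $c^{-6}$ tail standing in for the measured distributions of Figure 3, and the peak frequency $f_{p}=0.3$ Hz is likewise an assumed value; only $\chi=0.6$ and the form of $\Pi(c)$ come from the text.

```python
import numpy as np

g, chi, fp = 9.81, 0.6, 0.3            # chi from Banner2010; fp hypothetical
c0 = g / (2 * np.pi) / (1.3 * fp)      # slowest dominant crest speed (Equation 37)
c1 = g / (2 * np.pi) / (0.7 * fp)      # fastest dominant crest speed

c = np.linspace(c0, c1, 500)
# Hypothetical Lambda(c): a Phillips-like c^-6 tail (cf. Figure 3).
Lam = c ** -6.0

Pi = chi * g / (2 * np.pi * c**3)      # crest-length density of all waves
Pb = np.trapz(c * Lam, c) / np.trapz(c * Pi, c)
print(Pb)
```

With these assumed inputs, $P_{b}$ comes out at the $10^{-2}$ level, the same order as the observed values in Table 1; the point of the sketch is the mechanics of Equation 37, not the value.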
### 3.3 Banner, Babanin and Young (2000) dataset (B00)
The third dataset is from Banner2000, hereafter B00, and was collected in the
Black Sea (BS), Lake Washington (LW) and the Southern Ocean (SO). These
authors directly provide values for significant wave height $H_{m0}$, peak
period ($T_{p}$) and the wave breaking probability in their Tables 1 (Black
Sea, denoted as BS here) and 2 (Southern Ocean, denoted as SO here). The
majority of the data were collected in the Black Sea (13 data runs) and two
data runs are from the Southern Ocean. Given that the original spectral data
were not published alongside their paper, we approximate the observed spectra
using the provided pairs $H_{m0}$, $T_{p}$ assuming a JONSWAP shape with
$\gamma_{js}=3.3$ (as previously done in Filipot2010, for example). Given that
in this paper we are only interested in a very narrow spectral band, the
differences between observed and simulated spectra should be minimal. For more
details regarding these data, refer to Banner2000.
Table 1: Data summary for the two experiments described in Sections 3.1 and
3.2. Note that the parameters obtained from wave spectra were computed
specifically for the bands shown in Figure 3 for TSG14 and SM13 cases. The
wave height ($H_{p}$) and wave steepness ($\epsilon$) parameters for dominant
waves were calculated as per Banner2002 (see Section A.1 for details). The
wave age parameter was calculated as $c_{p}/u_{*}$ .
Dataset | Date | Length | $H_{m_{0}}$ | $T_{p}$ | $H_{p}$ | $\epsilon$ | $U_{10}$ | $u_{*}$ | $c_{p}$ | Wave age | $P_{b}$
---|---|---|---|---|---|---|---|---|---|---|---
| $[-]$ | $[min]$ | $[m]$ | $[s]$ | $[m]$ | $[-]$ | $[ms^{-1}]$ | $[ms^{-1}]$ | $[ms^{-1}]$ | $[-]$ | $[-]$
TSG14 | 14/02/2011 20:33 | 6.5 | 0.75 | 2.88 | 0.66 | 0.160 | 11.50 | 0.373 | 4.50 | 12.07 | 3.54E-03
TSG14 | 14/02/2011 20:58 | 5.1 | 0.75 | 2.96 | 0.66 | 0.152 | 12.55 | 0.417 | 4.62 | 11.08 | 9.57E-03
TSG14 | 14/02/2011 21:30 | 6.5 | 0.91 | 2.99 | 0.82 | 0.184 | 15.07 | 0.561 | 4.67 | 8.33 | 6.29E-02
TSG14 | 14/02/2011 21:44 | 8.5 | 1.09 | 3.17 | 1.00 | 0.200 | 15.73 | 0.599 | 4.94 | 8.25 | 1.01E-01
TSG14 | 14/02/2011 22:29 | 6 | 1.21 | 3.44 | 1.09 | 0.186 | 17.24 | 0.636 | 5.36 | 8.44 | 1.51E-01
TSG14 | 14/02/2011 22:37 | 4.8 | 1.37 | 3.53 | 1.24 | 0.199 | 18.01 | 0.660 | 5.52 | 8.36 | 7.61E-02
TSG14 | 15/02/2011 19:04 | 10 | 0.87 | 3.29 | 0.79 | 0.146 | 14.45 | 0.360 | 5.13 | 14.28 | 3.75E-03
TSG14 | 15/02/2011 19:19 | 6 | 0.90 | 3.31 | 0.81 | 0.149 | 13.11 | 0.477 | 5.17 | 10.85 | 4.05E-02
SM13 | 06/12/2010 21:59 | 10 | 0.61 | 3.51 | 0.52 | 0.085 | 6.46 | 0.205 | 5.48 | 26.68 | 7.96E-03
SM13 | 06/12/2010 23:00 | 10 | 0.61 | 3.33 | 0.54 | 0.097 | 7.55 | 0.342 | 5.20 | 15.22 | 1.95E-03
SM13 | 07/12/2010 00:00 | 10 | 0.73 | 3.45 | 0.66 | 0.112 | 8.62 | 0.319 | 5.38 | 16.85 | 3.24E-03
SM13 | 08/12/2010 00:00 | 10 | 0.34 | 2.04 | 0.23 | 0.110 | 5.24 | 0.160 | 3.19 | 19.96 | 1.65E-02
B00 (SO) | 10/6/1992 | 5 | 9.20 | 13.46 | 8.02 | 0.089 | 19.80 | 0.835 | 21.01 | 25.17 | 2.70E-02
B00 (SO) | 11/6/1992 | 9 | 4.20 | 12.04 | 3.66 | 0.051 | 16.00 | 0.626 | 18.78 | 30.02 | 0.00E+00
B00 (BS) | 1993 | 34-68 | 0.39 | 2.78 | 0.34 | 0.089 | 11.70 | 0.414 | 4.34 | 10.49 | 3.80E-02
B00 (BS) | 1993 | 34-68 | 0.49 | 2.94 | 0.43 | 0.100 | 12.70 | 0.461 | 4.59 | 9.96 | 6.50E-02
B00 (BS) | 1993 | 34-68 | 0.53 | 3.33 | 0.47 | 0.084 | 14.00 | 0.524 | 5.20 | 9.93 | 6.00E-02
B00 (BS) | 1993 | 34-68 | 0.54 | 3.23 | 0.47 | 0.092 | 14.40 | 0.544 | 5.04 | 9.26 | 5.20E-02
B00 (BS) | 1993 | 34-68 | 0.38 | 2.27 | 0.34 | 0.131 | 15.00 | 0.574 | 3.55 | 6.18 | 6.30E-02
B00 (BS) | 1993 | 34-68 | 0.45 | 2.56 | 0.40 | 0.121 | 14.60 | 0.554 | 4.00 | 7.23 | 6.70E-02
B00 (BS) | 1993 | 34-68 | 0.45 | 2.44 | 0.40 | 0.134 | 13.70 | 0.509 | 3.81 | 7.49 | 8.40E-02
B00 (BS) | 1993 | 34-68 | 1.19 | 5.88 | 1.04 | 0.061 | 8.70 | 0.295 | 9.18 | 31.10 | 0.00E+00
B00 (BS) | 1993 | 34-68 | 1.32 | 6.24 | 1.15 | 0.060 | 11.20 | 0.391 | 9.74 | 24.91 | 0.00E+00
B00 (BS) | 1993 | 34-68 | 0.83 | 6.24 | 0.73 | 0.038 | 9.50 | 0.322 | 9.74 | 30.22 | 0.00E+00
B00 (BS) | 1993 | 34-68 | 0.89 | 5.88 | 0.78 | 0.045 | 10.70 | 0.368 | 9.18 | 24.91 | 0.00E+00
B00 (BS) | 1993 | 34-68 | 0.99 | 3.71 | 0.87 | 0.127 | 10.00 | 0.339 | 5.79 | 17.06 | 3.40E-02
B00 (BS) | 1993 | 34-68 | 0.88 | 4.00 | 0.77 | 0.097 | 8.70 | 0.295 | 6.24 | 21.14 | 5.80E-02
Figure 3: Field data. a) Spectral data from TSG14. b) $\Lambda(c)$ data from
TSG14. c) Spectral data from SM13. d) $\Lambda(c)$ data from SM13. The
coloured circular markers in a) and c) show the peak frequency ($f_{p}$) and
the coloured circular markers in b) and d) show the peak wave speed
($c_{p}$). The red dashed line in b) and d) shows the theoretical $c^{-6}$
decay predicted by Phillips1985. In all plots, the color scale shows the wave
age ($c_{p}/u_{*}$).
## 4 Results
### 4.1 Comparison with Field Data
Figure 4 shows the comparison between estimated (or observed) (x-axis) and
modelled (y-axis) values of $P_{b}$ for each model. In general, no model was
able to closely reproduce the trends seen in the combined observed data,
regardless of the underlying mathematical or physical formalism. Furthermore,
orders of magnitude of difference between the models and, more worryingly,
between the models and the measured data were observed. In general, models
based on a wave steepness-derived wave breaking criterion (Banner2000,
Banner2002, for example) overestimated data derived from $\Lambda(c)$ while
models based on $\Lambda(c)$ (Melville2002 and Sutherland2013, for example)
underestimated $P_{b}$ data that was not derived from $\Lambda$ (that is, B00
data). The model from Filipot2010 was found to be the most consistent model.
From Figure 4-g, the formulation presented in Section 2 with $A$ = $A_{lin}$ =
0.382 underestimated the observed $P_{b}$ for B00 and SM13 data (note that
$P_{b}$ was too low to be displayed on the plot) but performed relatively well
for the majority of TSG14 data. Using the mean absolute error (MAE) as a
convenient metric to assess the models, it was found that the present model
has errors of the same order of magnitude as the previous models. Given the
spread in the results seen in Figure 4, no model could be considered a clear
winner. For the discussion of these results, see Section 5.
Figure 4: Comparison between measured and computed $P_{b}$ for different
models and data. a) Banner2000, b) Banner2002, c) Melville2002, d)
Filipot2010, e) Sutherland2013, f) Romero2019, and g) present model with
$A$=$A_{lin}$=0.382. The thick black line shows the linear regression between
measured and modelled $P_{b}$ and the blue dashed line indicates the one-to-
one correspondence in all panels. Data points with modelled $P_{b}$ $<$
$10^{-5}$ or observed $P_{b}$ = $0$ are not shown in this plot. In all plots,
$r_{xy}$ is Pearson’s correlation coefficient and $MAE$ indicates the mean
absolute error. Note the logarithmic scale.
### 4.2 Model Optimization
From the analysis of Figure 2, minor changes in $A$ can lead to major
variations in $P_{b}$. Further, from the analysis of Figure 4, the proposed
model underestimated $P_{b}$ for $A$ = $A_{lin}$ = 0.382, particularly for
SM13 and B00 data. Given that it is common practice to optimize wave breaking
models for particular datasets, we present two methods to do so using TSG14
data as an example. The same could be done for B00 and SM13 data but, for
brevity, this is not done here. Given that the present model is not
computationally expensive, the first approach consisted of varying $A$ from
0.1 to 0.5 in 0.001 intervals and finding the value of $A$ that resulted in
the lowest error
($\sqrt{\left(p_{b_{i}}^{d}-p_{b_{i}}^{m}\right)^{2}}$, where the superscripts
$d$ and $m$ indicate observed and modelled data, respectively) for each data
run. Figure 5-a shows the results of this procedure. The value $A$ = $A_{opt}$
= 0.24 was, on average, the optimal value for this particular dataset. The
second approach consisted of parameterizing the optimal value of $A$ for each
data run as a function of a known environmental variable, in this example, the
wave age $c_{p}/u_{*}$ (Figure 5-b). The results of these two approaches are
shown in Figures 5-c and d, respectively. Both approaches considerably
improved the model results relative to the baseline model presented in Figure
4, with the parametric model (Figure 5-d) performing slightly better when
considering
Pearson’s correlation coefficient ($r_{xy}$) as a comparison metric.
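The first optimization procedure reduces to a one-dimensional grid search. The schematic below uses a hypothetical stand-in for the model (a decaying exponential in $A$, qualitatively like Figure 2-b) and made-up "observed" runs; in practice `pb_model` would be replaced by the full Section 2 model evaluated on each run's spectrum.

```python
import numpy as np

def pb_model(A, scale):
    """Hypothetical stand-in for P_b(A): monotonically decreasing in A,
    qualitatively like Figure 2-b. Not the model of Section 2."""
    return np.exp(-(A / scale) ** 2)

# Hypothetical "observed" runs: (observed P_b, per-run scale parameter).
runs = [(0.05, 0.14), (0.10, 0.16), (0.02, 0.12)]

# Sweep A from 0.1 to 0.5 in 0.001 intervals, keep the per-run minimizer,
# then average, as in the first procedure of Section 4.2.
A_grid = np.arange(0.1, 0.5, 0.001)
A_opt_per_run = []
for pb_obs, scale in runs:
    err = (pb_model(A_grid, scale) - pb_obs) ** 2
    A_opt_per_run.append(A_grid[np.argmin(err)])

A_opt = float(np.mean(A_opt_per_run))
print(A_opt)
```

The per-run curves correspond to the coloured lines of Figure 5-a and the average to its black line.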
Figure 5: Results of the optimization procedures. a) Optimization curves for
each data record (coloured lines) and the global average (black line). The
vertical dashed line shows $A$ = $A_{opt}$ = 0.24. b) Parametrization of $A$
as a function of $c_{p}/u_{*}$. The blue swath indicates the 95% confidence
interval. For this particular case, $A$ = $0.008c_{p}/u_{*}$ $+$ $0.16$. Note
the logarithmic scale in a), c) and d). In all plots, the color scale shows
the wave age ($c_{p}/u_{*}$). In b) to d) $r_{xy}$ is Pearson’s correlation
coefficient.
## 5 Discussion
We have introduced a new model for obtaining the probability of wave breaking
($P_{b}$) for dominant waves based on the theoretical joint probability
density distribution between wave phase speed ($c$) and horizontal orbital
velocity at the wave crest ($u$) for unidirectional Gaussian wave fields. The
present model has only one parameter for defining the wave breaking threshold
($A$), which makes it relatively easy to optimize for a given dataset (as
shown in Section 4.2). While the proposed model performed relatively well for
one of the investigated datasets (TSG14), it greatly underestimated $P_{b}$
for the two other datasets (SM13 and B00). For the data investigated here,
such underestimation did not result in a high mean absolute error (MAE) and,
in fact, our model had one of the lowest MAEs. Recent results of
Barthelemy2018, Derakhti2020 and Varing2020 showed that waves with horizontal
fluid velocity that exceeds 0.85 times the phase velocity will inevitably
break. These results suggest that the breaking threshold derived from
Cokelet1977 in Section 2.3 could be reduced by $\approx$15%. If we apply their
findings to our case, we obtain $A$ $=$ $0.382$ $\times$ $0.85$ $=$ $0.324$
which would help to reduce the underestimation of $P_{b}$, but not
significantly. It is more probable that other environmental phenomena such as
direct wind forcing, directional spreading and long wave modulation, which are
not accounted for in our model, are the reason for such differences.
One of the most challenging aspects when assessing our model is, nevertheless,
regarding the field data. The attribution of wave breaking occurrences to wave
scales using timeseries analysis, as done in Banner2000 or Filipot2010, is
difficult because several wave scales can be present at the same time and
space. This led us to use $\Lambda(c)$ observations as well as data from
Banner2000 to investigate our model. Different interpretations of how
$\Lambda(c)dc$ is computed from field data can, however, generate orders of
magnitude of difference in its moments (Gemmrich et al., 2013; Banner et al.,
2014) and, consequently, in $P_{b}$. Next, it is difficult to relate the speed
of the
wave breaking front to the phase speed of the carrying wave because small,
slower breaking waves could merely be traveling on top of longer, much faster
waves. In particular, we believe that these wave breaking events can
significantly contribute to the observed $\Lambda(c)dc$ distribution as they
would have $c$ close to the peak wave phase speed. This wave breaking “sub-
population” has not received much research interest because of its apparently
small contribution to energy dissipation but, for our particular case, it
directly impacts model validation.
Further, relating $\Lambda(c)$ to $P_{b}$ is also challenging. Here, we
adopted the convenient formula from Banner2010. While this formula has some
support in the literature (Ardhuin et al., 2010), the actual functional form
of $\Pi(c)$ and the value of the constant $\chi$ (see Equation 37) are
unknown, and changes in these will lead to changes in $P_{b}$. The Gaussian
framework
developed in Section 2.1 provides an alternative method to obtain $\Pi(c)$
(from Equation 3, for example) but this is beyond the scope of this
introductory paper and will be the focus of a future publication.
Finally, we would like to re-emphasize that our model is derived in the space
domain whereas $P_{b}$ data is (at least partially) obtained in the time
domain. For the narrow spectral band investigated here, Monte-Carlo simulations of linear waves indicate that $P_{b}$ modelled in space differs by less than five percent from $P_{b}$ modelled in time (not shown). Given all these complications, and the fact that some historical models are being compared to the very data that was used to create them (e.g., Banner et al., 2000, and Sutherland and Melville, 2013), we are unable to provide an accurate ranking of
the existing models. Future research should focus, therefore, on obtaining
$P_{b}$ data that is unambiguous and widely available. In this regard, and
despite its own limitations, wave tank experiments could bring further insight into the statistics of dominant (and non-dominant) breaking waves. Such a dataset would
ultimately allow researchers to focus on models derived from physical and
mathematical concepts (such as ours) rather than on empirical concepts.
## 6 Conclusion
We have presented a new statistical wave breaking model derived from Gaussian
field theory that we have applied to obtain the probability of wave breaking
for dominant, wind-sea waves. Although more mathematically complex than
previous formulations, the present model relies on the ratio between the crest orbital velocity and the phase speed and uses only a single free parameter, the wave breaking threshold $A$. Using theoretical results obtained by Cokelet (1977) for regular nearly breaking waves, we derived a wave breaking threshold to adapt our linear model to non-linear waves. The present model has errors of the same order of magnitude as six other historical models when
assessed using three field datasets. For a particular dataset (TSG14), our model performed well, especially if the free parameter $A$ is fine-tuned.
Additional observations are, however, required to further understand and quantify the dependence of $A$ on environmental parameters that are not accounted for in our model (for example, wind forcing, wave directionality or modulation by long waves). Future research should be dedicated to collecting more wave breaking observations in different and repeatable environmental conditions to provide reliable constraints for the optimization of the present
and other wave breaking models. Still, although the research presented here is at an early stage, the present model should be extendable to waves of any scale and, therefore, has the potential to be implemented in current state-of-the-art spectral wave models as a new wave breaking dissipation source term with relatively little effort.
## Appendix A Historic Parametric Wave Breaking Models
### A.1 Banner et al. (2000)
Banner et al.’s (2000) is a popular model for calculating wave breaking probabilities for deep water, dominant waves. This model follows from observations and results from Donelan et al. (1972), Holthuijsen and Herbers (1986) and Banner and Tian (1998), who demonstrated the importance of the wave group modulation on the wave breaking
onset. These authors conveniently obtained a parameterization for the
probability of wave breaking ($P_{b}$) based solely on the spectral steepness
of the dominant wave scale ($\epsilon_{p}$), assuming that their formulas
would capture the influence of the wave group modulation on the wave breaking
onset. Their formulation was derived using a dataset of measurements collected
in various environments ranging from lakes to open ocean conditions Banner .
(2000). From these observations, these authors were then able to obtain a wave
breaking threshold behaviour for the dominant waves as a function of the
dominant spectral wave steepness given by:
$\epsilon_{p}=\frac{H_{p}k_{p}}{2}$ (38)
in which $k_{p}$ is the wavenumber at peak frequency ($f_{p}$) and $H_{p}$ is
the significant wave height of the dominant waves calculated as:
$H_{p}=4\sqrt{\left(\int_{0.7f_{p}}^{1.3f_{p}}E(f)df\right)}$ (39)
where $E(f)$ is the spectra of wave heights as a function of frequency. For
their data, $P_{b}$ was then parameterized as a single equation with three
free parameters ($p_{1}$, $p_{2}$, $p_{3}$):
$P_{b}=p_{1}(\epsilon_{p}-p_{2})^{p_{3}},$ (40)
For the available field data, Banner et al. (2000) found optimal values of $p_{1}=22$, $p_{2}=0.055$, and $p_{3}=2.01$. Note that hereafter the free parameters of the different models will be denoted as $p_{n}$, where $n$ is a sequential number.
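As an illustration, this parameterization can be evaluated numerically from a frequency spectrum. The sketch below (function names are ours) assumes the multiplicative form $P_{b}=p_{1}(\epsilon_{p}-p_{2})^{p_{3}}$, deep-water linear dispersion for $k_{p}$, and a uniform frequency grid:

```python
import numpy as np

def banner2000_pb(f, E, p1=22.0, p2=0.055, p3=2.01, g=9.81):
    """Breaking probability of dominant waves (Banner et al., 2000, Eqs. 38-40).

    Assumes the multiplicative form P_b = p1*(eps_p - p2)**p3 and the
    deep-water linear dispersion relation k_p = (2*pi*f_p)**2 / g.
    f, E: uniform frequency grid [Hz] and spectral density E(f) [m^2/Hz].
    """
    df = f[1] - f[0]
    fp = f[np.argmax(E)]                          # peak frequency
    band = (f >= 0.7 * fp) & (f <= 1.3 * fp)      # dominant wave band
    Hp = 4.0 * np.sqrt(np.sum(E[band]) * df)      # Eq. (39), rectangle rule
    kp = (2.0 * np.pi * fp) ** 2 / g              # deep-water wavenumber at peak
    eps_p = Hp * kp / 2.0                         # Eq. (38), spectral steepness
    return p1 * max(eps_p - p2, 0.0) ** p3        # Eq. (40), zero below threshold
```

For a narrow spectrum whose dominant steepness lies below the threshold $p_{2}=0.055$, the predicted breaking probability is zero.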
### A.2 Banner et al. (2002)
This work extended the Banner et al. (2000) model to shorter wave scales (up to 2.48 times the peak wave frequency). From field data, Banner et al. (2002) reported that the waves were breaking if the saturation spectrum $\sigma(f)=(2\pi)^{4}f^{5}E(f)/(2g^{2})=\sigma(k)=k^{4}E(k)$ exceeded a threshold that was frequency dependent. These authors related this dependence to the directional spreading $\overline{\theta(k)}$, which later led Banner and Morison (2010) to explicitly define the following empirical formulation:
$P_{b}(k_{c})=\mathcal{H}_{h}(\tilde{\sigma}(k_{c})-p_{1})\times
p_{2}\times(\tilde{\sigma}(k_{c})-\tilde{\sigma}_{t}),$ (41)
in which $\mathcal{H}_{h}$ is the Heaviside step function, $k_{c}$ is the central wavenumber for a given wavenumber range, $\tilde{\sigma}(k_{c})=\sigma(k_{c})/\overline{\theta(k_{c})}$ is the saturation spectrum normalized by the averaged directional spreading, and $p_{1}=0.0045$ (the saturation threshold $\tilde{\sigma}_{t}$) and $p_{2}=33$ are constants obtained from their observations. Following Banner et al. (2002), the directional spreading angle is calculated according to Hwang et al. (2000) (their Equation 19a):
$\theta\left(\frac{k}{k_{p}}\right)=\begin{cases}0.35+1.05\left(1-\frac{k}{k_{p}}\right)&\mbox{if }\frac{k}{k_{p}}<1.05\\ 0.30+0.087\left(\frac{k}{k_{p}}-1\right)&\mbox{if }1.05\leq\frac{k}{k_{p}}<5\end{cases}$ (42)
where $\theta$ is the directional spreading angle as a function of the
wavenumber.
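A minimal numerical sketch of this threshold parameterization and of the Hwang et al. (2000) spreading function is given below (function names are ours; we interpret the Heaviside threshold $p_{1}$ as the saturation threshold $\tilde{\sigma}_{t}$):

```python
import numpy as np

def hwang2000_spreading(k_over_kp):
    """Directional spreading angle theta(k/kp), Eq. (42) (Hwang et al., 2000)."""
    r = np.asarray(k_over_kp, dtype=float)
    return np.where(r < 1.05,
                    0.35 + 1.05 * (1.0 - r),
                    0.30 + 0.087 * (r - 1.0))

def banner2010_pb(sigma, theta, p1=0.0045, p2=33.0):
    """Eq. (41): breaking probability from the normalized saturation spectrum.

    sigma: saturation sigma(k_c); theta: averaged directional spreading.
    Interprets the threshold sigma_t as p1, so P_b is zero below it.
    """
    sig_n = sigma / theta                       # normalized saturation
    return np.where(sig_n > p1, p2 * (sig_n - p1), 0.0)
```

Below the threshold the predicted breaking probability is exactly zero, and it grows linearly with the normalized saturation above it.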
### A.3 Filipot et al. (2010)
This method follows from the original works of Le Méhauté (1962), Battjes and Janssen (1978) and Thornton and Guza (1983). It assumes that the probability distribution function (PDF) of breaking wave heights in the dominant wave scale, parameterized by its central frequency $f_{c}$ or, equivalently, by its representative phase speed $c(f_{c})$, is the product of a Rayleigh PDF for the wave heights
$P(H,f_{c})=\frac{2H}{H_{rms}^{2}(f_{c})}\exp{\left[-\left(\frac{H}{H_{rms}(f_{c})}\right)^{2}\right]}$
(43)
in which
$H_{rms}(f_{c})=\frac{4}{\sqrt{2}}\sqrt{\int_{0}^{\infty}U_{f_{c}}(f)E(f)df}$ (44)
and
$U_{f_{c}}=0.5-0.5\cos\left(\frac{\pi}{\delta}\left[\frac{f}{f_{c}}-1-\delta\right]\right)$
(45)
where $\delta$ is the bandwidth of a Hann window (in this study,
$\delta=0.6$), and a weighting function
$W(H,f_{c})=p_{1}\left[\frac{\beta_{r}}{\beta}\right]^{2}\left\\{1-\exp{\left[-\left(\frac{\beta}{\tilde{\beta}}\right)^{p_{2}}\right]}\right\\}$
(46)
in which $\beta=kH/\tanh(kh)$, and $p_{1}$ and $p_{2}$ are free parameters. In order to extend the formulation outside the shallow water domain, these authors replaced Thornton and Guza’s (1983) breaking criterion, based on the wave height ($H$) to water depth ($h$) ratio ($\gamma=H/h=0.42$), with an adaptation of Miche’s (1944) wave breaking parameter:
$\beta_{r}=\frac{\overline{k_{r}}(f_{c})H_{rms}(f_{c})}{\tanh{(\overline{k_{r}}(f_{c})h)}}$
(47)
in which
$\overline{k_{r}}(f_{c})=\frac{\int_{0}^{\infty}U_{f_{c}}(f)k(f)E(f)df}{\int_{0}^{\infty}U_{f_{c}}(f)E(f)df}$
(48)
and
$\tilde{\beta}=b(b_{3}\tanh(kh)^{3}-b_{2}\tanh(kh)^{2}+b_{1}\tanh(kh)-b_{0})$
(49)
in which $b=0.48$, $b_{3}=1.0314$, $b_{2}=1.9958$, $b_{1}=1.5522$, and
$b_{0}=0.1885$. In their model, the variable $\tilde{\beta}$ was obtained via numerical calculations of regular nearly breaking waves using the stream function wave theory of Dean (1965). Finally, the wave breaking probability is obtained as:
$P_{b}(f_{c})=\int_{0}^{\infty}P(H,f_{c})W(H,f_{c})dH\leq 1.$ (50)
To keep consistency with Section A.2, $P_{b}$ will only be considered at the spectral peak; other definitions are, however, also possible.
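Equation (50) can be evaluated with straightforward numerical quadrature. The sketch below (our function names) fixes a single spectral band, computes $\tilde{\beta}$ from Equation (49), and takes $p_{1}$ and $p_{2}$ as illustrative values rather than the calibrated ones of Filipot et al. (2010):

```python
import numpy as np

def beta_tilde(kh, b=0.48, b3=1.0314, b2=1.9958, b1=1.5522, b0=0.1885):
    """Eq. (49): breaking steepness threshold as a function of kh."""
    t = np.tanh(kh)
    return b * (b3 * t**3 - b2 * t**2 + b1 * t - b0)

def filipot2010_pb(Hrms, k, h, beta_r, btilde, p1=1.5, p2=4.0, nH=2000):
    """Eq. (50): P_b = int P(H) W(H) dH, capped at 1 (rectangle rule).

    p1, p2 are illustrative values for the free parameters of Eq. (46),
    not the calibrated values of Filipot et al. (2010).
    """
    H = np.linspace(1e-6, 6.0 * Hrms, nH)
    P = 2.0 * H / Hrms**2 * np.exp(-(H / Hrms) ** 2)           # Eq. (43)
    beta = k * H / np.tanh(k * h)                               # local steepness
    W = p1 * (beta_r / beta) ** 2 * (1.0 - np.exp(-(beta / btilde) ** p2))  # Eq. (46)
    return min(np.sum(P * W) * (H[1] - H[0]), 1.0)
```

In deep water ($kh\gg 1$, $\tanh(kh)\to 1$), Equation (49) gives $\tilde{\beta}\approx 0.19$.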
### A.4 Models based on Phillips’ (1985) $\Lambda(c)$
The major issue with the previous models is the difficulty of obtaining reliable observations of the wave breaking probabilities as a spectral distribution solely from point measurements. Due to the presence of different wave scales at the same time and location, it is indeed difficult to assign a breaking occurrence to a given wave frequency or wavenumber. To avoid this problem, Phillips (1985) proposed to use the speed of the breaking front as a proxy for the phase speed of the carrying wave. Phillips (1985) defined the parameter
$\Lambda(c)dc$ as the “average total length per unit surface area of breaking
fronts that have velocities in the range $c$ to $c+dc$” and then defined the
following quantities:
$L=\int\Lambda(c)dc$ (51)
and
$R=\int c\Lambda(c)dc$ (52)
which represent the “total length of breaking fronts per unit area” (Equation
51) and “the total number of breaking waves of all scales passing a given
point per unit time” (Equation 52). Assuming that Phillips’s (1985) assumptions hold, it is possible to obtain parametric models for $\Lambda$ from known variables (e.g., wind speed) and, consequently, for $P_{b}$ (see Equation 37).
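Given a discretized $\Lambda(c)$, the two moments above are simple quadratures. A minimal sketch (our function name), assuming a uniform speed grid:

```python
import numpy as np

def lambda_moments(c, lam):
    """L (Eq. 51) and R (Eq. 52) from Lambda(c) on a uniform speed grid."""
    dc = c[1] - c[0]
    L = np.sum(lam) * dc            # total breaking-front length per unit area
    R = np.sum(c * lam) * dc        # rate of breaker passage at a point
    return L, R
```

Both moments are first evaluated from the same $\Lambda(c)$ distribution, so any order-of-magnitude ambiguity in $\Lambda(c)$ propagates directly into them.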
#### A.4.1 Melville and Matusov (2002)
Melville and Matusov’s (2002) model relies only on the wind speed measured at 10 m height ($U_{10}$) to obtain $\Lambda(c)$. Following Melville and Matusov (2002) and using the explicit formula given by Reul and Chapron (2003), this parameterization is written as:
$\Lambda(c)=p_{1}\left[\frac{U_{10}}{10}\right]^{3}10^{-4}\exp{[-(p_{2}c)]}$
(53)
in which $p_{1}$ and $p_{2}$ are constants. For their data, Melville and Matusov (2002) found $p_{1}=3.3$ and $p_{2}=0.64$. As discussed by Reul and Chapron (2003), this formulation approaches Phillips’s (1985) theoretical $c^{-6}$ behaviour but may overestimate the number of small breakers.
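Equation (53) is straightforward to evaluate; the sketch below (our function name) makes explicit its cubic wind-speed scaling and exponential decay in $c$:

```python
import numpy as np

def melville2002_lambda(c, U10, p1=3.3, p2=0.64):
    """Eq. (53): Lambda(c) from the 10-m wind speed alone (Melville & Matusov, 2002)."""
    return p1 * (U10 / 10.0) ** 3 * 1e-4 * np.exp(-p2 * c)
```

Doubling the wind speed multiplies $\Lambda(c)$ by eight at every $c$, reflecting the $U_{10}^{3}$ dependence.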
#### A.4.2 Sutherland and Melville (2013)
Sutherland and Melville (2013) used dimensional analysis to scale $\Lambda(c)$ and obtain a parameterization that is a function of the friction velocity ($u_{*}$), peak wave phase speed ($c_{p}$), significant wave height ($H_{s}$) and three constants. From Sutherland and Melville’s (2013) Equation 9 and their Figure 4, $\Lambda(c)$ is calculated as:
$\Lambda(c)=p_{1}\frac{g}{c_{p}^{3}}\left(\frac{u_{*}}{c_{p}}\right)^{p_{2}}\left(\frac{c}{\sqrt{gH_{s}}}\left(\frac{gH_{s}}{c_{p}^{2}}\right)^{p_{3}}\right)^{-6}$
(54)
where $p_{1}=0.05$, $p_{2}=0.5$, and $p_{3}=0.1$ are constants obtained from the available data. Their formulation reproduces Phillips’s (1985) $c^{-6}$ dependency but does not have the typical roll-off at low $c$, as these authors chose to use infrared (rather than visible) imagery to obtain and model their $\Lambda(c)$. This choice included in their model the contribution of micro-scale breakers that do not generate visible bubbles, hence the difference.
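The scaling of Equation (54) can likewise be coded directly; the sketch below (our function name) makes explicit that the $c^{-6}$ dependence enters through the non-dimensional speed:

```python
import numpy as np

def sutherland2013_lambda(c, ustar, cp, Hs, p1=0.05, p2=0.5, p3=0.1, g=9.81):
    """Eq. (54): dimensionally scaled Lambda(c) (Sutherland & Melville, 2013)."""
    c_nd = c / np.sqrt(g * Hs) * (g * Hs / cp**2) ** p3   # non-dimensional speed
    return p1 * (g / cp**3) * (ustar / cp) ** p2 * c_nd ** (-6)
```

Because $c$ enters only through the non-dimensional speed, halving $c$ increases $\Lambda(c)$ by a factor of $2^{6}=64$, with no roll-off at low speeds.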
#### A.4.3 Romero (2019)
Recently, Romero (2019) developed and implemented a new wave breaking parameterization in WaveWatchIII which relies exclusively on $\Lambda(c)$. Unlike previous parameterizations, Romero’s (2019) takes into account the modulations of $\Lambda(c)$ due to both winds and long waves. His model is fairly general but depends on six free parameters that needed to be laboriously obtained by comparing WaveWatchIII’s significant wave height outputs with available significant wave height measurements from buoy data. In Romero’s (2019) model, $\Lambda$ was modelled assuming that it is proportional to the length of crests exceeding a slope threshold:
$\Lambda(f,\theta)=\left(\frac{2(2\pi)^{2}p_{1}}{g}\right)f\exp{\left[-\left(\frac{p_{2}}{B(f,\theta)}\right)\right]M_{LW}M_{W}}$
(55)
where $p_{1}=3.5\times 10^{-5}$ and $p_{2}=5\times 10^{-3}$ are constants obtained from the data, $M_{LW}$ is the modulation due to long waves, $M_{W}$ is the modulation due to winds and $B(f,\theta)$ is the directional wave breaking saturation spectrum:
$B(f)=\int_{0}^{2\pi}B(f,\theta)d\theta=E(f)\left(\frac{(2\pi)^{4}f^{5}}{2g^{2}}\right).$ (56)
The modulation due to long waves is calculated according to Guimarães (2018):
$M_{LW}=\left[1+p_{3}\sqrt{\operatorname{cmss}{(E(f))}}\cos^{2}(\theta-\hat{\theta})\right]^{p_{4}}$
(57)
where $p_{3}=400$ and $p_{4}=3/2$ are also best-fit constants found by Romero (2019). The cumulative mean square slope ($\operatorname{cmss}$) is
defined as:
$\operatorname{cmss}=\int_{0}^{\infty}E(f)\left(\frac{(2\pi)^{4}f^{4}}{g^{2}}\right)df.$
(58)
and
$\hat{\theta}=\arctan\left(\frac{\int E(f,\theta)\sin{(\theta)}\,df\,d\theta}{\int E(f,\theta)\cos{(\theta)}\,df\,d\theta}\right).$ (59)
The modulation due to the wind is computed as:
$M_{W}=\frac{\left(1+p_{5}\max{\left(1,\frac{f}{f_{0}}\right)}\right)}{\left(1+p_{5}\right)}$
(60)
with
$f_{0}={p_{6}\frac{1}{u_{*}}\frac{g}{2\pi}}$ (61)
where $p_{5}=0.9$ is a constant related to the DIA algorithm and $p_{6}=3/28$
is yet another constant. Finally, the conversion from $\Lambda(f)$ to
$\Lambda(c)$ is done using the relation $\Lambda(c)dc=\Lambda(f)df$ and the
linear dispersion relation (see Romero’s (2019) Eqs. 17-23 for details).
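For the simplest case of deep-water linear waves, the $\Lambda(f)\to\Lambda(c)$ conversion mentioned above amounts to a Jacobian transformation with $c=g/(2\pi f)$. The sketch below (our function name) uses this simplifying assumption; Romero (2019) gives the general relations:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def lambda_f_to_c(f, lam_f):
    """Convert Lambda(f) to Lambda(c) using Lambda(c)dc = Lambda(f)df
    and the deep-water linear dispersion relation c = g/(2*pi*f)."""
    c = G / (2.0 * np.pi * f)             # phase speed of each frequency
    jac = G / (2.0 * np.pi * c ** 2)      # Jacobian |df/dc|
    return c, lam_f * jac
```

By construction the transformation conserves the integral, $\int\Lambda(f)df=\int\Lambda(c)dc$, which provides a simple numerical check.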
###### Acknowledgements.
This work benefited from France Energies Marines and State financing managed
by the National Research Agency under the Investments for the Future program
bearing the reference numbers ANR-10-IED-0006-14 and ANR-10-IEED-0006-26 for
the projects DiME and CARAVELE. The authors thank Peter Sutherland and Jim
Thompson for kindly sharing their data.
## Data Availability
All data used in this publication has been previously published by Banner et al. (2000), Sutherland and Melville (2013), and Schwendeman et al. (2014).
## References
* Alsina, JM. & Baldock, TE. (2007). Improved representation of breaking wave energy dissipation in parametric wave transformation models. Coastal Engineering 54, 765–769. doi:10.1016/j.coastaleng.2007.05.005
* Ardag, D. & Resio, DT. (2020). A new approach for modeling dissipation due to breaking in wind wave spectra. Journal of Physical Oceanography 50(2), 439–454.
* Ardhuin, F., Rogers, E., Babanin, AV., Filipot, JF., Magne, R., Roland, A. & Collard, F. (2010). Semiempirical dissipation source functions for ocean waves. Part I: Definition, calibration, and validation. Journal of Physical Oceanography 40(9), 1917–1941. doi:10.1175/2010JPO4324.1
* Banner, ML., Babanin, AV. & Young, IR. (2000). Breaking probability for dominant waves on the sea surface. Journal of Physical Oceanography 30(12), 3145–3160. doi:10.1175/1520-0485(2000)030<3145:BPFDWO>2.0.CO;2
* Banner, ML., Gemmrich, JR. & Farmer, DM. (2002). Multiscale measurements of ocean wave breaking probability. Journal of Physical Oceanography 32(12), 3364–3375. doi:10.1175/1520-0485(2002)032<3364:MMOOWB>2.0.CO;2
* Banner, ML. & Morison, RP. (2010). Refined source terms in wind wave models with explicit wave breaking prediction. Part I: Model framework and validation against field data. Ocean Modelling 33(1-2), 177–189. doi:10.1016/j.ocemod.2010.01.002
* Banner, ML. & Tian, X. (1998). On the determination of the onset of breaking for modulating surface gravity water waves. Journal of Fluid Mechanics 367, 107–137.
* Banner, ML., Zappa, CJ. & Gemmrich, JR. (2014). A note on the Phillips spectral framework for ocean whitecaps. Journal of Physical Oceanography 44(7), 1727–1734. doi:10.1175/JPO-D-13-0126.1
* Barthelemy, X., Banner, ML., Peirson, WL., Fedele, F., Allis, M. & Dias, F. (2018). On a unified breaking onset threshold for gravity waves in deep and intermediate depth water. Journal of Fluid Mechanics 841, 463–488. doi:10.1017/jfm.2018.93
* Battjes, JA. & Janssen, J. (1978). Energy loss and set-up due to breaking of random waves. Coastal Engineering, 569–587.
* Chawla, A. & Kirby, JT. (2002). Monochromatic and random wave breaking at blocking points. Journal of Geophysical Research: Oceans 107(C7), 4-1.
* Cokelet, ED. (1977). Steep gravity waves in water of arbitrary uniform depth. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 286(1335), 183–230. doi:10.1098/rsta.1977.0113
* Dean, RG. (1965). Stream function representation of nonlinear ocean waves. Journal of Geophysical Research 70(18), 4561–4572.
* Derakhti, M., Kirby, JT., Banner, ML., Grilli, ST. & Thomson, J. (2020). A unified breaking onset criterion for surface gravity water waves in arbitrary depth. Journal of Geophysical Research: Oceans, 1–28. doi:10.1029/2019jc015886
* Donelan, M., Longuet-Higgins, M. & Turner, J. (1972). Periodicity in whitecaps. Nature 239(5373), 449–451.
* Duncan, JH. (1981). An experimental investigation of breaking waves produced by a towed hydrofoil. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 377(1770), 331–348. doi:10.1098/rspa.1981.0127
* Eldeberky, Y. & Battjes, JA. (1996). Spectral modeling of wave breaking: Application to Boussinesq equations. Journal of Geophysical Research 101, 1253–1264.
* Filipot, JF. & Ardhuin, F. (2012). A unified spectral parameterization for wave breaking: From the deep ocean to the surf zone. Journal of Geophysical Research: Oceans 117(4), 1–19. doi:10.1029/2011JC007784
* Filipot, JF., Ardhuin, F. & Babanin, AV. (2010). A unified deep-to-shallow water wave-breaking probability parameterization. Journal of Geophysical Research: Oceans 115(4), 1–15. doi:10.1029/2009JC005448
* Filipot, JF., Guimaraes, P., Leckler, F., Hortsmann, J., Carrasco, R., Leroy, E. & Le Dantec, N. (2019). La Jument lighthouse: a real-scale laboratory for the study of giant waves and their loading on marine structures. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 377(2155), 20190008. doi:10.1098/rsta.2019.0008
* Gemmrich, J., Zappa, CJ., Banner, ML. & Morison, RP. (2013). Wave breaking in developing and mature seas. Journal of Geophysical Research: Oceans 118(9), 4542–4552. doi:10.1002/jgrc.20334
* Guimarães, PV. (2018). Sea surface and energy dissipation. PhD thesis, Université de Bretagne Loire.
* Holland, KT., Holman, RA., Lippmann, TC., Stanley, J. & Plant, N. (1997). Practical use of video imagery in nearshore oceanographic field studies. IEEE Journal of Oceanic Engineering 22(1), 81–92.
* Holthuijsen, L. & Herbers, T. (1986). Statistics of breaking waves observed as whitecaps in the open sea. Journal of Physical Oceanography 16(2), 290–297.
* Hwang, PA., Wang, DW., Walsh, EJ., Krabill, WB. & Swift, RN. (2000). Airborne measurements of the wavenumber spectra of ocean surface waves. Part II: Directional distribution. Journal of Physical Oceanography 30(11), 2768–2787. doi:10.1175/1520-0485(2001)031<2768:amotws>2.0.co;2
* Janssen, TT. & Battjes, JA. (2007). A note on wave energy dissipation over steep beaches. Coastal Engineering 54(9), 711–716. doi:10.1016/j.coastaleng.2007.05.006
* Kjeldsen, SP., Vinje, TP., Myrhaug, DP. & Brdvig, PP. (1980). Kinematics of deep water breaking waves. Offshore Technology Conference.
* Kleiss, JM. & Melville, WK. (2011). The analysis of sea surface imagery for whitecap kinematics. Journal of Atmospheric and Oceanic Technology 28(2), 219–243. doi:10.1175/2010JTECHO744.1
* Kudryavtsev, V., Chapron, B. & Makin, V. (2014). Impact of wind waves on the air-sea fluxes: A coupled model. Journal of Geophysical Research: Oceans 119(2), 1217–1236.
* Le Méhauté, B. (1962). On non-saturated breakers and the wave run-up. Proceedings of the 8th International Conference on Coastal Engineering, 77–92. http://journals.tdl.org/icce/index.php/icce/article/viewArticle/2255
* Longuet-Higgins, MS. (1957). The statistical analysis of a random, moving surface. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 249(966), 321–387. doi:10.1098/rsta.1957.0002
* Melville, WK. & Matusov, P. (2002). Distribution of breaking waves at the ocean surface. Nature 417(6884), 58–63. doi:10.1038/417058a
* Miche, A. (1944). Mouvements ondulatoires de la mer en profondeur croissante ou décroissante. Première partie. Mouvements ondulatoires périodiques et cylindriques en profondeur constante. Annales des Ponts et Chaussées, Tome 114, 42–78.
* Perlin, M., Choi, W. & Tian, Z. (2013). Breaking waves in deep and intermediate waters. Annual Review of Fluid Mechanics 45(1), 115–145. doi:10.1146/annurev-fluid-011212-140721
* Phillips, OM. (1985). Spectral and statistical properties of the equilibrium range in wind-generated gravity waves. Journal of Fluid Mechanics 156, 505–531. doi:10.1017/S0022112085002221
* Reul, N. & Chapron, B. (2003). A model of sea-foam thickness distribution for passive microwave remote sensing applications. Journal of Geophysical Research: Oceans 108(C10), 19-1. doi:10.1029/2003jc001887
* Rice, SO. (1944). Mathematical analysis of random noise. The Bell System Technical Journal 23(3), 282–332. doi:10.1002/j.1538-7305.1944.tb00874.x
* Romero, L. (2019). Distribution of surface wave breaking fronts. Geophysical Research Letters 46(17-18), 10463–10474. doi:10.1029/2019GL083408
* Saket, A., Peirson, WL., Banner, ML., Barthelemy, X. & Allis, MJ. (2017). On the threshold for wave breaking of two-dimensional deep water wave groups in the absence and presence of wind. Journal of Fluid Mechanics 811, 642.
* Schwendeman, M., Thomson, J. & Gemmrich, JR. (2014). Wave breaking dissipation in a young wind sea. Journal of Physical Oceanography 44(1), 104–127. doi:10.1175/JPO-D-12-0237.1
* Sutherland, P. & Melville, WK. (2013). Field measurements and scaling of ocean surface wave-breaking statistics. Geophysical Research Letters 40(12), 3074–3079. doi:10.1002/grl.50584
* Thomson, J. (2012). Wave breaking dissipation observed with “SWIFT” drifters. Journal of Atmospheric and Oceanic Technology 29(12), 1866–1882. doi:10.1175/JTECH-D-12-00018.1
* Thomson, J. & Jessup, AT. (2009). A Fourier-based method for the distribution of breaking crests from video observations. Journal of Atmospheric and Oceanic Technology 26(8), 1663–1671. doi:10.1175/2009JTECHO622.1
* Thornton, EB. & Guza, RT. (1983). Transformation of wave height distribution. Journal of Geophysical Research 88(C10), 5925–5938.
* Varing, A., Filipot, JF., Grilli, S., Duarte, R., Roeber, V. & Yates, M. (2020). A new kinematic breaking onset criterion for spilling and plunging breaking waves in shallow water. Coastal Engineering, 1–24.
* Zieger, S., Babanin, AV., Rogers, WE. & Young, IR. (2015). Observation-based source terms in the third-generation wave model WAVEWATCH. Ocean Modelling 96, 2–25. doi:10.1016/j.ocemod.2015.07.014
# The fraud loss for selecting the model complexity in fraud detection
Simon Boge Brant and Ingrid Hobæk Haff
###### Abstract
In fraud detection applications, the investigator is typically limited to
controlling a restricted number $k$ of cases. The most efficient manner of
allocating the resources is then to try selecting the $k$ cases with the
highest probability of being fraudulent. The prediction model used for this
purpose must normally be regularized to avoid overfitting and consequently bad
prediction performance. A new loss function, denoted the fraud loss, is
proposed for selecting the model complexity via a tuning parameter. A
simulation study is performed to find the optimal settings for validation.
Further, the performance of the proposed procedure is compared to the most
relevant competing procedure, based on the area under the receiver operating
characteristic curve (AUC), in a set of simulations, as well as on a VAT fraud
dataset. In most cases, choosing the complexity of the model according to the fraud loss gave a performance better than, or comparable to, that obtained with the AUC, in terms of the fraud loss.
## 1 Introduction
Fraud detection has cropped up as a term in statistical research at least
since the early 1990s. In the excellent review article by Bolton and Hand [1],
the authors identify some of the characteristics that make statistical fraud
detection a distinct field of the statistical literature, and not just a
special case of binary classification, or some other well-understood problem
class. The goal of statistical fraud detection is to create a system that automatically selects a subset of all cases (insurance claims, financial transactions, etc.) that are the most interesting for further investigation. This is necessary because there is typically a much higher number of claims than one could realistically investigate manually, and
because fraud is typically quite rare. In simple terms, statistical fraud
detection can be thought of as binary classification, potentially with highly
imbalanced classes, depending on the type of fraud. By imbalanced classes we
mean, in this case, that there are very few occurrences of one of the two
possible outcomes. In other words, the vast majority of financial transactions
or insurance claims are legitimate, and fraud is, relatively speaking, a rare
occurrence.
In fraud detection applications, the investigator is often required to
efficiently allocate limited resources. This amounts to selecting a restricted
number of cases, those that are most likely to be fraudulent, or most worthy
of investigation. In order to achieve this, a model should be fitted to
recorded data of previously investigated cases, and then used to predict the
probability of fraud on new cases. The set of cases to be investigated should
subsequently be determined from the predicted probabilities. In this respect,
we have a precise notion of what a good, or bad, model is for this purpose,
namely one that lets us pick a certain number of cases, such that as many as
possible of these are actual cases of fraud. Given the application, we term
this notion fraud loss. However, we acknowledge that it has been studied in
other contexts under different names, notably by Clémençon and Vayatis [4],
where they refer to the problem as finding the best instances, or
classification with a mass constraint.
The problem of minimising fraud loss, or finding the best instances is
equivalent to maximising a measure known as the precision at k in the field of
information retrieval. This is discussed amongst others by Robertson and
Zaragoza [17], Joachims [13], and Boyd et al. [2]. A related problem is that
of local bipartite ranking, where the aim is to find the best pairwise ranking
of a subset of the data. In the language of Clémençon and Vayatis [4], the
focus is then not only on finding the best instances, but to rank the best
instances. The goal in this setting is not only to select the $k$ most
relevant instances, but also to rank them as well as possible. In the context
of fraud, this corresponds to selecting $k$ cases such that as many as
possible are positive, and that if they are ordered according to their
predicted probability of fraud, the greatest possible number of the selected
positive cases are ranked higher than the selected negative cases.
There are several suggestions for how to estimate a model in order to solve
these and related problems. Boyd et al. [2] propose an estimation criterion that should result in a model that finds the best instances. This criterion
involves minimising a hinge-type loss function over all pairs of observations
with opposite outcome. The number of optimisation problems to solve in the
estimation procedure is then twice the number of pairs. Rudin [18] proposes an
estimation framework that concentrates on the top ranked cases, called the
P-norm push. Her method is inspired by the RankBoost algorithm of Freund et
al. [10] for minimising the ranking loss, and is an extension of RankBoost to
a more general class of objective functions. Eban et al. [7] propose
estimation methods aiming to maximise different measures relevant to ranking,
such as the area under the precision-recall curve, and the recall at a fixed
precision. They do this by approximating the false positive, and true positive
rates in the objective function. Their aim is to construct the objective
function in such a way that the derivatives have more or less the same
complexity as those of the log-likelihood function of a logistic regression
model. This makes the method a lot more scalable than those of Rudin [18], and
Boyd et al. [2].
As opposed to the papers mentioned above, we will not focus on the estimation
of the model parameters directly, but rather on choosing the complexity of the
model, via a tuning parameter. More specifically, we will consider
maximisation of the likelihood function of the statistical model with
regularisation, using penalised methods, or boosting. Different values of the
regularisation parameters will then result in models of varying complexity.
The model is to be used for a very specific purpose, namely to make
predictions in order to select the $k$ most likely cases of fraud among a new
set of cases. Therefore, it seems reasonable to try to choose the
regularisation parameters that are optimal for this particular application. In
that context, we define a loss function, which, broadly speaking, is the
number of non-fraudulent cases among the $k$ selected.
An important question is how to estimate the out of sample value of the fraud
loss function, in order to select tuning parameters. There are a number of
different validation techniques that one may employ to mimic a new dataset,
using the training data. They all involve fitting models to different subsets
of the data, and evaluating the error on the data points that are left out. As
the application is somewhat special, standard settings and techniques may not
be adequate. For instance, predicted class labels will depend on an empirical
quantile of the predicted probabilities, which might require subsamples of a
certain size to be stable. Therefore, we will investigate what the best
strategies for out of sample validation are.
The paper is organized as follows. Section 2 defines the models that form the
basis of the problem, whereas Section 3 describes the actual problem. Further,
the approach for selecting the model complexity with fraud loss is presented
in Section 4. In Section 5, the properties of the proposed method are evaluated
in a simulation study, and the method is further tested on real data in
Section 6. Finally, Section 7 provides some concluding remarks.
## 2 Models
We want our models to produce predicted probabilities of fraud, or at least
predicted scores, with the same ordering as the probabilities. In order to
estimate probabilities of fraud, one should find a model that maximises the
likelihood of the data. In what follows, $Y$ denotes the binary outcome, which
is an indicator of whether a specific case is fraudulent, and $\mathbf{X}$
represents the $p$-dimensional random vector of covariates. We will in all
cases consider models where
$\displaystyle\log\left(\frac{\text{Pr}(Y=1|\mathbf{X}=\mathbf{x})}{\text{Pr}(Y=0|\mathbf{X}=\mathbf{x})}\right)=\eta(\mathbf{x}).$
The model $\eta(\mathbf{x})$ could be a linear function of the covariates, or
an additive model where each component is a regression tree. This
specification implies that the conditional probabilities of an event will take
the form
$p_{i}=\text{Pr}\left(Y_{i}=1|\mathbf{X}_{i}=\mathbf{x}_{i}\right)=\frac{\exp(\eta_{i})}{1+\exp(\eta_{i})},$
where $\eta_{i}$ is a shorthand for $\eta(\mathbf{x}_{i}).$ The
fraud indicators $Y_{i}$ are assumed to be conditionally independent, and
Bernoulli-distributed, with probabilities $p_{i}$, given
$\mathbf{X}_{i}=\mathbf{x}_{i}$. That leads to a binary regression model with
a logit link function, with the associated log-likelihood function
$\displaystyle ll$
$\displaystyle=\sum_{i=1}^{n}\log\left({p}_{i}^{y_{i}}(1-{p}_{i})^{1-y_{i}}\right)=\sum_{i=1}^{n}\left(y_{i}\log\left(\frac{{p}_{i}}{1-{p}_{i}}\right)+\log(1-{p}_{i})\right)=\sum_{i=1}^{n}\left(y_{i}\eta_{i}+\log(1-{p}_{i})\right).$
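As a concrete sketch, the log-likelihood above can be evaluated in a numerically stable way by working with $\eta_i$ directly, since $\log(1-p_i)=-\log(1+\exp(\eta_i))$:

```python
import numpy as np

def log_likelihood(eta, y):
    """Bernoulli log-likelihood with logit link.

    Uses ll = sum_i ( y_i * eta_i - log(1 + exp(eta_i)) ), which follows
    from log(1 - p_i) = -log(1 + exp(eta_i)); np.logaddexp avoids
    overflow for large |eta_i|.
    """
    eta = np.asarray(eta, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sum(y * eta - np.logaddexp(0.0, eta)))
```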
In many cases, for instance if the covariates are noisy, or high-dimensional,
the predictive accuracy of the model can be improved by shrinking the
predicted probabilities towards the common value
$\frac{1}{n}\sum_{i=1}^{n}y_{i}$, which is the estimate of the marginal
probability $\text{Pr}(Y=1)$. In the case of a parametric linear model, this
would correspond to shrinking the regression coefficients towards zero, and
for a tree model to a less complex model, in terms of the number of trees, the
depth of each tree, and possibly also the weights assigned to the leaf nodes.
One option, in the case of the linear model
$\eta(\mathbf{x})=\beta_{0}+\boldsymbol{\beta}^{t}\mathbf{x}$, is to only look
for solutions where $\boldsymbol{\beta}$ is at most some distance from the
origin, as measured by the Euclidean distance. This regularised estimator is
the maximiser of the penalised log-likelihood function
$ll(\beta_{0},\boldsymbol{\beta})-\lambda\sum_{j=1}^{p}\beta_{j}^{2}=\sum_{i=1}^{n}\left(y_{i}\eta_{i}+\log(1-{p}_{i})\right)-\lambda\sum_{j=1}^{p}\beta_{j}^{2},$
(1)
and the estimate is the solution to
$\displaystyle(\hat{\beta}_{0},\boldsymbol{\hat{\beta}})=\underset{\beta_{0},\;\boldsymbol{\beta}}{\text{argmax}}\sum_{i=1}^{n}\left(y_{i}\eta_{i}+\log(1-{p}_{i})\right)-\lambda\sum_{j=1}^{p}\beta_{j}^{2}.$
It is often referred to as ridge regression [12], or $L_{2}$-penalised
logistic regression, since it assigns a penalty to the squared $L_{2}$-norm of
the parameters. Similar estimators such as the lasso [19], or the elastic net
[20], can be constructed by considering different norms, $L_{1}$ in the case
of the lasso, and a convex combination of $L_{1},$ and squared $L_{2}$ norm,
in the case of the elastic net.
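The ridge estimator can be sketched with a plain gradient-ascent routine. This is a toy illustration, not the solver used in practice; the intercept is left unpenalised, matching the penalty sum over $j=1,\dots,p$, and the likelihood is averaged over $n$ here so that the step size need not depend on the sample size (an assumption of this sketch, which rescales $\lambda$ accordingly):

```python
import numpy as np

def fit_ridge_logistic(X, y, lam, n_iter=2000, lr=0.1):
    """Maximise (average log-likelihood) - lam * ||beta||^2 by gradient
    ascent. The intercept is unpenalised."""
    n, p = X.shape
    beta0, beta = 0.0, np.zeros(p)
    for _ in range(n_iter):
        eta = beta0 + X @ beta
        p_hat = 1.0 / (1.0 + np.exp(-eta))
        resid = y - p_hat               # d(ll)/d(eta) for each observation
        beta0 += lr * float(np.mean(resid))
        beta += lr * (X.T @ resid / n - 2.0 * lam * beta)
    return beta0, beta
```

Larger values of `lam` shrink the fitted coefficients towards zero, as the text describes.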
We will also consider additive models, that is models where
$\eta(\mathbf{x})=f_{0}+\sum_{j=1}^{M}f_{j}(\mathbf{x}).$ (2)
Here, $f_{0}$ is a constant function, and each $f_{j}(\mathbf{x})$ is a
regression tree, that is, a function that, for a partition
$\\{R_{t}\\}_{t=1}^{T}$ of $\mathbb{R}^{p},$ takes a constant value $c_{t}$
for all $\mathbf{x}$ in
each $R_{t},$ such that
$f_{j}(\mathbf{x})=\sum_{t=1}^{T}c_{t}\mathcal{I}(\mathbf{x}\in R_{t}).$
These models are typically fit by gradient boosting [11], an iterative
procedure where one starts with a constant, then adds one component at a time,
by maximising a local Taylor approximation of the likelihood around the
current model. Usually, this update is scaled down by a factor, i.e.,
multiplied by some number $\nu\in(0,1),$ in order to avoid stepping too far
and moving past an optimum of the likelihood function. Several highly efficient
implementations to fit such models exist, such as LightGBM [15], CatBoost [6],
and XGBoost [3]. The flexibility that such models offer, especially if the
individual trees are allowed to be complex, will easily lead to models that
capture too much of the variability in the training data. In order to avoid
this, and get a stable model, we will control the total number of components
in the model, and constrain the complexity of each tree.
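A minimal, illustrative boosting loop with depth-1 trees (stumps) conveys the idea. This is a sketch only: real implementations such as LightGBM or XGBoost use second-order approximations and far more efficient split search, whereas here each stump is fit to the gradient $y_i - p_i$ of the log-likelihood, with candidate splits restricted to the feature quartiles:

```python
import numpy as np

def boost_stumps(X, y, M, nu=0.1):
    """Toy gradient boosting for the logit model (2) with M stumps,
    each scaled by nu in (0, 1). Returns a function giving eta(x)."""
    n, p = X.shape
    f0 = np.log(y.mean() / (1 - y.mean()))      # constant start f_0
    eta = np.full(n, f0)
    stumps = []
    for _ in range(M):
        p_hat = 1 / (1 + np.exp(-eta))
        g = y - p_hat                           # gradient of the ll in eta
        best = None
        for j in range(p):                      # crude split search
            for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
                left = X[:, j] <= t
                if left.all() or (~left).all():
                    continue
                cl, cr = g[left].mean(), g[~left].mean()
                score = cl ** 2 * left.sum() + cr ** 2 * (~left).sum()
                if best is None or score > best[0]:
                    best = (score, j, t, cl, cr)
        _, j, t, cl, cr = best
        stumps.append((j, t, nu * cl, nu * cr))  # scaled-down update
        eta = eta + np.where(X[:, j] <= t, nu * cl, nu * cr)

    def predict_eta(Xnew):
        out = np.full(len(Xnew), f0)
        for j, t, cl, cr in stumps:
            out = out + np.where(Xnew[:, j] <= t, cl, cr)
        return out

    return predict_eta
```

Controlling `M` and the depth of each tree (here fixed at one) is exactly the complexity restriction discussed above.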
## 3 Problem description
An informal description of the fraud detection problem was given in the
introduction. Here, the problem will be defined and explained in more detail.
Formally, we can describe our version of a fraud detection problem as follows.
We have two datasets, a training set
$\mathcal{D}^{tr}=\\{(Y_{i},\mathbf{X}_{i})\\}_{i=1}^{n_{tr}},$
that consists of previously investigated cases, and a test set
$\mathcal{D}^{te}=\\{(Y_{i},\mathbf{X}_{i})\\}_{i=1}^{n_{te}},$
that consists of cases that are yet to be investigated. The $(p+1)$
dimensional random vectors $(Y_{i},\mathbf{X}_{i}),\,i=1,...,n_{tr}+n_{te}$
are assumed to be iid, and the main interest is the conditional distribution
of $Y_{i},$ given $\mathbf{X}_{i}.$ In what follows, $n$ will denote the
size of a sample, regardless of whether the sample in question is the test set
or the training set, unless it is unclear from the notation which one is
referred to. In some cases, if the data contains detailed information of past
cases, it could be useful to describe $Y_{i}$ as a categorical or an ordinal
random variable. However, we will concentrate on the binary case, so that each
$Y_{i}$ takes either the value $0$ or $1$. Since the goal of the investigation
is to uncover fraud, there should be as many actual cases of fraud as possible
among the ones selected for investigation. This amounts to producing $n$
predictions $\\{\widehat{Y}_{i}\\}_{i=1}^{n},$ such that $k$ of these have the
value $1$. Therefore, the minimiser of the loss function is a model that
minimises
$\displaystyle\mathcal{L}^{fraud}=\sum_{i=1}^{n}(1-Y_{i})\widehat{Y}_{i},\text{
s.t. }\sum_{i=1}^{n}\widehat{Y}_{i}=k.$
This is equivalent to minimising the classification error
$\displaystyle\mathcal{L}^{class}=\sum_{i=1}^{n}|Y_{i}-\widehat{Y}_{i}|=\sum_{i=1}^{n}(1-Y_{i})\widehat{Y}_{i}+\sum_{i=1}^{n}Y_{i}(1-\widehat{Y}_{i}),$
under the same constraint. Since $\sum_{i=1}^{n}\widehat{Y_{i}}=k,$
$\displaystyle\mathcal{L}^{fraud}=\sum_{i=1}^{n}(1-Y_{i})\widehat{Y}_{i}=k-\sum_{i=1}^{n}Y_{i}\widehat{Y_{i}},$
and
$\displaystyle\mathcal{L}^{class}$
$\displaystyle=\sum_{i=1}^{n}|Y_{i}-\widehat{Y}_{i}|=k+\sum_{i=1}^{n}Y_{i}-2\sum_{i=1}^{n}Y_{i}\widehat{Y_{i}},$
so the two must have the same minimiser.
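Combining the two displays gives the identity $\mathcal{L}^{class}=2\mathcal{L}^{fraud}+\sum_{i}Y_{i}-k$, which is easy to verify numerically:

```python
import numpy as np

def fraud_loss(y, y_hat):
    # number of non-fraudulent cases among the selected ones
    return int(np.sum((1 - y) * y_hat))

def class_error(y, y_hat):
    # number of misclassified cases
    return int(np.sum(np.abs(y - y_hat)))
```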
The idea is to minimise the expected value of $\mathcal{L}^{fraud}_{te}$,
which is the fraud loss for the test set. It would therefore be illuminating
to know what the minimiser is, i.e., what it is that one is attempting to
estimate. We can write
$\displaystyle\mathbb{E}\left(\mathcal{L}_{te}^{fraud}\right)$
$\displaystyle=\mathbb{E}\left(\sum_{i=1}^{n}(1-Y_{i})\widehat{Y}_{i}\right)=\mathbb{E}\left(\mathbb{E}\left(\sum_{i=1}^{n}(1-Y_{i})\widehat{Y}_{i}\Big{|}\mathbf{X}_{te}\right)\right)=\mathbb{E}\left(\sum_{i=1}^{n}(1-P_{i})\widehat{Y}_{i}\right),$
where $P_{i}=\text{Pr}(Y_{i}=1|\mathbf{X}_{i}).$ The minimiser of the above
expectation over all vectors $\widehat{\mathbf{Y}},$ having all elements equal
to zero, except exactly $k$ elements that are equal to one, is the vector
$\widehat{\mathbf{Y}}^{*}$ satisfying
$\widehat{Y}^{*}_{i}=\mathcal{I}(P_{i}\geq P_{(n-k+1)}),$
where $P_{(1)}\leq\ldots\leq P_{(n)}$ are the conditional fraud probabilities
$P_{i}$ for the test set, sorted in ascending order. This is an indicator of
whether $P_{i}$ is among the $k$ largest in the test sample. Thus, a quite
natural approach is to fit a model for the regression function
$p(\mathbf{x})=\text{Pr}(Y=1|\mathbf{X}=\mathbf{x}),$
resulting in the estimated probabilities $\widehat{p}_{i}$, and then use the
prediction
$\widehat{Y}_{i}=\mathcal{I}(\widehat{p}_{i}\geq\widehat{p}_{(n-k+1)}).$
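This selection rule can be written directly in code (note that ties at the threshold can select more than $k$ cases):

```python
import numpy as np

def select_top_k(p_hat, k):
    """Y_hat_i = I(p_hat_i >= p_hat_(n-k+1)): indicator that p_hat_i is
    among the k largest predicted probabilities. With ties at the
    threshold, more than k cases may be selected."""
    p_hat = np.asarray(p_hat, dtype=float)
    threshold = np.sort(p_hat)[len(p_hat) - k]
    return (p_hat >= threshold).astype(int)
```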
## 4 Selecting the model complexity
We want to find the model that minimises the fraud loss function. This is done
by fitting a sequence of models, either by maximising (1) for a sequence of
values of the penalty parameter $\lambda$, or by fitting the additive tree
model (2) via gradient boosting, where each model has a different number $M$
of components. Choosing the best of these corresponds to selecting a value of
$\lambda$, in the former case, or $M,$ in the latter case. However, the main
interest is not the best possible fit to the training data, but rather the
best possible performance on a new dataset. This may be determined by
estimating the relative out of sample performance of each model, via repeated
cross validation, or bootstrap validation. As mentioned in the introduction,
we will study how different validation schemes perform. Both bootstrap
validation and cross validation involve splitting the data in different
subsets, and repeatedly using some of the data for fitting, and the rest to
evaluate the fitted models. Since the model is evaluated on datasets whose
size $n_{eval}$ is not, in general, the same as the one $n_{te}$ of the test
set, we let the number of selected observations $k$ be the nearest integer to
$\tau n_{eval},$ where $\tau=\frac{k}{n_{te}}\in(0,1),$ is the proportion of
the cases in the test set we want to select.
We evaluate the classification error using cross validation with $L$ folds,
and $D$ repetitions. The data are split into different folds by randomly
assigning observations to each fold, thus creating $D$ sequences of $L$ non-
overlapping subsets of the integers $\\{1,2,\dots,n\\},$ which we denote as
$\\{A_{l}^{(d)}\\}_{l=1}^{L},d=1,\dots,D.$ The cross-validation statistic is
given by
$\displaystyle\widehat{\mathcal{L}}^{fraud}_{CV}=\frac{1}{D}\sum_{d=1}^{D}\frac{1}{L}\sum_{l=1}^{L}\frac{1}{|A_{l}^{(d)}|}\frac{\sum_{i\in
A_{l}^{(d)}}(1-y_{i})\hat{y}_{i}^{l,d}}{\sum_{i\in
A_{l}^{(d)}}\hat{y}_{i}^{l,d}}.$
An alternative to cross validation is bootstrap validation, where, for each
fold, one draws observations from the training set with replacement, usually as
many as the number of training observations. The left-out observations from
each fold are then used for validation. The probability that a specific
observation is left out of a bootstrap fold, when the bootstrap folds are of
the same size as the training set, is equal to
$\left(1-\frac{1}{n}\right)^{n},$ which when $n$ increases, converges to
$e^{-1}\approx 0.368.$ This means that the validation sets on average will
contain a little over a third of the total training data.
In a standard binary classification setting, one can compute a statistic that
mimics a leave one out cross validation error as
$\displaystyle\widehat{\text{Err}^{(1)}}=\frac{1}{n}\sum_{i=1}^{n}\frac{\sum_{b=1}^{B}\text{I}_{i}^{b}\text{L}(y_{i},\widehat{y}_{i}^{*b})}{\sum_{b=1}^{B}\text{I}_{i}^{b}},$
where $\text{I}_{i}^{b}$ is an indicator of whether the $i$-th observation is
not included in the $b$-th bootstrap sample, and $\widehat{y}_{i}^{*b}$ is the
prediction obtained for observation $i$ from the $b$-th model fit. The formula
above is from the paper by Efron and Tibshirani [9], an alternative
formulation of the statistic is
$\displaystyle\widehat{\text{Err}^{(1)}}=\frac{\sum_{i=1}^{n}\sum_{b=1}^{B}\text{I}_{i}^{b}\text{L}(y_{i},\hat{y}_{i}^{*b})}{\sum_{i=1}^{n}\sum_{b=1}^{B}\text{I}_{i}^{b}},$
found in the paper by Efron [8]. According to Efron and Tibshirani [9], these
will be close for larger values of $B$. The bootstrap statistic we will use is
based on the latter, and is given by
$\displaystyle\widehat{\mathcal{L}}^{fraud}_{BOOT}=\sum_{i=1}^{n}\frac{\sum_{b=1}^{B}\text{I}_{i}^{b}(1-y_{i})\hat{y}_{i}^{*b}}{\sum_{b=1}^{B}\sum_{i=1}^{n}\text{I}_{i}^{b}\hat{y}_{i}^{*b}}.$
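The corresponding bootstrap computation, again as a sketch with a placeholder `fit_predict`: aggregating the numerator and denominator counts over all folds before dividing matches the displayed statistic.

```python
import numpy as np

def bootstrap_fraud_loss(X, y, fit_predict, tau, B=9, seed=0):
    """Bootstrap validation statistic for the fraud loss.

    For each of B folds, n observations are drawn with replacement; the
    model is fit to the resample and evaluated on the left-out cases."""
    rng = np.random.default_rng(seed)
    n = len(y)
    num, den = 0.0, 0.0
    for _ in range(B):
        boot = rng.integers(0, n, size=n)            # bootstrap sample
        out = np.setdiff1d(np.arange(n), boot)       # left-out observations
        p_out = fit_predict(X[boot], y[boot], X[out])
        k = max(1, round(tau * len(out)))
        thr = np.sort(p_out)[len(p_out) - k]
        y_hat = (p_out >= thr).astype(int)
        num += np.sum((1 - y[out]) * y_hat)
        den += np.sum(y_hat)
    return num / den
```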
## 5 Simulation study
In order to study how well the method for selecting the model complexity based
on the fraud loss works, we will perform a simulation study. First, we examine
and compare different setups of the selection approach. Subsequently, we make
a comparison to the most relevant alternative approach, based on the AUC.
### 5.1 Generating data
We want the synthetic datasets to possess many of the same characteristics as
real datasets from fraud detection applications, such as the dataset from the
Norwegian Tax Administration (Skatteetaten), that we study in Section 6. The
common traits that we want to replicate, at least to some degree, are
correlated covariates, with margins of different types, some continuous, some
discrete, and an imbalance in the marginal distribution of the outcome.
In order to draw the covariates, we follow a procedure that can be described
in a few stages. First, we draw a sample of a random vector, from a
multivariate distribution with uniform margins, i.e., a copula [16]. After
simulating observations from the copula, each of the margins is transformed to
one of the distributions listed in Table 1. The copula we will use is the
$t_{2,\mathbf{R}}$-copula. Data are then simulated by drawing from a
multivariate $t_{2,\mathbf{R}}$-distribution, i.e., a standard
$t$-distribution with a $p\times p$ correlation matrix $\mathbf{R}$ and $2$
degrees of freedom. Then, the margins are transformed via the (univariate)
$t_{2}$-quantile function, which makes the margins uniform, but still
dependent.
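The two simulation stages, drawing from the $t_{2,\mathbf{R}}$-copula and transforming the margins, can be sketched as below. The $t_{2}$ distribution conveniently has the closed-form CDF $F(x)=\tfrac{1}{2}+x/(2\sqrt{2+x^{2}})$; the Bernoulli$(0.2)$ and Gamma$(1,3)$ margins shown are two examples from Table 1 (a Gamma with shape $1$ is an exponential, so its quantile function is available in closed form):

```python
import numpy as np

def t2_copula_sample(R, n, seed=None):
    """Draw n samples from the t-copula with 2 df and correlation R,
    using F(x) = 1/2 + x / (2 * sqrt(2 + x^2)) to make margins uniform."""
    rng = np.random.default_rng(seed)
    p = R.shape[0]
    z = rng.multivariate_normal(np.zeros(p), R, size=n)
    chi2 = rng.chisquare(2, size=n)
    t = z * np.sqrt(2 / chi2)[:, None]          # multivariate t_2 draw
    return 0.5 + t / (2 * np.sqrt(2 + t ** 2))  # uniform, but dependent

# Transform margins to target distributions, e.g. from Table 1:
R = np.array([[1.0, 0.6], [0.6, 1.0]])
u = t2_copula_sample(R, 1000, seed=1)
x1 = (u[:, 0] > 0.8).astype(int)   # Bernoulli(0.2)
x2 = -np.log(1 - u[:, 1]) / 3.0    # Gamma(1, 3), i.e. Exp(rate 3)
```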
We specify the correlation matrix $\mathbf{R}$ of the multivariate
$t$-distribution by drawing a matrix from a uniform distribution over all
positive definite correlation matrices, using the algorithm described by Joe
[14]. When looking at comparisons across a number of different datasets, we
always keep the correlation matrix $\mathbf{R}$ fixed. Setting the correlation
matrix by simulating it via an algorithm is just a pragmatic way of specifying
a large correlation matrix, while ensuring that it is positive definite.
As for the correlation matrix, the distribution for each margin is also drawn
randomly from a list, but these distributions are also kept fixed, whenever we
consider comparisons across different datasets.
Table 1: List of the 17 different marginal distributions used to simulate covariates.

Family | Parameter | Value
---|---|---
Bernoulli | $p$ | 0.2
Bernoulli | $p$ | 0.4
Bernoulli | $p$ | 0.6
Bernoulli | $p$ | 0.8
Beta | $(\alpha,\beta)$ | $(1,2)$
Beta | $(\alpha,\beta)$ | $(2,1)$
Beta | $(\alpha,\beta)$ | $(2,2)$
Gamma | $(\alpha,\beta)$ | $(1,3)$
Gamma | $(\alpha,\beta)$ | $(3,1)$
Gamma | $(\alpha,\beta)$ | $(3,3)$
Normal | $(\mu,\sigma)$ | $(0,1)$
Student’s t | $\nu$ | $3$
Student’s t | $\nu$ | $4$
Student’s t | $\nu$ | $6$
Poisson | $\lambda$ | 1
Poisson | $\lambda$ | 3
Poisson | $\lambda$ | 5
Given the simulated covariates, probabilities $p(\mathbf{x})$ are computed,
and the binary outcomes $y_{i},\,i=1,\dots,n,$ are simulated from
$\text{Bernoulli}\left(p(\mathbf{x}_{i})\right)$ distributions. The
probabilities follow the form
$p(\mathbf{x})=\frac{\exp(\beta_{0}+f(\mathbf{x}))}{1+\exp(\beta_{0}+f(\mathbf{x}))},$
where the model is either linear, or an additive model (see Section 2). Unless
otherwise stated, we simulate datasets with $p=100$ covariates. When
simulating from a model with a linear predictor, we let $15$ of the $100$
covariate effects be non-zero, and let these take values in the range
$(-0.77,0.62),$ with an average absolute value of $0.34.$
In order to specify a tree model, we draw a covariate matrix
$\mathbf{\tilde{X}}$, and a response $\tilde{Y}$ to be used only for
constructing the trees. The response is given by
$\tilde{Y}=BE_{1}-(1-B)E_{2},$ where $B$ is a $\text{Bernoulli}(0.5)$
variable, and $E_{1},$ $E_{2}$ are exponentially distributed with parameters
$\alpha_{1}=0.2$ and $\alpha_{2}=0.1$, respectively. We then fit an additive
tree model to this, based on $15$ of the $100$ covariates. The resulting model
is used to generate datasets, keeping the model fixed across the different
datasets.
The parameter $\beta_{0}$ can change from dataset to dataset, as the average
probability $p_{0}$ should be kept fixed. To achieve this, we solve the
equation
$\frac{1}{n}\sum_{i=1}^{n}p(\mathbf{x}_{i})=p_{0}$
numerically, given the model $f(\mathbf{x})$ and the covariate values of all
the observations in the dataset.
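Since the mean probability is strictly increasing in $\beta_{0}$, the equation can be solved by simple bisection. A sketch, assuming the solution lies inside the chosen bracket:

```python
import numpy as np

def solve_intercept(f_x, p0, lo=-20.0, hi=20.0, tol=1e-10):
    """Find beta_0 such that mean of sigmoid(beta_0 + f(x_i)) equals p0.

    The mean probability is monotone in beta_0, so bisection applies."""
    def mean_p(b0):
        return float(np.mean(1.0 / (1.0 + np.exp(-(b0 + f_x)))))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_p(mid) < p0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```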
### 5.2 Data drawn from a logistic regression model
We first consider penalised logistic regression. We draw two datasets from a
logistic regression model, one for estimation and one for testing, by the
method described in Section 5.1, both with $n=1000$ observations. The
coefficient $\beta_{0}$ is set so that the sample mean of $p(\mathbf{x})$ is
$0.2$, which is quite high compared to a typical fraud detection setting, but
is close to the average outcome in the dataset that we will discuss in Section
6. In order to select the value of the regularisation parameter $\lambda$, we
compare bootstrap validation, and repeated cross validation with $10,5,3,$ or
$2$ folds. We also try drawing the folds stratified on the outcome, so that
each fold has the same proportion of positive outcomes as the training sample.
All the experiments are repeated for $S=100$ simulated datasets.
In order to make the comparison fair, we balance the number of bootstrap folds
and the number of repetitions for the cross validation, so that the
computational complexity is roughly the same, assuming that the computational
complexity of fitting a model is proportional to the number of observations.
All validation procedures are adjusted so that they are comparable to 10-fold
cross validation without repetition. Since this involves fitting models to
$10$ different datasets of $0.9$ times the size of the total, we use $9$ folds
in the bootstrap validation, and $2,$ $4,$ and $9$ repetitions of $5$-, $3$-,
and $2$-fold cross validation, respectively. We also do the same, with double
the number of repetitions across 10-, 5-, 3- and 2- fold cross validation, and
with $18$ bootstrap folds.
Table 2: Relative fraud loss averaged over all values of $k$, for data simulated from the linear model with $p=100$, $n=1000$, and fitted with a linear model.

Notes | 10-fold cv | 5-fold cv | 3-fold cv | 2-fold cv | Bootstrap
---|---|---|---|---|---
| 1.0764 | 1.0766 | 1.0753 | 1.0725 | 1.0763
stratified | 1.0745 | 1.0767 | 1.0730 | 1.0743 | 1.0768
2x repeat | 1.0385 | 1.0366 | 1.0364 | 1.0366 | 1.0389
2x repeat, stratified | 1.0396 | 1.0384 | 1.0355 | 1.0354 | 1.0366
We define relative fraud loss, compared to the minimum in each simulation
$s=1,2,\dots,S,$ over the alternatives for the tuning parameter as
$\displaystyle RFL(k)$
$\displaystyle=\frac{\sum_{s=1}^{S}\sum_{i=1}^{n}\widehat{y_{i,s}}^{{\lambda_{sel}}}(k)(1-y_{i,s})/\sum_{i=1}^{n}\widehat{y_{i,s}}^{{\lambda_{sel}}}(k)}{\sum_{s=1}^{S}\sum_{i=1}^{n}\widehat{y_{i,s}}^{{\lambda_{opt}}}(k)(1-y_{i,s})/\sum_{i=1}^{n}\widehat{y_{i,s}}^{\lambda_{opt}}(k)},$
(3)
for a given value of $k$. Here, $\widehat{y_{i,s}}^{{\lambda_{sel}}}(k)$ is
the prediction of $Y_{i,s}$ from the model resulting from a particular choice
of tuning parameter, for a given $k$, and
$\widehat{y_{i,s}}^{\lambda_{opt}}(k)$ is the corresponding prediction from
the model that is optimal, over all values of the tuning parameter, for that
simulation $s$. This is computed for $k=10,20,\dots,980,990,$ and the average
$\frac{1}{99}\sum_{j=1}^{99}RFL(k_{j})$ over all values $k_{j}$ of $k$ is
reported in Table 2. As expected, doubling the number of repetitions leads to
better performance. Looking at the different validation schemes, cross
validation with 2 or 3 folds tends to be the better option, and there appears
to be a slight advantage to stratification, but this is not conclusive. Based
only on these results, we would therefore suggest using 2-fold cross
validation, and stratifying on $y$, at least if there are so few observations
with $y=1$ that there is a risk of getting folds where all observations have a
negative outcome.
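A sketch of how the relative fraud loss (3) is computed over $S$ simulations, with hypothetical prediction vectors for the selected and the per-simulation optimal tuning parameter:

```python
import numpy as np

def relative_fraud_loss(sel_preds, opt_preds, outcomes):
    """Relative fraud loss (3): ratio of false-discovery proportions
    summed over simulations, for the selected tuning parameter versus
    the per-simulation optimum. Each list element is the vector for one
    simulation s; prediction vectors contain exactly k ones."""
    num = sum(float(np.sum(yh * (1 - ys)) / np.sum(yh))
              for yh, ys in zip(sel_preds, outcomes))
    den = sum(float(np.sum(yh * (1 - ys)) / np.sum(yh))
              for yh, ys in zip(opt_preds, outcomes))
    return num / den
```

A value above one quantifies how much worse the selected tuning parameter is than the oracle choice.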
Figure 1: Plot of the log relative fraud loss as a function of the fraction
$\tau=k/n$ of the observations that are selected, for a selection of the
methods for setting the tuning parameter. The logistic regression model was
used both to simulate the data, and to make predictions.
In Figure 1, the logarithm of the relative fraud loss is plotted as a function
of the proportion $k/n$ of cases that are selected, for a selection of the
validation procedures. From these, we see that there is not a very large
difference between the different types. Further, it is hardest to select the
best cases for lower values of $k/n$, as expected.
We repeat this experiment for the same datasets, but instead of estimating the
probabilities using a penalised logistic regression model, we use an additive
tree model fitted by boosting. The penalty parameter to be selected is now
the total number $M$ of components of the additive tree model. The average
relative fraud loss for the different ways of choosing the number of
iterations is reported in Table 3, and the logarithm of the relative fraud
loss is plotted as a function of $k/n$ in Figure 2. Compared to the ridge
regression fits, the relative fraud loss is now greater, meaning that the
fraud loss of the selected model differs more from the minimum for each value
of $k$. The best alternative now seems to be the bootstrap variants, with
stratified 3-fold cross validation being the closest contender.
Table 3: Relative fraud loss averaged over all values of $k$, for data simulated from the linear model with $p=100$, $n=1000$, and fitted with an additive tree model.

Notes | 10-fold cv | 5-fold cv | 3-fold cv | 2-fold cv | Bootstrap
---|---|---|---|---|---
| 1.2060 | 1.1779 | 1.1617 | 1.1424 | 1.1578
stratified | 1.2123 | 1.1716 | 1.1534 | 1.1403 | 1.1496
2x repeat | 1.0586 | 1.0507 | 1.0526 | 1.0541 | 1.0464
2x repeat, stratified | 1.0552 | 1.0523 | 1.0499 | 1.0515 | 1.0472
Figure 2: Plot of the relative fraud loss as a function of the fraction
$\tau=k/n$ of the observations that are selected, for a selection of the
methods for setting the tuning parameter. The logistic regression model was
used to simulate the data, and the additive tree model to make predictions.
### 5.3 Data drawn from an additive tree model
Next, we do a similar experiment, the difference being the model that the data
are drawn from. Instead of a logistic regression model, we draw from a model
where the linear predictor is replaced with an additive tree model, as
previously outlined. We first estimate models using penalised logistic
regression.
Table 4: Relative fraud loss averaged over all values of $k$, for data simulated from the additive tree model with $p=100$, $n=1000$, and fitted with the linear model.

Notes | 10-fold cv | 5-fold cv | 3-fold cv | 2-fold cv | Bootstrap
---|---|---|---|---|---
| 1.0190 | 1.0195 | 1.0193 | 1.0189 | 1.0194
stratified | 1.0199 | 1.0195 | 1.0191 | 1.0190 | 1.0193
2x repeat | 1.0197 | 1.0193 | 1.0193 | 1.0190 | 1.0199
2x repeat, stratified | 1.0196 | 1.0195 | 1.0192 | 1.0193 | 1.0196
The results of this are summarised in Table 4.
Table 5: Relative fraud loss averaged over all values of $k$, for data simulated from the additive tree model with $p=100$, $n=1000$, and fitted with the additive tree model.

Notes | 10-fold cv | 5-fold cv | 3-fold cv | 2-fold cv | Bootstrap
---|---|---|---|---|---
| 1.0463 | 1.0450 | 1.0452 | 1.0454 | 1.0450
stratified | 1.0469 | 1.0474 | 1.0457 | 1.0451 | 1.0454
2x repeat | 1.0467 | 1.0464 | 1.0456 | 1.0435 | 1.0451
2x repeat, stratified | 1.0473 | 1.0453 | 1.0464 | 1.0450 | 1.0452
We also estimate additive tree models, the results for which are summarised in
Table 5. In the first case, 2-fold cross-validation gave the best results,
with stratified 3-fold cross validation in second place. Curiously, the error
does not seem to be smaller when doubling the number of repetitions, which
could suggest that it is easier to select the best model for the data
simulated from a more complex model. For the latter case, 2-fold cross
validation seems to be the best option, based on the average relative fraud
loss. It could be argued that real data are most likely to follow a model that
is more complex than a logistic regression model with a linear predictor, and
therefore that repeated 2-fold cross validation is the more reliable option,
overall.
### 5.4 Comparison with an alternative approach
Next, we want to compare our approach to other relevant methods. One such
method is to set the penalty parameter using the AUC as a criterion. This is a
popular measure for assessing the performance of a binary regression model in
terms of discrimination. It is also related to ranking, and is therefore a
natural alternative to fraud loss. In fact, the AUC can on a population level
be seen to be equivalent to the probability that an observation where $Y=0$
will be given a lower probability than one where $Y=1$. Hence, if one model
has a higher AUC than another, then the aforementioned probability will be
highest for the model with the highest AUC [5]. Symbolically, this can be
written as
$\displaystyle\text{AUC}\left(\hat{p}\right)$
$\displaystyle=P\left(\hat{p}(\mathbf{x}_{i})\geq\hat{p}(\mathbf{x}_{j})|Y_{i}=1,Y_{j}=0\right),$
which may be estimated by the Wilcoxon type statistic
$\displaystyle\widehat{\text{AUC}}=\frac{\sum_{i=1}^{n}\sum_{j=1}^{n}(1-y_{i})y_{j}\mathcal{I}\left(\hat{p}(\mathbf{x}_{j})>\hat{p}(\mathbf{x}_{i})\right)}{\sum_{i=1}^{n}y_{i}\sum_{i=1}^{n}(1-y_{i})}.$
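The Wilcoxon-type statistic amounts to counting, over all (positive, negative) pairs, how often the positive case receives the strictly higher predicted probability:

```python
import numpy as np

def auc_wilcoxon(p_hat, y):
    """Wilcoxon-type AUC estimate: the fraction of (positive, negative)
    pairs in which the positive case receives the strictly higher
    predicted probability (ties count as zero, as in the statistic)."""
    p_hat = np.asarray(p_hat, dtype=float)
    y = np.asarray(y, dtype=int)
    pos = p_hat[y == 1]
    neg = p_hat[y == 0]
    wins = np.sum(pos[:, None] > neg[None, :])   # pairwise comparisons
    return float(wins) / (len(pos) * len(neg))
```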
Table 6: Relative fraud loss averaged over $k$ for the methods based on the AUC and the fraud loss, for data simulated with $p=100$, $n=1000$.

Simulation model | Logistic | Logistic | Additive trees | Additive trees
---|---|---|---|---
Estimation model | Logistic | Additive trees | Logistic | Additive trees
Average over all $k=10,20,\dots,990$: | | |
auc | 1.0335 | 1.0763 | 1.0186 | 1.0482
fraud | 1.0371 | 1.0741 | 1.0195 | 1.0442
auc, 2x repeat | 1.0337 | 1.0730 | 1.0183 | 1.0470
fraud, 2x repeat | 1.0350 | 1.0721 | 1.0186 | 1.0454
Average over $k=160,170,\dots,240,250$: | | |
auc | 1.0349 | 1.0626 | 1.0228 | 1.0583
fraud | 1.0356 | 1.0591 | 1.0232 | 1.0531
auc, 2x repeat | 1.0338 | 1.0610 | 1.0228 | 1.0570
fraud, 2x repeat | 1.0359 | 1.0601 | 1.0225 | 1.0545
The log-likelihood function, and likelihood-based measures such as the Akaike
information criterion (AIC), are also commonly used to set tuning parameters,
but we will disregard these here, as they are not particularly relevant for
the problem: we are not interested in finding the model that gives the best
fit to all the data. Further, the log-likelihood function often explodes
numerically when some of the probabilities become very close to $1$. In our
experiments, this happened often during cross validation when we evaluated the
log-likelihood function for the data that were not used for estimation.
The first simulations are based on the same models as previously discussed,
using 2-fold cross validation with and without repetition in both methods.
Table 6 reports the resulting average fraud loss. The average over a smaller
range of values, $k/n=0.16,0.17,\dots,0.25,$ which is realistic in practice
for the fraud setting, is also shown in the table. When looking at the average
over $k/n$ from $0.01$ to $0.99$, it seems that it is advantageous to select
the model complexity via the cross validated fraud loss when estimating an
additive tree model, whatever the data generating model is. When we only
consider an aggregate over values of $k/n$ from $0.16$ to $0.25$, we see the
same, and in addition there also seems to be a slight benefit to using the
fraud loss, when fitting penalised logistic regression models to data
simulated from an additive tree model.
In Figure 3, the average difference in fraud loss between the two approaches,
applied to the data simulated from logistic regression models, is plotted as a
function of $k/n.$ Neither method appears to be consistently better across the
entire grid over $k/n$. However, there is a
tendency, especially when $k/n$ is larger than roughly $0.2$, that the fraud
loss works best when estimating an additive tree model, but not when
estimating a penalised logistic regression model. Figure 4 is a similar plot,
but where the data are simulated from the additive tree model. It seems, in
this case, that the fraud loss is more favourable when a penalised logistic
regression model is estimated, compared to when the data were simulated from
the logistic regression model.
Figure 3: Plot of the difference in fraud loss when selecting the model
complexity according to the AUC and the fraud loss, respectively. The data are
simulated from the logistic regression model with $n=1000$ and $p=100$.
Figure 4: Plot of the difference in fraud loss when selecting the model
complexity according to the AUC and the fraud loss, respectively. The data are
simulated from the additive tree model with $n=1000$ and $p=100$.
We repeat the experiment, but we now expand the datasets so that the number of
covariates is $p=n=1000.$ These are simulated in the same way as for $p=100.$
We again simulate data both from a logistic regression model and from an
additive tree model, and scale the number of covariates that the response
depends on with the dimension of the covariate matrix, so that in both cases
the response depends on $150$ covariates. For the logistic regression model, the
$150$ non-zero effects take values in the interval $(-0.67,0.85),$ and the
average absolute value of these is $0.198.$ A comparison of the two approaches
for selecting the model complexity, in all four combinations of data-
generating and estimated model, is summarised in terms of the average relative
fraud loss in Table 7. The average relative fraud loss over the whole range of
$k/n$ is now lowest when using the approach based on the fraud loss, except
when both the data-generating and the estimated model are logistic. When we
only look at the average over $k/n=0.16,\dots,0.25,$ it seems to be beneficial
to select the model complexity with the fraud loss in all cases, although the
difference between the results from the two methods is quite small when
estimating a penalised regression model.
Figure 5 is a plot corresponding to Figure 3, but for $p=1000.$ The fraud loss
is now lowest overall when the tuning parameter is selected with the cross
validated fraud loss for the boosted models, but not for the penalised
logistic regression models. When the data are simulated from an additive tree
model, as shown in Figure 6, there seems to be an advantage to using the fraud
loss when estimating penalised regression models, at least for $k/n$ up to
$0.2$. Fraud loss also seems to be best for the boosted tree models, perhaps
except when $k/n<0.2$, possibly due to a high variance. These results could
indicate that it is better to choose the penalty parameter by cross validating
the fraud loss, rather than the AUC, when the model is misspecified.
Table 7: Relative fraud loss averaged over $k$ for the methods based on the AUC and the fraud loss, for data simulated with $p=1000$, $n=1000$.
Simulation model | Logistic | Logistic | Additive trees | Additive trees
---|---|---|---|---
Estimation model | Logistic | Additive trees | Logistic | Additive trees
Average over all $k=10,20,\dots 990$: | | |
auc | 1.0338 | 1.0925 | 1.0232 | 1.0572
fraud | 1.0352 | 1.0919 | 1.0218 | 1.0537
auc, 2x repeat | 1.0337 | 1.0991 | 1.0239 | 1.0559
fraud, 2x repeat | 1.0351 | 1.0910 | 1.0216 | 1.0539
Average over $k=160,170,\dots 240,250$: | | |
auc | 1.0394 | 1.0875 | 1.0287 | 1.0729
fraud | 1.0375 | 1.0824 | 1.0316 | 1.0683
auc, 2x repeat | 1.0389 | 1.0920 | 1.0301 | 1.0705
fraud, 2x repeat | 1.0387 | 1.0847 | 1.0294 | 1.0693
Figure 5: Plot of the difference in fraud loss when selecting the model complexity according to the AUC and the fraud loss, respectively. The data are simulated from the logistic regression model with $n=1000$ and $p=1000$.
Figure 6: Plot of the difference in fraud loss when selecting the model complexity according to the AUC and the fraud loss, respectively. The data are simulated from the additive tree model with $n=1000$ and $p=1000$.
Table 8: Relative fraud loss averaged over $k$ for the methods based on the AUC and the fraud loss, for data simulated with $p=4000$, $n=1000$.
Simulation model | Logistic | Logistic | Additive trees | Additive trees
---|---|---|---|---
Estimation model | Logistic | Additive trees | Logistic | Additive trees
Average over all $k=10,20,\dots 990$: | | |
auc | 1.0415 | 1.0526 | 1.0229 | 1.0453
fraud | 1.0400 | 1.0577 | 1.0182 | 1.0457
auc, 2x repeat | 1.0412 | 1.0520 | 1.0229 | 1.0443
fraud, 2x repeat | 1.0388 | 1.0547 | 1.0187 | 1.0438
Average over $k=160,170,\dots 240,250$: | | |
auc | 1.0538 | 1.0605 | 1.0187 | 1.0507
fraud | 1.0498 | 1.0655 | 1.0223 | 1.0585
auc, 2x repeat | 1.0532 | 1.0567 | 1.0189 | 1.0491
fraud, 2x repeat | 1.0468 | 1.0591 | 1.0239 | 1.0530
Figure 7: Plot of the difference in fraud loss when selecting the model
complexity according to the AUC and the fraud loss, respectively. The data are
simulated from the logistic regression model with $n=1000$ and $p=4000$.
Figure 8: Plot of the difference in fraud loss when selecting the model
complexity according to the AUC and the fraud loss, respectively. The data are
simulated from the additive tree model with $n=1000$ and $p=4000$.
We repeat the experiment again, now in a context where $p>n$. Specifically, we
let $p=4000,$ while we keep the number of observations at $n=1000.$ With the
exception of the correlation matrix $R$, the model parameters are scaled up in
the same way as when the number of covariates was changed from $100$ to
$1000$. The correlation matrix is now constructed from one
$(1000)\times(1000)$ correlation matrix that is stacked diagonally, giving a
block-diagonal correlation matrix where $4\times{10}^{6}$ out of the
$16\times{10}^{6}$ entries are non-zero. When the data are simulated from the
logistic regression model, we see in Figure 7 that picking the tuning
parameter for a penalised logistic regression model using the fraud loss
generally works better than using the AUC, at least when $k/n<0.4,$ and that
for the boosted trees it seems to be consistently worse, or at least not
better. For the data drawn from an additive tree model (Figure 8), the fraud
loss is better for
values of $k/n$ up to about $0.12$ for the penalised logistic regression
models, with more or less no difference for $k/n>0.4.$ For the boosted trees,
the fraud loss performs somewhat worse for values of $k/n$ up to about $0.3.$
All of the simulations are summed up in Table 8, demonstrating that the fraud
loss on average performs better than the AUC for the penalised logistic
regression models, regardless of whether the data are simulated from a
logistic regression model or an additive tree model when we look at the entire
range of $k$. If we only look at $k/n=0.16,0.17,\dots,0.25$, the fraud loss
performs better only for penalised logistic regression when the data are drawn
from a logistic regression model.
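The block-diagonal construction of the correlation matrix can be sketched as follows. The paper builds the base correlation matrix by a method described earlier; here a simple AR(1)-type block is used as a hypothetical stand-in, and the block size is reduced from $1000$ to $10$ to keep the sketch light:

```python
import numpy as np
from scipy.linalg import block_diag

def ar1_corr(d, rho=0.5):
    """Stand-in correlation matrix with entries rho^|i-j| (illustrative only)."""
    idx = np.arange(d)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# The paper stacks one 1000x1000 matrix four times to obtain a 4000x4000
# block-diagonal R, so that 4e6 of the 16e6 entries are non-zero.
blocks = [ar1_corr(10)] * 4
R = block_diag(*blocks)
```

With the full-size blocks, exactly one quarter of the entries are non-zero, matching the count stated in the text.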
All in all, selecting the model complexity by cross validating the fraud loss
seems to work quite well, compared to using the AUC, especially for values of
$k/n$ close to the marginal probability of $Y$. This is good, as one could
argue that these are the most interesting values in most fraud detection
applications. The fraud loss does not outperform the AUC in all cases, such as
for the boosted tree models for $p>n$, or the penalised regression models when
the data are simulated from a logistic regression model and $p<n$. A possible
explanation for the first case could be that, when estimating the
probabilities is difficult due to $p>n$ and the data-generating model is
complex, trying to adapt the model locally to $k/n$, as the method using the
fraud loss does, could introduce instability. For the latter case, it could be that the
estimation problem is so simple that one more or less recovers the data
generating model, and that there might not be too much to gain from adapting
the model to a specific $k/n$.
## 6 Illustration on VAT fraud data
In this section, we will consider a dataset of controlled cases of potential
VAT fraud from the Norwegian Tax Administration (Skatteetaten). The data are
sensitive. Therefore, they have been somewhat manipulated in order to be
anonymised, and very little meta-information, such as what the covariates
represent, is included. The data were collected in 39 different 2-month
periods, and include in total around $n=50000$ observations of $p=555$
covariates, where $20$ of the covariates are binary, $12$ are categorical, and
$523$ are numerical. We recode the categorical covariates as binary variables,
which then effectively gives us $p=616$ covariates. There are some missing
values in the dataset. In order to be able to fit penalised logistic
regression models, we impute the median for the missing values of the
numerical variables. For the categorical variables, we instead recode them
with an extra level that corresponds to a missing observation.
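The two imputation rules can be sketched with pandas as follows. The column names are hypothetical, since the actual covariates are anonymised:

```python
import pandas as pd

# Hypothetical example frame; the real covariates are anonymised.
df = pd.DataFrame({
    "x1": [100.0, None, 250.0],   # numerical covariate
    "c1": ["a", None, "b"],       # categorical covariate
})

# Numerical variables: impute the column median for missing values.
num_cols = df.select_dtypes("number").columns
df[num_cols] = df[num_cols].fillna(df[num_cols].median())

# Categorical variables: add an explicit "missing" level,
# then recode as binary (one-hot) variables.
cat_cols = df.select_dtypes("object").columns
df[cat_cols] = df[cat_cols].fillna("missing")
df = pd.get_dummies(df, columns=list(cat_cols))
```

After recoding, each categorical variable contributes one binary column per level, which is how the $12$ categorical covariates expand the feature count from $p=555$ to $p=616$.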
This dataset serves as an example of one particular case where selecting the
top cases is relevant. As an illustration, we take the data from the six
2-month periods leading up to, but not including, the 12th 2-month period as a
training set, a total of $7843$ observations, out of which $1648$ are recorded
as fraudulent. We use repeated 2-fold cross validation, with $9$ repetitions,
to set the penalty parameter of a penalised logistic regression model, and the
number of components in a boosted tree model. We set these parameters by cross
validating the AUC and the fraud loss for $k/n=0.01,0.02,\dots,0.98,0.99.$ We
then evaluate the models chosen by cross validation on the $1072$ observations
collected in the 12th period, of which $216$ are recorded as fraudulent. The
results are summed up in Table 9 and Figure 9, where the fraud loss is
plotted as a function of $k/n$ for both the models, and both of the ways of
setting the tuning parameter. For the penalised logistic regression models, it
is evident from Figure 9 that the models chosen by the fraud loss are better
than the model chosen by the AUC, at least up to around $k/n\approx 0.3,$
while for the boosted trees it is harder to see a clear difference between the
two. The reported figures in Table 9 show that the fraud loss performs better,
both in terms of the average relative fraud loss aggregated over
$k/n=0.01,0.02,\dots,0.99$, and over the smaller selection of values
$k/n=0.16,0.17,\dots,0.25,$ for both of the models. The absolute figures
reported in parentheses also show that penalised logistic regression in this
case gave somewhat lower fraud loss than the boosted trees, which indicates
that this model is perhaps a little better suited for the given setting.
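The selection procedure can be sketched roughly as follows. This is a simplified, hypothetical implementation, assuming the fraud loss measures the share of frauds missed in the top $k$; the penalty grid, the data, and the scaling of $k$ to the fold size are all illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold

def select_penalty(X, y, penalties, k, n_repeats=9, seed=0):
    """Pick the ridge penalty minimising the cross validated top-k loss."""
    cv = RepeatedKFold(n_splits=2, n_repeats=n_repeats, random_state=seed)
    losses = np.zeros(len(penalties))
    for train, test in cv.split(X):
        k_fold = max(1, round(k * len(test) / len(y)))  # scale k to fold size
        for i, lam in enumerate(penalties):
            clf = LogisticRegression(C=1.0 / lam, max_iter=1000)  # ridge penalty
            clf.fit(X[train], y[train])
            p = clf.predict_proba(X[test])[:, 1]
            top = np.argsort(p)[::-1][:k_fold]          # top-ranked cases
            losses[i] += 1.0 - y[test][top].sum() / max(y[test].sum(), 1)
    return penalties[int(np.argmin(losses))]
```

Replacing the accumulated top-$k$ loss with the fold-wise AUC gives the competing selection rule used in the comparison.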
Table 9: Relative aggregated fraud loss, with aggregated fraud loss given in parentheses. Comparison of AUC and fraud loss. Dataset of VAT fraud from the Norwegian Tax Administration.
Estimation model | Logistic | Additive trees
---|---|---
Average over all $k/n=0.01,0.02,\dots 0.99$: |
auc | 1.0412 (0.6974) | 1.0328 (0.6948)
fraud | 1.0276 (0.6911) | 1.0285 (0.6930)
Average over $k/n=0.16,0.17,\dots 0.24,0.25$: |
auc | 1.0854 (0.6361) | 1.0301 (0.6244)
fraud | 1.0609 (0.6218) | 1.0269 (0.6225)
Figure 9: Plot of the fraud loss when selecting the model according to the AUC
and fraud loss, respectively. Dataset of VAT fraud from the Norwegian Tax
Administration.
## 7 Concluding remarks
Statistical fraud detection consists of creating a system that automatically
selects a subset of the cases to be manually investigated. However,
the investigator is often limited to controlling a restricted number $k$ of
cases. In order to allocate the resources in the most efficient manner, one
should then try to select the $k$ cases with the highest probability of being
fraudulent. Prediction models used for this purpose must typically
be regularised to avoid overfitting. In this paper, we propose a new loss
function, the fraud loss, for selecting the complexity of the prediction model
via a tuning parameter. More specifically, we suggest an approach where either
a penalised logistic regression model, or an additive tree model is fitted by
maximising the log-likelihood of a binary regression model with a logit-link,
and the tuning parameter is set by minimising the fraud loss function.
In a simulation study, we have investigated different ways of selecting the
model complexity with the fraud loss, taking the out-of-sample performance
into account, either by cross validation or bootstrapping. Based on this,
repeated cross validation with few folds seems to be the most favourable. In
particular, we have opted for repeated 2-fold cross validation without
stratification. Still, we recognise that stratification might be necessary if
there are very few cases of fraud in the training data.
Then, we carried out a larger simulation study, where we compared the
performance of setting tuning parameters by cross validating the fraud loss
to that of cross validating the AUC. In these simulations, we saw that the
fraud loss gave the best results in most cases, particularly when the
proportion of cases we are to select is close to the marginal probability of fraud.
We have also illustrated our approach on a dataset of VAT fraud from the
Norwegian Tax Administration, making the same comparison as in the second
round of simulations. In this example, the fraud loss performed better than
the AUC, most substantially when fitting penalised logistic regression models,
which were also the models most adequate for this application.
We have focussed on two particular estimation methods and corresponding
definitions of the model complexity. The first is maximising the logistic log-
likelihood function, subject to ridge regularisation, where the penalty
parameter is the one to be chosen. The second is boosting for an additive tree
model, where the number of trees is the focus. The first could, however,
easily be adapted to other types of regularisation, such as the lasso or the
elastic net. For the second, one might define the complexity in terms of, for
instance, the size of each tree. One could also imagine using the fraud loss to
select the complexity for other types of binary classification models.
Further, one might search for new divergences to optimise that put more
emphasis on estimating the higher probabilities accurately, which is similar
to the work of Rudin [18], Boyd et al. [2] and Eban et al. [7]. Another
alternative is to adapt regression trees to the problem of picking a certain
number of cases. This might be done by fitting small trees directly combined
with bagging, or by pruning a decision tree to minimise the fraud loss after
growing the tree using a standard splitting criterion.
## 8 Acknowledgements
This work is funded by The Research Council of Norway centre Big Insight,
Project 237718. The authors would also like to thank Riccardo De Bin for his
useful input and participation in discussions.
## References
* Bolton and Hand [2002] Richard J Bolton and David J Hand. Statistical fraud detection: A review. _Statistical Science_ , 17(3):235–249, 2002.
* Boyd et al. [2012] Stephen Boyd, Corinna Cortes, Mehryar Mohri, and Ana Radovanovic. Accuracy at the top. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, _Advances in Neural Information Processing Systems 25_ , pages 953–961. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4635-accuracy-at-the-top.pdf.
* Chen and Guestrin [2016] Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In _Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining_ , pages 785–794. ACM, 2016.
* Clémençon and Vayatis [2007] Stéphan Clémençon and Nicolas Vayatis. Ranking the best instances. _Journal of Machine Learning Research_ , 8(Dec):2671–2699, 2007.
* Clémençon et al. [2008] Stéphan Clémençon, Gábor Lugosi, Nicolas Vayatis, et al. Ranking and empirical minimization of U-statistics. _The Annals of Statistics_ , 36(2):844–874, 2008.
* Dorogush et al. [2018] Anna Veronika Dorogush, Vasily Ershov, and Andrey Gulin. Catboost: gradient boosting with categorical features support. _arXiv preprint arXiv:1810.11363_ , 2018.
* Eban et al. [2016] Elad ET Eban, Mariano Schain, Alan Mackey, Ariel Gordon, Rif A Saurous, and Gal Elidan. Scalable learning of non-decomposable objectives. _arXiv preprint arXiv:1608.04802_ , 2016.
* Efron [1983] Bradley Efron. Estimating the error rate of a prediction rule: improvement on cross-validation. _Journal of the American statistical association_ , 78(382):316–331, 1983.
* Efron and Tibshirani [1997] Bradley Efron and Robert Tibshirani. Improvements on cross-validation: the 632+ bootstrap method. _Journal of the American Statistical Association_ , 92(438):548–560, 1997.
* Freund et al. [2003] Yoav Freund, Raj Iyer, Robert E Schapire, and Yoram Singer. An efficient boosting algorithm for combining preferences. _Journal of machine learning research_ , 4(Nov):933–969, 2003.
* Friedman [2001] Jerome H Friedman. Greedy function approximation: a gradient boosting machine. _Annals of statistics_ , pages 1189–1232, 2001.
* Hoerl and Kennard [1970] Arthur E Hoerl and Robert W Kennard. Ridge regression: Biased estimation for nonorthogonal problems. _Technometrics_ , 12(1):55–67, 1970.
* Joachims [2005] Thorsten Joachims. A support vector method for multivariate performance measures. In _Proceedings of the 22nd international conference on Machine learning_ , pages 377–384. ACM, 2005.
* Joe [2006] Harry Joe. Generating random correlation matrices based on partial correlations. _Journal of Multivariate Analysis_ , 97(10):2177–2189, 2006.
* Ke et al. [2017] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree. In _Advances in Neural Information Processing Systems_ , pages 3146–3154, 2017.
* Nelsen [2007] Roger B Nelsen. _An introduction to copulas_. Springer Science & Business Media, 2007.
* Robertson and Zaragoza [2007] Stephen Robertson and Hugo Zaragoza. On rank-based effectiveness measures and optimization. _Information Retrieval_ , 10(3):321–339, 2007.
* Rudin [2009] Cynthia Rudin. The p-norm push: A simple convex ranking algorithm that concentrates at the top of the list. _Journal of Machine Learning Research_ , 10(Oct):2233–2271, 2009.
* Tibshirani [1996] Robert Tibshirani. Regression shrinkage and selection via the lasso. _Journal of the Royal Statistical Society: Series B (Methodological)_ , 58(1):267–288, 1996.
* Zou and Hastie [2005] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. _Journal of the royal statistical society: series B (statistical methodology)_ , 67(2):301–320, 2005.
# Meromorphic functions of finite $\varphi$-order and linear Askey-Wilson
divided difference equations
Hui Yu , Janne Heittokangas∗ Department of Physics and Mathematics,
University of Eastern Finland, P.O. Box 111, 80101 Joensuu, Finland
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>, Jun Wang School of Mathematical Sciences, Fudan
University, Shanghai 200433, P.R. China<EMAIL_ADDRESS>and Zhi-Tao Wen
Department of Mathematics, Shantou University, Shantou 515063, Guangdong, P.R.
China<EMAIL_ADDRESS>
###### Abstract.
The growth of meromorphic solutions of linear difference equations containing
Askey-Wilson divided difference operators is estimated. The $\varphi$-order is
used as a general growth indicator, which covers the growth spectrum between
the logarithmic order $\rho_{\log}(f)$ and the classical order $\rho(f)$ of a
meromorphic function $f$.
Key words: Askey-Wilson divided difference operator, Askey-Wilson divided
difference equation, lemma on the logarithmic difference, meromorphic
function, $\varphi$-order.
MSC 2020: Primary 39A13; Secondary 30D35.
∗Corresponding author.
## 1\. Introduction
Suppose that $q$ is a complex number satisfying $0<|q|<1$. In 1985, Askey and
Wilson evaluated a $q$-beta integral [1, Theorem 2.1], which allowed them to
construct a family of orthogonal polynomials [1, Theorems 2.2–2.5]. These
polynomials are eigensolutions of a second order difference equation [1, p.
36] that involves a divided difference operator $\mathcal{D}_{q}$ currently
known as the _Askey-Wilson operator_. We will define $\mathcal{D}_{q}$ below
and call it the _AW-operator_ for brevity. In general, any three consecutive
orthogonal polynomials satisfy a certain three term recurrence relation, see
[1, p. 4] or [8, p. 42].
Recently, Chiang and Feng [3] have obtained a full-fledged Nevanlinna theory
for meromorphic functions of finite logarithmic order with respect to the AW-
operator on the complex plane $\mathbb{C}$. The concluding remarks in [3]
admit that the logarithmic order of growth appears to be restrictive, even
though this class contains a large family of important meromorphic functions.
This encourages us to generalize some of the results in [3] in such a way that
the associated results for finite logarithmic order follow as special cases.
Let $\varphi:(R_{0},\infty)\to(0,\infty)$ be a non-decreasing unbounded
function. The $\varphi$-order of a meromorphic function $f$ in $\mathbb{C}$
was introduced in [5] as the quantity
$\rho_{\varphi}(f)=\limsup_{r\to\infty}\frac{\log T(r,f)}{\log\varphi(r)}.$
Prior to [5], the $\varphi$-order was used as a growth indicator for
meromorphic functions in the unit disc in [4]. In the plane case, the
logarithmic order $\rho_{\log}(f)$ and the classical order $\rho(f)$ of $f$
follow as special cases when choosing $\varphi(r)=\log r$ and $\varphi(r)=r$,
respectively. This leads us to impose a global growth restriction
$\log r\leq\varphi(r)\leq r,\quad r\geq R_{0}.$ (1.1)
Here and from now on, the notation $r\geq R_{0}$ is used to express that
the associated inequality is valid “for all $r$ large enough”.
For an entire function $f$, the Nevanlinna characteristic $T(r,f)$ can be
replaced with the logarithmic maximum modulus $\log M(r,f)$ in the quantities
$\rho(f)$ and $\rho_{\log}(f)$ by using a well-known relation between $T(r,f)$
and $\log M(r,f)$, see [7, p. 23]. The same is true for the $\varphi$-order,
namely
$\rho_{\varphi}(f)=\limsup_{r\to\infty}\frac{\log\log
M(r,f)}{\log\varphi(r)},$ (1.2)
provided that $\varphi$ is subadditive, that is,
$\varphi(a+b)\leq\varphi(a)+\varphi(b)$ for all $a,b\geq R_{0}$. In
particular, this gives $\varphi(2r)\leq 2\varphi(r)$, which yields (1.2).
Moreover, up to a normalization, subadditivity is implied by concavity, see
[5] for details.
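For completeness, the relation behind (1.2) is the standard estimate (see [7, p. 23])

```latex
T(r,f)\;\le\;\log^{+}M(r,f)\;\le\;\frac{R+r}{R-r}\,T(R,f),\qquad 0<r<R.
```

Choosing $R=2r$ gives $\log^{+}M(r,f)\leq 3\,T(2r,f)$, and since subadditivity yields $\log\varphi(2r)\leq\log 2+\log\varphi(r)$, the two limit superior expressions defining $\rho_{\varphi}(f)$ coincide.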
Following the notation in [1] (see [3] and [6, p. 300] for an alternative
notation), we suppose that $f(x)$ is a meromorphic function in $\mathbb{C}$,
and let $x=\cos\theta$ and $z=e^{i\theta}$, where $\theta\in\mathbb{C}$. Then,
for $x\neq\pm 1$, the AW-operator is defined by
$(\mathcal{D}_{q}f)(x):=\frac{\breve{f}(q^{\frac{1}{2}}e^{i\theta})-\breve{f}(q^{-\frac{1}{2}}e^{i\theta})}{\breve{e}(q^{\frac{1}{2}}e^{i\theta})-\breve{e}(q^{-\frac{1}{2}}e^{i\theta})}=\frac{\breve{f}(q^{\frac{1}{2}}e^{i\theta})-\breve{f}(q^{-\frac{1}{2}}e^{i\theta})}{(q^{\frac{1}{2}}-q^{-\frac{1}{2}})(z-1/z)/2},$
(1.3)
where $x=(z+1/z)/2=\cos\theta$, $z=e^{i\theta}$, $e(x)=x$ and
$\breve{f}(z)=f((z+1/z)/2)=f(x)=f(\cos\theta).$
In the exceptional cases $x=\pm 1$, we define
$(\mathcal{D}_{q}f)(\pm 1)=\displaystyle\underset{x\neq\pm 1}{\lim_{x\to\pm
1}}(\mathcal{D}_{q}f)(x)=f^{\prime}(\pm(q^{\frac{1}{2}}+q^{-\frac{1}{2}})/2).$
The branch of the square root in $z=x+\sqrt{x^{2}-1}$ can be fixed in such a
way that for each $x\in\mathbb{C}$ there corresponds a unique
$z\in\mathbb{C}$, see [3] and [6, p. 300]. It is known that $\mathcal{D}_{q}f$
is meromorphic for a meromorphic function $f$ and entire for an entire
function $f$ [3, Theorem 2.1]. The AW-operator in (1.3) can be written in the
alternative form
$(\mathcal{D}_{q}f)(x)=\frac{f(\hat{x})-f(\check{x})}{\hat{x}-\check{x}},$
where $x=(z+1/z)/2=\cos\theta$ and
$\hat{x}=\frac{q^{\frac{1}{2}}z+q^{-\frac{1}{2}}z^{-1}}{2},\quad\check{x}=\frac{q^{-\frac{1}{2}}z+q^{\frac{1}{2}}z^{-1}}{2}.$
Finally, AW-operators of arbitrary order are defined by
$\mathcal{D}_{q}^{0}f=f$ and
$\mathcal{D}_{q}^{n}f=\mathcal{D}_{q}(\mathcal{D}_{q}^{n-1}f)$, where
$n\in\mathbb{N}$.
Lemma A below is a pointwise AW-type lemma on the logarithmic difference
proved in [3, Lemma 4.2], and it is used in [3] to study the growth of
meromorphic solutions of Askey-Wilson divided difference equations. We note
that finite logarithmic order implies finite $\varphi$-order because of the
growth restriction (1.1).
###### Lemma A.
Let $f(x)$ be a meromorphic function of finite logarithmic order such that
$\mathcal{D}_{q}f\not\equiv 0$, and let $\alpha_{1}\in(0,1)$ be arbitrary.
Then there exists a constant $C_{\alpha_{1}}>0$ such that for
$2(|q^{1/2}|+|q^{-1/2}|)|x|<R$, we have
$\begin{split}\log^{+}\left|\frac{\mathcal{D}_{q}f(x)}{f(x)}\right|&\leq\frac{4R(|q^{1/2}-1|+|q^{-1/2}-1|)|x|}{(R-|x|)(R-2(|q^{1/2}|+|q^{-1/2}|)|x|)}\left(m(R,f)+m(R,1/f)\right)\\\
&\quad+2(|q^{1/2}-1|+|q^{-1/2}-1|)|x|\left(\frac{1}{R-|x|}+\frac{1}{R-2(|q^{1/2}|+|q^{-1/2}|)|x|}\right)\\\
&\quad\quad\times\left(n(R,f)+n(R,1/f)\right)\\\
&\quad+2C_{\alpha_{1}}(|q^{1/2}-1|^{\alpha_{1}}+|q^{-1/2}-1|^{\alpha_{1}})|x|^{\alpha_{1}}\underset{|c_{n}|<R}{\sum}\frac{1}{|x-c_{n}|^{\alpha_{1}}}\\\
&\quad+2C_{\alpha_{1}}|q^{-1/2}-1|^{\alpha_{1}}|x|^{\alpha_{1}}\underset{|c_{n}|<R}{\sum}\frac{1}{|x+c(q)q^{-1/2}z^{-1}-q^{-1/2}c_{n}|^{\alpha_{1}}}\\\
&\quad+2C_{\alpha_{1}}|q^{1/2}-1|^{\alpha_{1}}|x|^{\alpha_{1}}\underset{|c_{n}|<R}{\sum}\frac{1}{|x-c(q)q^{1/2}z^{-1}-q^{1/2}c_{n}|^{\alpha_{1}}}+\log
2,\end{split}$ (1.4)
where $c(q)=(q^{-1/2}-q^{1/2})/2$ and $\\{c_{n}\\}$ is the combined sequence
of zeros and poles of $f$.
The choice $R=r\log r$ in Lemma A is made in proving [3, Theorem 3.1], which
is an AW-type lemma on the logarithmic difference asserting
$m\left(r,\frac{\mathcal{D}_{q}f(x)}{f(x)}\right)=O\left((\log
r)^{\rho_{\log}(f)-1+\varepsilon}\right),$ (1.5)
where $\varepsilon>0$ is arbitrary and $f$ is a meromorphic function of finite
logarithmic order $\rho_{\log}(f)$ such that $\mathcal{D}_{q}f\not\equiv 0$.
The estimate (1.5) in turn is used to prove a growth estimate [3, Theorem
12.4] for meromorphic solutions of AW-divided difference equations, stated as
follows.
###### Theorem B.
Let $a_{0}(x),a_{1}(x),\ldots,a_{n-1}(x)$ be entire functions such that
$\rho_{\log}(a_{0})>\max_{1\leq j\leq n}\\{\rho_{\log}(a_{j})\\}.$
Suppose that $f$ is an entire solution of the AW-divided difference equation
$\sum_{j=0}^{n}a_{j}(x)\mathcal{D}_{q}^{j}f(x)=0,$
where $a_{n}(x)=1$. Then $\rho_{\log}(f)\geq\rho_{\log}(a_{0})+1$.
Our main objectives are to find $\varphi$-order analogues of the estimate
(1.5) and of Theorem B. A non-decreasing function
$s:(R_{0},\infty)\to(0,\infty)$ satisfying a global growth restriction
$r<s(r)\leq r^{2},\quad r\geq R_{0},$ (1.6)
will take the role of $R$ in Lemma A. Suitable test functions for $\varphi$
and $s$ then are, for example,
$\varphi(r)=\log^{\alpha}r,\quad\varphi(r)=\exp(\log^{\beta}r),\quad\varphi(r)=r^{\beta},$
along with $s(r)=r\log r$ and $s(r)=r^{\alpha}$, where $\alpha\in(1,2]$ and
$\beta\in(0,1]$.
This paper is organized as follows. A generalization of Theorem B for
meromorphic solutions in terms of the $\varphi$-order is given in Section 2.
Two AW-type lemmas on the logarithmic difference in terms of the
$\varphi$-order are given in Section 3. One of them will be among the most
important individual tools later on. Section 4 consists of lemmas on AW-type
counting functions as well as on the Nevanlinna characteristic of
$\mathcal{D}_{q}f$. These lemmas are crucial in proving the main results,
which are Theorem 2.1 and 2.2 below. The details of the proofs are given in
Section 5.
## 2\. Results on Askey-Wilson divided difference equations
We consider the growth of meromorphic solutions of AW-divided difference
equations
$\sum_{j=0}^{n}a_{j}(x)\mathcal{D}_{q}^{j}f(x)=0$ (2.1)
and of the corresponding non-homogeneous AW-divided difference equations
$\sum_{j=0}^{n}a_{j}(x)\mathcal{D}_{q}^{j}f(x)=a_{n+1}(x),$ (2.2)
where $a_{0},\ldots,a_{n+1}$ are meromorphic functions, and
$a_{0}a_{n}\not\equiv 0$. The results that follow depend on growth parameters
introduced in [5] and defined by
$\alpha_{\varphi,s}=\liminf_{r\to\infty}\frac{\log\varphi(r)}{\log\varphi(s(r))}\quad\text{and}\quad\gamma_{\varphi,s}=\liminf_{r\to\infty}\frac{\log\log\frac{s(r)}{r}}{\log\varphi(r)}.$
(2.3)
Due to the assumptions (1.1) and (1.6), we always have
$\alpha_{\varphi,s}\in[0,1]$ and $\gamma_{\varphi,s}\in[-\infty,1]$. From now
on, we make a global assumption
$\liminf_{r\to\infty}\frac{s(r)}{r}>1,$
which ensures that $\gamma_{\varphi,s}\in[0,1]$. Further properties and
relations related to the growth parameters $\alpha_{\varphi,s}$ and
$\gamma_{\varphi,s}$ can be found in [5].
Theorem 2.1 below reduces to Theorem B when choosing $\varphi(r)=\log r$ and
$s(r)=r^{2}$ and when the coefficients and solutions are entire functions.
###### Theorem 2.1.
Suppose that $\varphi(r)$ is subadditive, and let $\alpha_{\varphi,s}$ and
$\gamma_{\varphi,s}$ be the constants in (2.3). Let $a_{0},\ldots,a_{n}$ be
meromorphic functions of finite $\varphi$-order such that
$\rho_{\varphi}(a_{0})>\max_{1\leq j\leq n}\\{\rho_{\varphi}(a_{j})\\}.$
* (a)
Suppose that $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and that
$s(r)$ is convex and differentiable. If $f$ is a non-constant meromorphic
solution of (2.1), then
$\rho_{\varphi}(f)\geq\alpha_{\varphi,s}^{n}\rho_{\varphi}(a_{0}).$ (2.4)
Moreover, if the coefficients $a_{0},\ldots,a_{n}$ are entire, then
$\rho_{\varphi}(f)\geq\alpha_{\varphi,s}^{n}\rho_{\varphi}(a_{0})+\alpha_{\varphi,s}^{n}\gamma_{\varphi,s}.$
(2.5)
* (b)
Suppose that $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$. If $f$
is a non-constant meromorphic solution of (2.1), then
$\rho_{\varphi}(f)\geq\alpha_{\varphi,s}^{n-1}\rho_{\varphi}(a_{0})$.
###### Remark 1.
For certain $\varphi(r)$, for example, for $\varphi(r)=\log^{\alpha}r$, where
$\alpha\in(1,2]$, the conclusion of Theorem 2.1(a) is stronger than that of
Theorem 2.1(b) due to different choices of $s(r)$. If the coefficients
$a_{0},\ldots,a_{n}$ are entire, then it follows from (2.3) and (2.5) that
$\rho_{\varphi}(f)\geq\rho_{\varphi}(a_{0})+1/\alpha$ in Theorem 2.1(a) when
choosing $s(r)=r^{2}$, which is stronger than the conclusion
$\rho_{\varphi}(f)\geq\rho_{\varphi}(a_{0})$ in Theorem 2.1(b) when choosing
$s(r)=2r$.
On the other hand, the opposite is true for some suitable $\varphi(r)$. For
instance, choose $\varphi(r)=r^{\beta}$, where $\beta\in(0,1]$, along with
$s(r)=2r$ and $s(r)=r^{2}$, respectively. Then we get
$\rho_{\varphi}(f)\geq\rho_{\varphi}(a_{0})$ from Theorem 2.1(b), which is
stronger than the conclusion
$\rho_{\varphi}(f)\geq(1/2)^{n}\rho_{\varphi}(a_{0})$ in Theorem 2.1(a), which
in turn follows from (2.3) and (2.4).
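These parameter values can be verified directly from (2.3). For instance, with $\varphi(r)=\log^{\alpha}r$ and $s(r)=r^{2}$, we have $\log\frac{s(r)}{r}=\log r$ and $\varphi(s(r))=(2\log r)^{\alpha}$, so

```latex
\alpha_{\varphi,s}
  =\liminf_{r\to\infty}\frac{\alpha\log\log r}{\alpha\log(2\log r)}=1,
\qquad
\gamma_{\varphi,s}
  =\liminf_{r\to\infty}\frac{\log\log r}{\alpha\log\log r}=\frac{1}{\alpha},
```

which together with (2.5) gives the bound $\rho_{\varphi}(f)\geq\rho_{\varphi}(a_{0})+1/\alpha$ stated above.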
The following result is a growth estimate for meromorphic solutions of the
non-homogeneous equations (2.2).
###### Theorem 2.2.
Suppose that $\varphi(r)$ is subadditive. Let $a_{0},\ldots,a_{n}$ be
meromorphic functions of finite $\varphi$-order such that
$\rho_{\varphi}(a_{0})>\max_{1\leq j\leq n+1}\\{\rho_{\varphi}(a_{j})\\}.$
If $f$ is a non-constant meromorphic solution of (2.2), then
$\rho_{\varphi}(f)\geq\alpha_{\varphi,s}^{n-1}\rho_{\varphi}(a_{0})$.
The proofs of Theorems 2.1 and 2.2 in Section 5 are based on an AW-type lemma
on the logarithmic difference discussed in Section 3 as well as on estimates
for AW-type counting functions discussed in Section 4.
## 3\. Estimates for the Askey-Wilson type logarithmic difference
Lemma 3.1 below is an AW-type lemma on the logarithmic difference, which
reduces to [3, Theorem 3.1] when choosing $\varphi(r)=\log r$ and
$s(r)=r^{2}$. The proof uses the notation $g(r)\lesssim h(r)$ to express that
there exists a constant $C\geq 1$ such that $g(r)\leq Ch(r)$ for all $r\geq
R_{0}$.
###### Lemma 3.1.
Let $f$ be a meromorphic function of finite $\varphi$-order
$\rho_{\varphi}(f)$ such that $\mathcal{D}_{q}f\not\equiv 0$. Let
$\alpha_{\varphi,s}$ and $\gamma_{\varphi,s}$ be the constants in (2.3), let
$\varepsilon>0$, and denote $|x|=r$.
* (a)
If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and if $s(r)$ is
convex and differentiable, then
$m\left(r,\frac{\mathcal{D}_{q}f(x)}{f(x)}\right)=O\left(\frac{\varphi(s(r))^{\rho_{\varphi}(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}}+1\right)=O\left({\varphi(s(r))^{\rho_{\varphi}(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon}}\right).$
* (b)
If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$ and if
$\varphi(r)$ is subadditive, then
$m\left(r,\frac{\mathcal{D}_{q}f(x)}{f(x)}\right)=O\left(\varphi(r)^{\rho_{\varphi}(f)+\varepsilon}\right).$
###### Proof.
(a) By the proof of [5, Lemma 3.1(a)], there exist non-decreasing functions
$u,v:[1,\infty)\to(0,\infty)$ with the following properties:
* (1)
$r<u(r)<s(r)$ and $r<v(r)<s(r)$ for all $r\geq R_{0}$,
* (2)
$u(r)/r\to\infty$ and $v(r)/r\to\infty$ as $r\to\infty$,
* (3)
$2^{-1}s(r)\leq v(u(r))\leq s(r)$ for all $r\geq R_{0}$,
* (4)
$2\log(u(r)/r)\leq\log(s(r)/r)\leq 2u(r)/r$ for all $r\geq R_{0}$.
Using the standard estimate
$N(v(r),f)-N(r,f)=\int_{r}^{v(r)}\frac{n(t,f)}{t}\,dt\geq
n(r,f)\log\frac{v(r)}{r}$
and the properties (3) and (4), we deduce that
$n(u(r),f)\leq\frac{T(s(r),f)}{\log\frac{s(r)}{2r}-\log\frac{u(r)}{r}}\lesssim\frac{T(s(r),f)}{\log\frac{s(r)}{r}}\lesssim\frac{\varphi(s(r))^{\rho_{\varphi}(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}},$
(3.1)
and similarly for $n(u(r),1/f)$. Choose $R=u(r)$. We integrate (1.4) from $0$
to $2\pi$, and we make use of the properties (1) and (4) together with (3.1)
and formulas (63)–(64) in [3], and obtain
$\begin{split}m\left(r,\frac{\mathcal{D}_{q}f(x)}{f(x)}\right)&\lesssim\frac{T(u(r),f)}{u(r)/r}+{n(u(r),f)+n(u(r),1/f)}+1\\\
&\lesssim\frac{\varphi(s(r))^{\rho_{\varphi}(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}}+1.\end{split}$
This proves the first identity in Case (a).
From (2.3), we get
$\alpha_{\varphi,s}\gamma_{\varphi,s}\leq\liminf_{r\to\infty}\left(\frac{\log\varphi(r)}{\log\varphi(s(r))}\cdot\frac{\log\log\frac{s(r)}{r}}{\log\varphi(r)}\right)=\liminf_{r\to\infty}\frac{\log\log\frac{s(r)}{r}}{\log\varphi(s(r))},$
and so
$\log\frac{s(r)}{r}\geq\varphi(s(r))^{\alpha_{\varphi,s}\gamma_{\varphi,s}-\frac{\varepsilon}{2}},\quad
r\geq R_{0}.$
Recall from [5, Corollary 4.3] that, for a non-constant meromorphic function
$f$ of finite $\varphi$-order $\rho_{\varphi}(f)$, we have
$\rho_{\varphi}(f)\geq\alpha_{\varphi,s}\gamma_{\varphi,s}$. Thus
$\frac{\varphi(s(r))^{\rho_{\varphi}(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}}\leq\varphi(s(r))^{\rho_{\varphi}(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon},\quad
r\geq R_{0},$ (3.2)
where the right-hand side tends to infinity as $r\to\infty$. This proves the
second identity in Case (a).
(b) By the assumptions on $s(r)$, there exists a constant $C\in(1,\infty)$
such that $r<s(r)<Cr$ for all $r\geq R_{0}$. We choose $R=Br$, where
$B=\max\\{[C],[2(|q^{1/2}|+|q^{-1/2}|)]\\}+1$ (3.3)
is an integer. Integrating (1.4) from $0$ to $2\pi$ and making use of formulas
(63)–(64) in [3] together with
$T(2r,f)\geq\int_{r}^{2r}\frac{n(t,f)}{t}\,dt\geq n(r,f)\log 2,$ (3.4)
we obtain
$\begin{split}m\left(r,\frac{\mathcal{D}_{q}f(x)}{f(x)}\right)&\lesssim
T(Br,f)+n(Br,f)+n(Br,1/f)+1\\\
&\lesssim\varphi(2Br)^{\rho_{\varphi}(f)+\varepsilon}+1.\end{split}$ (3.5)
Since the subadditivity of $\varphi$ yields $\varphi(2Br)\leq 2B\varphi(r)$,
the assertion follows from (3.5). This completes the proof. ∎
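For completeness, the subadditivity step used here extends by induction to any integer multiple: if $\varphi$ is subadditive, then for every $k\in\mathbb{N}$,

```latex
\varphi(kr)=\varphi\bigl((k-1)r+r\bigr)
  \le\varphi\bigl((k-1)r\bigr)+\varphi(r)
  \le\cdots\le k\,\varphi(r),
```

which, applied with the integer $k=2B$, gives the estimate $\varphi(2Br)\leq 2B\varphi(r)$ used above.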
Lemma 3.2 below is a pointwise estimate for the AW-type logarithmic difference
that holds outside of an exceptional set. The result reduces to [3, Theorem
3.2] when choosing $\varphi(r)=\log r$ and $s(r)=r^{2}$.
###### Lemma 3.2.
Let $f$ be a meromorphic function of finite $\varphi$-order
$\rho_{\varphi}(f)$ such that $\mathcal{D}_{q}f\not\equiv 0$. Let
$\alpha_{\varphi,s}>0$ and $\gamma_{\varphi,s}$ be the constants in (2.3), let
$\varepsilon>0$, and denote $|x|=r$. Suppose that $\varphi(r)$ is continuous
and satisfies
$\displaystyle\limsup_{r\to\infty}\frac{\log\varphi(r)}{\log r}=0.$ (3.6)
* (a)
If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and if $s(r)$ is
convex and differentiable, then
$\log^{+}\left|\frac{\mathcal{D}_{q}f(x)}{f(x)}\right|=O\left(\frac{\varphi(s(r))^{\rho_{\varphi}(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}}+1\right)=O\left({\varphi(s(r))^{\rho_{\varphi}(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon}}\right)$
holds outside of an exceptional set of finite logarithmic measure.
* (b)
If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$ and if
$\varphi(r)$ is subadditive, then
$\log^{+}\left|\frac{\mathcal{D}_{q}f(x)}{f(x)}\right|=O\left(\varphi(r)^{\rho_{\varphi}(f)+\varepsilon}\right)$
holds outside of an exceptional set of finite logarithmic measure.
###### Proof.
We modify the proof of [3, Theorem 3.2] as follows.
(a) Denote
$\\{d_{n}\\}:=\\{c_{n}\\}\cup\\{q^{1/2}c_{n}\\}\cup\\{q^{-1/2}c_{n}\\},$ (3.7)
where $\\{c_{n}\\}$ is the combined sequence of zeros and poles of $f$. Let
$E_{n}=\left\\{r:r\in\left[|d_{n}|-\frac{|d_{n}|}{\varphi(|d_{n}|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}},\,|d_{n}|+\frac{|d_{n}|}{\varphi(|d_{n}|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}\right]\right\\}$
and $E=\cup_{n}E_{n}$, where $\alpha_{\varphi,s}\in(0,1]$ is defined in (2.3).
In what follows, we consider $r\not\in E$. We proceed to prove that
$|x-d_{n}|\geq\frac{|x|}{2\varphi(|x|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}},\quad|x|=r\geq
R_{0}.$ (3.8)
The proof is divided into three cases in each of which $|x|\geq R_{0}$.
* (1)
Suppose that
$|x|<|d_{n}|-\frac{|d_{n}|}{\varphi(|d_{n}|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}$.
From (3.6), the function
$\frac{|x|}{\varphi(|x|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}$
is increasing, and so
$\displaystyle|x-d_{n}|$ $\displaystyle\geq$
$\displaystyle||x|-|d_{n}||\geq\frac{|d_{n}|}{\varphi(|d_{n}|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}\geq\frac{|x|}{2\varphi(|x|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}.$
* (2)
Suppose that
$|d_{n}|+\frac{|d_{n}|}{\varphi(|d_{n}|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}\leq|x|-\frac{|x|}{\varphi(|x|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}$.
Clearly,
$\displaystyle|x-d_{n}|$ $\displaystyle\geq$
$\displaystyle\frac{|d_{n}|}{\varphi(|d_{n}|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}+\frac{|x|}{\varphi(|x|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}\geq\frac{|x|}{2\varphi(|x|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}.$
* (3)
Suppose that
$|d_{n}|+\frac{|d_{n}|}{\varphi(|d_{n}|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}<|x|$
and
$|x|-\frac{|x|}{\varphi(|x|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}\leq|d_{n}|+\frac{|d_{n}|}{\varphi(|d_{n}|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}.$
Then we have
$|x-d_{n}|\geq\frac{|d_{n}|}{\varphi(|d_{n}|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}$
and $|x|=|d_{n}|(1+o(1))$ as $|x|\to\infty$ (or as $n\to\infty$). This yields
(3.8) by the continuity of $\varphi(r)$.
Keeping in mind that $r\not\in E$, this completes the proof of (3.8).
Let $\alpha_{1}\in(0,1)$. From (3.8),
$\sum_{|c_{n}|<R}\frac{1}{|x-c_{n}|^{\alpha_{1}}}\leq\frac{2^{\alpha_{1}}\varphi(|x|+3)^{\frac{\alpha_{1}(\rho_{\varphi}(f)+\varepsilon)}{\alpha_{\varphi,s}}}}{|x|^{\alpha_{1}}}\left(n(R,f)+n(R,1/f)\right).$
(3.9)
From (3.6)–(3.8), we have, for all $|x|$ sufficiently large and hence for all
$|z|$ sufficiently large,
$\begin{split}|x+c(q)q^{-1/2}z^{-1}-q^{-1/2}c_{n}|&\geq|x-q^{-1/2}c_{n}|-|c(q)q^{-1/2}z^{-1}|\geq\frac{|x|}{3\varphi(|x|+3)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}},\end{split}$
and similarly for $|x-c(q)q^{1/2}z^{-1}-q^{1/2}c_{n}|,$ where
$c(q)=(q^{-1/2}-q^{1/2})/2$. Therefore,
$\displaystyle\underset{|c_{n}|<R}{\sum}\frac{1}{|x+c(q)q^{-1/2}z^{-1}-q^{-1/2}c_{n}|^{\alpha_{1}}}+\underset{|c_{n}|<R}{\sum}\frac{1}{|x-c(q)q^{1/2}z^{-1}-q^{1/2}c_{n}|^{\alpha_{1}}}$
$\displaystyle\qquad\leq\frac{2\cdot
3^{\alpha_{1}}\varphi(|x|+3)^{\frac{\alpha_{1}(\rho_{\varphi}(f)+\varepsilon)}{\alpha_{\varphi,s}}}}{|x|^{\alpha_{1}}}\left(n(R,f)+n(R,1/f)\right).$
(3.10)
We make use of the proof of Lemma 3.1, according to which there exist non-
decreasing functions $u,v:[1,\infty)\to(0,\infty)$ satisfying the
aforementioned properties (1)–(4). Choose $R=u(r)$ and
$\alpha_{1}=\frac{\alpha_{\varphi,s}\varepsilon}{4(\rho_{\varphi}(f)+\varepsilon)}\in(0,1)$.
Since $\varepsilon>0$ is arbitrary, it follows from (3.1) that
$n(u(r),f)\lesssim\frac{\varphi(s(r))^{\rho_{\varphi}(f)+\frac{\varepsilon}{4}}}{\log\frac{s(r)}{r}}.$
(3.11)
By substituting (3.9)–(3.11) into (1.4), and by using (3.2), we have
$\begin{split}\log^{+}\left|\frac{\mathcal{D}_{q}f(x)}{f(x)}\right|&\lesssim\frac{T(u(r),f)}{u(r)/r}+\frac{n(u(r),f)+n(u(r),1/f)}{u(r)/r}\\\
&\quad+\varphi(r+3)^{\frac{\alpha_{1}}{{\alpha_{\varphi,s}}}(\rho_{\varphi}(f)+\varepsilon)}\cdot\frac{\varphi(s(r))^{\rho_{\varphi}(f)+\frac{\varepsilon}{4}}}{\log\frac{s(r)}{r}}+1\\\
&\lesssim\frac{\varphi(s(r))^{\rho_{\varphi}(f)+\frac{\varepsilon}{2}}}{\log\frac{s(r)}{r}}+1\lesssim{\varphi(s(r))^{\rho_{\varphi}(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon}},\quad
r\not\in E.\end{split}$ (3.12)
By (3.12), it suffices to prove that the logarithmic measure of the
exceptional set $E$ is finite. We recall from [2, p. 249] that, for a
meromorphic function $h(x)$,
$n(r,h(cx))=n(|c|r,h(x)),\quad c\in\mathbb{C}\setminus\\{0\\}.$
We apply this formula to the functions $f(q^{-1/2}x)$ and $f(q^{1/2}x)$ and
make use of [5, Lemmas 4.1–4.2] to get
$\lambda_{\varphi}=\rho_{\varphi}(n(Ar,f)+n(Ar,1/f))\leq\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}<\infty,$
where $\lambda_{\varphi}$ is the $\varphi$-exponent of convergence of the
sequence $\\{d_{n}\\}$ defined in (3.7), and
$A=\max\\{1,|q|^{-1/2},|q|^{1/2}\\}$. For $N\geq R_{0}$ and a given
sufficiently small $\delta>0$, we have
$\frac{1}{\varphi(|d_{N}|)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}<\delta$.
Using the fact that $\log(1+|x|)\leq|x|$ for all $|x|\geq 0$, the constant
$C_{\delta}=\frac{2}{1-\delta}>0$ satisfies
$\log\frac{1+\frac{1}{\varphi(|d_{N}|)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}}{1-\frac{1}{\varphi(|d_{N}|)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}}\leq
C_{\delta}\cdot\frac{1}{\varphi(|d_{N}|)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}},\quad
N\geq R_{0}.$
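In detail, writing $x:=\varphi(|d_{N}|)^{-\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}<\delta$, the stated bound follows from

```latex
\log\frac{1+x}{1-x}=\log(1+x)-\log(1-x)
  \le x+\frac{x}{1-x}
  \le\frac{2}{1-\delta}\,x=C_{\delta}\,x,
```

where $\log(1+t)\le t$ is applied twice, once with $t=x$ and once with $t=x/(1-x)$, since $-\log(1-x)=\log\bigl(1+\frac{x}{1-x}\bigr)$.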
Therefore,
$\begin{split}\text{log-meas}\,(E)&=\left(\int_{E\cap[1,|d_{N}|]}+\int_{E\cap[|d_{N}|,\infty)}\right)\,\frac{dt}{t}\\\
&\leq\log|d_{N}|+\sum_{n=N}^{\infty}\int_{E_{n}}\,\frac{dt}{t}=\log|d_{N}|+\sum_{n=N}^{\infty}\log\frac{1+\frac{1}{\varphi(|d_{n}|)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}}{1-\frac{1}{\varphi(|d_{n}|)^{\frac{\rho_{\varphi}(f)+\varepsilon}{\alpha_{\varphi,s}}}}}\\\
&\leq\log|d_{N}|+C_{\delta}\sum_{n=N}^{\infty}\frac{1}{\varphi(|d_{n}|)^{\lambda_{\varphi}+\frac{\varepsilon}{\alpha_{\varphi,s}}}}<\infty,\end{split}$
which yields the assertion.
(b) By making use of the proof of Lemma 3.1(b) and following the same method
as in Case (a) above, we obtain (3.9) and (3.10). Choose $R=Br$ and
$\alpha_{1}=\frac{\alpha_{\varphi,s}\varepsilon}{2(\rho_{\varphi}(f)+\varepsilon)}\in(0,1)$,
where $B$ is defined in (3.3). Then by substituting (3.4), (3.9) and (3.10)
into (1.4), we have
$\begin{split}\log^{+}\left|\frac{\mathcal{D}_{q}f(x)}{f(x)}\right|&\lesssim
T(Br,f)+n(Br,f)+n(Br,1/f)\\\
&\quad+\varphi(r+3)^{\frac{\alpha_{1}}{{\alpha_{\varphi,s}}}(\rho_{\varphi}(f)+\varepsilon)}\cdot\varphi(2Br)^{\rho_{\varphi}(f)+\frac{\varepsilon}{2}}+1\\\
&\leq{\varphi(2Br)^{\rho_{\varphi}(f)+\varepsilon}},\quad r\not\in
E.\end{split}$
Then the assertion follows from the subadditivity of $\varphi$, that is,
$\varphi(2Br)\leq 2B\varphi(r)$. Similarly as in Case (a) above, we deduce
that the logarithmic measure of the exceptional set $E$ is finite. This
completes the proof. ∎
## 4\. Askey-Wilson type counting functions
and characteristic functions
In this section we state three lemmas, whose proofs are just minor
modifications of the corresponding results in [3]. For a non-constant
meromorphic function $f$, it follows from [5, Lemmas 4.1–4.2] that
$\rho_{\varphi}(f)\geq\alpha_{\varphi,s}\lambda_{\varphi}+\alpha_{\varphi,s}\gamma_{\varphi,s}$
and, if $\alpha_{\varphi,s}>0$, then
$n(r,a,f)=O(\varphi(r)^{\lambda_{\varphi}+\varepsilon})\leq
O\left(\varphi(r)^{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}\right),$
where $\lambda_{\varphi}$ is the $\varphi$-exponent of convergence of the
$a$-points of $f$.
Lemma 4.1 below is essential in proving Lemma 4.2, and it reduces to [3,
Theorem 5.1] when choosing $\varphi(r)=\log r$ and $s(r)=r^{2}$.
###### Lemma 4.1.
Let $f$ be a non-constant meromorphic function of finite $\varphi$-order
$\rho_{\varphi}(f)$. Suppose that $\varphi(r)$ is subadditive. Let
$\alpha_{\varphi,s}>0$ and $\gamma_{\varphi,s}$ be the constants in (2.3), and
let $\varepsilon>0$ and $a\in\widehat{\mathbb{C}}$.
* (a)
If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and if $s(r)$ is
convex and differentiable, then
$N(r,a,f(\hat{x}))=N(r,a,f(x))+O\left(\varphi(r)^{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}\right)+O(\log
r),$
$N(r,a,f(\check{x}))=N(r,a,f(x))+O\left(\varphi(r)^{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}\right)+O(\log
r).$
* (b)
If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$, then
$N(r,a,f(\hat{x}))=N(r,a,f(x))+O\left(\varphi(r)^{\rho_{\varphi}(f)+\varepsilon}\right)+O(\log
r),$
$N(r,a,f(\check{x}))=N(r,a,f(x))+O\left(\varphi(r)^{\rho_{\varphi}(f)+\varepsilon}\right)+O(\log
r).$
Lemma 4.2 below is a direct consequence of Lemma 4.1 and the definition of the
AW-operator $\mathcal{D}_{q}f$, and it reduces to [3, Theorem 3.3] when
choosing $\varphi(r)=\log r$ and $s(r)=r^{2}$.
###### Lemma 4.2.
Let $f$ be a non-constant meromorphic function of finite $\varphi$-order
$\rho_{\varphi}(f)$. Suppose that $\varphi(r)$ is subadditive. Let
$\alpha_{\varphi,s}>0$ and $\gamma_{\varphi,s}$ be the constants in (2.3), and
let $\varepsilon>0$.
* (a)
If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and if $s(r)$ is
convex and differentiable, then
$N\left(r,\mathcal{D}_{q}f\right)\leq
2N(r,f)+O\left(\varphi(r)^{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}\right)+O(\log
r).$
* (b)
If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$, then
$N\left(r,\mathcal{D}_{q}f\right)\leq
2N(r,f)+O\left(\varphi(r)^{\rho_{\varphi}(f)+\varepsilon}\right)+O(\log r).$
The following result reduces to [3, Theorem 3.4] when choosing
$\varphi(r)=\log r$ and $s(r)=r^{2}$.
###### Lemma 4.3.
Let $f$ be a non-constant meromorphic function of finite $\varphi$-order
$\rho_{\varphi}(f)$. Suppose that $\varphi(r)$ is subadditive. Let
$\alpha_{\varphi,s}>0$ and $\gamma_{\varphi,s}$ be the constants in (2.3), and
let $\varepsilon\in(0,1)$.
* (a)
If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}=\infty$ and if $s(r)$ is
convex and differentiable, then
$T\left(r,\mathcal{D}_{q}f\right)\leq
2T(r,f)+O\left(\varphi(r)^{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}\right)+O(\log
r).$
* (b)
If $\displaystyle\limsup_{r\to\infty}\frac{s(r)}{r}<\infty$, then
$T\left(r,\mathcal{D}_{q}f\right)\leq
2T(r,f)+O\left(\varphi(r)^{\rho_{\varphi}(f)+\varepsilon}\right)+O(\log r).$
###### Proof.
Choose
$\varepsilon^{*}=\frac{\alpha_{\varphi,s}^{2}\varepsilon^{2}}{2(\rho_{\varphi}(f)+\alpha_{\varphi,s}\varepsilon)}\in\left(0,\frac{\alpha_{\varphi,s}}{2}\right)$.
By the definition of the constant $\alpha_{\varphi,s}$ in (2.3), it follows
that
$\varphi(s(r))\leq\varphi(r)^{\frac{1}{\alpha_{\varphi,s}-\varepsilon^{*}}},\quad
r\geq R_{0}.$ (4.1)
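The bound (4.1) is immediate from (2.3): since $\alpha_{\varphi,s}$ is the lower limit of $\log\varphi(r)/\log\varphi(s(r))$, we have, for all large $r$,

```latex
\frac{\log\varphi(r)}{\log\varphi(s(r))}\ge\alpha_{\varphi,s}-\varepsilon^{*},
\qquad\text{so}\qquad
\log\varphi(s(r))\le\frac{\log\varphi(r)}{\alpha_{\varphi,s}-\varepsilon^{*}},
```

using $0<\varepsilon^{*}<\alpha_{\varphi,s}$ and $\log\varphi(s(r))>0$ for all large $r$; exponentiating gives (4.1).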
We replace $\varepsilon$ in Lemma 3.1(a) with
$\varepsilon^{\prime}=\frac{\alpha_{\varphi,s}\varepsilon}{2}+\gamma_{\varphi,s}\varepsilon^{*}=\left(\frac{\alpha_{\varphi,s}}{2}+\frac{\alpha_{\varphi,s}^{2}\gamma_{\varphi,s}\varepsilon}{2(\rho_{\varphi}(f)+\alpha_{\varphi,s}\varepsilon)}\right)\varepsilon$,
which we are allowed to do since
$0<\frac{\alpha_{\varphi,s}}{2}\leq\frac{\alpha_{\varphi,s}}{2}+\frac{\alpha_{\varphi,s}^{2}\gamma_{\varphi,s}\varepsilon}{2(\rho_{\varphi}(f)+\alpha_{\varphi,s}\varepsilon)}<\alpha_{\varphi,s}\leq
1$. Consequently, we deduce from (4.1) that
$\begin{split}\varphi(s(r))^{\rho_{\varphi}(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon^{\prime}}&\leq\varphi(r)^{\frac{\rho_{\varphi}(f)-\alpha_{\varphi,s}\gamma_{\varphi,s}+\varepsilon^{\prime}}{\alpha_{\varphi,s}-\varepsilon^{*}}}\\\
&\leq\varphi(r)^{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon},\quad
r\geq R_{0}.\end{split}$ (4.2)
Case (a) now follows directly from (4.2) and Lemmas 3.1(a) and 4.2(a). Case
(b) is more straightforward. ∎
###### Remark 2.
If $\alpha_{\varphi,s}>0$, it is easy to see that
$\rho_{\varphi}(\mathcal{D}_{q}f)\leq\max\left\\{\rho_{\varphi}(f),\,\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}\right\\}.$
## 5\. Proofs of theorems
Proof of Theorem 2.1. All assertions are true if $\rho_{\varphi}(f)=\infty$ or
if $\alpha_{\varphi,s}=0$, so we may suppose that $\rho_{\varphi}(f)<\infty$
and $\alpha_{\varphi,s}>0$.
(a) We begin by proving for every $k\in\mathbb{N}$ that
$\begin{split}\rho_{\varphi}(\mathcal{D}_{q}^{k}f)&\leq{\max}\left\\{\rho_{\varphi}(f),\,\max_{1\leq
l\leq
k}\left\\{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}^{l}}-\gamma_{\varphi,s}\sum_{j=0}^{l-1}\frac{1}{\alpha_{\varphi,s}^{j}}\right\\}\right\\}=:\rho_{\varphi,k}.\end{split}$
(5.1)
The case $k=1$ is obvious by Remark 2. We suppose that (5.1) holds for $k$,
and we aim to prove (5.1) for $k+1$. Applying Remark 2 to the meromorphic
function $\mathcal{D}_{q}^{k}f$ yields
$\begin{split}\rho_{\varphi}(\mathcal{D}_{q}^{k+1}f)&=\rho_{\varphi}(\mathcal{D}_{q}(\mathcal{D}_{q}^{k}f))\leq\max\left\\{\rho_{\varphi}(\mathcal{D}_{q}^{k}f),\,\frac{\rho_{\varphi}(\mathcal{D}_{q}^{k}f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}\right\\}\\\
&\leq\max\left\\{\rho_{\varphi}(f),\,\max_{1\leq l\leq
k+1}\left\\{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}^{l}}-\gamma_{\varphi,s}\sum_{j=0}^{l-1}\frac{1}{\alpha_{\varphi,s}^{j}}\right\\}\right\\}=\rho_{\varphi,k+1}.\end{split}$
The assertion (5.1) is now proved. Moreover, it is easy to see that
$\rho_{\varphi}(f)\leq\rho_{\varphi,k}\leq\rho_{\varphi,k+1}$ for
$k\in\mathbb{N}$.
Suppose first that the coefficients $a_{0}(x),\ldots,a_{n}(x)$ are entire. We
divide (2.1) by $f(x)$ and make use of (4.2), (5.1) and Lemma 3.1(a) to obtain
$\begin{split}m(r,a_{0})&\leq\max_{1\leq j\leq n}\\{m(r,a_{j})\\}+{\sum_{1\leq
j\leq n}}m\left(r,\frac{\mathcal{D}_{q}^{j}f}{f}\right)\\\
&\lesssim\max_{1\leq j\leq n}\\{m(r,a_{j})\\}+{\max_{1\leq j\leq
n}}\left\\{m\left(r,\frac{\mathcal{D}_{q}^{j}f}{\mathcal{D}_{q}^{j-1}f}\right)\right\\}\\\
&\lesssim\varphi(r)^{\rho_{\varphi}(a_{0})-\varepsilon}+\varphi(r)^{\frac{\rho_{\varphi,n-1}}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+{\varepsilon}},\quad
r\geq R_{0}.\end{split}$ (5.2)
Since there exists a sequence $\\{r_{n}\\}$ of positive real numbers tending
to infinity such that
$m(r_{n},a_{0})\geq\varphi(r_{n})^{\rho_{\varphi}(a_{0})-\frac{\varepsilon}{2}}$,
we have
$\rho_{\varphi}(a_{0})-\frac{\varepsilon}{2}\leq\frac{\rho_{\varphi,n-1}}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+{\varepsilon},$
where we may let $\varepsilon\to 0^{+}$. This gives us
$\displaystyle\rho_{\varphi}(a_{0})$ $\displaystyle\leq$
$\displaystyle\max\left\\{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}}-\gamma_{\varphi,s},\,\max_{1\leq
l\leq
n-1}\left\\{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}^{l+1}}-\gamma_{\varphi,s}\sum_{j=0}^{l}\frac{1}{\alpha_{\varphi,s}^{j}}\right\\}\right\\}$
$\displaystyle=$ $\displaystyle\max_{1\leq l\leq
n}\left\\{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}^{l}}-\gamma_{\varphi,s}\sum_{j=0}^{l-1}\frac{1}{\alpha_{\varphi,s}^{j}}\right\\},$
and so
$\displaystyle\alpha_{\varphi,s}^{n}\rho_{\varphi}(a_{0})$ $\displaystyle\leq$
$\displaystyle\max_{1\leq l\leq
n}\left\\{\alpha_{\varphi,s}^{n-l}\rho_{\varphi}(f)-\gamma_{\varphi,s}\sum_{j=0}^{l-1}\alpha_{\varphi,s}^{n-j}\right\\}\leq\rho_{\varphi}(f)-\alpha_{\varphi,s}^{n}\gamma_{\varphi,s}.$
Then the assertion (2.5) follows.
Suppose then that some of the coefficients $a_{0}(x),\ldots,a_{n}(x)$ have
poles. We divide (2.1) by $f(x)$ and make use of (5.1) and Lemma 4.3(a) to
obtain
$\begin{split}N(r,a_{0})&\lesssim\max_{1\leq j\leq
n}\\{T(r,a_{j})\\}+\sum_{j=0}^{n}T(r,\mathcal{D}_{q}^{j}f)\\\
&\lesssim\max_{1\leq j\leq
n}\\{T(r,a_{j})\\}+T(r,f)+\varphi(r)^{\frac{\rho_{\varphi,n-1}}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}+\log
r\\\
&\lesssim\varphi(r)^{\rho_{\varphi}(a_{0})-\varepsilon}+\varphi(r)^{\rho_{\varphi}(f)+\varepsilon}+\varphi(r)^{\frac{\rho_{\varphi,n-1}}{\alpha_{\varphi,s}}-\gamma_{\varphi,s}+\varepsilon}+\log
r,\quad r\geq R_{0}.\end{split}$
Combining this with (5.2) and noting the fact that $f$ is non-constant, we
obtain
$\rho_{\varphi}(a_{0})\leq\max\left\\{\rho_{\varphi}(f),\,\max_{1\leq l\leq
n}\left\\{\frac{\rho_{\varphi}(f)}{\alpha_{\varphi,s}^{l}}-\gamma_{\varphi,s}\sum_{j=0}^{l-1}\frac{1}{\alpha_{\varphi,s}^{j}}\right\\}\right\\}=\rho_{\varphi,n},$
and thus, similarly as above,
$\begin{split}\alpha_{\varphi,s}^{n}\rho_{\varphi}(a_{0})&\leq\max\left\\{\alpha_{\varphi,s}^{n}\rho_{\varphi}(f),\,\rho_{\varphi}(f)-\alpha_{\varphi,s}^{n}\gamma_{\varphi,s}\right\\}\leq\rho_{\varphi}(f).\end{split}$
Hence the assertion (2.4) follows.
(b) Similarly as in Case (a) above, we make use of (5.1) and Lemmas 3.1(b) and
4.3(b) to obtain
$\begin{split}T(r,a_{0})&=m(r,a_{0})+N(r,a_{0})\\\ &\lesssim\max_{1\leq j\leq
n}\\{T(r,a_{j})\\}+{\sum_{1\leq j\leq
n}}m\left(r,\frac{\mathcal{D}_{q}^{j}f}{f}\right)+\sum_{j=0}^{n}T(r,\mathcal{D}_{q}^{j}f)\\\
&\lesssim\varphi(r)^{\rho_{\varphi}(a_{0})-\varepsilon}+\varphi(r)^{\rho_{\varphi,n-1}+\varepsilon}+\log
r,\quad r\geq R_{0}.\end{split}$
Combining this with the fact that $f$ is non-constant, we deduce
$\rho_{\varphi}(a_{0})\leq\rho_{\varphi,n-1}$, and so
$\alpha_{\varphi,s}^{n-1}\rho_{\varphi}(a_{0})\leq\rho_{\varphi}(f).$ This
completes the proof. $\Box$
Proof of Theorem 2.2. Choose $s(r)$ satisfying the assumptions of Theorem
2.1(b). We divide (2.2) by $f(x)$ and make use of (5.1) and Lemmas 3.1(b) and
4.3(b) to obtain
$\begin{split}T(r,a_{0})&\lesssim\max_{1\leq j\leq
n+1}\\{T(r,a_{j})\\}+{\sum_{1\leq j\leq
n}}m\left(r,\frac{\mathcal{D}_{q}^{j}f}{f}\right)+m\left(r,\frac{1}{f}\right)+\sum_{j=0}^{n}T(r,\mathcal{D}_{q}^{j}f)\\\
&\lesssim\varphi(r)^{\rho_{\varphi}(a_{0})-\varepsilon}+\varphi(r)^{\rho_{\varphi,n-1}+\varepsilon}+\log
r,\quad r\geq R_{0}.\end{split}$
Similarly as in the proof of Theorem 2.1(b), the assertion follows. $\Box$
## Acknowledgements
The first author gratefully acknowledges the support of the China Scholarship
Council (No. 201806330120). The third author was supported by the National
Natural Science Foundation of China (No. 11771090). The fourth author was
supported by the National Natural Science Foundation of China (Nos. 11971288
and 11771090) and by Shantou University SRFT (NTF18029).
## References
* [1] Askey R. and J. Wilson, _Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials_. Mem. Amer. Math. Soc. 54 (1985), no. 319, iv+55 pp.
* [2] Bergweiler W., K. Ishizaki and N. Yanagihara, _Meromorphic solutions of some functional equations_. Methods Appl. Anal. 5 (1998), no. 3, 248–258.
* [3] Chiang Y. M. and S. J. Feng, _Nevanlinna theory of the Askey-Wilson divided difference operator_. Adv. Math. 329 (2018), 217–272.
* [4] Chyzhykov I., J. Heittokangas and J. Rättyä, _Finiteness of $\varphi$-order of solutions of linear differential equations in the unit disc_. J. Anal. Math. 109 (2009), 163–198.
* [5] Heittokangas J., J. Wang, Z. T. Wen and H. Yu, _Meromorphic functions of finite $\varphi$-order and linear $q$-difference equations_. https://arxiv.org/abs/2010.12356, 28 pp.
* [6] Ismail M. E. H., _Classical and Quantum Orthogonal Polynomials in One Variable_. Encycl. Math. Appls., vol. 98, Camb. Univ. Press, 2005.
* [7] Rubel L. A., _Entire and Meromorphic Functions_. Springer-Verlag, New York, 1996.
* [8] Szegö G., _Orthogonal Polynomials._ Fourth edition. American Mathematical Society, Colloquium Publications, Vol. XXIII. American Mathematical Society, Providence, R.I., xiii+432 pp, 1975.
CP-Symmetry in Scattering of Neutrinos from Nuclei
R.B. Begzhanov and R.S. Sharafiddinov
Institute of Nuclear Physics, Uzbekistan Academy of Sciences,
Ulugbek, Tashkent 100214, Uzbekistan
Abstract
The elastic scattering of longitudinal and transversal neutrinos on a spinless
nucleus has been discussed taking into account the charge, magnetic, anapole
and electric dipole moments of fermions and their weak neutral currents. The
compound structure of the neutrino interaction cross section with nuclei has
been defined. Invariance of the considered process under the C- and
P-operations has been investigated as a function of the polarization type.
1\. Introduction
It has been established that the behavior of massive neutrinos plays an
important part in understanding the physical nature of elementary particles.
One way of doing this is to study the possible neutrino-nucleus
interaction [1,2].
The neutrino interaction with the field of emission may be expressed in the
form [3,4] of the electromagnetic current
$j_{\mu}^{em}=\overline{u}(p^{\prime},s^{\prime})[\gamma_{\mu}F_{1\nu}(q^{2})-i\sigma_{\mu\lambda}q_{\lambda}F_{2\nu}(q^{2})+$
$+\gamma_{5}\gamma_{\mu}G_{1\nu}(q^{2})-i\gamma_{5}\sigma_{\mu\lambda}q_{\lambda}G_{2\nu}(q^{2})]u(p,s),$
(1)
where $\sigma_{\mu\lambda}=[\gamma_{\mu},\gamma_{\lambda}]/2,$
$q=p-p^{\prime}$ is the momentum transfer, $p(s)$ and $p^{\prime}(s^{\prime})$
denote the four-momenta (helicities) of the initial and final neutrinos, and
$F_{i\nu}(q^{2})$ and $G_{i\nu}(q^{2})$ are the vector and axial-vector parts
of the interaction, respectively. The functions $F_{1\nu}(0),$ $F_{2\nu}(0)$ and
$G_{2\nu}(0)$ give the static estimates of the neutrino charge [5], magnetic
[6] and electric dipole [7] moments, on which there exist experimental and
cosmological bounds [8]. Insofar as $G_{1\nu}(0)$ is concerned, it defines the
size of the anapole moment [9], but its value has not yet been measured in the
laboratory [10].
It is known that $F_{i\nu}(q^{2})$ are invariant with respect to the C- and
P-operations because the interaction of $F_{i\nu}(q^{2})$ with the field of
emission must be CP-symmetric. The term $G_{1\nu}(q^{2})$ is CP-even but
P-odd [9]. In contrast, the term $G_{2\nu}(q^{2})$ must be C-invariant
but CP-antisymmetric [11]. Therefore, the form factors $G_{i\nu}(q^{2})$ may
be different from zero only in the case where P-symmetry is absent.
The violation of P-parity leads to the appearance of a right-left asymmetry,
for example, in the scattering of polarized neutrinos on nuclei. In many works
[2,12] the spin phenomena were studied with longitudinal neutrinos. Such an
investigation is important not only for elucidating the compound structure of
the interaction between leptons and hadrons but also for observing and
refining the most diverse symmetries of elementary particles. However, a
massive neutrino must have transversal as well as longitudinal polarization.
Taking the latter into account makes it possible to look directly at the
nature of the processes discussed.
In the present work, we investigate the symmetry properties of the
interactions of massive neutrinos with an electroweak field of emission.
Section 2 is dedicated to the elastic scattering of longitudinally polarized
neutrinos on the nucleus electric $(Z)$ and weak $(Z_{W})$ charges
$\nu(\overline{\nu})+A(Z,Z_{W})\stackrel{{\scriptstyle\gamma,Z^{0}}}{{\rightarrow}}\nu^{\prime}(\overline{\nu^{\prime}})+A(Z,Z_{W}),$
(2)
going at the expense of neutral and electromagnetic currents. In Sec. 3 the
studied processes are reanalysed for the case of transversal neutrino
polarization. In Sec. 4 we make some concluding remarks.
2\. Longitudinal Polarized Neutrinos Scattering on a Nucleus
In the framework of the standard theory of electroweak interaction [13], the
Hamiltonian of the neutrino interaction with the field of a nucleus has the form
$H=\frac{4\pi\alpha}{q^{2}}\overline{u}(p^{\prime},s^{\prime})[\gamma_{\mu}F_{1\nu}(q^{2})-i\sigma_{\mu\lambda}q_{\lambda}F_{2\nu}(q^{2})+$
$+\gamma_{5}\gamma_{\mu}G_{1\nu}(q^{2})-i\gamma_{5}\sigma_{\mu\lambda}q_{\lambda}G_{2\nu}(q^{2})]u(p,s)J_{\mu}^{\gamma}(q)+$
$+\frac{G_{F}}{\sqrt{2}}\overline{u}(p^{\prime},s^{\prime})\gamma_{\mu}(g_{V_{\nu}}+\gamma_{5}g_{A_{\nu}})u(p,s)J_{\mu}^{Z^{0}}(q).$
(3)
Here $J_{\mu}^{x}(q)$ are the nucleus electromagnetic $(x=\gamma)$ and weak
neutral $(x=Z^{0})$ currents [14], $g_{V_{\nu}}$ and $g_{A_{\nu}}$ are the
corresponding constants of the neutrino interaction vector $(V)$ and axial
$(A)$ parts.
In the case of longitudinal neutrino polarization and of a zero-spin
nucleus, the cross-section of the process (2), on the basis of (3), can be
presented after summing over $s^{\prime}$ as follows:
$d\sigma_{ew}(\theta,s)=d\sigma_{em}(\theta,s)+d\sigma_{int}(\theta,s)+d\sigma_{we}(\theta,s),$
(4)
where the purely electromagnetic interaction corresponds to the expression
$\frac{d\sigma_{em}(\theta,s)}{d\Omega}=\sigma^{\nu}_{o}(1-\eta^{2}_{\nu})^{-1}\\{[F_{1\nu}+2\lambda_{c}s\sqrt{1-\eta_{\nu}^{2}}G_{1\nu}]F_{1\nu}+$
$+\eta^{2}_{\nu}[F_{1\nu}^{2}+4m_{\nu}^{2}(1-\eta^{-2}_{\nu})^{2}F_{2\nu}^{2}]tg^{2}\frac{\theta}{2}-8sE_{\nu}^{2}(1-\eta_{\nu}^{2})^{3/2}F_{2\nu}G_{2\nu}tg^{2}\frac{\theta}{2}+$
$+(1-\eta^{2}_{\nu})[G_{1\nu}^{2}+4E_{\nu}^{2}G_{2\nu}^{2}tg^{2}\frac{\theta}{2}]\\}F_{E}^{2}(q^{2}).$
(5)
The contribution arising from the electroweak interference is
written in the form
$\frac{d\sigma_{int}(\theta,s)}{d\Omega}=\rho\sigma^{\nu}_{o}(1-\eta^{2}_{\nu})^{-1}g_{V_{\nu}}\\{[1-$
$-\lambda_{c}s\frac{g_{A_{\nu}}}{g_{V_{\nu}}}\sqrt{1-\eta^{2}_{\nu}}][F_{1\nu}+\lambda_{c}s\sqrt{1-\eta_{\nu}^{2}}G_{1\nu}]+\eta^{2}_{\nu}F_{1\nu}tg^{2}\frac{\theta}{2}\\}F_{EV}(q^{2}).$
(6)
In the same way one can present the cross-section of purely weak interaction
with neutral currents
$\frac{d\sigma_{we}(\theta,s)}{d\Omega}=\frac{E_{\nu}^{2}G_{F}^{2}}{8\pi^{2}}\\{g_{V_{\nu}}^{2}(1+\eta_{\nu}^{2}tg^{2}\frac{\theta}{2})+g_{A_{\nu}}^{2}(1-\eta_{\nu}^{2})-$
$-2\lambda_{c}sg_{V_{\nu}}g_{A_{\nu}}\sqrt{1-\eta_{\nu}^{2}}\\}F_{W}^{2}(q^{2})cos^{2}\frac{\theta}{2}.$
(7)
Here we have also used the notation
$\sigma_{o}^{\nu}=\frac{\alpha^{2}cos^{2}\frac{\theta}{2}}{4E_{\nu}^{2}(1-\eta^{2}_{\nu})sin^{4}\frac{\theta}{2}},\,\,\,\,\eta_{\nu}=\frac{m_{\nu}}{E_{\nu}},\,\,\,\,\rho=\frac{G_{F}q^{2}}{2\pi\sqrt{2}\alpha},$
$F_{E}(q^{2})=ZF_{c}(q^{2}),\,\,\,\,F_{EV}(q^{2})=ZZ_{W}F_{c}^{2}(q^{2}),\,\,\,\,F_{W}(q^{2})=Z_{W}F_{c}(q^{2}),$
$Z_{W}=\frac{1}{2}\\{\beta_{V}^{(0)}(Z+N)+\beta_{V}^{(1)}(Z-N)\\},\,\,\,\,A=Z+N,\,\,\,\,M_{T}=\frac{1}{2}(Z-N),$
where $\theta$ is the scattering angle, $E_{\nu}$ and $m_{\nu}$ are the
neutrino energy and mass, $F_{c}(q^{2})$ is the charge ($F_{c}(0)=1$) form
factor of a nucleus with isospin T and its projection $M_{T},\beta_{V}^{(0)}$
and $\beta_{V}^{(1)}$ are constants of isoscalar and isovector components of
vector neutral hadronic current.
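As a minimal numerical sketch, the weak charge $Z_{W}$ defined above is a simple linear combination of the proton and neutron numbers. The function below (variable names are ours) encodes the formula directly; the couplings $\beta_{V}^{(0)}$ and $\beta_{V}^{(1)}$ are left as inputs, since their values are not fixed here:

```python
def weak_charge(Z, N, beta0, beta1):
    """Nuclear weak charge Z_W = (1/2)[beta0*(Z + N) + beta1*(Z - N)],
    where beta0 and beta1 are the isoscalar and isovector constants of the
    vector neutral hadronic current (passed in, not assumed)."""
    return 0.5 * (beta0 * (Z + N) + beta1 * (Z - N))
```

For an isoscalar nucleus ($Z=N$) the isovector term drops out entirely, so $Z_{W}$ reduces to $\beta_{V}^{(0)}Z$; such targets therefore isolate the isoscalar coupling.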
The presence of the multiplier $s$ in Eqs. (5)-(7) implies their antisymmetry
under the exchange of the left-handed $(s=-1)$ particle for the right-handed
$(s=+1)$ one and vice versa. We see in addition that Eqs. (5)-(7) are
different for the neutrino $(\lambda_{c}=+1)$ and the antineutrino
$(\lambda_{c}=-1)$.
Taking into account Eqs. (5)-(7), the charge asymmetry
$A_{ch}^{ew}=A_{ch}^{em}+A_{ch}^{int}+A_{ch}^{we}=\frac{d\sigma^{\nu}_{ew}-d\sigma^{\overline{\nu}}_{ew}}{d\sigma^{\nu}_{ew}+d\sigma^{\overline{\nu}}_{ew}}$
(8)
is defined by the corresponding contributions
$A_{ch}^{em}(\theta)=2s\sqrt{1-\eta^{2}_{\nu}}F_{1\nu}G_{1\nu}\{(1+\eta^{2}_{\nu}tg^{2}\frac{\theta}{2})F_{1\nu}^{2}+(1-\eta^{2}_{\nu})G_{1\nu}^{2}+4E_{\nu}^{2}[s\sqrt{1-\eta_{\nu}^{2}}G_{2\nu}-(1-\eta^{2}_{\nu})F_{2\nu}]^{2}tg^{2}\frac{\theta}{2}\}^{-1},$ (9)
$A_{ch}^{int}(\theta)=s\sqrt{1-\eta_{\nu}^{2}}[G_{1\nu}-\frac{g_{A_{\nu}}}{g_{V_{\nu}}}F_{1\nu}]\{(1+\eta^{2}_{\nu}tg^{2}\frac{\theta}{2})F_{1\nu}-\frac{g_{A_{\nu}}}{g_{V_{\nu}}}(1-\eta^{2}_{\nu})G_{1\nu}\}^{-1},$ (10)
$A_{ch}^{we}(\theta)=-2s\frac{g_{A_{\nu}}}{g_{V_{\nu}}}\sqrt{1-\eta_{\nu}^{2}}\{(1+\eta_{\nu}^{2}tg^{2}\frac{\theta}{2})+\frac{g_{A_{\nu}}^{2}}{g_{V_{\nu}}^{2}}(1-\eta_{\nu}^{2})\}^{-1}.$ (11)
These formulas show clearly that C-invariance of the considered process can be violated only when mirror symmetry is absent. Indeed, taking $s=0,$ we find
$A_{ch}^{em}(\theta)=0,\,\,\,\,A_{ch}^{int}(\theta)=0,\,\,\,\,A_{ch}^{we}(\theta)=0,$ (12)
which hold when P-parity is conserved.
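As a quick numerical illustration of Eqs. (11) and (12), the following sketch (with purely illustrative values of $\eta_{\nu}$ and the coupling ratio $g_{A_{\nu}}/g_{V_{\nu}}$, not taken from experiment) confirms that the weak-current charge asymmetry is odd in $s$ and vanishes at $s=0$:

```python
import math

def a_ch_we(theta, s, eta, r):
    """Charge asymmetry A_ch^we(theta) of Eq. (11).

    theta -- scattering angle in radians
    s     -- helicity: -1 (left-handed), +1 (right-handed), 0 (P-symmetric case)
    eta   -- m_nu / E_nu
    r     -- ratio g_A(nu) / g_V(nu) of the neutral-current couplings
    """
    t2 = math.tan(theta / 2.0) ** 2
    num = -2.0 * s * r * math.sqrt(1.0 - eta ** 2)
    den = (1.0 + eta ** 2 * t2) + r ** 2 * (1.0 - eta ** 2)
    return num / den

theta, eta, r = math.radians(30.0), 0.1, 0.5   # illustrative values only

print(a_ch_we(theta, 0, eta, r))    # vanishes at s = 0, reproducing Eq. (12)
print(a_ch_we(theta, -1, eta, r))   # left-handed neutrino
print(a_ch_we(theta, +1, eta, r))   # equal magnitude, opposite sign
```

The sign flip between the last two calls is the antisymmetry under the left-handed/right-handed substitution noted after Eq. (7).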
Many authors state that the electromagnetic current (1) must be used in the form of [15], in which the factor $i$ is absent. If we started from that form, assuming that the magnetic and electric dipole interaction terms need not be Hermitian even for $q^{2}<0,$ we would obtain expressions for the process cross sections other than (5) and (6). These lead to the implication [16] that C-invariance of elastic scattering is violated essentially because of the nonzero neutrino rest mass. One can also conclude that this effect is unrelated to the behavior of P-symmetry.
Taking into account that nonconservation of P-parity in neutrino interactions is conveniently characterized by the right-left asymmetry, we have
$A_{RL}^{ew}=A_{RL}^{em}+A_{RL}^{int}+A_{RL}^{we}=\frac{d\sigma_{ew}^{R}-d\sigma_{ew}^{L}}{d\sigma_{ew}^{R}+d\sigma_{ew}^{L}},$ (13)
from Eqs. (5)-(7), we get
$A_{RL}^{em}(\theta)=2\sqrt{1-\eta_{\nu}^{2}}[\lambda_{c}F_{1\nu}G_{1\nu}-4E_{\nu}^{2}(1-\eta^{2}_{\nu})F_{2\nu}G_{2\nu}tg^{2}\frac{\theta}{2}]\{(1+\eta^{2}_{\nu}tg^{2}\frac{\theta}{2})F_{1\nu}^{2}+(1-\eta_{\nu}^{2})[G_{1\nu}^{2}ctg^{2}\frac{\theta}{2}+4E_{\nu}^{2}(G_{2\nu}^{2}+(1-\eta^{2}_{\nu})F_{2\nu}^{2})]tg^{2}\frac{\theta}{2}\}^{-1},$ (14)
$A_{RL}^{int}(\theta)=\lambda_{c}\sqrt{1-\eta_{\nu}^{2}}[G_{1\nu}-\frac{g_{A_{\nu}}}{g_{V_{\nu}}}F_{1\nu}]\{(1+\eta^{2}_{\nu}tg^{2}\frac{\theta}{2})F_{1\nu}-\frac{g_{A_{\nu}}}{g_{V_{\nu}}}(1-\eta^{2}_{\nu})G_{1\nu}\}^{-1},$ (15)
$A_{RL}^{we}(\theta)=-2\lambda_{c}\frac{g_{A_{\nu}}}{g_{V_{\nu}}}\sqrt{1-\eta_{\nu}^{2}}\{(1+\eta_{\nu}^{2}tg^{2}\frac{\theta}{2})+\frac{g_{A_{\nu}}^{2}}{g_{V_{\nu}}^{2}}(1-\eta_{\nu}^{2})\}^{-1}.$ (16)
The presence of the multiplier $\lambda_{c}$ in these formulas implies the influence of the C-antisymmetrical structure of the interaction on the conservation of P-symmetry. Indeed, averaging the cross sections of Eqs. (5)-(7) over the two values of $\lambda_{c}$ leads to the equalities
$A_{RL}^{em}(\theta)=-8E_{\nu}^{2}(1-\eta^{2}_{\nu})^{3/2}F_{2\nu}G_{2\nu}\{(1+\eta^{2}_{\nu}tg^{2}\frac{\theta}{2})F_{1\nu}^{2}+(1-\eta_{\nu}^{2})[G_{1\nu}^{2}+4E_{\nu}^{2}(G_{2\nu}^{2}+(1-\eta^{2}_{\nu})F_{2\nu}^{2})tg^{2}\frac{\theta}{2}]\}^{-1}tg^{2}\frac{\theta}{2},$ (17)
$A_{RL}^{int}(\theta)=0,\,\,\,\,A_{RL}^{we}(\theta)=0,$ (18)
which hold at C-invariance.
Thus, it follows that regardless of the behavior of charge symmetry, the
right-left asymmetry of the process (2) can be explained by the interference
of the interaction axial-vector terms with its vector terms, if neutrinos do
not possess any new properties.
3\. Interaction of Transversally Polarized Neutrinos with an Electroweak Field of a Nucleus
Starting from (3) and assuming that the neutrinos are strictly transversal, we find for the elastic scattering cross-section an explicit expression which, after summing over $s^{\prime}$, can be reduced to the form
$d\sigma_{ew}(\theta,\varphi,s)=d\sigma_{em}(\theta,\varphi,s)+d\sigma_{int}(\theta,\varphi,s)+d\sigma_{we}(\theta,\varphi,s).$ (19)
As in (4), each term here corresponds to a distinct process and has a different structure:
$\frac{d\sigma_{em}(\theta,\varphi,s)}{d\Omega}=\sigma^{\nu}_{o}(1-\eta^{2}_{\nu})^{-1}\{F_{1\nu}^{2}+\eta^{2}_{\nu}[F_{1\nu}^{2}+4m_{\nu}^{2}(1-\eta^{-2}_{\nu})^{2}F_{2\nu}^{2}]tg^{2}\frac{\theta}{2}+2\lambda_{c}s\eta_{\nu}\sqrt{1-\eta_{\nu}^{2}}F_{1\nu}G_{1\nu}tg\frac{\theta}{2}cos^{2}\varphi+(1-\eta^{2}_{\nu})[G_{1\nu}^{2}+4E_{\nu}^{2}G_{2\nu}^{2}tg^{2}\frac{\theta}{2}]\}F_{E}^{2}(q^{2}),$ (20)
$\frac{d\sigma_{int}(\theta,\varphi,s)}{d\Omega}=\rho\sigma^{\nu}_{o}(1-\eta^{2}_{\nu})^{-1}g_{V_{\nu}}\{F_{1\nu}+\eta_{\nu}^{2}[1+\lambda_{c}s\frac{g_{A_{\nu}}}{g_{V_{\nu}}}\eta_{\nu}^{-1}\sqrt{1-\eta^{2}_{\nu}}ctg\frac{\theta}{2}cos^{2}\varphi]F_{1\nu}tg^{2}\frac{\theta}{2}-\lambda_{c}s\eta_{\nu}\sqrt{1-\eta^{2}_{\nu}}[tg\frac{\theta}{2}cos^{2}\varphi+\lambda_{c}s\frac{g_{A_{\nu}}}{g_{V_{\nu}}}\eta_{\nu}^{-1}\sqrt{1-\eta^{2}_{\nu}}]G_{1\nu}\}F_{EV}(q^{2}),$ (21)
$\frac{d\sigma_{we}(\theta,\varphi,s)}{d\Omega}=\frac{E_{\nu}^{2}G_{F}^{2}}{8\pi^{2}}\{g_{V_{\nu}}^{2}(1+\eta_{\nu}^{2}tg^{2}\frac{\theta}{2})+g_{A_{\nu}}^{2}(1-\eta_{\nu}^{2})-2\lambda_{c}sg_{V_{\nu}}g_{A_{\nu}}\eta_{\nu}\sqrt{1-\eta_{\nu}^{2}}tg\frac{\theta}{2}cos^{2}\varphi\}F_{W}^{2}(q^{2})cos^{2}\frac{\theta}{2},$ (22)
where $\varphi$ is the azimuthal angle.
Using (20)-(22) and taking (8), for the C-odd asymmetry in the case of the
neutrino transversal polarization we get
$A_{ch}^{em}(\theta,\varphi)=2s\eta_{\nu}\sqrt{1-\eta_{\nu}^{2}}F_{1\nu}G_{1\nu}\{(1+\eta^{2}_{\nu}tg^{2}\frac{\theta}{2})F_{1\nu}^{2}+(1-\eta^{2}_{\nu})[G_{1\nu}^{2}+4E_{\nu}^{2}(G_{2\nu}^{2}+(1-\eta^{2}_{\nu})F_{2\nu}^{2})tg^{2}\frac{\theta}{2}]\}^{-1}tg\frac{\theta}{2}cos^{2}\varphi,$ (23)
$A_{ch}^{int}(\theta,\varphi)=-s\eta_{\nu}\sqrt{1-\eta_{\nu}^{2}}[G_{1\nu}-\frac{g_{A_{\nu}}}{g_{V_{\nu}}}F_{1\nu}]\{(1+\eta^{2}_{\nu}tg^{2}\frac{\theta}{2})F_{1\nu}-\frac{g_{A_{\nu}}}{g_{V_{\nu}}}(1-\eta^{2}_{\nu})G_{1\nu}\}^{-1}tg\frac{\theta}{2}cos^{2}\varphi,$ (24)
$A_{ch}^{we}(\theta,\varphi)=-2s\eta_{\nu}\frac{g_{A_{\nu}}}{g_{V_{\nu}}}\sqrt{1-\eta_{\nu}^{2}}\{(1+\eta_{\nu}^{2}tg^{2}\frac{\theta}{2})+\frac{g_{A_{\nu}}^{2}}{g_{V_{\nu}}^{2}}(1-\eta_{\nu}^{2})\}^{-1}tg\frac{\theta}{2}cos^{2}\varphi.$ (25)
At $s=0$ the expressions (23)-(25) coincide with the corresponding results in (12); consequently, the behavior of C-invariance in P-symmetrical interactions does not depend on the type of polarization.
In the same way one can see that the P-odd characteristics of elastic scattering, according to (13) and (20)-(22), have the form
$A_{RL}^{em}(\theta,\varphi)=2\lambda_{c}\eta_{\nu}\sqrt{1-\eta_{\nu}^{2}}F_{1\nu}G_{1\nu}\{(1+\eta^{2}_{\nu}tg^{2}\frac{\theta}{2})F_{1\nu}^{2}+(1-\eta_{\nu}^{2})[G_{1\nu}^{2}ctg^{2}\frac{\theta}{2}+4E_{\nu}^{2}(G_{2\nu}^{2}+(1-\eta^{2}_{\nu})F_{2\nu}^{2})]tg^{2}\frac{\theta}{2}\}^{-1}tg\frac{\theta}{2}cos^{2}\varphi,$ (26)
$A_{RL}^{int}(\theta,\varphi)=-\lambda_{c}\eta_{\nu}\sqrt{1-\eta_{\nu}^{2}}[G_{1\nu}-\frac{g_{A_{\nu}}}{g_{V_{\nu}}}F_{1\nu}]\{(1+\eta^{2}_{\nu}tg^{2}\frac{\theta}{2})F_{1\nu}-\frac{g_{A_{\nu}}}{g_{V_{\nu}}}(1-\eta^{2}_{\nu})G_{1\nu}\}^{-1}tg\frac{\theta}{2}cos^{2}\varphi,$ (27)
$A_{RL}^{we}(\theta,\varphi)=-2\lambda_{c}\eta_{\nu}\frac{g_{A_{\nu}}}{g_{V_{\nu}}}\sqrt{1-\eta_{\nu}^{2}}\{(1+\eta_{\nu}^{2}tg^{2}\frac{\theta}{2})+\frac{g_{A_{\nu}}^{2}}{g_{V_{\nu}}^{2}}(1-\eta_{\nu}^{2})\}^{-1}tg\frac{\theta}{2}cos^{2}\varphi.$ (28)
However, due to C-parity, it follows from (26)-(28) that
$A_{RL}^{em}(\theta,\varphi)=0,\,\,\,\,A_{RL}^{int}(\theta,\varphi)=0,\,\,\,\,A_{RL}^{we}(\theta,\varphi)=0.$ (29)
Comparing (17) and (18) with (29), it is easy to observe the differences, which may serve as an indication of how the right-left asymmetry of C-invariant processes depends on the type of polarization.
4\. Conclusion
We have established the explicit form of the differential cross sections describing the elastic electroweak scattering of longitudinally and transversally polarized neutrinos (antineutrinos) on spinless nuclei, arising from the rest mass, charge, magnetic, anapole and electric dipole moments of elementary particles and their weak neutral currents. With the use of these formulas, a proof has been obtained that, regardless of the nature of C, nonconservation of P can be explained by the interference of the vector and axial-vector parts of the interaction.
One of the new features of our results is the connection between the P-odd phenomena and the possible polarization types. Unlike the behavior of C-parity in P-symmetrical scattering, the right-left asymmetries $A_{RL}^{ew}(\theta)$ and $A_{RL}^{ew}(\theta,\varphi)$ of C-invariant processes with longitudinal and transversal neutrinos are different. Furthermore, if the neutrinos are of high energy $(E_{\nu}\gg m_{\nu})$, then $A_{RL}^{ew}(\theta,\varphi)=0,$ and $A_{RL}^{ew}(\theta)$ reduces to the form
$A_{RL}^{ew}(\theta)=-\frac{2F_{2\nu}G_{2\nu}}{F_{2\nu}^{2}+G_{2\nu}^{2}}.$ (30)
It is expected that measurement of the right-left asymmetry $A_{RL}^{ew}(\theta)$ at any two large values of the energy will testify in favor of the equality of the neutrino magnetic and electric dipole moments.
References
1. P.H. Frampton and P. Vogel, Phys. Rep. 82, 339 (1982); F. Boehm and P. Vogel, Ann. Rev. Nucl. Part. Sci. 34, 125 (1984); P. Vogel and J. Engel, Phys. Rev. D39, 3378 (1989).
2. B.K. Kerimov, T.R. Aruri and M.Ya. Safin, Izv. Acad. Nauk SSSR Ser. Fiz. 37, 1768 (1973); B.K. Kerimov and M.Ya. Safin, Izv. Russ. Acad. Nauk Ser. Fiz. 61, 657 (1997).
3. M.A.B. Beg, W.J. Marciano and M. Ruderman, Phys. Rev. D17, 1395 (1978).
4. W. Bernreuther and M. Suzuki, Rev. Mod. Phys. 63, 313 (1991).
5. R.S. Sharafiddinov, Dokl. Akad. Nauk Ruz. Ser. Math. Tehn. Estest. 7, 25 (1998).
6. K. Fujikawa and R.E. Shrock, Phys. Rev. Lett. 45, 963 (1980).
7. R.B. Begzhanov and R.S. Sharafiddinov, in Proc. Int. Conf. on Nuclear Physics, Moscow, June 16-19, 1998, St. Petersburg, 1998, p. 354.
8. S. Davidson, B. Campbell and K.D. Bailey, Phys. Rev. D43, 2314 (1991); J.A. Morgan and D.B. Farrant, Phys. Lett. B128, 431 (1983).
9. Ya.B. Zel’dovich, Zh. Eksp. Teor. Fiz. 33, 1531 (1957); Ya.B. Zel’dovich and A.M. Perelomov, ibid. 39, 1115 (1960).
10. M.J. Musolf and B.R. Holstein, Phys. Rev. D43, 1956 (1991).
11. L.D. Landau, Zh. Eksp. Teor. Fiz. 32, 405 (1957); Nucl. Phys. 3, 127 (1957).
12. B.K. Kerimov and M.Ya. Safin, Izv. Russ. Acad. Nauk Ser. Fiz. 57, 93 (1993).
13. S.L. Glashow, Nucl. Phys. 22, 579 (1961); A. Salam and J.C. Ward, Phys. Lett. 13, 168 (1964); S. Weinberg, Phys. Rev. Lett. 19, 1264 (1967).
14. T.W. Donnelly and R.D. Peccei, Phys. Rep. 50, 3 (1979).
15. J. Bernstein, M. Ruderman and G. Feinberg, Phys. Rev. 132, 1227 (1963).
16. R.B. Begzhanov and R.S. Sharafiddinov, in Proc. Int. Conf. on Nuclear Physics, Dubna, April 21-24, 1999, St. Petersburg, 1999, p. 408.
# The Extraordinary Outburst in the Massive Protostellar System NGC 6334
I-MM1: Spatio-kinematics of Water Masers during a Contemporaneous Flare Event
James O. Chibueze Centre for Space Research, Potchefstroom campus, North-West
University, Potchefstroom 2531, South Africa Department of Physics and
Astronomy, Faculty of Physical Sciences, University of Nigeria,
Carver Building, 1 University Road, Nsukka 410001, Nigeria Gordon C. MacLeod
Hartebeesthoek Radio Astronomy Observatory, PO Box 443, Krugersdorp 1741,
South Africa. The University of Western Ontario, 1151 Richmond Street,
London, ON N6A 3K7, Canada. Jakobus M. Vorster Centre for Space Research,
Potchefstroom campus, North-West University, Potchefstroom 2531, South Africa
Tomoya Hirota National Astronomical Observatory of Japan, National Institutes
of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan. Department
of Astronomical Sciences, SOKENDAI (The Graduate University for Advanced
Studies), Osawa 2-21-1, Mitaka-shi, Tokyo 181-8588, Japan Crystal L. Brogan
NRAO, 520 Edgemont Rd, Charlottesville, VA, 22903, USA. Todd R. Hunter NRAO,
520 Edgemont Rd, Charlottesville, VA, 22903, USA. Ruby van Rooyen South
African Radio Astronomy Observatory, The Park, Park Road, Pinelands, 2 Fir
Street, Black River Park, Observatory, 7925, South Africa
(Received October 30, 2020; Revised November 10, 2020; Accepted 17 December
2020)
###### Abstract
Following an eruptive accretion event in NGC 6334I-MM1, flares in the various
maser species, including water masers, were triggered. We report the observed
relative proper motion of the highly variable water masers associated with the
massive star-forming region, NGC 6334I. High velocity H2O maser proper motions
were detected in 5 maser clusters, CM2-W2 (bow-shock structure), MM1-W1,
MM1-W3, UCHII-W1 and UCHII-W3. The overall average of the derived relative
proper motion is 85 km s-1. This mean proper motion is in agreement with the
previous results from VLA multi-epoch observations. Our position and velocity
variance and co-variance matrix analyses of the maser proper motions show its
major axis to have a position angle of $-79.4^{\circ}$, cutting through the
dust cavity around MM1B and aligned in the northwest-southeast direction. We
interpret this as the axis of the jet driving the CM2 shock and the maser
motion. The complicated proper motions in MM1-W1 can be explained by the
combined influence of the MM1 northeast-southwest bipolar outflow, CS(6-5)
north-south collimated bipolar outflow, and the radio jet. The relative proper
motions of the H2O masers in UCHII-W1 are likely not driven by the jets of
MM1B protostar but by MM3-UCHII. Overall, the post-accretion burst relative
proper motions of the H2O masers trace shocks of jet motion.
ISM: kinematics and dynamics — ISM: molecules — ISM: individual (NGC 6334 I) —
ISM: outflows — stars: massive — stars: formation
journal: ApJ
## 1 Introduction
Accretion in young massive protostars is a complex phenomenon. Irregular and
fragmented accretion disks lead to episodic accretion with long periods of
relatively slow accretion rate, and short periods of high mass gain (Meyer et
al., 2017). High accretion rates (also called accretion bursts) have been
directly observed in massive protostars of S255IR NIRS 3 (Caratti o Garatti et
al., 2017), NGC6334I-MM1 (Hunter et al., 2017) and G358.93-0.03-MM1 (Burns et
al., 2020). The high accretion rate heats up the protostellar disk, which in
turn can dramatically increase thermal radiation by the surrounding dust.
The consequences of accretion events include enhancement of existing
spectral line emission (Hunter et al., 2018; Brogan et al., 2018; Burns et
al., 2020) and the excitation of new maser lines (Brogan et al., 2019; MacLeod
et al., 2019; Chen et al., 2020a; Volvach et al., 2020; Chen et al., 2020b).
Accretion bursts can also significantly alter the chemical makeup of the
protostellar disk for a short time, as observed in low mass protostars (Visser
et al., 2015).
In general, astrophysical masers provide clues into the physical conditions
and the kinematics of protostellar systems. Maser flares have been found to accompany accretion bursts in many observations: in 6.7 GHz CH3OH masers near S255IR (Moscadelli et al., 2017; Szymczak et al., 2018) and in many maser species from NGC6334I (MacLeod et al., 2018). Long-term single-dish monitoring observations of variable masers provide an excellent mechanism to identify the onset of an accretion burst. An early identification of an accretion burst via maser monitoring was achieved in G358.93-0.03 by Sugiyama et al. (2019). This exciting result led to the
detection of a wide range of methanol maser transitions including several not
even predicted to exist (MacLeod et al., 2019; Brogan et al., 2019; Breen et
al., 2019). Monitoring of 19.967 GHz CH3OH masers towards G358.93-0.03 is presented in Volvach et al. (2020), and other transitions are presented in MacLeod et al. (in preparation) and Yonekura et al. (in preparation).
High resolution multi-epoch Very Long Baseline Interferometry (VLBI) studies
of maser proper motion measurements provide useful insights into the gas
spatio-kinematics of protostellar disks, outflows and shocks (Moscadelli et
al., 2011; Chibueze et al., 2012; Torrelles et al., 2014).
NGC 6334 I, located at a parallax distance of $1.30\pm 0.09$ kpc (Chibueze et
al., 2014; Reid et al., 2014; Wu et al., 2014), is a massive star forming
region containing a massive protostar that has recently undergone an accretion
burst. Millimeter and sub-millimeter observations using the Submillimeter
Array (SMA) first identified four compact sources in NGC 6334I (MM1-MM4). MM1
and MM2 were the brightest dust sources while MM3 coincided with the ultra-
compact HII region NGC 6334 F (Hunter et al., 2006). Five more continuum
sources (MM5-MM9) were later identified using the Atacama Large
Millimeter/submillimeter Array (ALMA) and MM1 was also resolved into six
continuum components (A-F) at 1.3 mm (Brogan et al., 2016). Properties of the
individual components were modelled, with several having high dust and
brightness temperatures $T_{dust}>300\,K,T_{brightness}>200\,K$. Continuum
emission associated with MM1, MM3-UCHII and CM2 (located north of MM1) was
detected in the 5 cm observations of NGC 6334I made using the Karl G. Jansky
Very Large Array (VLA) (Brogan et al., 2016). Comparison of the ALMA images
with earlier SMA images revealed that the luminosity of MM1 increased by a
factor of $l_{inc}=70\pm 20$ (Hunter et al., 2017) between 2008 and 2015, with
the centroid of the increase aligned with protostar MM1B. In a parallel
discovery, MM1F, MM1G, MM1C and MM3-UCHII underwent their first observed
activation of 6.7 GHz masers (Hunter et al., 2018), contemporaneous with
flaring of nine other maser transitions beginning in January 2015 (MacLeod et
al., 2018). Followup 22 GHz H2O maser emission measurements using VLA
identified flaring of water masers in a bow shock shape in CM2 (Brogan et al.,
2018). In contrast, the H2O masers previously seen surrounding MM1B were also
damped significantly, likely due to increased dust temperatures.
Single-dish observations in various CO and CS transitions have consistently
shown a northeast-southwest (NE-SW) outflow at large scales ($\sim$0.5 pc)
whose origin is centered on MM1 or MM2 (Bachiller & Cernicharo, 1990;
McCutcheon et al., 2000; Leurini et al., 2006; Qiu et al., 2011). Later
interferometric imaging with ALMA of CS(6-5) resolved the central part of the
outflow, confirming MM1 as the primary origin, and revealing a north-south
(N-S) outflow centered on MM1B and a blue-shifted northwest (NW) lobe (Brogan
et al., 2018). Subsequent imaging of CS(18-17) and HDO in ALMA Band 10
(McGuire et al., 2018) demonstrated excellent spatial alignment between the
warm thermal gas tracing the compact outflow and the 22 GHz H2O masers
embedded in it.
In this paper, we present multi-epoch VLBI measurements of the 22 GHz H2O
maser emission using a combination of Korean VLBI Network (KVN) and VLBI
Exploration of Radio Astronomy (VERA) during the first year of the accretion
burst event. We derive the relative proper motion measurements for these H2O
masers in order to probe the kinematics of the gas surrounding MM1 and
MM3-UCHII in this region of active star formation.
## 2 Observations and data reduction
### 2.1 Single-dish monitoring observations
The ongoing 22.2 GHz (1.3 cm) water maser observations reported here were made
using the 26m telescope of Hartebeesthoek Radio Astronomy Observatory
(HartRAO). The single-dish results reported in this paper covers observations
taken between 7 May 2013 and 2 December 2020. The coordinates that the
telescope pointed to were ($\alpha,\delta$)=(17h20m53s.4,
$-$35o47′01$\farcs$5). The beam width for this receiver is 2.2$\farcm$
Pointing observations were made for each epoch. These observations were also
corrected for atmospheric absorption. Because of the large velocity extent
position switching was employed. The rest frequency of the receiver was set to
22.235120 GHz. The receiver system consisted of left (LCP) and right
circularly polarised (RCP) feeds. Dual polarization spectra were obtained
using a 1024-channel (per polarisation) spectrometer. The receiver is
cryogenically cooled. Each polarisation is calibrated independently relative
to Hydra A, 3C123, and Jupiter, assuming the flux scale of Ott et al. (1994).
The band width used was 8 MHz providing a velocity resolution of 0.105 km s-1
and a total velocity extent of 107.9 km s-1. Typical sensitivities achieved
per observation were 2.3 to 2.9 Jy. Typically, observations were made every 10
to 15 days. However, the cadence of observations varied depending on the
availability of the telescope, and the weather conditions. At times
observations were done daily, but there are also observations separated by
weeks.
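The quoted spectral figures follow directly from the band configuration; as a short check of the arithmetic (a sketch only, using the stated 8 MHz band, 1024 channels and the 22.235120 GHz rest frequency):

```python
C_KMS = 299_792.458            # speed of light, km/s

bandwidth_hz = 8.0e6           # 8 MHz band
n_channels = 1024              # channels per polarisation
rest_freq_hz = 22.235120e9     # H2O maser rest frequency

chan_width_hz = bandwidth_hz / n_channels        # 7812.5 Hz per channel
dv = C_KMS * chan_width_hz / rest_freq_hz        # velocity resolution
extent = C_KMS * bandwidth_hz / rest_freq_hz     # total velocity extent

print(f"{dv:.3f} km/s per channel")   # 0.105 km/s, as quoted
print(f"{extent:.1f} km/s total")     # 107.9 km/s, as quoted
```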
### 2.2 KaVA observations
H2O masers in NGC 6334I were observed with the KVN and VERA Array (KaVA) in 3 epochs, taken on 21 November 2015 (2015.89), 15 December 2015 (2015.95) and 4 January 2016 (2016.01), respectively.
center of NGC 6334I was ($\alpha,\delta$)=(17h20m53s.377, -35o46′55″.808) in the J2000.0 epoch. NRAO530 was used as the bandpass calibrator.
The total bandwidth was 256 MHz (16 MHz $\times$ 16 IFs) and data were
recorded for the left-hand circular polarization at a 1 Gbps sampling rate. We
analyzed only the one 16 MHz IF channel that contained the H2O 616–523
transition. The spectral resolution is 15.625 kHz ($\sim$0.21 km s-1) for the
H2O maser line. The correlation process was carried out at the Korean-Japan
Correlation Center, Daejeon, Korea (KJCC: Lee et al. 2015).
The data calibration was carried out using the Astronomical Image Processing
System (AIPS) developed by National Radio Astronomy Observatory (NRAO) (van
Moorsel et al. 1996). First, the amplitude was calibrated by using AIPS task
APCAL using system temperature and measured antenna gains. Next, delays and
phase offsets were removed by running AIPS task FRING using NRAO530. Bandpass
response was also calibrated using NRAO530. The 3.9 km s-1 velocity component
of the masers was used as a reference maser component in NGC 6334I. Imaging
and CLEAN (deconvolution) were performed using the AIPS task IMAGR. The SAD
task was employed for the Gaussian fitting for extraction of the peak
intensities and offset positions of the maser spots. A maser ‘spot’ refers to
an individual maser emission peak in a spectral channel while a maser
‘feature’ denotes a group of maser spots considered to exist within the same
maser cloudlet and located physically close to each other. The synthesized
beams for the first, second, and third epochs were 2.48 mas $\times$ 0.97 mas
(position angle, PA=-0.38∘), 2.66 mas $\times$ 1.01 mas (PA=2.82∘) and 2.76
mas $\times$ 1.06 mas (PA=10.64∘), respectively.
Maser features, defined as clusters of maser spots having a position defined by that of the brightest peak, were carefully identified in each
epoch. Maser distributions in MM1-W1 and UCHII-W1 varied significantly from
epoch to epoch, and this complexity may have affected, to a small degree, the
derived proper motions. Our single-dish results support the complex structures
of these masers (see Section 3.1). The proper motions $\mu_{x}$ in R.A., and
$\mu_{y}$ in Dec. were calculated using the displacement
($\Delta\alpha$cos$\delta,\Delta\delta$) of the maser feature over adjacent
epochs. For features detected in all three epochs, the average of the proper
motion between epochs 1 & 2, and epochs 2 & 3 were taken.
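The displacement-averaging step described above can be sketched as follows. The `proper_motion` helper and the feature positions below are hypothetical illustrations, not the actual reduction pipeline; the conversion to a transverse velocity uses the 1.30 kpc distance via $v\,[\mathrm{km\,s^{-1}}]=4.74\,\mu[\mathrm{arcsec\,yr^{-1}}]\,d[\mathrm{pc}]$:

```python
D_PC = 1300.0                      # 1.30 kpc parallax distance, in pc
KMS_PER_MASYR = 4.74e-3 * D_PC     # mas/yr -> km/s at this distance

def proper_motion(positions, epochs):
    """Average proper motion (mas/yr) over adjacent epoch pairs.

    positions -- list of (RA*cos(Dec), Dec) offsets in mas
    epochs    -- matching decimal years
    """
    rates = []
    for (x0, y0), (x1, y1), t0, t1 in zip(positions, positions[1:],
                                          epochs, epochs[1:]):
        dt = t1 - t0
        rates.append(((x1 - x0) / dt, (y1 - y0) / dt))
    mu_x = sum(r[0] for r in rates) / len(rates)
    mu_y = sum(r[1] for r in rates) / len(rates)
    return mu_x, mu_y

# A hypothetical feature detected in all three KaVA epochs:
pos = [(0.0, 0.0), (0.8, 1.2), (1.7, 2.5)]       # mas offsets
epochs = [2015.89, 2015.95, 2016.01]
mu_x, mu_y = proper_motion(pos, epochs)
v_t = (mu_x ** 2 + mu_y ** 2) ** 0.5 * KMS_PER_MASYR
print(f"mu = ({mu_x:.1f}, {mu_y:.1f}) mas/yr, v_t = {v_t:.0f} km/s")
```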
Fringe-rate mapping was used to derive the absolute position of the reference
maser spot and then compared to the closest epoch of the Very Large Array
(VLA) H2O maser map to obtain the absolute positions of the maser
spots/features. The positional accuracy of the masers are within 1.0 mas in
R.A. and 3.5 mas in Declination. To register the relative positions of the
maser features in the 3 epochs, we used the position of a bright maser spot in
UCHII-W1 region as a reference. The derived relative proper motions are
marginally affected by the intrinsic motion of the reference maser spot. The
overall uncertainty in our derived relative proper motions due to the motion
of the reference maser spot is $<$ 10%. This is obtained from the group motion
of all maser features around the reference maser spot. It should be noted that
all proper motions reported in this work are relative proper motions.
## 3 Results
### 3.1 Structures in the H2O maser dynamic spectra
The dynamic spectra of the long-term monitoring of H2O masers are shown in
Fig.1 (A and B). The image provides an interesting metric demonstrating the
longevity of emission in a given velocity extent. We note that a subset of
this data was presented in MacLeod et al. (2018). The water emission in
$-$14$\leq$ V${}_{LSR}\leq-$4 km s-1 suffers significant line blending making
it impossible to disentangle maser features and structure in this single-dish
data. In panels (a) to (d) more independent masers are visible. In these, and
during the MJD extent between the first and last epoch of VLBI observations,
little velocity drift appears present in most. Possible velocity drifts may be
present in this MJD extent for emission in $-$45$\leq$ V${}_{LSR}\leq-$35 km
s-1. This may be the result of multiple masers varying independently. Still
the continuity of maser emission during this MJD extent lends comfort to the
study of proper motion below.
Between the onset of the 6.7 GHz CH3OH maser burst and the first maximum of the
burst (white and black lines in Fig.1 A, respectively), H2O masers in the
region are mostly destroyed or heavily suppressed. However, Fig. 1 B shows
that most of the maser features, though varying in flux density, survived
through the epochs of our VLBI observations.
Figure 2 shows the single-dish (HartRAO 26 m) spectra of the highly variable
H2O masers in NGC 6334I taken nearest the respective VLBI observations.
Significant variations can be seen between 18 November, 2015 and 01 January,
2016. The most prominent feature of the spectra, at $-$7 km s-1, and a second feature, at $-$15 km s-1, are brightening, while the rest of the maser features are weakening.
Figure 1: (A) Dynamic spectra of the water masers associated with NGC 6334I for the velocity extents (a) $-$4 to $+$15 km/s, (b) $-$8.5 to $-$4 km/s, (c) $-$13.5 to $-$9.5 km/s, and (d) $-$60 to $-$14 km/s. The white solid line indicates 01 January 2015 (MJD 57023.5), which marks the onset of the 6.7 GHz CH3OH maser burst, and the black solid line indicates 15 August 2015 (MJD 57249.5), which marks the first maximum of the bursting masers presented in MacLeod et al. (2018). The dashed red lines mark the dates of each epoch of VLBI observations reported here. (B) Zoom-in image of A showing a close-up view of the dynamic spectra around the dates of our VLBI observations. The zoom-in shows that the maser features varied in their intensities but were present in all 3 VLBI epochs. Figure 2: Single-dish H2O maser spectra of NGC 6334I taken with the HartRAO 26 m closest (within $\pm$ 4 days) to each of the 3 epochs of our KaVA observations. (a) shows the full spectra and (b) shows a zoom-in on the weaker maser features.
### 3.2 Proper motions of the H2O masers
We obtained the absolute position of the reference maser spot (used for the
self calibration) in the first epoch. This position, with full consideration
of the proper motion of the reference maser spot, was used for the
registration of the maps in the 3 epochs. We traced 186 maser proper motions,
divided into groups according to the nomenclature used by Brogan et al.
(2018).
Figures 3, 4, 5, and 6 show the traced H2O maser proper motions overlaid on
the ALMA 1.3 mm dust continuum from a comparable epoch (2016.6, grey scale
from Hunter et al. (2017)) and the VLA 5 cm image (white contours from Hunter
et al. (2018)). The colored vectors (arrows) represent the H2O proper motions
traced in the region. The length of each arrow indicates the magnitude of the
proper motion and the direction of the arrow indicates the proper motion
direction. Proper motions are measured with respect to the reference maser spot in UCHII-W1 located at $(\alpha,\delta)$ = (17h20m52s.600, -35o46′50″.508). The grey circles are water maser detections from Brogan et al. (2018) for comparison. Figures 4, 5 and 6 show zoom-ins of the proper motions of different maser groups, color-coded by the $V_{\mbox{\scriptsize LSR}}$ of the masers (as in Figure 3).
We detected water maser proper motions in the regions CM2-W2, MM1-W1, MM1-W3,
UCHII-W1 and UCHII-W3. The positions, proper motions, $V_{\mbox{\scriptsize
LSR}}$ and epochs of detection are shown in Table 1. A ‘+’ indicates a
detection in a specific epoch of a specific maser feature, while ‘-’ signifies
a non-detection. The majority of all the proper motions (56.5%) were traced
using all 3 epochs, while the remaining 43.5% were traced in 2 epochs. The
overall mean of the 3D velocities of the masers is 85 km s-1. For the rest of
this section, average velocity refers to the magnitude of the average
3-dimensional velocity. In the following paragraphs, we describe the
properties of each of the groups.
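For reference, the 3-dimensional velocity magnitude used here combines the two transverse components derived from the proper motion with a line-of-sight component; a minimal sketch, under the assumption (ours, for illustration) that the radial component is the maser $V_{\mbox{\scriptsize LSR}}$ relative to the systemic velocity:

```python
def speed_3d(v_x, v_y, v_rad):
    """Magnitude of the 3-D velocity from two transverse components
    (from the proper motion) and one radial component (from V_LSR)."""
    return (v_x ** 2 + v_y ** 2 + v_rad ** 2) ** 0.5

# e.g. a feature moving 60 km/s north on the sky with a 40 km/s blueshift:
print(speed_3d(0.0, 60.0, -40.0))   # ~72 km/s
```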
The northernmost region, CM2-W2, is $\sim$ 2750 AU from MM1B with 87 proper
motions. Figure 4 shows the spatial distribution and proper motions of the
H2O masers in the region. There were 39 maser features detected in all three
epochs. The proper motions have a spatial distribution comparable to a bow-
shock shaped structure. Most of the proper motions point north, with an
average velocity of 112 km s-1. The region also shows a drastic
$V_{\mbox{\scriptsize LSR}}$ gradient throughout the structure with $-46.98<$
$V_{\mbox{\scriptsize LSR}}$ $<$ 0.63 km s-1. The proper motions detected
spanned a linear size of $\sim$219 AU from east to west.
MM1-W1 is found just below MM1B ($\sim$ 510 AU) and is a more complicated
region, with proper motions pointing in various directions. The maser spots show a linear structure with a length of $\sim$ 18 AU. Figure 5 shows high
resolution images of the spatial distribution and proper motions of water
masers in MM1-W1 and MM1-W3. We detected 25 proper motions, 14 traced in three
epochs. The average velocity of the region is 43 km s-1 and $-3.8<$ $V_{\mbox{\scriptsize LSR}}$ $<-0.21$ km s-1. The region shows great variation
in proper motion direction and magnitude over a relatively small region
although there is not a large $V_{\mbox{\scriptsize LSR}}$ gradient. The
region contains a number of high velocity proper motions pointing northward
with an average velocity of 54 km s-1. The complexity of the observed proper
motions can be attributed to the combined influence of the MM1 northeast-southwest and CS(6-5) north-south bipolar outflows, and the radio jet. The
relative error in proper motion of this region is only $\sim$20% for most of
the constituent proper motions, indicating that the proper motions do reflect
multiple influences on the motion of the masing cloudlets in the region.
MM1-W3 is the maser group just north of MM1B ($\sim$ 510 AU). We detected 13
proper motions with an average velocity of 106 km s-1. The region has two
distinct associations. The north-eastern association consists of 8 proper
motions, 5 are traced in all three epochs. The proper motions point north-west
with an average velocity of 126 km s-1 and a radial velocity $V_{\mbox{\scriptsize LSR}}$ $\approx-62$ km s-1. The second association points northward, with 4 of
the 5 proper motions only being traced in two epochs. The average velocity of
the association is 72 km s-1 and $V_{\mbox{\scriptsize LSR}}$ $\approx$ 14 km
s-1. The linear separation between the two associations is $\sim$ 55 AU.
UCHII-W1 is about 4300 AU south of MM1B. We detected 48 proper motions, with
37 traced in three epochs. The region has an average velocity of 64 km s-1.
There is a small radial velocity gradient with $-16.0<$ $V_{\mbox{\scriptsize
LSR}}$ $<-8.2$ km s-1. It should be noted that the maser spot distribution in
this region was very complicated and the tracing of proper motions was
difficult. Figure 6 shows the spatial distribution and proper motions of
masers in the MM3-UCHII region and a high resolution image of proper motions
in UCHII-W1. Our results show a bulk motion to the north.
UCHII-W3 is well south of MM1 ($\sim$ 2600 AU), corresponding to the edge of a
jet traced by a CS(6-5) map from Brogan et al. (2018). We detected 4 proper
motions with an average velocity of 89 km s-1 pointing to the south-east. Two
maser associations are resolved $\sim$ 43 AU apart, the eastern association
has an average velocity of 96 km s-1 and $V_{\mbox{\scriptsize LSR}}$
$\approx-48$ km s-1. The western region has an average velocity of 81 km s-1
and $V_{\mbox{\scriptsize LSR}}$ $\approx-36$ km s-1.
Figure 3: H2O maser proper motions derived from our KaVA observations overlaid
on 2016.6 ALMA 1.3 mm continuum (brown scale) (Hunter et al., 2017). Grey
contours are 2016.9 VLA 5 cm continuum observations with levels 0.022 $\times$
[4,9,260,600] mJy beam-1 (Hunter et al., 2018). H2O maser regions are named
according to the corresponding maser groups of Brogan et al. (2018) from north
to south (black labels). The blue dashed line shows the axes of the MM1B NW
jet. The red dotted line shows the NE-SW wide angle outflow from MM1 and the
black dashed line shows the outflow traced in CS(6-5). The black circles trace
water masers measured by VLA in the 2017.8 epoch of Brogan et al. (2018). The
grey dotted line shows the main velocity axes derived from the VVCM analysis
(see Section 4.1). The linear scale and the transverse velocity scale are shown
in the top-left corner. The radial velocity of the proper motions is indicated
by the color scale. The synthesized beams are shown in the top left corner,
where the white and grey ellipses are VLA and ALMA’s beams respectively. The
offsets (visible in zoom-ins) in the positions of the VLA 2017.8 maser
features (black circles) could be due to error in the absolute position of our
KaVA reference maser spot and/or the relative position uncertainty (19 mas in
R.A and 66 mas in Declination) of the VLA observations. Figure 4: Zoom-in of
H2O maser proper motions associated with the CM2-W2 region (See Figure 3). The
$V_{\mbox{\scriptsize LSR}}$ scale, contour lines, black circles and grey
dotted line are the same as in Figure 3. Linear distance and velocity scale is
shown in the top left. The seemingly large offset in the positions of the VLA
2017.8 maser features (black circles) could be due to error in the absolute
position of our KaVA reference maser spot and/or the relative position
uncertainty (19 mas in R.A and 66 mas in Declination) of the VLA observations.
Figure 5: Zoomed-in image of the MM1 region, with high resolution images of
the proper motions of the MM1-W1 region (right) and MM1-W3 (left). The
$V_{\mbox{\scriptsize LSR}}$ scale is shown by the color bar of the center
image. Contour lines, black circles and grey dotted line are the same as in
Figure 3 for both images. Continuum was removed from the zoomed images for
clarity. Linear and velocity scales are shown in the top left corner of each
image. Figure 6: Left: Zoomed-in image of the UCHII region. Right: High
resolution image of the proper motions of the UCHII-W1 region. The
$V_{\mbox{\scriptsize LSR}}$ scale for both images is shown on the colorbar of
the left image. Contour lines, black circles and grey dotted line are the same
as in Figure 3 for both images. Linear distance and velocity scale is shown in
the top left of both images.
## 4 Discussion
### 4.1 VVCM analysis
In order to characterize the proper motions of the outflow, we used the
position variance-covariance matrix and velocity variance-covariance matrix
(PVCM and VVCM) as described by Bloemhof (1993, 2000) and Chibueze et al.
(2012). These matrices provide a robust and objective means of extracting the
position and kinematic essentials from maser proper motions. The PVCM and
VVCM, $\sigma$, are constructed using:
$\centering\sigma_{i,j}=\frac{1}{N-1}\sum^{N}_{n=1}(v_{i,n}-\bar{v_{i}})(v_{j,n}-\bar{v_{j}})\@add@centering$
(1)
with $i,j$ iterating over the coordinates: ($\alpha,\delta$) for the position
variance-covariance matrix and ($v_{\alpha},v_{\delta}$,$V_{\mbox{\scriptsize
LSR}}$) for the velocity variance-covariance matrix, and $n$ indexing the $N$
maser spots/proper motions ($N$=186). The bar indicates the average over all
proper motions. The diagonal entries of the matrix $\sigma$ are the variances
of the individual variables, while the off-diagonal entries are the covariances
of pairs of variables.
The PVCM is a 2 $\times$ 2 matrix. Using all the regions except UCHII-W1, the
PVCM (in units of $10^{-6}$ arcsec2) and its diagonalization were obtained as:
$\begin{pmatrix}0.034&-0.163\\\
-0.163&0.867\end{pmatrix}\Rightarrow\begin{pmatrix}0.003&0\\\
0&0.897\end{pmatrix}$ (2)
The corresponding 3 $\times$ 3 VVCM matrix and its diagonalization (in units
of km2 s-2) is given by:
$\begin{pmatrix}454.48&-476.45&132.01\\\ -476.45&3431.65&-108.46\\\
132.01&-108.46&279.21\end{pmatrix}\Rightarrow\begin{pmatrix}3511.08&0&0\\\
0&452.48&0\\\ 0&0&201.79\end{pmatrix}$ (3)
Table 2 shows the results of the PVCM and VVCM analyses. In Table 2,
$\psi_{max}$ indicates the largest eigenvalue of the PVCM/VVCM matrix,
$\psi_{min}$ the smallest eigenvalue and $\psi_{mid}$ the middle-valued
eigenvalue for the VVCM matrix. The large difference in the magnitudes of the
eigenvalues of both position and velocity variance matrices demonstrates the
presence of a distinct spatial and kinematic axis in the data. The major axis
is defined by the eigenvector corresponding to the largest eigenvalue. The
position angle is calculated by projecting the major axis onto the celestial
sphere. The axis from the VVCM is plotted on Figure 3 (and its zoom-ins) with
a P.A. of -79.4∘ and passing through the position of MM1B from Brogan et al.
(2016). The error in the position angle was calculated using Monte-Carlo
sampling of the velocity-vector uncertainties. UCHII-W1 was not included as the
direction of its motion does not seem to be influenced by the jet from MM1B. It
should also be noted that including UCHII-W1 in the calculation makes only a
marginal difference in the results ($\Delta$PA${}_{\text{max}}\sim$
$-2.25^{\circ}$, $\Delta\phi_{\text{max}}\sim$ $5.88^{\circ}$). The derived
axis aligns well with a bipolar outflow terminating at CM2-W2 and UCHII-W3.
Assuming the bow shock in CM2 is symmetric, the inferred inclination angle of
this outflow from matrix 2 is $\phi_{\text{max}}=-6.0^{\circ}\pm 0.6^{\circ}$.
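The inclination can be recovered from the principal eigenvector alone. As a sketch of one plausible computation (our assumption of the method; the text does not spell it out), the tilt of the major velocity axis out of the plane of the sky is the arctangent of its radial component over its sky-plane component:

```python
import math

def inclination_deg(e_valpha, e_vdelta, e_vlsr):
    """Angle (degrees) of the principal velocity eigenvector out of the
    plane of the sky; positive means tilted toward positive V_LSR."""
    sky = math.hypot(e_valpha, e_vdelta)   # plane-of-sky magnitude
    return math.degrees(math.atan2(e_vlsr, sky))
```

As a check, an eigenvector with sky-plane component $\cos 6^{\circ}$ and radial component $-\sin 6^{\circ}$ returns $-6.0^{\circ}$.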
### 4.2 Jet, cavity and shock structures in MM1
High proper motions of H2O masers near the path of the radio jet of Cepheus
A-HW2 are attributed to the influence of the fast moving jet (Torrelles et al.,
2011). Typical proper motions of low velocity outflows and expanding
ring/bubble structures are $\sim$10 km s-1 (Torrelles et al., 2012, 2012,
2014). With a mean H2O maser proper motion of 86 km s-1, the masing
cloudlets in NGC 6334I are driven by the jet in MM1.
Our VVCM analysis indicates a northwest-southeast axis of the jet driving the
maser proper motions (at least of CM2-W2, MM1-W3, MM1-W1, and UCHII-W3), as
shown with dotted grey lines in Figure 3. Interestingly, this axis cuts through
two dips in the ALMA dust continuum, one in the northwest and the other in the
southeast. We interpret these dips as cavities ploughed by the jet, which
agrees with the excavated outflow cavity suggested by Brogan et al. (2018).
To test the possibility of precession in the jet motion, we compare the
position angle of the VVCM results derived with all maser regions with those
of the inner regions. About 10∘ difference is observed between the two
position angles. This could be an indication of jet precession. MM1-W1 and
MM1-W3 masers are closer to MM1 (the driving source of the jet), and assuming a
jet velocity of 150 km s-1, it would take $\sim$95 years for a jet launched by
MM1 to reach the location of CM2.
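The 95-year estimate is simple kinematics: travel time equals separation over speed. A minimal sketch follows (the $\sim$3000 AU MM1-CM2 separation used in the check is a value we infer from the quoted numbers, not one stated in the text):

```python
KM_PER_AU = 1.495978707e8   # kilometres in one astronomical unit
SEC_PER_YR = 3.15576e7      # seconds in one Julian year

def travel_time_yr(separation_au, speed_km_s):
    """Years for jet material moving at speed_km_s to cross separation_au."""
    return separation_au * KM_PER_AU / (speed_km_s * SEC_PER_YR)
```

At 150 km s-1, a separation of $\sim$3000 AU yields $\sim$95 yr, consistent with the estimate above.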
The synchrotron continuum point source CM2 (Brogan et al., 2018), located
north-west of the radio jet of MM1B, hosts the bright masers in the region, and
its nature has been discussed in Brogan et al. (2018). The observed proper
motions of H2O masers in CM2 are similar to those reported by Burns et al.
(2016). In a study of S255IR-SMA1 they reported a bow shock shape traced in
H2O masers with a velocity of $\sim$ 20 km s-1, and a $V_{\mbox{\scriptsize
H2O masers with a velocity of $\sim$ 20 km s-1, and a $V_{\mbox{\scriptsize
LSR}}$ gradient throughout the shock. They also reported three distinct
ejections, with the most recent ejection being the shock traced in H2O masers
with a dynamical timescale $t_{dyn}\leq 130$ years. Ogbodo et al. (2017) also
reported a bow-shock structure for IRAS 20231+3440 traced with H2O masers,
with an average maser velocity of 14.26 km s-1. These studies report bow-shock
maser velocities significantly lower than we found in NGC 6334I. Further
studies into the driving mechanisms of the jets and outflows of NGC 6334I and
other sources are necessary to explain the bow-shock velocity discrepancies.
This and the above-mentioned studies (among others) indicate that high
velocity (V${}_{ave}\geq$ 10 km s-1) H2O maser proper motions in a bow-shock
shape might be a common tracer of jets from massive protostars.
### 4.3 Impact of MM3-UCHII on UCHII-W1 maser spatio-kinematics
Brogan et al. (2018) reported a bulk motion of 112$\pm$12 km s-1 for the
UCHII-W1 maser group using multi-epoch VLA observations between 2011 (pre-
burst) and 2017 (post-burst). They suggested that the H2O masers of the two
2017 epochs are possibly pumped by the beamed radiation from MM1B. The proper
motions of UCHII-W1 point northward, against the direction of the jet. This
suggests that the spatio-kinematics of the masing gas in this sub-region are
not driven by the jet but by the MM3-UCHII region.
We investigated the possibility that the magnetic field reversal reported by
Caswell et al. (2011) and Hunter et al. (2018) is responsible for the
northward proper motion of the UCHII-W1 H2O masers. A reversal in magnetic
field is reported in OH masers in UCHII-OH6 (located 0.35$\arcsec$ south-west
of UCHII-W1) and UCHII-OH7 (located 0.7$\arcsec$ south of UCHII-W3) (see
Figure 5 and Table 8 of Hunter et al. (2018)). The OH masers showing reversed
Zeeman splitting are $>$ 500 au from UCHII-W1 and closer to UCHII-W3 and W2
(see Brogan et al. (2018)), and therefore may not be responsible for the
observed northward proper motions of the UCHII-W1 masers. The UCHII-W1 proper
motion is likely driven by MM3-UCHII or by the outflow of the infrared star
TPR-9 (with X-ray counterpart CXOU 172053.21-354726.4; Tapia et al., 1996), as
suggested by Brogan et al. (2018).
## 5 Conclusions and summary
We reported for the first time the spatio-kinematics of H2O masers in a massive
star-forming region (NGC 6334I) just after an accretion event. The proper
motions of the H2O masers in CM2-W2, MM1-W3, MM1-W1, and UCHII-W3 are mostly
driven by the radio jet of MM1B. However, some influence from the outflowing
gas in the MM1 NW-SE bipolar outflow and the MM1B NW outflow cannot be
completely excluded.
Our results suggest that the motion of the UCHII-W1 H2O maser group is largely
driven by the expansion of MM3-UCHII. The significance of the impact of the
accretion event on the proper motions of the H2O masers, with special
consideration of the destruction and re-excitation of the H2O masers in the
region, will be presented in Vorster et al. (in prep.), which will compare the
pre-burst and post-burst H2O maser proper motions. The impact of a heat wave,
such as the one reported in Burns et al. (2020), will be explored with the
pre-accretion-burst VLBI H2O maser data.
JOC acknowledges support from the Italian Ministry of Foreign Affairs and
International Cooperation (MAECI Grant Number ZA18GR02) and the South African
Department of Science and Technology’s National Research Foundation (DST-NRF
Grant Number 113121) as part of the ISARP RADIOSKY2020 Joint Research Scheme.
T. Hirota is financially supported by the MEXT/JSPS KAKENHI Grant Number
17K05398. This paper makes use of the following ALMA data:
ADS/JAO.ALMA#2015.A.00022.T. ALMA is a partnership of ESO (representing its
member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC
and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the
Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and
NAOJ. The National Radio Astronomy Observatory is a facility of the National
Science Foundation operated under agreement by the Associated Universities,
Inc. This research made use of NASA’s Astrophysics Data System Bibliographic
Services. The Hartebeesthoek 26-m telescope is operated by the South African
Radio Astronomy Observatory, which is a facility of the National Research
Foundation, an agency of the Department of Science and Innovation.
Table 1: Parameters of the H2O Maser Proper Motions
ID${}^{\text{a}}$ | Region | $\alpha$ Offset${}^{\text{b}}$ (″) | $\delta$ Offset${}^{\text{b}}$ (″) | $\mu_{x}$ (mas yr-1) | $\sigma\mu_{x}$ (mas yr-1) | $\mu_{y}$ (mas yr-1) | $\sigma\mu_{y}$ (mas yr-1) | $V_{\mbox{\scriptsize LSR}}$ (km s-1) | Epoch 1 | Epoch 2 | Epoch 3
---|---|---|---|---|---|---|---|---|---|---|---
$1$ | CM2-W2 | $-0.656$ | $5.305$ | $-2.394$ | $0.168$ | $21.709$ | $0.168$ | $-27.594$ | $+$ | $+$ | $-$
$2$ | CM2-W2 | $-0.656$ | $5.305$ | $-3.711$ | $0.168$ | $24.368$ | $0.168$ | $-28.226$ | $+$ | $+$ | $-$
$3$ | CM2-W2 | $-0.688$ | $5.304$ | $-0.980$ | $2.200$ | $12.964$ | $0.413$ | $-5.477$ | $+$ | $+$ | $-$
$4$ | CM2-W2 | $-0.640$ | $5.364$ | $0.000$ | $0.213$ | $17.487$ | $0.213$ | $-14.535$ | $+$ | $+$ | $-$
$5$ | CM2-W2 | $-0.640$ | $5.364$ | $-1.400$ | $0.231$ | $35.020$ | $0.215$ | $-14.533$ | $+$ | $+$ | $-$
$6$ | CM2-W2 | $-0.627$ | $5.379$ | $-1.281$ | $0.373$ | $17.487$ | $0.413$ | $0.422$ | $+$ | $+$ | $-$
$7$ | CM2-W2 | $-0.628$ | $5.377$ | $1.658$ | $0.213$ | $21.105$ | $0.213$ | $-2.316$ | $+$ | $+$ | $-$
$8$ | CM2-W2 | $-0.628$ | $5.378$ | $1.960$ | $0.213$ | $19.145$ | $0.213$ | $-4.844$ | $+$ | $+$ | $-$
$9$ | CM2-W2 | $-0.627$ | $5.380$ | $-2.384$ | $1.643$ | $30.351$ | $3.193$ | $-0.206$ | $+$ | $+$ | $-$
$10$ | CM2-W2 | $-0.677$ | $5.330$ | $-4.179$ | $0.168$ | $17.412$ | $0.168$ | $-14.112$ | $+$ | $+$ | $-$
$11$ | CM2-W2 | $-0.678$ | $5.331$ | $-0.281$ | $0.168$ | $23.235$ | $0.168$ | $-3.578$ | $+$ | $+$ | $-$
$12$ | CM2-W2 | $-0.678$ | $5.331$ | $1.919$ | $0.168$ | $9.897$ | $0.168$ | $-3.789$ | $+$ | $+$ | $-$
$13$ | CM2-W2 | $-0.646$ | $5.326$ | $0.183$ | $0.259$ | $13.204$ | $0.259$ | $-24.010$ | $+$ | $+$ | $-$
$14$ | CM2-W2 | $-0.648$ | $5.337$ | $-8.619$ | $0.493$ | $25.858$ | $2.556$ | $-18.533$ | $+$ | $+$ | $-$
$15$ | CM2-W2 | $-0.674$ | $5.344$ | $-1.540$ | $0.168$ | $18.956$ | $0.168$ | $-22.327$ | $+$ | $+$ | $-$
$16$ | CM2-W2 | $-0.660$ | $5.354$ | $-7.336$ | $0.259$ | $27.692$ | $0.259$ | $-0.627$ | $+$ | $+$ | $-$
$17$ | CM2-W2 | $-0.660$ | $5.354$ | $-10.637$ | $0.259$ | $28.425$ | $0.259$ | $-1.259$ | $+$ | $+$ | $-$
$18$ | CM2-W2 | $-0.602$ | $5.378$ | $0.075$ | $0.861$ | $18.165$ | $0.576$ | $-21.698$ | $-$ | $+$ | $+$
$19$ | CM2-W2 | $-0.602$ | $5.378$ | $0.301$ | $0.337$ | $17.487$ | $0.282$ | $-22.119$ | $-$ | $+$ | $+$
$20$ | CM2-W2 | $-0.602$ | $5.378$ | $0.904$ | $0.213$ | $16.432$ | $0.213$ | $-22.330$ | $-$ | $+$ | $+$
$21$ | CM2-W2 | $-0.690$ | $5.288$ | $-1.448$ | $0.297$ | $15.272$ | $0.493$ | $-6.738$ | $-$ | $+$ | $+$
$22$ | CM2-W2 | $-0.662$ | $5.317$ | $4.000$ | $0.168$ | $18.241$ | $0.168$ | $-20.010$ | $-$ | $+$ | $+$
$23$ | CM2-W2 | $-0.662$ | $5.316$ | $-7.476$ | $0.168$ | $25.572$ | $0.168$ | $-26.330$ | $-$ | $+$ | $+$
$24$ | CM2-W2 | $-0.662$ | $5.316$ | $-6.602$ | $0.168$ | $23.152$ | $0.168$ | $-26.541$ | $-$ | $+$ | $+$
$25$ | CM2-W2 | $-0.662$ | $5.316$ | $-0.603$ | $0.213$ | $14.773$ | $0.213$ | $-26.754$ | $-$ | $+$ | $+$
$26$ | CM2-W2 | $-0.662$ | $5.316$ | $-0.904$ | $0.213$ | $18.090$ | $0.213$ | $-26.965$ | $-$ | $+$ | $+$
$27$ | CM2-W2 | $-0.648$ | $5.310$ | $-2.109$ | $0.453$ | $13.204$ | $0.608$ | $-21.693$ | $-$ | $+$ | $+$
$28$ | CM2-W2 | $-0.647$ | $5.305$ | $-10.133$ | $0.168$ | $14.082$ | $0.168$ | $-10.530$ | $-$ | $+$ | $+$
$29$ | CM2-W2 | $-0.654$ | $5.306$ | $3.759$ | $2.676$ | $11.554$ | $2.356$ | $-27.802$ | $-$ | $+$ | $+$
$30$ | CM2-W2 | $-0.654$ | $5.306$ | $-10.637$ | $0.259$ | $35.577$ | $0.259$ | $-28.434$ | $-$ | $+$ | $+$
$31$ | CM2-W2 | $-0.657$ | $5.306$ | $-8.152$ | $0.168$ | $27.284$ | $0.168$ | $-26.330$ | $-$ | $+$ | $+$
$32$ | CM2-W2 | $-0.690$ | $5.297$ | $-4.221$ | $0.213$ | $11.909$ | $0.213$ | $-5.055$ | $-$ | $+$ | $+$
$33$ | CM2-W2 | $-0.690$ | $5.297$ | $-0.350$ | $0.168$ | $6.364$ | $0.168$ | $-5.685$ | $-$ | $+$ | $+$
$34$ | CM2-W2 | $-0.691$ | $5.298$ | $9.949$ | $0.213$ | $-0.603$ | $0.213$ | $-6.741$ | $-$ | $+$ | $+$
$35$ | CM2-W2 | $-0.691$ | $5.298$ | $-1.809$ | $0.213$ | $12.211$ | $0.213$ | $-7.162$ | $-$ | $+$ | $+$
$36$ | CM2-W2 | $-0.691$ | $5.298$ | $-1.507$ | $0.213$ | $11.758$ | $0.213$ | $-7.373$ | $-$ | $+$ | $+$
$37$ | CM2-W2 | $-0.690$ | $5.298$ | $-3.301$ | $0.259$ | $7.886$ | $0.259$ | $-9.896$ | $-$ | $+$ | $+$
$38$ | CM2-W2 | $-0.686$ | $5.304$ | $-4.629$ | $1.477$ | $17.235$ | $0.324$ | $-11.373$ | $-$ | $+$ | $+$
$39$ | CM2-W2 | $-0.685$ | $5.304$ | $1.809$ | $0.213$ | $12.814$ | $0.213$ | $-13.693$ | $-$ | $+$ | $+$
$40$ | CM2-W2 | $-0.685$ | $5.304$ | $1.206$ | $0.213$ | $12.060$ | $0.213$ | $-13.903$ | $-$ | $+$ | $+$
$41$ | CM2-W2 | $-0.637$ | $5.367$ | $-1.583$ | $0.306$ | $16.884$ | $0.594$ | $-14.746$ | $-$ | $+$ | $+$
$42$ | CM2-W2 | $-0.642$ | $5.363$ | $4.899$ | $1.253$ | $26.306$ | $0.841$ | $-13.693$ | $-$ | $+$ | $+$
$43$ | CM2-W2 | $-0.643$ | $5.365$ | $-3.668$ | $0.205$ | $22.307$ | $0.217$ | $-14.322$ | $-$ | $+$ | $+$
$44$ | CM2-W2 | $-0.640$ | $5.361$ | $-6.683$ | $1.455$ | $20.150$ | $2.021$ | $-10.954$ | $-$ | $+$ | $+$
$45$ | CM2-W2 | $-0.627$ | $5.379$ | $0.377$ | $0.642$ | $19.823$ | $0.306$ | $0.633$ | $-$ | $+$ | $+$
$46$ | CM2-W2 | $-0.628$ | $5.378$ | $2.110$ | $0.213$ | $17.336$ | $0.213$ | $-5.055$ | $-$ | $+$ | $+$
$47$ | CM2-W2 | $-0.647$ | $5.340$ | $-1.324$ | $0.117$ | $15.224$ | $0.117$ | $-5.474$ | $-$ | $+$ | $+$
$48$ | CM2-W2 | $-0.660$ | $5.353$ | $-0.937$ | $0.168$ | $11.078$ | $0.168$ | $-3.789$ | $-$ | $+$ | $+$
$49$ | CM2-W2 | $-0.486$ | $5.330$ | $4.372$ | $0.213$ | $15.678$ | $0.213$ | $0.633$ | $+$ | $+$ | $+$
$50$ | CM2-W2 | $-0.486$ | $5.330$ | $4.975$ | $0.213$ | $12.663$ | $0.213$ | $0.422$ | $+$ | $+$ | $+$
$51$ | CM2-W2 | $-0.486$ | $5.330$ | $5.395$ | $0.168$ | $10.848$ | $0.168$ | $-0.418$ | $+$ | $+$ | $+$
$52$ | CM2-W2 | $-0.486$ | $5.330$ | $0.627$ | $0.168$ | $21.485$ | $0.168$ | $-0.629$ | $+$ | $+$ | $+$
$53$ | CM2-W2 | $-0.486$ | $5.330$ | $5.379$ | $0.168$ | $9.349$ | $0.168$ | $-0.840$ | $+$ | $+$ | $+$
$54$ | CM2-W2 | $-0.486$ | $5.330$ | $6.037$ | $0.168$ | $11.323$ | $0.168$ | $-1.050$ | $+$ | $+$ | $+$
$55$ | CM2-W2 | $-0.486$ | $5.330$ | $5.611$ | $0.168$ | $8.848$ | $0.168$ | $-1.261$ | $+$ | $+$ | $+$
$56$ | CM2-W2 | $-0.486$ | $5.330$ | $-2.550$ | $0.168$ | $19.441$ | $0.168$ | $-1.893$ | $+$ | $+$ | $+$
$57$ | CM2-W2 | $-0.572$ | $5.405$ | $0.452$ | $0.213$ | $24.120$ | $0.213$ | $-45.082$ | $+$ | $+$ | $+$
$58$ | CM2-W2 | $-0.572$ | $5.405$ | $-0.904$ | $0.213$ | $22.612$ | $0.213$ | $-45.293$ | $+$ | $+$ | $+$
$59$ | CM2-W2 | $-0.572$ | $5.405$ | $3.698$ | $0.168$ | $24.442$ | $0.168$ | $-46.975$ | $+$ | $+$ | $+$
$60$ | CM2-W2 | $-0.602$ | $5.378$ | $0.301$ | $0.433$ | $17.487$ | $0.406$ | $-21.909$ | $+$ | $+$ | $+$
$61$ | CM2-W2 | $-0.690$ | $5.289$ | $-1.585$ | $0.168$ | $16.134$ | $0.168$ | $-6.528$ | $+$ | $+$ | $+$
$62$ | CM2-W2 | $-0.662$ | $5.317$ | $-4.279$ | $0.168$ | $28.030$ | $0.168$ | $-21.485$ | $+$ | $+$ | $+$
$63$ | CM2-W2 | $-0.656$ | $5.307$ | $2.384$ | $0.259$ | $8.436$ | $0.259$ | $-26.959$ | $+$ | $+$ | $+$
$64$ | CM2-W2 | $-0.659$ | $5.308$ | $-1.055$ | $0.213$ | $26.532$ | $0.213$ | $-26.754$ | $+$ | $+$ | $+$
$65$ | CM2-W2 | $-0.659$ | $5.309$ | $4.768$ | $0.259$ | $20.723$ | $0.259$ | $-26.117$ | $+$ | $+$ | $+$
$66$ | CM2-W2 | $-0.614$ | $5.347$ | $1.507$ | $0.213$ | $23.215$ | $0.213$ | $-19.591$ | $+$ | $+$ | $+$
$67$ | CM2-W2 | $-0.690$ | $5.296$ | $0.754$ | $0.213$ | $14.924$ | $0.213$ | $-4.844$ | $+$ | $+$ | $+$
$68$ | CM2-W2 | $-0.688$ | $5.304$ | $3.618$ | $0.213$ | $5.125$ | $0.213$ | $-4.634$ | $+$ | $+$ | $+$
$69$ | CM2-W2 | $-0.688$ | $5.304$ | $1.131$ | $2.624$ | $13.718$ | $0.337$ | $-5.266$ | $+$ | $+$ | $+$
$70$ | CM2-W2 | $-0.688$ | $5.305$ | $3.481$ | $0.168$ | $15.810$ | $0.168$ | $-7.160$ | $+$ | $+$ | $+$
$71$ | CM2-W2 | $-0.688$ | $5.305$ | $-1.013$ | $0.168$ | $15.293$ | $0.168$ | $-7.370$ | $+$ | $+$ | $+$
$72$ | CM2-W2 | $-0.688$ | $5.305$ | $-0.603$ | $0.213$ | $16.281$ | $0.213$ | $-7.583$ | $+$ | $+$ | $+$
$73$ | CM2-W2 | $-0.686$ | $5.304$ | $-3.769$ | $0.213$ | $19.899$ | $0.213$ | $-8.637$ | $+$ | $+$ | $+$
$74$ | CM2-W2 | $-0.686$ | $5.305$ | $5.578$ | $0.213$ | $5.728$ | $0.213$ | $-10.954$ | $+$ | $+$ | $+$
$75$ | CM2-W2 | $-0.686$ | $5.304$ | $-3.951$ | $0.168$ | $14.085$ | $0.168$ | $-11.162$ | $+$ | $+$ | $+$
$76$ | CM2-W2 | $-0.666$ | $5.336$ | $0.754$ | $0.213$ | $18.090$ | $0.213$ | $-9.058$ | $+$ | $+$ | $+$
$77$ | CM2-W2 | $-0.682$ | $5.327$ | $1.357$ | $0.213$ | $20.954$ | $0.213$ | $-1.895$ | $+$ | $+$ | $+$
$78$ | CM2-W2 | $-0.631$ | $5.372$ | $-0.357$ | $0.251$ | $23.737$ | $0.282$ | $-16.429$ | $+$ | $+$ | $+$
$79$ | CM2-W2 | $-0.614$ | $5.376$ | $-1.009$ | $2.068$ | $21.915$ | $0.701$ | $-5.472$ | $+$ | $+$ | $+$
$80$ | CM2-W2 | $-0.647$ | $5.387$ | $-5.578$ | $0.282$ | $17.185$ | $0.337$ | $-13.693$ | $+$ | $+$ | $+$
$81$ | CM2-W2 | $-0.651$ | $5.324$ | $-3.392$ | $0.117$ | $38.307$ | $0.117$ | $-10.109$ | $+$ | $+$ | $+$
$82$ | CM2-W2 | $-0.646$ | $5.325$ | $1.407$ | $0.117$ | $13.155$ | $0.117$ | $-23.802$ | $+$ | $+$ | $+$
$83$ | CM2-W2 | $-0.646$ | $5.326$ | $4.035$ | $0.259$ | $15.955$ | $0.259$ | $-22.536$ | $+$ | $+$ | $+$
$84$ | CM2-W2 | $-0.667$ | $5.346$ | $2.648$ | $0.117$ | $13.238$ | $0.117$ | $-3.368$ | $+$ | $+$ | $+$
$85$ | CM2-W2 | $-0.668$ | $5.346$ | $0.183$ | $0.259$ | $9.536$ | $0.259$ | $-5.893$ | $+$ | $+$ | $+$
$86$ | CM2-W2 | $-0.663$ | $5.358$ | $-1.658$ | $0.213$ | $12.814$ | $0.213$ | $-10.954$ | $+$ | $+$ | $+$
$87$ | CM2-W2 | $-0.661$ | $5.354$ | $0.183$ | $0.259$ | $10.086$ | $0.259$ | $-2.944$ | $+$ | $+$ | $+$
$88$ | MM1-W1 | $-0.222$ | $3.641$ | $-0.904$ | $0.213$ | $12.361$ | $0.213$ | $-14.325$ | $+$ | $+$ | $-$
$89$ | MM1-W1 | $-0.223$ | $3.641$ | $-2.237$ | $0.168$ | $15.733$ | $0.168$ | $-13.901$ | $+$ | $+$ | $-$
$90$ | MM1-W1 | $-0.222$ | $3.641$ | $-3.467$ | $0.213$ | $5.125$ | $0.213$ | $-14.535$ | $+$ | $+$ | $-$
$91$ | MM1-W1 | $-0.222$ | $3.641$ | $-0.248$ | $0.117$ | $7.115$ | $0.117$ | $-14.111$ | $+$ | $+$ | $-$
$92$ | MM1-W1 | $-0.188$ | $3.672$ | $-7.436$ | $0.678$ | $12.387$ | $1.270$ | $-61.497$ | $+$ | $+$ | $-$
$93$ | MM1-W1 | $-0.188$ | $3.672$ | $-8.178$ | $0.410$ | $12.776$ | $0.984$ | $-63.094$ | $+$ | $+$ | $-$
$94$ | MM1-W1 | $-0.188$ | $3.674$ | $-8.953$ | $0.168$ | $19.211$ | $0.168$ | $-64.039$ | $+$ | $+$ | $-$
$95$ | MM1-W1 | $-0.223$ | $3.641$ | $-0.151$ | $0.213$ | $16.130$ | $0.213$ | $-13.693$ | $+$ | $+$ | $+$
$96$ | MM1-W1 | $-0.188$ | $3.674$ | $-11.940$ | $0.168$ | $21.226$ | $0.168$ | $-63.407$ | $+$ | $+$ | $+$
$97$ | MM1-W1 | $-0.188$ | $3.674$ | $-11.740$ | $0.168$ | $20.267$ | $0.168$ | $-62.986$ | $+$ | $+$ | $+$
$98$ | MM1-W1 | $-0.188$ | $3.674$ | $-11.389$ | $0.168$ | $19.727$ | $0.168$ | $-62.564$ | $+$ | $+$ | $+$
$99$ | MM1-W1 | $-0.188$ | $3.674$ | $-6.978$ | $0.168$ | $7.344$ | $0.168$ | $-61.511$ | $+$ | $+$ | $+$
$100$ | MM1-W1 | $-0.188$ | $3.674$ | $-7.115$ | $0.117$ | $14.644$ | $0.117$ | $-60.458$ | $+$ | $+$ | $+$
$101$ | MM1-W3 | $-0.272$ | $2.884$ | $0.301$ | $0.213$ | $2.110$ | $0.213$ | $-0.420$ | $+$ | $+$ | $-$
$102$ | MM1-W3 | $-0.272$ | $2.883$ | $3.316$ | $0.213$ | $-0.754$ | $0.213$ | $-0.420$ | $+$ | $+$ | $-$
$103$ | MM1-W3 | $-0.272$ | $2.883$ | $1.357$ | $0.213$ | $-1.507$ | $0.213$ | $-0.210$ | $+$ | $+$ | $-$
$104$ | MM1-W3 | $-0.273$ | $2.889$ | $3.095$ | $0.168$ | $6.649$ | $0.168$ | $-3.789$ | $+$ | $+$ | $-$
$105$ | MM1-W3 | $-0.273$ | $2.892$ | $-1.241$ | $0.117$ | $-1.241$ | $0.117$ | $-3.368$ | $+$ | $+$ | $-$
$106$ | MM1-W3 | $-0.273$ | $2.892$ | $-3.806$ | $0.117$ | $-1.655$ | $0.117$ | $-3.789$ | $+$ | $+$ | $-$
$107$ | MM1-W3 | $-0.274$ | $2.892$ | $-2.292$ | $1.652$ | $-9.261$ | $2.806$ | $-2.523$ | $+$ | $+$ | $-$
$108$ | MM1-W3 | $-0.274$ | $2.894$ | $0.579$ | $0.117$ | $7.198$ | $0.117$ | $-1.472$ | $+$ | $+$ | $-$
$109$ | MM1-W3 | $-0.274$ | $2.894$ | $0.678$ | $0.168$ | $8.936$ | $0.168$ | $-1.682$ | $+$ | $+$ | $-$
$110$ | MM1-W3 | $-0.274$ | $2.893$ | $0.619$ | $0.168$ | $10.558$ | $0.168$ | $-1.893$ | $+$ | $+$ | $-$
$111$ | MM1-W3 | $-0.274$ | $2.893$ | $-0.956$ | $0.168$ | $15.018$ | $0.168$ | $-2.314$ | $+$ | $+$ | $-$
$112$ | MM1-W3 | $-0.274$ | $2.897$ | $2.952$ | $0.359$ | $11.163$ | $2.192$ | $-2.314$ | $+$ | $+$ | $-$
$113$ | MM1-W3 | $-0.273$ | $2.888$ | $-1.055$ | $0.213$ | $11.758$ | $0.213$ | $-2.738$ | $+$ | $+$ | $+$
$114$ | MM1-W3 | $-0.273$ | $2.888$ | $-3.937$ | $0.213$ | $9.222$ | $1.266$ | $-3.157$ | $+$ | $+$ | $+$
$115$ | MM1-W3 | $-0.273$ | $2.887$ | $-0.904$ | $0.213$ | $17.487$ | $0.213$ | $-3.370$ | $+$ | $+$ | $+$
$116$ | MM1-W3 | $-0.274$ | $2.891$ | $3.316$ | $0.213$ | $4.070$ | $0.213$ | $-2.316$ | $+$ | $+$ | $+$
$117$ | MM1-W3 | $-0.274$ | $2.891$ | $0.083$ | $0.117$ | $-0.993$ | $0.117$ | $-2.736$ | $+$ | $+$ | $+$
$118$ | MM1-W3 | $-0.273$ | $2.890$ | $-3.061$ | $0.117$ | $-1.986$ | $0.117$ | $-3.157$ | $+$ | $+$ | $+$
$119$ | MM1-W3 | $-0.273$ | $2.890$ | $2.076$ | $0.168$ | $6.296$ | $0.168$ | $-3.368$ | $+$ | $+$ | $+$
$120$ | MM1-W3 | $-0.273$ | $2.890$ | $2.319$ | $0.168$ | $6.502$ | $0.168$ | $-3.578$ | $+$ | $+$ | $+$
$121$ | MM1-W3 | $-0.273$ | $2.892$ | $-2.068$ | $0.117$ | $-2.317$ | $0.117$ | $-3.578$ | $+$ | $+$ | $+$
$122$ | MM1-W3 | $-0.274$ | $2.891$ | $-9.720$ | $0.259$ | $0.550$ | $0.259$ | $-2.312$ | $+$ | $+$ | $+$
$123$ | MM1-W3 | $-0.274$ | $2.893$ | $1.661$ | $0.168$ | $4.084$ | $0.168$ | $-2.736$ | $+$ | $+$ | $+$
$124$ | MM1-W3 | $-0.274$ | $2.894$ | $4.824$ | $0.213$ | $1.658$ | $0.213$ | $-3.159$ | $+$ | $+$ | $+$
$125$ | MM1-W3 | $-0.274$ | $2.893$ | $-0.301$ | $0.213$ | $11.457$ | $0.213$ | $-3.370$ | $+$ | $+$ | $+$
$126$ | UCHII-W3 | $0.137$ | $1.277$ | $2.834$ | $0.168$ | $-10.437$ | $0.168$ | $-36.231$ | $-$ | $+$ | $+$
$127$ | UCHII-W3 | $0.137$ | $1.277$ | $4.070$ | $0.213$ | $-12.512$ | $0.213$ | $-36.655$ | $-$ | $+$ | $+$
$128$ | UCHII-W3 | $0.178$ | $1.274$ | $4.774$ | $1.483$ | $-8.751$ | $0.393$ | $-47.607$ | $+$ | $+$ | $+$
$129$ | UCHII-W3 | $0.178$ | $1.274$ | $6.555$ | $0.168$ | $-16.538$ | $0.168$ | $-48.028$ | $+$ | $+$ | $+$
$130$ | UCHII-W1 | $-0.072$ | $-0.060$ | $0.301$ | $0.213$ | $8.743$ | $0.213$ | $-14.957$ | $+$ | $+$ | $-$
$131$ | UCHII-W1 | $-0.072$ | $-0.060$ | $2.044$ | $0.168$ | $4.421$ | $0.168$ | $-15.165$ | $+$ | $+$ | $-$
$132$ | UCHII-W1 | $-0.072$ | $-0.064$ | $-1.960$ | $0.500$ | $10.251$ | $1.830$ | $-13.482$ | $-$ | $+$ | $+$
$133$ | UCHII-W1 | $-0.072$ | $-0.063$ | $-4.673$ | $0.213$ | $7.537$ | $0.213$ | $-13.693$ | $-$ | $+$ | $+$
$134$ | UCHII-W1 | $-0.072$ | $-0.064$ | $-3.166$ | $0.994$ | $12.889$ | $1.778$ | $-13.903$ | $-$ | $+$ | $+$
$135$ | UCHII-W1 | $-0.072$ | $-0.060$ | $0.151$ | $0.213$ | $9.346$ | $0.213$ | $-15.378$ | $-$ | $+$ | $+$
$136$ | UCHII-W1 | $-0.074$ | $-0.059$ | $-0.603$ | $0.213$ | $9.196$ | $0.213$ | $-13.271$ | $-$ | $+$ | $+$
$137$ | UCHII-W1 | $-0.074$ | $-0.059$ | $-0.754$ | $0.213$ | $9.497$ | $0.213$ | $-13.482$ | $-$ | $+$ | $+$
$138$ | UCHII-W1 | $-0.074$ | $-0.059$ | $0.092$ | $0.168$ | $7.869$ | $0.168$ | $-14.322$ | $-$ | $+$ | $+$
$139$ | UCHII-W1 | $-0.074$ | $-0.059$ | $-0.285$ | $0.168$ | $8.164$ | $0.168$ | $-14.533$ | $-$ | $+$ | $+$
$140$ | UCHII-W1 | $-0.074$ | $-0.058$ | $-0.301$ | $0.213$ | $3.618$ | $0.213$ | $-14.746$ | $-$ | $+$ | $+$
$141$ | UCHII-W1 | $-0.074$ | $-0.059$ | $-6.602$ | $0.259$ | $20.173$ | $0.259$ | $-12.635$ | $-$ | $+$ | $+$
$142$ | UCHII-W1 | $0.001$ | $-0.000$ | $1.737$ | $0.117$ | $-1.489$ | $0.117$ | $-19.167$ | $+$ | $+$ | $+$
$143$ | UCHII-W1 | $0.001$ | $-0.001$ | $-2.640$ | $0.168$ | $1.697$ | $0.168$ | $-19.589$ | $+$ | $+$ | $+$
$144$ | UCHII-W1 | $0.002$ | $-0.001$ | $-0.301$ | $0.213$ | $-1.357$ | $0.213$ | $-22.119$ | $+$ | $+$ | $+$
$145$ | UCHII-W1 | $0.001$ | $-0.001$ | $0.301$ | $0.213$ | $0.301$ | $0.213$ | $-19.381$ | $+$ | $+$ | $+$
$146$ | UCHII-W1 | $0.000$ | $-0.000$ | $---$ | $---$ | $---$ | $---$ | $-20.642$ | $+$ | $+$ | $+$
$147$ | UCHII-W1 | $0.000$ | $-0.000$ | $---$ | $---$ | $---$ | $---$ | $-21.063$ | $+$ | $+$ | $+$
$148$ | UCHII-W1 | $0.000$ | $-0.000$ | $---$ | $---$ | $---$ | $---$ | $-21.274$ | $+$ | $+$ | $+$
$149$ | UCHII-W1 | $0.000$ | $-0.000$ | $---$ | $---$ | $---$ | $---$ | $-21.485$ | $+$ | $+$ | $+$
$150$ | UCHII-W1 | $0.000$ | $-0.000$ | $---$ | $---$ | $---$ | $---$ | $-21.698$ | $+$ | $+$ | $+$
$151$ | UCHII-W1 | $-0.072$ | $-0.063$ | $-0.754$ | $0.213$ | $9.196$ | $0.213$ | $-11.165$ | $+$ | $+$ | $+$
$152$ | UCHII-W1 | $-0.072$ | $-0.064$ | $-1.658$ | $0.213$ | $11.608$ | $0.213$ | $-12.007$ | $+$ | $+$ | $+$
$153$ | UCHII-W1 | $-0.072$ | $-0.063$ | $-2.261$ | $0.213$ | $11.457$ | $0.213$ | $-14.114$ | $+$ | $+$ | $+$
$154$ | UCHII-W1 | $-0.072$ | $-0.063$ | $-2.858$ | $0.201$ | $7.751$ | $1.055$ | $-14.322$ | $+$ | $+$ | $+$
$155$ | UCHII-W1 | $-0.072$ | $-0.063$ | $-7.702$ | $0.259$ | $-3.118$ | $0.259$ | $-12.635$ | $+$ | $+$ | $+$
$156$ | UCHII-W1 | $-0.072$ | $-0.059$ | $0.603$ | $0.213$ | $15.678$ | $0.213$ | $-13.061$ | $+$ | $+$ | $+$
$157$ | UCHII-W1 | $-0.072$ | $-0.058$ | $2.713$ | $0.213$ | $20.049$ | $0.213$ | $-13.271$ | $+$ | $+$ | $+$
$158$ | UCHII-W1 | $-0.072$ | $-0.058$ | $-2.713$ | $0.213$ | $5.125$ | $0.213$ | $-14.746$ | $+$ | $+$ | $+$
$159$ | UCHII-W1 | $-0.072$ | $-0.058$ | $-2.813$ | $0.117$ | $-5.130$ | $0.117$ | $-15.165$ | $+$ | $+$ | $+$
$160$ | UCHII-W1 | $-0.071$ | $-0.062$ | $-3.478$ | $0.168$ | $11.494$ | $0.168$ | $-8.213$ | $+$ | $+$ | $+$
$161$ | UCHII-W1 | $-0.071$ | $-0.062$ | $-2.377$ | $0.168$ | $11.660$ | $0.168$ | $-8.424$ | $+$ | $+$ | $+$
$162$ | UCHII-W1 | $-0.072$ | $-0.061$ | $0.236$ | $0.168$ | $8.838$ | $0.168$ | $-9.477$ | $+$ | $+$ | $+$
$163$ | UCHII-W1 | $-0.072$ | $-0.061$ | $0.026$ | $0.168$ | $10.197$ | $0.168$ | $-9.898$ | $+$ | $+$ | $+$
$164$ | UCHII-W1 | $-0.072$ | $-0.061$ | $-1.950$ | $0.168$ | $12.685$ | $0.168$ | $-10.109$ | $+$ | $+$ | $+$
$165$ | UCHII-W1 | $-0.072$ | $-0.062$ | $-2.262$ | $0.168$ | $18.775$ | $0.168$ | $-10.320$ | $+$ | $+$ | $+$
$166$ | UCHII-W1 | $-0.072$ | $-0.062$ | $-1.960$ | $0.213$ | $22.461$ | $0.213$ | $-10.743$ | $+$ | $+$ | $+$
$167$ | UCHII-W1 | $-0.072$ | $-0.060$ | $-5.125$ | $0.213$ | $-7.537$ | $0.213$ | $-10.954$ | $+$ | $+$ | $+$
$168$ | UCHII-W1 | $-0.072$ | $-0.061$ | $-0.904$ | $0.213$ | $11.608$ | $0.213$ | $-11.165$ | $+$ | $+$ | $+$
$169$ | UCHII-W1 | $-0.072$ | $-0.061$ | $1.498$ | $0.168$ | $12.758$ | $0.168$ | $-11.373$ | $+$ | $+$ | $+$
$170$ | UCHII-W1 | $-0.072$ | $-0.061$ | $2.180$ | $0.168$ | $3.495$ | $0.168$ | $-11.584$ | $+$ | $+$ | $+$
$171$ | UCHII-W1 | $-0.072$ | $-0.061$ | $1.159$ | $0.168$ | $4.015$ | $0.168$ | $-11.794$ | $+$ | $+$ | $+$
$172$ | UCHII-W1 | $-0.072$ | $-0.061$ | $-2.718$ | $0.168$ | $14.095$ | $0.168$ | $-12.005$ | $+$ | $+$ | $+$
$173$ | UCHII-W1 | $-0.072$ | $-0.061$ | $0.504$ | $0.168$ | $5.791$ | $0.168$ | $-12.216$ | $+$ | $+$ | $+$
$174$ | UCHII-W1 | $-0.072$ | $-0.061$ | $0.330$ | $0.168$ | $6.882$ | $0.168$ | $-12.426$ | $+$ | $+$ | $+$
$175$ | UCHII-W1 | $-0.072$ | $-0.061$ | $0.072$ | $0.168$ | $7.216$ | $0.168$ | $-12.637$ | $+$ | $+$ | $+$
$176$ | UCHII-W1 | $-0.072$ | $-0.061$ | $-0.204$ | $0.168$ | $12.544$ | $0.168$ | $-13.690$ | $+$ | $+$ | $+$
$177$ | UCHII-W1 | $-0.072$ | $-0.061$ | $-0.145$ | $0.168$ | $11.601$ | $0.168$ | $-13.901$ | $+$ | $+$ | $+$
$178$ | UCHII-W1 | $-0.072$ | $-0.061$ | $0.410$ | $0.168$ | $9.786$ | $0.168$ | $-14.322$ | $+$ | $+$ | $+$
$179$ | UCHII-W1 | $-0.072$ | $-0.060$ | $0.210$ | $0.168$ | $9.500$ | $0.168$ | $-14.533$ | $+$ | $+$ | $+$
$180$ | UCHII-W1 | $-0.072$ | $-0.060$ | $2.211$ | $0.168$ | $2.420$ | $0.168$ | $-14.744$ | $+$ | $+$ | $+$
$181$ | UCHII-W1 | $-0.072$ | $-0.060$ | $0.603$ | $0.213$ | $10.100$ | $0.213$ | $-15.799$ | $+$ | $+$ | $+$
$182$ | UCHII-W1 | $-0.072$ | $-0.060$ | $0.868$ | $0.168$ | $2.213$ | $0.168$ | $-16.007$ | $+$ | $+$ | $+$
$183$ | UCHII-W1 | $-0.072$ | $-0.060$ | $-8.619$ | $0.259$ | $22.557$ | $0.259$ | $-12.845$ | $+$ | $+$ | $+$
$184$ | UCHII-W1 | $-0.074$ | $-0.059$ | $-0.301$ | $0.213$ | $10.402$ | $0.213$ | $-13.061$ | $+$ | $+$ | $+$
$185$ | UCHII-W1 | $-0.074$ | $-0.059$ | $1.500$ | $0.168$ | $9.307$ | $0.168$ | $-13.901$ | $+$ | $+$ | $+$
$186$ | UCHII-W1 | $-0.074$ | $-0.059$ | $0.507$ | $0.168$ | $7.296$ | $0.168$ | $-14.112$ | $+$ | $+$ | $+$
Note. — ${}^{\text{a}}$ Maser feature ID.
${}^{\text{b}}$ Offsets are with respect to the reference maser at
$(\alpha,\delta)$ = (17${}^{\text{h}}$20${}^{\text{m}}$52.600${}^{\text{s}}$, $-35^{\circ}$46′50.508″)
Table 2: Position and Velocity Variance/Covariance Matrix Analysis for the NGC
6334I proper motions
Diagonalization of the Position Variance/Covariance Matrices
---
Matrix No.${}^{\text{a}}$ | $\psi_{\text{max}}$ | | $\psi_{\text{min}}$ | | PA${}_{\text{max}}^{\text{b}}$ | |
| ($10^{-6}$ arcsec${}^{2}$) | | ($10^{-6}$ arcsec${}^{2}$) | | (${}^{\circ}$) | |
1 | 0.897 | | 0.003 | | 79.3 | |
2 | 0.545 | | 0.002 | | 79.0 | |
3 | 0.152 | | 0.0001 | | -84.4 | |
Diagonalization of the Velocity Variance/Covariance Matrices
Matrix No. | $\psi_{\text{max}}$ | $\psi_{\text{mid}}$ | $\psi_{\text{min}}$ | PA${}_{\text{max}}$ | PA${}_{\text{mid}}^{\text{c}}$ | $\phi_{\text{max}}^{\text{d}}$ | $\phi_{\text{mid}}^{\text{e}}$
| (km${}^{2}$ s${}^{-2}$) | (km${}^{2}$ s${}^{-2}$) | (km${}^{2}$ s${}^{-2}$) | (${}^{\circ}$) | (${}^{\circ}$) | (${}^{\circ}$) | (${}^{\circ}$)
1 | 3511.08 | 452.48 | 201.79 | -79.4 $\pm$ 9.2 | 8.8 $\pm$ 1.1 | -32.4 $\pm$ 3.2 | -2.9 $\pm$ 0.3
2 | 3321.62 | 274.64 | 148.94 | -77.9 $\pm$ 10.6 | 11.8 $\pm$ 1.5 | -6.0 $\pm$ 0.6 | -2.9 $\pm$ 0.3
3 | 2604.80 | 582.89 | 93.42 | -67.1 $\pm$ 12.7 | 63.5 $\pm$ 12.0 | 42.8 $\pm$ 7.6 | 35.1 $\pm$ 6.3
Note. —
${}^{\text{a}}$1: CM2-W2, MM1-W1, MM1-W3 & UCHII-W3. All the maser regions
associated with the jet.
2: CM2-W2 & UCHII-W3. The maser regions furthest from MM1B.
3: MM1-W1 & MM1-W3. The maser regions closest to MM1B.
${}^{\text{b}}$ Position angle of the axis with the largest eigenvalue
$\psi_{\text{max}}$
${}^{\text{c}}$ Position angle of the axis with the second largest eigenvalue
$\psi_{\text{mid}}$
${}^{\text{d}}$ Inclination angle of the axis corresponding to
$\psi_{\text{max}}$ with respect to the sky plane.
${}^{\text{e}}$ Inclination angle of the axis corresponding to
$\psi_{\text{mid}}$ with respect to the sky plane.
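The eigenvalues ($\psi_{\text{max}}$, $\psi_{\text{min}}$) and position angles reported above come from diagonalizing $2\times 2$ position variance/covariance matrices. As a hedged illustration (not the authors' pipeline), a NumPy sketch of that computation might look like the following; the axis convention (x = RA offset, y = Dec offset, PA measured East of North and folded into $(-90^{\circ}, 90^{\circ}]$) is an assumption:

```python
import numpy as np

def diagonalize_cov(cov):
    """Eigenvalues (psi_max, psi_min) and the position angle (degrees,
    East of North) of the major axis of a 2x2 variance/covariance matrix.
    Convention assumed here: x = RA offset, y = Dec offset."""
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    psi_min, psi_max = vals
    vx, vy = vecs[:, 1]                     # eigenvector of psi_max
    pa = np.degrees(np.arctan2(vx, vy))     # angle from +Dec toward +RA
    # fold into (-90, 90] since an axis has no preferred direction
    if pa > 90:
        pa -= 180
    if pa <= -90:
        pa += 180
    return psi_max, psi_min, pa
```

For example, a covariance matrix elongated purely along the RA axis yields a position angle of $\pm 90^{\circ}$, while elongation along the Dec axis yields $0^{\circ}$.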
# Detecting Malicious Accounts showing Adversarial Behavior in Permissionless
Blockchains
Rachit Agarwal, IIT-Kanpur
Tanmay Thapliyal, IIT-Kanpur
Sandeep K Shukla, IIT-Kanpur
###### Abstract
Different types of malicious activities have been flagged in multiple
permissionless blockchains such as Bitcoin and Ethereum. While some malicious
activities exploit vulnerabilities in the infrastructure of the blockchain,
others target users through social engineering techniques. To address these
problems, we aim to automatically flag blockchain accounts that originate such
malicious exploitation of other participants' accounts. To that end, we
identify a robust supervised machine learning (ML) algorithm that is resistant
to bias induced by the over-representation of certain malicious activities in
the available dataset and that is also robust against adversarial attacks. We
find that most of the malicious activities reported thus far, for example in
the Ethereum blockchain ecosystem, behave statistically similarly. Further,
the ML algorithms previously used for identifying malicious accounts show bias
towards the over-represented malicious activity. We identify that Neural
Networks (NN) hold up best in the face of such a bias-inducing dataset, while
also being robust against certain adversarial attacks.
## 1 Introduction
Blockchains can be modeled as ever-growing temporal graphs, where interactions
(also called transactions) happen between different entities. In a blockchain,
transactions are grouped to form a block, and these blocks are then chained
together to form the blockchain. A typical blockchain is immutable and is
characterized by properties such as confidentiality, anonymity, and
non-repudiability. Irrespective of the type of blockchain (explained next),
these properties provide a certain level of privacy and security. There are
mainly two types of blockchains: permissioned and permissionless. In
permissioned blockchains, all actions on the blockchain are authenticated and
authorized, while permissionless blockchains do not require such steps for
successful transactions. Permissionless blockchains usually also support a
native crypto-currency.
Many malicious activities, such as ransomware payments in crypto-currency and
Ponzi schemes [4], happen through misuse of permissionless blockchain
platforms. A malicious activity is one where accounts perform illegal actions
such as Phishing, Scamming, or Gambling. In [3], the authors survey different
types of attacks and group them based on the vulnerabilities they target in
permissionless blockchains. Thus, one question we ask is: can we train a
machine learning (ML) algorithm to detect malicious activity and generate
alerts so that other accounts can safeguard themselves? Various
state-of-the-art approaches, such as [1, 19, 31], use ML algorithms to detect
malicious accounts in permissionless blockchains such as Ethereum [6] and
Bitcoin [18]. In [1], the authors outline an approach to detect malicious
accounts that considers the temporal aspects inherent in a permissionless
blockchain, in addition to the transaction-based features used in other
related works such as [19, 31].
Nonetheless, these related works have various drawbacks. They only study
whether an account in the blockchain is malicious or not. They do not classify
or comment upon the type of malicious activity (such as Phishing or Scamming)
the accounts are involved in. Further, as they do not consider the type of
malicious activities, they do not study the bias induced by the imbalance in
the number of accounts associated with particular malicious activities and,
thus, fail to detect differences in the performance of the identified
algorithm on the different kinds of malicious activities. Additionally, they
do not study the performance of the identified algorithm when any adversarial
input is provided. An adversarial input is defined as an intelligently crafted
account whose features hide the malicious characteristics that an ML algorithm
uses to detect malicious accounts. Such accounts may be designed to evade ML
based detection of their maliciousness.
Thus, we are motivated to answer: (Q1) can we effectively detect malicious
accounts that are represented in minority in the blockchain? We also consider
a second question: (Q2) can we effectively detect malicious accounts
that show behavior adversarial to ML based detection? We answer these two
questions using a three-fold methodology. (R1) First, we analyze the
similarity between different types of malicious activities that are currently
known to exist. Here we understand if under-represented malicious activities
do have any similarity with other malicious activities. (R2) We then study the
effect of bias induced in the ML algorithm, if any, by the malicious accounts
attributed to a particular malicious activity which is represented in large
number. We then identify the ML algorithm which we can use to efficiently
detect not only the largely represented malicious activities on the
blockchain, but also activities that are under-represented. For the state-of-
the-art approaches, we train and test different ML algorithms on the dataset
and compute the recall considering all malicious activities under one class.
Next, to understand the robustness of the identified ML algorithm, we test it
on the malicious accounts that are newly tagged with a motivation to
understand if, in the future, such accounts can be captured. (R3) Finally, we
also test the robustness of the ML algorithm on adversarial inputs. Here,
we use Generative Adversarial Networks (GAN) [13] to first generate
adversarial data using already known feature vectors of malicious accounts and
then perform tests on such data using the identified ML model to study the
effect of adversarial attacks.
To facilitate our work, we focus on a permissionless blockchain called
Ethereum blockchain [6]. Ethereum has mainly two types of accounts -
Externally Owned Accounts (EOA) and Smart Contracts (SC) - where transactions
involving EOAs are recorded in the ledger while transactions between SCs are
not. There are various vulnerabilities in Ethereum [8] that attackers exploit
to siphon off Ether (the crypto-currency used by Ethereum). Our study is
not exhaustive over all possible attacks on a blockchain; we study
only those for which we have example accounts. Note that our work is also
applicable to other permissionless blockchains that support
crypto-currency transactions. We choose Ethereum because of the volume,
velocity, variety, and veracity within the transaction data. Currently,
Ethereum has more than 14 different types of known fraudulent activities.
Our results show that certain malicious activities behave similarly when
expressed in terms of feature vectors that leverage the temporal behavior of
the blockchain. We also observe that a Neural Network (NN) performs relatively
better than the other supervised ML algorithms, and that, contrary to the
other supervised ML algorithms, the volume of accounts attributed to a
particular malicious activity does not induce a bias in the NN's results.
Moreover, when adversarial data is encountered, the NN's performance remains
better than that of the other supervised ML algorithms. Henceforth, note that
when we refer to the performance of an ML algorithm we mean the recall
achieved on a testing dataset.
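Concretely, the recall we report can be computed over the malicious class as follows (a minimal sketch; the label names are illustrative, not the paper's actual evaluation code):

```python
def recall(y_true, y_pred, positive="malicious"):
    """Recall = TP / (TP + FN): the fraction of truly malicious
    accounts that the classifier actually flags as malicious."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Example: 3 malicious accounts, 2 correctly flagged -> recall = 2/3
truth = ["malicious", "malicious", "malicious", "benign"]
preds = ["malicious", "benign", "malicious", "benign"]
print(recall(truth, preds))
```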
In summary, our contributions in this paper are as follows:
1. The similarity scores reveal that there exists similarity between the
different types of malicious activities present until 7th Dec 2019 in the
Ethereum blockchain, and that they can be grouped into mostly 3 clusters.
2. All the state-of-the-art ML algorithms used in the related studies are
biased towards the dominant malicious activity, i.e., ‘Phishing’ in our case.
3. We identify that a Neural Network based ML model is the least affected by
the imbalance in the numbers of the different types of malicious activities.
Further, when adversarial transactional data from the Ethereum blockchain is
provided as test data, the NN model is resistant to it. When the NN is trained
on some adversarial inputs, its balanced accuracy increases by 1.5%. When
trained with adversarial data, most other algorithms also regain their
performance, with RandomForest achieving the best recall.
The rest of the paper is organized as follows. In Section 2, we present the
background and the state-of-the-art techniques used to detect malicious
accounts in blockchains. In Section 3, we present a detailed description of
our methodology. This is followed by an in-depth evaluation along with the
results in Section 4. We conclude in Section 5 with details on prospective
future work.
## 2 Background and Related Work
Between 2011 and 2019 there were more than 65 exploit incidents on
various blockchains [3]. These attacks mainly exploit vulnerabilities present
in the consensus mechanism, client, network, and mining pools; examples
include the Sybil attack, Denial of Service attacks, and the 51% attack. In
specific cases such as Ethereum, SC based vulnerabilities are also exploited.
For instance, the Decentralized Autonomous Organization (DAO) attack (see:
https://www.coindesk.com/understanding-dao-hack-journalists) exploited the
Reentrancy bug [22] present in the SC of the DAO system. While some of the
attacks exploit bugs and vulnerabilities, some
exploits target users of the blockchain. The users are sometimes not well-
versed with the technical aspects of the blockchain while sometimes they get
easily influenced by the various social engineering attempts. Such exploits
are also present in permissionless blockchains such as Ethereum. From the
social engineering perspective, Phishing is the most common malicious activity
present in Ethereum, where it is represented by more than 3000 accounts [12].
Table 1 presents the different known malicious activities that are reported to
exist in the Ethereum blockchain and are used in this work. Note that some
activities have similar descriptions but are marked differently in Ethereum.
Malicious Incident/Activity | Description
---|---
Lendf.Me Protocol Hack | Exploit of a reentrancy vulnerability arising due to usage of ERC-777 token [20].
EtherDelta Website Spoof | Attackers spoofed the official EtherDelta website so that users transact through it [23].
Ponzi Scheme | An attacker enticed a user to lend him crypto-currency, which the attacker
used to repay the debt of previously scammed user.
Parity Bug | Bug in a multi-sig parity wallet which caused freezing of assets.
Phishing | When attackers pose as legitimate to lure individuals into transacting
with them, example Bancor Hack [21].
Gambling | Accounts involved in Gambling activities which is illegal in many countries.
Plus Token Scam | A Ponzi scheme [5].
Compromised | Accounts whose addresses were either compromised or scammed.
Scamming | Accounts which are reported to be involved in frauds.
Cryptopia Hack | No official description, but unauthorized and unnoticed transfer happened from
Cryptopia wallet to other exchanges [14].
Bitpoint hack | “Irregular withdrawal from Bitpoint exchange’s hot wallet” [30].
Fake Initial Coin Offerings | Fake Startups aimed to siphon off crowd-funded investments.
Upbit Hack | Speculation that an insider carried out malicious activity when the exchange was
moving funds from hot to cold wallet [15].
Heist | Account involved in various Hacks such as Bitpoint Hack.
Spam Token | No official description on the activity.
Suspicious | No official description, but accounts are mainly involved in various scams.
Scam | No official description, but accounts are mainly involved in various scams.
Unsafe | No official description, but accounts are mainly involved in various scams.
Bugs | Accounts whose associated activities caused issues in the system
Table 1: Malicious Activities on Ethereum blockchain as reported by Etherscan.
A variety of techniques and approaches have been used to detect, study, and
mitigate these different types of attacks and vulnerabilities in blockchains.
In [8], the authors survey different vulnerabilities and attacks in the
Ethereum blockchain and discuss the different defenses employed. We
classify these defenses into 3 groups: (a) those that deploy honeypots to
capture and analyse transactions in the blockchain, (b) those that use wide
variety of machine learning algorithms to analyse transactions, and (c) those
that study the vulnerabilities in the system such as bytecode of the smart
contracts to analyse malicious activities.
In [9], the authors deployed a honeypot and studied the attacks that happen in
the Ethereum blockchain. They analyzed the HTTP and Remote Procedure Call
(RPC) requests made to the honeypot and performed behavioral analysis of the
transactions. They found that adversaries follow specific patterns to steal
crypto-currency from the blockchain. Nonetheless, in some instances such honeypots
are also compromised, for example the honeypot at the address -
‘0x2f30ff3428d62748a1d993f2cc6c9b55df40b4d7’.
In [1], the authors present a survey of different state-of-the-art ML
algorithms that are used to detect malicious accounts in a blockchain
transaction network and then presented the need for the temporal aspects
present in blockchains as new features. Here, we refrain from re-surveying
the methods already covered in [1]; instead, we present
their findings and the new techniques that have been reported since. In [1], the
authors categorized the features into two main categories: transaction based
and graph based. With respect to the transaction based features, they reported
the use of features such as Balance In, Balance Out, and Active Duration. With
respect to the graph based features, they identified the extensive use of the
features such as clustering coefficient [26] and in/out-degree. The authors,
motivated to capture the diversity in the transactions, found that the use of
temporal features further enhanced the robustness of the ML algorithm used
towards identifying malicious accounts in a blockchain. These features
amounted to a total of 59 features and were related to inter-event transaction
properties such as the stability of neighborhood (referred to as
attractiveness) and non-uniformity (referred to as burst [16]) present in the
degree, inter-event time, gas-price, and balance. A complete list of the features
used in [1] is presented in Appendix B. Using such an enriched feature vector,
they validated their approach and achieved high recall ($>78\%$) on the entire
class of malicious accounts present in their test dataset.
In [2], the authors used Graph Convolutional Networks (GCN) to detect money-
laundering related malicious activities in the Bitcoin blockchain. They
developed a Bitcoin transaction graph where the transactions were represented
as the nodes while the flow of Bitcoin (crypto-currency used in Bitcoin
blockchain) was represented as the edges. They used transaction based features
such as amount of Bitcoins received and spent by an account and the Bitcoin
fee incurred by a transaction. Using GCN, they achieved F1 score of 0.77 on
their dataset. Similarly, in [17], the authors constructed a transaction graph
with similar features as in [2] to detect malicious activities in the Bitcoin
blockchain. They compared the use of unsupervised, supervised, and active
learning approaches, observing that the active learning techniques
performed best.
In [27] the authors explored the transactions carried out by different
accounts that were involved in Phishing activity. They analyzed the
transaction data and proposed trans2vec, a network embedding algorithm, to
extract features from the Ethereum blockchain data. They, then, used the
extracted features with One Class Support Vector Machine (OC-SVM) to detect
accounts involved in phishing activities, and achieved a recall score of 89.3%
on the malicious class. Although they focused on phishing activities,
they did not discuss the applicability of trans2vec to other
types of malicious activities.
In [29], the authors developed a framework to analyze the transactions on the
Ethereum blockchain and detect various attacks which exploit different
vulnerabilities therein. They replayed all transactions related to a
particular address and monitored the Ethereum Virtual Machine (EVM) state.
They then applied logic rules on the transactions to detect abnormal behavior
associated with a particular vulnerability, studying only the Suicidal,
In all the above-mentioned work, the authors did not distinguish between the
behaviors of different types of malicious activities that are present in
permissionless blockchains. Further, no discussion is provided on the effect
of adversarial data on their approaches. To the best of our knowledge, in the
field of blockchain security, the current work is the first that
studies data poisoning and evasion and tries to safeguard ML based detection
against adversarial attacks.
As adversarial data might not be present in the dataset, GAN [13] is one of
the most commonly used techniques to generate adversarial data for testing the
approach. Based on neural networks, GANs were originally proposed for use in
computer vision and machine translation, but over the years the
technique has gained popularity in various sub-domains of cyber-security such
as intrusion detection systems [7]. The architecture of GAN consists of two
basic components: a generator and a discriminator. A generator generates fake
data that has similar characteristics as the original data. The fake generated
data along with the real data is passed to the discriminator which
discriminates the input data and identifies if it is real or fake. Both the
generator and the discriminator are trained iteratively over the dataset. Over
time, the generator becomes more intelligent, making it hard for the discriminator
to correctly classify the real and the fake data. There are many variants of
GANs available (see the GAN Zoo: https://github.com/hindupuravinash/the-gan-zoo).
We do not describe all the techniques and variants of GAN, as this is out of
the scope of this work.
However, here, we only describe CTGan [28] that we use to generate fake data
for every malicious activity represented by accounts in our dataset. Our
choice of CTGan is based on the fact that CTGan is able to generate tabular
data. In CTGan, the generator and discriminator models contain 2 fully
connected hidden layers, which capture correlations in the
feature vectors given to them. The generator model uses the Rectified Linear
Unit (ReLU) activation function along with batch normalization to
reduce over-fitting. ReLU is defined as the positive part of the argument
passed to activate the neuron. The discriminator has leaky ReLU activation
along with dropout regularization [25] implemented in each hidden layer. When
using CTGan for data generation, for the best results, the authors recommend
the number of epochs (where one epoch represents single iteration over the
dataset) to be greater than 300. One limitation of CTGan is that the generator
model needs at least 10 feature vectors to generate adversarial data.
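To make the generator/discriminator split concrete, below is a shapes-only NumPy sketch of the two-hidden-layer architecture described above. This is an illustration of the structure, not the actual CTGan implementation: real CTGan additionally handles mixed discrete/continuous tabular columns, the training loop, batch normalization, and dropout, none of which appear here. The layer width and weight scale are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):                 # activation used in the generator
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.2):    # activation used in the discriminator
    return np.where(x > 0, x, a * x)

class ToyGAN:
    """Shapes-only sketch of a generator/discriminator pair, each with
    two fully connected hidden layers. Purely illustrative: no training."""

    def __init__(self, noise_dim, feature_dim, hidden=16):
        self.noise_dim = noise_dim
        # generator: noise -> hidden -> hidden -> fake feature vector
        self.gw = [rng.normal(scale=0.1, size=s) for s in
                   [(noise_dim, hidden), (hidden, hidden), (hidden, feature_dim)]]
        # discriminator: feature vector -> hidden -> hidden -> P(real)
        self.dw = [rng.normal(scale=0.1, size=s) for s in
                   [(feature_dim, hidden), (hidden, hidden), (hidden, 1)]]

    def generate(self, n):
        z = rng.normal(size=(n, self.noise_dim))
        h = relu(relu(z @ self.gw[0]) @ self.gw[1])
        return h @ self.gw[2]                            # fake feature vectors

    def discriminate(self, x):
        h = leaky_relu(leaky_relu(x @ self.dw[0]) @ self.dw[1])
        return 1.0 / (1.0 + np.exp(-(h @ self.dw[2])))   # sigmoid score
```

During training, fake samples from `generate` and real feature vectors would both be scored by `discriminate`, with the two networks updated alternately; here, a `feature_dim` of 59 would match the feature vectors discussed earlier.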
## 3 Our Approach
In this section, we describe our three-fold approach towards answering our
research questions, in detail.
### 3.1 Computing similarity amongst malicious accounts
We compute the cosine similarity measure amongst accounts attributed to different
known malicious activities to understand whether the malicious activities have
similar behavior. We acknowledge that there are other methods to quantify
similarity, but in this work we use cosine similarity as it is widely adopted.
As the accounts are associated with a specific type of malicious activity,
besides computing the individual similarity, we compute and analyse pairwise
cosine similarity among the accounts associated with malicious activities.
Assume a dataset $D_{a}$ of malicious and benign accounts in a permissionless
blockchain. We assume that each account is attributed to one and only one
malicious activity. In reality, an account can have multiple associations.
Consider two malicious activities, $M_{1}$ and $M_{2}$, from a set of
malicious activities $M$ that have set of accounts $A_{M_{1}}$ and $A_{M_{2}}$
associated with them, respectively. We compute cosine similarity ($CS_{i,j}$)
such that $i\in A_{M_{1}}$, $j\in A_{M_{2}}$ and $A_{M_{1}}\bigcap
A_{M_{2}}=\emptyset$ and then identify the probability of it being more than
or equal to 0 ($p(CS_{i,j}\geq 0)$). If for all $i$ and $j$, $CS_{i,j}\geq 0$
then we say that the two malicious activities, $M_{1}$ and $M_{2}$, are
similar.
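As a hedged sketch of this pairwise computation (plain Python; the feature vectors are hypothetical placeholders for the per-account features, assumed nonzero), $CS_{i,j}$ and the empirical estimate of $p(CS_{i,j}\geq 0)$ might look like:

```python
import math

def cosine_similarity(u, v):
    # CS(u, v) = (u . v) / (||u|| ||v||); vectors assumed nonzero
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pairwise_similarity_fraction(accounts_m1, accounts_m2):
    """Fraction of pairs (i, j), i in A_M1, j in A_M2, with CS(i, j) >= 0,
    i.e. an empirical estimate of p(CS_{i,j} >= 0)."""
    pairs = [(u, v) for u in accounts_m1 for v in accounts_m2]
    hits = sum(1 for u, v in pairs if cosine_similarity(u, v) >= 0)
    return hits / len(pairs)
```

Under the criterion above, two activities whose `pairwise_similarity_fraction` equals 1.0 would be declared similar.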
Then, we use a clustering algorithm with the motivation that accounts which are
associated with the same malicious activity would cluster together and show
homophily [10]. We use the K-Means algorithm to validate whether similarity
indeed exists between the two malicious activities. Here, we assume an upper limit
on $k$ (hyperparameter for K-Means) and use $k=||M||+1$. Note that $||M||$
represents the size of the set of different malicious activities, i.e., the
number of different activities under consideration and $+1$ part represents
the benign cluster. However, our results show that most of the accounts, even
if they are associated with different malicious activities, cluster together.
Note that in terms of the number of clusters found, the best case scenario
would have been malicious accounts associated with different malicious
activities cluster separately and the worst case would be all malicious
accounts, irrespective of their associated malicious activity, cluster
together.
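A minimal sketch of this clustering step, assuming scikit-learn is available (the feature vectors and activity names below are illustrative placeholders, not the paper's data):

```python
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

def cluster_accounts(malicious_features, malicious_activities,
                     benign_features, seed=0):
    """Cluster all account feature vectors with k = ||M|| + 1 (one cluster
    per malicious activity plus one for benign accounts) and report how the
    accounts of each activity spread over the clusters found."""
    k = len(set(malicious_activities)) + 1
    X = np.vstack([malicious_features, benign_features])
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    mal_labels = km.labels_[: len(malicious_features)]
    spread = {}
    for activity, cluster in zip(malicious_activities, mal_labels):
        spread.setdefault(activity, Counter())[cluster] += 1
    return spread
```

In the best case each activity's counter concentrates in its own cluster; in the worst case (which the results above suggest) the counters of different activities pile into the same cluster.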
### 3.2 Bias Analysis
The distribution of the number of accounts associated with each $M_{i}\in M$
is not uniform. This increases the sensitivity of the ML algorithm towards the
$M_{i}\in M$ that is more prominent, i.e., has more number of associated
accounts. Thereby, they induce a bias in the selected model towards $M_{i}$
that has the most number of associated accounts. To understand the effect of
the number of malicious accounts attributed to a particular $M_{i}$ on the ML
algorithm, we segment $D_{a}$ into different training and testing sub-datasets
and use them to train and test ML algorithms. Let the set of different
training and testing sub-datasets be $C=\\{C_{0},$ $C_{1},\cdots,$ $C_{n}\\}$
where each element $C_{i}$ represent a specific split of $D_{a}$. Let
$Tr^{C_{i}}$ denote the training dataset, which contains 80% of randomly
selected accounts from $C_{i}$ and $Ts^{C_{i}}$ denote the test dataset, which
contains the remaining 20% accounts. The different $C_{i}$’s we use are:
* •
Null model or $C_{0}$: This is our baseline sub-dataset. Here we do not
distinguish between the types of malicious activities rather only consider if
an account is malicious or not. Note that here based on above notations, the
training dataset is represented as $Tr^{C_{0}}$ and the testing dataset as
$Ts^{C_{0}}$.
Let ${A}_{M_{1}}^{S_{0}}$ represent the set of accounts associated with
$M_{1}$ type malicious activity in the testing dataset, $Ts^{C_{0}}$. As our
aim here is to analyze the bias caused due to a malicious activity, for
example $M_{1}$, we analyse the results obtained when training and testing the
ML algorithm using different combinations of accounts associated with $M_{1}$
activity. For instance, we analyse the performance of an ML algorithm when
accounts associated with $M_{1}$ are not present in the training dataset but are
present in the testing dataset. Below we list all such combinations that we use:
* •
$C_{1}$: Here we train on $Tr^{C_{0}}$ but test on $Ts^{C_{1}}$ where
$Ts^{C_{1}}=Ts^{C_{0}}-{A}_{M_{1}}^{S_{0}}$, i.e., we train on the 80% of the
dataset, but we remove all the accounts associated with activity $M_{1}$ from
the testing dataset. Ideally, in this case, ML algorithm should perform
similar to $C_{0}$ since the training dataset is same.
* •
$C_{2}$: Here, we train on $Tr^{C_{2}}=Tr^{C_{0}}-{A}_{M_{1}}^{S_{0}}$ but
test on $Ts^{C_{2}}$, which is the same as $Ts^{C_{0}}$, i.e., we remove all
the accounts associated with activity $M_{1}$ from the training dataset but
keep them in the testing dataset. Ideally, in this case, the ML algorithm
should misclassify the accounts associated with $M_{1}$ that are present in
$Ts^{C_{2}}$. If contrary results are obtained, it would indicate a bias.
* •
$C_{3}$: Here, we train on $Tr^{C_{2}}$ and test on $Ts^{C_{3}}$, which is the
same as $Ts^{C_{1}}$, i.e., we remove all the accounts associated with activity
$M_{1}$ from both the training and the testing dataset. Ideally, in this case,
the ML algorithm should perform similarly to $C_{0}$ since no accounts
associated with the $M_{1}$ activity are present in $Tr^{C_{2}}$ or
$Ts^{C_{3}}$. If contrary results are obtained, it would indicate a bias.
* •
$C_{4}$: Here, we train on $Tr^{C_{4}}=Tr^{C_{0}}+{A}_{M_{1}}^{S_{0}}$ and
test on $Ts^{C_{4}}$, which is the same as $Ts^{C_{1}}$, i.e., we remove all
the accounts associated with activity $M_{1}$ from the testing dataset and add
them to the training dataset. Ideally, in this case, the ML algorithm should
perform similarly to $C_{1}$ since no accounts associated with the $M_{1}$
activity are present in $Ts^{C_{4}}$.
Note that the above four configurations cover all the distinct scenarios
required to understand the effect of a malicious activity, $M_{1}$,
on the ML algorithm. For the sake of completeness, we also consider the
following training and testing sub-dataset:
* •
$C_{5}$: Here, we perform an 80-20 split of the accounts associated
with each malicious activity in $M$. We then combine all these 80% data-splits
along with 80% of the benign accounts to create the training dataset,
$Tr^{C_{5}}$. Similarly, we combine all the remaining 20% splits to create the
testing dataset, $Ts^{C_{5}}$. We then train on the resulting $80\%$ of the
dataset and test on the remaining $20\%$. Ideally, in this case, the ML
algorithm should perform similarly to $C_{0}$.
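For concreteness, the construction of these sub-datasets can be sketched as follows. This is a minimal illustration, assuming the feature vectors are held in a pandas DataFrame with an `activity` column (the function and column names are ours, not from the code of [1]):

```python
import pandas as pd

def make_splits(df, target_activity="Phishing", seed=0):
    """Sketch of the C0..C5 sub-dataset construction.

    df: feature-vector DataFrame with an 'activity' column holding the
    malicious-activity label (or 'benign' for benign accounts).
    """
    # C0 (null model): plain 80-20 split, ignoring activity types.
    tr0 = df.sample(frac=0.8, random_state=seed)
    ts0 = df.drop(tr0.index)

    # A_{M1}^{S0}: accounts of the studied activity in the C0 test set.
    a_m1 = ts0[ts0["activity"] == target_activity]

    splits = {
        "C0": (tr0, ts0),
        # C1: same training set, M1 accounts dropped from the test set.
        "C1": (tr0, ts0.drop(a_m1.index)),
        # C2: M1 accounts dropped from training, full C0 test set
        # (following the textual description: remove ALL M1 rows).
        "C2": (tr0[tr0["activity"] != target_activity], ts0),
        # C3: M1 dropped from both training and test sets.
        "C3": (tr0[tr0["activity"] != target_activity],
               ts0.drop(a_m1.index)),
        # C4: M1 test accounts moved into the training set.
        "C4": (pd.concat([tr0, a_m1]), ts0.drop(a_m1.index)),
    }
    # C5: a separate 80-20 split per activity (benign included),
    # recombined into one training and one testing set.
    tr5 = df.groupby("activity", group_keys=False).apply(
        lambda g: g.sample(frac=0.8, random_state=seed))
    splits["C5"] = (tr5, df.drop(tr5.index))
    return splits
```

Note that for $C_{2}$ and $C_{3}$ the sketch follows the prose ("remove all the accounts associated with activity $M_{1}$ from the training dataset") rather than the set notation, which subtracts only the test-set accounts.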
Among the supervised ML algorithms, the authors of [1] reported that the
ExtraTrees Classifier (ETC) performs best on their data. We use ETC
with the identified hyperparameters on the above-mentioned sub-datasets to
identify the bias induced by a malicious activity. Further, as mentioned
before, it is possible that a classifier trained with ETC might fail to capture
new or adversarial data. Thus, we also apply different supervised ML
algorithms to $C_{0}$ and identify the algorithm that achieves the best recall
on the entire malicious class. We then execute the best-performing ML
algorithm on the above-mentioned sub-datasets. To test the performance of
the different supervised ML algorithms on new data, we collect more recent
malicious-account transaction data ($D_{b}$) and execute the identified
algorithms on it.
### 3.3 Adversarial Analysis
The newly collected data ($D_{b}$) shows low similarity with the existing
malicious activities, so it does not qualify as adversarial. We therefore use
CTGAN [28] to generate adversarial data for the malicious activities and use
this adversarial data ($D_{g}$) to validate our findings. Here, we use $D_{g}$
only in the test dataset, i.e., we train on $Tr^{C_{0}}$ while we
test on $Ts^{D_{g}}$, which includes $D_{g}$ and all the benign
accounts in $Ts^{C_{0}}$. Further, we also perform tests after training
different ML algorithms with 1%, 5%, and 80% of such adversarial feature
vectors added to the training dataset.
Figure 1: Overview of our methodology used towards R1.
Figure 2: Overview of our methodology used towards R2.
Figure 3: Overview of our methodology used towards R3.
In summary, Table 2 lists the notations we use. Note that we first generate
the feature-vector dataset using the same approach as the authors of [1].
Then, towards methodology one (R1), we compute the cosine similarity among the
malicious accounts in $D_{a}$ to understand how similar they are. We also
apply unsupervised ML algorithms such as K-Means to the malicious accounts in
$D_{a}$ to identify whether malicious accounts cluster together (see
Figure 1). Towards methodology two (R2), we divide $D_{a}$ into different
training and testing sub-datasets, $C=\\{C_{0},$ $C_{1},\cdots,$ $C_{5}\\}$,
and execute different supervised ML algorithms to understand the bias induced
by a particular malicious activity and to measure the performance of the
reported classifier on the transaction data made available until 27th May 2020
(see Figure 2). Towards methodology three (R3), we first filter out all the
malicious accounts from $D_{a}$ and use CTGAN separately on each malicious
activity to generate adversarial data ($D_{g}$) for every activity where
we have more than 10 accounts. Note that there are two ways in which we can
generate adversarial data: (1) from the feature vectors of malicious accounts,
or (2) from the feature vectors of benign accounts. These correspond to (1)
malicious accounts evading detection by the ML algorithm, and (2) benign
accounts being misclassified as malicious. In this work, we generate
adversarial data from the malicious accounts. Further, for CTGAN, we use the
default parameters to synthesize the data and use 1000 epochs (an epoch
represents one iteration over the dataset) for fitting. We then compare the
performance of various state-of-the-art supervised ML algorithms on $D_{g}$
(see Figure 3).
Notation | Meaning
---|---
$D_{a}$ | Dataset
$M$ | Set of malicious activities present in $D_{a}$
$M_{i}$ | An element of $M$
$A_{M_{i}}$ | Set of accounts in $D_{a}$ associated with $M_{i}$
$CS_{i,j}$ | Cosine similarity between accounts $i$ and $j$
$p(CS_{i,j}\geq 0)$ | Probability that $CS_{i,j}\geq 0$
$k$ | Hyperparameter of the K-Means algorithm
$C$ | Set of sub-datasets of $D_{a}$
$C_{i}$ | An element of $C$
$Tr^{C_{i}}$ | Training dataset created from $C_{i}$
$Ts^{C_{i}}$ | Testing dataset created from $C_{i}$
$A_{M_{1}}^{S_{0}}$ | Set of accounts associated with $M_{1}$ in $Ts^{C_{0}}$
$D_{b}$ | New malicious accounts dataset
$D_{g}$ | Adversarial dataset
$Ts^{D_{g}}$ | Testing dataset created from $D_{g}$ and the benign accounts in $Ts^{C_{0}}$
$F$ | Feature vector
Table 2: List of notations used in our work.
## 4 Evaluation
In this section, we first present the data we use and then provide a detailed
analysis of our results.
### 4.1 Data Used
We use the external transaction data present in the Ethereum main-net
blockchain. Ethereum [6] is one of the most widely adopted permissionless
blockchain platforms. It uses a Proof-of-Work (PoW) consensus mechanism to
validate the transactions of its users. Ethereum provides users with the
functionality to deploy additional programs called smart contracts, which can
be used to control the flow of Ethers. In Ethereum, EOAs (Externally Owned
Accounts) are accounts/wallets owned by a real entity or person, where the
hash of the public key of the owner is the address of the EOA. SCs (Smart
Contracts), on the other hand, are similar to EOAs, except that they contain
code to automate certain tasks, such as sending and receiving Ethers and
invoking, creating, and destroying other smart contracts when needed. SCs can
be created and invoked both by EOAs and by other SCs. There are two types of
transactions on the Ethereum blockchain: external and internal. External
transactions occur between different EOAs and between EOAs and SCs, and they
are recorded on the blockchain ledger. Internal transactions are created by
and occur between different SCs and are not recorded on the blockchain ledger.
Further, SCs can execute external calls, which are then recorded on the
blockchain ledger. An external transaction typically contains the blockHash
(hash of the block in which the transaction is present), blockNumber (another
identifier of the block in which the transaction is present), the account from
which the transaction is invoked, the account to which the transfer was made,
gas (the amount the account in the ‘from’ field of the transaction is willing
to pay the miners to include the transaction in the block), gasPrice (cost per
unit of gas), the transaction hash, balance (Ethers transferred), and the
timestamp of the block. Such transactions are then grouped into blocks before
being published on the blockchain.
We use the Etherscan API [11] to obtain the transaction data of $2946$
malicious accounts that were marked until 7th December 2019. As there were
more than $117$ million benign accounts, we perform under-sampling to obtain
the external transactions of $680,314$ benign accounts. These 680,314
benign accounts and 2946 malicious accounts collectively constitute our
dataset $D_{a}$ (defined in Section 3). Since, using a similar approach, the
authors of [1] obtained good results on a similar dataset by including
temporal features, we use the same set of $59$ features ($F$) in our work. In
short, these $59$ features can be classified as: (a) temporal graph-based
features, which include indegree, outdegree, attractiveness, inter-event time
burst, degree burst, and clustering coefficient, and (b) transaction-based
features, which include activeness, crypto-currency balance, and fee paid.
For each malicious account, Etherscan provides a caution warning message so
that other accounts can exercise caution while transacting with it. Besides
such warning messages, Etherscan also provides information about the
malicious activity the account is involved in. These activities are: Phishing
($2590$ accounts), Scamming ($168$ accounts), Compromised ($21$ accounts),
Upbit Hack ($123$ accounts), Heist ($13$ accounts), Gambling ($8$ accounts),
Spam Token ($10$ accounts), Suspicious ($4$ accounts), Cryptopia Hack ($3$
accounts), EtherDelta Hack ($1$ account), Scam ($1$ account), Fake ICO ($2$
accounts), Unsafe ($1$ account), and Bugs ($2$ accounts). Thus, in the
dataset, there are in total $14$ different types of malicious activities. For
our different training and testing sub-datasets, we currently focus only on
‘Phishing’ as it is the most prominent malicious activity in our dataset.
Further, note that all the ‘Bitpoint Hack’ accounts were also tagged under
‘Heist’; therefore, we only use ‘Heist’. In $D_{a}$, we observe that $101$
unique EOAs created $153$ different malicious SCs. These EOAs are not marked
as malicious in the dataset. Most of the EOAs created only one malicious SC,
while one EOA created $15$ malicious SCs. There are only $3$ SCs that were
created by $3$ different SCs, which in turn were created by $2$ different
EOAs. However, we refrain from adding these accounts to our analysis so as not
to change the ground truth. We do not reveal the identities of these accounts
so as not to prejudice any of their future transactions.
As this list is dynamic, between 8th December 2019 and 27th May 2020, 1249
more accounts were identified as malicious by Etherscan. In addition, 487 of
the 2946 previously identified malicious accounts continued to transact until
27th May 2020. These 1736 malicious accounts in total constitute our dataset
$D_{b}$ (defined in Section 3). The accounts in $D_{b}$ are associated with:
Upbit Hack ($691$ accounts), Parity ($131$ accounts), Phishing ($842$
accounts), Ponzi ($38$ accounts), Gambling ($28$ accounts), Compromised ($1$
account), Unknown ($2$ accounts), Lendf.Me Hack ($2$ accounts), Plus Token
Scam ($1$ account), and Heist ($1$ account). We again notice that, in this
case also, some accounts have more than one label associated with them. To
simplify our process, we associate each of these accounts with only one type
of malicious activity. When analyzing $D_{b}$, we remove from $Tr^{C_{0}}$ all
the accounts that are common to $D_{a}$ and $D_{b}$, move them to
$Ts^{C_{0}}$, and retrain the different supervised ML algorithms to measure
their performance.
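The last step above, moving the accounts that reappear in $D_{b}$ out of the training set, can be sketched as follows; the DataFrame-indexed-by-address layout and the function name are our illustrative assumptions:

```python
import pandas as pd

def rebalance_for_new_data(tr_c0, ts_c0, db_accounts):
    """Move accounts that reappear in D_b from Tr^{C0} to Ts^{C0},
    so the retrained model is never trained on accounts it will
    later be tested on.

    tr_c0, ts_c0: DataFrames indexed by account address.
    db_accounts: iterable of account addresses present in D_b.
    """
    common = tr_c0.index.intersection(pd.Index(db_accounts))
    ts_new = pd.concat([ts_c0, tr_c0.loc[common]])  # grow the test set
    tr_new = tr_c0.drop(common)                     # shrink training
    return tr_new, ts_new
```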
### 4.2 Results
All our results are averaged over 50 iterations on our dataset and are
generated using Python 3.
Figure 4: Cosine similarity between malicious accounts.
#### 4.2.1 R1: Similarity Analysis
Some of the malicious activities present in the dataset have similar
definitions. We calculate the cosine similarity score between all the
malicious accounts present in $D_{a}$ to quantify the similarity between them
(see Figure 4). From the figure, we observe that in many cases $CS_{i,j}<0$,
as expected, because the accounts potentially belong to different malicious
activities. Nonetheless, we also observe that some pairs of accounts have high
similarity. There are two reasons for such high similarity: (1) the accounts
could actually belong to the same malicious activity, or (2) although the
accounts represent the same malicious activity, in Ethereum they might be
marked differently, i.e., they have been marked for malicious activities that
have similar descriptions. To understand this further, we check all the
accounts associated with each pair of malicious activities and mark the two
activities as similar if all the accounts of one type of malicious activity
have a cosine similarity greater than 0 with all the malicious accounts of the
other type. Figure 5 depicts the probabilities $p(CS_{i,j}<0)$ (see Figure 5a)
and $p(CS_{i,j}\geq 0)$ (see Figure 5b) between all the accounts related to
two malicious activities. Note that the two figures are complementary, as
$p(CS_{i,j}<0)=1-p(CS_{i,j}\geq 0)$. From the figures, we notice that many
activities show a high probability of similarity with other malicious
activities, for example, ‘Bugs’ and ‘Unsafe’. There are $\approx 158$ pairs of
malicious activities where $p(CS_{i,j}\geq 0)>0.5$. Further, we also notice
that within ‘Phishing’, although most accounts are similar, some accounts show
dissimilarity.
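The pairwise-similarity statistic described above can be sketched with scikit-learn as follows (an illustrative snippet; the function name is ours):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def activity_pair_similarity(X_a, X_b):
    """Fraction of cross pairs with non-negative cosine similarity,
    i.e. an empirical p(CS_{i,j} >= 0) for two malicious activities.

    X_a, X_b: (n_a x d) and (n_b x d) arrays of feature vectors for
    the accounts of the two activities.
    """
    cs = cosine_similarity(X_a, X_b)  # n_a x n_b similarity matrix
    return float((cs >= 0).mean())
```

The complementary quantity $p(CS_{i,j}<0)$ is then simply one minus this value, matching the relation noted for Figure 5.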
To further understand whether the accounts identified as similar show
homophily, we use the K-Means algorithm to analyze the clustering patterns of
all the malicious accounts present in the dataset $D_{a}$. Here, we use
$k=15$. We see that most of the malicious accounts, irrespective of the
malicious activity they belong to, fall into at most 3 clusters, except for
the accounts that belong to the malicious activities ‘Heist’, ‘Compromised’,
and ‘Gambling’ (see Table 3). Further, in the cluster with the largest number
of malicious accounts (cluster #1), all malicious accounts clustered together
except for those associated with ‘Upbit Hack’ and ‘Spam Token’. Therefore, we
infer that different malicious activities in a blockchain such as Ethereum
behave in a similar manner. The same labels could be used to depict certain
malicious activities, such as ‘Phishing’ and ‘Fake ICO’. Currently, we do not
need 14 different labels, as most of the malicious activities can be captured
in at most 3 clusters.
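This clustering analysis can be sketched as follows; the tabulation mirrors the breakdown in Table 3, but the function name and layout are our own illustration:

```python
import pandas as pd
from sklearn.cluster import KMeans

def cluster_malicious(X, activity_labels, k=15, seed=0):
    """Cluster malicious feature vectors with K-Means and tabulate,
    per activity, how many accounts fall in each cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    clusters = km.fit_predict(X)
    # Rows: malicious activities; columns: cluster ids; cells: counts.
    return pd.crosstab(pd.Series(activity_labels, name="activity"),
                       pd.Series(clusters, name="cluster"))
```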
Tag Name | Total | Cluster 1 | Cluster 2 | Cluster 3
---|---|---|---|---
Phishing | 2590 | 1700 | 492 | 398
Upbit Hack$\dagger$ | 123 | 32 | 1 | 90
Scamming | 168 | 116 | 27 | 25
Heist$\ddagger$ | 13 | 9 | 1 | 2
Compromised$\ddagger$ | 21 | 17 | - | 1
Unsafe | 1 | 1 | - | -
Spam Token$\dagger$ | 10 | 1 | 3 | 6
Bugs | 2 | 2 | - | -
EtherDelta Hack | 1 | - | 1 | -
Cryptopia Hack | 3 | 2 | - | 1
Gambling$\ddagger$ | 8 | 7 | - | -
Suspicious | 4 | 3 | - | 1
Fake ICO | 2 | 2 | - | -
Scam | 1 | 1 | - | -
* $\dagger$
not well clustered in cluster 1; $\ddagger$ not well clustered across the 3 clusters
Table 3: Clustering of malicious accounts using K-Means. Here clusters are
ranked by the number of malicious accounts they contain.
(a) $p(CS_{i,j}<0)$
(b) $p(CS_{i,j}\geq 0)$
Figure 5: Probability that the similarity between accounts associated with two
malicious activities is less than 0 (a) or at least 0 (b).
#### 4.2.2 R2: Bias Analysis
Data until 07/12/2019.
Sub-dataset | ETC Recall (Mal) | ETC Recall (Ben) | NN Recall (Mal) | NN Recall (Ben)
---|---|---|---|---
$C_{0}$ | 0.70 | 0.99 | 0.83 | 0.94
$C_{1}$ | 0.76 | 0.99 | 0.86 | 0.89
$C_{2}$ | 0.28 | 0.99 | 0.79 | 0.93
$C_{3}$ | 0.59 | 0.99 | 0.83 | 0.97
$C_{4}$ | 0.79 | 0.99 | 0.87 | 0.90
$C_{5}$ | 0.70 | 0.99 | 0.88 | 0.91
* ETC
ExtraTreesClassifier(class_weight = ‘balanced’, criterion = ‘entropy’,
max_features = 0.3, max_samples = 0.3, min_samples_leaf = 14,
min_samples_split = 20, n_estimators = 200)
* NN
NeuralNetworks(epochs = 50, regularization = l2(0.0001), dropout = 0.5,
loss = ‘binary crossentropy’, optimizer = ‘adam’)
Table 4: Recall achieved using the ExtraTrees Classifier and a Neural Network
for the different sub-datasets. Here ‘Mal’ represents the malicious class and
‘Ben’ represents the benign class.
We first test ETC, which [1] reported to produce the best results, on the
different training and testing sub-datasets ($C_{i}$) to understand the bias
induced by the imbalance in the number of accounts associated with a
particular malicious activity. For ETC, we use the hyperparameters reported in
[1]: class_weight = ‘balanced’, criterion = ‘entropy’, max_features = 0.3,
max_samples = 0.3, min_samples_leaf = 14, min_samples_split = 20,
n_estimators = 200. All other hyperparameters are kept at their defaults.
Since, in our dataset, the largest number of malicious accounts is associated
with the ‘Phishing’ activity, we choose ‘Phishing’ for our study and configure
our training and testing sub-datasets accordingly. Note that this analysis is
also valid for other malicious activities. From Table 4, we observe that, for
$C_{2}$ and $C_{3}$, the recall on the accounts tagged as malicious
deteriorates significantly. For $C_{2}$, we expected such results because no
‘Phishing’ accounts are present in the training dataset ($Tr^{C_{2}}$) while
they are present in the test dataset ($Ts^{C_{0}}$). However, for $C_{3}$, the
recall on the malicious accounts was below expectation. This demonstrates the
existence of a bias in ETC towards ‘Phishing’. To understand which malicious
activities are affected by this bias, we study the confusion matrix and the
distribution of malicious accounts therein.
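With scikit-learn, the reported ETC configuration can be reproduced roughly as follows. Note that scikit-learn requires `bootstrap=True` for `max_samples` to take effect; [1] does not state this flag, so it is our assumption:

```python
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import recall_score

def train_etc(X_train, y_train, X_test, y_test):
    """ETC with the hyperparameters reported in [1]; everything else
    is left at the scikit-learn defaults."""
    etc = ExtraTreesClassifier(
        class_weight="balanced", criterion="entropy",
        max_features=0.3, max_samples=0.3,
        min_samples_leaf=14, min_samples_split=20,
        n_estimators=200,
        bootstrap=True,  # our assumption: required for max_samples
        random_state=0)
    etc.fit(X_train, y_train)
    # Per-class recall, in sorted label order (benign=0, malicious=1).
    return recall_score(y_test, etc.predict(X_test), average=None)
```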
Activity | Total ($Ts^{C_{0}}$–$Ts^{C_{4}}$) | Total ($Ts^{C_{5}}$) | ETC: $Ts^{C_{0}}$ | $Ts^{C_{1}}$ | $Ts^{C_{2}}$ | $Ts^{C_{3}}$ | $Ts^{C_{4}}$ | $Ts^{C_{5}}$ | NN: $Ts^{C_{0}}$ | $Ts^{C_{1}}$ | $Ts^{C_{2}}$ | $Ts^{C_{3}}$ | $Ts^{C_{4}}$ | $Ts^{C_{5}}$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Phishing | 517 | 508 | 359 | - | 123 | - | - | 352 | 432 | - | 404 | - | - | 442
Upbit | 18 | 25 | 17 | 17 | 17 | 17 | 17 | 25 | 17 | 18 | 17 | 17 | 18 | 25
Scamming | 33 | 34 | 22 | 23 | 15$\dagger$ | 15$\dagger$ | 23 | 23 | 26 | 27 | 24$\ddagger$ | 24$\ddagger$ | 27 | 28
Heist | 1 | 3 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2
Compro.. | 5 | 5 | 5 | 5 | 4 | 4 | 5 | 4 | 2 | 2 | 5 | 4 | 4 | 5
Unsafe | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1
Spam T.. | 2 | 2 | 0 | 0 | 0 | 0 | 2 | 1 | 0 | 2 | 2 | 2 | 2 | 2
Bugs | - | 1 | - | - | - | - | - | 1 | - | - | - | - | - | 1
EtherDelta | - | 1 | - | - | - | - | - | 0 | - | - | - | - | - | 1
Cryptopia | - | 1 | - | - | - | - | - | 1 | - | - | - | - | - | 1
Gambling | 3 | 2 | 2 | 2 | 2 | 1 | 2 | 0 | 2 | 2 | 2 | 2 | 1 | 2
Suspicious | - | 1 | - | - | - | - | - | 0 | - | - | - | - | - | 1
Fake ICO | 1 | 1 | 1 | 1 | 0 $\bullet$ | 0 $\bullet$ | 1 | 1 | 1 | 1 | 1 $\odot$ | 1 $\odot$ | 1 | 1
* •
$\dagger$ the number of ‘Scamming’ accounts correctly classified by ETC is
lower; $\ddagger$ the number of ‘Scamming’ accounts correctly classified by NN
is higher
* •
$\bullet$ no ‘Fake ICO’ account is correctly classified; $\odot$ NN classifies
the accounts associated with ‘Fake ICO’ correctly
* •
The ‘-’ symbol indicates that accounts associated with the particular
malicious activity were not present in our test set
* •
‘Spam T..’ represents accounts associated with ‘Spam Token’; ‘Compro..’
represents accounts associated with ‘Compromised’
Table 5: Malicious-activity-based analysis of the different confusion matrices
for ETC and NN. Note that the ‘Scam’ activity is not present here, as its only
associated account was selected into the training dataset.
From Table 5, we observe that, for $C_{2}$, although only accounts attributed
to the ‘Phishing’ activity are removed from the training dataset, more than
50% of the accounts attributed to ‘Scamming’ are misclassified. The same
results are observed for $C_{3}$, where accounts associated with ‘Phishing’
are removed from both the training dataset ($Tr^{C_{2}}$) and the test dataset
($Ts^{C_{3}}$). However, if ‘Phishing’-related accounts are present in the
training dataset, as in $C_{0}$, $C_{1}$, and $C_{4}$, fewer accounts tagged
as ‘Scamming’ are misclassified. A similar observation is made for the
accounts associated with ‘Fake ICO’. This is consistent with the results of
the previous subsection (R1), where we see that ‘Phishing’- and
‘Scamming’-based accounts show high similarity, and it validates the results
we obtain using the different configurations.
ML | $Ts^{C_{0}}$ | $D_{b}$ | $D_{g}$
---|---|---|---
algorithm | Mal | Ben | Mal | Ben | Mal | Ben
RF | 0.66 | 0.99 | 0.0 | 0.99 | 0.54 | 0.99
XG Boost | 0.70 | 0.99 | 0.003 | 0.99 | 0.60 | 0.99
DT | 0.71 | 0.99 | 0.04 | 0.98 | 0.67 | 0.99
AdaBoost | 0.51 | 0.99 | 0.001 | 0.99 | 0.56 | 0.99
Grad. Boost. | 0.72 | 0.99 | 0.006 | 0.99 | 0.38 | 0.99
ETC | 0.70 | 0.99 | 0.0 | 0.99 | 0.70 | 0.99
NN | 0.83 | 0.94 | 0.25 | 0.89 | 0.95 | 0.90
Table 6: Recall scores obtained when different ML algorithms are trained on
$Tr^{C_{0}}$. Here ‘Mal’ represents the malicious class, ‘Ben’ the benign
class, ‘RF’ the RandomForest classifier, ‘DT’ Decision Trees, ‘Grad. Boost.’
Gradient Boosting, ‘ETC’ the ExtraTrees classifier, and ‘NN’ Neural Networks.
Since ETC shows bias, we execute different supervised ML algorithms on the
different training and test sub-datasets created from $D_{a}$. Here, on the
features we have, we study the ML algorithms that were used in related work.
Table 6 depicts the recall obtained when the different supervised ML
algorithms are applied to $C_{0}$. We keep the hyperparameters of the
supervised ML algorithms either at their defaults or at the values reported in
the related work. Apart from the supervised ML algorithms specified in
Table 6, we also test the performance of a GCN using our features on the
dataset $D_{a}$; it achieves a recall score of 32.1%. From this set of ML
algorithms, we identify that the Neural Network (NN) (with hyperparameters:
epochs = 50, regularization = l2(0.0001), dropout = 0.5, loss = ‘binary
crossentropy’, optimizer = ‘adam’) performs best on $C_{0}$ and achieves a
recall of $0.83$ on the malicious class. Although the recall on the benign
class drops to $0.94$, the resulting balanced accuracy (the average recall
over all classes) is still high at $88.5\%$. To understand whether NN is also
biased towards ‘Phishing’, we execute NN on the different training and test
sub-datasets. We find that NN has better recall on $C_{2}$ and $C_{3}$ as well
(see Table 4). Further, we find that NN is able to correctly classify most of
the accounts associated with the different malicious activities irrespective
of the bias (see Table 5).
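The balanced accuracy used here is simply the unweighted mean of the per-class recalls; for NN's reported recalls on $C_{0}$ this gives 88.5%:

```python
def balanced_accuracy(per_class_recall):
    """Balanced accuracy: the unweighted mean of per-class recalls."""
    return sum(per_class_recall) / len(per_class_recall)

# NN recalls reported on C0: 0.83 (malicious) and 0.94 (benign).
print(balanced_accuracy([0.83, 0.94]))  # ~0.885, i.e. 88.5%
```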
Next, we perform experiments to understand which supervised ML algorithm
performs best when new data on already known malicious activities is
presented. From Figure 6, we find that the similarity between the malicious
accounts in $Ts^{C_{0}}$ and those in $D_{b}$ is low. We then test the
performance of the different supervised ML algorithms using $D_{b}$ to
understand whether these new malicious accounts are classified correctly by
the already trained supervised ML algorithms, i.e., we train on $Tr^{C_{0}}$
and test on all the benign accounts in $Ts^{C_{0}}$ together with all the
malicious accounts in $D_{b}$. Table 6 also presents the recall scores
obtained in this scenario. Here, we again observe that the Neural Network
performs better than the rest of the supervised ML algorithms and is able to
correctly classify more than $434$ of the $1736$ malicious accounts. Thus, we
get further confirmation that NN performs best and is not affected by the bias
caused by the high number of accounts associated with the ‘Phishing’ malicious
activity in the Ethereum transaction data.
#### 4.2.3 R3: Adversarial Analysis
Next, to answer our other research question and test the effect of adversarial
data on the supervised ML algorithms, we first generate adversarial data from
the accounts attributed to particular malicious activities and then test
various supervised ML algorithms on the generated dataset ($D_{g}$). For each
malicious activity with a sufficiently large number of associated accounts, we
generate $1000$ adversarial samples; for moderately represented malicious
activities, we generate 50 adversarial samples. Since all the other malicious
activities had fewer than 10 accounts, we did not generate adversarial
accounts for them. Thus, in $D_{g}$, the distribution of accounts associated
with the different malicious activities is as follows: Phishing ($1000$
accounts), Scamming ($1000$ accounts), Upbit Hack ($1000$ accounts), Spam
Token ($50$ accounts), Compromised ($50$ accounts), and Heist ($50$ accounts).
The number of accounts generated for each malicious activity is in accordance
with the number of malicious accounts available for that activity: as
‘Phishing’, ‘Scamming’, and ‘Upbit Hack’ had more accounts, we generated 1000
adversarial feature vectors for each. This number was chosen based on the
computational power available to us.
As expected, $D_{g}$ has high similarity with the source dataset (see Figure
7a) and can be considered a dataset meant to evade state-of-the-art ML
algorithms. From Figure 7a, we note that some pairs of accounts have low
similarity. This is expected, as Figure 7a shows the similarity between all
malicious accounts, which belong to different malicious activities.
Nonetheless, if we consider the similarity just between the accounts
associated with the ‘Phishing’ activity in $D_{a}$ and in $D_{g}$, we observe
a similar behavior (see Figure 7b). Some accounts in $D_{g}$ show similarity
as low as $-0.84$. This is because some malicious accounts associated with
‘Phishing’ in $D_{a}$ had low similarities amongst themselves (see Figure 5a);
since the accounts in $D_{g}$ are based on the accounts in $D_{a}$, this
dataset shows the same behavior.
In this scenario, we observe that the NN algorithm again achieves the best
recall ($>94.8\%$) on the malicious class among the set of supervised ML
algorithms (see Table 6). Compared with the case when no adversarial data was
presented, the previously identified ML algorithms did not perform well: their
recall is at most $70.1\%$. For the GCN, however, as $D_{g}$ is a generated
feature-vector dataset, we did not have an associated graph and therefore
could not perform any tests on it. We infer that the identified NN model is
more robust and is able to capture adversarial data/attacks. Note that here we
did not train NN on any adversarial feature vectors.
We expect to achieve better results when some adversarial feature vectors are
provided during training. To test this effect, we use three different
training-dataset configurations: (1) include 1% randomly selected accounts
from $D_{g}$, (2) include 5% randomly selected accounts from $D_{g}$, and (3)
include 80% of the feature vectors of all the malicious activities in
$D_{g}$. For the sake of completeness, we test the supervised ML algorithms
used in related work. Table 7 provides the recall on both malicious and benign
accounts when the different algorithms are trained on the above-mentioned
three configurations. We note that RandomForest, with the default
hyperparameter values, performs best in all three cases. NN does perform well,
but the recall on both the malicious and benign classes is better for the
RandomForest classifier. Further, note that although the recall on the
malicious class is lower when 80% of the feature vectors of all the malicious
activities in $D_{g}$ are used, the balanced accuracy is better than in the
other two cases.
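The adversarial-augmentation setup can be sketched as follows (an illustration under our own naming; adversarial rows are labelled malicious, and RandomForest is left at its scikit-learn defaults, as in the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_with_adversarial(X_tr, y_tr, X_adv, fraction, seed=0):
    """Retrain after adding a fraction (1%, 5%, or 80%) of adversarial
    feature vectors to the training set. Adversarial rows are labelled
    malicious (1)."""
    rng = np.random.default_rng(seed)
    n = int(round(fraction * len(X_adv)))
    idx = rng.choice(len(X_adv), size=n, replace=False)
    X_aug = np.vstack([X_tr, X_adv[idx]])
    y_aug = np.concatenate([np.asarray(y_tr), np.ones(n)])
    clf = RandomForestClassifier(random_state=seed)  # default params
    return clf.fit(X_aug, y_aug)
```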
Figure 6: Cosine similarity between malicious accounts present in $D_{b}$ and
$Ts^{C_{0}}$.
(a) Cosine similarity between malicious accounts in $D_{a}$ and those in
$D_{g}$
(b) Cosine similarity between phishing accounts in $D_{a}$ and those in
$D_{g}$
Figure 7: Cosine Similarity Scores.
## 5 Conclusion and Discussions
With the rise in their adoption, permissionless blockchains are often victims
of various cyber-attacks and scams, and hosts to ransom payments. Most of the
security threats and malicious activities in permissionless blockchains
exploit vulnerabilities exposed by the platform, the network architecture, or
programming bugs, and some are socially engineered. Currently, different ML
algorithms are used in state-of-the-art techniques to detect such exploits.
However, these techniques have limitations, as they do not distinguish between
the different types of exploits.
ML | 1% | 5% | 80% of all
---|---|---|---
algorithm | Mal | Ben | Mal | Ben | Mal | Ben
RF | 0.94 | 0.99 | 0.99 | 0.99 | 1.00 | 0.99
XG Boost | 0.81 | 0.99 | 0.96 | 0.99 | 1.00 | 0.99
DT | 0.90 | 0.99 | 0.97 | 0.99 | 1.00 | 0.99
AdaBoost | 0.61 | 0.99 | 0.89 | 0.99 | 0.98 | 0.99
Grad. Boost. | 0.88 | 0.99 | 0.91 | 0.99 | 0.99 | 0.99
ETC | 0.80 | 0.99 | 0.93 | 0.99 | 1.00 | 0.99
NN | 0.92 | 0.96 | 0.97 | 0.87 | 0.93 | 0.96
Table 7: Recall scores obtained when different ML algorithms are trained with
adversarial feature vectors. Here ‘Mal’ represents the malicious class, ‘Ben’
the benign class, ‘RF’ the RandomForest classifier, ‘DT’ Decision Trees,
‘Grad. Boost.’ Gradient Boosting, ‘ETC’ the ExtraTrees classifier, and ‘NN’
Neural Networks.
In this paper, we first study the similarity among accounts to understand
whether the presented ML models are biased towards a specific malicious
activity. We find that most of the malicious activities can be grouped into
roughly 3 clusters and that existing ML models are biased towards the
malicious activity with the largest number of associated accounts. We then
find that NN, despite this bias, achieves much better recall on the malicious
class and is also able to detect adversarial attacks. Thus, in the future, if
an attacker is able to intelligently craft transactions that show adversarial
behavior, they can still be detected. We find that the NN is robust to such
adversarial attacks and is able to classify malicious accounts with high
recall. Further, when even 1% of adversarial inputs are added to the training
dataset, the performance of NN increases by 1.5%, reaching a balanced accuracy
of 94%. However, in this case the RandomForest classifier performs best among
the set of supervised ML algorithms, achieving a balanced accuracy of 96.5%.
Nonetheless, the performance of NN (or any other supervised algorithm) could
vary if new features are introduced. Further, we obtained the labels
representing malicious activities from Etherscan, which uses a crowd-sourced
mechanism where anyone can suggest a label name for an account (Name Tagging
in Etherscan: https://etherscan.io/contactus). This causes diversity in the
label names, where different label names could refer to the same malicious
activity; this raises questions about the correctness of the labels.
Nonetheless, one could use natural language processing (NLP) approaches to
identify similar label names and group them accordingly.
One future direction that we would like to pursue following this study is to
analyze the behavior of accounts associated with new malicious activities in
permissionless blockchains. For this, we would like to integrate ‘Active
Learning’ into our ML pipeline to perform real-time analysis of the accounts.
Since, in our study, we do not focus on internal transactions, we would also
like to use them to detect transaction-based anomalies such as those caused by
the BZX exploit (https://etherscan.io/accounts/label/bzx-exploit). Since
internal transactions are carried out by smart contracts, studying them would
help us better understand the transactional behavior of smart contracts, and
thus also detect attacks whose transactions utilize multiple SCs. As reported
before, we observed that, in most cases, the creator (either an EOA or an SC)
of an SC involved in malicious activity, as well as any SC created by such a
malicious SC, is not tagged as malicious. Therefore, we would like to analyze
such accounts as well in the future.
## Acknowledgement
We thank the authors of [1] for sharing with us the account hashes of all the
2946 malicious accounts until 7th December 2019, 680,314 benign accounts, and
1736 malicious accounts until 27th May 2020.
## Availability
All our code, along with the list of malicious accounts used, is available for reuse and testing upon request.
## References
* [1] R. Agarwal, S. Barve, and S. Shukla. Detecting malicious accounts in permissionless blockchains using temporal graph properties. arXiv, pages 1–27, July 2020.
* [2] I. Alarab, S. Prakoonwit, and M. Nacer. Competence of graph convolutional networks for anti-money laundering in bitcoin blockchain. In Proceedings of the 5th International Conference on Machine Learning Technologies, pages 23–27, Beijing, China, June 2020. ACM.
* [3] A. Alkhalifah, A. Ng, A. Kayes, J. Chowdhury, M. Alazab, and P. Watters. A Taxonomy of Blockchain Threats and Vulnerabilities. In Y. Maleh, M. Shojafar, M. Alazab, and I. Romdhani, editors, Blockchain for Cybersecurity and Privacy, pages 3–28. CRC Press, August 2020.
* [4] M. Bartoletti, B. Pes, and S. Serusi. Data Mining for Detecting Bitcoin Ponzi Schemes. In Crypto Valley Conference on Blockchain Technology, pages 75–84, Zug, June 2018.
* [5] Behind MLM. Plus token ponzi collapses, chinese media report $2.9 billion in losses, July 2019. (Accessed 04/10/2020) https://bit.ly/2SuLuSI.
* [6] Vitalik Buterin. Ethereum: A Next-Generation SmartContract and Decentralized Application Platform, July 2013. (Accessed 30/07/2020) https://ethereum.org/whitepaper/.
* [7] H. Chen and L. Jiang. Efficient GAN-based method for cyber-intrusion detection. arXiv, pages 1–6, July 2019.
* [8] H. Chen, M. Pendleton, L. Njilla, and S. Xu. A Survey on Ethereum Systems Security: Vulnerabilities, Attacks and Defenses. ACM Computing Surveys, 53(3):1–43, June 2020.
* [9] Z. Cheng, X. Hou, R. Li, Y. Zhou, X. Luo, J. Li, and K. Ren. Towards a First Step to Understand the Cryptocurrency Stealing Attack on Ethereum. In 22nd International Symposium on Research in Attacks, Intrusions and Defenses, pages 47–60, Beijing, September 2019. USENIX Association.
* [10] M. De Choudhury. Tie Formation on Twitter: Homophily and Structure of Egocentric Networks. In Third International Conference on Privacy, Security, Risk and Trust and Third International Conference on Social Computing, pages 465–470, Boston, October 2011. IEEE.
* [11] Etherscan. Ethereum Developer APIs, October 2020. (Accessed 09/10/2020) https://etherscan.io/apis.
* [12] Etherscan. Label Word Cloud, October 2020. (Accessed 09/10/2020) https://etherscan.io/labelcloud/.
* [13] I. Goodfellow, J. Abadie, M. Mirza, B. Xu, D. Farley, S. Ozair, A. Courville, and Y. Bengio. Generative Adversarial Nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, page 2672–2680, Montreal, Canada, December 2014.
* [14] Hacken. How to avoid a hack: Cryptopia ‘success’ case, February 2019. (Accessed 04/10/2020) https://hacken.io/research/researches-and-investigations/how-to-avoid-a-hack-cryptopia-success-case/.
* [15] M. Huillet. Upbit hack: Stolen eth worth millions on the move to unknown wallets, December 2019. (Accessed 04/10/2020) https://bit.ly/3ixgeNp.
* [16] M. Karsai, K. Kaski, A. Barabási, and J. Kertész. Universal features of correlated bursty behaviour. Scientific Reports, 2(397):1–7, May 2012.
* [17] J. Lorenz, M. Silva, D. Aparício, J. Ascensao, and P. Bizarro. Machine learning methods to detect money laundering in the bitcoin blockchain in the presence of label scarcity. arXiv, May 2020.
* [18] S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system, 2009. https://bitcoin.org/bitcoin.pdf.
* [19] T. Pham and S. Lee. Anomaly Detection in Bitcoin Network Using Unsupervised Learning Methods. Technical report, Stanford, November 2016.
* [20] D. Riley. $25M in cryptocurrency stolen in hack of Lendf.me and Uniswap, April 2020. (Accessed 04/10/2020) https://bit.ly/34MjeRc.
* [21] J. Russell. The crypto world’s latest hack sees bancor lose $23.5m, July 2018. (Accessed 04/10/2020) https://techcrunch.com/2018/07/10/bancor-loses-23-5m/.
* [22] N. Samreen and M. Alalfi. Reentrancy vulnerability identification in ethereum smart contracts. In International Workshop on Blockchain Oriented Software Engineering, pages 22–29, London, February 2020. IEEE.
* [23] K. Sedgwick. One Week On from the Etherdelta Hack, Funds Are Still Being Stolen, December 2017. (Accessed 04/10/2020) https://news.bitcoin.com/one-week-etherdelta-hack-funds-still-stolen/.
* [24] M. Spagnuolo, F. Maggi, and S. Zanero. BitIodine: Extracting Intelligence from the Bitcoin Network. In N. Christin and R. Safavi-Naini, editors, Proc. of 18th Financial Cryptography and Data Security, pages 457–468, Christ Church, Barbados, March 2014. Springer Berlin Heidelberg.
* [25] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, January 2014.
* [26] D. Watts and S. Strogatz. Collective dynamics of ‘small-world’ networks. Nature, 393(6684):440–442, June 1998.
* [27] J. Wu, Q. Yuan, D. Lin, W. You, W. Chen, C. Chen, and Z. Zheng. Who are the phishers? phishing scam detection on ethereum via network embedding. Transactions on Systems, Man, and Cybernetics: Systems, pages 1–11, September 2020.
* [28] L. Xu, M. Skoularidou, A. Infante, and K. Veeramachaneni. Modeling Tabular data using Conditional GAN. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), pages 1–11, Vancouver, December 2019. NIPS proceedings.
* [29] M. Zhang, X. Zhang, Y. Zhang, and Z. Lin. TXSPECTOR: Uncovering attacks in ethereum from transactions. In 29th USENIX Security Symposium, pages 2775–2792. USENIX Association, August 2020.
* [30] W. Zhao. Bitpoint exchange hacked for $32 million in cryptocurrency, July 2019. (Accessed 04/10/2020) https://bit.ly/30yY7jP.
* [31] F. Zola, M. Eguimendia, J. Bruse, and R. Urrutia. Cascading Machine Learning to Attack Bitcoin Anonymity. In 2nd International Conference on Blockchain, pages 10–17, Atlanta, July 2019. IEEE.
## Appendix A Appendix: Burst and Attractiveness
This section provides a brief overview of the burst- and attractiveness-related features used in our study:
* •
Burst: An entity is said to exhibit bursty behavior if it shows irregular changes in the sequence of events associated with it. Bursts for different entities are defined as:
1. 1.
Degree Burst: In graph theory, the degree of a vertex is the number of edges incident to it. An entity is said to have a ‘Degree Burst’ if the number of edges it has at a given epoch is greater than a threshold value. In blockchains, this captures the irregularity shown by malicious accounts, such as those involved in the ‘Upbit Hack’.
2. 2.
Balance Burst: In our work, we say that a particular account has a ‘Balance Burst’ if the crypto-currency involved in a transaction by that account at a particular epoch is greater than a threshold value. Accounts involved in malicious activities, such as money laundering, tend to transfer high amounts of crypto-currency. Capturing such events leads to the identification of accounts showing malicious behavior, such as the accounts involved in Silk-Road activities [24].
3. 3.
Transaction fee Burst: An adversary can attempt to bribe a miner into accepting a particular transaction in a block by raising the fee of that transaction. Capturing bursts of fees paid captures such instances. Therefore, the ‘Transaction fee Burst’ is the number of instances where the transaction fee involved in a transaction is greater than a threshold value.
4. 4.
Inter-event time burst: The inter-event time is the time between two transactions. There can be instances where an account in a blockchain transacts with others such that the distribution of inter-event times is non-uniform. Such non-uniformity leads to multiple transactions in a short time period. The ‘Inter-event time burst’ is defined to capture such aspects.
* •
Attractiveness: Accounts involved in malicious activities usually tend to carry out transactions with new accounts over time. Such accounts show less stability in terms of the accounts they have transacted with before. The ‘Attractiveness’ of an account is thus the probability of the account receiving crypto-currency from a previously unknown account at a given epoch.
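The feature definitions above can be summarized in code. This is an illustrative sketch, not our actual pipeline: the thresholds, epoch granularity, and data layout are hypothetical, and each epoch is assumed to have at least one sender.

```python
def burst_count(series, threshold):
    """Number of epochs whose value (degree, balance, fee, ...) exceeds
    the threshold."""
    return sum(1 for v in series if v > threshold)

def longest_burst(series, threshold):
    """Length of the longest run of consecutive over-threshold epochs."""
    best = run = 0
    for v in series:
        run = run + 1 if v > threshold else 0
        best = max(best, run)
    return best

def attractiveness(senders_per_epoch):
    """Per-epoch probability of receiving crypto-currency from a
    previously unseen account."""
    seen, probs = set(), []
    for senders in senders_per_epoch:
        new = [s for s in senders if s not in seen]
        probs.append(len(new) / len(senders))
        seen.update(senders)
    return probs

degree_per_epoch = [1, 5, 7, 1, 6]                 # hypothetical in-degrees
print(burst_count(degree_per_epoch, 4))            # 3 epochs above threshold
print(longest_burst(degree_per_epoch, 4))          # longest consecutive run: 2
print(attractiveness([{"a"}, {"a", "b"}, {"c"}]))  # [1.0, 0.5, 1.0]
```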
## Appendix B Appendix: List of Features used
The set of 59 features used in our work is: $F=\\{$indegreeTimeInv, outdegreeTimeInv, degreeTimeInv, numberOfburstTemporalInOut, longestBurstTemporalInOut, numberOfburstTemporalIn, longestBurstTemporalIn, numberOfburstTemporalOut, longestBurstTemporalOut, numberOfburstDegreeInOut, longestBurstDegreeInOutAtTime, numberOfburstDegreeIn, longestBurstDegreeInAtTime, numberOfburstDegreeOut, longestBurstDegreeOutAtTime, zeroTransactions, totalBal, transactedFirst, transactedLast, activeDuration, averagePerInBal, uniqueIn, lastActiveSince, indegree__index_mass_quantile__q_0.1, indegree__energy_ratio_by_chunks__num_segments_10__segment_focus_0, indegree__linear_trend__attr_"pvalue", ittime__quantile__q_0.7, ittime__fft_coefficient__coeff_0__attr_"real", ittime__median, outdegree__energy_ratio_by_chunks__num_segments_10__segment_focus_0, outdegree__energy_ratio_by_chunks__num_segments_10__segment_focus_1, outdegree__fft_coefficient__coeff_0__attr_"real", gasPrice__quantile__q_0.2, gasPrice__quantile__q_0.1, gasPrice__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_0__w_20, attractiveness__median, attractiveness__quantile__q_0_0.4, attractiveness__mean, balanceOut__quantile__q_0.1, balanceOut__quantile__q_0.3, balanceOut__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_0__w_2, balanceIn__quantile__q_0.4, balanceIn__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_0__w_20, balanceIn__quantile__q_0.3, maxInPayment__quantile__q_0.3, maxInPayment__quantile__q_0.2, maxInPayment__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_0__w_5, maxOutPayment__quantile__q_0.6, maxOutPayment__quantile__q_0.1, maxOutPayment__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_0__w_2, clusteringCoeff, burstCount_gasPrice, burstCount_balanceIn, burstCount_balanceOut, burstInstance_indegree, burstInstance_outdegree, burstInstance_outdegree, burstInstance_maxInPayment, burstInstance_maxOutPayment$\\}$
Send correspondence to Kenji Kiuchi, E-mail<EMAIL_ADDRESS>
# Simons Observatory Small Aperture Telescope overview
Kenji Kiuchi Department of Physics, The University of Tokyo, 7-3-1 Hongo,
Bunkyo, Tokyo 113-0033, Japan Shunsuke Adachi Department of Physics, Faculty
of Science, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto
606-8502, Japan Aamir M. Ali Department of Physics, University of
California, Berkeley, LeConte Hall, Berkeley, CA 94720, USA Kam Arnold
Department of Physics, University of California, San Diego, La Jolla, CA
92093, USA Peter Ashton Department of Physics, University of California,
Berkeley, LeConte Hall, Berkeley, CA 94720, USA Physics Division, Lawrence
Berkeley National Laboratory, Berkeley, CA 94720, USA Kavli Institute for The
Physics and Mathematics of The Universe (WPI), The University of Tokyo,
Kashiwa, 277- 8583, Japan Jason E. Austermann NIST Quantum Sensors Group,
325 Broadway Mailcode 687.08, Boulder, CO 80305, USA Andrew Bazako Joseph
Henry Laboratories of Physics, Jadwin Hall, Princeton University, Princeton,
NJ 08544, USA James A. Beall NIST Quantum Sensors Group, 325 Broadway
Mailcode 687.08, Boulder, CO 80305, USA Yuji Chinone Kavli Institute for The
Physics and Mathematics of The Universe (WPI), The University of Tokyo,
Kashiwa, 277- 8583, Japan Research Center for the Early Universe, School of
Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan
Gabriele Coppi Department of Physics, University of Milano-Bicocca Piazza
della Scienza, Milano (MI), Italy Kevin D. Crowley Joseph Henry Laboratories
of Physics, Jadwin Hall, Princeton University, Princeton, NJ 08544, USA Kevin
T. Crowley Department of Physics, University of California, Berkeley, LeConte
Hall, Berkeley, CA 94720, USA Simon Dicker Department of Physics and
Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA
19104, USA Bradley Dober NIST Quantum Sensors Group, 325 Broadway Mailcode
687.08, Boulder, CO 80305, USA Shannon M. Duff NIST Quantum Sensors Group,
325 Broadway Mailcode 687.08, Boulder, CO 80305, USA Giulio Fabbian
Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH,
UK Nicholas Galitzki Department of Physics, University of California, San
Diego, La Jolla, CA 92093, USA Joseph E. Golec Department of Astronomy and
Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL
60637, USA Jon E. Gudmundsson Department of Physics, Stockholm University,
SE-106 91 Stockholm, Sweden Kathleen Harrington Department of Astronomy and
Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL
60637, USA Masaya Hasegawa Institute of Particle and Nuclear Studies, High
Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801,
Japan Makoto Hattori Astronomical Institute, Tohoku University, 6-3 Aramaki
Aza Aoba, Aoba, Sendai, Miyagi 980-8578, Japan Charles A. Hill Department of
Physics, University of California, Berkeley, LeConte Hall, Berkeley, CA 94720,
USA Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA
94720, USA Shuay-Pwu Patty Ho Department of Physics, Stanford University,
382 Via Pueblo, Stanford, CA 94305, USA Johannes Hubmayr NIST Quantum
Sensors Group, 325 Broadway Mailcode 687.08, Boulder, CO 80305, USA Bradley
R. Johnson Department of Astronomy, University of Virginia, Charlottesville,
VA 22903, USA Daisuke Kaneko Institute of Particle and Nuclear Studies, High
Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801,
Japan Nobuhiko Katayama Kavli Institute for The Physics and Mathematics of
The Universe (WPI), The University of Tokyo, Kashiwa, 277- 8583, Japan Brian
Keating Department of Physics, University of California, San Diego, La Jolla,
CA 92093, USA Akito Kusaka Department of Physics, The University of Tokyo,
7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan Physics Division, Lawrence
Berkeley National Laboratory, Berkeley, CA 94720, USA Research Center for the
Early Universe, School of Science, The University of Tokyo, 7-3-1 Hongo,
Bunkyo, Tokyo 113-0033, Japan Kavli Institute for the Physics and Mathematics
of the Universe (WPI), Berkeley Satellite, the University of California,
Berkeley 94720, USA Jack Lashner Department of Physics and Astronomy,
University of Southern California, Los Angeles, CA 90089, USA Adrian T. Lee
Department of Physics, University of California, Berkeley, LeConte Hall,
Berkeley, CA 94720, USA Frederick Matsuda Kavli Institute for The Physics
and Mathematics of The Universe (WPI), The University of Tokyo, Kashiwa, 277-
8583, Japan Heather McCarrick Kavli Institute for The Physics and
Mathematics of The Universe (WPI), The University of Tokyo, Kashiwa, 277-
8583, Japan Masaaki Murata Department of Physics, The University of Tokyo,
7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan Federico Nati Department of
Physics, University of Milano-Bicocca Piazza della Scienza, Milano (MI), Italy
Yume Nishinomiya Department of Physics, The University of Tokyo, 7-3-1 Hongo,
Bunkyo, Tokyo 113-0033, Japan Lyman Page Joseph Henry Laboratories of
Physics, Jadwin Hall, Princeton University, Princeton, NJ 08544, USA Mayuri
Sathyanarayana Rao Physics Division, Lawrence Berkeley National Laboratory,
Berkeley, CA 94720, USA Raman Research Institute, C. V. Raman Avenue, 5th
Cross Road, Sadashivanagar, Near Mekhri Circle, Bengaluru, Karnataka 560080,
India Christian L. Reichardt School of Physics, University of Melbourne,
Parkville, VIC 3010, Australia Kana Sakaguri Department of Physics, The
University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan Yuki Sakurai
Kavli Institute for The Physics and Mathematics of The Universe (WPI), The
University of Tokyo, Kashiwa, 277- 8583, Japan Joseph Seibert Department of
Physics, University of California, San Diego, La Jolla, CA 92093, USA Jacob
Spisak Department of Physics, University of California, San Diego, La Jolla,
CA 92093, USA Osamu Tajima Department of Physics, Faculty of Science, Kyoto
University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan Grant
P. Teply Department of Physics, University of California, San Diego, La
Jolla, CA 92093, USA Tomoki Terasaki Department of Physics, The University
of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan Tran Tsan Department of
Physics, University of California, San Diego, La Jolla, CA 92093, USA
Samantha Walker NIST Quantum Sensors Group, 325 Broadway Mailcode 687.08,
Boulder, CO 80305, USA Edward J. Wollack NASA/Goddard Space Flight Center,
Greenbelt, MD, USA Zhilei Xu Department of Physics and Astronomy, University
of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104, USA Kavli
Institute, Massachusetts Institute of Technology, Cambridge, MA, USA Kyohei
Yamada Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo,
Tokyo 113-0033, Japan Mario Zannoni Department of Physics, University of
Milano-Bicocca Piazza della Scienza, Milano (MI), Italy Ningfeng Zhu
Department of Physics and Astronomy, University of Pennsylvania, 209 South
33rd Street, Philadelphia, PA 19104, USA
###### Abstract
The Simons Observatory (SO) is a cosmic microwave background (CMB) experiment
from the Atacama Desert in Chile comprising three small-aperture telescopes
(SATs) and one large-aperture telescope (LAT). In total, SO will field over
60,000 transition-edge sensor (TES) bolometers in six spectral bands centered
between 27 and 280 GHz in order to achieve the sensitivity necessary to
measure or constrain numerous cosmological quantities. In this work, we focus
on the SATs which are optimized to search for primordial gravitational waves
that are detected as parity-odd polarization patterns, called B-modes, on
degree scales in the CMB. Each SAT employs a single optics tube with TES
arrays operating at 100 mK. The high throughput optics system has a 42 cm
aperture and a 35-degree field of view coupled to a 36 cm diameter focal
plane. The optics consist of three metamaterial anti-reflection coated silicon
lenses. Cryogenic ring baffles with engineered blackbody absorbers are
installed in the optics tube to minimize the stray light. The entire optics
tube is cooled to 1 K. A cryogenic continuously rotating half-wave plate near
the sky side of the aperture stop helps to minimize the effect of atmospheric
fluctuations. The telescope warm baffling consists of a forebaffle, an
elevation stage mounted co-moving shield, and a fixed ground shield that
together control the far sidelobes and mitigate ground-synchronous
systematics. We present the status of the SAT development.
###### keywords:
Cosmic Microwave Background, CMB, B-mode, primordial gravitational wave,
Radio, Ground-Based Telescopes
## 1 INTRODUCTION
The Simons Observatory (SO) is designed to measure the CMB polarization from the Atacama Desert in Chile at an altitude of 5,200 m. In order to measure the polarization pattern from large to small angular scales, the SO will have three small aperture telescopes (SATs) and one large aperture telescope (LAT). The LAT has a six meter aperture and more than 30,000 transition edge sensors (TESes). The SATs have a 42 cm aperture each, with more than 30,000 transition edge sensors distributed across the three SATs.
The SATs will measure 10% of the full sky [1]. Each of the three SATs has two frequency bands, for a total of six bands with center frequencies between 27 GHz and 280 GHz, in order to subtract synchrotron radiation and dust emission. The Middle Frequency (MF) configuration has center frequencies of 93 GHz and 145 GHz, the Ultra High Frequency (UHF) configuration has bands centered at 225 GHz and 280 GHz, and the Low Frequency (LF) configuration will be developed for bands around 27 GHz and 39 GHz. The three SATs will observe with two MF configurations and one UHF configuration. If needed, one MF telescope will be replaced by the LF configuration for a single year of LF observation.
The SATs are optimized for measuring degree-scale B-modes produced by primordial gravitational waves [2, 3]. Hence their target multipole range is $30<\ell<300$. The science goal is to measure the tensor-to-scalar ratio, $r$, with $\sigma(r)=0.002$ or better [1, 4]. We began integration in 2020 and aim to deploy the first SAT in Chile in 2021.
Figure 1: The $left$ figure shows a cross section of a SAT receiver CAD model. The top side is the sky side. The $green\ dashed\ line$ marks the HWP, which is cooled to 50 K using the PTC. The dilution refrigerator cools the optics tube ($red\ dashed\ line$) and focal plane ($orange\ dashed\ line$) to 1 K and 100 mK, respectively. The dilution refrigerator and pulse tube cryocooler are tilted so that they are vertical when observing at an elevation of 50∘. The $right$ picture is the cryostat for the first SAT.
## 2 Instruments
Figure 1 $(left)$ shows a cross section of the SAT. The SAT cryostat contains two refrigerators, one optics tube including three silicon lenses, a focal plane with TES arrays and multiplexing chips, and a half-wave plate (HWP) signal modulator. On top of the cryostat, there are a forebaffle and a sparse wire-grid calibrator. This section explains the design and status of each component.
### 2.1 Optical design
The SAT optics have a 42 cm aperture and a diffraction-limited field of view (FOV) of 35∘ coupled to a 36 cm diameter focal plane that accommodates seven detector modules. The primary constraint on the optics is the available size of the optical components. We will use the HWP polarization signal modulator for systematic mitigation; the largest available sapphire with the properties we desire is 51 cm in diameter. We employ lenses made of single-crystal silicon with metamaterial anti-reflection (AR) structures cut into their surfaces [5]. The largest available silicon ingot for the lenses is 46 cm in diameter. The optics are optimized in a three-lens configuration under these constraints and achieve high Strehl ratios of $>0.89$.
#### 2.1.1 Warm baffle component
The SAT has a three-component baffling scheme to suppress ground pickup. A forebaffle is mounted on the cryostat and moves with the entire optics. The second component is a comoving shield mounted on the azimuth stage. The third component is a ground screen. The radius and height of the ground shield are 8.2 m and 5.6 m, respectively. The combination of the three baffles blocks the direct ray path to nearby mountains and mitigates ground-synchronous pickup.
Figure 2: The $left$ panel shows a cross section of the 3D CAD model of the optics tube (OT) and a picture of the fabricated optics tube. The $right$ picture shows the half-wave plate rotation mechanism.
### 2.2 Cryostat and Platform
Figure 1 $(right)$ shows a picture of a SAT cryostat. Each cryostat has a dilution refrigerator (DR, BlueFors BF-SD400) including one pulse tube cryocooler (PTC) and one additional PTC (Cryomech PT420). The focal plane is cooled to 100 mK via the mixing chamber stage of the dilution refrigerator. The optics tube, including the lenses, is cooled to 1 K by the still stage of the dilution refrigerator. The additional PTC cools the 50 K and 4 K radiation shells. The HWP is mounted on the 50 K shell. The DR provides 400 $\mu$W of cooling power at 100 mK and allows us to operate $>$10,000 sensors in one SAT. The thermal loading on each stage is well controlled [6]. The SAT platform has azimuth, elevation, and boresight rotational degrees of freedom. The boresight rotation is limited to $\pm$ 90∘ due to the limited cooling power of the PTC.
### 2.3 Optics Tube
Figure 2 $(left)$ shows a picture and a cross section of the optics tube. The optics tube (OT) for the first SAT is built and has been integrated into the SAT cryostat. The SAT optics tube consists of three silicon lenses, lens mounts, cryogenic baffles, metal mesh filters, and the 42 cm cold stop.
#### 2.3.1 Mechanical and Thermal
The entire structure operates at 1 K. This low temperature suppresses radiative loading on the detectors. The mechanical structure, including the lens mounts, is made of pure aluminum ($>$ 99.5%) to ensure good thermal conductivity. Five machined parts support the lenses with a position accuracy of $<$ 0.5 mm. The coefficient of thermal expansion (CTE) of pure aluminum is well known and leads to a 0.4% contraction from room temperature to 1 K. The AR coating is required to be transparent (low reflectance and low loss) across a wide band and to have a matched CTE. Our AR coating is made of silicon itself using a silicon dicing technique [5, 7]. This technology was already used in ACTPol and satisfies all our requirements. Segments of compressed metal RF gaskets (spira gasket: https://www.spira-emi.com/) are used in the lens mounts to absorb the CTE mismatch between the Si lens and the Al mount. The optics tube will be cooled to 1 K via a thermal strap connected to the still stage of the dilution unit. The thermal conductivity of aluminum degrades below its superconducting transition temperature of 1.2 K, while the expected temperature of the OT is slightly higher than that. Pure aluminum has a good balance of thermal conductivity, weight, and machinability. The inner edge of the cold stop has a double-sided taper with a taper angle of 40 degrees.
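For intuition on the quoted 0.4% contraction, a back-of-envelope sketch (the 200 mm spacing is a hypothetical value, not the actual SAT design):

```python
# Integrated thermal contraction of pure aluminum from ~300 K down to 1 K.
contraction = 0.004            # 0.4%, as quoted above
spacing_room = 200.0           # hypothetical room-temperature lens spacing [mm]
spacing_cold = spacing_room * (1 - contraction)
shift = spacing_room - spacing_cold
print(round(spacing_cold, 3), round(shift, 3))  # 199.2 0.8
```

Because the contraction is well characterized, mount dimensions can be machined slightly oversized so that components land at their design positions once cold.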
#### 2.3.2 Cryogenic Baffling
The cryogenic baffles consist of six ring-shaped discs, with the last ring also serving as a mount for metal-mesh low-pass filters. The cryogenic baffles are designed to block bounces (reflection and scattering) from the sky side to the detector side. The inner edges of the cryogenic baffles have 10 $\lambda$ clearance to the beam at 150 GHz, and their shape is a double-sided knife edge to suppress reflection at the edges. The inner surface of the optics tube is blackened using two types of blackbody absorbers. One is fabricated using an injection molding technique [8]: carbon-loaded plastic is formed into black tiles with a pyramidal structure. Injection molding is suitable for mass production, and $>$ 500 pieces of this blackbody cover the cryogenic baffling section, which is the largest surface area in one OT. Because this blackbody has a CTE mismatch with the aluminum structure, it is screwed onto the aluminum plate with suitable torque to avoid damage due to thermal contraction. The second type of blackbody is fabricated using a 3D printing technique [9]. The 3D-printed pieces are suitable for covering relatively complex surfaces such as the lens mounts. We fabricated a plastic pyramidal mold using a 3D printer, and the mold is filled with stainless-steel-loaded Stycast 2850FT. This blackbody is glued using Stycast 2850FT because of its similar CTE to that of aluminum. The reflectances of both blackbodies are expected to be $<$ 1%, and the mounting methods are expected to provide good thermal conductivity at 1 K. Combining the new blackbodies with the baffling structure, the OT minimizes stray light reaching the detectors, which will improve the sensitivity and mitigate systematics.
### 2.4 Cryogenic half wave plate (CHWP)
Figure 2 $(right)$ picture shows the HWP rotation mechanism for the first SAT.
#### 2.4.1 Optical
The cryogenic continuously rotating half-wave plate (HWP) modulates the polarization signal at four times its rotation frequency. The target rotational speed of our HWP is 2 Hz, so the polarization modulation frequency will be 8 Hz, above the $1/f$ fluctuations of the atmosphere. Another advantage of the HWP results from the rotation of the transmitted polarization: a TES, which is sensitive to polarization in a single direction, is able to measure polarization in two orthogonal directions due to the rotating effect of the spinning HWP.
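To illustrate the 4$f$ modulation described above, consider an idealized, noise-free single-detector timestream with hypothetical Stokes values; a real analysis must also handle the detector angle, HWP non-idealities, noise, and atmosphere:

```python
import math

f_hwp = 2.0                  # HWP rotation frequency [Hz]
f_mod = 4 * f_hwp            # polarization is modulated at 4x rotation -> 8 Hz
I, Q, U = 1.0, 0.3, -0.1     # hypothetical Stokes parameters [arb. units]

n, dt = 4000, 1e-3           # 4 s of data sampled at 1 kHz
t = [i * dt for i in range(n)]
# Idealized detector timestream behind the spinning HWP.
d = [I + Q * math.cos(2 * math.pi * f_mod * ti)
       + U * math.sin(2 * math.pi * f_mod * ti) for ti in t]

# Lock-in demodulation at 4*f_hwp recovers both Q and U from one detector.
Q_est = 2 / n * sum(di * math.cos(2 * math.pi * f_mod * ti) for di, ti in zip(d, t))
U_est = 2 / n * sum(di * math.sin(2 * math.pi * f_mod * ti) for di, ti in zip(d, t))
print(round(Q_est, 6), round(U_est, 6))  # 0.3 -0.1
```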
#### 2.4.2 Mechanical
The mechanical parts are already fabricated and have been tested in a SAT cryostat. The design of the HWP system is based on the POLARBEAR-2b HWP system [10]. The HWP system is mounted on the 50 K stage of the pulse tube cryocooler (PTC). The PTC has sufficient cooling power at this stage, and the radiation from the sapphire is already suppressed at this temperature. The optical diameter of the HWP limits the aperture size of the optics. We employ three 50.5 cm diameter sapphire discs stacked in an achromatic combination. The clear aperture of the HWP system is 478 mm. The superconducting magnetic bearing shows low power dissipation at 50 K, and the HWP can rotate continuously during the observation period.
### 2.5 Sparse wire-grid calibrator
Figure 3 $(left)$ shows the rotating part of the sparse wire-grid calibrator, which is designed to calibrate the relative polarization angle. It is mounted on top of the vacuum window. The grid of thin tungsten wires produces a uniform polarized signal, and this signal passes through all of the optical components (the HWP, lenses, LPEs, etc.) before it reaches the detectors. Typically 39 wires are aligned in one direction to within $0.1{}^{\circ}$ to calibrate the relative angle of all sensors, together with the optical components, at the $0.1{}^{\circ}$ level. The wire ring is mounted on a circular bearing and can rotate stepwise. We will develop an automatic loading system allowing us to calibrate the system frequently.
Figure 3: The $left$ picture shows the rotational part of the sparse wire-grid. The $right$ picture shows a cold readout assembly. The detector module (UFM) will be installed in the innermost part, and coaxial wires and DC wires are connected toward the outer side of the cryostat.
### 2.6 Detector and Focal Plane
SO will use a common detector module, referred to as a universal focal-plane
module (UFM), for both the LAT and the SATs. Each detector module consists of a
6-inch diameter TES wafer and multiplexing chips, designed and fabricated to
operate at a 100 mK stage temperature. The TESes use an aluminum-manganese
(AlMn) alloy, with the critical temperature tuned to 160 mK. Each pixel is
dual-polarization sensitive and dichroic. The microwave circuits, including
band-pass filters, are integrated on the TES wafer, which is fabricated using a
6-inch photolithography process. Feed-horn-coupled TES arrays will cover the MF
and UHF bands[11], and lenslet-coupled TES arrays will cover the LF bands[12].
The target multiplexing factor of a detector module is $\sim$ 1,000. We adopt
the Microwave SQUID Multiplexer ($\mu$MUX [13, 14]), which reads each TES using
an RF SQUID amplifier coupled to a unique superconducting quarter-wavelength
resonator between 4 and 6 GHz[15, 16]. We inject a ramp wave through the
flux-ramp line, which inductively couples to the RF SQUID loop and modulates
the signal. This flux-ramp modulation corrects the non-linearity of the SQUID
and helps mitigate 1/f noise in the resonators. The SLAC Superconducting
Microresonator RF electronics (SMuRF[17]) will be used as the warm readout
system for the $\mu$MUX. Each detector channel (a TES with $\mu$MUX) read out
with SMuRF is designed to be photon-noise limited under our expected operating
conditions in Chile. Seven detector modules will be installed in each SAT focal
plane, and more than 30,000 detectors will be operated across the three SATs.
Figure 3 $(right)$ shows the cold readout assembly, in which the low-noise HEMT
amplifiers and the coaxial wires from 4 K to 1 K can be assembled
separately[18]. The thermal loading is sufficiently small compared with the
available cooling power, and the thermal noise is designed to be below the
photon-noise limit. The linearity of the amplifiers is well controlled for the
high multiplexing factor of 1,000.
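The flux-ramp demodulation described above can be illustrated with a toy
numerical sketch. All parameters below are illustrative placeholders (not SO
design values): the SQUID response is sinusoidal in the total flux, so sweeping
the flux with a ramp turns the slow TES signal into the phase of an effective
carrier, which can be recovered by an I/Q projection over each ramp period.

```python
import numpy as np

# Toy flux-ramp demodulation; all numbers are illustrative placeholders.
fs = 1e6                  # sample rate (Hz)
f_c = 10e3                # effective carrier set by the flux-ramp sweep (Hz)
t = np.arange(0, 0.01, 1/fs)

phi = 0.3*np.sin(2*np.pi*50*t)      # slow TES signal, encoded as SQUID phase
v = np.sin(2*np.pi*f_c*t + phi)     # sinusoidal SQUID/resonator response

seg = int(fs/f_c)                   # samples per ramp (carrier) period
phase = []
for k in range(len(t)//seg):
    s = slice(k*seg, (k+1)*seg)
    I = 2*np.mean(v[s]*np.sin(2*np.pi*f_c*t[s]))  # ~cos(phi) over one period
    Q = 2*np.mean(v[s]*np.cos(2*np.pi*f_c*t[s]))  # ~sin(phi) over one period
    phase.append(np.arctan2(Q, I))                # recovered phase per period
phase = np.array(phase)
```

The recovered per-period phase tracks the injected 50 Hz signal; the point is
that the demodulated phase, rather than the raw sinusoidal response, is the
linearized quantity, which is how the modulation removes the SQUID
non-linearity.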
## 3 Conclusion
The SAT telescopes are optimized to measure the B-mode signal produced by
primordial gravitational waves. Our science goal is to measure this signal with
$\sigma(r)=0.002$. All of the SAT components have been designed, and the SAT
platform, the cryostat, and the optics tube are already being built. The
development of the HWP, the calibrator, and the detector modules and readout is
ongoing, and the mechanical parts for these instruments have been fabricated.
We began integration in 2020 and aim to deploy the first SAT in Chile in 2021.
###### Acknowledgements.
This work was supported in part by a grant from the Simons Foundation (Award
#457687, B.K.). This work was supported in part by the World Premier
International Research Center Initiative (WPI Initiative), MEXT, Japan. In
Japan, this work was supported by JSPS KAKENHI grant Nos. 17H06134, 16K21744
and 19H00674, and the JSPS Core-to-Core Program JPJSCCA20200003. Work at LBNL
is supported in part by the U.S. Department of Energy, Office of Science,
Office of High Energy Physics, under contract No. DE-AC02-05CH11231. ZX is
supported by the Gordon and Betty Moore Foundation.
## References
* [1] Ade, P., Aguirre, J., Ahmed, Z., Aiola, S., Ali, A., Alonso, D., Alvarez, M. A., Arnold, K., Ashton, P., Austermann, J., and et al., “The simons observatory: science goals and forecasts,” Journal of Cosmology and Astroparticle Physics 2019, 056–056 (Feb 2019).
* [2] Kamionkowski, M., Kosowsky, A., and Stebbins, A., “A probe of primordial gravity waves and vorticity,” Phys. Rev. Lett. 78, 2058–2061 (Mar 1997).
* [3] Alonso, D., Dunkley, J., Thorne, B., and Næss, S., “Simulated forecasts for primordial $b$-mode searches in ground-based experiments,” Phys. Rev. D 95, 043504 (Feb 2017).
* [4] The Simons Observatory Collaboration et al., “The simons observatory: Astro2020 decadal project whitepaper,” (2019).
* [5] Datta, R., Munson, C. D., Niemack, M. D., McMahon, J. J., Britton, J., Wollack, E. J., Beall, J., Devlin, M. J., Fowler, J., Gallardo, P., Hubmayr, J., Irwin, K., Newburgh, L., Nibarger, J. P., Page, L., Quijada, M. A., Schmitt, B. L., Staggs, S. T., Thornton, R., and Zhang, L., “Large-aperture wide-bandwidth antireflection-coated silicon lenses for millimeter wavelengths,” Appl. Opt. 52, 8747 (Dec. 2013).
* [6] Galitzki, N. et al., “The Simons Observatory: instrument overview,” in [Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy IX ], Zmuidzinas, J. and Gao, J.-R., eds., 10708, 1 – 13, International Society for Optics and Photonics, SPIE (2018).
* [7] Golec, J. E. et al., “Design and fabrication of metamaterial anti-reflection coatings for CMB observations,” International Society for Optics and Photonics, SPIE (2020).
* [8] Xu, Z., Chesmore, G. E., Adachi, S., Ali, A. M., Bazarko, A., Coppi, G., Devlin, M., Devlin, T., Dicker, S. R., Gallardo, P. A., Golec, J. E., Gudmundsson, J. E., Harrington, K., Hattori, M., Kofman, A., Kiuchi, K., Kusaka, A., Limon, M., Matsuda, F., McMahon, J., Nati, F., Niemack, M. D., Suzuki, A., Teply, G. P., Thornton, R. J., Wollack, E. J., Zannoni, M., and Zhu, N., “The Simons Observatory: Metamaterial Microwave Absorber (MMA) and its Cryogenic Applications,” arXiv e-prints , arXiv:2010.02233 (Oct. 2020).
* [9] Adachi, S., Hattori, M., Kanno, F., Kiuchi, K., Okada, T., and Tajima, O., “Production method of millimeter-wave absorber with 3d-printed mold,” Review of Scientific Instruments 91(1), 016103 (2020).
* [10] Hill, C. A., Kusaka, A., Ashton, P., Barton, P., Adkins, T., Arnold, K., Bixler, B., Ganjam, S., Lee, A. T., Matsuda, F., Matsumura, T., Sakurai, Y., Tat, R., and Zhou, Y., “A cryogenic continuously rotating half-wave plate for the POLARBEAR-2b cosmic microwave background receiver,” arXiv e-prints , arXiv:2009.03972 (Sept. 2020).
* [11] Walker, S., Sierra, C. E., Austermann, J. E., Beall, J. A., Becker, D. T., Dober, B. J., Duff, S. M., Hilton, G. C., Hubmayr, J., Van Lanen, J. L., McMahon, J. J., Simon, S. M., Ullom, J. N., and Vissers, M. R., “Demonstration of 220/280 ghz multichroic feedhorn-coupled tes polarimeter,” Journal of Low Temperature Physics 199, 891–897 (May 2020).
* [12] Suzuki, A. et al., “The polarbear-2 and the simons array experiments,” Journal of Low Temperature Physics 184, 805–810 (Aug 2016).
* [13] Irwin, K. D. and Lehnert, K. W., “Microwave squid multiplexer,” Applied Physics Letters 85(11), 2107–2109 (2004).
* [14] Mates, J. A. B., Hilton, G. C., Irwin, K. D., Vale, L. R., and Lehnert, K. W., “Demonstration of a multiplexer of dissipationless superconducting quantum interference devices,” Applied Physics Letters 92(2), 023514 (2008).
* [15] Dober, B., Ahmed, Z., Becker, D. T., Bennett, D. A., Connors, J. A., Cukierman, A., D’Ewart, J. M., Duff, S. M., Dusatko, J. E., Frisch, J. C., Gard, J. D., Henderson, S. W., Herbst, R., Hilton, G. C., Hubmayr, J., Mates, J. A. B., Reintsema, C. D., Ruckman, L., Ullom, J. N., Vale, L. R., Winkle, D. D. V., Vasquez, J., Young, E., and Yu, C., “A microwave squid multiplexer optimized for bolometric applications,” (2020).
* [16] Healy, E. et al., “Assembly development for the simons observatory focal plane readout module,” International Society for Optics and Photonics, SPIE (2020).
* [17] Henderson, S. W., Ahmed, Z., Austermann, J., Becker, D., Bennett, D. A., Brown, D., Chaudhuri, S., Cho, H.-M. S., D’Ewart, J. M., Dober, B., Duff, S. M., Dusatko, J. E., Fatigoni, S., Frisch, J. C., Gard, J. D., Halpern, M., Hilton, G. C., Hubmayr, J., Irwin, K. D., Karpel, E. D., Kernasovskiy, S. S., Kuenstner, S. E., Kuo, C.-L., Li, D., Mates, J. A. B., Reintsema, C. D., Smith, S. R., Ullom, J., Vale, L. R., Winkle, D. D. V., Vissers, M., and Yu, C., “Highly-multiplexed microwave SQUID readout using the SLAC Microresonator Radio Frequency (SMuRF) electronics for future CMB and sub-millimeter surveys,” in [Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy IX ], Zmuidzinas, J. and Gao, J.-R., eds., 10708, 170 – 185, International Society for Optics and Photonics, SPIE (2018).
* [18] Sathyanarayana Rao, M., Silva-Feaver, M., Ali, A., Arnold, K., Ashton, P., Dober, B. J., Duell, C. J., Duff, S. M., Galitzki, N., Healy, E., Henderson, S., Ho, S.-P. P., Hoh, J., Kofman, A. M., Kusaka, A., Lee, A. T., Mangu, A., Mathewson, J., Mauskopf, P., McCarrick, H., Moore, J., Niemack, M. D., Raum, C., Salatino, M., Sasse, T., Seibert, J., Simon, S. M., Staggs, S., Stevens, J. R., Teply, G., Thornton, R., Ullom, J., Vavagiakis, E. M., Westbrook, B., Xu, Z., and Zhu, N., “Simons observatory microwave squid multiplexing readout: Cryogenic rf amplifier and coaxial chain design,” Journal of Low Temperature Physics 199, 807–816 (May 2020).
# Energy-mass equivalence from Maxwell equations
Alejandro Perez<EMAIL_ADDRESS>
Salvatore Ribisi<EMAIL_ADDRESS>
Aix Marseille Univ, Université de Toulon, CNRS, CPT, 13000 Marseille, France
###### Abstract
Since the appearance of Einstein’s paper “On the Electrodynamics of Moving
Bodies” and the birth of special relativity, it has been understood that the
theory is essentially encoded within Maxwell’s equations. The celebrated
mass-energy equivalence relation, $E=mc^{2}$, was derived by Einstein using
thought experiments involving the kinematics of the emission of light
(electromagnetic energy) and the relativity principle. Textbook derivations
often follow paths similar to Einstein’s, or analyze the kinematics of particle
collisions from the perspective of different inertial frames. All
the same, in such derivations the direct dynamical link with hypothetical
fundamental fields describing matter (e.g. Maxwell theory or other) is
overshadowed by the use of powerful symmetry arguments, kinematics, and the
relativity principle.
Here we show that the formula can be derived directly from the dynamical
equations of a massless matter model confined in a box (which can be thought
of as a toy model of a composite particle). The only assumptions in the
derivation are that the field equations hold and the energy-momentum tensor
admits a universal interpretation in arbitrary coordinate systems. The
mass-energy equivalence relation follows from the inertia or (taking the
equivalence principle for granted) the weight of confined field radiation. The
present derivation offers an interesting pedagogical perspective on the
formula, providing a simple toy model of the origin of mass and a natural
bridge to the foundations of general relativity.
## I Introduction
One of the striking results of special relativity Einstein:2015:RSG is the
implication of an equivalence between the concepts of inertia and energy. In
one of his founding papers original, Einstein arrives at the postulate of
mass-energy equivalence by showing that a body emitting an energy $E$ via
electromagnetic radiation sees its mass decrease by an amount $E/c^{2}$.
Today textbooks give several different derivations. In the classic book
Jackson:100964 , for instance, the equivalence is found by equating the force
on a charged particle with its four-momentum variation. Another derivation
uses the consistency, required by the relativity principle, of the kinematics
of colliding particles as seen from different inertial frames.
Perhaps the simplest (yet the most formal) derivation corresponds to the one
that starts from the geometric (relativistic) free particle action
$S[x(t)]=-mc\int dt\sqrt{|g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}|}$ (1)
whose non relativistic limit justifies the non-relativistic Lagrangian
$L=m\dot{x}^{2}/2$, and in literally two lines of Hamiltonian analysis
produces the canonical Hamiltonian energy $E(v)=mc^{2}/(\sqrt{1-v^{2}/c^{2}})$
with $E(0)=mc^{2}$.
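The "two lines of Hamiltonian analysis" mentioned above can be made explicit
with a short symbolic computation. The sketch below uses the one-dimensional
Lagrangian obtained from the action (1) in flat spacetime and performs the
Legendre transform:

```python
import sympy as sp

m, c = sp.symbols('m c', positive=True)
v = sp.symbols('v', real=True)

# Free relativistic particle Lagrangian, from the action (1) in flat spacetime
L = -m*c**2*sp.sqrt(1 - v**2/c**2)

p = sp.diff(L, v)         # canonical momentum: m*v/sqrt(1 - v**2/c**2)
E = sp.simplify(p*v - L)  # Hamiltonian via the Legendre transform
# E = m*c**2/sqrt(1 - v**2/c**2), so E(0) = m*c**2
```

Expanding the square root for small $v$ also recovers the non-relativistic
Lagrangian $m\dot{x}^{2}/2$ up to the constant $-mc^{2}$.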
All these derivations are important and insightful in their own way, and they
remain perhaps the simplest paths to the equivalence formula. However, none of
them makes the link between pure energy and mass dynamically explicit. They
hide, in some sense, a very important aspect, which is perhaps the central one
in view of the generalization of special relativity needed to include gravity
in the general theory of relativity.
The derivation proposed here is complementary to the standard accounts, with
the added value of presenting an explicit link between the dynamics of a
massless matter model—here the electromagnetic field, or a massless scalar
field—and the mass of its energy when confined in an idealized box. The
derivation relies heavily on the notion of the stress-energy-momentum tensor of
matter and the covariant interpretation of its physical content. The exercise
paves the way
for the understanding of the energy momentum tensor as the source of gravity
in general relativity, and serves also as an introduction to the mathematics
that is central in the definition of the theory.
We will consider radiation confined in an idealised box and we will show that
this radiation confers inertia—encoded in a mass $m=E/c^{2}$—to the box, where
$E$ is its energy content. The simplest model of radiation will be Maxwell
electromagnetic fields which strengthens the idea that special relativity is
entirely encoded in the properties of electromagnetism (this was indeed the
perspective adopted by Einstein in his 1905 revolution). As our derivation
relies entirely on the coordinate independence of the field equations and the
conservation of the stress-energy-momentum tensor of the matter fields, the
result should be valid for any generally covariant matter model.
As an additional example we show that the construction also works for a
massless scalar field.
The idealized box confining the radiation can in turn be thought of as a poor
man’s model of a composite particle (such as a proton or a neutron): from
modern computations Durr:2008zz in quantum chromodynamics (QCD) we know that
$99\%$ of the mass of a proton comes from the energy of the confined
gluon-quark radiation (confined not by a box but by the non-linear strong
interaction), while only the remaining $1\%$ is associated with the rest mass
of the quarks. The QCD confinement potential is replaced in our model by the
box and by the boundary conditions that require the fields to vanish at its
walls 111One
could speculate that fundamental massive fermions like the electrons might be
seen as confined massless radiation as well. Solutions of the Dirac equation
can be interpreted as two massless Weyl fermions (the left handed and right
handed components of the Dirac fermion) which due to the mutual interaction
mediated by the mass term in the Dirac equation constantly annihilate into
each other. In this process the momentum of each individual massless component
bounces by changing the sign of the momentum in the direction of the spin
(Schroedinger’s zitterbewegung schrodinger1930kraftefreie ) as if confined in
a box of a size of the order of the Compton wavelength of the fermion.
Our derivation could not realistically have replaced the historical one
because it relies strongly on the physical interpretation of the
stress-energy-momentum tensor (a notion that, even when present in the
literature on Maxwell fields, became central only after the development of
relativity) and on the general covariance of the relativistic field equations
(which also emerged with the understanding of general relativity).
Nonetheless, all the mathematical ingredients and physical interpretation were
arguably available in the context of Maxwell electromagnetism. Yet the
derivation we propose is
straightforward only once modern tools and the modern understanding of
covariant methods are used (at the technical level, the derivation we present
uses to a large extent the mathematical tools of general relativity: covariant
derivatives and general coordinate invariance). We hope that this paper will
offer students an alternative (perhaps technically more advanced) pedagogical
perspective of both technical interest and conceptual value.
The paper is organized as follows. In Section II we give some motivation for
our approach by using the heuristics of a photon trapped inside an
accelerating box (or a box on a constant gravitational field from the
perspective of the equivalence principle). In Section III we review the
properties of Rindler coordinates which represent accelerating frames. In
Section IV we derive the formula $E=mc^{2}$ by analysing the energy content of
an accelerating box trapping stationary electromagnetic radiation. In Section
V we do the same for a massless scalar field, which suggests the universality
of our derivation.
In the Appendix we give supplementary material that answers some questions
that naturally arise from our analysis. The only important piece of
information for the proof of the main result of the paper, in Section IV, is
that the electric field in the rest frame of the box must be perpendicular to
the boundary walls (which is obvious for an inertial box but requires
justification for an accelerating one). In Appendix A we briefly recall the
structure of the covariant version of Maxwells equations and introduce its
stress-energy-momentum tensor, we also calculate the properties of
electromagnetic field at the boundary of an accelerating box of perfectly
conducting walls. In Appendix B, we show that the mass-energy equivalence
formula can also be derived from the work done by an external agent
accelerating the massless confined radiation. This is the analog of the
heuristic argument using the photon given at the beginning of the paper. We do
the same for the scalar field in Appendix C.
## II Heuristics with a trapped photon
As a warm-up exercise, let us first illustrate the basic idea using the
heuristics provided by the particle interpretation of electromagnetic
radiation arising from quantum mechanics. Consider a single photon of
frequency $\omega$ trapped inside a cubic box of side $L$. Assume that the box
is accelerated with acceleration $|a|$ in the upward direction (see Figure 1).
At time $t=0$ we also assume the box is at rest, and the photon is passing
through the center of the box and moving up at the speed of light $c$. At time
$t_{1}=\frac{L+|a|t_{1}^{2}}{2c}=\frac{L}{2c}\left(1+{\mathcal{O}}\left(\frac{|a|L}{c^{2}}\right)\right)\approx\frac{L}{2c},$ (2)
the photon hits the top of the box. We are assuming that the speed of the box
when the photon hits the top is much smaller than the speed of light, hence
$\frac{|a|L}{c^{2}}\ll 1$. This can happen in two different limits: either the
box is very small or the acceleration is very small (compared with
$c^{2}/L$).
Figure 1: On the left panel: an accelerating box containing a photon. On
average, a force $|F|=c^{-2}\hbar\omega|a|$ needs to be applied to maintain
the acceleration. The inertia of the box is due to the average of the
differential radiation pressure of the photon bouncing on the walls of the box
(the pressure is smaller at the top than at the bottom due to the Doppler
shift). On the right: the equivalent gravitational situation with the box on a
table; the weight of the box is $W=c^{-2}\hbar\omega|g|$. Doppler shift is now
replaced by the equivalent gravitational red-shift.
When the photon hits the top its frequency in the rest frame of the top wall
is $\omega_{1}=(1-|a|t_{1}/c)\omega$ due to the Doppler shift. Notice that
this effect is expected from the non-relativistic point of view as well: in
the limit where $\frac{|a|L}{c^{2}}\ll 1$ holds, the standard sound-wave-type
Doppler red-shift formula coincides with the (physically) correct relativistic
one. Thus no explicit use of special relativity is being made here. Indeed, in
the regime in which we are working, we can safely assume that the time $t$ is
always given by the very same inertial time (thought of as absolute time in
pre-relativistic terms). At time $t_{1}$ the momentum of the photon changes from
$\hbar\omega_{1}/c\to-\hbar\omega_{1}/c$ in the vertical axis. We have that
$\Delta p_{1}=-2\hbar\omega_{1}/c$.
Now, the photon travels downwards and its frequency is back to $\omega$ (in
the instantaneous rest frame of the box) thanks to the Doppler effect when it
passes through the center. Indeed, the frequency at the center is always
$\omega$, as the photon’s energy must be stationary in the rest frame of the
box. This is most easily understood intuitively by considering the equivalent
situation of the box on the table in the right panel of Figure 1. This implies
that at time $t_{2}$ the photon hits the bottom of the box with a local
frequency $\omega_{2}=(1+|a|L/(2c^{2}))\omega$ and $\Delta
p_{2}=2\hbar\omega_{2}/c$. When the photon gets back to the center at time
$t_{3}=2L/c$ the average force $F=\Delta p/\Delta t$ is
$|F|=\frac{\Delta p}{\Delta t}=\frac{2\hbar\omega}{c}\,\frac{\left(1+\frac{|a|L}{2c^{2}}\right)-\left(1-\frac{|a|L}{2c^{2}}\right)}{2L/c}=\frac{\hbar\omega}{c^{2}}|a|,$ (3)
which implies that the box carries a mass $m=E/c^{2}$ with $E={\hbar\omega}$
(the quantum energy of the photon). The fluctuating character of the mass due
to the bouncing back and forth of the photon would go away if we consider many
photons in a suitable configuration that makes the radiation inside the box
‘stationary’ in a way that will become precise in the following section. In
the previous heuristic derivation, special relativity is of course hidden in
the assumption that the momentum of a photon is $\hbar\omega/c$, but this
enters only through quantum mechanics and the dynamical equations of Maxwell
theory. That is indeed our point: the mass-energy equivalence relation is
encoded in Maxwell’s dynamics.
The calculation can be improved by considering multiple photons in arbitrary
configurations. The final answer remains the same, although the details become
more and more cumbersome. The single-photon example suffices as a motivation
for what follows. The limitation of the present approach is its use of photons
and quantum mechanics. Nevertheless, this simple argument also exposes the
heart of the reason for the energy-mass equivalence just derived; it resides
in the structure of the photon momentum-energy relations $p=\hbar\omega/c$ and
$E=\hbar\omega$, which are also present in the classical theory, as realized
by Einstein and as pointed out in textbooks like the classic
rindler2013essential . More precisely, the electromagnetic energy density
$\rho\equiv(E^{2}+B^{2})/(8\pi)$ and the electromagnetic Poynting
vector—momentum density of the electromagnetic
radiation—$P_{i}\equiv(\vec{E}\times\vec{B})_{i}/(4\pi c)$ are such that for
radiation (where $|E|=|B|$ and $\vec{B}\perp\vec{E}$) one has that
$|\vec{P}|=E^{2}/(4\pi c)$ and $\rho=E^{2}/(4\pi)$, which confirms the
energy-mass equivalence via the relationship
$({\rm energy}/c^{2})\times{\rm velocity}={\rm momentum}$
for light. In what follows we will realize this in a clear-cut fashion by
making appeal only to the structure of Maxwell equations without needing the
details of a particular solution. As long as the radiation is confined inside
the box in a stationary configuration the energy-mass formula will follow.
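The momentum-energy relation quoted above for radiation can be checked with a
few lines of numerical vector algebra. The example below (Gaussian units, an
arbitrary plane-wave-like field configuration with $|E|=|B|$ and
$\vec{B}\perp\vec{E}$) verifies $|\vec{P}|=\rho/c$:

```python
import numpy as np

c = 1.0                        # units with c = 1
E = np.array([0.0, 2.5, 0.0])  # electric field along y (arbitrary amplitude)
B = np.array([0.0, 0.0, 2.5])  # magnetic field along z: |B| = |E|, B ⊥ E

rho = (E @ E + B @ B)/(8*np.pi)  # energy density (E^2 + B^2)/(8*pi)
g = np.cross(E, B)/(4*np.pi*c)   # momentum density (Poynting vector / c^2)

# For radiation: |g| = rho/c, i.e. (energy/c^2) * c = momentum
```

Breaking either condition ($|E|\neq|B|$ or $\vec{B}\not\perp\vec{E}$) spoils
the equality, which is why the argument applies to radiation specifically.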
## III Rindler coordinates
We first need to get familiar with the description of an accelerated frame
that will be used to represent those observers that are at rest with respect
to the idealized box model of a composite particle made of confined
electromagnetic radiation (or massless scalar field radiation). We will
consider a box full of radiation (electromagnetic fields or massless scalar
fields) in Minkowski spacetime whose metric in inertial coordinates takes the
standard form
$ds^{2}=-c^{2}dt^{2}+dx^{2}+dy^{2}+dz^{2}.$ (4)
Inertial time translations define an isometry of flat spacetime so that
$\xi^{\rm lab}\equiv\partial_{ct}$ satisfies a covariant equation known as the
Killing equation 222 The Killing equation follows from the fact that the Lie
derivative of the metric along a vector field defining an isometry vanishes:
${\mathcal{L}}_{\xi^{\rm lab}}\eta_{ab}=2\nabla_{(a}\xi^{\rm lab}_{b)}=0$
Wald:1984rg .
$\nabla_{(a}\xi^{\rm lab}_{b)}=0.$ (5)
In order to describe the uniformly accelerating box in the $x$-direction it
will be convenient to introduce Rindler coordinates PhysRev.119.2082 that are
related to the inertial coordinates by
$ct=\bar{x}\sinh(\tau),\qquad x=\bar{x}\cosh(\tau)$ (6)
so that the flat metric becomes
$ds^{2}=-\bar{x}^{2}d\tau^{2}+d\bar{x}^{2}+dy^{2}+dz^{2}.$ (7)
These new coordinates are those associated with uniformly accelerated
observers Wald:1984rg . The inverse transformation is
$\bar{x}=\sqrt{x^{2}-c^{2}t^{2}}$ (8)
$\tau={\rm arctanh}\left(\frac{ct}{x}\right).$ (9)
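The coordinate change (6) and the resulting metric (7) are easy to verify
symbolically. A minimal sympy sketch pulls the flat line element back through
(6):

```python
import sympy as sp

tau = sp.symbols('tau', real=True)
xbar, dtau, dxbar = sp.symbols('xbar dtau dxbar', positive=True)

# Rindler -> inertial map, Eq. (6): ct = xbar*sinh(tau), x = xbar*cosh(tau)
ct = xbar*sp.sinh(tau)
x = xbar*sp.cosh(tau)

# Differentials of the inertial coordinates
d_ct = sp.diff(ct, tau)*dtau + sp.diff(ct, xbar)*dxbar
d_x = sp.diff(x, tau)*dtau + sp.diff(x, xbar)*dxbar

# Flat line element -d(ct)^2 + dx^2 in the new coordinates
ds2 = sp.simplify(sp.expand(-d_ct**2 + d_x**2))
# ds2 reduces to -xbar**2*dtau**2 + dxbar**2, reproducing Eq. (7)
```

The cross terms $d\tau\,d\bar{x}$ cancel identically, and the identity
$\cosh^{2}\tau-\sinh^{2}\tau=1$ does the rest.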
For later use it is important to write $\xi^{a}_{\rm lab}=\partial_{ct}^{a}$
in terms of Rindler coordinates; from (8) and (9) we get
$\xi^{a}_{\rm lab}=\partial_{ct}^{a}=\frac{1}{c}\frac{\partial\tau}{\partial t}\partial^{a}_{\tau}+\frac{1}{c}\frac{\partial\bar{x}}{\partial t}\partial^{a}_{\bar{x}}=\gamma(\tau)\left(\frac{1}{\bar{x}}\partial^{a}_{\tau}-\frac{v(\tau)}{c}\partial^{a}_{\bar{x}}\right),$ (10)
where we introduced the relativistic gamma factor
$\gamma(\tau)=\cosh(\tau)=(1-\beta^{2})^{-1/2}$ and
$\beta(\tau)=\tanh(\tau)=v(\tau)/c$. Also for later use, the 4-volume form
(see for instance Appendix B in Wald:1984rg ) in terms of Rindler coordinates
is
$dv^{\scriptscriptstyle(4)}=\bar{x}\,d\bar{x}\,dy\,dz\,d\tau,$ (11)
and the $3$-volume density for the simultaneity surfaces $\tau=$ constant is
$d\Sigma_{\tau}=d\bar{x}\,dy\,dz.$ (12)
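The measure in (11) is just the Jacobian of the map (6); a quick symbolic
check:

```python
import sympy as sp

tau = sp.symbols('tau', real=True)
xbar = sp.symbols('xbar', positive=True)

# Jacobian of (ct, x) = (xbar*sinh(tau), xbar*cosh(tau)) w.r.t. (tau, xbar)
J = sp.Matrix([
    [sp.diff(xbar*sp.sinh(tau), tau), sp.diff(xbar*sp.sinh(tau), xbar)],
    [sp.diff(xbar*sp.cosh(tau), tau), sp.diff(xbar*sp.cosh(tau), xbar)],
])
detJ = sp.simplify(J.det())
# det J = xbar, reproducing dv^(4) = xbar dxbar dy dz dtau
```

The $y$ and $z$ directions are untouched by the transformation, so the full
4-volume element picks up exactly this factor of $\bar{x}$.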
Since the metric does not depend on $\tau$,
$\xi^{a}_{\rm box}\equiv\partial_{\tau}^{a}=\bar{x}\cosh\tau\,\partial_{ct}^{a}+\bar{x}\sinh\tau\,\partial_{x}^{a}$ (13)
is a Killing vector too
$\nabla_{(a}\xi^{\rm box}_{b)}=0.$ (14)
The subscript “box” is natural because this Killing field is associated with
the time-translation invariance of the uniformly accelerating observers at
rest with respect to the box that will contain the confined radiation, as
described in the following section (the isometry corresponding to this Killing
field is the one associated with the invariance of the flat Minkowski metric
under boosts).
The four-velocity $u^{a}_{\rm box}$ of these observers is proportional to the
Killing vector, i.e., given by $u^{a}_{\rm box}=\xi_{\rm box}^{a}/{|\xi_{\rm
box}|}$; explicitly,
$u^{a}_{\rm box}=\cosh\tau\,\partial_{ct}^{a}+\sinh\tau\,\partial_{x}^{a}=\gamma(\tau)\partial_{ct}^{a}+\gamma(\tau)\beta(\tau)\partial_{x}^{a}.$ (15)
The previous expression for $u^{a}_{\rm box}$ tells us that the stationary
observers following the Killing trajectories of (13) correspond to orbits of
boosts in the $x$-direction with rapidity given by $\tau$. One can easily
compute their acceleration and find that it is constant (independent of
$\tau$) and given by
$a_{a}^{\rm box}=c^{2}u^{b}_{\rm box}\nabla_{b}u_{a}^{{\rm box}}=c^{2}\frac{\xi^{b}_{\rm box}}{|\xi_{\rm box}|}\nabla_{b}\left(\frac{\xi_{a}^{\rm box}}{|\xi_{\rm box}|}\right)=c^{2}\nabla_{a}\log(|\xi_{\rm box}|)=c^{2}\frac{(d\bar{x})_{a}}{\bar{x}},$ (16)
where to get the final equality we have used the Killing equation (14).
Therefore, (15) defines the four-velocity field of a box whose bulk points
move along constant-acceleration trajectories with
$|a_{\rm box}|=\frac{c^{2}}{\bar{x}}.$ (17)
Notice that even when all points of the box move at the same speed (the box
behaves as a rigid box), different points have different accelerations; e.g.,
the bottom and the top of the box accelerate differently so that the box
remains unstretched (the distance between the top and the bottom of the box
remains fixed). This might be surprising at first sight, but it is one of
those counter-intuitive facts of Lorentzian geometry. All the same, when the
concept of the acceleration of the box is needed (only in the material
presented in the Appendices) we will work under the assumption that
$\frac{|a_{\rm box}|L}{c^{2}}\ll 1$ (already used in Section II), in which
case a single constant notion of acceleration can be assigned, in an
approximate manner, to the whole box.
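The constancy of the proper acceleration (17) along each orbit, and its
$1/\bar{x}$ dependence, can also be checked directly from the worldline at
fixed $\bar{x}$. A sympy sketch (using that proper time along the orbit is
$\bar{x}\tau/c$, since $ds^{2}=-\bar{x}^{2}d\tau^{2}$ there):

```python
import sympy as sp

tau = sp.symbols('tau', real=True)
xbar, c = sp.symbols('xbar c', positive=True)

# Worldline at fixed xbar: (ct, x) = (xbar*sinh(tau), xbar*cosh(tau))
ct = xbar*sp.sinh(tau)
x = xbar*sp.cosh(tau)

# d/d(proper time) = (c/xbar) d/dtau on this orbit
u_ct = sp.diff(ct, tau)*c/xbar    # four-velocity components
u_x = sp.diff(x, tau)*c/xbar
a_ct = sp.diff(u_ct, tau)*c/xbar  # four-acceleration components
a_x = sp.diff(u_x, tau)*c/xbar

norm_u = sp.simplify(-u_ct**2 + u_x**2)  # = -c**2 (proper normalization)
a2 = sp.simplify(-a_ct**2 + a_x**2)      # = c**4/xbar**2, so |a| = c**2/xbar
```

The Minkowski norm of the four-acceleration is $\tau$-independent, confirming
that each orbit has constant proper acceleration $c^{2}/\bar{x}$.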
Figure 2: An accelerated box in Minkowski space-time. $\Sigma$ represents
constant-time surfaces, while $R$ is the region inside the box. The hyperbolae
correspond to the trajectories of the walls perpendicular to the motion.
Figure 3: A box of accelerated radiation on the left. On the right, the
equivalent situation of radiation in a box in the vicinity of the Earth. The
energy-mass equivalence holds for the confined radiation. The only assumptions
are the stationarity of the radiation in the rest frame of the box and the
validity of the field equations.
## IV The mass of Maxwell fields confined in a box
In this section we derive the mass-energy equivalence formula from the
inertial properties of a box made of perfectly conducting walls filled with
electromagnetic radiation. We consider Maxwell theory and its solutions in
Rindler coordinates, introduced in the previous section, and impose the
boundary conditions that represent the presence of perfectly conducting walls
in that frame. However, we will see that very little information about the
solutions is needed (and this is one of the nice features of the result). More
precisely, the only explicit thing that enters the proof below is that the
electric field, in the rest frame of the accelerating box, must be
perpendicular to the walls of the box. This is well known for a box in
inertial motion; the fact that it remains true in the uniformly accelerated
case is perhaps physically clear but technically less obvious. The proof is
given in Appendix A.1, whose main results are discussed in A.2; the basic
mathematical reason is that Maxwell’s equations maintain very much the same
structure in the accelerated frame as in an inertial one.
Let us now compute the energy content of the box, as measured in the
laboratory frame, at a given simultaneity surface of constant Rindler time
$\tau$ (see Figure 2). In order to do this we introduce the stress-energy-
momentum tensor (energy-momentum tensor from now on) $T_{ab}$ of the Maxwell
field and the current
$j^{\rm lab}_{a}\equiv-T_{ab}\xi_{\rm lab}^{b}.$ (18)
The energy momentum tensor for electromagnetism is given in (45); however, its
explicit form is not important at the moment. By definition of the current in
the previous equation, the energy content of the box of confined radiation at
a given $\tau$ as measured in the lab frame is
$E_{\rm box}(\tau)=-{c^{2}}\int_{\Sigma_{\tau}}j^{\rm lab}_{a}n^{a}\
d\Sigma_{\tau},$ (19)
where $d\Sigma_{\tau}=d\bar{x}\,dy\,dz$ is the volume density of the
hypersurface $\tau=$ constant as introduced in (12) and $n^{a}$ is the normal
to these hypersurfaces. Explicitly, $n^{a}=\bar{x}^{-1}\partial^{a}_{\tau}$,
which when replaced in the previous equation gives
$E_{\rm box}(\tau)=-{c^{2}}\int_{\Sigma_{\tau}}j^{\rm lab}_{a}n^{a}d\Sigma_{\tau}={c}\int\bar{x}^{-1}T_{t\tau}d\bar{x}\,dy\,dz.$ (20)
Now, from (10) we get
$E_{\rm box}(\tau)=\gamma(v)c^{2}\int_{\Sigma_{\tau}}\bar{x}^{-2}T_{\tau\tau}d\bar{x}\,dy\,dz-v{c}\,\gamma(v)\int_{\Sigma_{\tau}}\bar{x}^{-1}T_{\tau\bar{x}}d\bar{x}\,dy\,dz.$ (21)
As we show below, the second term in the previous equation vanishes if we
demand that the radiation inside the box be stationary in its rest frame,
namely that the space components of the total linear momentum of the radiation
vanish in the rest frame of the box. Strictly speaking, we only need the total
momentum in the $\bar{x}$ direction to vanish. However, assuming we want our
box to represent a simple model of a composite particle that could be
accelerated in arbitrary directions, the vanishing of all components of the
space part of the linear momentum in the rest frame of the box is a natural
demand. In order to show that this is equivalent to the vanishing of
the second term in (21), let us now consider the energy current associated
with the frame in which the box is at rest. We have
$j^{\rm box}_{a}=-T_{ab}u_{\rm
box}^{b}=-\mathrm{bar}x^{-1}T_{ab}\partial_{\tau}^{b},$ (22)
where $u^{a}_{\rm box}=\xi^{a}_{\rm box}/|\xi_{\rm
box}|=\bar{x}^{-1}\partial_{\tau}$ as given in (15) is the four-velocity
of the box. The momentum density of the radiation inside the box along the
direction $\partial_{\bar{x}}$ as measured by the observer $u^{a}_{\rm
box}$ at a time $\tau$ is given by $p_{\hat{x}}(\tau)=j^{\rm
box}_{a}\partial_{\bar{x}}^{a}$. The condition for the vanishing of the
total momentum of the radiation in the rest frame of the box in the
$\bar{x}$ direction takes the form 333Notice that, as the following
integral involves the notion of ‘$\bar{x}$-direction’ at different
points in the box, one would in principle need to parallel transport all these
vectors to some reference point in order to integrate (sum up) all the
contributions. It is easy to check that such parallel transport is trivial in
this case.
$\int_{\Sigma_{\tau}}j^{\rm
box}_{a}\partial_{\bar{x}}^{a}dv^{\scriptscriptstyle(3)}=-\int_{\Sigma_{\tau}}\bar{x}^{-1}T_{\tau\bar{x}}d\bar{x}dydz=0.$
(23)
This is the only real requirement on the solutions of the
electromagnetic field inside the confining box. It has a natural physical
meaning: it restricts the radiation to a stationary configuration, which
reflects the notion of a compact particle-like object that
we have in mind. In more general situations both terms in (21) are
important.
The previous stationarity condition reduces (21) to
$\displaystyle E_{\rm box}(\tau)$ $\displaystyle=$
$\displaystyle\gamma(\tau)\int_{\Sigma_{\tau}}\frac{c^{2}}{\bar{x}^{2}}T_{\tau\tau}d\bar{x}dydz$
(24)
Now we show that the integral in the previous equation does not depend on
$\Sigma_{\tau}$. A simple calculation of the divergence of the current (22)
yields
$\displaystyle\nabla^{a}j^{\rm
box}_{a}=-\nabla^{a}\left(\bar{x}^{-1}T_{ab}\xi_{\rm box}^{b}\right)$
$\displaystyle=$ $\displaystyle\bar{x}^{-1}\sigma^{a}E^{\rm
box}_{a}\delta_{\rm box}-T_{a\tau}g^{ac}\nabla_{c}\bar{x}^{-1}{}$ (25)
$\displaystyle=$ $\displaystyle\bar{x}^{-2}T_{\tau\bar{x}}$
where in the first line we have used (14) and (48), and in the second line the
fact that $\sigma^{a}E^{\rm box}_{a}=0$ (the electric field in the box frame
must be orthogonal to the surface current for perfectly conducting walls).
Indeed, the electric field $E^{\rm box}_{a}$ at the walls of the box is
orthogonal to the walls of the box while the normal component of the magnetic
field $B^{\rm box}_{a}$ vanishes at the walls. Even though this might be
physically clear, the mathematical proof from the Maxwell equations is tricky
because one is in a non-inertial frame. We present it in Appendices A.1 and
A.2.
As implied by (25), the current $j^{\rm box}_{a}$ is not locally conserved;
nevertheless, when we integrate it over the space-time region $R$ swept by the box
(see Figure 2) we find
$\displaystyle\int_{R}\nabla^{a}j_{a}^{\rm box}dv^{(4)}$ $\displaystyle=$
$\displaystyle\int
d\tau\left(\int_{\Sigma_{\tau}}\bar{x}^{-1}T_{\tau\bar{x}}d\bar{x}dydz\right)=0,$
(26)
where we used (11); the last integral vanishes because the quantity in
the parenthesis vanishes due to the stationarity condition (23). Now, it
follows from the perfect-conductor boundary conditions that
$\left.j^{a}_{\rm box}N_{a}\right|_{\rm walls}=0,$ (27)
where $N^{a}$ is the normal to the walls of the box (see the detailed proof in
Appendix A.2, equation (69)). As a result, the Gauss theorem implies that the flux
across the boundary of the region $R$ receives contributions only from the
spacelike components of the boundary of $R$ (see Figure 2), namely
$0=\int_{R}\nabla^{a}j_{a}^{\rm box}dv^{(4)}=\int_{\Sigma_{2}}j_{a}^{\rm
box}u_{\rm box}^{a}d\Sigma_{2}-\int_{\Sigma_{1}}j_{a}^{\rm box}u_{\rm
box}^{a}d\Sigma_{1},$ (28)
hence the integration of $j_{a}^{\rm box}u_{\rm box}^{a}$ does not depend on
the $\tau=$constant hypersurface $\Sigma_{\tau}$: it is a constant of motion.
What is the physical interpretation of this constant? From the fact that
$u^{a}_{\rm lab}=u^{a}_{\rm box}$ at $\tau=0$ we see that this constant is
nothing but the rest energy of the radiation in the box
$\displaystyle E(0)=-\int_{\Sigma}c^{2}j_{a}^{\rm box}u_{\rm
box}^{a}d\Sigma=\int_{\Sigma}c^{2}T_{ab}u_{\rm box}^{a}u_{\rm
box}^{b}d\Sigma=\int_{\Sigma}c^{2}\bar{x}^{-2}T_{\tau\tau}d\bar{x}dydz.$
(29)
Therefore, equation (24) takes the form
${E_{\rm box}({v})=\gamma(v)E_{\rm box}(0)},$ (30)
where we are now using the direct correspondence $\tau={\rm arctanh}(v/c)$ and
hence trading $\tau$ for the velocity $v$ in the previous expression. Expanding
$\gamma(v)$ to leading order in $v$ we find
$\displaystyle E_{\rm box}(v)=\left(1+\frac{v^{2}}{2c^{2}}\right)E_{\rm
box}(0)+{\mathcal{O}}\left(\frac{v^{4}}{c^{4}}\right)E_{\rm box}(0).$ (31)
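The rapidity correspondence $\tau={\rm arctanh}(v/c)$ and the leading-order expansion in (31) can be checked symbolically. A minimal sketch with sympy (the symbol names are ours, not the paper's):

```python
# Sketch (not from the paper): check gamma(v) = cosh(arctanh(v/c))
# and the leading-order expansion used in eq. (31).
import sympy as sp

v, c = sp.symbols('v c', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# The rapidity tau = arctanh(v/c) turns the Lorentz factor into cosh(tau):
assert sp.simplify(sp.cosh(sp.atanh(v / c)) - gamma) == 0

# Expansion to leading order in v, reproducing eq. (31):
series = sp.series(gamma, v, 0, 4).removeO()
assert sp.simplify(series - (1 + v**2 / (2 * c**2))) == 0
```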
Correspondence with the non-relativistic limit requires
$\boxed{E_{\rm box}(0)=mc^{2}}$ (32)
where $m$ is the rest mass of the confined electromagnetic radiation.
## V The massless scalar field case
The result of the previous section depends only on the field equations, in
a generic manner: no particular solutions need to be
considered for the proof. The first important ingredient is the conservation
of the energy-momentum tensor in the bulk of the box (which follows from the
validity of the field equations). The second is the behaviour of the
divergence of $j_{a}^{\rm box}$ in (25), where the reflecting boundary
conditions constrain the electric field (as measured in the box frame) to be
orthogonal to the walls, and the third is the orthogonality of $j_{a}^{\rm
box}$ to the walls. The last two ingredients follow from the structure of the
field equations at the boundary, as shown in Appendix A.2. The important
thing is that no specific solution needs to be chosen to prove these
properties: they are generic consequences of the equations and the physical
conditions at the idealized walls of the box. Therefore, one would expect
the proof of the previous section to remain valid (with small adjustments) for
any massless matter model. We do not have a general proof of this;
nevertheless, we can at least build up evidence by exhibiting another simple
example: the massless scalar field.
The field equation of a massless scalar field $\phi$ is
$\Box\phi\equiv g^{ab}\nabla_{a}\nabla_{b}\phi=j,$ (33)
where $j$ is a source term (necessary to impose the boundary conditions at the
walls of the box). The stress-energy-momentum tensor is given by
$T_{ab}=\nabla_{a}\phi\nabla_{b}\phi-\frac{1}{2}g_{ab}\
g^{cd}\nabla_{c}\phi\nabla_{d}\phi.$ (34)
Direct calculation of the divergence of the stress-energy-momentum tensor (34)
yields
$\nabla^{a}T_{ab}=j\nabla_{b}\phi,$ (35)
which in the absence of sources vanishes identically. When the radiation is
confined inside a box, made of perfectly reflecting walls (as in the Maxwell
case) surface charges appear. We write
$j=\sigma\delta_{\rm box},$ (36)
where $\sigma$ represents the surface charge density. This is the analog of the
surface electric charges and surface currents in a perfectly conducting wall in
electromagnetism. They are responsible for imposing the reflecting boundary
conditions which, in the present case, boil down to $\phi=0$ at the box walls.
These sources fix the normal derivative of the scalar field: from (33), and
the Gauss law applied to the vector field $\nabla^{a}\phi$, it follows that
$N^{a}\nabla_{a}\phi=\sigma,$ (37)
or simply
$\nabla_{a}\phi=\sigma N_{a}.$ (38)
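The divergence identity (35), $\nabla^{a}T_{ab}=j\nabla_{b}\phi$ with $j=\Box\phi$, can be verified directly in the simplest setting. A sketch in flat 1+1-dimensional Minkowski space (our own symbol names, sympy assumed available):

```python
# Sketch: in flat 1+1 Minkowski space, check that the scalar-field
# stress tensor (34) satisfies  d^a T_ab = (Box phi) d_b phi,  i.e. eq. (35).
import sympy as sp

t, x = sp.symbols('t x')
phi = sp.Function('phi')(t, x)
g = sp.diag(-1, 1)            # Minkowski metric, signature (-,+)
ginv = g.inv()
coords = [t, x]

dphi = [sp.diff(phi, xi) for xi in coords]
grad2 = sum(ginv[c, d] * dphi[c] * dphi[d]
            for c in range(2) for d in range(2))
# T_ab = d_a phi d_b phi - (1/2) g_ab (d phi)^2, eq. (34)
T = sp.Matrix(2, 2, lambda a, b:
              dphi[a] * dphi[b] - sp.Rational(1, 2) * g[a, b] * grad2)

box_phi = sum(ginv[a, b] * sp.diff(phi, coords[a], coords[b])
              for a in range(2) for b in range(2))

residuals = []
for b in range(2):
    divT_b = sum(ginv[a, c] * sp.diff(T[c, b], coords[a])
                 for a in range(2) for c in range(2))
    residuals.append(sp.simplify(divT_b - box_phi * dphi[b]))
assert all(r == 0 for r in residuals)
```

The cancellation is the same index gymnastics as in the text: the cross terms $\partial^{a}\phi\,\partial_{a}\partial_{b}\phi$ from the two pieces of $T_{ab}$ cancel, leaving only $(\Box\phi)\partial_{b}\phi$.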
As in the case of Maxwell fields, we start from the definition of the energy
content of the box in the lab frame (19). The argument follows the same lines
from (19) to (24), where the field equations do not really enter. Things change
slightly when considering the current (22), whose divergence becomes
$\displaystyle\nabla^{a}j^{\rm
box}_{a}=-\nabla^{a}\left(\bar{x}^{-1}T_{ab}\xi_{\rm box}^{b}\right)$
$\displaystyle=$ $\displaystyle-\bar{x}^{-1}\sigma(u^{a}_{\rm
box}\nabla_{a}\phi)\delta_{\rm
box}-T_{a\tau}g^{ac}\nabla_{c}\bar{x}^{-1}{}$ (39) $\displaystyle=$
$\displaystyle\bar{x}^{-2}T_{\tau\bar{x}},$
due to the fact that $u^{a}_{\rm
box}\nabla_{a}\phi=\bar{x}^{-1}\partial_{\tau}\phi=0$ at the boundary
(either because $\phi=0$ for all $\tau$ or, equivalently, due to equation (38)
and the fact that $u_{\rm box}^{a}N_{a}=0$). Therefore, the equivalent of
equation (26) is also valid for the scalar field as long as the stationarity
condition (23) is satisfied for the scalar field inside the box. Now, the
validity of equation (28) depends on the validity of $j^{\rm box}_{a}N^{a}=0$.
Using the definition of the energy-momentum tensor and the fact that $u_{\rm
box}^{a}N_{a}=0$ we see that
$\left.j^{\rm box}_{a}N^{a}\right|_{\rm walls}=-(u_{\rm
box}^{a}\nabla_{a}\phi)(N^{b}\nabla_{b}\phi)=0,$ (40)
due to the boundary condition $u^{a}_{\rm
box}\nabla_{a}\phi=\bar{x}^{-1}\partial_{\tau}\phi=0$. The rest of the
argument from (28) to the main result (32) now follows exactly as in the
Maxwell case.
## VI Conclusions
We have shown that confined radiation in an idealized box with walls imposing
perfectly reflecting boundary conditions for both Maxwell electromagnetic
fields and massless scalar fields has an inertial mass given by its energy
content divided by the square of the speed of light. Our calculation relies
entirely on general properties of the solution of the field equations and the
properties of the energy momentum tensor of the confined radiation. The only
explicit requirement on the solutions is that the radiation be in a stationary
state of vanishing total linear momentum in the frame of the box (not moving
inside the box). This assumption is compatible with the idea of the box
representing a toy model of a composite particle (an ultra-simplified
classical model of a proton or a neutron). The calculations done in this
solvable simple model of a composite particle have deep conceptual implications,
making it natural to entertain the possibility that all mass parameters in our
physical models could have a more fundamental description in terms of more
basic degrees of freedom (mass as an emergent notion).
The proof of the main claim is straightforward once the relevant equations are
written in covariant form. Even though no gravitational field is invoked, the
result follows naturally from the application of the mathematics of general
relativity. On the physical front, the naturalness of the energy-momentum
tensor as the source of gravity is made more transparent by our pre-
gravitational analysis. For these reasons we expect the paper to be useful from
a pedagogical perspective.
## VII Acknowledgements
The basic idea of this paper started in discussions with a group of students of
the first-year master in theoretical physics at Aix-Marseille University. We
thank F. Balfour, M. L. Frisch Sbarra, A. Vesperini, and S. Charfi for
discussions.
## Appendix A Maxwell equations and boundary conditions
Maxwell equations in covariant form and in the presence of sources are
$\displaystyle\nabla^{a}F_{ab}$ $\displaystyle=$ $\displaystyle-{4\pi}J_{b}$
(41) $\displaystyle\nabla_{a}F_{bc}+\nabla_{b}F_{ca}+\nabla_{c}F_{ab}$
$\displaystyle=$ $\displaystyle 0.$ (42)
where $F_{ab}=-F_{ba}$ is the electromagnetic field strength, and $J_{a}$ is
the electric four-current. For an arbitrary observer with four velocity
$u^{a}$ the electric field is given by
$E_{a}=F_{ab}u^{b}$ (43)
while the magnetic field is
$B_{a}=-\frac{1}{2}\epsilon_{abcd}F^{cd}u^{b}.$ (44)
The stress-energy-momentum tensor of the electromagnetic field is
$T_{ab}=\frac{1}{4\pi}(F_{ac}F^{\ c}_{b}-\frac{1}{4}g_{ab}\ F_{cd}F^{cd}).$
(45)
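Two standard properties of (45), symmetry and tracelessness, can be confirmed numerically for a generic antisymmetric field strength. A quick sketch of ours with numpy:

```python
# Sketch: verify that T_ab of eq. (45) is symmetric and traceless
# for a generic antisymmetric F_ab in Minkowski space.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric (its own inverse)
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
F = A - A.T                             # generic field strength F_ab

F_mixed = F @ eta                       # F_b{}^c = F_bd eta^{dc}
F_sq = np.sum(F * (eta @ F @ eta))      # F_cd F^cd
T = (F @ F_mixed.T - 0.25 * eta * F_sq) / (4 * np.pi)   # eq. (45)

assert np.allclose(T, T.T)              # symmetric
assert abs(np.sum(eta * T)) < 1e-12     # traceless: eta^{ab} T_ab = 0
```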
It follows from the validity of (41) and (42) that the divergence of $T_{ab}$
is given by
$\nabla^{a}T_{ab}=J^{a}F_{ab}.$ (46)
We will assume that the electromagnetic field is confined in a box without
charges inside. Therefore, $J^{a}=0$ in the bulk of the box. However, boundary
currents must be present to ensure that the fields vanish outside the
confining box (they are responsible for enforcing reflecting boundary
conditions). We assume that the walls are made of a perfect conductor with
infinitely light charge carriers that can move freely. We will hence write the
current as
$J^{a}=\sigma^{a}\delta_{\rm box},$ (47)
where $\sigma^{a}$ is the surface current and $\delta_{\rm box}$ denotes the
Dirac distribution with support on the walls of the box. Thus from (46) we
have
$\nabla^{a}T_{ab}=\sigma^{a}F_{ab}\delta_{\rm box}.$ (48)
The energy-momentum current associated with the lab frame ($\xi^{a}_{\rm
lab}=\partial_{ct}^{a}$) is
$j^{\rm lab}_{a}\equiv-T_{ab}\xi_{\rm lab}^{b}$ (49)
which is not conserved because of the contributions of the boundary degrees of
freedom mentioned above. In fact, from Maxwell equations we get
$\nabla_{a}j_{\rm lab}^{a}=-J^{a}F_{ab}\xi_{\rm lab}^{b}=-\sigma^{a}E^{{\rm
lab}}_{a}\delta_{\rm box}$ (50)
where we used that $\nabla_{a}\xi^{\rm lab}_{b}+\nabla_{b}\xi^{\rm lab}_{a}=0$
because $\xi^{\rm lab}_{a}$ is a Killing field, recall (5). Note that the
right hand side of the previous equation vanishes inside the box where
$J^{a}=0$. If the box is at rest then $\sigma^{a}E^{{\rm lab}}_{a}=0$ on the
boundary due to the perfect-conductor boundary conditions444Otherwise the
charges, which can move freely under the electric force, would rearrange and
neutralise any parallel component of the electric field. and the current is
conserved. However, one can have $\sigma^{a}E^{{\rm lab}}_{a}\not=0$ in
general situations where the box is moving; this possibility is important and
plays a role in Appendix B.
### A.1 Maxwell equations in the accelerated frame of the box
An especially interesting case for the present paper is that of a uniformly
accelerated box. Thus we analyse the content of Maxwell theory in terms of the
electric and magnetic fields defined in an accelerating frame.
Given the four velocity of a family of observers at rest with respect to the
accelerating box $u_{\rm box}^{a}$–recall (15)–one can write the
electromagnetic field strength as
$F_{ab}=-2E^{\rm box}_{[a}u^{\rm box}_{b]}-\epsilon_{abcd}B_{\rm
box}^{c}u_{\rm box}^{d}$ (51)
where $E^{\rm box}_{a}=F_{ab}u_{\rm box}^{b}$ and $B^{\rm
box}_{a}=-\frac{1}{2}\epsilon_{abcd}F^{cd}u_{\rm box}^{b}$ 555Indeed, the
previous expression is valid for any timelike vector field $u^{a}$ of four-
velocities representing a field of observers in spacetime.. It follows from
the skew symmetry of $F_{ab}$ that
$B^{\rm box}_{a}u^{a}_{\rm box}=0=E^{\rm box}_{a}u^{a}_{\rm box}.$ (52)
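The decomposition (51) and the orthogonality relations (52) can be checked numerically for a static observer in flat space. A sketch with our own conventions ($\epsilon_{0123}=+1$, signature $(-,+,+,+)$):

```python
# Sketch: build F_ab from spatial E and B via the decomposition (51) for a
# static observer, then recover E and B by projection and check eq. (52).
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eps = np.zeros((4, 4, 4, 4))            # Levi-Civita symbol, eps_{0123} = +1
for p in itertools.permutations(range(4)):
    eps[p] = perm_sign(p)

u_up = np.array([1.0, 0.0, 0.0, 0.0])   # static observer u^a
u_lo = eta @ u_up
rng = np.random.default_rng(2)
E = np.concatenate([[0.0], rng.normal(size=3)])   # E_a, purely spatial
B = np.concatenate([[0.0], rng.normal(size=3)])   # B_a, purely spatial

# F_ab = -(E_a u_b - E_b u_a) - eps_{abcd} B^c u^d   (eq. (51))
F = -(np.outer(E, u_lo) - np.outer(u_lo, E)) \
    - np.einsum('abcd,c,d->ab', eps, eta @ B, u_up)

E_rec = F @ u_up                                   # E_a = F_ab u^b
F_up = eta @ F @ eta                               # F^{cd}
B_rec = -0.5 * np.einsum('abcd,b,cd->a', eps, u_up, F_up)

assert np.allclose(E_rec, E) and np.allclose(B_rec, B)   # roundtrip
assert abs(E_rec @ u_up) < 1e-12 and abs(B_rec @ u_up) < 1e-12   # eq. (52)
```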
Now we write Maxwell equations (41) in a way that leads to the analogs of the
Gauss and Ampère integral identities familiar from inertial frames, but now
valid in an accelerated frame. This step is rather technical but very
important; a general treatment in curved spacetimes can be found in
10.1093/mnras/198.2.339 .
First notice that $u^{\rm box}_{a}=g_{ab}u^{a}_{\rm
box}=-{\bar{x}}\nabla_{a}\tau$. This suggests the introduction of a new
quantity, $\overline{u}_{a}\equiv-\nabla_{a}\tau$, which has the following
nice properties:
$\displaystyle\nabla_{a}\overline{u}^{a}$ $\displaystyle=$ $\displaystyle 0{}$
$\displaystyle\nabla_{a}\overline{u}_{b}$ $\displaystyle=$
$\displaystyle\nabla_{b}\overline{u}_{a}{}$ $\displaystyle
h^{c}_{a}h^{d}_{b}\nabla_{c}\overline{u}_{d}$ $\displaystyle=$ $\displaystyle
0,$ (53)
where $h_{ab}=g_{ab}+u^{\rm box}_{a}u^{\rm box}_{b}$ is the spatial metric of
the box simultaneity slices, the $\tau=$constant slices in Figure 2. The first
property says that the $\overline{u}^{a}$ congruence is divergence free, the
second implies that it is surface forming (trivially, from the fact that
$\overline{u}_{a}$ is an exact form normal to the $\tau=$constant surfaces),
and the last property implies that it is shear free Wald:1984rg . Equation
(51) can be written as
$F_{ab}=-2\overline{E}_{[a}\overline{u}_{b]}-\epsilon_{abcd}\overline{B}^{c}\overline{u}^{d}$
(54)
where $\overline{E}^{a}={\bar{x}}E_{\rm box}^{a}$ and
$\overline{B}^{a}={\bar{x}}B_{\rm box}^{a}$. Maxwell equation (41) becomes
$\displaystyle-4\pi J_{b}$ $\displaystyle=$
$\displaystyle-\nabla^{a}(\overline{E}_{a}\overline{u}_{b})+\nabla^{a}(\overline{E}_{b}\overline{u}_{a})-\epsilon_{abcd}\nabla^{a}(\overline{B}^{c}\overline{u}^{d})$
$\displaystyle=$
$\displaystyle-(\nabla^{a}\overline{E}_{a})\overline{u}_{b}-\overline{E}^{a}\nabla_{a}\overline{u}_{b}+\overline{u}^{a}\nabla_{a}\overline{E}_{b}-\epsilon_{abcd}(\nabla^{a}\overline{B}^{c})\overline{u}^{d}{}$
$\displaystyle+$
$\displaystyle\underbrace{\overline{E}_{b}\nabla^{a}\overline{u}_{a}-\epsilon_{abcd}\overline{B}^{c}(\nabla^{a}\overline{u}^{d})}_{=0},{}$
where for the moment we just used the Leibniz rule and wrote at the end the
two terms that vanish identically due to the first two identities in (53).
The next step is to separate the previous equation into its part parallel to
$u^{a}_{\rm box}$ (projecting with $-u^{a}_{\rm box}u^{\rm box}_{b}$) and its
normal or spatial part (which we can obtain by projecting with
$h^{a}_{b}=\delta^{a}_{b}+u_{\rm box}^{a}u^{\rm box}_{b}$).
Before doing the projections we notice that
$\displaystyle-4\pi J_{b}$ $\displaystyle=$
$\displaystyle-(\nabla^{a}\overline{E}_{a})\overline{u}_{b}-\overline{E}^{a}\nabla_{a}\overline{u}_{b}+\overline{u}^{a}\nabla_{a}\overline{E}_{b}-\overbrace{\epsilon_{abcd}(\nabla^{a}\overline{B}^{c})\overline{u}^{d}}^{{\rm
orthogonal\ to}\ \overline{u}^{a}}{}$ (56) $\displaystyle=$
$\displaystyle-(\nabla^{a}\overline{E}_{a})\overline{u}_{b}-\overline{E}^{a}\nabla_{b}\overline{u}_{a}+\overline{u}^{a}\nabla_{a}\overline{E}_{b}-\epsilon_{abcd}(\nabla^{a}\overline{B}^{c})\overline{u}^{d}{}$
$\displaystyle=$
$\displaystyle-(\nabla^{a}\overline{E}_{a})\overline{u}_{b}+\overline{u}^{a}\nabla_{b}\overline{E}_{a}+\overline{u}^{a}\nabla_{a}\overline{E}_{b}-{\epsilon_{abcd}(\nabla^{a}\overline{B}^{c})\overline{u}^{d}}$
where in the second line we used the second equation in (53) for the second
term, and in the third line we used that $\overline{E}_{a}\overline{u}^{a}=0$,
cf. (52). Let us now project along $u_{\rm box}^{a}$: recalling that $u^{a}_{\rm
box}\overline{u}_{a}=u^{a}_{\rm box}(u^{\rm
box}_{a}{\bar{x}}^{-1})=-{\bar{x}}^{-1}$ we get
$\displaystyle-4\pi{\bar{x}}J_{b}u^{b}_{\rm box}$ $\displaystyle=$
$\displaystyle(\nabla^{a}\overline{E}_{a})+2u_{\rm box}^{a}u_{\rm
box}^{b}\nabla_{b}\overline{E}_{a}{}$ (57) $\displaystyle=$
$\displaystyle(g^{ab}+u_{\rm box}^{a}u_{\rm
box}^{b})\nabla_{b}\overline{E}_{a}+u_{\rm box}^{a}u_{\rm
box}^{b}\nabla_{b}\overline{E}_{a}{}$ $\displaystyle=$
$\displaystyle\underbrace{(g^{ab}+u_{\rm box}^{a}u_{\rm
box}^{b})\nabla_{b}\overline{E}_{a}}_{\equiv D^{a}\overline{E}_{a}}-(u_{\rm
box}^{b}\nabla_{b}u^{\rm box}_{a})\overline{E}^{a},$
where in the last line we used $\overline{E}_{a}\overline{u}^{a}=0$ again and
we have used the definition of the spatial covariant derivative $D_{a}$, such
that $D_{a}h_{bc}=0$ Wald:1984rg . Substituting the expression (16) for the
acceleration, and $\overline{E}_{a}={\bar{x}}E_{a}^{\rm box}$, in the
last equation we obtain the familiar Gauss law
$-4\pi J_{b}u^{b}_{\rm
box}=\frac{1}{{\bar{x}}}D^{a}({\bar{x}}E_{a}^{\rm
box})-\frac{D_{a}{\bar{x}}}{{\bar{x}}}E^{a}_{\rm box}$ (58)
simplifying
$\boxed{-4\pi J_{a}u_{\rm box}^{a}=D^{a}(E^{\rm box}_{a}),}$ (59)
which has the form of the usual Gauss law in an inertial frame. Indeed, it is
easy to show that the Gauss law holds in its usual form in arbitrary frames
(see Problem 2 in Chapter 4 of Wald:1984rg ). In the present case the
technical complications of the previous lines are justified not by the
objective of obtaining the Gauss law but rather by the aim of getting the analog
of Ampère's law, which follows from the spacelike part of the previous
equations.
Therefore, we need to project (56) using $h^{a}_{b}=\delta^{a}_{b}+u_{\rm
box}^{a}u^{\rm box}_{b}$. But first we notice that the first term projects to
zero while the last term projects to itself. Let us analyse the remaining
terms before projecting. There is
$\overline{u}^{a}\nabla_{b}\overline{E}_{a}=-\overline{E}^{a}\nabla_{b}\overline{u}_{a},$
(60)
which in the form on the right clearly projects to zero according to the third
equation in (53) and the fact that $\overline{E}^{a}$ is purely spacelike.
Now let us analyse the remaining term
$\overline{u}^{a}\nabla_{a}\overline{E}_{b}={\bar{x}}^{-1}(\underbrace{u_{\rm
box}^{a}\nabla_{a}\overline{E}_{b}+\overline{E}_{a}\nabla_{b}u_{\rm
box}^{a}}_{{\mathcal{L}}_{u_{\rm
box}}\overline{E}_{b}}-\overline{E}_{a}\nabla_{b}u_{\rm box}^{a}),$ (61)
where we have added and subtracted the same term on the right just to recover
the expression of the Lie derivative ${\mathcal{L}}_{u_{\rm
box}}\overline{E}_{b}$, which is the natural derivative along the world-lines
of the box observers. Notice that the term we added and subtracted projects to
zero (it is purely timelike) due to the third equation in (53). The Lie
derivative in the previous equation corresponds to a derivative with respect
to the natural proper time ${\rm T}\equiv{\bar{x}}\tau$ of the electric
field $\overline{E}_{a}$. Its space projection is the proper-time
Fermi-transport derivative 10.1093/mnras/198.2.339 , which we denote
$D_{\rm T}\overline{E}_{a}\equiv h_{a}^{b}({\mathcal{L}}_{u_{\rm
box}}\overline{E}_{b})=h_{a}^{b}(\overline{u}^{c}\nabla_{c}\overline{E}_{b}),$
(62)
where the previous equivalence of derivatives is valid in our simple case due
to (53). For the general relationship among these see 10.1093/mnras/198.2.339
. Thus, finally, putting all this together and projecting onto the space part
of (56) we get
$\boxed{-4\pi{\bar{x}}J^{\rm space-part}_{b}=D_{\rm
T}({\bar{x}}E^{\rm box})_{b}-(D\times{\bar{x}}B^{\rm box})_{b}},$
(63)
where (as in (59)) $D_{a}$ is the 3d covariant derivative compatible with the
space metric $h_{ab}$. Finally, the homogeneous Maxwell equations (42) can be
written as
$\nabla^{a}F^{\star}_{ab}=0$ (64)
where
$F^{\star}_{ab}=\frac{1}{2}\epsilon_{abcd}F^{cd}=2B^{\rm box}_{[a}u^{\rm
box}_{b]}+\epsilon_{abcd}E_{\rm box}^{c}u_{\rm box}^{d}.$ (65)
The previous expression is the analog of $F_{ab}$ as given in (51) with $B_{a}\to-
E_{a}$. Since $B_{a}u^{a}_{\rm box}=0$ as well, and this was the only requirement
entering the derivation of (63) and (59) in addition to the properties (53) of
$\overline{u}_{a}$, it follows from (64) that
$\boxed{D^{a}(B^{\rm box}_{a})=0,}$ (66)
and
$\boxed{-D_{\rm T}({\bar{x}}B^{\rm
box})_{b}+(D\times{\bar{x}}E^{\rm box})_{b}=0.}$ (67)
Equations (63), (59), (66), and (67) are Maxwell’s equations for the electric
and magnetic field on the accelerated (instantaneous rest) frame of the
confining box.
### A.2 Consequences
In the previous section we have recast the Maxwell equations in terms of the
electric and magnetic fields as measured in the rest frame of the accelerating
box. It was a bit technical but the consequences for the electromagnetic field
near the perfectly conducting walls of the box are quite simple and analogous
to those that one is familiar with for a box at rest in an inertial frame. In
this short section we analyse and state them. We will now see that, as in the
case of an inertial box, the Maxwell equations (plus the standard physical
assumption that the magnetic field inside the conductor is initially zero)
applied to the accelerating box imply that
$F_{ab}({\rm inside\ conductor})=0.$ (68)
More precisely, the idealization of perfectly conducting walls requires first
the electric field to vanish inside the conductor, $E^{\rm box}_{a}({\rm
inside\ conductor})=0$. In addition, just inside the box at the walls, any
component of $E^{\rm box}_{a}$ parallel to the walls must vanish: if not, there
would be a force rearranging surface charges to make this component vanish
666Here we are assuming idealized charge carriers without mass. Real massive
charges would produce a parallel $E^{\rm box}_{a}$ component to balance
the gravitational pull, as is intuitive from the perspective offered by
the right panel in Figure 3.. Therefore $E^{\rm box}_{a}|_{\rm box}\propto
N_{a}$, where $N_{a}$ is the normal to the walls. Now, equation (67) implies
that the magnetic field $B_{a}^{\rm box}$ must be time independent inside the
conductor. Assuming that the magnetic field was zero initially then we have
that $B^{\rm box}_{a}({\rm inside\ conductor})=0$ for all times. Equation (68)
now follows from (51).
Another important consequence of the vanishing of the magnetic field inside
the box is that, from (66), one can prove that the normal component of $B^{\rm
box}_{a}$ at the walls (on the inside of the box) must vanish. An important
consequence of this follows from a two-line calculation that uses (51), the
definition $j_{a}^{\rm box}\equiv-T_{ab}u^{b}_{\rm box}$, and (45), and leads
to the important equation
$\left.j^{\rm box}_{a}N^{a}\right|_{\rm
walls}=\frac{1}{4\pi}\left.\vec{B}_{\rm box}\cdot(\vec{E}_{\rm
box}\times\vec{N})\right|_{\rm walls}=0,$ (69)
where we have used once again that $E^{\rm box}_{a}|_{\rm box}\propto N_{a}$.
In the two sections that follow we will use (68) and Maxwell's equations to
express $F_{ab}$ at the walls of the box explicitly in terms of the surface
charge current. This will then allow us to write explicitly the energy-
momentum tensor (45) at the walls, which is important in the analysis of
Appendix B. We do this first using three-dimensional methods that involve
Gauss's law and Ampère's law, and later in a more direct covariant fashion.
### A.3 Electromagnetic field at the boundary: canonical derivation
Let us consider the case of a box made of perfectly conducting walls. Then the
presence of surface charges is characterized by the four-current
$J^{a}=\sigma^{a}\delta_{\rm box}.$ (70)
As charges can move freely on the walls, only the normal component of the
electric field is non-vanishing at the wall. One can use this fact and the
Gauss law (the integral form of (59) using a suitably chosen region) at the
wall and obtain
$\left.E^{\rm box}_{a}\right|_{\rm walls}=-4\pi\left.(\sigma_{b}u^{b}_{\rm
box})\right|_{\rm walls}N_{a},$ (71)
where $N^{a}$ is the unit normal to the wall. Similarly, using Ampère’s law
(Stokes theorem and (63)), and the fact that the magnetic field has only
parallel components to the walls, one obtains
$\left.B^{\rm box}_{a}\right|_{\rm
walls}=-\left.4\pi\epsilon_{abcd}\sigma^{b}N^{c}u^{d}_{\rm box}\right|_{\rm
walls}.$ (72)
In order to prove the previous statement, one chooses an infinitesimal
2-surface transversal to the walls and such that its normal is aligned with
the surface current. This choice and equation (63) immediately yield (72);
the time-derivative term in (63) does not contribute because it is orthogonal
to the surface's normal.
With this the field-strength (51) on the walls of the box is given by
$\displaystyle\boxed{F_{ab}=8\pi N_{[a}\sigma_{b]}.}$ (73)
### A.4 The energy momentum tensor at the walls
With the previous result we can now write the energy momentum tensor at the
walls of the box using its definition (45)
$\displaystyle\left.T_{ab}\right|_{\rm box}$ $\displaystyle=$
$\displaystyle{4\pi}\left(\sigma_{a}\sigma_{b}+(\sigma\cdot\sigma)N_{a}N_{b}-\frac{g_{ab}}{2}(\sigma\cdot\sigma)\right),$
(74)
where we have used the boundary condition $N^{a}\sigma_{a}=0$. It follows that
$\left.j^{\rm lab}_{a}N^{a}\right|_{\rm
box}=-2\pi(\sigma\cdot\sigma)N_{b}\xi_{\rm lab}^{b},$ (75)
and from (50) and (73)
$\nabla^{a}j^{\rm lab}_{a}=\left.-\sigma^{a}F_{ab}\xi_{\rm
lab}^{b}\right|_{\rm box}\delta_{\rm box}=4\pi(\sigma\cdot\sigma)N_{b}\xi_{\rm
lab}^{b}\delta_{\rm box}.$ (76)
Finally, notice that the pressure at the walls is given by
$P_{N}\equiv T_{ab}N^{a}N^{b}=2\pi(\sigma\cdot\sigma).$ (77)
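Equations (74) and (77) follow from (73) and the definition (45) by a short contraction; the algebra can also be checked numerically. A sketch of ours in flat space, with a randomly chosen surface current $\sigma_{a}$ orthogonal to $N_{a}$:

```python
# Sketch: from F_ab = 8 pi N_[a sigma_b] (eq. (73)) and the definition (45),
# recover the wall energy-momentum tensor (74) and the pressure (77).
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(3)

N = np.array([0.0, 1.0, 0.0, 0.0])      # unit wall normal N_a
sigma = rng.normal(size=4)
sigma[1] = 0.0                           # enforce N^a sigma_a = 0

# 8 pi N_[a sigma_b] = 4 pi (N_a sigma_b - sigma_a N_b)
F = 4 * np.pi * (np.outer(N, sigma) - np.outer(sigma, N))
F_sq = np.sum(F * (eta @ F @ eta))
T = (F @ (F @ eta).T - 0.25 * eta * F_sq) / (4 * np.pi)   # eq. (45)

s2 = sigma @ eta @ sigma                 # sigma . sigma
T_expected = 4 * np.pi * (np.outer(sigma, sigma)
                          + s2 * np.outer(N, N)
                          - 0.5 * eta * s2)               # eq. (74)
assert np.allclose(T, T_expected)

N_up = eta @ N
P_N = N_up @ T @ N_up                    # pressure, eq. (77)
assert np.isclose(P_N, 2 * np.pi * s2)
```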
### A.5 Electromagnetic field at the boundary: covariant derivation
Figure 4: Four-dimensional region ${\mathcal{B}}_{\bar{x}}$ at the
boundary of the world-tube of the box where the application of the Gauss
theorem leads directly to (73).
Equation (73) was derived from the Gauss and Ampère laws on an accelerating
box in a way that follows the standard textbook considerations in
inertial frames, which break covariance by invoking the electric and magnetic
fields. However, the simplicity of (73) calls for a more direct and covariant
derivation. As an exercise, here we show that such a more direct path is
actually available.
The key input is the requirement that charges on the wall move freely and so
cancel any parallel component of the electric field $E^{\rm
box}_{a}=F_{ab}u^{b}_{\rm box}$ in the rest frame of the box, and similarly
that the normal magnetic-field component vanishes at the wall (as shown in
Appendix A.2). Consider the four vectors
$X^{a}\equiv\nabla^{a}{\bar{x}}$, $Y^{a}\equiv\nabla^{a}y$,
$Z^{a}\equiv\nabla^{a}z$, and $T^{a}\equiv\nabla^{a}\tau$ which by definition
are such that
$\nabla^{[a}X^{b]}=\nabla^{[a}Y^{b]}=\nabla^{[a}Z^{b]}=\nabla^{[a}T^{b]}=0.$
(78)
The previous set of equations together with the definition of the currents
$p^{Y}_{a}=F_{ab}Y^{b}$, $p^{Z}_{a}=F_{ab}Z^{b}$, and $p^{T}_{a}=F_{ab}T^{b}$
imply, from (41), that
$\nabla^{a}p^{Y}_{a}=-4\pi J_{a}Y^{a},\ \ \ \ \nabla^{a}p^{Z}_{a}=-4\pi
J_{a}Z^{a},\ \ \ \ \nabla^{a}p^{T}_{a}=-4\pi J_{a}T^{a}.$ (79)
Applying the Gauss theorem to $p^{Y}_{a}$ in the region
${\mathcal{B}}_{\bar{x}}$ shown in Figure 4 we get
$\displaystyle\int_{{\mathcal{B}}_{\bar{x}}}\nabla^{a}p^{Y}_{a}$
$\displaystyle=$
$\displaystyle\int_{\partial{\mathcal{B}}_{\bar{x}}}p^{Y}_{a}n^{a}{}$
$\displaystyle-4\pi\int_{{\mathcal{B}}_{\bar{x}}}J_{a}Y^{a}$
$\displaystyle=$
$\displaystyle\int_{\partial{\mathcal{B}}_{\bar{x}}}F_{ab}n^{a}Y^{b},$
(80)
where $n^{a}$ is the normal to the boundary
$\partial{\mathcal{B}}_{\bar{x}}$ with the orientation shown in Figure 4.
Notice that, as the normals to the bottom and top (spacelike) portions of the
boundary are proportional to $u^{a}_{\rm box}$, one has there that
$F_{ab}n^{a}Y^{b}\propto E^{\rm box}_{a}Y^{a}=0$, as only the normal component
along $X^{a}$ of the rest-frame electric field $E^{\rm box}_{a}$ is non-
vanishing due to the presence of perfectly conducting walls. In addition,
$F_{ab}=0$ on the right piece of the timelike component of
$\partial{\mathcal{B}}_{\bar{x}}$. Therefore, only the integral on the
left timelike piece contributes to the right-hand side of the previous
equation. From this and equation (70), together with the assumption that the
region ${\mathcal{B}}_{\bar{x}}$ is infinitesimally thin around the wall,
we get
$-4\pi\int d\tau dydz{\bar{x}}(\sigma_{a}Y^{a})=\int d\tau
dydz{\bar{x}}(F_{ab}X^{a}Y^{b}),$ (81)
where we used that the volume density is ${\bar{x}}d\tau dydz$ and that
the normal oriented for the Gauss theorem is $n^{a}=X^{a}$; the
conventional inner-pointing normal of the box is instead $N^{a}=-X^{a}$.
Assuming that the region is infinitesimal in all directions, the previous
identity implies $F_{ab}X^{a}Y^{b}=-4\pi\sigma_{a}Y^{a}$, which in terms of
$N^{a}$ reads
$F_{ab}N^{a}Y^{b}=4\pi\sigma_{a}Y^{a}.$ (82)
The same logic applied to the current $p^{Z}_{a}$ implies
$F_{ab}N^{a}Z^{b}=4\pi\sigma_{a}Z^{a}.$ (83)
A moment of reflection shows that the argument is also true for the current
$p_{a}^{T}$. Now the top and bottom contributions vanish because the normal
there is $n^{a}\propto T^{a}$, and thus $F_{ab}n^{a}T^{b}=0$ because of the skew
symmetry of $F_{ab}$. Therefore, we also have
$F_{ab}N^{a}T^{b}=4\pi\sigma_{a}T^{a}.$ (84)
The most general $F_{ab}$ would be of the form
$F_{ab}=f_{TN}T_{[a}N_{b]}+f_{TY}T_{[a}Y_{b]}+f_{TZ}T_{[a}Z_{b]}+f_{YN}Y_{[a}N_{b]}+f_{ZN}Z_{[a}N_{b]}+f_{YZ}Y_{[a}Z_{b]}.$
(85)
The fact that the electric field $E^{\rm box}_{a}$ is proportional to $N_{a}$
due to the presence of the wall implies that $f_{TY}=f_{TZ}=0$. As the normal
component of the magnetic field must vanish due to the presence of the
conducting wall, we also have $f_{YZ}=0$. Thus
Equations (82), (83), and (84) fix the last three components. The solution is
$\boxed{F_{ab}=8\pi N_{[a}\sigma_{b]}},$ (87)
which is the same as (73). One can easily check that the same solution follows
from the same argument applied to regions ${\mathfs{B}}_{y}$ and
${\mathfs{B}}_{z}$ adapted to the world sheets of the other walls of the box.
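As a quick consistency check, contracting the boxed solution (87) with the
frame vectors reproduces the boundary conditions (82), (83), and (84). With
the weighted antisymmetrization
$N_{[a}\sigma_{b]}=\tfrac{1}{2}(N_{a}\sigma_{b}-N_{b}\sigma_{a})$, and using
that the normal is unit and spacelike, $N_{a}N^{a}=1$, and orthogonal to
$T^{a}$, $Y^{a}$ and $Z^{a}$, we get
$F_{ab}N^{a}Y^{b}=4\pi\left[(N_{a}N^{a})(\sigma_{b}Y^{b})-(\sigma_{a}N^{a})(N_{b}Y^{b})\right]=4\pi\sigma_{a}Y^{a},$
and similarly with $Y^{b}$ replaced by $Z^{b}$ or $T^{b}$.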
## Appendix B Work done by the walls (Maxwell case)
We have seen that the energy content of the box, as measured in the lab frame,
depends on $\tau$. This is due to the action of an external agent that is
accelerating the box of radiation. The change in the energy $E(\tau)$ is thus
related to the work done by the external agent on the box. This is associated
with the failure for the current $j_{a}$ to be conserved: in our idealization
of the accelerating box, the external agent acts upon the electromagnetic
field via the boundary charges that impose the box boundary conditions and
source the divergence of $j_{a}$ (see (50)). The dynamical contribution of
these charges is feeding energy into the system. In fact, from the Gauss law,
now applied to $j_{a}$ in the region of interest (Figure 2) we get
$\displaystyle E(\tau)-E(0)=\Delta W$ $\displaystyle\equiv$
$\displaystyle\int_{R}\nabla^{a}j^{\rm lab}_{a}-\int_{\partial
R-\Sigma_{1}-\Sigma_{2}}j^{\rm lab}_{a}N^{a}{}$ (88) $\displaystyle=$
$\displaystyle 2\pi\int_{\partial
R-\Sigma_{1}-\Sigma_{2}}(\sigma\cdot\sigma)N_{b}\xi_{\rm lab}^{b}$
where we have used (75) and (76) and the Gauss theorem where the bulk integral
involves the integration of the $\delta_{\rm box}$ distribution whose support
is at the boundary of $R$. Now from (10) we observe that $N_{b}\xi^{b}_{\rm
lab}=0$ on any parts of the boundary where $\partial_{\bar{x}}$ is
tangent to the boundary. At the bottom and at the top we have
$N_{b}\xi^{b}_{\rm lab}=\mp\gamma(v)v$ (where for simplicity we are assuming
that the box is a cube with walls defined by $\bar{x}$, $y$, and $z$
equal constant). Therefore, using this in the last line of (88) we get
$E(\tau)-E(0)=\left(\int_{\rm top}-\int_{\rm bottom}\right)2\pi\gamma
v(\sigma\cdot\sigma).$ (89)
On the other hand, the pressure on the top/bottom is given by
$\displaystyle P_{\bar{x}}\equiv
T_{ab}\partial_{\bar{x}}^{a}\partial_{\bar{x}}^{b}=T_{\bar{x}\bar{x}}=2\pi(\sigma\cdot\sigma).$
(90)
So we find the following expression for the work
$\displaystyle E(\tau)-E(0)$ $\displaystyle=$ $\displaystyle\left(\int_{\rm
top}-\int_{\rm bottom}\right)P_{\bar{x}}\,\bar{x}\,\gamma vd\tau dydz$
(91) $\displaystyle=$ $\displaystyle\int\sinh\tau
d\tau\left.\left(\bar{x}\int P_{\bar{x}}dydz\right)\right|_{\rm
bottom}^{\rm top}.$ (92)
This expression tells us that the origin of the inertia of the box (its
resistance to acceleration encoded in the mass (32)) is the difference of the
radiation pressure of the electromagnetic field on the walls between the top
and the bottom of the box. In order to accelerate the box, an external agent
must impose an external force to compensate for the radiation pressure of the
confined radiation. Its infinitesimal version is
$\frac{dE}{d\tau}=\gamma v\left.\left(\bar{x}\int
P_{\bar{x}}dydz\right)\right|_{\rm bottom}^{\rm top}.$ (93)
Let us define
$F^{\bar{x}}=\int P_{\bar{x}}dydz.$ (94)
In order to better interpret the previous result, let us introduce the proper
time measured at the center of the box, ${\rm T_{c}}=\bar{x}_{\rm
c}\tau$, and recall that we denote by $L$ the length of the box. With this the
previous equation becomes
$\frac{{d}E}{d{\rm T}_{c}}=\gamma v\left(\left(1+\frac{L}{2\bar{x}_{\rm
c}}\right)F^{\bar{x}}_{\rm top}-\left(1-\frac{L}{2\bar{x}_{\rm
c}}\right)F^{\bar{x}}_{\rm bottom}\right),$ (95)
and using that the magnitude of the acceleration of the center of the box is
(according to (17)) $|a|=c^{2}/\bar{x}_{\rm c}$, we arrive at
$\frac{{d}E}{d{\rm T}_{c}}=\gamma v\left(F^{\bar{x}}_{\rm
top}-F^{\bar{x}}_{\rm bottom}\right)+\frac{|a|L}{2c^{2}}\gamma
v\left(F^{\bar{x}}_{\rm top}+F^{\bar{x}}_{\rm bottom}\right).$
(96)
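Explicitly, the step from (95) to (96) is the rearrangement
$\gamma v\left[\left(1+\frac{L}{2\bar{x}_{\rm c}}\right)F^{\bar{x}}_{\rm top}-\left(1-\frac{L}{2\bar{x}_{\rm c}}\right)F^{\bar{x}}_{\rm bottom}\right]=\gamma v\left(F^{\bar{x}}_{\rm top}-F^{\bar{x}}_{\rm bottom}\right)+\frac{L}{2\bar{x}_{\rm c}}\gamma v\left(F^{\bar{x}}_{\rm top}+F^{\bar{x}}_{\rm bottom}\right),$
followed by the substitution $1/\bar{x}_{\rm c}=|a|/c^{2}$ in the second term.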
Let us define $F_{\rm net}\equiv\left(F^{\bar{x}}_{\rm
top}-F^{\bar{x}}_{\rm bottom}\right)$ for what follows. If both the
length and the acceleration of the box are small, in the sense that by the
time light travels the distance $L$ the velocity increase of the box due to
the acceleration is much smaller than $c$, then
$\frac{|a|L}{2c^{2}}\ll 1.$ (97)
For such small-size/small-acceleration boxes (those that model well a
composite particle) we recover the usual relativistic law
$\frac{{\rm d}E}{\rm dT_{c}}=\gamma vF_{\rm net},$ (98)
or, in covariant notation, $u_{\rm c}^{a}\nabla_{a}{E}=F^{a}_{\rm net}\xi^{\rm
lab}_{a}$. The previous equation implies the familiar Newton's second law
in the instantaneous rest frame
$ma=F_{\rm net}$ (99)
where $m$ is given by $E(0)/c^{2}$ as given in (29). This shows that the
physical origin of mass can be traced to the inertia produced by the
difference of radiation pressure between the top and the bottom of the box (in
analogy with the heuristic simplistic picture given in terms of the bouncing
photon in Section II).
## Appendix C Work done by the walls (scalar field case)
In this section we repeat the derivation of the previous one but in the case
of the massless scalar field. In the lab frame $\xi_{\rm
lab}^{a}=\partial_{t}^{a}$ the associated energy-momentum current is
$j^{\rm lab}_{a}\equiv-T_{ab}\xi_{\rm lab}^{b}$ (100)
which, as in the electromagnetic case, is not conserved due to the
contributions of the boundary degrees of freedom. One has
$\nabla_{a}j_{\rm lab}^{a}=-\sigma\,u^{b}\nabla_{b}\phi\delta_{\rm box}$ (101)
where we used (5). The energy momentum tensor (34) at the walls is therefore
$\left.T_{ab}\right|_{\rm
box}=\sigma^{2}\left(N_{a}N_{b}-\frac{1}{2}g_{ab}\right)$ (102)
Now from (35) we have
$\nabla^{a}T_{ab}=\sigma^{2}N_{b}\delta_{\rm box},$ (103)
from (100)
$\left.j_{a}N^{a}\right|_{\rm box}=-\frac{1}{2}\sigma^{2}N_{b}u^{b},$ (104)
and from (101)
$\nabla^{a}j_{a}=-\sigma^{2}N_{b}u^{b}\delta_{\rm box}.$ (105)
Finally, notice that the pressure at the walls is given by
$P_{N}\equiv T_{ab}N^{a}N^{b}=\frac{1}{2}\sigma^{2}.$ (106)
As in the electromagnetic case, equations (103), (104), (105), and (106) and
the same line of argument of Section B lead to
$\frac{dE}{d\tau}=\gamma v\left.\left(\bar{x}\int
P_{\bar{x}}dydz\right)\right|_{\rm bottom}^{\rm top}.$ (107)
The same conclusions as for the Maxwell case follow from here.
# First-passage time theory of activated rate chemical processes in electronic
molecular junctions
Riley J. Preston College of Science and Engineering, James Cook University,
Townsville, QLD, 4811, Australia Maxim F. Gelin School of Sciences, Hangzhou
Dianzi University, 310018 Hangzhou, China Daniel S. Kosov College of Science
and Engineering, James Cook University, Townsville, QLD, 4811, Australia
###### Abstract
Confined nanoscale spaces, electric fields and tunneling currents make the
molecular electronic junction an experimental device for the discovery of new,
out-of-equilibrium chemical reactions. Reaction-rate theory for current-
activated chemical reactions is developed by combining a Keldysh
nonequilibrium Green’s functions treatment of electrons, Fokker-Planck
description of the reaction coordinate, and Kramers’ first-passage time
calculations. The NEGF provide an adiabatic potential as well as a diffusion
coefficient and temperature with local dependence on the reaction coordinate.
Van Kampen’s Fokker-Planck equation, which describes a Brownian particle
moving in an external potential in an inhomogeneous medium with a position-
dependent friction and diffusion coefficient, is used to obtain an analytic
expression for the first-passage time. The theory is applied to several
transport scenarios: a molecular junction with a single, reaction coordinate
dependent molecular orbital, and a model diatomic molecular junction. We
demonstrate the natural emergence of Landauer’s blowtorch effect as a result
of the interplay between the configuration-dependent viscosity and diffusion
coefficients. The resultant localized heating, in conjunction with bond
deformation due to current-induced forces, is shown to be the determining
factor for chemical reaction rates; both effects arise from highly tunable
parameters within the system.
## I INTRODUCTION
A molecular junction is a single molecule confined in the nanoscale gap
between two macroscopic, conducting leads. An applied voltage allows for the
flow of electronic current across the system through the valence states of the
molecule. Large current densities and power dissipation give rise to strong
current-induced forces and bond-selective heating which act to destabilize the
molecular configuration within the junction, resulting in molecular
conformational changes including telegraphic switching Weick _et al._ (2010);
Lü _et al._ (2012); Preston _et al._ (2020a); Kershaw and Kosov (2020),
along with providing the necessary energy for total bond rupture Dzhioev and
Kosov (2011); Dzhioev _et al._ (2013); Erpenbeck _et al._ (2018); Härtle and
Thoss (2011); Li _et al._ (2015, 2016); Lü _et al._ (2010). This is
obviously an undesirable feature for promoting molecular electronics as a
possible avenue for moving beyond the traditional silicon semiconductor
technology into a regime of highly efficient and tailorable molecular-scale
devices, and so a thorough theoretical understanding is required for further
progress. Nonetheless, it creates an exciting opportunity to explore and
produce new chemical reactions by providing a device which traps a single
molecule in a confined space of a few nanometers where the electric field and
current are applied locally and selectively Huang _et al._ (2013); Aragonés
_et al._ (2016); Borca _et al._ (2017).
Adequate and well-established theories have been developed for reaction
rate calculations in gas and condensed phases Zwanzig (2001); Coffey _et al._
(2004); Hänggi _et al._ (1990); Melnikov (1991); Voth and Hochstrasser
(1996); Miller (1998); Pollak and Talkner (2005); Schüller _et al._ (2020).
However, the development of similar theories for molecules in an electronic
junction environment is no simple task and as such, the scope of theoretical
work is still very limited. Three types of approaches have been proposed to
model current-induced dissociation. The first is based on the rate equation
approach where a single harmonic vibration is pumped beyond the dissociation
threshold limit Koch _et al._ (2006); Brisker and Peskin (2008). The second
is a numerically exact scheme, which uses the hierarchical quantum master
equation method in conjunction with a discrete variable representation for the
nuclear degrees of freedom to numerically study current-induced dissociation
Erpenbeck _et al._ (2020). The third uses Keldysh nonequilibrium Green’s
functions to obtain a Fokker-Planck equation for the reaction coordinate which
is used to compute average escape times and the accompanying reaction rates
Dzhioev and Kosov (2011); Dzhioev _et al._ (2013). The further development of
this approach is the subject of this paper.
Our approach utilizes the Born-Oppenheimer approximation, in which nuclei
within the system are assumed to be slow-moving, classical particles
interacting with a sea of fast, quantum electrons. This separation of time-
scales enables the use of a Langevin description for the motion of nuclei, in
which the forces due to the quantum electrons are conceived through a
frictional force which acts to drive the nuclei into a motionless state, a
fluctuating force which re-energizes the nuclei, and an adiabatic force which
can deform the structure of the reaction potential. This enables the
consideration of highly non-trivial behaviour on the nuclear dynamics at the
cost of a fully quantum description. Nevertheless, the method has proven
successful in a range of circumstances Dou and Subotnik (2018, 2017); Dou _et
al._ (2017); Bode _et al._ (2011); Dzhioev _et al._ (2013); Pistolesi _et
al._ (2008); Weick _et al._ (2010); Bode _et al._ (2012); Kershaw and Kosov
(2020); Preston _et al._ (2020a); Lü _et al._ (2010); Preston _et al._
(2020b).
The method further lends itself to the study of current-induced chemical
reaction rates Dzhioev and Kosov (2011); Dzhioev _et al._ (2013). The use of
Langevin dynamics to compute reaction rates was first explored by Kramers in
his seminal 1940 paper Kramers (1940), in which the mean first-escape time of
a particle trapped in an arbitrary potential well subject to Langevin forces
was approximated. The next significant step was made in the 1990s, when
Kramers’ theory was extended to account for position-dependent friction and
generalized Langevin equations describing finite-memory (non-Markovian)
fluctuation-dissipation processes Pollak and Berezhkovskii (1993); Haynes _et
al._ (1994); Haynes and Voth (1995); Neria and Karplus (1996). The effect of a
velocity-dependent friction on Kramers’ rates was investigated in Ref. Gelin
and Kosov (2007). However, these studies were limited to the regime of
thermodynamic equilibrium. Beyond this regime, the fluctuation-dissipation
theorem no longer holds, allowing for the emergence of localized heating
effects in analogy to Landauer’s proposed blowtorch effect Landauer (1975,
1993), in which specific configurations of the reaction coordinate may
experience heightened temperatures, which may have a significant effect on the
evolution of the system. Such systems are not limited to the realms of
molecular electronics; the most common examples include numerous molecular
motors, ratchets, and heat engines Reimann (2002); Hänggi and Marchesoni
(2009); Erbas-Cakmak _et al._ (2015) as well as various confined nanosystems
Das and Ray (2015); Devine and Jack (2017); Franzese _et al._ (2017); Holubec
_et al._ (2017); Arango-Restrepo and Rubi (2020), notably of biological
significance Basak _et al._ (2019); Rubi (2019). Several explicit simulations
of Landauer’s blowtorch effects in double-well potentials have also been
performed recently Bekele _et al._ (1999); Das _et al._ (2015); Das and
Kantz (2019).
One of the aims of this paper is to further shed light on this topic. A
comprehensive understanding of the effects of localized heating on the
stability of molecular geometries is required to ensure the productive
development of specific functionalities of molecular-scale electronic systems.
In this paper, we relax the requirement of thermodynamic equilibrium, allowing
for the self-consistent study of the mean first-passage times in model
molecular electronic junctions in both the underdamped and overdamped regimes.
This is calculated through a Fokker-Planck equation, which arises due to our
Langevin description of the reaction coordinate within the junction. To a
certain extent this work is the continuation of two papers of one of the
authors (DSK) Dzhioev and Kosov (2011); Dzhioev _et al._ (2013); however,
Ref. Dzhioev and Kosov (2011) considered the problem employing the fluctuation-
dissipation theorem and Ref. Dzhioev _et al._ (2013) focused on the
underdamped case only.
The paper is structured as follows. In section II, we demonstrate our
calculations for the mean first-passage time in the limiting regimes. This
involves the calculation of the current-induced forces in the system, from
which a Fokker-Planck description then yields an equation for the mean first-
passage times. This is then applied to a simple model of the blowtorch effect
in section III.1. In section III.2, we calculate the reaction rates for a
single-level junction model, in which current-induced forces are calculated
self-consistently within the model. This is then further applied to a model
two-level molecule within the junction in section III.3. Atomic units are used
in all calculations such that $\hbar=e=1$. Boltzmann’s constant is set to one
$k_{B}=1$ in all derivations and calculations meaning that the temperature is
measured in units of energy.
## II THEORY
### II.1 Hamiltonian
We begin with the general Hamiltonian describing a molecular junction as given
by
$H(t)={H}_{M}(t)+{H}_{L}+{H}_{R}+{H}_{LM}+{H}_{MR}.$ (1)
The total system Hamiltonian is partitioned into the following components; the
molecular Hamiltonian ${H}_{M}(t)$, the left and right leads Hamiltonians
${H}_{L}$ and ${H}_{R}$, and the leads-molecule coupling Hamiltonians
${H}_{LM}$ and ${H}_{MR}$ which describe the coupling between the electronic
states on the central molecule and the left and right leads, respectively.
The molecular Hamiltonian takes the form:
${H}_{M}(t)=\sum_{ij}h_{ij}(x(t))d^{{\dagger}}_{i}d_{j}+\frac{p^{2}}{2m}+U(x),$
(2)
where the operators $d^{{\dagger}}_{i}$ and $d_{j}$ denote the creation and
annihilation operators for the molecular electronic states whose energies are
given by the Hamiltonian matrix elements $h_{ij}(x(t))$. Note the explicit
time dependence here, which arises as a result of the time evolution of the
classical reaction coordinate $x$. The last two terms in $H_{M}$ are not
quantum mechanical operators, $p^{2}/2m$ is the kinetic energy for the motion
of the reaction coordinate and $U(x)$ is the classical potential energy.
The leads Hamiltonian is taken in the standard form of non-interacting
electrons reservoirs:
${H}_{L}+{H}_{R}=\sum_{k\alpha}\epsilon_{k\alpha}d^{\dagger}_{k\alpha}d_{k\alpha},$
(3)
where the creation and annihilation operators are given by
$d^{\dagger}_{k\alpha}$ and $d_{k\alpha}$, and the subscript $k\alpha$ denotes
the operator acting on state $k$ in the $\alpha$ lead which has energy
$\epsilon_{k\alpha}$.
Finally, the system-lead coupling Hamiltonians $H_{LM}$ and $H_{MR}$ are given
by:
${H}_{LM}+{H}_{MR}=\sum_{k\alpha i}\Big{(}t_{k\alpha
i}d^{\dagger}_{k\alpha}d_{i}+\text{h.c.}\Big{)}.$ (4)
The matrix elements $t_{k\alpha i}$ (and their conjugates) describe the
tunnelling amplitudes between lead states $k\alpha$ and the molecular orbitals
$i$.
### II.2 Green’s Functions and Self-Energies
The foundational components from which we build our model for the classical
motion within a molecular junction are the adiabatic Green’s functions. The
required Green’s function components (advanced, retarded, lesser and
greater) Stefanucci and van Leeuwen (2013) can be obtained from the Keldysh-
Kadanoff-Baym equations, expressed in the Wigner space by Haug and Jauho (2010)
$\Big{(}\omega+\frac{i}{2}\partial_{t}-e^{\frac{1}{2i}\partial_{\omega}^{G}\partial_{t}^{h}}h(t)\Big{)}G^{R/A}(t,\omega)=I\\\
+e^{-\frac{1}{2i}\partial_{\omega}^{\Sigma}\partial_{t}^{\mathcal{G}}}\Sigma^{R/A}(\omega)G^{R/A}(t,\omega),$
(5)
and
$\Big{(}\omega+\frac{i}{2}\partial_{t}-e^{\frac{1}{2i}\partial_{\omega}^{G}\partial_{t}^{h}}h(t)\Big{)}G^{</>}(t,\omega)=\\\
e^{-\frac{1}{2i}\partial_{\omega}^{\Sigma}\partial_{t}^{G}}\Big{(}\Sigma^{R}(\omega)G^{</>}(t,\omega)+\Sigma^{</>}(\omega)G^{A}(t,\omega)\Big{)},$
(6)
where we have shown the retarded/advanced and lesser/greater equations
collectively. Here, we have already assumed that our molecule-leads coupling
components $t_{k\alpha i}$ are time-independent, which makes the self-energies
only $\omega$-dependent.
In alignment with our previous work Kershaw and Kosov (2017, 2018, 2019, 2020);
Preston _et al._ (2020a, b), we assume that the classical motion along the
reaction coordinate within the system occurs over long time-scales relative to
the characteristic electron tunnelling time. This provides us with the
required small parameter to be able to perturbatively solve (5) and (6) up to
the first order in expansion of the exponents with derivatives. Resultantly,
the adiabatic Green’s functions in the Wigner space (Green’s functions which
instantaneously follow the changes in the reaction coordinate) are given by
matrices of the form:
$G^{R/A}(x,\omega)=(\omega I-h(x)-\Sigma^{R/A})^{-1},$ (7)
$G^{</>}(x,\omega)=G^{R}(x,\omega)\Sigma^{</>}(\omega)G^{A}(x,\omega),$ (8)
where $I$ represents the identity operator in the molecular space. The
corresponding self-energy components are given by
$\Sigma_{\alpha,ij}^{R}=-\frac{i}{2}\Gamma_{\alpha,ij},$ (9)
$\Sigma_{\alpha,ij}^{A}=\frac{i}{2}\Gamma_{\alpha,ij},$ (10)
$\Sigma_{\alpha,ij}^{<}(\omega)=if_{\alpha}(\omega)\Gamma_{\alpha,ij},$ (11)
and
$\Sigma_{\alpha,ij}^{>}(\omega)=-i(1-f_{\alpha}(\omega))\Gamma_{\alpha,ij},$
(12)
where we have applied the wide-band approximation to the leads, eliminating
any $\omega$ dependence from $\Gamma$. As a result, the elements of the level
broadening function are given by
$\Gamma_{\alpha,ij}=2\pi\rho_{\alpha}t^{*}_{\alpha i}t_{\alpha j},$ (13)
where $\rho_{\alpha}$ is the constant, energy-independent density of states
for lead $\alpha$, and $t_{\alpha i}$ is the tunneling amplitude $t_{k\alpha
i}$, which no longer depends on $k$.
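For a single molecular orbital, the matrix structure of Eqs. (7)-(13) collapses to scalars and the retarded Green's function can be written down directly. The sketch below (with illustrative values of the level energy and broadening, not taken from any calculation in this paper) tabulates $G^{R}(\omega)$ in the wide-band limit and checks the sum rule that the Lorentzian spectral function $A(\omega)=-2\,\mathrm{Im}\,G^{R}(\omega)$ integrates to one state:

```python
import numpy as np

# Single level in the wide-band limit: Sigma^R = -i*Gamma/2 (Eqs. (7), (9)).
# eps0 and gamma are illustrative values, not parameters from the paper.
eps0, gamma = 0.2, 0.1

def g_retarded(w):
    """Scalar version of Eq. (7): G^R = (w - eps0 - Sigma^R)^(-1)."""
    return 1.0 / (w - eps0 + 1j * gamma / 2.0)

# Spectral function A(w) = -2 Im G^R(w): a Lorentzian of full width gamma.
w = np.linspace(-200.0, 200.0, 400_001)
A = -2.0 * g_retarded(w).imag

# Sum rule: (1/2pi) * integral of A(w) dw = 1 (one orbital).
norm = np.sum(0.5 * (A[1:] + A[:-1]) * np.diff(w)) / (2.0 * np.pi)
print(norm)  # close to 1, up to the truncated Lorentzian tails
```

The check confirms that the wide-band self-energy (9) broadens the level into a Lorentzian of full width $\Gamma$ without changing the total spectral weight.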
We additionally solve for the first non-adiabatic correction to the Green’s
functions, which account for the non-zero velocity of the classical
coordinate. The Green’s function corrections are given by Bode _et al._
(2012)
$G_{(1)}^{R/A}=\frac{1}{2i}G^{R/A}\Big{[}G^{R/A},\partial_{t}h\Big{]}_{-}G^{R/A},$
(14)
and
$G_{(1)}^{<}=G^{R}\Sigma^{<}G_{(1)}^{A}\ +G_{(1)}^{R}\Sigma^{<}G^{A}\\\ \
+\frac{1}{2i}G^{R}\Big{(}\partial_{T}hG^{R}\partial_{\omega}\Sigma^{<}+G^{<}\partial_{T}h+h.c.\Big{)}G^{A}.$
(15)
### II.3 The Langevin Equation
Under the overarching assumption that the reaction coordinate $x$ along with
its corresponding momentum $p$ are classical variables in our approach due to
the separation of time-scales within the system, the equation of motion of the
reaction coordinate can be expressed in the form of a quasi-classical Langevin
equation, in which the classical motion is dictated by quantum mechanical
forces. Our Langevin equation is given by Bode _et al._ (2012); Preston _et
al._ (2020a); Pistolesi _et al._ (2008); Weick _et al._ (2010); Dzhioev _et
al._ (2013); Kershaw and Kosov (2020)
$\frac{dp}{dt}=-\partial_{x}U(x)+F(x)-\frac{\xi(x)}{m}p+\delta f(t),$ (16)
in which we have the external classical potential $U(x)$, an adiabatic force
$F(x)$ arising due to the occupation of electronic levels within the molecule,
the frictional force and its corresponding viscosity coefficient $\xi(x)$, and
the stochastic force $\delta f(t)$. In our model, the adiabatic force takes
the form
$F(x)=\frac{i}{2\pi}\int
d\omega\text{Tr}\Big{\\{}\partial_{x}hG^{<}\Big{\\}}.$ (17)
Similarly, the viscosity coefficient is defined according to Bode _et al._
(2012); Dzhioev _et al._ (2013)
$\frac{\xi(x)}{m}p=-\frac{i}{2\pi}\int
d\omega\text{Tr}\Big{\\{}\partial_{x}hG_{(1)}^{<}\Big{\\}}.$ (18)
It is assumed that the fluctuating force can be described by a Gaussian
process, defined by
$\langle\delta f(t)\rangle=0,$ (19)
and
$\langle\delta f(t)\delta f(t^{\prime})\rangle=D(x)\delta(t-t^{\prime}),$ (20)
where $D(x)$ is the quantum-mechanically calculated diffusion coefficient
which determines the variance in the fluctuating force. In our model, $D(x)$
is calculated according to Bode _et al._ (2012); Dzhioev _et al._ (2013)
$D(x)=\frac{1}{2\pi}\int
d\omega\text{Tr}\Big{\\{}\partial_{x}hG^{>}\partial_{x}hG^{<}\Big{\\}}.$ (21)
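The Langevin equation (16) with the noise statistics (19)-(20) can be integrated numerically by an Euler-Maruyama scheme. The sketch below is a minimal toy case (harmonic $U(x)$, constant $\xi$ and $D$, zero adiabatic force, invented parameter values) that checks the long-time kinetic energy against equipartition at the effective temperature $D/(2\xi)$ introduced later as Eq. (30):

```python
import numpy as np

# Euler-Maruyama integration of the Langevin equation (16) for a toy case:
# harmonic U(x) = k x^2 / 2, constant viscosity xi and diffusion D, F(x) = 0.
# All parameter values are illustrative, not taken from the paper.
m, k, xi, D = 1.0, 1.0, 1.0, 0.01
T_eff = D / (2.0 * xi)            # effective temperature, cf. Eq. (30)
dt, n_steps = 0.01, 200_000
rng = np.random.default_rng(0)

x, p = 0.0, 0.0
p2 = []
for step in range(n_steps):
    # <df(t) df(t')> = D delta(t - t')  ->  variance D*dt per step, Eq. (20)
    noise = np.sqrt(D * dt) * rng.standard_normal()
    p += (-k * x - xi * p / m) * dt + noise
    x += p / m * dt
    if step > n_steps // 2:       # discard the transient
        p2.append(p * p)
mean_p2 = float(np.mean(p2))
print(mean_p2 / m)  # equipartition: <p^2>/m close to T_eff
```

With a barrier potential in place of the harmonic $U$, the same integrator can be used to sample escape trajectories directly and cross-check the first-passage times derived in Section II.5.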
### II.4 Effective potential energy surface
The key quantity for chemical reaction rate calculations is the effective
potential energy surface. Here the effective nonequilibrium potential energy
surface is defined via the integration of the nonequilibrium force in the
Langevin equation
$U_{\text{eff}}(x)=U(x)-\int_{x_{0}}^{x}dy\;F(y).$ (22)
We chose $x_{0}$ in such a way that $U_{\text{eff}}(x)$ at the bottom of the
potential well, $x_{a}$, is zero, $U_{\text{eff}}(x_{a})=0$. This enforces
that the energy $E$ in the formulae below can vary in the range from 0 to
$\infty$. Suppose that the effective potential has the "standard" Kramers
problem form, in which case we have a minimum at $x=x_{a}$ (reactant state)
and an energy barrier at $x=x_{b}$, separating the reactant and product state.
Let’s call
$U_{a}=U_{\text{eff}}(x_{a})=0$ (23)
and
$U_{b}=U_{\text{eff}}(x_{b}).$ (24)
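In practice $F(x)$ is obtained on a grid from (17), and the integral in (22) is evaluated by cumulative quadrature. A minimal sketch (using the quartic double well introduced later in Sec. III, with an invented constant force standing in for $F(x)$) locates $x_{a}$, $x_{b}$ and the barrier height $U_{b}$:

```python
import numpy as np

# Effective potential from Eq. (22) by cumulative quadrature.  U(x) is the
# quartic double well of Sec. III; the constant force F is an invented
# stand-in for the current-induced force (17).
a, b = 0.04, 0.008
x = np.linspace(-3.0, 1.0, 4001)
U = -a * x**2 + b * x**4
F = np.full_like(x, 0.002)

# U_eff(x) = U(x) - \int_{x0}^{x} F(y) dy, with x0 at the left grid edge.
work = np.concatenate(([0.0],
                       np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(x))))
U_eff = U - work
U_eff -= U_eff.min()              # enforce U_eff(x_a) = 0, Eq. (23)

i_a = int(np.argmin(U_eff))                 # reactant minimum x_a
i_b = i_a + int(np.argmax(U_eff[i_a:]))     # barrier top x_b, Eq. (24)
x_a, x_b, U_b = x[i_a], x[i_b], U_eff[i_b]
print(x_a, x_b, U_b)  # barrier height close to 0.05, slightly tilted by F
```

Here the tilt only slightly shifts the stationary points; for the self-consistent $F(x)$ of Eq. (17), the same numerical bracketing of $x_{a}$ and $x_{b}$ applies.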
### II.5 Calculations of chemical reaction rates
#### II.5.1 Fokker-Planck equation with reaction-coordinate dependent
viscosity and diffusion
The starting point for all derivations of the first-passage times is van
Kampen’s Fokker-Planck equation for the probability density function
$\rho(x,p,t)$ which describes a Brownian particle of mass $m$, position $x$
and momentum $p$ moving in the external potential $U_{\text{eff}}(x)$ in an
inhomogeneous medium with position-dependent friction $\xi(x)$ and diffusion
coefficient $D(x)$ van Kampen (1988a)
$\partial_{t}\rho(x,p,t)=$
$\left(-\partial_{x}\frac{p}{m}+\partial_{p}U_{\text{eff}}^{{}^{\prime}}(x)+\left\\{\partial_{p}\frac{p}{m}\xi(x)+\partial_{p}^{2}D(x)\right\\}\right)\rho(x,p,t).$
(25)
Van Kampen’s Fokker-Planck equation (25) is internally consistent (for
example, it yields energy- and mass-flow transport coefficients obeying
Onsager’s reciprocity relations van Kampen (1991)) and can unambiguously be
derived from the Langevin equation (16) with Gaussian stochastic force obeying
(19) and (20). The Itô-Stratonovich dilemma van Kampen (1981) does not arise
in the derivation of van Kampen’s Fokker-Planck equation (25), because the
viscosity $\xi(x)$ of Eq. (18) and the diffusion coefficient $D(x)$ of Eq.
(21) are momentum-independent.
#### II.5.2 Underdamped limit
In this section, we present our equations for the mean first-passage times
from a reaction potential in an inhomogeneous medium for both the underdamped
and overdamped limiting regimes. Beginning with van Kampen’s Fokker-Planck
equation (25) for our reaction coordinate $x$ and momentum $p$, we can obtain
a one-dimensional energy-diffusion equation which is valid in the underdamped
limit (see Appendix A for the derivation):
$\partial_{t}\rho(E,t)=\partial_{E}D(E)\rho^{\text{un}}_{\text{st}}(E)\partial_{E}\left(\rho^{\text{un}}_{\text{st}}(E)\right)^{-1}\rho(E,t).$
(26)
Here, we have introduced the energy-diffusion coefficient $D(E)$ as
$D(E)=\frac{\nu(E)}{\Omega(E)},$ (27)
defined as the ratio of the two energy-dependent functions,
$\nu(E)=\sqrt{\frac{2}{m}}\int
dx\xi(x)T_{\text{eff}}(x)\sqrt{E-U_{\text{eff}}(x)},$ (28)
and
$\Omega(E)=\sqrt{\frac{m}{2}}\int\frac{dx}{\sqrt{E-U_{\text{eff}}(x)}},$ (29)
where we have defined the effective temperature $T_{\text{eff}}$ as
$T_{\text{eff}}(x)=\frac{D(x)}{2\xi(x)}.$ (30)
In Eqs. (28) and (29) and in all other similar expressions, the integration
domain corresponds to $E>U_{\text{eff}}(x)$.
The stationary distribution is written as
$\rho^{\text{un}}_{\text{st}}(E)=Z_{\text{un}}^{-1}\Omega(E)\exp\left\\{-\int_{0}^{E}dE^{\prime}\frac{\mu(E^{\prime})}{\nu(E^{\prime})}\right\\},$
(31)
with normalization constant
$Z_{\text{un}}=\int
dE\,\Omega(E)\exp\left\\{-\int_{0}^{E}dE^{\prime}\frac{\mu(E^{\prime})}{\nu(E^{\prime})}\right\\},$
(32)
and
$\mu(E)=\sqrt{\frac{2}{m}}\int dx\xi(x)\sqrt{E-U_{\text{eff}}(x)}.$ (33)
The energy diffusion equation can be used to calculate the mean first-passage
time in the underdamped regime as per
$\tau=\int_{U_{a}}^{U_{b}}dE^{\prime}\frac{1}{D(E^{\prime})\rho^{\text{un}}_{\text{st}}(E^{\prime})}\int_{0}^{E^{\prime}}dE^{\prime\prime}\rho^{\text{un}}_{\text{st}}(E^{\prime\prime}).$
(34)
#### II.5.3 Overdamped limit
In the overdamped limit, van Kampen’s Fokker-Planck equation (25) reduces to
the diffusion equation van Kampen (1988a, b) which can be cast into a form
similar to Eq. (26):
$\partial_{t}\rho(x,t)=\partial_{x}\frac{T_{\text{eff}}(x)}{\xi(x)}\rho^{\text{ov}}_{\text{st}}(x)\partial_{x}(\rho^{\text{ov}}_{\text{st}}(x))^{-1}\rho(x,t).$
(35)
The stationary distribution is given by
$\rho^{\text{ov}}_{\text{st}}(x)=\frac{1}{Z_{\text{ov}}T_{\text{eff}}(x)}\exp\left\\{-\int_{0}^{x}dx^{\prime}\frac{\partial_{x}U(x^{\prime})-F(x^{\prime})}{T_{\text{eff}}(x^{\prime})}\right\\},$
(36)
where
$Z_{\text{ov}}=\int_{-\infty}^{+\infty}dx\frac{1}{T_{\text{eff}}(x)}\exp\left\\{-\int_{0}^{x}dx^{\prime}\frac{\partial_{x}U(x^{\prime})-F(x^{\prime})}{T_{\text{eff}}(x^{\prime})}\right\\}.$
(37)
The first-passage time in the overdamped regime can be thus written as
$\tau=\int_{x_{a}}^{x_{b}}dx^{\prime}\frac{\xi(x^{\prime})}{T_{\text{eff}}(x^{\prime})\rho^{\text{ov}}_{\text{st}}(x^{\prime})}\int_{-\infty}^{x^{\prime}}dx^{\prime\prime}\rho^{\text{ov}}_{\text{st}}(x^{\prime\prime}).$
(38)
To summarize the derivations in section II: the main result is the combined
use of the adiabatic force (17), the position-dependent viscosity (18) and the
position-dependent diffusion coefficient (21), all obtained from
nonequilibrium Green’s function calculations, with the expressions (34) and
(38) for the mean first-passage time $\tau$, which are exact in the
underdamped and overdamped limits, respectively. The rates of the
corresponding chemical reactions can be obtained as the inverse of the
first-passage time, $1/\tau$.
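To illustrate Eq. (38), the sketch below evaluates the overdamped mean first-passage time numerically for the bistable quartic potential used in Sec. III with constant $\xi$ and $T_{\text{eff}}$ (so that $F=0$ and $\rho^{\text{ov}}_{\text{st}}\propto e^{-U/T_{\text{eff}}}$), and compares it with a standard saddle-point estimate for the passage time to the barrier top. The parameter values are those quoted in Sec. III.1; the comparison formula is the textbook Kramers-type result, not taken from this paper.

```python
import numpy as np

# Quartic bistable potential (Eq. (39)) with the Sec. III.1 parameters.
a, b = 0.04, 0.008
xi0, D0 = 1.0, 0.01
T0 = D0 / (2.0 * xi0)                      # effective temperature, Eq. (30)

def U(x):
    return -a * x**2 + b * x**4

x_a = -np.sqrt(a / (2.0 * b))              # left minimum (reactant state)
x_b = 0.0                                  # barrier top
dU = U(x_b) - U(x_a)                       # barrier height U_b = 0.05

# Constant xi and T: rho_st ~ exp(-U/T0).  Inner cumulative integral of
# Eq. (38), truncating the lower limit where exp(-U/T0) is negligible.
z = np.linspace(-4.0, x_b, 8001)
w = np.exp(-U(z) / T0)
inner = np.concatenate(([0.0],
                        np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(z))))

# Outer integral of Eq. (38) over [x_a, x_b].
mask = z >= x_a
f = (xi0 / T0) * np.exp(U(z[mask]) / T0) * inner[mask]
tau = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z[mask])))

# Saddle-point estimate for the MFPT to the barrier top (half the Kramers
# escape time, since the absorbing point sits exactly at the barrier).
upp_a = -2.0 * a + 12.0 * b * x_a**2       # U''(x_a)
upp_b = 2.0 * a                            # |U''(x_b)|
tau_sp = np.pi * xi0 / np.sqrt(upp_a * upp_b) * np.exp(dU / T0)
print(tau / tau_sp)  # close to 1, up to anharmonic corrections
```

The agreement, within the expected anharmonic corrections of order $T_{\text{eff}}/U_{b}$, indicates that Eq. (38) reduces to the familiar Kramers result in the homogeneous, high-barrier limit.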
## III RESULTS
### III.1 The blowtorch effect
In this section we investigate Landauer’s proposed blowtorch effect Landauer
(1993), in which a non-equilibrium system allows for coordinate-dependent
variations to the dissipative forces acting on a particle, which then has an
effect on the properties of the steady-state distribution. Landauer’s
blowtorch effect plays a critical role in chemical reactions in molecular
electronic junctions, therefore we first discuss its general features which
will be relevant for our subsequent discussion. For consistency with Kramers’
seminal paper Kramers (1940) on chemical reaction rates, our analysis is
formulated in terms of the mean first escape time from the left minimum of a
bistable potential, which we model according to a quartic of the form
$U(x)=-ax^{2}+bx^{4},$ (39)
where $x$ is our reaction coordinate. The constants $a$ and $b$ are adjustable
parameters which determine the width and depth of the minimum. In all tests in
this section, we set $a=0.04$ and $b=0.008$, such that $U_{b}=0.05$. In
addition, the particle always begins its trajectory with zero velocity at the
minimum of the reaction potential. To begin with, the viscosity $\xi_{0}$ and
the diffusion coefficient $D_{0}$ are set to constant values over the range of
$x$, yielding a constant temperature as determined by the fluctuation-
dissipation theorem.
In order to introduce an inhomogeneity into the temperature, a Gaussian spike
is applied to the diffusion coefficient locally at position $x_{0}$,
$D(x)=D_{0}+D_{\rm peak}(x),$ (40)
where
$D_{\rm peak}(x)=D_{m}e^{-\frac{(x-x_{0})^{2}}{\sigma^{2}}},$ (41)
with adjustable width $\sigma$ and magnitude $D_{m}$. The effective
temperature profile is given by (30); the effective temperature at the peak is
then
$T_{\rm max}=T_{0}\left(1+\frac{D_{m}}{D_{0}}\right),$ (42)
where
$T_{0}=\frac{D_{0}}{2\xi_{0}}.$ (43)
This represents Landauer’s so-called “blowtorch”, which heats a small segment
of the reaction coordinate, as shown diagrammatically in Fig. 1.
Figure 1: An adjustable temperature spike is introduced which heats a chosen
part of our reaction coordinate potential.
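The temperature profile of Eqs. (40)-(43) is straightforward to tabulate. A short sketch (with an arbitrary spike position and magnitude chosen purely for illustration) verifies that the maximum of $T_{\text{eff}}(x)=D(x)/2\xi_{0}$ reproduces Eq. (42):

```python
import numpy as np

# Background values from Sec. III.1; the spike parameters are illustrative.
D0, xi0 = 0.01, 1.0
Dm, sigma, x0 = 0.05, 0.05, -0.8

x = np.linspace(-3.0, 3.0, 6001)                    # grid contains x0 exactly
D = D0 + Dm * np.exp(-(x - x0) ** 2 / sigma**2)     # Eqs. (40)-(41)
T_eff = D / (2.0 * xi0)                             # Eq. (30)

T0 = D0 / (2.0 * xi0)
T_max = T0 * (1.0 + Dm / D0)                        # Eq. (42)
print(T_eff.max(), T_max)  # both approximately 0.03
```

Shifting $x_{0}$ along the reaction coordinate then reproduces the scan underlying Fig. 2.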
Here, the intention is to study the effect of shifting the temperature spike
along the reaction coordinate on the mean first-passage time $\tau$. We
analyse the overdamped and underdamped regimes separately for the same
parameters except for the mass $m$ of a Brownian particle, which is chosen to
satisfy the desired regime.
(a)
(b)
Figure 2: The calculated mean first-passage time $\tau$ as a function of the
temperature peak’s position along the reaction coordinate, for the (a)
overdamped and (b) underdamped regime ($m=1000$a.u). The black line denotes
$\tau$ in the absence of an applied blowtorch. Parameters: $D_{0}=0.01$,
$\xi_{0}=1$, $\sigma=0.05$.
In Fig.2, we observe the behaviour of the mean first-passage time as the
position of the temperature peak is shifted along the reaction coordinate
(shown in blue), while the reaction potential is shown as a reference in
orange. In the underdamped regime, we observe $\tau$ to be minimized when the
heating is applied to the bottom of the potential. This enables the molecule
to heat up quickly at low energies, and repeatedly attain more energy as it
passes through this region in a near-harmonic manner.
The overdamped regime differs, in that $\tau$ is minimized when the heating is
applied approximately halfway up the potential, around the point of steepest
ascent. In the overdamped regime, the escaping particle will very quickly
equilibrate to any given temperature fluctuation to which it is exposed. As
such, the heated region causes a flattening of the probability distribution in
that region, nullifying the dependence of the distribution on the reaction
coordinate. This causes an effective reduction to the height of the energy
barrier $U_{b}$, as elucidated by Landauer Landauer (1975, 1993); a phenomenon
which is maximized when the heating is applied in the region of steepest
ascent. We note the counter-intuitive observation that if the heating is
applied to the bottom of the potential in the overdamped case, this causes
only a small reduction to $\tau$. This is because the particle will quickly
lose the obtained energy as it returns to the cooler regions when it attempts
to escape.
It is insightful for us to also study the effect of the strength of
interaction of a Brownian particle with the environment, while maintaining a
homogeneous temperature. This entails that any change in the diffusion
coefficient as a function of $x$ will be counteracted by a corresponding
change in the viscosity coefficient at the same $x$, enforcing a homogeneous
temperature as per the fluctuation-dissipation theorem. Here, we perform a
similar analysis as above, such that we have a moveable peak of increased
interaction (simultaneously locally increased diffusion and viscosity) while
the temperature is homogeneous. The results of this are displayed in Fig.3.
(a)
(b)
Figure 3: The calculated mean first-passage time as a function of the
interaction peak’s position along the reaction coordinate, for the (a)
overdamped and (b) underdamped regime ($m_{e}=1000$a.u). The black line
denotes $\tau$ in the absence of an applied blowtorch. Parameters:
$D_{0}=0.01$, $\xi_{0}=1$, $\sigma=0.05$.
In the underdamped regime, we observe that the largest reduction to $\tau$
occurs when the interaction peak is placed at the minimum. The decrease in
$\tau$ agrees with the homogeneous-case solution, with the distinction that
reaction coordinates at higher energies in the potential have diminishing
contributions to the decreasing $\tau$. In the overdamped regime, it is seen
that the interaction peak results in an increase to $\tau$ as also predicted
in the homogeneous case. However, we observe that this is dominated by the
increased interaction strength near to the maximum of the potential, while
changing the interaction strength in the rest of the potential has negligible
effect. This demonstrates that $\tau$ is largely insensitive to the interaction
strength in the overdamped regime, except in the region approaching the
maximum.
This general analysis arms us with the required physical intuition before
proceeding to the next section, in which we first observe how a Landauer
blowtorch emerges naturally from a simple molecular junction model, then
demonstrate the effect on hypothetical chemical reaction rates.
### III.2 Application to a molecule with a single current-carrying molecular
orbital
In this section, we analyze the calculated mean first-passage time $\tau$ for
our model of a molecular junction. Contrary to section III.1, the viscosity
and diffusion coefficients will be computed using nonequilibrium Green’s
functions according to eqs. (18) and (21). We consider the case of a single
electronic level coupled to the left and right leads under some applied bias
voltage. The molecular Hamiltonian is simplified to
$H_{M}=(h_{0}+\lambda x)d^{{\dagger}}d,$ (44)
where $d^{\dagger}$ and $d$ denote the creation and
annihilation operators for an electron on the molecular orbital, while all
previous matrix quantities are simplified to scalars in this regime. Here, the
dependency of $H_{M}$ on the reaction coordinate acts to shift the electronic
level, as scaled by the tuneable parameter $\lambda$. The left and right lead
are each at room temperature (0.00095a.u) and are symmetrically coupled to the
central electronic state such that our level-width function is given by
$\Gamma_{L}=\Gamma_{R}=0.03$.
In the interest of consistency, we again utilize the same quartic to describe
our classical nuclear potential for the reaction coordinate, which is now
acted on by an additional adiabatic force term computed using nonequilibrium
Green’s functions according to Eq. (17). This has the effect of shifting
and shallowing/deepening the reaction potential depending on the parameters
chosen. We also allow for the manual shift of the external potential along the
reaction coordinate according to some parameter $x_{a}$:
$U(x)=-a(x-x_{a})^{2}+b(x-x_{a})^{4}.$ (45)
This means that when $x_{a}=0$, the potential minimum (ignoring the effects of
$F$) occurs at $x=0$, while a positive $x_{a}$ shifts the input potential to
the right. Any bias voltage is applied symmetrically, such that
$\mu_{L}=-\mu_{R}=V/2$, where $\mu_{L}$ and $\mu_{R}$ are the chemical
potentials of the left and right leads.
(a)
(b)
(c)
(d)
Figure 4: The effect of an applied gate voltage to (a) the viscosity
coefficient and (b) the effective temperature. The mean first-passage time
$\tau$ in the (c) overdamped and (d) underdamped regime is plotted against the
peak coordinate of the viscosity and temperature (as determined by the applied
gate voltage) for different $\lambda$. The coordinates of the minimum and
maximum of the reaction potential are denoted by the vertical black lines in
(c) and (d).
We study the effect of applying a gate voltage to the system, as modelled by a
shift in the $h_{0}$ value. This allows for a degree of controllability of the
reaction rates for a given system. Figs.4(a) and 4(b) demonstrate the
resultant viscosity coefficient and temperature respectively, as a function of
the reaction coordinate. Application of a gate voltage acts to shift the curve
along the reaction coordinate. This analysis is performed for a non-zero bias
voltage such that the temperature is now inhomogeneous in addition to the
viscosity. In the underdamped regime shown in Fig.4(d), $\tau$ is minimized
when the viscosity and temperature peaks are shifted near to the minimum of
the reaction potential (note however, that the minimum in $\tau$ does not
occur exactly when the peaks are shifted to the minimum of the potential due
to the slight asymmetry of the reaction potential). In contrast, the
overdamped regime displays highly non-trivial behaviour, arising as a result
of the interplay between the strength of the viscosity and the temperature. In
our analysis of the overdamped regime in the previous section, we noted that
the dependence of $\tau$ on the temperature is dominated by the region of
steepest ascent up towards the maximum. Here, we again observe this behaviour
as the large peak in Fig.4(c) corresponds to when the dip in the temperature
occurs in this region (when the temperature peak has been shifted to the
right). A corresponding but smaller peak also occurs due to a shift to the
left in the temperature such that the low temperature aligns with the steep
region of the potential. The difference in peak sizes arises as a result of
the inhomogeneous viscosity, which per the previous section, we know is
important in the region near the maximum of the reaction potential. The large
peak in $\tau$ occurs when the temperature is low in the steep region, while
the viscosity is high towards the maximum. The small peak has a low viscosity
near the maximum, explaining its comparatively smaller magnitude.
### III.3 Model of a two-level molecule
In this section, we expand the model to consider a two-level system. The
second molecular orbital is not a simple addition of an extra level here,
since the Green’s functions, self-energies and molecular Hamiltonian become
$2\times 2$ matrices and some nontrivial terms such as the commutator in (14)
will no longer be zero. In our model, the molecular energy levels are taken
to correspond to the bonding and anti-bonding states of a free $H_{2}^{+}$
molecule McQuarrie (2007). As such, the molecular Hamiltonian now reads
$H_{M}(t)=\sum_{ij}h_{ij}(q(t))d^{{\dagger}}_{i}d_{j},$ (46)
where $d_{i}^{{\dagger}}$ and $d_{j}$ are now in the molecular orbital basis.
The electronic Hamiltonian elements are represented in the form of a $2\times
2$ matrix according to
$h=\begin{pmatrix}H_{b}(q)&0\\\ 0&H_{a}(q)\end{pmatrix},$ (47)
where we use $H_{b}(q)$ and $H_{a}(q)$ to denote the bonding and anti-bonding
molecular orbitals, respectively, while $q$ is the bond-length. The values for
$H_{b}(q)$ and $H_{a}(q)$ are calculated according to molecular orbital
theory McQuarrie (2007) and are given by
$H_{b}(q)=\frac{H_{AA}+H_{AB}}{1+S_{AB}},$ (48)
and
$H_{a}(q)=\frac{H_{AA}-H_{AB}}{1-S_{AB}},$ (49)
where $H_{AA}$ and $H_{AB}$ are the Hamiltonian elements in the atomic basis
and $S_{AB}$ is the overlap integral between atomic 1s Slater orbitals. The
constituent components are then given by
$H_{AA}=-\frac{1}{2}+e^{-2q}\Big{(}1+\frac{1}{q}\Big{)}-\frac{1}{q},$ (50)
(a)
(b)
Figure 5: The adiabatic potential as a function of the bond-length presented
for (a) varying bias voltages and (b) varying the magnitude of leads coupling
to $H_{a}$. Parameters: $V=0$, $\Gamma_{aa}/\Gamma_{bb}=1$, unless otherwise
specified. $\Gamma_{bb}=0.03$ in all calculations.
(a)
(b)
Figure 6: The effect of varying the bias voltage is shown for the (a)
viscosity coefficient and (b) the effective temperature, as a function of the
bond length. Insets: Shows the same quantity at $V=0.02$ for
$\Gamma_{aa}/\Gamma_{bb}=0$ (dashed) and $\Gamma_{aa}/\Gamma_{bb}=0.5$
(solid). Parameters; $\Gamma_{aa}/\Gamma_{bb}=1$ in the main plots.
$\Gamma_{bb}=0.03$ in all calculations.
(a)
(b)
Figure 7: The mean first-passage time $\tau$ as a function of the bias
voltage, varying the coupling to $H_{a}$ in the (a) overdamped and (b)
underdamped cases.
$H_{AB}=-\frac{S_{AB}(q)}{2}-e^{-q}(1+q),$ (51)
and
$S_{AB}=e^{-q}(1+q+q^{2}/3).$ (52)
In the interest of simplicity, each of the molecular orbitals is symmetrically
coupled to the left and right leads, as controlled by the parameter $\Gamma$
which now takes the form of a matrix as per
$\Gamma_{\alpha}=\begin{pmatrix}\Gamma_{\alpha,bb}&\Gamma_{\alpha,ba}\\\
\Gamma_{\alpha,ab}&\Gamma_{\alpha,aa}\end{pmatrix}$ (53)
for the $\alpha$ lead, where the off-diagonal components can be defined
according to,
$\Gamma_{\alpha,ba}=\Gamma_{\alpha,ab}=\sqrt{\Gamma_{\alpha,aa}\Gamma_{\alpha,bb}}.$
(54)
In each test, we have $\mu_{L}=-0.7$ and $\mu_{R}=\mu_{L}-V$, while the lead
temperatures are again set to room temperature.
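A quick numerical check (with an illustrative $\Gamma_{aa}/\Gamma_{bb}$ ratio, since the text fixes only $\Gamma_{bb}=0.03$): the off-diagonal choice of Eq. (54) makes each lead's coupling matrix rank one, i.e., the lead couples to a single effective linear combination of the two orbitals.

```python
import numpy as np

# Lead-coupling matrix of Eqs. (53)-(54). The off-diagonal choice
# sqrt(G_aa * G_bb) makes Gamma rank one: eigenvalues 0 and G_aa + G_bb.
G_bb = 0.03              # coupling to the bonding level (value from the text)
G_aa = 0.5 * G_bb        # coupling to the anti-bonding level (illustrative)
G_ab = np.sqrt(G_aa * G_bb)

Gamma = np.array([[G_bb, G_ab],
                  [G_ab, G_aa]])

eigs = np.linalg.eigvalsh(Gamma)   # ascending: [0, G_aa + G_bb] up to rounding
```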
The external potential now represents the classical nuclear repulsion, which
in atomic units is given by
$U(q)=\frac{1}{q}.$ (55)
Inclusion of the electronic forces allows us to generate modified electronic
potentials for varied parameters in order to assess the molecular stability.
Examples of these potentials are shown in Fig.5, where in (a) an applied bias
voltage is shown to decrease the energy required for bond rupture, while (b)
shows the effect of the additional electronic level which when occupied, acts
to increase the bond stability.
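In the decoupled, zero-bias limit the bare adiabatic curve follows directly from Eqs. (48)-(52) and (55). The sketch below (atomic units; the grid range is an assumption) recovers the textbook LCAO equilibrium bond length of about 2.5 a.u.:

```python
import numpy as np

# H2+ LCAO molecular-orbital energies, Eqs. (48)-(52), plus the classical
# nuclear repulsion U(q) = 1/q of Eq. (55); atomic units throughout.
def S_AB(q):
    return np.exp(-q) * (1.0 + q + q**2 / 3.0)

def H_AA(q):
    return -0.5 + np.exp(-2.0 * q) * (1.0 + 1.0 / q) - 1.0 / q

def H_AB(q):
    return -S_AB(q) / 2.0 - np.exp(-q) * (1.0 + q)

def H_bonding(q):
    return (H_AA(q) + H_AB(q)) / (1.0 + S_AB(q))

def H_antibonding(q):
    return (H_AA(q) - H_AB(q)) / (1.0 - S_AB(q))

q = np.linspace(0.5, 8.0, 751)        # bond-length grid (assumed range)
E_ground = 1.0 / q + H_bonding(q)     # bare adiabatic curve, leads decoupled
q_eq = q[np.argmin(E_ground)]         # equilibrium bond length, ~2.5 a.u.
```

The lead self-energies and the current-induced adiabatic force of Eq. (17) then deform this bare curve into the potentials shown in Fig. 5.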
Along with the shape of the effective potential, the bond stability is also
determined by the electronic viscosity and effective temperature, which are
demonstrated in Fig.6. In the viscosity coefficient, each curve shows a peak
at small $q$, which approximately corresponds to when $H_{a}$ crosses the
Fermi level of the left lead. Likewise, the peaks at large $q$ are a result of
$H_{b}$ crossing the Fermi level of the right, then left, leads (these split
peaks merge together when $V=0$). The inset plot demonstrates the effect of
allowing an additional transport channel through the excited state which not
only introduces the peak at small $q$, but also increases the magnitude of the
viscous forces overall. The effective temperature is equal to the leads’
temperature for $V=0$, while non-zero voltages yield a complex array of
localized heating and cooling effects, which arise as the energy levels shift
in and out of the resonance region as the bond-length is increased.
These competing effects culminate in our calculation of the mean first-passage
time, which is demonstrated in Fig.7 as a function of the bias voltage, for
different coupling values to the excited electronic state. In both limiting
regimes, an increase to the bias voltage acts to destabilize the bond and
decrease $\tau$, both due to the increased effective temperatures and the
weakening of the bond due to the current-induced forces. In the overdamped
regime, allowing the leads to be coupled to an additional level in the central
region has a stabilizing effect for all voltages tested, increasing the
average amount of time for bond rupture. The underdamped case shows similar
behavior for very low voltages; however, at higher voltages the availability
of the additional transport channel through the excited state increases the
current-induced forces such that the energy required for a bond-rupture is
found more easily, decreasing $\tau$.
## IV CONCLUSIONS
In this paper, we have demonstrated that the rates of chemical reactions for
molecules in electronic junctions depend on three crucial ingredients: the
potential energy surface which defines the energy required for a configuration
change or bond rupture, the rate of the energy removal from vibrational to
electronic degrees of freedom given by the electronic viscosity coefficient,
and lastly, the effective temperature dynamically established in the molecule.
While the magnitude of these quantities is of high importance, the local
distribution of the viscosity and effective temperature along the potential
energy surface (Landauer’s blowtorch effect) also proves to be critical.
The addition of localized heating and cooling effects as a result of
inhomogeneity with respect to the molecular configuration has been shown to
induce significant variations in the mean first-passage time, as calculated
according to a Fokker-Planck description obtained by separating slow (reaction
coordinate) and fast (electronic) time-scales in the Keldysh-Kadanoff-Baym
equations for the nonequilibrium Green’s functions. This has been demonstrated
for a single-level molecular junction model, as well as a two-level model
inspired by $H_{2}^{+}$ molecular orbitals with the bond length considered as
the reaction coordinate. This interplay between the amount of energy required
for bond rupture and the energy supplied due to tunneling electrons has been
shown to be strongly dependent on the choice of experimentally tuneable
parameters for the system. This enables the possibility of a high degree of
controllability for molecular junction systems, with promises of controlled
initiation of chemical reactions or conversely, enforcing the stability of
specific configurations within the system.
## DATA AVAILABILITY
The data that support the findings of this study are available within the
article.
## Appendix A Derivation of the energy diffusion equation in the underdamped
limit
Following Zwanzig Zwanzig (2001), we can introduce the projection operator
$\widehat{O}=\Omega^{-1}(E)\int dxdp\,\,\delta(E-H)$ (56)
where
$H=\frac{p^{2}}{2m}+U_{\text{eff}}(x)$ (57)
is the Hamiltonian of the bath-free Brownian particle and the microcanonical
partition function is defined as
$\Omega(E)=\int dxdp\,\,\delta(E-H)$ (58)
(equivalently, $\Omega(E)$ is determined via Eq. (29)). Note that
$\widehat{O}$ has an extra factor of $\Omega^{-1}(E)$ in comparison with the
operator introduced by Zwanzig. With our definition,
$\widehat{O}^{2}=\widehat{O}$. Applying the projection operator of Eq. (56) to
the Fokker-Planck equation (25), exploiting the identity
$\widehat{O}\left(-\partial_{x}\frac{p}{m}+\partial_{p}U_{\text{eff}}^{{}^{\prime}}(x)\right)=0$
and taking into account that $\xi(x)\ll 1$ (underdamped limit) we obtain
$\partial_{t}\widehat{O}\rho(x,p,t)=\widehat{O}\xi(x)\left\\{\partial_{p}\frac{p}{m}+\partial_{p}^{2}T(x)\right\\}\widehat{O}\rho(x,p,t).$
(59)
Changing the differentiation variables ($\partial_{p}=m^{-1}\partial_{E}p$)
and using the identity
$\partial_{p}^{2}\widehat{O}=m^{-1}\partial_{p}p\partial_{E}\widehat{O}$, we
obtain the following equation for $\rho(E,t)\equiv\Omega(E)\widehat{O}\rho(t)$
(cf. Zwanzig (2001); Gelin and Kosov (2007)):
$\partial_{t}\rho(E,t)=\partial_{E}\left\\{\mu(E)+\nu(E)\partial_{E}\right\\}\Omega^{-1}(E)\rho(E,t).$
(60)
Here $\mu(E)$ and $\nu(E)$ are defined through Eqs. (33) and (28), and
$\rho(E,t)$ is normalized according to $\int dE\rho(E,t)=1$.
## References
* Weick _et al._ (2010) G. Weick, F. Pistolesi, E. Mariani, and F. von Oppen, Phys. Rev. B 81, 121409 (2010).
* Lü _et al._ (2012) J.-T. Lü, M. Brandbyge, P. Hedegård, T. N. Todorov, and D. Dundas, Phys. Rev. B 85, 245444 (2012).
* Preston _et al._ (2020a) R. J. Preston, V. F. Kershaw, and D. S. Kosov, Phys. Rev. B 101, 155415 (2020a).
* Kershaw and Kosov (2020) V. F. Kershaw and D. S. Kosov, The Journal of Chemical Physics 153, 154101 (2020), https://doi.org/10.1063/5.0023275 .
* Dzhioev and Kosov (2011) A. A. Dzhioev and D. S. Kosov, The Journal of Chemical Physics 135, 074701 (2011), https://doi.org/10.1063/1.3626521 .
* Dzhioev _et al._ (2013) A. A. Dzhioev, D. S. Kosov, and F. von Oppen, The Journal of Chemical Physics 138, 134103 (2013), https://doi.org/10.1063/1.4797495 .
* Erpenbeck _et al._ (2018) A. Erpenbeck, C. Schinabeck, U. Peskin, and M. Thoss, Phys. Rev. B 97, 235452 (2018).
* Härtle and Thoss (2011) R. Härtle and M. Thoss, Phys. Rev. B 83, 125419 (2011).
* Li _et al._ (2015) H. Li, T. A. Su, V. Zhang, M. L. Steigerwald, C. Nuckolls, and L. Venkataraman, Journal of the American Chemical Society 137, 5028 (2015), pMID: 25675085, https://doi.org/10.1021/ja512523r .
* Li _et al._ (2016) H. Li, N. T. Kim, T. A. Su, M. L. Steigerwald, C. Nuckolls, P. Darancet, J. L. Leighton, and L. Venkataraman, Journal of the American Chemical Society 138, 16159 (2016), pMID: 27960303, https://doi.org/10.1021/jacs.6b10700 .
* Lü _et al._ (2010) J.-T. Lü, M. Brandbyge, and P. Hedegård, Nano Letters 10, 1657 (2010), https://doi.org/10.1021/nl904233u .
* Huang _et al._ (2013) K. Huang, L. Leung, T. Lim, Z. Ning, and J. C. Polanyi, Journal of the American Chemical Society 135, 6220 (2013).
* Aragonés _et al._ (2016) A. C. Aragonés, N. L. Haworth, N. Darwish, S. Ciampi, N. J. Bloomfield, G. G. Wallace, I. Diez-Perez, and M. L. Coote, Nature 531, 88 (2016).
* Borca _et al._ (2017) B. Borca, T. Michnowicz, R. Pétuya, M. Pristl, V. Schendel, I. Pentegov, U. Kraft, H. Klauk, P. Wahl, R. Gutzler, A. Arnau, U. Schlickum, and K. Kern, ACS Nano 11, 4703 (2017), pMID: 28437066, https://doi.org/10.1021/acsnano.7b00612 .
* Zwanzig (2001) R. Zwanzig, _Nonequilibrium statistical mechanics_ (Oxford University Press, Oxford, 2001).
* Coffey _et al._ (2004) W. T. Coffey, Y. P. Kalmykov, and J. T. Waldron, _The Langevin Equation_ (World Scientific, Singapore, 2004).
* Hänggi _et al._ (1990) P. Hänggi, P. Talkner, and M. Borkovec, Reviews of Modern Physics 62, 251 (1990).
* Melnikov (1991) V. I. Melnikov, Phys. Rep. 209, 1 (1991).
* Voth and Hochstrasser (1996) G. A. Voth and R. M. Hochstrasser, J. Phys. Chem. 100, 13034 (1996).
* Miller (1998) W. H. Miller, J. Phys. Chem. A 102, 793 (1998).
* Pollak and Talkner (2005) E. Pollak and P. Talkner, Chaos 15, 026116 (2005).
* Schüller _et al._ (2020) B. Schüller, A. Meistrenko, H. Hees, Z. Xu, and C. Greiner, Annals of Physics 412, 168045 (2020).
* Koch _et al._ (2006) J. Koch, M. Semmelhack, F. von Oppen, and A. Nitzan, Phys. Rev. B 73, 155306 (2006).
* Brisker and Peskin (2008) D. Brisker and U. Peskin, The Journal of Chemical Physics 129, 244709 (2008), https://doi.org/10.1063/1.3021288 .
* Erpenbeck _et al._ (2020) A. Erpenbeck, Y. Ke, U. Peskin, and M. Thoss, Phys. Rev. B 102, 195421 (2020).
* Dou and Subotnik (2018) W. Dou and J. E. Subotnik, Phys. Rev. B 97, 064303 (2018).
* Dou and Subotnik (2017) W. Dou and J. E. Subotnik, Phys. Rev. B 96, 104305 (2017).
* Dou _et al._ (2017) W. Dou, G. Miao, and J. E. Subotnik, Phys. Rev. Lett. 119, 046001 (2017).
* Bode _et al._ (2011) N. Bode, S. V. Kusminskiy, R. Egger, and F. von Oppen, Phys. Rev. Lett. 107, 036804 (2011).
* Pistolesi _et al._ (2008) F. Pistolesi, Y. M. Blanter, and I. Martin, Phys. Rev. B 78, 085127 (2008).
* Bode _et al._ (2012) N. Bode, S. V. Kusminskiy, R. Egger, and F. von Oppen, Beilstein J. Nanotechnol 3, 144 (2012).
* Preston _et al._ (2020b) R. J. Preston, T. D. Honeychurch, and D. S. Kosov, The Journal of Chemical Physics 153, 121102 (2020b), https://doi.org/10.1063/5.0019178 .
* Kramers (1940) H. Kramers, Physica 7, 284 (1940).
* Pollak and Berezhkovskii (1993) E. Pollak and A. M. Berezhkovskii, J. Chem. Phys. 99, 1344 (1993).
* Haynes _et al._ (1994) G. R. Haynes, E. Pollak, and G. A. Voth, J. Chem. Phys. 101, 7811 (1994).
* Haynes and Voth (1995) G. R. Haynes and G. A. Voth, J. Chem. Phys. 103, 10176 (1995).
* Neria and Karplus (1996) E. Neria and M. Karplus, J. Chem. Phys. 105, 10812 (1996).
* Gelin and Kosov (2007) M. F. Gelin and D. S. Kosov, J. Chem. Phys. 126, 244501 (2007).
* Landauer (1975) R. Landauer, Phys. Rev. A 12, 636 (1975).
* Landauer (1993) R. Landauer, Physica A: Statistical Mechanics and its Applications 194, 551 (1993).
* Reimann (2002) P. Reimann, Phys. Rep. 361, 57 (2002).
* Hänggi and Marchesoni (2009) P. Hänggi and F. Marchesoni, Rev. Mod. Phys. 81, 387 (2009).
* Erbas-Cakmak _et al._ (2015) S. Erbas-Cakmak, D. A. Leigh, C. T. McTernan, and A. L. Nussbaumer, Chem. Rev. 115, 10081 (2015).
* Das and Ray (2015) M. Das and D. S. Ray, Phys. Rev. E 92, 052133 (2015).
* Devine and Jack (2017) J. Devine and M. W. Jack, Phys. Rev. E 96, 062130 (2017).
* Franzese _et al._ (2017) G. Franzese, I. Latella, and J. M. Rubi, Entropy 19, 507 (2017).
* Holubec _et al._ (2017) V. Holubec, A. Ryabov, M. H. Yaghoubi, M. Varga, A. Khodaee, M. E. Foulaadvand, and P. Chvosta, Entropy 19, 119 (2017).
* Arango-Restrepo and Rubi (2020) A. Arango-Restrepo and J. M. Rubi, J. Chem. Phys. 153, 034108 (2020).
* Basak _et al._ (2019) S. Basak, S. Sengupta, and K. Chattopadhyay, Bioph. Rev. 11, 851 (2019).
* Rubi (2019) J. M. Rubi, EPL 127, 10001 (2019).
* Bekele _et al._ (1999) M. Bekele, S. Rajesh, G. Ananthakrishna, and N. Kumar, Phys. Rev. E 59, 143 (1999).
* Das _et al._ (2015) M. Das, D. Das, D. Barik, and D. S. Ray, Phys. Rev. E 19, 052102 (2015).
* Das and Kantz (2019) M. Das and H. Kantz, Phys. Rev. E 100, 032108 (2019).
* Stefanucci and van Leeuwen (2013) G. Stefanucci and R. van Leeuwen, _Nonequilibrium Many-Body Theory of Quantum Systems: A Modern Introduction_ (Cambridge University Press, 2013).
* Haug and Jauho (2010) H. Haug and A. Jauho, _Quantum Kinetics in Transport and Optics of Semiconductors_ (Springer, Berlin/Heidelberg, 2010).
* Kershaw and Kosov (2017) V. F. Kershaw and D. S. Kosov, J. Chem. Phys. 147, 224109 (2017), https://doi.org/10.1063/1.5007071 .
* Kershaw and Kosov (2018) V. F. Kershaw and D. S. Kosov, J. Chem. Phys. 149, 044121 (2018), https://doi.org/10.1063/1.5028333 .
* Kershaw and Kosov (2019) V. F. Kershaw and D. S. Kosov, Journal of Chemical Physics 150, 074101 (2019), arXiv:1809.07140 .
* van Kampen (1988a) N. G. van Kampen, IBM J. Res. Develop. 32, 107 (1988a).
* van Kampen (1991) N. G. van Kampen, Journal of Statistical Physics 63, 1019 (1991).
* van Kampen (1981) N. G. van Kampen, Journal of Statistical Physics 24, 175 (1981).
* van Kampen (1988b) N. G. van Kampen, Journal of Mathematical Physics 29, 1220 (1988b).
* McQuarrie (2007) D. A. McQuarrie, _Quantum Chemistry_ (University Science Books, Sausalito, California, 2007).
# Néel domain wall as a tunable filter for optically excited magnetostatic
waves
N.E. Khokhlov<EMAIL_ADDRESS>http://www.ioffe.ru/ferrolab/ Ioffe
Institute, 26 Politekhnicheskaya, 194021 St. Petersburg, Russia A.E. Khramova
Ioffe Institute, 26 Politekhnicheskaya, 194021 St. Petersburg, Russia Faculty
of Physics, M.V. Lomonosov Moscow State University, 119991 Moscow, Russia
Russian Quantum Center, Skolkovo, 121205 Moscow, Russia Ia.A. Filatov Ioffe
Institute, 26 Politekhnicheskaya, 194021 St. Petersburg, Russia P.I.
Gerevenkov Ioffe Institute, 26 Politekhnicheskaya, 194021 St. Petersburg,
Russia B.A. Klinskaya Academic lyceum “Physical-Technical High School”,
194021 St. Petersburg, Russia A.M. Kalashnikova Ioffe Institute, 26
Politekhnicheskaya, 194021 St. Petersburg, Russia
###### Abstract
We present a concept of a tunable optical excitation of spin waves and
filtering their spectra in a ferromagnetic film with 180∘ Néel domain wall. We
show by means of micromagnetic simulation that the fluence of the femtosecond
laser pulse and its position with respect to the domain wall affect the
frequencies of the excited spin waves, and that the presence of the domain wall
plays a crucial role in controlling the spin waves’ spectrum. The predicted
effects are understood by analyzing the changes of the spin waves’ dispersion
under the impact of the laser pulse.
Spin waves; domain wall; ultrafast magnetism; spin dynamics; micromagnetism
## I Introduction
In magnonics, spin waves (SW) are used to implement alternative methods of
transferring information in magnetic nanostructures that can replace
traditional transistor circuits [1, 2, 3, 4]. Unlike electric charges, SW can
propagate in materials even without free charge carriers [5, 6]. Thus, SW
propagation is not associated with Joule losses, the reduction of which is a
challenging problem in traditional electronics. Different types of magnetic
ordering support SW with frequencies in the range from GHz to THz [7, 8, 9,
10], extending the operation rates of magnonics circuits.
Developing approaches for controlling SW is essential for bringing magnonics
concepts to applications. Control of SW amplitude, phase, velocity, and
propagation direction have been demonstrated by introducing various types of
magnetic non-uniformity in the SW guiding media [11, 12, 13, 14].
Particularly, topological defects such as domain walls (DW) and skyrmions
change amplitude and phase of SW passing through them [15, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25]. In parallel, tunable magnetic non-uniformities, e.g.,
those induced by illuminating a magnetic structure with light, are of
particular interest since they allow creating reconfigurable magnonic elements
[26, 27, 28, 29]. Furthermore, changing magnetic properties of a medium
locally by femtosecond laser pulses appears to be one of the efficient ways to
generate propagating spin waves and to tune their characteristics [14, 30, 31,
32, 33, 34].
There is a broad range of mechanisms enabling manipulation of magnetic
ordering by laser pulses [35, 36, 37], including ultrafast demagnetization,
inverse magneto-optical effects, excitation of coherent phonons, and ultrafast
change of magnetic anisotropy. The last one is a versatile mechanism, as the
anisotropy can be of different origins: magnetocrystalline, shape- or strain-
induced, etc. Furthermore, laser-induced anisotropy changes as a triggering
mechanism of magnetization dynamics can be realized in metals, semiconductors,
and dielectrics [38] through ultrafast heating [39, 40, 41], excitation of
lattice distortions [42], photomagnetic effects [43], etc. Therefore, it is
appealing to realize a tunable source of SW by combining the advantages of
ultrafast laser-induced excitation of SW with their control by local magnetic
defects, such as DW.
In this Article, we present a micromagnetic study of magnetostatic spin waves
(MSW) optically excited in the vicinity of a 180∘ Néel DW in a thin
ferromagnetic film. MSW are triggered by changes of the magnetic anisotropy of
the film resulting from ultrafast laser-induced increase of the lattice
temperature occurring within a few picoseconds after the excitation [39, 40,
41]. We show that the laser-excited area and the DW effectively form a tunable
source of MSW. Excitation of selected frequencies in the MSW spectrum is found
to be controlled by the laser spot - DW distance, as well as by laser pulse
fluence and laser spot width.
## II Model details
Figure 1: (a) Scheme of the ferromagnetic strip with 180∘ Néel domain wall at
the center. Small arrows indicate orientation of magnetization ${\bf M}$ at
different $x$. Bold double arrow indicates the anisotropy axis along $y$-axis.
(b) Calculated initial spatial distribution of the in-plane magnetization
projections $M_{x}$ and $M_{y}$ across the strip; out-of-plane projection
$M_{z}$ is zero.
We use micromagnetic numerical calculations on a model system of a
ferromagnetic strip. The material parameters of the strip are: the saturation
magnetization $M_{s}$ = 800 kA/m, exchange stiffness $A$ = 1.3$\cdot$10-11
J/m, Gilbert damping constant $\alpha$=0.008, uniaxial anisotropy parameter
$K_{u}$ = 5 kJ/m3. The strip has a width of 60 $\mu$m in the $x$-direction,
infinite length in the $y$-direction, and a thickness of 20 nm, as shown in
Fig.1(a). The easy axis of magnetic anisotropy is along the $y$-axis.
The Object Oriented MicroMagnetic Framework OOMMF [44] is utilized for solving
the Landau–Lifshitz–Gilbert equation [45, 46]:
$\frac{d\bf M}{dt}=-|\gamma|{\bf M}\times{\bf
H}_{\mathrm{eff}}+\frac{\alpha}{M_{s}}\left({\bf M}\times\frac{d\bf
M}{dt}\right),$ (1)
where $\bf M$ is magnetization, $t$ is time, $\gamma=-2.211\times
10^{5}\,\mathrm{m\,A^{-1}\,s^{-1}}$ is the Gilbert gyromagnetic ratio, ${\bf
H}_{\mathrm{eff}}$ is the effective field consisting of anisotropy, exchange
and magnetostatic fields. We consider the case when no external magnetic field
is applied. The solution of Eq. (1) is the dynamics of ${\bf M}$ as a function
of $t$ following the excitation with the laser pulse. We use a time step of
100 fs and a total time window of 4 ns. A cell size of $4\times 4\times 20$
nm$^{3}$ is chosen to be smaller than both the magnetostatic exchange length
$\sqrt{2A/(\mu_{0}M_{s}^{2})}$ and the magnetocrystalline exchange length
$\sqrt{A/K_{u}}$ [47]. The infinite strip length along the $y$-axis is modeled
with 1D periodic boundary conditions. The calculated initial distribution of
${\bf M}$ is a two-domain state with the 180° Néel DW at the center of the
strip, $x_{0}=0$; ${\bf M}$ is oriented in the film plane (Fig.1(b)).
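As a sanity check on the discretization choice, both exchange lengths can be evaluated directly from the material parameters stated above (a minimal sketch; the 4 nm in-plane cell size must stay below both length scales):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

# Material parameters of the strip (from the text)
M_s = 800e3   # saturation magnetization, A/m
A = 1.3e-11   # exchange stiffness, J/m
K_u = 5e3     # uniaxial anisotropy, J/m^3
cell = 4e-9   # in-plane cell size, m

# Magnetostatic and magnetocrystalline exchange lengths [47]
l_ms = math.sqrt(2 * A / (MU0 * M_s**2))  # ~5.7 nm
l_ku = math.sqrt(A / K_u)                 # ~51 nm

assert cell < l_ms < l_ku  # discretization resolves both length scales
print(f"l_ms = {l_ms*1e9:.1f} nm, l_ku = {l_ku*1e9:.1f} nm")
```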
We assume the impact of the optical laser pulse as a local relative reduction
of the anisotropy parameter $\Delta K_{u}$ resulting from the laser-induced
heating. As found in various experiments, reduction of the anisotropy
parameter occurs at a picosecond time scale [40, 39], and, thus, can be
approximated in our model by an instantaneous decrease of $K_{u}$. The
subsequent recovery of $K_{u}$ to its equilibrium value occurs on time scales
of the order of a few nanoseconds and is neglected in our model.
We consider a pump spot with a Gaussian profile along the $x$-axis, elongated
infinitely along the $y$-axis, to model the excitation of plane SW, similar to
recent all-optical experiments [48, 14, 49]. The resulting spatial-temporal
profile of the anisotropy change is
$K_{u}(x,t)=K_{u}\left(1-\Delta
K_{u}\exp\left[-\frac{(x-x_{p})^{2}}{2\sigma^{2}}\right]\Theta(t)\right),$ (2)
where $x_{p}$ is the position of the pump spot center, $\Theta(t)$ is a
Heaviside function, $\sigma$ characterizes the width of the pump. In the
following, we refer to $\sigma$ as the pump width. It is worth noting that the
minimum accessible value of $\sigma$ is set by the diffraction limit and is of
the order of a few hundred nanometers. Thus, the range of wavenumbers of
optically excited SW propagating laterally has an upper limit of
$\sqrt{2}/\sigma$ [50, 32], so wavenumbers below 10 rad/$\mu$m are excited in
optical experiments, which corresponds to the magnetostatic type of SW.
Nonetheless, perpendicular standing exchange SW are accessible by optical
excitation in flat metal films [51, 52] and complex dielectric structures
[53]. Such waves do not propagate laterally and are omitted from the
consideration below.
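Eq. (2) and the $k_{max}$ estimate can be evaluated numerically; in the sketch below, $x_{p}$, $\sigma$, and $\Delta K_{u}$ take the values used later for Fig. 2 (an illustrative evaluation, not part of the micromagnetic solver itself):

```python
import numpy as np

K_u = 5e3        # equilibrium anisotropy, J/m^3
dK = 0.2         # relative reduction Delta K_u
x_p = -2e-6      # pump spot center, m
sigma = 0.2e-6   # pump width, m

def K_u_profile(x, t):
    """Spatial-temporal anisotropy of Eq. (2): instantaneous drop at t = 0."""
    theta = 1.0 if t >= 0 else 0.0   # Heaviside step (scalar t only)
    return K_u * (1 - dK * np.exp(-(x - x_p)**2 / (2 * sigma**2)) * theta)

# Deepest reduction at the spot center, full K_u far away
assert np.isclose(K_u_profile(x_p, 1e-12), K_u * (1 - dK))
assert np.isclose(K_u_profile(x_p + 20 * sigma, 1e-12), K_u)

# Upper limit of optically excited wavenumbers, sqrt(2)/sigma [50, 32]
k_max = np.sqrt(2) / sigma
print(f"k_max = {k_max * 1e-6:.1f} rad/um")  # ~7.1 rad/um, below 10 rad/um
```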
The analysis of MSW properties is based on monitoring the temporal and spatial
evolution of the out-of-plane component of the magnetization $M_{z}$ as laser-
induced MSW are usually detected in all-optical pump-probe experiments using
polar magneto-optical Kerr effect in reflection [33, 32, 51, 50] or Faraday
effect in transmission [30, 14, 48, 31, 49]. Similar to all optical pump-probe
experiments, we assume the detection of $M_{z}$ at a variable position
$x_{probe}$.
## III Results and discussion
### III.1 Excitation of spin waves
Figure 2(a) shows the spatial-temporal distribution of $M_{z}$ for the pump
pulse positioned at $x_{p}=-2\,\mu$m. The calculations show the propagation of
MSW packets from the pump spot in both directions along the $x$-axis.
Figure 2: (a) Spatial-temporal map $M_{z}(x,t)$. (b) Scheme of reorientation
of total effective field ${\bf H}_{eff}$ due to decreasing of anisotropy
parameter. ${\bf H}_{a}^{\prime}$ and ${\bf H}_{eff}^{\prime}$ are the
anisotropy and effective fields at $t=0+$, respectively. (c) 1D FFT of
$M_{z}(t)$ vs $x$-coordinate. (d) $M_{z}(t)$ at $x_{probe}=x_{p}\pm 4\mu$m.
Data in (a,c,d) are obtained for $x_{p}=-2\,\mu$m, $\sigma=0.2\,\mu$m, $\Delta
K_{u}=0.2$.
The excitation of MSW is enabled by an abrupt reorientation of effective field
${\bf H}_{eff}$ caused by the pump, as shown schematically in Fig.2(b). At
equilibrium, ${\bf H}_{eff}$ is a sum of stray field ${\bf H}_{s}$ and
anisotropy field ${\bf H}_{a}=(2K_{u}/M_{s}){\bf m}_{y}$, where ${\bf
m}_{y}=(M_{y}/M_{s}){\bf e}_{y}$, ${\bf e}_{y}$ is the unit vector along the
$y$-axis. Non-zero component of ${\bf H}_{s}$ along $x$-axis is proportional
to $M_{x}$. Stray field appears because of the presence of the DW and
decreases with the distance from the wall (Fig.1(b)). As $H_{a}$ decreases due
to laser-induced change of the anisotropy parameter $\Delta K_{u}$, ${\bf
H}_{eff}$ changes its magnitude and orientation to ${\bf H}_{eff}^{\prime}$ on
a time scale much shorter than the magnetization precession period. This
leads to a non-zero angle between ${\bf H}_{eff}^{\prime}$ and ${\bf M}$ in
the $xy$-plane at $t=0+$ with corresponding non-zero torque $\bf{T}$ acting on
$\bf{M}$:
$\displaystyle{\bf T}(x)=-|\gamma|{\bf M}(x)\times{\bf H}_{eff}^{\prime}(x)=$
$\displaystyle=2|\gamma|K_{u}\Delta
K_{u}\frac{M_{x}(x)M_{y}(x)}{M_{s}^{2}}{\bf e}_{z},$ (3)
where ${\bf e}_{z}$ is the unit vector along the $z$-axis. Non-zero ${\bf T}$
triggers the precession of ${\bf M}$ within the pump spot, which in turn launches the
propagation of MSW outside the spot.
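The spatial profile of the torque in Eq. (III.1) can be sketched by combining it with a textbook 180° Néel wall ansatz, $M_{x}=M_{s}\,\mathrm{sech}(x/\Delta)$, $M_{y}=M_{s}\tanh(x/\Delta)$ with $\Delta=\sqrt{A/K_{u}}$; this wall profile is our illustrative assumption, not an output of the simulation:

```python
import numpy as np

gamma = 2.211e5   # |gyromagnetic ratio|, m/(A*s)
K_u = 5e3         # J/m^3
dK = 0.2          # relative anisotropy reduction
A = 1.3e-11       # J/m
Delta = np.sqrt(A / K_u)   # wall width parameter, ~51 nm (assumed ansatz)

def torque(x):
    """|T| from Eq. (III.1): 2|gamma| K_u dK * M_x M_y / M_s^2,
    with the hypothetical Neel wall profile centered at x0 = 0."""
    m_x = 1.0 / np.cosh(x / Delta)   # M_x / M_s
    m_y = np.tanh(x / Delta)         # M_y / M_s
    return 2 * gamma * K_u * dK * m_x * m_y

x = np.linspace(-1e-6, 1e-6, 20001)
T = torque(x)
# Torque vanishes at the wall center and far from it, peaking nearby
assert abs(torque(0.0)) < 1e-9
i_max = np.argmax(T)
print(f"max torque at x = {x[i_max]*1e9:.0f} nm")  # ~ asinh(1)*Delta ~ 45 nm
```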
Equation (III.1) shows that the described mechanism of MSW excitation requires
the presence of the DW. Indeed, in the absence of the DW, the film is in the
initial single-domain state with ${\bf H}_{eff}$ and ${\bf M}$ aligned along
the $y$-axis and $H_{s}=0$ at any $x$. Thus, the change of $K_{u}$ alters
$H_{a}$, but does not affect the direction of ${\bf H}_{eff}$. The presence of
DW in the two-domain state provides non-zero $M_{x}$ (Fig.1(b)) with the
corresponding non-zero $x$-component of ${\bf H}_{s}$ in the vicinity of
$x_{0}$. Therefore, the presence of DW in our system enables MSW excitation
via ultrafast changes of magnetic anisotropy even in zero applied magnetic
field.
### III.2 Spectrum of MSW optically excited in the DW vicinity
Here we turn to the analysis of the MSW features related to the DW presence.
The spatial-temporal maps $M_{z}(x,t)$ show the reflection and transmission of
the MSW from/through the DW (Fig.2(a)). For a detailed movie of the MSW
propagation, see Suppl. Mat. [54]. The reflection is due to the nonuniform ${\bf H}_{eff}$ in
the DW vicinity. Indeed, the orientation of ${\bf H}_{eff}$ defines the MSW
dispersion law $f(k)$ [55], where $f$ and $k$ are MSW frequency and
wavenumber, respectively. Far from DW, ${\bf k}$ and ${\bf H}_{eff}$ are
orthogonal and $f(k)$ corresponds to the surface mode of MSW [56]. In the DW
vicinity, ${\bf H}_{eff}$ acquires a projection on the $x$-axis resulting in
corresponding changes of $f(k)$. Thus, the DW works as a non-uniformity of the
effective refractive index for MSW, and, as a consequence, a fraction of MSW
packet is reflected from DW [17]. The reflected MSW interferes with the MSW
propagating directly from the pump spot at $x<x_{0}$. We note that the DW
displacement resulting from the interaction with the MSW wavepacket is found
to be only 4 nm (one cell of the mesh) and is thus omitted from the
consideration below.
To analyze in detail the MSW interference at various $x$, we performed one-
dimensional fast Fourier transform (1D FFT) of the temporal signals at
different $x$. The resulting $x$-$f$ maps (Fig.2(c)) demonstrate an evolution
of the MSW packet’s spectrum with $x$, possessing a fir-tree shape. In particular,
there are multiple peaks in the FFT spectra at $x<x_{0}$ due to the
interference. The range $x>x_{0}$ does not reveal any pronounced interference
pattern, and the MSW spectrum possesses a single broad maximum. Thus, below,
we focus our discussion on the properties of MSW at $x<x_{0}$. However, we
note that DW brings the phase shift of $\pi$ to the MSW packet at $x>x_{0}$
(Fig.2(d)). The effect could be described as a switch of a spin angular
momentum of magnons passing through the DW and was observed in recent
experiments [24].
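The 1D FFT analysis can be illustrated with a short numpy sketch using the same sampling as the simulation (100 fs step, 4 ns window). The two-component signal below is a synthetic stand-in for the calculated $M_{z}(t)$; the 6 and 8 GHz components are arbitrarily chosen example frequencies, not values from the simulation:

```python
import numpy as np

dt = 100e-15     # time step, 100 fs
T_total = 4e-9   # time window, 4 ns
t = np.arange(0, T_total, dt)

# Synthetic M_z(t): two interfering MSW components (illustrative stand-in)
mz = np.sin(2*np.pi*6e9*t) + 0.5*np.sin(2*np.pi*8e9*t)

# 1D FFT of the temporal signal, as used for the x-f maps in Fig. 2(c)
spec = np.abs(np.fft.rfft(mz))
freq = np.fft.rfftfreq(len(t), dt)

# Identify spectral peaks (simple local-maximum criterion)
peaks = [freq[i] for i in range(1, len(spec) - 1)
         if spec[i] > spec[i-1] and spec[i] > spec[i+1]
         and spec[i] > 0.2 * spec.max()]
print([f"{f/1e9:.2f} GHz" for f in peaks])  # ~6 and ~8 GHz recovered
```

The frequency resolution of such a window is $1/(4\,\mathrm{ns})=0.25$ GHz, which sets how closely spaced resonator peaks can be resolved.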
At $x<x_{0}$, we distinguish two ranges, $x_{p}<x<x_{0}$ and $x<x_{p}$, with
different interference patterns and multipeak spectra. The region
$x_{p}<x<x_{0}$ can be seen as a resonator formed by DW and the area excited
with the pump pulse. Here DW works as a partially reflecting mirror for MSW,
as described above, and the pump spot produces non-uniformity of $K_{u}$ and,
thus, acts as a second mirror of the resonator. Indeed, the changes of $K_{u}$
modify ${\bf H}_{eff}$ with corresponding variation of $f(k)$ inside the pump
spot. As a result the spot works as a potential gap for some MSW frequencies,
as discussed below in Sec. III.4. A detailed discussion of MSW traps
induced by continuous-wave lasers can be found elsewhere [57, 58].
In the region $x<x_{p}$, MSW exit the DW-pump resonator and possess a spectrum
with equidistant peaks. As we show below, the properties of the resonator are
defined by the pump parameters, and this enables controllable variations of MSW
spectra. Below, we focus on discussion of the region $x<x_{p}$, as the
frequency composition of the MSW packet does not vary with $x$ here.
### III.3 Effect of the pump position on the MSW spectrum
Figure 3: (a) 1D FFT of $M_{z}(t)$ at different $x_{p}$ and $\Delta
K_{u}=0.1$, $\sigma=0.2\,\mu$m, $x_{probe}=x_{p}-2\,\mu$m. (b) Area under the
FFT curves (symbols) and torque $T(x)$ calculated using Eq. (III.1) (line)
versus $x_{p}$.
Figure 3(a) shows spectra of the MSW obtained at $x_{probe}=x_{p}-2\,\mu$m for
different positions of the pump spot $x_{p}$. As can be seen, the variation of
$x_{p}$ affects the shape and amplitude of MSW spectrum at $x<x_{p}$. This is
a result of the changes of the DW-pump resonator length
$l_{res}=|x_{0}-x_{p}|$, on the one hand, and the spatial variation of $H_{s}$
in the DW vicinity, on the other hand.
The pump position $x_{p}$ defines $l_{res}$, which, in turn, affects the
resonance peaks in the spectrum of MSW and their spectral positions. The
number of peaks is also defined by $l_{res}$. In particular, for larger
$l_{res}$ more peaks with smaller distance between them occur within the MSW
spectrum (Fig.3(a)). Additionally, the choice of $x_{p}$ defines the amplitude
of the excited MSW as follows. $x_{p}$ defines the value of $M_{x}$ and
$H_{s}$ inside the pump spot. They are maximal in the DW vicinity, leading to
a maximal laser-induced torque ${\bf T}$ (Eq. (III.1)) near the DW. Thus, for
smaller $l_{res}$, a larger amplitude of MSW is observed. To demonstrate this, we
find the amplitude of the spectrally broad MSW packet as the area under the
FFT curve and plot it as a function of $x_{p}$ in Fig. 3(b). As can be seen,
the change of the MSW packet amplitude with $x_{p}$ follows the dependence
$T(x)$ from Eq.(III.1).
### III.4 Effect of pump fluence and width on the MSW spectrum
Figure 4: (a) Normalized 1D FFT of $M_{z}(t)$ at different $\Delta K_{u}$ and
$x_{p}=-2\,\mu$m, $\sigma=0.2\,\mu$m, $x_{probe}=-4\,\mu$m. Black arrow points
at the additional resonance peak in the spectra. (b) Scheme of dispersion
shift with $\Delta K_{u}$. (c,d) 2D FFT of $M_{z}(x,t)$ maps at $\Delta
K_{u}=0.1$ and $\Delta K_{u}=1$, respectively.
To simulate the effect of the optical pump fluence variation on the MSW
properties, we performed calculations for different values of $\Delta K_{u}$
at constant $\sigma$. As a result, the change of $\Delta K_{u}$ leads to the
change of MSW spectrum. Particularly, the increase of $\Delta K_{u}$ narrows
the spectrum width and decreases the relative amplitude of high-frequency
resonance peaks (Fig.4(a)).
The effect of pump fluence on MSW spectra could be described qualitatively in
terms of $f(k)$ shift due to $\Delta K_{u}$ (Fig.4(b)). The spectrum of the
excited MSW packet is limited by two factors – the time scale $\tau_{K}$ of
$K_{u}$ decrease, and the range of excited MSW wavenumbers $k$. Typical
$\tau_{K}$ is a few hundred fs in the experiments, and $1/\tau_{K}$
is within the THz range, which is substantially larger than MSW frequency in
ferromagnets. The maximum value of excited MSW wavenumbers
$k_{max}=\sqrt{2}/\sigma$ is defined by the width of the spatial Fourier
transform of the pump spot [30, 32, 50]. Thus, the spectral range of the
excited MSW spans from $f_{FMR}=f(0)$ to $f(k_{max})$ (Fig.4(b)). The decrease
of $K_{u}$ leads to a negative frequency shift of $f(k)$ inside the pump spot
due to decrease of $H_{eff}$ (lower curve in Fig.4(b)). Thus, a smaller
spectral width of the MSW $\Delta f$ is observed outside the laser spot as
there is a cut-on frequency $f_{FMR}$ (upper curve in Fig.4(b)). The increase
of $\Delta K_{u}$ leads to the smaller range of $\Delta f$ for MSW propagating
outside the spot.
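The narrowing mechanism can be sketched with a toy linear dispersion $f(k)=f_{FMR}+vk$ whose zone-center frequency is shifted down inside the pump spot. All numbers below are hypothetical placeholders chosen only to illustrate the trend; they are not taken from the simulations:

```python
import numpy as np

# Toy linear dispersion f(k) = f_FMR + v*k (purely illustrative)
f_FMR = 3e9                    # zone-center frequency outside the spot, Hz
v = 0.5e3                      # slope df/dk, Hz*m (hypothetical)
k_max = np.sqrt(2) / 0.2e-6    # upper excited wavenumber for sigma = 0.2 um
shift_per_dK = 2e9             # downward dispersion shift per unit dK (toy)

def spectral_width(dK):
    """Width of the MSW band escaping the spot: frequencies are generated
    up to f(k_max) on the downshifted (inside-spot) dispersion, but only
    those above the unshifted f_FMR propagate outside."""
    f_top_inside = f_FMR - shift_per_dK * dK + v * k_max
    return max(f_top_inside - f_FMR, 0.0)

widths = [spectral_width(dK) for dK in (0.1, 0.2, 0.5, 1.0)]
# Larger laser-induced anisotropy reduction -> narrower escaping spectrum
assert all(w1 >= w2 for w1, w2 in zip(widths, widths[1:]))
print([f"{w/1e9:.2f} GHz" for w in widths])
```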
The effect of the MSW spectrum narrowing is seen in the calculation results
(Fig.4(a)). Notably, the ratio between the amplitudes of resonance peaks
changes with $\Delta K_{u}$. The increase of $\Delta K_{u}$ leads to narrowing
of the range of the MSW wavenumbers $\Delta k$ detected outside of the pump
spot, as follows from the $f(k)$ scheme (Fig.4(b)) and verified by
reconstruction of $f(k)$ from the calculated $x$-$t$ maps with the 2D FFT
(Fig.4(c,d)).
The dispersion scheme (Fig.4(b)) explains also the spatial decay of the low-
frequency part of the MSW spectrum evident in Fig.2(c). Outside of the pump
spot, only the high-frequency part of MSW packet can propagate as the lower
frequencies are forbidden at $\Delta K_{u}=0$ (upper curve in Fig.4(b)). As a
result, the magnetization precession with $f<f_{FMR}$ is observed only within
the laser-excited spot, which is seen as a ”trunk” of the ”tree” at $x=x_{p}$
on the $x$-$f$ map (Fig.2(c)). The shape of ”coma” is related to the stronger
spatial decay of the high frequency part of the MSW packet upon propagation.
Indeed, the propagation length $L_{pr}$ of the MSW with certain $f$ is
determined by its lifetime $\tau=(2\pi f\alpha)^{-1}$ and phase velocity
$v_{ph}=2\pi f[k(f)]^{-1}$ as
$L_{pr}(f)=\tau v_{ph}=\frac{1}{\alpha k(f)}.$ (4)
Thus, the positive slope of $f(k)$ leads to faster spatial decay of MSW with
higher $f$.
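Eq. (4) gives the wavenumber-dependent decay directly; a minimal sketch with the Gilbert damping from Sec. II (the wavenumbers below are arbitrary example values):

```python
alpha = 0.008   # Gilbert damping constant (Sec. II)

def propagation_length(k):
    """Eq. (4): L_pr = 1 / (alpha * k), with k in rad/m."""
    return 1.0 / (alpha * k)

# Higher wavenumbers (and, for a positive-slope f(k), higher frequencies)
# decay faster in space
for k_um in (2, 5, 10):         # wavenumbers in rad/um
    L = propagation_length(k_um * 1e6)
    print(f"k = {k_um} rad/um -> L_pr = {L*1e6:.1f} um")
```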
Figure 5: (a) Normalized 1D FFT of $M_{z}(t)$ at different $\sigma$ and
$x_{p}=-2\,\mu$m, $\Delta K_{u}=0.2$, $x_{probe}=-4\,\mu$m. Black arrow points
at the additional resonance peak in the spectra. (b,c) 2D FFT of $M_{z}(x,t)$
maps at $\sigma=0.2\,\mu$m and $\sigma=0.5\,\mu$m, respectively.
The increase of the pump spot width $\sigma$ at constant $\Delta K_{u}$ leads
to narrowing of MSW spectra, as demonstrated in Fig.5(a). The effect is a
result of the decrease of the maximal wavevector $k_{max}$ for MSW excited at
larger $\sigma$. Therefore, in this case, the spectrum of excited MSW appears
to be narrower with the suppressed high-frequency part (Fig.4(b)). The effect
of the spectra narrowing is clearly seen in the reconstructed dispersions of
MSW (Fig.5(b,c)). Notably, for small values of $\sigma$ there are additional
resonance peaks appearing in the MSW spectra caused by DW-pump resonator.
## IV Conclusions
We have shown a number of features of MSW optically excited by a femtosecond
laser pulse in the vicinity of a DW in a thin ferromagnetic strip. Firstly,
the presence of DW makes it possible to excite MSW via the change of magnetic
anisotropy induced by ultrafast laser heating in zero applied magnetic field,
as the internal magnetic field of the strip is non-uniform. This contrasts
with the case of the single domain state, when the excitation of MSW via such
anisotropy change requires an applied magnetic field. Secondly, a focused
optical pulse produces local changes in magnetic properties and, therefore,
the DW-pump spot system forms a resonator for MSW. As a result of the
interference of MSW propagating from the pump spot and reflected by the DW,
the spectrum of the MSW packet outside the resonator possesses a complex
structure with several resonance peaks. The properties of the resonator depend
on the pump parameters and position, enabling the adjustment of MSW spectrum.
For instance, the pump position allows tuning the frequencies and amplitudes
of the peaks in the spectrum of MSW packets. The increase of laser pulse
fluence leads to the narrowing of MSW spectrum and the change of the ratio
between the amplitudes of the resonance peaks in the spectrum. Such tuning of
the MSW packet is not available in traditional RF methods of SW excitation, as
an RF field does not vary the magnetic properties of the SW guiding media.
Moreover, as the time scale of the anisotropy change with fs-laser pulses is
about 1 ps, the presented SW resonator could be realized as an element of
ultrafast optically reconfigurable magnonics [26, 27].
Finally, we note that the laser-induced anisotropy changes can also have
different origins apart from ultrafast heating. Thus, there are no principal
restrictions on the sign and value of $\Delta K_{u}$, as well as on its
temporal evolution. Furthermore, the presented concept of MSW excitation and
spectrum modification is expected to work near any magnetic non-uniformity
inducing spatially varying stray and/or demagnetizing fields. Such a non-
uniformity can be induced by Néel or Bloch DW, magnetic skyrmion, bubble
domain, impurities, etc. Each type of non-uniformity presents an individual
interest for the study as the internal structure, size, and topology affect
its interaction with SW drastically [17, 25]. Importantly, the position and
internal structure of magnetic non-uniformities, DW in particular, can be
controlled by external magnetic and electric [59] fields, spin-polarized
currents [60], and even by propagating SW [20, 17, 21, 22, 23], opening
additional paths to tune the SW properties in magnonic devices.
## Author statement
N.E. Khokhlov: Methodology, Software, Writing - Review & Editing, Supervision
A.E. Khramova: Conceptualization, Writing - Original Draft
Ia.A. Filatov: Formal analysis
P.I. Gerevenkov: Software, Visualization
B.A. Klinskaya: Visualization
A.M. Kalashnikova: Conceptualization, Writing - Review & Editing
## Acknowledgments
The authors thank V.I. Belotelov for fruitful discussions. N.E.Kh. and A.E.Kh.
acknowledge financial support by the Russian Foundation for Basic Research
(project No. 19-32-50128) and the ”BASIS” Foundation (grants No. 19-1-3-42-1
and 18-2-6-202-1).
## Competing interests
The authors declare no competing interests.
## References
* Lenk _et al._ [2011] B. Lenk, H. Ulrichs, F. Garbs, and M. Münzenberg, The building blocks of magnonics, Physics Reports 507, 107 (2011).
* Nikitov _et al._ [2015] S. A. Nikitov, D. V. Kalyabin, I. V. Lisenkov, A. N. Slavin, Y. N. Barabanenkov, S. A. Osokin, A. V. Sadovnikov, E. N. Beginin, M. A. Morozova, Y. P. Sharaevsky, Y. A. Filimonov, Y. V. Khivintsev, S. L. Vysotsky, V. K. Sakharov, and E. S. Pavlov, Magnonics: a new research area in spintronics and spin wave electronics, Phys. Usp. 58, 1002 (2015).
* Chumak _et al._ [2015] A. V. Chumak, V. I. Vasyuchka, A. A. Serga, and B. Hillebrands, Magnon spintronics, Nature Physics 11, 453 (2015).
* Mahmoud _et al._ [2020] A. Mahmoud, F. Ciubotaru, F. Vanderveken, A. V. Chumak, S. Hamdioui, C. Adelmann, and S. Cotofana, Introduction to spin wave computing, Journal of Applied Physics 128, 161101 (2020).
* Kajiwara _et al._ [2010] Y. Kajiwara, K. Harii, S. Takahashi, J. Ohe, K. Uchida, M. Mizuguchi, H. Umezawa, H. Kawai, K. Ando, K. Takanashi, S. Maekawa, and E. Saitoh, Transmission of electrical signals by spin-wave interconversion in a magnetic insulator, Nature 464, 262 (2010).
* Hou _et al._ [2019] D. Hou, Z. Qiu, and E. Saitoh, Spin transport in antiferromagnetic insulators: progress and challenges, NPG Asia Mater 11, 35 (2019).
* Wang _et al._ [2002] Z. Wang, M. Kuok, S. Ng, D. Lockwood, M. Cottam, K. Nielsch, R. Wehrspohn, and U. Gösele, Spin-wave quantization in ferromagnetic nickel nanowires, Physical review letters 89, 027201 (2002).
* Cramer _et al._ [2017] J. Cramer, E.-J. Guo, S. Geprägs, A. Kehlberger, Y. P. Ivanov, K. Ganzhorn, F. Della Coletta, M. Althammer, H. Huebl, R. Gross, J. Kosel, M. Kläui, and S. T. B. Goennenwein, Magnon Mode Selective Spin Transport in Compensated Ferrimagnets, Nano Lett. 17, 3334 (2017).
* Rezende _et al._ [2019] S. M. Rezende, A. Azevedo, and R. L. Rodríguez-Suárez, Introduction to antiferromagnetic magnons, Journal of Applied Physics 126, 151101 (2019).
* Lebrun _et al._ [2020] R. Lebrun, A. Ross, O. Gomonay, V. Baltz, U. Ebels, A.-L. Barra, A. Qaiumzadeh, A. Brataas, J. Sinova, and M. Kläui, Long-distance spin-transport across the Morin phase transition up to room temperature in ultra-low damping single crystals of the antiferromagnet $\alpha$-${\rm{Fe}_{2}\rm{O}_{3}}$, Nat Commun 11, 6332 (2020).
* Vasiliev _et al._ [2007] S. Vasiliev, V. Kruglyak, M. Sokolovskii, and A. Kuchko, Spin wave interferometer employing a local nonuniformity of the effective magnetic field, Journal of Applied Physics 101, 113919 (2007).
* Sadovnikov _et al._ [2015] A. V. Sadovnikov, C. S. Davies, S. V. Grishin, V. V. Kruglyak, D. V. Romanenko, Y. P. Sharaevskii, and S. A. Nikitov, Magnonic beam splitter: The building block of parallel magnonic circuitry, Appl. Phys. Lett. 106, 192406 (2015).
* Stigloher _et al._ [2016] J. Stigloher, M. Decker, H. S. Körner, K. Tanabe, T. Moriyama, T. Taniguchi, H. Hata, M. Madami, G. Gubbiotti, K. Kobayashi, T. Ono, and C. H. Back, Snell’s law for spin waves, Phys. Rev. Lett. 117, 037204 (2016).
* Hioki _et al._ [2020a] T. Hioki, Y. Hashimoto, and E. Saitoh, Bi-reflection of spin waves, Commun Phys 3, 188 (2020a).
* Hertel _et al._ [2004] R. Hertel, W. Wulfhekel, and J. Kirschner, Domain-wall induced phase shifts in spin waves, Phys. Rev. Lett. 93, 257202 (2004).
* Buijnsters _et al._ [2016] F. J. Buijnsters, Y. Ferreiros, A. Fasolino, and M. I. Katsnelson, Chirality-dependent transmission of spin waves through domain walls, Phys. Rev. Lett. 116, 147204 (2016).
* Chang _et al._ [2018] L.-J. Chang, Y.-F. Liu, M.-Y. Kao, L.-Z. Tsai, J.-Z. Liang, and S.-F. Lee, Ferromagnetic domain walls as spin wave filters and the interplay between domain walls and spin waves, Scientific reports 8, 1 (2018).
* Albisetti _et al._ [2020] E. Albisetti _et al._ , Synthetic Antiferromagnets: Optically Inspired Nanomagnonics with Nonreciprocal Spin Waves in Synthetic Antiferromagnets, Advanced Materials 32, 2070063 (2020).
* Hämäläinen _et al._ [2018] S. J. Hämäläinen, M. Madami, H. Qin, G. Gubbiotti, and S. van Dijken, Control of spin-wave transmission by a programmable domain wall, Nat Commun 9, 4853 (2018).
* Han _et al._ [2009] D.-S. Han, S.-K. Kim, J.-Y. Lee, S. J. Hermsdoerfer, H. Schultheiss, B. Leven, and B. Hillebrands, Magnetic domain-wall motion by propagating spin waves, Appl. Phys. Lett. 94, 112502 (2009).
* Dadoenkova _et al._ [2019] N. N. Dadoenkova, Y. S. Dadoenkova, I. L. Lyubchanskii, M. Krawczyk, and K. Y. Guslienko, Inelastic Spin‐Wave Scattering by Bloch Domain Wall Flexure Oscillations, Phys. Status Solidi RRL 13, 1800589 (2019).
* Yan _et al._ [2011] P. Yan, X. S. Wang, and X. R. Wang, All-magnonic spin-transfer torque and domain wall propagation, Phys. Rev. Lett. 107, 177207 (2011).
* Wang _et al._ [2012] X.-g. Wang, G.-h. Guo, Y.-z. Nie, G.-f. Zhang, and Z.-x. Li, Domain wall motion induced by the magnonic spin current, Phys. Rev. B 86, 054445 (2012).
* Han _et al._ [2019] J. Han, P. Zhang, J. T. Hou, S. A. Siddiqui, and L. Liu, Mutual control of coherent spin waves and magnetic domain walls in a magnonic device, Science 366, 1121 (2019).
* Lan and Xiao [2021] J. Lan and J. Xiao, Skew scattering and side jump of spin wave across magnetic texture, Phys. Rev. B 103, 054428 (2021).
* Vogel _et al._ [2015] M. Vogel, A. V. Chumak, E. H. Waller, T. Langner, V. I. Vasyuchka, B. Hillebrands, and G. von Freymann, Optically-reconfigurable magnetic materials, Nature Physics 11, 487 (2015).
* Grundler [2015] D. Grundler, Reconfigurable magnonics heats up, Nature Physics 11, 438 (2015).
* Vogel _et al._ [2018] M. Vogel, R. Aßmann, P. Pirro, A. V. Chumak, B. Hillebrands, and G. von Freymann, Control of spin-wave propagation using magnetisation gradients, Scientific Reports 8, 11099 (2018).
* Sadovnikov _et al._ [2019] A. V. Sadovnikov, E. N. Beginin, S. E. Sheshukova, Y. P. Sharaevskii, A. I. Stognij, N. N. Novitski, V. K. Sakharov, Y. V. Khivintsev, and S. A. Nikitov, Route toward semiconductor magnonics: Light-induced spin-wave nonreciprocity in a $\rm{YIG/GaAs}$ structure, Phys. Rev. B 99, 054424 (2019).
* Satoh _et al._ [2012] T. Satoh, Y. Terui, R. Moriya, B. A. Ivanov, K. Ando, E. Saitoh, T. Shimura, and K. Kuroda, Directional control of spin-wave emission by spatially shaped light, Nature Photonics 6, 662 (2012).
* Jäckl _et al._ [2017] M. Jäckl, V. I. Belotelov, I. A. Akimov, I. V. Savochkin, D. R. Yakovlev, A. K. Zvezdin, and M. Bayer, Magnon accumulation by clocked laser excitation as source of long-range spin waves in transparent magnetic films, Phys. Rev. X 7, 021009 (2017).
* Khokhlov _et al._ [2019] N. Khokhlov, P. Gerevenkov, L. Shelukhin, A. Azovtsev, N. Pertsev, M. Wang, A. Rushforth, A. Scherbakov, and A. Kalashnikova, Optical excitation of propagating magnetostatic waves in an epitaxial galfenol film by ultrafast magnetic anisotropy change, Phys. Rev. Applied 12, 044044 (2019).
* Au _et al._ [2013] Y. Au, M. Dvornik, T. Davison, E. Ahmad, P. S. Keatley, A. Vansteenkiste, B. Van Waeyenberge, and V. V. Kruglyak, Direct excitation of propagating spin waves by focused ultrashort optical pulses, Phys. Rev. Lett. 110, 097201 (2013).
* Muralidhar _et al._ [2021] S. Muralidhar, R. Khymyn, A. A. Awad, A. Alemán, D. Hanstorp, and J. Åkerman, Femtosecond laser pulse driven caustic spin wave beams, Phys. Rev. Lett. 126, 037204 (2021).
* Kimel _et al._ [2020] A. Kimel, A. Kalashnikova, A. Pogrebna, and A. Zvezdin, Fundamentals and perspectives of ultrafast photoferroic recording, Physics Reports 852, 1 (2020).
* Kirilyuk _et al._ [2010] A. Kirilyuk, A. V. Kimel, and T. Rasing, Ultrafast optical manipulation of magnetic order, Rev. Mod. Phys. 82, 2731 (2010).
* Walowski and Münzenberg [2016] J. Walowski and M. Münzenberg, Perspective: Ultrafast magnetism and thz spintronics, Journal of Applied Physics 120, 140901 (2016).
* Baranov _et al._ [2019] P. G. Baranov _et al._ , Spintronics of semiconductor, metallic, dielectric, and hybrid structures (100th anniversary of the ioffe institute), Phys. Usp. 62, 795 (2019).
* Shelukhin _et al._ [2018] L. A. Shelukhin, V. V. Pavlov, P. A. Usachev, P. Y. Shamray, R. V. Pisarev, and A. M. Kalashnikova, Ultrafast laser-induced changes of the magnetic anisotropy in a low-symmetry iron garnet film, Phys. Rev. B 97, 014422 (2018).
* Carpene _et al._ [2010] E. Carpene, E. Mancini, D. Dazzi, C. Dallera, E. Puppin, and S. De Silvestri, Ultrafast three-dimensional magnetization precession and magnetic anisotropy of a photoexcited thin film of iron, Phys. Rev. B 81, 060415 (2010).
* Bigot _et al._ [2005] J.-Y. Bigot, M. Vomir, L. Andrade, and E. Beaurepaire, Ultrafast magnetization dynamics in ferromagnetic cobalt: The role of the anisotropy, Chemical Physics 318, 137 (2005).
* Kats _et al._ [2016] V. N. Kats, T. L. Linnik, A. S. Salasyuk, A. W. Rushforth, M. Wang, P. Wadley, A. V. Akimov, S. A. Cavill, V. Holy, A. M. Kalashnikova, and A. V. Scherbakov, Ultrafast changes of magnetic anisotropy driven by laser-generated coherent and noncoherent phonons in metallic films, Phys. Rev. B 93, 214422 (2016).
* Stupakiewicz _et al._ [2017] A. Stupakiewicz, K. Szerenos, D. Afanasiev, A. Kirilyuk, and A. V. Kimel, Ultrafast nonthermal photo-magnetic recording in a transparent medium, Nature 542, 71 (2017).
* Donahue and Porter [1999] M. Donahue and D. G. Porter, OOMMF user’s guide, version 1.0, NIST Interagency Report No. 6376, National Institute of Standards and Technology, Gaithersburg, MD (1999), http://math.nist.gov/oommf.
* Landau and Lifshitz [1935] L. Landau and E. Lifshitz, To the theory of magnetic permeability dispersion in ferromagnetic solids, Sov. Phys 8, 153 (1935).
* Gilbert [2004] T. L. Gilbert, A phenomenological theory of damping in ferromagnetic materials, IEEE Transactions on Magnetics 40, 3443 (2004).
* Abo _et al._ [2013] G. S. Abo, Y.-K. Hong, J. Park, J. Lee, W. Lee, and B.-C. Choi, Definition of magnetic exchange length, IEEE Trans. Magn. 49, 4937 (2013).
* Hioki _et al._ [2020b] T. Hioki, R. Tsuboi, T. H. Johansen, Y. Hashimoto, and E. Saitoh, Snell’s law for spin waves at a 90° magnetic domain wall, Appl. Phys. Lett. 116, 112402 (2020b).
* Matsumoto _et al._ [2020] K. Matsumoto, I. Yoshimine, K. Himeno, T. Shimura, and T. Satoh, Observation of evanescent spin waves in the magnetic dipole regime, Phys. Rev. B 101, 184407 (2020).
* Kamimaki _et al._ [2017] A. Kamimaki, S. Iihama, Y. Sasaki, Y. Ando, and S. Mizukami, Reciprocal excitation of propagating spin waves by a laser pulse and their reciprocal mapping in magnetic metal films, Phys. Rev. B 96, 014438 (2017).
* Kamimaki _et al._ [2017] A. Kamimaki, S. Iihama, Y. Sasaki, Y. Ando, and S. Mizukami, Micro-focused pulse laser-induced propagating spin waves in permalloy films with different thicknesses, IEEE Transactions on Magnetics 53, 1 (2017).
* van Kampen _et al._ [2002] M. van Kampen, C. Jozsa, J. T. Kohlhepp, P. LeClair, L. Lagae, W. J. M. de Jonge, and B. Koopmans, All-optical probe of coherent spin waves, Phys. Rev. Lett. 88, 227201 (2002).
* Chernov _et al._ [2020] A. I. Chernov, M. A. Kozhaev, D. O. Ignatyeva, E. N. Beginin, A. V. Sadovnikov, A. A. Voronov, D. Karki, M. Levy, and V. I. Belotelov, All-dielectric nanophotonics enables tunable excitation of the exchange spin waves, Nano Letters 20, 5259 (2020).
* [54] See Supplemental Material at [URL will be inserted by publisher] with calculated $M_{z}$ dynamics after SW excitation in a movie format.
* Gurevich and Melkov [1996] A. Gurevich and G. Melkov, _Magnetization Oscillations and Waves_ (Taylor & Francis, 1996).
* Damon and Eshbach [1961] R. W. Damon and J. R. Eshbach, Magnetostatic modes of a ferromagnet slab, Journal of Physics and Chemistry of Solids 19, 308 (1961).
* Kolokoltsev _et al._ [2012] O. Kolokoltsev, N. Qureshi, E. Mejía-Uriarte, and C. L. Ordóñez-Romero, Hot spin-wave resonators and scatterers, Journal of Applied Physics 112, 013902 (2012).
* Busse _et al._ [2015] F. Busse, M. Mansurova, B. Lenk, M. von der Ehe, and M. Münzenberg, A scenario for magnonic spin-wave traps, Scientific Reports 5, 12824 (2015).
* Pyatakov _et al._ [2017] A. Pyatakov, V. Belotelov, D. Kulikova, N. Khokhlov, Z. Pyatakova, and A. Nikolaev, Magnetoelectricity in topological magnetic textures, Journal of Magnetism and Magnetic Materials 440, 60 (2017).
* Yamaguchi _et al._ [2004] A. Yamaguchi, T. Ono, S. Nasu, K. Miyake, K. Mibu, and T. Shinjo, Real-space observation of current-driven domain wall motion in submicron magnetic wires, Phys. Rev. Lett. 92, 077205 (2004).
11institutetext: Department of Computer Science and Engineering (DISI)
University of Bologna, Bologna, Italy
11email<EMAIL_ADDRESS>
# An Upper Bound on the Complexity of Tablut
Andrea Galassi ID
###### Abstract
Tablut is a complete-knowledge, deterministic, and asymmetric board game,
which has not been solved nor properly studied yet. In this work, its rules
and characteristics are presented, then a study on its complexity is reported.
An upper bound to its complexity is found eventually by dividing the state-
space of the game into subspaces according to specific conditions. This upper
bound is comparable to the one found for Draughts, therefore, it would seem
that the open challenge of solving this game requires a considerable
computational effort.
###### Keywords:
Board game game-playing game complexity Tafl games.
## 1 Introduction
Tablut is a strategy board game belonging to the family of Tafl games, a group
of Celtic and Nordic asymmetric board games designed for two players, which
share similar rules. Tafl games (sometimes called Hnefatafl games) may derive
from the Roman game Ludus latrunculorum, and have evolved into many different
variants of the original game, such as Tablut, Brandubh, Hnefatafl, and
Tawlbwrdd. The exact rules of these games are difficult to know, since little
documentation has survived to the present day, and Tablut is probably the
one for which most information is available.
Indeed, Carl Nilsson Linnaeus wrote notes about the game in the XVIII century,
after observing Sámi tribes playing it, although he did not know their language
and thus could not be certain of the original rules. In the following centuries,
other researchers [6, 7, 5, 2, 3] have analyzed, translated, and adjusted
Linnaeus’ notes, producing a set of rules which makes the game relatively
balanced and hopefully similar to its original version. Due to this scarce
documentation, Tafl games have never been the subject of proper analysis, and
therefore we know neither their complexity nor their solution to any degree.111
According to [1], a game can be solved in several degrees: ultra-weakly solved
(the game-theoretic value for the initial position is known), weakly solved (a
strategy is known to obtain the game-theoretic value of the game), and
strongly solved (for any state reachable from the initial position, a strategy
to obtain the game-theoretic value of that state is known.) The purpose of
this work is to provide an initial estimate of the complexity of Tablut, with
the hope to lay the grounds for deeper studies.
In Section 2, the rules and the properties of Tablut are presented. In Section
3 the game is analyzed so to compute an upper bound on its complexity. Section
4 concludes.
## 2 Rules and Properties
Many different variants of Tablut rules do exist. In this work, the rules of
the game are described mostly following the work of Ashton [2].
### 2.1 Terminology, Material, and Setup
The game is played by two players on a square board of $9\times 9$ cells
depicted in Figure 1. The central cell is called the royal citadel, or castle,
or throne. There are 4 groups of 4 cells, one on each board side, arranged in a
T-shape; these are called citadels or camps. For better comprehension, the
former will be addressed as the castle and the latter as camps, while the term
citadel will be used to mean either of them. Any non-camp, non-corner cell
along the edge of the board will be addressed as an escape; the reason will
become clear in the next Subsection. Two cells are considered adjacent
if and only if they are aligned horizontally or vertically and they share an
edge. The term side of a checker will be used to indicate any cell which is
adjacent to the cell where the checker is placed.
One player moves the white checkers, which represent the defenders or Swedes,
while the other moves the black checkers, which represent the attackers or
Muscovites. There are 16 black checkers and 9 white checkers.222 In this work,
the terms _checker_ , _stone_ , and _piece_ will be used as synonyms. One of
the white checkers is the king and it is marked. Any non-king piece will be
addressed as soldier. The checkers are placed as in Figure 1, with the king in
the castle, the black soldiers in the camps, and the remaining white soldiers
aligned by 2 on each side of the king.
Figure 1: The Tablut empty board (left) and the initial setup (right). The
_castle_ cell is represented with the orange color and the cross symbol. The
_camp_ cells are represented in grey with a triangle. The _escape_ cells are
in blue, with a star. The white soldiers starting cells are marked with a
circle. The _king_ checker is marked with a black cross symbol.
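The cell counts used throughout the complexity analysis (16 camp cells, 16 escape cells, 45 candidate cells for the king) can be verified with a short script. The concrete (row, col) labels below are our own assumption, chosen to be consistent with Figure 1; only the resulting counts matter for the analysis.

```python
# Hypothetical (row, col) labelling of the 9x9 board, consistent with Fig. 1:
# castle at the centre, four T-shaped camps, escapes on the remaining edge cells.
CASTLE = (4, 4)
CAMPS = {(0, 3), (0, 4), (0, 5), (1, 4),   # top camp
         (8, 3), (8, 4), (8, 5), (7, 4),   # bottom camp
         (3, 0), (4, 0), (5, 0), (4, 1),   # left camp
         (3, 8), (4, 8), (5, 8), (4, 7)}   # right camp
CORNERS = {(0, 0), (0, 8), (8, 0), (8, 8)}
EDGE = {(r, c) for r in range(9) for c in range(9) if r in (0, 8) or c in (0, 8)}
ESCAPES = EDGE - CORNERS - CAMPS

# Cells that may host the king in a non-endgame state (Section 3.1): the
# central 7x7 square minus the four camp cells that fall inside it; the
# castle is included, since the king may sit there.
KING_CELLS = {(r, c) for r in range(1, 8) for c in range(1, 8)} - CAMPS

print(len(CAMPS), len(ESCAPES), len(KING_CELLS))  # 16 16 45
```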
### 2.2 Endgame
The game ends when one of the following conditions is met:
1. 1.
The king reaches one of the escape cells. This results in the winning of the
white player.
2. 2.
The king is captured. This results in the winning of the black player.
3. 3.
The game reaches the same state twice. This results in a draw.333 This rule is
added to avoid an endless sequence of repeated moves and to simplify the game
with respect to [2].
4. 4.
When a player has no possible moves. This results in the winning of the other
player. This is an addition with respect to [2].
### 2.3 Movement and Capture
The two players alternate their turns, which consist of a single movement of a
checker. The white player starts first. A checker can be moved along a single
straight line, horizontally or vertically, by any number of cells. The
movement must not pass over nor end into a cell that is occupied by another
checker. The same holds for cells that are part of a citadel unless the
checker starts its movement from a cell of the same citadel. This implies that
the only checkers that may move inside a citadel are the black checkers, and
only within their starting citadel, provided they have never left it.
To make a capture, a player must move one of its own pieces so as to surround
an adversary piece. A checker is considered surrounded according to different
criteria:
* •
When the king is in the castle, it is considered surrounded if there are enemy
pieces on all four of its sides.
* •
When the king is adjacent to the castle, it is considered surrounded if there
are three enemy pieces on three of its sides and the castle on the fourth.
* •
When a soldier is adjacent to a citadel, or when the king is adjacent to a
camp, it is considered surrounded if there is an enemy piece on the opposite
side with respect to the citadel/camp. A soldier inside a camp cannot be
captured.
* •
In any other case, any piece is considered surrounded if there are two enemy
pieces on two opposite sides of its cell so that the three pieces are aligned
horizontally or vertically.
Capture can happen only in an active way: if a player moves their own piece so
that it becomes surrounded, that piece is not captured. It is possible to
capture multiple pieces (up to 3) with a single move if that move allows
surrounding more than one piece.
### 2.4 Properties
At any moment of the game, the two players know everything about the state of
the game. Given the state of the game, the consequence of any possible move is
also known, since there are no random components. Finally, the two players
have different starting positions, checkers, and goals. Therefore this is a
complete-knowledge, deterministic, and asymmetric game.
The board is symmetric along 4 axes, so a single board configuration can have
up to 7 symmetric configurations.
## 3 Computing an Upper Bound for the State Space Size
To compute the complexity of the game it is useful to divide the state-space
into subspaces and compute their size separately. Firstly two subspaces are
distinguished: the space without endgame states and the space with endgame
states. Then, additional subspaces are considered according to specific
conditions. The symmetries of the problem will not be taken into account.
### 3.1 State Space without Endgames
It is useful to compute the complexity of the game without considering endgame
positions because the computation of the final state of the game may not be
necessary for some solvers. Indeed, the characteristics of the final move may
be sufficient to determine the conclusion of the game and its winner, reducing
the computational footprint. A move that declares an escape cell as the
destination for the king is one such case.
Two possible endgames are not taken into account in this work: the endgame by
draw and the endgame given by the impossibility for a player to move while
still having checkers on the board. The former is naturally included in the
subspace without an endgame state. The latter is not investigated in this
work.
It is possible to make the following considerations regarding any state that
is not an endgame state:
* •
The king has to be on the board, it cannot be in a corner cell, and it cannot
be on an escape cell; otherwise one of the players would have already won
(respectively, the black or the white). Therefore, it can be in any of the
$7\times 7=49$ cells in the center of the board. Excluding the camps, there
are only 45 possible cells, one of which is the castle.
* •
There has to be at least one black soldier on the board. Otherwise, it would
mean that the white player has won. Indeed, the black player could lose its
last checker(s) only due to capture by the white player. The turn after that
move, the black player would not be able to move any checker and therefore
would lose.
* •
The castle can host only the king.
A naive initial upper bound can be computed by considering the values that each
cell can assume: 2 for the castle (with or without the king), 2 for each of the
16 camp cells (with or without a black soldier), 3 for each of the 20 edge
cells (with a black soldier, a white soldier, or neither), and 4 for each of
the remaining 44 cells (with a black soldier, a white soldier, the king, or empty).
This would result in $10^{41}$ possible states as given by Equation 1.
$UB_{naive}=2\cdot 2^{16}\cdot 3^{20}\cdot 4^{44}\approx 10^{41}$ (1)
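As a sanity check, Equation 1 can be evaluated directly with exact integer arithmetic:

```python
# Naive bound of Equation 1: each cell contributes its number of admissible
# contents independently of all other cells.
ub_naive = 2 * 2**16 * 3**20 * 4**44
print(f"{ub_naive:.1e}")  # 1.4e+41
```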
To reduce the upper bound, it is possible to split the state space according
to particular conditions. It is possible indeed to differentiate two
scenarios: when the king is in the castle and when it is not.
In the first scenario, each possible configuration has $1\leq b\leq 16$ black
pieces, $0\leq w\leq 8$ white soldiers, and $e=80-w-b$ empty cells. It can
therefore be modeled as a permutation with repetitions as in Equation 2.
$P_{80}^{(b,w,e)}=\frac{80!}{b!\cdot w!\cdot e!}$ (2)
The second scenario can be defined similarly, but the king would occupy one of
the 44 cells (the castle is excluded). The remaining cells are therefore 79,
so the number of states (for each combination of $b$, $w$, and $e$ value) is
$44\cdot P_{79}^{(b,w,e)}$.
An upper bound on the number of possible states without endgame positions is,
therefore, $6.1\times 10^{27}$, given by summing these two scenarios over
every configuration of $b$, $w$, and $e$, as in Equation 3.
$UB_{no\\_end}=\displaystyle\sum_{b=1}^{16}\displaystyle\sum_{w=0}^{8}P_{80}^{(b,w,80-b-w)}+44\cdot\displaystyle\sum_{b=1}^{16}\displaystyle\sum_{w=0}^{8}P_{79}^{(b,w,79-b-w)}\approx
6.1\times 10^{27}$ (3)
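Equation 3 can be reproduced exactly with integer arithmetic. The helper below is our own sketch of the permutation-with-repetition $P_n^{(b,w,e)}$ of Equation 2, with the number of empty cells implied by the remainder:

```python
from math import factorial

def perm(n, *occupied):
    """P_n^(p1, ..., pk, rest): permutations with repetition over n cells,
    the unlisted rest = n - sum(occupied) cells being empty (Equation 2)."""
    out = factorial(n)
    for p in (*occupied, n - sum(occupied)):
        out //= factorial(p)
    return out

# Equation 3: king in the castle (80 free cells) plus king on any of the
# other 44 central cells (79 free cells).
ub_no_end = (sum(perm(80, b, w) for b in range(1, 17) for w in range(9))
             + 44 * sum(perm(79, b, w) for b in range(1, 17) for w in range(9)))
print(f"{ub_no_end:.1e}")  # ≈ 6.1e+27, as in Equation 3
```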
It is possible to consider one more characteristic: the white checkers cannot
occupy a camp. Therefore, it is useful to treat camp and non-camp cells
separately. A new variable $c$ is now defined as the number of black checkers
inside the camps. The number of black checkers outside the camps is then $b-c$.
The number of possible configurations of black checkers inside the camps is now
given by the permutation $P_{16}^{(c,16-c)}$, with $0\leq c\leq b$. The number
of possible configurations on the non-camp cells can be computed as
previously (considering 16 fewer cells). According to this new consideration,
an upper bound on the size of the two discussed scenarios is then computed as:
$UB_{no\\_end}^{castle}=\displaystyle\sum_{b=1}^{16}\displaystyle\sum_{w=0}^{8}\displaystyle\sum_{c=0}^{b}P_{16}^{(c,16-c)}\cdot
P_{64}^{(b-c,w,64-b-w+c)}\approx 3\times 10^{25}$ (4)
$UB_{no\\_end}^{no\\_castle}=44\cdot\displaystyle\sum_{b=1}^{16}\displaystyle\sum_{w=0}^{8}\displaystyle\sum_{c=0}^{b}P_{16}^{(c,16-c)}\cdot
P_{63}^{(b-c,w,63-b-w+c)}\approx 9.2\times 10^{26}$ (5)
Their sum (Equation 6) provides an upper bound $UB_{no\\_end}^{\prime}\approx
9.5\times 10^{26}$:
$UB_{no\\_end}^{\prime}=UB_{no\\_end}^{castle}+UB_{no\\_end}^{no\\_castle}\approx
9.5\times 10^{26}$ (6)
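The refined bound of Equations 4–6 treats the 16 camp cells separately, since only black checkers may occupy them. A direct evaluation is sketched below (the helper is repeated so the snippet stands alone; the function names are ours, not the paper's):

```python
from math import factorial

def perm(n, *occupied):
    """Permutations with repetition; the rest of the n cells are empty."""
    out = factorial(n)
    for p in (*occupied, n - sum(occupied)):
        out //= factorial(p)
    return out

def scenario(cells):
    """Sum over b, w, c of P_16^(c,16-c) * P_cells^(b-c, w, ...), Eqs. 4-5."""
    return sum(perm(16, c) * perm(cells, b - c, w)
               for b in range(1, 17) for w in range(9) for c in range(b + 1))

ub_castle = scenario(64)             # king in the castle, Equation 4
ub_no_castle = 44 * scenario(63)     # king elsewhere, Equation 5
ub_prime = ub_castle + ub_no_castle  # Equation 6, ≈ 9.5e26
```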
Another fact that could be taken into account is that if three black pieces
occupy the end cells of a camp, the fourth black piece must also be in that
camp; however, this will not be considered in this work.
### 3.2 State Space of Endgames
To make a proper comparison with other board games, an upper bound has to be
computed considering also the endgame positions. This is done by considering
all the possible endgame scenarios and computing an upper bound for each one.
Excluding the case when the king is captured, a first endgame scenario is
obtained considering the states when all the black soldiers have been
captured. As done previously, it is possible to divide the scenarios where the
king is inside or outside the castle.
$UB_{\alpha}=\displaystyle\sum_{w=0}^{8}\left[P_{64}^{(w,64-w)}+44\cdot
P_{63}^{(w,63-w)}\right]\approx 2.0\times 10^{11}$ (7)
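This all-blacks-captured term is tiny compared with the other contributions, and since only white soldiers and empty cells remain, plain binomial coefficients suffice to evaluate it:

```python
from math import comb

# Equation 7: no black soldiers left; the king is either in the castle
# (64 free non-camp cells) or on one of the 44 other central cells (63 cells).
ub_alpha = sum(comb(64, w) + 44 * comb(63, w) for w in range(9))
print(f"{ub_alpha:.1e}")  # ≈ 2.0e+11
```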
Another endgame scenario is given by the successful escape of the king to one
of the 16 escape cells. The cell adjacent to the cell used to escape has to be
empty, therefore only 62 cells can host black and white soldiers. In these
cases, the lower bound on $b$ is once again 1, since without black soldiers
the game would already be over.444It is possible for the king to make a
capture in the same movement in which it reaches the escape, but this does not
change the number of cases; if anything, this consideration would lower the
number of states. This contributes to the final
upper bound with a term
$UB_{\beta}=16\cdot\displaystyle\sum_{b=1}^{16}\displaystyle\sum_{w=0}^{8}\displaystyle\sum_{c=0}^{b}P_{16}^{(c,16-c)}\cdot
P_{62}^{(b-c,w,62-b-w+c)}\approx 2.3\times 10^{26}$ (8)
The last endgame condition is given by the capture of the king, which could
occur in many different cases. In the following scenarios, the variable $b$
will assume the meaning of the number of black soldiers on the board, that are
not involved in the capture of the king.
1. 1.
When the king is inside of the castle, 4 black soldiers are necessary to
capture it, therefore those checkers and those cells are determined. The
number of cells which can host any other checker is therefore reduced from 64
to 60, while the highest possible value of $b$ is 12. The contribution of this
case is:
$UB_{\gamma}=\displaystyle\sum_{b=0}^{12}\displaystyle\sum_{w=0}^{8}\displaystyle\sum_{c=0}^{b}P_{16}^{(c,16-c)}\cdot
P_{60}^{(b-c,w,60-b-w+c)}\approx 2.8\times 10^{22}$ (9)
2. 2.
When the king is adjacent to the castle (4 possible positions), 3 black
soldiers are necessary to capture it. As in the previous case, the cells
surrounding the king are certainly occupied by the 3 black checkers. This
reduces the number of cells to consider to 60 and the highest value of $b$ to 13. The
upper bound is therefore computed as follows:
$UB_{\delta}=4\cdot\displaystyle\sum_{b=0}^{13}\displaystyle\sum_{w=0}^{8}\displaystyle\sum_{c=0}^{b}P_{16}^{(c,16-c)}\cdot
P_{60}^{(b-c,w,60-b-w+c)}\approx 5.1\times 10^{23}$ (10)
3. 3.
When the king is adjacent to a camp (12 possible positions), 1 black soldier
is sufficient to capture it. In 8 cases, the capturing checker can be in 2
positions, while in the other 4 it has to be in a specific position, so the
possible scenarios are 20. The highest value of $b$ is 15 and the number of
cells to consider is 62. The possible configurations for this scenario are:
$UB_{\epsilon}=20\cdot\displaystyle\sum_{b=0}^{15}\displaystyle\sum_{w=0}^{8}\displaystyle\sum_{c=0}^{b}P_{16}^{(c,16-c)}\cdot
P_{62}^{(b-c,w,62-b-w+c)}\approx 8.0\times 10^{25}$ (11)
4. 4.
Finally, when the king is captured in any other position (28 possible cells),
2 black soldiers are necessary. For any position, there are 2 possible
configurations of black soldiers, so the cases are 56. The highest number of
black soldiers on the board not involved in the capture is 14, and the cells
to be taken into account are 61. So the possible configurations for this
scenario are:
$UB_{\zeta}=56\cdot\displaystyle\sum_{b=0}^{14}\displaystyle\sum_{w=0}^{8}\displaystyle\sum_{c=0}^{b}P_{16}^{(c,16-c)}\cdot
P_{61}^{(b-c,w,61-b-w+c)}\approx 1.6\times 10^{26}$ (12)
Taking all these scenarios into account as in Equation 13, the upper bound on
the number of endgame states is $UB_{end}\approx 4.6\times 10^{26}$.
$UB_{end}=UB_{\alpha}+UB_{\beta}+UB_{\gamma}+UB_{\delta}+UB_{\epsilon}+UB_{\zeta}\approx
4.6\times 10^{26}$ (13)
### 3.3 Total State Space
Summing the contributions of Equation 6 and Equation 13, the final upper bound
on the state space is $UB_{total}\approx 1.4\times 10^{27}$:
$UB_{total}=UB_{no\\_end}^{\prime}+UB_{end}\approx 1.4\times 10^{27}$ (14)
Taking into account possible endgames given by impossibility to move, or some
other properties, would lead to an improvement of this estimation. For now, it
is fair to assert that it would be difficult to find even a weak solution for
this game, since games with a similar state space are still unsolved, as
illustrated in Table 1.
Table 1: Comparison of upper bounds on state-space complexity in different board games.

Game                   | $UB$                 | Solution | Source
---------------------- | -------------------- | -------- | ---------
Tablut                 | $1.4\times 10^{27}$  | No       | this work
Nine Men's Morris      | $3\times 10^{11}$    | Strong   | [4]
English Draughts       | $5\times 10^{20}$    | Weak     | [8]
International Draughts | $10^{30}$            | No       | [1]
Othello                | $10^{28}$            | No       | [1]
Chess                  | $10^{43}$, $10^{50}$ | No       | [1]
Go                     | $2\times 10^{170}$   | No       | [9]
## 4 Conclusion and Discussion
For the first time, an upper bound for the state-space complexity of the board
game Tablut has been computed. The game seems to be comparable to the game
of Draughts; therefore, finding a strong solution of Tablut would probably
require a great computational effort. Nonetheless, many characteristics of the
game have not been taken into account, therefore it is still possible to
reduce this upper bound. Moreover, since no lower bound has been computed, its
real complexity may be far lower than what has been estimated in this work.
Due to the separation of the sub-spaces of the game, it is possible to know
which are the scenarios that contribute the most to this upper bound. For what
concerns the non-endgame subspace, they are the cases where the king is not in
the castle. Among the end-game scenarios, they are the scenarios where the
king escapes successfully. Hopefully, this initial investigation will
encourage further work to compute a more accurate evaluation of the complexity
of this game and to find a solution for it.
## References
* [1] Allis, L.V.: Searching for solutions in games and artificial intelligence. Ph.D. thesis, Maastricht University (1994)
* [2] Ashton, J.C.: Linnaeus’s game of tablut and its relationship to the ancient viking game hnefatafl. The Heroic Age: A Journal of Early Medieval Northwestern Europe 13, 1526–1867 (2010), https://www.heroicage.org/issues/13/ashton.php
* [3] Duggan, E.: A game on the edge: an attempt to unravel the gordian knot of tafl games. Discussion paper, University of Suffolk (March 2020), http://oars.uos.ac.uk/1264/
* [4] Gasser, R.: Solving nine men’s morris. Computational Intelligence 12(1), 24–41 (1996). https://doi.org/10.1111/j.1467-8640.1996.tb00251.x, https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8640.1996.tb00251.x
* [5] Helmfrid, S.: Hnefatafl — the strategic board game of the vikings (2005)
* [6] Murray, H.J.R.: A history of chess. Clarendon Press (1913)
* [7] Murray, H.J.R.: A history of board-games other than chess. Clarendon press (1952)
* [8] Schaeffer, J., Burch, N., Björnsson, Y., Kishimoto, A., Müller, M., Lake, R., Lu, P., Sutphen, S.: Checkers is solved. Science 317(5844), 1518–1522 (2007). https://doi.org/10.1126/science.1144079, http://science.sciencemag.org/content/317/5844/1518
* [9] Tromp, J., Farnebäck, G.: Combinatorics of go. In: van den Herik, H.J., Ciancarini, P., Donkers, H.H.L.M.J. (eds.) Computers and Games. pp. 84–99. Springer Berlin Heidelberg, Berlin, Heidelberg (2007). https://doi.org/10.1007/978-3-540-75538-8_8
# Photonic heat rectification in a coupled qubits system
A. Iorio, NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127 Pisa, Italy
E. Strambini, NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127 Pisa, Italy
G. Haack, Department of Applied Physics, University of Geneva, Chemin de Pinchat 22, 1227 Carouge, Genève, Switzerland
M. Campisi, NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127 Pisa, Italy; Department of Physics and Astronomy, University of Florence, I-50019, Sesto Fiorentino (FI), Italy; INFN - Sezione di Pisa, I-56127 Pisa, Italy
F. Giazotto, NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127 Pisa, Italy
###### Abstract
We theoretically investigate a quantum heat diode based on two interacting
flux qubits coupled to two heat baths. Rectification of heat currents is
achieved by asymmetrically coupling the qubits to the reservoirs modelled as
dissipative $RLC$ resonators. We find that the coherent interaction between
the qubits can be exploited to enhance the rectification factor, which
otherwise would be constrained by the baths temperatures and couplings.
Remarkably high values of rectification ratio up to $\mathcal{R}\sim 3.5$ can
be obtained for realistic system parameters, with an enhancement up to $\sim
230\%$ compared to the non-interacting case. The system features the
possibility of manipulating both the rectification amplitude and direction,
allowing for an enhancement or suppression of the heat flow to a chosen bath.
For the regime of parameters in which rectification is maximized, we find a
significant increase of the rectification above a critical interaction value
which corresponds to the onset of non-vanishing entanglement in the system.
Finally, we discuss the dependence of the rectification factor on the bath
temperatures and couplings.
Figure 1: Left: graphic illustration of the qubit heat diode. Two interacting
flux qubits (green) are mutually coupled to each others via the inductance
$M_{12}$ and to two thermal reservoirs with coupling factors $g_{L,R}$. The
two reservoirs reside at temperature $T_{L}$ and $T_{R}$ (red and blue).
Right: circuit diagram corresponding to the investigated system.
## I Introduction
The recent development of quantum technologies brought an increasing interest
in the experimental and theoretical investigation of heat transport at the
nanoscale Pekola (2015); Anders and Esposito (2017); Fornieri and Giazotto
(2017); Binder _et al._ (2018). In this context, phenomena such as phase
coherence and entanglement are currently actively studied as they could
potentially lead to quantum advantages, e.g., in terms of improved performance
of thermal machines Vischi _et al._ (2019) and devices Martinez-Perez and
Giazotto (2013a); Guarcello _et al._ (2017); Hwang, Giazotto, and Sothmann
(2018); Guarcello _et al._ (2018), including refrigerators Niskanen,
Nakamura, and Pekola (2007); Solinas _et al._ (2012); Brunner _et al._
(2014); Solinas, Bosisio, and Giazotto (2016), heat switches Ojanen and Jauho
(2008); Sothmann, Giazotto, and Hankiewicz (2017); Karimi and Pekola (2017);
Ronzani _et al._ (2018); Dutta _et al._ (2020), heat engines Hofer and
Sothmann (2015); Marchegiani _et al._ (2016); Samuelsson, Kheradsoud, and
Sothmann (2017); Haack and Giazotto (2019); Erdman _et al._ (2019); Scharf
_et al._ (2020); Marchegiani, Braggio, and Giazotto (2020), thermal
accelerators Buffoni and Campisi (2020), and towards genuine quantum thermal
machines producing entanglement Brask _et al._ (2015); Khandelwal _et al._
(2020); Aguilar, Freitas, and Paz (2020). A topic of great interest in this
context is thermal rectification i.e., the lack of reversal symmetry of heat
current under the reversal of the temperature gradient established between two
thermal reservoirs. A finite rectification means that the magnitude of the
thermal powers changes as the direction of the heat current gets reversed.
Superconducting hybrid devices offer outstanding performance regarding
electronic heat rectification and highest values of rectification have been
reported in systems composed by tunnel junctions between different
superconductors/normal metals Giazotto and Bergeret (2013); Martinez-Perez and
Giazotto (2013b); Fornieri, Martinez-Perez, and Giazotto (2014); Fornieri and
Giazotto (2017); Martínez-Pérez, Fornieri, and Giazotto (2015), topological
insulators Bours _et al._ (2019) and ferromagnetic insulators Giazotto and
Bergeret (2020). While typically electronic heat conduction is considered at
low temperature, also the radiative channel can be significant or even
dominant in certain designs Schmidt, Schoelkopf, and Cleland (2004); Meschke,
Guichard, and Pekola (2006); Ojanen and Jauho (2008); Ruokola, Ojanen, and
Jauho (2009); Marchegiani, Braggio, and Giazotto (2021); Bosisio _et al._
(2016). Photonic heat flow is important, for instance, when applying circuit
quantum electrodynamics (cQED) schemes to the thermal regime with the
potential to study quantum heat transport with remarkable control and
precision Campisi _et al._ (2013); Pekola (2015); Pekola and Karimi (2020).
This emerging field of superconducting circuit quantum thermodynamics (cQTD)
has already achieved a number of relevant results Partanen _et al._ (2016);
Ronzani _et al._ (2018); Senior _et al._ (2020), being significant for both
fundamental study of quantum mechanics as well as for real world quantum
technologies applications. The investigation of more complex schemes where the
interaction and coherence among multiple qubits plays a prominent role is now
actively developing Campisi, Pekola, and Fazio (2015); Jamshidi Farsani and
Fazio (2019); Clivaz _et al._ (2019); Khandelwal _et al._ (2020); Tavakoli
_et al._ (2020); Rignon-Bret _et al._ (2020).
Here, we theoretically analyze a prototypical system consisting of two
interacting flux qubits coupled to two environmental photonic baths, as
sketched in Fig. 1. The two qubits are inductively coupled to each other and
asymmetrically to the two baths which are modeled as $RLC$ oscillators. This
system, which has been previously investigated as a heat switch Karimi and
Pekola (2017), can behave as a photonic thermal diode whose rectification
factor can be greatly enhanced by the qubits interaction. Moreover, we show
that not only the amplitude but also the direction of rectification can be
manipulated, allowing one to switch between configurations in which heat flow
is favored or suppressed. The high tunability provided by the magnetic flux
provides a convenient knob for the control of both the rectifying amplitude
and direction. Tunable inductive couplings can allow a further control of the
mutual interaction between the qubits themeselves and between qubits and
reservoirs Schwarz _et al._ (2013).
## II Model
The full Hamiltonian describing the two interacting qubits with the
dissipative environments reads
$H=H_{S}+H_{S,L}+H_{S,R}+H_{L}+H_{R},$ (1)
where $H_{S}$ is the Hamiltonian of the two interacting qubits, $H_{S,L/R}$
are the qubit-bath interaction and $H_{L}$ and $H_{R}$ are the bare baths
Hamiltonians, which we shall model as sets of harmonic oscillators. The
Hamiltonian of the interacting qubits reads
$H_{S}=H_{0}+H_{12},$ (2)
with $H_{0}$ being the two non-interacting flux qubits Hamiltonian and
$H_{12}$ the interaction between them. The first term reads Orlando _et al._
(1999)
$H_{0}=\sum_{i=1}^{2}-\epsilon_{i}(q\hat{\sigma}_{z,i}+\Delta_{i}\hat{\sigma}_{x,i}),$
(3)
where $\epsilon_{i}=I_{p,i}\Phi_{0}$, with $I_{p,i}$ the circulating current
and $\Phi_{0}$ the superconducting flux quantum, $\Delta_{i}$ are the
dimensionless tunneling amplitudes, $q=(\Phi_{ex}/\Phi_{0}-\tfrac{1}{2})$ is
the dimensionless applied external magnetic flux and $\hat{\sigma}_{\alpha,i}$
with $\alpha=\\{x,y,z\\}$ denote the Pauli matrices of qubit $i$. The qubits
are inductively coupled to each other so that the corresponding interaction
takes the form
$H_{12}=\gamma\hat{\sigma}_{z,1}\hat{\sigma}_{z,2},$ (4)
where $\gamma=2M_{12}I_{p,1}I_{p,2}$ is the coupling strength and $M_{12}$ is
the mutual inductance between the qubits. In the following, we will consider
the condition of identical qubits, i.e.,
$\epsilon_{1}=\epsilon_{2}=\epsilon_{0}$ and $\Delta_{1}=\Delta_{2}=\Delta$.
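For concreteness, the coupled-qubit Hamiltonian $H_S$ of Equations 2–4 can be assembled as a $4\times 4$ matrix. The sketch below uses $\epsilon_0=1$ and $\Delta=0.1$ as quoted for Fig. 2; the particular values of $q$ and $\gamma$ are illustrative assumptions of ours:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])   # Pauli sigma_x
sz = np.array([[1., 0.], [0., -1.]])  # Pauli sigma_z
I2 = np.eye(2)

def H_S(q, gamma, eps=1.0, delta=0.1):
    """Two identical flux qubits (Eq. 3) with a sigma_z sigma_z coupling (Eq. 4)."""
    sz1, sz2 = np.kron(sz, I2), np.kron(I2, sz)
    sx1, sx2 = np.kron(sx, I2), np.kron(I2, sx)
    H0 = -eps * (q * (sz1 + sz2) + delta * (sx1 + sx2))
    return H0 + gamma * (sz1 @ sz2)

H = H_S(q=0.05, gamma=0.15)
E = np.linalg.eigvalsh(H)  # eigenenergies E_n entering the rates of Eq. 7
```

The eigenbasis of this matrix is the instantaneous basis $|n\rangle$ in which the master equation below is written.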
As represented schematically in Fig. 1, the two qubits are interacting with
two distinct heat baths with temperatures $T_{L}$ (left bath) and $T_{R}$
(right bath). The dissipative environment can be conveniently modeled as an
$LC$ oscillator with a series resistance $R_{B}$ which inductively couples
through the $\hat{\sigma}_{z,i}$ components of the two qubits as depicted in
Fig. 1 Storcz and Wilhelm (2003); Ojanen and Jauho (2008). For uncorrelated
baths, the Hamiltonian describing the interaction between our two qubit system
$S$ and the bath $B=\\{L,R\\}$ reads Wilhelm _et al._ (2003); Martinis _et
al._ (2003)
$H_{S,B}=M_{B}I_{p}(\hat{\sigma}_{z,1}+\hat{\sigma}_{z,2})\otimes\delta\hat{i}_{n,B},$
(5)
where the current operator $\delta\hat{i}_{n,B}$ for the environment $B$ sets
the temperature-dependent Johnson-Nyquist noise through its spectral function
$S_{B}$
$S_{B}(\omega)=\int_{-\infty}^{\infty}dte^{i\omega(t-t^{\prime})}\langle\delta\hat{i}_{n,B}(t)\delta\hat{i}_{n,B}(t^{\prime})\rangle.$
(6a)
The latter can be assessed directly from the impedance of the corresponding
environment Devoret (1997); Storcz and Wilhelm (2003); Ojanen and Jauho
(2008), which in our case reads
$S_{B}(\omega)=\frac{2\hbar\omega}{1-e^{-\hbar\omega/k_{B}T_{B}}}\text{Re}{\\{Y_{B}(\omega)\\}},$
(6b)
with
$Y_{B}(\omega)=1/R_{B}[1+iQ_{B}(\omega/\omega_{LC,B}-\omega_{LC,B}/\omega)]^{-1}$
being the admittance of the $RLC$ circuit of resistance $R_{B}$ and quality
factor $Q_{B}=\sqrt{L_{B}/C_{B}}/R_{B}$.
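In units with $\hbar=k_B=1$, the spectral function of Eq. (6b) reduces to a few lines. The parameter defaults below mirror those quoted for Fig. 2 ($R=1\,\Omega$, $Q=10$, $\omega_{LC}=10\epsilon_0$) and are otherwise illustrative assumptions:

```python
import math

def S_bath(w, T, R=1.0, Q=10.0, w_lc=10.0):
    """Johnson-Nyquist spectral function of a dissipative RLC bath, Eq. (6b),
    in units hbar = kB = 1 (w must be nonzero)."""
    Y = 1.0 / (R * (1.0 + 1j * Q * (w / w_lc - w_lc / w)))  # admittance, Re part enters S
    return 2.0 * w / (1.0 - math.exp(-w / T)) * Y.real

# On resonance Re{Y} is maximal (= 1/R), so the spectrum peaks near w_lc,
# and the quantum noise obeys detailed balance: S(-w) = exp(-w/T) * S(w).
peak = S_bath(10.0, 0.2)
```

The detailed-balance property is what makes the transition rates of Eq. (7) thermal: absorption and emission rates between a pair of levels differ by the Boltzmann factor of the corresponding bath.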
## III Heat transport
Heat transport is crucially determined, among other quantities, by the rate of
transitions incurring in the two-qubit system $S$ as a consequence of their
coupling to the noisy reservoirs. By assuming a standard weak-coupling regime,
transition rates from level $k$ to level $l$ of the coupled qubits system due
to the bath $B$ can be evaluated as Niskanen, Nakamura, and Pekola (2007);
Karimi and Pekola (2017)
$\Gamma_{k\rightarrow l,B}=\frac{(M_{B}I_{p})^{2}}{\hbar^{2}}|\langle
k|(\hat{\sigma}_{z,1}+\hat{\sigma}_{z,2})|l\rangle|^{2}S_{B}(\omega_{kl}),$
(7)
where $|n\rangle$ denotes an eigenstate of $H_{S}$,
$H_{S}|n\rangle=E_{n}|n\rangle$, and $S_{B}(\omega)$ is the noise spectral
function. In order to quantify the thermal power transmitted between the
baths, we first need to evaluate the components of the density matrix
$\rho$ of the coupled qubits system. These are governed by the master equation
Breuer and Petruccione (2007); Blum (2012); Karimi and Pekola (2017)
$\dot{\rho}_{kl}=-i\omega_{kl}\rho_{kl}+\delta_{kl}\sum_{i}\rho_{ii}\Gamma_{i\rightarrow
k}-\frac{1}{2}\rho_{kl}\sum_{i}(\Gamma_{k\rightarrow i}+\Gamma_{l\rightarrow
i}),$ (8)
where $\Gamma_{k\rightarrow l}\equiv\Gamma_{k\rightarrow
l,L}+\Gamma_{k\rightarrow l,R}$ are the total transition rates and $\rho$ is
written in the instantaneous eigenbasis $|n\rangle$ of the coupled qubits
111Note that we are describing our system by a global master equation. This is
dictated by the geometry of our system, with the two qubits being directly
coupled to both baths. For a configuration with the qubits in series rather
than in parallel, special care must be taken in choosing whether local or
global master equations should be used, see e.g. Hofer _et al._ (2017);
Mitchison and Plenio (2018); Cattaneo _et al._ (2020); Khandelwal _et al._
(2020).. For weak qubit-bath coupling, we can thus write the thermal power to
the bath $B$ in the form Aurell and Montana (2019); Karimi and Pekola (2017)
$P_{B}=\sum_{i,j}\rho_{ii}E_{ij}\Gamma_{i\rightarrow j,B},$ (9)
where $E_{ij}=E_{i}-E_{j}$ is the transition energy between eigenstates
$|i\rangle$ and $|j\rangle$ and $\Gamma_{i\rightarrow j,B}$ is the
corresponding transition rate induced by the bath $B$. We can thus quantify
the rectification ratio as
$\mathcal{R}=\left|\frac{P_{B}^{+}}{P_{B}^{-}}\right|,$ (10)
such that the absence of heat rectification corresponds to $\mathcal{R}=1$,
while $\mathcal{R}>1$ or $\mathcal{R}<1$ indicates a favored or suppressed
heat flow to the bath $B$. In the steady state regime, Eq. (10) can be
equivalently expressed in terms of the left/right reservoirs $B=\\{L,R\\}$.
Figure 2: a) Normalized power $\hbar P_{R}^{\pm}/\epsilon_{0}^{2}$ transmitted
to the right bath as a function of the dimensionless applied external flux $q$
for different values of interaction $\tilde{\gamma}=\gamma/\epsilon_{0}$.
Continuous/dashed lines indicates the direct and reverse thermal bias
configuration with $k_{B}T_{L}=\epsilon_{0}/5$ and
$k_{B}T_{R}=\epsilon_{0}/20$. b) Rectification ratio
$\mathcal{R}=|P_{R}^{+}/P_{R}^{-}|$ extracted from the curves in a). The grey
dashed line indicates absence of rectification. c) Full dependence of
$\mathcal{R}$ as a function of $q$ and $\tilde{\gamma}$. The dashed black line
over the white area corresponds to points of absence of rectification
($\mathcal{R}=1$). Red and blue areas correspond, respectively, to regions of
direct and reverse rectification direction. d) Maximal rectification
$\chi_{max}$ as a function of $\tilde{\gamma}$ for different $T_{L}$ at fixed
$k_{B}T_{R}=\epsilon_{0}/20$ is shown as a highlighted curve. The solid and
dashed lines correspond to the quantities $\log{\mathcal{R}_{max}}$ and
$-\log{\mathcal{R}_{min}}$. The turning points at $\tilde{\gamma}_{c}$,
associated with a switch of the rectification direction, are shown as white
circles. The red curve corresponds to the temperature bias shown in c). e)
Entanglement $\mathcal{E}$ corresponding to the same parameter values as in d).
In all plots, $\epsilon_{0}=1$, $\Delta=0.1$,
$\hbar\omega_{LC}=10\epsilon_{0}$, $Q_{L}=Q_{R}=10$, $R_{L}=R_{R}=1$ $\Omega$,
$g_{L}=0.75$, $g_{R}=0.25$ are assumed.
## IV Results
In the following we shall assume different qubit-bath coupling strengths for
the two environments. This provides the necessary structural asymmetry to
observe rectification Segal and Nitzan (2005); Ruokola, Ojanen, and Jauho
(2009). For simplicity, we quantify this coupling by the dimensionless
parameters
$g_{B}=\frac{M_{B}I_{p}}{\sqrt{\hbar R_{B}}},$ (11)
which are set to the values $g_{R}=0.25$ and $g_{L}=0.75$. Fig. 2a depicts the
powers $P_{R}^{\pm}$ transmitted to the right bath as a function of the
applied dimensionless flux $q$ with direct thermal bias (continuous line), and
reverse thermal bias (dashed line). In both cases the dimensionless power
$\hbar P_{R}^{\pm}/\epsilon_{0}^{2}$ increases dramatically at
$\tilde{\gamma}\equiv\gamma/\epsilon_{0}=0.15$ as a result of the resonance
condition met between the qubit energy levels and the frequency of the
$LC$-oscillators constituting the dissipative environment Karimi and Pekola
(2017). More importantly, a notable variation in the intensity of the power
transmitted in the direct/reverse bias configurations is observed,
anticipating the significant rectifying properties of our heat diode. In Fig.
2b we plot $\log\mathcal{R}$ as a function of $q$ corresponding to the same
parameter values as in Fig. 2a. When the qubits are non-interacting (blue
line), the heat flow is always favored in the reverse thermal bias
configuration, which is testified by the fact that $\log\mathcal{R}<0$.
Moreover, in the simple case of uncoupled qubits, the rectification factor is
independent of the number of qubits and depends only on $g_{L,R}$ and
$T_{L,R}$ Senior _et al._ (2020). Instead, when the qubits are interacting
(orange and green lines), a remarkable increase of the rectification factor is
observed at $q=0$, with an enhancement for $\mathcal{R}$ of $\sim 230\%$ at
$\tilde{\gamma}=0.15$. Moreover, a change in the rectification direction takes
place and the system can also be tuned from forward ($\log\mathcal{R}>0$) to
backward ($\log\mathcal{R}<0$) rectification by sweeping $q$. Indeed, it turns
out that the non-trivial dependence of the thermal powers on $\tilde{\gamma}$
can eventually balance and reverse the rectification direction of the qubits
with respect to the non-interacting case. This feature makes the system fully
tunable in situ, both in the amplitude and in the direction of rectification. The
complete dependence of $\mathcal{R}(q,\tilde{\gamma})$ is displayed in the
colorplot in Fig. 2c. The dashed black line over the white area corresponds to
a region of absence of rectification, while the blue and red areas correspond
to regions of finite rectification, respectively with reverse and direct
rectification direction. For the considered system, we clearly observe that
one direction is more favorable than the other one (note also the scale on the
colorbar). We can further characterize the performance of the heat diode by
investigating its points of maximal rectification. In this regard, it is
useful to plot the quantities
$\mathcal{R}_{max}(\tilde{\gamma})=\max_{q}\mathcal{R}(q,\tilde{\gamma})$ and
$\mathcal{R}_{min}(\tilde{\gamma})=\min_{q}\mathcal{R}(q,\tilde{\gamma})$, which
are shown in Fig. 2d with continuous and dashed lines for different $T_{L}$ at
fixed $T_{R}$. The maximal rectification, independently of its direction, is
then given by
$\chi_{max}(\tilde{\gamma})=\max\\{\log\mathcal{R}_{max}(\tilde{\gamma}),-\log\mathcal{R}_{min}(\tilde{\gamma})\\},$
(12)
which is plotted as a highlighted curve in Fig. 2d. We observe that for the
range of parameters corresponding to $\mathcal{R}_{min}$, the diode is always
rectifying in the reverse thermal bias configuration
($\log\mathcal{R}_{min}<0$) and the magnitude of rectification is almost
unaffected by $\tilde{\gamma}$. By contrast, $\mathcal{R}_{max}$ increases with
$\tilde{\gamma}$, starting from a small reverse rectification
($\log\mathcal{R}_{max}<0$) at low $\tilde{\gamma}$ up to a direct
rectification ($\log\mathcal{R}_{max}>0$) at large $\tilde{\gamma}$.
Interestingly, the turning points $\tilde{\gamma}_{c}$ associated with the
switch of the rectification direction are followed by a remarkable enhancement
of $\mathcal{R}$, as $\tilde{\gamma}$ is increased. To characterize the nature
of the correlations giving rise to this evolution, we plot in Fig. 2e the
amount of entanglement between the two qubits at steady state, as quantified
by the entanglement of formation $\mathcal{E}(\rho)$. The latter is
defined as $\mathcal{E}(\rho)=-x\log_{2}(x)-(1-x)\log_{2}(1-x)$, where
$x=(1+\sqrt{1-C^{2}})/2$ and $C(\rho)$ is the concurrence associated with the
density matrix $\rho$ Wootters (1998). Remarkably, the critical values
$\tilde{\gamma}_{c}$ coincide with the values at which entanglement between
the qubits starts to appear; the entanglement reaches $\mathcal{E}=0.4$ at the
point of highest rectification shown in Fig. 2e (red line). Moreover, despite
the reduction of the overall temperature gradient $T_{L}-T_{R}$, both
$\chi_{max}$ and the amount of entanglement $\mathcal{E}$ similarly increase
for low values of $T_{L}$ as $\tilde{\gamma}$ increases.
For the regime of parameters where $\chi_{max}$ is optimized, the quantum
correlations of the system are strong enough to bring the qubits into an
entangled state, making it possible to envision applications also for quantum
information purposes. Although the features shared by rectification and
entanglement hint at a deeper connection between the two, establishing an
analytical one-to-one correspondence is beyond the scope of our analysis
Khandelwal _et al._ (2020), but might be of interest for further
investigations.
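Both Eq. (12) and the entanglement of formation used above lend themselves to compact numerical helpers. The sketch below assumes a concurrence value is already available (the computation of $C(\rho)$ from the density matrix is omitted), and the $\mathcal{R}(q)$ profile is invented for illustration:

```python
import numpy as np

def chi_max(R_of_q):
    # Maximal rectification chi_max = max{log R_max, -log R_min}
    # over a sweep of the flux q (Eq. 12).
    R = np.asarray(R_of_q)
    return max(np.log(R.max()), -np.log(R.min()))

def entanglement_of_formation(C):
    # E = -x log2(x) - (1-x) log2(1-x) with x = (1 + sqrt(1 - C^2))/2,
    # where C is the concurrence (Wootters, 1998).
    x = (1.0 + np.sqrt(1.0 - C**2)) / 2.0
    if x == 1.0:  # C = 0: separable state, no entanglement
        return 0.0
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

R_of_q = np.array([0.8, 0.9, 1.2, 2.0, 3.5])   # invented R(q) profile
print(chi_max(R_of_q))                  # log(3.5): direct rectification dominates
print(entanglement_of_formation(1.0))   # 1.0 for a maximally entangled state
```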
Figure 3: a) Dependence of the rectification ratio $\mathcal{R}$ on the left
and right bath temperatures $T_{L,R}$ for $g_{L}=0.75$ and $g_{R}=0.25$. The
blue cut corresponds to the axis $T_{L}=T_{R}$, while the red one corresponds
to fixed left bath temperature $k_{B}T_{L}=\epsilon_{0}/5$. b) The two cuts
showed in a) displays the variation of $\mathcal{R}$ as a function of the
right temperature $T_{R}$. The green circle corresponds to the value
$k_{B}T_{R}=\epsilon_{0}/20$ employed in Fig. 2. The dashed grey lines
corresponds to lower values of interaction $\tilde{\gamma}=\\{0,0.1\\}$ at
$q=0$. c) Dependence of $\mathcal{R}$ on the qubit-baths coupling strengths
$g_{L,R}$. The blue line corresponds to the symmetric case of identical
coupling $g_{L}=g_{R}$, while the red line to the condition of fixed
$g_{L}=0.75$. d) The two cuts displayed in c) showing the dependence of
$\mathcal{R}$ on $g_{R}$. The green circle corresponds to the condition
$g_{R}=0.25$ presented in Fig. 2. The dashed grey lines correspond to lower
interaction values $\tilde{\gamma}=\\{0,0.1\\}$ at $q=0$. In all plots,
$\epsilon_{0}=1$, $\Delta=0.1$, $\hbar\omega_{LC}=10\epsilon_{0}$,
$Q_{L}=Q_{R}=10$, $R_{L}=R_{R}=1$ $\Omega$ are employed.
In the following we characterize the performance of the heat diode depending
on the temperatures $T_{L,R}$ and the coupling strengths $g_{L,R}$. In
particular, Fig. 3a shows the dependence of $\log\mathcal{R}$ on the left and
right bath temperatures at the resonance point ($q=0$, $\tilde{\gamma}=0.15$).
The rectification ratio is clearly symmetric with respect to the $T_{L}=T_{R}$
axis (blue line), along which, trivially, no rectification is observed. Higher
values of $\mathcal{R}$ are obtained in the bottom-left corner of the
colorplot, corresponding to lower bath temperatures, in agreement with the
considerations made above regarding Fig. 2d. The red cut,
corresponding to the condition of fixed left bath temperature $T_{L}$ and
variable right bath temperature $T_{R}$, is plotted in Fig. 3b. Notably, the
condition $\log\mathcal{R}=0$, signalling the absence of rectification, is
achieved not only in the absence of a temperature gradient $(T_{L}=T_{R})$,
but also for another value of the temperature bias. For this value, the
asymmetry of the bath temperatures compensates for the asymmetric qubit-bath
coupling, resulting in an overall absence of preferential heat flow. This only holds when
the qubits are interacting ($\tilde{\gamma}\neq 0$) as displayed by the
behavior of $\mathcal{R}$ for $\tilde{\gamma}=\\{0,0.1\\}$ with dashed grey
lines. Figure 3c displays the full dependence of $\mathcal{R}$ on the qubit-
bath coupling strengths $g_{L,R}$ at ($q=0$, $\tilde{\gamma}=0.15$). The blue
line, corresponding to symmetric coupling $g_{L}=g_{R}$, highlights the need
for a structural asymmetry between the qubits and baths in order to see any
rectification effect Segal and Nitzan (2005); Ruokola, Ojanen, and Jauho
(2009). The behavior of $\mathcal{R}$ for fixed left bath coupling
$g_{L}=0.75$ is shown in Fig. 3d (red line). The rectification is remarkably
sensitive to the coupling strengths when the system approaches the resonance
condition at $\tilde{\gamma}=0.15$. In this case, exceptionally high values of
rectification can be achieved with an enhancement up to $\sim 230\%$ for
$g_{R}=0.25$. Such coupling strengths can be achieved with a proper design of
the mutual inductances between qubits and thermal baths. For instance, by
assuming a persistent current of $I_{p}=200$ nA and a bath resistance
$R_{B}=1\Omega$, values of $g_{B}\sim 0.25-0.75$ can be easily achieved for
$M_{B}=20-45$ pH.
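As a sanity check of Eq. (11) with the quoted fabrication parameters (the value of $\hbar$ below is the CODATA one; the resulting numbers are of the same order as, though not identical to, the $g_{L}=0.75$, $g_{R}=0.25$ used in the simulations):

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J s

def coupling_g(M_B, I_p=200e-9, R_B=1.0):
    # Dimensionless qubit-bath coupling g_B = M_B * I_p / sqrt(hbar * R_B), Eq. (11).
    return M_B * I_p / np.sqrt(HBAR * R_B)

for M_B in (20e-12, 45e-12):   # mutual inductances of 20 and 45 pH
    print(f"M_B = {M_B * 1e12:.0f} pH  ->  g_B = {coupling_g(M_B):.2f}")
# g_B stays below unity over this range, consistent with the weak-coupling regime.
```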
## V Conclusions
In conclusion, we have investigated the rectification properties of a system
composed of two interacting flux qubits asymmetrically coupled to two $RLC$
resonators acting as thermal baths. The system behaves as an efficient
photonic heat diode in which rectification of heat currents between the two
thermal environments takes place. We exploit quantum correlations between the
two qubits to enhance the rectification factor, which would otherwise be
constrained by the coupling to the baths and by their temperatures. Remarkably
high values of rectification ratio up to $\mathcal{R}\sim 3.5$ can be obtained
for realistic system parameters, with an enhancement up to $\sim 230\%$
compared to the non-interacting case. The system features the possibility of
manipulating both the rectification amplitude and direction, effectively
allowing one to favor or suppress heat flow to a chosen bath. Standard
nanofabrication techniques can be employed for the experimental realization of
similar devices. Our analysis can be easily adapted to other kinds of
superconducting qubits, different coupling schemes, or an increased number of
qubits.
## VI Acknowledgments
We acknowledge the EU’s Horizon 2020 research and innovation program under
Grant Agreement No. 800923 (SUPERTED), and the European Research Council under
Grant Agreement No. 899315-TERASEC for partial financial support. G.H.
acknowledges funding from the Swiss National Science Foundation through the
starting grant PRIMA PR00P2 179748 and the NCCR QSIT (Quantum Science and
Technology).
## References
* Pekola (2015) J. P. Pekola, “Towards quantum thermodynamics in electronic circuits,” Nature Physics 11, 118–123 (2015).
* Anders and Esposito (2017) J. Anders and M. Esposito, “Focus on quantum thermodynamics,” New Journal of Physics 19, 010201 (2017).
* Fornieri and Giazotto (2017) A. Fornieri and F. Giazotto, “Towards phase-coherent caloritronics in superconducting circuits,” Nature Nanotechnology 12, 944–952 (2017), arXiv:1610.01013 .
* Binder _et al._ (2018) F. Binder, L. A. Correa, C. Gogolin, J. Anders, and G. Adesso, eds., _Thermodynamics in the Quantum Regime: Fundamental Aspects and New Directions_, Fundamental Theories of Physics (Springer International Publishing, 2018).
* Vischi _et al._ (2019) F. Vischi, M. Carrega, P. Virtanen, E. Strambini, A. Braggio, and F. Giazotto, “Coherent Josephson thermodynamic cycles,” Scientific Reports 9, 3238 (2019), arXiv:1806.01568 .
* Martinez-Perez and Giazotto (2013a) M. J. Martinez-Perez and F. Giazotto, “Fully-Balanced Heat Interferometer,” Applied Physics Letters 102, 092602 (2013a), arXiv:1210.7187 .
* Guarcello _et al._ (2017) C. Guarcello, P. Solinas, M. Di Ventra, and F. Giazotto, “Hysteretic superconducting heat-flux quantum modulator,” Physical Review Applied 7 (2017), 10.1103/PhysRevApplied.7.044021, arXiv:1701.06602 .
* Hwang, Giazotto, and Sothmann (2018) S.-Y. Hwang, F. Giazotto, and B. Sothmann, “Phase-coherent heat circulator based on multi-terminal Josephson junctions,” Physical Review Applied 10 (2018), 10.1103/PhysRevApplied.10.044062, arXiv:1808.04606 .
* Guarcello _et al._ (2018) C. Guarcello, P. Solinas, A. Braggio, and F. Giazotto, “Phase-coherent solitonic Josephson heat oscillator,” Scientific Reports 8 (2018), 10.1038/s41598-018-30268-1, arXiv:1803.02588 .
* Niskanen, Nakamura, and Pekola (2007) A. O. Niskanen, Y. Nakamura, and J. P. Pekola, “Information entropic superconducting microcooler,” Physical Review B 76, 174523 (2007).
* Solinas _et al._ (2012) P. Solinas, M. Möttönen, J. Salmilehto, and J. P. Pekola, “Cooper-pair current in the presence of flux noise,” Physical Review B 85, 024527 (2012).
* Brunner _et al._ (2014) N. Brunner, M. Huber, N. Linden, S. Popescu, R. Silva, and P. Skrzypczyk, “Entanglement enhances cooling in microscopic quantum refrigerators,” Physical Review E 89, 032115 (2014).
* Solinas, Bosisio, and Giazotto (2016) P. Solinas, R. Bosisio, and F. Giazotto, “A Microwave Josephson Refrigerator,” Physical Review B 93 (2016), 10.1103/PhysRevB.93.224521, arXiv:1605.05884 .
* Ojanen and Jauho (2008) T. Ojanen and A.-P. Jauho, “Mesoscopic Photon Heat Transistor,” Physical Review Letters 100, 155902 (2008).
* Sothmann, Giazotto, and Hankiewicz (2017) B. Sothmann, F. Giazotto, and E. M. Hankiewicz, “High efficiency thermal switch based on topological Josephson junctions,” New Journal of Physics 19, 023056 (2017), arXiv:1610.06099 .
* Karimi and Pekola (2017) B. Karimi and J. P. Pekola, “Correlated vs. uncorrelated noise acting on a quantum refrigerator,” Physical Review B 96, 115408 (2017), arXiv:1703.10507 .
* Ronzani _et al._ (2018) A. Ronzani, B. Karimi, J. Senior, Y.-C. Chang, J. T. Peltonen, C. Chen, and J. P. Pekola, “Tunable photonic heat transport in a quantum heat valve,” Nature Physics 14, 991–995 (2018), arXiv:1801.09312 .
* Dutta _et al._ (2020) B. Dutta, D. Majidi, N. W. Talarico, N. Lo Gullo, H. Courtois, and C. B. Winkelmann, “Single-Quantum-Dot Heat Valve,” Physical Review Letters 125, 237701 (2020).
* Hofer and Sothmann (2015) P. P. Hofer and B. Sothmann, “Quantum heat engines based on electronic Mach-Zehnder interferometers,” Physical Review B 91, 195406 (2015).
* Marchegiani _et al._ (2016) G. Marchegiani, P. Virtanen, F. Giazotto, and M. Campisi, “Self-Oscillating Josephson Quantum Heat Engine,” Physical Review Applied 6, 054014 (2016), arXiv:1607.02850 .
* Samuelsson, Kheradsoud, and Sothmann (2017) P. Samuelsson, S. Kheradsoud, and B. Sothmann, “Optimal Quantum Interference Thermoelectric Heat Engine with Edge States,” Physical Review Letters 118, 256801 (2017).
* Haack and Giazotto (2019) G. Haack and F. Giazotto, “Efficient and tunable Aharonov-Bohm quantum heat engine,” Physical Review B 100, 235442 (2019).
* Erdman _et al._ (2019) P. A. Erdman, V. Cavina, R. Fazio, F. Taddei, and V. Giovannetti, “Maximum power and corresponding efficiency for two-level heat engines and refrigerators: Optimality of fast cycles,” New Journal of Physics 21, 103049 (2019).
* Scharf _et al._ (2020) B. Scharf, A. Braggio, E. Strambini, F. Giazotto, and E. M. Hankiewicz, “Topological josephson heat engine,” Communications Physics 3, 1–6 (2020).
* Marchegiani, Braggio, and Giazotto (2020) G. Marchegiani, A. Braggio, and F. Giazotto, “Superconducting nonlinear thermoelectric heat engine,” Physical Review B 101, 214509 (2020).
* Buffoni and Campisi (2020) L. Buffoni and M. Campisi, “Thermodynamics of a quantum annealer,” Quantum Science and Technology 5, 035013 (2020).
* Brask _et al._ (2015) J. B. Brask, G. Haack, N. Brunner, and M. Huber, “Autonomous quantum thermal machine for generating steady-state entanglement,” New Journal of Physics 17, 113029 (2015).
* Khandelwal _et al._ (2020) S. Khandelwal, N. Palazzo, N. Brunner, and G. Haack, “Critical heat current for operating an entanglement engine,” New Journal of Physics 22, 073039 (2020).
* Aguilar, Freitas, and Paz (2020) M. Aguilar, N. Freitas, and J. P. Paz, “Entanglement generation in quantum thermal machines,” Physical Review A 102, 062422 (2020), arXiv:2010.05885 .
* Giazotto and Bergeret (2013) F. Giazotto and F. S. Bergeret, “Thermal rectification of electrons in hybrid normal metal-superconductor nanojunctions,” Applied Physics Letters 103, 242602 (2013), arXiv:1310.3923 .
* Martinez-Perez and Giazotto (2013b) M. J. Martinez-Perez and F. Giazotto, “Efficient phase-tunable Josephson thermal rectifier,” Applied Physics Letters 102, 182602 (2013b), arXiv:1304.3672 .
* Fornieri, Martinez-Perez, and Giazotto (2014) A. Fornieri, M. J. Martinez-Perez, and F. Giazotto, “A normal metal tunnel-junction heat diode,” Applied Physics Letters 104, 183108 (2014), arXiv:1404.2834 .
* Martínez-Pérez, Fornieri, and Giazotto (2015) M. J. Martínez-Pérez, A. Fornieri, and F. Giazotto, “Rectification of electronic heat current by a hybrid thermal diode,” Nature Nanotechnology 10, 303–307 (2015), arXiv:1403.3052 .
* Bours _et al._ (2019) L. Bours, B. Sothmann, M. Carrega, E. Strambini, A. Braggio, E. M. Hankiewicz, L. W. Molenkamp, and F. Giazotto, “Phase-tunable thermal rectification in the topological SQUIPT,” Physical Review Applied 11, 044073 (2019), arXiv:1811.02969 .
* Giazotto and Bergeret (2020) F. Giazotto and F. S. Bergeret, “Very large thermal rectification in ferromagnetic insulator-based superconducting tunnel junctions,” Applied Physics Letters 116, 192601 (2020), arXiv:2004.03620 .
* Schmidt, Schoelkopf, and Cleland (2004) D. R. Schmidt, R. J. Schoelkopf, and A. N. Cleland, “Photon-Mediated Thermal Relaxation of Electrons in Nanostructures,” Physical Review Letters 93, 045901 (2004).
* Meschke, Guichard, and Pekola (2006) M. Meschke, W. Guichard, and J. P. Pekola, “Single-mode heat conduction by photons,” Nature 444, 187–190 (2006).
* Ruokola, Ojanen, and Jauho (2009) T. Ruokola, T. Ojanen, and A.-P. Jauho, “Thermal rectification in nonlinear quantum circuits,” Physical Review B 79, 144306 (2009).
* Marchegiani, Braggio, and Giazotto (2021) G. Marchegiani, A. Braggio, and F. Giazotto, “Highly efficient phase-tunable photonic thermal diode,” Applied Physics Letters 118, 022602 (2021), arXiv:2011.02777 .
* Bosisio _et al._ (2016) R. Bosisio, P. Solinas, A. Braggio, and F. Giazotto, “Photonic heat conduction in josephson-coupled bardeen-cooper-schrieffer superconductors,” Physical Review B 93, 144512 (2016).
* Campisi _et al._ (2013) M. Campisi, R. Blattmann, S. Kohler, D. Zueco, and P. Hänggi, “Employing circuit QED to measure non-equilibrium work fluctuations,” New Journal of Physics 15, 105028 (2013).
* Pekola and Karimi (2020) J. P. Pekola and B. Karimi, “Qubit decay in circuit quantum thermodynamics,” arXiv:2010.11122 [cond-mat, physics:quant-ph] (2020), arXiv:2010.11122 [cond-mat, physics:quant-ph] .
* Partanen _et al._ (2016) M. Partanen, K. Y. Tan, J. Govenius, R. E. Lake, M. K. Mäkelä, T. Tanttu, and M. Möttönen, “Quantum-limited heat conduction over macroscopic distances,” Nature Physics 12, 460–464 (2016).
* Senior _et al._ (2020) J. Senior, A. Gubaydullin, B. Karimi, J. T. Peltonen, J. Ankerhold, and J. P. Pekola, “Heat rectification via a superconducting artificial atom,” Communications Physics 3, 40 (2020).
* Campisi, Pekola, and Fazio (2015) M. Campisi, J. Pekola, and R. Fazio, “Nonequilibrium fluctuations in quantum heat engines: Theory, example, and possible solid state experiments,” New Journal of Physics 17, 035012 (2015).
* Jamshidi Farsani and Fazio (2019) M. Jamshidi Farsani and R. Fazio, “Quantum heat switch with multiple qubits,” Physics Letters A 383, 1722–1727 (2019).
* Clivaz _et al._ (2019) F. Clivaz, R. Silva, G. Haack, J. B. Brask, N. Brunner, and M. Huber, “Unifying Paradigms of Quantum Refrigeration: A Universal and Attainable Bound on Cooling,” Physical Review Letters 123, 170605 (2019).
* Tavakoli _et al._ (2020) A. Tavakoli, G. Haack, N. Brunner, and J. B. Brask, “Autonomous multipartite entanglement engines,” Physical Review A 101, 012315 (2020).
* Rignon-Bret _et al._ (2020) A. Rignon-Bret, G. Guarnieri, J. Goold, and M. T. Mitchison, “Thermodynamics of precision in quantum nano-machines,” arXiv:2009.11303 [cond-mat, physics:quant-ph] (2020), arXiv:2009.11303 [cond-mat, physics:quant-ph] .
* Schwarz _et al._ (2013) M. J. Schwarz, J. Goetz, Z. Jiang, T. Niemczyk, F. Deppe, A. Marx, and R. Gross, “Gradiometric flux qubits with tunable gap,” New Journal of Physics 15, 045001 (2013), arXiv:1210.3982 .
* Orlando _et al._ (1999) T. P. Orlando, J. E. Mooij, L. Tian, C. H. van der Wal, L. S. Levitov, S. Lloyd, and J. J. Mazo, “Superconducting persistent-current qubit,” Physical Review B 60, 15398–15413 (1999).
* Storcz and Wilhelm (2003) M. J. Storcz and F. K. Wilhelm, “Decoherence and gate performance of coupled solid-state qubits,” Physical Review A 67, 042319 (2003).
* Wilhelm _et al._ (2003) F. K. Wilhelm, M. J. Storcz, C. H. van der Wal, C. J. P. M. Harmans, and J. E. Mooij, “Decoherence of Flux Qubits Coupled to Electronic Circuits,” arXiv:cond-mat/0305349 43, 763–780 (2003), arXiv:cond-mat/0305349 .
* Martinis _et al._ (2003) J. M. Martinis, S. Nam, J. Aumentado, K. M. Lang, and C. Urbina, “Decoherence of a superconducting qubit due to bias noise,” Physical Review B 67, 094510 (2003).
* Devoret (1997) M. H. Devoret, “Quantum Fluctuations in Electrical Circuits,” 351 (1997).
* Breuer and Petruccione (2007) H.-P. Breuer and F. Petruccione, _The Theory of Open Quantum Systems_ (Oxford University Press, 2007).
* Blum (2012) K. Blum, _Density Matrix Theory and Applications_, 3rd ed., Springer Series on Atomic, Optical, and Plasma Physics (Springer-Verlag, Berlin Heidelberg, 2012).
* Aurell and Montana (2019) E. Aurell and F. Montana, “Thermal power of heat flow through a qubit,” Physical Review E 99, 042130 (2019), arXiv:1901.05896 .
* Segal and Nitzan (2005) D. Segal and A. Nitzan, “Spin-Boson Thermal Rectifier,” Physical Review Letters 94, 034301 (2005).
* Wootters (1998) W. K. Wootters, “Entanglement of Formation of an Arbitrary State of Two Qubits,” Physical Review Letters 80, 2245–2248 (1998).
* Hofer _et al._ (2017) P. P. Hofer, M. Perarnau-Llobet, L. D. M. Miranda, G. Haack, R. Silva, J. B. Brask, and N. Brunner, “Markovian master equations for quantum thermal machines: Local versus global approach,” New Journal of Physics 19, 123037 (2017).
* Mitchison and Plenio (2018) M. T. Mitchison and M. B. Plenio, “Non-additive dissipation in open quantum networks out of equilibrium,” New Journal of Physics 20, 033005 (2018).
* Cattaneo _et al._ (2020) M. Cattaneo, G. L. Giorgi, S. Maniscalco, and R. Zambrini, “Local vs global master equation with common and separate baths: Superiority of the global approach in partial secular approximation,” arXiv:1906.08893 [quant-ph] (2020), 10.1088/1367-2630/ab54ac, arXiv:1906.08893 [quant-ph] .
# A Bayesian approach for estimation of weight matrices in spatial
autoregressive models††thanks: This working paper is an earlier draft of an
article published by Taylor & Francis in Spatial Economic Analysis on 22nd
July 2022, available at:
https://www.tandfonline.com/doi/full/10.1080/17421772.2022.2095426.
Tamás Krisztin
International Institute for Applied Systems Analysis (IIASA)
and
Philipp Piribauer
Austrian Institute of Economic Research (WIFO) Tamás Krisztin was supported by
funds of the Austrian National Bank: 18690.Philipp Piribauer was supported by
the Austrian Science Fund (FWF): ZK 35.
###### Abstract
We develop a Bayesian approach to estimate weight matrices in spatial
autoregressive (or spatial lag) models. Datasets in regional economic
literature are typically characterized by a limited number of time periods $T$
relative to spatial units $N$. When the spatial weight matrix is subject to
estimation severe problems of over-parametrization are likely. To make
estimation feasible, our approach focusses on spatial weight matrices which
are binary prior to row-standardization. We discuss the use of hierarchical
priors which impose sparsity in the spatial weight matrix. Monte Carlo
simulations show that these priors perform very well when the number of
unknown parameters is large relative to the number of observations. The virtues of our
approach are demonstrated using global data from the early phase of the
COVID-19 pandemic.
Keywords: Estimation of spatial weight matrix, spatial econometric model,
Bayesian MCMC estimation, Monte Carlo simulations, COVID-19 pandemic
JEL Codes: C11, C21, C23, C51
## 1 Introduction
Spatial econometrics deals with the study of cross-sectional dependence and
interactions among (spatial) observations. A particularly popular spatial
econometric model is the spatial autoregressive (or spatial lag)
specification, where spatial interdependence between observations is governed
by a so-called spatial weight matrix. The spatial weight matrix is typically
assumed non-negative, row-standardized and exogenously given, with spatial
weights based on some concept of neighbourhood. Geographic neighbourhood is
often preferred due to exogeneity assumptions. However, when relying on
geographic information, several competing approaches exist for constructing
the weight matrix (for a thorough discussion, see LeSage and Pace 2009).
Recently, Kelejian and Piras (2014), Qu and Lee (2015), Han and Lee (2016),
and Hsieh and Lee (2016) use alternative measures, such as (socio-)economic
proximity. Another strand of the literature focusses on the uncertainty
associated with the choice of neighbourhood structures by selecting or
combining alternative weight matrices (see, for example, Debarsy and LeSage
2018 and Piribauer and Cuaresma 2016).
Since direct estimation of a spatial weight matrix requires estimating at
least $(N-1)N$ parameters (ignoring the other model parameters), only few
approaches target direct estimation of spatial weight matrices. Recently,
Ahrens and Bhattacharjee (2015) and Lam and Souza (2020) tackle this problem
through LASSO-based approaches (Tibshirani 1996), which involve (a priori)
expert knowledge about the interactions between spatial units, while allowing
the final estimates of the spatial weights to deviate slightly from it.
(Ahrens and Bhattacharjee (2015) consider the case of sparsity in the spatial
weights by employing shrinkage towards the zero matrix.) However, for
regional economic panels, where the time dimension $T$ is often limited
relative to the number of spatial observations $N$, estimation results in a
deleterious proliferation of the number of parameters.
In this paper we describe a novel and flexible Bayesian approach for
estimation of spatial weight matrices. Our definition of spatial weight
matrices fulfils the typical assumptions employed in the vast majority of the
spatial econometric literature. The resulting spatial weight matrices are
assumed non-negative, and specific requirements for identification of the
parameters can be easily implemented in a Markov-chain Monte Carlo (MCMC)
sampling strategy. Although our primary focus is on row-standardized spatial
weight matrices, weights without row-standardization are also implementable.
To make our estimation approach applicable to spatial panels where the number
of time periods $T$ is limited as compared to the number of spatial units $N$,
we focus on spatial weight matrices which are binary prior to potential row-
standardization.
In this paper we primarily focus on scenarios where no a priori information on
the spatial structure is available. However, we also discuss how a priori
spatial information can be implemented in a very simple and transparent way.
For cases where the number of unknown parameters is large relative to the
number of observations, we discuss hierarchical prior setups which impose
sparsity in the weight matrix. In a Monte Carlo study, we show that these
sparsity priors perform particularly well when the number of spatial
observations $N$ is large relative to the time periods $T$.
We show that our approach can be implemented in an efficient Gibbs sampling
algorithm, which implies that the estimation strategy can be easily extended
to other spatial econometric specifications. Among several others, such
extensions include shrinkage estimation to avoid overparameterization
(Piribauer and Cuaresma 2016), more flexible specifications of the innovation
process (LeSage 1997), controlling for unobserved spatial heterogeneity
(Cornwall and Parent 2017; Piribauer 2016), or allowing for non-linearity in
the slope parameters (Basile 2008; Krisztin 2017). It is moreover worth noting
that the proposed approach can be easily adapted to matrix exponential spatial
specifications (LeSage and Pace 2007), spatial error specifications (see,
LeSage and Pace 2009), or local spillover models (Vega and Elhorst 2015).
The rest of the paper is organized as follows: the next section outlines the
panel version of the considered spatial lag model. Section 3 discusses the
Bayesian estimation approach of the spatial weights along with several
potential prior setups. Section 4 presents the Bayesian MCMC estimation
algorithm and also discusses how to efficiently deal with the computational
difficulties when updating the spatial weights in the MCMC sampler. Section 5
assesses the accuracy of the sampling procedure via a Monte Carlo simulation
study. Section 6 illustrates our approach using data on global infection rates
of the very first phase of the recent COVID-19 pandemic. The final section
concludes.
## 2 Econometric framework
We consider a panel version of a global spillover spatial autoregressive model
(SAR) of the form (we also consider specifications with a spatial lag of the
temporally lagged dependent variable; sampling strategies for these cases are
presented in the appendix):
$\boldsymbol{y}_{t}=\rho\boldsymbol{Wy}_{t}+\boldsymbol{\mu}+\tau_{t}+\boldsymbol{Z}_{t}\boldsymbol{\beta}_{0}+\boldsymbol{\varepsilon}_{t},\hskip
56.9055ptt=1,...,T$ (1)
where $\boldsymbol{y}_{t}$ denotes an $N\times 1$ vector of observations on
the dependent variable measured at period $t$. $\boldsymbol{\mu}$ and
$\tau_{t}$ represent parameters associated with fixed effects for the $N$
spatial units and $T$ time periods, respectively. $\boldsymbol{Z}_{t}$ is an
$N\times q_{0}$ full rank matrix of explanatory variables, with corresponding
$q_{0}\times 1$ vector of slope parameters $\boldsymbol{\beta}_{0}$.
$\boldsymbol{\varepsilon}_{t}$ is a standard $N\times 1$ disturbance term
$\boldsymbol{\varepsilon}_{t}\sim\mathcal{N}(\boldsymbol{0},\sigma^{2}\boldsymbol{I}_{N})$.
The $N\times N$ matrix $\boldsymbol{W}$ denotes a spatial weight matrix and
$\rho$ is a (scalar) spatial dependence parameter. $\boldsymbol{W}$ is non-negative, with $w_{ij}>0$ if observation $j$ is considered a neighbour of $i$, and $w_{ij}=0$ otherwise. A further vital assumption is that $w_{ii}=0$, so that no observation is treated as a neighbour to itself. A frequently made assumption amongst practitioners is that
$\boldsymbol{W}$ is row-stochastic with rows summing to unity. In this paper,
we mainly present results relating to row-stochastic weight matrices. However,
as the decision on row-standardizing $\boldsymbol{W}$ depends on the empirical
application, it is worth noting that the proposed approach may be easily
adapted to problems without row-standardization of
$\boldsymbol{W}$. (Thorough discussions of the implications of row-standardization are provided by Plümper and Neumayer (2010) and Liu et al. (2014).)
The reduced form of the SAR model is given by:
$\boldsymbol{y}_{t}=(\boldsymbol{I}_{N}-\rho\boldsymbol{W})^{-1}(\boldsymbol{\mu}+\tau_{t}+\boldsymbol{Z}_{t}\boldsymbol{\beta}_{0}+\boldsymbol{\varepsilon}_{t}),$
(2)
where
$(\boldsymbol{I}_{N}-\rho\boldsymbol{W})^{-1}=\sum_{r=0}^{\infty}\rho^{r}\boldsymbol{W}^{r}$
is a so-called spatial multiplier matrix. To ensure that
($\boldsymbol{I}_{N}-\rho\boldsymbol{W}$) is invertible, appropriate stability
conditions need to be imposed. For row-stochastic spatial weight matrices, a
sufficient stability condition for the spatial autoregressive parameter often
employed is $\rho\in(-1,1)$ (see, for example, LeSage and Pace 2009).
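As a quick numerical check of this condition (an illustrative sketch of ours, not taken from the paper), the inverse of $(\boldsymbol{I}_{N}-\rho\boldsymbol{W})$ for a row-stochastic $\boldsymbol{W}$ and $|\rho|<1$ can be compared against a truncated Neumann series:

```python
import numpy as np

# Numerical check: for a (row-sub)stochastic W and |rho| < 1, the spatial
# multiplier (I - rho*W)^{-1} equals the Neumann series sum_{r>=0} rho^r W^r.
rng = np.random.default_rng(42)
N, rho = 6, 0.6

# Random binary adjacency matrix with zero diagonal, then row-standardize
omega = (rng.random((N, N)) < 0.4).astype(float)
np.fill_diagonal(omega, 0.0)
row_sums = omega.sum(axis=1, keepdims=True)
W = np.divide(omega, row_sums, out=np.zeros_like(omega), where=row_sums > 0)

multiplier = np.linalg.inv(np.eye(N) - rho * W)
# 200 terms suffice: with |rho| = 0.6, the remainder is of order 0.6^200
series = sum(rho**r * np.linalg.matrix_power(W, r) for r in range(200))
```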
The elements of $\boldsymbol{W}$ are typically treated as known, and the spatial econometric literature offers various ways to construct such a spatial weight matrix. In this study, we focus on estimating weight matrices which are binary prior to row-standardization.
We therefore assume that the typical element of our spatial weight matrix can
be obtained from an unknown $N\times N$ spatial adjacency matrix
$\boldsymbol{\Omega}$ (with typical element $\omega_{ij}$). We therefore define $\boldsymbol{W}=f(\boldsymbol{\Omega})$, where $f(\cdot)$ denotes the row-standardization function (note that Eq. (3) implies some observations may have zero neighbours, although priors on the number of neighbours can easily be elicited to rule out such situations; moreover, $f(\cdot)$ may simply be dropped when considering models without row-standardization of $\boldsymbol{W}$):
$w_{ij}=\begin{cases}\omega_{ij}/\sum_{l=1}^{N}\omega_{il}&\text{if }\sum_{l=1}^{N}\omega_{il}>0\\ 0&\text{otherwise}.\end{cases}$ (3)
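For illustration, the transformation in Eq. (3) takes only a few lines (a sketch of our own; the function name `row_standardize` is not from the paper):

```python
import numpy as np

def row_standardize(omega):
    """Row-standardization f(.) of Eq. (3): divide each row of the binary
    adjacency matrix by its row sum; rows without neighbours stay zero."""
    omega = np.asarray(omega, dtype=float)
    row_sums = omega.sum(axis=1, keepdims=True)
    return np.divide(omega, row_sums, out=np.zeros_like(omega),
                     where=row_sums > 0)

# Small example: the middle unit has two neighbours, the last has none
omega = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 0, 0]])
W = row_standardize(omega)
```

Rows with at least one neighbour sum to unity after the transformation, while isolated units keep an all-zero row, exactly as in the second case of Eq. (3).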
The elements of the adjacency matrix $\boldsymbol{\Omega}$ are assumed as
unknown binary indicators, which are subject to estimation. It is worth noting
that the assumption of a binary $\boldsymbol{\Omega}$ covers a wide range of
specifications commonly used in the literature such as contiguity, distance
band, or nearest neighbours (see, for example, LeSage and Pace 2009).
To alleviate further notation, we collect the respective dummy variables
associated with the fixed effects along with the explanatory variables in a
$N\times q$ matrix $\boldsymbol{X}_{t}$ with corresponding $q\times 1$
parameter vector $\boldsymbol{\beta}$. Moreover, define
$\boldsymbol{Y}=\left[\boldsymbol{y}_{1}^{\prime},\dots,\boldsymbol{y}_{T}^{\prime}\right]^{\prime}$,
$\boldsymbol{X}=\left[\boldsymbol{X}_{1}^{\prime},\dots,\boldsymbol{X}_{T}^{\prime}\right]^{\prime}$,
$\boldsymbol{S}=\boldsymbol{I}_{T}\otimes(\boldsymbol{I}_{N}-\rho\boldsymbol{W})$,
and $\mathcal{D}=\\{\boldsymbol{Y},\boldsymbol{X}\\}$ denotes the data. The
Gaussian likelihood $p(\mathcal{D}|\bullet)$ is then given by:
$p(\mathcal{D}|\bullet)=\frac{1}{(2\pi\sigma^{2})^{NT}}|\boldsymbol{S}|\exp\left[-\frac{1}{2\sigma^{2}}(\boldsymbol{SY}-\boldsymbol{X\beta})^{\prime}(\boldsymbol{SY}-\boldsymbol{X\beta})\right].$
(4)
When the elements of the spatial weight matrix are subject to estimation, the
number of unknown parameters is likely much larger than the number of
observations. Since spatial economic panels often feature limited $T$ relative
to $N$, the proposed estimation approach has to address the issue of over-
parametrization. We discuss different ways to tackle this problem. First and
foremost, one may reduce the dimensionality of the problem by imposing a
priori information on spatial weights or assuming symmetry of the spatial
neighbourhood structure. Alternatively, we consider hierarchical prior setups
which impose sparsity in the weight matrix.
When estimating spatial weights in addition to the spatial and slope
parameters, identification issues are more complicated as compared to models
assuming exogenous spatial weights. We therefore follow De Paula et al.
(2019), who provide a thorough discussion on parameter identification for
rather general spatial autoregressive model specifications. As mentioned
before, we consider spatial weight matrices which are non-negative and
$w_{ii}=0$ for all $i$. Further standard assumptions include
$\sum_{j=1}^{N}|\rho w_{ij}|<1$ $\forall i$, $|\rho|<1$, and
$||\boldsymbol{W}||<C$ for some positive $C\in\mathbb{R}$, as well as
$\boldsymbol{\beta}_{0}\rho\neq 0$. As an additional identifying assumption,
it is important that the main diagonal elements of $\boldsymbol{W}^{2}$ are
not proportional to a vector of ones (the most obvious case where this assumption would be violated is a fully connected $\boldsymbol{W}$ with $w_{ij}=1/N$ for all $i\neq j$). Sufficient conditions for global
identification are fulfilled if we make the additional assumption of $\rho>0$
(see Corollary 3 in De Paula et al. 2019). Without this additional restriction
on $\rho$, De Paula et al. (2019) show that a strongly connected spatial
network for global identification is needed. Since strong a priori information
on the spatial weight matrix is often not available (or desired), we therefore
assume $\rho\in(0,1)$ and only consider positive spatial autocorrelation,
which is a typical assumption in empirical applications. (These assumptions can be checked during estimation via standard rejection sampling in the MCMC steps, which discards draws of parameter combinations that do not fulfil them; see, for example, LeSage and Pace 2009, or Koop 2003.)
## 3 Bayesian estimation of W
In this paper we use a Bayesian estimation approach to obtain estimates and
inference on the unknown quantities $\rho$, $\boldsymbol{\beta}$,
$\sigma^{2}$, as well as the elements of $\boldsymbol{\Omega}$. After
eliciting suitable priors for the unknown parameters, we employ a
computationally efficient MCMC algorithm.
Let $p(\omega_{ij}=1)$ denote the prior belief in including the $ij$th element
of the spatial weight matrix. Conversely, for a proper prior specification the
prior probability of exclusion is then simply given by
$p(\omega_{ij}=0)=1-p(\omega_{ij}=1)$. With $\boldsymbol{\Omega}_{-ij}$
denoting the elements of the neighbourhood matrix without $\omega_{ij}$, the
posterior probabilities of $\omega_{ij}=1$ and $\omega_{ij}=0$ conditional on
all other parameters are given by:
$\displaystyle\begin{aligned}
p(\omega_{ij}=1|\boldsymbol{\Omega}_{-ij},\boldsymbol{\beta},\sigma^{2},\rho,\mathcal{D})\propto
p(\omega_{ij}=1)|\boldsymbol{S}_{1}|\exp\left[-\frac{1}{2\sigma^{2}}(\boldsymbol{S}_{1}\boldsymbol{Y}-\boldsymbol{X\beta})^{\prime}(\boldsymbol{S}_{1}\boldsymbol{Y}-\boldsymbol{X\beta})\right]\phantom{,}\\\
p(\omega_{ij}=0|\boldsymbol{\Omega}_{-ij},\boldsymbol{\beta},\sigma^{2},\rho,\mathcal{D})\propto
p(\omega_{ij}=0)|\boldsymbol{S}_{0}|\exp\left[-\frac{1}{2\sigma^{2}}(\boldsymbol{S}_{0}\boldsymbol{Y}-\boldsymbol{X\beta})^{\prime}(\boldsymbol{S}_{0}\boldsymbol{Y}-\boldsymbol{X\beta})\right],\end{aligned}$
(5)
where $\boldsymbol{S}_{1}$ and $\boldsymbol{S}_{0}$ are given by
$\boldsymbol{S}$ through updating the spatial weight matrix $\boldsymbol{W}$
via setting $\omega_{ij}=1$ and $\omega_{ij}=0$, respectively. (To reduce the dimensionality of the parameter space, an interesting alternative is to assume a symmetric $\boldsymbol{\Omega}$, which halves the number of free elements in the spatial weight matrix; this assumption can be imposed by simply updating $\omega_{ij}$ and $\omega_{ji}$ simultaneously.) Using the law of total probability, it is straightforward to
show that the resulting conditional posterior for $\omega_{ij}$ is Bernoulli:
$\displaystyle
p(\omega_{ij}|\boldsymbol{\Omega}_{-ij},\boldsymbol{\beta},\sigma^{2},\rho,\mathcal{D})\sim\mathcal{BER}\left(\frac{\bar{p}^{(1)}_{ij}}{\bar{p}^{(0)}_{ij}+\bar{p}^{(1)}_{ij}}\right),$
(6)
with
$\bar{p}^{(1)}_{ij}=p(\omega_{ij}=1|\boldsymbol{\Omega}_{-ij},\boldsymbol{\beta},\sigma^{2},\rho,\mathcal{D})$
and
$\bar{p}^{(0)}_{ij}=p(\omega_{ij}=0|\boldsymbol{\Omega}_{-ij},\boldsymbol{\beta},\sigma^{2},\rho,\mathcal{D})$
given in Eq. (5). Since the conditional posterior follows a convenient and
well-known form, efficient Gibbs sampling can be employed.
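Since the unnormalized quantities in Eq. (5) can underflow for realistic sample sizes, the Bernoulli probability of Eq. (6) is best computed on the log scale. A minimal sketch (ours, not the paper's code):

```python
import numpy as np

def bernoulli_inclusion_prob(log_p1, log_p0):
    """Turn the two unnormalized log conditional posteriors of Eq. (5)
    into the Bernoulli inclusion probability of Eq. (6), subtracting the
    maximum first to avoid numerical underflow."""
    m = max(log_p1, log_p0)
    p1 = np.exp(log_p1 - m)
    p0 = np.exp(log_p0 - m)
    return p1 / (p0 + p1)

# Equal posterior mass gives probability 1/2, even at extreme magnitudes
p = bernoulli_inclusion_prob(-1e4, -1e4)
```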
A Bayesian estimation framework requires elicitation of a prior on
$\boldsymbol{\Omega}$. Obvious candidates are independent Bernoulli priors on
the unknown indicators $\omega_{ij}$:
$p(\omega_{ij})\sim\mathcal{BER}\left(\underline{p}_{ij}\right),$ (7)
where $\underline{p}_{ij}$ denotes the prior inclusion probability of
$\omega_{ij}$, $p(\omega_{ij}=1)=\underline{p}_{ij}$. Conversely, the prior
probability of exclusion then simply takes the form
$p(\omega_{ij}=0)=1-\underline{p}_{ij}$.
A natural prior choice would involve setting
$\underline{p}_{ij}=\underline{p}=1/2$ for $i\neq j$, and zero otherwise,
which implies that each off-diagonal element in $\boldsymbol{\Omega}$ has an
equal prior chance of being included. In many cases, however, a researcher has a priori information on the underlying structure of the spatial
weight matrix. The following stylized examples demonstrate how to incorporate
such information in a flexible and straightforward way.
Figure 1: Some stylized prior examples for $\boldsymbol{W}$ in a linear city
(A) Exogenous $\boldsymbol{W}$; (B) Fixed ($\underline{p}=1/2$); (C) Spatial prior; (D) Spatial prior: combining two $\boldsymbol{W}$'s.
Notes: Alternative prior setups for a linear city of $N=15$ spatial
observations. Case (A) shows a prior specification without any prior
uncertainty on the spatial links. This setup implies an exogenous
$\boldsymbol{W}$ and no estimation of the weights is involved. Case (B)
involves no spatial prior information and each element has a prior probability
of inclusion $\underline{p}_{ij}=1/2\,\forall i\neq j$. Case (C) shows
uncertainty of the linkages in $\boldsymbol{W}$ only within a certain spatial
domain. Case (D) is a stylized prior specification considering uncertainty
among two (or more) weight matrices, setting $\underline{p}_{ij}=1$ in regions where
the two matrices overlap.
Figure 1 illustrates the flexibility of prior elicitation for
$\boldsymbol{\Omega}$ in the case of a "linear city" with $N=15$ equidistant
regions. Case (A) in the figure shows a prior specification without any prior
uncertainty on the elements of $\boldsymbol{W}$ by setting
$\underline{p}_{ij}=1$ if $i$ and $j$ are considered as neighbours and zero
otherwise. In this case, no estimation on the spatial links is involved and
the model reduces to a standard SAR model with an exogenously given
$\boldsymbol{W}$ (in this example, a distance band specification).
Case (B) depicts the opposite case where no prior spatial information is
available. Specifically, this case considers full estimation of all $N^{2}-N$
potential links with respective prior inclusion probability
$\underline{p}_{ij}=1/2$ for $i\neq j$.
Subplots (C) and (D) in Figure 1 depict prior setups where a priori spatial
information is available to the researcher, but associated with uncertainty.
Case (C) illustrates a prior where the general spatial domain is assumed as
being a priori known, but uncertainty over specific linkages exists. In
empirical practice, spatial weight matrices based on geographic information are often viewed as preferable to those based on (socio-)economic data, since geography can more plausibly be assumed exogenous. The illustrated prior specification follows this idea while still allowing for uncertainty and flexibility in the spatial neighbourhood.
Recent contributions to the spatial econometric literature propose selecting
(Piribauer and Cuaresma 2016) or combining (Debarsy and LeSage 2018) multiple
exogenous spatial weight matrices. Case (D) follows a similar idea by
depicting a mixture of a distance band and a contiguity matrix (i.e.
neighbourhood if regions share a common border). The intersecting elements of the two spatial structures (resulting in a contiguity matrix) are assumed to be included by setting $\underline{p}_{ij}=1$.
### Hierarchical prior setups and sparsity
The prior structure in Eq. (7) involves _fixed_ inclusion probabilities
$\underline{p}$, which implies that the number of neighbours of observation
$i$ follows a Binomial distribution
$\sum_{l=1}^{N-1}\omega_{il}\sim\mathcal{BN}(N-1,\underline{p})$ with a prior
expected number of neighbours of $(N-1)\underline{p}$. However, such a prior
structure has the potential undesirable effect of promoting a relatively large
number of neighbours. For example, when $\underline{p}=1/2$, the prior
expected number of neighbours is $(N-1)/2$, since combinations of
$\omega_{ij}$ resulting in such a neighbourhood size are dominant in number.
To put more prior weight on parsimonious neighbourhood structures and
therefore promote sparsity in the adjacency matrix, one may explicitly account
for the number of linkages in each row of the adjacency matrix
$\boldsymbol{\omega}_{i}=\left[\omega_{i1},\dots,\omega_{iN}\right]^{\prime}$.
We consider a flexible prior structure on the number of neighbours
$\sum\boldsymbol{\omega}_{i}$ that corresponds to a beta-binomial distribution
$\mathcal{BB}(N-1,\underline{a}_{\omega},\underline{b}_{\omega})$ with two
prior hyperparameters $\underline{a}_{\omega},\underline{b}_{\omega}>0$. The
beta-binomial distribution is the result of treating the prior inclusion
probability $\underline{p}$ as _random_ (rather than being fixed) by placing a
hierarchical beta prior on it. For $\omega_{ij}$, the resulting prior can be
written as follows:
$p(\omega_{ij})\propto\Gamma\left(\underline{a}_{\omega}+\sum\boldsymbol{\omega}_{i}\right)\Gamma\left(\underline{b}_{\omega}+(N-1)-\sum\boldsymbol{\omega}_{i}\right),$
(8)
where $\Gamma(\cdot)$ denotes the Gamma function, and $\underline{a}_{\omega}$
and $\underline{b}_{\omega}$ are prior hyperparameters.
In the case of $\underline{a}_{\omega}=\underline{b}_{\omega}=1$, the prior
takes the form of a discrete uniform distribution over the number of
neighbours. By fixing $\underline{a}_{\omega}=1$, we follow Ley and Steel
(2009) and anchor the prior expected number of neighbours $\underline{m}$ via
$\underline{b}_{\omega}=[(N-1)-\underline{m}]/\underline{m}$.
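The implied prior over the number of neighbours can be verified numerically. The sketch below (our own illustration) evaluates the beta-binomial pmf, confirms the uniform special case $\underline{a}_{\omega}=\underline{b}_{\omega}=1$, and checks that the anchoring of $\underline{b}_{\omega}$ indeed fixes the prior mean at $\underline{m}$:

```python
from math import comb, exp, lgamma

def log_beta(x, y):
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def betabinom_pmf(k, n, a, b):
    """pmf of BB(n, a, b): the prior over the number of neighbours."""
    return comb(n, k) * exp(log_beta(a + k, b + n - k) - log_beta(a, b))

N = 15
# a = b = 1: discrete uniform prior over 0, ..., N-1 neighbours
uniform = [betabinom_pmf(k, N - 1, 1.0, 1.0) for k in range(N)]

# Anchoring: with a = 1 and b = ((N-1) - m)/m, the prior mean is m,
# since the mean of BB(n, a, b) is n*a/(a + b)
m = 2.0
b = ((N - 1) - m) / m
mean = sum(k * betabinom_pmf(k, N - 1, 1.0, b) for k in range(N))
```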
## 4 Bayesian MCMC estimation of the model
This section presents the Bayesian MCMC estimation algorithm for the proposed
modelling framework. Estimation is carried out using an efficient Gibbs
sampling scheme. The only exception is the sampling step for the spatial
(scalar) autoregressive parameter $\rho$, where we propose using a standard
griddy Gibbs step (a random walk Metropolis-Hastings step for $\rho$ might be employed as an alternative). The sampling scheme involves the following steps:
I. Set starting values for the parameters (e.g. by sampling from the prior distributions).

II. Sequentially update the parameters by sampling from the conditional posterior distributions presented in this section.

Step II is repeated $B$ times after discarding the first $B_{0}$ draws as burn-in.
### Sampling $\boldsymbol{\beta}$ and $\sigma^{2}$
For the slope parameters $\boldsymbol{\beta}$ and the error variance
$\sigma^{2}$ we use common Normal and inverted Gamma prior specifications,
respectively. Specifically,
$p(\boldsymbol{\beta})\sim\mathcal{N}(\boldsymbol{0},\underline{\boldsymbol{V}}_{\beta})$
and
$p(\sigma^{2})\sim\mathcal{IG}(\underline{a}_{\sigma^{2}},\underline{b}_{\sigma^{2}})$,
where $\underline{\boldsymbol{V}}_{\beta}$, $\underline{a}_{\sigma^{2}}$, and
$\underline{b}_{\sigma^{2}}$ denote prior hyperparameters.
The resulting conditional posterior distribution is Gaussian and of well-known
form (see, for example, LeSage and Pace 2009):
$\displaystyle p(\boldsymbol{\beta}|\sigma^{2},\rho,\boldsymbol{\Omega},\mathcal{D})\sim\mathcal{N}(\bar{\boldsymbol{b}}_{\beta},\bar{\boldsymbol{V}}_{\beta})$ (9)
$\displaystyle\bar{\boldsymbol{b}}_{\beta}=\sigma^{-2}\bar{\boldsymbol{V}}_{\beta}\boldsymbol{X}^{\prime}\boldsymbol{SY}$
$\displaystyle\bar{\boldsymbol{V}}_{\beta}=\left(\sigma^{-2}\boldsymbol{X}^{\prime}\boldsymbol{X}+\underline{\boldsymbol{V}}_{\beta}^{-1}\right)^{-1}.$
The conditional posterior of $\sigma^{2}$ is inverted Gamma:
$\displaystyle p(\sigma^{2}|\boldsymbol{\beta},\rho,\boldsymbol{\Omega},\mathcal{D})\sim\mathcal{IG}(\bar{a}_{\sigma^{2}},\bar{b}_{\sigma^{2}})$ (10)
$\displaystyle\bar{a}_{\sigma^{2}}=\underline{a}_{\sigma^{2}}+NT/2$
$\displaystyle\bar{b}_{\sigma^{2}}=\underline{b}_{\sigma^{2}}+(\boldsymbol{SY}-\boldsymbol{X\beta})^{\prime}(\boldsymbol{SY}-\boldsymbol{X\beta}).$
### Sampling $\rho$
For the spatial parameter $\rho$, we use a standard Beta prior $p(\rho)$ (see LeSage and Pace, 2009, p. 142). The conditional posterior is given by:
$p(\rho|\boldsymbol{\beta},\sigma^{2},\boldsymbol{\Omega},\mathcal{D})\propto
p(\rho)|\boldsymbol{S}|\exp\left[-\frac{1}{2\sigma^{2}}(\boldsymbol{SY}-\boldsymbol{X\beta})^{\prime}(\boldsymbol{SY}-\boldsymbol{X\beta})\right].$
(11)
Note that the conditional posterior for $\rho$ does not follow a well-known
form and thus requires alternative sampling techniques. We follow LeSage and
Pace (2009) and use a griddy-Gibbs step (Ritter and Tanner 1992) to sample
$\rho$. (Since the support for $\rho$ is bounded, the griddy-Gibbs, or inversion, approach relies on univariate numerical integration of the conditional posterior of $\rho$ and uses the resulting cumulative density function to produce draws. A Metropolis-Hastings step may be used as a standard alternative, but such steps typically produce less efficient draws with poorer mixing properties; see also LeSage and Pace 2009.)
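The griddy-Gibbs step can be sketched as follows (our illustration, not the paper's implementation): evaluate the log conditional posterior on a grid over $(0,1)$, form the cumulative density numerically, and draw by inverting it:

```python
import numpy as np

def griddy_gibbs_draw(log_post, rng, n_grid=500):
    """Draw rho from its conditional posterior by grid-based inversion:
    evaluate the (unnormalized) log posterior on a grid over (0, 1),
    exponentiate stably, and invert the empirical CDF."""
    grid = np.linspace(1e-3, 1 - 1e-3, n_grid)
    logp = np.array([log_post(r) for r in grid])
    p = np.exp(logp - logp.max())   # stable exponentiation
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    return grid[np.searchsorted(cdf, rng.random())]

# Toy target: an (unnormalized) log density peaked near rho = 0.5
rng = np.random.default_rng(0)
log_post = lambda r: -(r - 0.5) ** 2 / (2 * 0.01**2)
draws = np.array([griddy_gibbs_draw(log_post, rng) for _ in range(2000)])
```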
### Sampling the elements of the adjacency matrix $\boldsymbol{\Omega}$
As discussed in the previous section, we propose two alternative prior
specifications for the unknown indicators of the spatial weight matrix
$\omega_{ij}$: first, an independent Bernoulli prior structure with fixed inclusion probabilities (7); second, a hierarchical prior structure which treats the inclusion probabilities as random (8). After eliciting the prior,
the binary indicators $\omega_{ij}$ can be sequentially sampled in random
order from a Bernoulli distribution with conditional posterior given in (6).
### Fast computation of the determinant terms
For the Bayesian MCMC algorithm, it is worth noting that repeated sampling
from Eq. (6) is required. However, this requires evaluating the conditional
probabilities $p(\omega_{ij}=1|\cdot)$ and $p(\omega_{ij}=0|\cdot)$ in Eq.
(5). The main computational difficulty lies in the calculation of the
determinants $|\boldsymbol{S}_{0}|$ and $|\boldsymbol{S}_{1}|$, which has to
be carried out per Gibbs sampling step for the $N^{2}-N$ unknown elements of
the spatial adjacency matrix. The computational cost of directly calculating these determinants rises steeply with $N$, namely at $\mathcal{O}(N^{3})$. This makes direct evaluation of the determinant
prohibitively expensive, especially for large values of $N$. To avoid direct
evaluation, we provide computationally efficient updates for the determinant,
allowing for estimation of models with larger sample sizes.
It is worth noting that it is not necessary to directly calculate the
determinant of the $NT\times NT$ matrix $\boldsymbol{S}_{z}$ (with
$z\in\\{0,1\\}$). Only the determinant of the $N\times N$ matrix
$\boldsymbol{A}_{z}=\boldsymbol{I}_{N}-\rho\boldsymbol{W}_{z}$ needs to be
updated, since
$|\boldsymbol{S}_{z}|=|\boldsymbol{I}_{T}\otimes\boldsymbol{A}_{z}|=|\boldsymbol{A}_{z}|^{T}$.
Here, $\boldsymbol{W}_{z}$ denotes the spatial weight matrix obtained by
setting $\omega_{ij}=1$ and $\omega_{ij}=0$, respectively.
Direct evaluation of $|\boldsymbol{A}_{z}|$ can be largely avoided, since
updating $\omega_{ij}$ changes only the $i$-th row of $\boldsymbol{A}$, if we
do not restrict $\boldsymbol{\Omega}$ to be symmetric (we will address this
case shortly). To illustrate, let $\boldsymbol{\Omega}^{(c)}$ denote the
current – to be updated – spatial adjacency matrix, and $\boldsymbol{W}^{(c)}$
the associated spatial weight matrix with determinant
$|\boldsymbol{A}^{(c)}|=|\boldsymbol{I}_{N}-\rho\boldsymbol{W}^{(c)}|$. Using
the so-called matrix determinant lemma, we can efficiently calculate:
$\displaystyle|\boldsymbol{A}_{z}|=|\boldsymbol{A}^{(c)}+\boldsymbol{\nu}_{i}\boldsymbol{\delta}_{i}^{\prime}|=\left\\{1+\boldsymbol{\delta}_{i}^{\prime}(\boldsymbol{A}^{(c)})^{-1}\boldsymbol{\nu}_{i}\right\\}|\boldsymbol{A}^{(c)}|.$
(12)
$\boldsymbol{\nu}_{i}$ is an $N\times 1$ vector of zeros, except for its
$i$-th entry, which is unity. The $N\times 1$ vector $\boldsymbol{\delta}_{i}$
contains the differences between the $i$-th row of $\boldsymbol{A}_{z}$ and
the $i$-th row of $\boldsymbol{A}^{(c)}$.
It becomes clear that Eq. (12) provides a computationally cheap way for
updating the determinant $|\boldsymbol{A}_{z}|$, conditional on
$|\boldsymbol{A}^{(c)}|$ and $\left(\boldsymbol{A}^{(c)}\right)^{-1}$. This
implies that during the MCMC procedure, for each update of $\omega_{ij}$, we
have to keep track of the determinant (for which Eq. (12) provides a simple
update) and the inverse of $\boldsymbol{A}_{z}$. Direct evaluation of
$\boldsymbol{A}_{z}^{-1}$ is – similar to direct evaluation of the determinant
– prohibitively expensive for moderate to large $N$, since it has to be
carried out for each unknown element of $\boldsymbol{\Omega}$. However, we can
rely on the so-called Sherman-Morrison formula to avoid direct evaluation of
the matrix inverse:
$\displaystyle\boldsymbol{A}_{z}^{-1}=\left(\boldsymbol{A}^{(c)}+\boldsymbol{\nu}_{i}\boldsymbol{\delta}_{i}^{\prime}\right)^{-1}=\left(\boldsymbol{A}^{(c)}\right)^{-1}-\frac{\left(\boldsymbol{A}^{(c)}\right)^{-1}\boldsymbol{\nu}_{i}\boldsymbol{\delta}_{i}^{\prime}\left(\boldsymbol{A}^{(c)}\right)^{-1}}{1+\boldsymbol{\delta}_{i}^{\prime}\left(\boldsymbol{A}^{(c)}\right)^{-1}\boldsymbol{\nu}_{i}}.$
(13)
Combining the formulas in Eqs. (12) and (13) thus provides a numerically cheap
and viable way to update the elements of the spatial adjacency
matrix. (Note that an update of $\rho$ still necessitates direct evaluation of the determinant $|\boldsymbol{A}|$ and the matrix inverse $\boldsymbol{A}^{-1}$, since in this case no convenient update equations exist. An update of $\rho$, however, has to be performed only once per Gibbs step, as opposed to the $N^{2}-N$ updates necessary for $\boldsymbol{\Omega}$, thus justifying the relatively higher computational cost.)
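A small numerical check (our own sketch) confirms that Eqs. (12) and (13) reproduce the determinant and inverse obtained by direct evaluation when row $i$ of $\boldsymbol{A}$ is changed:

```python
import numpy as np

rng = np.random.default_rng(1)
N, i = 6, 2

# Stand-in for A = I - rho*W (diagonally dominant, hence invertible)
A = np.eye(N) - 0.5 * rng.random((N, N)) / N
A_inv = np.linalg.inv(A)
A_det = np.linalg.det(A)

# Change row i by delta': A_z = A + nu_i delta_i'
delta = 0.1 * rng.random(N)
nu = np.zeros(N)
nu[i] = 1.0
A_z = A + np.outer(nu, delta)

g = 1.0 + delta @ A_inv[:, i]              # 1 + delta_i' A^{-1} nu_i
det_update = g * A_det                     # matrix determinant lemma, Eq. (12)
inv_update = A_inv - np.outer(A_inv[:, i], delta @ A_inv) / g  # Sherman-Morrison, Eq. (13)
```

Each update costs $\mathcal{O}(N^{2})$ instead of the $\mathcal{O}(N^{3})$ of a fresh decomposition, which is what makes the $N^{2}-N$ element-wise updates per Gibbs step feasible.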
The binary nature of $\omega_{ij}$ can be exploited for additional computational gains. Either $\boldsymbol{A}_{0}$ or $\boldsymbol{A}_{1}$ always exactly equals $\boldsymbol{A}^{(c)}$, so its determinant and inverse are already known. It therefore suffices to calculate $|\boldsymbol{A}_{z}|$ and $(\boldsymbol{A}_{z})^{-1}$ for either $z=1$ or $z=0$, but not both.
If a symmetric spatial adjacency matrix $\boldsymbol{\Omega}$ is assumed, the update process remains generally the same; however, the determinant and matrix inverse updates have to be performed iteratively. In this case, both $\omega_{ij}$ and $\omega_{ji}$ (for $i\neq j$) are set to either $1$ or $0$.
Thus, both the $i$-th and the $j$-th row of $\boldsymbol{A}_{z}$ differ from
$\boldsymbol{A}^{(c)}$. Following the notation in the non-symmetric case, let
us denote the differences between these rows as $\boldsymbol{\delta}_{i}$ and
$\boldsymbol{\delta}_{j}$. To obtain an update of $|\boldsymbol{A}_{z}|$ and
$\boldsymbol{A}_{z}^{-1}$, we first evaluate Eqs. (12) and (13), based on
$\boldsymbol{\delta}_{i}$, $\boldsymbol{\nu}_{i}$, $|\boldsymbol{A}^{(c)}|$,
and $(\boldsymbol{A}^{(c)})^{-1}$. Using the resulting determinant and matrix
inverse, as well as $\boldsymbol{\nu}_{j}$, and $\boldsymbol{\delta}_{j}$, we
again evaluate Eqs. (12) and (13), which yield $|\boldsymbol{A}_{z}|$ and
$\boldsymbol{A}_{z}^{-1}$.
## 5 Simulation study
To assess the accuracy of our proposed approach, we evaluate its performance
in a Monte Carlo study. Our benchmark data generating process comprises two
randomly generated explanatory variables, as well as spatial unit and time
fixed effects:
$\displaystyle\tilde{\boldsymbol{y}}_{t}=\tilde{\rho}\widetilde{\boldsymbol{W}}\tilde{\boldsymbol{y}}_{t}+\tilde{\boldsymbol{\mu}}+\tilde{\tau}_{t}+\tilde{\boldsymbol{Z}}_{t}\tilde{\boldsymbol{\beta}}_{0}+\tilde{\boldsymbol{\varepsilon}}_{t}.$
To maintain succinct notation, we denote the simulated values in the Monte
Carlo study with a tilde. The matrix of explanatory variables
$\tilde{\boldsymbol{Z}}_{t}$ is defined as
$\tilde{\boldsymbol{Z}}_{t}=[\tilde{{z}}_{1t},\tilde{{z}}_{2t}]$, where both
$\tilde{{z}}_{1t}$ and $\tilde{{z}}_{2t}$ are normally distributed with zero
mean and unit variance (i.e., $q_{0}=2$). The corresponding vector of coefficients
is defined as $\tilde{\boldsymbol{\beta}}_{0}=[-1,1]^{\prime}$. The vector of
residuals $\tilde{\boldsymbol{\varepsilon}}_{t}$ is generated from a normal
distribution with zero mean and $\tilde{\sigma}^{2}=0.5$. The fixed effects
parameters $\tilde{\boldsymbol{\mu}}$ and $\tilde{\tau}_{t}$ are randomly
generated from a standard normal distribution.
The row-stochastic spatial weight matrix $\widetilde{\boldsymbol{W}}$ is based
on an adjacency matrix $\widetilde{\boldsymbol{\Omega}}$, which is generated
from an $N/20$ nearest neighbour specification, by additionally assuming
symmetry of the weight matrix prior to row-standardization (more specifically, $\widetilde{\boldsymbol{\Omega}}=(\widetilde{\boldsymbol{\Omega}}_{0}^{\prime}+\widetilde{\boldsymbol{\Omega}}_{0})/2$, where $\widetilde{\boldsymbol{\Omega}}_{0}$ is an $N/20$ nearest neighbour adjacency matrix). The nearest neighbour specification is based on a randomly
generated spatial location pattern, sampled from a normal distribution with
zero mean and unity variance. In the Monte Carlo study we vary
$T\in\\{10,40\\}$ and $N\in\\{20,100\\}$. Additionally, we vary the strength
of spatial dependence $\tilde{\rho}\in\\{0.3,0.5,0.8\\}$.
For the Monte Carlo simulation study, we compare the following prior setups:
1. Fixed ($\underline{p}=1/2$) prior: the fixed Bernoulli prior specification in Eq. (7), where we set $\underline{p}=1/2$.

2. Sparsity ($\underline{m}=(N-1)/2$) prior: the prior setup in Eq. (8) with $\underline{a}_{\omega}=\underline{b}_{\omega}=1$, which corresponds to a discrete uniform distribution over the number of neighbours.

3. Sparsity ($\underline{m}=N/10$) prior: the prior setup in Eq. (8) with $\underline{a}_{\omega}=1$ and $\underline{b}_{\omega}=[(N-1)-\underline{m}]/\underline{m}$, setting the number of a priori expected neighbours to $\underline{m}=N/10$. This setup thus imposes more sparsity in $\boldsymbol{\Omega}$ than the former.
For all prior specifications under scrutiny, we consider two alternative
estimation setups by assuming that the adjacency matrix is either symmetric or
non-symmetric. (However, a direct comparison of the results between symmetric and non-symmetric specifications does not appear reasonable, since the adjacency matrix in the data generating process is assumed symmetric.) We
moreover report the predictive performance of two alternative specifications
using exogenous weight matrices. In these cases the employed weights are based
on the true (symmetric) adjacency matrix by fixing the accuracy to the 99% and
95% level, respectively. We simulate such cases by randomly switching 1% and
5% of the elements in the true binary adjacency matrix
$\widetilde{\boldsymbol{\Omega}}$, respectively. The resulting exogenous
adjacency matrices thus result in exactly 99% and 95% overlap in the binary
observations with the true adjacency matrix, while maintaining the same level
of sparsity.
The prior setup for our remaining parameters is as follows. We assume a
Gaussian prior for $\boldsymbol{\beta}$ with zero mean and a variance of
$100$. We use an inverse gamma prior for $\sigma^{2}$ with shape and rate parameters of $0.01$. The prior for the spatial autoregressive parameter $\rho$ is a symmetric Beta specification with both shape parameters equal to $1.01$. The chosen priors can thus be considered highly non-informative.
In Table 1 we use several criteria to evaluate the performance of the
alternative specifications. For the spatial autoregressive and the slope
parameters we report the well-known root mean squared error (RMSE). For
assessing the ability to estimate the spatial adjacency matrix, we use an accuracy measure, defined as the number of correctly identified unknown elements divided by the total number of elements to be estimated. This measure is calculated separately for each posterior draw. The
reported value is an average over all posterior draws and Monte Carlo
iterations.
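The accuracy measure can be computed as follows (an illustrative sketch; the function name is ours):

```python
import numpy as np

def adjacency_accuracy(omega_est, omega_true):
    """Share of correctly identified off-diagonal elements of the
    adjacency matrix (the diagonal is fixed at zero, not estimated)."""
    mask = ~np.eye(omega_true.shape[0], dtype=bool)
    return float(np.mean(omega_est[mask] == omega_true[mask]))

omega_true = np.array([[0, 1, 0],
                       [1, 0, 1],
                       [0, 1, 0]])
omega_est = omega_true.copy()
omega_est[0, 2] = 1   # one wrong link among the 6 off-diagonal elements
acc = adjacency_accuracy(omega_est, omega_true)
```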
Columns: Non-symmetric — Fixed ($\underline{p}=1/2$), Sparsity ($\underline{m}=(N-1)/2$), Sparsity ($\underline{m}=N/10$); Symmetric — the same three prior setups; Exogenous $\boldsymbol{W}$ — accuracy 0.99 and 0.95.

RMSE($\boldsymbol{\beta}$)
 $N$   $T$  $\tilde{\rho}$ |  Non-symmetric        |  Symmetric            |  Exogenous $\boldsymbol{W}$
 20    40   0.3            |  0.193  0.161  0.163  |  0.162  0.163  0.164  |  0.168  0.176
 20    40   0.5            |  0.172  0.173  0.172  |  0.170  0.169  0.171  |  0.179  0.216
 20    40   0.8            |  0.169  0.169  0.169  |  0.165  0.166  0.167  |  0.287  0.553
 20    10   0.3            |  0.234  0.207  0.198  |  0.203  0.182  0.181  |  0.192  0.204
 20    10   0.5            |  0.257  0.210  0.206  |  0.189  0.191  0.190  |  0.206  0.253
 20    10   0.8            |  0.217  0.216  0.217  |  0.205  0.204  0.206  |  0.371  0.658
 100   40   0.3            |  0.098  0.099  0.097  |  0.099  0.099  0.098  |  0.079  0.080
 100   40   0.5            |  0.144  0.088  0.083  |  0.145  0.114  0.076  |  0.084  0.086
 100   40   0.8            |  0.154  0.087  0.088  |  0.073  0.081  0.081  |  0.089  0.141
 100   10   0.3            |  0.111  0.112  0.111  |  0.111  0.111  0.112  |  0.092  0.093
 100   10   0.5            |  0.135  0.118  0.104  |  0.135  0.136  0.118  |  0.088  0.094
 100   10   0.8            |  0.346  0.143  0.140  |  0.254  0.102  0.102  |  0.100  0.151

RMSE($\rho$)
 20    40   0.3            |  0.199  0.029  0.031  |  0.030  0.029  0.030  |  0.034  0.060
 20    40   0.5            |  0.035  0.040  0.042  |  0.035  0.035  0.035  |  0.039  0.083
 20    40   0.8            |  0.021  0.021  0.022  |  0.018  0.018  0.018  |  0.084  0.177
 20    10   0.3            |  0.237  0.152  0.094  |  0.291  0.147  0.106  |  0.058  0.080
 20    10   0.5            |  0.155  0.060  0.054  |  0.109  0.053  0.051  |  0.059  0.114
 20    10   0.8            |  0.027  0.032  0.032  |  0.028  0.028  0.029  |  0.097  0.179
 100   40   0.3            |  0.280  0.283  0.277  |  0.279  0.283  0.287  |  0.027  0.033
 100   40   0.5            |  0.447  0.109  0.101  |  0.446  0.353  0.220  |  0.021  0.054
 100   40   0.8            |  0.148  0.044  0.047  |  0.049  0.024  0.024  |  0.034  0.097
 100   10   0.3            |  0.242  0.256  0.268  |  0.245  0.252  0.274  |  0.050  0.062
 100   10   0.5            |  0.373  0.176  0.141  |  0.371  0.391  0.404  |  0.041  0.074
 100   10   0.8            |  0.473  0.106  0.110  |  0.169  0.141  0.137  |  0.044  0.105

Accuracy $\boldsymbol{\Omega}$
 20    40   0.3            |  0.648  0.930  0.954  |  0.963  0.982  0.983  |  0.990  0.950
 20    40   0.5            |  0.983  0.988  0.989  |  0.998  0.998  0.998  |  0.990  0.950
 20    40   0.8            |  0.995  0.995  0.995  |  0.999  1.000  0.999  |  0.990  0.950
 20    10   0.3            |  0.554  0.752  0.866  |  0.679  0.875  0.904  |  0.990  0.950
 20    10   0.5            |  0.734  0.898  0.931  |  0.915  0.962  0.967  |  0.990  0.950
 20    10   0.8            |  0.975  0.983  0.984  |  0.996  0.997  0.997  |  0.990  0.950
 100   40   0.3            |  0.530  0.713  0.847  |  0.539  0.686  0.848  |  0.990  0.950
 100   40   0.5            |  0.530  0.898  0.929  |  0.539  0.793  0.933  |  0.990  0.950
 100   40   0.8            |  0.847  0.966  0.966  |  0.978  0.977  0.977  |  0.990  0.950
 100   10   0.3            |  0.530  0.713  0.844  |  0.539  0.685  0.846  |  0.990  0.950
 100   10   0.5            |  0.530  0.746  0.883  |  0.539  0.702  0.905  |  0.990  0.950
 100   10   0.8            |  0.531  0.926  0.933  |  0.564  0.944  0.944  |  0.990  0.950
Table 1: Monte Carlo simulation results
* •
Notes: Results are based on $1,000$ Monte Carlo iterations. For each Monte
Carlo iteration the corresponding sampling algorithms are run for $500$
retained draws, after an initial $500$ draws were discarded as burn-in. The
values given for
RMSE($\boldsymbol{\beta}$) and RMSE($\rho$) correspond to the average root
mean squared error over all Monte Carlo iterations. Bold values denote the
best performing specification within a section (symmetric or non-symmetric).
The exogenous $\boldsymbol{\Omega}$ specifications correspond to classic SAR
models with randomly perturbed exogenous adjacency matrices, which have an
accuracy of 99% and 95% compared to the _true_ adjacency matrix. For RMSEs,
lower values indicate outperformance. Conversely, for the accuracy indicators
of $\boldsymbol{\Omega}$, higher values indicate outperformance.
Table 1 summarizes the results of our Monte Carlo simulation. For all
combinations of $N$, $T$, $\tilde{\rho}$ under scrutiny, the table presents
the respective root mean square error for both the slope coefficients
$\boldsymbol{\beta}$ and the spatial autoregressive parameter. The third block
of the table shows the accuracy of the estimated adjacency matrix
$\boldsymbol{\Omega}$. Lower values in terms of RMSEs indicate outperformance.
Conversely, for accuracy in $\boldsymbol{\Omega}$ higher values indicate
outperformance. The best performance among the three employed prior scenarios
within a subgroup is highlighted in bold. In addition, the last two columns in
Table 1 show the results for the benchmark SAR models using exogenous randomly
perturbed adjacency matrices with accuracy fixed at the $99\%$ and the $95\%$
level, respectively.
Intuitively, the precision of the estimation improves as the number of
observations $NT$ increases in proportion to the number of unknown
parameters. (Footnote 14: The number of unknown parameters amounts to
$N^{2}+T+q_{0}+2$ and $N(N-1)/2+N+T+q_{0}+2$ for non-symmetric and symmetric
spatial weight matrices, respectively.) The results in Table 1 largely confirm
this intuition.
The performance indicators for both $\rho$ and $\boldsymbol{\Omega}$ also
clearly improve for high levels of spatial autocorrelation ($\rho=0.8$). In
scenarios where the number of unknown parameters is smaller than the number of
observations our approach even manages to outperform both rather hard
benchmarks using exogenous spatial weight matrices close to the true DGP. This
relative outperformance appears particularly pronounced when the strength of
spatial dependence $\rho$ is large. In these settings, symmetric
specifications (which resemble the true DGP) even manage to produce accuracy
in the adjacency matrix close to unity.
Particularly interesting results appear in the most challenging Monte Carlo
scenarios, where the number of unknown parameters is particularly large
relative to the number of observations ($N=100$ and $T=10$). In these
scenarios, the number of parameters to be estimated exceeds the number of
observations by a factor of more than ten. In these cases, prior
specifications without shrinkage appear to fail at estimating the underlying
spatial structure, producing rather poor accuracy measures.
However, when employing sparsity priors, the table reveals that our approach
still manages to produce relatively accurate predictive results. In the
existence of pronounced spatial autocorrelation, the sparsity specifications
even manage to closely track the predictive performance of the rather tough
exogenous benchmarks.
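To make the degree of over-parametrization concrete, the parameter counts from footnote 14 can be compared with the number of observations $NT$. A small sketch (the value of $q_{0}$ used below is an assumption made purely for illustration):

```python
def n_params(N, T, q0, symmetric=False):
    """Number of unknown parameters (cf. footnote 14):
    N^2 + T + q0 + 2 for non-symmetric spatial weight matrices,
    N(N-1)/2 + N + T + q0 + 2 for symmetric ones."""
    omega_elements = N * (N - 1) // 2 + N if symmetric else N * N
    return omega_elements + T + q0 + 2

# Most challenging Monte Carlo design: N = 100, T = 10 (q0 = 2 slopes assumed)
N, T, q0 = 100, 10, 2
ratio = n_params(N, T, q0) / (N * T)
print(ratio)  # parameters exceed observations by a factor of more than ten
```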
Note that the symmetric specifications (where we impose
$\omega_{ij}=\omega_{ji}$) typically outperform their non-symmetric
counterparts due to their resemblance to the true DGP. However, for settings
where the number of unknown parameters is smaller than the number of
observations both scenarios track each other closely. Among the alternative
prior specifications under scrutiny, the table shows rather similar results
(no clear best specification emerges) in scenarios where $N$ is small relative
to $T$. However, for particularly over-parametrized settings (high $N$ and low
$T$) the proposed sparsity priors clearly outperform the fixed setups.
Specifically, even in the scenario with $N=100$ and $T=10$, the sparsity
priors still perform comparatively well. (Footnote 15: Figure A4 in the
appendix illustrates the convergence properties of a random Monte Carlo sample
for the case of $N=20$ and $T=10$. This case was chosen as it is similar to
the settings in the empirical applications.)
## 6 Empirical illustration
To illustrate our proposed approach using real data, we estimate spatial panel
specifications based on country-specific daily infection rates in the very
early phase of the coronavirus pandemic. We use the COVID-19 data set provided
by the Johns Hopkins University (Dong et al. 2020). The database contains
information on (official) daily infections for a large panel of countries
around the globe. For the empirical illustration, we focus on the very
beginning of the outbreak, using data from the 17th of February to the 20th of
April 2020.
The starting date of our sample marks the beginning of the pandemic in major
countries, such that large parts of Asia, Europe and North America can be
included. (Footnote 16: Countries without any (official) infections in the
starting period have been excluded from the sample. We moreover exclude India
as a clear outlier due to its particularly small (official) infection rates
throughout the observation period.) The choice of the end date is
motivated by the results of Krisztin et al. (2020), where the degree of
spatial dependence among infections rates becomes insignificant after the 20th
April, when the majority of countries in the sample implemented lockdown
policies.
For the empirical application we use data for the following countries:
Australia (AUS), Bahrain (BHR), Belgium (BEL), Canada (CAN), China (CHN),
Finland (FIN), France (FRA), Germany (DEU), Iran (IRN), Iraq (IRQ), Israel
(ISR), Italy (ITA), Japan (JPN), Kuwait (KWT), Lebanon (LBN), Malaysia (MYS),
Oman (OMN), Republic of Korea (KOR), Russian Federation (RUS), Singapore
(SGP), Spain (ESP), Sweden (SWE), Thailand (THA), United Arab Emirates (ARE),
United Kingdom (GBR), United States (USA), and Viet Nam (VNM).
By including a biweekly time lag, our resulting panel comprises $N=27$
countries across the globe for a period of $T=19$ days. (Footnote 17: With a
biweekly time lag, the dependent variable thus captures data from the 2nd of
April to the 20th of April ($T=19$). For a better comparison, we have fixed
the time period captured by $\boldsymbol{y}_{t}$ for all alternative
specifications. It is moreover worth noting that a notably earlier starting
date would result in relatively few (cross-sectional) observations. However,
our results are rather robust when considering a longer time horizon.) We
follow work by
Guliyev (2020), Krisztin et al. (2020), or Han et al. (2021), among others,
and use panel versions of a spatial growth specification for the country-
specific COVID-19 infections:
$\boldsymbol{y}_{t}=\boldsymbol{\mu}+\tau_{t}+\rho\boldsymbol{Wy}_{t-r}+\boldsymbol{x}_{t-14}\beta+\boldsymbol{Z}_{t-14}\boldsymbol{\beta}_{0}+\boldsymbol{\varepsilon}_{t},$
(14)
where $\boldsymbol{y}_{t}=\boldsymbol{x}_{t}-\boldsymbol{x}_{t-14}$, and
$\boldsymbol{x}_{t}$ is an $N\times 1$ vector comprising the (logged) daily
number of official cases per 100,000 inhabitants per country for time period
$t=1,\dots,T$. (Footnote 18: The spatial growth regression in (14) may
alternatively be specified in levels rather than in log-differences by setting
$\boldsymbol{y}_{t}=\boldsymbol{x}_{t}$. Results using this alternative
specification are very similar and are presented in the appendix.)
$\boldsymbol{\mu}$ and $\tau_{t}$ represent fixed effects for the countries
and the time periods, respectively. $\boldsymbol{W}$ denotes the spatial
weight matrix with spatial autoregressive parameter $\rho$ as defined before.
We again primarily focus on row-stochastic weight matrices. Results based on
spatial weight matrices without row-standardization are presented in the
appendix.
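The dependent variable in Eq. (14) is a biweekly log-difference of the per-capita case counts. A minimal sketch of this transformation (the array layout and names are our assumptions, not part of the paper):

```python
import numpy as np

def growth_dependent(cases_per_100k, lag=14):
    """Biweekly growth rates y_t = x_t - x_{t-lag}, where x_t holds the
    logged daily official cases per 100,000 inhabitants.

    cases_per_100k : (T_full, N) array of daily case counts per 100,000
    Returns a (T_full - lag, N) array of dependent variables."""
    x = np.log(cases_per_100k)
    return x[lag:] - x[:-lag]
```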
We also consider alternative model specifications using contemporaneous as
well as temporal lags of the spatial lag ($\boldsymbol{Wy}_{t-r}$ with
$r\in\\{0,14\\}$). A plethora of recent studies exploit the contemporaneous
spatial information ($r=0$) for modelling the spread of COVID-19 infections
(among others, see Han et al. 2021, Jaya and Folmer 2021, Kosfeld et al. 2021,
Guliyev 2020, or Krisztin et al. 2020). Using contemporaneous spatial
information appears reasonable when the primary interest lies in quantifying
spatial co-movements of infection rates. However, for many questions of
interest, a temporal spatial lag $\boldsymbol{Wy}_{t-r}$ ($r>0$) might be an
interesting alternative since it reflects the notion that the spatial process
of virus transmission takes some time to manifest (Elhorst 2021, Mitze and
Kosfeld 2021). Since our proposed estimation approach can be easily applied to
these alternative specifications, we provide estimates for both
specifications. (Footnote 19: It is worth noting that in the special case of
$r>0$, computational efficiency is tremendously increased, as no
log-determinant calculations are required in the MCMC algorithm. The sampling
strategy for these cases is presented in the appendix.)
In addition to the _Initial infections_ variable $\boldsymbol{x}_{t-14}$,
matrix $\boldsymbol{Z}_{t-14}$ contains three explanatory variables on a daily
basis. Several studies emphasize the importance of climatic conditions for the
spread of the COVID-19 virus; for a survey, see Briz-Redón and Serrano-Aroca
(2020). We therefore
use daily data on the country-specific maximum measured temperature
(_Temperature_) and precipitation levels (_Precipitation_) as additional
covariates. Both variables stem from a daily database of country-specific
data, which was compiled via the Dark Sky API. (Footnote 20:
https://www.kaggle.com/datasets/vishalvjoseph/weather-dataset-for-covid19-predictions)
As a third variable, we also include the well-known
stringency index (_Stringency_) put forward by Hale et al. (2020), which
summarizes country-specific governmental policy measures to contain the spread
of the virus. In this application, we use the biweekly average of the reported
stringency index. Since all these influences arguably require some time to be
reflected in the official infection figures, we use a biweekly lag of $14$
days (in accordance with $r$ in alternative variants). (Footnote 21: As
robustness checks, we have also tried a shorter lag length of one week. The
estimated spatial structures appeared very similar to the biweekly benchmarks.
All these additional robustness checks, along with the R codes, are available
from the authors upon request.)
Table 2: Estimation results for benchmark specifications
| | $\boldsymbol{Wy}_{t}$ | | | | $\boldsymbol{Wy}_{t-14}$ | | | |
|---|---|---|---|---|---|---|---|---|
| | Fixed | | Sparsity | | Fixed | | Sparsity | |
| | Mean | Std.Dev. | Mean | Std.Dev. | Mean | Std.Dev. | Mean | Std.Dev. |
Initial infections | -0.8761 | 0.0117 | -0.9244 | 0.0117 | -0.9533 | 0.0126 | -0.9911 | 0.0114
Stringency | -0.4566 | 0.0736 | -0.5661 | 0.0451 | -0.2503 | 0.0858 | 0.0616 | 0.0410
Precipitation | 0.0365 | 0.0339 | -0.0444 | 0.0335 | 0.0541 | 0.0608 | 0.0483 | 0.0511
Temperature | -0.0014 | 0.0015 | -0.0016 | 0.0015 | -0.0032 | 0.0026 | -0.0017 | 0.0025
$\rho$ | 0.6319 | 0.0129 | 0.5592 | 0.0101 | 0.9618 | 0.0110 | 0.9481 | 0.0139
$\sigma^{2}$ | 0.0187 | 0.0013 | 0.0209 | 0.0014 | 0.0401 | 0.0034 | 0.0516 | 0.0036
Avg. # neighbours | 7.8370 | | 3.6083 | | 4.2849 | | 2.8082 |
Fixed effects | Yes | | Yes | | Yes | | Yes |
$N$ | 27 | | 27 | | 27 | | 27 |
$T$ | 19 | | 19 | | 19 | | 19 |
Notes: Posterior quantities based on $5,000$ MCMC draws, where the first
$2,500$ were discarded as burn-ins. Values in bold denote significance under a
90% posterior credible interval.
Table 2 presents a summary of the estimation results. The left part of the
table shows results for specifications using a contemporaneous spatial lag
$\boldsymbol{Wy}_{t}$, while the right part summarizes results for the case
$\boldsymbol{Wy}_{t-14}$.
For each specification, the first rows contain the posterior mean and standard
deviations for the slope parameters followed by estimates of $\rho$ and
$\sigma^{2}$. Posterior quantities which appear significantly different from
zero using a 90% posterior credible interval are depicted in bold. The table
moreover presents the average posterior expected number of neighbours, which
is given by the average row sum of the matrix of posterior inclusion
probabilities based on $p(\omega_{ij}=1|\mathcal{D})$. This measure can be
viewed as a measure of sparsity in the estimated matrix of linkages. All
specifications moreover contain fixed effects for both countries and time
periods. (Footnote 22: For the benchmark specifications, the number of unknown
parameters and observations thus amounts to $753$ and $513$, respectively.)
Table 2 shows rather similar $\rho$ and $\sigma^{2}$ posterior quantities for
the flat and the sparsity prior. However, there appear some marked differences
between the specifications $\boldsymbol{Wy}_{t}$ and $\boldsymbol{Wy}_{t-14}$.
In all cases, spatial dependence appears strong and precisely estimated, and
is particularly high in the temporal lag specification
$\boldsymbol{Wy}_{t-14}$. However, the table similarly reveals higher
estimates for the nuisance parameter $\sigma^{2}$ for the temporal spatial lag
models. The table shows rather precise and negative coefficients for the
initial infections variable, indicating conditional convergence patterns. For
most model variants the table moreover suggests a significant negative impact
of the stringency index on infection growth. The majority of the slope
parameter estimates associated with the variables temperature and
precipitation appear more muted and insignificant. Overall, the table moreover
clearly demonstrates that a hierarchical prior setup can enforce sparsity in
the resulting adjacency matrix. Both sparsity specifications result in an
average number of neighbours smaller than the models with fixed prior
specifications.
Figure 2: Posterior inclusion probabilities for benchmark specifications
(a) Fixed ($\underline{p}=1/2$); $\boldsymbol{Wy}_{t}$
(b) Sparsity ($\underline{m}=7$); $\boldsymbol{Wy}_{t}$
(c) Fixed ($\underline{p}=1/2$); $\boldsymbol{Wy}_{t-14}$
(d) Sparsity ($\underline{m}=7$); $\boldsymbol{Wy}_{t-14}$
Notes: Posterior inclusion probabilities of spatial links based on $5,000$
MCMC draws. Inclusion probabilities 0.50-0.75 (little evidence for inclusion)
are coloured grey. Strong evidence for inclusion (>0.75) indicated by black
colour.
Figure 2 depicts the posterior inclusion probabilities
$p(\omega_{ij}=1|\mathcal{D})$ for the considered specifications. To better
visualize the results we have reordered the countries by their longitudes,
starting with Canada and the United States and ending with south-east Asian
countries, Australia and Japan. Clusters along the main diagonal thus roughly
indicate geographic spatial linkages. For the sake of visualization, we
distinguish between negligible evidence for inclusion ($<0.50$; white colour),
moderate evidence ($0.50-0.75$; grey colour), and strong evidence ($>0.75$;
black colour).
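The three-way colour coding used in Figure 2 amounts to a simple thresholding of the posterior inclusion probabilities; a sketch (the function name and integer coding are our own):

```python
def evidence_class(p):
    """Evidence categories used in Figure 2:
    0 = negligible (< 0.50, white), 1 = moderate (0.50-0.75, grey),
    2 = strong (> 0.75, black)."""
    if p > 0.75:
        return 2
    if p >= 0.50:
        return 1
    return 0
```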
The two upper plots in Figure 2 depict posterior inclusion probabilities
$p(\omega_{ij}=1|\mathcal{D})$ for the specifications involving a
contemporaneous spatial lag $\boldsymbol{Wy}_{t}$, while the lower part shows
temporal spatial lag specifications $\boldsymbol{Wy}_{t-14}$. In both cases,
the left subplots present results based on independent prior inclusion
probabilities of $\underline{p}=1/2$. The right plots are based on sparsity
priors using $\underline{m}=7$. The columns in the subplots indicate marginal
posterior importance of the countries as predictors of coronavirus infections
in linked countries. Conversely, rows depict the countries to be predicted.
The results using sparsity priors generally produce patterns similar to the
fixed prior specifications and clearly demonstrate their ability to reduce the
dimensionality of the connectivity structure. For the contemporaneous spatial lag
specification (upper plots), the figure suggests a slightly more pronounced
regional dependency structure as compared to the temporal spatial lags. The
figure moreover reveals marked spill-out effects from Asian countries, as well
as from Iran and Italy. (Footnote 23: The regional dependency structure
appears particularly pronounced when a level specification of the infection
dynamics is imposed. Sensitivity checks based on this alternative
specification are presented in Figure A1 in the appendix.)
Results based on a biweekly temporal spatial lag $\boldsymbol{Wy}_{t-14}$ show
even more pronounced spill-out effects from Asian countries (most notably
China, the Republic of Korea, and Singapore). (Footnote 24: When comparing the
results, it is important to note that for all specifications under scrutiny,
we have fixed the time period in the dependent variable ($\boldsymbol{y}_{t}$
ranges from the 2nd of April to the 20th of April; i.e. $T=19$). The biweekly
temporal spatial lag specification thus inherently comprises spatial
information prior to the period in $\boldsymbol{y}_{t}$.) For European
countries, results
similarly suggest Italy as a further important source country of spatial virus
transmission. The estimated spatial linkages are thus in close agreement with
the actual origins of the overall virus transmission for the very early period
of the global outbreak of the pandemic.
To showcase convergence of the posterior MCMC chains, Figure 3 depicts trace
plots for $\rho$, $\sigma^{2}$, and slope parameters. Overall, the trace plots
show rather good mixing and convergence properties. Convergence of the chains
has moreover been checked using the diagnostics proposed by Geweke (1992), as
implemented in the R package coda (Plummer et al. 2006). Results moreover
appear rather robust concerning alternative modelling frameworks. Estimation
results of these alternative specifications are presented in the appendix.
(Footnote 25: Estimates when using a smaller time lag of seven days also
appear very similar. Results, along with the R codes used, are available from
the authors upon request.)
Figure 3: Trace plots for benchmark specifications
Notes: Posterior draws based on $5,000$ MCMC draws, where the first $2,500$
were discarded as burn-ins.
## 7 Concluding remarks
In this paper we propose a Bayesian approach for estimation of weight matrices
in spatial econometric models. A particular advantage of our approach is the
simple integration into a standard Bayesian MCMC algorithm. The proposed
framework can therefore be adapted and extended in a simple and
computationally efficient way to cover a large number of alternative spatial
specifications prevalent in recent literature. Our approach may thus be easily
extended to cover inter alia non-Gaussian models such as spatial probit
(LeSage et al., 2011) or logit specifications (Krisztin and Piribauer, 2021),
local spillover models (Vega and Elhorst, 2015), or spatial error models
(LeSage and Pace, 2009).
Our approach does not necessarily rely on specific prior information on the
spatial linkages. Such information, however, can be easily incorporated in a
flexible and transparent way. We moreover motivate the use of
hierarchical priors which impose sparsity in the resulting spatial weight
matrix. These sparsity priors are particularly useful in applications where
the number of unknown parameters exceeds the number of observations. The
virtues of our approach come at the price that we focus on spatial
neighbourhood structures which are binary (prior to row-standardization).
However, this restriction is implicitly imposed in many spatial applications
in the regional economic literature, where spatial weight matrices are
constructed based on concepts of contiguity, distance bands, or nearest
neighbours.
Based on Monte Carlo simulations, we show that our approach appears
particularly promising when the number of spatial observations $N$ is large
relative to the time dimension $T$, which is a rather common characteristic of
data sets in the regional science literature. We moreover demonstrate the
usefulness of our approach using real data on the outbreak of the COVID-19
pandemic. The results of this empirical application corroborate the findings
in the Monte Carlo simulation study that the proposed approach performs well
even in the cases of high over-parametrization.
## References
* Ahrens and Bhattacharjee (2015) Ahrens A and Bhattacharjee A (2015) Two-step lasso estimation of the spatial weights matrix. _Econometrics_ 3(1), 128–155
* Basile (2008) Basile R (2008) Regional economic growth in Europe: A semiparametric spatial dependence approach. _Papers in Regional Science_ 87(4), 527–544
* Briz-Redón and Serrano-Aroca (2020) Briz-Redón Á and Serrano-Aroca Á (2020) The effect of climate on the spread of the COVID-19 pandemic: A review of findings, and statistical and modelling techniques. _Progress in Physical Geography: Earth and Environment_ 44(5), 591–604
* Cornwall and Parent (2017) Cornwall GJ and Parent O (2017) Embracing heterogeneity: the spatial autoregressive mixture model. _Regional Science and Urban Economics_ 64, 148–161
* De Paula et al. (2019) De Paula Á, Rasul I and Souza P (2019) Identifying network ties from panel data: theory and an application to tax competition. _arXiv preprint arXiv:1910.07452_
* Debarsy and LeSage (2018) Debarsy N and LeSage J (2018) Flexible dependence modeling using convex combinations of different types of connectivity structures. _Regional Science and Urban Economics_ 69, 48–68
* Dong et al. (2020) Dong E, Du H and Gardner L (2020) An interactive web-based dashboard to track COVID-19 in real time. _The Lancet infectious diseases_ 20(5), 533–534
* Elhorst (2021) Elhorst JP (2021) The dynamic general nesting spatial econometric model for spatial panels with common factors: Further raising the bar. _Review of Regional Research_ , 1–19
* Geweke (1992) Geweke J (1992) Evaluating the accuracy of sampling-based approaches to the calculations of posterior moments. _Bayesian Statistics_ 4, 641–649
* Guliyev (2020) Guliyev H (2020) Determining the spatial effects of COVID-19 using the spatial panel data model. _Spatial Statistics_ 38, 100443
* Hale et al. (2020) Hale T, Petherick A, Phillips T and Webster S (2020) Variation in government responses to COVID-19. _Blavatnik School of Government Working Paper_ 31, 2020–11
* Han and Lee (2016) Han X and Lee LF (2016) Bayesian analysis of spatial panel autoregressive models with time-varying endogenous spatial weight matrices, common factors, and random coefficients. _Journal of Business & Economic Statistics_ 34(4), 642–660
* Han et al. (2021) Han X, Xu Y, Fan L, Huang Y, Xu M and Gao S (2021) Quantifying COVID-19 importation risk in a dynamic network of domestic cities and international countries. _Proceedings of the National Academy of Sciences_ 118(31)
* Hsieh and Lee (2016) Hsieh CS and Lee LF (2016) A social interactions model with endogenous friendship formation and selectivity. _Journal of Applied Econometrics_ 31(2), 301–319
* Jaya and Folmer (2021) Jaya IGNM and Folmer H (2021) Bayesian spatiotemporal forecasting and mapping of COVID-19 risk with application to West Java Province, Indonesia. _Journal of Regional Science_ 61(4), 849–881
* Kelejian and Piras (2014) Kelejian HH and Piras G (2014) Estimation of spatial models with endogenous weighting matrices, and an application to a demand model for cigarettes. _Regional Science and Urban Economics_ 46, 140–149
* Koop (2003) Koop G (2003) _Bayesian Econometrics_. John Wiley & Sons Ltd., West Sussex
* Kosfeld et al. (2021) Kosfeld R, Mitze T, Rode J and Wälde K (2021) The Covid-19 containment effects of public health measures: A spatial difference-in-differences approach. _Journal of Regional Science_ 61(4), 799–825
* Krisztin (2017) Krisztin T (2017) The determinants of regional freight transport: A spatial, semiparametric approach. _Geographical Analysis_ 49(3), 268–308
* Krisztin and Piribauer (2021) Krisztin T and Piribauer P (2021) A Bayesian spatial autoregressive logit model with an empirical application to European regional FDI flows. _Empirical Economics_ 61, 231–257
* Krisztin et al. (2020) Krisztin T, Piribauer P and Wögerer M (2020) The spatial econometrics of the coronavirus pandemic. _Letters in Spatial and Resource Sciences_ 13, 209–218
* Lam and Souza (2020) Lam C and Souza PC (2020) Estimation and selection of spatial weight matrix in a spatial lag model. _Journal of Business & Economic Statistics_ 38(3), 693–710
* LeSage (1997) LeSage JP (1997) Bayesian estimation of spatial autoregressive models. _International Regional Science Review_ 20(1-2), 113–129
* LeSage et al. (2011) LeSage JP, Kelley Pace R, Lam N, Campanella R and Liu X (2011) New Orleans business recovery in the aftermath of Hurricane Katrina. _Journal of the Royal Statistical Society: Series A (Statistics in Society)_ 174(4), 1007–1027
* LeSage and Pace (2007) LeSage JP and Pace RK (2007) A matrix exponential spatial specification. _Journal of Econometrics_ 140(1), 190–214
* LeSage and Pace (2009) LeSage JP and Pace RK (2009) _Introduction to Spatial Econometrics_. CRC Press, Boca Raton London New York
* Ley and Steel (2009) Ley E and Steel MF (2009) On the effect of prior assumptions in Bayesian model averaging with applications to growth regression. _Journal of Applied Econometrics_ 24(4)
* Liu et al. (2014) Liu X, Patacchini E and Zenou Y (2014) Endogenous peer effects: local aggregate or local average? _Journal of Economic Behavior & Organization_ 103, 39–59
* Mitze and Kosfeld (2021) Mitze T and Kosfeld R (2021) The propagation effect of commuting to work in the spatial transmission of COVID-19. _Journal of Geographical Systems_ 24, 5–31
* Piribauer (2016) Piribauer P (2016) Heterogeneity in spatial growth clusters. _Empirical Economics_ 51(2), 659–680
* Piribauer and Cuaresma (2016) Piribauer P and Cuaresma JC (2016) Bayesian variable selection in spatial autoregressive models. _Spatial Economic Analysis_ 11(4), 457–479
* Plummer et al. (2006) Plummer M, Best N, Cowles K and Vines K (2006) CODA: Convergence diagnosis and output analysis for MCMC. _R News_ 6(1), 7–11
* Plümper and Neumayer (2010) Plümper T and Neumayer E (2010) Model specification in the analysis of spatial dependence. _European Journal of Political Research_ 49(3), 418–442
* Qu and Lee (2015) Qu X and Lee LF (2015) Estimating a spatial autoregressive model with an endogenous spatial weight matrix. _Journal of Econometrics_ 184(2), 209–232
* Ritter and Tanner (1992) Ritter C and Tanner MA (1992) Facilitating the Gibbs sampler: The Gibbs stopper and the Griddy–Gibbs sampler. _Journal of the American Statistical Association_ 87(419), 861–868
* Tibshirani (1996) Tibshirani R (1996) Regression shrinkage and selection via the lasso. _Journal of the Royal Statistical Society: Series B (Methodological)_ 58(1), 267–288
* Vega and Elhorst (2015) Vega HS and Elhorst JP (2015) The SLX model. _Journal of Regional Science_ 55(3), 339–363
## Appendix
### Estimation strategies for alternative spatial lag specifications
In the empirical application, the paper also considers model variants with a
spatial lag on the temporal lag of the dependent variable. The considered
specification can be written as:
$\boldsymbol{y}_{t}=\boldsymbol{\mu}+\tau_{t}+\rho\boldsymbol{Wy}_{t-1}+\boldsymbol{Z}_{t}\boldsymbol{\beta}_{0}+\boldsymbol{\varepsilon}_{t},\qquad
t=1,\dots,T,$ (15)
where $\boldsymbol{y}_{t-1}$ now denotes the temporal lag of the dependent
variable and the other quantities are defined as before. From a Bayesian
perspective, it is worth noting that an additional temporal lag of the
dependent variable $\boldsymbol{y}_{t-1}$ can be treated like any other
explanatory variable and thus part of the matrix of covariates
$\boldsymbol{Z}_{t}$.
From a computational perspective, the specification in Eq. (15) is much easier
to deal with than SAR models involving a contemporaneous spatial lag in the
dependent variable (i.e. $\rho\boldsymbol{Wy}_{t}$), since the likelihood
function does not involve a determinant term.
To maintain succinct notation, we again collect the fixed effects along with
the explanatory variables in an $N\times q$ matrix $\boldsymbol{X}_{t}$ and
stack the quantities as before,
$\boldsymbol{X}=\left[\boldsymbol{X}_{1}^{\prime},\dots,\boldsymbol{X}_{T}^{\prime}\right]^{\prime}$,
with $\boldsymbol{Y}_{t}$ and $\boldsymbol{Y}_{t-1}$ denoting the stacked
$NT\times 1$ vectors of the dependent variable and its lag, respectively.
Defining
$\boldsymbol{e}_{t}=\boldsymbol{Y}_{t}-\rho(\boldsymbol{I}_{T}\otimes\boldsymbol{W})\boldsymbol{Y}_{t-1}-\boldsymbol{X\beta}$,
the likelihood reduces to a much simpler form and is given by:
$p(\mathcal{D}|\bullet)=\frac{1}{(2\pi\sigma^{2})^{NT/2}}\exp\left[-\frac{1}{2\sigma^{2}}\boldsymbol{e}_{t}^{\prime}\boldsymbol{e}_{t}\right].$
(16)
Using the same prior specifications as in the SAR case, the posterior
probabilities of including or excluding $\omega_{ij}$ conditional on the other
parameters are then given by:
$\displaystyle
p(\omega_{ij}=1|\boldsymbol{\Omega}_{-ij},\boldsymbol{\beta},\sigma^{2},\rho,\mathcal{D})\propto
p(\omega_{ij}=1)\exp\left[-\frac{1}{2\sigma^{2}}\boldsymbol{e}_{1}^{\prime}\boldsymbol{e}_{1}\right],$
(17) $\displaystyle
p(\omega_{ij}=0|\boldsymbol{\Omega}_{-ij},\boldsymbol{\beta},\sigma^{2},\rho,\mathcal{D})\propto
p(\omega_{ij}=0)\exp\left[-\frac{1}{2\sigma^{2}}\boldsymbol{e}_{0}^{\prime}\boldsymbol{e}_{0}\right],$
(18)
where $\boldsymbol{e}_{1}$ and $\boldsymbol{e}_{0}$ denote the updated vector
of residuals $\boldsymbol{e}$ when $\omega_{ij}=1$ and $\omega_{ij}=0$,
respectively. The conditional Bernoulli posterior for $\omega_{ij}$ is given
by:
$p(\omega_{ij}|\boldsymbol{\Omega}_{-ij},\boldsymbol{\beta},\sigma^{2},\rho,\mathcal{D})\sim\mathcal{BER}\left(\frac{\bar{p}_{ij}^{(1)}}{\bar{p}_{ij}^{(0)}+\bar{p}_{ij}^{(1)}}\right),$
(19)
with
$\bar{p}_{ij}^{(1)}=p(\omega_{ij}=1|\boldsymbol{\Omega}_{-ij},\boldsymbol{\beta},\sigma^{2},\rho,\mathcal{D})$
and
$\bar{p}_{ij}^{(0)}=p(\omega_{ij}=0|\boldsymbol{\Omega}_{-ij},\boldsymbol{\beta},\sigma^{2},\rho,\mathcal{D})$.
The remaining conditional posterior distributions required for the MCMC
sampler are given by:
$\displaystyle
p(\boldsymbol{\beta}|\sigma^{2},\rho,\boldsymbol{\Omega},\mathcal{D})$
$\displaystyle\sim$
$\displaystyle\mathcal{N}(\bar{\boldsymbol{b}}_{\beta},\bar{\boldsymbol{V}}_{\beta})$
(20) $\displaystyle\bar{\boldsymbol{b}}_{\beta}$ $\displaystyle=$
$\displaystyle\sigma^{-2}\bar{\boldsymbol{V}}_{\beta}\boldsymbol{X}^{\prime}[\boldsymbol{Y}-\rho(\boldsymbol{I}_{T}\otimes\boldsymbol{W})\boldsymbol{Y}_{t-1}]$
$\displaystyle\bar{\boldsymbol{V}}_{\beta}$ $\displaystyle=$
$\displaystyle\left(\sigma^{-2}\boldsymbol{X}^{\prime}\boldsymbol{X}+\underline{\boldsymbol{V}}_{\beta}^{-1}\right)^{-1}.$
$p(\sigma^{2}|\boldsymbol{\beta},\rho,\boldsymbol{\Omega},\mathcal{D})\sim\mathcal{IG}(\bar{a}_{\sigma^{2}},\bar{b}_{\sigma^{2}}),$ (21)
$\bar{a}_{\sigma^{2}}=\underline{a}_{\sigma^{2}}+NT/2,$
$\bar{b}_{\sigma^{2}}=\underline{b}_{\sigma^{2}}+\boldsymbol{e}_{t}^{\prime}\boldsymbol{e}_{t}/2.$
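A minimal sketch of the two standard conjugate draws in Eqs. (20) and (21). Function names and the `rng` argument are ours; we use the textbook inverse-gamma update $\bar{b}=\underline{b}+\boldsymbol{e}^{\prime}\boldsymbol{e}/2$, which pairs with $\bar{a}=\underline{a}+NT/2$.

```python
import numpy as np

def draw_beta(X, Y_tilde, sigma2, V_prior_inv, rng):
    """One Gibbs draw of beta from N(b_bar, V_bar), as in Eq. (20).
    Y_tilde = Y - rho (I_T kron W) Y_{t-1} is the spatially filtered
    dependent variable; V_prior_inv is the prior precision matrix."""
    V_bar = np.linalg.inv(X.T @ X / sigma2 + V_prior_inv)
    b_bar = V_bar @ (X.T @ Y_tilde) / sigma2
    return rng.multivariate_normal(b_bar, V_bar)

def draw_sigma2(e, a_prior, b_prior, rng):
    """One Gibbs draw of sigma^2 from IG(a_bar, b_bar), as in Eq. (21),
    with the standard conjugate update b_bar = b + e'e/2."""
    a_bar = a_prior + len(e) / 2.0
    b_bar = b_prior + 0.5 * (e @ e)
    # An IG(a, b) draw is the reciprocal of a Gamma(a, scale=1/b) draw
    return 1.0 / rng.gamma(a_bar, 1.0 / b_bar)
```

With a nearly flat prior and a very small error variance, the posterior for beta concentrates on the least-squares solution, which makes the sampler easy to sanity-check.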
Unlike the other parameters, the conditional posterior for $\rho$ again takes no well-known form and can be sampled using a griddy-Gibbs or a tuned Metropolis-Hastings step:
$p(\rho|\boldsymbol{\beta},\sigma^{2},\boldsymbol{\Omega},\mathcal{D})\propto p(\rho)\exp\left[-\frac{1}{2\sigma^{2}}\boldsymbol{e}_{t}^{\prime}\boldsymbol{e}_{t}\right].$ (22)
When using a normal prior distribution for $p(\rho)$, it is worth noting that
the spatial lag can be simply captured in the matrix of explanatory variables,
such that the parameter $\rho$ is incorporated in the vector
$\boldsymbol{\beta}$. However, in order to pay particular attention to model
stability as well as prior consistency to the benchmark SAR specification in
the main body of the paper, we similarly employ a beta prior for $\rho$, which
results in the non-standard form of the conditional posterior for $\rho$. (When considering specifications with a spatial lag in the explanatory variables, typically referred to as SLX models, the MCMC sampling scheme is rather similar, which also considerably reduces the computational burden compared to SAR frameworks.)
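For concreteness, the griddy-Gibbs step for $\rho$ in Eq. (22) can be sketched as follows. This is our own illustrative implementation: the conditional posterior is evaluated on a grid of candidate values and one grid point is drawn from the resulting discretised distribution.

```python
import numpy as np

def draw_rho_griddy(rho_grid, log_prior, Y, WY_lag, Xb, sigma2):
    """Griddy-Gibbs draw for rho, sketching Eq. (22).

    rho_grid  : candidate values of rho
    log_prior : log p(rho) evaluated on rho_grid
    Y, WY_lag : stacked dependent variable and spatial lag (I_T kron W) Y_{t-1}
    Xb        : current fitted values X @ beta
    """
    log_post = np.empty_like(rho_grid)
    for k, r in enumerate(rho_grid):
        e = Y - r * WY_lag - Xb          # residuals at this candidate rho
        log_post[k] = log_prior[k] - 0.5 / sigma2 * (e @ e)
    # Normalise on the log scale, then sample one grid point
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    return np.random.choice(rho_grid, p=w)
```

When the error variance is small, the discretised posterior collapses onto the grid point closest to the data-generating value, which provides a simple correctness check.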
### Empirical results for additional model specifications and Monte Carlo
diagnostics
This section provides results based on alternative model specifications. We provide estimates and inferences for three different specifications. First, we consider a specification where the dependent variable is the log level of infection rates rather than the (biweekly) log difference (all else being equal). These results (labelled level specifications) are presented in Table A1 and Figure A1 and overall appear very similar to the benchmark specifications. Note that the interpretation of the initial-infections variable in the level specifications differs from the benchmark case using log differences as the dependent variable: in the former, parameters smaller than unity (as opposed to negative parameters in the benchmark specifications) point towards convergence. Second, we consider specifications without row-standardization of the spatial weight matrix. A summary of the estimation results and the posterior results for the spatial weight matrix is provided in Table A2 and Figure A2, respectively. From an econometric point of view, estimation is the same as for the row-stochastic counterparts, simply without conducting the standardization in the MCMC sampler. However, several caveats arise in this case. Most notably, row-standardization of $\boldsymbol{W}$ has the great advantage that the parameter space for the spatial autoregressive parameter $\rho$ is clearly defined, such that the inverse $(\boldsymbol{I}_{N}-\rho\boldsymbol{W})^{-1}$ exists. To ensure stationarity of the MCMC sampler without row-standardization, we therefore implement a rejection step that discards draws resulting in singular solutions. Third, to show the merits of our approach in highly over-parametrized environments, we present a robustness check with only $T=10$, where we reduce the end date of the dependent variable accordingly. Results are presented in Table A3 and Figure A3. We have moreover tried various other robustness checks, including versions using a shorter time lag of only seven days or even shorter time periods, which produce similar results.
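The row-standardization and the singularity rejection check discussed above can be sketched as follows; the function names are ours and the check is a plain rank test rather than the authors' exact implementation.

```python
import numpy as np

def row_standardize(W):
    """Row-standardise a spatial weight matrix so each row sums to one
    (rows with no neighbours are left as zeros)."""
    s = W.sum(axis=1, keepdims=True)
    return np.divide(W, s, out=np.zeros_like(W, dtype=float), where=s > 0)

def is_nonsingular(W, rho):
    """Rejection check used without row-standardisation: accept a draw
    only if (I_N - rho W) is invertible."""
    N = W.shape[0]
    return np.linalg.matrix_rank(np.eye(N) - rho * W) == N
```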
                      $\boldsymbol{Wy}_{t}$                    $\boldsymbol{Wy}_{t-14}$
                      Fixed            Sparsity          Fixed            Sparsity
                      Mean    Std.Dev. Mean    Std.Dev.  Mean    Std.Dev. Mean    Std.Dev.
Initial infections    0.0438  0.0126   0.0697  0.0097    0.0222  0.0109   0.0532  0.0152
Stringency           -0.2633  0.0445  -0.2089  0.0498    0.0609  0.0398  -0.3677  0.0507
Precipitation        -0.0205  0.0454  -0.0077  0.0405    0.0222  0.0517   0.0075  0.0572
Temperature          -0.0059  0.0020  -0.0057  0.0018   -0.0032  0.0025  -0.0014  0.0027
$\rho$                0.9188  0.0139   0.8468  0.0168    0.9510  0.0155   0.9384  0.0166
$\sigma^{2}$          0.0391  0.0029   0.0353  0.0026    0.0579  0.0044   0.0592  0.0043
Avg. # neighbours     7.9544           3.6625            3.7096           3.1224
Fixed effects         Yes              Yes               Yes              Yes
$N$                   27               27                27               27
$T$                   19               19                19               19
Table A1: Estimation results for level specifications
Notes: Posterior quantities based on 5,000 MCMC draws, where the first 2,500 were discarded as burn-ins. Values in bold denote significance under a 90% credible interval. Level specifications use (all else being equal) log levels of infection rates rather than log differences as the dependent variable.
Figure A1: Posterior inclusion probabilities of linkages for level
specifications
(a) Fixed ($\underline{p}=1/2$); $\boldsymbol{Wy}_{t}$
(b) Sparsity ($\underline{m}=7$); $\boldsymbol{Wy}_{t}$
(c) Fixed ($\underline{p}=1/2$); $\boldsymbol{Wy}_{t-14}$
(d) Sparsity ($\underline{m}=7$); $\boldsymbol{Wy}_{t-14}$
Notes: Posterior inclusion probabilities of spatial links based on $5,000$
MCMC draws. Inclusion probabilities 0.50-0.75 (little evidence for inclusion)
are coloured grey. Strong evidence for inclusion (>0.75) indicated by black
colour. Level specifications refer to specifications by using (all else being
equal) log levels of infection rates rather than log-differences as the
dependent variable.
                      $\boldsymbol{Wy}_{t}$                    $\boldsymbol{Wy}_{t-14}$
                      Fixed            Sparsity          Fixed            Sparsity
                      Mean    Std.Dev. Mean    Std.Dev.  Mean    Std.Dev. Mean    Std.Dev.
Initial infections   -0.9538  0.0108  -0.9715  0.0093   -0.9718  0.0217  -0.9655  0.0223
Stringency           -0.4062  0.0472  -0.5259  0.0360   -0.4856  0.0739  -0.3712  0.0738
Precipitation         0.0084  0.0365  -0.0304  0.0357   -0.0966  0.1023  -0.1284  0.1071
Temperature          -0.0025  0.0017  -0.0018  0.0017   -0.0116  0.0047  -0.0172  0.0048
$\rho$                0.0801  0.0018   0.0960  0.0026    0.3624  0.0452   0.2526  0.0484
$\sigma^{2}$          0.0264  0.0018   0.0266  0.0018    0.2318  0.0151   0.2457  0.0164
Avg. # neighbours    10.8863           6.5014            2.5103           2.0225
Fixed effects         Yes              Yes               Yes              Yes
$N$                   27               27                27               27
$T$                   19               19                19               19
Table A2: Estimation results for specifications without row-standardization of the weight matrix
Notes: Posterior quantities based on 5,000 MCMC draws, where the first 2,500 were discarded as burn-ins. Values in bold denote significance under a 90% credible interval.
Figure A2: Posterior inclusion probabilities of linkages for specifications
without row-standardization of the weight matrix
(a) Fixed ($\underline{p}=1/2$); $\boldsymbol{Wy}_{t}$
(b) Sparsity ($\underline{m}=7$); $\boldsymbol{Wy}_{t}$
(c) Fixed ($\underline{p}=1/2$); $\boldsymbol{Wy}_{t-14}$
(d) Sparsity ($\underline{m}=7$); $\boldsymbol{Wy}_{t-14}$
Notes: Posterior inclusion probabilities of spatial links based on $5,000$
MCMC draws. Inclusion probabilities 0.50-0.75 (little evidence for inclusion)
are coloured grey. Strong evidence for inclusion (>0.75) indicated by black
colour.
                      $\boldsymbol{Wy}_{t}$                    $\boldsymbol{Wy}_{t-14}$
                      Fixed            Sparsity          Fixed            Sparsity
                      Mean    Std.Dev. Mean    Std.Dev.  Mean    Std.Dev. Mean    Std.Dev.
Initial infections   -0.9834  0.0161  -1.0090  0.0122   -1.0234  0.0207  -1.0016  0.0175
Stringency           -0.2595  0.0853  -0.4966  0.0672   -0.1537  0.1667   0.0200  0.0822
Precipitation        -0.0067  0.0453   0.0131  0.0440   -0.1114  0.0764  -0.1084  0.0724
Temperature           0.0030  0.0022  -0.0022  0.0024   -0.0093  0.0038  -0.0062  0.0037
$\rho$                0.7148  0.0111   0.4405  0.0164    0.8910  0.0317   0.8930  0.0359
$\sigma^{2}$          0.0183  0.0019   0.0193  0.0019    0.0570  0.0060   0.0513  0.0064
Avg. # neighbours    10.4409           3.5630            3.1953           2.4116
Fixed effects         Yes              Yes               Yes              Yes
$N$                   27               27                27               27
$T$                   10               10                10               10
Table A3: Estimation results for specifications with $T=10$
Notes: Posterior quantities based on 5,000 MCMC draws, where the first 2,500 were discarded as burn-ins. Values in bold denote significance under a 90% credible interval.
Figure A3: Posterior inclusion probabilities for specifications with $T=10$
(a) Fixed ($\underline{p}=1/2$); $\boldsymbol{Wy}_{t}$
(b) Sparsity ($\underline{m}=7$); $\boldsymbol{Wy}_{t}$
(c) Fixed ($\underline{p}=1/2$); $\boldsymbol{Wy}_{t-14}$
(d) Sparsity ($\underline{m}=7$); $\boldsymbol{Wy}_{t-14}$
Notes: Posterior inclusion probabilities of spatial links based on $5,000$
MCMC draws. Inclusion probabilities 0.50-0.75 (little evidence for inclusion)
are coloured grey. Strong evidence for inclusion (>0.75) indicated by black
colour.
Figure A4: Diagnostic plots for a Monte Carlo run based on $N=20$ and $T=10$
(a) Fixed ($\underline{p}=1/2$)
$\tilde{\rho}=0.3$
(b) Sparsity ($\underline{m}=N/10$)
$\tilde{\rho}=0.3$
(c) $\tilde{\rho}=0.5$
(d) $\tilde{\rho}=0.5$
(e) $\tilde{\rho}=0.8$
(f) $\tilde{\rho}=0.8$
Notes: Trace plots and posterior densities based on $1,000$ MCMC draws, where
the first $500$ were discarded as burn-ins. Dashed lines denote prior
distributions.
# All-optical beam steering using the polariton lighthouse effect
Samuel M.H. Luk, Hadrien Vergnet, Ombline Lafont, Przemyslaw Lewandowski, Nai H. Kwong, Elisabeth Galopin, Aristide Lemaitre, Philippe Roussignol, Jérôme Tignon, Stefan Schumacher, Rolf Binder, Emmanuel Baudin
###### Abstract
We demonstrate theoretically and experimentally that a specifically designed
microcavity driven in the optical parametric oscillation regime exhibits
lighthouse-like emission, i.e., an emission focused around a single direction.
Remarkably, the emission direction of this micro-lighthouse is continuously
controlled by the linear polarization of the incident laser, and angular beam
steering over 360° is demonstrated.
Theoretically, this unprecedented effect arises from the interplay between the
nonlinear optical response of microcavity exciton-polaritons, the difference
in the subcavities forming the microcavity, and the rotational invariance of
the device.
###### keywords:
Nonlinear Optics, All-Optical Signal Processing, Semiconductor Microcavity,
Polaritons
Department of Physics, University of Arizona, Tucson, AZ 85721, USA
Laboratoire de Physique de l’Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université Paris-Diderot, Sorbonne Paris Cité, 24 rue Lhomond, 75005 Paris, France
Physics Department and Center for Optoelectronics and Photonics Paderborn (CeOPP), Universität Paderborn, Warburger Strasse 100, 33098 Paderborn, Germany
Wyant College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA
Centre de Nanosciences et de Nanotechnologies (C2N), CNRS, Université Paris Sud, Université Paris-Saclay, 91120 Palaiseau, France
## 1 Introduction
Lighthouses have been used for millennia to inform ships of their position at sea. The lighthouse design possesses two advantages (Fig. 1a): its highly directive radiation pattern limits the power required to reach remote locations, and the dynamic control of the emission converts spatial information into time information, and vice versa. Such designs are used in everyday life in bar-code readers, phased-array radars and beamformer sonars.
Non-mechanical lighthouse designs are attractive because they avoid mechanical fatigue, increasing the lifetime, simplicity and speed of devices. There are two strategies to reach this goal. The first relies on the interference of a controllable array of antennas or light modulators. This method is inspired by phased-array radar technology 1 and usually requires complex photonic circuits at visible wavelengths 2, 3, 4, 5, controlled electronically by phase shifters.
Figure 1: Geometry of the lighthouse emission (a) and of the planar double
microcavity (b) composed of three distributed Bragg reflectors (DBR)
surrounding two sets of quantum wells (QWs). The microcavity is excited at
normal incidence from the top and the emission direction is governed by the
incident linear polarization of the resonant excitation. (c) Dispersion of the
two lower polariton branches of the planar double microcavity, and
representation of the triply degenerate optical parametric oscillation process
at work in the lighthouse effect between LP1 and the LP2 elastic circle.
Another approach relies on using the nonlinear light-matter response to steer light, opening the way to all-optical operation. Steering can be achieved by using the light-field intensity (usually of an auxiliary pump field) to modify the field propagation within the device. Beam steering has been achieved in this way to steer solitons 6, 7, intense fields in biased photorefractive crystals 8, and femtosecond pulses on metallic mirrors 9, or to control the emission of spatially multimode lasers by using the nonlinear gain response 10, 11.
The nonlinear approach has the advantage of much simpler and more robust designs, and it is also usually much faster. But since the direction is not controlled independently of the intensity, it is hard to imagine practical applications, e.g., in information technology, ranging or microscopy. In this letter we show
that, rather surprisingly, the linear polarization is sufficient to achieve
continuous control of the beam emission. We demonstrate the lighthouse-like
emission of a planar multiple microcavity device (Fig. 1). This simple device
is composed of two coupled microcavities containing multiple quantum wells 12.
The strong coupling between cavity photons and quantum well excitons is
reached and the collective excitations of the device are microcavity exciton-
polaritons 13, 14, 15, which are part-light, part-matter quasiparticles and
therefore exhibit strong nonlinear interactions 16, 17, 18, 19, 20, 21. The
dispersion of the two lower polariton branches (LP) is shown in figure 1c:
When we resonantly pump the LP2 at normal incidence, a triply degenerate
$\chi^{3}$ parametric scattering process occurs towards LP1, and light is
emitted at a finite emission angle $\theta$.
In principle, due to the rotational invariance of the polaritonic dispersion,
parametric scattering can occur in any direction, but we predict and observe
that in the nonlinear system the emission direction is controlled by the
linear pump polarization in the optical parametric oscillation (OPO) regime
allowing beam steering over an interval of the azimuth angle $\phi$ of
$360\text{\,}\mathrm{\SIUnitSymbolDegree}$.
## 2 Theory
To describe the scattering of LP2 polaritons at zero in-plane wave vector onto
the LP1 elastic circle, we utilize the nonlinear coupled cavity and exciton
field theory discussed in Ref. 22.
The core ingredients to this theory are interband polarization functions,
obtained from a fermionic quantum-field theory and specialized to excitonic
polarizations. The Coulomb interaction between charge carriers gives rise to
spin-dependent exciton-exciton interactions and thus to two-exciton
correlations, as for example obtained in the dynamics-controlled truncation
formalism 23, 24. The nonlinear optical response of the two subcavities
follows from a numerical solution of a coupled mode theory including the
excitonic interband polarizations coupled to single-mode equations for the
light fields at the positions of the quantum wells, Eqs. (1)-(4) in Ref. 22,
with parameter values, in the notation of Ref. 22: $\gamma_{x}=\gamma_{c}=0.2$ meV, $m_{TM}=0.23$ meV ps$^{2}$ µm$^{-2}$, $m_{TE}=1.001\,m_{TM}$, $\Omega_{x}=6.35$ meV, $\Omega_{c}=5.05$ meV, $T^{++}=5.69\times 10^{-3}$ meV µm$^{2}$, $T^{+-}=-T^{++}/3$, $A_{\mathrm{PSF}}=2.594\times 10^{-4}$ µm$^{2}$; the detuning of the cavity mode from the bare exciton frequency is $\Delta_{P}=-4.3086$ meV.
The nonlinearities entering the theory are phase-space filling and spin-
dependent exciton-exciton T-matrices, which are different in the co-circular
$T^{++}$ and cross-circular $T^{+-}$ scattering channels. The transverse-
electric (TE) and transverse-magnetic (TM) bare cavity modes are assumed to
have parabolic dispersion with different curvatures (or cavity effective
masses). This results in polaritonic TE-TM splitting 25. In the experimental
conditions, the two subcavities are slightly different, which leads to a
different coupling with the external pump. A quantitative estimate of this
difference follows from a linear transfer matrix simulation of the entire
structure and as a result the external pump field is chosen to be larger (by a
factor of $z=-1.52$) in the first subcavity than in the second subcavity. The
factor $z$ is defined by the relation
$p^{x}_{\text{pump},1}=zp^{x}_{\text{pump},2}$, where $p^{x}_{\text{pump},i}$
are pump parameters for the two subcavities $i=1,2$. The relation between
$p^{x}_{\text{pump},j}$ and $R^{\pm}_{\text{pump},i}$ in Eqs. (1) of Ref. 22
is as follows. We first solve the steady-state versions of Eqs. (1) and (2) of
Ref. 22, Fourier-transform from real (configuration) to wave vector space and
take $k=0$ to describe the spatially homogeneous pump process. These equations
read
$E^{\pm}_{\text{pump},i}=-(\Delta_{p}+i\gamma_{x}-T^{++}|p^{\pm}_{\text{pump},i}|^{2}-T^{+-}|p^{\mp}_{\text{pump},i}|^{2})p^{\pm}_{\text{pump},i}/[\Omega_{x}(1-2A_{PSF}|p^{\pm}_{\text{pump},i}|^{2})]$
and
$R^{\pm}_{\text{pump},i}=-(\Delta_{p}+i\gamma_{c})E^{\pm}_{\text{pump},i}+\Omega_{c}E^{\pm}_{\text{pump},j}+\Omega_{x}p^{\pm}_{\text{pump},i}$.
For given $p^{\pm}_{\text{pump},i}$ they determine $R^{\pm}_{\text{pump},i}$.
For our linearly polarized pump, we have
$p^{\pm}_{\text{pump},i}=\frac{1}{\sqrt{2}}e^{\mp i\phi}p^{x}_{\text{pump},i}$
where $\phi$ is the pump polarization angle with respect to the x-axis. More
details of the pump simulation are given in Sec. 2.3 of Ref. 26. We use $p^{x}_{\text{pump},1}=12$ µm$^{-1}$ and $p^{x}_{\text{pump},2}=p^{x}_{\text{pump},1}/z$. For a linearly polarized
pump with negative detuning from the bare exciton resonance, because of the
spin-dependent exciton-exciton interactions, the polarization channel cross-
linear to the pump’s polarization is favored for instability (OPO) of pump-
induced LP1 polaritons. Together with the TE-TM cavity mode splitting this
leads to a spatial anisotropy for polariton scattering. Taking also into
account the effect of the cavity difference in the double-cavity system, here
we find stable 2-spot patterns at pump powers not too far above the OPO
threshold. The spatial orientation of these 2-spot patterns is found to be
parallel to the vectorial polarization direction of the pump. A representative
numerical solution that has reached a steady state for each pump polarization
is shown in Fig. 2.
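As a numerical sketch (not the authors' code), the steady-state pump relations and the parameter values quoted above can be evaluated as follows; `pump_fields` and its arguments are illustrative names, and all energies are in meV.

```python
import numpy as np

# Parameter values quoted in the text (energies in meV)
gamma_x = gamma_c = 0.2
Omega_x, Omega_c = 6.35, 5.05
Tpp = 5.69e-3          # co-circular T-matrix, meV um^2
Tpm = -Tpp / 3         # cross-circular T-matrix
A_psf = 2.594e-4       # phase-space filling, um^2
Delta_p = -4.3086      # cavity-exciton detuning
z = -1.52              # subcavity asymmetry factor

def pump_fields(px1, phi):
    """Given the pump parameter p^x in subcavity 1 and the linear
    polarization angle phi, return the circular components p^+-, the
    fields E^+- and the external drives R^+- for both subcavities."""
    px = np.array([px1, px1 / z])                            # subcavities 1, 2
    p = np.array([px / np.sqrt(2) * np.exp(-1j * phi),       # sigma+
                  px / np.sqrt(2) * np.exp(+1j * phi)])      # sigma-
    E = np.empty_like(p, dtype=complex)
    for s, so in ((0, 1), (1, 0)):                           # spin / opposite spin
        num = Delta_p + 1j * gamma_x - Tpp * np.abs(p[s])**2 - Tpm * np.abs(p[so])**2
        E[s] = -num * p[s] / (Omega_x * (1 - 2 * A_psf * np.abs(p[s])**2))
    # R couples the two subcavities: the Omega_c term uses the other cavity (index j)
    R = -(Delta_p + 1j * gamma_c) * E + Omega_c * E[:, ::-1] + Omega_x * p
    return p, E, R
```

For a pump polarized along the x-axis ($\phi=0$), both circular components are equal, while for $\phi=\pi/2$ they differ by a sign, consistent with $p^{\pm}=\frac{1}{\sqrt{2}}e^{\mp i\phi}p^{x}$.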
Figure 2: Theoretical simulation of the polaritonic lighthouse effect. (a)
Two-spot far-field pattern detected in the cross-linear polarization channel
for resonant excitation with a linearly polarized cw laser (polarization
direction in this example is 108∘ relative to the x-axis). (b) Relative
orientation (azimuth) of the two emission intensity maxima with respect to the
linear vectorial polarization of the pump. When rotating the pump polarization
orientation, the 2-spot emission rotates with it.
The pattern shape depends on the choice of cavity asymmetry and pump power: The 2-spot pattern of figure 2 is obtained for a particular choice of the cavity asymmetry ($z\neq 1$) and the pump power. If we choose a symmetric cavity or larger pump powers, we can also obtain a 6-spot pattern with a 60° angle between the output fields. Which angles can occur in principle is determined by the nonlinearity, in our case a third-order (or $\chi^{(3)}$) nonlinearity. The physical processes giving rise to the 180° (2-spot) and 60° (6-spot) angles are illustrated in Ref. 27. For a conventional single-cavity design, we have also found the possibility of 90° angles 28 (4-spot) in the case of linearly polarized pump beams. These 90° patterns are made possible by
Interestingly, Whittaker et al. report the experimental observation of a diverse family of polariton patterns, including ones with an odd number of lobes, using circularly polarized excitation 29. In their experiment as well, a pattern rotation was observed by changing from left to right circularly polarized pumping. We finally note that our model is for an
optically isotropic crystal without strain or defects, so that the crystal
orientation does not come into play in the selection of patterns and in their
orientation.
## 3 Experimental setup and observations
Figure 3: Experimental observation of the polaritonic lighthouse effect. (a) Scheme of the experimental setup: the linear pump polarization is controlled with a half-wave plate, and a field stop is used in the Fourier space (F.S.) to filter the direct pump reflection on the sample. (b), (c) Examples of the far field observed for an incoming linearly polarized pump beam oriented at 32° (resp. 136°) with respect to the [100] axis. Two spots are observed collinear to the polarization direction. (d), (e) Corresponding radiation patterns at 32° (gray) (resp. 136°). The beam width is about 14°, extinction reaches up to 10 dBi, and the direction of emission is controlled with a 5° accuracy. The beige disk represents the direction-independent background due to elastic scattering. (f) Map of the emission amplitude as a function of the pump linear polarization angle and of the emission angle relative to the pump polarization.
The sample, which has been used in several studies 12, 30, 31, 32, consists of two coupled $\lambda/2$ Ga$_{0.05}$Al$_{0.95}$As cavities embedded between three Ga$_{0.05}$Al$_{0.95}$As/Ga$_{0.8}$Al$_{0.2}$As distributed Bragg reflectors with 25 (back), 17.5 (middle), and 17.5 (front) pairs, respectively (fig. 1b). The nominal Q factor is $10^{5}$, and the middle Bragg mirror induces a 10 meV coupling between the bare cavity modes. In each cavity, three sets of four 7 nm GaAs QWs are inserted at the antinodes of the field, resulting in a 13 meV Rabi splitting. Experiments are performed at $T=6$ K; the sample is excited at normal incidence, in resonance with lower polariton branch 2 (see fig. 1c), with a linearly polarized single-mode Ti:Sapphire laser. The pump polarization is controlled by a half-wave plate, and the far field is recorded in reflection geometry in the polarization orthogonal to the pump. The pump reflection is spatially filtered.
With a 50 µm-wide resonant excitation beam, we observe the optical parametric oscillation (OPO) regime above an incoming laser power of 60 mW and the structuration of the far field into a variety of light patterns 12. The two-spot patterns are the simplest of them and are observed just above the OPO threshold, when the LP2 branch is slightly negatively detuned with respect to the bare exciton energy (−2 meV).
Figure 3 shows the observed far-field emission of the microcavity (fig. 3b,c) and the corresponding radiation patterns (fig. 3d,e), observed at a radial angle $\theta$ of 20°. This lighthouse-like emission is always directed along the linear pump polarization, independently of the crystal orientation, as evidenced by the direction emission map in fig. 3f.
## 4 Discussion
Let us now discuss the experimental observations in light of the theoretical
prediction of a lighthouse-like emission. Since the polariton lighthouse
effect implies both a directional and controllable emission, we are going to
analyze these two aspects.
The radiation patterns are approximately symmetric and highly directional: In the radial direction, the lighthouse emission is well focused, with a beam width $\Delta\theta$ of only 0.4°. This feature results from the strong constraint on the radial emission set by the phase-matching conditions on resonant parametric scattering between the two lower polaritonic branches. In the azimuthal direction, the beam width is $\Delta\phi=14^{\circ}$, independent of the linear polarization of the excitation.
In contrast, the theoretical prediction on figure 2 suggests much tighter
azimuthal focusing is possible. This discrepancy probably originates from the
effect of elastic scattering on line defects within the DBRs, which are
signaled by a characteristic speckle signature in the elastically scattered
signal (not shown). Indeed, whereas polariton-polariton parametric scattering
in a uniform planar microcavity preserves total momentum, additional random
elastic scattering lifts this restriction.
Most of the emission is cast into the two main lobes. The apparent imbalance of the main lobes results from a partial field stop within the large-NA collecting objective. Stray light accounts for −7 to −10 dBi and has two origins. First, resonant elastic scattering on DBR defects, which forms a direction-independent background (beige disk in fig. 3d,e) 31. Second, competing parametric scattering pathways, which are responsible for the formation of structured side lobes. Figures 3d and 3e give a representative example of a radiation pattern (incoming polarization at 32° with respect to the [100] crystalline axis) and the most selective configuration (136°), respectively. From figure 3f, we see that side lobes are most pronounced when the incoming polarization is oriented along the [100] direction. This anisotropic response is most probably the result of residual built-in strain effects within the heterostructure 33, 34. We note that a clear lighthouse effect is evidenced over 90% of the incoming polarization angles and that the two-spot emission always dominates the side-lobe emission.
The angular control of the lighthouse emission by the linear polarization is achieved with a remarkably high accuracy of 5°. Previous work using InGaAs/GaAs microcavities presented strong anisotropic line defects and mosaicity, which pinned the OPO emission to the semiconductor crystalline axes 35. We therefore conclude that the small lattice mismatch of AlGaAs/GaAs microcavities is critical to achieving controllability of the emission.
The maximum emitted power in the main pattern reaches 200 µW, which is only 0.4% of the incoming power (60 mW, necessary to reach the OPO threshold). This currently low external power efficiency (emitted power/incident power) is due to the fact that a strong Kerr effect occurs at resonance. As a result, the microcavity LP2 resonance is repelled from the excitation laser energy, and most of the incoming power is reflected off the first DBR and does not enter the microcavity 36. Taking into account the low injection efficiency, we estimate that the internal power efficiency (emitted power/transmitted power) is in the 25% range, whereas the theoretical limit is 50% due to power balance between the parametrically scattered beams in the OPO regime.
## 5 Conclusion
We have shown that an optical microcavity can present lighthouse-like emission
continuously controlled by the linear vectorial polarization of the pump laser
itself. The use of vectorial polarization as a control parameter implies that
this function can only be implemented in a nonlinear optical device.
We used a coupled-cavity theory combined with a many-particle theory describing exciton-exciton interactions to show that the polariton lighthouse effect emerges when a minimal set of physical ingredients is included, namely the nonlinear optical response of microcavity exciton-polaritons, the difference between the subcavities forming the microcavity, and the rotational invariance of the device. We observed the lighthouse effect with a device made of quantum wells embedded in a double microcavity and evidenced that a two-spot pattern oriented along the linear incoming pump polarization is observed for most azimuth angles, with good control of the emission direction.
With our present cavity, we observe some deviations from the ideal lighthouse emission, such as parasitic stray light originating from elastic scattering on line defects and from built-in strain, which are inherent to real-world devices. Progress in heterostructure quality and power efficiency could turn this polariton lighthouse effect into an original and useful all-optical beam-steering method of great interest for microscopy or LIDAR.
We acknowledge that this project has received funding from the Agence
Nationale de la Recherche (ANR) (ANR-16-CE24-0023 TeraMicroCav), the US
National Science Foundation (NSF) (DMR 1839570), and the Deutsche
Forschungsgemeinschaft (SCHU 1980/5-2 and Heisenberg grant 270619725). RB
acknowledges CPU time at HPC (University of Arizona); SS acknowledges
computing time at Paderborn Center for Parallel Computing ($\mathrm{PC^{2}}$).
The microcavity geometry is detailed in section 3. During fabrication by molecular beam epitaxy, a wedge in the cavity thicknesses is introduced by stopping the sample rotation during growth of the cavity layers. As a consequence, the sample has a natural gradient in the bare cavity energy, while the confined exciton energy is mostly unaffected. Note that the upper and lower cavities have parallel energy gradients, so that the coupled cavity modes inherit the same energy gradient. By recording the photoluminescence of lower polariton branches 1 and 2 across the sample surface, we can reconstruct the polariton anticrossing curve and characterize the key polariton properties of the sample. Figure 4a shows a typical dispersion curve obtained by photoluminescence, where only the two lower polariton branches (labelled LPB1 and LPB2) are visible. Lower polariton energies at normal incidence are represented as a function of the position on the sample along the gradient direction in figure 4b; the position corresponding to panel (a) is indicated by the green points.
Figure 4: Characterization of the double microcavity exciton-polaritons (a)
Typical energy dispersion for the double microcavity sample. Only the two
lower polaritons branches are visible (LPB1 and LPB2). The pump is resonant
with LPB2 at normal incidence. The white lines correspond to a fourth order
polynomial fit of the branches to obtain as precisely as possible the energy
minima of LPB1 and LPB2. (b) Anticrossing curve obtained from
photoluminescence spectra at various positions onto the sample. The two green
points correspond to the position of panel (a). The dashed black line
corresponds to the bare exciton energy as a function of the position on the
sample. The blue and red dashed lines correspond to the optical cavity modes’
energies as a function of the position on the sample.
The two lower polariton branches of figure 4b are fitted by
$E_{\mathrm{pol}}^{(i)}=\frac{1}{2}(E_{\mathrm{cav}}^{(i)}+E_{\mathrm{exc}}+\sqrt{(E_{\mathrm{cav}}^{(i)}-E_{\mathrm{exc}})^{2}+4\Omega_{R}^{2}})$
where $E_{\mathrm{cav}}^{(i)}$ is the coupled cavity mode forming the
polariton of interest, and both $E_{\mathrm{cav}}$ and $E_{\mathrm{exc}}$
evolve linearly with position on the sample. From these data we infer a
cavity energy gradient of $3.6\pm 0.1\,\mathrm{meV\,mm^{-1}}$, an exciton
energy of $1.606\pm 0.003\,\mathrm{eV}$, and a Rabi coupling
$\Omega_{R}=6.4\pm 0.8\,\mathrm{meV}$, which is half the Rabi splitting
between the lower and upper polaritons. From the coupled cavity splitting, we
also deduce the coupling between the upper and lower microcavities,
$\Omega_{c}=5.1\pm 0.7\,\mathrm{meV}$.
# An Explainable AI System for Automated COVID-19 Assessment and Lesion
Categorization from CT-scans
Matteo Pennisi, Isaak Kavasidis, Concetto Spampinato, Simone Palazzo (DIEEI,
University of Catania, Catania, Italy); Vincenzo Schininà, Massimo Cristofaro,
Paolo Campioni, Elisa Pianura, Federica Di Stefano, Ada Petrone, Fabrizio
Albarello, Giuseppe Ippolito (National Institute for Infectious Diseases
“Lazzaro Spallanzani”, Rome, Italy); Francesco Rundo (STMicroelectronics, ADG
Central R&D, Catania, Italy); Salvatore Cuzzocrea, Sabrina Conoci
(ChimBioFaram Department, University of Messina, Messina, Italy)
(November 2020)
###### Abstract
COVID-19 infection, caused by the SARS-CoV-2 pathogen, is a catastrophic
pandemic outbreak all over the world, with an exponential increase in
confirmed cases and, unfortunately, deaths. In this work we propose an
AI-powered pipeline,
based on the deep-learning paradigm, for automated COVID-19 detection and
lesion categorization from CT scans. We first propose a new segmentation
module aimed at automatically identifying lung parenchyma and lobes. Next, we
combine this segmentation network with classification networks for COVID-19
identification and lesion categorization. We compare the obtained
classification results with those obtained by three expert radiologists on a
dataset consisting of 162 CT scans. Results showed a sensitivity of 90% and a
specificity of 93.5% for COVID-19 detection, outperforming those yielded by
the expert radiologists, and an average lesion categorization accuracy of over
84%. Results also show that a significant role is played by prior lung and
lobe segmentation, which allowed us to enhance performance by over 20 percentage
points. The interpretation of the trained AI models, moreover, reveals that
the most significant areas for supporting the decision on COVID-19
identification are consistent with the lesions clinically associated to the
virus, i.e., crazy paving, consolidation and ground glass. This means that the
artificial models are able to discriminate a positive patient from a negative
one (both controls and patients with interstitial pneumonia tested negative to
COVID) by evaluating the presence of those lesions into CT scans. Finally, the
AI models are integrated into a user-friendly GUI to support AI explainability
for radiologists, which is publicly available at
http://perceivelab.com/covid-ai. The whole AI system is unique since, to the
best of our knowledge, it is
the first publicly available AI-based software that attempts to explain to
radiologists what information is used by AI methods for making decisions, and
that proactively involves them in the decision loop to further improve the
COVID-19 understanding.
## 1 Introduction
At the end of 2019 in Wuhan (China) several cases of an atypical pneumonia,
particularly resistant to the traditional pharmacological treatments, were
observed. In early 2020, the COVID-19 virus [1] was identified as the
pathogen responsible for this unusual pneumonia. Since then, COVID-19 has
spread all around the world, hitting to date about 32 million people (with
about 1M deaths) and significantly stressing healthcare systems in several
countries. Since the beginning, it has been noted that about 20% of infected
subjects progress to severe disease, including pneumonia and respiratory
failure, and that around 2% of cases end in death [2].
Currently, the standard diagnosis of COVID-19 is de facto based on a
biomolecular test through the Real-Time Polymerase Chain Reaction (RT-PCR)
[3, 4]. However, although widely used, this biomolecular method is
time-consuming and not entirely accurate, suffering from a large number of
false negatives [5].
Recent studies have outlined the effectiveness of radiology imaging through
chest X-ray and mainly Computed Tomography (CT) given the pulmonary
involvement in subjects affected by the infection [5, 6]. Given the extent of
the infection and the number of cases that emerge worldwide daily, which call
for fast, robust and medically sustainable diagnosis, CT appears suitable for
large-scale screening, given its higher resolution w.r.t.
X-ray. In this scenario, artificial intelligence may play a fundamental role
to make the whole diagnosis process automatic, reducing, at the same time, the
efforts required by radiologists for visual inspection [7].
In this paper, thus, we present an innovative artificial intelligent approach
to achieve both COVID-19 identification and lesion categorization (ground
glass, crazy paving and consolidation), which are instrumental for evaluating
lung damage and for prognosis assessment. Our method relies only on radiological
image data avoiding the use of additional clinical data in order to create AI
models that are useful for large-scale and fast screening with all the
subsequent benefits for a favorable outcome. More specifically, we propose an
innovative automated pipeline consisting of 1) lung/lobe segmentation, 2)
COVID-19 identification and interpretation and 3) lesion categorization. We
tested the AI-empowered software pipeline on multiple CT scans, both publicly
released and collected at the Spallanzani Institute in Italy, and showed that:
1) our segmentation network is able to effectively extract lung parenchyma
and lobes from CT scans, outperforming state-of-the-art models; 2) the
COVID-19 identification module yields better accuracy (as well as specificity
and sensitivity) than expert radiologists. Furthermore, when attempting to
interpret the decisions made by the proposed AI model, we found that it
learned automatically, and without any supervision, the CT scan features
corresponding to the three most common lesions spotted in the COVID-19
pneumonia, i.e., consolidation, ground glass and crazy paving, demonstrating
its reliability in supporting the diagnosis by using only radiological images.
As an additional contribution, we integrate the tested AI models into a
user-friendly GUI to support further AI explainability for radiologists, which is
publicly available at http://perceivelab.com/covid-ai. The GUI processes
entire CT scans and reports if the patient is likely to be affected by
COVID-19, showing, at the same time, the scan slices that supported the
decision.
## 2 Related Work
The COVID-19 epidemic caught the scientific community flat-footed, and in
response a high volume of research has been dedicated to it at all possible levels.
In particular, since the beginning of the epidemic, AI models have been
employed for disease spread monitoring [8, 9, 10], for disease progression
[11] and prognosis [12], for predicting mental health ailments inflicted upon
healthcare workers [13] and for drug repurposing [14, 15] and discovery [16].
However, the lion’s share in employing AI models for the fight against
COVID-19 belongs to the processing of X-rays and CT scans with the purpose of
detecting the presence of COVID-19 or not. In fact, recent scientific
literature has demonstrated the high discriminative and predictive capability
of deep learning methods in the analysis of COVID-19 related radiological
images[17, 18]. The key radiological techniques for COVID-19 induced pneumonia
diagnosis and progression estimation are based on the analysis of CT and X-ray
images of the chest, on which deep learning methodologies have been widely
used with good results for segmentation, predictive analysis, and
discrimination of patterns [19, 20, 21]. If, on the one hand, X-ray
represents a cheaper and more accessible solution for large-scale screening of
the COVID-19 disease, on the other hand its low resolution has led AI models
to show lower accuracy than those obtained with CT data.
For the above reasons, CT scan has become the gold standard for investigation
on lung diseases. In particular, deep learning, mainly in the form of Deep
Convolutional Neural Networks (DCNN), has been largely applied to lung disease
analysis from CT scans images, for evaluating progression in response to
specific treatment (for instance immunotherapy, chemotherapy, radiotherapy)
[22, 23], but also for interstitial lung pattern analysis [24, 25] and on
segmentation and discrimination of lung pleural tissues and lymph-nodes [26,
27]. This latter aspect is particularly relevant for COVID-19 features and
makes artificial intelligence an extremely powerful tool for supporting early
diagnosis of COVID-19 and disease progression quantification. As a
consequence, several recent works have reported using AI models for automated
categorization of CT scans [21] and also on COVID-19 [28, 29, 30] but without
being able to distinguish between the various types of COVID-19 lesions.
Thus, the main contributions of this paper w.r.t. the state of the art are the
following ones:
* 1.
We propose a novel lung-lobe segmentation network outperforming state of the
art models;
* 2.
We employ the segmentation network to drive a classification network in first
identifying CT scans of COVID-19 patients, and, afterwards, in automatically
categorizing specific lesions;
* 3.
We then provide interpretation of the decisions made by the employed models
and discover that, indeed, those models focus on specific COVID-19 lesions
when distinguishing whether a CT scan pertains to COVID-19 patients or not;
* 4.
We finally integrate the whole AI pipeline into a web platform to ease use for
radiologists, supporting them in their investigation on COVID-19 disease.
## 3 Explainable AI for COVID-19 data understanding
The proposed AI system aims at 1) extracting lung and lobes from chest CT
data, 2) categorizing CT scans as either COVID-19 positive or COVID-19
negative; 3) identifying and localizing typical COVID-19 lung lesions
(consolidation, crazy paving and ground glass); and 4) explaining which CT
slices it based its decisions on.
### 3.1 AI Model for Lung Segmentation
Our lung-lobe segmentation model is based on the _Tiramisu_ network [31], a
fully-convolutional DenseNet [32] in a U-Net architecture [33]. The model
consists of two data paths: the downsampling one, which aims at extracting
features, and the upsampling one, which aims at generating the output images
(masks). Skip connections (i.e., connections starting from a preceding layer
in the network’s pipeline to another one found later bypassing intermediate
layers) aim at propagating high-resolution details by sharing feature maps
between the two paths.
In this work, our segmentation model follows the Tiramisu architecture, but
with two main differences:
* 1.
Instead of processing each slice individually, convolutional LSTMs [34]
are employed at the network’s bottleneck layer to exploit the spatial axial
correlation of consecutive scan slices.
* 2.
In the downsampling and upsampling paths, we add residual squeeze-and-
excitation layers [35], in order to emphasize relevant features and improve
the representational power of the model.
Before discussing the properties and advantages of the above modifications, we
first introduce the overall architecture, shown in Fig. 1.
Figure 1: The proposed segmentation architecture, consisting of a downsampling
path (top) and an upsampling path (bottom), interconnected by skip connections
and by the bottleneck layer.
The input to the model is a sequence of 3 consecutive slices – suitably
resized to 224$\times$224 – of a CT scan, which are processed individually and
combined through a convolutional LSTM layer. Each slice is initially processed
with a standard convolutional layer to expand the feature dimensions. The
resulting feature maps then go through the downsampling path of the model (the
encoder) consisting of five sequences of dense blocks, residual squeeze-and-
excitation layers and transition-down layers based on max-pooling. In the
encoder, the feature maps at the output of each residual squeeze-and-
excitation layer are concatenated with the input features of the preceding
dense block, in order to encourage feature reuse and improve their
generalizability. At the end of the downsampling path, the _bottleneck_ of the
model consists of a dense block followed by a convolutional LSTM. The
following upsampling path is symmetric to the downsampling one, but it
features: 1) skip connections from the downsampling path for concatenating
feature maps at the corresponding layers of the upsampling path; 2)
transition-up layers implemented through transposed convolutions. Finally, a
convolutional layer provides a 6-channel segmentation map, representing,
respectively, the log-likelihoods of the lobes (5 channels, one for each lobe)
and non-lung (1 channel) pixels.
In the following, we review the novel characteristics of the proposed
architecture.
Residual squeeze-and-excitation layers. Explicitly modeling interdependencies
between feature channels has been shown to enhance the performance of deep
architectures; squeeze-and-excitation layers [35] aim to select informative
features and to suppress the less useful ones. In particular, a
set of input features of size $C\times H\times W$ is squeezed through average-
pooling to a $C\times 1\times 1$ vector, representing global feature
statistics. The “excitation” operator is a fully-connected non-linear layer
that translates the squeezed vector into channel-specific weights that are
applied to the corresponding input feature maps.
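As a minimal sketch, the squeeze-and-excitation operator described above reduces to a few NumPy operations; the parameter names (`w1`, `b1`, `w2`, `b2`) are hypothetical, and the residual variant, which adds the input back to the reweighted features, is one plausible reading of "residual squeeze-and-excitation":

```python
import numpy as np

def squeeze_excitation(x, w1, b1, w2, b2):
    """SE layer on a (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) are the two fully-connected excitation layers."""
    z = x.mean(axis=(1, 2))                   # squeeze: global average pooling -> (C,)
    s = np.maximum(w1 @ z + b1, 0.0)          # excitation FC + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))  # excitation FC + sigmoid -> (C,)
    return x * s[:, None, None]               # channel-wise reweighting

def residual_squeeze_excitation(x, w1, b1, w2, b2):
    # residual variant: add the reweighted features back to the input
    return x + squeeze_excitation(x, w1, b1, w2, b2)
```

With all-zero excitation weights the sigmoid outputs 0.5 for every channel, so the SE layer halves the feature map, a handy sanity check for the implementation.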
Convolutional LSTM. We adopt a recurrent architecture to process the output of
the bottleneck layer, in order to exploit the spatial axial correlation
between subsequent slices and enhance the final segmentation by integrating 3D
information in the model. Convolutional LSTMs [34] are commonly used to
capture spatio-temporal correlations in visual data (for example, in videos),
by extending traditional LSTMs using convolutions in both the input-to-state
and the state-to-state transitions. Employing recurrent convolutional layers
allows the model to take into account the context of the currently processed
slice while preserving sequentiality, without needing to process the entire
set of slices in a single step through channel-wise concatenation, which
would increase feature sizes and lose information on axial distance.
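A single ConvLSTM update can be sketched as follows; for brevity the convolutions are reduced to 1×1 kernels (a plain channel mixing), whereas the actual model uses larger spatial kernels, and all names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, Wx, Wh, b):
    """One ConvLSTM step on (C, H, W) maps with 1x1 kernels.
    Wx, Wh: (4C, C) weights for input-to-state and state-to-state; b: (4C,) bias."""
    gates = (np.einsum('oc,chw->ohw', Wx, x)    # input-to-state "convolution"
             + np.einsum('oc,chw->ohw', Wh, h)  # state-to-state "convolution"
             + b[:, None, None])
    C = x.shape[0]
    i, f, o, g = gates[:C], gates[C:2*C], gates[2*C:3*C], gates[3*C:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # cell-state update
    h_new = sigmoid(o) * np.tanh(c_new)               # hidden state / output
    return h_new, c_new
```

Iterating this step over the three consecutive input slices is what lets the bottleneck accumulate axial context across the scan.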
Fig. 2 shows an example of automated lung and lobe segmentation from a CT scan
by employing the proposed segmentation network. The proposed segmentation
network is first executed on the whole CT scan for segmenting only lung (and
lobes); the segmented CT scan is then passed to the downstream classification
modules for COVID-19 identification and lesion categorization.
Figure 2: Example of lung and lobes segmentation.
### 3.2 Automated COVID-19 Diagnosis: CT classification
Figure 3: Overview of the COVID-19 detection approach for CT scan
classification as either COVID-19 positive or negative.
After lung parenchyma segmentation (through the segmentation model presented
above), a deep classification model analyzes each segmented CT scan slice by
slice and decides whether a single slice contains some evidence of the
COVID-19 disease. Afterwards, a voting method provides the final prediction
according to all the per-slice decisions. At this stage, the system does not
carry out any identification or localization of COVID-19 lesions: it just
identifies all slices where patterns of interest may be found and, according
to them, makes a guess about the presence of COVID-19-induced infection. An
overview of this model is shown in Fig. 3: first the segmentation network,
described in the previous section, identifies lung areas from CT scan, then a
deep classifier (a DenseNet model in the 201 configuration [32]) processes the
segmented lung areas to identify if the slice shows signs of COVID-19 virus.
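The paper does not spell out the exact voting rule, so the sketch below uses a simple majority vote over per-slice decisions as one plausible instantiation; names and the threshold are illustrative:

```python
import numpy as np

def scan_prediction(slice_probs, threshold=0.5):
    """Aggregate per-slice COVID-19 probabilities into a scan-level decision.
    Returns (scan_positive, number_of_positive_slices)."""
    votes = np.asarray(slice_probs) >= threshold      # binarize each slice decision
    positive_slices = int(votes.sum())
    scan_positive = positive_slices > len(votes) / 2  # simple majority vote
    return scan_positive, positive_slices
```

For example, `scan_prediction([0.9, 0.8, 0.2])` flags the scan as positive with two positive slices out of three.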
Figure 4: The DenseNet architecture. Convolutional processing layers are
grouped in Dense Blocks (top). Features extracted in previous layers are
concatenated and fed to all the next layers in the same Dense Block ensuring
maximum information flow. Given that feature maps from previous layers are
passed to the next layers, redundancy is avoided (i.e., later layers do not
need to learn almost identical information from the immediately previous
ones). In this way, each successive layer adds only a small number of feature
maps, the so called growth factor, thus requiring fewer parameters to achieve
state-of-the-art performance. Multiple Dense Blocks can be concatenated and
form a deeper network (bottom).
Once the COVID-19 identification model is trained, we attempt to understand
what features it employs to discriminate between positive and negative cases.
Thus, to interpret the decisions made by the trained model we compute class-
discriminative localization maps that attempt to provide visual explanations
of the most significant input features for each class. To accomplish this we
employ GradCAM [36] combined with VarGrad [37]. GradCAM is a technique that
produces such interpretability maps by investigating the gradients of the
output with respect to feature map activations. More specifically, GradCAM
generates a class-discriminative localization map for any class $c$ by
first computing the gradient of the score $y^{c}$ for class $c$ w.r.t. the
feature activation maps $A^{k}$ of a given convolutional layer. Such gradients
are then global-average-pooled to obtain the activation importance weights
$w$, i.e.:
$w^{c}_{k}=\sum_{i}\sum_{j}\frac{\partial y^{c}}{\partial A^{k}_{ij}}$ (1)
Afterwards, the saliency map $S^{c}$, that provides an overview of the
activation importance for the class $c$, is computed through a weighted
combination of activation maps, i.e.:
$S^{c}=ReLU\left(\sum_{k}w_{k}^{c}A^{k}\right)$ (2)
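Given the activations $A^{k}$ and the gradients $\partial y^{c}/\partial A^{k}$ (obtained from the network's autograd, omitted here), Eqs. (1) and (2) reduce to a few array operations, sketched below:

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (K, H, W) arrays holding A^k and dy^c/dA^k.
    Returns the class-discriminative localization map S^c."""
    weights = gradients.mean(axis=(1, 2))             # Eq. (1): global average pooling
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over feature maps k
    return np.maximum(cam, 0.0)                       # Eq. (2): ReLU
```

The ReLU keeps only the regions that positively influence the class score, which is what makes the map class-discriminative.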
VarGrad is a technique used in combination with GradCAM: it performs multiple
activation map estimates by adding, each time, Gaussian noise to the input
data, and then aggregates the estimates by computing the variance of the set.
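That aggregation step can be sketched generically; `saliency_fn` stands for any map estimator (e.g. a GradCAM wrapper around the trained network), and the sample count and noise level are illustrative defaults, not the paper's settings:

```python
import numpy as np

def vargrad(saliency_fn, x, n_samples=20, sigma=0.1, seed=0):
    """VarGrad: pixel-wise variance of saliency maps computed on noisy copies of x."""
    rng = np.random.default_rng(seed)
    maps = np.stack([saliency_fn(x + rng.normal(0.0, sigma, size=x.shape))
                     for _ in range(n_samples)])
    return maps.var(axis=0)                 # variance over the noisy estimates
```

A saliency function that ignores its input yields an all-zero variance map, so any structure in the VarGrad output reflects genuine sensitivity to the input.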
### 3.3 COVID-19 lesion identification and categorization
An additional deep network activates only if the previous system identifies a
COVID-19 positive CT scan. In that case, it works on the subset of slices
identified as COVID-19 positive by the first AI system, with the goal of
localizing and identifying specific lesions (consolidation, crazy paving and
ground glass). More specifically, the lesion identification system works on segmented
lobes to seek COVID-19 specific patterns. The subsystem for lesion
categorization employs the knowledge already learned by the COVID-19 detection
module (shown in Fig. 3) and refines it for specific lesion categorization. An
overview of the whole system is given in Fig. 5.
Figure 5: Overview of COVID-19 lesion categorization approach.
### 3.4 A Web-based Interface for Explaining AI decisions to Radiologists
In order to explain to radiologists the decisions made by a “black-box” AI
system, we integrated the inference pipeline for COVID-19 detection into a
web-based application. The application was designed to streamline the whole
inference process with just a few clicks and visualize the results with a
variable grade of detail (Fig. 6).
Figure 6: The main page of the AI-empowered web GUI for explainable AI. The
user is presented with a list of the CT scan classifications reporting the
models’ prediction.
If the radiologists desire to see which CT slices were classified as positive
or negative, they can click on “Show slices”, where a detailed list of slices
and their categorization is shown (Fig. 7).
Figure 7: The summarized classification result showing the CT slices that the
neural network classified as positive or negative.
Because the models may not achieve perfect accuracy, a single-slice
inspection screen is provided, where radiologists can inspect the
classification result more closely. It also features a restricted set of
image manipulation tools (move, contrast, zoom) to aid the user in making a
correct diagnosis (Fig. 8).
Figure 8: The slice inspection screen. In this screen the user can inspect
each single slice and the AI models’ decisions.
The AI-empowered web system also integrates a relevance feedback mechanism
where radiologists can correct the predicted outputs, and the AI module
exploits such feedback to improve its future assessments. Indeed, both at
the CT scan level and at the CT slice level, radiologists can correct models’
prediction. The AI methods will then use the correct labels to enhance their
future assessments.
## 4 Results and Discussion
### 4.1 Dataset
Our dataset contains 72 CT scans of COVID-19 positive patients (positivity
confirmed both by a molecular - reverse transcriptase–polymerase chain
reaction for SARS-coronavirus RNA from nasopharyngeal aspirates - and an IgG
or IgM antibody test) and 94 CT scans of COVID-19 negative subjects (35
patients with interstitial pneumonia but tested negative to COVID-19 and 59
controls).
CT scans were performed on a multi-detector row helical CT system scanner
(Bright Speed, General Electric Medical Systems, Milwaukee, WI) using 120
kVp, 250 mA, a pitch of 1.375, a gantry rotation time of 0.6 s and a scan
time of 5.7 s. The non-contrast scans were reconstructed with a slice
thickness of 0.625 mm and a spacing of 0.625 mm with a high-resolution lung
algorithm. The images obtained with lung (window width, 1,000–1,500 H;
level, –700 H) and mediastinal (window width, 350 H; level, 35–40 H)
settings were reviewed on a picture archiving and communication system
workstation (Impax ver. 6.6.0.145, AGFA Gevaert SpA, Mortsel, Belgium). CT
scans of positive patients
were also annotated by three expert radiologists (through consensus) who
selected a subset of slices and annotated them with the type (Consolidation,
Ground Glass and Crazy Paving) and the location (combinations of
left/right/central and posterior/anterior) of the lesion. In total about 2,400
slices were annotated with COVID-19 lesions and about 3,000 slices of negative
patients with no lesion. Tab. 1 provides an overview of all the CT scans and
annotations in our dataset.
For training the lung/lobe segmentation model we adopted a combination of the
LIDC [38], LTRC (https://ltrcpublic.com/) and [39] datasets, for a total of
300 CT scans. Annotations of lung/lobe areas were done manually by three
expert radiologists.
Dataset statistics | |
---|---|---
_CT Data_ | |
CT Scans | | 166
 | COVID-19+ | 72
 | COVID-19- | 94
_Annotations_ | |
Positive slices | | 2,390
 | Ground Glass | 1,035
 | Crazy Paving | 757
 | Consolidation | 598
Negative slices | | 2,988
Table 1: CT Dataset for training and testing of the AI models.
### 4.2 Training Procedure
The COVID-19 detection network is a DenseNet201, which was used pretrained on
the ImageNet dataset [40]. The original classification layers in DenseNet201
were replaced by a 2-output linear layer for the COVID-19 positive/negative
classification. Among the set of 166 CT scans, we used 95 scans (36 positives
and 59 negatives) for training, 9 scans for validation (5 positives and 4
negatives) and 62 scans (31 positives and 31 negatives) for test. To compare
the AI performance to the human one, the test set of 62 CT scans was provided
to three expert radiologists for blind evaluation. Given the class imbalance
in the training set, we used the weighted binary cross-entropy (defined in
Eq. 3) as training loss and the RT-PCR virology test result as training/test labels.
The weighted binary cross-entropy loss for a sample with predicted probability
$x$ and target label $y$ is then calculated as:
$WBCE=-w\left[y\cdot\log x+(1-y)\cdot\log(1-x)\right]$ (3)
where $w$ is the ratio of the number of negative samples to the total number
of samples if the label is positive, and vice versa. This way, the loss is
higher when a sample of the less frequent class is misclassified. It is
important to highlight that the train/validation/test split refers to entire
CT scans and not to single slices: we made sure that slices of the same CT
scan were never assigned to different splits, to avoid any bias in the
performance analysis. This prevents the deep models from overfitting by
learning spurious scan-specific information, which would invalidate the
training procedure, and enforces robustness of the whole approach. Moreover, for the COVID-19
detection task, we operate at the CT level by processing and classifying each
single slice and then voting: if at least 10% of the slices are marked as
positive, the whole exam is considered COVID-19 positive, otherwise COVID-19
negative. The voting threshold was chosen empirically to maximize training
performance.
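The slice-level loss of Eq. 3 and the scan-level voting rule can be sketched in a few lines of plain Python (a minimal illustration; the function names are ours, not part of the original code):

```python
import math

def wbce(x, y, n_pos, n_neg):
    """Weighted binary cross-entropy (Eq. 3) for one slice.
    x is the predicted probability, y the target label (0 or 1);
    the weight w is the fraction of samples of the opposite class."""
    n = n_pos + n_neg
    w = n_neg / n if y == 1 else n_pos / n
    return -w * (y * math.log(x) + (1 - y) * math.log(1 - x))

def scan_is_positive(slice_labels, threshold=0.10):
    """Scan-level voting: the exam is COVID-19 positive when at least
    10% of its slices are classified as positive."""
    return sum(slice_labels) / len(slice_labels) >= threshold

# With 36 positive and 59 negative training scans, badly missing a
# positive slice costs more than an equally wrong call on a negative.
loss_pos = wbce(0.1, 1, n_pos=36, n_neg=59)
loss_neg = wbce(0.9, 0, n_pos=36, n_neg=59)
```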
The lesion categorization deep network is also a DenseNet201 model where
classification layers were replaced by a 4-output linear layer (ground glass,
consolidation, crazy paving, negative). The lesion categorization model
processes lobe segments (extracted by our segmentation model) with the goal to
identify specific lesions. Our dataset contains 2,488 annotated slices; in
each slice multiple lesion annotations with relative location (in lobes) are
available. Thus, after segmenting lobes from these images we obtained 5,264
lobe images. We did the same on CT slices of negative patients (among the
2,988 available as shown in Tab. 1) and selected 5,264 lobe images without
lesions. Thus, in total, the entire set consisted of 10,528 images. We
also discarded the images for which lobe segmentation produced small regions
indicating a failure in the segmentation process. We used a fixed test split
consisting of 195 images with consolidation, 354 with crazy paving, 314 with
ground glass and 800 images with no lesion. The remaining images were split
into training and validation sets with the ratio 80/20. Given the class
imbalance in the training set, we employed weighted cross-entropy as training
loss.
The weighted cross-entropy loss for a sample with predicted probabilities $x$
and target label $y$ is calculated as:
$WCE=-\sum_{c\in C}w_{c}\,y_{c}\cdot\log(x_{c})$ (4)
where $C$ is the set of all classes and $y_{c}$, $x_{c}$ are the target and
predicted values for class $c$. The weight $w_{c}$ for each class $c$ is
defined as:
$w_{c}=\frac{N-N_{c}}{N}$ (5)
where $N$ is the total number of samples and $N_{c}$ is the number of samples
that have label $c$.
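Equations 4 and 5 can be illustrated with a short, self-contained sketch (plain Python with dictionaries standing in for tensors; the class counts from Tab. 1 are used only for illustration):

```python
import math

def class_weights(counts):
    """Per-class weights w_c = (N - N_c) / N (Eq. 5)."""
    n = sum(counts.values())
    return {c: (n - n_c) / n for c, n_c in counts.items()}

def weighted_ce(x, y, w):
    """Weighted cross-entropy (Eq. 4); x and y map each class to its
    predicted probability and one-hot target value, respectively."""
    return -sum(w[c] * y[c] * math.log(x[c]) for c in x)

counts = {"ground glass": 1035, "crazy paving": 757,
          "consolidation": 598, "negative": 2988}
w = class_weights(counts)
# Rarer classes receive larger weights, so their errors cost more.
```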
Since the model is the same as the COVID identification network, i.e.,
DenseNet201, we started from the network trained on the COVID-identification
task and fine-tuned it on the categorization task to limit overfitting given
the small scale of our dataset.
For both the detection network and the lesion categorization network, we used
the following hyperparameters: batch size = 12, learning rate = 1e-04, Adam
optimizer with beta values 0.9 and 0.999, eps = 1e-08 and weight decay = 0;
back-propagation was used to update the models’ parameters during training.
Detection and categorization networks were trained for 20 epochs. In both
cases, performance is reported at the highest validation accuracy.
For lung/lobe segmentation, input images were normalized to zero mean and
unitary standard deviation, with statistics computed on the employed dataset.
In all the experiments for our segmentation model, input size was set to
$224\times 224$, initial learning rate to 0.0001, weight decay to 0.0001 and
batch size to 2, with RMSProp as optimizer. When C-LSTMs were employed,
recurrent states were initialized to zero and the size of the input sequences
to the C-LSTM layers was set to 3. Each training was carried out for 50
epochs.
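The zero-mean, unit-standard-deviation normalization of the segmentation inputs is the standard transform $x' = (x - \mu)/\sigma$ with dataset-level statistics; a minimal stdlib sketch (the pixel values below are illustrative stand-ins for CT intensities):

```python
import statistics

def normalize(pixels, mean, std):
    """Standardize values using statistics computed over the dataset."""
    return [(p - mean) / std for p in pixels]

pixels = [100.0, 120.0, 140.0]          # stand-in for CT intensities
mean = statistics.mean(pixels)          # in practice: dataset-wide mean
std = statistics.pstdev(pixels)         # in practice: dataset-wide std
z = normalize(pixels, mean, std)        # zero mean, unit std
```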
### 4.3 Performance Evaluation
In this section we report the performance of the proposed models for lung/lobe
segmentation, COVID-19 identification and lesion categorization.
#### 4.3.1 Lobe segmentation
Our segmentation model is based on the Tiramisu model [31] with the
introduction of _squeeze-and-excitation_ blocks and of a convolutional LSTM
(either unidirectional or bidirectional) after the bottleneck layer. In order
to understand the contribution of each module, we first performed ablation
studies by testing the segmentation performance of our model using different
architecture configurations:
1. Baseline: the vanilla Tiramisu model described in [31];
2. Res-SE: residual _squeeze-and-excitation_ modules are integrated in each dense block of the Tiramisu architecture;
3. C-LSTM: a unidirectional convolutional LSTM is added after the bottleneck layer of the Tiramisu architecture;
4. Res-SE + C-LSTM: a variant of the Tiramisu architecture that includes both residual _squeeze-and-excitation_ in each dense block and a unidirectional convolutional LSTM after the bottleneck layer.
We also compared the performance against the U-Net architecture proposed in
[39] that is largely adopted for lung/lobe segmentation.
All architectures were trained for 50 epochs by splitting the employed lung
datasets into a training, validation and test splits using the 70/10/20 rule.
Results in terms of Dice score coefficient (DSC) are given in Tab. 2. It has
to be noted that, unlike [39], we computed the DSC on all frames, not only on
the lung slices.
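For reference, the Dice score used in Tab. 2 measures the overlap between a predicted mask $P$ and a ground-truth mask $G$ as $2|P\cap G|/(|P|+|G|)$; a minimal sketch over flattened binary masks (not the authors' implementation):

```python
def dice(pred, truth):
    """Dice score coefficient between two binary masks (flat lists)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # two empty masks fully agree

score = dice([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])  # 2*2 / (3+3) = 0.666...
```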
The highest performance is obtained with the Res-SE + C-LSTM configuration,
i.e., when adding _squeeze-and-excitation_ and the unidirectional C-LSTM at
the bottleneck layer of the Tiramisu architecture. This results in a Dice
score improvement of over 4 percentage points over the baseline. In
particular, adding _squeeze-and-excitation_ alone leads to a 2 percentage
point improvement over the baseline. Segmentation results are computed using data augmentation obtained
by applying random affine transformations (rotation, translation, scaling and
shearing) to input images. The segmentation network is then applied to our
COVID-19 dataset for prior segmentation without any additional fine-tuning to
demonstrate also its generalization capabilities.
Model | Lung segmentation | Lobe segmentation
---|---|---
Baseline Tiramisu [31] | 89.41 $\pm$ 0.45 | 77.97 $\pm$ 0.31
Baseline + Res-SE | 91.78 $\pm$ 0.52 | 80.12 $\pm$ 0.28
Baseline + C-LSTM | 91.49 $\pm$ 0.57 | 79.47 $\pm$ 0.38
Baseline + Res-SE + C-LSTM | **94.01 $\pm$ 0.52** | **83.05 $\pm$ 0.27**
Table 2: Ablation studies of our segmentation network in terms of dice score.
Best results are shown in bold. Note: we did not compute confidence intervals
on these scores as they are obtained from a very large set of CT pixels.
#### 4.3.2 COVID-19 assessment
We compute results both for COVID-19 detection and lesion categorization and
compare to those yielded by three experts with different degree of expertise:
1. Radiologist 1: a physician expert in thoracic radiology ($\sim$30 years of experience) with over 30,000 examined CT scans;
2. Radiologist 2: a physician expert in thoracic radiology ($\sim$10 years of experience) with over 9,000 examined CT scans;
3. Radiologist 3: a resident in thoracic radiology ($\sim$3 years of experience) with about 2,000 examined CT scans.
We also assess the role of prior segmentation on performance: in the pipelines
shown in Figures 3 and 5, we removed the segmentation modules and performed
classification on whole CT slices, thus also using information outside the
lung areas. Results for COVID-19 detection are measured in terms of
sensitivity and specificity and are given in Tables 3 and 4.
| Sensitivity | C.I. (95%)
---|---|---
Radiologist 1 | 83.9% | [71.8% – 91.9%]
Radiologist 2 | 87.1% | [75.6% – 94.3%]
Radiologist 3 | 80.6% | [68.2% – 89.5%]
AI Model without lung segmentation | 83.9% | [71.8% – 91.9%]
AI Model with lung segmentation | 90.3% | [79.5% – 96.5%]
Table 3: Sensitivity (together with 95% confidence interval) comparison
between manual readings of expert radiologists and the AI model for COVID-19
detection without lung segmentation and AI model with segmentation.
Thus, the AI model using lung segmentation achieves the best performance
outperforming expert radiologists in the COVID-19 assessment. Furthermore,
performing lung segmentation improves both the sensitivity and the
specificity by about 6 percentage points, demonstrating its effectiveness. The
important aspect to highlight is that expert radiologists during the
annotation process did not have to segment lungs or lobes, showing the
generalization capabilities of the proposed deep learning-based methods.
| Specificity | C.I. (95%)
---|---|---
Radiologist 1 | 87.1% | [75.6% – 94.3%]
Radiologist 2 | 87.1% | [75.6% – 94.3%]
Radiologist 3 | 90.3% | [79.5% – 96.5%]
AI Model without lung segmentation | 87.1% | [75.6% – 94.3%]
AI Model with lung segmentation | 93.5% | [83.5% – 98.5%]
Table 4: Specificity (together with 95% confidence interval) comparison
between manual readings of expert radiologists and the AI model for COVID-19
detection without lung segmentation and AI model with segmentation.
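The text does not state which interval estimator was used; the values in Tables 3 and 4 look like exact (Clopper-Pearson) intervals, but the point estimates are easy to reproduce. Below is a stdlib sketch using the simpler normal-approximation interval (with 31 positive test scans, a sensitivity of 90.3% corresponds to 28 correct detections; this is our reconstruction, not the authors' code):

```python
import math
from statistics import NormalDist

def rate_with_ci(successes, trials, level=0.95):
    """Proportion with a normal-approximation (Wald) confidence
    interval; an illustrative stand-in for the exact intervals
    reported in the paper."""
    p = successes / trials
    z = NormalDist().inv_cdf(0.5 + level / 2)
    half = z * math.sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - half), min(1.0, p + half)

# Sensitivity of the AI model with segmentation: 28 of 31 positives.
sens, lo, hi = rate_with_ci(28, 31)  # sens is approximately 0.903
```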
As a backbone model for COVID-19 identification, we employed DenseNet201 since
it yielded the best performance when compared to other state of the art
models, as shown in Table 5. In all the tested cases, we used upstream
segmentation through the model described in Sect. 2, and the voting threshold
was set to 10%.
Model | Variant | Sensitivity (CI) | Specificity (CI) | Accuracy (CI)
---|---|---|---|---
AlexNet | – | 71.0% (57.9–81.6) | 90.3% (79.5–96.5) | 80.7% (68.3–89.5)
ResNet | 18 | 71.0% (57.9–81.6) | 93.5% (83.5–98.5) | 82.3% (70.1–90.7)
ResNet | 34 | 80.7% (68.3–89.5) | 90.3% (79.5–96.5) | 85.5% (73.7–93.1)
ResNet | 50 | 83.9% (71.9–91.9) | 90.3% (79.5–96.5) | 87.1% (75.6–94.3)
ResNet | 101 | 77.4% (64.7–89.9) | 87.1% (75.6–94.3) | 82.3% (70.1–90.7)
ResNet | 152 | 77.4% (64.7–89.9) | 90.3% (79.5–96.5) | 83.9% (71.9–91.9)
DenseNet | 121 | 77.4% (64.7–89.9) | 93.5% (83.5–98.5) | 85.5% (73.7–93.1)
DenseNet | 169 | 67.9% (83.5–98.5) | 93.5% (83.5–98.5) | 81.4% (68.7–90.2)
DenseNet | 201 | 90.3% (79.5–96.5) | 93.5% (83.5–98.5) | 91.9% (81.5–97.5)
SqueezeNet | – | 66.7% (54.5–78.9) | 93.5% (83.5–98.5) | 81.4% (68.7–90.2)
ResNeXt | – | 77.4% (64.7–86.9) | 90.3% (79.5–96.5) | 83.9% (71.9–91.9)
Table 5: COVID-19 classification accuracy by several state of the art models.
Values in parentheses indicate 95% confidence intervals (CI).
In order to enhance trust in the devised AI models, we analyzed what features
these methods employ for making the COVID-19 diagnosis decision. This is done
by investigating which artificial neurons fire the most, and then projecting
this information to the input images. To accomplish this we combined GradCAM
[36] with VarGrad [37] (implemented via https://captum.ai/). Fig. 9 shows
some examples of the saliency maps generated by interpreting the proposed AI COVID-19
classification network. It is interesting to note that the most significant
activation areas correspond to the three most common lesion types, i.e.,
ground glass, consolidation and crazy paving. This is remarkable as the model
has indeed learned the COVID-19 peculiar patterns without any information on
the type of lesions (to this end, we recall that for COVID-19 identification
we only provide, at training times, the labels “positive” or “negative”, while
no information on the type of lesions is given).
Figure 9: Lung salient areas identified automatically by the AI model for CT
COVID-19 identification.
For COVID-19 lesion categorization we report the mean and per-class
classification accuracy over all lesion types, provided in Table 6.
Lesion type | Model without segmentation | Model with segmentation
---|---|---
Consolidation | 77.8% (69.9–84.1) | 97.9% (93.6–99.8)
Ground glass | 18.6% (14.1–24.1) | 41.3% (35.1–47.7)
Crazy Paving | 57.1% (49.4–64.4) | 98.3% (94.8–99.8)
Negative | 99.3% (98.6–99.7) | 99.9% (99.5–100)
Average | 63.2% | 84.4%
Table 6: Per-class accuracy for lesion categorization with the AI model
without lung segmentation and with the AI model with segmentation. Values in
parentheses indicate 95% confidence intervals (CI).
Mean lesion categorization accuracy reaches about 84% when operating at the
lobe level. The lowest performance is obtained on ground glass, because
ground-glass opacities are CT findings that can also appear in normal patients
with respiratory artifacts. Operating at the level of single lobes yields a
performance improvement of over 21 percentage points, and, also in this case,
radiologists did not have to perform any lobe segmentation annotation,
significantly reducing the effort needed to build the AI models. The most
significant improvement when using lobe segmentation with respect to no
segmentation is obtained for the Crazy Paving class, i.e., 98.3% against 57.1%.
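The per-class accuracies and their unweighted mean in Tab. 6 can be recomputed from correct/total counts per class; a minimal sketch (the counts below are reconstructed from the reported test split sizes and percentages for the model with segmentation, so they are approximate, not the authors' raw data):

```python
def per_class_accuracy(correct, total):
    """Per-class accuracy and unweighted mean accuracy (as in Tab. 6)."""
    acc = {c: correct[c] / total[c] for c in total}
    return acc, sum(acc.values()) / len(acc)

# Approximate counts from the fixed test split and Tab. 6 percentages.
total = {"consolidation": 195, "ground glass": 314,
         "crazy paving": 354, "negative": 800}
correct = {"consolidation": 191, "ground glass": 130,
           "crazy paving": 348, "negative": 799}
acc, mean_acc = per_class_accuracy(correct, total)  # mean_acc is about 0.844
```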
Although the CT diagnosis of COVID-19 pneumonia may seem an easy task for
experienced radiologists, the results show that our system is able to
outperform them, providing more accurate decisions. In particular, the AI
model identifies lung lesions more accurately, especially the smaller and
less defined ones (such as those highlighted in Fig. 9); identifying these
elements increases the sensitivity and specificity of the method for a
correct diagnosis. The results obtained both for COVID-19 identification and
lesion categorization pave the way to an advanced, interpretable and strongly
robust COVID-19 CT/RX image-driven diagnostic pipeline that provides not only
disease identification and differential diagnosis but also the risk of
disease progression.
## 5 Conclusions
In this work we have presented an AI-based pipeline for automated lung
segmentation, COVID-19 detection and COVID-19 lesion categorization from CT
scans. Results showed a sensitivity of 90% and a specificity of 93.5% for
COVID-19 detection, and an average lesion categorization accuracy of about
84%. Results also show that a significant role is played by prior lung and
lobe segmentation, which allowed us to enhance performance by about 6
percentage points.
The AI models are then integrated into a user-friendly GUI that supports AI
explainability for radiologists, publicly available at
http://perceivelab.com/covid-ai. To the best of our knowledge, this is the
first publicly available AI-based software that attempts to explain to
radiologists what information is used by the AI methods for making decisions,
and that involves them proactively in the loop to further improve COVID-19
understanding. These results pave the way to further improvements, providing
not only disease identification and differential diagnosis but also the risk
of disease progression.
## Acknowledgment
We thank the “Covid 19 study group” from Spallanzani Hospital (Maria
Alessandra Abbonizio, Chiara Agrati, Fabrizio Albarello, Gioia Amadei,
Alessandra Amendola, Mario Antonini, Raffaella Barbaro, Barbara Bartolini,
Martina Benigni, Nazario Bevilacqua, Licia Bordi, Veronica Bordoni, Marta
Branca, Paolo Campioni, Maria Rosaria Capobianchi, Cinzia Caporale, Ilaria
Caravella, Fabrizio Carletti, Concetta Castilletti, Roberta Chiappini, Carmine
Ciaralli, Francesca Colavita, Angela Corpolongo, Massimo Cristofaro, Salvatore
Curiale, Alessandra D’Abramo, Cristina Dantimi, Alessia De Angelis, Giada De
Angelis, Rachele Di Lorenzo, Federica Di Stefano, Federica Ferraro, Lorena
Fiorentini, Andrea Frustaci, Paola Gallì, Gabriele Garotto, Maria Letizia
Giancola, Filippo Giansante, Emanuela Giombini, Maria Cristina Greci, Giuseppe
Ippolito, Eleonora Lalle, Simone Lanini, Daniele Lapa, Luciana Lepore, Andrea
Lucia, Franco Lufrani, Manuela Macchione, Alessandra Marani, Luisa Marchioni,
Andrea Mariano, Maria Cristina Marini, Micaela Maritti, Giulia Matusali,
Silvia Meschi, Francesco Messina Chiara Montaldo, Silvia Murachelli, Emanuele
Nicastri, Roberto Noto, Claudia Palazzolo, Emanuele Pallini, Virgilio Passeri,
Federico Pelliccioni, Antonella Petrecchia, Ada Petrone, Nicola Petrosillo,
Elisa Pianura, Maria Pisciotta, Silvia Pittalis, Costanza Proietti, Vincenzo
Puro, Gabriele Rinonapoli, Martina Rueca, Alessandra Sacchi, Francesco Sanasi,
Carmen Santagata, Silvana Scarcia, Vincenzo Schininà, Paola Scognamiglio,
Laura Scorzolini, Giulia Stazi, Francesco Vaia, Francesco Vairo, Maria
Beatrice Valli) for the technical discussion and critical reading of this
manuscript.
## Regulation and Informed Consent
All data and methods were carried out in accordance to the General Data
Protection Regulation 2016/679. The experimental protocols were approved by
the Ethics Committee of the National Institute for Infectious Diseases Lazzaro
Spallanzani in Rome. All patients enrolled in the study were over 18 at the
time of their participation in the experiment and signed informed consent.
## Declarations of interest
None.
## References
* [1] N. Zhu, D. Zhang, W. Wang, X. Li, B. Yang, J. Song, X. Zhao, B. Huang, W. Shi, R. Lu, et al., A novel coronavirus from patients with pneumonia in china, 2019, New England Journal of Medicine (2020).
* [2] World Health Organization, Novel coronavirus (2019-nCoV): situation report, 8 (2020).
* [3] P. Huang, T. Liu, L. Huang, H. Liu, M. Lei, W. Xu, X. Hu, J. Chen, B. Liu, Use of chest ct in combination with negative rt-pcr assay for the 2019 novel coronavirus but high clinical suspicion, Radiology 295 (1) (2020) 22–23.
* [4] M.-Y. Ng, E. Y. Lee, J. Yang, F. Yang, X. Li, H. Wang, M. M.-s. Lui, C. S.-Y. Lo, B. Leung, P.-L. Khong, et al., Imaging profile of the covid-19 infection: radiologic findings and literature review, Radiology: Cardiothoracic Imaging 2 (1) (2020) e200034.
* [5] H. Liu, F. Liu, J. Li, T. Zhang, D. Wang, W. Lan, Clinical and ct imaging features of the covid-19 pneumonia: Focus on pregnant women and children, Journal of infection (2020).
* [6] M. Chung, A. Bernheim, X. Mei, N. Zhang, M. Huang, X. Zeng, J. Cui, W. Xu, Y. Yang, Z. A. Fayad, et al., Ct imaging features of 2019 novel coronavirus (2019-ncov), Radiology 295 (1) (2020) 202–207.
* [7] F. Rundo, C. Spampinato, G. L. Banna, S. Conoci, Advanced deep learning embedded motion radiomics pipeline for predicting anti-pd-1/pd-l1 immunotherapy response in the treatment of bladder cancer: Preliminary results, Electronics 8 (10) (2019) 1134.
* [8] Z. Allam, D. S. Jones, On the coronavirus (covid-19) outbreak and the smart city network: universal data sharing standards coupled with artificial intelligence (ai) to benefit urban health monitoring and management, in: Healthcare, Vol. 8, Multidisciplinary Digital Publishing Institute, 2020, p. 46.
* [9] L. Lin, Z. Hou, Combat covid-19 with artificial intelligence and big data, Journal of travel medicine 27 (5) (2020) taaa080.
* [10] N. Zheng, S. Du, J. Wang, H. Zhang, W. Cui, Z. Kang, T. Yang, B. Lou, Y. Chi, H. Long, et al., Predicting covid-19 in china using hybrid ai model, IEEE Transactions on Cybernetics (2020).
* [11] X. Bai, C. Fang, Y. Zhou, S. Bai, Z. Liu, L. Xia, Q. Chen, Y. Xu, T. Xia, S. Gong, et al., Predicting covid-19 malignant progression with ai techniques (2020).
* [12] W. Liang, J. Yao, A. Chen, Q. Lv, M. Zanin, J. Liu, S. Wong, Y. Li, J. Lu, H. Liang, et al., Early triage of critically ill covid-19 patients using deep learning, Nature communications 11 (1) (2020) 1–7.
* [13] K. Ćosić, S. Popović, M. Šarlija, I. Kesedžić, T. Jovanovic, Artificial intelligence in prediction of mental health disorders induced by the covid-19 pandemic among health care workers, Croatian Medical Journal 61 (3) (2020) 279.
* [14] S. Mohanty, M. H. A. Rashid, M. Mridul, C. Mohanty, S. Swayamsiddha, Application of artificial intelligence in covid-19 drug repurposing, Diabetes & Metabolic Syndrome: Clinical Research & Reviews (2020).
* [15] Y.-Y. Ke, T.-T. Peng, T.-K. Yeh, W.-Z. Huang, S.-E. Chang, S.-H. Wu, H.-C. Hung, T.-A. Hsu, S.-J. Lee, J.-S. Song, et al., Artificial intelligence approach fighting covid-19 with repurposing drugs, Biomedical Journal (2020).
* [16] P. Richardson, I. Griffin, C. Tucker, D. Smith, O. Oechsle, A. Phelan, J. Stebbing, Baricitinib as potential treatment for 2019-ncov acute respiratory disease, Lancet (London, England) 395 (10223) (2020) e30.
* [17] L. Brunese, F. Mercaldo, A. Reginelli, A. Santone, Explainable deep learning for pulmonary disease and coronavirus covid-19 detection from x-rays, Computer Methods and Programs in Biomedicine 196 (2020) 105608.
* [18] L. Huang, R. Han, T. Ai, P. Yu, H. Kang, Q. Tao, L. Xia, Serial quantitative chest ct assessment of covid-19: Deep-learning approach, Radiology: Cardiothoracic Imaging 2 (2) (2020) e200075.
* [19] P. Nardelli, D. Jimenez-Carretero, D. Bermejo-Pelaez, G. R. Washko, F. N. Rahaghi, M. J. Ledesma-Carbayo, R. S. J. Estépar, Pulmonary artery–vein classification in ct images using deep learning, IEEE transactions on medical imaging 37 (11) (2018) 2428–2440.
* [20] N. Navab, J. Hornegger, W. M. Wells, A. Frangi, Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, Vol. 9351, Springer, 2015.
* [21] X. Mei, H.-C. Lee, K.-y. Diao, M. Huang, B. Lin, C. Liu, Z. Xie, Y. Ma, P. M. Robson, M. Chung, et al., Artificial intelligence–enabled rapid diagnosis of patients with covid-19, Nature Medicine (2020) 1–5.
* [22] A. A. A. Setio, F. Ciompi, G. Litjens, P. Gerke, C. Jacobs, S. J. Van Riel, M. M. W. Wille, M. Naqibullah, C. I. Sánchez, B. van Ginneken, Pulmonary nodule detection in ct images: false positive reduction using multi-view convolutional networks, IEEE transactions on medical imaging 35 (5) (2016) 1160–1169.
* [23] K. H. Cha, L. Hadjiiski, H.-P. Chan, A. Z. Weizer, A. Alva, R. H. Cohan, E. M. Caoili, C. Paramagul, R. K. Samala, Bladder cancer treatment response assessment in ct using radiomics with deep-learning, Scientific reports 7 (1) (2017) 1–12.
* [24] D. Bermejo-Peláez, S. Y. Ash, G. R. Washko, R. S. J. Estépar, M. J. Ledesma-Carbayo, Classification of interstitial lung abnormality patterns with an ensemble of deep convolutional neural networks, Scientific reports 10 (1) (2020) 1–15.
* [25] M. Gao, U. Bagci, L. Lu, A. Wu, M. Buty, H.-C. Shin, H. Roth, G. Z. Papadakis, A. Depeursinge, R. M. Summers, et al., Holistic classification of ct attenuation patterns for interstitial lung diseases via deep convolutional neural networks, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 6 (1) (2018) 1–6.
* [26] J. H. Moltz, L. Bornemann, J.-M. Kuhnigk, V. Dicken, E. Peitgen, S. Meier, H. Bolte, M. Fabel, H.-C. Bauknecht, M. Hittinger, et al., Advanced segmentation techniques for lung nodules, liver metastases, and enlarged lymph nodes in ct scans, IEEE Journal of selected topics in signal processing 3 (1) (2009) 122–134.
* [27] H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, R. M. Summers, Deep convolutional neural networks for computer-aided detection: Cnn architectures, dataset characteristics and transfer learning, IEEE transactions on medical imaging 35 (5) (2016) 1285–1298.
* [28] L. Li, L. Qin, Z. Xu, Y. Yin, X. Wang, B. Kong, J. Bai, Y. Lu, Z. Fang, Q. Song, et al., Artificial intelligence distinguishes covid-19 from community acquired pneumonia on chest ct, Radiology (2020).
* [29] F. Shi, J. Wang, J. Shi, Z. Wu, Q. Wang, Z. Tang, K. He, Y. Shi, D. Shen, Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid-19, IEEE reviews in biomedical engineering (2020).
* [30] H. X. Bai, R. Wang, Z. Xiong, B. Hsieh, K. Chang, K. Halsey, T. M. L. Tran, J. W. Choi, D.-C. Wang, L.-B. Shi, et al., Ai augmentation of radiologist performance in distinguishing covid-19 from pneumonia of other etiology on chest ct, Radiology (2020) 201491.
* [31] S. Jégou, M. Drozdzal, D. Vazquez, A. Romero, Y. Bengio, The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation, in: CVPRW 2017, IEEE, 2017, pp. 1175–1183.
* [32] G. Huang, Z. Liu, L. Van Der Maaten, K. Q. Weinberger, Densely connected convolutional networks., in: CVPR, Vol. 1, 2017, p. 3.
* [33] O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical image computing and computer-assisted intervention, Springer, 2015, pp. 234–241.
* [34] S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, W.-c. Woo, Convolutional lstm network: A machine learning approach for precipitation nowcasting, in: NIPS, 2015.
* [35] J. Hu, L. Shen, G. Sun, Squeeze-and-excitation networks, arXiv preprint arXiv:1709.01507 7 (2017).
* [36] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, Grad-cam: Visual explanations from deep networks via gradient-based localization, in: 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 618–626.
* [37] J. Adebayo, J. Gilmer, M. Muelly, I. Goodfellow, M. Hardt, B. Kim, Sanity checks for saliency maps, in: S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett (Eds.), Advances in Neural Information Processing Systems 31, Curran Associates, Inc., 2018, pp. 9505–9515.
URL http://papers.nips.cc/paper/8160-sanity-checks-for-saliency-maps.pdf
* [38] S. Armato, G. McLennan, L. Bidaut, M. McNitt-Gray, C. Meyer, A. Reeves, H. MacMahon, R. Engelmann, R. Roberts, A. Starkey, P. Caligiuri, D. Aberle, M. Brown, R. Pais, D. Qing, P. Batra, C. Jude, I. Petkovska, A. Biancardi, B. Zhao, C. Henschke, D. Yankelevitz, D. Max, A. Farooqi, E. Hoffman, E. van Beek, A. Smith, E. Kazerooni, P. Bland, G. Laderach, G. Gladish, R. Munden, L. Quint, L. Schwartz, B. Sundaram, L. Dodd, C. Fenimore, D. Gur, N. Petrick, J. Freymann, J. Kirby, B. Hughes, A. Casteele, S. Gupte, M. Sallam, M. Heath, M. Kuhn, E. Dharaiya, R. Burns, D. Fryd, M. Salganicoff, V. Anand, U. Shreter, S. Vastagh, B. Croft, L. Clarke, The lung image database consortium, (lidc) and image database resource initiative (idri):: a completed reference database of lung nodules on ct scans, Medical Physics 38 (2) (2011) 915–931. doi:10.1118/1.3528204.
* [39] J. Hofmanninger, F. Prayer, J. Pan, S. Rohrich, H. Prosch, G. Langs, Automatic lung segmentation in routine imaging is a data diversity problem, not a methodology problem, arXiv preprint arXiv:2001.11767 (2020).
* [40] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, Imagenet: A large-scale hierarchical image database, in: 2009 IEEE conference on computer vision and pattern recognition, Ieee, 2009, pp. 248–255.
# Size, shade or shape? The contribution of galaxies of different types to the
star-formation history of the Universe from SDSS-IV MaNGA
Thomas Peterken,1 Alfonso Aragón-Salamanca,1 Michael Merrifield,1 Vladimir
Avila-Reese,2 Nicholas F. Boardman,3 Helena Domínguez Sánchez,4 Dmitry
Bizyaev,5 Niv Drory,6 Kaike Pan,5 Joel R. Brownstein7
1School of Physics and Astronomy, University of Nottingham, University Park,
Nottingham NG7 2RD, UK
2Instituto de Astronomía, Universidad Nacional Autónoma de México, A.P.
70–264, 04510 CDMX, México
3Department of Physics and Astronomy, University of Utah, Salt Lake City, UT
84112, USA
4Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Magrans,
E-08193 Barcelona, Spain
5New Mexico State University, Apache Point Observatory, P.O. Box 59, Sunspot,
NM 88349
6McDonald Observatory, The University of Texas at Austin, 1 University
Station, Austin, TX 78712, USA
7Department of Physics and Astronomy, University of Utah, 115 S. 1400 E., Salt
Lake City, UT 84112, USA E-mail<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
By fitting stellar populations to SDSS-IV MaNGA survey observations of $\sim
7000$ suitably-weighted individual galaxies, we reconstruct the star-formation
history of the Universe, which we find to be in reasonable agreement with
previous studies. Dividing the galaxies by their present-day stellar mass, we
demonstrate the downsizing phenomenon, whereby the more massive galaxies
hosted the most star-formation at earlier times. Further dividing the galaxy
sample by colour and morphology, we find that a galaxy’s present-day colour
tells us more about its historical contribution to the cosmic star-formation
history than its current morphology. We show that downsizing effects are
greatest among galaxies currently in the blue cloud, but that the level of
downsizing in galaxies of different morphologies depends quite sensitively on
the morphological classification used, due largely to the difficulty in
classifying the smaller low-mass galaxies from their ground-based images.
Nevertheless, we find agreement that among galaxies with stellar masses
$M_{\star}>6\times 10^{9}\,M_{\odot}$, downsizing is most significant in
spirals. However, there are complicating factors. For example, for more
massive galaxies, we find that colour and morphology are predictors of the
past star formation over a longer timescale than in less massive systems.
Presumably this effect is reflecting the longer period of evolution required
to alter these larger galaxies’ physical properties, but shows that
conclusions based on any single property do not tell the full story.
###### keywords:
galaxies: evolution
††pubyear: 2020
## 1 Introduction
The question of when and where the stars residing in today’s galaxies formed
is essential to understanding the present-day Universe. Since early works by
Madau et al. (1996), Connolly et al. (1997) and others, many studies have
measured the average instantaneous star-formation rate in galaxy populations
observed at different redshift snapshots to build a picture of the overall
cosmic star-formation history; see Madau & Dickinson 2014 for a comprehensive
review. These approaches have built up a picture of a two-phase star-formation
history of the Universe, with the mean star-formation rate per unit of
comoving volume rising rapidly during the first $1-2\,\textrm{Gyr}$ (i.e.
$z>4$) after the Big Bang, and declining in more recent times. The exact
lookback time at which the peak of cosmic star-formation occurred is
uncertain, but most sources agree that it is broadly in the range of $z=1-3$,
corresponding to approximately $8-11\,\textrm{Gyr}$ before the present day;
see analyses by Hopkins & Beacom (2006), Behroozi et al. (2013), and Madau &
Dickinson (2014), and references therein.
Within this picture, studies of galaxy populations at different redshifts have
demonstrated the link between the star-formation history of the Universe and
the evolution of its constituent galaxies. For example, many show that the
star-formation rate of galaxies with high stellar mass peaked and declined at
preferentially earlier times than low-mass galaxies (Cowie et al., 1996;
Fontanot et al., 2009; Peng et al., 2010; Muzzin et al., 2013), an effect
normally referred to as “downsizing”. Others have shown that a link between a
galaxy’s morphology and its contribution to the cosmic star-formation history
exists over a range of redshifts (Wuyts et al., 2011; Bell et al., 2012;
Cheung et al., 2012; Mortlock et al., 2013; Moresco et al., 2013), although
determining detailed morphologies at high redshift is of course difficult.
It has long been known that in the present-day Universe, spirals are generally
bluer in colour (e.g. Holmberg 1958) due to their higher rate of star
formation (e.g. Roberts 1963; Kennicutt 1983), and are less massive (Blanton &
Moustakas, 2009) than earlier-type galaxies, suggesting an evolutionary
sequence of galaxies transitioning from late to early types as they grow and
subsequently cease their star-formation. However, the exact mechanisms by
which galaxies are able to alter their shape and colour are still not fully
understood. One avenue to studying the typical evolution processes which have
occurred in galaxies could be to determine whether a present-day galaxy’s
colour (“colour” here refers to a broadband measure of the spectral energy
distribution in the optical region) or its morphology is the strongest
indicator of its past. It would then be possible to understand which property
transition — morphology or colour; “shape” or “shade” — is more fundamental,
and whether the associated timescales vary according to galaxy properties such
as stellar mass.
Unfortunately, by their nature, studies of galaxy populations at different
redshifts are limited to studying the average statistical behaviour of a
galaxy population’s star-formation at different snapshots in the Universe’s
history, and are therefore unable to trace how individual galaxies have
evolved. In order to understand the link between a galaxy’s present-day
properties and its past contribution to the cosmic star-formation history, an
alternative approach is therefore required.
Using large samples of galaxies from spectroscopic surveys, Panter et al.
(2003); Panter et al. (2007) and Heavens et al. (2004) showed that by
measuring galaxies’ individual star-formation histories using spectral fitting
techniques, the star-formation history of the Universe can be reconstructed.
Despite significant progress in more recent years, obtaining accurate non-
parametric star-formation histories of galaxies in this manner is still
subject to significant uncertainties, particularly those due to difficulties
in creating reliable stellar population templates using stellar evolution
models which are still poorly understood (Charlot et al., 1996; Maraston,
1998, 2005; Yi, 2003; Lee et al., 2007; Ge et al., 2019), and also due to
assumptions about the behaviour of the stellar initial mass function
(Maraston, 1998; van Dokkum et al., 2008; Pforr et al., 2012; Cid Fernandes et
al., 2014) and the treatment of stellar elemental abundances. See Conroy et
al. (2009, 2010), and Conroy & Gunn (2010) for a set of detailed reviews and
discussion on this important subject.
However, notwithstanding these difficulties, many studies have shown that non-
parametric stellar population fitting methods produce generally reliable
results (see e.g. Cid Fernandes et al. 2005; Panter et al. 2007; Sánchez et
al. 2016; Li et al. 2017; de Amorim et al. 2017; Ge et al. 2018; Cid Fernandes
2018; see also Appendix A of Peterken et al. 2020 for tests specific for the
spectral fitting methods used here), and these stellar population “fossil
record” methods have therefore been successfully applied to modern integral-
field spectroscopic galaxy surveys to uncover the history of low-redshift
galaxies and their physical components (see e.g. Cid Fernandes et al. 2013;
Pérez et al. 2013; Ibarra-Medel et al. 2016; González Delgado et al. 2016,
2017; Peterken et al. 2019; Peterken et al. 2020; García-Benito et al. 2019;
Fraser-McKelvie et al. 2019). Others have used spectral fitting analyses to
show that a galaxy’s observed colour (e.g. Ibarra-Medel et al. 2016; or
equivalently star-formation rate, e.g. Sánchez et al. 2019) and morphology
(e.g. García-Benito et al. 2017; López Fernández et al. 2018; Lacerna et al.
2020; Bellstedt et al. 2020) are directly linked to the historical evolution
of its star formation rate. In a cosmological context, López Fernández et al.
(2018) and Sánchez et al. (2019) have demonstrated the power of using fossil
record techniques to reconstruct how galaxies’ physical properties have
evolved over cosmic time, showing good agreement with redshift snapshot
studies, and thereby justifying this approach as a complementary analysis
technique to study the connection between the Universe’s evolution with its
present-day galaxies.
With its consistent radial coverage of a large sample of galaxies with varying
physical properties, the integral-field spectroscopic MaNGA survey (Bundy et
al., 2015) (part of the fourth generation of the Sloan Digital Sky Survey;
SDSS-IV; Blanton et al. 2017) offers an ideal tool to investigate the link
between today’s galaxies and the Universe’s past. In Peterken et al. (2021),
we explored how the stellar population fossil record can reveal the cosmic
evolution of the star-formation “main sequence” and the mass function of
galaxies. Here, we use the same measured star-formation histories of a large
sample of galaxies to derive the star-formation history of the Universe, and
make use of morphological information from both citizen science and machine
learning classifications to explore the connection between present-day stellar
mass, colour, and morphology to a galaxy’s star-formation rate evolution over
the age of the Universe.
This paper is structured as follows. In Section 2, we outline the relevant
details of the SDSS-IV MaNGA survey. We outline how the samples were selected
and describe the division of these samples into morphological and colour sub-
samples in Section 3. We then briefly summarise the spectral fitting methods
we employ to measure galaxy star-formation histories in Section 4. Section 5
contains the derivation of the cosmic star-formation history and its
contribution from galaxies of different present-day stellar masses (5.1),
colour (5.2), and morphological classifications (5.3). Finally, we interpret
how these results fit into a context of downsizing and discuss the relative
importance of morphology and broadband colour in Section 6.
Throughout this paper, we assume a flat $\Lambda$CDM cosmology with
$H_{0}=68\,\textrm{km}\,\textrm{s}^{-1}\,\textrm{Mpc}^{-1}$ and $\Omega_{\rm
m}=0.308$, consistent with Planck Collaboration et al. (2016).
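As a concrete check on the adopted cosmology, the conversion between redshift and lookback time used throughout can be reproduced by direct numerical integration of the Friedmann equation. The sketch below is a minimal stand-alone illustration (the function name `lookback_time_gyr` is ours, not part of any pipeline described here); it recovers the correspondence quoted later between the star-formation peak at $z\approx 0.67$ and a lookback time of roughly $10^{9.8}$ yr.

```python
import math

def lookback_time_gyr(z, h0=68.0, omega_m=0.308, n_steps=10000):
    """Lookback time in Gyr for a flat LambdaCDM cosmology, obtained by
    trapezoidal integration of dz' / [(1 + z') E(z')] from 0 to z."""
    omega_l = 1.0 - omega_m
    hubble_time_gyr = 977.79 / h0  # 1/H0 in Gyr for H0 in km/s/Mpc

    def integrand(zp):
        e_z = math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l)
        return 1.0 / ((1.0 + zp) * e_z)

    dz = z / n_steps
    total = 0.5 * (integrand(0.0) + integrand(z))
    for i in range(1, n_steps):
        total += integrand(i * dz)
    return hubble_time_gyr * total * dz

# z ~ 0.67 corresponds to a lookback time of ~6.3 Gyr (10^9.8 yr):
print(round(lookback_time_gyr(0.67), 1))  # → 6.3
```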
## 2 MaNGA
As part of the SDSS-IV, Mapping Nearby Galaxies at Apache Point Observatory
(MaNGA; Bundy et al. 2015) is an integral-field spectroscopic galaxy survey.
As of the time of writing, on-site operations have been completed, and by the
end of 2020 the fully-reduced observations will be available for over 10,000
low-redshift ($0.01<z<0.15$, median $z\sim 0.03$) galaxies with a spatial
resolution of 2.5 arcseconds (Yan et al., 2016b). Observations make use of
specially-designed integral field units of five sizes ranging from 12 to 32
arcsecond diameters with 19 to 127 fibres (Drory et al., 2015), which are
mounted onto plates on the 2.5-metre Sloan telescope at Apache Point
Observatory in New Mexico (Gunn et al., 2006). The fibres are fed into the
BOSS spectrographs (Smee et al., 2013) and the spectra are calibrated to
better than 5% accuracy (Yan et al., 2016a) covering a wavelength range of
$3600-10300\,\textrm{\AA}$ with a resolution of $R\approx 2000$. Observations
are designed to reach a minimum signal-to-noise ratio of
$5\,\textrm{\AA}^{-1}$ at $1.5\,R_{\rm e}$ (Law et al., 2015), where $R_{\rm
e}$ is the effective radius of each observed galaxy measured by the NASA-Sloan
Atlas (NSA; Blanton et al. 2011).
The calibrated spectra are reduced and combined into three-dimensional
datacubes by a custom data reduction pipeline (DRP; Law et al. 2016; Law et
al. in preparation), and a data analysis pipeline (DAP; Westfall et al. 2019;
Cherinka et al. 2019) provides data analysis products such as spectral index
maps, stellar and gas kinematics, and emission line fluxes (Belfiore et al.,
2019).
## 3 Sample selection
MaNGA targets galaxies with a flat distribution in log(stellar mass) over the
range of $10^{9}M_{\odot}<M_{\star}<10^{11.5}M_{\odot}$, and the full
targeting catalogue assigns galaxies to designated samples. The Primary sample
galaxies are observed to a radius of $1.5\,R_{\rm e}$, while the Secondary
sample contains observations to $2.5\,R_{\rm e}$. The Primary sample is
supplemented by a colour-enhanced sample to form the Primary+ sample, which
over-samples unusual regions of the stellar mass–colour plane such as high-
mass blue galaxies, low-mass red galaxies, and the “green valley” (Law et al.,
2015).
From the latest internal MaNGA data release (MaNGA Product Launch 9; MPL-9),
we selected all galaxies belonging to any of the Primary, Primary+, or
Secondary samples. In doing so, we required the MaNGA DRP to have assigned no
warning flags at all to any galaxy (i.e. drp3qual=0 for all galaxies). We also
require the DAP to have successfully modelled an emission spectrum cube for
each galaxy. These criteria produce a full sample of 6861 galaxies, of which
3255 belong to the Primary sample, 4342 to the Primary+ sample, and 2519 to
the larger-coverage Secondary sample.
### 3.1 Sample weightings
As a result of MaNGA’s sampling, none of the survey’s galaxy samples are
intrinsically volume-limited. However, since the sampling strategies are well-
defined, each galaxy can be assigned an appropriate weighting such that any
analysis can be performed on an effectively volume-limited sample (Wake et
al., 2017). An implementation of these weightings is described in detail by
Wake et al. (2017), but here — as in Peterken et al. (2021) — we use
weightings generated using the method implemented by Sánchez et al. (2019);
see also Rodriguez-Puebla et al. (2020) and Calette et al. (in preparation)
for further details. We chose the Sánchez et al. (2019) weightings because
they are more robust at lower stellar masses and more detailed in their
treatment of galaxy colour. However, since the
sample-weighting calculations are similar, we find that the results shown here
are unchanged when the Wake et al. (2017) sample weightings are used instead.
The adopted galaxy weights are reliable for galaxies with stellar mass
$M_{\star}>10^{9}\,M_{\odot}$, making this the limit above which each of the
properly-weighted samples are effectively volume-limited.
### 3.2 Comparison to single-fibre spectra
Figure 1: The weighted distributions of galaxy radii. Each galaxy is sampled
to $1.2\,R_{\rm e}$ (and $2.3\,R_{\rm e}$ for Secondary-sample galaxies),
resulting in a distribution of on-sky apertures. All samples probe to larger
angular apparent radii than SDSS single-fibre spectra (marked as a vertical
dashed line) would.
The weighted distributions of angular radii for each of the samples are shown
in Figure 1. In each sample, for the majority of galaxies the $1.2\,R_{\rm
e}$ radius threshold (as used in this work; see Section 4) is larger than the
$3\,\textrm{arcsec}$ radius probed by single-fibre SDSS spectra, highlighting
the extra information available with integral-field spectroscopic
observations. As well as including more of the total star-formation in the
Universe, the spectra we use here are measuring consistent radii in all
galaxies regardless of their observed redshift.
## 4 Spectral fitting
In Peterken et al. (2021), we describe how we implemented full-spectrum
fitting techniques to each Primary+ galaxy. We use an identical method here,
but now also include galaxies in the Secondary sample. To summarise: we removed
emission lines from each spaxel using the DAP’s emission-line spectrum, and
combined all spaxels’ spectra within $1.2\,R_{\rm e}$ of each galaxy after
removing line-of-sight velocities. We then fit a single spectrum of the
stellar component for each galaxy using Starlight, with a combination of 54
single stellar population (SSP) spectra from the E-MILES (Vazdekis et al.,
2016) and Asa’d et al. (2017) libraries as templates. We also assume a
Calzetti et al. (2000) dust extinction model and fit within the range
$3541.4\leq\lambda\leq 8950.4\,\textrm{\AA}$. Typical combined signal-to-noise
ratios for each galaxy within the fitting range are at least $\sim 500$.
Further description of the fitting method can be found in Peterken et al.
(2021), and we also refer the reader to Peterken et al. (2020) for a full
assessment of its reliability.
As well as fitting the spectrum of each galaxy sampled to $1.2\,R_{\rm e}$, we
also repeat the above fitting method using all spaxels of all Secondary
galaxies sampled to $2.3\,R_{\rm e}$. We therefore obtained 9380 individual
fits; one for each Primary+ galaxy, and two for each Secondary galaxy (sampled
to each radius limit). The two aperture limits were chosen to balance the
inclusion of as much of the MaNGA field of view as possible, while avoiding
overlap with the hexagonal IFU edges which might contaminate and bias results
(see e.g. Ibarra-Medel et al. 2016).
We then obtain a $0.2\,\textrm{dex}$ time-smoothed star-formation history of
each of the fits using the SSP weights assigned by Starlight using the method
described in Peterken et al. (2021). In smoothing, the Starlight-derived star-
formation history (in units of $M_{\odot}\,\textrm{yr}^{-1}$) is first
resampled at 250 lookback times which are evenly spaced in log(time) over the
range of stellar populations used in fitting, and then convolved in the
log(time) axis with a Gaussian function of width $0.2\,\textrm{dex}$.
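The convolution step above can be sketched as follows. This is a minimal pure-Python illustration of a Gaussian smoothing in the log(time) axis only (the function name `smooth_sfh_logtime` is ours); the prior resampling onto 250 log-spaced lookback times and the exact handling of the Starlight SSP weights are omitted.

```python
import math

def smooth_sfh_logtime(log_ages, sfr, sigma_dex=0.2):
    """Convolve a star-formation history, sampled on a grid of
    log10(lookback time), with a Gaussian of width sigma_dex in the
    log-time axis. The kernel weights are renormalised at every point
    so that the edges of the grid are handled gracefully."""
    smoothed = []
    for x_i in log_ages:
        weights = [math.exp(-0.5 * ((x_j - x_i) / sigma_dex) ** 2)
                   for x_j in log_ages]
        norm = sum(weights)
        smoothed.append(sum(w * s for w, s in zip(weights, sfr)) / norm)
    return smoothed

# Example: a burst confined to one log-time bin is spread over neighbours.
log_ages = [7.0 + 0.1 * i for i in range(31)]  # 10^7 to 10^10 yr, 0.1 dex steps
sfr = [0.0] * 31
sfr[15] = 10.0                                 # single burst
sm = smooth_sfh_logtime(log_ages, sfr)
```

After smoothing, the history still peaks in the burst bin, but its maximum is reduced and adjacent bins pick up non-zero star formation, mirroring the broadened peaks discussed in Section 5.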
### 4.1 Comparison of measured star-formation rates
Figure 2: The present-day star-formation rates measured from H$\alpha$
emission, $\textrm{SFR}_{\rm H\alpha}$, are consistent with the Starlight-
derived total star-formation rate over the last $100\,\textrm{Myr}$,
$\textrm{SFR}_{{\rm SSP},\,(t\leq 100\,{\rm Myr})}$ (top). The NSA $u-i$
broad-band colours are also closely linked to the
present-day specific star formation rate (bottom). Coloured contours indicate
lines of 20%, 40%, 60%, and 80% of the peak density for each of the unweighted
samples.
To establish the trustworthiness of the spectral fitting methods, we compare
the measured outputs with known results obtained through established analyses.
We find that the total stellar mass measurements for each galaxy agree well
with those determined photometrically by the NASA-Sloan Atlas (see Peterken et
al. 2020).
We are also able to compare the present-day star-formation rates of each
galaxy measured using the Starlight-derived star-formation histories with
those calculated entirely independently from H$\alpha$ fluxes. For each
galaxy, we corrected the DAP’s map of Gaussian-modelled H$\alpha$ emission
line flux using the Balmer decrement, assuming an intrinsic value of $f_{\rm
H\alpha}/f_{\rm H\beta}=2.87$ (corresponding to electron temperature $T_{\rm
e}=10^{4}$ K and density $n_{\rm e}=10^{2}\,\textrm{cm}^{-3}$ under Osterbrock & Ferland
(2006) “Case B” recombination) and a Calzetti et al. (2000) reddening curve
with $R_{V}=3.1$. Note that in the stellar population modelling in Section 4,
we assume $R_{V}=4.05$, but the lower value is used when considering the
emission lines; see Catalán-Torrecilla et al. (2015) and Greener et al. (2020)
for an explanation on the difference between dust corrections applied to
stellar and gas spectra. These calculations and corrections are detailed fully
by Greener et al. (2020).
We then summed the flux from all spaxels within $1.2\,R_{\rm e}$ (and
$2.3\,R_{\rm e}$ for the Secondary sample) which have an emission-line signal-
to-noise ratio of at least 10 and calculated the star-formation rate
$\textrm{SFR}_{\rm H\alpha}$ using the relation described by Kennicutt (1998).
For consistency with the SSP templates used in fitting with Starlight, we
assume a Chabrier (2003) IMF for this calculation.
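The Balmer-decrement dust correction described above can be sketched as follows. This is an illustrative stand-alone helper (the name `dust_corrected_flux` is ours); the default reddening-curve values `k_ha` and `k_hb` are representative numbers only, not necessarily the exact Calzetti et al. (2000) $R_V=3.1$ values used in the analysis, and the subsequent Kennicutt (1998) SFR calibration is omitted.

```python
import math

def dust_corrected_flux(f_ha_obs, f_hb_obs, k_ha=2.53, k_hb=3.61,
                        intrinsic_ratio=2.87):
    """Correct an observed H-alpha flux for dust using the Balmer
    decrement, assuming an intrinsic H-alpha/H-beta ratio of 2.87
    ("Case B"). k_ha and k_hb are reddening-curve values at the two
    line wavelengths (illustrative defaults; the appropriate values
    depend on the adopted curve and R_V)."""
    ratio_obs = f_ha_obs / f_hb_obs
    # Colour excess from the departure of the observed Balmer ratio
    # from the intrinsic one (clamped at zero for dust-free ratios):
    e_bv = max(0.0,
               2.5 / (k_hb - k_ha) * math.log10(ratio_obs / intrinsic_ratio))
    # Apply the extinction correction at the H-alpha wavelength:
    return f_ha_obs * 10 ** (0.4 * k_ha * e_bv)

# No dust: the observed ratio equals the intrinsic 2.87, so the
# H-alpha flux is returned unchanged.
print(dust_corrected_flux(2.87, 1.0))  # → 2.87
```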
We also calculate a present-day star-formation rate from Starlight
$\textrm{SFR}_{{\rm SSP},\,(t\leq 100\,{\rm Myr})}$ by extracting from the
smoothed star-formation histories the total mass added to each galaxy over the
last $100\,\textrm{Myr}$. By plotting the offset between $\textrm{SFR}_{\rm
H\alpha}$ and $\textrm{SFR}_{{\rm SSP},\,(t\leq 100\,{\rm Myr})}$ as a
function of $\textrm{SFR}_{{\rm SSP},\,(t\leq 100\,{\rm Myr})}$, we show in
Figure 2 that these two measurements of the ongoing star-formation rate are
broadly comparable despite the H$\alpha$ emission having been removed from the
spectra prior to fitting. The offset from the line of equality is to be
expected given that the two measurements are using completely different
approaches and calibrations, contain differing systematics, and are sensitive
to different timescales of star-formation. For the Primary+ sample, we find
that the relationship between the two measurements can be described by
$\log\left(\frac{\textrm{SFR}_{\rm H\alpha}}{\textrm{SFR}_{{\rm SSP},\,(t\leq
100\,{\rm Myr})}}\right)=-0.5\log\left(\textrm{SFR}_{{\rm SSP},\,(t\leq
100\,{\rm Myr})}\right)-0.1$ (1)
with a root mean square deviation of $\sim 0.7\,\textrm{dex}$ from this best-
fit line. Comparable relationships are found for the other samples. The
relatively small offset does not affect any conclusions, reassuring us that the
stellar population fits are reliable.
We obtain Starlight-derived specific star-formation rates $\textrm{sSFR}_{{\rm
SSP},\,(t\leq 100\,{\rm Myr})}$ by calculating the ratio of
$\textrm{SFR}_{{\rm SSP},\,(t\leq 100\,{\rm Myr})}$ to the total Starlight-
measured stellar mass contained within each present day galaxy. We also show
in Figure 2 that such specific star-formation rates are closely correlated
with a galaxy’s $u-i$ NSA broadband colour, with a best-fit relation in the
Primary+ sample described by
$u-i=-0.3\log\left(\textrm{sSFR}_{{\rm SSP},\,(t\leq 100\,{\rm
Myr})}\right)-1.7$ (2)
which has a root mean square deviation of $\sim 0.4\,\textrm{mag}$. As before,
comparable relationships also exist for the other galaxy samples. This link is
unsurprising, as the redder $i$ band is most sensitive to the low-mass stars
which comprise the bulk of a galaxy’s total stellar mass, while the $u$ band
is more sensitive to bluer stars and is therefore indicative of recent star-
formation, making $u-i$ a suitable proxy for specific star-formation rate.
## 5 The star-formation history of the Universe
Having obtained individual star-formation histories for each galaxy, the
cosmic star-formation history can be constructed. To do so, the individual
star-formation histories measured for each galaxy must be carefully weighted
and combined. In combining galaxy star-formation histories, we account for the
lookback time due to the observed redshift of each galaxy. Each star-formation
history’s age sampling is shifted onto a lookback-time sampling by adding the
lookback time corresponding to the galaxy’s observed redshift. The galaxy
star-formation histories are
then combined and interpolated onto a common sampling of lookback times.
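The shift-and-combine step can be sketched as follows. This is a minimal illustration under simplifying assumptions (simple linear interpolation, zero star formation outside each galaxy's coverage; `combine_sfhs` is our name, not a pipeline function), not the actual implementation.

```python
def combine_sfhs(galaxies, common_lookback):
    """Co-add per-galaxy star-formation histories onto a common grid of
    lookback times (Gyr). Each galaxy is a dict with 'ages' (stellar
    ages, Gyr), 'sfr' (M_sun/yr at those ages), 'lookback' (lookback
    time to its observed redshift, Gyr) and 'weight' (sample weight)."""
    def interp(x, xs, ys):
        # Linear interpolation, returning zero outside the sampled range.
        if x < xs[0] or x > xs[-1]:
            return 0.0
        for j in range(1, len(xs)):
            if x <= xs[j]:
                frac = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
                return ys[j - 1] + frac * (ys[j] - ys[j - 1])
        return ys[-1]

    total = [0.0] * len(common_lookback)
    for g in galaxies:
        # Stellar age -> lookback time: add the lookback time of the
        # galaxy's observed redshift to its age axis.
        shifted = [a + g['lookback'] for a in g['ages']]
        for k, t in enumerate(common_lookback):
            total[k] += g['weight'] * interp(t, shifted, g['sfr'])
    return total
```

For example, a galaxy observed at a lookback time of 0.5 Gyr with stellar ages sampled at 0, 1 and 2 Gyr contributes to the common grid at lookback times 0.5, 1.5 and 2.5 Gyr, and contributes nothing at times more recent than 0.5 Gyr.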
Figure 3: Fractional cumulative completeness in sample-weighted stellar mass
as a function of lookback time, due to galaxies’ observed redshifts. The
Secondary sample contains galaxies at higher redshifts than the Primary and
Primary+ samples. The lookback times corresponding to a 90% mass completeness
for each sample are given in the legend, and are illustrated by vertical
dotted lines. We do not sample star-formation histories at lookback times
below these limits.
Since the Primary(+) and Secondary samples are selected from different
redshift distributions (Wake et al., 2017) — as shown in Figure 3 — any
derived star-formation history of the Universe will only be able to probe down
to a specific limit in lookback times, depending on which sample is used.
Since the distribution of galaxy redshifts within any sample is dependent on
the galaxy mass, we only measure cosmic star-formation histories at lookback
times greater than that for which each sample has at least 90% completeness in
sample-weighted mass. The specific adopted limits are shown in the legend in
Figure 3.
Figure 4: The star-formation history of the Universe determined using the
different galaxy samples (solid coloured lines). The Primary and Primary+
curves agree so closely that they are difficult to distinguish. Also shown is
the Madau & Dickinson (2014) best-fit parametric
function to the cosmic star-formation history derived from galaxy redshift
studies (black dotted line). For consistency, this comparison study has been
smoothed according to our treatment of each individual galaxy’s star-formation
history.
The cosmic star-formation history measured from each sample is shown in
Figure 4. We show that the calculated star-formation history of
the Universe measured within 1.2 $R_{\rm e}$ of each galaxy agrees well at all
lookback times regardless of the MaNGA sample used, and that all samples show
a peak cosmic star-formation history at a lookback time of $10^{9.80\pm
0.01}\,\textrm{years}$ (corresponding to $z\approx 0.67\pm 0.02$).
Figure 4 also shows the best-fit function derived by Madau & Dickinson (2014)
of the cosmic star-formation history measured through galaxy redshift studies.
To ensure a like-for-like comparison, we calculated the mass weights which
would be assigned to each SSP template age used in our fitting method based on
Equation 15 of Madau & Dickinson (2014), by integrating the curve within a box
centred in log-space on each SSP’s nominal age, to obtain a raw star-formation
history as might ideally be measured using the stellar population fitting
methods described in Section 4. We then smoothed the modelled SSP weights
using the procedure described in Section 4 and in Peterken et al. (2021). This
smoothing procedure results in the smoothed profile showing a broader peak in
the cosmic star-formation history which occurs at a lower redshift (at $z\sim
1$ rather than $z\sim 2$) compared to the unsmoothed Madau & Dickinson (2014)
best-fit function (not shown).
We find that the smoothed Madau & Dickinson (2014) star-formation history is
in general agreement with the results obtained through the completely
independent approach performed here — with particularly strong agreement at
lookback times less than $\sim 6\,\textrm{Gyr}$ ($z\sim 0.63$) — lending
further evidence that such a fossil record analysis is trustworthy. We suggest
that the broader peak obtained here is due to having co-added multiple star-
formation histories which have each been smoothed to $0.2\,\textrm{dex}$
rather than simply smoothing a single function by $0.2\,\textrm{dex}$ in the
comparison measurement. We argue that this effect could also partly explain
the difference in lookback times to the peak in cosmic star formation,
although the observed difference is small given the entirely independent
approaches of the two methods. Our results also show good quantitative
agreement with the cosmic star-formation histories obtained through fossil-
record analyses performed by both López Fernández et al. (2018) and Sánchez et
al. (2019), and also with the earlier best-fit parametric models to
observational redshift studies determined by Behroozi et al. (2013) and
Hopkins & Beacom (2006).
If the measured lookback time corresponding to the peak of cosmic star-
formation was dictated primarily by artefacts of the SSP templates or of the
fitting method, we would expect to find that the peak in each galaxy’s
individual star-formation history might be biased towards a certain stellar
population age. In such a case, the derived cosmic star-formation history
measured in each sample would therefore display its peak at different lookback
times once galaxies’ observed redshifts are accounted for, since each MaNGA
sample targets galaxies from different redshift distributions. The close match
between star-formation histories shown in Figure 4 therefore shows that the
signal being measured is intrinsic to the data rather than being the product
of artefacts. Indeed, we find that the agreement between the star-formation
histories measured using each MaNGA sample is only seen when the galaxy
redshifts are taken into account, and that any remaining disagreement is
smaller than the difference in median lookback times of each sample’s galaxy
population.
We find that increasing the aperture from $1.2\,R_{\rm e}$ to $2.3\,R_{\rm e}$
using the Secondary sample results in a cosmic star-formation history which is
greater at all lookback times than Madau & Dickinson (2014)’s best fit to
cosmological results. This excess could either be due to overly-conservative
aperture corrections or to surface brightness limits causing cosmological
studies to have underestimated the total star-formation rates at different
redshifts.
Figure 5: The fractional excess in measured cosmic star formation at all
lookback times from the Secondary sample when the aperture is increased from
$1.2\,R_{\rm e}$ to $2.3\,R_{\rm e}$. At more recent times, the fractional
increase is larger, indicative of inside-out growth occurring in most
galaxies.
Having derived the cosmic star-formation history using the Secondary-sample
galaxies with radius limits of $1.2\,R_{\rm e}$ and $2.3\,R_{\rm e}$
separately, we are able to quantify what extra fraction of star-formation is
included by increasing the FOV diameter by a factor of $\sim 1.9$. Comparison
between the two Secondary-derived cosmic star-formation histories of Figure 4
shows that the effect of the increased FOV results in a 35–40% enhancement in
the measured star-formation histories at lookback times $\gtrapprox
5\,\textrm{Gyr}$, increasing to a 55% enhancement by $\sim 1.5\,\textrm{Gyr}$,
as illustrated in Figure 5. This increase over time is indicative of inside-
out growth resulting in a greater star formation contribution by the galaxy
outskirts at more recent times, as we explored for spiral galaxies in Peterken
et al. (2020); see also Pérez et al. (2013); Ibarra-Medel et al. (2016);
García-Benito et al. (2017); Goddard et al. (2017). It is interesting to note
that even at recent times, the ratio in star-formation rate between the larger
and smaller apertures ($\sim 1.5$) is smaller than the corresponding ratio of
sky coverage area ($\sim 3.7$), showing that star-formation density is still
greatest at galactic centres on average despite the effect of inside-out
growth.
### 5.1 Size: effects of present-day stellar mass
(We are not using “size” here in its usual meaning of a galaxy’s physical or
apparent radius or diameter; rather, we mean its stellar mass. Still, such a
simple substitution is suitable to sufficiently sustain a satisfactory
sibilance with subsequent subheadings.)
Figure 6: The star-formation history of the Universe stratified by its
contributions from galaxies in different present-day stellar mass bins, in
absolute star-formation rates (top) and by the fractional contribution from
each mass bin (bottom). Shown is for the Primary+ sample (red line), but all
other samples are similar. Higher-mass galaxies have become less dominant in
the more recent Universe and vice-versa.
Having obtained the cosmic star-formation history, we now begin to explore the
connection between a galaxy’s present-day physical properties with its
historical contribution to the Universe’s star-formation. We split the galaxy
sample into five discrete bins of present-day photometrically-measured stellar
mass $M_{\star\,{\rm NSA}}$ such that each bin contains an equal (unweighted)
number of Primary+-sample galaxies. The thresholds of these bins and the star-
formation history of the Universe split into contributions of these mass bins
are shown in Figure 6. We have shown the stellar-mass breakdown for the
Primary+ sample within $1.2\,R_{\rm e}$, but all other samples’ are similar.
We see that the star-formation contribution from higher-mass galaxies becomes
less significant at more recent times. For example, galaxies with present-day
stellar mass $M_{\star}>10^{10.29}\,M_{\odot}$ dominated the cosmic star
formation until $\sim 2\,\textrm{Gyr}$ ago, but contribute only $\sim 50\%$ in
the local Universe. We have therefore recovered the known observational
effects of downsizing.
Figure 7: The lookback time to the peak of star-formation $t_{\rm peak}$ as a
function of present-day stellar mass $M_{\star\,{\rm NSA}}$, using each of the
MaNGA samples. The solid lines indicate the sample-weighted median values, and
the shaded regions lie between the one-third and two-thirds percentiles.
Greyscale-shaded background regions indicate the mass bins used in Figure 6
and elsewhere.
A similar perspective of downsizing is gained by measuring the lookback time
at which each individual galaxy reached its maximum star-formation rate. The
volume-weighted median lookback time $t_{\rm peak}$ of peak star-formation in
galaxies as a function of their present-day stellar mass $M_{\star}$ is shown
in Figure 7. We find that the peak in star formation typically occurred at
$t_{\rm peak}=8\pm 1\,\textrm{Gyr}$ ago (corresponding to $z\approx 1.1\pm
0.3$) in the galaxies with the highest present-day stellar masses in the
samples ($M_{\star}>10^{11}\,M_{\odot}$), falling to $t_{\rm peak}\lessapprox
2\,\textrm{Gyr}$ ($z\lessapprox 0.2$) for galaxies with present-day stellar
masses of $M_{\star}<3\times 10^{9}\,M_{\odot}$. We also see some signs of
downsizing effects being strongest among low-mass galaxies where the gradient
$\Delta t_{\rm peak}/\Delta M_{\star}$ is greatest.
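The two quantities used above — the lookback time at which a galaxy's star formation peaks, and its sample-weighted median across a mass bin — can be sketched with small helpers (illustrative names `t_peak` and `weighted_median`, ours; the medians quoted in the text are computed from the full smoothed histories and survey weights).

```python
def t_peak(lookback_times, sfr):
    """Lookback time at which a galaxy's star-formation history
    reaches its maximum rate."""
    return max(zip(sfr, lookback_times))[1]

def weighted_median(values, weights):
    """Sample-weighted median: the smallest value at which the
    cumulative weight reaches half the total weight."""
    pairs = sorted(zip(values, weights))
    half = 0.5 * sum(weights)
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= half:
            return v
    return pairs[-1][0]

# A heavily weighted galaxy dominates the median of its bin:
print(weighted_median([1.0, 2.0, 3.0], [10.0, 1.0, 1.0]))  # → 1.0
```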
By measuring the lookback time by which galaxies had built the bulk of their
present-day stellar mass, Ibarra-Medel et al. (2016), García-Benito et al.
(2017), and Peterken et al. (2020) have previously found that low-mass
galaxies typically show larger variation in the characteristic formation times
than high-mass galaxies. However, we find here that the scatter in $t_{\rm
peak}$ remains approximately constant at $\sim 0.3\,\textrm{dex}$ over all
stellar masses. The two results are not incompatible: we interpret such an
apparent dichotomy as indicative of low-mass galaxies having greater variation
in the rate of decline in star-formation after their common peak time.
Figure 7 again shows strong agreement between each MaNGA sample. We therefore
present hereafter only results measured using the Primary+ sample, but results
for other samples are similar throughout.
### 5.2 Shade: effects of present-day colour
By splitting each galaxy sample by the galaxies’ locations on the stellar
mass–colour plot, we are also able to investigate the effect of present-day
colour on a galaxy’s contribution to the cosmic star-formation history.
#### 5.2.1 Colour classifications
Figure 8: Mass–colour plane of the Primary+-sample galaxies, with galaxies
classified according to their colour as used in Section 5.2. Galaxy stellar
masses $M_{\star\,\textrm{NSA}}$ and $\left(u-i\right)_{\textrm{NSA}}$
broadband colours are NSA measurements. Shaded backgrounds indicate the bins
in stellar mass in Figure 6 and elsewhere.
We separate galaxies by their colours according to the following criteria:
* •
Red sequence: $\left(u-i\right)_{\rm NSA}>0.25\log\left(M_{\star\,{\rm
NSA}}\right)-0.2$
* •
Green valley:
$0.25\log\left(M_{\star\,{\rm NSA}}\right)-0.5\leq\left(u-i\right)_{\rm
NSA}\leq 0.25\log\left(M_{\star\,{\rm NSA}}\right)-0.2$
* •
Blue cloud: $\left(u-i\right)_{\rm NSA}<0.25\log\left(M_{\star\,{\rm
NSA}}\right)-0.5$
where the galaxies’ stellar masses $M_{\star\,{\rm NSA}}$ (measured in
$M_{\odot}$) and $\left(u-i\right)_{\rm NSA}$ broadband colours are taken from
the NSA (Blanton et al., 2011) photometry-derived measurements. (Note that a
galaxy’s broadband colour is affected by its inclination due to increased
internal dust extinction. We do not attempt to correct for this effect, so
that our measurements resemble those of other studies of galaxy populations as
closely as possible.) These thresholds and the subsequent classifications of
Primary+ galaxies are illustrated in Figure 8.
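For concreteness, the colour cuts above can be expressed as a short function. This is an illustrative sketch only; the function and argument names are ours and do not correspond to any released code from this work:

```python
def colour_class(log_mstar_nsa, u_minus_i_nsa):
    """Classify a galaxy by its NSA (u-i) colour and stellar mass.

    log_mstar_nsa : log10 of the NSA stellar mass in solar masses.
    u_minus_i_nsa : NSA (u-i) broadband colour.
    Returns "red sequence", "green valley", or "blue cloud".
    """
    red_cut = 0.25 * log_mstar_nsa - 0.2   # lower boundary of the red sequence
    blue_cut = 0.25 * log_mstar_nsa - 0.5  # upper boundary of the blue cloud
    if u_minus_i_nsa > red_cut:
        return "red sequence"
    if u_minus_i_nsa < blue_cut:
        return "blue cloud"
    return "green valley"
```

For a $10^{10}\,M_{\odot}$ galaxy the boundaries fall at $u-i = 2.3$ and $2.0$, so the green valley is a fixed-width (0.3 mag) band that tilts with stellar mass.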
Figure 9: Comparison of galaxies with colour classifications defined in
Section 5.2 (left panels, based on $\left(u-i\right)_{\rm NSA}$ broadband
colours) and the specific star-formation rate classifications defined in
Peterken et al. (2021) (right panels) in the $\left(u-i\right)_{\rm
NSA}$–$M_{\star\,{\rm NSA}}$ plane (top panels) and $\textrm{SFR}_{\rm
SL}$–$M_{\star\,{\rm SL}}$ (bottom panels) plane at a lookback time of
$t=0.68\,\textrm{Gyr}$ for Primary+ galaxies. The classifications of blue
cloud, green valley, and red sequence are analogous to classifications of
star-forming, retiring, and retired galaxies respectively. Dashed lines in the
top panels indicate the boundaries used to separate blue cloud, green valley,
and red sequence galaxies, and those in the bottom panels indicate those used
to separate star-forming, retiring, and retired galaxies.
We showed in Section 4.1 and Equation 2 that broadband colour and specific
star-formation rate are closely correlated. To demonstrate how the above
colour classifications are linked to star-formation properties, we have also
split the galaxy sample into star-forming, retiring, and retired populations
using thresholds of $\textrm{sSFR}_{{\rm SL}\left(t=0.68\,{\rm
Gyr}\right)}=10^{-11}$ and $10^{-12}\,\textrm{yr}^{-1}$ — as used in Peterken
et al. (2021) — where $\textrm{sSFR}_{{\rm SL}\left(t=0.68\,{\rm Gyr}\right)}$
is the ratio of the average Starlight-measured instantaneous star-formation
rate to the Starlight-measured instantaneous stellar mass $M_{\star\,{\rm
SL}}$ at a lookback time of $t=0.68\,{\rm Gyr}$, which is the lowest lookback
time measurable with the Primary+ sample as defined in Figure 3. Figure 9
shows a direct comparison of galaxies’ classifications under both schemes on
the $\left(u-i\right)_{\rm NSA}$–$M_{\star\,{\rm NSA}}$ plane (using NSA-
measured colours $\left(u-i\right)_{\rm NSA}$ and stellar masses
$M_{\star\,{\rm NSA}}$) and on the $\textrm{SFR}_{\rm SL}$–$M_{\star\,{\rm
SL}}$ plane at a lookback time of $t=0.68\,\textrm{Gyr}$ (using Starlight-
derived star-formation rates SFRSL and stellar masses $M_{\star\,{\rm SL}}$).
We find that most star-forming galaxies map onto the blue cloud region, most
retiring galaxies onto the green valley region, and most retired galaxies onto
the red sequence region, and vice versa, showing that the $u-i$ colour
classifications defined above are broadly analogous to classifications of star
formation, thereby linking the observed measurements to physical galaxy
properties. (This strong link is by construction; we chose the $u-i$ broadband
colour in Section 4.1 as a proxy for specific star-formation rate due to the
sensitivity of $u$ to recent star formation and of $i$ to the total stellar
mass. Classifications based on broadband colours at longer wavelengths, e.g.
$i-z$, will not be so closely linked to classifications of specific star-
formation rate.)
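The sSFR-based classification described above reduces to two fixed thresholds on the ratio of star-formation rate to stellar mass. A minimal sketch (names are ours; the handling of values exactly at a threshold is not specified in the text and is an assumption here):

```python
def sfr_class(sfr, mstar):
    """Classify a galaxy by specific star-formation rate, sSFR = SFR / M*.

    sfr   : instantaneous star-formation rate in M_sun per yr.
    mstar : stellar mass in M_sun.
    Thresholds of 1e-11 and 1e-12 per yr follow Peterken et al. (2021).
    """
    ssfr = sfr / mstar
    if ssfr > 1e-11:
        return "star-forming"
    if ssfr < 1e-12:
        return "retired"
    return "retiring"
```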
Having demonstrated the close correspondence between broadband colour and
specific star-formation rate, it would be feasible to investigate how either
of these properties relates to a galaxy’s history. Ibarra-Medel et al. (2016) showed
that colour and specific star-formation rate have similar effects on the star-
formation history of galaxies. However, since colour is a more readily-
measurable property of a galaxy, we will continue to investigate the role of
present-day broadband colour rather than present-day specific star-formation
rate in the analysis shown here, for more straightforward application of the
results to other studies of galaxy populations.
#### 5.2.2 Present-day colour and the star-formation history of the Universe
Figure 10: The star-formation history of the Universe inferred from the
Primary+ sample, separated into its contributions from galaxies belonging to
each galaxy classification (indicated by colours) defined in Section 5.2.1.
Each colour classification is also stratified by present-day stellar mass
(darkest to lightest shades for most to least massive galaxies) within each
broadband colour classification, using the mass bins of Section 5.1. The upper
panel shows the total contribution to the star-formation history, while the
lower panel shows the fractional contributions to the total star-formation
history.
Figure 10 shows the Primary+-derived star-formation history of the Universe
showing the relative contributions from galaxies classified as currently
belonging to the blue cloud, the green valley, and the red sequence as defined
above. There is a strong correlation between a galaxy’s present-day colour and
its past contribution to the total star formation, with the blue cloud
contributing only 20% at early times and rising to more than 70% at the
present day. We recover how galaxies which are currently exhibiting very low
levels of star formation were once the dominant source of new stars.
Figure 11: The absolute (top row) and fractional (middle row) contributions to
the star-formation history of the Universe of galaxies of different present-
day colour classifications (colours) in each mass bin (columns; transparencies
equivalent to those used in Figure 10), and the relative contribution of each
colour classification to each mass bin’s star-formation history (bottom row).
However, stellar mass and colour classification are not independent galaxy
properties, as can be seen in Figure 8. Specifically, high-mass galaxies are
more likely to belong to the red sequence and vice-versa. To determine how the
effect of present-day colour varies with a galaxy’s stellar mass, we show the
relative contribution of each colour classification to each mass bin’s star-
formation history in Figure 11. We find that the blue cloud’s increase in
contribution to the star-formation history between lookback times of 10 and 1
Gyr varies from a $\sim 20$% increase in the highest-mass bin to a $\sim 40$%
increase in the lowest-mass bins; a galaxy’s colour is therefore more strongly
correlated with its historical contribution to the star-formation history in
low-mass than in high-mass galaxies. (The discrepancy between these mass-bin
values and the $\sim 50$% overall increase in the blue cloud’s contribution is
due to the interdependence of colour and stellar mass: lower-mass bins, which
contain a larger fraction of blue cloud galaxies, become more dominant at more
recent times.)
We additionally see some evidence for the colour designations reflecting
shorter star-formation timescales in low-mass galaxies than in their high-mass
counterparts. The most massive blue cloud galaxies began increasing their
contribution to that mass bin’s star-formation history $\sim 6\,\textrm{Gyr}$
ago to reach its current level $\sim 2\,\textrm{Gyr}$ ago, while the lowest-
mass bin’s blue cloud only started to become more dominant $\sim
2\,\textrm{Gyr}$ ago and appears to still be increasing to the present-day.
We find that galaxies currently in the green valley have contributed an
approximately constant 20% to the star-formation history in all mass bins.
Such a consistency of the green valley’s contribution between mass bins is to
be expected assuming that the timescale of star-formation transition is
independent of stellar mass. However, the constancy of the present-day green
valley’s contribution over the last $10\,\textrm{Gyr}$ indicates a complex
picture. If these galaxies are those caught in the act of a rapid “quenching”
transition from the blue cloud to the red sequence or a rapid rejuvenation
transition in the opposite direction, it would be expected that their
historical contributions to the cosmic star-formation history should be
similar to either the blue cloud’s or the red sequence’s until recent times.
The fact that this does not hold true could hint towards the possibility that
the galaxies designated as belonging to the green valley using the colour
criteria above contain a mix of quenching and rejuvenating galaxies. However,
further analysis of the individual star-formation histories of galaxies
currently in the green valley shows that this is not the case: most show
declining levels of star formation over the last Gyr, and there is no
significant population of rejuvenating green valley galaxies. Instead, it
seems that most present-day green valley galaxies have always had low but
sustained levels of star formation, with star-formation histories which have
generally followed the star-formation history of the Universe as a whole.
While we do not speculate here as to a possible physical explanation for this
observation, it might reflect the slower retiring processes proposed by
Schawinski et al. (2014), for example. Possible drivers of such a “slow
quenching” scenario are thought to be predominantly internal processes (Martig
et al., 2009; Fang et al., 2013; Bluck et al., 2014; Smethurst et al., 2018;
Das et al., 2021). There is observational evidence to suggest that cessation
of star-formation occurs on long (several Gyr) timescales in a significant
number of galaxies, including ellipticals (see e.g. Smethurst et al. 2015,
2017; Belfiore et al. 2018; Lacerna et al. 2020), although other studies argue
otherwise (see e.g. Bremer et al. 2018).
Figure 12: The absolute (top row) and fractional (middle row) contributions to
the star-formation history of the Universe of galaxies of different stellar
masses in each colour classification, and the relative contribution of
galaxies with different stellar mass to each colour classification’s star-
formation history (bottom row).
By considering how galaxies of each colour classification in the five bins of
stellar mass used in Section 5.1 have contributed to the cosmic star-formation
history, we see in Figure 10 some signs of downsizing effects being present in
all colour classifications; low-mass galaxies of all colours become relatively
more dominant at smaller lookback times. To measure how downsizing effects
vary between colour classifications in this way, we also show the relative
contribution of galaxies in each mass bin to each colour classification’s
star-formation history in Figure 12. We find evidence for downsizing having
occurred most strongly in galaxies currently in the blue cloud, with galaxies
in the two lowest-mass bins increasing their contribution by 20% in the red
sequence and 30% in the blue cloud, with the green valley lying in between.
Figure 13: The lookback time to the peak star formation rate of galaxies
currently belonging to each of the colour classifications. Solid lines
indicate median values, and shaded regions lie between the one- and two-third
percentiles. Blue cloud galaxies peaked in star formation more recently and
show stronger downsizing effects than red sequence galaxies.
This difference in the strength of the downsizing effect seen in galaxies of
different colour classifications is illustrated further in Figure 13, where we
see that red sequence galaxies reached peak star formation at earlier times
than green valley or blue cloud galaxies at all stellar masses, but that this
effect is greatest in low-mass galaxies.
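The medians and one- and two-thirds percentile bands plotted in Figure 13 (and, in weighted form, in later figures) can be computed from the cumulative weight distribution. The routine below is a sketch under our own interpolation convention; the exact implementation used for the figures is not specified in the text:

```python
import numpy as np

def weighted_percentile(values, weights, q):
    """Percentile q (0-100) of `values` under per-galaxy `weights`.

    Sorts the samples, builds the normalised cumulative weight
    distribution, and linearly interpolates to the requested quantile.
    """
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return float(np.interp(q / 100.0, cdf, v))
```

With equal weights this reduces to an ordinary (interpolated) percentile; the sample weights of Section 3.1 simply shift where the cumulative distribution crosses the requested quantile.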
### 5.3 Shape: effects of present-day morphology
Having quantitatively measured the link between a galaxy’s present-day colour
and its historical contribution to the cosmic star-formation history, we can
now independently assess the relative role of a galaxy’s present-day
morphology.
#### 5.3.1 Morphological classifications
Over 91% of galaxies in all MaNGA samples have been classified by the Galaxy
Zoo “citizen science” project (Lintott et al., 2008), in which volunteers are
asked to classify galaxies’ morphological features. We make use of the
redshift-debiased vote fractions of Hart et al. (2016) from Galaxy Zoo 2
(Willett et al., 2013) to split the galaxy samples by their present-day
morphology. We classify each galaxy according to the following criteria:
* •
Elliptical: $p_{\rm features\,or\,disk}<0.5$
* •
S0: $p_{\rm features\,or\,disk}>0.5$ and $p_{\rm spiral}<0.5$
* •
Spiral: $p_{\rm features\,or\,disk}>0.5$ and $p_{\rm spiral}>0.5$
where $p_{\rm[class]}$ denotes the debiased vote fraction for [class]
from Hart et al. (2016). These thresholds were chosen to minimise the number
of unclassifiable or ambiguous galaxies, and are therefore less stringent than
those recommended by Willett et al. (2013), and hence open to significant
contamination between morphological classes.
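These vote-fraction criteria can be sketched as follows. The names are ours, and the behaviour for vote fractions exactly at the threshold is left unspecified by the criteria above, so the tie-breaking here is an assumption; the `threshold` parameter reflects the 0.5 default and the 0.8 variant discussed later in this section:

```python
def gz_morphology(p_features_or_disk, p_spiral, threshold=0.5):
    """Morphology from Galaxy Zoo 2 debiased vote fractions
    (Hart et al. 2016): elliptical, S0, or spiral."""
    if p_features_or_disk < threshold:
        return "elliptical"       # no features or disk seen by most voters
    if p_spiral > threshold:
        return "spiral"           # disk present and spiral structure seen
    return "S0"                   # disk present but no spiral structure
```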
Figure 14: The distribution of galaxies in the stellar mass–colour plane (as
Figure 8), with galaxies coloured by their Galaxy Zoo (top) and Domínguez
Sánchez et al. (2018) machine learning (bottom) morphological classifications
as used in Section 5.3. Spiral galaxies occupy the blue region of this
parameter space in both schemes. There is ambiguity at low stellar masses
where the Galaxy Zoo classifications are less reliable due to difficulties in
distinguishing spiral structure in SDSS imagery and the low thresholds used
for separating morphologies. Galaxies with low stellar mass are more likely to
have spiral-morphology machine learning classifications compared to their
Galaxy Zoo classifications. Contours of 30%, 50%, and 80% of the peak
unweighted density are shown for each classification. For reference, dashed
black lines delineate the boundaries between the colour classifications used
in Section 5.2, and shaded background regions indicate the stellar mass bins
used throughout.
The stellar mass–colour distribution of the Primary+ galaxies indicating their
Galaxy Zoo morphologies is shown in the upper panel of Figure 14. Over most of
the sample’s range in stellar mass, the spirals occupy the bluer region of
parameter space, but at the low-mass ($<2\times 10^{9}\,~{}M_{\odot}$) end,
blue galaxies are likely to have been classified as earlier-type galaxies, as
we noted in Peterken et al. (2021). This unexpected trend is likely because
spiral structure in low-mass disk galaxies is harder to discern in SDSS
imagery due to resolution effects. The classifications we have implemented
from the vote fractions to minimise the number of unclassified galaxies will
therefore result in a high level of contamination between morphologies. A
lower spiral vote-fraction threshold might more reliably distinguish between
morphologies in low-mass galaxies, but only at the expense of increasing the
number of high-mass early-type disk galaxies being mis-classified as spirals.
Alternatively, using the recommended criteria of Willett et al. (2013) to
obtain clean samples of galaxies of each morphology creates a large number of
unclassified “ambiguous” galaxies, which when removed from the sample will
render the MaNGA sample weightings described in Section 3.1 inaccurate.
However, we have repeated the analysis using debiased vote fraction thresholds
of 80% instead of 50% in the criteria above and found no change to the results
at high stellar masses, but at low stellar masses the dominance of galaxies
with “ambiguous” morphologies makes any quantitative results impossible.
Alternative morphological classifications using the deep learning methods
developed by Domínguez Sánchez et al. (2018) are available for galaxies which
were part of the SDSS data release 15 (DR15; Aguado et al. 2019). This
approach uses Galaxy Zoo and Nair & Abraham (2010) classifications as training
sets to classify SDSS galaxy images. Instead of replicating vote fractions for
the presence of morphological features in each galaxy, the Domínguez Sánchez
et al. (2018) approach directly provides a prediction for each galaxy’s
morphological T-type. The resulting separation of different morphologies is
cleaner at low stellar masses in the colour–mass plane, as illustrated in the
lower panel of Figure 14. The question then arises of which classification
method should be used. Fortunately, in most cases we find that the results
presented here do not change depending on which classification we use and we
have therefore primarily presented results using the Galaxy Zoo
classifications to make use of the larger sample size. In the instances where
we find that the two classification methods provide conflicting results, we
will describe and discuss those differences.
#### 5.3.2 Present-day morphology and the star-formation history of the
Universe
Figure 15: The star-formation history of the Universe inferred from the
Primary+ sample, stratified into its contributions from galaxies in different
stellar mass bins (shades; darkest to lightest for most- to least-massive
galaxies) and of different Galaxy Zoo morphologies (colours), analogous to
Figure 10. The top panel shows the star-formation history, and the bottom
panel shows the percentage contribution from each of the sample’s subdivisions
to the total cosmic star-formation history. Machine learning morphologies of
Domínguez Sánchez et al. (2018) show similar results.
The cosmic star-formation history separated into contributions from galaxies
of different present-day stellar masses and the Galaxy Zoo morphologies is
shown in Figure 15. Present-day spiral galaxies have contributed the greatest
amount (at least 40%) to the total star formation of any morphology at all
lookback times, and contribute the majority (60%) at the present day. This 20%
morphology effect on a galaxy’s past contribution to the star-formation
history of the Universe is smaller than the 40% colour increase seen in Figure
10, implying that colour is a stronger indicator of a galaxy’s full star-
formation history than morphology, as found by others (e.g. Ibarra-Medel et
al. 2016).
We find that a galaxy’s morphology only reflects the last $2-3\,\textrm{Gyr}$
of its star-formation history on average, in that the relative contributions
to the cosmic star-formation history by galaxies of different morphologies did
not change until this lookback time. We have previously demonstrated this
result in more detail (see Peterken et al. 2021).
Figure 16: The absolute (top row) and fractional (middle row) contributions to
the star-formation history of the Universe of galaxies of different
morphologies in each bin of stellar mass (columns), and the relative
contribution of different Galaxy Zoo morphologies to each mass bin’s star-
formation history (bottom row). Analogous to Figure 11.
However, morphology and mass are correlated, as shown in Figure 14: observed
morphology effects could simply be a result of early-type galaxies having
higher stellar masses on average, and therefore exhibiting star-formation
histories with higher star-formation rates at earlier lookback times than
their late-type and less massive counterparts, as we saw in Figure 7. To
properly distinguish between downsizing and morphological effects, Figure 16
shows the same data as in Figure 15 but with each mass bin separated. We find
that regardless of the morphological composition of the mass bin, spirals have
increased their contribution to the star-formation history by $\sim 20$%
between lookback times of 10 and 1 Gyr. We therefore find the effects of
present-day morphology on the contribution to the star-formation history of
the Universe to be independent of present-day stellar mass.
However, we also see some evidence for the morphological classifications
reflecting different timescales of star-formation histories in different mass
bins in the same manner as we saw for the colour classifications in Figure 11.
Spiral galaxies with the largest present-day stellar mass began increasing in
their contribution to their mass bin’s star-formation history $\sim
4\,\textrm{Gyr}$ ago to reach the current level $\sim 2\,\textrm{Gyr}$ ago,
compared to $\sim 2\,\textrm{Gyr}$ ago for the lowest-mass spirals to start
increasing their contribution with a trend of continued increase.
Figure 17: The absolute (top row) and fractional (middle row) contributions to
the star-formation history of the Universe of galaxies with different present-
day Galaxy Zoo morphologies, stratified by the contributions from each stellar
mass bin. The relative contribution of galaxies with different stellar mass to
each morphology’s star-formation history is also shown (bottom row). Analogous
to Figure 12.
As in Figure 10, we are also able to reorganise the area plot of Figure 15 by
morphology, to show how galaxies of different present-day stellar masses have
contributed to each morphological classification’s star-formation history to
reveal how the effects of stellar mass change among galaxies of different
morphologies, as shown in Figure 17. For the Galaxy Zoo morphologies, we find
that downsizing effects are stronger in earlier-type galaxies; galaxies in the
lowest-mass bin become more dominant among lenticular and elliptical galaxies
at smaller lookback times than the equivalent effect in spiral galaxies.
Figure 18: As Figure 17 but using the Domínguez Sánchez et al. (2018) machine
learning morphologies. The galaxies which were not included in DR15 have been
removed, and the weights of the remaining galaxies have been scaled such that
the total weighted mass in the sample remains the same. The total star-
formation history for the full Primary+ sample (black line) is shown,
resulting in the sum of all shown contributions from each morphology to not
equal exactly 100% of the total at some lookback times.
However, when using the Domínguez Sánchez et al. (2018) machine learning
morphological classifications — as shown in Figure 18 — we find the opposite;
spiral galaxies show stronger downsizing effects than lenticular or elliptical
galaxies. We also find that elliptical galaxies have contributed almost no
star formation in the last $\sim 2\,\textrm{Gyr}$. This difference highlights
how carefully morphological classifications must be used to avoid biases
associated with the difficulties inherent in identifying structures in low-
mass galaxies.
The observed differences are easily explained by uncertainty in the
classification of low-mass spiral galaxies, which appears to be more reliably
resolved by the machine learning approach. We wish to stress here that such a
discrepancy merely highlights the importance of careful consideration of what
exact morphological classifications are being used, and does not imply that
Galaxy Zoo classifications are inherently flawed. Instead, we argue that
studies using any morphological classification method should be wary of how
galaxies with potentially ambiguous morphologies are treated. This is
especially important if the classification scheme is based on a “pure” Hubble
approach, which often contains significant subjectivity (Naim et al., 1995)
and assumptions (Freeman, 1970; Hart et al., 2017, 2018; Masters et al.,
2019), and does not necessarily reflect the physical processes occurring
within galaxies (Cappellari, 2016; Wang et al., 2020). Indeed, we note again
here that the implementation of morphological classifications using Galaxy Zoo
vote fractions described in Section 5.3.1 goes contrary to the advice of
Willett et al. (2013), who detail how low-contamination samples of Galaxy Zoo
galaxies with any required morphology can be effectively obtained.
In Figure 18 (and to a lesser extent also in Figure 17), we see some apparent
“upsizing” occurring in elliptical galaxies; high-mass galaxies appear to
increase their relative contribution to the star-formation history of all
ellipticals in the last $\sim 2\,\textrm{Gyr}$. This phenomenon is likely due
to the problems that we addressed in Peterken et al. (2020) of the “UV upturn”
(see also Yi 2008; Cid Fernandes & González Delgado 2010) being most dominant
in high-mass ellipticals due to their extremely old stellar populations. That
this effect is strongest using the Domínguez Sánchez et al. (2018)
morphological classifications is therefore due to the population of possibly
misclassified low-mass galaxies in the Galaxy Zoo classifications diluting the
effect with “real” star formation; see also Fischer et al. (2019, Section 4.4)
who describe this phenomenon in more detail. However, we note that the
observed effects are extremely small; even with this spurious increase of
star-formation rate in high-mass elliptical galaxies, those galaxies still
only account for $<1\%$ of the star formation in the present-day Universe
regardless of which morphological classification method is used, so these
effects do not significantly affect the derived cosmic star-formation history.
We find the total star-formation histories of all galaxies with stellar mass
$M_{\star}>10^{10.29}\,M_{\odot}$ (i.e. in the three bins of greatest stellar
mass) to be similar for lenticular and elliptical classifications. This
similarity could imply that all massive present-day early-type galaxies have
experienced similar star-formation histories, with the difference in
morphologies due to other factors such as merger rate histories.
Alternatively, it could be evidence for visually-determined shape being an
ineffective way to separate the bimodality of fast- and slow-rotating early-
type galaxies (Emsellem et al., 2007; Cappellari et al., 2011; Cappellari,
2016; Graham et al., 2018), or a combination of both of these effects.
Figure 19: The peak in star-formation history of galaxies of each present-day
morphological classification in the Primary+ sample, as determined using
Galaxy Zoo. Solid lines indicate the weighted medians and the coloured shaded
regions enclose the area between the one- and two-thirds weighted percentiles.
Figure 20: As Figure 19, but using the Domínguez Sánchez et al. (2018) machine
learning morphological classifications of DR15 galaxies.
Figure 21: The offset
$\Delta\log\left(t_{\rm peak}\right)$ in lookback time of peak star-formation
rate $t_{\rm peak}$ observed between present-day red sequence and blue cloud
galaxies (blue dashed line), and between present-day elliptical and spiral
galaxies (purple dotted line) using Galaxy Zoo classifications, as a function
of present-day galaxy stellar mass $M_{\star}$. The effect of present-day
colour is larger than that of present-day morphology at all stellar masses,
but is particularly significant in the range $7\times
10^{9}M_{\odot}\lessapprox M_{\star}\lessapprox 4\times 10^{10}M_{\odot}$.
By comparing the lookback time of peak star formation of galaxies with
different present-day morphologies in Figures 19 (for Galaxy Zoo morphologies)
and 20 (for Domínguez Sánchez et al. (2018) machine learning morphologies), we
see that spiral galaxies peaked more recently than early-type galaxies at all
present-day stellar masses regardless of the classification method. We find
that this difference is greater in galaxies with low stellar mass, and is
greatest around $M_{\star}=6\times 10^{9}\,M_{\odot}$; above this threshold we
find that both classification schemes display stronger downsizing among
spirals than in early-type galaxies, in agreement with Goddard et al. (2017).
We also find that the peak in star-formation history of lenticular and
elliptical galaxies is similar for early-type galaxies with stellar masses
$M_{\star}\gtrsim 10^{10}\,M_{\odot}$ using the Galaxy Zoo classifications
(Figure 19) and for stellar masses $M_{\star}\gtrsim 10^{9}\,M_{\odot}$ using
the Domínguez Sánchez et al. (2018) classifications (Figure 20), again
reflecting either the similar histories of these galaxies or the difficulty in
distinguishing between them through imagery alone. We see that the star-
formation peak in present-day lenticular galaxies classified by Galaxy Zoo
becomes systematically more recent for lower stellar masses, highlighting
likely contamination of mis-classified spiral galaxies using the thresholds
applied here.
However, regardless of which classification method is used, we see again that
the effect of morphology on a galaxy’s contribution to the cosmic star-
formation history is smaller than the effect of colour: the difference in peak
star-formation times between early- and late-type galaxies is smaller than the
difference between blue cloud and red sequence galaxies ($\sim 0.2$ dex and
$\sim 0.5$ dex respectively at $M_{\star}=10^{10}\,M_{\odot}$ for example).
This is illustrated at all present-day stellar masses in Figure 21.
## 6 Size versus shade versus shape: Conclusions and interpretation
Here, we have recovered evidence for downsizing, in that the fraction of star-
formation occurring in the galaxies with greatest present-day stellar mass was
largest at early times, with low-mass galaxies dominating the cosmic star-
formation at more recent times, and that the lookback time corresponding to a
galaxy’s peak star-formation rate is therefore correlated with its present-day
stellar mass. We find that this correlation exhibits the strongest gradient
among low-mass galaxies, suggesting accelerating downsizing at more recent
times. By further splitting the galaxy sample, we subsequently showed that
downsizing effects are present in galaxies of all present-day colours (which
we showed to be analogous to specific star-formation rate) and morphologies. A
galaxy’s present-day stellar mass is therefore a significant indicator of its
star-formation history regardless of its other properties.
In quantifying these effects, we found that galaxies currently in the blue
cloud exhibited stronger downsizing effects than those in the green valley or
red sequence. However, we found some differences in whether downsizing is
strongest in present-day early- or late-type galaxies depending on which
classification method was used, which we argue is due to our non-standard
implementation of Galaxy Zoo morphologies, and therefore highlights the care
that must be taken when separating galaxies by their visual morphologies.
Reassuringly, there is agreement that late-type galaxies exhibit the stronger
downsizing effects among galaxies with stellar mass $M_{\star}>6\times
10^{9}\,M_{\odot}$ and that high-mass elliptical and lenticular galaxies have
had similar star-formation histories.
Irrespective of classification scheme, we find that the effect of present-day
colour is greater than the effect of present-day morphology on a galaxy’s
star-formation history — confirming previous results (Ibarra-Medel et al.,
2016; García-Benito et al., 2019) — in that the contribution from present-day
blue cloud galaxies increased by $\sim 40\%$ over the past $10\,\textrm{Gyr}$
but that from spirals only increased by $\sim 20\%$ over the same time period
by comparison. Similarly, although we found that present-day spirals and blue
cloud galaxies exhibited the more recent peaks in star-formation at all
stellar masses, the typical difference between spiral and elliptical galaxies
($\sim 0.3\,\textrm{dex}$) is less than that between blue cloud and red
sequence galaxies ($\sim 0.6\,\textrm{dex}$). These results suggest that
galaxies of similar morphology are more likely to have undergone
significantly different histories than are galaxies of similar colour.
However, we also find evidence that these straightforward results do not tell
the complete story. Specifically, we find that both present-day colour and
morphology reflect only the more recent star-formation histories among lower-
mass galaxies compared to their counterparts with high stellar mass. We
interpret this as reflective of a galaxy’s inertia to change. The physical
processes required to suppress or rejuvenate star-formation or to restructure
a galaxy must occur over longer timescales to significantly alter a massive
galaxy’s observed properties. Meanwhile, the tidal, stripping, starvation, or
inflow processes experienced by less massive galaxies are able to affect their
star-formation rates (and therefore colours) and morphologies much more
rapidly.
We have therefore demonstrated that a galaxy’s mass, colour and morphology —
size, shade and shape — all indicate its historical contribution to the cosmic
star-formation history, but to different extents and with codependencies such
that all three properties must be considered to build a full picture.
## 7 Summary
Using an established stellar population “fossil record” analysis, we obtained
star-formation histories of the inner $1.2\,R_{\rm e}$ of 6861 galaxies from
the SDSS-IV MaNGA survey, with 2519 galaxies also sampled to $2.3\,R_{\rm e}$.
By carefully weighting each galaxy to create an effectively volume-limited
sample, we inferred the star-formation history of the Universe, which was
found to be in general agreement with previous observations of galaxy
populations at different redshifts.
We showed that a galaxy’s star-formation history is linked to its present-day
mass, its colour, and its morphology, although colour is more strongly
connected than morphology regarding historical contributions to the cosmic
star-formation history. We also found evidence for downsizing effects being
significant in galaxies of all present-day colours and morphologies, in that
low-mass galaxies of all types have become more dominant in their contribution
to the cosmic star-formation rate in more recent times. These effects,
however, were found to be most significant among galaxies currently in the
blue cloud. Different morphological classification schemes gave contradictory
results on whether downsizing is strongest among galaxies designated as being
currently early or late type, but agreed that spiral galaxies exhibit
the stronger effects among galaxies with present-day stellar mass
$M_{\star}>6\times 10^{9}\,M_{\odot}$.
We also found that the historical contribution to the cosmic star-formation
history from galaxies currently in the “green valley” does not directly follow
that of either the blue cloud or the red sequence populations, with these
galaxies’ star-formation histories being remarkably representative of that of
the Universe as a whole.
These results once again demonstrate the power of stellar population fossil
record techniques in uncovering the link between the Universe’s past and the
galaxies we see today.
## 8 Data Availability
This publication uses the team-internal MPL-9 version of the MaNGA science
data products, which are broadly similar to previous versions available
publicly through SDSS Data Releases DR13 (Albareti et al., 2017), DR14
(Abolfathi et al., 2018), and DR15 (Aguado et al., 2019). Comparable data
products containing the full MaNGA galaxy sample — including the full sample
used here — will be publicly released in 2021 as part of SDSS DR17, as will
the raw data and all previous versions of the data reduction pipeline.
## Acknowledgements
The authors thank the anonymous reviewer for their helpful suggestions and
improvements to this paper.
The authors also thank S. F. Sánchez, D. Wake, A. R. Calette and A. Rodriguez-
Puebla for their extensive help and support on the technical aspects of this
work.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P.
Sloan Foundation, the U.S. Department of Energy Office of Science, and the
Participating Institutions. SDSS acknowledges support and resources from the
Center for High-Performance Computing at the University of Utah. The SDSS web
site is www.sdss.org.
SDSS is managed by the Astrophysical Research Consortium for the Participating
Institutions of the SDSS Collaboration including the Brazilian Participation
Group, the Carnegie Institution for Science, Carnegie Mellon University, the
Chilean Participation Group, the French Participation Group, Harvard-
Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The
Johns Hopkins University, Kavli Institute for the Physics and Mathematics of
the Universe (IPMU) / University of Tokyo, the Korean Participation Group,
Lawrence Berkeley National Laboratory, Leibniz-Institut für Astrophysik
Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-
Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für
Extraterrestrische Physik (MPE), National Astronomical Observatories of China,
New Mexico State University, New York University, University of Notre Dame,
Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State
University, Shanghai Astronomical Observatory, United Kingdom Participation
Group, Universidad Nacional Autónoma de México, University of Arizona,
University of Colorado Boulder, University of Oxford, University of
Portsmouth, University of Utah, University of Virginia, University of
Washington, University of Wisconsin, Vanderbilt University, and Yale
University.
This publication uses data generated via the Zooniverse.org platform,
development of which is funded by generous support, including a Global Impact
Award from Google, and by a grant from the Alfred P. Sloan Foundation.
We are grateful for access to the University of Nottingham’s Augusta high
performance computing facility.
## References
* Abolfathi et al. (2018) Abolfathi B., et al., 2018, ApJS, 235, 42
* Aguado et al. (2019) Aguado D. S., et al., 2019, ApJS, 240, 23
* Albareti et al. (2017) Albareti F. D., et al., 2017, ApJS, 233, 25
* Asa’d et al. (2017) Asa’d R. S., Vazdekis A., Cerviño M., Noël N. E. D., Beasley M. A., Kassab M., 2017, MNRAS, 471, 3599
* Behroozi et al. (2013) Behroozi P. S., Wechsler R. H., Conroy C., 2013, ApJ, 770, 57
* Belfiore et al. (2018) Belfiore F., et al., 2018, MNRAS, 477, 3014
* Belfiore et al. (2019) Belfiore F., et al., 2019, AJ, 158, 160
* Bell et al. (2012) Bell E. F., et al., 2012, ApJ, 753, 167
* Bellstedt et al. (2020) Bellstedt S., et al., 2020, arXiv e-prints, p. arXiv:2005.11917
* Blanton & Moustakas (2009) Blanton M. R., Moustakas J., 2009, ARA&A, 47, 159
* Blanton et al. (2011) Blanton M. R., Kazin E., Muna D., Weaver B. A., Price-Whelan A., 2011, AJ, 142, 31
* Blanton et al. (2017) Blanton M. R., et al., 2017, AJ, 154, 28
* Bluck et al. (2014) Bluck A. F. L., Mendel J. T., Ellison S. L., Moreno J., Simard L., Patton D. R., Starkenburg E., 2014, MNRAS, 441, 599
* Bremer et al. (2018) Bremer M. N., et al., 2018, MNRAS, 476, 12
* Bundy et al. (2015) Bundy K., et al., 2015, ApJ, 798, 7
* Calzetti et al. (2000) Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682
* Cappellari (2016) Cappellari M., 2016, ARA&A, 54, 597
* Cappellari et al. (2011) Cappellari M., et al., 2011, MNRAS, 416, 1680
* Catalán-Torrecilla et al. (2015) Catalán-Torrecilla C., et al., 2015, A&A, 584, A87
* Chabrier (2003) Chabrier G., 2003, PASP, 115, 763
* Charlot et al. (1996) Charlot S., Worthey G., Bressan A., 1996, ApJ, 457, 625
* Cherinka et al. (2019) Cherinka B., et al., 2019, AJ, 158, 74
* Cheung et al. (2012) Cheung E., et al., 2012, ApJ, 760, 131
* Cid Fernandes (2018) Cid Fernandes R., 2018, MNRAS, 480, 4480
* Cid Fernandes & González Delgado (2010) Cid Fernandes R., González Delgado R. M., 2010, MNRAS, 403, 780
* Cid Fernandes et al. (2005) Cid Fernandes R., Mateus A., Sodré L., Stasińska G., Gomes J. M., 2005, MNRAS, 358, 363
* Cid Fernandes et al. (2013) Cid Fernandes R., et al., 2013, A&A, 557, A86
* Cid Fernandes et al. (2014) Cid Fernandes R., et al., 2014, A&A, 561, A130
* Connolly et al. (1997) Connolly A. J., Szalay A. S., Dickinson M., SubbaRao M. U., Brunner R. J., 1997, ApJ, 486, L11
* Conroy & Gunn (2010) Conroy C., Gunn J. E., 2010, ApJ, 712, 833
* Conroy et al. (2009) Conroy C., Gunn J. E., White M., 2009, ApJ, 699, 486
* Conroy et al. (2010) Conroy C., White M., Gunn J. E., 2010, ApJ, 708, 58
* Cowie et al. (1996) Cowie L. L., Songaila A., Hu E. M., Cohen J. G., 1996, AJ, 112, 839
* Das et al. (2021) Das A., Pandey B., Sarkar S., 2021, arXiv e-prints, p. arXiv:2101.02564
* Domínguez Sánchez et al. (2018) Domínguez Sánchez H., Huertas-Company M., Bernardi M., Tuccillo D., Fischer J. L., 2018, MNRAS, 476, 3661
* Drory et al. (2015) Drory N., et al., 2015, AJ, 149, 77
* Emsellem et al. (2007) Emsellem E., et al., 2007, MNRAS, 379, 401
* Fang et al. (2013) Fang J. J., Faber S. M., Koo D. C., Dekel A., 2013, ApJ, 776, 63
* Fischer et al. (2019) Fischer J. L., Domínguez Sánchez H., Bernardi M., 2019, MNRAS, 483, 2057
* Fontanot et al. (2009) Fontanot F., De Lucia G., Monaco P., Somerville R. S., Santini P., 2009, MNRAS, 397, 1776
* Fraser-McKelvie et al. (2019) Fraser-McKelvie A., et al., 2019, MNRAS, 488, L6
* Freeman (1970) Freeman K. C., 1970, ApJ, 160, 811
* García-Benito et al. (2017) García-Benito R., et al., 2017, A&A, 608, A27
* García-Benito et al. (2019) García-Benito R., González Delgado R. M., Pérez E., Cid Fernandes R., Sánchez S. F., de Amorim A. L., 2019, A&A, 621, A120
* Ge et al. (2018) Ge J., Yan R., Cappellari M., Mao S., Li H., Lu Y., 2018, MNRAS, 478, 2633
* Ge et al. (2019) Ge J., Mao S., Lu Y., Cappellari M., Yan R., 2019, MNRAS, 485, 1675
* Goddard et al. (2017) Goddard D., et al., 2017, MNRAS, 466, 4731
* González Delgado et al. (2016) González Delgado R. M., et al., 2016, A&A, 590, A44
* González Delgado et al. (2017) González Delgado R. M., et al., 2017, A&A, 607, A128
* Graham et al. (2018) Graham M. T., et al., 2018, MNRAS, 477, 4711
* Greener et al. (2020) Greener M. J., et al., 2020, MNRAS,
* Gunn et al. (2006) Gunn J. E., et al., 2006, AJ, 131, 2332
* Hart et al. (2016) Hart R. E., et al., 2016, MNRAS, 461, 3663
* Hart et al. (2017) Hart R. E., et al., 2017, MNRAS, 472, 2263
* Hart et al. (2018) Hart R. E., Bamford S. P., Keel W. C., Kruk S. J., Masters K. L., Simmons B. D., Smethurst R. J., 2018, MNRAS, 478, 932
* Heavens et al. (2004) Heavens A., Panter B., Jimenez R., Dunlop J., 2004, Nature, 428, 625
* Holmberg (1958) Holmberg E., 1958, Meddelanden fran Lunds Astronomiska Observatorium Serie II, 136, 1
* Hopkins & Beacom (2006) Hopkins A. M., Beacom J. F., 2006, ApJ, 651, 142
* Ibarra-Medel et al. (2016) Ibarra-Medel H. J., et al., 2016, MNRAS, 463, 2799
* Kennicutt (1983) Kennicutt R. C. J., 1983, ApJ, 272, 54
* Kennicutt (1998) Kennicutt Jr. R. C., 1998, ARA&A, 36, 189
* Lacerna et al. (2020) Lacerna I., Ibarra-Medel H., Avila-Reese V., Hernández-Toledo H. M., Vázquez-Mata J. A., Sánchez S. F., 2020, A&A, 644, A117
* Law et al. (2015) Law D. R., et al., 2015, AJ, 150, 19
* Law et al. (2016) Law D. R., et al., 2016, AJ, 152, 83
* Lee et al. (2007) Lee H.-c., Worthey G., Trager S. C., Faber S. M., 2007, ApJ, 664, 215
* Li et al. (2017) Li H., et al., 2017, ApJ, 838, 77
* Lintott et al. (2008) Lintott C. J., et al., 2008, MNRAS, 389, 1179
* López Fernández et al. (2018) López Fernández R., et al., 2018, A&A, 615, A27
* Madau & Dickinson (2014) Madau P., Dickinson M., 2014, ARA&A, 52, 415
* Madau et al. (1996) Madau P., Ferguson H. C., Dickinson M. E., Giavalisco M., Steidel C. C., Fruchter A., 1996, MNRAS, 283, 1388
* Maraston (1998) Maraston C., 1998, MNRAS, 300, 872
* Maraston (2005) Maraston C., 2005, MNRAS, 362, 799
* Martig et al. (2009) Martig M., Bournaud F., Teyssier R., Dekel A., 2009, ApJ, 707, 250
* Masters et al. (2019) Masters K. L., et al., 2019, MNRAS, 487, 1808
* Moresco et al. (2013) Moresco M., et al., 2013, A&A, 558, A61
* Mortlock et al. (2013) Mortlock A., et al., 2013, MNRAS, 433, 1185
* Muzzin et al. (2013) Muzzin A., et al., 2013, ApJ, 777, 18
* Naim et al. (1995) Naim A., et al., 1995, MNRAS, 274, 1107
* Nair & Abraham (2010) Nair P. B., Abraham R. G., 2010, ApJS, 186, 427
* Osterbrock & Ferland (2006) Osterbrock D. E., Ferland G. J., 2006, Astrophysics of gaseous nebulae and active galactic nuclei. University Science Books
* Panter et al. (2003) Panter B., Heavens A. F., Jimenez R., 2003, MNRAS, 343, 1145
* Panter et al. (2007) Panter B., Jimenez R., Heavens A. F., Charlot S., 2007, MNRAS, 378, 1550
* Peng et al. (2010) Peng Y.-j., et al., 2010, ApJ, 721, 193
* Pérez et al. (2013) Pérez E., et al., 2013, ApJ, 764, L1
* Peterken et al. (2019) Peterken T. G., et al., 2019, MNRAS, 489, 1338
* Peterken et al. (2020) Peterken T., Merrifield M., Aragón-Salamanca A., Fraser-McKelvie A., Avila-Reese V., Riffel R., Knapen J., Drory N., 2020, MNRAS, 495, 3387
* Peterken et al. (2021) Peterken T., Merrifield M., Aragón-Salamanca A., Avila-Reese V., Boardman N. F., Drory N., Lane R. R., 2021, MNRAS, 500, L42
* Pforr et al. (2012) Pforr J., Maraston C., Tonini C., 2012, MNRAS, 422, 3285
* Planck Collaboration et al. (2016) Planck Collaboration et al., 2016, A&A, 594, A13
* Roberts (1963) Roberts M. S., 1963, ARA&A, 1, 149
* Rodriguez-Puebla et al. (2020) Rodriguez-Puebla A., Calette A. R., Avila-Reese V., Rodriguez-Gomez V., Huertas-Company M., 2020, arXiv e-prints, p. arXiv:2004.13740
* Sánchez et al. (2016) Sánchez S. F., et al., 2016, Rev. Mex. Astron. Astrofis., 52, 21
* Sánchez et al. (2019) Sánchez S. F., et al., 2019, MNRAS, 482, 1557
* Schawinski et al. (2014) Schawinski K., et al., 2014, MNRAS, 440, 889
* Smee et al. (2013) Smee S. A., et al., 2013, AJ, 146, 32
* Smethurst et al. (2015) Smethurst R. J., et al., 2015, MNRAS, 450, 435
* Smethurst et al. (2017) Smethurst R. J., Lintott C. J., Bamford S. P., Hart R. E., Kruk S. J., Masters K. L., Nichol R. C., Simmons B. D., 2017, MNRAS, 469, 3670
* Smethurst et al. (2018) Smethurst R. J., et al., 2018, MNRAS, 473, 2679
* Vazdekis et al. (2016) Vazdekis A., Koleva M., Ricciardelli E., Röck B., Falcón-Barroso J., 2016, MNRAS, 463, 3409
* Wake et al. (2017) Wake D. A., et al., 2017, AJ, 154, 86
* Wang et al. (2020) Wang B., Cappellari M., Peng Y., Graham M., 2020, MNRAS, 495, 1958
* Westfall et al. (2019) Westfall K. B., et al., 2019, AJ, 158, 231
* Willett et al. (2013) Willett K. W., et al., 2013, MNRAS, 435, 2835
* Wuyts et al. (2011) Wuyts S., et al., 2011, ApJ, 742, 96
* Yan et al. (2016a) Yan R., et al., 2016a, AJ, 151, 8
* Yan et al. (2016b) Yan R., et al., 2016b, AJ, 152, 197
* Yi (2003) Yi S. K., 2003, ApJ, 582, 202
* Yi (2008) Yi S. K., 2008, in Heber U., Jeffery C. S., Napiwotzki R., eds, Astronomical Society of the Pacific Conference Series Vol. 392, Hot Subdwarf Stars and Related Objects. Astronomical Society of the Pacific, p. 3
* de Amorim et al. (2017) de Amorim A. L., et al., 2017, MNRAS, 471, 3727
* van Dokkum et al. (2008) van Dokkum P. G., et al., 2008, ApJ, 677, L5
# Subspace coverings with multiplicities
Anurag Bishnoi, Delft Institute of Applied Mathematics, Technische Universiteit Delft, 2628 CD Delft, Netherlands. E-mail<EMAIL_ADDRESS>
Simona Boyadzhiyska, Institut für Mathematik, Freie Universität Berlin, 14195 Berlin, Germany. E-mail<EMAIL_ADDRESS>Research supported by the Deutsche Forschungsgemeinschaft (DFG) Graduiertenkolleg “Facets of Complexity” (GRK 2434).
Shagnik Das, Institut für Mathematik, Freie Universität Berlin, 14195 Berlin, Germany. E-mail<EMAIL_ADDRESS>Research supported by the Deutsche Forschungsgemeinschaft (DFG) project 415310276.
Tamás Mészáros, Institut für Mathematik, Freie Universität Berlin, 14195 Berlin, Germany. E-mail<EMAIL_ADDRESS>Research supported by the Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy - The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).
###### Abstract
We study the problem of determining the minimum number $f(n,k,d)$ of affine
subspaces of codimension $d$ that are required to cover all points of
$\mathbb{F}_{2}^{n}\setminus\\{\vec{0}\\}$ at least $k$ times while covering
the origin at most $k-1$ times. The case $k=1$ is a classic result of Jamison,
which was independently obtained by Brouwer and Schrijver for $d=1$. The value
of $f(n,1,1)$ also follows from a well-known theorem of Alon and Füredi about
coverings of finite grids in affine spaces over arbitrary fields.
Here we determine the value of this function exactly in various ranges of the
parameters. In particular, we prove that for $k\geq 2^{n-d-1}$ we have
$f(n,k,d)=2^{d}k-\left\lfloor\frac{k}{2^{n-d}}\right\rfloor$, while for
$n>2^{2^{d}k-k-d+1}$ we have $f(n,k,d)=n+2^{d}k-d-2$, and also study the
transition between these two ranges. While previous work in this direction has
primarily employed the polynomial method, we prove our results through more
direct combinatorial and probabilistic arguments, and also exploit a
connection to coding theory.
## 1 Introduction
How many affine hyperplanes does it take to cover the vertices of the
$n$-dimensional Boolean hypercube, $\\{0,1\\}^{n}$? This simple question has
an equally straightforward answer — one can cover all the vertices with a
parallel pair of hyperplanes, while it is easy to see that a single plane can
cover at most half the vertices, and so two planes are indeed necessary.
However, the waters are quickly muddied with a minor twist to the problem.
Indeed, if one is instead asked to cover all the vertices except the origin,
the parallel hyperplane construction is no longer valid. Given a moment’s
thought, one might come up with the much larger family of $n$ hyperplanes
given by $\\{\vec{x}:x_{i}=1\\}$ for $i\in[n]$. This fulfils the task and,
surprisingly, turns out to be optimal, although this is far from obvious. This
problem has led to rich veins of research in both finite geometry and extremal
combinatorics, and in what follows we survey its history before introducing
our new results.
### 1.1 An origin story
When we work over the finite field $\mathbb{F}_{2}$, this problem is
equivalent to the well-known blocking set problem from finite geometry, and it
was in this guise that it was first studied. A blocking set in
$\mathbb{F}_{2}^{n}$ is a set of points that meets every hyperplane, and the
objective is to find a blocking set of minimum size. By translating, we may
assume that our blocking set contains the origin $\vec{0}$, and so the problem
reduces to finding a collection of points that meets all hyperplanes avoiding
the origin. Applying duality, now, we return to our original problem of
covering the nonzero points of $\mathbb{F}_{2}^{n}$ with affine hyperplanes.
From this perspective, there is no reason to restrict our attention to the
binary field $\mathbb{F}_{2}$, and we can generalise the problem to ask how
many hyperplanes are needed to cover the nonzero points of
$\mathbb{F}_{q}^{n}$. Going even further, one may replace the hyperplanes with
affine subspaces of codimension $d$. In this generality, the problem was
answered in the late 1970s by Jamison [14], who proved that the minimum number
of affine subspaces of codimension $d$ that cover all nonzero points in
$\mathbb{F}_{q}^{n}$ while avoiding the origin is $q^{d}-1+(n-d)(q-1)$. In
particular, when $q=2$ and $d=1$, this lower bound is equal to $n$, showing
that the earlier construction with $n$ planes is optimal. A simpler proof of
the case $d=1$ was independently provided by Brouwer and Schrijver [7].
While the finite geometry motivation naturally leads one to work over finite
fields, one can also study the problem over infinite fields $\mathbb{F}$. Of
course, one would need infinitely many hyperplanes to cover all nonzero points
of $\mathbb{F}^{n}$, which is why we instead ask how many hyperplanes are
needed to cover the nonzero points of the hypercube
$\\{0,1\\}^{n}\subseteq\mathbb{F}^{n}$. This problem was raised in the early
1990s by Komjáth [15], who, in order to prove some results in infinite Ramsey
theory, showed that this quantity must grow with $n$. Shortly afterwards, a
celebrated result of Alon and Füredi [1] established a tight bound in the more
general setting of covering all but one point of a finite grid. They showed
that, for any collection of finite subsets $S_{1},S_{2},\ldots,S_{n}$ of some
arbitrary field $\mathbb{F}$, the minimum number of hyperplanes needed to
cover all but one point of $S_{1}\times S_{2}\times\ldots\times S_{n}$ is
$\sum_{i}\left(|S_{i}|-1\right)$. If we take $S_{i}=\\{0,1\\}$ for all $i$,
this once again shows that one needs $n$ hyperplanes to cover the nonzero
points of the hypercube.
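The constructive half of this bound is easy to verify directly. The following sketch (a Python illustration of our own, not part of the original argument) checks that the $\sum_{i}\left(|S_{i}|-1\right)$ axis-aligned hyperplanes $\\{\vec{x}:x_{i}=s\\}$, taken over all $s\in S_{i}$ other than the $i$-th coordinate of the excluded point, cover every grid point except the excluded one:

```python
from itertools import product

# Constructive side of the Alon-Furedi bound: the sum_i (|S_i| - 1) axis-
# aligned hyperplanes {x : x_i = s}, s in S_i \ {p_i}, cover every point of
# S_1 x ... x S_n except the excluded point p.
def check_grid_cover(sets, p):
    planes = [(i, s) for i, S in enumerate(sets) for s in S if s != p[i]]
    assert len(planes) == sum(len(S) - 1 for S in sets)
    for x in product(*sets):
        covered = any(x[i] == s for i, s in planes)
        # x is covered if and only if it is not the excluded point p
        assert covered == (x != tuple(p))

check_grid_cover([[0, 1], [0, 1], [0, 1]], [0, 0, 0])       # hypercube, n planes
check_grid_cover([[0, 1, 2], [5, 7], [1, 4, 9]], [2, 5, 9])  # general grid
```

Optimality of this construction is the nontrivial content of the Alon–Füredi Theorem.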
### 1.2 The polynomial method
Despite these motivating applications to finite geometry and Ramsey theory,
the primary reason this problem has attracted so much attention lies in the
proof methods used. These hyperplane covers have driven the development of the
polynomial method — indeed, in light of his early results, this is sometimes
referred to as the Jamison method in finite geometry [9].
To see how polynomials come into play, suppose we have a set of hyperplanes
$\\{H_{i}:i\in[m]\\}$ in $\mathbb{F}^{n}$, with the plane $H_{i}$ defined by
$H_{i}=\\{\vec{x}:\vec{x}\cdot\vec{a}_{i}=c_{i}\\}$ for some normal vector
$\vec{a}_{i}\in\mathbb{F}^{n}$ and some constant $c_{i}\in\mathbb{F}$. We can
then define the degree-$m$ polynomial
$f(\vec{x})=\prod_{i\in[m]}\left(\vec{x}\cdot\vec{a}_{i}-c_{i}\right)$,
observing that $f(\vec{x})=0$
if and only if $\vec{x}$ is covered by one of the hyperplanes $H_{i}$. Thus,
lower bounds on the degrees of polynomials that vanish except at the origin
translate to lower bounds on the number of hyperplanes needed to cover all
nonzero points.
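As a concrete illustration (a Python sketch of our own, using the standard cover $\\{\vec{x}:x_{i}=1\\}$ of the hypercube as the assumed family), the product polynomial vanishes at a vertex exactly when some hyperplane covers it:

```python
from itertools import product

# Hyperplanes H_i = {x : x . a_i = c_i}; for the standard cover of the
# nonzero hypercube vertices, a_i is the i-th unit vector and c_i = 1.
def cover_polynomial(point, hyperplanes):
    """Evaluate f(x) = prod_i (x . a_i - c_i) at a point (over the reals)."""
    value = 1
    for a, c in hyperplanes:
        value *= sum(x * ai for x, ai in zip(point, a)) - c
    return value

n = 5
planes = [([1 if j == i else 0 for j in range(n)], 1) for i in range(n)]

# f vanishes at a vertex iff some hyperplane covers it: zero at every
# nonzero vertex, nonzero at the origin.
for v in product([0, 1], repeat=n):
    covered = any(v)  # some coordinate equals 1
    assert (cover_polynomial(v, planes) == 0) == covered
```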
This approach has proven very robust, and lends itself to a number of
generalisations. For instance, Kós, Mészáros and Rónyai [17] and Bishnoi,
Clark, Potukuchi and Schmitt [5] considered variations over rings, while
Blokhuis, Brouwer and Szőnyi [6] studied the problem for quadratic surfaces
and Hermitian varieties in projective and affine spaces over $\mathbb{F}_{q}$.
### 1.3 Covering with multiplicity
In this paper, we shall remain in the original setting, but instead extend the
problem to higher multiplicities. That is, we shall seek the minimum number of
hyperplanes needed in $\mathbb{F}^{n}$ to cover the nonzero points at least
$k$ times, while the origin is covered fewer times. Previous work in this
direction has imposed the stricter condition of avoiding the origin
altogether; Bruen [8] considered this problem over finite fields, while Ball
and Serra [4] and Kós and Rónyai [16] worked with finite grids over arbitrary
fields, with some further generalisations recently provided by Geil and
Martínez-Peñas [11]. In all of these papers, the polynomial method described
above was strengthened to obtain lower bounds for this problem with higher
multiplicities. However, these lower bounds are most often not tight; Zanella
[20] discusses when Bruen’s bound is sharp, with some improvements provided by
Ball [2].
Significant progress in this line of research was made recently when Clifton
and Huang [10] studied the special case of covering all nonzero points of
$\\{0,1\\}^{n}\subseteq\mathbb{R}^{n}$ at least $k$ times, while leaving the
origin uncovered. Observe that one can remove $k-1$ hyperplanes arbitrarily
from such a cover, and the remainder will still cover each nonzero point at
least once. Thus, by the Alon–Füredi Theorem, we must be left with at least
$n$ planes, giving a lower bound of $n+k-1$. While it is not hard to see that
this is tight for $k=2$, Clifton and Huang used Ball and Serra’s Punctured
Combinatorial Nullstellensatz [4] to improve the lower bound for larger $k$.
They showed that for $k=3$ and $n\geq 2$, the correct answer is $n+3$, while
for $k\geq 4$ and $n\geq 3$, the answer lies between $n+k+1$ and
$n+\binom{k}{2}$, conjecturing the upper bound to be correct when $n$ is large
with respect to $k$. However, they showed that this was far from the case when
$n$ is fixed and $k$ is large; in this range, the answer is $(c_{n}+o(1))k$,
where $c_{n}$ is the $n$th term in the harmonic series.
A major breakthrough was then made by Sauermann and Wigderson [19], who
skipped the geometric motivation and resolved the polynomial problem directly.
More precisely, they proved the following theorem.
###### Theorem 1.1.
Let $k\geq 2$ and $n\geq 2k-3$, and let $P\in\mathbb{R}[x_{1},\ldots,x_{n}]$
be a polynomial having zeroes of multiplicity at least $k$ at all points in
$\\{0,1\\}^{n}\setminus\\{\vec{0}\\}$, and such that $P$ does not have a zero
of multiplicity at least $k-1$ at $\vec{0}$. Then $P$ must have degree at
least $n+2k-3$. Furthermore, for every $\ell\in\\{0,1,\ldots,k-2\\}$, there
exists a polynomial $P$ with degree exactly $n+2k-3$ having zeroes of
multiplicity at least $k$ at all points in
$\\{0,1\\}^{n}\setminus\\{\vec{0}\\}$, and such that $P$ has a zero of
multiplicity exactly $\ell$ at $\vec{0}$.
As an immediate corollary, this improves the lower bound in the Clifton–Huang
result from $n+k+1$ to $n+2k-3$. However, Theorem 1.1 establishes that
$n+2k-3$ is also an upper bound for the polynomial problem, whereas Clifton
and Huang conjecture that the answer for their problem should be
$n+\binom{k}{2}$. This suggests that the polynomial method alone is not
sufficient to resolve the hyperplane covering problem.
Even though Theorem 1.1 is stated for polynomials defined over $\mathbb{R}$,
Sauermann and Wigderson note that the proof works over any field of
characteristic zero. However, the result need not hold over finite fields. In
particular, they show the existence of a polynomial $P_{4}$ over
$\mathbb{F}_{2}$ of degree $n+4$ with zeroes of multiplicity four at all
nonzero points in $\mathbb{F}_{2}^{n}$ and with $P_{4}(\vec{0})\neq 0$. More
generally, for every $k\geq 4$,
$P_{k}(\vec{x})=x_{1}^{k-4}(x_{1}-1)^{k-4}P_{4}(\vec{x})$ is a binary
polynomial of degree only $n+2k-4$ with zeroes of multiplicity $k$ at all
nonzero points and of multiplicity $k-4$ at the origin. The correct behaviour
of the problem over finite fields is left as an open problem.
Note also that Theorem 1.1 allows the origin to be covered up to $k-2$ times.
Sauermann and Wigderson also considered the case where the origin must be
covered with multiplicity exactly $k-1$, showing that the minimum degree then
increases to $n+2k-2$. In contrast to Theorem 1.1, the proof of this result is
valid over all fields.
### 1.4 Our results
In this paper, we study the problem of covering with multiplicity in
$\mathbb{F}_{2}^{n}$. We are motivated not only by the body of research
described above, but also by the fact, as we shall show in Proposition 3.3,
when one forbids the origin from being covered, this problem is equivalent to
finding linear binary codes of large minimum distance. As this classic problem
from coding theory has a long and storied history of its own, and is likely to
be very difficult, we shall instead work in the setting where we require all
nonzero points in $\mathbb{F}_{2}^{n}$ to be covered at least $k$ times while
the origin can be covered at most $k-1$ times.
In light of the previous results, we shall abstain from employing the
polynomial method, and instead attack the problem more directly with
combinatorial techniques. As an added bonus, our arguments readily generalise
to covering points with codimension-$d$ affine subspaces, rather than just
hyperplanes, thereby extending Jamison’s original results in the case $q=2$.
To be able to discuss our results more concisely, we first introduce some
notation that we will use throughout the paper.
Given integers $k\geq 1$ and $n\geq d\geq 1$, we say a multiset $\mathcal{H}$
of $(n-d)$-dimensional affine subspaces in $\mathbb{F}_{2}^{n}$ is a _$(k,d)$
-cover_ if every nonzero point of $\mathbb{F}_{2}^{n}$ is covered at least $k$
times, while $\vec{0}$ is covered at most $k-1$ times. We next introduce an
extremal function $f(n,k,d)$, which is defined to be the minimum possible size
of a $(k,d)$-cover in $\mathbb{F}_{2}^{n}$.
For instance, when we take $k=1$, we obtain the original covering problem, and
from the work of Jamison [14] we know $f(n,1,d)=n+2^{d}-d-1$. At another
extreme, if we take $d=n$, then our affine subspaces are simply individual
points, each of which must be covered $k$ times, and hence
$f(n,k,n)=k\left(2^{n}-1\right)$. We study this function for intermediate
values of the parameters, determining it precisely when either $k$ is large
with respect to $n$ and $d$, or $n$ is large with respect to $k$ and $d$, and
derive asymptotic results otherwise.
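For very small parameters these boundary cases can be confirmed by exhaustive search. The sketch below (our own Python check, not part of the proofs) computes $f(n,1,1)$ by brute force; since a $(1,1)$-cover must avoid the origin entirely, only hyperplanes of the form $\\{\vec{x}:\vec{x}\cdot\vec{u}=1\\}$ are admissible:

```python
from itertools import product, combinations

def min_cover_size(n):
    """Brute-force f(n, 1, 1): fewest affine hyperplanes {x : x.u = 1}
    covering every nonzero point of F_2^n (such planes automatically miss 0)."""
    points = [p for p in product([0, 1], repeat=n) if any(p)]
    normals = points  # one candidate hyperplane per nonzero normal vector u
    def covers(u, x):
        return sum(a * b for a, b in zip(u, x)) % 2 == 1
    for size in range(1, len(normals) + 1):
        for combo in combinations(normals, size):
            if all(any(covers(u, x) for u in combo) for x in points):
                return size

# Jamison's formula gives f(n, 1, d) = n + 2^d - d - 1, i.e. n when d = 1.
for n in range(1, 5):
    assert min_cover_size(n) == n
```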
###### Theorem 1.2.
Let $k\geq 1$ and $n\geq d\geq 1$. Then:
* (a)
If $k\geq 2^{n-d-1}$, then
$f(n,k,d)=2^{d}k-\left\lfloor\frac{k}{2^{n-d}}\right\rfloor$.
* (b)
If $k\geq 2$ and $n>2^{2^{d}k-d-k+1}$, then $f(n,k,d)=n+2^{d}k-d-2$.
* (c)
If $k\geq 2$ and $n\geq\left\lfloor\log_{2}k\right\rfloor+d+1$, then
$n+2^{d}k-d-\log_{2}(2k)\leq f(n,k,d)\leq n+2^{d}k-d-2$.
There are a few remarks worth making at this stage. First, observe that, just
as in the Clifton–Huang setting, the extremal function $f(n,k,d)$ exhibits
different behaviour when $n$ is fixed and $k$ is large as compared to when $k$
is fixed and $n$ is large. Second, and perhaps most significantly, Theorem 1.2
demonstrates the gap between the hyperplane covering problem and the
polynomial degree problem: our result shows that, for any $k\geq 4$ and
sufficiently large $n$, we have $f(n,k,1)=n+2k-3$, whereas the answer to the
corresponding polynomial problem is at most $n+2k-4$, as explained after
Theorem 1.1. Our ideas allow us to establish an even stronger separation in
the case $k=4$ — while the polynomial $P_{4}$ constructed by Sauermann and
Wigderson, which has zeroes of multiplicity at least four at all nonzero
points of $\mathbb{F}_{2}^{n}$ while not vanishing at the origin, has degree
only $n+4$, we shall show in Corollary 3.4 that any hyperplane system with the
corresponding covering properties must have size at least
$n+\log\left(\tfrac{2}{3}n\right)$. Third, we see that in the intermediate
range, when both $n$ and $k$ grow moderately, the bounds in (c) determine
$f(n,k,d)$ up to an additive error of $\log_{2}(2k)$, which is a lower-order
term. Thus, $f(n,k,d)$ grows asymptotically like $n+2^{d}k$. Last of all, if
one substitutes $k=2^{n-d-1}-1$, the lower bound from (c) is larger than the
value in (a). This shows that $k\geq 2^{n-d-1}$ is indeed the correct range
for which the result in (a) is valid. In contrast, we believe the bound on $n$
in (b) is far from optimal, and discuss this in greater depth in Section 4.
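Two of these consistency checks can be confirmed numerically. The following sketch (our own Python illustration, not a substitute for the proofs) verifies that part (a) reduces to $f(n,k,n)=k\left(2^{n}-1\right)$ when $d=n$, and that at $k=2^{n-d-1}-1$ the lower bound of (c) indeed exceeds the value in (a):

```python
import math

# Sanity checks on Theorem 1.2, not a proof. part_a is the exact value from
# part (a); lower_c is the lower bound from part (c).
def part_a(n, k, d):
    return 2**d * k - k // 2**(n - d)

def lower_c(n, k, d):
    return n + 2**d * k - d - math.log2(2 * k)

# Consistency: with d = n, part (a) reduces to f(n, k, n) = k(2^n - 1).
for n in range(1, 6):
    for k in range(1, 10):
        assert part_a(n, k, n) == k * (2**n - 1)

# The remark above: at k = 2^(n-d-1) - 1, just below the range of (a),
# the lower bound from (c) already exceeds the value stated in (a).
for n in range(4, 10):
    for d in range(1, n - 2):  # ensures k >= 2, as required by (c)
        k = 2**(n - d - 1) - 1
        assert lower_c(n, k, d) > part_a(n, k, d)
```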
The remainder of this paper is devoted to the proof of Theorem 1.2, and is
organised as follows. In Section 2 we prove part (a), determining the extremal
function for large multiplicities. We prove part (b) in Section 3, handling
the case when the dimension of the ambient space grows quickly. A key step in
the proof is showing the intuitive, yet surprisingly not immediate, fact that
$f(n,k,d)$ is strictly increasing in $n$, as a result of which we shall also
be able to deduce the bounds in (c). Section 4 is devoted to the study of the
gradual transition between parts (a) and (b), where we exhibit some
constructions that show $f(n,k,d)$ takes values strictly between those of
parts (a) and (b). Finally, we end by presenting some concluding remarks and
open problems in Section 5.
## 2 Covering with large multiplicity
In this section we prove Theorem 1.2(a), handling the case of large
multiplicities. We start by introducing some definitions and notation that we
will use in the proof. To start with, it will be convenient to have some
notation for affine hyperplanes. Given a nonzero vector
$\vec{u}\in\mathbb{F}_{2}^{n}$, let $H_{\vec{u}}$ denote the hyperplane
$\\{\vec{x}:\vec{x}\cdot\vec{u}=1\\}$.
Next, it will sometimes be helpful to specify how many times the origin is
covered. Hence, given integers $n\geq d\geq 1$ and $k>s\geq 0$, we call a
$(k,d)$-cover in $\mathbb{F}_{2}^{n}$ a _$(k,d;s)$ -cover_ if it covers the
origin exactly $s$ times. Let us write $g(n,k,d;s)$ for the minimum possible
size of a $(k,d;s)$-cover and call a cover _optimal_ if it has this minimum
size. Clearly, we have $f(n,k,d)=\min_{0\leq s<k}g(n,k,d;s)$, so any knowledge
about this more refined function directly translates to our main focus of
interest.
### 2.1 The lower bound
To start with, we prove a general lower bound, valid for all choices of
parameters, that follows from a simple double-counting argument. This
establishes the lower bound of Theorem 1.2(a).
###### Lemma 2.1.
Let $n,k,d,s$ be integers such that $n\geq d\geq 1$ and $k>s\geq 0$. Then
$g(n,k,d;s)\geq 2^{d}k-\left\lfloor\frac{k-s}{2^{n-d}}\right\rfloor.$
In particular, $f(n,k,d)\geq
2^{d}k-\left\lfloor\frac{k}{2^{n-d}}\right\rfloor$.
###### Proof.
Let $\mathcal{H}$ be an optimal $(k,d;s)$-cover of $\mathbb{F}_{2}^{n}$, so
that we have $g(n,k,d;s)=|\mathcal{H}|$. We double-count the pairs
$(\vec{x},S)$ with $\vec{x}\in\mathbb{F}_{2}^{n}$, $S\in\mathcal{H}$, and
$\vec{x}\in S$. On the one hand, every affine subspace $S\in\mathcal{H}$
contains $2^{n-d}$ points, and so there are $2^{n-d}|\mathcal{H}|$ such pairs.
On the other hand, since every nonzero point is covered at least $k$ times and
the origin is covered $s$ times, there are at least $(2^{n}-1)k+s$ such pairs.
Thus $(2^{n}-1)k+s\leq 2^{n-d}|\mathcal{H}|$, and the claimed lower bound
follows from solving for $|\mathcal{H}|$ and observing that $g(n,k,d;s)$ is an
integer. The bound on $f(n,k,d)$ is obtained by noticing that our lower bound
on $g(n,k,d;s)$ is increasing in $s$, and is therefore minimised when $s=0$. ∎
### 2.2 The upper bound construction
To prove the upper bound of Theorem 1.2(a), we must construct small
$(k,d)$-covers. As a first step, we introduce a recursive method for
$(k,d;s)$-covers that allows us to reduce to the $d=1$ case.
###### Lemma 2.2.
For integers $n\geq d\geq 2$ and $k>s\geq 0$ we have
$g(n,k,d;s)\leq g(n-d+1,k,1;s)+2k(2^{d-1}-1),$
and, therefore,
$f(n,k,d)\leq f(n-d+1,k,1)+2k(2^{d-1}-1).$
###### Proof.
We first deduce the recursive bound on $g(n,k,d;s)$. Let
$S_{0}\subset\mathbb{F}_{2}^{n}$ be an arbitrary $(n-d+1)$-dimensional
(vector) subspace, and let $S_{1},\ldots,S_{2^{d-1}-1}$ be its affine
translates which, together with $S_{0}$, partition $\mathbb{F}_{2}^{n}$. For
every $1\leq i\leq 2^{d-1}-1$, partition $S_{i}\cong\mathbb{F}_{2}^{n-d+1}$
further into two subspaces, thereby obtaining a total of $2(2^{d-1}-1)$ affine
subspaces of dimension $n-d$. We start by taking $k$ copies of each of these
affine subspaces. This gives us a multiset of $2k(2^{d-1}-1)$ subspaces, which
cover every point outside $S_{0}$ exactly $k$ times and leave the points in
$S_{0}$ completely uncovered.
It thus remains to cover the points within $S_{0}$ appropriately. Since
$(n-d)$-dimensional subspaces have relative codimension $1$ in $S_{0}$, this
reduces to finding a $(k,1;s)$-cover within
$S_{0}\cong\mathbb{F}_{2}^{n-d+1}$. By definition, we can find such a cover
consisting of ${g(n-d+1,k,1;s)}$ subspaces. Adding these to our previous
multiset gives a $(k,d;s)$-cover of $\mathbb{F}_{2}^{n}$ of size
$g(n-d+1,k,1;s)+2k(2^{d-1}-1)$, as required.
To finish, since $f(n,k,d)=\min_{s}g(n,k,d;s)$, and the recursive bound holds
for each $s$, it naturally carries over to the function $f(n,k,d)$, giving
$f(n,k,d)\leq f(n-d+1,k,1)+2k(2^{d-1}-1)$. ∎
Armed with this preparation, we can now resolve the problem for large
multiplicities.
###### Proof of Theorem 1.2(a).
The requisite lower bound, of course, is given by Lemma 2.1.
For the upper bound, we start by reducing to the case $d=1$. Indeed, suppose
we already know the bound for $d=1$; that is, $f(n,k,1)\leq
2k-\left\lfloor\frac{k}{2^{n-1}}\right\rfloor$ for all $k\geq 2^{n-2}$. Now,
given some $n\geq d\geq 2$ and $k\geq 2^{n-d-1}$, by Lemma 2.2 we have
$f(n,k,d)\leq f(n-d+1,k,1)+2k(2^{d-1}-1)\leq
2k-\left\lfloor\frac{k}{2^{n-d+1-1}}\right\rfloor+2k(2^{d-1}-1)=2^{d}k-\left\lfloor\frac{k}{2^{n-d}}\right\rfloor,$
as required.
Hence, it suffices to prove the bound in the hyperplane case. We begin with
the lowest multiplicity covered by part (a), namely $k=2^{n-2}$. Consider the
family
$\mathcal{H}_{0}=\\{H_{\vec{u}}:\vec{u}\in\mathbb{F}_{2}^{n},u_{n}=1\\}$,
where we recall that $H_{\vec{u}}=\\{\vec{x}:\vec{x}\cdot\vec{u}=1\\}$. Note
that we then have
$|\mathcal{H}_{0}|=2^{n-1}=2k=2k-\left\lfloor\frac{k}{2^{n-1}}\right\rfloor$,
and none of these hyperplanes covers the origin. Given nonzero vectors
$\vec{x}=(\vec{x}^{\prime},x)$ and $\vec{u}=(\vec{u}^{\prime},1)$ with
$\vec{x}^{\prime},\vec{u}^{\prime}\in\mathbb{F}_{2}^{n-1}$ and
$x\in\mathbb{F}_{2}$, we have $\vec{x}\cdot\vec{u}=1$ if and only if
$\vec{x}^{\prime}\cdot\vec{u}^{\prime}=1-x$. If $\vec{x}^{\prime}\neq\vec{0}$,
precisely half of the choices for $\vec{u}^{\prime}$ satisfy this equation; if
$\vec{x}^{\prime}=\vec{0}$ (and thus necessarily $x=1$), the equation is
satisfied by all choices of $\vec{u}^{\prime}$. Thus each nonzero point is
covered at least $2^{n-2}$ times, and hence $\mathcal{H}_{0}$ is a
$(2^{n-2},1)$-cover of the desired size.
To extend the above construction to the range $2^{n-2}\leq k<2^{n-1}$, one can
simply add an arbitrary choice of $k-2^{n-2}$ pairs of parallel hyperplanes.
The resulting family will have
$2^{n-1}+2\left(k-2^{n-2}\right)=2k=2k-\left\lfloor\frac{k}{2^{n-1}}\right\rfloor$
elements, every nonzero point is covered at least $k$ times, and the origin is
covered $k-2^{n-2}<k$ times.
Finally, suppose $k\geq 2^{n-1}$. Then we can write $k=a2^{n-1}+b$ for some
$a\geq 1$ and $0\leq b<2^{n-1}$. We take
$\mathcal{H}_{1}=\\{H_{\vec{u}}:\vec{u}\in\mathbb{F}_{2}^{n}\setminus\\{\vec{0}\\}\\}$
to be the set of all affine hyperplanes avoiding the origin, of which there
are $2^{n}-1$. Moreover, for each nonzero $\vec{x}$, there are exactly
$2^{n-1}$ vectors $\vec{u}$ with $\vec{x}\cdot\vec{u}=1$, and so each such
point is covered $2^{n-1}$ times by the hyperplanes in $\mathcal{H}_{1}$.
Now let $\mathcal{H}$ be the multiset of hyperplanes obtained by taking $a$
copies of $\mathcal{H}_{1}$ and appending an arbitrary choice of $b$ pairs of
parallel planes. Each nonzero point is then covered $a2^{n-1}+b=k$ times,
while the origin is only covered $b<2^{n-1}\leq k$ times, and so $\mathcal{H}$
is a $(k,1)$-cover. Thus,
$f(n,k,1)\leq|\mathcal{H}|=a(2^{n}-1)+2b=2(a2^{n-1}+b)-a=2k-\left\lfloor\frac{k}{2^{n-1}}\right\rfloor,$
proving the upper bound. ∎
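As a sanity check, the hyperplane constructions in the proof above can be verified by brute force in small dimensions. The sketch below (illustrative Python, with $n=4$; the choice of parallel pairs is arbitrary, exactly as in the proof) checks the family $\mathcal{H}_{0}$ with added parallel pairs over the range $2^{n-2}\leq k<2^{n-1}$:

```python
from itertools import product

def dot(x, u):
    return sum(a * b for a, b in zip(x, u)) % 2

n = 4
for k in range(2 ** (n - 2), 2 ** (n - 1)):          # 2^{n-2} <= k < 2^{n-1}
    # H_0: all hyperplanes {x : x.u = 1} with u_n = 1, stored as (normal, rhs)
    planes = [(u, 1) for u in product([0, 1], repeat=n) if u[-1] == 1]
    # add k - 2^{n-2} pairs of parallel hyperplanes (an arbitrary choice)
    v = (1,) + (0,) * (n - 1)
    planes += [(v, 0), (v, 1)] * (k - 2 ** (n - 2))
    assert len(planes) == 2 * k - k // 2 ** (n - 1)  # the claimed size, here 2k
    for x in product([0, 1], repeat=n):
        c = sum(1 for (u, b) in planes if dot(x, u) == b)
        assert c >= k if any(x) else c < k           # a (k,1)-cover
```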
## 3 Covering high-dimensional spaces
In this section we turn our attention to the case when $n$ is large with
respect to $k$, with the aim of proving part (b) of Theorem 1.2. Furthermore,
the results we prove along the way will allow us to establish the bounds in
part (c) as well.
### 3.1 The upper bound construction
In this range, in contrast to the large multiplicity setting, it is the upper
bound that is straightforward. This bound follows from the following
construction, which is valid for the full range of parameters.
###### Lemma 3.1.
Let $n,k,d$ be positive integers such that $n\geq d\geq 1$ and $k\geq 2$. Then
$f(n,k,d)\leq n+2^{d}k-d-2.$
###### Proof.
We start by resolving the case $d=1$ and $k=2$, for which we consider the
family of hyperplanes
$\mathcal{H}=\\{H_{\vec{e}_{i}}:i\in[n]\\}\cup\\{H_{\vec{1}}\\}$, where
$\vec{e}_{i}$ is the $i$th standard basis vector and $\vec{1}$ is the all-one
vector. To see that this is a $(2,1)$-cover of $\mathbb{F}_{2}^{n}$, note
first that the planes all avoid the origin. Next, if we have a nonzero vector
$\vec{x}$, it is covered by the hyperplanes $\\{H_{\vec{e}_{i}}:i\in[n]\\}$ as
many times as it has nonzero entries. Thus, all vectors of Hamming weight at
least two are covered twice or more. The only remaining vectors are those of
weight one, which are covered once by $\\{H_{\vec{e}_{i}}:i\in[n]\\}$, but
these are all covered for the second time by $H_{\vec{1}}$. Hence
$\mathcal{H}$ is indeed a $(2,1)$-cover, and is of the required size, namely
$n+1$.
Now we can extend this construction to the case $d=1$ and $k\geq 3$ by simply
adding $k-2$ arbitrary pairs of parallel hyperplanes. The resulting family
will be a $(k,1;k-2)$-cover (and hence, in particular, a $(k,1)$-cover) of
size $n+2k-3$, matching the claimed upper bound.
That leaves us with the case $d\geq 2$, which we can once again handle by
appealing to Lemma 2.2. In conjunction with the above construction, we have
$f(n,k,d)\leq f(n-d+1,k,1)+2k(2^{d-1}-1)\leq n-d+1+2k-3+2k(2^{d-1}-1),$
which simplifies to the required $n+2^{d}k-d-2$. ∎
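The base case of this construction is easy to test mechanically. The following sketch (illustrative Python, with $n=5$ chosen by us) confirms that the $n+1$ hyperplanes $H_{\vec{e}_{1}},\ldots,H_{\vec{e}_{n}},H_{\vec{1}}$ form a $(2,1)$-cover:

```python
from itertools import product

n = 5
dot = lambda x, u: sum(a * b for a, b in zip(x, u)) % 2
# normals: the standard basis vectors e_1, ..., e_n, plus the all-one vector
planes = [tuple(int(i == j) for j in range(n)) for i in range(n)] + [(1,) * n]
assert len(planes) == n + 1
for x in product([0, 1], repeat=n):
    c = sum(1 for u in planes if dot(x, u) == 1)
    assert c >= 2 if any(x) else c == 0   # a (2,1;0)-cover of size n + 1
```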
### 3.2 Recursion, again
The upper bound in Lemma 3.1 is strictly increasing in $n$. Our next step is
to show that this behaviour is necessary — that is, the higher the dimension,
the harder the space is to cover. Although intuitive, this fact turned out to
be less elementary than expected, and our proof makes use of the probabilistic
method.
###### Lemma 3.2.
Let $n,k,d,s$ be integers such that $n\geq 2$, $n\geq d\geq 1$, and $k>s\geq
0$. Then
$g(n,k,d;s)\geq g(n-1,k,d;s)+1.$
###### Proof.
Let $\mathcal{H}$ be an optimal $(k,d;s)$-cover of $\mathbb{F}_{2}^{n}$. To
prove the lower bound on its size, we shall construct from it a
$(k,d;s)$-cover $\mathcal{H}^{\prime}$ of $\mathbb{F}_{2}^{n-1}$, which must
consist of at least $g(n-1,k,d;s)$ subspaces. To obtain this cover of a
lower-dimensional space, we restrict $\mathcal{H}$ to a random hyperplane
$H\subset\mathbb{F}_{2}^{n}$ that passes through the origin. Since
$\mathcal{H}$ is a $(k,d;s)$-cover of all of $\mathbb{F}_{2}^{n}$, it
certainly covers $H\cong\mathbb{F}_{2}^{n-1}$ as well.
However, we require $\mathcal{H}^{\prime}$ to be a $(k,d;s)$-cover of $H$,
which must be built of affine subspaces of codimension $d$ relative to $H$ —
that is, subspaces of dimension one less than those in $\mathcal{H}$.
Fortunately, when intersecting the subspaces $S\in\mathcal{H}$ with a
hyperplane, we can expect their dimension to decrease by one. The exceptional
cases are when $S$ is disjoint from $H$, or when $S$ is contained in $H$. In
the former case, $S$ does not cover any points of $H$, and can therefore be
discarded from $\mathcal{H}^{\prime}$. In the latter case, we can partition
$S$ into two subspaces $S=S_{1}\cup S_{2}$, where each $S_{i}$ is of
codimension $d$ relative to $H$, and replace $S$ with $S_{1}$ and $S_{2}$ in
$\mathcal{H}^{\prime}$. By making these changes, we obtain a family
$\mathcal{H}^{\prime}$ of codimension-$d$ subspaces of $H$. Moreover, these
subspaces cover the points of $H$ exactly as often as those of $\mathcal{H}$
do, and thus $\mathcal{H}^{\prime}$ is a $(k,d;s)$-cover of $H$.
When building this cover, though, we need to control its size. Let $X$ denote
the set of subspaces $S\in\mathcal{H}$ that are disjoint from $H$, and let $Y$
denote the set of subspaces $S\in\mathcal{H}$ that are contained in $H$. We
then have $|\mathcal{H}^{\prime}|=|\mathcal{H}|-|X|+|Y|$. The objective, then,
is to show that there is a choice of hyperplane $H$ for which $|X|>|Y|$, in
which case the cover $\mathcal{H}^{\prime}$ we build is relatively small.
Recall that $H$ was a random hyperplane in $\mathbb{F}_{2}^{n}$ passing
through the origin, which is to say it has a normal vector $\vec{u}$ chosen
uniformly at random from $\mathbb{F}_{2}^{n}\setminus\\{\vec{0}\\}$. To
compute the expected sizes of $X$ and $Y$, we consider the probability that a
subspace $S\in\mathcal{H}$ is either disjoint from or contained in $H$.
Let $S\in\mathcal{H}$ be arbitrary and suppose first that $\vec{0}\in S$. We
immediately have $\mathbb{P}(S\in X)=0$, as in this case $\vec{0}\in S\cap H$,
so $S$ and $H$ cannot be disjoint. On the other hand, $\mathbb{P}(S\in
Y)=\frac{2^{d}-1}{2^{n}-1}$, as we have $S\subseteq H$ exactly when the normal
vector $\vec{u}$ is a nonzero element of the $d$-dimensional orthogonal
complement, $S^{\perp}$, of $S$ in $\mathbb{F}_{2}^{n}$.
In the other case, when $\vec{0}\notin S$, we can write $S$ in the form
$T+\vec{v}$, where $\vec{0}\in T\subset\mathbb{F}_{2}^{n}$ is an
$(n-d)$-dimensional vector subspace and $\vec{v}\in\mathbb{F}_{2}^{n}\setminus
T$. Then $S$ is disjoint from $H$ if and only if $\vec{u}\in S^{\perp}$ and
$\vec{u}\cdot\vec{v}=1$. Since $\vec{v}\notin T$, these are independent
conditions, and so we have $\mathbb{P}(S\in X)=\frac{2^{d-1}}{2^{n}-1}$.
Similarly, in order to have $S\subseteq H$, $\vec{u}$ must be a nonzero vector
satisfying $\vec{u}\in S^{\perp}$ and $\vec{u}\cdot\vec{v}=0$, and so
$\mathbb{P}(S\in Y)=\frac{2^{d-1}-1}{2^{n}-1}$.
Now, using linearity of expectation, we have
$\displaystyle\mathbb{E}\left[|X|-|Y|\right]$
$\displaystyle=\sum_{S\in\mathcal{H}}\left(\mathbb{P}(S\in X)-\mathbb{P}(S\in
Y)\right)$ $\displaystyle=\sum_{S\in\mathcal{H}:\vec{0}\notin
S}\left(\frac{2^{d-1}}{2^{n}-1}-\frac{2^{d-1}-1}{2^{n}-1}\right)+\sum_{S\in\mathcal{H}:\vec{0}\in
S}\left(0-\frac{2^{d}-1}{2^{n}-1}\right)$
$\displaystyle=\frac{|\\{S\in\mathcal{H}:\vec{0}\notin
S\\}|-\left(2^{d}-1\right)|\\{S\in\mathcal{H}:\vec{0}\in
S\\}|}{2^{n}-1}=\frac{|\mathcal{H}|-2^{d}s}{2^{n}-1},$
where we used the fact that $\mathcal{H}$ is a $(k,d;s)$-cover, and thus
$|\\{S\in\mathcal{H}:\vec{0}\in S\\}|=s$. We now apply the lower bound on
$|\mathcal{H}|$ given by Lemma 2.1 to obtain
$\mathbb{E}\left[|X|-|Y|\right]\geq\frac{2^{d}k-\left\lfloor\frac{k-s}{2^{n-d}}\right\rfloor-2^{d}s}{2^{n}-1}=\frac{2^{d}(k-s)-\left\lfloor\frac{k-s}{2^{n-d}}\right\rfloor}{2^{n}-1}>0.$
Therefore, there must be a hyperplane $H$ for which $|X|-|Y|\geq 1$. The
corresponding cover of $H$ thus has size at most $|\mathcal{H}|-1$ but, as a
$(k,d;s)$-cover of an $(n-1)$-dimensional space, has size at least
$g(n-1,k,d;s)$. This gives $|\mathcal{H}|-1\geq|\mathcal{H}^{\prime}|\geq
g(n-1,k,d;s)$, whence the required bound, $g(n,k,d;s)=|\mathcal{H}|\geq
g(n-1,k,d;s)+1$. ∎
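The expectation computation at the heart of this proof can be replayed by exhaustive enumeration on a small example. The sketch below (illustrative Python) takes the $(2,1;0)$-cover of $\mathbb{F}_{2}^{3}$ with normals $e_{1},e_{2},e_{3},(1,1,1)$, runs over all hyperplanes through the origin, and confirms that the sum of $|X|-|Y|$ over them equals $|\mathcal{H}|-2^{d}s$, and in particular is positive:

```python
from itertools import product

n, d = 3, 1
pts = list(product([0, 1], repeat=n))           # pts[0] is the origin
dot = lambda x, u: sum(a * b for a, b in zip(x, u)) % 2

# a (2,1;0)-cover of F_2^3, stored as explicit point sets
normals = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
cover = [frozenset(x for x in pts if dot(x, u) == 1) for u in normals]
s = sum(1 for S in cover if pts[0] in S)        # here s = 0

total = 0
for u in pts[1:]:                               # hyperplanes H through the origin
    H = frozenset(x for x in pts if dot(x, u) == 0)
    X = sum(1 for S in cover if not (S & H))    # members disjoint from H
    Y = sum(1 for S in cover if S <= H)         # members contained in H
    total += X - Y
# summing |X| - |Y| over the 2^n - 1 hyperplanes gives |H| - 2^d s
assert total == len(cover) - 2 ** d * s
assert total > 0                                # so some H has |X| > |Y|
```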
While this inequality will be used in our proof of part (b) of Theorem 1.2, it
also gives us what we need to prove the bounds in part (c).
###### Proof of Theorem 1.2(c).
Lemma 3.1 gives us the upper bound, $f(n,k,d)\leq n+2^{d}k-d-2$, which is in
fact valid for all $k\geq 2$ and $n\geq d\geq 1$.
When $n\geq\left\lfloor\log_{2}k\right\rfloor+d+1$, we can prove the lower
bound, $f(n,k,d)\geq n+2^{d}k-d-\log_{2}(2k)$, by induction on $n$. For the
base case, when $n=\left\lfloor\log_{2}k\right\rfloor+d+1$, we appeal to Lemma
2.1, which gives
$f(n,k,d)\geq
2^{d}k-\left\lfloor\frac{k}{2^{n-d}}\right\rfloor=2^{d}k=n+2^{d}k-d-\left\lfloor\log_{2}k\right\rfloor-1\geq
n+2^{d}k-d-\log_{2}(2k).$
For the induction step we appeal to Lemma 3.2. First note that the lemma gives
$f(n,k,d)=\min_{s}g(n,k,d;s)\geq\min_{s}\left(g(n-1,k,d;s)+1\right)=f(n-1,k,d)+1$.
Thus, using the induction hypothesis, for all
$n>\left\lfloor\log_{2}k\right\rfloor+d+1$ we have
$f(n,k,d)\geq f(n-1,k,d)+1\geq
n-1+2^{d}k-d-\log_{2}(2k)+1=n+2^{d}k-d-\log_{2}(2k),$
completing the proof. ∎
### 3.3 A coding theory connection
In Lemma 3.2, we proved a recursive bound on $g(n,k,d;s)$ that is valid for
all values of $s$, the number of times the origin is covered. In this
subsection, we establish the promised connection to coding theory, which is
the key to our proof. Indeed, as observed in Corollary 3.6 below, it allows us
to restrict our attention to only two feasible values of $s$.
We begin with $(k,1;0)$-covers of $\mathbb{F}_{2}^{n}$, showing that, in this
binary setting, hyperplane covers that avoid the origin are in direct
correspondence with linear codes of large minimum distance.
###### Proposition 3.3.
A $(k,1;0)$-cover of $\mathbb{F}_{2}^{n}$ of cardinality $m$ is equivalent to
an $n$-dimensional linear binary code of length $m$ and minimum distance at
least $k$.
###### Proof.
Let $\mathcal{H}=\\{H_{1},H_{2},\ldots,H_{m}\\}$ be a $(k,1;0)$-cover of
$\mathbb{F}_{2}^{n}$. Since none of the hyperplanes cover the origin, for each
$i\in[m]$, $H_{i}$ has to be described by the equation
$\vec{u}_{i}\cdot\vec{x}=1$ for some
$\vec{u}_{i}\in\mathbb{F}_{2}^{n}\setminus\\{\vec{0}\\}$. Let $A$ be the
$m\times n$ matrix whose rows are
$\vec{u}_{1},\vec{u}_{2},\ldots,\vec{u}_{m}$. We claim that $A$ is the
generator matrix of a linear binary code of dimension $n$, length $m$ and
minimum distance at least $k$. Since each
$\vec{x}\in\mathbb{F}_{2}^{n}\setminus\\{\vec{0}\\}$ is covered by at least
$k$ of the planes, it follows that the vector $A\vec{x}$ has weight at least
$k$, which in turn is equivalent to the vectors in the column space of $A$
having minimum distance at least $k$. Indeed, any vector $\vec{y}$ in the
column space can be expressed in the form $A\vec{w}$ for some
$\vec{w}\in\mathbb{F}_{2}^{n}$. Thus, given two vectors
$\vec{y}_{1},\vec{y}_{2}$ in the column space, their difference is of the form
$A(\vec{w}_{1}-\vec{w}_{2})$, where $\vec{x}=\vec{w}_{1}-\vec{w}_{2}$ is
nonzero. Hence this difference has weight at least $k$; i.e., the two vectors
$\vec{y}_{1}$ and $\vec{y}_{2}$ have distance at least $k$.
Conversely, given a linear binary code of dimension $n$, length $m$ and
minimum distance at least $k$, let
$\vec{u}_{1},\vec{u}_{2},\ldots,\vec{u}_{m}$ be the rows of the generator
matrix. By the same reasoning as above, the hyperplanes $H_{i}$, $i\in[m]$,
defined by the equation $\vec{u}_{i}\cdot\vec{x}=1$, form a $(k,1;0)$-cover of
$\mathbb{F}_{2}^{n}$. ∎
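The equivalence is easy to see in action on a small instance. Below is an illustrative Python sketch that takes the $(2,1;0)$-cover of $\mathbb{F}_{2}^{3}$ with normals $e_{1},e_{2},e_{3},(1,1,1)$, treats those normals as the rows of a generator matrix, and checks that the resulting code has minimum distance at least $k=2$:

```python
from itertools import product

# rows of the generator matrix = normal vectors of a (2,1;0)-cover of F_2^3
A = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]   # length m = 4, dimension n = 3

def encode(w):
    # the codeword A.w over F_2
    return tuple(sum(a * b for a, b in zip(row, w)) % 2 for row in A)

# weight of A.x = number of hyperplanes H_u (u a row of A) covering x,
# so minimum nonzero weight = minimum distance of the linear code
weights = [sum(encode(w)) for w in product([0, 1], repeat=3) if any(w)]
assert min(weights) >= 2   # minimum distance >= k, as Proposition 3.3 predicts
```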
Thus, the problem of finding a small $(k,1;0)$-cover of $\mathbb{F}_{2}^{n}$
corresponds to finding an $n$-dimensional linear code of minimum distance at
least $k$ and small length. This is a central problem in coding theory and, as
such, has been extensively studied. We can therefore leverage known bounds to
bound the function $g(n,k,1;0)$.
###### Corollary 3.4.
For all $k\geq 2$ and $n\geq 1$,
$g(n,k,1;0)\geq
n+\left\lfloor\frac{k-1}{2}\right\rfloor\log\left(\frac{2n}{k-1}\right).$
###### Proof.
Let $\mathcal{H}$ be an optimal $(k,1;0)$-cover and let
$\mathcal{C}\subseteq\mathbb{F}_{2}^{n}$ be the equivalent $n$-dimensional
linear binary code of length $m=|\mathcal{H}|$ and minimum distance at least
$k$, as described in Proposition 3.3. We can now appeal to the Hamming bound:
since the code has minimum distance $k$, the balls of radius
$t=\left\lfloor\frac{k-1}{2}\right\rfloor$ around the $2^{n}$ points of
$\mathcal{C}$ must be pairwise disjoint. As each ball has size
$\sum_{i=0}^{t}\binom{m}{i}$, and the ambient space has size $2^{m}$, we get
$2^{n}\leq\frac{2^{m}}{\sum_{i=0}^{t}\binom{m}{i}}.$
We bound the denominator from below by
$\sum_{i=0}^{t}\binom{m}{i}\geq\binom{m}{t}\geq\left(\frac{m}{t}\right)^{t}\geq\left(\frac{n}{t}\right)^{t}=2^{t\log\tfrac{n}{t}},$
where the last inequality is valid provided $m\geq n$, as it must be. Thus we
conclude
$g(n,k,1;0)=|\mathcal{H}|=m\geq n+t\log\tfrac{n}{t}\geq
n+\left\lfloor\frac{k-1}{2}\right\rfloor\log\left(\frac{2n}{k-1}\right).\qed$
###### Remark 3.5.
Although some of our bounds may seem wasteful, one can deduce upper bounds
from the Gilbert-Varshamov bound, which is obtained by
considering a random linear code. In particular, if $n$ is large with respect
to $k$, one finds that $g(n,k,1;0)\leq n+(k-1)\log(2n)$. Narrowing the gap
between these upper and lower bounds remains an active area of research in
coding theory.
The above lower bound can be used to show that if $n$ is large with respect to
$k$ and $d$ then every optimal $(k,d)$-cover has to cover the origin many
times. This corollary is critical to our proof of the upper bound.
###### Corollary 3.6.
If $n>2^{2^{d}k-k-d+1}$ then any optimal $(k,d)$-cover of $\mathbb{F}_{2}^{n}$
covers the origin at least $k-2$ times.
###### Proof.
Let $S_{1},\dots,S_{m}$ be an optimal $(k,d)$-cover, and, if necessary,
relabel the subspaces so that $S_{1},\dots,S_{s}$ are the affine subspaces
covering the origin. Suppose for a contradiction that $s\leq k-3$, and observe
that if we delete the first $k-3$ subspaces, each nonzero point must still be
covered at least thrice, while the origin is left uncovered. That is,
$S_{k-2},S_{k-1},\ldots,S_{m}$ forms a $(3,d;0)$-cover of
$\mathbb{F}_{2}^{n}$.
For each $k-2\leq j\leq m$, we can then extend $S_{j}$ to an arbitrary
hyperplane $H_{j}$ that contains $S_{j}$ and avoids the origin. Then
$\\{H_{k-2},H_{k-1},\ldots,H_{m}\\}$ is a $(3,1;0)$-cover, and hence
$m-k+3\geq g(n,3,1;0)$.
By Corollary 3.4, this, together with the assumption $n>2^{2^{d}k-k-d+1}$,
implies
$f(n,k,d)=m\geq g(n,3,1;0)+k-3\geq n+\log
n+k-3>n+2^{d}k-k-d+1+k-3=n+2^{d}k-d-2,$
which contradicts the upper bound from Lemma 3.1. ∎
###### Remark 3.7.
Observe that Corollary 3.6 in fact gives us some stability for large
dimensions. If $n=2^{2^{d}k-k-d+\omega(1)}$, then the above calculation shows
that any $(k,d)$-cover that covers the origin at most $k-3$ times has size at
least $n+2^{d}k+\omega(1)$. Thus, when $n=2^{2^{d}k-k-d+\omega(1)}$, any
$(k,d)$-cover that is even close to optimal must cover the origin at least
$k-2$ times.
### 3.4 The lower bound
By Corollary 3.6, when trying to bound $f(n,k,d)=\min_{s}g(n,k,d;s)$ for large
$n$, we can restrict our attention to $s\in\\{k-2,k-1\\}$. First we deal with
the latter case.
###### Lemma 3.8.
Let $n,k,d$ be positive integers such that $n\geq d\geq 1$. Then
$g(n,k,d;k-1)=n+2^{d}k-d-1.$
###### Proof.
To prove the statement, we will show that, for all positive integers $n,k,d$
with $n\geq d\geq 1$, we have $g(n+1,k,d;k-1)=g(n,k,d;k-1)+1$. Combined with
the simple observation that $g(d,k,d;k-1)=2^{d}k-1$ for all $k\geq 1$ (when
$n=d$ we are covering with individual points), this fact will indeed imply the
desired result.
By Lemma 3.2 we know that $g(n+1,k,d;k-1)\geq g(n,k,d;k-1)+1$. For the other
inequality, consider an optimal $(k,d;k-1)$-cover $\mathcal{H}$ of
$\mathbb{F}_{2}^{n}$. For every $S\in\mathcal{H}$, let
$S^{\prime}=S\times\\{0,1\\}$, which is a codimension-$d$ affine subspace of
$\mathbb{F}_{2}^{n+1}$, and let $S_{0}$ be any $(n+1-d)$-dimensional affine
subspace of $\mathbb{F}_{2}^{n+1}$ that contains the vector $(0,\ldots,0,1)$
but avoids the origin. We claim that
$\mathcal{H}^{\prime}=\\{S^{\prime}:S\in\mathcal{H}\\}\cup\\{S_{0}\\}$ is a
$(k,d;k-1)$-cover of $\mathbb{F}_{2}^{n+1}$. Indeed, for all
$S\in\mathcal{H}$, a point of the form $(\vec{x},t)$ is covered by
$S^{\prime}$ if and only if $\vec{x}$ is covered by $S$. Hence, the collection
$\\{S^{\prime}:S\in\mathcal{H}\\}$ covers $\vec{0}$ exactly $k-1$ times and
each point of the form $(\vec{x},t)$ with $\vec{x}\neq\vec{0}$ at least $k$
times. Finally, the point $(\vec{0},1)$ is covered $k-1$ times by the
$\\{S^{\prime}:S\in\mathcal{H}\\}$ and once by the subspace $S_{0}$, so it is
also covered the correct number of times. Hence $\mathcal{H}^{\prime}$ is
indeed a $(k,d;k-1)$-cover of size $|\mathcal{H}|+1$, and so the second
inequality follows. ∎
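The dimension-lifting step in this proof is concrete enough to run. The sketch below (an illustration, with $d=1$ and $k=2$ chosen by us) starts from the point cover of $\mathbb{F}_{2}^{1}$ and lifts twice, taking the affine hyperplane $x_{n+1}=1$ as the extra subspace $S_{0}$; at every stage the cover has the size $n+2k-2$ claimed by Lemma 3.8:

```python
from itertools import product

k = 2
# base case n = d = 1: cover F_2^1 by points, covering the origin k-1 times
cover = [frozenset({(1,)})] * k + [frozenset({(0,)})] * (k - 1)

for n in (2, 3):                                # lift one dimension at a time
    lifted = [frozenset(x + (t,) for x in S for t in (0, 1)) for S in cover]
    S0 = frozenset(x for x in product([0, 1], repeat=n) if x[-1] == 1)
    cover = lifted + [S0]                       # S0 picks up the point (0,...,0,1)
    assert len(cover) == n + 2 * k - 2          # the size claimed by Lemma 3.8
    for x in product([0, 1], repeat=n):
        c = sum(1 for S in cover if x in S)
        assert c >= k if any(x) else c == k - 1 # a (k,1;k-1)-cover
```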
###### Remark 3.9.
Recall that the special case of $d=1$, $g(n,k,1;k-1)=n+2k-2$, also follows
from [19, Theorem 1.5].
The proof of Theorem 1.2(b) is now straightforward.
###### Proof of Theorem 1.2(b).
The upper bound is given by Lemma 3.1. For the lower bound, first observe that
for any valid choice of the parameters, we have $g(n,k,d;s+1)\leq
g(n,k,d;s)+1$, as adding any subspace containing the origin to a
$(k,d;s)$-cover yields a $(k,d;s+1)$-cover. Then, by Corollary 3.6 and Lemma
3.8, we obtain
$f(n,k,d)=\min\\{g(n,k,d;k-2),g(n,k,d;k-1)\\}\geq
g(n,k,d;k-1)-1=n+2^{d}k-d-2,$
as desired. ∎
## 4 The transition
Parts (a) and (b) of Theorem 1.2 determine the function $f(n,k,d)$ exactly in
the two extreme ranges of the parameters — when $k$ is exponentially large
with respect to $n$, and when $n$ is exponentially large with respect to $k$.
As remarked upon in the introduction, we know that in the former case, the
bound on $k$ is best possible. However, that is not true for part (b), and we
believe the upper bound of Lemma 3.1 should be tight for much smaller values
of $n$ as well.
In this section we explore the transition between these two ranges, with an
eye towards better understanding when this upper bound becomes tight. As we
saw in Lemma 2.2, for our upper bounds we can generally reduce to the
hyperplane setting, and so we shall focus on the $d=1$ case in this section.
To simplify notation, we will refer to a $(k,1)$-cover as a $k$-cover and
write $f(n,k)$ instead of $f(n,k,1)$.
In this hyperplane setting, the upper bound of Lemma 3.1, valid for all $n\geq
1$ and $k\geq 2$, has the simple form $n+2k-3$. Given some fixed $k$, suppose
the bound is tight for some $n_{0}$; that is, $f(n_{0},k)=n_{0}+2k-3$. The
recursion of Lemma 3.2 implies $f(n,k)\geq f(n-1,k)+1$ for all $n\geq 2$, and
so these two bounds together imply $f(n,k)=n+2k-3$ for all $n\geq n_{0}$.
Hence, for every $k$, there is a well-defined threshold $n_{0}(k)$ such that
$f(n,k)=n+2k-3$ if and only if $n\geq n_{0}(k)$. Theorem 1.2(b) shows
$n_{0}(k)\leq 2^{k}+1$, and our goal now is to explore the true behaviour of
this threshold.
### 4.1 The diagonal case
As a natural starting point, one might ask what lower bound we can provide for
$n_{0}(k)$. From our previous results, in particular Theorem 1.2(a), we have
seen that $f(n,k)$ behaves differently when $k$ is large compared to $n$. We
therefore know that, for $k\geq 4$, the upper bound of Lemma 3.1 is not tight
when $k\geq 2^{n-2}$ or, equivalently, that $n_{0}(k)>\log_{2}k+2$. However, the following
construction, valid when $k\geq 4$, shows that we can improve upon Lemma 3.1
for considerably larger values of $n$ as well.
###### Proposition 4.1.
For all $k\geq 4$, we have $f(k,k)\leq 3k-4$. As a consequence, $n_{0}(k)\geq
k+1$.
###### Proof.
To prove the upper bound, we must construct a $k$-cover $\mathcal{H}$ of
$\mathbb{F}_{2}^{k}$ of size $3k-4$. Letting $\vec{e}_{i}$ denote the $i$th
standard basis vector and $\vec{1}$ the all-one vector, we take
$\mathcal{H}=\mathcal{H}_{1}\cup\mathcal{H}_{2}\cup\mathcal{H}_{3}$, where
$\mathcal{H}_{1}=\big{\\{}H_{\vec{e}_{i}}:i\in[k]\big{\\}}$,
$\mathcal{H}_{2}=\big{\\{}H_{\vec{1}-\vec{e}_{i}}:i\in[k]\big{\\}}$, and
$\mathcal{H}_{3}$ consists of $k-4$ copies of the hyperplane with equation
$\vec{x}\cdot\vec{1}=0$. Then $\mathcal{H}$ has size $3k-4$, while the only
planes containing the origin are those in $\mathcal{H}_{3}$. Thus it only
remains to verify that each nonzero point is covered at least $k$ times.
Given a nonzero point $\vec{x}$, let its weight be $w$. We then see that
$\vec{x}$ is covered $w$ times by the planes in $\mathcal{H}_{1}$. Next,
observe that $\vec{x}\cdot\left(\vec{1}-\vec{e}_{i}\right)$ is equal to $w$ if
$x_{i}=0$, and is equal to $w-1$ otherwise. Hence, if $w$ is odd, then
$\vec{x}$ is covered by $k-w$ planes in $\mathcal{H}_{2}$, and is thus covered
at least $k$ times by $\mathcal{H}$.
On the other hand, if $w$ is even, then $\vec{x}$ is covered $w$ times by the
planes in $\mathcal{H}_{2}$. However, in this case $\vec{x}\cdot\vec{1}=0$,
and so $\vec{x}$ is covered $k-4$ times by $\mathcal{H}_{3}$ as well. In
total, then, $\vec{x}$ is covered $2w+k-4$ times. As $\vec{x}$ is a nonzero
vector of even weight, we must have $w\geq 2$, and hence $\vec{x}$ is covered
at least $k$ times in this case as well.
In conclusion, we see that $\mathcal{H}$ forms a $k$-cover of
$\mathbb{F}_{2}^{k}$, and thus $f(k,k)\leq|\mathcal{H}|=3k-4$. As this is
smaller than the upper bound of Lemma 3.1, it follows that $n_{0}(k)\geq k+1$.
∎
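The construction is simple enough to verify mechanically; the sketch below (illustrative Python, with $k=6$ chosen by us) checks that $\mathcal{H}_{1}\cup\mathcal{H}_{2}\cup\mathcal{H}_{3}$ is a $k$-cover of $\mathbb{F}_{2}^{k}$ of size $3k-4$:

```python
from itertools import product

k = 6                                            # any k >= 4 works here
n = k
dot = lambda x, u: sum(a * b for a, b in zip(x, u)) % 2
e = [tuple(int(i == j) for j in range(n)) for i in range(n)]
planes  = [(u, 1) for u in e]                    # H_1: the hyperplanes H_{e_i}
planes += [(tuple(1 - c for c in u), 1) for u in e]   # H_2: the H_{1-e_i}
planes += [((1,) * n, 0)] * (k - 4)              # H_3: k-4 copies of x.1 = 0
assert len(planes) == 3 * k - 4
for x in product([0, 1], repeat=n):
    c = sum(1 for (u, b) in planes if dot(x, u) == b)
    assert c >= k if any(x) else c < k           # a k-cover of F_2^k
```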
### 4.2 Initial values
This still leaves us with a large range of possible values for $n_{0}(k)$: our
lower bound is linear, while our upper bound is exponential. To get a better
feel for which bound might be nearer to the truth, we next decided to take a
closer look at $f(n,k)$ for small values of the parameters.
To be able to compute a number of these values efficiently, it helped to
appeal to our recursive bounds. Lemma 3.2 already restricts the behaviour of
$f(n,k)$ as $n$ changes, showing that the function must be strictly increasing
in $n$. It is also very helpful to understand how $f(n,k)$ responds to changes
in $k$: as the following lemma shows, there is even less flexibility here.
###### Lemma 4.2.
For all $n\geq 1$ and $k\geq 2$ we have $f(n,k-1)+1\leq f(n,k)\leq
f(n,k-1)+2$.
###### Proof.
For the lower bound, observe that, given a $k$-cover of size $f(n,k)$,
removing a hyperplane covering the origin (or, if no such plane exists, an
arbitrary plane) leaves us with a $(k-1)$-cover, and thus $f(n,k-1)\leq
f(n,k)-1$.
For the upper bound, given a $(k-1)$-cover of size $f(n,k-1)$, we can add an
arbitrary pair of parallel hyperplanes to obtain a $k$-cover. Thus $f(n,k)\leq
f(n,k-1)+2$. ∎
Thus, if we know the value of $f(n,k-1)$, there are only two possible values
for $f(n,k)$. This becomes even more powerful when used in combination with
Lemma 3.2, which guarantees $f(n,k)\geq f(n-1,k)+1$. Hence, in case we have
$f(n-1,k)=f(n,k-1)+1$, the only possible value for $f(n,k)$ is $f(n,k-1)+2$.
Although this may seem a very conditional statement, this configuration occurs
quite frequently, as one can see in Table 1 below, and allows us to deduce
several values of $f(n,k)$ for free. This observation, together with our
previous bounds (and noting that $f(n,2)=n+1$), allows us to almost completely
determine $f(n,k)$ for $n\leq 6$. We were able to fill in the few outstanding
values through a computer search (using SageMath [18] and Gurobi [12]). (Some
of these values we first proved by hand, via direct case analysis. However, as
we do not see any more broadly applicable generalisation of the arguments
therein, we have omitted these proofs.)
$n\backslash k$ | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | $\cdots$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
3 | 6* | 7 | 9 | 11 | 13 | 14 | 16 | 18 | 20 | 21 | 23 | 25 | 27 | 28 | $\cdots$
4 | 7* | 8 | 10 | 12 | 14 | 15 | 17 | 19 | 21 | 23 | 25 | 27 | 29 | 30 | $\cdots$
5 | 8* | 10* | 11 | 13 | 15 | 16 | 18 | 20 | 22 | 24 | 26 | 28 | 30 | 31 | $\cdots$
6 | 9* | 11* | 13* | 14 | 16 | 18 | 20 | 22 | 23 | 25 | 27 | 29 | 31 | 32 | $\cdots$
Table 1: $f(n,k)$ for $3\leq n\leq 6$: values in green come from Theorem
1.2(a), values in blue are a consequence of the recursive bounds, values in
orange follow from Proposition 4.1, and values in red were obtained by a
computer search. An asterisk denotes values equal to the upper bound of Lemma
3.1; that is, where $n\geq n_{0}(k)$.
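As a sanity check on the table, the entries can be tested against all of the constraints derived so far. The following illustrative Python sketch replays Lemma 3.1, Lemma 3.2, Lemma 4.2 and Theorem 1.2(a) over the tabulated values:

```python
table = {  # f(n, k) as listed in Table 1, for 3 <= n <= 6 and 3 <= k <= 16
    3: [6, 7, 9, 11, 13, 14, 16, 18, 20, 21, 23, 25, 27, 28],
    4: [7, 8, 10, 12, 14, 15, 17, 19, 21, 23, 25, 27, 29, 30],
    5: [8, 10, 11, 13, 15, 16, 18, 20, 22, 24, 26, 28, 30, 31],
    6: [9, 11, 13, 14, 16, 18, 20, 22, 23, 25, 27, 29, 31, 32],
}
f = {(n, j + 3): v for n, row in table.items() for j, v in enumerate(row)}
for (n, k), v in f.items():
    if (n - 1, k) in f:                      # Lemma 3.2: strictly increasing in n
        assert v >= f[(n - 1, k)] + 1
    if (n, k - 1) in f:                      # Lemma 4.2: steps of 1 or 2 in k
        assert f[(n, k - 1)] + 1 <= v <= f[(n, k - 1)] + 2
    if k >= 2 ** (n - 2):                    # Theorem 1.2(a): exact formula
        assert v == 2 * k - k // 2 ** (n - 1)
    assert v <= n + 2 * k - 3                # Lemma 3.1 with d = 1
```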
### 4.3 The extended Golay code
We see from Table 1 that $n_{0}(k)=k+1$ for $k\in\\{4,5\\}$, lending some
credence to the belief that the construction from Proposition 4.1 is perhaps
indeed the last time the upper bound from Lemma 3.1 can be improved. However,
we can once again exploit the coding theory connection of Proposition 3.3 to
show that this is not always the case.
The extended binary Golay code is a $12$-dimensional code of length $24$ and
minimum distance $8$. By Proposition 3.3, this code is equivalent to an
$(8,1;0)$-cover of $\mathbb{F}_{2}^{12}$ of size $24$, thus implying that
$f(12,8)\leq 24$, whereas the upper bound given by Lemma 3.1 is $25$.
Furthermore, we see in Table 1 that $f(6,8)=18$. By repeated application of
Lemma 3.2, we must have $f(12,8)\geq f(6,8)+6$, and thus $f(12,8)=24$.
Moreover, there must be equality in every step of the recursion, and thus
$f(n,8)=n+12$ for $6\leq n\leq 12$.
This result, coupled with the techniques described previously, allows us to
extend Table 1 to include values for $7\leq n\leq 12$ and $3\leq k\leq 10$.
These new values are depicted in Table 2 below. We see that the equality
$n_{0}(k)=k+1$ persists for $k=6,7$, until the Golay code construction comes
into play. In light of Lemma 4.2, this ensures $n_{0}(k)\geq k+2$ for $8\leq
k\leq 11$.
$n\backslash k$ | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---
6 | 9* | 11* | 13* | 14 | 16 | 18 | 20 | 22
7 | 10* | 12* | 14* | 16* | 17 | 19 | 21 | 23
8 | 11* | 13* | 15* | 17* | 19* | 20 | 22 | 24
9 | 12* | 14* | 16* | 18* | 20* | 21 | 23 | 25
10 | 13* | 15* | 17* | 19* | 21* | 22 | 24 | 26
11 | 14* | 16* | 18* | 20* | 22* | 23 | 25 | 27
12 | 15* | 17* | 19* | 21* | 23* | 24 | 26 | 28
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | | |
Table 2: More values of $f(n,k)$: green represents values coming from Theorem
1.2(a), red represents values obtained through computer computations, blue
represents values obtained from other values by the recursive bounds, orange
represents values obtained by Proposition 4.1 and recursion, and cyan
represents values obtained by the Golay code construction and its recursive
consequences. An asterisk denotes values attaining the upper bound of Lemma
3.1; that is, where $n\geq n_{0}(k)$.
This raises the question of what happens for larger values of $k$. Does the gap
$n_{0}(k)-k$ continue to grow? Does the threshold return to $k+1$ at a later
point? Unlike the construction in Proposition 4.1, the Golay code yields a
sporadic construction, which we have not been able to generalise. Furthermore,
this code is known to be particularly efficient, and we are not aware of any
other code whose parameters lead to an improvement on Proposition 4.1. Hence,
we are leaning towards the second possibility – not strongly enough, perhaps,
to conjecture it as the truth, but enough to pose it as a question.
###### Question 4.3.
Do we have $n_{0}(k)=k+1$ for all $k\geq 12$?
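The numerology here can be made explicit. Assuming the Lemma 3.1 upper bound takes the form $f(n,k)\leq n+2k-3$ (consistent with the asterisked entries of Tables 1 and 2), specialising to $n=k+1$ gives
\[
f(k+1,k)\;\leq\;(k+1)+2k-3\;=\;3k-2,
\]
so an affirmative answer to the question asks precisely that this bound be attained at $n=k+1$.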
To answer Question 4.3, we need to determine the value of $f(k+1,k)$. For an
affirmative answer, we need to show $f(k+1,k)=3k-2$, while a negative answer
would follow from a construction showing $f(k+1,k)\leq 3k-3$. What could such
a construction look like? If we retrace the proof of Theorem 1.2(b), we see
that any $k$-cover of $\mathbb{F}_{2}^{k+1}$ that covers the origin at least
$k-2$ times must have size at least $3k-2$. Hence, any construction negating
Question 4.3 must cover the origin at most $k-3$ times.
While this seemingly contradicts Corollary 3.6, recall that we needed $n$ to
be exponentially large with respect to $k$ to draw that conclusion. Without
this condition, the Hamming bound on codes with large distance is not strong
enough to provide the requisite lower bound on $f(n,k)$. Indeed, the Gilbert-
Varshamov bound, discussed in Remark 3.5, shows that a random collection of
$k+O(\log k)$ hyperplanes forms a $3$-cover of $\mathbb{F}_{2}^{k+1}$ with
high probability. Adding $k-3$ arbitrary pairs of parallel planes then gives a
$k$-cover of size $3k+O(\log k)$ that only covers the origin $k-3$ times.
Thus, we can find numerous $k$-covers that are asymptotically optimal, and we
cannot hope for any strong stability when $n$ and $k$ are comparable.
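The abundance of asymptotically optimal covers rests on a union-bound computation that is easy to make quantitative. The sketch below is a hypothetical back-of-the-envelope version, not the argument of Remark 3.5: it treats each of $m$ random affine hyperplanes $\{x:\langle a,x\rangle=1\}$ as covering a fixed nonzero point independently with probability $1/2$, and finds the least $m$ for which the union bound already forces a $3$-cover of $\mathbb{F}_2^n$ to exist.

```python
from math import comb

def min_hyperplanes(n, t=3):
    """Smallest m for which the union bound
           2^n * sum_{i < t} C(m, i) / 2^m < 1
    guarantees (under the heuristic that each of m random affine hyperplanes
    a.x = 1 covers a fixed nonzero point independently with probability 1/2)
    that m hyperplanes covering every nonzero point of F_2^n >= t times exist."""
    m = n
    while 2 ** n * sum(comb(m, i) for i in range(t)) >= 2 ** m:
        m += 1
    return m

n = 31  # corresponds to k = 30 in the discussion above
m = min_hyperplanes(n)
print(m, m - n)  # the excess m - n is of order log2(n)
```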
## 5 Concluding remarks
In this paper, we investigated the minimum number of affine subspaces of a
fixed codimension needed to cover all nonzero points of $\mathbb{F}_{2}^{n}$
at least $k$ times, while only covering the origin at most $k-1$ times. We
were able to determine the answer precisely when $k$ is large with respect to
$n$, or when $n$ is large with respect to $k$, and provided asymptotically
sharp bounds for the range in between these extremes. In this final section,
we highlight some open problems and avenues for further research.
#### Bounding the threshold
In the previous section, we raised the question of determining the threshold
$n_{0}(k)$ beyond which the result of Theorem 1.2(b) holds. Although our proof
requires $n$ to be exponentially large with respect to $k$, our constructions
suggest the threshold might, with limited exceptions, be as small as $k+1$.
It is quite possible that solving Question 4.3 will require improving the
classic bounds on the length of binary codes of large minimum distance, and
will therefore perhaps be quite challenging. However, there is plenty of scope
to attack the problem from the other direction, and aim to reduce the
exponential upper bound on $n_{0}(k)$.
Our strategy was to prove the lower bound for $g(n,k,1;k-1)$ and
$g(n,k,1;k-2)$, using the recursive bounds. By removing planes covering the
origin, we could reduce the remaining cases to $g(n,3,1;0)$, for which, when
$n$ is large, the coding theory connection provides a large enough lower
bound.
There are two natural ways to improve this argument. The first would be to
extend the values $s$ for which we directly prove the lower bound on
$g(n,k,1;s)$. For instance, if we could show that $g(n,k,1;s)\geq n+2k-3$ for
$s\in\\{k-3,k-4\\}$ as well, then we could reduce the remaining cases to
$g(n,5,1;0)$ instead, for which the Hamming bound gives a stronger lower
bound. This would still yield an exponential bound on $n_{0}(k)$, but with a
smaller base.
The second approach concerns our reduction to $g(n,3,1;0)$, where we use the
fact that removing a hyperplane from a $k$-cover leaves us with a
$(k-1)$-cover. However, our constructions contain arbitrary pairs of parallel
planes, and thus it is possible to remove from them _two_ planes and still be
left with a $(k-1)$-cover. If we can show that this is true in general, it
could lead to a linear bound on $n_{0}(k)$.
Finally, while we have focused on the hyperplane case in Question 4.3, it
would also be worth exploring the corresponding threshold $n_{0}(k,d)$ for
$d\geq 2$. It would be very interesting if there were new constructions that
appear in this setting where we cover with affine subspaces of codimension
$d$.
#### Larger fields
In this paper we have worked exclusively over the binary field
$\mathbb{F}_{2}$, but it is also natural to explore these subspace covering
problems over larger finite fields, $\mathbb{F}_{q}$ for $q>2$. Let us denote
the corresponding extremal function by $f_{q}(n,k,d)$, which is the minimum
cardinality of a multiset of $(n-d)$-dimensional affine subspaces that cover
all points of $\mathbb{F}_{q}^{n}\setminus\\{\vec{0}\\}$ at least $k$ times,
and the origin at most $k-1$ times. The work of Jamison [14] establishes the
initial values of this function, showing $f_{q}(n,1,d)=(q-1)(n-d)+q^{d}-1$.
When it comes to multiplicities $k\geq 2$, some of what we have done here can
be transferred to larger fields as well.
To start, we can once again resolve the setting where the multiplicity $k$ is
large with respect to the dimension $n$. Indeed, the double-counting lower
bound of Lemma 2.1 generalises immediately to this setting, giving
$f_{q}(n,k,d)\geq q^{d}k-\left\lfloor\frac{k}{q^{n-d}}\right\rfloor$, and one
can obtain a matching upper bound by taking multiple copies of every affine
subspace.
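The double counting presumably runs as in the binary case: if $m$ such subspaces form a cover, each of them contains $q^{n-d}$ points, while the $q^{n}-1$ nonzero points require $k$ incidences each, so
\[
m\,q^{n-d}\;\geq\;k\,(q^{n}-1),\qquad\text{i.e.}\qquad m\;\geq\;q^{d}k-\frac{k}{q^{n-d}},
\]
and since $m$ is an integer, this rounds up to $m\geq q^{d}k-\left\lfloor\frac{k}{q^{n-d}}\right\rfloor$.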
In the other extreme, where $n$ is large with respect to $k$, the problem
remains widely open. We first note that the reduction to hyperplanes from
Lemma 2.2 can be extended, giving $f_{q}(n,k,d)\leq
f_{q}(n-d+1,k,1)+(q^{d-1}-1)kq$. Thus, as before, it is best to first focus on
the case $d=1$, and we define $f_{q}(n,k)\coloneqq f_{q}(n,k,1)$. Then
Jamison’s result gives $f_{q}(n,1)=(q-1)n$.
For an upper bound, let us start by considering $2$-covers. It is once again
true that if one takes the standard $1$-covering by hyperplanes, consisting of
all hyperplanes of the form $\\{\vec{x}:x_{i}=c\\}$ for some $i\in[n]$ and
$c\in\mathbb{F}_{q}\setminus\\{0\\}$, the only nonzero vectors that are only
covered once are those of Hamming weight $1$. However, since the nonzero
coordinate of these vectors can take any of $q-1$ different values, it takes a
further $q-1$ hyperplanes to cover these again, and so we have
$f_{q}(n,2)\leq(q-1)(n+1)$. Now, given a $(k-1)$-cover of $\mathbb{F}_{q}^{n}$,
one can obtain a $k$-cover by adding an arbitrary partition of
$\mathbb{F}_{q}^{n}$ into $q$ parallel planes, and this yields
$f_{q}(n,k)\leq(q-1)(n+1)+q(k-2)$. This construction is the direct analogue of
that from Lemma 3.1, and so, as in Theorem 1.2(b), we expect it to be tight
when $n$ is sufficiently large.
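The construction just described is concrete enough to verify mechanically for small parameters. The sketch below, with the illustrative choice $q=3$, $n=4$, builds the $(q-1)(n+1)$ hyperplanes and checks that every nonzero point is covered at least twice while the origin is not covered at all.

```python
from itertools import product

q, n = 3, 4
points = list(product(range(q), repeat=n))

# Standard 1-cover: hyperplanes {x : x_i = c}, i in [n], c != 0,
# plus the q-1 extra hyperplanes {x : sum(x) = c}, c != 0.
hyperplanes = [lambda x, i=i, c=c: x[i] == c for i in range(n) for c in range(1, q)]
hyperplanes += [lambda x, c=c: sum(x) % q == c for c in range(1, q)]

coverage = {x: sum(h(x) for h in hyperplanes) for x in points}
origin = (0,) * n
print(len(hyperplanes))                                    # (q-1)(n+1) = 10
print(min(v for x, v in coverage.items() if x != origin))  # every nonzero point >= 2
print(coverage[origin])                                    # origin covered 0 times
```

A weight-$1$ point is covered once by its coordinate hyperplane and once by a sum-hyperplane, so the minimum coverage is exactly $2$.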
However, the lower bounds are lacking. A simple general lower bound is
obtained by noticing that removing $k-1$ hyperplanes from a $k$-cover leaves
us with at least a $1$-cover, and so $f_{q}(n,k)\geq
f_{q}(n,1)+k-1=(q-1)n+k-1$. This remains the best lower bound we know — in
particular, even the case of $f_{q}(n,2)$ is unsolved.
It would of course be very helpful to use some of the machinery we have
developed here, and so we briefly explain where the difficulties therein lie.
Key to our binary proof was the equivalence with codes of a certain minimum
distance, given in Proposition 3.3. When working over $\mathbb{F}_{q}$,
unfortunately, that equivalence breaks down. For an $n$-dimensional linear
code with minimum distance $k$ with generator matrix $A$, we require that, for
every nonzero vector $\vec{x}\in\mathbb{F}_{q}^{n}$, the vector $A\vec{x}$ has
at least $k$ nonzero entries. In the binary setting, this was precisely what
we wanted, since $\vec{x}$ was covered by the $i$th hyperplane if and only if
the $i$th entry of $A\vec{x}$ was nonzero. However, in the $q$-ary setting,
for $\vec{x}$ to be covered by the $i$th hyperplane, we need the $i$th entry
of $A\vec{x}$ to be equal to a prescribed nonzero value. Hence, while every
$k$-covering of $\mathbb{F}_{q}^{n}$ gives rise to a linear $q$-ary
$n$-dimensional code of minimum distance at least $k$, the converse is not
true. As a result, the coding theoretic bounds, which are of the form
$n+O(k\log n)$, are not strong enough to give us information here.
Another main tool was the recursion over $n$, showing that $f(n,k)$ is
strictly increasing in $n$. The same proof goes through here, and we can again
show $f_{q}(n,k)>f_{q}(n-1,k)$. However, from our bounds, we expect the
stronger inequality $f_{q}(n,k)\geq f_{q}(n-1,k)+q-1$ to hold. Intuitively,
this is because when we restrict a $k$-cover of $\mathbb{F}_{q}^{n}$ to
$\mathbb{F}_{q}^{n-1}\subset\mathbb{F}_{q}^{n}$, there are $q-1$ affine copies
of $\mathbb{F}_{q}^{n-1}$ that are lost. However, this does not (appear to)
come out of our probabilistic argument.
It would thus be of great interest to develop new tools to handle the $q$-ary
case, as these may also bear fruit when applied to the open problems in the
binary setting as well. We believe that new algebraic ideas may be necessary
to resolve the following question.
###### Question 5.1.
For $n\geq n_{0}(k,q)$, do we have $f_{q}(n,k)=(q-1)(n+1)+q(k-2)$?
#### Polynomials with large multiplicity
Finally, speaking of algebraic methods, we return to our introductory
discussion of the polynomial method. Recall that previous lower bounds in this
area have been obtained by considering the more general problem of the minimum
degree of a polynomial in $\mathbb{F}[x_{1},x_{2},\ldots,x_{n}]$ that vanishes
with multiplicity at least $k$ at all nonzero points in some finite grid, and
with lower multiplicity at the origin. Sauermann and Wigderson’s recent
breakthrough, Theorem 1.1, resolves this polynomial problem for $n\geq 2k-3$
over fields of characteristic 0, while our results here show that, in the
binary setting at least, there is separation between the hyperplane covering
and the polynomial problems.
Despite this, we wonder whether the answers to the two problems might coincide
in the range where the multiplicity $k$ is large with respect to the dimension
$n$. That is, can the simple double-counting hyperplane lower bound be
strengthened to the polynomial setting? We would therefore like to close by
emphasising a question of Sauermann and Wigderson [19], this time over
$\mathbb{F}_{2}$.
###### Question 5.2.
Given positive integers $k,n$ with $k\geq 2^{n-2}$, let
$P\in\mathbb{F}_{2}[x_{1},x_{2},\ldots,x_{n}]$ be a polynomial that vanishes
with multiplicity at least $k$ at every nonzero point, and with multiplicity
at most $k-1$ at the origin. Must we then have $\deg(P)\geq
2k-\left\lfloor\frac{k}{2^{n-1}}\right\rfloor$?
## References
* [1] N. Alon and Z. Füredi, Covering the cube by affine hyperplanes, European J. Combin. 14(2) (1993), 79–83.
* [2] S. Ball, On intersection sets in Desarguesian affine spaces, European J. Combin. 21(3) (2000), 441–446.
* [3] S. Ball, The polynomial method in Galois geometries, in Current research topics in Galois geometry, Chapter 5, Nova Sci. Publ., New York, (2012), 105–130.
* [4] S. Ball and O. Serra, Punctured Combinatorial Nullstellensätze, Combinatorica 29 (2009), 511–522.
* [5] A. Bishnoi, P. L. Clark, A. Potukuchi and J. R. Schmitt, On zeros of a polynomial in a finite grid, Comb. Prob. Comput. 27(3) (2018), 310–333.
* [6] A. Blokhuis, A. E. Brouwer and T. Szőnyi, Covering all points except one, J. Algebraic Comb. 32 (2010), 59–66.
* [7] A. E. Brouwer and A. Schrijver, The blocking number of an affine space, J. Combin. Theory Ser. A 24(2) (1978), 251–253.
* [8] A. A. Bruen, Polynomial multiplicities over finite fields and intersection sets, J. Combin. Theory Ser. A 60(1) (1992), 19–33.
* [9] A. A. Bruen and J. C. Fisher, The Jamison method in Galois geometries, Des. Codes Crypt. 1 (1991), 199–205.
* [10] A. Clifton and H. Huang, On almost $k$-covers of hypercubes, Combinatorica 40 (2020), 511–526.
* [11] O. Geil and U. Martínez-Peñas, Bounding the number of common zeros of multivariate polynomials and their consecutive derivatives, Comb. Prob. Comput. 28(2) (2019), 253–279.
* [12] Gurobi Optimizer Reference Manual, Gurobi Optimization, LLC, 2020, http://www.gurobi.com.
* [13] L. Guth, Polynomial methods in combinatorics, Vol. 64, American Mathematical Soc., (2016).
* [14] R. E. Jamison, Covering finite fields with cosets of subspaces, J. Combin. Theory Ser. A 22(3) (1977), 253–266.
* [15] P. Komjáth, Partitions of vector spaces, Period. Math. Hungar. 28 (1994), 187–193.
* [16] G. Kós and L. Rónyai, Alon’s Nullstellensatz for multisets, Combinatorica 32(5) (2012), 589–605.
* [17] G. Kós, T. Mészáros and L. Rónyai, Some extensions of Alon’s Nullstellensatz, Publicationes Mathematicae Debrecen 79(3-4) (2011), 507–519.
* [18] _SageMath, the Sage Mathematics Software System (Version 9.0)_ , The Sage Developers, 2020, https://www.sagemath.org.
* [19] L. Sauermann and Y. Wigderson, Polynomials that vanish to high order on most of the hypercube, arXiv preprint arXiv:2010.00077 (2020).
* [20] C. Zanella, Intersection sets in $\mathrm{AG}(n,q)$ and a characterization of hyperbolic quadric in $\mathrm{PG}(3,q)$, Discrete Math. 255 (2002), 381–386.
Renormalised singular stochastic PDEs
I. BAILLEUL & Y. BRUNED
Extended decorations on naturally decorated trees were introduced in the work of Bruned, Hairer and Zambotti on algebraic renormalization of regularity structures to provide a convenient framework for the renormalization of systems of singular stochastic PDEs within that setting. This non-dynamical feature of the trees complicated the analysis of the dynamical counterpart of the renormalization process. We provide a new derivation of the renormalised system, bypassing the use of extended decorations and working for a large class of renormalization maps, with the BPHZ renormalization as a special case. The proof reveals important algebraic properties connected to preparation maps.
§ INTRODUCTION
We consider systems of parabolic equations involving possibly different second order elliptic linear differential operators with constant coefficients $L_1,\dots, L_{k_0}$
\begin{equation} \label{main_equation}
(\partial_t - L_i)u_i = F_i(u, \nabla u)\xi, \qquad (i=1\dots k_0),
\end{equation}
with $u:=(u_1,\dots,u_{k_0})$ and each $u_i$ taking values in a finite dimensional space $\RR^{d_i}$, with $F_i(u, \nabla u)\in L(\RR^{n_0},\RR^{d_i})$, for each $u$, and $\xi = (\xi_1,\dots,\xi_{n_0})$ an $n_0$-dimensional spacetime `noise'. The unknown is defined on positive times, with given initial data at time $0$. Some of the components of $\xi$ may be regular, or even constant, functions, like in the generalized (KPZ) equation
(\partial_t-\partial_x^2) u = f(u)\xi_1 + g(u)\vert\partial_x u\vert^2,
where $u$ is a real-valued function, $\xi=(\xi_1,\xi_2)$, with $\xi_1$ a one-dimensional spacetime white noise and $\xi_2=1$. The foundations of regularity structures theory are contained in M. Hairer's groundbreaking work [26] and his subsequent works [10, 16, 5] with Bruned, Chandra, Chevyrev and Zambotti. We refer the reader to Friz and Hairer's book [22] for a gentle introduction to the subject, to Bruned, Hairer and Zambotti's short review [11], and to Bailleul and Hoshino's work [9] for a complete concise account of the analytic and algebraic sides of the subject. Possible solutions to a given equation/system are defined by their local behaviour
u(\cdot) \simeq \sum_\tau u_\tau(x)({\sf \Pi}_x\tau)(\cdot),
in terms of reference functions/distributions $({\sf \Pi}_x\tau)(\cdot)$, indexed by all state space points $x$ and a finite collection of symbols $\{\tau\}$. The reconstruction theorem ensures that one indeed defines in this way a unique function/distribution when the coefficients $\big\{u_\tau(x)\big\}$ form a consistent family, encoded in the notion of modelled distribution. The minimum set of symbols $\{\tau\}$ needed to give local expansions of possible solutions to singular PDEs has a natural combinatorial structure that comes with the naive Picard formulation of the equation and the very fact that they are used to build local expansion devices. Regularity structures are the appropriate abstraction of these combinatorial structures and models on regularity structures the appropriate analogue of local expansion devices.
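The interplay between local expansions and reconstruction is easiest to see in the toy polynomial model, where $({\sf \Pi}_x X^k)(y)=(y-x)^k$ and a modelled distribution is a field of Taylor jets. The sketch below is purely illustrative (it is not a model used for singular equations); it shows the pointwise-evaluation identity $({\sf R}{\sf v})(x)=({\sf \Pi}_x{\sf v}(x))(x)$ that reappears for smooth models later in the introduction.

```python
# Toy illustration: the polynomial regularity structure in one dimension.
def Pi(x, jet):
    """Interpret a jet {k: coefficient} as the function y -> sum c_k (y - x)^k."""
    return lambda y: sum(c * (y - x) ** k for k, c in jet.items())

def reconstruct(v):
    """Reconstruction operator: evaluate the jet v(x) at its own base point x."""
    return lambda x: Pi(x, v(x))(x)

# The jet of f(y) = y^2 at x is f(x) + f'(x)(y - x) + (y - x)^2.
v = lambda x: {0: x ** 2, 1: 2 * x, 2: 1.0}
u = reconstruct(v)
print(u(3.0))  # 9.0, recovering f(3)
```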
Under proper subcriticality conditions on a given system of singular stochastic PDEs, one can build a regularity structure $\mathscr{T}$ for it, and given a model ${\sf M=(g ,\Pi)}$ on $\mathscr{T}$, one can recast the system (<ref>) as a model-dependent fixed point problem for a modelled distribution ${\sf u}\in \mcD^\gamma(T,{\sf g})$, for a well-chosen positive regularity exponent $\gamma$. Such functions take values in a finite dimensional linear subspace $T_{<\gamma}$ of the linear space $T$. Obtaining a fixed point usually requires adjusting a parameter to get a contractive map on a proper functional space. This typically gives well-posedness results in small time or small parameter. A proper distribution/function on the state space where $\sf u$ is defined is obtained by applying to $\sf u$ the reconstruction operator $\sf R^M$ associated with the model $\sf M$. While the modelled distribution $\sf u$ solves a dynamical equation, counterpart of system (<ref>), the possibility to give a dynamical description of its reconstruction $\sf R^Mu$, and to relate it to the formal equation (<ref>), depends on the model. Denote by $\zeta=(\zeta_1,\dots,\zeta_{n_0})$ the symbols in $\mathscr{T}$ used to represent the noise $\xi$.
The use of admissible models ensures that
\begin{equation} \label{EqReconstructionDynamics}
u_i = K_i * {\sf R^M}\big({\sf F}_i({\sf u})\,\zeta \big) + K_i\big(u_i(0)\big), \qquad (i=1\dots k_0),
\end{equation}
where $K_i$ is the heat kernel of the operator $L_i$, the value at time $0$ of $u_i$ is $u_i(0)$, and the symbol $*$ stands for spacetime convolution. The functions ${\sf F}_i$ are the natural extensions of the functions $F_i$ to the space of modelled distributions. If the noise $\xi$ is smooth and one uses the canonical admissible model $\Theta$ sending the noise symbol $\zeta$ to $\xi$, its reconstruction operator happens to be multiplicative and system (<ref>) turns out to be equivalent to system (<ref>). The canonical admissible model is no longer well-defined if the noise is not sufficiently regular, which is the case of interest. In that case, one can only build random models, using probability tools, when the noise itself is random and satisfies some mild conditions detailed in Chandra and Hairer's work [16]. These admissible models $\sf M$ are limits in probability of admissible models ${\sf M}^\epsilon = ({\sf g}^\epsilon, {\sf \Pi}^\epsilon)$ for which
{\sf \Pi}^\epsilon = {\sf \Theta}^\epsilon\circ R^\epsilon,
where ${\sf \Theta}^\epsilon$ is the naive interpretation operator on symbols mapping the noise symbol $\zeta$ to a regularized version $\xi^\epsilon$ of $\xi$ that respects the parabolic scaling of the equation, and $R^\epsilon$ is a deterministic linear map on the finite dimensional linear space $T_{<\gamma}$, diverging as the regularization parameter $\epsilon$ goes to $0$. It is then no longer clear at all that the ${\sf M}^\epsilon$-reconstruction $u^\epsilon$ of the solution ${\sf u}^\epsilon$ to the above mentioned fixed point problem in $\mathcal{D}^\gamma(T,{\sf g}^\epsilon)$ is the solution of a PDE involving the regularized noise $\xi^\epsilon$. As the model ${\sf M}^\epsilon$ takes values in the space of continuous functions, its reconstruction operator ${\sf R}^{{\sf M}^\epsilon}$ satisfies
\big({\sf R}^{{\sf M}^\epsilon} {\sf v}\big)(x) = \big({\sf \Pi}^\epsilon_x{\sf v}(x)\big)(x),
for all modelled distributions $\sf v$ of positive regularity. The possibility to turn the non-autonomous dynamics (<ref>) for $u^\epsilon$ and ${\sf u}^\epsilon$ into an autonomous dynamics for $u^\epsilon$ depends then on our ability to compute effectively the recentered renormalized interpretation operator ${\sf \Pi}^\epsilon_x$ associated with $({\sf g}^\epsilon, {\sf \Pi}^\epsilon)$. This is a non-elementary matter. Hairer used in his seminal work [26] the fact that one has for the $\Phi^4_3$ and $2$-dimensional generalized (PAM) equations
\begin{equation} \label{EqCondition1}
\big({\sf \Pi}^\epsilon_x\tau\big)(x) = \big({\sf \Theta}^\epsilon_x(R^\epsilon\tau)\big)(x), \qquad \forall\,\tau\in T,\;\forall\,x,
\end{equation}
for the natural choice of decorated trees $\tau$ associated with these equations, to deal by hand with these equations. Such a property of sets of natural trees associated with singular PDEs was later proved to hold as well for the $\sin$-Gordon equation [17], the generalized (KPZ) equations and the $\Phi^4_{4-\delta}$ equation [8, 5], and the $2$-dimensional Yang-Mills equation [15]. The stronger property
\begin{equation} \label{EqCondition2}
{\sf \Pi}^\epsilon_x\tau= \Theta^\epsilon_x(R^\epsilon\tau), \qquad \forall\,\tau\in T,\;\forall\,x,
\end{equation}
holds for the list of trees that comes from the Picard development of solutions of the (KPZ) and generalized (PAM) equations. However, not all subcritical singular PDEs satisfy either of these properties. The introduction in Bruned, Hairer and Zambotti's work [10] of extended decorations on the set of decorated trees was motivated by the desire to set a framework where this identity holds true for a whole class of equations.
Bruned, Chandra, Chevyrev and Hairer showed in [5] that one can run within this framework a clean analysis of the dynamics of $u^\epsilon$, and that $u^\epsilon=\big(u^\epsilon_1,\dots,u^\epsilon_{k_0}\big)$ is a solution of the system
\begin{equation} \label{EqRenormalisedEquation}
(\partial_t - L_i)u_i^\epsilon = F_i(u^\epsilon,\nabla u^\epsilon)\xi^\epsilon + \sum_{\tau\in \mathcal{B}^-\backslash\{\textbf{\textsf{1}}\}} \ell^\epsilon(\tau)\,\frac{F_i(\tau)(u^\epsilon, \nabla u^\epsilon)}{S(\tau)}, \qquad (i=1\dots k_0),
\end{equation}
with additional explicit counterterms $F_i(\tau)(u^\epsilon, \nabla u^\epsilon)$ depending on $u^\epsilon$ and possibly its derivatives, where the $\ell^\epsilon(\tau)$ are renormalization constants, diverging as $\epsilon$ goes to $0$, indexed by a finite set $\mathcal{B}^-$ of decorated trees containing the unit element $\textbf{\textsf{1}}$, and $S(\tau)$ is a symmetry factor. The terms $F_i(\tau)(\cdot)/S(\tau)$ are the counterparts of the coefficients used to describe $B$-series in numerical analysis. (Decorated trees and the same type of coefficients have been used recently to describe a numerical scheme at low regularity for dispersive equations in [13].) This comparison makes less surprising the crucial role played by pre-Lie structures in the analysis of singular PDEs and the fact that the preceding terms satisfy some crucial morphism property for a (multi-)pre-Lie structure that was first introduced by Bruned, Chandra, Chevyrev and Hairer in [5]. (That such structures have a role to play in these questions was first noticed in a rough paths setting in Bruned, Chevyrev, Friz and Preiss' works [6, 7].) Equation (<ref>) is called the renormalized equation. At the algebraic level, identity (<ref>) reflects a co-interaction between two Hopf algebras, whose use in [5] for deriving the renormalized equation for a large class of singular PDEs was instrumental – see e.g. Section 5 of [9] for the core points of this co-interaction.
We show in this work that none of the identities (<ref>) and (<ref>) is actually needed to get back the renormalized equation and that one can run the analysis in the natural space of trees with no extended decorations.
A systematic renormalization procedure was designed by Bruned, Hairer and Zambotti in [10] and proved to provide converging renormalized models by Chandra and Hairer in [16]. It is named `BPHZ renormalization', after similar renormalization procedures introduced by Bogoliubov and Parasiuk for the needs of quantum field theory in the mid 50's, improved among others by Hepp and Zimmermann. Its regularity structures counterpart is subtle as renormalization needs to cope well with the recentering properties of the model. This compatibility between recentering and renormalization structures is encoded at an algebraic level in the above mentioned co-interaction of two Hopf algebras. (Such a co-interaction has been observed by Chartier, E. Hairer, Vilmart and Calaque, Ebrahimi-Fard, Manchon's works [19, 14] in the simpler context of the Butcher-Connes-Kreimer and the extraction-contraction Hopf algebras.) We consider here a larger class of renormalization procedures introduced in [3], containing the BPHZ renormalization as an element. They are built from special linear maps $R : T\rightarrow T$, to which one can associate a family of multiplicative operators $\Pi_x^{M^\circ}$ from $T$ to the space of smooth functions, and a smooth admissible model ${\sf M=(g,\Pi)}$ on $T$ such that its associated reconstruction operator $\sf R^M$ on modelled distributions of positive regularity factorizes through a multiplicative map
\big({\sf R^Mv}\big)(x) = \Big(\Pi^{M^\circ}{\sf v}(x)\Big)(x).
This brings back the core problem to understanding the action of $R$ on the lift to the regularity structure of the equation. The key point is then a right morphism property of $R$ for an $R$-independent product $\star$ introduced by Bruned and Manchon in [12] as the dual of the deformed Butcher-Connes-Kreimer coproduct used in [10].
The remainder of this work is organised as follows. In Section <ref>, we recall basics on decorated trees and deformed pre-Lie structures using mainly the formalism developed in Bruned and Manchon's work [12]. The main new result in this section is Proposition <ref>, which establishes a morphism property for $ F $ with respect to the $\star$-product. We introduce good multi-pre-Lie morphisms and (strong) preparation maps $R$ in Section <ref>, and show in Proposition <ref> that adjoints $R^*$ of strong preparation maps are right-morphisms for the $\star$-product. Proposition <ref> combines this result with Proposition <ref> to provide a clear understanding of the action of $R^*$ on $F$. The main result of the section is Theorem <ref>. It states that strong preparation maps $M$ are good multi-pre-Lie morphisms. Corollary <ref> states that the set of good multi-pre-Lie morphisms is in bijection with the set of preparation maps in a rough paths setting; a remarkable result. We prove the main result of the paper, Theorem <ref>, in Section <ref>. It shows that the reconstruction of the solution to the singular PDE in the space of modelled distributions associated with our class of renormalization maps is actually the solution of an explicit PDE of the form
(\partial_t-L_i)u_i = F_i(u,\nabla u)\xi + \sum_{1\leq l\leq n_0}F_i\big((R^*-\textrm{Id})\zeta_l\big)(u,\nabla u)\xi_l,\qquad (1\leq i\leq k_0).
One gets back a system of the form (<ref>) for maps $R^*$ fixing symbols $\zeta_l$ corresponding to non-constant noises.
The letter $\zeta$ will be used exclusively for the noise symbol in a regularity structure. The letters $\sigma, \tau, \mu, \nu$ will denote (decorated) trees.
§ DECORATED TREES AND PRE-LIE PRODUCTS
Recall system (<ref>) with its noises $\xi_1,\dots,\xi_{n_0}$ and its operators $L_1,\dots, L_{k_0}$. Let
\mathfrak{T}^-=\big(\Labhom^-_1,\dots,\Labhom^-_{n_0}\big), \;\textrm{and}\; \mathfrak{T}^+=\big(\Labhom_1^+,\dots,\Labhom_{k_0}^+\big)
be finite sets representing noise types and operator types, respectively. Denote by
\mathfrak{T} := \mathfrak{T}^-\cup\mathfrak{T}^+
the set of all types. We consider decorated trees $(\tau,\Labn,\Labe)$ where $\tau$ is a non-planar rooted tree with node set $N_\tau$ and edge set $E_\tau$. The maps $ \Labn : N_\tau \rightarrow \N^{d+1} $, and $\Labe=\big(\frak{t}(\cdot),\frak{p}(\cdot)\big): E_\tau \rightarrow \Lab \times \N^{d+1}$, are node decorations and edge decorations, respectively. The $\N^{d+1}$-part $\frak{p}(e)$ in the edge decoration of an edge $e$ encodes possible derivatives acting on the operator associated with the given edge type $\frak{t}(e)$. We will frequently abuse notations and simply denote by $\tau$ a decorated tree, using a symbolic notation.
$\bullet$ An edge decorated by $ (\Labhom,p) \in \Lab \times \N^{d+1} $ is denoted by $ \CI_{(\Labhom,p)} $. The symbol $ \CI_{(\Labhom,p)} $ is also viewed as the operation that grafts a tree onto a new root via a new edge with edge decoration $ (\Labhom,p) $
$\bullet$ A factor $ X^k $ encodes a single node $ \bullet^{k} $ decorated by $ k \in \N^{d+1} $. Let $\{e_1, \ldots, e_{d+1}\}$ denote the canonical basis of $ \N^{d+1} $. For $1\leq i\leq d+1$, write $ X_i $ for $ X^{e_i} $. The element $ X^0 $ is identified with $ \one $.
We require that every decorated tree $ \tau $ contains, at each node, at most one edge decorated by $ (\Labhom,p) $ with $ \Labhom \in \Lab^- $ and $p\in\N^{d+1}$. This encodes the fact that no product of two noises is involved in the analysis of the system (<ref>). We suppose that these edges lead directly to leaves; we denote them by $\zeta_l$, for $1\leq l\leq n_0$; by convention $ \zeta_0 $ is equal to $ \one $. Any decorated tree $ \tau $ has a unique decomposition
\[
\tau = X^{k} \zeta_l \prod_{i=1}^{n} \CI_{a_i}(\tau_i) ,
\]
where $\prod_i$ is the tree product, the $\tau_i$ are decorated trees and the $a_i$ belong to $ \Lab^+ \times \N^{d+1}$, so no factor in the product is a noise symbol $\zeta_{l'}$. The algebraic symmetry factor $S(\tau)$ of a decorated tree $\tau= X^{k} \zeta_l \ \prod_{j=1}^m \mathcal{I}_{a_j}(\tau_j)^{\beta_j}$ is defined by grouping terms in such a way that $(a_i,\tau_i) \neq (a_j,\tau_j)$ for $i \neq j$, and setting inductively
\begin{equation*}
S(\tau) = k!\,
\bigg(
\prod_{j=1}^{m}
\beta_{j}!\,
S(\tau_j)^{\beta_j}
\bigg)\;.
\end{equation*}
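For concreteness, the inductive symmetry factor can be implemented directly. The sketch below uses a hypothetical nested-tuple encoding of decorated trees and the recursion $S(\tau)=k!\,\prod_j \beta_j!\,S(\tau_j)^{\beta_j}$ (the form used in [5]).

```python
from math import factorial
from collections import Counter

# A tree is (k, l, branches): k a multi-index tuple, l the noise label
# (0 for none), branches a tuple of (edge_decoration, subtree) pairs.
def S(tree):
    k, l, branches = tree
    out = 1
    for ki in k:
        out *= factorial(ki)  # k! for the multi-index node decoration
    for (a, sub), beta in Counter(branches).items():
        out *= factorial(beta) * S(sub) ** beta  # beta_j! * S(tau_j)^beta_j
    return out

leaf = ((0,), 2, ())                          # zeta_2, with S = 1
tau = ((2,), 1, (("a", leaf), ("a", leaf)))   # X^2 zeta_1 I_a(zeta_2)^2
print(S(tau))  # 2! * 2! * 1^2 = 4
```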
A planted tree is a tree of the form $\CI_a(\sigma)$, for a decorated tree $\sigma$ and $a\in\mathfrak{T}^+\times\mathbb{N}^{d+1}$; we denote by $\CI(T)$ the set of planted trees. We define an inner product on the set of all decorated trees setting for all $\sigma, \tau \in\CT$
\begin{align*} \label{inner_product}
\langle \sigma , \tau \rangle := S(\tau)\,\textbf{\textsf{1}}_{\sigma=\tau}.
\end{align*}
We also set
\langle \sigma_1\otimes\sigma_2 , \tau_1\otimes\tau_2 \rangle := \langle \sigma_1 , \tau_1 \rangle\,\langle \sigma_2 , \tau_2 \rangle.
The linear span of decorated trees will be denoted by $T$. Note here that such trees are also useful to describe numerical schemes for dispersive equations with low regularity initial condition [13].
We now associate numbers to decorated trees. Fix a scaling $ \frak{s} \in \N^{d+1} $ and a map
|\cdot|_{\s} : \Lab \rightarrow \mathbb{R},
which is negative on the noise types $ \Lab_-$ and positive on the operator types $\Lab_+$. This map accounts for the regularity of the noises and the gain of regularity of the heat kernels $K_i$, encoded in Schauder-type estimates they satisfy. We extend the map $|\cdot|_{\s}$ to $\mathfrak{T}\times\N^{d+1} $ setting
|p|_{\s} := \sum_{i=1}^{d+1} \s_i p_i, \qquad \textrm{and}\qquad |(\frak{t},p)|_{\s} := |\frak{t}|_{\s} - |p|_{\s}, \qquad \textrm{for}\;p\in\N^{d+1}.
The degree of a decorated rooted tree $(\tau, \Labn,\Labe)$ is defined by
\begin{equation*}
\deg(\tau, \Labn, \Labe) := \sum_{v \in N_{\tau}} |\Labn(v)|_{\s} + \sum_{e \in E_{\tau}} \big( |\frak{t}(e)|_{\s} - |\frak{p}(e)|_{\s} \big).
\end{equation*}
(`Degree' is called `homogeneity' in Hairer's work [26].) We use the degree to introduce the space of `positive' decorated trees $T_+$. It is the linear span of trees of the form $X^k \prod_{i=1}^{n} \CI_{a_i}(\tau_i)$, where $\deg( \CI_{a_i}(\tau_i)) > 0$ and $k\in\N^{d+1}$. We also consider the linear space $ T_- $ spanned by the decorated trees with negative degree, and denote by $\mathbb{R}[T_-] $ the linear space spanned by forests of trees in $T_-$.
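As an illustration, the degree computation is a straightforward recursion over the tree. The encoding and the numerical regularities below are hypothetical (a noise of regularity $-3/2$ and a kernel gain of $2$, in the one-dimensional case), chosen only to show the bookkeeping.

```python
from fractions import Fraction

# A tree is (node_decoration, branches); each branch is ((type, p), subtree).
# The degree sums node decorations and |t(e)|_s - |p(e)|_s over edges.
def deg(tree, type_deg):
    node_dec, branches = tree
    total = Fraction(node_dec)
    for (t, p), sub in branches:
        total += type_deg[t] - p + deg(sub, type_deg)
    return total

type_deg = {"K": Fraction(2), "xi": Fraction(-3, 2)}  # kernel gain, noise regularity
noise_leaf = (0, [(("xi", 0), (0, []))])              # a noise-type edge to a leaf
tree = (0, [(("K", 1), noise_leaf)])                  # I_{(K,1)}(zeta)
print(deg(tree, type_deg))  # 2 - 1 - 3/2 = -1/2
```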
Given $k\in\N^{d+1}$ denote by $\uparrow_v^k$ the derivation on decorated trees that adds $k$ to the decoration at the node $v$. We introduce a family of pre-Lie products of grafting type setting for all decorated trees $\sigma, \tau\in T$, and $a\in\frak{L}\times\N^{d+1}$,
\begin{equation*}
\sigma \curvearrowright_a \tau := \sum_{v\in N_{\tau}}\sum_{m\in\N^{d+1}}{\Labn_v \choose m} \,\sigma \curvearrowright^v_{a-m}(\uparrow_v^{-m} \tau),
\end{equation*}
where $ \Labn_v $ is the decoration at the node $ v $, and $\curvearrowright^v_{a-m}$ grafts $ \sigma $ onto $\tau$ at the node $ v $ with an edge decorated by $a -m$. (For $a=(\frak{t},p)$, one writes $a-m$ for $(\frak{t},p-m)$.) This formula requires that $\sigma=\textbf{\textsf{1}}$ is the only argument accepted on the left of the grafting operation when $a\in\frak{T}^-\times\N^{d+1}$, since edges of noise type have no other arguments. The above sum is finite due to the binomial coefficient $ {\Labn_v \choose m} $, which is equal to zero if $m$ is greater than $ \Labn_v $, by convention. The pre-Lie products $\curvearrowright_a$ are non-commutative; they were first introduced in Bruned, Chandra, Chevyrev and Hairer's work [5]. We recall one universal result that we will use in the sequel; it was first established in Corollary 4.23 of [5]. It can be viewed as an extension of the universal result of Chapoton-Livernet [18] on pre-Lie algebras. (Such a result becomes immediate when one constructs $\curvearrowright_a$ as a deformation, as in Section 2.1 of [12]. See also Foissy's work [21] for the case with no deformation.)
The space $T$ is freely generated by the elements $\Big\{ X^k \zeta_l; \, 1\leq l\leq n_0, \ k \in \N^{d+1}\Big\}$ and the operations $\big\{\hspace{-0.08cm}\curvearrowright_a\,;\,1\leq a\leq k_0\big\}$.
We define a product
\curvearrowright\,: \CI(T)\times T\rightarrow T
setting for all $a\in\mathfrak{T}\times\N^{d+1}, \sigma, \tau\in T$, with the appropriate restriction on $\sigma$ if $a\in\mathfrak{T}^-\times\N^{d+1}$,
\CI_a(\sigma) \curvearrowright \tau := \sigma \curvearrowright_a \tau.
We extend this product to a product of planted trees, $\prod_{i=1}^n \CI_{a_i}(\sigma_i ) \,\curvearrowright \, \tau$, by grafting each tree $ \sigma_i $ on $ \tau $ along the grafting operator $\curvearrowright_{a_i} $, independently of the others – we allow here one of the $a_i$ to be an element of $\frak{T}^-\times\N^{d+1}$, so the product contains in that case (only) one noise. (The trees $\sigma_i$ are only grafted on $\tau$, not on one another.) Following Bruned and Manchon's construction in [12], for $ B \subset N_{\tau} $, consider the derivation map $ \uparrow^{k}_{B}$ defined as
\uparrow^k_B \tau = \sum_{\sum_{v \in B} k_v = k} \; \prod_{v \in B} \uparrow_v^{k_v} \tau,
and set, as a shorthand notation for later use,
\uparrow^k \tau := \uparrow^k_{N_\tau} \tau.
We define the product
\star : T\times T\rightarrow T,
for all $\sigma = X^k \prod_{i} \CI_{a_i}(\sigma_i) \in T$ and $\tau \in T$, by the formula
\sigma \star \tau := \uparrow^k_{N_\tau} \Big( \prod_i \CI_{a_i}(\sigma_i) \curvearrowright \tau \Big).
One has for instance
X^k\,\zeta_l\prod_{i=1}^n\CI_{a_i}(\tau_i) = \left(X^k\prod_{i=1}^n\CI_{a_i}(\tau_i)\right)\star \zeta_l.
It has been proved in Section 3.3 of [12] that this product is associative; this can be obtained by applying the Guin-Oudom procedure [23, 24] to a well-chosen pre-Lie product. When $ \sigma \in T_+ $ and $\tau, \mu\in T$, one has from Theorem 4.2 in [12]
\langle \sigma \star \tau , \mu \rangle = \langle \tau \otimes \sigma , \Delta \mu \rangle,
where
\Delta : T \rightarrow T \otimes T_+
is a co-action first introduced in Hairer' seminal work [26] – see also [10] and [9], where it plays a prominent role. So the restriction to $T^+\times T$ of the product $\star$ is the $\langle\cdot,\cdot\rangle$-dual of the splitting map $\Delta$. (Note here that our product $\star$ corresponds to the $\star_2$ product in [12].)
With a view on the system (<ref>) of singular PDEs, assume we are given a family
(F_k^l)_{1\leq k\leq k_0, 1\leq l\leq n_0}
of functions of abstract variables $Z_a$ indexed by $a\in\frak{L}^+\times\N^{d+1}$. These variables account for the fact that the nonlinearities in (<ref>) may depend on $u$ and its derivatives – only $u$ and $\nabla u$ for (<ref>), but we could also consider systems where the differential operators $L_i$ have order higher than $2$; in which case the nonlinearities could depend on $u$ and all its $k$-th derivatives, for $k$ up to the order of $L_i$ minus $1$. The different components of $u$ are indexed by $1\leq i\leq k_0$, and its derivatives by $\N^{d+1}$ – with $d$ space dimensions and one time dimension. We define in the usual way partial derivatives $D_a$ in the variable $Z_a $, and set for all $k\in\N^{d+1}$
\partial^k := \sum_a Z_{a+k}\,D_a.
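As an elementary illustration of this definition (not taken from the text above, and reading the formula at face value for a multi-index $k$ with $|k|=1$), one can compute the action of $\partial^k$ on a monomial in a single variable $Z_a$:
\begin{equation*}
\partial^k\big(Z_a^2\big) = \sum_b Z_{b+k}\, D_b\big(Z_a^2\big) = 2\, Z_{a+k}\, Z_a,
\end{equation*}
in accordance with the interpretation of $Z_a$ as a placeholder for $\partial^a u$ and of $\partial^k$ as a total spacetime derivative acting through the chain rule.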
We define inductively a family $F = (F_i)_{1\leq i\leq k_0}$ of functions of the variables $Z_a$, indexed by $T$, setting for $ \tau = X^{k} \zeta_{l}\prod_{j=1}^{n} \CI_{a_j}(\tau_j) $, with $a_j = (\Labhom_{l_j},k_j)$, for all $1\leq i\leq k_0$,
\begin{equation} \label{def_upsilon}
\begin{split}
F_i(\zeta_l) := F^{l}_i, \qquad F_i(\tau) := \partial^k D_{a_1} ... D_{a_n} F_i (\zeta_l)\,\prod_{j=1}^n F_{l_j}(\tau_j).
\end{split} \end{equation}
The next statement is a morphism property of $F$ for the product $\star$, as a function on $T$.
For every $1\leq i\leq k_0$, for every $ \sigma\in T$, and $\tau = X^{k} \prod_{j=1}^{n} \CI_{a_j}(\tau_j) \in T$ with $ a_j = (\Labhom_{l_j}, k_j) $, one has
F_i\Big( \Big( X^k \prod_{j=1}^n \CI_{a_j}(\tau_j) \Big) \star \sigma \Big) = \partial^k D_{a_1} \cdots D_{a_n} F_i(\sigma) \, \prod_{j=1}^n F_{l_j}(\tau_j).
We proceed by induction on $ \sigma $. The case $ \sigma = \zeta_l $ is part of the definition (<ref>). For $ \sigma = X^{m} \zeta_l $, we have
X^k \prod_{j=1}^n \CI_{a_j}(\tau_j) \star \sigma = \sum_{\ell\in(\N^{d+1})^n} {m \choose \ell} \, X^{k+m-\sum_j \ell_j} \, \zeta_l \prod_{j=1}^n \CI_{a_j-\ell_j}(\tau_j).
Using the fact that
\sum_{\ell=(\ell_1,\dots,\ell_n) \in (\N^{d+1})^n} {m \choose \ell} \, \partial^{m-\sum_j \ell_j} \prod_{j=1}^n D_{a_j-\ell_j} = \prod_{j=1}^n D_{a_j} \, \partial^m,
one has
\begin{align*}
F_i\Big( X^k \prod_{j=1}^n \CI_{a_j}(\tau_j) \star \sigma \Big) &= \prod_{j=1}^n F_{l_j}(\tau_j) \sum_{\ell\in(\N^{d+1})^n} {m \choose \ell} \, \partial^{k+m-\sum_j \ell_j} \prod_{j'} D_{a_{j'}-\ell_{j'}} F_i(\zeta_l) \\
&= \prod_{j=1}^n F_{l_j}(\tau_j) \, \partial^k \prod_{j'} D_{a_{j'}} \, \partial^m F_i(\zeta_l) \\
&= \prod_{j=1}^n F_{l_j}(\tau_j) \, \partial^k \prod_{j'} D_{a_{j'}} F_i(X^m \zeta_l).
\end{align*}
Then, we assume that $ \sigma = \CI_{b_1}(\sigma_1)\curvearrowright\sigma_2 = \CI_{b_1}(\sigma_1) \star \sigma_2 $ with $ b_1 = (\Labhom_b,k_b) $. One has from the associativity of $\star$
\begin{align*}
\tau \star \sigma &= \big( \tau \star \CI_{b_1}(\sigma_1) \big) \star \sigma_2 \\
&= \bigg\{ \sum_{I \subset \{1,\dots,n\}} \sum_{k_1+k_2=k} X^{k_1} \prod_{j\in I} \CI_{a_j}(\tau_j) \; \CI_{b_1}\Big( X^{k_2} \prod_{j'\in\{1,\dots,n\}\setminus I} \CI_{a_{j'}}(\tau_{j'}) \star \sigma_1 \Big) \bigg\} \star \sigma_2.
\end{align*}
We apply the induction hypothesis to get
F_i(\tau \star \sigma) = \sum_{I \subset \{1,\dots,n\}} \sum_{k_1+k_2=k} \prod_{j=1}^n F_{l_j}(\tau_j) \; \partial^{k_2} \prod_{j\in\{1,\dots,n\}\setminus I} D_{a_j} F_b(\sigma_1) \; \partial^{k_1} \prod_{j'\in I} D_{a_{j'}} D_{b_1} F_i(\sigma_2).
Using the fact that $ \partial^{k} $ and $ D_{a_i} $ satisfy the Leibniz rule one then gets
\begin{align*}
F_i(\tau \star \sigma) &= \prod_{j=1}^n F_{l_j}(\tau_j) \, \partial^k \prod_{j'=1}^n D_{a_{j'}} \Big( F_b(\sigma_1) \, D_{b_1} F_i(\sigma_2) \Big) \\
&= \prod_{j=1}^n F_{l_j}(\tau_j) \, \partial^k \prod_{j'=1}^n D_{a_{j'}} F_i\big( \CI_{b_1}(\sigma_1) \curvearrowright \sigma_2 \big) \\
&= \prod_{j=1}^n F_{l_j}(\tau_j) \, \partial^k \prod_{j'=1}^n D_{a_{j'}} F_i(\sigma),
\end{align*}
which allows us to conclude the proof.
Identity (<ref>) was first noticed in the proof of Proposition 30 in (the first version of) Bailleul and Hoshino's work [9]. It led the authors to a simple proof of the fact that for $a=(\frak{t}_j,p_a)$ and all $i$
F_i\big( \CI_a(\sigma) \curvearrowright \tau \big) = F_j(\sigma) \, D_a F_i(\tau);
a special case of (<ref>). This is a huge simplification in comparison to the original proof given in Bruned, Chandra, Chevyrev and Hairer's work [5], where the authors had to go through an extended space of rooted trees in Section $4$ therein. Identity (<ref>) was observed in the simpler context of rough differential equations, in Lemma 3.4 of Bonnefoi, Chandra, Moinat and Weber's work [2]. The $\star$ product happens to be the adjoint of the Butcher-Connes-Kreimer coproduct in that setting.
\section{Preparation maps and multi-pre-Lie morphisms}
\subsection{Definition and properties}
For $\tau\in T$ denote by $\vert\tau\vert$ the number of noise symbols that appear in $\tau$. Recall from [3] the following definition.
A \emph{preparation map} is a linear map $R : T\rightarrow T$ such that
$\bullet$ for each $ \tau \in T $ there exist finitely many $\tau_i \in T$ and constants $\lambda_i$ such that
R\tau = \tau + \sum_i \lambda_i \, \tau_i, \qquad \textrm{with} \quad \deg(\tau_i) \geq \deg(\tau) \;\; \textrm{and} \;\; |\tau_i| < |\tau|;
$\bullet$ one has
(R \otimes \mathrm{Id}) \, \Delta = \Delta R.
Preparation maps are the building bricks from which renormalization maps can be constructed. A typical example of a preparation map fixes any tree in $\CI(T)$ and only acts non-trivially on trees with multiple edges at the root. This accounts for the fact that such trees represent products of analytical quantities, some of which need to be renormalized to be given sense. The `deformed product' provided by $R(\tau)$ for such trees $\tau$ does precisely that. Preparation maps were named for that reason `local product renormalization maps' in Chandra, Moinat and Weber's work [20] for establishing a priori bounds for the $ \phi^{4}_{4-\delta}$ models in the full subcritical regime, as well as in Bruned's work [4] on the renormalization of branched rough paths.
A preparation map is in particular a perturbation of the identity by elements that are more `regular' ($\deg(\tau_i) \geq \deg(\tau)$) and defined with strictly less noises ($|\tau_i| < |\tau|$). Note that the linear map $R-\textrm{Id}$ is nilpotent as a consequence of condition (<ref>). Identity (<ref>) encodes the fact that the recentering operator and the preparation map commute. The next statement is a direct consequence of the duality relation (<ref>) between the product $\star$ and the splitting map $\Delta$.
Identity (<ref>) is equivalent to having
R^*(\sigma \star \tau) = \sigma \star (R^* \tau)
for all $\sigma \in T^+ $ and $\tau \in T$.
We use the duality relation (<ref>) between $\Delta$ and $\star$ to write for $\mu,\nu\in T$ and $\sigma\in T^+$
\langle \mu \otimes \sigma , \Delta R \nu \rangle = \langle \sigma \star \mu , R \nu \rangle = \langle R^* \left( \sigma \star \mu \right), \nu \rangle.
The result is thus a consequence of the identity
\big\langle \mu \otimes \sigma , \left( R \otimes \textrm{Id} \right) \Delta \nu \big\rangle = \langle R^{*} \mu \otimes \sigma , \Delta\nu \rangle = \langle \sigma \star \left( R^*\mu \right) , \nu \rangle.
A \emph{strong preparation map} is a preparation map satisfying identity (<ref>) for all $\sigma\in T$ and $\tau\in T$ – and not only for $\sigma\in T^+$. One says that $R^*$ is a right derivation for the product $\star$.
Taking specific $\sigma$'s yields special identities. For $ \sigma = \CI_{a}(\sigma_1) $ identity (<ref>) reads
R^*\big( \CI_a(\sigma_1) \curvearrowright \tau \big) = \CI_a(\sigma_1) \curvearrowright (R^*\tau),
that is
R^*(\sigma_1 \curvearrowright_a \tau) = \sigma_1 \curvearrowright_a (R^*\tau).
Another interesting case is when $ \sigma $ is equal to the empty forest $\textbf{\textsf{1}}$ – single node trees are identified to the empty forest when using the operator $\curvearrowright$. In that case, one has for all $k\in\N^{d+1}$
\begin{equation} \label{condition_22}
R^*(\uparrow^k\tau) = \,\uparrow^k(R^* \tau).
\end{equation}
Take care that $\uparrow^k$ in the left hand side is $\uparrow^k_{N_\tau}$, while $R^*\tau=\sum_i \tau_i$ and one has on the right hand side $\uparrow^k(R^* \tau) = \sum_i \uparrow^k_{N_{\tau_i}}(\tau_i)$. Note that the universal property of $T$ stated in Proposition <ref> implies that identities (<ref>) and (<ref>) characterize the map $R^*$ once its values on the generators $X^k\zeta_l$ are given.
Denote by $\mcB^-$ the canonical basis of $T^-$ and recall from [10] or [9] that $\mathbb{R}[T^-]$ is equipped with an (Hopf) algebra structure. We follow [3] and define for any character $\ell$ of $\mathbb{R}[T^-]$ and all $\tau\in T$
R^*_\ell(\tau) := \sum_{\sigma\in\mcB^-} \frac{\ell(\sigma)}{S(\sigma)} \, (\tau \star \sigma).
(This definition corresponds to the dual of its usual definition – see Corollary 4.5 in [3].) The BPHZ renormalization map from [10, 16] corresponds to a particular choice of character $\ell$ on $T^-$.
The maps $ R^{*}_{\ell}$ are strong preparation maps.
From definition (<ref>), one has for any $\mu,\tau\in T$
R^*_\ell(\mu \star \tau) = \sum_{\sigma\in\mcB^-} \frac{\ell(\sigma)}{S(\sigma)} \, (\mu \star \tau) \star \sigma.
By using the associativity of $\star$ one gets
R^*_\ell(\mu \star \tau) = \sum_{\sigma\in\mcB^-} \frac{\ell(\sigma)}{S(\sigma)} \, \mu \star (\tau \star \sigma) = \mu \star R^*_\ell(\tau).
The condition on the degree $\deg(\cdot) $ in (<ref>) comes from the fact that we are summing over decorated trees with negative degree in the definition of
$ R^{*}_{\ell} $. For $ |\cdot| $ which measures the size of the trees, this is a consequence of the definition of the $\star$ products, which breaks any decorated tree into two parts of smaller size.
It is not clear presently whether preparation maps are actually always strong. This holds however in the special case of the Butcher-Connes-Kreimer Hopf algebra, involved in the study of branched rough paths. Although elementary, the next statement will play a crucial role in the proof of our main result, Theorem <ref>, in the next section.
For every $1\leq i\leq k_0$, for every $\tau = X^{k} \zeta_{l}\prod_{j=1}^n \CI_{a_j}(\tau_j)$ with $a_j=(\frak{t}_{l_j},p_j)\in\frak{T}^+\times\N^{d+1}$, one has
F_i(R^*\tau) = \partial^k D_{a_1} \cdots D_{a_n} F_i(R^*\zeta_l) \, \prod_{j=1}^n F_{l_j}(\tau_j).
Writing $\tau = \left( X^{k} \prod_{j=1}^{n} \CI_{a_j}(\tau_j) \right) \star \zeta_{l}$, and using the right derivation property (<ref>), one gets
R^*\Big( \Big( X^k \prod_{j=1}^n \CI_{a_j}(\tau_j) \Big) \star \zeta_l \Big) = \Big( X^k \prod_{j=1}^n \CI_{a_j}(\tau_j) \Big) \star (R^*\zeta_l),
so identity (<ref>) in Proposition <ref> yields
F_i\Big( X^k \prod_{j=1}^n \CI_{a_j}(\tau_j) \star (R^*\zeta_l) \Big) = \partial^k D_{a_1} \cdots D_{a_n} F_i(R^*\zeta_l) \, \prod_{j=1}^n F_{l_j}(\tau_j).
A \emph{good multi-pre-Lie morphism} is a map $A : T\rightarrow T$ such that one has for all $\sigma, \tau\in T$ and $k\in\N^{d+1}$, and all $a\in\frak{L}^+\times\N^{d+1}$,
A(\sigma\curvearrowright_a\tau) = (A\sigma)\curvearrowright_a(A\tau), \quad \textrm{and}\quad A\hspace{-0.05cm}\uparrow^k\,=\,\uparrow^k\hspace{-0.1cm}A.
The fundamental role of (good) multi-pre-Lie morphisms in renormalization matters was unveiled first in a rough paths setting in [7], and then in [5], in a regularity structures setting. Let $R$ stand for a preparation map. This is the key feature of renormalization maps that allows one to obtain the renormalized equation. We associate to a strong preparation map $R$ a linear map $M^{\circ} : T\rightarrow T$, defined by the requirement that $M^{\circ} \one = \one$, that $M^{\circ}$ is multiplicative, by the data of the $M^{\circ}\zeta_l$, for $1\leq l\leq n_0$, and the induction relation
M^\circ\big( \CI_{(\frak{t},k)}\tau \big) = \CI_{(\frak{t},k)}\big( M^\circ(R\tau) \big)
for all $\tau\in T$ and $(\frak{t},k)\in\mathcal{L}^+\times\N^{d+1}$. Define, as in Section 3.1 of [3],
\begin{equation} \label{EqConstructionRecipe}
M := M^\circ R.
\end{equation}
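Note that the induction relation defining $M^\circ$ on planted trees can be rephrased directly in terms of $M$: one has
\begin{equation*}
M^\circ\big(\CI_{(\frak{t},k)}\tau\big) = \CI_{(\frak{t},k)}\big(M^\circ(R\tau)\big) = \CI_{(\frak{t},k)}(M\tau),
\end{equation*}
an identity used below in the form $M^\circ\CI_a(\sigma)=\CI_a(M\sigma)$.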
The map $M^*$ is a good multi-pre-Lie morphism satisfying $M^*(\zeta_l) = R^*(\zeta_l)$, for all $1\leq l\leq n_0$.
$\bullet$ We start by showing that the map $(M^\circ)^*$ is also multiplicative. One has
\begin{align*}
\big\langle (M^\circ)^* \CI_a(\sigma), \CI_a(\tau) \big\rangle &= \big\langle \CI_a(\sigma), M^\circ \CI_a(\tau) \big\rangle = \big\langle \CI_a(\sigma), \CI_a(M\tau) \big\rangle \\
&= \langle \sigma, M\tau \rangle = \langle M^*\sigma, \tau \rangle = \big\langle \CI_a(M^*\sigma), \CI_a(\tau) \big\rangle,
\end{align*}
which implies $(M^{\circ})^{*} \CI_a(\sigma) = \CI_a(M^{*}\sigma)$. Denote by $\Delta_d$ the deconcatenation coproduct
\Delta_d \, \CI_a(\tau) = \CI_a(\tau) \otimes \one + \one \otimes \CI_a(\tau), \qquad \Delta_d X^k = X^k \otimes \one + \one \otimes X^k,
extended multiplicatively not to the tree product but to the product between a decorated tree and a decorated tree with no polynomial decorations at the root. It follows then from the multiplicativity of $M^\circ$ and the identity $M^\circ \CI_a(\sigma) = \CI_a(M\sigma)$, that
\big\langle (M^\circ)^* \big( \CI_a(\sigma)\,\mu \big), \tau \big\rangle = \big\langle \CI_a(\sigma)\,\mu, M^\circ\tau \big\rangle = \big\langle \CI_a(\sigma) \otimes \mu, \Delta_d M^\circ\tau \big\rangle,
with
\Delta_d M^\circ = (M^\circ \otimes M^\circ) \, \Delta_d.
Then, we get
\begin{align*}
\big\langle \CI_a(\sigma) \otimes \mu, \Delta_d M^\circ \tau \big\rangle &= \big\langle \CI_a(\sigma) \otimes \mu, (M^\circ \otimes M^\circ) \Delta_d \tau \big\rangle \\
&= \big\langle (M^\circ)^* \CI_a(\sigma) \otimes (M^\circ)^* \mu, \Delta_d \tau \big\rangle = \big\langle (M^\circ)^* \CI_a(\sigma)\,(M^\circ)^* \mu, \tau \big\rangle,
\end{align*}
which concludes the proof of the multiplicativity.
$\bullet$ It will be useful for our purpose to decompose the grafting map $\curvearrowright_a$ into the sum of a grafting map at the root and a grafting map outside the root
\curvearrowright_a = \curvearrowright_a^\textrm{root} + \curvearrowright_a^\textrm{non-root},
with, for $\tau=X^k\prod_{j=1}^n\CI_{a_i}(\tau_j)$ and $a_i\in\frak{T}\times\N^{d+1}$,
\sigma\curvearrowright_a^\textrm{root}\tau := \sum_{m\in\N^{d+1}} {k\choose m}\,X^{k-m}\,\CI_{a-m}(\sigma)\prod_{j=1}^n\CI_{a_j}(\tau_j)
\sigma\curvearrowright_a^\textrm{non-root}\tau := X^k \sum_{i=1}^n \CI_{a_i}(\sigma\curvearrowright_a\tau_i)\prod_{j\neq i}\CI_{a_j}(\tau_j).
We proceed by induction on the size of the trees appearing in the product. In the induction hypothesis, we include the two following identities for $ \sigma, \tau \in \CT $:
M^*(\sigma \curvearrowright_a \tau) = (M^*\sigma) \curvearrowright_a (M^*\tau),
(M^\circ)^*(\sigma \curvearrowright_a \tau) = (M^*\sigma) \curvearrowright_a \big( (M^\circ)^*\tau \big).
Let $\sigma, \tau \in \CT$; one has
\begin{align*}
M^*(\sigma \curvearrowright_a \tau) &= R^* (M^\circ)^* (\sigma \curvearrowright_a \tau) \\
&= R^*\big( M^*\sigma \curvearrowright_a (M^\circ)^*\tau \big) \\
&= M^*\sigma \curvearrowright_a R^* (M^\circ)^*\tau = M^*\sigma \curvearrowright_a M^*\tau,
\end{align*}
where we have applied the induction hypothesis given by (<ref>) on $ (M^{\circ})^{*} $ and the right-morphism property of $ R^{*} $. We consider $ \tau = X^k \bar \tau $ where $ \bar \tau = \prod_{i=1}^n \CI_{a_i}(\tau_i) $. The multiplicativity property of $(M^{\circ})^*$ and the fact that $(M^{\circ})^*\CI_a(\sigma) = \CI_a(M^*\sigma)$ yield
\begin{align*}
(M^\circ)^*\big( \sigma \curvearrowright_a^\textrm{root} \tau \big) &= (M^\circ)^* \sum_{\ell\in\N^{d+1}} {k \choose \ell}\, X^{k-\ell}\, \CI_{a-\ell}(\sigma)\, \bar\tau \\
&= \sum_{\ell\in\N^{d+1}} {k \choose \ell}\, X^{k-\ell}\, \CI_{a-\ell}(M^*\sigma)\, (M^\circ)^* \bar\tau \\
&= M^*\sigma \curvearrowright_a^\textrm{root} (M^\circ)^* \tau.
\end{align*}
For the grafting outside the root, we use the induction hypothesis. One has:
\begin{align*}
(M^\circ)^*\big( \sigma \curvearrowright_a^\textrm{non-root} \tau \big) &= (M^\circ)^*\, X^k \sum_{i=1}^n \CI_{a_i}\big( \sigma \curvearrowright_a \tau_i \big) \prod_{j\neq i} \CI_{a_j}(\tau_j) \\
&= X^k \sum_{i=1}^n \CI_{a_i}\big( M^*(\sigma \curvearrowright_a \tau_i) \big) \prod_{j\neq i} \CI_{a_j}(M^*\tau_j) \\
&= X^k \sum_{i=1}^n \CI_{a_i}\big( M^*\sigma \curvearrowright_a M^*\tau_i \big) \prod_{j\neq i} \CI_{a_j}(M^*\tau_j) \\
&= M^*\sigma \curvearrowright_a^\textrm{non-root} (M^\circ)^* \tau,
\end{align*}
where we have used the induction hypothesis (<ref>) on $ \sigma $ and $ \tau_i $. The proof that $M^*\uparrow^k = \uparrow^{k} M^*$ works the same, by decomposing the insertion of polynomial decorations into insertion at the root and insertion outside the root.
Recall that the setting of branched rough paths involves the Butcher-Connes-Kreimer Hopf algebra – basics on branched rough paths can be found in Gubinelli's original article [25], Hairer and Kelly's work [27], or [1], for instance.
In the Butcher-Connes-Kreimer setting, (good) multi-pre-Lie morphisms are in bijection with preparation maps.
We already noticed that all preparation maps are strong in the Butcher-Connes-Kreimer setting. There are no polynomial decorations in this setting, and (automatically strong) preparation maps $R$ define (good) multi-pre-Lie morphisms $M$, with the map $R\mapsto M$ being injective because $ M^{\circ}$ is invertible. Only multi-pre-Lie morphisms make sense in the Butcher-Connes-Kreimer setting. Given a multi-pre-Lie morphism $M$, we essentially have no choice for $R$; it needs to satisfy
R^*(\zeta_l) := M^*(\zeta_l), \quad R^*(\sigma\curvearrowright_a\tau) = \sigma\curvearrowright_a( R^{*} \tau).
The right-morphism property of $R^*$ gives back the `right commutation' relation (<ref>) with the coproduct. Property (<ref>) of the map $R$ just defined can then be read off on the identity
\big\langle \sigma\curvearrowright_a\tau\,,\,R\mu\big\rangle = \big\langle R^*(\sigma\curvearrowright_a\tau)\,,\,\mu\big\rangle = \sum \langle R^*\tau,\mu_1\rangle\,\langle\sigma,\mu_2\rangle,
using Sweedler's notation $\Delta\mu=\sum\mu_1\otimes\mu_2$. The map $M\mapsto R$ is injective, so the conclusion follows.
Keep working in the Butcher-Connes-Kreimer setting, assuming an (automatically good) multi-pre-Lie morphism $M$ is given. Note that if one defines $M^\circ$ by the induction relations
(M^\circ)^*\zeta_l = \zeta_l, \quad (M^\circ)^*(\sigma\curvearrowright_a\tau) = M^*\sigma\curvearrowright_a (M^\circ)^*\tau,
then one has indeed $M=M^\circ R$. One can check by induction that $ (M^{\circ})^{*} $ is multiplicative as a consequence of the fact that $(M^\circ)^* \one = \one $ and $ (M^\circ)^* \CI_{a}(\tau) = \CI_{a}(M^* \tau) $. One uses the induction hypothesis to see that
(M^\circ)^*\big( \sigma \curvearrowright_a^\textrm{non-root} \tau \big) = M^*\sigma \curvearrowright_a^\textrm{non-root} (M^\circ)^*\tau,
which, combined with the multiplicativity, gives
(M^\circ)^*( \sigma \curvearrowright_a \tau ) = M^*\sigma \curvearrowright_a (M^\circ)^*\tau.
So Corollary <ref> entails that all multi-pre-Lie morphisms $M$ – hence all renormalization maps, are obtained in that setting from a preparation map $R$ using the construction (<ref>) and (<ref>). Such a statement is open for regularity structures.
\subsection{Models associated to preparation maps}
Let kernels $(K_i)_{1\leq i\leq \ell_0}$ with a polynomial singularity at $0$, and smooth noises $(\xi_l)_{1\leq l\leq n_0}$ on the state space be given. Following [3], one can associate to a preparation map $R$ an admissible model $\sf M$ on $T$. It is defined from a side family $\big((\Pi_x^{M^\circ}\tau)(\cdot)\big)_{x,\tau}$ of smooth functions on the state space satisfying
\big(\Pi_x^{M^{\circ}} \one\big)(y) = 1 , \quad \big(\Pi_{x}^{M^{\circ}} \zeta_l\big)(y) = \xi_l(y) , \quad \big(\Pi_{x}^{M^{\circ}} X_i\big)(y) = y_i-x_i,
the multiplicativity condition
\big(\Pi_{x}^{M^{\circ}}(\sigma\tau)\big)(y) = \big(\Pi_{x}^{M^{\circ}}\sigma\big)(y) \, \big(\Pi_{x}^{M^{\circ}}\tau\big)(y),
and the condition
\begin{equation} \label{EqConditionAlmostAdmissibility} \begin{aligned}
\Big(\Pi_{x}^{M^{\circ}} \CI_{a} (\tau)\Big)(y) = \Big( D^k K_i* \Pi^{M^\circ}_{x}(R\tau) \Big)(y) - \sum_{|\ell|_{\s} \leq \deg( \CI_{a} (\tau) )} \frac{(y-x)^{\ell}}{\ell!} \Big(D^{k + \ell} K_i* \Pi_{x}^{M^\circ}(R\tau)\Big)(x),
\end{aligned} \end{equation}
for $a=(\frak{t}_i,k)$. Define for all $x$ and $\tau$ a smooth function on the state space
\big(\Pi^{(R)}_x\tau\big)(\cdot) := \Big(\Pi^{M^\circ}_x(R\tau)\Big)(\cdot).
Bruned gave in Proposition 3.16 of [3] an explicit construction of an admissible model $\sf M=(g,\Pi)$ on $T$, with values in the space of smooth functions, such that the operators $\Pi^{(R)}_x$ are indeed associated with $\sf M$, in the sense that one has for all $\tau\in T$ and $x$
\Pi^{(R)}_x\tau = {\sf \Pi}_x^{\sf g}\tau.
Since the model $\sf M$ takes values in the space of continuous functions, the reconstruction operator $\sf R^M$ associated with it is given by the explicit formula
\big({\sf R^M v}\big)(x) = \big({\sf \Pi}_x^{\sf g}{\sf v}(x)\big)(x)
for any modelled distribution $\sf v$ with positive regularity, so
\big({\sf R^M v}\big)(x) = \Big(\Pi_x^{M^\circ}R{\sf v}(x)\Big)(x).
We emphasize that $M^\circ$ depends on $R$, so the operators $\Pi_x^{M^\circ}$ are different from the naive interpretation operators one obtains when $R=\textrm{Id}$. For preparation maps $R$ for which
\begin{equation} \label{EqConditionR}
R(\CI_a\tau) = \CI_a\tau
\end{equation}
for all $\tau\in T$ and $a\in\frak{L}^+\times\N^{d+1}$, one has
{\sf \Pi}_x^{\sf g}(\CI_a\tau) = \Pi_x^{M^\circ}(\CI_a\tau).
(Condition (<ref>) is consistent with the idea that the preparation/`local product' map $R$ `renormalizes' only ill-defined products – no product is involved at the root of the tree $\CI_a\tau$.) The point here is that $\Pi_x^{M^\circ}$ is multiplicative while ${\sf \Pi}_x^{\sf g}$ is not. Denote by $T_X\subset T$ the linear space spanned by the polynomials in $T$. Modelled distributions $\sf v$ with values in the subspace $\CI(T)\oplus T_X$ of $T$ satisfy in that case the identity
\begin{equation} \label{EqRelationPiMPiMCirc}
\big({\sf R^M v}\big)(x) = \Big(\Pi_x^{M^\circ}{\sf v}(x)\Big)(x).
\end{equation}
It follows further from relation (<ref>) that the model $\sf M$ is admissible if condition (<ref>) holds.
\section{A short proof for the renormalised equation}
This section contains the statement and proof of our main result, Theorem <ref>, describing the autonomous dynamics satisfied by ${\sf R^Mu}$ when $\sf M$ is the model constructed from a preparation map $R$ and $\sf u$ is the solution to the lift of system (<ref>) to its associated regularity structure
\begin{equation} \label{EqLiftedSystem}
{\sf u}_i = \mathcal{K}^{\sf M}_i\Big(\mathcal{Q}_{\gamma-2} \big({\sf F}_i({\sf u}, D{\sf u})\zeta\big)\Big) + \mathcal{P}_\gamma u_i(0), \qquad (1\leq i\leq k_0).
\end{equation}
The operator $\mathcal{P}_\gamma$ stands here for the projection on the subspace of elements of degree less than $\gamma$ of the natural lift in the polynomial regularity structure $T_X$ of the map $x=(t,x')\mapsto \big(P_tu(0)\big)(x')$. (Recall $x$ stands for a generic spacetime point.) The notation $\mathcal{Q}_{\gamma-2}$ stands here for the natural projection from $T$ to the linear subspace $T_{<\gamma-2}$ of elements of $T$ of degree less than $\gamma-2$. The $\sf M$-dependent map $\mathcal{K}^{\sf M}_i$ is the regularity structure lift of the operator $K_i=(\partial_t-L_i)^{-1}$; it sends continuously the space $\mathcal{D}^{\gamma-2, \eta}(T,\sf g)$ into $\mathcal{D}^{\gamma, \eta'}(T,\sf g)$. (The exponent $\beta$ in the spaces $\mathcal{D}^{\alpha, \beta}(T,\sf g)$ is related to the behaviour of the function near time $0^+$. We refer the reader to [10, 5] or [9] for a full account.) From the definition of the operators $\mathcal{K}^{\sf M}_i$, for a solution ${\sf u}=({\sf u}_1,\dots,{\sf u}_{k_0})$ of system (<ref>), each map ${\sf u}_i$ takes values in $\CI_{(\frak{t}_i,0)}(T)\oplus T_X$. Write
{\sf u}_i =: \sum {\sf u}_{i,\tau}\tau,
for a sum over trees $\tau$ in the canonical basis of $\CI_{(\frak{t}_i,0)}(T)\oplus T_X$ – monomials are seen as trees with just one vertex here. We recall here from Section 4.2 of [26] the definition of the lift ${\sf F}_i$ of the smooth enough function $F_i$. One has for any ${\sf a} =: a_{\bf 1}{\bf 1} + {\sf a}'\in T$, with $\langle{\sf a}',{\bf 1}\rangle=0$,
\begin{equation} \label{EqDefnRSLiftFunction}
{\sf F}_i({\sf a}) = \sum_k\frac{D^kF(a_{\bf 1})}{k!}\,({\sf a}')^k.
\end{equation}
One of the main results of [5] states that for all $\tau\in T$ in the canonical basis, with degree less than $\gamma-2$,
\begin{equation} \label{EqCoherence}
{\sf u}_{i,\tau} = \frac{{\sf F}_i(\tau)({\sf u}, D{\sf u})}{S(\tau)}.
\end{equation}
(Note that ${\sf F}_i(\tau)=0$ for all trees $\tau\in T$ that are not generated by the rule associated with the non-linearities $F_i$ in (<ref>).) Let $\xi=(\xi_1,\dots,\xi_{n_0})$ stand for a smooth function.
Let $R$ be a strong preparation map such that
R\tau=\tau, \qquad\textrm{for} \quad \tau\in\big(\CI(T)\oplus T_X\big).
Let $\sf M$ stand for its associated admissible model. Then $u:={\sf R^Mu}$ is a solution of the renormalized system
\begin{equation} \label{EqRenormalizedSystem}
(\partial_t - L_i) u_i = F_i(u,\nabla u)\,\xi + \sum_{l=1}^{n_0} F_i\Big(\big(R^* - \textrm{\emph{Id}}\big) \zeta_l\Big)(u,\nabla u)\,\xi_l, \qquad (1\leq i\leq k_0).
\end{equation}
As we are working with an admissible model we have
(\partial_t-L_i)u_i = {\sf R^M}\Big(\mathcal{Q}_{\gamma-2}\big({\sf F}_i({\sf u}, D{\sf u})\zeta\big)\Big) =: {\sf R^M}({\sf v}_i),
with
{\sf v}_i = \sum_{\textrm{deg}(\tau)<\gamma-2} \frac{{\sf F}_i(\tau)({\sf u}, D{\sf u})}{S(\tau)} \,\tau,
for a sum over the canonical basis of $T$, from the coherence condition (<ref>). The function ${\sf v}_i$ is a modelled distribution of regularity $\gamma$. One has by construction
\big({\sf R^M}{\sf v}_i\big)(x) = \Big(\Pi_x^{M^\circ}R{\sf v}_i(x)\Big)(x),\qquad \textrm{and}\qquad \langle R{\sf v}_i,\tau\rangle = \langle {\sf v}_i\,,\,R^*\tau\rangle = F_i(R^*\tau),
F_i(R^*\tau) = \partial^kD_{a_1}\cdots D_{a_n} F_i(R^*\zeta_l) \prod_{j=1}^nF_{l_j}(\tau_j)
when $\tau=X^k\zeta_l\prod_{j=1}^n\CI_{a_j}(\tau_j)$ and $a_j=(\frak{t}_{n_j},k_j)$, from Proposition <ref>. One can thus rewrite the equality
R{\sf v}_i = \sum \frac{F_i(R^*\tau)}{S(\tau)}\,\tau
under the form
\begin{equation*}
R{\sf v}_i = \sum_{l,n} \sum_{a_1,...,a_n} \sum_{k}\frac{k! \prod_{j=1}^n S(\tau_j)}{S\big(X^k\zeta_l\prod_{j=1}^n\CI_{a_j}(\tau_j)\big)} \, \frac{X^k}{k!} \prod_{j=1}^n \sum_{\tau_j\in \CT} \frac{F_{l_j}(\tau_j)}{S(\tau_j)}\, \CI_{a_j}(\tau_j) \, \partial^k \prod_{j=1}^n D_{a_j}F_i(R^*\zeta_l) \zeta_l.
\end{equation*}
chopping $\tau=X^k\zeta_l\prod_{j=1}^n\CI_{a_j}(\tau_j)$ into different pieces. Using distinct $a_j$'s
R{\sf v}_i = \sum_{l,n} \sum_{a_1,\dots,a_n} \sum_k \frac{X^k}{k!} \prod_{j=1}^n \frac{1}{\beta_j!} \Bigg( \sum_{\tau_j\in\CT} \frac{F_{l_j}(\tau_j)}{S(\tau_j)} \, \CI_{a_j}(\tau_j) \Bigg)^{\beta_j} \partial^k \prod_{j'=1}^n (D_{a_{j'}})^{\beta_{j'}} F_i(R^*\zeta_l) \, \zeta_l,
and the Faà di Bruno formula from Lemma A.1 in [5]
\frac{\partial^k G}{k!} = \sum_{b_1,\dots,b_m} \sum_{k=\sum_{j=1}^m \beta_j k_j} \prod_{j=1}^m \frac{1}{\beta_j!} \Big( \frac{Z_{b_j+k_j}}{k_j!} \Big)^{\beta_j} \prod_{j=1}^m (D_{b_j})^{\beta_j} G
for a function $G$ of distinct variables $Z_{b_1},\dots, Z_{b_m}$, one obtains
R{\sf v}_i = \sum_{l,n} \sum_{a_1,\dots,a_n} \sum_{\beta_1,\dots,\beta_n} \prod_{j=1}^n \frac{1}{\beta_j!} \big( {\sf u}_{a_j} - \langle {\sf u}_{a_j}, \one \rangle \big)^{\beta_j} \prod_{j'=1}^n (D_{a_{j'}})^{\beta_{j'}} F_i(R^*\zeta_l) \, \zeta_l.
From the definition of ${\sf F}_i$ recalled in (<ref>), this is equivalent to
R{\sf v}_i = \sum_{l=1}^{n_0} {\sf F}_i(R^*\zeta_l)({\sf u}, D{\sf u}) \, \zeta_l.
Recall $x$ stands for a generic spacetime point. Using the (crucial) multiplicativity of $\Pi_x^{M^\circ}$ and identity (<ref>) giving back $\big({\sf R^Mu}\big)(x)$ in terms of $\Pi_x^{M^\circ}$, we see that
\big((\partial_t-L_i)u_i\big)(x) = \big({\sf R^M}{\sf v}_i\big)(x) = \Pi_x^{M^\circ}\big(R{\sf v}_i(x)\big)(x)
\begin{equation*} \begin{split}
&= \sum_{l=1}^{n_0} \Pi_x^{M^\circ}\Big({\sf F}_i\big(R^{*}\zeta_l)({\sf u}(x), D{\sf u}(x)\big) \zeta_l\Big)(x) \\
&= \sum_{l=1}^{n_0} F_i(R^*\zeta_l)\Big(\big(\Pi_x^{M^\circ}{\sf u}(x)\big)(x), \nabla \big(\Pi_x^{M^\circ}{\sf u}(x)\big)(x)\Big) \,\Pi_x^{M^\circ}\zeta_l \\
&= \sum_{l=1}^{n_0} F_i(R^*\zeta_l)\big(u(x),\nabla u(x)\big)\xi_l.
\end{split}\end{equation*}
In the particular case where the preparation map $R$ is of BPHZ form (<ref>), a direct computation shows that the renormalized system (<ref>) takes the form (<ref>) if we further assume that $R_{\ell^{\varepsilon}}^*\zeta_l=\zeta_l$ for all $l\neq 0$ – recall $\zeta_0=\textbf{\textsf{1}}$, since one has from (<ref>)
F_i\big( (R_{\ell^\varepsilon}^* - \textrm{Id})\,\textbf{\textsf{1}} \big) = \sum_{\tau\in\mcB^-\setminus\{\textbf{\textsf{1}}\}} \frac{\ell^\varepsilon(\tau) \, F_i(\tau)}{S(\tau)}.
This assumption accounts for the fact that we never need to subtract a multiple of one of the noises $(\zeta_l)_{1\leq l\leq n_0}$ from any tree-indexed quantity in our renormalization algorithm. Note that Chandra, Moinat and Weber used in [20] a similar strategy to get back the renormalized equation in the particular case of the $\Phi^4_{4-\delta}$ equation.
[1]
I. Bailleul,
On the definition of a solution to a rough differential equation.
To appear in Ann. Fac. Sci. Toulouse, 2021.
[2]
T. Bonnefoi, A. Chandra, A. Moinat, H. Weber.
A priori bounds for rough differential equations with a non-linear damping term.
[3]
Y. Bruned.
Recursive formulae in regularity structures.
Stoch. PDEs: Anal. Comp., 6(4), (2018), 525–564.
[4]
Y. Bruned.
Renormalization from non-geometric to geometric rough paths.
[5]
Y. Bruned, A. Chandra, I. Chevyrev, M. Hairer.
Renormalising SPDEs in regularity structures.
to appear in J. Eur. Math. Soc., (2021),
[6]
Y. Bruned, I. Chevyrev, and P. K. Friz.
Examples of renormalized SDEs.
Stochastic Partial Differential Equations and Related Fields. Cham: Springer International Publishing,
(2018), 303–317.
[7]
Y. Bruned, I. Chevyrev, P. K. Friz,
R. Preiss.
A rough path perspective on renormalization.
J. Funct. Anal. 277(11), (2019), 108283.
[8]
Y. Bruned, F. Gabriel, M. Hairer, L. Zambotti.
Geometric stochastic heat equations.
[9]
I. Bailleul, M. Hoshino.
A tourist's guide to regularity structures.
[10]
Y. Bruned, M. Hairer, L. Zambotti.
Algebraic renormalization of regularity structures.
Invent. Math. 215(3), (2019), 1039–1156.
[11]
Y. Bruned, M. Hairer, L. Zambotti.
Renormalization of Stochastic Partial Differential Equations.
EMS Newsletter 115, no. 3, (2020), 7–11.
doi: 10.4171/NEWS/115/3.
[12]
Y. Bruned, D. Manchon.
Algebraic deformation for (S)PDEs.
[13]
Y. Bruned, K. Schratz,
Resonance based schemes for dispersive equations via decorated trees.
[14]
D. Calaque, K. Ebrahimi-Fard, D. Manchon.
Two interacting Hopf algebras of trees: a Hopf-algebraic approach
to composition and substitution of B-series.
Adv. in Appl. Math. 47(2), (2011), 282–308.
[15]
A. Chandra, I. Chevyrev, M. Hairer, H. Shen.
Langevin dynamics for the 2D Yang-Mills measure.
[16]
A. Chandra, M. Hairer.
An analytic BPHZ theorem for regularity structures.
[17]
A. Chandra, M. Hairer, H. Shen.
The dynamical sine-Gordon model in the full subcritical regime.
[18]
F. Chapoton, M. Livernet,
Pre-Lie algebras and the rooted trees operad.
Internat. Math. Res. Notices 2001 (2001) 395–408.
[19]
P. Chartier, E. Hairer, G. Vilmart.
Algebraic structures of B-series.
Found. Comput. Math. 10(4), (2010), 407–427.
[20]
A. Chandra, A. Moinat, H. Weber.
A priori bounds for the $\phi^4$ equation in the full sub-critical regime.
[21]
L. Foissy,
Algebraic structures on typed decorated rooted trees.
arXiv:1811.07572, (2018).
[22]
P. K. Friz, M. Hairer.
A Course on Rough Paths.
Springer International Publishing, 2014.
[23]
D. Guin, J. M. Oudom,
Sur l'algèbre enveloppante d'une algèbre pré-Lie.
C. R. Math. Acad. Sci. Paris 340(5), (2005), 331–336.
[24]
D. Guin, J. M. Oudom,
On the Lie enveloping algebra of a pre-Lie algebra.
J. K-Theory 2(1), (2008),147–167.
[25]
M. Gubinelli,
Ramification of rough paths.
J. Differ. Equ. 248(4), (2010), 693 – 721.
[26]
M. Hairer.
A theory of regularity structures.
Invent. Math. 198(2), (2014), 269–504.
[27]
M. Hairer, D. Kelly,
Geometric versus non-geometric rough paths.
Ann. Inst. Henri Poincaré Probab. Stat. 51(1), (2015), 207–251.
$\bullet$ I. Bailleul – Univ. Rennes, CNRS, IRMAR - UMR 6625, F-35000 Rennes, France.
E-mail<EMAIL_ADDRESS>
$\bullet$ Y. Bruned – School of Mathematics, University of Edinburgh, EH9 3FD, Scotland
E-mail<EMAIL_ADDRESS>
|
11institutetext: Information Retrieval and Extraction Lab (iREL)
International Institute of Information Technology, Hyderabad
11email:
{tathagata.raha, vijayasaradhi.i}@research.iiit.ac.in,{aayush.upadhyaya,
jeevesh.kataria, pramud.bommakanti<EMAIL_ADDRESS><EMAIL_ADDRESS>
# Identifying COVID-19 Fake News in Social Media
Tathagata Raha Vijayasaradhi Indurthi Aayush Upadhyaya Jeevesh Kataria
Pramud Bommakanti Vikram Keswani Vasudeva Varma
###### Abstract
The evolution of social media platforms has empowered everyone to access
information easily. Social media users can readily share information with the
rest of the world, which can sometimes encourage the spread of fake news, with
undesirable consequences. In this work, we train models that identify health
news related to the COVID-19 pandemic as real or fake. Our best model achieves
a high F1-score of 98.64% and places second on the leaderboard, trailing the
first position by a very narrow margin of 0.05 percentage points.
###### Keywords:
fake-news COVID-19 social media.
## 1 Introduction
Fake news is ubiquitous and impacts all spheres of life. Its impact is felt
most acutely when the fake news concerns people's health, particularly in
relation to the COVID-19 pandemic. Myths, rumours, unsolicited tips, and
unverified claims and advice related to COVID-19 can sometimes lead to loss of
human life. Factually incorrect advice can create a false sense of health and
delay people from getting the required medical help, often aggravating their
condition. Uninformed people can easily become victims of propaganda, which
has a huge impact on society. Needless to say, identifying fake COVID-19
health news is very important, as it can save valuable human lives.
NLP has made significant progress in recent times. Transfer learning has been
playing an important role in NLP research, and the introduction of novel
architectures like the Transformer has revolutionized the field. We use
RoBERTa, an improved variant of BERT, to identify whether COVID-19 health news
is fake or real.
In this task, we first use different simple baseline models such as Naive
Bayes, linear classifiers, boosting, bagging, and SVM models to classify a
tweet as fake or not. To obtain the tweet embeddings, we use tf-idf and
Word2Vec. As our advanced models, we experiment with different kinds of
transformer models such as BERT, RoBERTa, and ELECTRA.
## 2 Background
The task aims at identifying fake news related to COVID-19 in the English
language. Given a social media post, we need to classify it into the fake or
real category. The goal is to train machine learning models that automatically
identify posts related to the COVID-19 pandemic as fake or real. These posts
come from various social media platforms such as Twitter, Facebook, and
Instagram, are in English, and specifically relate to the COVID-19 pandemic.
Training data has been provided for this task; more details about the dataset
are given in the following sections. Dhoju et al. [4] perform a structural
analysis and extract relevant features to train models that classify health
news as fake or real, achieving a high F1-score of 0.96.
## 3 Related Work
The study of fake news related to health has not received much attention. With
the COVID-19 pandemic, however, there has been an increased focus on
identifying fake health news. We list some recent related work here.
Dai et al. [2] constructed a comprehensive repository, FakeHealth, which
includes news contents with rich features, news reviews with detailed
explanations, social engagements, and a user-to-user social network. They also
perform exploratory analysis to understand the characteristics of the datasets
and analyse useful patterns. Waszak et al. [16] analyze the top news shared on
social media to identify the leading fake medical information misinforming
society; they curate top health weblinks shared on Polish-language social
media between 2012 and 2017 and provide a detailed analysis.
## 4 System overview
In this shared task [11], we formulate the problem of identifying whether a
social media post is fake as a text classification problem. We first implement
a few simple baseline models such as linear classifiers and boosting models.
We then fine-tune different pretrained transformer models, such as RoBERTa,
BERT, and ELECTRA, on the COVID-19 training dataset. For the transformer
models we do no explicit preprocessing, because we want the model to learn
patterns of the input, such as the presence of many hashtags or mentions, that
help identify fake news.
We use Huggingface’s transformers library [17] for fine-tuning the pretrained
transformer models.
## 5 Dataset
We use the dataset collected by Patwa et al. [12]. The dataset consists of
tweets and posts related to COVID-19 obtained from different social media
sites such as Twitter, Facebook, and Instagram, with a label corresponding to
each post. The labels are as follows:
1. 1.
Fake: This denotes if a post is falsely claimed or fake in nature.
Example: Politically Correct Woman (Almost) Uses Pandemic as Excuse Not to
Reuse Plastic Bag https://t.co/thF8GuNFPe #coronavirus #nashville
2. 2.
Real: This denotes a verified post or a post which is true.
Example: The CDC currently reports 99031 deaths. In general the discrepancies
in death counts between different sources are small and explicable. The death
toll stands at roughly 100000 people today.
Table 1 shows the distribution of fake and real labels in the training,
validation, and test sets respectively.
Split | #Samples | #Fake | #Real
---|---|---|---
Train | 6420 | 3060 | 3360
Validation | 2140 | 1020 | 1120
Test | 2140 | 1021 | 1120
Table 1: Distribution of fake and real labels in the training, validation,
and test splits
As the dataset is fully balanced, there was no need to perform balancing
steps. For the baseline models, we preprocess the dataset with the following
measures:
* •
Lowercasing the words
* •
Replacing irrelevant symbols with spaces
* •
Removing stopwords
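The preprocessing steps above can be sketched as a small function. The stopword list here is a tiny illustrative subset standing in for a full English list (the exact list used in the experiments is not specified in the paper), and keeping `#`/`@` preserves hashtags and mentions:

```python
import re

# Tiny illustrative stopword subset; real experiments would use a full
# English stopword list.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "and"}

def preprocess(text: str) -> str:
    """Lowercase, replace irrelevant symbols with spaces, drop stopwords."""
    text = text.lower()
    # Keep letters, digits, '#' and '@' (hashtags/mentions can carry signal).
    text = re.sub(r"[^a-z0-9#@\s]", " ", text)
    tokens = [t for t in text.split() if t not in STOPWORDS]
    return " ".join(tokens)

print(preprocess("The CDC currently reports 99,031 deaths!"))
# → cdc currently reports 99 031 deaths
```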
Table 2 provides further dataset statistics: the average, maximum, and
minimum number of words in the posts of the training, validation, and test
splits.
Split | Average | Maximum | Minimum
---|---|---|---
Train | 27.0 | 1456 | 3
Validation | 26.8 | 304 | 3
Test | 27.5 | 1484 | 4
Table 2: Dataset statistics showing the number of words in different splits of
the dataset
## 6 Baseline models
We implement several simple baseline models on the COVID-19 dataset.
Word embeddings: The first step is to represent each post as a vector. We use
two different representations for our posts and sentences: Word2Vec [10] and
tf-idf [18]. For Word2Vec, we look up an embedding for each word and take the
mean of the word embeddings to obtain a 300-dimensional vector representation
of the text.
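The two featurizations can be sketched as follows. The three toy documents and the random vectors standing in for pretrained 300-dimensional Word2Vec embeddings are illustrative, and the tf-idf weighting is a minimal from-scratch version rather than the improved variant of [18]:

```python
import math
import numpy as np

docs = [["covid", "vaccine", "works"],
        ["fake", "cure", "covid"],
        ["vaccine", "cure", "news"]]

# --- tf-idf: weight each term by frequency in the doc and rarity overall ---
vocab = sorted({w for d in docs for w in d})
df = {w: sum(w in d for d in docs) for w in vocab}      # document frequency
idf = {w: math.log(len(docs) / df[w]) for w in vocab}

def tfidf_vector(doc):
    vec = np.zeros(len(vocab))
    for i, w in enumerate(vocab):
        tf = doc.count(w) / len(doc)
        vec[i] = tf * idf[w]
    return vec

# --- mean-pooled word vectors: average the per-word embeddings ---
rng = np.random.default_rng(0)
word2vec = {w: rng.normal(size=300) for w in vocab}     # stand-in vectors

def mean_vector(doc):
    return np.mean([word2vec[w] for w in doc], axis=0)  # 300-dim

print(tfidf_vector(docs[0]).shape, mean_vector(docs[0]).shape)  # (6,) (300,)
```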
Models: After obtaining the vector representations, we experiment with the
following five classifiers:
* •
Naive Bayes: A probabilistic classifier that applies Bayes' theorem: based on
events observed previously (the training data), it estimates the probability
of the current event's class. [13]
* •
Logistic regression: Logistic regression is a statistical model that is used
to estimate the probability of a response based on predictor variables. [6]
* •
Bagging models (Random Forests): An ensemble of Decision Trees that uses a
tree-like model for predicting the labels. For the final output, it considers
the outputs of all the decision trees that it created. [9]
* •
Boosting models (XGBoost): Boosting is a general ensemble method that
combines many weak classifiers into a strong one: a model is built from the
training data, and each subsequent model attempts to correct the errors of
the previous ones. XGBoost is a decision-tree-based ensemble machine learning
algorithm that uses a gradient boosting framework. [7]
* •
Support Vector Machines: SVM is a non-probabilistic classifier which
constructs a set of hyperplanes in a high-dimensional space separating the
data into classes. [5]
## 7 Transformer models
For our more advanced models we explore different transformer models. Vaswani
et al. [15] proposed the transformer architecture, a non-recurrent
architecture with stacked self-attention and fully connected layers in both
the encoder and decoder. The Transformer uses concepts such as self-attention,
multi-head attention, positional embeddings, residual connections, and masked
attention. We use the following pre-trained transformer models from the
HuggingFace repository and fine-tune them on our classification task:
* •
bert-base-uncased: 12-layer, 768-hidden, 12-heads, 110M parameters. The model
has been pretrained on Book Corpus and the Wikipedia data using the Masked
Language Model(MLM) and the Next Sentence Prediction(NSP) objectives. [3]
* •
distilbert-base-uncased: 6-layer, 768-hidden, 12-heads, 66M parameters. It is
a smaller model than BERT which is a lot cheaper and faster to train than
BERT. [14]
* •
roberta-base: 12-layer, 768-hidden, 12-heads, 125M parameters. RoBERTa [8] is
a robustly optimized BERT approach that has been trained on a much larger
dataset and for many more iterations, with a larger batch size of 8k. RoBERTa
also removes the NSP objective from pretraining.
* •
google/electra-base: 12-layer, 768-hidden, 12-heads, 110M parameters. ELECTRA
models are trained to distinguish ”real” input tokens vs ”fake” input tokens
generated by another neural network, similar to the discriminator of a GAN.
[1]
* •
xlnet-base-cased: 12-layer, 768-hidden, 12-heads, 110M parameters. It is
similar to BERT but learns bidirectional context along with an autoregressive
formulation. [19]
## 8 Experimental Setup
We combine the dev and training datasets and split them into train and
validation sets in a 90:10 ratio. We train on the training split and evaluate
on the validation split.
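The 90:10 split can be sketched as follows; the seed is illustrative, and the post count 8560 comes from the 6420 training plus 2140 dev posts in Table 1:

```python
import random

combined = [f"post_{i}" for i in range(8560)]  # 6420 train + 2140 dev posts
random.seed(42)                                 # illustrative seed
random.shuffle(combined)

cut = int(0.9 * len(combined))                  # 90:10 split point
train_split, val_split = combined[:cut], combined[cut:]
print(len(train_split), len(val_split))         # → 7704 856
```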
We do not do any explicit pre-processing like removing the mentions or
removing the hashtags because we want the model to learn these patterns.
We use Huggingface’s transformers library [17] for all our experiments.
The primary evaluation metric for the shared task is the F1 score, defined as
the harmonic mean of precision and recall. An F1 score reaches its best value
at 1 and its worst at 0. In addition, we also report accuracy.
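This metric can be computed directly from prediction counts. A minimal sketch of the single-class definition above follows (the shared task's exact scoring may average over both classes, e.g. a weighted F1):

```python
def f1_score(y_true, y_pred, positive="fake"):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

y_true = ["fake", "fake", "real", "real", "fake"]
y_pred = ["fake", "real", "real", "fake", "fake"]
print(round(f1_score(y_true, y_pred), 3))  # → 0.667
```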
Method | Accuracy | F1-score
---|---|---
Naive Bayes Model(tf-idf) | 0.887 | 0.885
Linear Classifier(tf-idf) | 0.901 | 0.893
Bagging Model(tf-idf) | 0.926 | 0.921
Boosting Model(tf-idf) | 0.914 | 0.913
SVM Model(tf-idf) | 0.941 | 0.941
Linear Classifier(word2vec) | 0.883 | 0.879
Bagging Model(word2vec) | 0.915 | 0.912
Boosting Model(word2vec) | 0.914 | 0.912
SVM Model(word2vec) | 0.909 | 0.905
bert-base-uncased | 0.962 | 0.960
distilbert-base-uncased | 0.957 | 0.955
roberta-base | 0.982 | 0.982
electra-base | 0.981 | 0.981
xlnet-base-cased | 0.948 | 0.944
Table 3: Results on the validation set for the COVID-19 fake news
identification task for the English language. The first section lists the
baseline models on tf-idf, the second section the baseline models on word2vec,
and the third section the transformer models.
Method | Accuracy | F1-score
---|---|---
SVM Model(tf-idf) | 0.939 | 0.938
Bagging Model(word2vec) | 0.910 | 0.910
Boosting Model(word2vec) | 0.927 | 0.926
roberta-base | 0.9864 | 0.9864
electra-base | 0.9827 | 0.9827
Table 4: Results on the official test set for the COVID-19 fake news
identification task for the English language. The first section lists the
baseline models on tf-idf, the second section the baseline models on word2vec,
and the third section the transformer models.
## 9 Results
Table 3 shows the results of our models on the validation set. The RoBERTa
model gives an F1-score of 0.982 with an accuracy of 0.982, and our ELECTRA
model achieves an F1-score of 0.981 with an accuracy of 0.981. We submitted
these two models for final evaluation on the official test set.
Table 4 shows the official results of our models on the official test set. The
RoBERTa model gives an F1-score of 0.9864 with an accuracy of 0.9864,
achieving 2nd position on the official leaderboard, 0.05 percentage points
below the best F1 score. Our ELECTRA model achieves an F1-score of 0.9827 with
an accuracy of 0.9827, comparable with the top-performing models on the
leaderboard.
## 10 Conclusion
Identifying fake COVID-19 news is challenging. Going forward, it would be
useful not only to classify whether a social media post is fake, but also to
explain why it is classified as fake or real. We would like to explore the
interpretability of the models.
## References
* [1] Clark, K., Luong, M.T., Le, Q.V., Manning, C.D.: Electra: Pre-training text encoders as discriminators rather than generators (2020)
* [2] Dai, E., Sun, Y., Wang, S.: Ginger cannot cure cancer: Battling fake health news with a comprehensive data repository. In: Proceedings of the International AAAI Conference on Web and Social Media. vol. 14, pp. 853–862 (2020)
* [3] Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
* [4] Dhoju, S., Main Uddin Rony, M., Ashad Kabir, M., Hassan, N.: Differences in health news from reliable and unreliable media. In: Companion Proceedings of The 2019 World Wide Web Conference. pp. 981–987 (2019)
* [5] Hassan, S., Rafi, M., Shaikh, M.S.: Comparing svm and naive bayes classifiers for text categorization with wikitology as knowledge enrichment. 2011 IEEE 14th International Multitopic Conference (Dec 2011). https://doi.org/10.1109/inmic.2011.6151495, http://dx.doi.org/10.1109/INMIC.2011.6151495
* [6] Kowsari, Meimandi, J., Heidarysafa, Mendu, Barnes, Brown: Text classification algorithms: A survey. Information 10(4), 150 (Apr 2019). https://doi.org/10.3390/info10040150, http://dx.doi.org/10.3390/info10040150
* [7] Li, M., Xiao, P., Zhang, J.: Text classification based on ensemble extreme learning machine (2018)
* [8] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
* [9] Markel, J., Bayless, A.J.: Using random forest machine learning algorithms in binary supernovae classification (2020)
* [10] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space (2013)
* [11] Patwa, P., Bhardwaj, M., Guptha, V., Kumari, G., Sharma, S., PYKL, S., Das, A., Ekbal, A., Akhtar, S., Chakraborty, T.: Overview of constraint 2021 shared tasks: Detecting english covid-19 fake news and hindi hostile posts. In: Proceedings of the First Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation (CONSTRAINT). Springer (2021)
* [12] Patwa, P., Sharma, S., PYKL, S., Guptha, V., Kumari, G., Akhtar, M.S., Ekbal, A., Das, A., Chakraborty, T.: Fighting an infodemic: Covid-19 fake news dataset. arXiv preprint arXiv:2011.03327 (2020)
* [13] Raschka, S.: Naive bayes and text classification i - introduction and theory (2017)
* [14] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter (2020)
* [15] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: NIPS. pp. 5998–6008 (2017)
* [16] Waszak, P.M., Kasprzycka-Waszak, W., Kubanek, A.: The spread of medical fake news in social media–the pilot quantitative study. Health policy and technology 7(2), 115–118 (2018)
* [17] Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., et al.: Huggingface’s transformers: State-of-the-art natural language processing. ArXiv pp. arXiv–1910 (2019)
* [18] Wu, H., Yuan, N.: An improved tf-idf algorithm based on word frequency distribution information and category distribution information. In: Proceedings of the 3rd International Conference on Intelligent Information Processing. p. 211–215. ICIIP ’18, Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3232116.3232152, https://doi.org/10.1145/3232116.3232152
* [19] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding (2020)
# On a tilted Liouville-master equation of open quantum systems
Fei Liu<EMAIL_ADDRESS>School of Physics, Beihang University, Beijing
100191, China
###### Abstract
A tilted Liouville-master equation in Hilbert space is presented for Markovian
open quantum systems. We demonstrate that it is the unraveling of the tilted
quantum master equation. The latter is widely used in the analysis and
calculations of stochastic thermodynamic quantities in quantum stochastic
thermodynamics.
## I Introduction
In the past two decades, stochastic thermodynamics for open quantum systems
has attracted considerable theoretical interest Esposito _et al._ (2009);
Campisi _et al._ (2011); Alicki and Kosloff (2018); Liu (2018). One of the
major issues is the statistics of random thermodynamic variables such as heat,
work, entropy production, and efficiency Kurchan (2000); Breuer (2003);
Talkner _et al._ (2007); De Roeck (2007); Talkner _et al._ (2008); Crooks
(2008); Garrahan and Lesanovsky (2010); Subaşı and Hu (2012); Horowitz (2012);
Hekking and Pekola (2013); Leggio _et al._ (2013); Horowitz and Parrondo
(2013); Žnidarič (2014); Verley _et al._ (2014); Gasparinetti _et al._
(2014); Cuetara _et al._ (2015); Carrega _et al._ (2015); Manzano _et al._
(2016); Suomela _et al._ (2016); Liu and Xi (2016); Strasberg _et al._
(2017); Wang _et al._ (2017); Restrepo _et al._ (2018); Carollo _et al._
(2018, 2019); Liu and Su (2020). The tilted or generalized quantum master
equation (TQME) is a useful approach for the study of these problems Esposito
_et al._ (2009). For instance, the fluctuation theorems of steady-states can
be demonstrated according to the symmetries implied in the maximal eigenvalue
of the equation Esposito _et al._ (2009); Gasparinetti _et al._ (2014);
Cuetara _et al._ (2015); Liu and Su (2020). To study the concrete probability
distributions of the random thermodynamic variables, we can numerically or
analytically solve the equation to obtain characteristic functions or moment
generating functions Silaev _et al._ (2014); Gasparinetti _et al._ (2014);
Cuetara _et al._ (2015); Carrega _et al._ (2015); Liu and Xi (2016); Wang
_et al._ (2017); Restrepo _et al._ (2018); the distributions and functions
are mathematically equivalent.
TQME was developed by Esposito et al. Esposito _et al._ (2009) in their
investigation of the fluctuation theorems of open quantum systems. Their
derivation is based on the celebrated two-energy measurement scheme Kurchan
(2000) and the whole procedure is similar to the derivation of the quantum
master equation Alicki _et al._ (2006); Rivas and Huelga (2012); Breuer _et
al._ (2000). Indeed, these two equations appear to be almost the same in form;
the only difference is an exponential term in front of the jump terms
therein. Although TQME is established in stochastic thermodynamics, an
analogous equation was presented by Mollow in 1975 in a seminal paper about
photon emissions of quantum systems with discrete energy levels Mollow (1975).
Importantly, the notion of quantum jumps was proposed in this paper Carmichael
_et al._ (1989); Gardiner and Zoller (2004). To determine the photon-number
distributions in the many modes of fluorescence, Mollow developed a hierarchy
of equations. Garrahan and Lesanovsky argued that these equations are
equivalent to TQME Garrahan and Lesanovsky (2010). This result inspired us to
re-derive TQME from the perspective of quantum jumps Liu and Xi (2016); Liu
(2018). Different from the method of Esposito et al. Esposito _et al._
(2009), our method is based on the unraveling of the quantum master equation
into quantum jump trajectories Breuer _et al._ (2000), and Dyson expansion is
used Kist _et al._ (1999); Liu and Xi (2016).
Although TQME is widely adopted in the literature, several open questions
still remain. Quantum jump trajectories, or trajectories for short, are
composed of alternating deterministic pieces and random jumps of wave
functions of individual quantum systems. They are a type of stochastic process
called the piecewise deterministic process (PDP) Davis (1984). Hence, in
Hilbert space, there exists a Liouville-master equation that governs the
dynamics of the probability distribution of the random wave functions.
Moreover, this equation exactly yields the quantum master equation for the
reduced density matrix Breuer and Petruccione (1995a, b, 1997a, 1997b). This
naturally raises the following questions: Is there a tilted Liouville-master
equation in the Hilbert space? What is the relationship between the equation
and the Liouville-master equation? Does it yield TQME? We also list these
questions in Fig. (1). The aim of this paper is to present definite answers to
these questions.
Figure 1: Three questions of interest addressed in this paper: 1. Is there a
tilted Liouville-master equation in the Hilbert space? 2. What is the
relationship between it and the Liouville-master equation? 3. Does the
equation yield the conventional TQME?
The remainder of this paper is organized as follows: In Sec. (II), the
Markovian quantum master equation and its unraveling of trajectories are
briefly reviewed. In Sec. (III), by defining two functionals, we rewrite the
probability distribution functional of the trajectory in a new form. In Sec.
(IV), a Feynman-Kac formula in the Hilbert space is derived. Based on these
two results, in Sec. (V), we derive the tilted Liouville-master equation and
demonstrate that it is indeed the unraveling of TQME. Section (VI) concludes
the paper.
## II Unraveling of Markovian quantum master equation
We start with the Markovian quantum master equation. Because our aim is to
prepare the essential equations and notations, the description is brief; a
detailed explanation can be found in the standard textbooks, e.g., Ref. Breuer
_et al._ (2000). Let $\rho(t)$ be the reduced density matrix of an open
quantum system. Under appropriate assumptions and conditions, the ensemble
dynamics of the system is described by the Markovian quantum master equation
Davies (1974); Lindblad (1976); Gorini _et al._ (1976)
$\displaystyle\partial_{t}\rho(t)=-i[H,\rho(t)]+\sum_{\alpha=1}^{M}r_{\alpha}\left(A_{\alpha}\rho(t)A^{\dagger}_{\alpha}-\frac{1}{2}\left\\{A^{\dagger}_{\alpha}A_{\alpha},\rho(t)\right\\}\right),$
(1)
where the Planck constant $\hbar$ is set to 1, $H$ denotes the Hamiltonian of
the quantum system, $A_{\alpha}$ is the Lindblad operator, and nonnegative
$r_{\alpha}$, $\alpha=1,\cdots,M$ represent the correlation functions of the
environments surrounding the system. Remarkably, this general equation is also
equal to an ensemble average of the density matrices of the individual
stochastic quantum systems Breuer and Petruccione (1995a, b, 1997a, 1997b):
$\displaystyle\rho(t)=\int D\psi
D\psi^{*}P[\psi,t]|\psi(t)\rangle\langle\psi(t)|.$ (2)
Here, $D\psi D\psi^{*}$ is the Hilbert space volume element, and $P[\psi,t]$
is the probability distribution functional of the random wave function $\psi$
at time $t$. The latter satisfies the Liouville-master equation Breuer _et
al._ (2000); Breuer and Petruccione (1997b):
$\displaystyle\partial_{t}P[\psi,t]$ $\displaystyle=$ $\displaystyle i\int
dz\left(\frac{\delta}{\delta\psi(z)}G[\psi](z)-\frac{\delta}{\delta\psi^{*}(z)}G[\psi]^{*}(z)\right)P[\psi,t]+$
(3) $\displaystyle\int D\phi
D\phi^{*}\left(P[\phi,t]W[\phi|\psi]-P[\psi,t]W[\psi|\phi]\right),$
where $\delta/\delta\psi(z)$ and $\delta/\delta\psi^{*}(z)$ are functional
derivatives and $z$ denotes the positional coordinate. The operator $G$ in the
first integral of Eq. (3) is
$\displaystyle
G[\psi]=\left(\hat{H}+\frac{i}{2}\sum_{\alpha=1}^{M}r_{\alpha}\parallel
A_{\alpha}\psi\parallel^{2}\right)\psi,$ (4)
and $\hat{H}\equiv
H-({i}/{2})\sum_{\alpha=1}^{M}r_{\alpha}A_{\alpha}^{\dagger}A_{\alpha}$ is the
non-Hermitian Hamiltonian. In the second integral of Eq. (3), the transition
rate is
$\displaystyle
W[\phi|\psi]=\sum_{\alpha=1}^{M}k_{\alpha}[\phi]\delta\left[\frac{A_{\alpha}\phi}{\parallel
A_{\alpha}\phi\parallel}-\psi\right],$ (5)
where $\delta[$ $]$ denotes the Dirac functional,
$k_{\alpha}[\phi]=r_{\alpha}\parallel A_{\alpha}\phi\parallel^{2}$ is the rate
of jump of the wave function from $\phi$ to the target
$\psi_{\alpha}={A_{\alpha}\phi}/{\parallel A_{\alpha}\phi\parallel}$, and the
precise definition of the latter is given in Eq. (7) below. Eq. (3) is called
the unraveling of the quantum master equation (1).
The stochastic process underlying Eq. (3) is the PDP in the Hilbert space
Breuer _et al._ (2000). The evolutions or trajectories of the individual
quantum systems are composed of deterministic pieces and random jumps Breuer
_et al._ (2000): the former are the solutions of the nonlinear
Schr$\ddot{o}$dinger equation,
$\displaystyle\frac{d}{d\tau}\psi(\tau)$ $\displaystyle=$ $\displaystyle-
iG[\psi(\tau)],$ (6)
while the latter are the instantaneous jumps of wave functions given by
$\displaystyle\psi(\tau)\rightarrow\psi_{\alpha}=\frac{A_{\alpha}\psi(\tau)}{\parallel
A_{\alpha}\psi(\tau)\parallel},$ (7)
$\alpha=1,\cdots,M$, and the rates of the jumps are nothing but
$k_{\alpha}[\psi(\tau)]$. The target wave functions $\psi_{\alpha}$ in Eq. (7)
appear to depend on the concrete wave functions $\psi(\tau)$ at the jumping
times. However, in the cases of physical interest, they are uniquely specified
by the Lindblad operators $A_{\alpha}$ Breuer _et al._ (2000). In this paper,
we consider only such cases.
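As a minimal numerical illustration of this unraveling (not part of the original discussion, with all parameters and the ensemble size chosen for illustration), consider a two-level system with $H=0$, a single Lindblad operator $A=\sigma_{-}=|g\rangle\langle e|$, and rate $r$. The nonlinear flow (6) then reads $\dot{c}_{g}=(r/2)p\,c_{g}$, $\dot{c}_{e}=(r/2)(p-1)c_{e}$ with $p=|c_{e}|^{2}=\parallel A\psi\parallel^{2}$, each jump (7) projects the state onto $|g\rangle$, and Eq. (1) predicts that the ensemble-averaged excited population decays as $\rho_{ee}(t)=\rho_{ee}(0)e^{-rt}$. A simple Euler-stepped simulation of the PDP reproduces this:

```python
import numpy as np

rng = np.random.default_rng(1)
r, dt, T, n_traj = 1.0, 0.01, 1.0, 4000     # illustrative parameters
steps = int(T / dt)

excited_pop = []
for _ in range(n_traj):
    # State (c_g, c_e), starting in the superposition (|g> + |e>)/sqrt(2).
    c_g, c_e = 1 / np.sqrt(2), 1 / np.sqrt(2)
    for _ in range(steps):
        p = abs(c_e) ** 2                      # ||A psi||^2 for A = sigma_-
        if rng.random() < r * p * dt:
            c_g, c_e = 1.0, 0.0                # jump (7): psi -> A psi / ||A psi||
        else:
            # Euler step of the norm-preserving nonlinear flow (6) with H = 0:
            # d c_g/dt = (r/2) p c_g,  d c_e/dt = (r/2)(p - 1) c_e
            c_g += dt * (r / 2) * p * c_g
            c_e += dt * (r / 2) * (p - 1) * c_e
            norm = np.sqrt(abs(c_g) ** 2 + abs(c_e) ** 2)
            c_g, c_e = c_g / norm, c_e / norm  # guard against Euler drift
    excited_pop.append(abs(c_e) ** 2)

# The ensemble average over trajectories should reproduce the
# master-equation prediction rho_ee(T) = 0.5 exp(-r T).
print(np.mean(excited_pop), 0.5 * np.exp(-r * T))
```

Averaging $|c_{e}|^{2}$ over the trajectory ensemble is exactly the $\langle e|\rho(t)|e\rangle$ component of Eq. (2).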
## III Probability distribution functional of trajectory
Let us denote the trajectories of the individual quantum systems as
$\Psi_{t}$, where $t$ denotes the duration of the quantum nonequilibrium
process. We denote the jumps along a trajectory by
$\alpha_{i}\in\\{1,\cdots,M\\}$, and $i$ represents the time $t_{i}$ in which
the $i$-th jump occurs, where $i=0,1,\cdots,N$, and $N$ is the total number of
jumps. Then, the probability distribution functional of monitoring a
trajectory $\Psi_{t}$ is simply Breuer _et al._ (2000)
$\displaystyle{\mathbf{P}}_{0}[\Psi_{t}]$ $\displaystyle=$
$\displaystyle\exp\left[-\int_{0}^{t}ds\Gamma[\psi(s)]\right]\prod_{i=1}^{N}k_{\alpha_{i}}[\psi(t_{i}^{-})],$
(8)
where we use the subscript $0$ on the left-hand side to denote the initial
wave function of the quantum system being in a target wave function
$\psi_{\alpha_{0}}$, and $\psi(t_{i}^{-})$ is the wave function of the system
immediately prior to the jumping time $t_{i}$. The total rate of jumps is
$\displaystyle\Gamma[\psi]$ $\displaystyle=$ $\displaystyle\int D\phi
D\phi^{*}W[\psi|\phi]$ (9) $\displaystyle=$
$\displaystyle\sum_{\alpha=1}^{M}k_{\alpha}[\psi].$
We emphasize that in the time interval between $t_{i}$ and $t_{i+1}$,
$\psi(s)$ follows the solution of Eq. (6); the initial condition is specified
by the target wave function after the last jump, that is
$\psi(s=t_{i}^{+})=\psi_{\alpha_{i}}$ and $t_{i}^{+}$ is the time immediately
after the jump. The exponential decay function in Eq. (8) implies that during
the successive evolutions, the quantum system has a probability of jumping
from the current state to the potential target wave functions. This is the
physical consequence arising from irreversible dissipations induced by
environments or from continuous measurements on the individual quantum systems
Breuer _et al._ (2000); Wiseman and Milburn (2010); Plenio and Knight (1998).
The probability distribution functional of the trajectory (8) has an
alternative form:
$\displaystyle\mathbf{P}_{0}[\Psi_{t}]$ $\displaystyle=$
$\displaystyle\exp\left(-t\int D\phi
D\phi^{*}\Gamma[\phi]P_{\phi}[\Psi_{t}]+t\sum_{\alpha=1}^{M}\int D\phi
D\phi^{*}\ln k_{\alpha}[\phi]F_{\phi,\alpha}[\Psi_{t}]\right),$ (10)
where we define two functionals,
$\displaystyle P_{\phi}[\Psi_{t}]$ $\displaystyle=$
$\displaystyle\frac{1}{t}\int_{0}^{t}ds\delta[\psi(s)-\phi],$ (11)
$\displaystyle F_{\phi,\alpha}[\Psi_{t}]$ $\displaystyle=$
$\displaystyle\frac{1}{t}\sum_{i=1}^{N}\delta[\phi-\psi(t_{i}^{-})]\delta_{\alpha,\alpha_{i}},$
(12)
and $\delta_{\alpha,\alpha_{i}}$ is the Kronecker delta symbol. It is not
difficult to see that Eq. (11) represents the fraction of time of the quantum
system occupying the wave function $\phi$ in the duration $t$, while Eq. (12)
is the empirical rate of jumping from the special $\phi$ to the special target
$\psi_{\alpha}$. At long time limits, we expect that these functionals reduce
to the probability distribution of wave functions and flows of the steady-
state of the quantum ensemble, respectively. Eq. (10) can be used to obtain
the rate functional of the level 2.5 large deviations of open quantum systems
Carollo _et al._ (2019, 2021). We do not pursue this issue in this paper.
To close this section, we introduce notation for formally expressing the
ensemble average of an arbitrary functional $O[\Psi_{t}]$ of trajectories:
$\displaystyle\left\langle
O\right\rangle\equiv\sum_{\alpha_{0},\Psi_{t}}P[\psi_{\alpha_{0}}]{\bf
P}_{0}[\Psi_{t}]O[\Psi_{t}],$ (13)
where $P[\psi_{\alpha_{0}}]$ represents the initial probability distribution
functional and the summations are over all possible trajectories and initial
target wave functions. Although Eq. (13) is not rigorous, it is adequate to
support the discussion below.
## IV A Feynman-Kac formula
In this section, we present a Feynman-Kac formula for the PDPs in the Hilbert
space. The basis of our discussion is the intuitive picture of the
trajectories and simple relations among the probabilities in Eqs. (6) and (7).
Consider a time-dependent integral
$\displaystyle u(t)=\int_{0}^{t}dsV[\psi(s)].$ (14)
We assume $u(t)$ to be a continuous real function of time $t$. Obviously, the
integral is also a functional of the trajectories.
Our aim is to calculate the probability distribution $p(u,t)$ of the random
variable $u(t)$. To this end, we need several intermediate quantities and
equations. First, we define $P[\psi,u,t]$ as the probability distribution
functional of finding the individual quantum systems with wave function $\psi$
and the integral (14) equal to $u$ at time $t$. We have $p(u,t)=\int D\psi
D\psi^{*}P[\psi,u,t]$. Moreover, we define another functional
$P[\psi,u,t;\psi_{\alpha},v,\tau]$: it represents the joint probability
distribution functional that the wave function of the quantum system and the
integral at time $t$ are $\psi$ and $u$, respectively, that the duration of
the continuous evolution since the last jump is $\tau$, and that the target
wave function and the integral at the time of the last jump are
$\psi_{\alpha}$ and $v$, respectively. These two
distribution functionals are related by
$\displaystyle P[\psi,u,t]=\sum_{\alpha=1}^{M}\int dv\int_{0}^{\infty}d\tau
P[\psi,u,t;\psi_{\alpha},v,\tau],$ (15)
where the integration $\int dv$ is over the whole space of the random variable
$u(t)$.
Let $h$ be a small time interval. We can then write the following balance
formula for the detailed probability distribution functional:
$\displaystyle P[\psi,u,t+h;\psi_{\alpha},v,\tau+h]$ $\displaystyle=$
$\displaystyle\int D\phi
D\phi^{*}dwP[\phi,w,t;\psi_{\alpha},v,\tau]\left(1-\Gamma[\phi]h\right)\delta[\psi-\phi+iG[\phi]h]$
(16) $\displaystyle\times\delta(u-w-V[\phi]h)+{\it o}(h).$
The reason is as follows: if the quantum system started from the target state
$\psi_{\alpha}$ with a value $v$ of the integral (14) and has evolved
deterministically for a duration $\tau+h$ at time $t+h$, then it must have
evolved for a duration $\tau$ at time $t$, with no jump occurring during the
interval $h$. We note that both the Dirac functional $\delta[\;]$ and the
Dirac function $\delta(\;)$ therein are consequences of the deterministic
Eq. (6). We expand both sides of Eq. (16) to first order in $h$:
$\displaystyle
P[\psi,u,t;\psi_{\alpha},v,\tau]+h\partial_{t}P[\psi,u,t;\psi_{\alpha},v,\tau]+h\partial_{\tau}P[\psi,u,t;\psi_{\alpha},v,\tau]$
(17) $\displaystyle=$ $\displaystyle\int D\phi
D\phi^{*}dwP[\phi,w,t;\psi_{\alpha},v,\tau]\delta[\psi-\phi]\delta(u-w)-h\int
D\phi
D\phi^{*}dwP[\phi,w,t;\psi_{\alpha},v,\tau]\Gamma[\phi]\delta[\psi-\phi]\delta(u-w)+$
$\displaystyle ih\int D\phi
D\phi^{*}dwP[\phi,w,t;\psi_{\alpha},v,\tau]\left\\{\int
dz\left(\frac{\delta}{\delta\psi(z)}\delta[\psi-\phi]\right)G[\phi](z)\right.-$
$\displaystyle\left.\int
dz\left(\frac{\delta}{\delta\psi^{*}(z)}\delta[\psi-\phi]\right)G^{*}[\phi](z)\right\\}\delta(u-w)-$
$\displaystyle h\int D\phi
D\phi^{*}dwP[\phi,w,t;\psi_{\alpha},v,\tau]\delta[\psi-\phi]\left(\frac{\partial}{\partial
u}\delta(u-w)\right)V[\phi].$
Letting $h$ tend to zero and using the properties of the Dirac delta function
and functional, we obtain
$\displaystyle\partial_{t}P[\psi,u,t;\psi_{\alpha},v,\tau]+\partial_{\tau}P[\psi,u,t;\psi_{\alpha},v,\tau]$
(18) $\displaystyle=$ $\displaystyle i\int
dz\frac{\delta}{\delta\psi(z)}P[\psi,u,t;\psi_{\alpha},v,\tau]G[\psi](z)-i\int
dz\frac{\delta}{\delta\psi^{*}(z)}P[\psi,u,t;\psi_{\alpha},v,\tau]G[\psi]^{*}(z)-$
$\displaystyle\Gamma[\psi]P[\psi,u,t;\psi_{\alpha},v,\tau]-V[\psi]\frac{\partial}{\partial
u}P[\psi,u,t;\psi_{\alpha},v,\tau].$
On the other hand, $P[\psi,u,t;\psi_{\alpha},v,\tau=0]$ also equals the
probability distribution that some wave function jumps to $\psi_{\alpha}$ at
time $t$; the pre-jump wave functions may have started from different target
wave functions and evolved for various durations. Hence, we can
reformulate it as
$\displaystyle P[\psi,u,t;\psi_{\alpha},v,0]$ $\displaystyle=$
$\displaystyle\sum_{\beta=1}^{M}\int D\phi
D\phi^{*}dwdv^{\prime}\int_{0}^{\infty}d\tau
P[\phi,w,t;\psi_{\beta},v^{\prime},\tau]k_{\alpha}[\phi]\delta(w-u)\delta(u-v)\times$
(19) $\displaystyle\delta\left[\frac{A_{\alpha}\phi}{\parallel
A_{\alpha}\phi\parallel}-\psi_{\alpha}\right]\delta[\psi-\psi_{\alpha}].$
Eq. (19) appears somewhat lengthy; nevertheless, its probabilistic meaning is
clear. According to Eq. (15), by integrating and summing Eq. (18) over
$\tau$ and $\alpha$, respectively, we obtain a differential equation for the
probability distribution functional $P[\psi,u,t]$ as follows:
$\displaystyle\partial_{t}P[\psi,u,t]$ $\displaystyle=$ $\displaystyle i\int
dz\left(\frac{\delta}{\delta\psi(z)}G[\psi](z)-\frac{\delta}{\delta\psi^{*}(z)}G[\psi]^{*}(z)\right)P[\psi,u,t]-V[\psi]\frac{\partial}{\partial
u}P[\psi,u,t]$ (20) $\displaystyle-\Gamma[\psi]P[\psi,u,t]+\int D\phi
D\phi^{*}P[\phi,u,t]W[\phi|\psi].$
We are now close to the Feynman-Kac formula. Defining the Laplace transform of
$P[\psi,u,t]$ with respect to the variable $u$,
$\displaystyle\hat{K}[\psi,t]=\int due^{-\lambda u}P[\psi,u,t],$ (21)
and using Eq. (20), we derive an equation for the new functional:
$\displaystyle\partial_{t}\hat{K}[\psi,t]$ $\displaystyle=$ $\displaystyle
i\int
dz\left(\frac{\delta}{\delta\psi(z)}G[\psi](z)-\frac{\delta}{\delta\psi^{*}(z)}G[\psi]^{*}(z)\right)\hat{K}[\psi,t]+$
(22) $\displaystyle\int D\phi
D\phi^{*}(\hat{K}[\phi,t]W[\phi|\psi]-\hat{K}[\psi,t]W[\psi|\phi])-\lambda
V[\psi]\hat{K}[\psi,t].$
Because the Laplace transform $\Phi(\lambda)$ of the probability distribution
function $p(u,t)$ for the integral (14) is simply related to $\hat{K}[\psi,t]$
as
$\displaystyle\Phi(\lambda)=\int due^{-\lambda u}p(u,t)=\int D\psi
D\psi^{*}\hat{K}[\psi,t],$ (23)
once the latter is obtained by solving Eq. (22), we can calculate $p(u,t)$ by
an inverse Laplace transform of $\Phi(\lambda)$. Finally, because $p(u,t)$ can
be expressed as an ensemble average over the trajectories, Eq. (23) also leads
to
$\displaystyle\hat{K}[\psi,t]=\left\langle\delta[\psi-\psi(t)]e^{-\lambda\int_{0}^{t}dsV[\psi(s)]}\right\rangle.$
(24)
We refer to Eqs. (22) and (24) together as the Feynman-Kac formula of the PDP
in the Hilbert space; it is the quantum counterpart of the classical version
in the space of classical states Davis (1984).
## V Tilted Liouville-master equation
In quantum stochastic thermodynamics, the statistics of heat are of great
interest and play a central role Esposito _et al._ (2009). According to the
interpretation of quantum thermodynamics, the random jumps of wave functions
along quantum trajectories indicate that discrete amounts of heat (the energy
quanta) are released to or absorbed from the environment surrounding the
quantum systems Kurchan (2000); Breuer (2003); Dereziński _et al._ (2008);
Campisi _et al._ (2011); Horowitz (2012); Hekking and Pekola (2013); Liu and
Xi (2016); Liu (2018); Liu and Su (2020). We write the heat in a general form:
$\displaystyle Q[\Psi_{T}]$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{N}\omega_{\alpha_{i}}=t\sum_{\alpha=1}^{M}\omega_{\alpha}\left(\frac{1}{t}\sum_{i=1}^{N}\delta_{\alpha_{i},\alpha}\right),$
(25)
where $\omega_{\alpha_{i}}$ are some energy constants that are determined by
the target wave functions at time points $t_{i}$ of the jumps, e.g., the Bohr
frequencies Breuer _et al._ (2000); Liu and Xi (2016). The term in
parentheses in Eq. (25) is similar to Eq. (12); it is in fact an integral of
the latter, namely $\int D\phi D\phi^{*}F_{\phi,\alpha}[\Psi_{t}]$.
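As an illustration of Eq. (25), the sketch below counts heat quanta along quantum-jump trajectories for a hypothetical two-level system coupled to a thermal bath, with an emission channel releasing a quantum $\omega_{0}$ and an absorption channel costing $\omega_{0}$; the model and the rates are our assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical thermal two-level system (our assumption): channel 1 emits a
# quantum omega0 (A_1 = sigma_-, rate g*(nbar+1)); channel 2 absorbs one
# (A_2 = sigma_+, rate g*nbar).  Heat quanta per jump follow Eq. (25).
omega0, g, nbar = 1.0, 0.5, 0.3
rates = [g * (nbar + 1), g * nbar]
ops = [np.array([[0, 1], [0, 0]], complex),          # sigma_-: heat +omega0 released
       np.array([[0, 0], [1, 0]], complex)]          # sigma_+: heat -omega0 absorbed
quanta = [omega0, -omega0]

def heat_sample(T=20.0, dt=2e-2):
    """Heat Q[Psi_T] of one trajectory: the sum of quanta over the jumps."""
    psi = np.array([0, 1], complex)                  # start in the excited state
    Q = 0.0
    for _ in range(int(T / dt)):
        probs = [r * np.linalg.norm(A @ psi)**2 * dt for r, A in zip(rates, ops)]
        x = rng.random()
        if x < probs[0]:
            psi, Q = ops[0] @ psi, Q + quanta[0]
        elif x < probs[0] + probs[1]:
            psi, Q = ops[1] @ psi, Q + quanta[1]
        # between jumps psi stays a basis state in this model, so the
        # deterministic (renormalized) evolution leaves populations unchanged
        psi = psi / np.linalg.norm(psi)
    return Q

Q = [heat_sample() for _ in range(200)]
print(np.mean(Q))                                    # net heat released to the bath
```

For this model the jumps strictly alternate, so $Q\in\{0,\omega_{0}\}$ and its mean equals the probability of ending in the ground state, $(\bar n+1)/(2\bar n+1)\approx 0.81$ here.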
Now we can construct a differential equation for the heat. As in the previous
cases, we focus on its Laplace transform (or moment generating function)
rather than its distribution, $\Xi(\lambda)=\left\langle\exp(-\lambda
Q)\right\rangle$, which can be written explicitly as
$\displaystyle\sum_{\alpha_{0},\Psi_{t}}P[\psi_{\alpha_{0}}]\exp\left(-t\int
D\phi D\phi^{*}\Gamma[\phi]P_{\phi}[\Psi_{t}]+t\sum_{\alpha=1}^{M}\int D\phi
D\phi^{*}\ln\left(e^{-\lambda\omega_{\alpha}}k_{\alpha}[\phi]\right)F_{\phi,\alpha}[\Psi_{t}]\right).$
(26)
Eq. (26) inspires us to define an auxiliary open quantum system as follows:
its nonlinear Schrödinger equation is the same as Eq. (6), but the
rates of the jumps of wave functions are modified to
$\displaystyle
k^{\prime}_{\alpha}[\phi]=e^{-\lambda\omega_{\alpha}}k_{\alpha}[\phi].$ (27)
In the remainder of this paper, we always denote quantities in the auxiliary
open quantum system with a prime symbol unless stated otherwise. With these
notations, we can rewrite the Laplace transform of the heat as
$\displaystyle\Xi(\lambda)$ $\displaystyle=$
$\displaystyle\sum_{\alpha_{0},\Psi_{t}}P[\psi_{\alpha_{0}}]\mathbf{P^{\prime}}_{0}[\Psi_{t}]\frac{\mathbf{P}_{0}[\Psi_{t}]}{\mathbf{P^{\prime}}_{0}[\Psi_{t}]}e^{-\lambda
Q[\Psi_{t}]}$ (28) $\displaystyle=$ $\displaystyle\int D\phi
D\phi^{*}\left\langle\delta[\phi-\psi(t)]e^{-\int_{0}^{t}ds(\Gamma[\psi(s)]-\Gamma^{\prime}[\psi(s)])}\right\rangle^{\prime}.$
We note that the probability distribution functional
$\mathbf{P^{\prime}}_{0}[\Psi_{t}]$ of monitoring the trajectory $\Psi_{t}$ is
the same as Eq. (8) except that the rates therein are replaced by Eq. (27);
the same holds for the total rate $\Gamma^{\prime}[\psi]$ in Eq. (9). Defining the integrand
of Eq. (28) as $\hat{P}[\psi,t]$ and using the Feynman-Kac formula (22) and
(24), we immediately obtain
$\displaystyle\partial_{t}\hat{P}[\psi,t]$ $\displaystyle=$ $\displaystyle
i\int
dx\left(\frac{\delta}{\delta\psi(x)}G[\psi](x)-\frac{\delta}{\delta\psi^{*}(x)}G[\psi]^{*}(x)\right)\hat{P}[\psi,t]+$
(29) $\displaystyle\int D\phi
D\phi^{*}\left(\hat{P}[\phi,t]W^{\prime}[\phi|\psi]-\hat{P}[\psi,t]W[\psi|\phi]\right),$
where
$\displaystyle
W^{\prime}[\phi|\psi]=\sum_{\alpha=1}^{M}e^{-\lambda\omega_{\alpha}}k_{\alpha}[\phi]\delta\left[\frac{A_{\alpha}\phi}{\parallel
A_{\alpha}\phi\parallel}-\psi\right].$ (30)
We name Eq. (29) the tilted Liouville-master equation; it is the central
result of this paper. If $\lambda$ is zero, the equation reduces to the
Liouville-master equation (3). Hence, we have answered the first two
questions in Fig. 1.
### V.1 Tilted quantum master equation
Here, we demonstrate that Eq. (29) yields the tilted quantum master
equation Esposito _et al._ (2009), thereby answering the third question of
this paper. The procedure closely parallels the derivation of the quantum
master equation from the Liouville-master equation Breuer and Petruccione
(1995a, b, 1997a, 1997b). Let
us consider an operator $\hat{\rho}(t)$ that is defined as
$\displaystyle\hat{\rho}(z,z^{\prime},t)=\int D\psi
D\psi^{*}\hat{P}[\psi,t]\psi(z)\psi^{*}(z^{\prime})$ (31)
in the position representation. Taking the time derivative of both sides and
substituting Eq. (29), we have
$\displaystyle\partial_{t}\hat{\rho}(z,z^{\prime},t)$ $\displaystyle=$
$\displaystyle i\int D\psi D\psi^{*}\int
dx\psi(z)\psi^{*}(z^{\prime})\left(\frac{\delta}{\delta\psi(x)}G[\psi](x)-\frac{\delta}{\delta\psi^{*}(x)}G[\psi]^{*}(x)\right)\hat{P}[\psi,t]+$
(32) $\displaystyle\int D\psi D\psi^{*}\psi(z)\psi^{*}(z^{\prime})\int D\phi
D\phi^{*}\left(\hat{P}[\phi,t]W^{\prime}[\phi|\psi]-\hat{P}[\psi,t]W[\psi|\phi]\right).$
There are four terms on the right-hand side of Eq. (32). The first term is
equal to
$\displaystyle-i\int D\psi D\psi^{*}\int
dxG[\psi](x)\hat{P}[\psi,t]\left(\frac{\delta}{\delta\psi(x)}\psi(z)\psi^{*}(z^{\prime})\right)$
(33) $\displaystyle=$ $\displaystyle-i\int D\psi
D\psi^{*}\hat{P}[\psi,t]\psi^{*}(z^{\prime})G[\psi](z)$ $\displaystyle=$
$\displaystyle-i\int dy\int D\psi
D\psi^{*}\hat{P}[\psi,t]\psi^{*}(z^{\prime})\langle
z|\left(H-\frac{i}{2}\sum_{\alpha=1}^{M}r_{\alpha}A_{\alpha}^{\dagger}A_{\alpha}+\frac{i}{2}\sum_{\alpha=1}^{M}r_{\alpha}\parallel
A_{\alpha}\psi\parallel^{2}\right)|y\rangle\psi(y)$ $\displaystyle=$
$\displaystyle-i\int dy\langle
z|H|y\rangle\hat{\rho}(y,z^{\prime},t)-\frac{1}{2}\sum_{\alpha=1}^{M}r_{\alpha}\int
dy\langle
z|A_{\alpha}^{\dagger}A_{\alpha}|y\rangle\hat{\rho}(y,z^{\prime},t)+$
$\displaystyle\frac{1}{2}\sum_{\alpha=1}^{M}r_{\alpha}\int D\psi
D\psi^{*}\hat{P}[\psi,t]\psi(z)\psi^{*}(z^{\prime})\parallel
A_{\alpha}\psi\parallel^{2}.$
The first equality in Eq. (33) follows from functional integration by parts.
Using the properties $\delta\psi(z)/\delta\psi(x)=\delta(z-x)$ and
$\delta\psi^{*}(z)/\delta\psi(x)=0$, we arrive at the second equality; the
third follows from inserting Eq. (4). Performing
similar calculations, the second term of Eq. (32) is
$\displaystyle i\int dy\hat{\rho}(z,y,t)\langle
y|H|z^{\prime}\rangle-\frac{1}{2}\sum_{\alpha=1}^{M}r_{\alpha}\int
dy\hat{\rho}(z,y,t)\langle
y|A_{\alpha}^{\dagger}A_{\alpha}|z^{\prime}\rangle+$
$\displaystyle\frac{1}{2}\sum_{\alpha=1}^{M}r_{\alpha}\int D\psi
D\psi^{*}\hat{P}[\psi,t]\psi(z)\psi^{*}(z^{\prime})\parallel
A_{\alpha}\psi\parallel^{2}.$ (34)
By inserting Eq. (30), we can rewrite the third term as
$\displaystyle\sum_{\alpha=1}^{M}r_{\alpha}e^{-\lambda\omega_{\alpha}}\int
D\psi D\psi^{*}\psi(z)\psi^{*}(z^{\prime})\int D\phi
D\phi^{*}\hat{P}[\phi,t]\parallel
A_{\alpha}\phi\parallel^{2}\delta\left[\frac{A_{\alpha}\phi}{\parallel
A_{\alpha}\phi\parallel}-\psi\right]$ (35) $\displaystyle=$
$\displaystyle\sum_{\alpha=1}^{M}r_{\alpha}e^{-\lambda\omega_{\alpha}}\int
D\phi D\phi^{*}\langle
z|A_{\alpha}|\phi\rangle\hat{P}[\phi,t]\langle\phi|A^{\dagger}_{\alpha}|z^{\prime}\rangle$
$\displaystyle=$
$\displaystyle\sum_{\alpha=1}^{M}r_{\alpha}e^{-\lambda\omega_{\alpha}}\int
dydy^{\prime}\langle z|A_{\alpha}|y\rangle\hat{\rho}(y,y^{\prime},t)\langle
y^{\prime}|A^{\dagger}_{\alpha}|z^{\prime}\rangle.$
The fourth term is relatively simple:
$\displaystyle-\sum_{\alpha=1}^{M}r_{\alpha}\int D\psi
D\psi^{*}\hat{P}[\psi,t]\psi(z)\psi^{*}(z^{\prime})\parallel
A_{\alpha}\psi\parallel^{2}.$ (36)
Eqs. (33)-(36) appear tedious. However, substituting them into the right-hand
side of Eq. (32) yields a concise differential equation for the operator
$\hat{\rho}(t)$:
$\displaystyle\partial_{t}\hat{\rho}(t)=-i[H,\hat{\rho}(t)]+\sum_{\alpha=1}^{M}r_{\alpha}\left(e^{-\lambda\omega_{\alpha}}A_{\alpha}\hat{\rho}(t)A^{\dagger}_{\alpha}-\frac{1}{2}\left\\{A^{\dagger}_{\alpha}A_{\alpha},\hat{\rho}(t)\right\\}\right).$
(37)
Accordingly, an alternative expression for the Laplace transform of the heat
is obtained: $\Xi(\lambda)={\rm Tr}[\hat{\rho}(t)]$. Eq. (37) is nothing but
the tilted quantum master equation (TQME) Esposito _et al._ (2009). Analogous
to the relationship between Eqs. (1) and (3), we call Eq. (29) the unraveling
of Eq. (37) in the Hilbert space. Because it is free of abstract functional
derivatives, Eq. (37) is more convenient for calculating and analyzing the
statistics of the random heat (25).
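The statistics encoded in Eq. (37) can be probed numerically. The sketch below integrates the tilted generator for a hypothetical thermal two-level system (our assumed model and rates) and extracts the mean heat from $\Xi(\lambda)={\rm Tr}[\hat{\rho}(t)]$ via $\langle Q\rangle=-\partial_{\lambda}\Xi|_{\lambda=0}$:

```python
import numpy as np

# Tilted Lindblad generator of Eq. (37) for a hypothetical thermal two-level
# system (our assumed model): each jump term A rho A^dag is weighted by
# exp(-lambda * omega_alpha), with omega_alpha the emitted/absorbed quantum.
omega0, g, nbar, T = 1.0, 0.5, 0.3, 20.0
H = 0.5 * omega0 * np.diag([-1.0, 1.0]).astype(complex)
channels = [(g * (nbar + 1), np.array([[0, 1], [0, 0]], complex),  omega0),
            (g * nbar,       np.array([[0, 0], [1, 0]], complex), -omega0)]

def Xi(lam, dt=1e-3):
    """Laplace transform of the heat: Xi(lambda) = Tr[rho_hat(T)]."""
    rho = np.diag([0.0, 1.0]).astype(complex)        # start in the excited state
    for _ in range(int(T / dt)):
        drho = -1j * (H @ rho - rho @ H)
        for r, A, w in channels:
            AdA = A.conj().T @ A
            drho += r * (np.exp(-lam * w) * A @ rho @ A.conj().T
                         - 0.5 * (AdA @ rho + rho @ AdA))
        rho = rho + dt * drho                        # simple Euler integration
    return rho.trace().real

eps = 1e-4
mean_Q = -(Xi(eps) - Xi(-eps)) / (2 * eps)           # <Q> = -d Xi/d lambda at 0
print(Xi(0.0), mean_Q)
```

Here the mean heat equals the energy released during relaxation, $\omega_{0}[p_{e}(0)-p_{e}(T)]\approx 0.81$ for these assumed parameters, and $\Xi(0)=1$ reflects trace preservation at $\lambda=0$.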
## VI Conclusion
We have derived the tilted Liouville-master equation in the Hilbert space and
shown that it yields the tilted quantum master equation of open quantum
systems.
###### Acknowledgements.
We thank Prof. Hong Qian and Dr. Carollo for inspiring discussions.
This work was supported by the National Science Foundation of China under
Grant No. 11174025 and No. 11575016.
## References
* Esposito _et al._ (2009) M. Esposito, U. Harbola, and S. Mukamel, Rev. Mod. Phys. 81, 1665 (2009).
* Campisi _et al._ (2011) M. Campisi, P. Hänggi, and P. Talkner, Rev. Mod. Phys. 83, 771 (2011).
* Alicki and Kosloff (2018) R. Alicki and R. Kosloff, in _Fundamental Theories of Physics_ (Springer International Publishing, 2018) pp. 1–33.
* Liu (2018) F. Liu, Prog. Phys. 38, 1 (2018).
* Kurchan (2000) J. Kurchan, arXiv preprint cond-mat/0007360 (2000).
* Breuer (2003) H.-P. Breuer, Phys. Rev. A 68, 032105 (2003).
* Talkner _et al._ (2007) P. Talkner, E. Lutz, and P. Hänggi, Phys. Rev. E 75, 050102 (2007).
* De Roeck (2007) W. De Roeck, C. R. Phys. 8, 674 (2007).
* Talkner _et al._ (2008) P. Talkner, P. S. Burada, and P. Hänggi, Phys. Rev. E 78, 011115 (2008).
* Crooks (2008) G. E. Crooks, Phys. Rev. A 77, 034101 (2008).
* Garrahan and Lesanovsky (2010) J. P. Garrahan and I. Lesanovsky, Phys. Rev. Lett. 104, 160601 (2010).
* Subaşı and Hu (2012) Y. Subaşı and B. L. Hu, Phys. Rev. E 85, 011112 (2012).
* Horowitz (2012) J. M. Horowitz, Phys. Rev. E 85, 031110 (2012).
* Hekking and Pekola (2013) F. W. J. Hekking and J. P. Pekola, Phys. Rev. Lett. 111, 093602 (2013).
* Leggio _et al._ (2013) B. Leggio, A. Napoli, A. Messina, and H.-P. Breuer, Phys. Rev. A 88, 042111 (2013).
* Horowitz and Parrondo (2013) J. M. Horowitz and J. M. Parrondo, New J. Phys. 15, 085028 (2013).
* Žnidarič (2014) M. Žnidarič, Phys. Rev. Lett. 112, 040602 (2014).
* Verley _et al._ (2014) G. Verley, T. Willaert, C. V. D. Broeck, and M. Esposito, Nature Commun. 5, 4721 (2014).
* Gasparinetti _et al._ (2014) S. Gasparinetti, P. Solinas, A. Braggio, and M. Sassetti, New J. Phys. 16, 115001 (2014).
* Cuetara _et al._ (2015) G. B. Cuetara, A. Engel, and M. Esposito, New J. Phys. 17, 055002 (2015).
* Carrega _et al._ (2015) M. Carrega, P. Solinas, A. Braggio, M. Sassetti, and U. Weiss, New J. 17, 045030 (2015).
* Manzano _et al._ (2016) G. Manzano, F. Galve, R. Zambrini, and J. M. R. Parrondo, Phys. Rev. E 93, 052120 (2016).
* Suomela _et al._ (2016) S. Suomela, A. Kutvonen, and T. Ala-Nissila, Phys. Rev. E 93, 062106 (2016).
* Liu and Xi (2016) F. Liu and J. Xi, Phys. Rev. E 94, 062133 (2016).
* Strasberg _et al._ (2017) P. Strasberg, G. Schaller, T. Brandes, and M. Esposito, Phys. Rev. X 7, 021003 (2017).
* Wang _et al._ (2017) C. Wang, J. Ren, and J. Cao, Phys. Rev. A 95, 023610 (2017).
* Restrepo _et al._ (2018) S. Restrepo, J. Cerrillo, P. Strasberg, and G. Schaller, New J. Phys. 20, 053063 (2018).
* Carollo _et al._ (2018) F. Carollo, J. P. Garrahan, I. Lesanovsky, and C. Pérez-Espigares, Phys. Rev. A 98, 010103(R) (2018).
* Carollo _et al._ (2019) F. Carollo, R. L. Jack, and J. P. Garrahan, Phys. Rev. Lett. 122, 130605 (2019).
* Liu and Su (2020) F. Liu and S. Su, Phys. Rev. E 101, 062144 (2020).
* Silaev _et al._ (2014) M. Silaev, T. T. Heikkilä, and P. Virtanen, Phys. Rev. E 90, 022103 (2014).
* Alicki _et al._ (2006) R. Alicki, D. A. Lidar, and P. Zanardi, Phys. Rev. A 73, 052311 (2006).
* Rivas and Huelga (2012) A. Rivas and S. F. Huelga, _Open Quantum Systems_ (Springer Berlin Heidelberg, 2012).
* Breuer _et al._ (2000) H. P. Breuer, W. Huber, and F. Petruccione, Phys. Rev. E 61, 4883 (2000).
* Mollow (1975) B. R. Mollow, Phys. Rev. A 12, 1919 (1975).
* Carmichael _et al._ (1989) H. J. Carmichael, S. Singh, R. Vyas, and P. R. Rice, Phys. Rev. A 39, 1200 (1989).
* Gardiner and Zoller (2004) C. Gardiner and P. Zoller, _Quantum noise: a handbook of Markovian and non-Markovian quantum stochastic methods with applications to quantum optics_ , Vol. 56 (Springer, 2004).
* Kist _et al._ (1999) T. B. L. Kist, M. Orszag, T. A. Brun, and L. Davidovich, J. Opt. B: Quantum Semiclass. Opt. 1, 251 (1999).
* Davis (1984) M. H. A. Davis, J. R. Statist. Soc. B 46, 353 (1984).
* Breuer and Petruccione (1995a) H.-P. Breuer and F. Petruccione, Z. Phys. B 98, 139 (1995a).
* Breuer and Petruccione (1995b) H.-P. Breuer and F. Petruccione, Phys. Rev. E 52, 428 (1995b).
* Breuer and Petruccione (1997a) H.-P. Breuer and F. Petruccione, Phys. Rev. A 55, 3101 (1997a).
* Breuer and Petruccione (1997b) H.-P. Breuer and F. Petruccione, Fortschr. Phys. 45, 39 (1997b).
* Davies (1974) E. B. Davies, Comm. Math. Phys. 39, 91 (1974).
* Lindblad (1976) G. Lindblad, Comm. Math. Phys. 48, 119 (1976).
* Gorini _et al._ (1976) V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, J. Math. Phys. 17, 821 (1976).
* Wiseman and Milburn (2010) H. M. Wiseman and G. J. Milburn, _Quantum measurement and control_ (Cambridge University Press, 2010).
* Plenio and Knight (1998) M. Plenio and P. Knight, Rev. Mod. Phys. 70, 101 (1998).
* Carollo _et al._ (2021) F. Carollo, J. P. Garrahan, and R. L. Jack, arXiv:2101.04138v1 (2021).
* Dereziński _et al._ (2008) J. Dereziński, W. De Roeck, and C. Maes, J. Stat. Phys. 131, 341 (2008).
Corresponding author: Chengcheng Xu (e-mail: xuchengcheng17@nudt.edu.cn).
# Multi-Antenna Joint Radar and Communications: Precoder Optimization and
Weighted Sum-Rate vs Probing Power Tradeoff
CHENGCHENG XU1, BRUNO CLERCKX2, AND JIANYUN ZHANG1
1College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China
2Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, United Kingdom (e-mail<EMAIL_ADDRESS>
###### Abstract
In order to further exploit the potential of joint multi-antenna radar-
communication (RadCom) system, we propose two transmission techniques
respectively based on separated and shared antenna deployments. Both
techniques are designed to maximize weighted sum rate (WSR) and probing power
at target’s location under average power constraints at antennas such that the
system can simultaneously communicate with downlink users and detect the
target within the same frequency band. Based on a Weighted Minimized Mean
Square Errors (WMMSE) method, the separated deployment transmission is
designed via semidefinite programming (SDP) while the shared deployment
problem is solved by majorization-minimization (MM) algorithm. Numerical
results show that shared deployment outperforms separated deployment in radar
beamforming. The tradeoffs between WSR and probing power at target are
compared among both proposed transmissions and two practically simpler dual-
function implementations i.e., time division and frequency division. Results
show that although separated deployment has an advantage of realizing spectrum
sharing, it experiences a performance loss compared with frequency division,
while shared deployment outperforms both and surpasses time division in
certain conditions.
###### Index Terms:
Radar-communication, weighted sum rate, MIMO radar, beamforming
## I Introduction
The 4th and 5th generation wireless communication systems are competing with
long-range radar applications in the S-band (2-4 GHz) and C-band (4-8 GHz),
which may result in severe spectrum congestion and hamper the ever-higher
data rates demanded by future wireless
communication [1]. Though efforts for new spectrum management regulations and
policies are needed, a longer-term solution is to enable communication and
radar spectrum sharing (CRSS). There are two main research topics in the field
of CRSS: 1) coexistence of existing radar and communication devices, 2) co-
design for dual-function systems.
### I-A Coexistence of Existing Radar and Communication Devices
For coexistence of existing radar and communication devices, research focuses
on designing high-quality wideband radar waveforms that achieve spectrum nulls
on communication frequency bands [2]. On this basis, [3, 4] then design
waveforms with more accurate spectrum shapes for higher spectrum efficiency.
However, instead of designing the radar waveforms only, [5] jointly designs
communication precoders together with the slow-time radar coding waveforms to
ensure both radar Signal-to-Interference-plus-Noise Ratio (SINR) and communication
rate requirements. Nevertheless, all the aforementioned works are limited to
single-antenna radar systems. As multi-antenna processing can greatly improve
radar performance[6], [7] extends the spectral constraint towards Multiple-
Input Multiple-Output (MIMO) radar waveform design and enables MIMO radar to
work in a spectrally crowded environment. In contrast, [8] designs the
precoder of the multi-user MIMO (MU-MIMO) communication base station (BS) to
coexist with the MIMO radar. Instead of designing the radar or communication
system solely, [9] and [10] have been devoted to the coordinated design of
both existing MIMO communication systems and MIMO radar systems to achieve
coexistence. Given the existing infrastructure, a coexistence approach manages
interference between radar and communication as much as it can. However, for
uncoordinated coexistence design, some important phenomena are not considered
in the simplified scenarios [11], while for coordinated coexistence,
governmental and military agencies might be unwilling to upgrade the existing
deployment [12].
### I-B Dual-function System Design
Given the aforementioned drawbacks, designing a dual-function system that
makes the best use of the spectrum for both detection and communication might
be a better alternative. Early studies [13, 14] consider
single-antenna dual-function platforms without utilizing multi-antenna
processing. Then, based on the waveform diversity of MIMO radar and the
concept of space-division multiple access (SDMA) in MIMO communication, [15,
16] embed the information stream into radar pulses via a multi-antenna
platform, detecting targets at the mainlobe and transmitting information
streams at sidelobe. The sidelobe level is modulated via amplitude shift
keying (ASK), where different powers correspond to different communication
symbols in [15]. Likewise, [16] also develops phase shift keying (PSK) in this
system by representing the symbols as the different phases of the signals
received at the angle of the sidelobe. One significant restriction of such a
dual-function system is that the rate is limited by the Pulse Repetition
Frequency (PRF), which is far from satisfactory for communication
requirements. To overcome this problem, [17] proposes a joint multi-antenna
radar-communication (RadCom) system defined as a dual-function platform
simultaneously transmitting probing signals to radar targets and serving
multiple downlink users. Both functions are realized within the same frequency
band. Specifically, two antenna deployments are mentioned. Separated
deployment splits the antennas into two groups respectively working as MIMO
radar and BS, while the shared deployment only transmits communication streams
and the precoders are designed to form a desired radar beampattern and meet
the SINR requirements for communication users. However, that work adopts only
the communication SINR as a metric and does not consider a more
representative performance metric such as rate. In addition, the performance
of joint RadCom has not been compared with practically simpler
implementations that use orthogonal resources in time or frequency to fulfill
the dual function, a comparison that is essential for deciding whether joint
RadCom is worth the effort.
### I-C Contribution
In this paper, we propose two multi-antenna RadCom transmission design
techniques based on separated and shared antenna deployments respectively.
Both techniques enable the platform to simultaneously communicate with
downlink users and probe one target of interest within the same frequency
band. Major contributions are summarized as follows.
1. 1.
We propose transmission techniques that maximize the weighted sum rate (WSR)
of communication and the probing power at target’s location for both separated
and shared deployments.
Since WSR is the most representative metric of a communication system, we
consider WSR maximization instead of the per-user SINR constraints of [17].
To the best of our knowledge, we are the first to consider WSR maximization
in this precoded system model. We also maximize the probing power at the
target's location directly, rather than turning this metric into a
beampattern approximation problem as in [17]. This makes our transmission
design more direct for the typical MIMO radar tracking and scanning modes,
and enables a clearer tradeoff comparison. However, adopting WSR and probing
power at the target makes the transmission design problem difficult:
maximizing WSR, a sum of logarithms, and probing power, a quadratic form,
under power constraints renders the optimization problem highly non-convex
and intractable.
2. 2.
We propose WMMSE-SDP and WMMSE-MM algorithms respectively to solve the two
proposed transmission design problems.
For the separated deployment, we propose to reformulate the problem into
semidefinite programming (SDP) based on the Weighted Minimum Mean Square
Error (WMMSE) method. For the shared deployment, the non-convex per-antenna
power constraint makes the design problem even more difficult. We propose a
majorization-minimization (MM) iterative algorithm based on WMMSE to
effectively solve the problem.
3. 3.
We compare the performance of the proposed transmission techniques with
practically simpler time-division and frequency-division dual-function
implementations.
In order to provide a well-rounded evaluation of our proposed techniques, we
compare the tradeoffs of both separated and shared multi-antenna RadCom
deployments with time-division and frequency-division dual-function
implementations, which are also practical options owing to their simple
realization. The separated deployment has the advantage of realizing spectrum
sharing compared with frequency division, but is surpassed by the latter in
tradeoff performance. In contrast, the shared deployment outperforms
frequency division with a significant tradeoff gain, and exceeds time
division under certain conditions.
### I-D Organization
The rest of the paper is organized as follows. The system models and metrics
of both separated and shared deployments are illustrated in Section II. In
Section III, optimization problems of transmission designs for both
deployments are formulated. Algorithms for solving the optimization problems
are subsequently presented in Section IV. Section V demonstrates the
simulation results and analysis. Section VI concludes the paper.
## II System Model and Metrics
In this work, we adopt the separated and shared deployment models of [17],
where either deployment works simultaneously as a BS serving downlink users
and a collocated MIMO radar probing the target of interest. Both deployments
are equipped with a total of $N_{\text{t}}$ antennas, serving $K$ single-
antenna users indexed as $\mathcal{K}=\\{1,\dots,K\\}$. Typically, we assume
that both deployments use a uniform linear array (ULA) in our system model.
The total power budget for either deployment is $P_{\text{t}}$. We assume that
the RadCom system works in a tracking mode as a radar, where there is
typically one target of interest at the azimuth angle of $\theta_{m}$ [18].
Beamforming is thus expected.
(a) Schematic diagram for separated deployment
(b) Schematic diagram for shared deployment
Figure 1: Schematic diagram for separated and shared multi-antenna joint
RadCom
### II-A Separated deployment
The separated deployment splits the antennas into two groups, i.e., a group of
$N_{\text{tr}}$ antennas only transmitting radar signals and the other group
of $N_{\text{tc}}$ antennas only transmitting communication signals. Both the
communication precoders and the radar signals are designed to fulfill the dual
function. The schematic diagram is shown in Fig. 1(a).
The received signal at user-$k$ can be expressed as
$y^{\text{S}}_{k}[l]={\bf h}_{k}^{H}\sum_{j\in\mathcal{K}}{\bf
p}_{j}{s}_{j}[l]+{\bf f}_{k}^{H}{\bf r}_{l}+n_{k}[l]$ (1)
where ${\bf h}_{k}\in\mathbb{C}^{N_{\text{tc}}\times 1}$ and ${\bf
f}_{k}\in\mathbb{C}^{N_{\text{tr}}\times 1}$ are respectively the non-line-of-
sight (NLOS) channel vectors from communication antennas and radar antennas to
user-$k$. $s_{j}[l]$ and $n_{j}[l]\sim\mathcal{CN}(0,\sigma_{n}^{2})$ are the
communication symbols and receiving noise of user-$j$ at the time index $l$.
Without loss of generality, we assume $\sigma_{n}^{2}=1$. ${\bf
p}_{j}\in\mathbb{C}^{N_{\text{tc}}\times 1}$ is the precoder for user-$j$, and
${\bf r}_{l}\in\mathbb{C}^{N_{\text{tr}}\times 1}$ is the $l$th snapshot of
radar antennas. The covariance matrix of the transmitted radar signal is
${\bf R}_{\text{x}}=\frac{1}{L}\sum_{l=1}^{L}{\bf r}_{l}{\bf r}_{l}^{H}$,
with $L$ being the length of the signal on the fast-time axis.
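As a quick numerical illustration, averaging the outer products of $L$ i.i.d. snapshots recovers the underlying covariance; the identity covariance and the sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sample covariance of the radar snapshots: Rx = (1/L) * sum_l r_l r_l^H.
# The true covariance (identity) is an illustrative assumption.
N_tr, L = 4, 1000
r = (rng.standard_normal((N_tr, L)) + 1j * rng.standard_normal((N_tr, L))) / np.sqrt(2)
Rx = r @ r.conj().T / L                        # averages the outer products
print(np.linalg.norm(Rx - np.eye(N_tr)))       # estimation error shrinks with L
```

By construction `Rx` is Hermitian and positive semidefinite, as a covariance matrix must be.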
We first introduce WSR as the communication metric. The SINR of decoding
$s_{k}$ at user-$k$ is
$\gamma^{\text{S}}_{k}({\bf P},{\bf R}_{\text{x}})=\frac{\left|{\bf
h}_{k}^{H}{\bf p}_{k}\right|^{2}}{\sum_{j\in\mathcal{K},j\neq k}\left|{\bf
h}_{k}^{H}{\bf p}_{j}\right|^{2}+{\bf f}_{k}^{H}{\bf R}_{\text{x}}{\bf
f}_{k}+1},\forall k\in\mathcal{K}$ (2)
where ${\bf P}=[{\bf p}_{1},\dots,{\bf{p}}_{K}]$ is the precoder matrix of the
separated deployment. Therefore, the achievable rate at user-$k$ in the
separated deployment can be denoted as
$R^{\text{S}}_{k}({\bf P},{\bf
R}_{\text{x}})=\log_{2}\left(1+\gamma^{\text{S}}_{k}({\bf P},{\bf
R}_{\text{x}})\right).$ (3)
Denoting the rate weight of user-$k$ as $\mu_{k}$, WSR of the separated
deployment is $\sum_{k\in\mathcal{K}}\mu_{k}R^{\text{S}}_{k}({\bf P},{\bf
R}_{\text{x}})$.
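As a sanity check of Eqs. (2) and (3), the following sketch evaluates the SINR and the WSR for randomly drawn channels, precoders, and radar covariance; all dimensions and draws are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative dimensions (our assumption): N_tc comm antennas, N_tr radar
# antennas, K users; columns of H, F, P are h_k, f_k, p_k of Eqs. (1)-(2).
N_tc, N_tr, K = 4, 4, 2
H = rng.standard_normal((N_tc, K)) + 1j * rng.standard_normal((N_tc, K))
F = rng.standard_normal((N_tr, K)) + 1j * rng.standard_normal((N_tr, K))
P = rng.standard_normal((N_tc, K)) + 1j * rng.standard_normal((N_tc, K))
G = rng.standard_normal((N_tr, N_tr)) + 1j * rng.standard_normal((N_tr, N_tr))
Rx = G @ G.conj().T / N_tr                     # a valid (PSD) radar covariance
mu = np.ones(K)                                # rate weights mu_k

def wsr(P, Rx):
    """Weighted sum rate, Eq. (3), with the SINR of Eq. (2) and sigma_n^2 = 1."""
    total = 0.0
    for k in range(K):
        sig = abs(H[:, k].conj() @ P[:, k])**2
        interf = sum(abs(H[:, k].conj() @ P[:, j])**2 for j in range(K) if j != k)
        radar = (F[:, k].conj() @ Rx @ F[:, k]).real
        total += mu[k] * np.log2(1 + sig / (interf + radar + 1))
    return total

print(wsr(P, Rx))                              # WSR in bit/s/Hz
```

Scaling up the radar covariance strictly lowers the WSR, which is exactly the radar-to-communication interference tradeoff studied in the paper.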
Then, we consider that our dual-function system works as a MIMO radar whose
target location is known or estimated, and we aim at maximizing the cumulated
power of the probing signals at the target location $\theta_{m}$. This is a
simple and classic MIMO radar beamforming scenario, as illustrated in [19]. The
probing power is
$P^{\text{S}}_{\text{T}}(\theta_{m})={\bf a}^{H}(\theta_{m}){\bf
C}_{\text{t}}{\bf a}(\theta_{m})$ (4)
where ${\bf a}(\theta_{m})=[1,e^{j2\pi\delta\sin(\theta_{m})},\dots,e^{j2\pi(N_{\text{t}}-1)\delta\sin(\theta_{m})}]^{T}\in\mathbb{C}^{N_{\text{t}}\times 1}$ is the transmit steering vector of the ULA, and $\delta$ is the normalized distance (relative to
wavelength) between adjacent array elements. For other array structures, the
expression of ${\bf a}(\theta_{m})$ needs to be changed. ${\bf
C}_{\text{t}}\in\mathbb{C}^{N_{\text{t}}\times N_{\text{t}}}$ is the overall
transmit covariance matrix. Assuming the radar signals are statistically independent of the communication signals, we have
${\bf C}_{\text{t}}=\left[\begin{array}[]{cc}{\bf R}_{\text{x}}&{\bf 0}\\\
{\bf 0}&{\bf P}{\bf P}^{H}\end{array}\right].$ (5)
Thus (4) is reformulated as $P^{\text{S}}_{\text{T}}(\theta_{m})={\bf
a}_{1}^{H}(\theta_{m}){\bf R}_{\text{x}}{\bf a}_{1}(\theta_{m})+{\bf
a}_{2}^{H}(\theta_{m}){\bf P}{\bf P}^{H}{\bf a}_{2}(\theta_{m})$ where ${\bf
a}_{1}(\theta_{m})\in\mathbb{C}^{N_{\text{tr}}\times 1}$ and ${\bf
a}_{2}(\theta_{m})\in\mathbb{C}^{N_{\text{tc}}\times 1}$ satisfy ${\bf
a}(\theta_{m})=[{\bf a}_{1}(\theta_{m});{\bf a}_{2}(\theta_{m})]$.
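The split of (4) via the block-diagonal covariance (5) can be verified numerically. The sketch below assumes a half-wavelength ULA and random ${\bf R}_{\text{x}}$ and ${\bf P}$ (hypothetical data), and checks that ${\bf a}^{H}{\bf C}_{\text{t}}{\bf a}$ equals the two-term form given after (5).

```python
import numpy as np

Ntc, Ntr = 8, 8
Nt = Ntc + Ntr
delta = 0.5                     # half-wavelength spacing
theta = np.deg2rad(10.0)        # assumed target angle

# ULA steering vector per (4); partition a = [a1; a2] with a1 for the radar sub-array
a = np.exp(1j * 2 * np.pi * delta * np.arange(Nt) * np.sin(theta))
a1, a2 = a[:Ntr], a[Ntr:]

rng = np.random.default_rng(1)
B = rng.standard_normal((Ntr, Ntr)) + 1j * rng.standard_normal((Ntr, Ntr))
Rx = B @ B.conj().T             # PSD radar covariance
P = rng.standard_normal((Ntc, 4)) + 1j * rng.standard_normal((Ntc, 4))

# Block-diagonal overall covariance (5) and probing power (4)
Ct = np.block([[Rx, np.zeros((Ntr, Ntc))],
               [np.zeros((Ntc, Ntr)), P @ P.conj().T]])
pt_full = np.real(a.conj() @ Ct @ a)
pt_split = np.real(a1.conj() @ Rx @ a1 + a2.conj() @ P @ P.conj().T @ a2)
```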
### II-B Shared Deployment
For the shared deployment, $N_{\text{t}}$ antennas all transmit precoded
communication streams only and fulfill the dual function. The schematic
diagram is shown in Fig. 1(b). In this deployment, only the precoders are
designed. The received signal at user-$k$ is
$y^{\text{J}}_{k}[l]=\check{\bf h}_{k}^{H}\sum_{j\in\mathcal{K}}\check{\bf
p}_{j}{s}_{j}[l]+n_{k}[l]$ (6)
where $\check{\bf h}_{k}\in\mathbb{C}^{N_{\text{t}}\times 1}$ is the channel vector between the shared system and user-$k$. Unlike the separated deployment, there is no interference caused by radar signals, and the SINR at user-$k$ is
$\gamma^{\text{J}}_{k}(\check{\bf P})=\frac{\left|\check{\bf
h}_{k}^{H}\check{\bf p}_{k}\right|^{2}}{\sum_{j\in\mathcal{K},j\neq
k}\left|\check{\bf h}_{k}^{H}\check{\bf p}_{j}\right|^{2}+1},\forall
k\in\mathcal{K}$ (7)
where $\check{\bf P}=[\check{\bf p}_{1},\dots,\check{\bf{p}}_{K}]$ is the
precoder matrix for the shared deployment. Thus, the rate at user-$k$ is
$R^{\text{J}}_{k}(\check{\bf
P})=\log_{2}\left(1+\gamma^{\text{J}}_{k}(\check{\bf P})\right)$ (8)
and WSR of the shared deployment is
$\sum_{k\in\mathcal{K}}\mu_{k}R^{\text{J}}_{k}(\check{\bf P})$.
In the shared deployment, probing power at the target location $\theta_{m}$ is
$P^{\text{J}}_{\text{T}}(\theta_{m})={\bf a}^{H}(\theta_{m})\check{\bf
P}\check{\bf P}^{H}{\bf a}(\theta_{m})$. Although we do not focus on radar matched filter design in this work, the transmit signal of the shared deployment coincides with the transmit model of the collocated MIMO radar in [18], whose matched filter settings for radar detection can be adopted as an option. It is also derived in [18] that maximizing the output signal-to-noise ratio (SNR) of the detector is equivalent to maximizing $P^{\text{J}}_{\text{T}}(\theta_{m})$.
## III Problem Formulation
The transmission design problem for the separated deployment can be expressed
as
$\displaystyle\max\limits_{{\bf P},{\bf R}_{\text{x}}}\quad$
$\displaystyle\rho\sum_{k\in\mathcal{K}}\mu_{k}R^{\text{S}}_{k}({\bf P},{\bf
R}_{\text{x}})+{\bf a}_{1}^{H}(\theta_{m}){\bf R}_{\text{x}}{\bf
a}_{1}(\theta_{m})$ $\displaystyle+{\bf a}_{2}^{H}(\theta_{m}){\bf P}{\bf
P}^{H}{\bf a}_{2}(\theta_{m})$ (9a) $\displaystyle s.t.\quad$
$\displaystyle\text{diag}({\bf R}_{\text{x}})=\frac{P_{\text{r}}{\bf
1}^{N_{\text{tr}}\times 1}}{N_{\text{tr}}}$ (9b) $\displaystyle\text{tr}({\bf
P}{\bf P}^{H})\leq P_{\text{c}}$ (9c) $\displaystyle{\bf R}_{\text{x}}\succeq
0$ (9d)
where ${\bf P}$ contains the precoders of the communication streams, ${\bf R}_{\text{x}}$ is the transmit covariance matrix of the radar signals, and $P_{\text{c}}$ and $P_{\text{r}}$ are the transmit power budgets of the communication and radar sub-arrays respectively. The first part of the objective function (9a) represents WSR while the remaining terms denote the probing power at the target. Both metrics are maximized jointly via regularization with a parameter $\rho$.
Although the communication and radar signals are transmitted separately by the two sub-systems, they affect each other when operating simultaneously: radar signals cause interference to communication users, while communication signals are expected to help probe the target. Constraint (9b) is the uniform elemental power constraint in radar implementation [19], and (9c) is the total power constraint in communication implementation. (9d) restricts ${\bf R}_{\text{x}}$ to be positive semi-definite.
Alternatively, the dual function can also be fulfilled by the shared deployment, for which the transmission design problem can be formulated as
$\begin{split}\max\limits_{\check{\bf
P}}\quad&\rho\sum_{k\in\mathcal{K}}\mu_{k}R^{\text{J}}_{k}(\check{\bf P})+{\bf
a}^{H}(\theta_{m})\check{\bf P}\check{\bf P}^{H}{\bf a}(\theta_{m})\\\
s.t.\quad&\text{diag}(\check{\bf P}\check{\bf P}^{H})=\frac{P_{\text{t}}{\bf
1}^{N_{\text{t}}\times 1}}{N_{\text{t}}}.\end{split}$ (10)
Likewise, WSR and probing power maximization are combined in the objective
function via regularization. There is also an elemental power constraint for
all antennas. The total power budget constraint for communication is omitted
because it is certainly satisfied when the elemental power restriction is met.
It should be pointed out that the maximum-probing-power design approach used in (9) and (10) might have drawbacks when extended to the scenario of multiple targets, according to [19].
## IV WMMSE-based Solving Algorithms
Both the separated and shared transmission design problems (9) and (10) are non-convex because of the intractable form of WSR and the maximization of a quadratic power function in the objectives. However, they can be reformulated using the WMMSE approach and solved through WMMSE-based Alternating Optimization (WMMSE-AO) algorithms following [20].
### IV-A WMMSE-SDP algorithm for separated transmission
We decode $s_{k}$ at user-$k$ via an equalizer $g_{k}$, and obtain the estimate $\hat{s}_{k}$ of $s_{k}$ as $\hat{s}_{k}=g_{k}y^{\text{S}}_{k}$. Subsequently, the mean square error (MSE) of the estimate, defined as
$\varepsilon_{k}\triangleq\mathbb{E}\left\\{\left|\hat{s}_{k}-s_{k}\right|^{2}\right\\}$,
can be expressed as
$\varepsilon_{k}=\left|g_{k}\right|^{2}T_{k}-2\mathfrak{R}\left\\{g_{k}{\bf
h}_{k}^{H}{\bf p}_{k}\right\\}+1$ where
$T_{k}\triangleq\sum_{j\in\mathcal{K}}\left|{\bf h}_{k}^{H}{\bf
p}_{j}\right|^{2}+\text{tr}({\bf R}_{\text{x}}{\bf f}_{k}{\bf f}_{k}^{H})+1.$
(11)
Optimum equalizers are obtained by letting
$\frac{\partial\varepsilon_{k}}{\partial g_{k}}=0$, which are also the MMSE
equalizers given by
$g^{\text{MMSE}}_{k}={\bf p}_{k}^{H}{\bf h}_{k}(T_{k})^{-1}.$ (12)
Minimized MSEs (MMSEs) based on $g^{\text{MMSE}}_{k}$ are given by
$\varepsilon^{\text{MMSE}}_{k}\triangleq\min\limits_{g_{k}}\varepsilon_{k}=(T_{k})^{-1}\left(T_{k}-\left|{\bf
h}_{k}^{H}{\bf p}_{k}\right|^{2}\right).$ (13)
Hence, by comparing (13) with (2), we rewrite the SINR of decoding the intended stream at user-$k$ as
$\gamma^{\text{S}}_{k}=(1/\varepsilon_{k}^{\text{MMSE}})-1$, and the rate as
$R^{\text{S}}_{k}=\log_{2}(1+\gamma^{\text{S}}_{k})=-\log_{2}(\varepsilon_{k}^{\text{MMSE}})$.
By introducing a positive MSE weight $w_{k}$ for user-$k$, we define the
augmented WMSE as $\xi_{k}\triangleq w_{k}\varepsilon_{k}-\log_{2}(w_{k})$.
After optimizing over the equalizers and weights, the Rate-WMMSE relationships
are
$\xi_{k}^{\text{MMSE}}\triangleq\min\limits_{w_{k},g_{k}}\xi_{k}=1-R^{\text{S}}_{k}$
(14)
where the optimum equalizers and the optimum weights are
$\begin{split}&g_{k}^{*}=g_{k}^{\text{MMSE}}\\\
&w_{k}^{*}=w_{k}^{\text{MMSE}}=\left(\varepsilon_{k}^{\text{MMSE}}\right)^{-1},\end{split}$
(15)
which follow from the first-order optimality conditions.
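The chain (11)-(15) can be verified numerically: plugging the MMSE equalizer (12) into the MSE expression and setting $w_{k}=1/\varepsilon_{k}^{\text{MMSE}}$ should reproduce the rate-WMMSE relationship (14). A minimal sketch with hypothetical random data:

```python
import numpy as np

rng = np.random.default_rng(2)
K, Ntc, Ntr = 4, 8, 8  # hypothetical sizes

H = (rng.standard_normal((K, Ntc)) + 1j * rng.standard_normal((K, Ntc))) / np.sqrt(2)
F = (rng.standard_normal((K, Ntr)) + 1j * rng.standard_normal((K, Ntr))) / np.sqrt(2)
P = (rng.standard_normal((Ntc, K)) + 1j * rng.standard_normal((Ntc, K))) / np.sqrt(2)
B = rng.standard_normal((Ntr, Ntr)) + 1j * rng.standard_normal((Ntr, Ntr))
Rx = B @ B.conj().T / Ntr

xi_star, one_minus_R = [], []
for k in range(K):
    # T_k per (11): total received power (signal + interference + radar leakage + noise)
    T = sum(np.abs(H[k].conj() @ P[:, j]) ** 2 for j in range(K)) \
        + np.real(F[k].conj() @ Rx @ F[k]) + 1.0
    sig = np.abs(H[k].conj() @ P[:, k]) ** 2
    g = (P[:, k].conj() @ H[k]) / T                       # MMSE equalizer (12)
    # MSE at g via the expression below (11); equals (13) at the MMSE equalizer
    eps = np.abs(g) ** 2 * T - 2 * np.real(g * (H[k].conj() @ P[:, k])) + 1.0
    w = 1.0 / eps                                         # optimum weight (15)
    xi_star.append(w * eps - np.log2(w))                  # augmented WMSE at the optimum
    one_minus_R.append(1.0 - np.log2(1.0 + sig / (T - sig)))  # 1 - R_k via SINR (2)-(3)
```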
Using the rate-WMMSE relationships, we can then reformulate (9) as
$\displaystyle\min\limits_{{\bf P},{\bf R}_{\text{x}},{\bf w},{\bf g}}\quad$
$\displaystyle\rho\sum_{k\in\mathcal{K}}\mu_{k}\xi_{k}({\bf P},{\bf
R}_{\text{x}})-{\bf a}_{2}^{H}(\theta_{m})\left({\bf P}{\bf P}^{H}\right){\bf
a}_{2}(\theta_{m})$ $\displaystyle-{\bf a}_{1}^{H}(\theta_{m}){\bf
R}_{\text{x}}{\bf a}_{1}(\theta_{m})$ (16a) $\displaystyle s.t.\quad$
$\displaystyle\text{diag}({\bf R}_{\text{x}})=\frac{P_{\text{r}}{\bf 1}^{N_{\text{tr}}\times 1}}{N_{\text{tr}}}$ (16b)
$\displaystyle\text{tr}({\bf P}{\bf P}^{H})\leq P_{\text{c}}$ (16c)
$\displaystyle{\bf R}_{\text{x}}\succeq 0$ (16d)
where ${\bf w}=\left[w_{1},w_{2},\dots,w_{K}\right]$ is the vector of all MSE
weights. ${\bf g}=\left[g_{1},g_{2},\dots,g_{K}\right]$ is the vector of all
equalizers. It is worth noting that the second term $-{\bf
a}_{2}^{H}(\theta_{m})\left({\bf P}{\bf P}^{H}\right){\bf a}_{2}(\theta_{m})$
in (16a) is non-convex. To make it convex, we first reformulate this part as
$\begin{split}&-{\bf a}_{2}^{H}(\theta_{m})\left({\bf P}{\bf P}^{H}\right){\bf
a}_{2}(\theta_{m})=-\sum_{k\in\mathcal{K}}{\bf p}_{k}^{H}{\bf
a}_{2}(\theta_{m}){\bf a}_{2}^{H}(\theta_{m}){\bf p}_{k}\\\
&=N_{\text{tc}}\times P_{\text{c}}-\sum_{k\in\mathcal{K}}{\bf p}_{k}^{H}{\bf
a}_{2}(\theta_{m}){\bf a}_{2}^{H}(\theta_{m}){\bf p}_{k}-N_{\text{tc}}\times
P_{\text{c}}\\\ &=\sum_{k\in\mathcal{K}}{\bf p}_{k}^{H}(N_{\text{tc}}{\bf
I}){\bf p}_{k}-\sum_{k\in\mathcal{K}}{\bf p}_{k}^{H}{\bf
a}_{2}(\theta_{m}){\bf a}_{2}^{H}(\theta_{m}){\bf p}_{k}-N_{\text{tc}}\times
P_{\text{c}}\\\ &=\sum_{k\in\mathcal{K}}{\bf p}_{k}^{H}\left(N_{\text{tc}}{\bf
I}-{\bf a}_{2}(\theta_{m}){\bf a}_{2}^{H}(\theta_{m})\right){\bf
p}_{k}-N_{\text{tc}}\times P_{\text{c}}\\\ &=\sum_{k\in\mathcal{K}}{\bf
p}_{k}^{H}{\bf Z}(\theta_{m}){\bf p}_{k}-N_{\text{tc}}\times
P_{\text{c}}\end{split}$ (17)
where we denote ${\bf Z}(\theta_{m})=N_{\text{tc}}{\bf I}-{\bf
a}_{2}(\theta_{m}){\bf a}_{2}^{H}(\theta_{m})$. From the definition of the steering vector in (4), it is clear that ${\bf a}_{2}(\theta_{m}){\bf a}_{2}^{H}(\theta_{m})$ is a rank-1 matrix whose single nonzero eigenvalue is $\lVert{\bf a}_{2}(\theta_{m})\rVert^{2}=N_{\text{tc}}$. Therefore, ${\bf Z}(\theta_{m})$ is positive semi-definite. By omitting the constant term, we have that minimizing
$-{\bf a}_{2}^{H}(\theta_{m})\left({\bf P}{\bf P}^{H}\right){\bf
a}_{2}(\theta_{m})$ is equivalent to minimizing $\sum_{k\in\mathcal{K}}{\bf
p}_{k}^{H}{\bf Z}(\theta_{m}){\bf p}_{k}$, which is convex.
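That ${\bf Z}(\theta_{m})$ is positive semi-definite is easy to confirm numerically. The sketch below builds ${\bf Z}$ for an assumed 8-element half-wavelength sub-array (hypothetical sizes, not the paper's setup) and checks its eigenvalues.

```python
import numpy as np

Ntc, delta = 8, 0.5
theta = np.deg2rad(0.0)   # assumed target angle

# Communication-part steering vector with unit-modulus entries, as in (4)
a2 = np.exp(1j * 2 * np.pi * delta * np.arange(Ntc) * np.sin(theta))

# Z(theta_m) = Ntc*I - a2 a2^H from (17): subtracting a rank-1 matrix whose
# only nonzero eigenvalue is ||a2||^2 = Ntc leaves a PSD matrix
Z = Ntc * np.eye(Ntc) - np.outer(a2, a2.conj())
eigs = np.linalg.eigvalsh(Z)   # Z is Hermitian, so real eigenvalues
```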
Afterwards, (16) is equivalent to
$\begin{split}\min\limits_{{\bf P},{\bf R}_{\text{x}},{\bf w},{\bf
g}}\quad&\rho\sum_{k\in\mathcal{K}}\mu_{k}\xi_{k}({\bf P},{\bf
R}_{\text{x}})+\text{tr}\left({\bf Z}(\theta_{m}){\bf P}{\bf P}^{H}\right)\\\
&-\text{tr}\left({\bf R}_{\text{x}}{\bf a}_{1}(\theta_{m}){\bf
a}_{1}^{H}(\theta_{m})\right)\\\ s.t.\quad&\text{diag}({\bf
R}_{\text{x}})=\frac{P_{\text{r}}{\bf 1}^{N_{\text{tr}}\times
1}}{N_{\text{tr}}}\\\ &\text{tr}({\bf P}{\bf P}^{H})\leq P_{\text{c}}\\\ &{\bf
R}_{\text{x}}\succeq 0.\end{split}$ (18)
Note that when $\\{{\bf w},{\bf g}\\}$ are fixed, (18) is a convex semidefinite programming (SDP) problem that can be efficiently solved by the CVX toolbox, and the optimum $\\{{\bf w}^{*},{\bf g}^{*}\\}$ can be updated following (15). Therefore, we use the WMMSE-based AO algorithm detailed in [20] to solve the problem, which is summarized in Algorithm 1.
After having the optimum ${\bf R}_{\text{x}}$, the radar snapshots can be
further obtained using algorithms in [21].
Algorithm 1 WMMSE-SDP algorithm
1:Initialize $t\leftarrow 0$ and ${\bf P}^{[t]}$;
2:Calculate $\text{WSR}^{[t]}$ from ${\bf P}^{[t]}$;
3:repeat
4: ${\bf w}^{*}\leftarrow{\bf w}^{\text{MMSE}}({\bf P}^{[t]})$;
5: ${\bf g}^{*}\leftarrow{\bf g}^{\text{MMSE}}({\bf P}^{[t]})$;
6: update ${\bf P}^{[t+1]}$, ${\bf R}^{[t+1]}_{\text{x}}$ by solving SDP (18)
with updated ${\bf w}^{*},{\bf g}^{*}$;
7: update $\text{WSR}^{[t+1]}$ using ${\bf P}^{[t+1]}$.
8: $t++$;
9:until $|\text{WSR}^{[t]}-\text{WSR}^{[t-1]}|\leq\epsilon_{1}$
### IV-B WMMSE-MM algorithm for shared transmission
For the shared transmission design problem, we first follow the same path as in Section IV-A and reformulate (10) with the WMMSE method. To avoid repetition, we omit the common steps and directly give the reformulated problem as
$\begin{split}\min\limits_{\check{\bf P},\check{{\bf w}},\check{{\bf
g}}}\quad&\rho\sum_{k\in\mathcal{K}}\mu_{k}\zeta_{k}(\check{\bf P})-{\bf
a}^{H}(\theta_{m})\check{\bf P}\check{\bf P}^{H}{\bf a}(\theta_{m})\\\
s.t.\quad&\text{diag}(\check{\bf P}\check{\bf P}^{H})=\frac{P_{\text{t}}{\bf
1}^{N_{\text{t}}\times 1}}{N_{\text{t}}},\end{split}$ (19)
where
$\begin{split}&\zeta_{k}(\check{\bf
P})\triangleq\check{w}_{k}\left(\left|\check{g}_{k}\right|^{2}\check{T}_{k}-2\mathfrak{R}\left\\{\check{g}_{k}\check{\bf
h}_{k}^{H}\check{\bf p}_{k}\right\\}+1\right)-\log_{2}(\check{w}_{k}),\\\
&\check{T}_{k}\triangleq\sum_{j\in\mathcal{K}}\left|\check{\bf
h}_{k}^{H}\check{\bf p}_{j}\right|^{2}+1.\end{split}$ (20)
The optimum equalizers and weights are respectively
$\begin{split}&\check{g}_{k}^{*}=\check{\bf p}_{k}^{H}\check{\bf
h}_{k}(\check{T}_{k})^{-1},\\\
&\check{w}_{k}^{*}=\frac{\check{T}_{k}}{\check{T}_{k}-\left|\check{\bf
h}_{k}^{H}\check{\bf p}_{k}\right|^{2}}.\end{split}$ (21)
We can see that (19) is non-convex because of the quadratic equality
constraint, which also makes it difficult to solve. In the following part, we
propose an MM-based iterative algorithm to solve this non-convex problem.
First, to reformulate the problem into a more explicit form, we define
${\bf p}_{\text{v}}=\text{vec}(\check{\bf P})$ and
${\bf D}_{\text{p},k}=\left[{\bf
0}^{N_{\text{t}}\times(k-1)N_{\text{t}}}\quad{\bf I}_{N_{\text{t}}}\quad{\bf
0}^{N_{\text{t}}\times(K-k)N_{\text{t}}}\right],k\in\mathcal{K}.$ (22)
Then, the objective function in (19) can be rewritten as
$\begin{split}f({\bf p}_{\text{v}})=&{\bf p}_{\text{v}}^{H}{\bf Q}{\bf p}_{\text{v}}-2\mathfrak{R}\left\\{\sum_{k\in\mathcal{K}}\rho\mu_{k}\check{w}_{k}\check{g}_{k}\check{\bf h}_{k}^{H}{\bf D}_{\text{p},k}{\bf p}_{\text{v}}\right\\}\end{split}$ (23)
where
$\begin{split}&{\bf Q}=\sum_{j\in\mathcal{K}}{\bf
D}_{\text{p},j}^{H}\left(\rho\sum_{k\in\mathcal{K}}\mu_{k}\check{w}_{k}\left|\check{g}_{k}\right|^{2}\check{\bf
h}_{k}\check{\bf h}_{k}^{H}\right){\bf D}_{\text{p},j}\\\
&+\sum_{k^{\prime}\in\mathcal{K}}\left(\frac{\rho\mu_{k^{\prime}}\check{w}_{k^{\prime}}}{P_{\text{t}}}(\left|\check{g}_{k^{\prime}}\right|^{2}+1){\bf
I}-{\bf D}_{\text{p},k^{\prime}}^{H}{\bf a}(\theta_{m}){\bf
a}(\theta_{m})^{H}{\bf D}_{\text{p},k^{\prime}}\right).\end{split}$ (24)
Afterwards, (19) is equivalent to
$\begin{split}\min\limits_{{\bf p}_{\text{v}}}\quad&f({\bf p}_{\text{v}})\\\
s.t.\quad&{\bf p}_{\text{v}}\in\mathcal{P}\end{split}$ (25)
where $\mathcal{P}=\left\\{{\bf
p}_{\text{v}}|\text{diag}(\sum_{k\in\mathcal{K}}{\bf D}_{\text{p},k}{\bf
p}_{\text{v}}{{\bf p}_{\text{v}}}^{H}{\bf
D}_{\text{p},k}^{H})=\frac{P_{\text{t}}{\bf 1}^{N_{\text{t}}\times
1}}{N_{\text{t}}}\right\\}$.
According to the MM framework [22], we then construct a majorization function of $f({\bf p}_{\text{v}})$. We first recall Lemma 1 from [23]:
###### Lemma 1
Let ${\bf L}$ and ${\bf M}$ be $n\times n$ Hermitian matrices with ${\bf M}\succeq{\bf L}$. Then for any point ${\bf x}_{0}\in\mathbb{C}^{n}$, ${\bf x}^{H}{\bf L}{\bf x}\leq{\bf x}^{H}{\bf M}{\bf x}+2\mathfrak{R}\\{{\bf x}^{H}({\bf L-M}){\bf x}_{0}\\}+{\bf x}_{0}^{H}({\bf M-L}){\bf x}_{0}$.
According to Lemma 1, we choose ${\bf M}=\lambda_{\text{max}}({\bf Q}){\bf I}$, where $\lambda_{\text{max}}({\bf Q})$ denotes the largest eigenvalue of ${\bf Q}$, and have
$\begin{split}&{\bf p}_{\text{v}}^{H}{\bf Q}{\bf p}_{\text{v}}\\\ \leq&{\bf
p}_{\text{v}}^{H}{\bf M}{\bf p}_{\text{v}}+2\mathfrak{R}\\{{\bf
p}_{\text{v}}^{H}({\bf Q}-{\bf M}){\bf p}_{\text{v}}^{t^{\prime}}\\}+({\bf
p}_{\text{v}}^{t^{\prime}})^{H}({\bf M}-{\bf Q}){\bf
p}_{\text{v}}^{t^{\prime}}\\\ =&2\mathfrak{R}\\{{\bf p}_{\text{v}}^{H}({\bf
Q}-\lambda_{\text{max}}({\bf Q}){\bf I}){\bf
p}_{\text{v}}^{t^{\prime}}\\}+2\lambda_{\text{max}}({\bf Q})P_{\text{t}}-({\bf
p}_{\text{v}}^{t^{\prime}})^{H}{\bf Q}{\bf
p}_{\text{v}}^{t^{\prime}}.\end{split}$ (26)
Here the equality is achieved at ${\bf p}_{\text{v}}={\bf p}_{\text{v}}^{t^{\prime}}$. By omitting the constant terms in (26), we can subsequently construct the majorization function of $f({\bf p}_{\text{v}})$ as
$g({\bf p}_{\text{v}}|{\bf
p}_{\text{v}}^{t^{\prime}})=2\mathfrak{R}\left\\{{\bf
p}_{\text{v}}^{H}\left[({\bf Q}-\lambda_{\text{max}}({\bf Q}){\bf I}){\bf
p}_{\text{v}}^{t^{\prime}}-{\bf q}\right]\right\\}$ (27)
where ${\bf q}=\sum_{k\in\mathcal{K}}\rho\mu_{k}\check{w}_{k}\check{g}_{k}^{*}{\bf D}_{\text{p},k}^{H}\check{\bf h}_{k}$. Then, (25) can be solved by iterating
${\bf p}_{\text{v}}^{t^{\prime}+1}=\arg\min_{{\bf
p}_{\text{v}}\in\mathcal{P}}g({\bf p}_{\text{v}}|{\bf
p}_{\text{v}}^{t^{\prime}}).$ (28)
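The majorization step behind (26), i.e., Lemma 1 with the choice ${\bf M}=\lambda_{\text{max}}({\bf Q}){\bf I}$, can be checked on random data. The sketch below draws a hypothetical Hermitian ${\bf Q}$ (standing in for (24)) and verifies both the upper bound and the equality at ${\bf x}={\bf x}_{0}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6  # hypothetical dimension

# Random Hermitian Q (generally indefinite)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = (A + A.conj().T) / 2
M = np.linalg.eigvalsh(Q).max() * np.eye(n)   # M = lambda_max(Q) I, so M >= Q

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def quad(v, S, u):
    # real part of v^H S u
    return np.real(v.conj() @ S @ u)

lhs = quad(x, Q, x)
# Right-hand side of Lemma 1 with L = Q; equality should hold at x = x0
rhs = quad(x, M, x) + 2 * np.real(x.conj() @ (Q - M) @ x0) + quad(x0, M - Q, x0)
rhs_at_x0 = quad(x0, M, x0) + 2 * quad(x0, Q - M, x0) + quad(x0, M - Q, x0)
```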
Moreover, (28) admits a closed-form solution. First, we denote
$\hat{{\bf q}}={\bf q}-({\bf Q}-\lambda_{\text{max}}({\bf Q}){\bf I}){\bf
p}_{\text{v}}^{t^{\prime}}.$ (29)
Then, the optimization problem (28) is equivalent to
$\begin{split}\max_{{\bf p}_{\text{v}}}\quad&2\mathfrak{R}\left\\{{\bf
p}_{\text{v}}^{H}\hat{{\bf q}}\right\\}\\\
s.t.\quad&\text{diag}(\sum_{k\in\mathcal{K}}{\bf D}_{\text{p},k}{\bf
p}_{\text{v}}{{\bf p}_{\text{v}}}^{H}{\bf
D}_{\text{p},k}^{H})=\frac{P_{\text{t}}{\bf 1}^{N_{\text{t}}\times
1}}{N_{\text{t}}}.\end{split}$ (30)
To make the per-antenna structure explicit, we further denote
$\displaystyle\tilde{\bf
q}_{j}=[\hat{q}_{j},\hat{q}_{N_{\text{t}}+j},\dots,\hat{q}_{(K-1)N_{\text{t}}+j}]^{T},j=1,2,\dots,N_{\text{t}},$
(31) $\displaystyle\tilde{\bf
p}_{j}=[[p_{v}]_{j},[p_{v}]_{N_{\text{t}}+j},\dots,[p_{v}]_{(K-1)N_{\text{t}}+j}]^{T},j=1,2,\dots,N_{\text{t}},$
(32)
where $\hat{q}_{i}$ and $[p_{v}]_{i}$ respectively denote the $i$th entries of $\hat{{\bf q}}$ and ${\bf p}_{\text{v}}$. We further define the real-valued forms as
$\begin{split}\tilde{\bf q}^{\text{r}}_{j}=&[\mathfrak{R}\\{\tilde{\bf
q}_{j}\\};\mathfrak{I}\\{\tilde{\bf q}_{j}\\}],\quad
j=1,2,\dots,N_{\text{t}}\\\ \tilde{\bf
p}^{\text{r}}_{j}=&[\mathfrak{R}\\{\tilde{\bf
p}_{j}\\};\mathfrak{I}\\{\tilde{\bf p}_{j}\\}],\quad
j=1,2,\dots,N_{\text{t}}.\end{split}$ (33)
Then (30) can be reformulated as
$\begin{split}\max_{\tilde{\bf
p}^{\text{r}}_{j},j=1,2,\dots,N_{\text{t}}}\quad&\sum_{j=1}^{N_{\text{t}}}(\tilde{\bf
p}^{\text{r}}_{j})^{T}\tilde{\bf q}^{\text{r}}_{j}\\\
s.t.\quad&\lVert\tilde{\bf
p}^{\text{r}}_{j}\lVert_{2}^{2}=\frac{P_{\text{t}}}{N_{\text{t}}},j=1,2,\dots,N_{\text{t}}.\end{split}$
(34)
Following the Cauchy-Schwarz inequality, we have
$\sum_{j=1}^{N_{\text{t}}}(\tilde{\bf p}^{\text{r}}_{j})^{T}\tilde{\bf
q}^{\text{r}}_{j}\leq\sum_{j=1}^{N_{\text{t}}}\lVert\tilde{\bf
p}^{\text{r}}_{j}\lVert_{2}\lVert\tilde{\bf
q}^{\text{r}}_{j}\lVert_{2}=\sqrt{\frac{P_{\text{t}}}{N_{\text{t}}}}\sum_{j=1}^{N_{\text{t}}}\lVert\tilde{\bf
q}^{\text{r}}_{j}\lVert_{2}$ (35)
where the last equality follows from the constraint $\lVert\tilde{\bf p}^{\text{r}}_{j}\rVert_{2}^{2}=P_{\text{t}}/N_{\text{t}}$. For the equality to hold, the optimal solution $\tilde{\bf p}^{\text{r}\star}_{j}$ must be collinear with $\tilde{\bf q}^{\text{r}}_{j}$, i.e.,
$\tilde{\bf
p}^{\text{r}\star}_{j}=\frac{\sqrt{\frac{P_{\text{t}}}{N_{\text{t}}}}}{\lVert\tilde{\bf
q}^{\text{r}}_{j}\lVert_{2}}\tilde{\bf
q}^{\text{r}}_{j},j=1,2,\dots,N_{\text{t}}.$ (36)
Equivalently, we have
$\tilde{\bf
p}^{\star}_{j}=\frac{\sqrt{\frac{P_{\text{t}}}{N_{\text{t}}}}}{\lVert\tilde{\bf
q}_{j}\lVert_{2}}\tilde{\bf q}_{j},j=1,2,\dots,N_{\text{t}},$ (37)
and the solution ${\bf p}_{\text{v}}^{\star}$ can be recovered from
$\tilde{\bf p}^{\star}_{j}$ according to (32). Then, with the WMMSE-AO
framework, (10) can be solved. We summarize this WMMSE-MM algorithm as
Algorithm 2.
Algorithm 2 WMMSE-MM algorithm
1:Initialize $t\leftarrow 0$ and $\check{\bf P}^{[t]}$;
2:Calculate $\text{WSR}^{[t]}$ from $\check{\bf P}^{[t]}$;
3:repeat
4: Update ${\bf w}^{*}$ and ${\bf g}^{*}$ with $\check{\bf P}^{[t]}$ following (21);
5: $t^{\prime}\leftarrow 0$;
6: ${\bf p}^{[t^{\prime}]}_{\text{v}}=\text{vec}(\check{\bf P}^{[t]})$;
7: repeat
8: Calculate $\hat{{\bf q}}$ with ${\bf p}^{[t^{\prime}]}_{\text{v}}$
following (29);
9: for $j=1$ to $N_{\text{t}}$ do
10: $\tilde{\bf
q}_{j}=[\hat{q}_{j},\hat{q}_{N_{\text{t}}+j},\dots,\hat{q}_{(K-1)N_{\text{t}}+j}]^{T}$;
11: $\tilde{\bf
p}_{j}^{*}=\frac{\sqrt{\frac{P_{\text{t}}}{N_{\text{t}}}}}{\lVert\tilde{\bf
q}_{j}\lVert_{2}}\tilde{\bf q}_{j}$;
12: end for
13: Get ${\bf p}_{\text{v}}^{[t^{\prime}+1]}$ using $\tilde{\bf p}_{j}^{*}$ by
inverse operation of (32);
14: $t^{\prime}++$;
15: until $\lVert{\bf p}_{\text{v}}^{[t^{\prime}]}-{\bf
p}_{\text{v}}^{[t^{\prime}-1]}\lVert_{2}\leq\epsilon_{2}$
16: $\check{\bf P}^{[t+1]}=\text{mat}({\bf p}_{\text{v}}^{[t^{\prime}]})$;
17: Update $\text{WSR}^{[t+1]}$ using $\check{\bf P}^{[t+1]}$;
18: $t++$;
19:until $|\text{WSR}^{[t]}-\text{WSR}^{[t-1]}|\leq\epsilon_{1}$
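The inner closed-form update (29)-(37) reduces, for a given $\hat{{\bf q}}$, to a per-antenna scaling. The sketch below, with hypothetical sizes ($N_{\text{t}}=4$, $K=3$) and a random $\hat{{\bf q}}$, checks that the solution (37) meets the per-antenna power constraint and attains the Cauchy-Schwarz bound (35).

```python
import numpy as np

rng = np.random.default_rng(4)
Nt, K, Pt = 4, 3, 2.0   # hypothetical small sizes, not the paper's setup

q_hat = rng.standard_normal(K * Nt) + 1j * rng.standard_normal(K * Nt)

# Entries of vec(P) belonging to antenna j sit at positions j + Nt*k, per (31)-(32)
p_v = np.zeros(K * Nt, dtype=complex)
for j in range(Nt):
    idx = j + Nt * np.arange(K)
    qj = q_hat[idx]
    p_v[idx] = np.sqrt(Pt / Nt) * qj / np.linalg.norm(qj)   # closed form (37)

obj = 2 * np.real(p_v.conj() @ q_hat)          # objective of (30)
# Cauchy-Schwarz upper bound (35), scaled by 2 to match the objective of (30)
bound = 2 * np.sqrt(Pt / Nt) * sum(
    np.linalg.norm(q_hat[j + Nt * np.arange(K)]) for j in range(Nt))
per_ant_power = np.array([np.sum(np.abs(p_v[j + Nt * np.arange(K)]) ** 2)
                          for j in range(Nt)])
```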
## V Numerical Results
In this section, we provide numerical results to validate the performance of both the separated and shared transmissions for the joint multi-antenna RadCom system, and to further reveal the advantages of shared transmission.
We assume the platform adopts a ULA with $N_{\text{t}}=16$ antennas and half-wavelength spacing, serving $K=4$ downlink users. The total transmit power budget is $P_{\text{t}}=20$ dBm and the noise power at each user is $0$ dBm. The target location is set to $\theta_{m}=$ 0°. The
channel vectors of users are generated obeying the i.i.d. complex Gaussian
distribution. In the separated deployment, radar and communication subsystems
fairly share the available resources, i.e.,
$N_{\text{tc}}=N_{\text{tr}}=\frac{1}{2}N_{\text{t}}$ and
$P_{\text{c}}=P_{\text{r}}=\frac{1}{2}P_{\text{t}}$. Although the radar power budget is normally higher than the communication budget, an even power allocation is more representative for comparison here. Moreover, to ensure a fair comparison
between the separated and shared transmission, we set up the same channel
environment in the simulations, i.e., $\check{\bf h}_{k}=[{\bf f}_{k};{\bf
h}_{k}]$.
### V-A Transmit beampattern comparison
To evaluate the transmit beamforming performance at the target, we first demonstrate the beampatterns obtained by the separated and shared transmissions.
(a) WSR=2.4bps/Hz
(b) WSR=5.3bps/Hz
Figure 2: Transmit beampattern comparison for shared and separated
transmissions with different WSRs.
Fig. 2 compares the transmit beampatterns of both transmissions obtained respectively with low and high WSRs. We can see in Fig. 2(a) that when WSR=2.4 bps/Hz, shared transmission nearly achieves the same beampattern as a MIMO radar equipped with the same number of antennas, showing a 3 dB probing power gain at the target's location over separated transmission. Fig. 2(b) shows that when WSR increases by 2.9 bps/Hz, shared transmission experiences a 1.45 dB loss of probing power at the target. However, shared transmission still keeps a 4.88 dB gain over separated transmission, which is even larger than the gain in Fig. 2(a). Fig. 2 also reveals that there is a tradeoff between maximizing probing power at the target and maximizing WSR.
Fig. 3 shows the average transmit power at each antenna in both separated and shared transmissions corresponding to the two scenarios in Fig. 2. Recalling that the first eight antennas in the separated deployment are intended for the radar function, it is clear that the per-antenna power constraints in both shared and separated transmissions are met, which shows that our algorithms handle the power constraints effectively for both transmissions.
(a) WSR=1.5bps/Hz
(b) WSR=6.1bps/Hz
Figure 3: Average transmit power of each antenna for shared and separated
transmission. The first eight antennas in separated transmission are intended
for the radar function.
### V-B Tradeoff comparison
By varying the regularization parameter $\rho$ in (9) and (10), we obtain the
tradeoff between WSR and probing power at target for both the shared and the
separated transmissions in Fig. 4 via Monte Carlo experiments.
To give a well-rounded comparison, we also provide in Fig. 4 two simple
implementations that also achieve the dual function by orthogonalizing the
resources in time (i.e. time-division) or frequency (i.e. frequency-division).
Specifically, the $N_{\text{t}}$-antenna frequency-division implementation means that an $N_{\text{t}}$-antenna system simultaneously transmits precoded communication streams and radar probing signals, each with a $P_{\text{t}}/2$ power budget but within different frequency bands. There is thus no interference between the radar and communication functions because of the frequency orthogonality. To be fair, we assume the communication precoders
are optimized via SDMA based on multi-user linear precoding (MU-LP) in [24],
which maximizes WSR as well.
The $N_{\text{t}}$-antenna time-division implementation means a system that spends a fraction $\alpha$ of the time working as an $N_{\text{t}}$-antenna BS with MU-LP and a fraction $1-\alpha$ working as an $N_{\text{t}}$-antenna MIMO radar with a power budget of $P_{\text{t}}$ on the same frequency band. Since radar and
communication functions are realized orthogonally, WSR and probing power of
both frequency-division and time-division implementations can be independently
obtained by using classic methods in [24] and [19] respectively.
Figure 4: Tradeoff between probing power at target and WSR
In Fig. 4, we can see that separated transmission experiences a considerable performance loss as the cost of realizing spectrum sharing. Specifically, separated transmission reaches the same achievable probing power as the frequency-division implementation, but sees an approximately 1 bps/Hz WSR loss because it uses only half the number of antennas to transmit communication streams compared with frequency division. Separated transmission also exhibits a dual-function tradeoff because of the interference imposed by radar signals on communication users, which is a cost of sharing the same band compared with frequency division. Therefore, although separated transmission meets the RadCom requirement, it appears not to be a wise choice, as the resources could be used more efficiently to improve the overall performance.
In contrast, the shared transmission shows advantages over all the other dual-function implementations. First, it outperforms separated transmission with a maximum WSR gain of about 2 bps/Hz, which results from the fact that it uses twice the number of antennas and twice the power budget to transmit communication streams. We can also see that the shared transmission achieves at least a 3 dB gain in probing power at the target given the same WSR. Second, shared transmission surpasses the frequency-division implementation with a maximum 3 dB probing power gain or around 1 bps/Hz WSR gain, with the additional advantage of realizing spectrum sharing. Third, as for time division, it should be pointed out that $\alpha$ varies depending on practical scenarios where the radar tracking and BS communication tasks are arranged based on specific demands. Therefore, for convenience of comparison, we only provide a key baseline with $\alpha=0.51$. For larger $\alpha$, time division outperforms shared transmission, but shared transmission still has the advantage of being able to fulfill the dual function simultaneously.
## VI Conclusion
To conclude, we propose two transmission design techniques that maximize WSR and probing power at the target for the separated and shared RadCom deployments respectively. We propose the WMMSE-SDP and WMMSE-MM algorithms to solve the corresponding non-convex transmission design problems. Numerical results show that our proposed algorithms are effective, and that the shared deployment outperforms the separated deployment given the same antenna number and power budget. Compared with practically simpler dual-function implementations based on time/frequency division, both separated and shared transmissions have the advantage of operating the dual function simultaneously within the same frequency band. Separated transmission is less efficient in exploiting the resources and experiences a considerable performance loss compared with frequency division. In contrast, shared transmission outperforms frequency division with a maximum 3 dB probing power gain or 1 bps/Hz WSR gain. Under some conditions, shared transmission is surpassed by the time-division implementation, but it retains the capability of operating the dual function simultaneously.
## References
* [1] H. Griffiths, L. Cohen, S. Watts, E. Mokole, C. Baker, M. Wicks, and S. Blunt, “Radar spectrum engineering and management: Technical and regulatory issues,” _Proceedings of the IEEE_ , vol. 103, no. 1, pp. 85–102, Jan 2015.
* [2] A. Aubry, V. Carotenuto, A. De Maio, A. Farina, and L. Pallotta, “Optimization theory-based radar waveform design for spectrally dense environments,” _IEEE Aerospace and Electronic Systems Magazine_ , vol. 31, no. 12, pp. 14–25, 2016.
* [3] W. Rowe, P. Stoica, and J. Li, “Spectrally constrained waveform design [sp tips&tricks],” _IEEE Signal Processing Magazine_ , vol. 31, no. 3, pp. 157–162, 2014.
* [4] B. Tang and J. Liang, “Efficient algorithms for synthesizing probing waveforms with desired spectral shapes,” _IEEE Transactions on Aerospace and Electronic Systems_ , vol. 55, no. 3, pp. 1174–1189, June 2019.
* [5] L. Zheng, M. Lops, X. Wang, and E. Grossi, “Joint design of overlaid communication systems and pulsed radars,” _IEEE Transactions on Signal Processing_ , vol. 66, no. 1, pp. 139–154, 2018.
* [6] J. Li and P. Stoica, “MIMO radar with colocated antennas,” _IEEE Signal Processing Magazine_ , vol. 24, no. 5, pp. 106–114, 2007.
Chengcheng Xu received the master’s degree in information and communication
engineering from National University of Defense Technology, China, in 2017,
where he is currently pursuing the Ph.D. degree with the College of Electronic
Engineering. Since 2019, he has been a visiting student with the
Communications and Signal Processing Group, Department of Electrical and
Electronic Engineering, Imperial College London. His research interests
include spectrum sensing, radar and communication spectrum sharing, and
waveform design.
Bruno Clerckx (SM’17) is a (Full) Professor, the Head of the Wireless
Communications and Signal Processing Lab, and the Deputy Head of the
Communications and Signal Processing Group, within the Electrical and
Electronic Engineering Department, Imperial College London, London, U.K. He
received the M.S. and Ph.D. degrees in applied science from the Université
Catholique de Louvain, Louvain-la-Neuve, Belgium, in 2000 and 2005,
respectively. From 2006 to 2011, he was with Samsung Electronics, Suwon, South
Korea, where he actively contributed to 4G (3GPP LTE/LTE-A and IEEE 802.16m)
and acted as the Rapporteur for the 3GPP Coordinated Multi-Point (CoMP) Study
Item. Since 2011, he has been with Imperial College London, first as a
Lecturer from 2011 to 2015, Senior Lecturer from 2015 to 2017, Reader from
2017 to 2020, and now as a Full Professor. From 2014 to 2016, he also was an
Associate Professor with Korea University, Seoul, South Korea. He also held
various long or short-term visiting research appointments at Stanford
University, EURECOM, National University of Singapore, The University of Hong
Kong, Princeton University, The University of Edinburgh, The University of New
South Wales, and Tsinghua University. He has authored two books, 190 peer-
reviewed international research papers, and 150 standards contributions, and
is the inventor of 80 issued or pending patents among which 15 have been
adopted in the specifications of 4G standards and are used by billions of
devices worldwide. His research area is communication theory and signal
processing for wireless networks. He has been a TPC member, a symposium chair,
or a TPC chair of many symposia on communication theory, signal processing for
communication and wireless communication for several leading international
IEEE conferences. He was an Elected Member of the IEEE Signal Processing
Society SPCOM Technical Committee. He served as an Editor for the IEEE
TRANSACTIONS ON COMMUNICATIONS, the IEEE TRANSACTIONS ON WIRELESS
COMMUNICATIONS, and the IEEE TRANSACTIONS ON SIGNAL PROCESSING. He has also
been a (lead) guest editor for special issues of the EURASIP Journal on
Wireless Communications and Networking, IEEE ACCESS, the IEEE JOURNAL ON
SELECTED AREAS IN COMMUNICATIONS and the IEEE JOURNAL OF SELECTED TOPICS IN
SIGNAL PROCESSING. He was an Editor for the 3GPP LTE-Advanced Standard
Technical Report on CoMP.
Jianyun Zhang received the B.S. degree in radar signal processing from the
Electronic Engineering Institute, Hefei, China, in 1984, and the M.S. and
Ph.D. degrees in digital signal processing from Xidian University, Xi’an,
China, in 1989 and 1994, respectively. From 1995 to 2001, he was an Associate
Professor with the Electronic Engineering Institute, where he became a
Professor from 2001 to 2016. Since 2016, he has been a Professor with the
College of Electronic Engineering, National University of Defense Technology.
His research interests include high-speed digital signal processing,
estimation theory, array signal processing, radar signal processing, and radar
system theory.
# Choice functions on posets
Danilov V. I., Central Institute of Economics and Mathematics of the RAS, 47
Nakhimovskii Prospect, 117418 Moscow, Russia. e-mail<EMAIL_ADDRESS>
###### Abstract
In the paper we study choice functions on posets satisfying the conditions of
heredity and outcast. For every well-ordered sequence of elements of a poset,
we define the corresponding ‘elementary’ choice function. Every such a choice
function satisfies the conditions of heredity and outcast. Inversely, every
choice function satisfying the conditions of heredity and outcast can be
represented as a union of several elementary choice functions. This result
generalizes the Aizerman-Malishevski theorem about the structure of path-
independent choice functions.
Keywords: order ideal, filter, well order, path independence, stable contract.
## 1 Introduction
The study of choice functions began in the theory of rational decision-making.
The class of choice functions introduced by Plott [9] under the name
“path-independent” received particular attention. A comprehensive description
of such choice functions was given by Aizerman and Malishevsky in [1]. Later
it turned out that these functions appear naturally in other theories, such as
the theory of nonmonotonic logic [8] and the theory of stable contracts [2, 5, 6].
In the latter theory, choice functions are used to describe the preferences of
agents, and the choice functions proposed by Plott turned out to be the most
appropriate for this situation.
In particular, the theory of contracts revealed the need to generalize such
choice functions to partially ordered sets (posets). For example, the papers
[2, 3] considered contracts that could be concluded with some intensity
between 0 and 1. To ensure the existence of stable systems of contracts, the
authors had to transfer the concept of path-independent choice functions to
sets equipped with a partial order. In what follows I call such choice
functions _conservative_. In [5], the existence and good properties of stable
systems of contracts were proved under the assumption that the choice
functions of the agents are conservative. However, some important questions
remained open: are there many such choice functions, how can one construct
them, and what is the structure of the set of conservative choice functions?
For instance, [2] limited themselves to two examples of conservative choice
functions; [5] gives none at all. In this paper, we answer these questions.
Namely, for each sequence of elements of a poset $P$, we construct a
corresponding ‘elementary’ conservative choice function on $P$. We show that
an arbitrary conservative choice function can be represented as the union of
several elementary choice functions. This result generalizes the
Aizerman-Malishevsky theorem [1], just like its infinite variant from [4].
In order to make the presentation more understandable, we first consider the
simpler case of a finite poset $P$. Then we remove the finiteness
requirement. We begin by recalling some concepts and statements about posets
and choice functions on them.
## 2 Preliminaries
Posets. A _poset_ is a partially ordered set, i.e. a set $P$ equipped with an
order relation $\leq$ (reflexive, transitive, and antisymmetric). Since this
relation will not change, the poset is denoted simply by $P$. A poset is
called _linear_ (or a _chain_) if any two of its elements are comparable
($x\leq y$ or $y\leq x$). A poset is called _discrete_ (or trivial) if any two
distinct elements are incomparable. A more general class of posets, covering
the two previous ones, is distinguished by the condition that the
comparability relation be transitive. Structurally, such a poset is a direct
sum of chains. Exactly such posets were used in [2].
A subset $I$ of $P$ is called an (order) _ideal_ (or a lower set, or a minor
set) if, with each element, it contains every smaller one. For example, the
principal ideal $I(x)=\\{y\in P,y\leq x\\}$. The ideal generated by a subset
$A$ of $P$ is denoted by $I(A)$; $I(A)=\cup_{a\in A}I(a)$. Dual is the
concept of a _filter_: a filter contains, with each element, every larger one.
The filter generated by a subset $A$ is denoted by $F(A)$, so that
$F(A)=\\{x\in P,x\geq a\text{ for some }a\in A\\}$. The set of all ideals is
denoted by $\mathcal{I}(P)$; it is a complete distributive sublattice of the
Boolean lattice $2^{P}$ of all subsets of $P$.
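On a finite poset these notions are easy to compute directly. The following is a minimal Python sketch (the four-element poset, its relation, and all names are illustrative, not taken from the paper):

```python
# A toy four-element poset; `leq` lists the pairs (x, y) with x <= y
# (reflexive pairs included; transitivity already accounted for).
P = {"a", "b", "c", "d"}
leq = {(x, x) for x in P} | {("a", "c"), ("a", "d"), ("b", "d")}

def ideal(A):
    """I(A): all elements lying below some member of A."""
    return {x for x in P if any((x, a) in leq for a in A)}

def filt(A):
    """F(A): all elements lying above some member of A."""
    return {x for x in P if any((a, x) in leq for a in A)}

print(sorted(ideal({"d"})))  # principal ideal I(d) -> ['a', 'b', 'd']
print(sorted(filt({"a"})))   # principal filter F(a) -> ['a', 'c', 'd']
```

Note that $I(A)$ and $F(A)$ are computed pointwise from the relation, so the sketch works for any finite poset given as a set of comparable pairs.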
Choice functions. In the classical situation (when the poset $P$ is trivial,
that is, simply an unstructured set), a choice function is a mapping
$f:2^{P}\to 2^{P}$ such that $f(X)\subseteq X$ for any $X\subseteq P$. In
decision theory, choice functions are used to describe behavior: having
access to a set of alternatives $X$, the decision-maker selects a subset
$f(X)$. A rational decision-maker chooses the best alternatives in some
sense. The rationality conditions on the corresponding choice function have
been studied intensively in choice theory; see, for example, [1]. Two
conditions turned out to be the most popular: the heredity and outcast
conditions.
The _heredity_ property (also known as substitutability or persistency): if
$A,B\subseteq P$ and $A\subseteq B$, then $f(B)\cap A\subseteq f(A)$.
In other words, if an element of a smaller set $A$ is chosen from a larger
set $B$, then it should also be chosen from the smaller set.
The _outcast_ property (also known as consistency or as Irrelevance of
Rejected Alternatives): if $f(B)\subseteq A\subseteq B$, then $f(A)=f(B)$.
In words: removing ‘bad’ (not selected) elements does not affect the choice.
All these notions transfer to posets without any changes. A _choice function_
(CF) on a poset $P$ is a mapping $f:\mathcal{I}(P)\to\mathcal{I}(P)$ such
that $f(X)\subseteq X$ for any ideal $X$ of $P$. The conditions of heredity
and outcast are formulated as before.
Definition. A CF is called _conservative_ if it has the heredity and outcast
properties.
One can express conservativeness by a single condition ([7]): if
$f(A)\subseteq B$ then $f(B)\cap A\subseteq f(A)$. We find it more convenient
to use heredity and outcast separately. Here are some simple properties of
conservative CFs.
1. $f(f(X))=f(X)$ for any ideal $X$.
Apply the outcast to the chain of inclusions $f(X)\subseteq f(X)\subseteq X$.
2. Any conservative CF $f$ has the so-called _path independence_ property
(i.e., satisfies the Plott equality): for any ideals $X$ and $Y$,
$f(X\cup Y)=f(f(X)\cup f(Y)).$
Indeed, $f(X)\subseteq X\cup Y$; from the heredity we obtain $f(X\cup Y)\cap
X\subseteq f(X)$. Similarly, $f(X\cup Y)\cap Y\subseteq f(Y)$. Hence $f(X\cup
Y)=f(X\cup Y)\cap(X\cup Y)\subseteq f(X)\cup f(Y)\subseteq X\cup Y$. Using the
outcast, we get $f(X\cup Y)=f(f(X)\cup f(Y))$.
In fact, this equality is true not only for two ideals, but for an arbitrary
number of them. The proof is the same. Note also that the path independence
implies the outcast property, but not the heredity.
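For a finite poset the Plott equality can be verified exhaustively by enumerating all ideals. The sketch below does this for a simple conservative CF of the form $f(X)=I_{0}\cap X$ with a fixed ideal $I_{0}$ (the three-element poset and all names are illustrative assumptions, not part of the paper):

```python
from itertools import combinations

# Toy poset: a <= c, b <= c.
P = ["a", "b", "c"]
leq = {(x, x) for x in P} | {("a", "c"), ("b", "c")}

def is_ideal(S):
    # S is downward closed: x <= y and y in S imply x in S.
    return all((x, y) not in leq or x in S for x in P for y in S)

# Enumerate all ideals of P.
ideals = [frozenset(S) for r in range(len(P) + 1)
          for S in combinations(P, r) if is_ideal(set(S))]

I0 = frozenset({"a"})          # a fixed ideal

def f(X):
    return frozenset(X) & I0   # the CF f(X) = I0 intersect X

# Plott equality f(X u Y) = f(f(X) u f(Y)) over all pairs of ideals.
plott = all(f(X | Y) == f(f(X) | f(Y)) for X in ideals for Y in ideals)
print(plott)  # -> True
```

Since the union of two ideals is again an ideal, $f(X)\cup f(Y)$ is a legitimate argument of $f$ here.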
3. The union of conservative CFs is a conservative CF.
Indeed, let $f=\cup_{i}f_{i}$, that is, $f(X)=\cup_{i}f_{i}(X)$ for any ideal
$X$. We have to check the heredity and outcast properties for the CF $f$.
Heredity. Let $Y\subseteq X$. Then for any $i$ we have the inclusion
$f_{i}(X)\cap Y\subseteq f_{i}(Y)$. It remains to take the union and use
distributivity.
Outcast. Let $f(X)\subseteq Y\subseteq X$. Since $f_{i}(X)\subseteq f(X)$, we
have the chain $f_{i}(X)\subseteq Y\subseteq X$ for any $i$. The outcast for
$f_{i}$ implies the equality $f_{i}(X)=f_{i}(Y)$ for any $i$, from where
$f(X)=f(Y)$.
This property means that, having several conservative CFs $f_{i}$, we can
form a new CF $\cup_{i}f_{i}$ which is also conservative. This leads to the
task of finding a sufficiently large stock of ‘simple’ conservative CFs from
which any conservative CF can be constructed.
Example (a ‘constant’ CF). Let $I$ be some ideal in $P$. For an arbitrary
ideal $X$, we put $f^{I}(X)=I\cap X$. It is easy to see that $f^{I}$ is a
conservative CF on $P$.
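The claim can be checked mechanically for a small finite poset by enumerating all ideals and testing the two defining conditions (a sketch; the three-element poset and all names are illustrative assumptions):

```python
from itertools import combinations

# Toy poset: a <= c, b <= c.
P = ["a", "b", "c"]
leq = {(x, x) for x in P} | {("a", "c"), ("b", "c")}

def is_ideal(S):
    return all((x, y) not in leq or x in S for x in P for y in S)

ideals = [frozenset(S) for r in range(len(P) + 1)
          for S in combinations(P, r) if is_ideal(set(S))]

I0 = frozenset({"a"})                     # the fixed ideal I
f = {X: X & I0 for X in ideals}           # the constant CF f^I

# Heredity: A <= B implies f(B) & A <= f(A).
heredity = all(not A <= B or f[B] & A <= f[A]
               for A in ideals for B in ideals)
# Outcast: f(B) <= A <= B implies f(A) == f(B).
outcast = all(not (f[B] <= A <= B) or f[A] == f[B]
              for A in ideals for B in ideals)
print(heredity, outcast)  # -> True True
```

The same exhaustive check applies to any candidate CF on a small poset, which makes it a convenient sanity test when experimenting with the constructions below.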
If the poset $P$ is linear, then the previous construction gives all
conservative CFs. Indeed, set $I=f(P)$. Since an arbitrary ideal $X$ lies in
$P$, heredity yields the inclusion $I\cap X\subseteq f(X)$. Due to the
linearity of $P$, either $X\subseteq I$ or $I\subseteq X$. In the first case
$I\cap X=X$, so $X\subseteq f(X)$, whence the equality $f(X)=X=I\cap X$. In
the second case, $I\cap X=I$. We have the chain $I=f(P)\subseteq X\subseteq
P$, and the outcast gives $f(X)=f(P)=I=I\cap X$.
However, in the case of non-linear posets, there are other conservative CFs.
Two interesting examples are given in [2]. Since a union of constant CFs is
again constant, we need a more flexible construction of ‘simple’ conservative
CFs. To make the ideas more transparent, we will temporarily assume that the
poset $P$ is finite. In Section 6 we consider the general case.
## 3 Elementary CFs
Let $A=(a_{1},...,a_{k})$ ($k\geq 0$) be a sequence of elements of a poset
$P$. We use this sequence to build the following CF $f_{A}$. Let $X$ be an
arbitrary ideal of $P$; we have to define $f_{A}(X)$. To do this, denote by
$i=i(A,X)$ the first index such that $a_{i}\in X$. In other words,
$a_{1},...,a_{i-1}$ do not belong to $X$, but $a_{i}$ does. If no $a_{i}$
belongs to $X$, we set $i=k$. If $k=0$ (that is, if $A$ is the empty
sequence), we set $i=0$.
Now we put $f_{A}(X)$ equal to the intersection of $X$ with the ideal
$I(a_{1},...,a_{i})$ generated by $a_{1},...,a_{i}$, that is,
$f_{A}(X)=X\cap I(a_{1},...,a_{i}).$
In particular, $f_{\emptyset}(X)=\emptyset$ for any $X$. We say that the
function $f_{A}$ is the _elementary CF_ _associated with the sequence_
$A=(a_{1},...,a_{k})$.
It is easy to see that, in constructing such CFs, one can restrict attention
to non-repeating sequences, that is, assume that all the $a_{i}$ are
distinct. We will do so.
The intuitive meaning of the elementary CF $f_{A}$ associated with the
sequence $A=(a_{1},...,a_{k})$ is as follows. We view the sequence
$(a_{1},...,a_{k})$ as a hierarchy of goals of our decision-maker, the
importance of the goal $a_{i}$ decreasing as $i$ grows. The decision-maker
first tries to attain the most important goal $a_{1}$. If it is available,
that is, if it lies in the ideal $X$, he selects it (along with all smaller
elements) and stops (that is, completes the choice). If the goal $a_{1}$ is
not available, he includes in the choice all elements of $X$ which are less
than $a_{1}$, and proceeds to the next goal $a_{2}$. And so on.
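This goal-hierarchy reading translates directly into code. A minimal Python sketch of $f_{A}$ on an illustrative four-element poset (the poset and all names are hypothetical):

```python
# Toy poset: a <= c, a <= d, b <= d.
P = ["a", "b", "c", "d"]
leq = {(x, x) for x in P} | {("a", "c"), ("a", "d"), ("b", "d")}

def ideal_of(A):
    """Ideal I(A) generated by the elements of A."""
    return {x for x in P if any((x, a) in leq for a in A)}

def f_elem(A, X):
    """Elementary CF f_A(X): scan goals a_1, a_2, ... until one lies in X."""
    if not A:                 # f_(empty)(X) = empty set
        return set()
    # First index whose goal is available; default to the last index if none is.
    i = next((j for j, a in enumerate(A) if a in X), len(A) - 1)
    return set(X) & ideal_of(A[:i + 1])

X = {"a", "b", "d"}                   # an ideal of P
print(sorted(f_elem(["c", "b"], X)))  # goal c unavailable, goal b taken -> ['a', 'b']
```

In the printed example the element d is rejected: the decision-maker stops at the available goal b, keeping only elements of $X$ below c or b.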
Proposition 1. _$f_{A}$ is a conservative CF._
_Proof._ If the sequence $A$ is empty, then $f_{A}=\emptyset$ and there is
nothing to check.
Let us check heredity. Suppose $Y\subseteq X$, and $y$ belongs to both $Y$
and $f_{A}(X)$. We need to show that $y\in f_{A}(Y)$. Let $a_{i}$ be the
first term of the sequence $A=(a_{1},...,a_{k})$ that falls into $X$. The
elements $a_{1},...,a_{i-1}$ do not belong to $X$, and therefore do not
belong to $Y$; hence the first term of $A$ lying in $Y$ has index at least
$i$. By the construction of $f_{A}$, $y\leq a_{j}$ for some $j\leq i$. But
then $y$ lies below the same $a_{j}$, and therefore belongs to $f_{A}(Y)$.
Let us check the outcast. Let $f_{A}(X)\subseteq Y\subseteq X$. It is enough
to check that $f_{A}(Y)\subseteq f_{A}(X)$ (the reverse inclusion follows
from heredity). Let $a_{i}$ be the first element of the sequence which lies
in $X$. Then $a_{1},...,a_{i-1}$ do not fall into $X$, and all the more not
into $Y$. As for $a_{i}$, it belongs to $f_{A}(X)$ and therefore to $Y$. If
$y\in f_{A}(Y)$, then $y$ lies under some of $a_{1},...,a_{i}$. And since
$y\in X$, it belongs to $f_{A}(X)$. $\Box$
Definition. A sequence $A=(a_{1},...,a_{k})$ is _compatible with_ a CF $f$ if,
for any $i$ from 1 to $k$, $a_{i}\in f(P-F(a_{1},...,a_{i-1}))$. (For $i=1$,
this means $a_{1}\in f(P)$. Recall that $F(a_{1},...,a_{j})$ is the filter
generated by $a_{1},...,a_{j}$.)
Proposition 2. _If a sequence $A$ is compatible with a hereditary CF $f$ then_
$f_{A}\subseteq f$.
_Proof._ Let $X$ be an arbitrary ideal; we need to show that
$f_{A}(X)\subseteq f(X)$. Recall how $f_{A}(X)$ was constructed: we find the
first member $a_{i}$ of the sequence such that $a_{i}\in X$; then
$f_{A}(X)=X\cap I(a_{1},...,a_{i})$. Suppose that $x$ is an element of
$f_{A}(X)$, so that $x\leq a_{j}$ for some $j\leq i$. Since $a_{j}\in
f(P-F(a_{1},...,a_{j-1}))$ and $f(P-F(a_{1},...,a_{j-1}))$ is an ideal, we
obtain that $x\in f(P-F(a_{1},...,a_{j-1}))$. Because $x\in X\subseteq
P-F(a_{1},...,a_{j-1})$, the heredity of $f$ implies that $x\in f(X)$. $\Box$
## 4 The main theorem – a finite case
Theorem. _Let $f$ be a conservative CF on a finite poset $P$. Then it is the
union of some elementary CFs._
Actually we shall prove a more precise assertion: _any conservative CF $f$ is
the union of the elementary CFs associated with AC-sequences compatible with
$f$._ Here a sequence $(a_{1},...,a_{k})$ is called an _antichain sequence_
(AC-sequence) if all the $a_{i}$ are pairwise incomparable in the poset $P$.
This assertion follows from Proposition 2 and the following Proposition 3.
Proposition 3. _Let $f$ be a CF on finite poset $P$ satisfying the outcast
condition. Suppose that $x\in f(X)$ for some ideal $X$. Then there exists an
AC-sequence $A$ compatible with $f$ such that $x\in f_{A}(X)$._
_Proof._ One may suppose that $f$ is a non-empty CF. We construct such a
sequence $A$ step by step.
Let us discuss the first step of the construction. If $x\in f(P)$, we put
$a_{1}=x$ and terminate the construction of the sequence. The definition of
$f_{A}$ then shows that $x\in f_{A}(X)$.
So we may assume that $x$ does not belong to $f(P)$. This is possible only if
$f(P)$ is not contained in $X$: otherwise $f(P)\subseteq X\subseteq P$, and
from the outcast we would get $f(P)=f(X)$, so that $x$ belongs to $f(P)$,
contrary to the assumption. Hence the set $f(P)-X$ is non-empty. We take
$a_{1}$ to be a minimal element of the set $f(P)-X$, put $B_{1}=P-F(a_{1})$,
and go to the second step. Note that $x\notin I(a_{1})$ (otherwise $x$ would
belong to the ideal $f(P)$).
At the $k$-th step, we have:
a) an AC-sequence $(a_{1},...,a_{k})$,
b) $x$ does not belong to the ideal $I(a_{1},...,a_{k})$,
c) $X\subseteq B_{k}:=P-F(a_{1},...,a_{k})$ (where $B_{0}=P$),
d) for any $i$ from 1 to $k$, $a_{i}$ is a minimal element of the set
$f(B_{i-1})-X$.
In particular, d) implies that the sequence $(a_{1},...,a_{k})$ is compatible
with $f$. As above, we consider two cases: $x$ belongs to $f(B_{k})$, or it
does not.
If $x\in f(B_{k})$, we set $a_{k+1}=x$ and terminate the construction of the
sequence $A$. It is clear that $x\in f_{A}(X)$. Due to b) and c), $a_{k+1}$ is
not comparable with $a_{1},...,a_{k}$.
Now suppose that $x\notin f(B_{k})$. If $f(B_{k})\subseteq X$, then from c)
and the outcast we would have $f(B_{k})=f(X)$, which contains $x$; a
contradiction. Hence $f(B_{k})$ is not contained in $X$. Put $a_{k+1}$ to be
a minimal element of the non-empty set $f(B_{k})-X$. We assert that _the
extended sequence $(a_{1},...,a_{k},a_{k+1})$ also satisfies the properties
a)-d)._
Let us prove a). Since $a_{k+1}\in B_{k}$, $a_{k+1}\notin F(a_{1},...,a_{k})$.
Therefore we have to show that $a_{k+1}\notin I(a_{1},...,a_{k})$. Suppose
that $a_{k+1}\leq a_{i}$ for some $i\leq k$. Since $a_{i}\in f(B_{i-1})$ and
$f(B_{i-1})$ is an ideal, $a_{k+1}$ belongs to $f(B_{i-1})$ as well, and it
does not belong to $X$. Since $a_{i}$ is a minimal element of $f(B_{i-1})-X$
(due to d)), we obtain that $a_{k+1}=a_{i}$. But this contradicts the fact
that $a_{k+1}\in B_{k}$, whereas $B_{k}$ does not contain $a_{i}$.
b) We have to show that $x$ does not lie under $a_{k+1}$. But if $x\leq
a_{k+1}$, then $x$ belongs to $f(B_{k})$, since $f(B_{k})$ is an ideal, which
contradicts the assumption that $x\notin f(B_{k})$.
Let us check c), that is, $X\subseteq B_{k+1}$, i.e. that $X$ does not
intersect the filter $F(a_{k+1})$. If there were $y\in X$ with $y\geq
a_{k+1}$, then, since $X$ is an ideal, the element $a_{k+1}$ would also
belong to $X$, contradicting the choice of $a_{k+1}$ outside $X$.
Finally, d) follows from the previous d) and the choice of $a_{k+1}$.
Since the poset $P$ is finite, the process terminates sooner or later, and we
obtain an AC-sequence $A$ such that $x\in f_{A}(X)$. $\Box$
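The step-by-step construction in this proof is effectively an algorithm. Below is a hedged Python sketch that runs it on a toy poset, using an elementary CF as the input conservative $f$ (the poset, the chosen $f$, and all names are illustrative assumptions, not part of the paper):

```python
# Toy poset: a <= c, a <= d, b <= d.
P = ["a", "b", "c", "d"]
leq = {(x, x) for x in P} | {("a", "c"), ("a", "d"), ("b", "d")}

def ideal_of(A):
    return {x for x in P if any((x, a) in leq for a in A)}

def filt_of(A):
    return {x for x in P if any((a, x) in leq for a in A)}

def f_elem(A, X):
    """Elementary CF f_A of Section 3."""
    if not A:
        return set()
    i = next((j for j, a in enumerate(A) if a in X), len(A) - 1)
    return set(X) & ideal_of(A[:i + 1])

def minimal(S):
    """Some minimal element of S w.r.t. the partial order."""
    return next(y for y in S if not any(z != y and (z, y) in leq for z in S))

def compatible_sequence(f, X, x):
    """The step-by-step construction from the proof of Proposition 3."""
    seq, B = [], set(P)
    while True:
        fB = f(B)
        if x in fB:
            return seq + [x]                 # terminate with a_{k+1} = x
        seq.append(minimal(fB - set(X)))     # non-empty by the outcast argument
        B = set(P) - filt_of(seq)            # B_{k+1} = P - F(a_1, ..., a_{k+1})

f = lambda B: f_elem(["d", "c"], B)   # the conservative CF to decompose
X, x = {"a", "c"}, "c"                # an ideal X with x in f(X)
A = compatible_sequence(f, X, x)
print(A, "c" in f_elem(A, X))         # -> ['b', 'c'] True
```

The run produces the AC-sequence (b, c), which is compatible with $f$ and satisfies $x\in f_{A}(X)$, exactly as Proposition 3 guarantees.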
## 5 Simplicity of elementary CFs
We have shown that any conservative CF can be represented as the union of
several elementary CFs associated with AC-sequences. Now we show that these
elementary building blocks are ‘simple’ in the sense that they do not
decompose further into a union of other conservative CFs; in other words,
they are join-irreducible.
Lemma 1. _Let $f$ and $g$ be the elementary CFs associated with AC-sequences
$A=(a_{1},...,a_{n})$ and $B=(b_{1},...,b_{k})$ respectively, with
$g\subseteq f$. Suppose that $a_{1}=b_{1},...,a_{i-1}=b_{i-1}$ but $a_{i}$ is
different from $b_{i}$. Then $b_{i}<a_{i}$ (or $b_{i}$ is missing)._
Proof. We assume that $b_{i}$ is actually present, and consider the ideal
$X=I(a_{i},b_{i})$. We claim that $b_{i}\in g(X)$. If this were not the case,
then $b_{i}$ would not be the first member of the sequence $B$ belonging to
$X$; there would be a smaller index $j<i$ with $b_{j}\in X$. Since
$b_{j}\leq b_{i}$ is impossible (due to the incomparability of the members of
$B$), we would have $b_{j}\leq a_{i}$. But $b_{j}=a_{j}$, and we again get a
contradiction with the incomparability of the members of $A$.
Similarly, $a_{i}$ is the first member of the sequence $A$ that falls into
$X$, so $f(X)=X\cap I(a_{1},...,a_{i})$. Since $g\subseteq f$, it follows
that $b_{i}\in I(a_{1},...,a_{i})$. Now $b_{i}$ cannot lie below any of
$a_{1}=b_{1},...,a_{i-1}=b_{i-1}$, because this would contradict the
incomparability of the members of the sequence $B$. So $b_{i}\leq a_{i}$,
and since $b_{i}\neq a_{i}$, in fact $b_{i}<a_{i}$. $\Box$
Proposition 4. _Let $f=f_{A}$ be an elementary CF associated with AC-sequence
$A=(a_{1},...,a_{n})$. If $f=g\cup h$, where $g$ and $h$ are conservative CFs,
then $g$ or $h$ is equal to $f$._
Proof. Using the theorem, we decompose $g$ and $h$ into elementary CFs. As a
result, we get a decomposition $f=g^{1}\cup...\cup g^{l}$, where each CF
$g^{j}=f_{B^{j}}$ is an elementary CF associated with an AC-sequence
$B^{j}=(b_{1}^{j},...,b_{k_{j}}^{j})$. We want to show that at least one of
the $B^{j}$ is equal to $A$.
Consider one of these sequences, $B=(b_{1},...,b_{k})$. For a while it may
coincide with the sequence $A$. We denote by $d(B)$ the first index $i$ for
which $b_{i}$ is different from $a_{i}$ (for example, $b_{i}$ may simply be
missing).
Assuming that all $B^{j}$ are different from $A$, we get that all
$d(B^{j})\leq n$. Denote by $d$ the maximum of the numbers
$d(B^{1}),...,d(B^{l})$. We call the index $j$ a ‘leader’ if $d(B^{j})=d$.
Let us form the ideal $X=I^{0}(a_{1})\cup...\cup I^{0}(a_{d-1})\cup I(a_{d})$,
where $I^{0}(a)=\\{x\in P,x<a\\}=I(a)-\\{a\\}$. Clearly $a_{d}$ is the first
member of the sequence $A$ that falls into $X$: each $a_{i}$ with $i<d$ does
not belong to $I^{0}(a_{i})$, nor to the other ideals $I^{0}(a_{j})$ and
$I(a_{d})$, due to its incomparability with the remaining $a_{j}$. Therefore
$f(X)=X$ and, in particular, $a_{d}\in f(X)$.
Let us now consider $g^{j}(X)$ for $j=1,...,l$; the corresponding sequence
$B^{j}$ is denoted simply by $B=(b_{1},...,b_{k})$. Let $i$ be the first
index for which $b_{i}$ falls into $X$. It cannot be less than $d(B)$,
because for any index $m<d(B)$ we have $b_{m}=a_{m}$, which does not belong
to $X$ (see above). On the other hand, $b_{d(B)}$ (if present) is less than
$a_{d(B)}$ (see the Lemma), and therefore belongs to $X$. If $j$ is not a
leader, then
$g^{j}(X)=I^{0}(a_{1})\cup...\cup I^{0}(a_{d(B^{j})})$
(the last term is missing if $b_{d(B^{j})}$ is missing). If $j$ is a leader,
then
$g^{j}(X)=I^{0}(a_{1})\cup...\cup I^{0}(a_{d-1})\cup I(b^{j}_{d}).$
Since $b^{j}_{d}<a_{d}$ (by the Lemma), we see that $a_{d}$ does not belong
to any of the ideals $g^{j}(X)$. This contradicts $a_{d}\in
f(X)=\cup_{j}g^{j}(X)$. $\Box$
## 6 The main theorem – a general case
We assumed above that the poset $P$ is finite. Now we consider the general
case. Everything is done in the same way; only finite sequences need to be
replaced by sequences indexed by elements of well ordered sets (or by ordinal
numbers). Recall that a linearly ordered set $(I,\prec)$ is said to be
_well ordered_ if any non-empty subset of it has a minimal element. To avoid
confusion, ideals in the set of indices $I$ will be called _initial
segments_. Note that a chain $(I,\prec)$ is well ordered if and only if any
initial segment of $I$, other than the entire $I$, has the form $[\prec
i]=\\{j\in I,j\prec i\\}$ for some $i\in I$.
A _well sequence_ in $P$ is a sequence $A=(a_{i},i\in I)$ of elements of the
poset $P$ whose set of indices $I$ is well ordered. With such a sequence one
can associate a CF $f_{A}$ on the poset $P$, defined essentially as before.
Namely, let $X$ be an ideal of $P$; denote by $i=i(X)$ the first index whose
member $a_{i}$ belongs to $X$. In other words, $a_{i}\in X$, and $a_{j}\notin
X$ for $j\prec i$. Then $f_{A}(X)$ is equal to the intersection of $X$ with
the ideal of $P$ generated by all $a_{j}$, $j\preceq i$. (If there is no such
index $i$, then the ideal is generated by all $a_{j}$, $j\in I$.)
It is easy to see that we may confine ourselves to non-repeating sequences,
in which all the $a_{i}$ are different. In this case one can consider $A$ as
a subset of $P$ equipped with a well order $\prec$.
As in Proposition 1, one checks that the CF $f_{A}$ is conservative.
The definition of compatibility of a sequence $(a_{i},i\in I)$ with a CF $f$
remains the same: for any $i\in I$, $a_{i}\in f(B_{i})$, where
$B_{i}=P-\cup_{j\prec i}F(a_{j})$. The same reasoning as in Proposition 2
shows that $f_{A}\subseteq f$ if $A$ is compatible with a hereditary CF $f$.
The main theorem now says that _any conservative CF $f$ is the union of some
elementary CFs compatible with $f$._
The most subtle part is the construction of well sequences compatible with
the CF $f$. The following reasoning resembles Zermelo’s proof that any set
admits a well order.
Namely, let $\mathcal{F}$ denote the set of those filters $F$ of $P$ for
which $f(P-F)\neq\emptyset$. We fix some ‘selector’ $p:\mathcal{F}\to P$ with
$p(F)\in f(P-F)$.
Definition. A _gallery_ is a subset $U$ of $P$, equipped with a linear order
$\prec_{U}$, which has the following property:
($*$) if $V$ is an initial segment of $U$ other than $U$ itself, then
$V=[\prec_{U}x]$, where $x=p(F(V))$.
Recall that $F(V)$ denotes the filter of $P$ generated by the set $V$. A
gallery $U$ is called a _through_ gallery if $F(U)$ does not belong to
$\mathcal{F}$, that is, if $f(P-F(U))$ is empty.
By virtue of ($*$), the linear order $\prec_{U}$ of any gallery $U$ is
obviously a well order. Therefore a gallery can be considered as a well
sequence, which is obviously compatible with the CF $f$. It remains to show
that there are sufficiently many galleries. More precisely, we show that,
_for any $x\in f(X)$, there is a through gallery $U$ such that $x\in
f_{U}(X)$._
But first we need to state the main property of galleries.
Say that a gallery $U^{\prime}$ _continues_ a gallery $U$ if $U\subseteq
U^{\prime}$ and $U$ (as an ordered set) coincides with some initial segment
of $U^{\prime}$. This defines an order relation on the set of all galleries.
The basic property of galleries is that this order is linear, i.e. any two
galleries are comparable: one of them continues the other.
Lemma 2. _Any two galleries are comparable._
Indeed, let $U$ and $V$ be two galleries, and let $W$ be their largest common
initial segment, i.e. the set of those $x\in U\cap V$ for which
$[\prec_{U}x]=[\prec_{V}x]$ and the restrictions of $\prec_{U}$ and
$\prec_{V}$ to this segment coincide.
We claim that $W$ is equal to $U$ or to $V$; this is exactly what has to be
proved.
Assume this is not the case, so that $W$ is different from both $U$ and $V$.
Since $W$ is an initial segment of $U$, by property ($*$) it has the form
$[\prec_{U}x]$, where $x=p(F(W))$. Similarly, $W$ has the form
$[\prec_{V}y]$, where $y=p(F(W))$. So $x=y$. But then $x=y$ belongs to both
$U$ and $V$, hence to $W$, despite the fact that $W=[\prec_{U}x]$. A
contradiction. $\Box$
Corollary. _For any selector $p$ there exists a unique through gallery._
Proof. Take the union of all galleries. $\Box$
Proposition 5. _Let $X$ be an ideal of $P$, and $x\in f(X)$. Then there is a
through gallery $U$ such that $x\in f_{U}(X)$._
Proof. To prove this, we use a special selector $p$. Namely, suppose that a
filter $F$ does not intersect the ideal $X$, that is, $X\subseteq P-F$. Then
two situations are possible. The first is that $f(P-F)$ is not contained in
$X$; in this case, we choose $p(F)$ outside of $X$. The second is that
$f(P-F)\subseteq X$; since $X\subseteq P-F$, the outcast gives
$f(P-F)=f(X)$, which contains $x$; in this case, we choose $p(F)=x$.
Now let $U$ be a through gallery for the selector $p$, which exists by the
Corollary. What does $f_{U}(X)$ look like? Let $V$ be the largest initial
segment of the ‘sequence’ $U$ that does not intersect $X$. It cannot be the
entire $U$: otherwise $F(U)$ would not intersect $X$, so $X\subseteq
P-F(U)$, and by the outcast property $f(X)$ would be empty, contrary to
$x\in f(X)$.
So $V=[\prec u]$ for $u=p(F(V))$ (see ($*$)). That is, $u$ is the first
element of the sequence $U$ that belongs to $X$. By the choice of the
selector $p$, this means that $u=x$. But in this case $x\in f_{U}(X)$,
because $x$ belongs to both $X$ and the ideal $I(V\cup\\{x\\})$. $\Box$
All this together proves the main theorem in the case of an arbitrary poset
$P$.
Data Availability All data generated during this study are included in this
article.
## References
* [1] Aizerman M.A., Malishevski A.V. General theory of best variants choice. IEEE Trans. Automatic Control, AC-26(5) (1981) 1030-1040.
* [2] Alkan A., Gale D. Stable schedule matching under revealed preferences. J. Econ. Theory, 112 (2003) 289-306.
* [3] Baiou M., and Balinski M. The stable allocation (or ordinal transportation) problem, Math. Oper.Res. 27 (2002) 485-503.
* [4] Danilov V., Koshevoy G., Savaglio E. Hyper-relations, choice functions, and orderings of opportunity sets. Soc. Choice Welfare, 45, 1 (2015), 51-69
* [5] Farooq R., Fleiner T., Tamura A. Matching with partially ordered contracts. Japan J. Industr. and Appl. Math. 29 (2012) 401-417.
* [6] Kelso A.S., Crawford V.P. Job matching, coalition formation and gross substitutes. Econometrica, 50 (1982) 1483-1593.
* [7] Komornik V., Komornik Z., Viauroux C. Stable schedule matchings by a fixed point method. Acta Mathematica Hungarica, 135 (2012) 67-79.
* [8] Kraus S., Lehmann D., Magidor M. Nonmonotonic reasoning, preferential models and cumulative logics. Artif. Intell. 44, 1-2 (1990) 167-207 (arXiv:cs/0202021v1 [cs.AI] 18 Feb 2002)
* [9] Plott C.R. Path independence, rationality, and social choice. Econometrica, 41(6) (1973) 1075-1091
# Cubic-scaling all-electron $GW$ calculations with a separable density-fitting space-time approach
Ivan Duchemin<EMAIL_ADDRESS>Univ. Grenoble Alpes, CEA, IRIG-MEM-L_Sim,
38054 Grenoble, France Xavier Blase Univ. Grenoble Alpes, CNRS, Inst NEEL,
F-38042 Grenoble, France
###### Abstract
We present an implementation of the $GW$ space-time approach that allows
cubic-scaling all-electron calculations with standard Gaussian basis sets
without exploiting any localization or sparsity considerations. The
independent-electron susceptibility is constructed in a time representation
over a non-uniform distribution of real-space locations $\\{{\bf r}_{k}\\}$
optimized within a separable resolution-of-the-identity framework to reproduce
standard Coulomb-fitting calculations with meV accuracy. The compactness of
the obtained $\\{{\bf r}_{k}\\}$ distribution leads to a crossover with the
standard Coulomb-fitting scheme for system sizes below a few hundred
electrons. The needed analytic continuation follows a recent approach that
requires the continuation of the screened Coulomb potential rather than the
much more structured self-energy. The present scheme is benchmarked over large
molecular sets and scaling properties are demonstrated on a family of defected
hexagonal boron-nitride flakes containing up to 6000 electrons.
## 1 Introduction
The $GW$ approximation 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 to the exchange-
correlation self-energy has become a standard approach in solid-state physics
to explore the electronic properties of metallic or semiconducting materials.
Its accuracy was indeed proven superior to standard DFT calculations relying
on the Kohn-Sham ansatz for the electronic energy levels (for large benchmark
calculations on inorganic crystals, see e.g. Refs. 12, 13). Further, following
early applications in the late 90s, 14, 15, 16 the $GW$ formalism is nowadays
widely used as well for the study of gas phase or dense ordered or disordered
organic molecular systems. 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48,
49, 50, 51, 52, 53, 54, 55, 56 The development of codes exploiting standard
Gaussian atomic basis sets has in particular allowed the comparison of all-
electron $GW$ calculations with higher-level quantum-chemistry techniques
(e.g. coupled cluster) performed with the very same running parameters
(geometry, atomic basis sets, resolution-of-the-identity, etc.). 28, 35, 39, 41
The scaling of the number of operations needed to perform $GW$ calculations
with respect to the system size is typically $\mathcal{O}(N^{4})$ within
traditional planewave implementations. This scaling can be preserved with
localized basis sets provided that resolution-of-the-identity (RI) techniques
57, 58, 59, 60, 61, 62, 63 are used to avoid calculating response functions,
such as the susceptibility, in the product space associated with valence-to-
virtual molecular orbital products. While such a moderate scaling already
allows calculations on systems containing well over a hundred atoms on
supercomputers, 64, 65, 66, 67 attempts to deliver $GW$ calculations with a
lower scaling appeared with the seminal space-time approach by Rojas, Needs,
and Godby in 1995, 68 and are now blooming. 24, 69, 70, 42, 71, 72, 73, 74, 75, 76
This space-time formalism 68 stands as the first cubic-scaling $GW$ approach
relying on the separability of the independent-electron susceptibility
$\chi_{0}$ as the product of two Green’s functions when expressed over a real-
space grid, adopting further a time representation. This factorisation allows
decoupling the summation over occupied and virtual molecular orbital
contributions, leading to a cubic scaling scheme instead of the traditional
quartic scaling calculation of $\chi_{0}$. Such a reduced scaling does not
rely on any localization or sparsity considerations associated with, e.g.,
3-center integrals within the local direct overlap metric 24, 71, 75 or a
range-truncated Coulomb metric. 76
The imaginary-time formulation at the heart of the $GW$ space-time approach is
identical to the Laplace transform idea already in use in quantum chemistry
for e.g. MP2 calculations. 77, 78 In contrast, the use of a real-space grid
was more naturally rooted in the pseudopotential planewave community, where
Fourier transforming the planewave basis yields relatively sparse uniform
real-space grids.
recently adapted to a full potential projector-augmented wave methodology, 70
building on an earlier application to calculating RPA correlation energies
with cubic scaling. 79
In the case of all-electron calculations, the size of the real-space grid may
seem an a priori bottleneck. However, real-space quadrature strategies have
already been developed with much success in quantum chemistry for accelerating
the calculation of 2-electron Coulomb integrals, including the chain-of-sphere
(COSX) semi-numerical approach to exchange integrals 80, 81, 82 or the general
tensor hypercontraction mathematical framework in its specific least-square
grid optimization implementation (LS-THC). 83, 84, 85 More recently, the
interpolative separable density fitting (ISDF) approach 86, 87 has emerged as
a versatile strategy to combine the standard quantum-chemistry resolution-of-
the-identity (RI) techniques with a separable representation of the
coefficients of molecular orbital products over auxiliary basis sets. The ISDF
approach is now gaining traction in the pseudopotential planewave and
real-space grid communities, 88, 89, 90, 91, 72 including a recent $GW$
implementation. 72
Similarly, building on the expertise with resolution-of-the-identity (RI)
techniques and/or real-space quadratures for Coulomb integrals, the ISDF
scheme is also being explored by the quantum chemistry community working with
localized (e.g. Gaussian) basis sets for explicitly correlated all-electron
calculations such as QMC or Møller-Plesset techniques. 92, 93
In a recent study, we presented an alternative to the ISDF formalism applied
to all-electron Hartree-Fock, MP2 and RPA calculations with Gaussian basis
sets. 94 In this scheme, standard auxiliary $\\{P_{\mu}\\}$ basis sets (e.g.
cc-pVXZ-RI 95 or def2-XZVP-RI 96) are provided as an input, as in any standard
resolution-of-the-identity (RI) calculation, but the fitting procedure takes
as an intermediate the expression of wave functions and densities over compact
non-uniform real-space grids $\\{{\bf r}_{k}\\}$. The corresponding fitting
weights result from solving a quadrature equation that aims at reproducing the
results of a standard Coulomb-fitting (RI-V) calculation. Adopting the space-
time approach, cubic-scaling calculations of the independent-electron
susceptibility at imaginary frequencies, and resulting RPA correlation energy,
could be achieved. As compared to the corresponding RI-V calculation, an
accuracy of a few $\mu$Hartree/electron was demonstrated for RPA correlation
energies with $\\{{\bf r}_{k}\\}$ distributions typically 4 times larger than
the input auxiliary basis, allowing a crossover with the standard quartic-
scaling RI-V RPA calculations for systems of the size of pentacene.
In the present work, we extend this formalism to all-electron cubic-scaling
$GW$ calculations. In contrast with the RPA formalism, where only imaginary-
frequency susceptibilities are needed, we further exploit a recently developed
97, 98 analytic continuation scheme that brings to the real-frequency axis the
dynamically screened Coulomb potential $W$ rather than the much more
structured $GW$ self-energy. As compared to standard RI-V $GW$ calculations,
we demonstrate an accuracy at the meV level for the quasiparticle energies of
large molecular sets. Finally, cubic scaling is evidenced using a family of
defected hexagonal boron-nitride flakes with increasing radius containing up
to 6000 electrons, with a crossover with the standard quartic scaling RI-V
$GW$ calculations for systems containing a few hundred electrons.
## 2 Theory
### 2.1 The $GW$ formalism with resolution-of-the-identity
We start by describing the standard resolution-of-the-identity (RI) framework
for $GW$ calculations. A more detailed discussion on RI techniques applied to
MBPT can be found in Ref. 62. We just recall here that the essence of RI
approximations, developed in particular to tackle the calculation of
2-electron 4-center Coulomb integrals with localized atomic orbital (AO) basis
sets, 57, 58, 59, 60, 61, 62, 63 amounts to expressing the product of 2
molecular orbitals (MOs) over an auxiliary basis set $\\{P_{\mu}\\}$, namely:
$\phi_{n}({\bf r})\phi_{m}({\bf
r})=\sum_{\mu}\mathcal{F}_{\mu}(\phi_{n}\phi_{m})P_{\mu}({\bf r})$ (1)
where we work with finite size systems allowing real molecular orbitals. For
localized atomic-orbitals (AO) basis calculations, the auxiliary basis is
typically 2-3 times larger than the AO basis set used to expand the MOs. As an
example, the accurate RI-V Coulomb-fitting approach60 defines the coefficients
$\mathcal{F}_{\mu}$ as:
$\mathcal{F}^{V}_{\mu}(\phi_{n}\phi_{m})=\sum_{\nu}[V^{-1}]_{\mu\nu}(P_{\nu}|\phi_{n}\phi_{m})$
(2)
with $(P_{\nu}|\phi_{n}\phi_{m})$ the 3-center Coulomb integrals
$(P_{\nu}|\phi_{n}\phi_{m})=\int d{\bf r}d{\bf r}^{\prime}\;\frac{P_{\nu}({\bf
r})\phi_{n}({\bf r}^{\prime})\phi_{m}({\bf r}^{\prime})}{|{\bf r}-{\bf
r}^{\prime}|}$
and $V_{\mu\nu}$ the Coulomb matrix elements in the auxiliary basis. Coming
now to the $GW$ formalism, we start with the expression of the independent-
electron susceptibility along the imaginary-frequency axis:
$\chi_{0}({\bf r},{\bf r}^{\prime};i\omega)=2\sum_{ja}\frac{\phi_{j}^{*}({\bf
r})\phi_{a}({\bf r})\phi_{a}^{*}({\bf r}^{\prime})\phi_{j}({\bf
r}^{\prime})}{i\omega-(\varepsilon_{a}-\varepsilon_{j})}+c.c.$ (3)
where $j$ and $a$ index occupied and virtual molecular orbitals (MOs),
respectively, and where the factor 2 accounts for spin in a closed-shell system.
Expanding MO products over an auxiliary basis in the case of real-valued MOs
leads to:
$\chi_{0}({\bf r},{\bf r}^{\prime};i\omega)=\sum_{\mu\nu}P_{\mu}({\bf
r})\cdot[\chi_{0}^{RI}(i\omega)]_{\mu\nu}\cdot P_{\nu}({\bf r}^{\prime})$ (4)
with
$[\chi_{0}^{RI}(i\omega)]_{\mu\nu}=2\sum_{ja}\frac{\mathcal{F}_{\mu}(\phi_{j}\phi_{a})\mathcal{F}_{\nu}(\phi_{j}\phi_{a})}{i\omega-(\varepsilon_{a}-\varepsilon_{j})}+c.c.$
(5)
In the standard RI framework, it is the quantity
$[\chi_{0}^{RI}(i\omega)]_{\mu\nu}$ that is calculated in the auxiliary
$\\{P_{\mu}\\}$ basis. The following steps start with the definition of the
$GW$ correlation self-energy as a convoluted integral along the real-energy
axis
$\Sigma^{C}({\bf r},{\bf
r}^{\prime};{\color[rgb]{0,0,0}E})=\frac{i}{2\pi}\int_{-\infty}^{\infty}d\omega\;e^{i\omega
0^{+}}G({\bf r},{\bf r}^{\prime};E+\omega)\widetilde{W}({\bf r},{\bf
r}^{\prime};\omega)$ (6)
with $\widetilde{W}=(W-V)$ and where $G$, $W$ and $V$ are the time-ordered
1-body Green’s function, the screened and bare Coulomb potentials,
respectively. In the contour-deformation approach, 4, 5 this expression is
transformed into an integral along the imaginary-energy axis, plus the
contribution of a few residues involving the screened Coulomb potential $W$
calculated at real energies:
$\displaystyle\Sigma_{C}^{GW}({\bf r},{\bf r}^{\prime};{\color[rgb]{0,0,0}E})$
$\displaystyle=\frac{-1}{2\pi}\int_{-\infty}^{\infty}d\omega\;G({\bf r},{\bf
r}^{\prime};E+i\omega)\widetilde{W}({\bf r},{\bf r}^{\prime};i\omega)$ (7)
$\displaystyle-\sum_{i}\phi_{i}({\bf r})\phi_{i}({\bf
r}^{\prime})\widetilde{W}({\bf r},{\bf
r}^{\prime};\varepsilon_{i}-E)\theta(\varepsilon_{i}-E)$
$\displaystyle+\sum_{a}\phi_{a}({\bf r})\phi_{a}({\bf
r}^{\prime})\widetilde{W}({\bf r},{\bf
r}^{\prime};E-\varepsilon_{a})\theta(E-\varepsilon_{a})$
Expressing the Green’s function in a quasiparticle form
$G({\bf r},{\bf r}^{\prime};i\omega)=\sum_{n}\frac{\phi_{n}({\bf
r})\phi_{n}({\bf
r}^{\prime})}{i\omega-\varepsilon_{n}+i\eta\times\text{sgn}(\varepsilon_{n}-\mu)}$
(8)
with $\eta=0^{+}$ and $\mu$ the chemical potential, it appears that the
expectation value of the $GW$ correlation self-energy
$\langle\phi_{n}|\Sigma_{C}^{GW}|\phi_{n}\rangle$ operator only requires
integrals of the kind:
$\langle\phi_{n}\phi_{m}|{W}(z)|\phi_{n}\phi_{m}\rangle=\sum_{\mu\nu}\mathcal{F}_{\mu}(\phi_{m}\phi_{n})\mathcal{F}_{\nu}(\phi_{m}\phi_{n})\langle
P_{\mu}|W(z)|P_{\nu}\rangle$ (9)
with $z$ along the imaginary or real-energy axes. The needed $\langle
P_{\mu}|W(z)|P_{\mu}\rangle$ matrix elements of the screened Coulomb potential
are obtained from a Dyson-like equation projected into the auxiliary basis
$\langle P_{\mu}|\ {W}(z)|P_{\nu}\rangle=\langle
P_{\mu}|V|P_{\nu}\rangle+\sum_{\zeta\rho}\langle
P_{\mu}|V|P_{\zeta}\rangle\cdot[\chi_{0}^{RI}(z)]_{\zeta\rho}\cdot\langle
P_{\rho}|W(z)|P_{\nu}\rangle$ (10)
where the random phase approximation (RPA) is used.
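As a concrete illustration, the Dyson-like equation 10 reduces to a dense linear solve in the auxiliary basis, since $W=V+V\chi_{0}W$ implies $(\mathbb{1}-V\chi_{0})W=V$. A minimal numerical sketch, with random stand-in matrices in place of actual Coulomb and susceptibility data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # toy auxiliary-basis size

# Symmetric positive-definite stand-in for the Coulomb matrix <P_mu|V|P_nu>.
A = rng.normal(size=(n, n))
V = A @ A.T + n * np.eye(n)

# Negative-semidefinite stand-in for the static susceptibility [chi0]_{mu nu}.
C = 0.1 * rng.normal(size=(n, n))
chi0 = -C @ C.T

# Dyson equation W = V + V chi0 W  =>  (1 - V chi0) W = V.
W = np.linalg.solve(np.eye(n) - V @ chi0, V)

assert np.allclose(W, V + V @ chi0 @ W)
```

For a negative-semidefinite $\chi_{0}$ and positive-definite $V$, the matrix $\mathbb{1}-V\chi_{0}$ is safely invertible, so the linear solve is well conditioned.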
### 2.2 The space-time approach from a separable resolution-of-identity
framework
With the size of the auxiliary basis scaling linearly with system size,
straightforward calculation of the $[\chi_{0}^{RI}(i\omega)]_{\mu\nu}$ matrix
elements from equation 5 requires $\mathcal{O}(N^{4})$ steps. In other words,
for each $(P_{\mu},P_{\nu})$ pair a double summation over occupied and
unoccupied MOs is required. Following Almlöf and Häser, 77, 78 the Laplace
transform:
$\frac{1}{i\omega-(\varepsilon_{a}-\varepsilon_{j})}+c.c.=-2\int_{0}^{+\infty}d\tau\;\cos(\omega\tau)e^{-(\varepsilon_{a}-\varepsilon_{j})\tau}$
(11)
where the time integral converges since $(\varepsilon_{j}-\varepsilon_{a})<0$,
allows one to disentangle occupied and virtual energy levels in the
denominator, leading to an imaginary-time formulation
$[\chi_{0}^{RI}(i\tau)]_{\mu\nu}=-2i\sum_{ja}\mathcal{F}_{\mu}(\phi_{j}\phi_{a})\mathcal{F}_{\nu}(\phi_{j}\phi_{a})e^{\varepsilon_{j}\tau}e^{-\varepsilon_{a}\tau}$
(12)
where the factor $i$ is introduced to match the standard definition of the
independent-electron susceptibility in the time domain. However, occupied and
virtual MOs are still entangled in the
$\mathcal{F}_{\mu/\nu}(\phi_{j}\phi_{a})$ weight factors. This is precisely
the goal of the separable RI introduced in Ref. 94 in the context of RPA total
energies, with the expansion :
$\mathcal{F}_{\mu}^{RS}(\phi_{j}\phi_{a})=\sum_{k}M_{\mu k}\;\phi_{j}({\bf
r}_{k})\phi_{a}({\bf r}_{k})$ (13)
where the $\phi_{j}$ and $\phi_{a}$ MOs are factorized, hence the name
separable RI, which we label RI-RS, where RS stands for real space. The present
scheme targets a standard calculation with input molecular orbitals (MO)
Gaussian basis and its associated auxiliary basis sets, and the $\\{{\bf
r}_{k}\\}$ distribution is an intermediate representation designed to
reproduce the accuracy of a standard Coulomb-fitting calculation with the
input basis sets. This is described here below and in Ref. 94 for Hartree-Fock,
RPA and MP2 calculations.
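The Laplace-transform identity of Eq. 11 is easy to verify numerically; a small sketch with arbitrary values for $\omega$ and the transition energy $\Delta=\varepsilon_{a}-\varepsilon_{j}$:

```python
import numpy as np
from scipy.integrate import quad

omega, de = 0.7, 1.3  # arbitrary imaginary frequency and transition energy (de > 0)

# Left-hand side of Eq. 11: 1/(i*omega - de) + c.c. = 2 Re[1/(i*omega - de)].
lhs = 2 * (1 / (1j * omega - de)).real

# Right-hand side: -2 * integral_0^inf cos(omega*tau) exp(-de*tau) d tau.
rhs = -2 * quad(lambda t: np.cos(omega * t) * np.exp(-de * t), 0, np.inf)[0]

assert abs(lhs - rhs) < 1e-6
```

Both sides evaluate to $-2\Delta/(\Delta^{2}+\omega^{2})$, the Lorentzian pole factor that reappears in the time-grid construction of Section 2.4.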
The separable real-space RI (RI-RS) leads to expressing the independent-
electron susceptibility matrix elements in the auxiliary basis as
$[\chi_{0}(i\tau)]_{\mu\nu}\stackrel{{\scriptstyle RI-
RS}}{{=}}\sum_{kk^{\prime}}M_{\mu k}\cdot\chi_{0}({\bf r}_{k},{\bf
r}_{k^{\prime}};i\tau)\cdot M_{\nu k^{\prime}}$ (14)
where for any $({\bf r},{\bf r}^{\prime})$ pair of points in real-space
$\displaystyle\chi_{0}({\bf r},{\bf r}^{\prime};i\tau)$ $\displaystyle=$
$\displaystyle-iG({\bf r},{\bf r}^{\prime};i\tau)G({\bf r}^{\prime},{\bf
r};-i\tau)$ (15) $\displaystyle G({\bf r},{\bf
r}^{\prime};{\color[rgb]{0,0,0}i\tau})$ $\displaystyle=$
$\displaystyle\phantom{-}i\sum_{j}^{\text{occ}}\phi_{j}({\bf r})\phi_{j}({\bf
r}^{\prime})e^{\varepsilon_{j}\tau}\;\;\;\;(\tau>0)$ (16) $\displaystyle=$
$\displaystyle-i\sum_{a}^{\text{vir}}\phi_{a}({\bf r})\phi_{a}({\bf
r}^{\prime})e^{\varepsilon_{a}\tau}\;\;\;\;(\tau<0)$ (17)
where $G$ is the time-ordered one-body Green’s function. Considering here that
the MOs are spin-orbitals, one can recover the factor two in Equation 12 for
spin-restricted systems. Eq. 15 is the standard equation for the original
space-time approach68 with the summations over occupied and virtual MOs
completely decoupled, leading to a strictly cubic-scaling number of
operations, independently of any localization or sparsity properties. After
construction of the imaginary-time independent-electron susceptibility in the
$\\{P_{\mu}\\}$ Gaussian auxiliary basis following Eq. 14, the
$[\chi_{0}^{RI}(i\omega)]_{\mu\nu}$ are obtained by Fourier transform at
imaginary frequencies. Following Eq. 10, the screened Coulomb potential
$\langle P_{\mu}|\ {W}(z)|P_{\nu}\rangle$ is finally obtained along the
imaginary frequency axis. Details about the time and frequency grids will be
given below when discussing the analytic continuation of $W$ to the real-axis.
The overall flow of calculations is presented in Fig. 1. While we use in the
present scheme the analytic continuation of $W$, the standard analytic
continuation of $\Sigma$ to the real axis can also be used once the
$\chi_{0}(P,Q;i\omega)$ susceptibilities are obtained using the real-space
imaginary-time approach.
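The cubic-versus-quartic bookkeeping behind Eqs. 12 and 15-17 can be checked on toy data: the double sum over occupied-virtual pairs factorizes into a pointwise product of two Green's-function factors on the grid. In the sketch below, orbitals and energies are random stand-ins, and spin factors together with the common $-i$ prefactor are dropped from both sides:

```python
import numpy as np

rng = np.random.default_rng(1)
nocc, nvir, nk = 3, 5, 12
tau = 0.4

eps_occ = np.sort(rng.uniform(-2.0, -0.5, nocc))  # occupied levels (toy)
eps_vir = np.sort(rng.uniform(0.5, 2.0, nvir))    # virtual levels (toy)
phi_occ = rng.normal(size=(nocc, nk))             # phi_j(r_k) (toy)
phi_vir = rng.normal(size=(nvir, nk))             # phi_a(r_k) (toy)

# Quartic route: explicit double sum over (j, a) pairs, as in Eq. 12.
Pja = np.einsum('jk,ak->jak', phi_occ, phi_vir)   # phi_j(r_k) phi_a(r_k)
w = np.exp(eps_occ[:, None] * tau) * np.exp(-eps_vir[None, :] * tau)
chi0_quartic = np.einsum('jak,ja,jal->kl', Pja, w, Pja)

# Cubic route: product of occupied and virtual Green's-function factors (Eqs. 16-17).
Gp = (phi_occ * np.exp(eps_occ[:, None] * tau)).T @ phi_occ   # occupied part
Gm = (phi_vir * np.exp(-eps_vir[:, None] * tau)).T @ phi_vir  # virtual part
chi0_cubic = Gp * Gm  # elementwise product over (r_k, r_k') pairs

assert np.allclose(chi0_quartic, chi0_cubic)
```

Building the two factors costs $\mathcal{O}(N_{k}^{2}N_{occ})$ and $\mathcal{O}(N_{k}^{2}N_{vir})$ operations, hence the cubic scaling, while the explicit pair sum is quartic.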
Figure 1: Schematic representation of the steps involved in the present cubic
scaling all-electron space-time approach. The optimized set of real-space
positions $\\{{\bf r}_{k}\\}$ is typically 4 times as large as the
corresponding input Gaussian auxiliary basis $\\{P_{\mu}\\}$. The number of
imaginary times and frequencies is set by n$\tau$ and n$\omega$. In the
contour deformation approach (see Inset) only the screened Coulomb potential
$W$ needs to be continued from the imaginary to the real-energy axis, avoiding
the continuation of the much more structured self-energy (see Ref. 98).
### 2.3 Construction of the $\\{{\bf r}_{k}\\}$ distributions
A crucial aspect of the present scheme is the size of the $\\{{\bf
r}_{k}\\}$-set that controls the prefactor associated with the present cubic-
scaling scheme. Following our implementation of cubic-scaling RPA calculations
in an all-electron space-time approach, 94 the central idea is not to use a
generic real-space grid (such as Becke grids 99 adopted to express densities
in DFT codes) but to optimize for each atomic species a reduced set of
$\\{{\bf r}_{k}\\}$ points sufficient to reach the accuracy of the standard
Coulomb-fitting RI-V approximation in conjunction with the chosen auxiliary
basis set. This task is performed by minimizing the difference between the
$\mathcal{F}^{RS}$ and $\mathcal{F}^{V}$ fitting procedures, as defined in
Eqs. 2 and 13, in the Coulomb-norm sense, taking into account all MO products
of a single atom of the species considered. As introduced in Ref. 94, the
$M_{\mu k}$ coefficients are fixed through a linear least-squares equation, so
that only the $\\{{\bf r}_{k}\\}$-sets are considered as optimization
variables. The global minimization process thus reads
$\operatorname*{argmin}_{\\{{\bf
r}_{k}\\}}\;\sum_{\mu\alpha{\alpha}^{\prime}}\Big{|}\big{(}\mathcal{F}_{\mu}^{RS}(\alpha{\alpha}^{\prime})-\mathcal{F}_{\mu}^{V}(\alpha{\alpha}^{\prime})\big{)}P_{\mu}\Big{|}_{V}^{2}$
(18)
where the $\\{\alpha\\}$ are the Gaussian basis functions used to expand the
MOs.
For a given atom, the initial set of points are constructed as a superposition
of high symmetry subsets of Lebedev grids up to order 9, associated with
different sphere radii (see Supporting Information Ref. 94). The optimization
process starts by minimizing the penalty function of Eq. 18 adjusting first
the radii. This is similar to the optimization strategy adopted in the grid-
based formulation of LS-THC 85 but fitting the codensity coefficients rather
than the 4-center Coulomb integrals. In a second step, all constraints are
released and every point is allowed to move independently. This non-linear
minimization process is performed using a basin-hopping mechanism coupled to a
L-BFGS (limited memory Broyden-Fletcher-Goldfarb-Shanno) algorithm. We
emphasize that such a step is done once and for all for a given element and the
chosen basis sets. Experimenting with such a strategy for the def2-TZVP /
def2-TZVP-RI associated basis sets leads to 100 $\\{{\bf r}_{k}\\}$ points for
H and He, and 336, 436 and 536 points for elements in the second, third, and
fourth row of the periodic table, respectively. Such grid sizes allow an
agreement at the meV level between subsequent quasiparticle energies
calculated with the present real-space approach and the standard Coulomb-
fitting RI-V scheme. Except for the first row, this is typically 3.5 to 4.5
times larger than the number of elements in the def2-TZVP-RI set. Better
optimization schemes and a reduced initial number of Lebedev subsets may
further shrink these $\\{{\bf r}_{k}\\}$ distribution sizes. However, as shown
below, the present approach already provides an excellent accuracy-to-cost
ratio and appears to be very robust, with no outliers as tested over large
molecular sets.
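As an illustration of this two-stage strategy, a basin-hopping driver wrapped around an L-BFGS-B local minimizer is readily available in SciPy; the rugged penalty below is a hypothetical stand-in, not the actual objective of Eq. 18:

```python
import numpy as np
from scipy.optimize import basinhopping

def penalty(r):
    """Hypothetical rugged penalty with many local minima (stand-in only)."""
    return np.sum(r**2) + 0.5 * np.sum(np.cos(5 * r))

x0 = np.full(4, 2.0)  # toy 4-dimensional starting point
res = basinhopping(penalty, x0, niter=50,
                   minimizer_kwargs={"method": "L-BFGS-B"})

# The hopping mechanism can only improve on the first local minimization.
assert res.fun <= penalty(x0)
```

The random hops escape local basins while the gradient-based L-BFGS refinement converges within each basin, mirroring the strategy described above.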
In a second step, the $\\{{\bf r}_{k}\\}$ distribution for the molecular
system is built as the superposition of the isolated atoms $\\{{\bf r}_{k}\\}$
distributions and only the weights $\\{M_{\mu k}\\}$, as defined in Eq. 13,
need to be calculated for each considered molecular system. Such a step only
requires $\mathcal{O}(N^{3})$ operations since the least-square estimator
matrix $[M]_{\mu k}$ is obtained in a matrix multiplication/inversion
formulation from the target $\mathcal{F}^{V}_{\mu}$ coefficients (see Ref.
94). As such, the weights $\\{M_{\mu k}\\}$ are uniquely defined once the
$\\{{\bf r}_{k}\\}$ grid and $\mathcal{F}^{V}_{\mu}$ factors are set up. We
provide in the Supporting Information a graph confirming the cubic-scaling of
the $\\{M_{\mu k}\\}$ construction that amounts to about 25$\%$ of the total
CPU time, including the calculation of the target $\mathcal{F}^{V}_{\mu}$, for
non-self-consistent $G_{0}W_{0}$ calculations. This 2-step process, namely
the optimization of the $\\{{\bf r}_{k}\\}$-distribution on isolated atoms,
dramatically simplifies the minimization process while preserving excellent
accuracy as demonstrated below.
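The least-squares determination of the $\\{M_{\mu k}\\}$ weights amounts to a standard linear solve; a toy sketch with random stand-in data follows (the actual estimator of Ref. 94 fits in the Coulomb metric, which is omitted here for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)
naux, nk, npair = 8, 20, 40  # toy dimensions

Phi = rng.normal(size=(nk, npair))   # Phi[k, p] = phi_n(r_k) phi_m(r_k), p = (n, m)
FV = rng.normal(size=(naux, npair))  # target Coulomb-fitting coefficients (toy)

# Fit Eq. 13 in the least-squares sense:  M Phi ~= FV.
M = np.linalg.lstsq(Phi.T, FV.T, rcond=None)[0].T

# Equivalent normal-equations form:  M = FV Phi^T (Phi Phi^T)^{-1}.
M_ne = FV @ Phi.T @ np.linalg.inv(Phi @ Phi.T)

assert np.allclose(M, M_ne)
```

Both the matrix multiplications and the inversion of the $N_{k}\times N_{k}$ Gram matrix scale as $\mathcal{O}(N^{3})$, consistent with the cubic cost quoted above.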
In our previous RPA calculations using the present separable RI with
Laplace transform scheme, 94 the crucial observation was that $\\{{\bf
r}_{k}\\}$-sets typically 4 times larger than the used auxiliary
$\\{P_{\mu}\\}$ basis set were sufficient to reproduce the accuracy of
standard RI-V calculations to within a few $\mu$Hartree/electron for the
exchange, RPA and MP2 total energies. Namely, replacing the standard RI-V
$\mathcal{F}_{\mu}^{V}$ coefficients by their $\mathcal{F}_{\mu}^{RS}$
approximants preserved an excellent accuracy, with a number of points
sufficiently small to offer a crossover with the standard quartic-scaling RI-V
RPA approach for systems of the size of pentacene.
We now perform the corresponding accuracy check for the quantity
of interest here, namely the $GW$ quasiparticle energies, showing that meV
accuracy can be achieved with a crossover between the present separable RI-RS
scheme and standard RI-V calculations for systems containing less than a few
hundred electrons. This crossover is independent of the compactness and
dimensionality of the studied systems since sparsity and localization are not
exploited.
### 2.4 Analytic continuation, frequency and time grids
An important aspect of the space-time approach is the required analytic
continuation from the imaginary to the real-frequency axis. With the
calculation of the susceptibility $\chi_{0}(i\tau)$ at imaginary times, the
imaginary-frequency $\chi_{0}(i\omega)$ analog can be obtained by Fourier
transform. From such quantities, the screened Coulomb potential $W(i\omega)$
and self-energy $\Sigma(i\omega)$ can be obtained at imaginary frequencies
and efficiently continued analytically to the real energy axis as performed in
many codes.
An alternative to the analytic continuation of the self-energy was proposed by
Christoph Friedrich in the context of $GT$ calculations on solid iron, 97 and
by ourselves in the present case of $GW$ calculations on molecular systems
with extensive benchmark accuracy checks.98 The central idea is to adopt the
contour deformation scheme where the quantity needed along the real-axis is no
longer the self-energy directly, but the screened Coulomb potential (see
second and third lines of Eq. 7). Since the self-energy contains $(N_{W}\times
N_{G})$ poles, where $N_{G}$ and $N_{W}$ are respectively the number of poles
of the Green’s function and screened Coulomb potential, the screened Coulomb
potential is much less structured than the self-energy itself, leading to a
much more robust analytic-continuation scheme. Difficult test cases drawn from
the $GW$100 test set, 36 such as the MgO or BN dimers, were shown to be
very accurately treated with calculations of the screened Coulomb
potential $W(i\omega)$ for no more than 12 frequencies along the imaginary
axis. In particular, these frequencies are constructed so as to minimize the
error over the imaginary-axis integration contribution to Equation 7. We
refer the reader to Ref. 98 for a detailed presentation of this scheme. We
keep the number of imaginary frequencies to $n\omega$=12 that was shown in
this former study to lead to sub-meV accuracy, as compared to the contour
deformation scheme, for $GW$ calculations on frontier orbitals using this
“robust” analytic continuation scheme.
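The specific frequency construction is detailed in Ref. 98; as a generic illustration of analytic continuation from imaginary to real frequencies, the Thiele continued-fraction (Padé) interpolation of Vidberg and Serene can be sketched as follows, here continuing a model "W-like" rational function rather than actual screened-Coulomb data:

```python
import numpy as np

def thiele_pade(zs, us):
    """Thiele continued-fraction coefficients interpolating (zs[i], us[i])
    (Vidberg-Serene recursion for reciprocal differences)."""
    n = len(zs)
    g = np.zeros((n, n), dtype=complex)
    g[0] = us
    for p in range(1, n):
        g[p, p:] = (g[p-1, p-1] - g[p-1, p:]) / ((zs[p:] - zs[p-1]) * g[p-1, p:])
    return np.diag(g)

def pade_eval(zs, a, z):
    """Evaluate the fraction a0/(1 + a1(z-z0)/(1 + a2(z-z1)/(1 + ...))) at z."""
    A_prev, A = 0.0, a[0]
    B_prev, B = 1.0, 1.0
    for n in range(1, len(a)):
        A, A_prev = A + (z - zs[n-1]) * a[n] * A_prev, A
        B, B_prev = B + (z - zs[n-1]) * a[n] * B_prev, B
    return A / B

# Sample a (1,2)-rational model function on the imaginary axis and continue
# it to a real frequency; 4 points reproduce such a rational function exactly.
f = lambda z: (3 * z + 5) / ((z + 1) * (z + 3))
zs = 1j * np.array([0.2, 0.6, 1.1, 2.3])
a = thiele_pade(zs, f(zs))
w_real = pade_eval(zs, a, 0.7 + 0j)

assert abs(pade_eval(zs, a, zs[0]) - f(zs[0])) < 1e-10  # interpolation property
assert abs(w_real - f(0.7)) < 1e-8                       # exact continuation
```

For the much more structured self-energy, such a rational fit is fragile, which is precisely the motivation for continuing $W$ instead.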
n$\omega$ | RI-V | RI-RS | RI-RS + LT
---|---|---|---
HOMO
6 | -7.55777 | -7.55777 | -7.55787
8 | -7.56073 | -7.56072 | -7.56071
10 | -7.56113 | -7.56112 | -7.56112
12 | -7.56112 | -7.56111 | -7.56111
14 | -7.56112 | -7.56111 | --
LUMO
6 | -0.75723 | -0.75719 | -0.75713
8 | -0.76020 | -0.76016 | -0.76018
10 | -0.76046 | -0.76042 | -0.76042
12 | -0.76044 | -0.76040 | -0.76040
14 | -0.76044 | -0.76040 | --
Table 1: Acridine def2-TZVP $G_{0}W_{0}$@PBE0 HOMO and LUMO (in eV)
convergence against the imaginary frequency grid size. We keep the ratio
$n\tau=1.5\times n\omega$. The real-space with Laplace transform (RI-RS + LT)
depends on both frequency and time grids. All calculations are performed with
the auxiliary def2-TZVP-RI basis set, while RI-RS calculations use an extra
$\\{{\bf r}_{k}\\}$ distribution optimized for the corresponding
def2-TZVP/def2-TZVP-RI basis sets association.
Once the imaginary frequencies are set, the corresponding imaginary-time grid
is constructed following Ref. 94, where the present space-time approach was
explored for RPA total energy calculations. The selected times
$\\{\tau_{p},p=1,n\tau\\}$ are optimized for the chosen set of imaginary
frequencies $z_{k}$ (k=1, n$\omega$) through the minimization process:
$\arg\min_{\omega_{k}^{p},\tau_{p}}\left[\sum_{k}\int^{\ln(E_{max})}_{\ln(E_{min})}du\left|\sum_{p}\omega_{k}^{p}e^{-\tau_{p}e^{u}}-\left[\frac{1}{e^{u}+iz_{k}}-\frac{1}{e^{u}-iz_{k}}\right]\right|^{2}\right]$
(19)
where $\omega_{k}^{p}$ is the weight associated with a given $\tau_{p}$ time
for a targeted $z_{k}$ frequency. The energies $E_{min}$ and $E_{max}$ are the
energy gap and the maximum $(\varepsilon_{a}-\varepsilon_{j})$ value,
respectively. The $1/(e^{u}\pm iz_{k})$ factors represent the pole structure
of the independent-electron susceptibility along the imaginary axis. The
$\sum_{p}\omega_{k}^{p}e^{-\tau_{p}e^{u}}$ approximant reflects the fact
that $e^{-a|\tau|}$ ($a>0$) is the Fourier transform of
$2a/(a^{2}+\omega^{2})$ within a prefactor. Following Refs. 94, 98, the log
scale is used so as to allow a regular sampling of the error oscillations at
energies between $E_{min}$ and $E_{max}$. The problem can then be solved in a
traditional least-squares approach using a uniform sampling in $u$. Such a
formulation preserves excellent accuracy, as demonstrated in Table 1, with
a number of grid points comparable to that commonly adopted with the more
elaborate minimax approach. 70, 79, 100 Similarly to the minimax formulation,
the grid points $\tau_{p}$ have been pre-tabulated with the $E_{max}/E_{min}$
ratio as a single parameter, so as to minimize the computational effort of the
setup. On the other hand, the $\omega^{p}_{k}$ coefficients can be
conveniently recalculated on the fly as the result of a simple linear least
square equation. In association with $n\omega$=12 imaginary frequencies,
$n\tau$=18 times are selected to reach sub-meV accuracy on the quasiparticle
energies. We provide in Table 1 a typical test of accuracy, selecting the
def2-TZVP $G_{0}W_{0}$@PBE0 HOMO and LUMO energies of acridine, the first
element of the molecular set of Ref. 41 studied in full details in the next
section. In this Table, RI-RS without Laplace transform only differs from the
standard Coulomb-fitting (RI-V) by the construction of the $\mathcal{F}_{\mu}$
fitting coefficients. We observe in particular that the dependence on the
n$\omega$ grid falls well below the meV for n$\omega\geq$ 10. The real-space
approach with Laplace-transform (RI-RS+LT) depends further on the imaginary-
time grid. However, for a given n$\omega$ imaginary-frequency grid, a time-
grid with n$\tau=1.5\times n\omega$ introduces negligible errors, confirming
overall our choice of n$\omega=12$ and n$\tau$=18 running parameters.
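For fixed $\\{\tau_{p}\\}$, the weights entering Eq. 19 follow from a plain linear least-squares fit of exponentials to the Lorentzian pole factor; a toy sketch with a hypothetical, non-optimized time grid:

```python
import numpy as np

z = 1.0  # one target imaginary frequency z_k (toy value)

# Hypothetical fixed time grid (not the pre-tabulated optimized one).
taus = np.array([0.02, 0.06, 0.15, 0.35, 0.8, 1.8, 4.0, 9.0])

# Log-uniform energy sampling between E_min and E_max, as in Eq. 19.
x = np.exp(np.linspace(np.log(0.5), np.log(50.0), 400))

A = np.exp(-np.outer(x, taus))    # design matrix exp(-tau_p * x)
target = 2 * x / (x**2 + z**2)    # Lorentzian factor encoding the pole structure

w = np.linalg.lstsq(A, target, rcond=None)[0]
err = np.max(np.abs(A @ w - target))

assert err < 0.05  # an 8-point exponential fit already tracks the Lorentzian
```

Optimizing the $\tau_{p}$ positions themselves, as done for the pre-tabulated grids, pushes this residual down by further orders of magnitude.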
## 3 Results
### 3.1 Validation and accuracy
We benchmark the accuracy of the present scheme using the recent set of 24
intermediate size molecules with acceptor character proposed in Ref. 41. Our
calculations are performed at the def2-TZVP $G_{0}W_{0}$@PBE0 level associated
with the corresponding def2-TZVP-RI auxiliary basis.96 Our goal here is not to
carry out calculations in the complete-basis-set limit, but rather to assess the
accuracy of the present space-time approach, as compared to the standard
Coulomb-fitting (RI-V) scheme, using a reasonable basis set. The real-space
$\\{{\bf r}_{k}\\}$ sets were thus optimized for the def2-TZVP and def2-TZVP-
RI basis sets association, following the scheme described above and summarized
in Eqn. 18. As discussed above, the size of the $\\{{\bf r}_{k}\\}$ atomic
distributions amounts to 136 for H and 336 for second-row elements.
| HOMO | LUMO
---|---|---
| RI-V [eV] | RI-RS [eV] | RI-V [eV] | RI-RS [eV]
anthracene | -7.0787 | -7.0787 | -0.4233 | -0.4234
acridine | -7.5611 | -7.5611 | -0.7604 | -0.7604
phenazine | -7.9755 | -7.9754 | -1.1796 | -1.1799
azulene | -7.1271 | -7.1272 | -0.5595 | -0.5597
benzoquinone (BQ) | -9.7275 | -9.7273 | -1.6004 | -1.6006
naphthalenedione | -9.3297 | -9.3296 | -1.5520 | -1.5521
dichlone | -9.4112 | -9.4111 | -1.9812 | -1.9814
F4-BQ | -10.5855 | -10.5853 | -2.3221 | -2.3220
Cl4-BQ | -9.7245 | -9.7246 | -2.5192 | -2.5191
nitrobenzene | -9.7868 | -9.7867 | -0.5485 | -0.5486
F4-benzenedicarbonitrile | -10.2336 | -10.2334 | -1.7207 | -1.7210
dinitro-benzonitrile | -10.7596 | -10.7595 | -1.8683 | -1.8684
nitro-benzonitrile | -10.2038 | -10.2038 | -1.3992 | -1.3993
benzonitrile | -9.5108 | -9.5107 | 0.1831 | 0.1828
fumaronitrile | -10.9712 | -10.9710 | -1.0740 | -1.0740
mDCNB | -10.0173 | -10.0175 | -0.7227 | -0.7228
TCNE | -11.4676 | -11.4674 | -3.2543 | -3.2544
TCNQ | -9.1373 | -9.1373 | -3.5795 | -3.5795
maleic-anhydride | -10.7783 | -10.7782 | -1.0029 | -1.0031
phthalimide | -9.6757 | -9.6755 | -0.6380 | -0.6381
phthalic-anhydride | -10.1111 | -10.1111 | -0.9060 | -0.9061
Cl4-isobenzofuranedione | -9.5809 | -9.5807 | -1.7124 | -1.7127
NDCA | -8.7061 | -8.7058 | -1.3302 | -1.3304
BODIPY | -7.8008 | -7.8008 | -1.6315 | -1.6316
Max. err. | +0.32 meV / -0.23 meV | +0.07 meV / -0.33 meV
MAE | 0.12 meV | 0.14 meV
MSE | 0.08 meV | 0.12 meV
Table 2: HOMO and LUMO energies at the def2-TZVP $G_{0}W_{0}$@PBE0 level for
the molecular set of Ref. 41. Molecules are arranged by chemical families in
the order of Ref. 41. The standard Coulomb-fitting scheme (RI-V) performed
with the def2-TZVP-RI auxiliary basis set 96 is compared to the present real-
space Laplace-transform (RI-RS) calculations. Negative and positive maximum
errors, the mean absolute (MAE) and mean signed (MSE) errors are indicated in
meV. Values leading to the largest error are in bold.
We provide in Table 2 the def2-TZVP $G_{0}W_{0}$@PBE0 highest occupied (HOMO)
and lowest unoccupied (LUMO) molecular orbital energies calculated using the
standard Coulomb-fitting (RI-V) scheme and the separable real-space (RI-RS)
approach in conjunction with the Laplace transform scheme. Both calculations
are performed with the “robust” analytic continuation (AC) scheme.98 Direct
contour-deformation calculations without any analytic continuation, namely
directly calculating the needed residues of $W$ along the real axis within the
standard quartic-scaling RI-V formalism, show that the errors introduced by
the AC are well below the meV for the HOMO and LUMO energy levels of the
molecules contained in this set. Analysis of the results shows that
for such $\\{{\bf r}_{k}\\}$ distributions, the error on the quasiparticle
energies remains below the meV. This accuracy may be tuned by
increasing/decreasing the size of the $\\{{\bf r}_{k}\\}$ distributions, but
the present accuracy-to-size trade-off is already excellent in practice.
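The error statistics reported in Table 2 (maximum signed errors, MAE, MSE) can be reproduced from any pair of RI-V / RI-RS energy columns. The arrays below are a small illustrative subset of the HOMO column, not the full table, so the resulting numbers differ from the full-set statistics.

```python
import numpy as np

# HOMO energies (eV) for the first five molecules of Table 2:
# RI-V reference vs present RI-RS scheme.
ri_v = np.array([-7.0787, -7.5611, -7.9755, -7.1271, -9.7275])
ri_rs = np.array([-7.0787, -7.5611, -7.9754, -7.1272, -9.7273])

err_mev = (ri_rs - ri_v) * 1000.0  # signed errors RI-RS minus RI-V, eV -> meV
max_pos, max_neg = err_mev.max(), err_mev.min()
mae = np.abs(err_mev).mean()       # mean absolute error
mse = err_mev.mean()               # mean signed error

print(f"max err: +{max_pos:.2f} / {max_neg:.2f} meV, "
      f"MAE: {mae:.2f} meV, MSE: {mse:.2f} meV")
```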
We further perform benchmark def2-TZVP $G_{0}W_{0}$@PBE0 calculations on the
$GW$100 test set 36, 35, 101, 43, 47, 98, 72 that contains elements from the
third and fourth periods of the periodic table, including transition metal
complexes. We exclude the 5 systems containing 5th-period elements for which
the def2-TZVP basis set requires the use of an effective core potential,
namely Xe, Rb2, I2, vinyl iodide (C2H3I) and aluminum iodide (AlI3). Similarly
to the previous test set, the error induced by the real-space RI-RS with
Laplace transform technique, as compared to a standard RI-V calculation, is
below the meV for most molecules, as reported in Fig. 2. Three systems (Kr,
CuCN, SF4) show an error on the HOMO slightly larger than 1 meV (in absolute
value), while all errors on the LUMO are below the meV. As emphasized
above, further optimizing the distribution of $\\{{\bf r}_{k}\\}$ points may
bring all errors below the meV, but the purpose of the present study is to
show that the present scheme, as it stands, already brings the error
consistently to the meV level. Overall, the mean absolute errors (MAE) amount to
0.21 meV and 0.09 meV for the HOMO and LUMO, respectively. All data can be found
in the Supporting Information. The present data confirm, in the specific case
of $GW$ calculations, previous studies reporting the excellent accuracy-to-
cost ratio associated with grid-based techniques for the evaluation of exact
exchange or explicit correlation energies in the context of all-electron
atomic-orbital basis set calculations. 80, 81, 82, 83, 84, 85, 102, 94, 103
Figure 2: HOMO/LUMO def2-TZVP $\mathrm{G_{0}W_{0}@PBE0}$ quasi-particle energy
discrepancy analysis over the $GW$100 test set, excluding the 5 systems containing
5th-period elements (see text). The error is that of the present real-space
Laplace-transform (RI-RS) approach with respect to the standard Coulomb-
fitting (RI-V) approach performed with the corresponding def2-TZVP-RI
auxiliary basis set. 96 The molecules of the set are sorted according to the
maximum period (row) involved within the periodic table.
### 3.2 Scaling analysis
We finally address the issue of scaling with respect to the system size
through the example of finite-size hexagonal boron-nitride (h-BN) “flakes”
containing a central point-defect. The study of the optical emission mediated
by defects in h-BN is an important technological research area, with the
prospect of having at hand stable, room-temperature, polarized and ultrabright
single-photon sources, together with a scientific challenge when it comes to
identifying the defects and mechanisms responsible for such sharp emission lines
in the visible range. 104, 105, 106, 107, 108
We select as a test case the $C_{B}V_{N}$ (nitrogen vacancy plus carbon
substitution to neighbouring boron) defect that has been recently identified
as a possible candidate for emission at about 2 eV.109 Our goal here is not to
confirm the likelihood of this assignment, but rather to start exploring whether
many-body calculations for a typical defect can be performed using finite-size
clusters, rather than the traditional supercell approach with periodic
boundary conditions (PBC). Indeed, the use of PBC complicates the calculations
of charged excitations due to the electrostatic interaction between cells, and
the Coulomb potential must be truncated to avoid spurious contributions even
in the limit of large supercells. As such, the modeling of the opto-electronic
properties of defects in h-BN at the many-body $GW$ and Bethe-Salpeter level
remains scarce due in particular to the cost of performing the required large-
scale $GW$ calculations. 110, 109
The edges of the h-BN flakes are passivated by hydrogen atoms to avoid dangling
bonds, and the HOMO-LUMO gap is clearly controlled by very localized defect
states yielding energy levels within the gap of pristine h-BN (see Inset of Fig.
3 for the LUMO). The sizes of the studied flakes correspond to average diameters
ranging from 21.5 to 56.4 Å, containing from 137 to 941 C, B or N atoms, that
is, from 167 to 1019 atoms including the passivating H atoms. The average diameter
is defined as $\overline{D}=2\sqrt{N_{at}S_{at}/\pi},$ where $N_{at}$ is the
number of B or N atoms, and $S_{at}=3\sqrt{3}d_{BN}^{2}/4$ is the effective
surface per B or N atom in the hexagonal lattice, with $d_{BN}$ the BN bond
length. Structural relaxation at the PBE0 6-311G* level indicates that the
ground state of these systems is not spin-polarized.
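As a sanity check, the average diameter formula can be evaluated numerically. The B-N bond length $d_{BN}\simeq 1.45$ Å used below is an assumed typical value, not a parameter quoted in the text, and the heavy-atom counts include the single C substituent.

```python
import math

def average_diameter(n_heavy, d_bn=1.45):
    """Effective flake diameter D = 2*sqrt(N_at*S_at/pi), with
    S_at = 3*sqrt(3)*d_BN^2/4 the area per B/N site in the hexagonal lattice."""
    s_at = 3.0 * math.sqrt(3.0) * d_bn**2 / 4.0  # effective surface per site (A^2)
    return 2.0 * math.sqrt(n_heavy * s_at / math.pi)  # diameter in Angstrom

# Smallest and largest flakes of the study (137 and 941 heavy atoms)
print(average_diameter(137))  # ~21.8 A, close to the quoted 21.5 A
print(average_diameter(941))  # ~57.2 A, close to the quoted 56.4 A
```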
Figure 3: Plot of the 6-311G* $G_{0}W_{0}$@PBE0 HOMO-LUMO gap as a function of
the inverse flake average diameter. The Inset represents the Kohn-Sham LUMO
associated with our largest flake (1019 atoms).
Before discussing scaling properties, we briefly comment on the evolution of
the HOMO-LUMO energy gap obtained at the 6-311G* $G_{0}W_{0}$@PBE0 level 111
as a function of system size (see Fig. 3). Our goal here is not to obtain
converged values with respect to basis completeness, but to study the
evolution of the gap with system size using a minimal triple-zeta plus
polarization basis, keeping in mind that the 6-311G* HOMO-LUMO gaps for our
defected flakes are typically 70 meV larger than those obtained with the larger
def2-TZVP basis set. The decrease of the gap with increasing diameter can be
attributed to polarization effects, namely the fact that upon calculating the
ionization potential or electron affinity, as measured by a photo-emission
experiment, the added charge localized on the defect generates a long-range
Coulomb field that polarizes the surrounding atoms. Such a polarization,
properly described within the $GW$ formalism, stabilizes the added hole or
electron, closing the gap. In the case of finite size systems, this
polarization is incomplete as compared to an infinite sheet, leading to a
HOMO-LUMO gap that is too large. Fitting the $GW$ gap up to second
order in (1/$\overline{D}$), we find the linear contribution to be
negligible, leaving a simple quadratic dependence. Such a quadratic
behaviour stems from the reaction field generated by the 2D density of dipoles
induced by a charge added to or removed from the LUMO/HOMO levels. 112 The
extrapolated gap at infinite radius amounts to 4.96 eV, still $\simeq$60 meV
away from the gap of the largest flake studied. As expected, the $G_{0}W_{0}$
gap is much larger than the PBE0 Kohn-Sham gap of 3.07 eV obtained for the
largest system. When only the data for the 4 smallest flakes are fitted, the
extrapolated value remains within 20 meV of that obtained from the fit over
all points. This indicates the stability of the extrapolation
scheme and suggests that an accurate asymptotic value can be obtained from
calculations on systems containing a rather limited number of atoms.
While the $GW$ quasiparticle gap in the isolated-defect limit requires
extrapolation to infinite size, preliminary results indicate that optical
excitations, which are neutral excitations of the system, converge much faster
as a function of system size.
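The quadratic extrapolation in 1/$\overline{D}$ can be sketched as follows. The gap values below are synthetic, generated from an assumed quadratic law with illustrative coefficients, not the actual data of Fig. 3.

```python
import numpy as np

# Synthetic gaps (eV) following gap(D) = g_inf + c/D^2, as suggested by the
# dipole-layer reaction-field argument; g_inf and c are illustrative values.
g_inf_true, c_true = 4.96, 550.0  # eV, eV*A^2 (assumed)
diameters = np.array([21.5, 28.0, 35.0, 44.0, 56.4])  # A
gaps = g_inf_true + c_true / diameters**2

# Fit the gap as a polynomial in x = 1/D up to second order.
x = 1.0 / diameters
quad, lin, const = np.polyfit(x, gaps, 2)  # highest degree first

print(f"linear term:   {lin:.3e}")  # negligible for these exactly quadratic data
print(f"extrapolated gap (D -> inf): {const:.3f} eV")
```

Dropping the largest flakes from `diameters` and refitting mimics the stability check performed in the text.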
We now plot (log scale) in Fig. 4 the total CPU time (namely the CPU time
summed over all cores for the complete run) associated with the $G_{0}W_{0}$
calculations reported in Fig. 3. We compare in particular the standard RI-V
(Coulomb fitting) calculations (filled blue triangles) performed with the
universal Coulomb fitting auxiliary basis of Ref. 113 and the present real-
space Laplace-transform approach (filled black circles) with a real-space
$\\{{\bf r}_{k}\\}$ distribution optimized as described above for this
specific auxiliary basis set. Timings are reported in Tables S2 and S3 of the
Supporting Information where the specific contributions from the RI-RS set-up,
susceptibility calculations, Dyson-equation inversion and self-energy
calculations are further provided.
Figure 4: Scaling properties (log scale) for the standard Coulomb-fitting
(RI-V) $G_{0}W_{0}$ calculations compared with the present real-space Laplace-
transform (RI-RS+LT) scheme. Calculations have been performed with the
def2-TZVP and 6-311G* basis sets. For the standard RI-V scheme, the auxiliary
def2-TZVP-RI 96 and universal Coulomb fitting 113 basis sets, respectively,
were used. Calculations were performed on a set of hexagonal boron-nitride
flakes with up to 6000 electrons. Calculations have been performed on AMD Rome
Epyc nodes with 128 cores/node and 1.85 GB/core. The black (RI-RS+LT) and
blue (RI-V) dot-dashed lines are cubic and quartic fits, respectively. An
unconstrained fit yields a scaling exponent of 3.07 for the (RI-RS+LT) scheme.
These calculations confirm that the present space-time scheme offers a cubic
scaling with system size (see black dot-dashed fit) with an (extrapolated)
crossover with the standard quartic-scaling RI-V scheme taking place at about
350 electrons. We further explore the limit of small system sizes with the larger
def2-TZVP basis associated with its def2-TZVP-RI auxiliary basis. Despite the
larger basis sets, the overhead of the distribution over 128 cores and of the
pre-computation phases still causes slight deviations from perfect
cubic/quartic behaviour in the small-system-size limit. Nonetheless, these latter
calculations confirm a crossover that consistently takes place at about 350
electrons ($\simeq$50-60 B/N atoms). A similar crossover was observed in the
case of cc-pVTZ RPA calculations where we used a similar real-space Laplace-
transform approach to build the independent-electron susceptibility (see Ref.
94).
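The scaling exponent and crossover point can be estimated from timing data by a straight-line fit in log-log space. The timings below are synthetic, generated from assumed cubic and quartic prefactors chosen to cross near 350 electrons, rather than taken from Fig. 4.

```python
import numpy as np

# Synthetic total CPU times (arbitrary units) for the two schemes.
n_elec = np.array([200.0, 400.0, 800.0, 1600.0, 3200.0, 6000.0])
t_ri_rs = 2.0e-6 * n_elec**3  # real-space Laplace-transform scheme: O(N^3)
t_ri_v = 5.7e-9 * n_elec**4   # Coulomb-fitting RI-V scheme: O(N^4)

# Unconstrained scaling exponent: slope of log(t) vs log(N).
slope_rs = np.polyfit(np.log(n_elec), np.log(t_ri_rs), 1)[0]
slope_v = np.polyfit(np.log(n_elec), np.log(t_ri_v), 1)[0]

# Crossover where a*N^3 = b*N^4, i.e. N = a/b.
crossover = 2.0e-6 / 5.7e-9
print(f"RI-RS exponent: {slope_rs:.2f}, RI-V exponent: {slope_v:.2f}")
print(f"crossover near N = {crossover:.0f} electrons")
```

With real timings the fitted exponents deviate from the ideal values at small N, as the text notes, because fixed distribution and pre-computation overheads dominate there.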
All calculations have been performed on a supercomputer built of 128-core 2.6
GHz AMD Rome nodes with 1.85 GB/core memory. We only used fully filled nodes
in order to maintain consistency between the timings presented here, meaning
that the smallest system calculations have been distributed on a minimum of
128 cores. Under this constraint, we selected CPU grid sizes that roughly
match the minimum memory requirement for each calculation, as detailed in
Table S2 of the Supporting Information. Our real-space Laplace-transform
(RI-RS+LT) calculations require far fewer cores than the standard RI-V scheme,
partly due to the corresponding $\mathcal{O}(N^{2})$ memory footprint, with
the $\chi_{0}({\bf r}_{k},{\bf r}_{k}^{\prime};i\omega)$ being the largest
objects stored in memory. On the other hand, the memory requirement of our
standard RI-V implementation grows as $\mathcal{O}(N^{3})$, dominated in this
case by the storage of (occupied)$\times$(virtual) co-density auxiliary fits
$\mathcal{F}_{\mu}^{V}(\phi_{i}\phi_{a})$. Let us emphasize that within the
RI-RS+LT approach, each 3-center integral is computed and immediately
discarded during the RI setup.
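The different memory scalings can be made concrete with a back-of-the-envelope estimate. The grid and basis sizes below are illustrative assumptions (as is the uniform 16 bytes per double-precision complex entry), not the actual counts of the h-BN calculations.

```python
# Rough memory estimates for the largest stored objects of each scheme.
BYTES = 16  # double-precision complex entry

def mem_ri_rs(n_r):
    """O(N^2): chi0(r_k, r_k'; iw) on the real-space grid, the largest
    object kept in memory within the RI-RS+LT scheme."""
    return n_r**2 * BYTES

def mem_ri_v(n_occ, n_virt, n_aux):
    """O(N^3): (occupied) x (virtual) co-density auxiliary fits
    F_mu(phi_i phi_a), dominating the RI-V memory footprint."""
    return n_occ * n_virt * n_aux * BYTES

gib = 1024**3
print(f"RI-RS+LT: {mem_ri_rs(60_000) / gib:.1f} GiB")
print(f"RI-V:     {mem_ri_v(2_000, 10_000, 30_000) / gib:.1f} GiB")
```

Doubling the system size quadruples the first estimate but multiplies the second by eight, which is why the RI-V runs had to be spread over many more cores.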
## 4 Conclusions
We have presented an all-electron space-time $GW$ formalism relying on a
separable resolution-of-the-identity (RI) formalism, offering cubic-scaling
$GW$ calculations that do not exploit any sparsity or localization
considerations. This allows a crossover with quartic-scaling Coulomb-fitting
RI-V $GW$ calculations for systems containing only a few hundred
electrons, independently of the dimensionality of the studied system. As
compared to the interpolative separable density fitting (ISDF) scheme, the
present approach preserves the use of standard auxiliary Gaussian basis sets,
which are taken as an input and not constructed by the ISDF algorithm. The
needed distribution of $\\{{\bf r}_{k}\\}$ points is optimized to recover at
the meV level the results of a standard Coulomb-fitting (RI-V) $GW$
calculation. Precalculating grids associated with a larger collection of
standard basis sets, beyond the 6-311G* and def2-TZVP sets adopted in this
study, now comes as a prerequisite for a broader use of the present scheme.
Scaling with system size could possibly be further reduced by exploiting
stochastic techniques 69 or the long-range decay properties of the space-time
Green’s functions 114 in the case of very large systems. The performance as it
stands today nevertheless clearly illustrates the value of real-space
quadratures, as developed by the quantum chemistry community for all-electron
atomic-orbital basis set calculations, and the progress made by the $GW$
community in establishing this family of many-body perturbation techniques as
a valuable tool for moderately correlated systems, with an excellent trade-off
between accuracy and CPU time, allowing one to tackle finite-size or periodic
systems, metallic or semiconducting, containing up to several hundred atoms.
The authors are indebted to Pascal Pochet for discussions concerning defect
stability in periodic and finite-size systems and to Gabriele D’Avino for
explaining the quadratic behaviour of the polarization energy with inverse
diameter. Calculations have been performed thanks to an allocation on the
French CEA-TGCC Joliot-Curie supercomputer comprising 2292 AMD Rome Epyc nodes
with 128 cores/node and 2 GB/core. This work received support from the French
Agence Nationale de la Recherche (ANR) under contract ANR-20-CE29-0005.
We provide in the Supporting Information the details of the $GW$100 test set
data (Table S1) and additional information about the systems size, timings and
number of cores used for the 6-311G* $G_{0}W_{0}$@PBE0 calculations on the
hexagonal boron-nitride flakes (Tables S2-S3 and Fig. S1).
## References
* Hedin 1965 Hedin, L. New Method for Calculating the One-Particle Green’s Function with Application to the Electron-Gas Problem. _Phys. Rev._ 1965, _139_ , A796–A823
* Strinati et al. 1980 Strinati, G.; Mattausch, H. J.; Hanke, W. Dynamical Correlation Effects on the Quasiparticle Bloch States of a Covalent Crystal. _Phys. Rev. Lett._ 1980, _45_ , 290–294
* Hybertsen and Louie 1986 Hybertsen, M. S.; Louie, S. G. Electron correlation in semiconductors and insulators: Band gaps and quasiparticle energies. _Phys. Rev. B_ 1986, _34_ , 5390–5413
* Godby et al. 1988 Godby, R. W.; Schlüter, M.; Sham, L. J. Self-energy operators and exchange-correlation potentials in semiconductors. _Phys. Rev. B_ 1988, _37_ , 10159–10175
* Farid et al. 1988 Farid, B.; Daling, R.; Lenstra, D.; van Haeringen, W. GW approach to the calculation of electron self-energies in semiconductors. _Phys. Rev. B_ 1988, _38_ , 7530–7534
* Aryasetiawan and Gunnarsson 1998 Aryasetiawan, F.; Gunnarsson, O. The GW method. _Rep. Prog. Phys._ 1998, _61_ , 237–312
* Farid 1999 Farid, B. In _Electron Correlation in the Solid State - Chapter 3_ ; March, N., Ed.; Imperial College Press, London, 1999
* Onida et al. 2002 Onida, G.; Reining, L.; Rubio, A. Electronic excitations: density-functional versus many-body Green’s-function approaches. _Rev. Mod. Phys._ 2002, _74_ , 601–659
* Ping et al. 2013 Ping, Y.; Rocca, D.; Galli, G. Electronic excitations in light absorbers for photoelectrochemical energy conversion: first principles calculations based on many body perturbation theory. _Chem. Soc. Rev._ 2013, _42_ , 2437–2469
* Martin et al. 2016 Martin, R.; Reining, L.; Ceperley, D. _Interacting Electrons: Theory and Computational Approaches_ ; Cambridge University Press, 2016
* Golze et al. 2019 Golze, D.; Dvorak, M.; Rinke, P. The GW Compendium: A Practical Guide to Theoretical Photoemission Spectroscopy. _Front. Chem._ 2019, _7_ , 377
* van Schilfgaarde et al. 2006 van Schilfgaarde, M.; Kotani, T.; Faleev, S. Quasiparticle Self-Consistent $GW$ Theory. _Phys. Rev. Lett._ 2006, _96_ , 226402
* Shishkin et al. 2007 Shishkin, M.; Marsman, M.; Kresse, G. Accurate Quasiparticle Spectra from Self-Consistent GW Calculations with Vertex Corrections. _Phys. Rev. Lett._ 2007, _99_ , 246403
* Ethridge et al. 1996 Ethridge, E. C.; Fry, J. L.; Zaider, M. Quasiparticle spectra of trans-polyacetylene. _Phys. Rev. B_ 1996, _53_ , 3662–3668
* van der Horst et al. 1999 van der Horst, J.-W.; Bobbert, P. A.; Michels, M. A. J.; Brocks, G.; Kelly, P. J. Ab Initio Calculation of the Electronic and Optical Excitations in Polythiophene: Effects of Intra- and Interchain Screening. _Phys. Rev. Lett._ 1999, _83_ , 4413–4416
* Rohlfing and Louie 1999 Rohlfing, M.; Louie, S. G. Optical Excitations in Conjugated Polymers. _Phys. Rev. Lett._ 1999, _82_ , 1959–1962
* Stan et al. 2006 Stan, A.; Dahlen, N. E.; van Leeuwen, R. Fully self-consistent GW calculations for atoms and molecules. _EPL_ 2006, _76_ , 298–304
* Sai et al. 2008 Sai, N.; Tiago, M. L.; Chelikowsky, J. R.; Reboredo, F. A. Optical spectra and exchange-correlation effects in molecular crystals. _Phys. Rev. B_ 2008, _77_ , 161306
* Ma et al. 2009 Ma, Y.; Rohlfing, M.; Molteni, C. Excited states of biological chromophores studied using many-body perturbation theory: Effects of resonant-antiresonant coupling and dynamical screening. _Phys. Rev. B_ 2009, _80_ , 241405
* Rostgaard et al. 2010 Rostgaard, C.; Jacobsen, K. W.; Thygesen, K. S. Fully self-consistent GW calculations for molecules. _Phys. Rev. B_ 2010, _81_ , 085103
* Blase et al. 2011 Blase, X.; Attaccalite, C.; Olevano, V. First-principles $\mathit{GW}$ calculations for fullerenes, porphyrins, phtalocyanine, and other molecules of interest for organic photovoltaic applications. _Phys. Rev. B_ 2011, _83_ , 115103
* Faber et al. 2011 Faber, C.; Attaccalite, C.; Olevano, V.; Runge, E.; Blase, X. First-principles $\mathit{GW}$ calculations for DNA and RNA nucleobases. _Phys. Rev. B_ 2011, _83_ , 115123
* Faber et al. 2011 Faber, C.; Janssen, J. L.; Côté, M.; Runge, E.; Blase, X. Electron-phonon coupling in the C60 fullerene within the many-body $GW$ approach. _Phys. Rev. B_ 2011, _84_ , 155104
* Foerster et al. 2011 Foerster, D.; Koval, P.; Sánchez-Portal, D. An O($N^{3}$) implementation of Hedin’s $GW$ approximation for molecules. _J. Chem. Phys._ 2011, _135_ , 074105
* Ke 2011 Ke, S.-H. All-electron $GW$ methods implemented in molecular orbital space: Ionization energy and electron affinity of conjugated molecules. _Phys. Rev. B_ 2011, _84_ , 205415
* Baumeier et al. 2012 Baumeier, B.; Andrienko, D.; Rohlfing, M. Frenkel and Charge-Transfer Excitations in Donor–acceptor Complexes from Many-Body Green’s Functions Theory. _J. Chem. Theory Comput._ 2012, _8_ , 2790–2795, PMID: 26592120
* Körzdörfer and Marom 2012 Körzdörfer, T.; Marom, N. Strategy for finding a reliable starting point for ${G}_{0}{W}_{0}$ demonstrated for molecules. _Phys. Rev. B_ 2012, _86_ , 041110
* Bruneval and Marques 2013 Bruneval, F.; Marques, M. A. L. Benchmarking the Starting Points of the GW Approximation for Molecules. _J. Chem. Theory Comput._ 2013, _9_ , 324–329, PMID: 26589035
* Pham et al. 2013 Pham, T. A.; Nguyen, H.-V.; Rocca, D.; Galli, G. $GW$ calculations using the spectral decomposition of the dielectric matrix: Verification, validation, and comparison of methods. _Phys. Rev. B_ 2013, _87_ , 155148
* van Setten et al. 2013 van Setten, M. J.; Weigend, F.; Evers, F. The GW-Method for Quantum Chemistry Applications: Theory and Implementation. _J. Chem. Theory Comput._ 2013, _9_ , 232–246, PMID: 26589026
* Umari et al. 2013 Umari, P.; Giacomazzi, L.; De Angelis, F.; Pastore, M.; Baroni, S. Energy-level alignment in organic dye-sensitized TiO2 from GW calculations. _J. Chem. Phys._ 2013, _139_ , 014709
* Cudazzo et al. 2013 Cudazzo, P.; Gatti, M.; Rubio, A.; Sottile, F. Frenkel versus charge-transfer exciton dispersion in molecular crystals. _Phys. Rev. B_ 2013, _88_ , 195152
* Lischner et al. 2014 Lischner, J.; Sharifzadeh, S.; Deslippe, J.; Neaton, J. B.; Louie, S. G. Effects of self-consistency and plasmon-pole models on $GW$ calculations for closed-shell molecules. _Phys. Rev. B_ 2014, _90_ , 115130
* Koval et al. 2014 Koval, P.; Foerster, D.; Sánchez-Portal, D. Fully self-consistent $GW$ and quasiparticle self-consistent $GW$ for molecules. _Phys. Rev. B_ 2014, _89_ , 155417
* Krause et al. 2015 Krause, K.; Harding, M. E.; Klopper, W. Coupled-cluster reference values for the GW27 and GW100 test sets for the assessment of GW methods. _Mol. Phys._ 2015, _113_ , 1952–1960
* van Setten et al. 2015 van Setten, M. J.; Caruso, F.; Sharifzadeh, S.; Ren, X.; Scheffler, M.; Liu, F.; Lischner, J.; Lin, L.; Deslippe, J. R.; Louie, S. G.; Yang, C.; Weigend, F.; Neaton, J. B.; Evers, F.; Rinke, P. $GW$100: Benchmarking $G_{0}W_{0}$ for Molecular Systems. _J. Chem. Theory Comput._ 2015, _11_ , 5665–5687, PMID: 26642984
* Kaplan et al. 2016 Kaplan, F.; Harding, M. E.; Seiler, C.; Weigend, F.; Evers, F.; van Setten, M. J. Quasi-Particle Self-Consistent GW for Molecules. _J. Chem. Theory Comput._ 2016, _12_ , 2528–2541, PMID: 27168352
* Wilhelm et al. 2016 Wilhelm, J.; Del Ben, M.; Hutter, J. $GW$ in the Gaussian and Plane Waves Scheme with Application to Linear Acenes. _J. Chem. Theory Comput._ 2016, _12_ , 3623–3635, PMID: 27348184
* Rangel et al. 2016 Rangel, T.; Hamed, S. M.; Bruneval, F.; Neaton, J. B. Evaluating the GW Approximation with CCSD(T) for Charged Excitations Across the Oligoacenes. _J. Chem. Theory Comput._ 2016, _12_ , 2834–2842, PMID: 27123935
* Scherpelz et al. 2016 Scherpelz, P.; Govoni, M.; Hamada, I.; Galli, G. Implementation and Validation of Fully Relativistic GW Calculations: Spin–Orbit Coupling in Molecules, Nanocrystals, and Solids. _J. Chem. Theory Comput._ 2016, _12_ , 3523–3544, PMID: 27331614
* Knight et al. 2016 Knight, J. W.; Wang, X.; Gallandi, L.; Dolgounitcheva, O.; Ren, X.; Ortiz, J. V.; Rinke, P.; Körzdörfer, T.; Marom, N. Accurate Ionization Potentials and Electron Affinities of Acceptor Molecules III: A Benchmark of GW Methods. _J. Chem. Theory Comput._ 2016, _12_ , 615–626
* Vlček et al. 2017 Vlček, V.; Rabani, E.; Neuhauser, D.; Baer, R. Stochastic GW Calculations for Molecules. _J. Chem. Theory Comput._ 2017, _13_ , 4997–5003, PMID: 28876912
* Maggio et al. 2017 Maggio, E.; Liu, P.; van Setten, M. J.; Kresse, G. GW100: A Plane Wave Perspective for Small Molecules. _J. Chem. Theory Comput._ 2017, _13_ , 635–648, PMID: 28094981
* Maggio and Kresse 2017 Maggio, E.; Kresse, G. GW Vertex Corrected Calculations for Molecular Systems. _J. Chem. Theory Comput._ 2017, _13_ , 4765–4778, PMID: 28873298
* Marom 2017 Marom, N. Accurate description of the electronic structure of organic semiconductors by the $GW$ methods. _J. Phys.: Cond. Matt._ 2017, _29_ , 103003
* Golze et al. 2018 Golze, D.; Wilhelm, J.; van Setten, M. J.; Rinke, P. Core-Level Binding Energies from $GW$: An Efficient Full-Frequency Approach within a Localized Basis. _J. Chem. Theory Comput._ 2018, _14_ , 4856–4869, PMID: 30092140
* Govoni and Galli 2018 Govoni, M.; Galli, G. $GW$100: Comparison of Methods and Accuracy of Results Obtained with the WEST Code. _J. Chem. Theory Comput._ 2018, _14_ , 1895–1909
* Véril et al. 2018 Véril, M.; Romaniello, P.; Berger, J. A.; Loos, P.-F. Unphysical Discontinuities in $GW$ Methods. _J. Chem. Theory Comput._ 2018, _14_ , 5220–5228, PMID: 30212627
* Wehner et al. 2018 Wehner, J.; Brombacher, L.; Brown, J.; Junghans, C.; Çaylak, O.; Khalak, Y.; Madhikar, P.; Tirimbò, G.; Baumeier, B. Electronic Excitations in Complex Molecular Environments: Many-Body Green’s Functions Theory in VOTCA-XTP. _J. Chem. Theory Comput._ 2018, _14_ , 6253–6268, PMID: 30404449
* Holzer and Klopper 2019 Holzer, C.; Klopper, W. Ionized, electron-attached, and excited states of molecular systems with spin–orbit coupling: Two-component $GW$ and Bethe–Salpeter implementations. _J. Chem. Phys._ 2019, _150_ , 204116
* Bruneval 2019 Bruneval, F. Assessment of the Linearized $GW$ Density Matrix for Molecules. _J. Chem. Theory Comput._ 2019, _15_ , 4069–4078, PMID: 31194540
* Li et al. 2019 Li, J.; Duchemin, I.; Roscioni, O. M.; Friederich, P.; Anderson, M.; Da Como, E.; Kociok-Köhn, G.; Wenzel, W.; Zannoni, C.; Beljonne, D.; Blase, X.; D’Avino, G. Host dependence of the electron affinity of molecular dopants. _Mater. Horiz._ 2019, _6_ , 107–114
* Koval et al. 2019 Koval, P.; Ljungberg, M. P.; Müller, M.; Sánchez-Portal, D. Toward Efficient $GW$ Calculations Using Numerical Atomic Orbitals: Benchmarking and Application to Molecular Dynamics Simulations. _J. Chem. Theory Comput._ 2019, _15_ , 4564–4580, PMID: 31318555
* Bruneval et al. 2020 Bruneval, F.; Maliyov, I.; Lapointe, C.; Marinica, M.-C. Extrapolating Unconverged $GW$ Energies up to the Complete Basis Set Limit with Linear Regression. _J. Chem. Theory Comput._ 2020, _16_ , 4399–4407, PMID: 32491851
* Loos et al. 2020 Loos, P.-F.; Pradines, B.; Scemama, A.; Giner, E.; Toulouse, J. Density-Based Basis-Set Incompleteness Correction for $GW$ Methods. _J. Chem. Theory Comput._ 2020, _16_ , 1018–1028, PMID: 31891503
* Berger et al. 2021 Berger, J. A.; Loos, P.-F.; Romaniello, P. Potential Energy Surfaces without Unphysical Discontinuities: The Coulomb Hole Plus Screened Exchange Approach. _J. Chem. Theory Comput._ 2021, _17_ , 191–200, PMID: 33306908
* Whitten 1973 Whitten, J. L. Coulombic potential energy integrals and approximations. _J. Chem. Phys._ 1973, _58_ , 4496–4501
* Baerends et al. 1973 Baerends, E.; Ellis, D.; Ros, P. Self-consistent molecular Hartree—Fock—Slater calculations I. The computational procedure. _Chem. Phys._ 1973, _2_ , 41 – 51
* Dunlap et al. 1979 Dunlap, B. I.; Connolly, J. W. D.; Sabin, J. R. On some approximations in applications of X$\alpha$ theory. _J. Chem. Phys._ 1979, _71_ , 3396–3402
* Vahtras et al. 1993 Vahtras, O.; Almlöf, J.; Feyereisen, M. Integral approximations for LCAO-SCF calculations. _Chem. Phys. Lett._ 1993, _213_ , 514 – 518
* Klopper and Samson 2002 Klopper, W.; Samson, C. C. M. Explicitly correlated second-order Møller–Plesset methods with auxiliary basis sets. _J. Chem. Phys._ 2002, _116_ , 6397–6410
* Ren et al. 2012 Ren, X.; Rinke, P.; Blum, V.; Wieferink, J.; Tkatchenko, A.; Sanfilippo, A.; Reuter, K.; Scheffler, M. Resolution-of-identity approach to Hartree-Fock, hybrid density functionals, RPA, MP2 and $GW$ with numeric atom-centered orbital basis functions. _New J. Phys._ 2012, _14_ , 053020
* Duchemin et al. 2017 Duchemin, I.; Li, J.; Blase, X. Hybrid and Constrained Resolution-of-Identity Techniques for Coulomb Integrals. _J. Chem. Theory Comput._ 2017, _13_ , 1199–1208, PMID: 28094983
* Duchemin et al. 2012 Duchemin, I.; Deutsch, T.; Blase, X. Short-Range to Long-Range Charge-Transfer Excitations in the Zincbacteriochlorin-Bacteriochlorin Complex: A Bethe-Salpeter Study. _Phys. Rev. Lett._ 2012, _109_ , 167801
* Govoni and Galli 2015 Govoni, M.; Galli, G. Large Scale $GW$ Calculations. _J. Chem. Theory Comput._ 2015, _11_ , 2680–2696, PMID: 26575564
* Li et al. 2017 Li, J.; D’Avino, G.; Pershin, A.; Jacquemin, D.; Duchemin, I.; Beljonne, D.; Blase, X. Correlated electron-hole mechanism for molecular doping in organic semiconductors. _Phys. Rev. Materials_ 2017, _1_ , 025602
* Ben et al. 2019 Ben, M. D.; da Jornada, F. H.; Canning, A.; Wichmann, N.; Raman, K.; Sasanka, R.; Yang, C.; Louie, S. G.; Deslippe, J. Large-scale $GW$ calculations on pre-exascale HPC systems. _Comput. Phys. Commun._ 2019, _235_ , 187 – 195
* Rojas et al. 1995 Rojas, H. N.; Godby, R. W.; Needs, R. J. Space-Time Method for Ab Initio Calculations of Self-Energies and Dielectric Response Functions of Solids. _Phys. Rev. Lett._ 1995, _74_ , 1827–1830
* Neuhauser et al. 2013 Neuhauser, D.; Rabani, E.; Baer, R. Expeditious Stochastic Calculation of Random-Phase Approximation Energies for Thousands of Electrons in Three Dimensions. _J. Phys. Chem. Lett._ 2013, _4_ , 1172–1176
* Liu et al. 2016 Liu, P.; Kaltak, M.; Klimeš, J.; Kresse, G. Cubic scaling $GW$: Towards fast quasiparticle calculations. _Phys. Rev. B_ 2016, _94_ , 165109
* Wilhelm et al. 2018 Wilhelm, J.; Golze, D.; Talirz, L.; Hutter, J.; Pignedoli, C. A. Toward GW Calculations on Thousands of Atoms. _J. Phys. Chem. Lett._ 2018, _9_ , 306–312, PMID: 29280376
* Gao and Chelikowsky 2020 Gao, W.; Chelikowsky, J. R. Accelerating Time-Dependent Density Functional Theory and $GW$ Calculations for Molecules and Nanoclusters with Symmetry Adapted Interpolative Separable Density Fitting. _J. Chem. Theory Comput._ 2020, _16_ , 2216–2223, PMID: 32074452
* Kim et al. 2020 Kim, M.; Martyna, G. J.; Ismail-Beigi, S. Complex-time shredded propagator method for large-scale $GW$ calculations. _Phys. Rev. B_ 2020, _101_ , 035139
* Kutepov 2020 Kutepov, A. Self-consistent $GW$ method: O(N) algorithm for polarizability and self energy. _Comput. Phys. Commun._ 2020, _257_ , 107502
* Förster and Visscher 2020 Förster, A.; Visscher, L. Low-Order Scaling $G_{0}W_{0}$ by Pair Atomic Density Fitting. _J. Chem. Theory Comput._ 2020, _16_ , 7381–7399, PMID: 33174743
* Wilhelm et al. 2021 Wilhelm, J.; Seewald, P.; Golze, D. Low-Scaling GW with Benchmark Accuracy and Application to Phosphorene Nanosheets. _J. Chem. Theory Comput._ 2021, _17_ , 1662–1677, PMID: 33621085
# Magnetic Field Effects on the Transport Properties of High-Tc Cuprates
E. C. Marino1, R. Arouca1,2
1Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, Rio de Janeiro, RJ, 21941-972, Brazil.
2Institute for Theoretical Physics, Center for Extreme Matter and Emergent Phenomena, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands.
###### Abstract
Starting from a recently proposed comprehensive theory for high-Tc superconductivity in the cuprates, we derive a general analytic expression for the planar resistivity in the presence of an applied external magnetic field H and explore its consequences in the different phases of these materials. As an initial probe of our result, we show that it compares very well with experimental data for the resistivity of LSCO at different values of the applied field. We also apply our result to Bi2201 and show that the magnetoresistivity in the strange metal phase of this material exhibits the $H^{2}$ to $H$ crossover as we move from the weak to the strong field regime. Yet, despite that, the magnetoresistivity does not present quadrature scaling. Remarkably, the resistivity H-field derivative does scale as a function of $\frac{H}{T}$, in complete agreement with recent magneto-transport measurements made in the strange metal phase of cuprates Ayres et al. (2020). Finally, we address the issue of the $T$-power-law dependence of the resistivity of overdoped cuprates and compare our results with experimental data for Tl2201. We show that this provides a simple method to determine whether the quantum critical point associated with the pseudogap temperature $T^{*}(x)$ belongs to the SC dome or not.
1) Introduction
Any complete theory for superconductivity in the high-Tc cuprates must be capable of describing, besides the superconductivity mechanism itself, the properties of their normal phases. Understanding these phases actually seems to be as challenging as understanding the superconducting phase itself.
An interesting issue in this connection is the range of different functional dependences on temperature exhibited by the resistivity as we cross the $T_{c}(x)$ superconducting (SC) dome. These are usually of the form $\rho(T)\propto T^{1+\delta}$, where apparently $\delta\in[0,1]$. The precise value of $\delta$, however, depends strongly on the specific region of the SC dome where we cross the SC transition and where, consequently, the previously vanishing resistivity acquires a temperature dependence.
The situation becomes even richer when we apply an external magnetic field and consider the dependence of the resistivity on it. A wide range of effects can then be observed, including the destruction of the superconducting phase.
A particularly interesting non-superconducting phase of the cuprates is the
so-called Strange Metal (SM) phase Ayres et al. (2020); Taillefer (2010); Hu
et al. (2017); Ando et al. (2000, 2004); Gurvitch and Fiory (1987); Keimer et
al. (2015); Varma et al. (1989); Varma (1999); Faulkner et al. (2010); Davison
et al. (2014); Patel et al. (2018); Zaanen (2004); Legros et al. (2019);
Zaanen (2019); Damle and Sachdev (1997); Sachdev (2011); Phillips and Chamon
(2005); Banerjee et al. (2020), where the resistivity grows linearly with the
temperature, with a slope that decreases with doping, proportionally to the
pseudogap temperature $T^{*}(x)$ Arouca and Marino (2020). Recent studies reveal, however, that, especially in the case of overdoped (OD) cuprates Hussey et al. (2013), depending on the amount of doping, we do not always move directly from the SC phase to a linearly temperature-dependent resistivity. For some compounds, we instead observe a super-linear dependence on $T$ before reaching the linear regime Ayres et al. (2020).
Interesting experimental studies have also addressed the effect of an external magnetic field on the transport properties of OD cuprates Ayres et al. (2020). Such studies reveal, for instance, the existence of a crossover in the magnetic-field dependence of the magnetoresistivity (MR) in the SM phase, from a quadratic behavior at weak fields to a linear one in the strong-field regime Ayres et al. (2020). Such a behavior is analogous to the one observed in quantum critical phases of electron-doped cuprates Sarkar et al. (2019) and pnictide superconductors Hayes et al. (2016); Giraldo-Gallo et al. (2018).
In such systems, the crossover was ascribed to a quadrature scaling behavior, in which the planar MR obeys the empirical expression
$\rho(T,H)-\rho(0,0)=\sqrt{\left(\alpha k_{B}T\right)^{2}+\left(\gamma\mu_{B}\mu_{0}H\right)^{2}}$ (1)
where $\alpha$ and $\gamma$ are constant fitting parameters.
A benchmark of the quadrature behavior is that the quantity
$\Delta\rho/T=\left(\rho(T,H)-\rho(0,0)\right)/T$ becomes a function of the
ratio $H/T$, namely
$\left(\rho(T,H)-\rho(0,0)\right)/T\propto\sqrt{1+\left(\lambda\frac{\mu_{B}\mu_{0}H}{k_{B}T}\right)^{2}}.$
(2)
The study carried out in Ayres et al. (2020) on the cuprates Bi2201 and Tl2201 shows that, in spite of exhibiting the $H^{2}$ to $H$ crossover in the MR field dependence, the MR data for cuprates in the SM phase do not scale as the quadrature form (2) would imply.
Remarkably, however, it was shown in Ayres et al. (2020) that the data for the resistivity field derivative, $(\partial\rho(T,H)/\partial H)$, do scale as a function of $H/T$, namely,
$\frac{\partial\rho(T,H)}{\partial H}=f\left(\frac{H}{T}\right).$ (3)
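Since (1)-(3) are closed-form expressions, these scaling statements can be checked directly. The Python sketch below is illustrative only ($\alpha=\gamma=1$, with $k_{B}$, $\mu_{B}$ and $\mu_{0}$ absorbed into the fitting constants): it verifies that the quadrature form (1) interpolates between $H^{2}$ and $H$ behavior, and that both $\Delta\rho/T$ and $\partial\rho/\partial H$ depend on $T$ and $H$ only through the ratio $H/T$.

```python
import numpy as np

def quadrature_rho(T, H, alpha=1.0, gamma=1.0):
    """Quadrature form (1): Delta-rho = sqrt((alpha*T)^2 + (gamma*H)^2).
    Physical constants are absorbed into alpha and gamma (illustrative units)."""
    return np.sqrt((alpha * T) ** 2 + (gamma * H) ** 2)

def drho_dH(T, H, eps=1e-6):
    # numerical field derivative of the quadrature form
    return (quadrature_rho(T, H + eps) - quadrature_rho(T, H - eps)) / (2 * eps)

# Delta-rho / T is unchanged under (T, H) -> (lambda*T, lambda*H), which is the
# benchmark scaling (2); Delta-rho - Delta-rho(H=0) grows as H^2 at weak fields
# and as H at strong fields; drho/dH depends only on H/T, as in (3).
```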
In two recent publications Marino et al. (2020); Arouca and Marino (2020), we developed a comprehensive theory for the high-Tc cuprates, whose most distinctive feature, perhaps, is that it is testable. Indeed, our theory allows for the theoretical determination of several physical quantities, which can be directly compared with experiment. Among these, we have obtained analytical expressions for the superconducting (SC) and pseudogap (PG) transition temperatures $T_{c}$ and $T^{*}$ as functions of quantities such as the stoichiometric doping parameter, the number of planes, pressure and external magnetic field Marino et al. (2020); Arouca and Marino (2020). We have also obtained a general expression for the resistivity as a function of temperature in the different non-superconducting phases of the high-Tc cuprates Arouca and Marino (2020). These results are in excellent agreement with experiments for a wide range of cuprate systems with one, two and three planes per unit cell.
In this work, we derive directly from the aforementioned theory a general expression for the planar resistivity as a function of an applied external magnetic field $H$.
We first apply this result to describe the resistivity of LSCO and, especially, to determine how it is modified when the system is subjected to an external magnetic field.
We then consider our expression for the resistivity in the SM phase and show that, interestingly, it fully agrees with the experimental results found in Ayres et al. (2020) for Bi2201. In particular, it exhibits the $H^{2}$ to $H$ crossover, in spite of the fact that it does not present the quadrature scaling behavior. Yet, it satisfies the field-derivative scaling (3).
Finally, we address the issue of the super-linearity of the resistivity of OD cuprates right above the SC transition and offer a simple explanation, which we illustrate by comparison with experimental data for Tl2201.
2) The Resistivity
2.1) General Expression
The resistivity can be obtained as the inverse of the conductivity matrix, which is given by the Kubo formula
$\displaystyle\sigma^{ij}_{\text{DC}}=\lim\limits_{\omega\rightarrow
0}\frac{i}{\omega}\left[1-e^{-\beta\hbar\omega}\right]\lim\limits_{\mathbf{k}\rightarrow\mathbf{0}}\Pi^{ij}\left(\omega+i\epsilon,\mathbf{k}\right),$
(4)
where $\Pi^{ij}$ is the retarded, connected current-current correlation
function:
$\displaystyle\Pi^{ij}=\langle j^{i}j^{j}\rangle_{\text{C}}.$ (5)
This is given by the second functional derivative of the grand-canonical
potential in the presence of an applied electromagnetic vector potential
$\textbf{A}\left(\omega,\mathbf{k}\right)$, namely,
$\displaystyle\langle
j^{i}j^{j}\rangle_{C}\left(\omega,\mathbf{k}\right)=\frac{\delta^{2}\Omega[\textbf{A}]}{\delta\textbf{A}^{i}\left(\omega,\mathbf{k}\right)\delta\textbf{A}^{j}\left(\omega,\mathbf{k}\right)},$
(6)
$\Omega[\textbf{A}]$ relates to the grand-partition functional $Z[\textbf{A}]$
as
$\displaystyle\Omega[\textbf{A}]=-\frac{1}{\beta}\ln Z[\textbf{A}],$ (7)
which is given by
$\displaystyle Z[\textbf{A}]={\rm
Tr}_{Total}e^{-\beta\left[H[\textbf{A}]-\mu\mathcal{N}\right]}.$ (8)
In the expression above, $H[\textbf{A}]$ is our proposed Hamiltonian for the cuprates Marino et al. (2020); Arouca and Marino (2020), in the presence of an external field
$\textbf{A}=\frac{1}{2}\textbf{r}\times\textbf{B},$ (9)
which corresponds to a constant external magnetic field
$\textbf{B}=\mu_{0}\textbf{H}$.
The trace above can be evaluated with the help of the eigenvalues of
$H[\textbf{A}]-\mu\mathcal{N}$, which are given by Marino et al. (2020);
Arouca and Marino (2020)
$\displaystyle\mathcal{E}_{l}[\textbf{A}]=\sqrt{\Delta^{2}+\Big{(}\sqrt{v^{2}(\hbar\textbf{k}+e\textbf{A})^{2}+M^{2}}+l\mu\Big{)}^{2}},$
(10)
where $l=\pm 1$. The expression above is given in terms of the external field and of the ground-state expectation values of the Cooper pair operator, $\Delta$, and of the exciton operator, $M$, as well as the chemical potential, $\mu$.
The field dependence is conveniently expressed through the replacement
$\displaystyle M^{2}\longrightarrow M^{2}+2ev\
\hbar\textbf{k}\cdot\textbf{A}+e^{2}v^{2}\textbf{A}^{2},$ (11) $\displaystyle
M^{2}\longrightarrow M^{2}-2ev\
\langle\textbf{L}\rangle\cdot\textbf{H}+\frac{1}{4}e^{2}v^{2}\langle
r^{2}\rangle\textbf{H}^{2}$
in the presence of an applied field, where we have replaced $r^{2}$ and $\textbf{L}$ by their average values. Since the ground state, either $|p_{x}\rangle$ or $|p_{y}\rangle$, is a linear combination of $|l,m\rangle=|1,\pm 1\rangle$, it follows that $\langle L_{z}\rangle=0$ and the second term in (11) does not contribute, thus confirming the observation made in Ayres et al. (2020) that there is no contribution from the orbital coupling to the external field. This also leads to results that are independent of the specific direction of the applied external magnetic field, in agreement with the experimental observations reported in Ayres et al. (2020).
The grand-partition functional $Z[\textbf{A}]$ follows from Eq. (8) and Eq. (10), after functional integration over the fermionic (hole) degrees of freedom, namely Marino et al. (2020); Arouca and Marino (2020)
$\displaystyle Z[\textbf{A}]$ $\displaystyle=$
$\displaystyle\exp\left\\{-\beta\left\\{\frac{\left|\Delta\right|^{2}}{g_{S}}+\frac{\left|M\right|^{2}}{g_{P}}+N\mu\left(x\right)-NTA\sum\limits_{n=-\infty}^{\infty}\sum\limits_{l=\pm
1}\int\frac{d^{2}k}{4\pi^{2}}\ln\left[\left(\omega_{n}+i\omega_{0}\right)^{2}+\mathcal{E}_{l}^{2}[\textbf{A}]\right]\right\\}\right\\}$
(12) $\displaystyle=$ $\displaystyle Z[\textbf{0}]\exp\left\\{-\beta
T\sum_{\omega_{n}}\sum_{l=\pm
1}\int\frac{d^{2}k}{(2\pi)^{2}}\ln\left[\frac{\left(\omega_{n}+i\omega_{0}\right)^{2}+\mathcal{E}_{l}^{2}[\textbf{A}]}{\omega_{n}^{2}+\mathcal{E}_{l}^{2}[0]}\right]\right\\},$
where $\omega_{0}=\frac{\mu_{B}\mu_{0}H}{2\hbar}$ is the Zeeman coupling of the external field to the holes' spin, and $\omega_{n}=(2n+1)\frac{\pi}{\beta}$ are the Matsubara frequencies corresponding to the fermion integration.
We now perform the sums in the previous equation, using
$\displaystyle\sum_{n=-\infty}^{\infty}\frac{1}{\left(\omega_{n}+i\omega_{0}\right)^{2}+\mathcal{E}_{l}^{2}}=\frac{\beta}{4\mathcal{E}_{l}}\left\\{\tanh\left[\frac{\left[\mathcal{E}_{l}[\textbf{A}]+\hbar\omega_{0}\right]}{2k_{B}T}\right]+\tanh\left[\frac{\left[\mathcal{E}_{l}[\textbf{A}]-\hbar\omega_{0}\right]}{2k_{B}T}\right]\right\\}$
(13)
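The frequency sum (13) can be verified numerically by brute-force truncation. A minimal sketch in units $\hbar=k_{B}=1$ (a consistency check only, not part of the derivation):

```python
import numpy as np

def matsubara_sum(beta, E, w0, N=200_000):
    """Truncated fermionic Matsubara sum of 1/((w_n + i*w0)^2 + E^2),
    with w_n = (2n+1)*pi/beta.  Terms for n and -n-1 are complex
    conjugates, so the total is real."""
    n = np.arange(-N, N)
    wn = (2 * n + 1) * np.pi / beta
    return np.sum(1.0 / ((wn + 1j * w0) ** 2 + E ** 2)).real

def closed_form(beta, E, w0):
    # right-hand side of (13), in units hbar = k_B = 1
    return beta / (4 * E) * (np.tanh(beta * (E + w0) / 2)
                             + np.tanh(beta * (E - w0) / 2))
```

At $\omega_{0}=0$ the closed form reduces to the familiar result $\frac{\beta}{2E}\tanh\left(\frac{\beta E}{2}\right)$.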
Inserting (13) in (12), and using the rules of functional differentiation Marino (2017), we obtain the average current $\langle j^{i}\rangle$:
$\displaystyle\langle
j^{i}\rangle\left(\textbf{k}=0,\omega=0\right)=\frac{N}{2}\sum_{l=\pm
1}\frac{\partial\mathcal{E}_{l}[\textbf{A}]}{\partial\textbf{A}^{i}}\left\\{\tanh\left[\frac{\left[\mathcal{E}_{l}[\textbf{A}]+\hbar\omega_{0}\right]}{2k_{B}T}\right]+\tanh\left[\frac{\left[\mathcal{E}_{l}[\textbf{A}]-\hbar\omega_{0}\right]}{2k_{B}T}\right]\right\\}.$
(14)
To calculate the conductivity matrix, $\sigma^{ij}$, using (4) we need the
two-point current correlator, hence we must take the derivative of $\langle
j^{i}\rangle$ with respect to $\textbf{A}^{j}$, at $\textbf{k}=0$.
We shall be primarily interested in the diagonal elements of the conductivity
and resistivity matrices. For these
$\displaystyle\langle
j^{i}j^{i}\rangle\left(\textbf{k}=0,\omega=0\right)=\frac{Ne^{2}v^{2}}{\mathcal{M}}\sum_{l,s=\pm
1}\frac{\mathcal{M}+l\mu}{\sqrt{\Delta^{2}+\Big{(}\mathcal{M}+l\mu\Big{)}^{2}}}\tanh\left(\frac{\sqrt{\Delta^{2}+\Big{(}\mathcal{M}+l\mu}\Big{)}^{2}+s\hbar\omega_{0}}{2k_{B}T}\right).$
(15)
where, we used (Magnetic Field Effects on the Transport Properties of High-Tc
Cuprates), to define
$\mathcal{M}^{2}\equiv M^{2}+e^{2}v^{2}\textbf{A}^{2}.$ (16)
Considering that
$\textbf{A}^{2}=\frac{1}{4}\langle r^{2}\rangle\left(\mu_{0}H\right)^{2},$
(17)
where we have replaced the square of the position vector by its average value,
related to the de Broglie wavelength:
$\langle
r^{2}\rangle\simeq\left(\frac{\hbar}{mv}\right)^{2}=\left(\frac{\hbar}{m_{e}v}\right)^{2}\left(\frac{m_{e}}{m}\right)^{2}$
(18)
where $m$ is the effective quasiparticle mass and $m_{e}$ the electron mass,
we can express (16) as
$\mathcal{M}^{2}=M^{2}+e^{2}v^{2}\textbf{A}^{2}=M^{2}+\left(\frac{e\hbar}{2m_{e}}\right)^{2}\lambda_{2}^{2}(\mu_{0}H)^{2}$
(19)
which implies
$\frac{\mathcal{M}}{k_{B}T}\equiv\frac{\sqrt{M^{2}+(\lambda_{2}\mu_{B}\mu_{0}H)^{2}}}{k_{B}T}=\sqrt{\left(\frac{M}{k_{B}T}\right)^{2}+\lambda_{2}^{2}\left(\frac{\mu_{B}\mu_{0}H}{k_{B}T}\right)^{2}},$
(20)
where $\mu_{B}=\frac{e\hbar}{2m_{e}}$ is the Bohr magneton,
$\frac{\mu_{B}}{k_{B}}=0.671\ K/T$ and $\lambda_{2}\simeq\frac{m_{e}}{m}$.
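The quoted ratio can be reproduced from standard SI values of the constants:

```python
# Bohr magneton over Boltzmann constant, in K/T (CODATA/SI values).
mu_B = 9.2740100783e-24   # J/T
k_B = 1.380649e-23        # J/K (exact in SI)
ratio = mu_B / k_B        # ~= 0.6717 K/T, matching the 0.671 K/T quoted above
```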
The corresponding DC resistivity per CuO2 plane will then be given by (we drop the $ij$-superscript from now on)
$\displaystyle\
\rho=\left(\frac{\sigma_{\text{DC}}}{N}\right)^{-1}=\frac{\mathcal{M}}{\hbar\beta
V^{-1}e^{2}v^{2}\sum_{l,s=\pm
1}\frac{\mathcal{M}+l\mu}{\sqrt{\Delta^{2}+\Big{(}\mathcal{M}+l\mu\Big{)}^{2}}}\tanh\left(\frac{\sqrt{\Delta^{2}+\Big{(}\mathcal{M}+l\mu}\Big{)}^{2}+s\hbar\omega_{0}}{2k_{B}T}\right)}.$
(21)
where $V=da^{2}$ is the volume of the primitive unit cell per CuO2 plane, with $d$ being the distance between planes, $a$ the lattice parameter and $v$ the characteristic velocity of the holes (such that, for LSCO, $\left(\hbar v/a\right)\approx 2.86\times 10^{-2}eV$ Marino et al. (2020)).
In the SC phase, we have $\Delta\neq 0$, $M=0$, and we can see from (21) that, in the absence of an applied magnetic field, $\rho\rightarrow 0$ as a consequence of the fact that, in this case, $\mathcal{M}\rightarrow 0$ in (21). By the same token, we can understand why the resistivity is no longer zero when an external magnetic field is applied: in this case, because of the magnetic field in (16), we have $\mathcal{M}\neq 0$ and the resistivity in (21) does not vanish.
2.2) The Scaling Function
Outside the superconducting phases, we have $\Delta=0$, which leads to the
following expression for the resistivity
$\displaystyle\rho=\frac{Vk_{B}}{\hbar
e^{2}v^{2}}\frac{\mathcal{M}T}{\left[\tanh\Big{(}\frac{\mathcal{M}+\mu+\hbar\omega_{0}}{2k_{B}T}\Big{)}+\tanh\Big{(}\frac{\mathcal{M}+\mu-\hbar\omega_{0}}{2k_{B}T}\Big{)}+\tanh\Big{(}\frac{\mathcal{M}-\mu+\hbar\omega_{0}}{2k_{B}T}\Big{)}+\tanh\Big{(}\frac{\mathcal{M}-\mu-\hbar\omega_{0}}{2k_{B}T}\Big{)}\right]}$
(22)
Using the identity,
$\displaystyle\frac{2\sinh(a)}{\cosh(a)+\cosh(b)}=\tanh\left(\frac{a+b}{2}\right)+\tanh\left(\frac{a-b}{2}\right)$
(23)
for the sums of the 1st + 4th and of the 2nd + 3rd terms above, this can be rewritten as
$\displaystyle\rho$ $\displaystyle=$ $\displaystyle\frac{Vk_{B}}{\hbar
e^{2}v^{2}}\frac{\mathcal{M}T}{\left[\frac{\sinh\left(\frac{\mathcal{M}}{k_{B}T}\right)}{\cosh\left(\frac{\mathcal{M}}{k_{B}T}\right)+\cosh\Big{(}\frac{\mu+\hbar\omega_{0}}{k_{B}T}\Big{)}}+\frac{\sinh\left(\frac{\mathcal{M}}{k_{B}T}\right)}{\cosh\left(\frac{\mathcal{M}}{k_{B}T}\right)+\cosh\Big{(}\frac{\mu-\hbar\omega_{0}}{k_{B}T}\Big{)}}\right]}$
(24)
or
$\displaystyle\rho$ $\displaystyle=$ $\displaystyle\frac{Vk_{B}}{\hbar
e^{2}v^{2}}\frac{\mathcal{M}T}{2\sinh\left(\frac{\mathcal{M}}{k_{B}T}\right)}\left\\{\frac{\left[\cosh\left(\frac{\mathcal{M}}{k_{B}T}\right)+\cosh\left(\frac{\mathcal{\mu}}{k_{B}T}\right)\cosh\left(\frac{\hbar\omega_{0}}{k_{B}T}\right)\right]^{2}-\left[\sinh\left(\frac{\mathcal{\mu}}{k_{B}T}\right)\sinh\left(\frac{\hbar\omega_{0}}{k_{B}T}\right]\right)^{2}}{\cosh\left(\frac{\mathcal{M}}{k_{B}T}\right)+\cosh\left(\frac{\mathcal{\mu}}{k_{B}T}\right)\cosh\left(\frac{\hbar\omega_{0}}{k_{B}T}\right)}\right\\}$
(25)
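The hyperbolic identity (23) used in passing from (22) to (24) is elementary and can be confirmed numerically:

```python
import numpy as np

def lhs(a, b):
    # left-hand side of identity (23)
    return 2 * np.sinh(a) / (np.cosh(a) + np.cosh(b))

def rhs(a, b):
    # right-hand side of identity (23)
    return np.tanh((a + b) / 2) + np.tanh((a - b) / 2)
```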
We can express the resistivity in the presence of an applied magnetic field in terms of a three-variable scaling function $G(K_{1},K_{2},K_{3})$, where
$K_{1}=\frac{M}{k_{B}T}\ \ ;\ \ K_{2}=\frac{\mu}{k_{B}T}\ \ ;\ \
K_{3}=\frac{\mu_{B}\mu_{0}H}{k_{B}T};$ (26)
namely
$\rho(x,T)=BT^{2}G\left(\frac{M}{k_{B}T},\frac{\mu}{k_{B}T},\frac{\mu_{B}\mu_{0}H}{k_{B}T}\right),$
(27)
where
$G\left(K_{1},K_{2},K_{3}\right)=\frac{\sqrt{K_{1}^{2}+(\lambda_{2}K_{3})^{2}}}{2\sinh\left(\sqrt{K_{1}^{2}+(\lambda_{2}K_{3})^{2}}\right)}\
\left[\frac{\left(\cosh\sqrt{K_{1}^{2}+(\lambda_{2}K_{3})^{2}}+\cosh
K_{2}\cosh K_{3}\right)^{2}-\left(\sinh K_{2}\sinh
K_{3}\right)^{2}}{\cosh\sqrt{K_{1}^{2}+(\lambda_{2}K_{3})^{2}}+\cosh
K_{2}\cosh K_{3}}\right]$ (28)
and $B$ is given by
$B=\frac{h}{e^{2}}\frac{d}{2\pi}\left(\frac{a}{\hbar v}\right)^{2}k_{B}^{2}.$
(29)
For LSCO, we have $B_{LSCO}=2.4457\ n\Omega\text{cm}/K^{2}$ and, in general,
we write $B=\lambda_{1}B_{LSCO}$.
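The numerical value of $B_{LSCO}$ can be recovered from (29). In the sketch below, $(\hbar v/a)=2.86\times 10^{-2}\ eV$ is taken from the text, while the interplane distance $d\simeq 6.6$ Å (roughly $c/2$, with $c\simeq 13.2$ Å for LSCO) is an assumed illustrative value, not quoted in the paper:

```python
import math

# Eq. (29): B = (h/e^2) * (d / 2pi) * (a / (hbar*v))^2 * kB^2
h_over_e2 = 25812.807    # ohm, von Klitzing constant h/e^2
kB = 8.617333e-5         # eV/K
hv_over_a = 2.86e-2      # eV, from the text
d = 6.6e-8               # cm, assumed CuO2 interplane distance (~ c/2)

B = h_over_e2 * (d / (2 * math.pi)) * (kB / hv_over_a) ** 2  # ohm*cm/K^2
B_nOhm = B * 1e9         # in n-ohm*cm/K^2; compare with B_LSCO = 2.4457
```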
Notice that in the zero field limit, $K_{3}\rightarrow 0$ and our expression
for the resistivity reduces to the one in Arouca and Marino (2020).
2.3) The Strange Metal Phase
Particularly interesting is the strange metal phase, where both the SC and PG parameters vanish: $\Delta=0$ and $M=0$. The chemical potential, conversely, scales with $T$, namely $\mu=DT$, where $D=2.69\ eV/K$ Arouca and Marino (2020). Consequently, we will have $K_{1}=0$, $K_{2}=D/k_{B}$, $K_{3}=\frac{\mu_{B}\mu_{0}H}{k_{B}T}$. Combining these results in (24), we can express the resistivity as
$\displaystyle\rho$ $\displaystyle=$
$\displaystyle\frac{\lambda_{1}\lambda_{2}BT^{*}TK_{3}}{\sinh\left(\lambda_{2}K_{3}\right)}\left[\frac{1}{\cosh\left(\lambda_{2}K_{3}\right)+\cosh\left(K_{3}+D/k_{B}\right)}+\frac{1}{\cosh\left(\lambda_{2}K_{3}\right)+\cosh\left(K_{3}-D/k_{B}\right)}\right]^{-1}.$
(30)
where $T^{*}$ is the PG temperature.
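The crossover and scaling properties claimed for the SM phase can be checked numerically. The sketch below implements the SM-phase resistivity obtained by setting $M=0$ and $\mu=DT$ in (24), so that $\rho\propto T\,g(K_{3})$ with $K_{3}\propto H/T$; the values $\lambda_{2}=3$ and $K_{2}=1/2$, and the overall scale, are illustrative placeholders, not fits from the paper:

```python
import numpy as np

def rho_sm(T, H, lam2=3.0, K2=0.5):
    """SM-phase resistivity, up to an overall constant, from (24) with M = 0:
    rho = T * m / (sinh(m) * (1/A + 1/B)), where m = lam2*K3 and K3 = H/T in
    units where mu_B*mu_0/k_B = 1.  (H must be nonzero; the H -> 0 limit
    is finite.)"""
    K3 = H / T
    m = lam2 * K3
    A = np.cosh(m) + np.cosh(K3 + K2)
    B = np.cosh(m) + np.cosh(K3 - K2)
    return T * m / (np.sinh(m) * (1.0 / A + 1.0 / B))

def drho_sm_dH(T, H, eps=1e-6):
    # numerical field derivative; for rho = T*g(H/T) it depends on H/T only
    return (rho_sm(T, H + eps) - rho_sm(T, H - eps)) / (2 * eps)
```

With these placeholders, $\rho(T,H)-\rho(T,0)$ grows as $H^{2}$ at weak fields and as $H$ at strong fields, while $\partial\rho/\partial H$ coincides at $(T,H)$ and $(\lambda T,\lambda H)$, reproducing the scaling (3).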
2.4) The Zero Magnetic Field Regime
In the $H\rightarrow 0$ limit, we have $K_{3}\rightarrow 0$. In this case,
(28) reduces to
$\displaystyle G\left(K_{1},K_{2},K_{3}\right)\longrightarrow$ (31)
$\displaystyle G\left(K_{1},K_{2}\right)=\frac{K_{1}}{2\sinh K_{1}}\left[\cosh
K_{1}+\cosh K_{2}\right]$
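As a consistency check, the scaling function (28) and its zero-field limit (31) can be implemented directly (with an illustrative $\lambda_{2}$):

```python
import numpy as np

def G(K1, K2, K3, lam2=3.0):
    """Scaling function of Eq. (28); lam2 is an illustrative value."""
    Keff = np.sqrt(K1 ** 2 + (lam2 * K3) ** 2)
    num = (np.cosh(Keff) + np.cosh(K2) * np.cosh(K3)) ** 2 \
        - (np.sinh(K2) * np.sinh(K3)) ** 2
    den = np.cosh(Keff) + np.cosh(K2) * np.cosh(K3)
    return Keff / (2 * np.sinh(Keff)) * num / den

def G0(K1, K2):
    """Zero-field limit, Eq. (31)."""
    return K1 / (2 * np.sinh(K1)) * (np.cosh(K1) + np.cosh(K2))
```

As $K_{3}\rightarrow 0$, $G$ reduces to $G_{0}$; taking additionally $K_{1}\rightarrow 0$ gives the SM-phase constant $(1+\cosh K_{2})/2$.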
In the SM phase, where we also have $K_{1}=0$, the scaling function accordingly becomes
$\displaystyle G\left(K_{1},K_{2}\right)=C\frac{T^{*}}{T}$ (32)
and the resistivity, according to (27), becomes
$\displaystyle\rho(T)=CT^{*}T$ (33)
3) The Resistivity of LSCO
Let us consider here a sample of LSCO with doping parameter $x=0.19$ and $T_{c}=38.5K$, which has been studied in Giraldo-Gallo et al. (2018).
In Fig. 1 we plot our expression (30) for the zero-field resistivity (solid blue line), together with the experimental data from Giraldo-Gallo et al. (2018). In Figs. 2 and 3, we show the curves corresponding to our expression (30) for applied magnetic fields of $50T$ and $80T$, respectively, along with the experimental data from Giraldo-Gallo et al. (2018). In Fig. 4, we depict the three curves together, along with the one for $30T$. Notice that magnetic fields of $50T$ and above are strong enough to destroy the SC phase.
We see that our expression for $\rho(T,H)$ is in excellent agreement with the experimental data for LSCO.
Figure 1: Resistivity of LSCO, $\mu_{0}H=0$. The solid line is the plot of the
theoretical expression derived from our theory, Eq. (30) for a sample with
$T_{c}=38.5K$ at zero magnetic field. Experimental data from Giraldo-Gallo et
al. (2018)
Figure 2: Resistivity of LSCO. The solid line is the plot of the theoretical
expression derived from our theory, Eq. (30) for a sample with $T_{c}=38.5K$
at an applied magnetic field of $50T$. Experimental data from Giraldo-Gallo et
al. (2018)
Figure 3: Resistivity of LSCO. The solid line is the plot of the theoretical
expression derived from our theory, Eq. (30) for a sample with $T_{c}=38.5K$
at an applied magnetic field of $80T$. Experimental data from Giraldo-Gallo et
al. (2018)
Figure 4: Influence of an applied magnetic field on the resistivity of LSCO,
for a sample with $T_{c}=38.5K$ at zero magnetic field (blue line),
$\mu_{0}H=30T$ (purple line), $\mu_{0}H=50T$ (green line) and $\mu_{0}H=80T$
(red line). Observe that an applied magnetic field of $50T$ or more,
completely destroys the SC phase.
4) The Magnetoresistivity of Bi2201
Let us now consider our general expression for the resistivity in the SM phase, Eq. (30), taken as a function of the applied magnetic field $H$. Let us apply it to the sample of Bi2201, having $T_{c}\simeq 1K$, at a fixed temperature $T=4.2K$, studied in Ayres et al. (2020).
According to our expression for the SC transition temperature of cuprates
Marino et al. (2020); Arouca and Marino (2020), for Bi2201 a critical SC
temperature of $T_{c}\simeq 1K$ corresponds to a stoichiometric doping
parameter $x=0.377$.
Then, according to our expression for the PG temperature $T^{*}$ of cuprates Marino et al. (2020); Arouca and Marino (2020), such a doping parameter corresponds to $T^{*}=3.15K$. The sample of Bi2201 studied in Hussey et al. (2013) at a temperature of $T=4.2K$ must therefore be in the Strange Metal phase, where $M=0$ and $\mu=DT$ Marino et al. (2020); Arouca and Marino (2020).
Using our expression (30) at a fixed temperature of $T=4.2K$ and choosing $\lambda_{1}=25.32$, $\lambda_{2}=3$ and a residual resistivity $\rho_{0}=100\mu\Omega cm$, we obtain the curve depicted in green in Fig. 5. The experimental data are from Ayres et al. (2020).
Figure 5: Magnetoresistance of Bi2201. Our theoretical expression, derived
from first principles (green line), accurately describes the experimental
result obtained in Ayres et al. (2020), for a sample of Bi2201 with
$T_{c}\simeq 1K$, which corresponds to a pseudogap temperature $T^{*}=3.15K$
Marino et al. (2020). The measurement is made at a temperature $T=4.2K$, which
is larger than $T^{*}$, implying that the material is in the SM phase. The
(linear) dashed line is added just to emphasize the crossover of the
dependency of $\rho$ with $H$.
Figure 6: Magnetoresistance of Bi2201, for the same sample as in Fig. 5. The different curves represent our analytical result at temperatures of $4.2K$ (green), $5K$ (red), $10K$ (blue), $20K$ (gold) and $30K$ (cyan). Experimental data are for the $4.2K$ sample (same as in Fig. 5).
In Fig. 6 we show the magnetoresistance curves for the same sample of Bi2201 at different temperatures.
5) The Resistivity of Tl2201 and the Location of the QCP
Let us take the case of Tl2201 in order to address the issue of the power-law dependence of the zero-field resistivity near the SC dome in OD cuprates. As it turns out, knowledge of this power law will enable us to clarify the issue concerning the location of the QCP associated with the SM phase. For this purpose, we are going to use the results obtained in Arouca and Marino (2020), according to which we have the following power-law regimes for the resistivity just outside the SC dome: Strange Metal (SM), Fermi Liquid (FL) and Crossover (C):
$\displaystyle\mathrm{SM}\ -\ \rho\propto T$ (34) $\displaystyle\mathrm{FL}\
-\ \rho\propto T^{2}$ $\displaystyle\mathrm{C}\ -\ \rho\propto T^{1+\delta}\ \
\ \delta\in[0,1]$
We also recall that the resistivity behavior in the upper PG phase shares the
$T$-linear behavior with the SM phase Arouca and Marino (2020).
The scaling function $G\left(K_{1},K_{2}\right)$ has the following types of
behavior in each of the regions above Arouca and Marino (2020)
$\displaystyle\mathrm{SM}\ -\ G\propto\frac{T^{*}}{T}$ (35)
$\displaystyle\mathrm{FL}\ -\ G\propto C$ $\displaystyle\mathrm{C}\ -\
G\propto\left(\frac{T^{*}}{T}\right)^{1-\delta}\ \ \ \delta\in[0,1]$
Let us now consider the following two scenarios for the phase diagram of cuprates, which we illustrate for the case of Tl2201.
In the first scenario (I), depicted in Fig. 13, the quantum critical point (QCP), where the pseudogap line $T^{*}(x)$ ends, is located precisely at the edge of the SC dome, while in the second scenario (II), depicted in Fig. 7, the QCP is located inside the SC dome.
Attentive inspection of these phase diagrams allows for the following conclusion. In the first scenario, the transition out of the SC dome, in the OD region, always leads to a linear behavior of the resistivity, $\rho\propto T$. In the second scenario, conversely, according to Fig. 8, the resistivity behavior depends on where we cross the SC dome: if we do so below the green line on the right-hand side, we shall have a $\rho\propto T^{2}$ behavior. When we cross the SC dome between the dashed line and the green line on the right-hand side, we shall instead have a super-linear behavior, $\rho\propto T^{1+\delta}$. Finally, when we cross the SC dome between the green line on the left-hand side and the dashed line, we shall have a linear behavior, $\rho\propto T$. In any of the three cases, however, as we raise the temperature, we eventually reach a $T$-linear behavior of the resistivity.
From the behavior of the resistivity of a given cuprate material in the OD
region, one may infer which scenario applies to its phase
diagram, concerning especially the PG temperature line $T^{*}(x)$ and the
position of the QCP.
For the case of LSCO, for instance, the behavior exhibited in Fig. 1 strongly
suggests that scenario I applies to this material.
Let us now consider the case of Tl2201. We evaluated the zero-field
resistivity, just above the SC transition, for four samples having,
respectively, transition temperatures $T_{c}=7K, 22K, 35K, 57K$. We did the
calculation using (27), with the different scaling functions in (35).
The blue curves were obtained by using the scaling function of the FL phase.
The red curves, conversely, were obtained with the scaling function of the
Crossover.
The result, compared with the experimental data of Hussey et al. (2013), is
shown in Figs. 9, 10, 11, 12. It shows unequivocally that the blue curves are
the ones that correctly describe the resistivity of the 7K, 22K and 35K
samples of Tl2201, while the red curve correctly describes the resistivity of
the 57K sample. We conclude that this sample undergoes the SC transition into
the Crossover region, while the other three samples go from the SC to the FL phase.
Remarkably, we can confirm the previous conclusions by visual inspection of
the phase diagram in Fig. 8, where the four red dashed lines represent the
above samples of Tl2201.
The results above, consequently, strongly suggest that scenario II applies to
Tl2201.
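The blue-versus-red comparison above is, in essence, a model-selection step. A minimal sketch of it on synthetic data: fit $\rho(T)$ with both an FL form $\rho_0+bT^{2}$ and a crossover form $\rho_0+bT^{1+\delta}$, and keep the form with the smaller residual sum of squares. All numbers and names here are illustrative placeholders, not the fits actually performed for Tl2201.

```python
import numpy as np

def rss_power_fit(T, rho, p):
    """Best linear fit rho ~ a + b*T**p; return the residual sum of squares."""
    A = np.column_stack([np.ones_like(T), T**p])
    coef = np.linalg.lstsq(A, rho, rcond=None)[0]
    return np.sum((A @ coef - rho) ** 2)

def select_model(T, rho, delta=0.5):
    """Pick FL (T^2) or Crossover (T^(1+delta)) by smaller residual."""
    rss_fl = rss_power_fit(T, rho, 2.0)
    rss_c = rss_power_fit(T, rho, 1.0 + delta)
    return "FL" if rss_fl <= rss_c else "Crossover"

T = np.linspace(40.0, 120.0, 60)       # synthetic temperature range (K)
rho_fl_like = 5.0 + 2e-3 * T**2        # FL-like sample
rho_c_like = 5.0 + 0.05 * T**1.5       # crossover-like sample, delta = 0.5
```

With real data, one would of course also inspect the fits visually, as done with the blue and red curves in Figs. 9-12.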
Figure 7: Phase diagram of Tl2201. Scenario I, where the QCP is located at the
edge of the SC dome, in the OD region. The green lines delimit the SM
region, on the left of the dashed line, and the Crossover region, on the right.
The FL region is found below the second green line. Experimental data from
Honma and Hor (2006).
Figure 8: Phase diagram of Tl2201. Scenario II, where the QCP is inside the
SC dome. The green lines delimit the SM region, on the left of the dashed line,
and the Crossover region, on the right. The FL region is found below the
second green line. In any case, inside the SC dome, the SC state should be
energetically favorable. The four dashed red lines mark the $T_{c}$ of the
Tl2201 samples, respectively, 57K, 35K, 22K and 7K. Experimental data from Honma
and Hor (2006).
Figure 9: Resistivity of Tl2201 at zero magnetic field, for a sample with
$T_{c}=7K$. The blue line is our theoretical expression, calculated with the
scaling function appropriate for the FL phase, while the red line would be the
result had we done the calculation with the one corresponding to the
Crossover. Experimental data from Hussey et al. (2013); Manako et al. (1992);
Hussey et al. (2004); Merino and McKenzie (2000).
Figure 10: Resistivity of Tl2201 at zero magnetic field, for a sample with
$T_{c}=22K$. The blue line is our theoretical expression, calculated with the
scaling function appropriate for the FL phase, while the red line would be the
result had we done the calculation with the one corresponding to the
Crossover. Experimental data from Hussey et al. (2013); Manako et al. (1992);
Hussey et al. (2004); Merino and McKenzie (2000).
Figure 11: Resistivity of Tl2201 at zero magnetic field, for a sample with
$T_{c}=35K$. The blue line is our theoretical expression, calculated with the
scaling function appropriate for the FL phase, while the red line would be the
result had we done the calculation with the one corresponding to the
Crossover. Experimental data from Hussey et al. (2013); Manako et al. (1992);
Hussey et al. (2004); Merino and McKenzie (2000).
Figure 12: Resistivity of Tl2201 at zero magnetic field, for a sample with
$T_{c}=57K$. The red line is our theoretical expression, calculated with the
scaling function appropriate for the Crossover, while the blue line would be
the result had we done the calculation with the one corresponding to the FL
phase. Experimental data from Hussey et al. (2013); Manako et al. (1992);
Hussey et al. (2004); Merino and McKenzie (2000).
6) Quadrature and Scaling
Ayres et al. (2020) pointed out the existence of a crossover in the
magnetoresistance in cuprates, from a quadratic behavior at low fields to a
linear behavior in the high-field regime. Our theoretical expression
reproduces the experimentally observed crossover (see Figs. 5, 6).
In materials such as electron-doped cuprates and iron pnictides, this type of
behavior is usually ascribed to a quadrature scaling of the MR, supposedly
associated with quantum critical phases.
In the case of cuprates, however, the same study of the magnetoresistivity in
the SM phase indicates that the quadrature scaling is violated, in spite of
the $H$-field crossover. Moreover, the MR field derivative is shown to scale
as a function of $H/T$.
Our results indicate that despite exhibiting the quadratic-to-linear
crossover, which is observed experimentally, the resistivity of cuprates in
the Strange Metal phase does not show a quadrature scaling dependence. Rather,
it depends on $H$ and $T$ through the function in (30), which was derived
from our general theory for the cuprates Marino et al. (2020); Arouca and
Marino (2020).
In Fig. 13, we display the field derivative of our expression (30), plotted as
a function of the ratio $H/T$, namely, for $y=\frac{H}{T}$,
$\frac{\partial\rho_{SM}(H,T)}{\partial
H}=\frac{\partial\rho_{SM}(y,T)}{\partial y}\frac{\partial y}{\partial
H}=T\frac{df(y)}{dy}\frac{1}{T}=f^{\prime}(y),$ (36)
where we used that $\rho_{SM}(H,T)=Tf(y)$.
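Equation (36) can also be checked numerically: for any $\rho_{SM}(H,T)=Tf(H/T)$, the finite-difference field derivative collapses onto the single curve $f'(y)$, $y=H/T$, for every temperature. The function $f$ below is an arbitrary smooth placeholder, not the scaling function of (30).

```python
import numpy as np

# Placeholder scaling function and its exact derivative (illustrative only).
f = lambda y: np.sqrt(1.0 + y**2)
fprime = lambda y: y / np.sqrt(1.0 + y**2)

def drho_dH(H, T, eps=1e-6):
    """Central-difference field derivative of rho_SM(H, T) = T * f(H / T)."""
    rho = lambda h: T * f(h / T)
    return (rho(H + eps) - rho(H - eps)) / (2 * eps)

H = np.linspace(0.0, 60.0, 101)           # field values in arbitrary units
temps = (20.0, 40.0, 80.0)
curves = [drho_dH(H, T) for T in temps]
# All curves agree with f'(y) evaluated at the same y = H/T, i.e. they
# collapse onto a single function of H/T, as in Eq. (36):
collapse_ok = all(np.allclose(c, fprime(H / T), atol=1e-5)
                  for c, T in zip(curves, temps))
```

This is exactly the collapse seen in the experimental data of Ayres et al. (2020) and reproduced in Fig. 13.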
The resulting expression has precisely the form of the collapsed experimental
data exhibited in Ayres et al. (2020), which indicates the correctness of the
magnetoresistance expression derived from our theory.
Figure 13: Magnetoresistance derivative for Bi2201, plotted as a function of
$H/T$.
7) Conclusion
We have derived, from our recently proposed theory for the high-$T_c$ cuprates,
an analytic expression for the resistivity in the presence of an external
magnetic field. It shows excellent agreement with the experimental data
for the resistivity of LSCO at different values of the applied magnetic field.
The associated MR presents the crossover from parabolic to linear dependence
despite the fact that it does not satisfy a quadrature scaling. Yet, the
magnetic field derivative of the magnetoresistance presents an $H/T$ scaling,
in complete agreement with the results of the experiments performed in Ayres
et al. (2020).
We introduced a method to determine whether the QCP associated with the
pseudogap temperature $T^{*}$ and the SM phase is located inside or outside
(at the edge of) the SC dome. It is based on the observation of the power-law
behavior of the resistivity, as a function of $T$, just above $T_{c}$, in
the OD region. Our results indicate that the QCP is inside the dome for Tl2201
and at its very edge for LSCO.
Acknowledgments
E. C. Marino was supported in part by CNPq and by FAPERJ. R. Arouca
acknowledges funding from the Brazilian Coordination for the Improvement of
Higher Education Personnel (CAPES) and from the Delta Institute for
Theoretical Physics (DITP) consortium, a program of the Netherlands
Organization for Scientific Research (NWO) that is funded by the Dutch
Ministry of Education, Culture and Science.
Corresponding author: ECM<EMAIL_ADDRESS>
## References
* Ayres et al. (2020) J. Ayres, M. Berben, M. Culo, Y.-T. Hsu, E. van Heumen, Y. Huang, J. Zaanen, T. Kondo, T. Takeuchi, J. Cooper, et al., arXiv preprint arXiv:2012.01208 (2020).
* Taillefer (2010) L. Taillefer, Annu. Rev. Condens. Matter Phys. 1, 51 (2010).
* Hu et al. (2017) T. Hu, Y. Liu, H. Xiao, G. Mu, and Y. Yang, Scientific Reports 7, 1 (2017).
* Ando et al. (2000) Y. Ando, Y. Hanaki, S. Ono, T. Murayama, K. Segawa, N. Miyamoto, and S. Komiya, Physical Review B 61, R14956 (2000).
* Ando et al. (2004) Y. Ando, S. Komiya, K. Segawa, S. Ono, and Y. Kurita, Physical Review Letters 93, 267001 (2004).
* Gurvitch and Fiory (1987) M. Gurvitch and A. T. Fiory, Physical Review Letters 59, 1337 (1987).
* Keimer et al. (2015) B. Keimer, S. A. Kivelson, M. R. Norman, S. Uchida, and J. Zaanen, Nature 518, 179 (2015).
* Varma et al. (1989) C. M. Varma, P. B. Littlewood, S. Schmitt-Rink, E. Abrahams, and A. E. Ruckenstein, Physical Review Letters 63, 1996 (1989).
* Varma (1999) C. M. Varma, Physical Review Letters 83, 3538 (1999).
* Faulkner et al. (2010) T. Faulkner, N. Iqbal, H. Liu, J. McGreevy, and D. Vegh, Science 329, 1043 (2010).
* Davison et al. (2014) R. A. Davison, K. Schalm, and J. Zaanen, Physical Review B 89, 245116 (2014).
* Patel et al. (2018) A. A. Patel, J. McGreevy, D. P. Arovas, and S. Sachdev, Physical Review X 8, 021049 (2018).
* Zaanen (2004) J. Zaanen, Nature 430, 512 (2004).
* Legros et al. (2019) A. Legros, S. Benhabib, W. Tabis, F. Laliberté, M. Dion, M. Lizaire, B. Vignolle, D. Vignolles, H. Raffy, Z. Li, et al., Nature Physics 15, 142 (2019).
* Zaanen (2019) J. Zaanen, SciPost Phys. 6, 61 (2019).
* Damle and Sachdev (1997) K. Damle and S. Sachdev, Physical Review B 56, 8714 (1997).
* Sachdev (2011) S. Sachdev, _Quantum Phase Transitions_ (Cambridge University Press, 2011), 2nd ed.
* Phillips and Chamon (2005) P. Phillips and C. Chamon, Physical Review Letters 95, 107002 (2005).
* Banerjee et al. (2020) A. Banerjee, M. Grandadam, H. Freire, and C. Pépin, arXiv preprint arXiv:2009.09877 (2020).
* Arouca and Marino (2020) R. Arouca and E. C. Marino, Superconductor Science and Technology (2020).
* Hussey et al. (2013) N. Hussey, H. Gordon-Moys, J. Kokalj, and R. McKenzie, in _Journal of Physics: Conference Series_ (IOP Publishing, 2013), vol. 449, p. 012004.
* Sarkar et al. (2019) T. Sarkar, P. Mandal, N. Poniatowski, M. K. Chan, and R. L. Greene, Science advances 5, eaav6753 (2019).
* Hayes et al. (2016) I. M. Hayes, R. D. McDonald, N. P. Breznay, T. Helm, P. J. Moll, M. Wartenbe, A. Shekhter, and J. G. Analytis, Nature Physics 12, 916 (2016).
* Giraldo-Gallo et al. (2018) P. Giraldo-Gallo, J. Galvis, Z. Stegen, K. A. Modic, F. Balakirev, J. Betts, X. Lian, C. Moir, S. Riggs, J. Wu, et al., Science 361, 479 (2018).
* Marino et al. (2020) E. C. Marino, R. O. Corrêa Jr, R. Arouca, L. H. Nunes, and V. S. Alves, Superconductor Science and Technology 33, 035009 (2020).
* Marino (2017) E. C. Marino, _Quantum Field Theory Approach to Condensed Matter Physics_ (Cambridge University Press, 2017).
* Honma and Hor (2006) T. Honma and P. Hor, Superconductor Science and Technology 19, 907 (2006).
* Manako et al. (1992) T. Manako, Y. Kubo, and Y. Shimakawa, Physical Review B 46, 11019 (1992).
* Hussey et al. (2004) N. Hussey, K. Takenaka, and H. Takagi, Philosophical Magazine 84, 2847 (2004).
* Merino and McKenzie (2000) J. Merino and R. H. McKenzie, Physical Review B 61, 7996 (2000).
# A numerical approach for heat flux estimation in thin slabs continuous
casting molds using data assimilation
Umberto Emil Morelli1,2,3*<EMAIL_ADDRESS>, Patricia Barral1,2 ,
Peregrina Quintela1,2 , Gianluigi Rozza3 and Giovanni Stabile3 1Universidade
de Santiago de Compostela, Santiago de Compostela, Spain 2 Technological
Institute for Industrial Mathematics (ITMATI), Santiago de Compostela, Spain
3Scuola Internazionale Superiore di Studi Avanzati (SISSA), Trieste, Italy
###### Abstract.
In the present work, we consider the industrial problem of estimating in real
time the mold-steel heat flux in continuous casting molds. We approach this
problem by first considering the mold modeling problem (direct problem). Then,
we pose the heat flux estimation problem as the inverse problem of estimating
a Neumann boundary condition, having as data pointwise temperature measurements
in the interior of the mold domain. We also consider the case of having a
total heat flux measurement together with the temperature measurements.
develop two methodologies for solving this inverse problem. The first one is
the traditional Alifanov’s regularization, the second one exploits the
parameterization of the heat flux. We develop the latter method to have an
offline-online decomposition with a computationally efficient online part to
be performed in real-time. In the last part of this work, we test these
methods on academic and industrial benchmarks. The results show that the
parameterization method outclasses Alifanov’s regularization both in
performance and computational cost. Moreover, it proves to be robust with
respect to the measurements noise. Finally, the tests confirm that the
computational cost is suitable for real-time estimation of the heat flux.
###### Key words and phrases:
Inverse Problem, Heat Transfer, Continuous Casting, Real-time, Data
Assimilation, Boundary Condition Estimation
Funded by the European Union's Horizon 2020 research and innovation programme
under the Marie Skłodowska-Curie Grant Agreement No. 765374. It was also
partially supported by the Ministry of Economy, Industry and Competitiveness
through the Plan Nacional de I+D+i (MTM2015-68275-R), by the Agencia Estatal
de Investigación through project [PID2019-105615RB-I00/ AEI /
10.13039/501100011033], by the European Union Funding for Research and
Innovation - Horizon 2020 Program - in the framework of European Research
Council Executive Agency: Consolidator Grant H2020 ERC CoG 2015 AROMA-CFD
project 681447 “Advanced Reduced Order Methods with Applications in
Computational Fluid Dynamics” and INDAM-GNCS project “Advanced intrusive and
non-intrusive model order reduction techniques and applications”, 2019.
## 1\. Introduction
Continuous Casting (CC) of steel is presently the most used process to produce
steel worldwide. For example, in 2017, 96% of the steel was produced by CC.[1]
This industrial process is not new at all. In fact, continuous casters as in
Figure 1 have been used for many decades now. Consequently, the process has
undergone a long sequence of improvements driven by the experience of
commercial operators and, more recently, by numerical simulations.[2]
(a) CC process.
(b) Mold section.
Figure 1. (A) Schematic overview of the continuous casting process (adapted
from Klimes et al[3]). (B) Schematic of a horizontal section of the mold (the
casting direction is perpendicular to the image).
We can summarize the CC process as follows. The metal is liquefied and then
tapped into the ladle. When it is at the correct temperature, the metal goes
into the tundish. In the tundish, the metal flow is regulated and smoothed.
Through the Submerged Entry Nozzle (SEN), the metal is drained into a mold.
The role of the mold in the CC process is to cool down the steel until it has
a solid skin which is thick and cool enough to be supported by rollers.
At the outlet of the mold, the metal is still molten in its inner region; thus
a secondary cooling section follows the mold. Supported by rollers, the strand
is cooled until complete solidification by directly spraying water over it. At
the end of this secondary cooling region, the casting is completed. To be
ready for its final application, the strand generally continues through
additional mechanisms which may flatten, roll or extrude the metal into its
final shape. This is just a brief overview on the CC process. We refer to
Irving’s monograph for a detailed description.[4]
In this work, we focus on CC of thin slabs. Slabs are classified as thin when
their thickness is smaller than 70 mm, while their width is, in general,
between 1 and 1.5 m. Thanks to the small thickness, the solidification in the
slab is relatively fast; consequently, the casting speed is generally high,
between 7 and 14 meters per minute.
Thin slab molds are made of four different plates: two wide plates and two
lateral plates, all made of copper (see Figure 1 (b)). In general, lateral
plates can be moved or changed to modify the slab section dimensions. The
geometry of these plates is more complex than one can expect: they have
drilled channels where the cooling water flows, slots in the outside face for
thermal expansion, thermocouples, and fastening bolts. To compensate for the
shrinkage of the slab upon cooling and minimize the gap, the molds are
tapered. Moreover, the upper portion of the mold forms a funnel to accommodate
the SEN.
Several phenomena related to steel flow, solidification, mechanics and heat
transfer appear in the mold region. This complexity makes the mold the most
critical part of the CC process. Thus, when running a continuous caster,
productivity and safety issues must be addressed at the mold.
Regarding quality, the presence of imperfections on the external surface of
the cast piece (cracks, inclusions, etc.) must be avoided. In fact, since
cast pieces are generally laminated in later production stages, surface defects
would become evident, also affecting the mechanical properties of the final
products.
However, quality control is not the only issue arising at the mold. Due to the
creation of the solid skin, a frequent problem arising during CC is the
sticking of the steel to the mold. After the detection of sticking, the
casting speed is reduced to reestablish the desired metal flow before
restoring the nominal casting speed. This affects the product quality and the
productivity of the caster. Also, if not detected in time, it can lead to
dangerous events forcing the shutdown of the caster.
Less frequent but more catastrophic events are the liquid break-out and the
excessive increase of the mold temperature. The former is due to a non-uniform
cooling of the metal, with the skin being so thin that it breaks. The latter is
generally considered the most dangerous event in a casting plant. In fact,
if the mold temperature is high enough to cause the boiling of the cooling
water, there is a dramatic decrease in heat extraction. The
temperature in the mold then quickly rises, which could cause the melting of
the mold itself. Both these incidents are very dangerous and costly. In fact,
they generally require the shutdown of the caster, the substitution of
expensive components, and an extended turnaround.
For all these reasons, the early detection of problems in the mold is crucial
for a safe and productive operation of continuous casters. Their detection
becomes more difficult as the casting speed (and thus the productivity) of the
casters increases.
Since continuous casters have been running for decades, operators have already
faced all these problems. To gain insight into the conditions in the mold, they
provided the molds with measuring equipment. In particular, they measure the
pointwise temperature of the mold by thermocouples (see Figure 1 (b)) and the
cooling water temperature, as well as its flow, at the inlet and outlet of the
cooling system.
CC operators use the data coming from the measurement equipment as
follows. The thermocouples' temperatures are used to gain insight into the
mold temperature field. On the other hand, the water temperature rise is used
to approximate the heat extracted from the steel.
This approach has allowed continuous casters to run for decades. Nevertheless,
it has several drawbacks: it relies on the experience of the operators, gives
very limited information about the heat flux at the mold-slab interface, and is
customized for each geometry, so it requires new effort to be applied to new
designs. Hence, a new tool for analyzing the mold behavior is necessary.
We begin by noting that CC operators consider the local heat
flux between mold and slab to be the most important piece of information in
analyzing the casting in this region. By taking the mold itself to be our
domain and focusing on its thermal behavior, the mold-slab heat flux can be
seen as a Neumann Boundary Condition (BC) in the model. To compute its value,
we pose the following inverse problem: from the temperature measurements
provided by the thermocouples, estimate the boundary heat flux at the
mold-slab interface.
In general, this is a complex problem, which can be divided into three
different but related subproblems:
* •
Accurate modeling of the thermal problem in the physical mold.
* •
Solution of the inverse problem of estimating the heat flux.
* •
Reduction of the computational cost of the inverse problem solution to achieve
real-time computation.
In this work, we address in detail the above three problems giving an overview
on the state of the art and presenting our approach and contribution for their
solution.
## 2\. Mold Thermal Model
In this section, we give a description of the physical phenomena that occur in
the mold region of a caster, giving an overview on previous efforts in
modeling them. Then, we present the physical problem that we will consider in
the present work. Finally, we present the mathematical model we use in the
rest of the paper.
### 2.1. Physical Problem
Going from the inside to the outside of the mold, we encounter several
physical phenomena. In the inner part of the mold, we have the liquid pool of
steel. There, we have a molten metal flow with dispersed argon bubbles and
inclusion particles. All around the liquid pool, we encounter the solid skin
and, in between, the mushy region. Here, the steel changes phase undergoing
solidification. Between the steel and the mold, there is a thin layer of flux
powder which is liquid close to the steel and solid where in contact with the
mold. Finally, we encounter the mold which is surrounding the flux powder (in
case of not perfect casting, we can also have an air gap between the mold and
the slab).
The mold is composed of a solid (copper) region and a liquid region (water)
representing its cooling system. In the copper, we have heat conduction due to
the temperature gradients. In the water, we have an incompressible flow inside
tubes. To prevent the water from boiling, it is pumped at a very high pressure
and flow rate. Therefore, a turbulent flow with high Nusselt number is
expected.
According to the previous description, the main physical phenomena for CC
include[5]:
* •
Fully-turbulent, transient fluid motion in a complex geometry (SEN, strand
liquid pool), affected by dispersed particles and thermal buoyancy.
* •
Thermodynamic reactions.
* •
Multiphase flow and heat transport.
* •
Dynamic motion of free liquid surfaces and interfaces.
* •
Thermal, fluid and mechanical interactions between solids, liquids and gases.
* •
Heat transfer in solids.
* •
Distortion and wear of the mold.
* •
Solidification of steel shell.
* •
Shrinkage of the solidifying steel shell.
* •
Crack formation.
Due to its complexity, the literature on CC mold modeling is extensive. For
each physical phenomenon in the previous list, we have at least a dedicated
model and several investigations. Just to name the most relevant works, Meng
and Thomas[6] investigated the modeling of transient slag-layer phenomena in
the steel-mold gap. Thomas et al[7] studied the steel flow and solidification
in the mold including the transport of superheat with the turbulent transient
flow of molten steel, surface level fluctuations, and the transport and
entrapment of inclusion particles. On the other hand, a detailed description
of solidification in casting is available in Stefanescu's
monograph,[8] while a review on the initial solidification models was done by
Wang.[9]
As mentioned, the literature on the subject is extensive.[10, 11, 12] Thus, we
refer the interested reader to the review articles by Thomas et al.[13, 2]
### 2.2. Specific Physical Problem
As discussed in Section 1, our goal is to study the real-time behavior of CC
molds and, in particular, to compute the mold-slab heat flux. One way to
compute it would be to simulate all the phenomena discussed above, from the SEN
to the secondary cooling region. However, the resulting model would be quite
complex and computationally expensive to deal with, especially for real-time
applications. We therefore discard this option.
The fully experimental approach is also not feasible since it is not possible
to make direct measurements in the solidification region. The only
measurements available are temperature measurements by thermocouples that are
buried inside the mold plates and cooling water temperature measurements.
Hence, our approach to study the real-time behavior of continuous casting molds
and compute the mold-slab heat flux is to solve an inverse problem
having these measurements as data. In the rest of this section, we describe
the mold thermal model that we use in the present investigation and the
related assumptions.
In modeling the thermal behavior of the mold, we consider the following well
established assumptions:
* •
The copper mold is assumed a homogeneous and isotropic solid material.
* •
The cooling water temperature is known.
* •
The thermal expansion of the mold and its mechanical distortion are
negligible.
* •
The material properties are assumed constant.
* •
The boundaries in contact with air are assumed adiabatic.
* •
No boiling in the water is assumed.
* •
The heat transmitted by radiation is neglected.
Since we want to obtain the solution in real time (e.g., every second) and the
casting speed is of a few meters per minute, we consider steady-state models.
Moreover, we only consider 3D mold models because we are interested in the
heat flux over the whole mold-slab interface.
As a final remark, the running parameters of the cooling system and its
geometry ensure a fully developed turbulent flow. In fact, these molds are
equipped with a closed-loop cooling system where the water is pumped at a high
pressure. The average velocity in each cooling channel is approximately 10
m/s, the diameter being approximately 10 mm. Thus, the Reynolds number in the
cooling system is around $10^5$, which ensures a turbulent flow.
Thanks to the high Reynolds number of the flow, we can further assume that the
cooler and hotter water molecules are well mixed. Consequently, the
temperature in each section of the cooling channel is approximately constant.
Moreover, the water is pumped in a closed circuit, so we can assume that the
water flow rate is constant. In turn, since the channels have a constant
section, the velocity of the fluid is also uniform and constant (plug flow).
Then, we focus our attention on the following model:
1. (1)
The computational domain is composed only of the (solid) copper mold. We
consider a steady-state three-dimensional heat conduction model with a
convective BC on the portion of the boundary in contact with the cooling
water. The water temperature is known at the inlet and outlet of the cooling
system and is assumed to vary linearly between them.
### 2.3. Computational Domain and Notation
Consider a solid domain, $\Omega$, which is an open Lipschitz bounded subset
of ${\rm I\\!R}^{3}$, with smooth boundary $\Gamma$ (see Figure 2). Let
$\Gamma=\Gamma_{s_{in}}\cup\Gamma_{s_{ex}}\cup\Gamma_{sf}$ where
$\mathring{\Gamma}_{s_{in}}$, $\mathring{\Gamma}_{s_{ex}}$ and
$\mathring{\Gamma}_{sf}$ are disjoint sets. The Eulerian Cartesian coordinate
vector is denoted by $\mathbf{x}\in\Omega$ and $\mathbf{n}(\mathbf{x})$ the
unit normal vector that is directed outwards from $\Omega$.
Figure 2. Schematic of the mold domain, $\Omega$, and its boundaries.
In this setting, $\Omega$ corresponds to the region of the space occupied by
the mold. The interface between the mold and the cooling system is denoted by
$\Gamma_{sf}$. While, $\Gamma_{s_{in}}$ is the portion of the mold boundary in
contact with the solidifying steel. Finally, we denote the remaining part of
the mold boundary with $\Gamma_{s_{ex}}$.
### 2.4. Mathematical Model
We shall assume all along the following assumptions on the data:
1. (H1)
The process is assumed to be steady-state.
2. (H2)
The thermal conductivity is constant and strictly positive: $k\in{\rm
I\\!R}^{+}$.
3. (H3)
The mold-steel heat flux, $g$, belongs to $L^{2}(\Gamma_{s_{in}})$.
4. (H4)
There is no heat source inside the mold domain.
5. (H5)
The heat transfer coefficient is known, constant and strictly positive:
$h\in{\rm I\\!R}^{+}$.
6. (H6)
The cooling water temperature, $T_{f}$, is known and belongs to
$L^{2}(\Gamma_{sf})$.
Under the assumptions (H1)-(H6), we propose the following three-dimensional,
steady-state, heat conduction model
###### Problem 2.1.
Find $T$ such that
(2.1) $-k\Delta T=0,\text{ in }\Omega,$
with BCs
(2.2) $\displaystyle-k\nabla T\cdot\mathbf{n}=g$ on $\Gamma_{s_{in}}$, (2.3)
$\displaystyle-k\nabla T\cdot\mathbf{n}=0$ on $\Gamma_{s_{ex}}$, (2.4)
$\displaystyle-k\nabla T\cdot\mathbf{n}=h(T-T_{f})$ on $\Gamma_{sf}$.
We recall that for this problem the following result is well established (see
Nittka, Theorem 3.14)[14]:
###### Theorem 2.1.
Under the assumptions (H3),(H5) and (H6), the solution to Problem 2.1 exists
and is unique in $H^{1}(\Omega)$. Moreover, there exists a $\gamma>0$ such
that the solution to Problem 2.1 belongs to $C^{0,\gamma}({\Omega})$.
As a final remark, we recall (see Raymond, Theorem 3.3.6)[15],
###### Theorem 2.2.
If $g$ and $T_{f}$ belong to $L^{s}(\Gamma_{s_{in}})$ and $L^{s}(\Gamma_{sf})$
respectively, with $s>2$, then the solution $T$ to Problem 2.1 belongs to
$C(\bar{\Omega})$ and
(2.5) $\left\lVert T\right\rVert_{C(\bar{\Omega})}\leq C\left(\left\lVert
g\right\rVert_{L^{s}(\Gamma_{s_{in}})}+\left\lVert
T_{f}\right\rVert_{L^{s}(\Gamma_{sf})}\right),$
where the constant $C$ is independent of $h$.
Regarding the numerical solution of Problem 2.1, we use the finite volume
method for its discretization. Given a tessellation $\mathcal{T}$ of the
domain, $\Omega$, we write the discrete unknown $(T_{C})_{C\in\mathcal{T}}$ as
the real vector $\mathbf{T}$, belonging to ${\rm I\\!R}^{N_{h}}$ with
$N_{h}=\text{size}(\mathcal{T})$. Then, we write the discretized problem as
the linear system
(2.6) $\mathbf{A}\mathbf{T}=\mathbf{b},$
where $\mathbf{A}$ is the stiffness matrix and $\mathbf{b}$ the source term.
The value of each element of $\mathbf{A}$ and $\mathbf{b}$ depends on the
particular finite volume scheme for the discretization and the mesh used.
Since our problem is a classic diffusion problem, we refer the reader to
Eymard's monograph for further details regarding the finite volume
discretization.[16]
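To make the structure of system (2.6) concrete, here is a minimal 1D finite-volume analogue of Problem 2.1: a slab of conductivity $k$ with an imposed flux $g$ on one face (the analogue of (2.2)) and a convective condition $-k\,dT/dx=h(T-T_f)$ on the other (the analogue of (2.4)). All parameter values are illustrative, not mold data.

```python
import numpy as np

def solve_slab(N=50, L=0.04, k=390.0, g=1.0e6, h=4.0e4, Tf=300.0):
    """Assemble and solve A T = b for a 1D slab with N finite-volume cells."""
    dx = L / N
    A = np.zeros((N, N))
    b = np.zeros(N)
    # Conduction between neighboring cells: (k/dx) * (T_i - T_j).
    for i in range(N):
        if i > 0:
            A[i, i] += k / dx
            A[i, i - 1] -= k / dx
        if i < N - 1:
            A[i, i] += k / dx
            A[i, i + 1] -= k / dx
    # Neumann BC: imposed incoming flux g on the left (hot) face.
    b[0] += g
    # Robin BC on the right face, with the half-cell conduction resistance
    # put in series with the convective resistance 1/h.
    h_eff = 1.0 / (1.0 / h + dx / (2.0 * k))
    A[-1, -1] += h_eff
    b[-1] += h_eff * Tf
    return np.linalg.solve(A, b)

T = solve_slab()   # cell-center temperatures, hot face first
```

For this 1D problem the temperature profile is linear, so the scheme reproduces the exact balance: the last cell sits at $T_f+g\,(1/h+\Delta x/2k)$ and the total drop across the cell centers is $g(N-1)\Delta x/k$.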
## 3\. Inverse Problem
This section is devoted to the formulation and the study of an inverse problem
related to Problem 2.1. We consider two different inverse problems. In the
first one, the available data are the thermocouples’ measurements only. In the
second one, the total heat flux measurement is available together with the
thermocouples’ measurements.
### 3.1. State of the Art
The literature on inverse heat transfer problems is vast.[17, 18, 19, 20] We
refer to Alifanov’s[21], Orlande’s[22], Beck and Clair’s[23] and Chang’s[24]
works for a detailed review. This literature also includes the particular
problem of computing the mold-slab heat flux from temperature measurements in
the mold.[25, 26, 27] From a mathematical point of view, the present problem
is the estimation of a Neumann BC (the heat flux) having as data pointwise
measurements of the state inside the domain. Such problems were also addressed
in investigations not related to heat transfer.[15, 28, 29] Due to the
vastness of the literature on the subject, we merely report the most relevant
works on this subject.
Research in inverse heat transfer problems started in the 1950s. It was driven
by the interest in knowing the thermal properties of heat shields and the heat
fluxes on the surface of space vehicles during re-entry. From a heuristic
approach in the 1950s, researchers moved to a more mathematically formal
approach. In fact, in the 1960s and 1970s, most of the regularization theory
that we use nowadays to treat ill-posed problems was developed.[30, 21, 31, 32, 33] Here, we discuss
in general the most popular methodologies used for the solution of inverse
heat transfer problems.
Traditionally, to estimate the boundary heat flux in CC molds, a heat flux
profile is selected and then adapted by trial and error to match the measured
temperatures.[27] Pinhero et al[34] were the first to use an optimal
control framework and regularization methods. They used a steady-state version
of the 2D mold model proposed by Samarasekera and Brimacombe[35] and
parameterized the heat flux with a piecewise constant function. Finally, they
used Tikhonov’s regularization for solving the inverse problem and validated
the results with experimental measurements. A similar approach was used by
Rauter et al.[36] Ranut et al[37, 25] estimated the heat flux transferred from
the solidifying steel to the mold wall both in a 2D and 3D domain. They used a
steady-state heat conduction model for the mold and parameterized the heat
flux with a piecewise linear profile in 2D and symmetric cosine profile in 3D.
For the solution of the inverse problem, they used the Conjugate Gradient
Method (CGM) and a mixed GA-SIMPLEX algorithm[38] in 2D while in 3D they only
used the GA-SIMPLEX algorithm. Their results were also tested with
experimental data.
Hebi et al[39, 40] attempted to estimate the solidification in CC round
billets by using a 3D transient heat conduction model in the strand and the
mold with a Robin condition at the mold-strand interface. Then, they posed the
following inverse problem: find the inverse heat transfer coefficient between
mold and strand such that the distance between measured and computed
temperatures at the thermocouples is minimal. They assumed the inverse heat
transfer coefficient to be piecewise constant. Then, by using sensitivity
coefficients, each piece was iteratively adapted to match the measured
temperature. To allow convergence, a relaxation factor was introduced between
the iterations. They validated the results with plant measurements without
obtaining a good agreement. A similar approach was used by Gonzalez et
al[41] and Wang et al,[42, 43, 44, 45] the latter using a Neumann condition at
the mold-strand interface.
Wang and Yao[46] used the aforementioned inverse problem solution technique to
estimate the inverse heat transfer coefficient for CC round steel billets.
Then, they used the results obtained to train a Neural Network (NN) for on-
line computation. Similarly, Chen et al[47] used the fuzzy inference method
for estimating the mold heat flux. They modeled the mold with a 2D steady-
state heat conduction model in the solid and parameterized the boundary heat
flux. They tested the results on a numerical benchmark obtaining a good
agreement.
Yu and Luo[48] considered a 2D vertical section of a slab and the
solidification problem therein. They developed a modified Levenberg-Marquardt
method to estimate the inverse heat transfer coefficient in the secondary
cooling region from temperature measurements on the surface of the slab.
Udayraj et al[26] applied the CGM with the adjoint problem to the solution of
the inverse steady-state 2D heat conduction problem; this methodology was
first proposed by Alifanov[21] for the regularization of boundary inverse heat
transfer problems. With this method, there is no need to parameterize the heat
flux. However, the method underestimates the heat flux away from the
measurements. To overcome this issue, the authors proposed to average the
computed heat flux at each step and use the uniform averaged value as the
initial estimate for the following step. Similarly, Chen et al[49] tackled the
problem of estimating the steady boundary heat flux in 2D circular CC
supporting rollers based on temperature measurements inside the domain. For
its solution, they used the CGM proposed by Alifanov[21].
We conclude this section by describing previous works that are related to the
present research but not to CC. Ambrosi et al[50, 28] studied the mathematical
formulation of the force traction microscopy problem. This inverse problem
consists in obtaining the boundary stress field on a portion of the boundary
(Neumann BC) based on the pointwise measurement of the displacement (state
variable) inside the domain. The similarity with the present research is the
presence of pointwise observations and of a boundary inverse problem with a
(linear) elliptic direct problem. Vitale et al[50] stated the 2D direct
problem and the related inverse problem in the standard optimal control
framework due to Lions,[51] in which the unknown BC is the distributed
boundary control. Then, in a subsequent work,[28] they extended the
formulation to the 3D linear elasticity model, proving existence and
uniqueness of the optimal control solution.
Our contribution to the literature is to develop a novel method for solving
the 3D inverse heat transfer problem in CC molds that can achieve real-time
performance. Moreover, we design two benchmark cases for this application and
use them to compare the performance of the proposed method with the classical
Alifanov’s regularization.
### 3.2. Inverse Problem with Thermocouples’ Measurements
The inverse problem we want to solve is to estimate the heat flux $g$ capable
of reproducing the measured temperatures at the thermocouples’ points. This
can be stated as an optimal control problem with pointwise observations.
We introduce the following notation. Let
$\Psi:=\\{\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{M}\\}$ be a
collection of points in $\Omega$. We define the application
$\mathbf{x}_{i}\in\Psi\rightarrow\hat{T}(\mathbf{x}_{i})\in{\rm I\\!R}^{+}$,
$\hat{T}(\mathbf{x}_{i})$ being the experimentally measured temperature at
$\mathbf{x}_{i}\in\Psi$. Moreover, let $G_{ad}$ be a bounded set in
$L^{2}(\Gamma_{s_{in}})$.
Using a least-squares, deterministic approach, we state the inverse problem as
###### Problem 3.1.
(Inverse) Given $\\{\hat{T}(\mathbf{x}_{i})\\}_{i=1}^{M}$, find the heat flux
$g\in G_{ad}$ that minimizes the functional
$J_{1}:L^{2}(\Gamma_{s_{in}})\rightarrow{\rm I\\!R}^{+}$,
(3.1)
$J_{1}[g]:=\frac{1}{2}\sum^{M}_{i=1}[T[g](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i})]^{2},$
where $T[g](\mathbf{x}_{i})$ is the solution of Problem 2.1 at points
$\mathbf{x}_{i}$, for all $i=1,2,\dots,M$.
Notice that, thanks to Theorem 2.1, the state variable $T$ is continuous in
$\Omega$; hence, its value at the pointwise observations is well-defined.
We now introduce the sensitivity problem related to Problem 2.1. We derive it
by perturbing in Problem 2.1 the heat flux $g\rightarrow g+\delta g$, causing
a variation of the temperature field, $T[g]\rightarrow T[g]+\delta T[\delta
g]$. Subtracting Problem 2.1 from the obtained problem, we have
###### Problem 3.2.
(Sensitivity) Find $\delta T$ such that
(3.2) $-k\Delta\delta T[\delta g]=0,\quad\text{in }\Omega,$
with BCs
(3.3) $-k\nabla\delta T[\delta g]\cdot\mathbf{n}=\delta g$ on $\Gamma_{s_{in}}$,
(3.4) $-k\nabla\delta T[\delta g]\cdot\mathbf{n}=0$ on $\Gamma_{s_{ex}}$,
(3.5) $-k\nabla\delta T[\delta g]\cdot\mathbf{n}=h\,\delta T[\delta g]$ on $\Gamma_{sf}$.
Then, it is verified that $T[g+\delta g]=T[g]+\delta T[\delta g]$. Besides,
$\delta T$ is linear: $\delta T[\delta g_{1}+\delta g_{2}]=\delta T[\delta
g_{1}]+\delta T[\delta g_{2}]$.
We now formally derive the adjoint of Problem 3.1. First, we multiply (2.1) by
a Lagrange multiplier $\lambda$, integrate over $\Omega$, and add the result
to (3.1), obtaining
(3.6)
$\mathcal{L}[g,\lambda]=\frac{1}{2}\sum_{i=1}^{M}(T[g](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i}))^{2}+\int_{\Omega}k\Delta
T[g](\mathbf{x})\lambda(\mathbf{x})d\mathbf{x}.$
To compute the Fréchet derivative with respect to $g$ of
$\mathcal{L}[g,\lambda]$, denoted by $d\mathcal{L}_{g}[\delta g,\lambda]$, we
first write
(3.7) $\mathcal{L}[g+\delta
g,\lambda]-\mathcal{L}[g,\lambda]=\sum_{i=1}^{M}\delta T[\delta
g](\mathbf{x}_{i})(T[g](\mathbf{x}_{i})+\frac{1}{2}\delta T[\delta
g](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i}))+\int_{\Omega}k\lambda(\mathbf{x})\Delta\delta
T[\delta g](\mathbf{x})d\mathbf{x}.$
The Fréchet derivative of $\mathcal{L}$ is then obtained by neglecting the
second order terms
(3.8) $d\mathcal{L}[\delta g,\lambda]=\sum_{i=1}^{M}\delta T[\delta
g](\mathbf{x}_{i})(T[g](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i}))+\int_{\Omega}k\lambda(\mathbf{x})\Delta\delta
T[\delta g](\mathbf{x})d\mathbf{x}.$
Finally, integrating the second term of the previous equality twice by parts
and applying the BCs of Problem 3.2, we can write
(3.9) $\displaystyle d\mathcal{L}[\delta g,\lambda]=$
$\displaystyle\sum_{i=1}^{M}\delta T[\delta
g](\mathbf{x}_{i})(T[g](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i}))+\int_{\Omega}k\Delta\lambda(\mathbf{x})\delta
T[\delta
g](\mathbf{x})d\mathbf{x}-\int_{\Gamma_{s_{in}}\cup\Gamma_{s_{ex}}\cup\Gamma_{sf}}k\delta
T[\delta
g](\mathbf{x})\nabla\lambda(\mathbf{x})\cdot\mathbf{n}(\mathbf{x})d\Gamma$
$\displaystyle-\int_{\Gamma_{s_{in}}}\lambda(\mathbf{x})\delta
g(\mathbf{x})d\Gamma-\int_{\Gamma_{sf}}h\lambda(\mathbf{x})\delta T[\delta
g](\mathbf{x})d\Gamma.$
We can now state the adjoint problem as
###### Problem 3.3.
(Adjoint) Find $\lambda[g]$ such that
(3.10)
$k\Delta\lambda[g]+\sum_{i=1}^{M}(T[g](\mathbf{x})-\hat{T}(\mathbf{x}))\delta(\mathbf{x}-\mathbf{x}_{i})=0,\quad\text{in
}\Omega,$
with BCs
(3.11) $k\nabla\lambda[g]\cdot\mathbf{n}=0$ on $\Gamma_{s_{in}}\cup\Gamma_{s_{ex}}$,
(3.12) $k\nabla\lambda[g]\cdot\mathbf{n}+h\lambda[g]=0$ on $\Gamma_{sf}$,
$\delta(\mathbf{x}-\mathbf{x}_{i})$ being the Dirac delta centered at
$\mathbf{x}_{i}$.
We notice that if $\lambda[g]$ is a solution of Problem 3.3, then
$-\lambda[g]$ represents the Fréchet derivative of the Lagrange function with
respect to the inner product in $L^{2}(\Gamma_{s_{in}})$. Then, we have
(3.13) $d\mathcal{L}[\delta
g,\lambda[g]]=-\int_{\Gamma_{s_{in}}}\lambda[g](\mathbf{x})\delta
g(\mathbf{x})d\Gamma=\langle-\lambda[g],\delta
g\rangle_{L^{2}(\Gamma_{s_{in}})}.$
Considering that $\mathcal{L}[g,\lambda[g]]=J_{1}[g]$, the Gâteaux derivative
of the functional $J_{1}[g]$ is
(3.14) $J^{\prime}_{1}[g]=-\lambda[g]\text{ in }L^{2}(\Gamma_{s_{in}}).$
Different methods can be used for the solution of this minimization problem.
Here, we discuss its solution by Alifanov’s regularization method[21] and by
parameterization of the heat flux, $g$.
#### 3.2.1. Alifanov’s Regularization
Alifanov’s regularization method is a CGM applied to the adjoint
equation.[52]
We consider the following iterative procedure for the estimation of the
function $g$ that minimizes the functional (3.1). Given an initial estimate
$g^{0}\in L^{2}(\Gamma_{s_{in}})$, for $n\geq 0$ a new iterate is computed as:
(3.15) $g^{n+1}=g^{n}-\beta^{n}P^{n},\quad n=0,1,2,\dots\,$
where $n$ is the iteration counter and $\beta^{n}$ is the stepsize (also
called the correction factor) in the conjugate direction $P^{n}$, given by
(3.16) $P^{0}=J^{\prime}_{1}[g^{0}],\quad
P^{n+1}=J^{\prime}_{1}[g^{n+1}]+\gamma^{n+1}P^{n}\text{ for }n\geq 1,$
$\gamma^{n+1}$ being the conjugate coefficient, and $J^{\prime}_{1}[g]$ the
Gâteaux derivative of $J_{1}$ given by (3.14).
The stepsize $\beta^{n}$ in (3.15) is obtained by minimizing the functional
$J_{1}[g^{n}-\beta P^{n}]$ with respect to $\beta$. That is, $\beta^{n}$ is
the critical point of $J_{1}$ restricted to the line passing through $g^{n}$
in the direction defined by $P^{n}$, and it satisfies
(3.17)
$J_{1}[g^{n}-\beta^{n}P^{n}]=\min_{\beta}\left\\{\frac{1}{2}\sum_{i=1}^{M}\left[T[g^{n}-\beta
P^{n}](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i})\right]^{2}\right\\}.$
Recalling Problem 3.2,
(3.18) $J_{1}[g^{n}-\beta P^{n}]=\frac{1}{2}\sum_{i=1}^{M}\left[T[g^{n}-\beta
P^{n}](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i})\right]^{2}=\frac{1}{2}\sum_{i=1}^{M}\left[(T[g^{n}]-\beta\delta
T[P^{n}])(\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i})\right]^{2}.$
Differentiating with respect to $\beta$, we obtain the critical point equation
(3.19)
$\frac{dJ_{1}[g^{n}-\beta^{n}P^{n}]}{d\beta}=\sum_{i=1}^{M}[(T[g^{n}]-\beta^{n}\delta
T[P^{n}])(\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i})](-\delta
T[P^{n}](\mathbf{x}_{i}))=0.$
Finally, we have
(3.20)
$\beta^{n}=\frac{\sum_{i=1}^{M}\left[T[g^{n}](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i})\right]\delta
T[P^{n}](\mathbf{x}_{i})}{\sum_{i=1}^{M}(\delta
T[P^{n}](\mathbf{x}_{i}))^{2}}.$
As for the conjugate coefficient, $\gamma$, its value is zero at the first
iteration; at subsequent iterations, it can be calculated using the
Fletcher-Reeves expression as follows[53]
(3.21) $\gamma^{n}=\frac{\left\lVert
J^{\prime}_{1}[g^{n}]\right\rVert_{L^{2}(\Gamma_{s_{in}})}^{2}}{\left\lVert{J^{\prime}_{1}[g^{n-1}]}\right\rVert_{L^{2}(\Gamma_{s_{in}})}^{2}}.$
Notice that, to use this iterative procedure, we have to compute at each
iteration the Gâteaux derivative $J^{\prime}_{1}[g](\mathbf{x})$ which is
given by (3.14). Thus, we must solve the adjoint problem to compute it.
Alifanov’s regularization algorithm is summarized in Algorithm 1.
Algorithm 1 Alifanov’s regularization.
Set $g^{0}$ and $n=0$
while $n<n_{max}$ do
Compute $T[g^{n}]$ by solving Problem 2.1
Compute $J_{1}[g^{n}]$ by (3.1)
if $J_{1}[g^{n}]<J_{1_{tol}}$ then
Stop
end if
Compute $\lambda[g^{n}]$ by solving Problem 3.3
Compute $J^{\prime}_{1}[g^{n}]$ by (3.14)
if $n\geq 1$ then
Compute the conjugate coefficient, $\gamma^{n}$, by (3.21)
Compute the search direction, $P^{n}$, by (3.16)
else
$P^{0}=J^{\prime}_{1}[g^{0}]$
end if
Compute $\delta T[P^{n}]$ by solving Problem 3.2 with $\delta g=P^{n}$
Compute the stepsize in the search direction, $\beta^{n}$, by (3.20)
Update heat flux $g^{n}$ by (3.15)
$n=n+1$
end while
return $g^{n}$
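The mechanics of Algorithm 1 can be exercised on a toy problem in which the forward map $T[g]$ is a known matrix $S$, so that the sensitivity of Problem 3.2 is simply $\delta T[\delta g]=S\,\delta g$ and the gradient (3.14) reduces to $S^{T}(T[g]-\hat{T})$. The sketch below uses synthetic data in place of the paper's PDE solvers; all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 6                                # measurements = unknowns (toy sizes)
C = rng.normal(size=(M, M))
S = C @ C.T / M + np.eye(M)          # well-conditioned toy forward map
g_true = rng.normal(size=M)
T_hat = S @ g_true                   # synthetic "measurements"

def J(g):                            # functional, as in (3.1)
    r = S @ g - T_hat
    return 0.5 * r @ r

def grad_J(g):                       # plays the role of -lambda[g] in (3.14)
    return S.T @ (S @ g - T_hat)

g = np.zeros(M)                      # initial estimate g^0
Pdir = grad_J(g)                     # P^0 = J'[g^0]
prev_grad = Pdir
for n in range(50):
    dT = S @ Pdir                    # sensitivity delta T[P^n] (Problem 3.2)
    r = S @ g - T_hat
    beta = (r @ dT) / (dT @ dT)      # stepsize, as in (3.20)
    g = g - beta * Pdir              # update (3.15)
    if J(g) < 1e-22:
        break
    new_grad = grad_J(g)
    gamma = (new_grad @ new_grad) / (prev_grad @ prev_grad)  # (3.21)
    Pdir = new_grad + gamma * Pdir                           # (3.16)
    prev_grad = new_grad
```

On a quadratic functional with exact line search, this Fletcher-Reeves iteration coincides with linear CG and converges in at most as many iterations as unknowns; the ill-posedness of the actual inverse problem is what makes early stopping act as regularization.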
#### 3.2.2. Parameterization of the Boundary Conditions
In this section, we consider the parameterization of the boundary heat flux
$g$. In the literature, the parameterization of $g$ has already been
proposed.[25] However, we propose a novel approach both for the
parameterization and for the solution of the resulting inverse problem.
For the parameterization, we start by observing that we want to parameterize
an unknown function in $L^{2}(\Gamma_{s_{in}})$. We then notice that in thin
slab casting molds, the thermocouples are all located a few millimeters inward
from $\Gamma_{s_{in}}$. Together, they form a uniform 2D grid. Then, to
parameterize $g$, we use Radial Basis Functions (RBFs) centered at the
projections of the thermocouples’ points on $\Gamma_{s_{in}}$.[54] Due to this
choice we have as many basis functions as thermocouples. In particular, we use
Gaussian RBFs. However, the following discussion can be applied to other basis
functions.
The parameterization of the boundary heat flux reads (see Prando’s
appendix[55])
(3.22) $g(\mathbf{x})\approx\sum_{j=1}^{M}w_{j}\phi_{j}(\mathbf{x}),$
where the $\phi_{j}(\mathbf{x})$ are $M$ known basis functions, and the $w_{j}$
are the respective unknown weights. Notice that by doing the parameterization,
we change the problem from estimating a function in an infinite dimensional
space to estimating a vector $\mathbf{w}=(w_{1},w_{2},\dots,w_{M})^{T}$ in
${\rm I\\!R}^{M}$.
Let $\boldsymbol{\xi}_{i},1\leq i\leq M,$ be the projection of the measurement
point $\mathbf{x}_{i}\in\Psi$ on $\Gamma_{s_{in}}$ such that
(3.23)
$\boldsymbol{\xi}_{i}=\operatorname*{argmin}_{\boldsymbol{\xi}\in\Gamma_{s_{in}}}\left\lVert\mathbf{x}_{i}-\boldsymbol{\xi}\right\rVert_{2},\quad\mathbf{x}_{i}\in\Psi.$
By centering the RBFs in these points, their expressions are
(3.24)
$\phi_{j}(\mathbf{x})=e^{-\left(\eta\left\lVert\mathbf{x}-\boldsymbol{\xi}_{j}\right\rVert_{2}\right)^{2}},\quad\text{
for }j=1,2,\dots,M,$
where $\eta$ is the shape parameter of the Gaussian basis: decreasing its
value slows down the radial decay of the basis functions.
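For illustration, the basis matrix obtained by evaluating (3.24) at the centers themselves can be computed as follows; the grid of centers and the values of $\eta$ are arbitrary assumptions.

```python
import numpy as np

def gaussian_rbf_matrix(points, centers, eta):
    """phi_j(x) = exp(-(eta * ||x - xi_j||_2)^2), as in (3.24);
    rows index evaluation points, columns index centers."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(eta * d) ** 2)

# 4x4 grid of centers on a unit square patch of the boundary (assumed layout),
# i.e. as many basis functions as thermocouples (M = 16 here).
u = np.linspace(0.0, 1.0, 4)
centers = np.array([(a, b) for a in u for b in u])
Phi = gaussian_rbf_matrix(centers, centers, eta=2.0)
Phi_sharp = gaussian_rbf_matrix(centers, centers, eta=4.0)  # more localized basis
```

The matrix is symmetric with unit diagonal, and its off-diagonal entries shrink as the basis becomes more localized, which directly affects the conditioning of the systems derived below.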
Suppose we have the solutions of Problem 2.1, $T[\phi_{j}]$, for
$j=1,2,\dots,M$. Denote by $T_{ad}$ the solution of
###### Problem 3.4.
Find $T_{ad}$ such that
(3.25) $-k\Delta T_{ad}=0,\quad\text{in }\Omega,$
with BCs
(3.26) $-k\nabla T_{ad}\cdot\mathbf{n}=0$ on $\Gamma_{s_{in}}\cup\Gamma_{s_{ex}}$,
(3.27) $-k\nabla T_{ad}\cdot\mathbf{n}=h(T_{ad}+T_{f})$ on $\Gamma_{sf}$.
Then, we see that
(3.28) $T[\mathbf{w}]=\sum_{j=1}^{M}w_{j}(T[\phi_{j}]+T_{ad})-T_{ad},$
is a solution of Problem 2.1 since
(3.29)
$-k\Delta(\sum_{j=1}^{M}w_{j}(T[\phi_{j}]+T_{ad})-T_{ad})=0,\quad\text{in
}\Omega,$
and verifies the BCs associated to that problem
(3.30) $-k\nabla(\sum_{j=1}^{M}w_{j}(T[\phi_{j}]+T_{ad})-T_{ad})\cdot\mathbf{n}=\sum_{j=1}^{M}w_{j}\phi_{j}=g$ on $\Gamma_{s_{in}}$,
(3.31) $-k\nabla(\sum_{j=1}^{M}w_{j}(T[\phi_{j}]+T_{ad})-T_{ad})\cdot\mathbf{n}=0$ on $\Gamma_{s_{ex}}$,
(3.32) $-k\nabla(\sum_{j=1}^{M}w_{j}(T[\phi_{j}]+T_{ad})-T_{ad})\cdot\mathbf{n}=h\Bigg{(}\sum_{j=1}^{M}w_{j}(T[\phi_{j}]+T_{ad})-T_{ad}-T_{f}\Bigg{)}$ on $\Gamma_{sf}$.
Now, the objective of the inverse problem is to determine $\mathbf{w}$, which
identifies $g$ once the basis elements $\phi_{j}$, $j=1,2,\dots,M$, are fixed.
Notice that we consider all vectors as column vectors.
We rewrite the inverse Problem 3.1 as
###### Problem 3.5.
(Inverse) Given the temperature measurements $\hat{T}(\Psi)\in{\rm
I\\!R}^{M}$, find $\mathbf{w}\in{\rm I\\!R}^{M}$ which minimizes the
functional
(3.33)
$J_{1}[\mathbf{w}]=\frac{1}{2}\sum_{i=1}^{M}[T[\mathbf{w}](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i})]^{2},$
where, to simplify the notation and when there is no risk of confusion,
$T[\mathbf{w}]$ represents the solution $T[g]$ of Problem 2.1 with $g$ as in (3.22).
Given $\mathbf{w}$, we define the residual $\mathbf{R}[\mathbf{w}]\in{\rm
I\\!R}^{M}$ as the vector whose components are
(3.34)
$(\mathbf{R}[\mathbf{w}])_{i}:=(\mathbf{T}[\mathbf{w}])_{i}-(\hat{\mathbf{T}})_{i},$
where $\mathbf{T}[\mathbf{w}]$ and $\hat{\mathbf{T}}$ denote the vectors of
${\rm I\\!R}^{M}$ whose $i$-th components are
$(\mathbf{T}[\mathbf{w}])_{i}=T[\mathbf{w}](\mathbf{x}_{i})$ and
$(\hat{\mathbf{T}})_{i}=\hat{T}(\mathbf{x}_{i})$, respectively. So, we rewrite
the cost functional
(3.35)
$J_{1}[\mathbf{w}]=\frac{1}{2}\mathbf{R}[\mathbf{w}]^{T}\mathbf{R}[\mathbf{w}].$
To minimize the functional $J_{1}[\mathbf{w}]$, we solve the critical point
equation
(3.36) $\frac{\partial J_{1}[\mathbf{w}]}{\partial
w_{j}}=\sum_{i=1}^{M}R[\mathbf{w}]_{i}\frac{\partial(T[\mathbf{w}])_{i}}{\partial
w_{j}}=0,\text{ for }j=1,2,\dots,M.$
Thanks to (3.28), equation (3.36) can be written as
(3.37)
$\mathbf{R}[\mathbf{w}]^{T}(\mathbf{T}[\phi_{j}]+\mathbf{T}_{ad})=0,\text{ for
}j=1,2,\dots,M,$
$\mathbf{T}_{ad}$ being the vector of ${\rm I\\!R}^{M}$ whose $i$-th component
is $T_{ad}(\mathbf{x}_{i})$. Then, the vector associated to the solution of the
direct problem in the measurement points, $\mathbf{T}[\mathbf{w}]\in{\rm
I\\!R}^{M}$, is given by
(3.38)
$\mathbf{T}[\mathbf{w}]=\sum_{j=1}^{M}w_{j}\mathbf{T}[\phi_{j}]+(\sum_{j=1}^{M}w_{j}-1)\mathbf{T}_{ad}.$
We denote by $\Theta$ the matrix of ${\rm I\\!R}^{M\times M}$ such that
(3.39) $\Theta_{ij}:=T[\phi_{j}](\mathbf{x}_{i})+T_{ad}(\mathbf{x}_{i}).$
Therefore, equation (3.37) can now be written as
(3.40) $\Theta^{T}\mathbf{R}[\mathbf{w}]=\mathbf{0}.$
Recalling the definition of $\mathbf{R}$ and (3.38), we have
(3.41)
$\Theta^{T}\mathbf{R}[\mathbf{w}]=\Theta^{T}(\Theta\mathbf{w}-\mathbf{T}_{ad}-\hat{\mathbf{T}})=\mathbf{0}.$
The solution of the inverse problem is then obtained by solving the linear
system
(3.42)
$\Theta^{T}\Theta\mathbf{w}=\Theta^{T}(\hat{\mathbf{T}}+\mathbf{T}_{ad}).$
This is generally called the normal equation.
In this setting, (3.42) is a linear map from the observations to the heat flux
weights. Consequently, the existence and uniqueness of the solution of the
inverse problem depend on the invertibility of the matrix
$\Theta^{T}\Theta$.
We can easily see that the matrix $\Theta^{T}\Theta$ is symmetric and positive
semi-definite. In general, however, we cannot ensure that it is invertible. In
fact, the invertibility depends on the choice of the basis function, the
computational domain and the BCs.
In the numerical tests, we will see that this matrix tends to be ill-
conditioned. This reflects the ill-posedness of the inverse problem.
Different regularization techniques for linear systems are available to
overcome this issue.[56] Here, we consider the Truncated Singular Value
Decomposition (TSVD) regularization. We denote the Singular Value
Decomposition (SVD) of $\Theta^{T}\Theta$ by
(3.43) $\Theta^{T}\Theta=U\Sigma
V^{T}=\sum_{i=1}^{r}\mathbf{u}_{i}\sigma_{i}\mathbf{v}_{i}^{T},$
where $\sigma_{i}$ denotes the $i$-th singular value of $\Theta^{T}\Theta$
(numbered in decreasing order), $r$ denotes the number of nonzero singular
values, i.e. the rank of $\Theta^{T}\Theta$, $\mathbf{u}_{i}$ and
$\mathbf{v}_{i}$ are the i-th columns of the semi-unitary matrices $U$ and $V$
respectively (both belonging to ${\rm I\\!R}^{M\times r}$), and $\Sigma$ is
the square diagonal matrix of ${\rm I\\!R}^{r\times r}$ such that
$\Sigma_{ii}=\sigma_{i}$ and $\Sigma_{ij}=0$ if $i\neq j$. Then, the TSVD
regularized solution of (3.42) is
(3.44)
$\mathbf{w}=\sum_{i=1}^{\alpha_{TSVD}}\left(\frac{\mathbf{u}_{i}^{T}\Theta^{T}(\hat{\mathbf{T}}+\mathbf{T}_{ad})}{\sigma_{i}}\right)\mathbf{v}_{i}.$
This solution differs from the least square solution only in that the sum is
truncated at $i=\alpha_{TSVD}$ instead of $i=r$.
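A minimal sketch of a TSVD solve in the spirit of (3.44) on a generic system $A\mathbf{w}=\mathbf{b}$: keeping all $r$ singular values recovers the least-squares solution, while truncating at $\alpha_{TSVD}<r$ filters the components associated with small singular values. The test matrix here is an assumption, not the $\Theta^{T}\Theta$ of the paper.

```python
import numpy as np

def tsvd_solve(A, b, alpha):
    """Solve A w = b keeping only the alpha largest singular values,
    mirroring the truncated expansion (3.44)."""
    U, s, Vt = np.linalg.svd(A)
    w = np.zeros(A.shape[1])
    for i in range(alpha):
        w += (U[:, i] @ b / s[i]) * Vt[i, :]
    return w

# On a comfortably invertible system, keeping all singular values
# recovers the exact solution; truncation would instead filter the
# small-sigma (noise-amplifying) components.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5)) + 5 * np.eye(5)
w_exact = rng.normal(size=5)
b = A @ w_exact
w_full = tsvd_solve(A, b, alpha=5)
```

In practice, the truncation index plays the role of the regularization parameter and has to be tuned against the noise level of the measurements.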
We conclude our discussion of this method by noticing its most interesting
feature for our investigation: it is already suitable for real-time
computation, since it can be divided into an offline (expensive) phase and an
online (cheap) phase. In the offline phase, we compute $T[\phi_{j}]$, for
$j=1,2,\dots,M$, by solving Problem 2.1 with each basis function as boundary
heat flux, and $T_{ad}$, by solving Problem 3.4. Then, in the online phase, we
input the measurements $\hat{\mathbf{T}}$ and solve the linear system (3.42).
With the choice made when selecting the basis functions, the dimension of the
linear system equals the number of thermocouples. Its solution can therefore
be obtained in real time even with limited computational power, which makes
this method very promising for our real-time application.
### 3.3. Inverse Problem with Thermocouples and Total Heat Measurement
In CC molds, total heat flux measurements may be available in addition to the
thermocouples’ pointwise measurements. Assuming all boundaries but
$\Gamma_{s_{in}}$ and $\Gamma_{sf}$ to be adiabatic, all the heat entering the
mold is extracted by the cooling water at $\Gamma_{sf}$. Further, assuming the
water heat capacity, $C_{p_{f}}$, to be constant and the water mass flow rate
$\dot{m}$ to be known, the total heat flux is given by
(3.45)
$\hat{G}=\int_{\Gamma_{s_{in}}}gd\Gamma=\dot{m}C_{p_{f}}(T_{f_{out}}-T_{f_{in}}),$
where $T_{f_{in}}$ and $T_{f_{out}}$ are the cooling water temperatures at the
inlet and outlet of the cooling system, respectively. Then, the total heat
flux measurement is obtained by (3.45) after experimentally measuring
$T_{f_{in}}$, $T_{f_{out}}$ and the water mass flow rate $\dot{m}$.
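For concreteness, (3.45) is a direct evaluation of an energy balance on the cooling water; the plant values below are made up for illustration only.

```python
# Illustrative evaluation of (3.45); all values are assumed, not plant data.
m_dot = 50.0                       # water mass flow rate [kg/s] (assumed)
cp_f = 4186.0                      # water heat capacity [J/(kg K)] (assumed)
T_f_in, T_f_out = 305.0, 311.0     # inlet/outlet water temperatures [K] (assumed)

G_hat = m_dot * cp_f * (T_f_out - T_f_in)   # total heat flux [W]
```

With these numbers the measured total heat extraction is on the order of a megawatt, which gives a single scalar constraint on the integral of $g$ over $\Gamma_{s_{in}}$.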
In this section, we discuss the formulation and solution of the inverse
problem of estimating the boundary heat flux, $g$, by considering both the
thermocouples’ and total heat flux measurements.
Using again a least-squares, deterministic approach, we state the inverse
problem as
###### Problem 3.6.
(Inverse) Given $\\{\hat{T}(\mathbf{x}_{i})\\}_{i=1}^{M}$ and $\hat{G}$, find
the heat flux $g\in G_{ad}$ that minimizes the functional
$J_{2}:L^{2}(\Gamma_{s_{in}})\rightarrow{\rm I\\!R}^{+}$,
(3.46)
$J_{2}[g]:=\frac{1}{2}\sum^{M}_{i=1}[T[g](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i})]^{2}+\frac{1}{2}p_{g}\bigg{(}\int_{\Gamma_{s_{in}}}gd\Gamma-\hat{G}\bigg{)}^{2},$
where $T[g](\mathbf{x}_{i})$ is the solution of Problem 2.1 at points
$\mathbf{x}_{i}$, for all $i=1,2,\dots,M$, and $p_{g}[\frac{K^{2}}{W^{2}}]$ is
a weight applied to the total heat measurement.
Notice that, thanks to Theorem 2.1, the state variable $T$ is continuous in
$\Omega_{s}$; hence, its value at the pointwise observations is well-defined.
To derive the adjoint of Problem 3.6, we redo computations (3.6)-(3.9). It
turns out that the adjoint of Problem 3.6 is again Problem 3.3. However, the
Fréchet derivative with respect to the inner product in
$L^{2}(\Gamma_{s_{in}})$ of $J_{2}$ is
(3.47) $d\mathcal{L}_{g}[\delta
g,\lambda]=-\int_{\Gamma_{s_{in}}}\bigg{[}\lambda[g](\mathbf{x})-p_{g}\bigg{(}\int_{\Gamma_{s_{in}}}gd\Gamma-\hat{G}\bigg{)}\bigg{]}\delta
g(\mathbf{x})d\Gamma=\langle-\lambda[g]+p_{g}\bigg{(}\int_{\Gamma_{s_{in}}}gd\Gamma-\hat{G}\bigg{)},\delta
g\rangle_{L^{2}(\Gamma_{s_{in}})}.$
Considering that $\mathcal{L}[g,\lambda[g]]=J_{2}[g]$, the Gâteaux derivative
of the functional $J_{2}[g]$ is
(3.48)
$J^{\prime}_{2}[g]=-\lambda[g]+p_{g}\bigg{(}\int_{\Gamma_{s_{in}}}gd\Gamma-\hat{G}\bigg{)}\text{
in }L^{2}(\Gamma_{s_{in}}).$
Different methods can be used for the solution of this minimization problem.
As for the minimization of $J_{1}$, we discuss its solution by Alifanov’s
regularization method and by parameterization of the heat flux, $g$.
#### 3.3.1. Alifanov’s Regularization
In this section, we extend the discussion of Section 3.2.1 on Alifanov’s
regularization to the inverse Problem 3.6.
We consider the following iterative procedure for the estimation of the
function $g$ that minimizes functional (3.46). Given an initial estimate
$g^{0}\in L^{2}(\Gamma_{s_{in}})$, for $n\geq 0$ a new iterate is computed by
(3.15), with the conjugate direction given by
(3.49) $P^{0}=J^{\prime}_{2}[g^{0}],\quad
P^{n+1}=J^{\prime}_{2}[g^{n+1}]+\gamma^{n+1}P^{n}\text{ for }n\geq 1,$
and the search step computed by
(3.50)
$\beta^{n}=\frac{\sum_{i=1}^{M}\left[T[g^{n}](\mathbf{x}_{i})-\hat{T}(\mathbf{x}_{i})\right]\delta
T[P^{n}](\mathbf{x}_{i})+p_{g}\left(\int_{\Gamma_{s_{in}}}P^{n}d\Gamma\right)\left(\int_{\Gamma_{s_{in}}}g^{n}d\Gamma-\hat{G}\right)}{\sum_{i=1}^{M}(\delta
T[P^{n}](\mathbf{x}_{i}))^{2}+p_{g}\left(\int_{\Gamma_{s_{in}}}P^{n}d\Gamma\right)^{2}}.$
Alifanov’s regularization algorithm is then as in Algorithm 1 where the
functional $J_{1}$ is substituted by $J_{2}$ and the search step and conjugate
direction are computed by (3.50) and (3.49), respectively.
#### 3.3.2. Parameterization of the Boundary Conditions
In this section, we apply the discussion made in Section 3.2.2 to Problem 3.6.
Considering the parameterization (3.22), due to (3.34), we rewrite (3.46) as
(3.51)
$J_{2}[\mathbf{w}]=\frac{1}{2}\mathbf{R}[\mathbf{w}]^{T}\mathbf{R}[\mathbf{w}]+\frac{1}{2}p_{g}\bigg{(}\int_{\Gamma_{s_{in}}}\sum_{j=1}^{M}w_{j}\phi_{j}(\mathbf{x})d\Gamma-\hat{G}\bigg{)}^{2}=\frac{1}{2}\mathbf{R}[\mathbf{w}]^{T}\mathbf{R}[\mathbf{w}]+\frac{1}{2}p_{g}\bigg{(}\sum_{j=1}^{M}w_{j}\int_{\Gamma_{s_{in}}}\phi_{j}(\mathbf{x})d\Gamma-\hat{G}\bigg{)}^{2}.$
Defining the vector $\boldsymbol{\phi}\in{\rm I\\!R}^{M}$ such that
(3.52)
$(\boldsymbol{\phi})_{i}:=\int_{\Gamma_{s_{in}}}\phi_{i}(\mathbf{x})d\Gamma,$
we write
(3.53)
$J_{2}[\mathbf{w}]=\frac{1}{2}\mathbf{R}[\mathbf{w}]^{T}\mathbf{R}[\mathbf{w}]+\frac{1}{2}p_{g}(\mathbf{w}^{T}\boldsymbol{\phi}-\hat{G})^{2}.$
As in Section 3.2.2, we now write the critical point equation for
$J_{2}[\mathbf{w}]$
(3.54) $\frac{\partial J_{2}[\mathbf{w}]}{\partial
w_{j}}=\sum^{M}_{i=1}(R[\mathbf{w}])_{i}\frac{\partial(T[\mathbf{w}])_{i}}{\partial
w_{j}}+p_{g}(\mathbf{w}^{T}\boldsymbol{\phi}-\hat{G})(\boldsymbol{\phi})_{j}=0,\quad\text{
for }j=1,2,\dots,M.$
Then, introducing the matrix $\Phi\in{\rm I\\!R}^{M\times M}$ such that
(3.55) $\Phi_{ij}=(\boldsymbol{\phi})_{i}(\boldsymbol{\phi})_{j},$
we can write the critical point equation as
(3.56)
$(\Theta^{T}\Theta+p_{g}\Phi)\mathbf{w}=p_{g}\hat{G}\boldsymbol{\phi}+\Theta^{T}(\mathbf{T}_{ad}+\hat{\mathbf{T}}).$
By solving the linear system (3.56) we obtain the weights $\mathbf{w}$ of the
parameterization. Then, by (3.22) we compute the estimated heat flux $g$. Also
in this setting, the discussion at the end of Section 3.2.2 on the
regularization of the linear system and the offline-online decomposition
holds.
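The augmented normal equations (3.56) can be assembled and solved directly. In the sketch below, $\Theta$, $\boldsymbol{\phi}$ and the data are synthetic stand-ins (the right-hand-side vector plays the role of $\mathbf{T}_{ad}+\hat{\mathbf{T}}$), and the penalty weight $p_{g}$ is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 8
Theta = rng.normal(size=(M, M)) + M * np.eye(M)  # synthetic, well-conditioned Theta
phi = np.abs(rng.normal(size=M))                 # basis integrals (phi)_j, eq. (3.52)
w_true = rng.normal(size=M)
T_data = Theta @ w_true            # plays the role of T_ad + T_hat
G_hat = phi @ w_true               # consistent total-heat "measurement"
p_g = 1e-2                         # weight on the total-heat term (assumed)

# (3.56): (Theta^T Theta + p_g Phi) w = p_g G_hat phi + Theta^T (T_ad + T_hat)
Phi = np.outer(phi, phi)           # rank-one matrix of (3.55)
lhs = Theta.T @ Theta + p_g * Phi
rhs = p_g * G_hat * phi + Theta.T @ T_data
w = np.linalg.solve(lhs, rhs)
```

Because the synthetic data are consistent, the penalized system returns the weights that reproduce both the pointwise and the total-heat data exactly; with noisy data, $p_{g}$ trades the two terms off against each other.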
## 4\. Analytical Benchmark
In this section, we propose an academic benchmark case. It is a steady-state
heat conduction problem in a homogeneous isotropic solid occupying a
rectangular parallelepiped domain. By carefully selecting the BCs on the faces
of the parallelepiped, we are able to compute the analytical solution of the
heat conduction problem. Then, we use this academic test to validate the
numerical solution of the direct problem. Moreover, by arbitrarily selecting
some temperature measurements points, we test the different inverse problem
solution methodologies discussed in Section 3.
Let the domain be $\Omega=(0,L)\times(0,W)\times(0,H)$ as in Figure 3, with
positive real constants $L,W$ and $H$. Let $\Gamma$ be the boundary of $\Omega$.
Then, the different boundaries of the domain to be considered are
(4.1)
$\Gamma_{sf}:=\\{\mathbf{x}\in\Gamma|\ \mathbf{x}=(x,W,z)\\}$, $\Gamma_{s_{in}}:=\\{\mathbf{x}\in\Gamma|\ \mathbf{x}=(x,0,z)\\}$,
$\Gamma_{I}:=\\{\mathbf{x}\in\Gamma|\ \mathbf{x}=(x,y,H)\\}$, $\Gamma_{III}:=\\{\mathbf{x}\in\Gamma|\ \mathbf{x}=(x,y,0)\\}$,
$\Gamma_{II}:=\\{\mathbf{x}\in\Gamma|\ \mathbf{x}=(L,y,z)\\}$, $\Gamma_{IV}:=\\{\mathbf{x}\in\Gamma|\ \mathbf{x}=(0,y,z)\\}$.
Figure 3. Schematic of the solid rectangular parallelepiped domain.
To have an analytical solution, $T_{an}$, in $\Omega$, we consider a slight
modification of Problem 2.1 that does not change its essential aspects.
###### Problem 4.1.
Find $T$ such that
(4.2) $-k\Delta T=0,\text{ in }\Omega,$
with BCs
(4.3) $-k\nabla T\cdot\mathbf{n}=g_{an}$ on $\Gamma_{s_{in}}$,
(4.4) $-k\nabla T\cdot\mathbf{n}=q_{L}$ on $\Gamma_{L}$, $L\in\\{I,II,III,IV\\}$,
(4.5) $-k\nabla T\cdot\mathbf{n}=h(T-T_{f})$ on $\Gamma_{sf}$.
Let $a$, $b$, $c$ be real constants. To have an analytical solution in
$\Omega$, we consider the following data as BCs for Problem 4.1,
(4.6)
$q_{I}(\mathbf{x})=2kaH$, $q_{III}(\mathbf{x})=0$,
$q_{II}(\mathbf{x})=-k(2aL+by)$, $q_{IV}(\mathbf{x})=kby$,
$T_{f}(\mathbf{x})=\frac{k(bx+c)}{h}+ax^{2}+cy-az^{2}+bxW+c,$
with
(4.7) $g_{an}(\mathbf{x})=k(bx+c),$
$k$ being the considered thermal conductivity, which is assumed constant. Then,
(4.8) $T_{an}(\mathbf{x})=ax^{2}+bxy+cy-az^{2}+c,$
is the solution to Problem 4.1.
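As a sanity check, (4.8) can be verified numerically against the governing equation and the boundary flux (4.7), using the coefficients of Table 1; the sample points below are arbitrary.

```python
k = 3.0
a, b, c = 5.0, 10.0, 15.0          # coefficients, Table 1 values

def T_an(x, y, z):
    """Analytical solution (4.8)."""
    return a * x**2 + b * x * y + c * y - a * z**2 + c

def g_an(x):
    """Boundary heat flux (4.7)."""
    return k * (b * x + c)

def laplacian_T(x, y, z, eps=1e-3):
    """Central second differences; exact for quadratics up to roundoff."""
    lap = 0.0
    for dx, dy, dz in ((eps, 0, 0), (0, eps, 0), (0, 0, eps)):
        lap += (T_an(x + dx, y + dy, z + dz) - 2 * T_an(x, y, z)
                + T_an(x - dx, y - dy, z - dz)) / eps**2
    return lap

def flux_at_inlet(x, z, eps=1e-6):
    """-k grad(T) . n on Gamma_s_in (y = 0, outward normal (0,-1,0))."""
    dTdy = (T_an(x, eps, z) - T_an(x, -eps, z)) / (2 * eps)
    return k * dTdy
```

The Laplacian vanishes because the $ax^{2}$ and $-az^{2}$ contributions cancel, and the flux at $y=0$ reduces to $k(bx+c)=g_{an}$, confirming the choice of BC data (4.6).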
### 4.1. Direct Problem
We now discuss the numerical solution of Problem 4.1. Due to its simplicity,
the domain $\Omega$ is discretized by uniform, structured, orthogonal,
hexahedral meshes. To study the convergence of the numerical solution to the
analytical one, we consider grids with different degrees of refinement. In all
tests, we use the same number of edges along the three axes.
With respect to the finite volume scheme used, since we have a structured
orthogonal grid, no correction is needed when computing the gradient normal to
the cell faces. Moreover, we use linear interpolation to interpolate the
values from cell centers to face centers. The resulting scheme is second order
accurate.
From the discretization of Problem 4.1, we obtain a linear system. We solve it
by using the preconditioned conjugate gradient solver with diagonal incomplete
Cholesky preconditioning. The tolerance used for the linear system solver is
$10^{-12}$. All the computations are performed in ITHACA-FV[57, 58] which is a
C++ library based on OpenFOAM[59] developed at the SISSA Mathlab.
Finally, Table 1 summarizes the parameters used for the computations.
Table 1. Parameters used for the simulation of the analytical benchmark.
Parameter | Value
---|---
Thermal conductivity, $k$ | $3.0~{}W/(mK)$
Heat transfer coefficient, $h$ | $5.0~{}W/(m^{2}K)$
$a$ | $5~{}K/m^{2}$
$b$ | $10~{}K/m^{2}$
$c$ | $15~{}K/m^{2}$
$L$ | $1~{}m$
$W$ | $1~{}m$
$H$ | $1~{}m$
To evaluate the accuracy of the numerical solutions, we show in Figure 4 the
decay of the absolute and relative difference in the $L^{2}$-norm between the
computed and true temperature field. The test confirms the second order
accuracy of the used finite volume scheme. We conclude that Problem 4.1 is
numerically well solved.
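The second-order accuracy claimed above can be checked by computing the observed convergence order between successive refinements, $p=\log(e_{i}/e_{i+1})/\log(h_{i}/h_{i+1})$. The sketch below uses illustrative error values, not the data of Figure 4:

```python
import numpy as np

# Hypothetical L2-norm errors on successively refined uniform grids (h halved
# each time); the values are illustrative, not the data of Figure 4
h   = np.array([1/10, 1/20, 1/40, 1/80])
err = np.array([2.1e-2, 5.4e-3, 1.35e-3, 3.4e-4])

# Observed order between consecutive refinements
p = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print(p)   # close to 2 for a second-order scheme
```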
Figure 4. Decay of the absolute and relative difference in $L^{2}$-norm
between the computed and true temperature field with the mesh refinement.
### 4.2. Inverse Problem with Temperature Measurements
To numerically analyze the performances of the inverse solvers, we design the
following test: we select a surface inside $\Omega_{s}$ which is parallel to
$\Gamma_{s_{in}}$, and on this surface we locate $M$ measurement points which
correspond to the locations of $M$ virtual thermocouples. The temperature at
these points is given by
$\hat{T}(\mathbf{x}_{i})=T_{an}(\mathbf{x}_{i}),\ i=1,\dots,M$, where $T_{an}$ is
the solution of Problem 4.1, given by (4.8). Using these temperatures as
measurements, we apply the methods described in Section 3 to solve the inverse
Problem 3.1, considering $T[g](\mathbf{x}_{i})$ as the solution of Problem 4.1
replacing $g_{an}$ by $g$.
The virtual thermocouples are located in the plane $y=0.2~{}m$. Their $(x,z)$
coordinates are shown in Figure 5. Then, we have 16 thermocouples located on
the nodes of a uniform lattice at the plane $y=0.2~{}m$, unless otherwise
stated.
Figure 5. Positions of the virtual thermocouples for the analytical test case.
The parameters used for the computations are summarized in Table 2. In this
section, we test the inverse methodologies of Section 3 analyzing the effect
of different parameters such as grid refinement, CG stopping criterion, RBF
shape parameter, measurement noise, etc. To analyze the numerical results, we
will often use the following error norms
(4.9)
$\left\lVert\varepsilon\right\rVert_{L^{2}(\Gamma_{s_{in}})}=\left\lVert\frac{g-g_{an}}{g_{an}}\right\rVert_{L^{2}(\Gamma_{s_{in}})},\quad\left\lVert\varepsilon\right\rVert_{L^{\infty}(\Gamma_{s_{in}})}=\left\lVert\frac{g-g_{an}}{g_{an}}\right\rVert_{L^{\infty}(\Gamma_{s_{in}})}.\quad$
Notice that from (4.7), $g_{an}>0$.
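A minimal sketch of the discrete counterparts of the norms in (4.9), assuming uniform face weighting on $\Gamma_{s_{in}}$; the sample arrays are illustrative:

```python
import numpy as np

def relative_error_norms(g, g_an):
    """Discrete counterparts of (4.9): L2 and Linf norms of (g - g_an)/g_an,
    assuming g_an > 0 everywhere (true for (4.7)) and uniform weighting."""
    eps = (g - g_an) / g_an
    return np.sqrt(np.mean(eps**2)), np.max(np.abs(eps))

# Illustrative samples of the estimated and analytical heat flux
g_an = np.array([1.0, 2.0, 4.0])
g    = np.array([1.1, 1.9, 4.2])
print(relative_error_norms(g, g_an))
```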
Table 2. Parameters used in testing the inverse problem solvers for the analytical benchmark case.
Parameter | Value
---|---
N. of thermocouples | 16
Thermocouples plane | $y=0.2~{}m$
$g^{0}$ | $0~{}W/m^{2}$
RBF kernel | Gaussian
N. of RBF | 16
Shape parameter, $\eta$ | 0.7
#### 4.2.1. Alifanov’s Regularization
In this section, we analyze the effect that the grid refinement and the
stopping criterion have on the results obtained by Alifanov's regularization.
We begin by comparing in Figure 6 (a) and (b) the behavior of the functional
$J_{1}$ together with the $L^{2}$- and $L^{\infty}$-norms of the relative
error defined in (4.9) as functions of the number of iterations of the
algorithm. Both the cost functional and the relative error decay sharply in
the first 10 iterations. Then, the convergence rate drops dramatically,
reaching a plateau after 60 iterations.
(a) Cost functional, $J_{1}$.
(b) Relative error.
Figure 6. Behavior of the cost functional $J_{1}$ (A) and of the heat flux
relative error $L^{2}$- and $L^{\infty}$-norms (B) with respect to the
iterations of Alifanov's regularization for the analytical benchmark case.
To gain qualitative insight into the results, we compare the computed heat
flux at different iterations in Figure 8. Within a few iterations, the
estimated heat flux is already in good agreement with the analytical BC; the
later iterations only slightly improve the estimate.
(a) Iteration 10
(b) Iteration 70
(c) $g_{an}$
Figure 8. The estimated heat flux by Alifanov’s regularization at different
iterations (A,B) is compared to the analytical value (C) in the analytical
benchmark case.
We now investigate how the grid refinement influences the results. Figure 9
(a) shows the behavior of the relative error of the estimated heat flux with
the grid refinement. This test is performed with the stopping criterion
$J_{1}<J_{1_{tol}}=1e-4K^{2}$. The error in general decreases by increasing
the mesh refinement. However, the decrease is not monotonic with a small
increase for the $40^{3}$ elements grid. To further investigate the
convergence of the method, we tested the effect of increasing the number of
thermocouples, keeping the same number of thermocouples along the x- and
y-axis equal. Figure 9 (b) shows the obtained results for the $40^{3}$
elements grid. Notice that the error converges non-monotonically.
(a) Mesh refinement.
(b) Measurements refinement.
Figure 9. Behavior of the relative error norms (4.9) with the grid (A) and
thermocouples number (B) refinement in the analytical benchmark case.
#### 4.2.2. Parameterization of the Boundary Condition
We now test the performances of the parameterization method described in
Section 3.2.2. In particular, we consider the effects that the selection of
the basis functions have on the results and the conditioning of the linear
system (3.42). Moreover, also in this case, we test the effect of the mesh
refinement on the estimated heat flux.
As already mentioned, we consider Gaussian RBFs as basis functions for the
parameterization of the boundary heat flux. Recalling (3.24), the basis
functions are given by
$\phi_{j}(\mathbf{x})=e^{-\left(\eta\left\lVert\mathbf{x}-\boldsymbol{\xi}_{j}\right\rVert_{2}\right)^{2}},\quad\text{
for }j=1,2,\dots,M,$
where we locate the centers $\boldsymbol{\xi}_{j}$ at the projection of the
virtual measurement points on the boundary $\Gamma_{s_{in}}$.
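A possible construction of the resulting design matrix; the toy 2D points and the choice of coincident centers are illustrative assumptions:

```python
import numpy as np

def rbf_matrix(points, centers, eta):
    """Gaussian RBF design matrix, Phi[i, j] = exp(-(eta * ||x_i - xi_j||_2)^2),
    following (3.24)."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(eta * d) ** 2)

# Toy 2D example: evaluation points coinciding with the centers give a unit diagonal
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Phi = rbf_matrix(pts, pts, eta=0.7)
print(np.diag(Phi))   # -> [1. 1. 1. 1.]
```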
Both the choice of the basis functions (3.24) and the positions of their
centers are arbitrary. However, they are suggested by the physics of the
problem. The Gaussian RBFs are selected because their radial decay reduces
the correlation between bases that are far apart. For a similar reason, the
RBFs are centered at the projections of the measurements, establishing a
relationship between bases and measurements. This reasoning applies well to CC
molds because the thermocouples are located on a surface parallel and close to
the boundary where we want to estimate the heat flux. In a more general
scenario, these choices lose their motivation.
To completely define the basis functions, we still must tune the shape
parameter $\eta$. Thus, the first analysis we perform concerns the influence
of $\eta$ on the invertibility of system (3.42) and on the boundary heat flux
estimation. This parameter controls the decay of the RBF: for larger (smaller)
values of $\eta$ the decay is faster (slower). Figure 10 (a) shows the decay
of the normalized singular values of $\Theta^{T}\Theta$ for different $\eta$.
The singular values are normalized by dividing them all by the first one. In
general, larger values of the shape parameter correspond to a slower decay of
the singular values.
(a) Decay of the singular values.
(b) Relative error and condition number.
Figure 10. Effect of the RBFs shape parameter, $\eta$, on (A) the normalized
singular values of the matrix $\Theta^{T}\Theta$ and on (B) the $L^{2}$- and
$L^{\infty}$-norms of the relative error and on the condition number of the
linear system.
Figure 10 (b) shows the condition number of the linear system (3.42). The
condition number is computed as the ratio between the largest and smallest
singular values
(4.10) $\kappa_{\Theta^{T}\Theta}=\frac{\sigma_{max}}{\sigma_{min}}.$
The figure shows it together with the $L^{2}$- and $L^{\infty}$-norms of the
relative error (4.9). The method used for the solution of (3.42) is standard
LU factorization with full pivoting. In the figure, we see that the best
results are obtained for $\eta=0.1$ (see Figure 12). Interestingly, looking at
the behavior of the condition number, we can conclude that the quality of the
results is not correlated with the conditioning of (3.42).
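The normalized singular values and the condition number (4.10) can be computed as in the sketch below, where a random matrix stands in for $\Theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
Theta = rng.standard_normal((16, 16))   # stand-in for the RBF matrix Theta
A = Theta.T @ Theta

# Singular values of the normal matrix, normalized by the first (largest) one
sigma = np.linalg.svd(A, compute_uv=False)   # returned in descending order
sigma_normalized = sigma / sigma[0]

# Condition number (4.10): ratio of the largest to the smallest singular value
kappa = sigma[0] / sigma[-1]
print(kappa)
```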
(a) $g_{an}$
(b) Estimated
(c) Relative Error
Figure 12. Comparison of the analytical (A) and estimated (B) boundary heat
flux for the analytical benchmark case. This result is obtained by using the
parameterization method with RBF shape parameter $\eta=0.1$.
As for Alifanov’s regularization, we test the effects of grid refinement on
the estimated heat flux. Figure 13 (a) shows that the relative error does not
decrease with grid refinement. In fact, the error oscillates between two very
close values. We obtain this result because the parameterization of the
boundary heat flux is the same for all the grids, and we already reach the
best description of the analytical heat flux that the RBF parameterization can
provide, as suggested by Figure 13 (b).
(a) Grid refinement.
(b) Measurements refinement.
Figure 13. Behavior of the relative error norms (4.9) with the grid (A) and
measurements (B) refinement using the parameterization method for the
analytical benchmark case. In the figure (B), the blue results are obtained by
solving the inverse problem and the black ones are the best possible
approximation of the true heat flux in the parameterized space (remember that
by increasing the number of thermocouples we increase the number of basis of
the heat flux parameterization).
#### 4.2.3. Noise in the Measurements
In all previous tests, we considered the measurements to be free of noise.
This is not the case in practice: thermocouples' measurements are notoriously
noisy. Thus, in this section we analyze the effects that measurement noise
has on the estimated heat flux, $g$. From the industrial point of view, this
analysis is of particular interest for our application.
We perform this analysis by adding to the measurements vector the Gaussian
random noise $\mathbf{w}_{n}$
(4.11)
$\hat{\mathbf{T}}_{\mathbf{w}}=\hat{\mathbf{T}}+\mathbf{w}_{n},\quad\mathbf{w}_{n}=\mathcal{N}(\boldsymbol{\mu},\Sigma),$
where, $\boldsymbol{\mu}\in{\rm I\\!R}^{M}$ is the mean vector and
$\Sigma\in{\rm I\\!R}^{M\times M}$ is the covariance matrix. In particular, we
choose $\mathbf{w}_{n}$ to be an independent and identically distributed (IID)
random variable with zero mean, i.e.
$\mathbf{w}_{n}=\mathcal{N}(\mathbf{0},\omega^{2}I)$.
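A minimal sketch of the noise model (4.11); the baseline measurement values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)
M = 16
T_hat = np.full(M, 300.0)     # noise-free virtual measurements (placeholder values)

# IID zero-mean Gaussian noise, w_n ~ N(0, omega^2 I), as in (4.11)
omega = 0.05
w_n = rng.normal(loc=0.0, scale=omega, size=M)
T_noisy = T_hat + w_n
print(T_noisy.std())          # of the order of omega
```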
To study the effect of noise, we perform several solutions of the inverse
problem using $\hat{\mathbf{T}}_{\mathbf{w}}$ as thermocouples’ measurements.
For each test, we compute 200 samples. All these computations are done on the
$40^{3}$ elements grid. Then, we analyze the statistical and qualitative
properties of the obtained results. In our first test, we analyze the behavior
of the relative error (4.9) for different values of the noise standard
deviation $\omega$.
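The sampling study can be sketched as follows; the inverse solver is replaced by a dummy placeholder here, so only the loop and the quantile computation are meaningful:

```python
import numpy as np

rng = np.random.default_rng(1)

def solve_inverse(T_noisy):
    """Placeholder for an inverse solver; it returns a dummy scalar 'error'
    so the sampling loop below is runnable. The real study solves the
    inverse problem for each noisy sample."""
    return float(np.abs(T_noisy).mean())

omega, M, n_samples = 0.05, 16, 200
T_hat = np.zeros(M)                      # noise-free measurements (placeholder)

# 200 noisy realizations, as in the text, each processed by the solver
errors = np.array([solve_inverse(T_hat + rng.normal(0.0, omega, M))
                   for _ in range(n_samples)])

# Mean and 90% quantile bars, as plotted in the figures of this section
print(errors.mean(), np.quantile(errors, [0.05, 0.95]))
```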
Using Alifanov’s regularization for the minimization of $J_{1}$, we must adopt
a stopping criterion that regularizes the solution; indeed, the regularization
parameter is the iteration counter $i$. Here, we use the Discrepancy Principle
(DP) as the stopping criterion [56]. Thus, the iterations are stopped when
(4.12) $J_{1}[g^{i+1}]<\left(\frac{\omega^{2}M}{2}\right)^{2},$
where $M$ is the number of thermocouples.
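The stopping test (4.12) is a one-line check; the values of $\omega$ and $M$ below are illustrative:

```python
def discrepancy_reached(J1, omega, M):
    """Discrepancy-principle stopping test (4.12): stop the iterations once
    the cost functional drops below (omega^2 * M / 2)^2."""
    return J1 < (omega**2 * M / 2.0)**2

# With omega = 0.1 and M = 16 the threshold is (0.1**2 * 16 / 2)**2 = 0.0064
print(discrepancy_reached(6e-3, omega=0.1, M=16))   # True
print(discrepancy_reached(7e-3, omega=0.1, M=16))   # False
```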
Figure 15 illustrates the results of this first test. We notice that
Alifanov’s regularization is able to filter the noise only for $\omega<0.02$.
On the other hand, we see that for the parameterization method with LU
factorization the results are spread around the mean value. This suggests that
the noise propagates from the measurements into the solution. As already
mentioned, we require some regularization technique when solving (3.42).
(a) Alifanov’s regularization
(b) Parameterization method
Figure 15. Behavior of the relative error with respect to the standard
deviation of the noise in the measurements for the Alifanov’s regularization
(A) and parameterization method (B) in the analytical benchmark case (90%
quantile bars shown).
As described in Section 3.2.2, we use TSVD regularization in the
parameterization method. We opt for this technique because it is effective
when there are jumps in the singular value decay (see Figure 10). As already
mentioned, when using regularization techniques, attention must be paid to the
selection of the regularization parameter. In our case, the regularization
parameter $\alpha_{TSVD}$ is the number of singular values kept in the
truncation. Different methodologies for its selection are available in the
literature, e.g. the unbiased predictive risk estimator, DP, L-curve, U-curve,
and generalized cross validation [56]. However, to show the dependency of the
results on the regularization parameter, we performed numerical tests.
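A minimal TSVD solver for a generic linear system, with $\alpha_{TSVD}$ as the number of retained singular values; the toy system is illustrative:

```python
import numpy as np

def tsvd_solve(A, b, alpha):
    """Solve A x = b keeping only the alpha largest singular values (TSVD);
    alpha plays the role of the regularization parameter alpha_TSVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = min(alpha, len(s))
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Ill-conditioned toy system: TSVD with alpha=1 keeps only the dominant mode
A = np.array([[1.0, 0.0], [0.0, 1e-12]])
b = np.array([1.0, 1e-6])
print(tsvd_solve(A, b, alpha=1))   # -> [1. 0.]
print(np.linalg.solve(A, b))       # huge second component without truncation
```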
Figure 17 shows the behavior of the $L^{2}$- and $L^{\infty}$-norms of the
relative error with respect to the regularization parameter $\alpha_{TSVD}$,
for different values of the noise standard deviation $\omega$. As expected,
the optimal value of the regularization parameter depends on the noise
variance: for low noise levels we should use higher values of
$\alpha_{TSVD}$, reducing it as the noise increases.
Figure 17. Effect of the regularization parameter $\alpha_{TSVD}$ using the
TSVD in parameterization method for the analytical benchmark case (90%
quantile bars shown).
Testing the TSVD regularization again, fixing $\alpha_{TSVD}$ and increasing
the noise standard deviation, we clearly see the regularizing effect of the
TSVD. Figure 19 shows the obtained results. The figure highlights the
importance of a proper choice of the regularization parameter: with very low
noise in the measurements, we should opt for higher values of
$\alpha_{TSVD}$, and vice versa.
Figure 19. Behavior of the relative error with respect to the standard
deviation of the noise in the measurements using the parameterization method
with TSVD regularization in the analytical benchmark case (90% quantile bars
shown).
We conclude this noise analysis by looking at a realization of the computed
heat flux with the different methods. Figure 21 provides a qualitative example
of the performance of the inverse solvers for $\omega=0.08$. As expected, the
noise is not well filtered by Alifanov’s regularization, while the
parameterization method with TSVD provides a smooth solution in good agreement
with the true value.
(a) Alifanov’s regularization
(b) LU w. full pivoting
(c) TSVD, $\alpha_{TSVD}=3$
Figure 21. Comparison of the estimated heat flux for Alifanov’s regularization
(A) and parameterization method with (C) and without (B) regularization
for the analytical benchmark case with noisy measurements (noise standard
deviation $\omega=0.08$).
### 4.3. Inverse Problem with Temperature and Total Heat Flux Measurements
In this section, we discuss the numerical solution of the inverse Problem 3.6
where $T[g](\mathbf{x}_{i})$ is the solution of Problem 4.1 at points
$\mathbf{x}_{i}$, for all $i=1,2,\dots,M$ and
$\hat{G}=\int_{\Gamma_{s_{in}}}g_{an}d\Gamma$, $g_{an}$ being defined by
(4.7). All computations are performed on the $40^{3}$ elements grid and the
basis in the parameterization method are as in the previous section.
With respect to the previous section, we have one additional parameter: the
total heat weight, $p_{g}$. Since it is not possible to set it a priori, we
analyze its effects on the solution. Figure 23 (a) shows the behavior of the
$L^{2}$- and $L^{\infty}$-norms of the relative error for different values of
$p_{g}$ using Alifanov’s regularization for the solution of the inverse
problem. On the other hand, Figure 23 (b) shows the same graph for the
parameterization method with LU decomposition with full pivoting. These
computations are performed without errors in the measurements.
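Since Problem 3.6 is defined outside this section, the sketch below only illustrates a plausible form of the augmented least-squares cost with the total-heat term weighted by $p_{g}$; treat both the functional form and the sample values as assumptions:

```python
import numpy as np

def augmented_cost(T_model, T_hat, G_model, G_hat, p_g):
    """Hedged sketch of a least-squares cost augmented with a total-heat term,
    J = 1/2 ||T - T_hat||^2 + p_g/2 (G - G_hat)^2; the exact form of
    Problem 3.6 is defined elsewhere, so this is illustrative only."""
    return 0.5*np.sum((T_model - T_hat)**2) + 0.5*p_g*(G_model - G_hat)**2

# Illustrative values; p_g carries units K^2/W^2 as in the text
T_hat   = np.array([300.0, 301.0])
T_model = np.array([300.1, 300.8])
print(augmented_cost(T_model, T_hat, G_model=1.0e4, G_hat=1.02e4, p_g=1e-6))
```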
(a) Alifanov’s regularization.
(b) Parameterization method.
Figure 23. Behavior of the relative error with respect to the total heat
measurement weight, $p_{g}$, using Alifanov’s regularization (A) and
parameterization of the heat flux with LU decomposition with full pivoting (B)
in the analytical benchmark case.
Comparing the two figures (notice the different orders of magnitude on the
y-axes), we see that adding the total heat measurement improves the boundary
heat flux estimation only for the parameterization method. For Alifanov’s
regularization, there is a very small decrease of the relative error for
$p_{g}$ around $10^{-4}\frac{K^{2}}{W^{2}}$, followed by a sudden jump.
In Figure 23 (a), we also notice an interesting jump in the error for
$1.5\frac{K^{2}}{W^{2}}<p_{g}<3\frac{K^{2}}{W^{2}}$. For these values, we
recover results similar to those obtained for
$p_{g}<10^{-4}\frac{K^{2}}{W^{2}}$.
Figure 24 provides further information on the effect of $p_{g}$. In the
parameterization method, $p_{g}$ does not have any effect on the solution for
$p_{g}<1$. Then for higher values of $p_{g}$, the relative error decreases
linearly. On the other hand, the figure confirms the interesting behavior of
Alifanov’s regularization for
$1.5\frac{K^{2}}{W^{2}}<p_{g}<4\frac{K^{2}}{W^{2}}$. However, it also shows
for this method an almost linear decrease of the total heat relative error for
$p_{g}>4\frac{K^{2}}{W^{2}}$.
Figure 24. Behavior of the relative error in computing the total heat flux on
$\Gamma_{s_{in}}$ with respect to $p_{g}$.
### 4.4. Conclusions
To draw final conclusions on the performance of the tested inverse solvers,
we compare their computational cost. This is of particular interest in our
research because we aim at real-time performance. Table 3
reports the CPU time required for the computations with no error in the
measurements and $J_{tol}=10^{-4}~{}K^{2}$, in the case where only temperature
measurements are available. Notice that all the computations were performed in
serial on an Intel® Core™ i7-8550U CPU.
Table 3. Inverse problem CPU time comparison for the analytical benchmark case.
| Alifanov’s reg. | Parameterized heat flux
---|---|---
| | offline | online
CPU time | $18.8~{}s$ | $7.21~{}s$ | $0.0056~{}s$
These results confirm that the offline-online decomposition makes the
parameterized heat flux method eligible for real-time applications. On the
other hand, Alifanov’s regularization requires several solutions of the
direct, adjoint and sensitivity problems, so it cannot be employed in
real-time as it is.
With this final remark, we conclude that the parameterization method
outperforms Alifanov’s regularization both in the quality of the estimation
provided and in its robustness (with TSVD regularization) with respect to
errors in the measurements.
Moreover, thanks to its offline-online decomposition, the parameterization
method has proved able to achieve real-time computation: it requires a
computationally expensive offline phase in which we solve several direct
problems, while the online phase is very fast since we only solve a linear
system whose dimension equals the number of bases used in the parameterization
of the heat flux.
Finally, we considered the case in which a total heat flux measurement is also
available as data for the inverse problem. The parameterization method results
improve in every aspect with this additional datum. On the other hand,
Alifanov’s regularization is only slightly affected by it.
## 5\. Industrial Benchmark
The benchmark presented in this section is a numerical test case designed to
mimic the real industrial scenario of a CC mold. In
particular, the domain is a simplification of a mold plate and the physical
quantities have typical industrial values. The thermocouples’ number and
positioning are also those of a real mold. Table 4 summarizes the physical
properties for this test case and the chosen heat flux, $g_{true}$.
As for the previous benchmark, the direct problem is a steady-state heat
conduction problem in a homogeneous isotropic solid with a rectangular
parallelepiped domain. The domain $\Omega$ is as in Figure 3 with
$\Gamma_{s_{ex}}=\Gamma_{I}\cup\Gamma_{II}\cup\Gamma_{III}\cup\Gamma_{IV}$.
The mathematical formulation of the direct problem is that of Problem 2.1.
Table 4. Physical parameters of the industrial benchmark case.
Parameter | Value
---|---
Thermal conductivity, $k$ | $300.0~{}W/(mK)$
Heat transfer coefficient, $h$ | $5.66e4~{}W/(m^{2}K)$
Water temperature, $T_{f}$ | $303+8(1.2-z)~{}K$
Heat flux (Figure 26 (a)), $g_{true}$ | $1e5[2(x-1)^{2}-2z-5]~{}W/m^{2}$
$L$ | $2~{}m$
$W$ | $0.1~{}m$
$H$ | $1.2~{}m$
(a) True heat flux.
(b) Thermocouples locations.
Figure 26. True heat flux (A) and position of the 100 thermocouples at the
plane $y=0.02~{}m$ (B) for the industrial benchmark case.
For the discretization of the domain, we use a structured orthogonal grid with
uniformly distributed elements along the three axes. We use 200, 50 and 100
elements on the x-, y- and z-axis respectively. Thus, the grid has $10^{6}$
elements.
The direct problem does not have an analytical solution. Thus, for this
benchmark, we assume that the direct problem is well solved and focus our
attention on the solution of the inverse problem.
As in the real industrial case under study, we locate the virtual
thermocouples in the plane $y=0.02~{}m$. In this plane, they are equally
distributed on the $x$\- and $z$-axis as shown in Figure 26 (b).
### 5.1. Inverse Problem with Temperature Measurements
In the present section, we analyze the performance of the proposed methods for
the solution of the inverse Problem 3.1 for the introduced numerical test
case. First, we analyze the performances of Alifanov’s regularization (see
Section 3.2.1). Table 5 shows the parameters used for the simulation.
Table 5. Parameters used in the Alifanov’s regularization algorithm for the solution on the industrial benchmark case.
Parameter | Value
---|---
$g^{0}$ | $0~{}\frac{W}{m^{2}}$
$J_{1_{tol}}$ | $10^{2}~{}K^{2}$
$\frac{\left\lVert J_{1}^{n}-J_{1}^{n-1}\right\rVert}{J_{1}^{n}}$ | $10^{-2}$
Figure 28 illustrates the estimated heat flux, $g$, at different iterations of
the algorithm. We notice that the algorithm provides a solution that is not in
agreement with $g_{true}$. In particular, it overestimates the heat flux close
to the measurement points, while far from the measurements the initial
estimate is not modified. Moreover, increasing the number of iterations does
not improve the results. Since the method fails to estimate the heat flux even
in the simplest case, without measurement noise, we do not perform further
tests with it.
(a) Iteration 1
(b) Iteration 80
Figure 28. Comparison of the computed heat flux by Alifanov’s regularization
at different iterations of the algorithm.
We now consider the parameterization method of Section 3.2.2. As for the
previous benchmark, we start by performing a numerical analysis on the
influence of the RBF shape parameter, $\eta$, on the invertibility of system
(3.42) and on the estimated heat flux. Figure 29 (a) shows the decay of the
singular values of $\Theta^{T}\Theta$ for different $\eta$. As for the
previous test case, larger values of the shape parameter correspond to a
slower decay of the singular values. Moreover, we see from this singular value
decay and in Figure 29 (b) that for $\eta>1$ the condition number of the
system decreases. However, the relative error of the heat flux estimation
increases significantly for these values of $\eta$.
(a) Decay of the singular values.
(b) Relative error and condition number.
Figure 29. Effect of the RBFs shape parameter on (A) the singular values of
the matrix $\Theta^{T}\Theta$ and on (B) the relative error norms (4.9) using
LU with full pivoting and on the condition number of (3.42).
To conclude, there is no relationship between the condition number of the
linear system and the obtained results for this industrial benchmark test.
However, according to Figure 29 (b), the best results are obtained for
$\eta=0.3$. Figure 30 shows the results obtained for this value of the RBF
shape parameter, which we use in the following tests.
(a) Heat flux
(b) Relative error
Figure 30. Estimated heat flux (A) and the respective relative error (B) using
the parameterization method with RBF shape parameter $\eta=0.3$ in the
industrial benchmark case.
We now analyze the effect of noise in the measurements. Figure 31 shows the
effect of different noise levels on the $L^{2}$- and $L^{\infty}$-norms of the
relative error (4.9) using LU factorization with full pivoting for the
solution of (3.42). The relative error increases linearly with the noise
level.
Figure 31. Effect of the measurements noise on the solution of the
parameterization method with LU factorization with full pivoting in the
industrial benchmark case (90% quantile bars shown).
As for the previous benchmark, we test the regularization properties of TSVD
on this problem. Figure 33 shows the effect of the regularization parameter
$\alpha_{TSVD}$ on the $L^{2}$- and $L^{\infty}$-norms of the relative error
for different values of the noise standard deviation, $\omega$. As expected,
the optimal value of the regularizing parameter $\alpha_{TSVD}$ decreases as
the noise increases. However, for all the considered cases, we are able to
achieve a relative error that in the $L^{2}$-norm is below $2\%$.
Figure 33. Effect of the regularization parameter $\alpha_{TSVD}$ using the
TSVD in parameterization method for the industrial benchmark case (90%
quantile bars shown).
To conclude, Figure 35 shows the behavior of the relative error increasing the
measurement noise for $\alpha_{TSVD}=5$ and $\alpha_{TSVD}=7$. Notice that
also for severe noise in the thermocouples’ measurements, we are able to
obtain a valid reconstruction of the boundary heat flux.
Figure 35. Behavior of the relative error with respect to the standard
deviation of the noise in the measurements using the parameterization method
with TSVD regularization in the industrial benchmark case (90% quantile bars
shown).
### 5.2. Inverse Problem with Temperature and Total Heat Flux Measurements
In this section, we discuss the numerical solution of the inverse Problem 3.6
where $T[g](\mathbf{x}_{i})$ is the solution of Problem 2.1 at points
$\mathbf{x}_{i}$, for all $i=1,2,\dots,M$, $g_{true}$ as in Table 4 and
$\hat{G}=\int_{\Gamma_{s_{in}}}g_{true}d\Gamma$.
With respect to the previous section, we have one additional parameter: the
total heat weight, $p_{g}$. Since it is not possible to set it a priori, we
analyze its effects on the solution. Figure 36 shows the behavior of the
$L^{2}$- and $L^{\infty}$-norms of the relative error for different values of
$p_{g}$ using Alifanov’s regularization and the parameterization method with
LU factorization for the solution of the inverse problem. All these
computations are performed without noise in the measurements.
(a) Alifanov’s regularization.
(b) Parameterization method.
Figure 36. Effect of the functional weight $p_{g}$ on the $L^{2}$- and
$L^{\infty}$-norms of the relative error (4.9) for Alifanov’s
regularization (A) and the parameterization method with LU factorization (B).
The thermocouples’ measurements are free of noise.
Analyzing Figure 36, we observe a different behavior for the two methods.
Alifanov’s regularization improves its results, reaching a minimum of the
relative error for $p_{g}\approx 10^{-8}$. Then, the error quickly reaches a
plateau in which the estimated heat flux is uniform. On the other hand, the
parameterization method error increases in jumps with $p_{g}$.
Figure 37 shows the relative error on the total heat flux. It also provides
interesting information. While both methods improve their performance linearly
for $p_{g}>10^{-5}$, the parameterization method shows a very peculiar
dependence on the weight for lower values. However, the parameterization
method has a relative error two orders of magnitude smaller than Alifanov’s
regularization.
Figure 37. Effect of the total heat measurement weight $p_{g}$ on the relative
error in the total heat.
### 5.3. Conclusions
In this industrial benchmark case, we tested the methods presented in Sections
3.2 and 3.3 in an industrial setting. Alifanov’s regularization proved to
perform very poorly. Since the thermocouples are located very close to the
boundary $\Gamma_{s_{in}}$, this regularization method overestimates the heat
flux close to the measurement points, while underestimating it away from the
measurements. Including the total heat flux measurement in the cost functional
improves the obtained results, but not to a satisfactory level.
In this test case too, the parameterization method performed very well,
providing excellent estimates of the heat flux. Here, however, introducing
the total heat measurement degraded the estimated heat flux. For this method,
TSVD regularization was used to filter the measurement noise, which allowed us
to obtain good heat flux estimates also in the noisy scenario.
To conclude, Table 6 reports the CPU time required for the computations
with no error in the measurements and $J_{tol}=10^{2}~{}K^{2}$, in the case
where only temperature measurements are available. Notice that all the
computations were performed in serial on an Intel® Core™ i7-8550U CPU.
Recalling that in this application the thermocouples sample at $1~{}Hz$, the
parameterization method allows the real-time estimation of the boundary heat
flux.
Table 6. Inverse problem CPU time comparison for the industrial benchmark case.
| Alifanov’s reg. | Parameterized heat flux
---|---|---
| | offline | online
CPU time | $221~{}s$ | $121.4~{}s$ | $0.15~{}s$
## 6\. Conclusions and Future Work
The objective of this work was to develop a methodology for the real-time
estimation of the steel-mold heat flux in CC molds.
We approached this problem by first studying the mold modeling (the direct
problem). Based on physical considerations, we justified some
simplifying assumptions for the mold model that allowed us to use a steady
heat conduction model for the solid portion of the mold. This model was
equipped with convective BCs on the portion of the boundary in contact with
the cooling water and a Neumann BC on the portion in contact with the steel.
This latter BC is the heat flux that we want to estimate.
For the setup of the inverse problem, we considered two different measurement
settings: only the thermocouples’ pointwise temperature measurements, or these
together with the total boundary heat flux measurement. For the definition of
the inverse problems, we used a deterministic least squares approach.
To solve the inverse problems, we used two different methodologies. The first
is a traditional regularization method known as Alifanov’s regularization. As
a second method, we developed an inverse solver that exploits a
parameterization of the boundary heat flux. The latter is very attractive for
our problem because it allows for an offline-online decomposition: a
computationally expensive offline phase, in which we solve several direct
problems, and a fast online phase that can be computed in real-time.
We finally tested the developed methodologies in two different benchmark
cases: an academic test and an industrial one. In both cases, we tested the
quality of the heat flux reconstruction and the robustness of the methods to
the measurement noise. The results showed that the parameterization method
outperforms Alifanov’s regularization in all the tests. Moreover, it provided
good solutions also in the presence of significant noise in the measurements.
Finally, it allows the real-time estimation of the boundary heat flux, while
Alifanov’s regularization cannot be employed in real-time as it is.
In future work, we will focus mainly on two aspects. First, we will develop a
methodology for the real-time solution of this inverse problem in the unsteady
case, comparing the results obtained with those of the steady case. Second, we
will move to a Bayesian approach to the inverse problem.[60, 61] With this
approach, we will be able to better handle errors not only in the measurements
but also in the model. From an industrial point of view, this is very valuable
since it allows uncertainty quantification to be conducted on the heat flux
estimation. In both cases, to achieve real-time computations we will exploit
reduced order modeling techniques.[62, 63, 64, 65, 66, 67]
### 6.1. Acknowledgments
We would like to acknowledge the financial support of the European Union under
the Marie Sklodowska-Curie Grant Agreement No. 765374. We also acknowledge the
partial support by the Ministry of Economy, Industry and Competitiveness
through the Plan Nacional de I+D+i (MTM2015-68275-R), by the Agencia Estatal
de Investigacion through project [PID2019-105615RB-I00/ AEI /
10.13039/501100011033], by the European Union Funding for Research and
Innovation - Horizon 2020 Program - in the framework of European Research
Council Executive Agency: Consolidator Grant H2020 ERC CoG 2015 AROMA-CFD
project 681447 “Advanced Reduced Order Methods with Applications in
Computational Fluid Dynamics” and INDAM-GNCS project “Advanced intrusive and
non-intrusive model order reduction techniques and applications”, 2019.
Moreover, we gratefully thank Gianfranco Marconi, Federico Bianco and Riccardo
Conte from Danieli & C. Officine Meccaniche SpA for helping us to better
understand the industrial problem and for the fruitful cooperation.
## References
* [1] World Steel Association. World steel in figures 2018. World Steel Association: Brussels, Belgium; 2018.
* [2] Thomas BG. Review on Modeling and Simulation of Continuous Casting. Steel Research Int. 2018; 89(1): 1700312. doi: https://doi.org/10.1002/srin.201700312
* [3] Klimeš L, Štětina J. A rapid GPU-based heat transfer and solidification model for dynamic computer simulations of continuous steel casting. Journal of Materials Processing Technology 2015; 226: 1–14. doi: https://doi.org/10.1016/j.jmatprotec.2015.06.016
* [4] Irving WR. Continuous casting of steel. The Institute of Materials (UK). 1993.
* [5] AISE Steel Foundation, Cramb AW. The making, shaping, and treating of steel: Casting Volume. AISE Steel Foundation. 2003.
* [6] Meng Y, Thomas B. Modeling Transient Slag-Layer Phenomena in the Shell/mold Gap in Continuous Casting of Steel. Metallurgical and Materials Transactions B 2003; 34: 707-725. doi: 10.1007/s11663-003-0041-x
* [7] Thomas BG, Yuan Q, Zhao B, Vanka SP. Transient fluid-flow phenomena in the continuous steel-slab casting mold and defect formation. JOM-e 2006; 58: 16–36.
* [8] Stefanescu DM. Science and engineering of casting solidification. Springer. 2015.
* [9] Wang W, Zhu C, Zhou L. Initial Solidification and Its Related Heat Transfer Phenomena in the Continuous Casting Mold. Steel Research International 2017; 88(10): 1-9. 1600488doi: https://doi.org/10.1002/srin.201600488
* [10] Goldschmit MB, Principe RJ, Koslowski M. Applications of a (k-e) model for the analysis of continuous casting processes. International Journal for Numerical Methods in Engineering 1999; 46(9): 1505-1519. doi: https://doi.org/10.1002/(SICI)1097-0207(19991130)46:9<1505::AID-NME709>3.0.CO;2-3
* [11] Williams JR, Lewis RW, Morgan K. An elasto-viscoplastic thermal stress model with applications to the continuous casting of metals. International Journal for Numerical Methods in Engineering 1979; 14(1): 1-9. doi: https://doi.org/10.1002/nme.1620140102
* [12] Koric S, Thomas BG. Efficient thermo-mechanical model for solidification processes. International Journal for Numerical Methods in Engineering 2006; 66(12): 1955-1989. doi: https://doi.org/10.1002/nme.1614
* [13] Thomas B. Modeling of the continuous casting of steel - Past, present, and future. Metallurgical and Materials Transactions B 2002; 33: 795-812. doi: 10.1007/s11663-002-0063-9
* [14] Nittka R. Regularity of solutions of linear second order elliptic and parabolic boundary value problems on Lipschitz domains. Journal of Differential Equations 2011; 251(4): 860 - 880. doi: https://doi.org/10.1016/j.jde.2011.05.019
* [15] Raymond JP. Optimal control of partial differential equations. Université Paul Sabatier, Internet 2013.
* [16] Eymard R, Gallouët T, Herbin R. Finite volume methods. In: . 7 of Handbook of Numerical Analysis. Elsevier. 2000 (pp. 713 - 1018).
* [17] Ling X, Keanini RG, Cherukuri HP. A non-iterative finite element method for inverse heat conduction problems. International Journal for Numerical Methods in Engineering 2003; 56(9): 1315-1334. doi: https://doi.org/10.1002/nme.614
* [18] Loulou T, Scott EP. An inverse heat conduction problem with heat flux measurements. International Journal for Numerical Methods in Engineering 2006; 67(11): 1587-1616. doi: https://doi.org/10.1002/nme.1674
* [19] Jin B. Conjugate gradient method for the Robin inverse problem associated with the Laplace equation. International Journal for Numerical Methods in Engineering 2007; 71(4): 433-453. doi: https://doi.org/10.1002/nme.1949
* [20] Huang CH, Yan JY. An Inverse Problem In Predicting Temperature Dependent Heat Capacity Per Unit Volume Without Internal Measurements. International Journal for Numerical Methods in Engineering 1996; 39(4): 605-618. doi: https://doi.org/10.1002/(SICI)1097-0207(19960229)39:4<605::AID-NME872>3.0.CO;2-H
* [21] Alifanov OM. Inverse Heat Transfer Problems. Moscow Izdatel Mashinostroenie. 1 ed. 1988.
* [22] Orlande HRB. Inverse Problems in Heat Transfer: New Trends on Solution Methodologies and Applications. Journal of Heat Transfer 2012; 134(3): 1-13. 031011doi: 10.1115/1.4005131
* [23] Beck JV, Blackwell B, Clair Jr. CRS. Inverse heat conduction: Ill-posed problems. James Beck. 1985.
* [24] Chang CW, Liu CH, Wang CC. Review of Computational Schemes in Inverse Heat Conduction Problems. Smart Science 2018; 6(1): 94-103. doi: 10.1080/23080477.2017.1408987
* [25] Ranut P. Optimization and Inverse Problems in Heat Transfer. PhD thesis. Universitá degli Studi di Udine, Via delle Scienze, 206, 33100 Udine UD, Italy; 2012.
* [26] Udayraj , Chakraborty S, Ganguly S, Chacko E, Ajmani S, Talukdar P. Estimation of surface heat flux in continuous casting mould with limited measurement of temperature. International Journal of Thermal Sciences 2017; 118: 435 - 447. doi: https://doi.org/10.1016/j.ijthermalsci.2017.05.012
* [27] Mahapatra R, Brimacombe J, Samarasekera I. Mold behavior and its influence on quality in the continuous casting of steel slabs: Part II. Mold heat transfer, mold flux behavior, formation of oscillation marks, longitudinal off-corner depressions, and subsurface cracks. Metallurgical and Materials Transactions B 1991; 22(6): 875–888. doi: 10.1007/BF02651164
* [28] Vitale G, Preziosi L, Ambrosi D. Force traction microscopy: An inverse problem with pointwise observations. Journal of Mathematical Analysis and Applications 2012; 395(2): 788 - 801. doi: https://doi.org/10.1016/j.jmaa.2012.05.074
* [29] Huang CH, Chen CW. A boundary element-based inverse-problem in estimating transient boundary conditions with conjugate gradient method. International Journal for Numerical Methods in Engineering 1998; 42(5): 943-965. doi: https://doi.org/10.1002/(SICI)1097-0207(19980715)42:5<943::AID-NME395>3.0.CO;2-V
* [30] Tikhonov AN. Solution of incorrectly formulated problems and the regularization method. Soviet Math. Dokl. 1963; 4: 1035–1038.
* [31] Beck JV. Surface heat flux determination using an integral method. Nuclear Engineering and Design 1968; 7(2): 170 - 178.
* [32] Beck JV. Nonlinear estimation applied to the nonlinear inverse heat conduction problem. International Journal of Heat and Mass Transfer 1970; 13(4): 703-716.
* [33] Chen CJ, Chiou JS. Prediction of surface temperature and heat flux from an interior temperature response. Letters in Heat and Mass Transfer 1976; 3(6): 539 - 548.
* [34] Pinheiro C, Samarasekera I, Brimacomb J, Walker B. Mould heat transfer and continuously cast billet quality with mould flux lubrication Part 1 Mould heat transfer. Ironmaking & Steelmaking 2000; 27(1): 37-54. doi: 10.1179/030192300677363
* [35] Samarasekera I, Brimacombe J. The influence of mold behavior on the production of continuously cast steel billets. Metallurgical Transactions B 1982; 13(1): 105–116.
* [36] Michelic S, Rauter W, Erker M, Brandl W, Bernhard C. Heat Transfer in a Round CC Mould: Measurement, Modelling and Validation. In: Steelmaking & Plastic Deformation Study Groups. Associazione Italiana di Metallurgia; 2008; Italy: Paper–30.
* [37] Ranut P, Persi C, Nobile E, Spagnul S. Estimation of Heat Flux Distribution in a Continuous Casting Mould by Inverse Heat Transfer Algorithms. In: . Volume 2: 31st Computers and Information in Engineering Conference, Parts A and B. The American Society of Mechanical Engineers. ; 2011: 389-398
* [38] Nelder JA, Mead R. A simplex method for function minimization. The computer journal 1965; 7(4): 308–313.
* [39] Man Y, Hebi Y, Dacheng F. Real-time Analysis on Non-uniform Heat Transfer and Solidification in Mould of Continuous Casting Round Billets. Isij International - ISIJ INT 2004; 44: 1696-1704. doi: 10.2355/isijinternational.44.1696
* [40] Hebi Y, Man Y, Dacheng F. 3-D Inverse Problem Continuous Model for Thermal Behavior of Mould Process Based on the Temperature Measurements in Plant Trial. Isij International - ISIJ INT 2006; 46: 539-545. doi: 10.2355/isijinternational.46.539
* [41] Gonzalez M, Goldschmit M, Assanelli A, Dvorkin E, Berdaguer E. Modeling of the solidification process in a continuous casting installation for steel slabs. Metallurgical and Materials Transactions B 2003; 34: 455-473. doi: 10.1007/s11663-003-0072-3
* [42] Wang X, Kong L, Du F, et al. Mathematical Modeling of Thermal Resistances of Mold Flux and Air Gap in Continuous Casting Mold Based on an Inverse Problem. ISIJ International 2016; 56: 803-811. doi: 10.2355/isijinternational.ISIJINT-2015-601
* [43] Zhang H, Wang W. Mold Simulator Study of Heat Transfer Phenomenon During the Initial Solidification in Continuous Casting Mold. Metallurgical and Materials Transactions B 2017; 48: 779-793. doi: 10.1007/s11663-016-0901-9
* [44] Hu P, Wang X, Wei J, Yao M, Guo Q. Investigation of Liquid/Solid Slag and Air Gap Behavior inside the Mold during Continuous Slab Casting. ISIJ International 2018; 58(5): 892-898. doi: 10.2355/isijinternational.ISIJINT-2017-393
* [45] Tang L, Yao M, Wang X, Zhang X. Non-uniform thermal behavior and shell growth within mould for wide and thick slab continuous casting. Steel Research International 2012; 83(12): 1203-1213. doi: 10.1002/srin.201200075
* [46] Wang X, Yao M. Neural networks for solving the inverse heat transfer problem of continuous casting mould. In: . 2. IEEE Circuits and Systems Society. ; 2011: 791-794.
* [47] Chen H, Su L, Wang G, Shibin W, Zhang L, Luo Z. Fuzzy estimation for heat flux distribution at the slab continuous casting mold surface. International Journal of Thermal Sciences 2014; 83: 80-88. doi: 10.1016/j.ijthermalsci.2014.04.012
* [48] Jinqiang Z, Luo X. Estimation of heat transfer coefficients and heat flux on the billet surface by an integrated approach. International Journal of Heat and Mass Transfer 2015; 90: 645-653. doi: 10.1016/j.ijheatmasstransfer.2015.07.008
* [49] Chen WL, Yang YC. Inverse problem of estimating the heat flux at the roller/workpiece interface during a rolling process. Applied Thermal Engineering 2010; 30(10): 1247 - 1254. doi: https://doi.org/10.1016/j.applthermaleng.2010.02.007
* [50] Ambrosi D. Cellular Traction as an Inverse Problem. SIAM Journal of Applied Mathematics 2006; 66: 2049-2060. doi: https://doi.org/10.1137/060657121
* [51] Lions JL. Optimal control of systems governed by partial differential equations. Springer. 1971.
* [52] Moura Neto FD, da Silva Neto AJ. An Introduction to Inverse Problems with Applications. Springer Publishing Company, Incorporated. 2012.
* [53] Fletcher R, Reeves CM. Function minimization by conjugate gradients. The Computer Journal 1964; 7(2): 149-154. doi: 10.1093/comjnl/7.2.149
* [54] Buhmann MD. Radial basis functions: theory and implementations. 12. Cambridge University Press. 2003.
* [55] Prando G. Non-Parametric Bayesian Methods for Linear System Identification. PhD thesis. Universitá di Padova, Via 8 Febbraio 1848, 2, 35122 Padova PD, Italy; 2016.
* [56] Bardsley JM. Computational Uncertainty Quantification for Inverse Problems. SIAM. 2018.
* [57] Stabile G, Hijazi S, Mola A, Lorenzi S, Rozza G. POD-Galerkin reduced order methods for CFD using Finite Volume Discretisation: vortex shedding around a circular cylinder. Communications in Applied and Industrial Mathematics (2017); 8(1): 210-236. doi: 10.1515/caim-2017-0011
* [58] ITHACA-FV. https://mathlab.sissa.it/ithaca-fv. Accessed: 2020-10-26.
* [59] Moukalled F, Mangani L, Darwish M. The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM and Matlab. Springer Publishing Company, Incorporated. 1st ed. 2015.
* [60] Matthies HG, Zander E, Rosić BV, Litvinenko A, Pajonk O. Inverse Problems in a Bayesian Setting: 245–286; Cham: Springer International Publishing . 2016
* [61] Cotter SL, Dashti M, Stuart AM. Approximation of Bayesian inverse problems for PDEs. SIAM Journal on Numerical Analysis 2010; 48(1): 322–345.
* [62] Rozza G. Fundamentals of reduced basis method for problems governed by parametrized PDEs and applications: 153–227; CISM International Centre for Mechanical Sciences. Vienna: Springer Vienna . 2014
* [63] Lassila T, Manzoni A, Quarteroni A, Rozza G. Model order reduction in fluid dynamics: challenges and perspectives. In: Quarteroni A, Rozza G. , eds. Reduced Order Methods for Modeling and Computational Reduction. 9. Springer MS&A Series. 2014 (pp. 235–274)
* [64] Hesthaven JS, Rozza G, Stamm B. Certified Reduced Basis Methods for Parametrized Partial Differential Equations. Springer Briefs in MathematicsSwitzerland: Springer. 1 ed. 2015
* [65] Chinesta F, Huerta A, Rozza G, Willcox K. Encyclopedia of Computational Mechanics Second Edition,ch. Model Reduction Methods: 1-36; John Wiley & Sons . 2017
* [66] Chen P, Quarteroni A, Rozza G. Reduced Basis Methods for Uncertainty Quantification. SIAM/ASA Journal on Uncertainty Quantification 2017; 5: 813–869. doi: 10.1137/151004550
* [67] Georgaka S, Stabile G, Rozza G, Bluck MJ. Parametric POD-Galerkin Model Order Reduction for Unsteady-State Heat Transfer Problems. Communications in Computational Physics 2019; 27(1): 1–32. doi: 10.4208/cicp.OA-2018-0207
# Lipschitz continuity of the dilation of Bloch functions on the unit ball of
a Hilbert space and applications
Alejandro Miralles. Departament de Matemàtiques and IMAC,
Universitat Jaume I, Castelló (Spain). e-mail:<EMAIL_ADDRESS>
###### Abstract.
Let $B_{E}$ be the open unit ball of a complex finite or infinite dimensional
Hilbert space. If $f$ belongs to the space $\mathcal{B}(B_{E})$ of Bloch
functions on $B_{E}$, we prove that the dilation map given by
$x\mapsto(1-\|x\|^{2})\mathcal{R}f(x)$ for $x\in B_{E}$, where $\mathcal{R}f$
denotes the radial derivative of $f$, is Lipschitz continuous with respect to
the pseudohyperbolic distance $\rho_{E}$ in $B_{E}$, which extends to the
finite and infinite dimensional setting the result given for the classical
Bloch space $\mathcal{B}$. In order to provide this result, we will need to
prove that $\rho_{E}(zx,zy)\leq|z|\rho_{E}(x,y)$ for $x,y\in B_{E}$ under some
conditions on $z\in\mathbb{C}$. Lipschitz continuity of
$x\mapsto(1-\|x\|^{2})\mathcal{R}f(x)$ will yield some applications which also
extend classical results from $\mathcal{B}$ to $\mathcal{B}(B_{E})$. On the
one hand, we supply results on interpolating sequences for
$\mathcal{B}(B_{E})$: we show that a sequence in $B_{E}$ must be separated in
order to be interpolating for $\mathcal{B}(B_{E})$, and we also prove that any
interpolating sequence for $\mathcal{B}(B_{E})$ can be slightly perturbed and
remain interpolating. On the other hand, after a detailed study of the
automorphisms of $B_{E}$, we provide necessary and sufficient
conditions for a composition operator on $\mathcal{B}(B_{E})$ to be
bounded below.
###### Key words and phrases:
Bloch space, infinite dimensional holomorphy, pseudohyperbolic distance,
automorphisms of the unit ball, bounded below composition operator,
interpolating sequence
###### 2020 Mathematics Subject Classification:
Primary 46E50, 30H30; Secondary 47B33, 32A18
Supported by PGC2018-094431-B-100 (MICINN. Spain) and 8059/2019 (UJI)
## 1\. Introduction and background
Throughout this work, $E$ will denote a finite or infinite dimensional complex
Hilbert space. Its open unit ball will be denoted by $B_{E}$. After some
background, we will study in Section 2 the boundedness of:
$\displaystyle\frac{\rho_{E}(zx,zy)}{|z|\rho_{E}(x,y)}$
for $z\in\mathbb{C}$ and $x,y\in B_{E}$ such that $zx,zy\in B_{E}$. First we
will show that, in general, this expression is unbounded. Nevertheless, we
will show that if $|z|$ is bounded above by:
$\frac{1+\max\\{\|x\|,\|y\|\\}}{2\max\\{\|x\|,\|y\|\\}},$
then the expression above is bounded by $2$ and this bound is the best
possible. We first prove the case when we deal with $E=\mathbb{C}$ and then
with any finite or infinite dimensional Hilbert space $E$. At the end of this
section, we will extend this result to the case when we deal with the Banach
space $C_{0}(S)$.
In Section 3, we deal with functions $f$ which belong to the Bloch space
$\mathcal{B}(B_{E})$, that is, the space of Bloch functions on the unit ball
$B_{E}$ of $E$. First, as a consequence of the boundedness of this expression,
we show that the dilation map $x\mapsto(1-\|x\|^{2})\mathcal{R}f(x)$ for $x\in
B_{E}$ is Lipschitz with respect to the pseudohyperbolic distance in
Subsection 3.1, extending to the finite and infinite dimensional setting the
result given by Attele in [2] and improved by Xiong in [15] for the classical
Bloch space $\mathcal{B}$. Hence, we derive some results about interpolating
sequences for $\mathcal{B}(B_{E})$ in Subsection 3.2. Indeed, we supply a
proof that these sequences are separated for the pseudohyperbolic distance. We
also prove that interpolating sequences can be slightly perturbed and remain
interpolating, which also extends the result for $\mathcal{B}$ given in [2].
Finally, in Subsection 3.3 we will first study several properties of
automorphisms of $B_{E}$, which will allow us to give necessary and
sufficient conditions for a composition operator on $\mathcal{B}(B_{E})$ to be
bounded below, extending the results given in the one-dimensional case in
[6] to the finite and infinite dimensional setting.
### 1.1. The pseudohyperbolic and hyperbolic distance.
Let $\mathbf{D}$ be the open unit disk of the complex plane $\mathbb{C}$.
Recall that the pseudohyperbolic distance for $z,w\in\mathbf{D}$ is given by:
$\rho(z,w)=\left|\frac{z-w}{1-\bar{z}w}\right|.$
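As a quick numeric aside (our own, not part of the original text), the formula above is easy to check together with the classical fact that the disk automorphisms $\varphi_{a}(z)=(a-z)/(1-\bar{a}z)$ are isometries for $\rho$:

```python
import numpy as np

def rho(z, w):
    """Pseudohyperbolic distance on the unit disk."""
    return abs((z - w) / (1 - np.conj(z) * w))

def phi(a, z):
    """Disk automorphism swapping 0 and a."""
    return (a - z) / (1 - np.conj(a) * z)

z, w, a = 0.3 + 0.2j, -0.5 + 0.1j, 0.4 - 0.6j
d1 = rho(z, w)
d2 = rho(phi(a, z), phi(a, w))   # invariance under automorphisms
```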
Let $X$ be a complex Banach space and let $B_{X}$ be its open unit ball.
Recall that $f:B_{X}\to\mathbb{C}$ is said to be holomorphic (or analytic) if
it is Fréchet differentiable for any $x\in B_{X}$ (see [12] for further
information). For any $x,y\in B_{X}$, the pseudohyperbolic distance
$\rho_{X}(x,y)$ is given by:
$\displaystyle\rho_{X}(x,y)=\sup\\{\rho(f(x),f(y)):f\in
H^{\infty}(B_{X}),\|f\|_{\infty}\leq 1\\},$
where $H^{\infty}(B_{X})$ is the space of bounded holomorphic functions on
$B_{X}$, which becomes a Banach space (a uniform algebra, indeed) endowed with
the sup-norm. The hyperbolic distance for $x,y\in B_{X}$ is given by:
$\beta_{X}(x,y)=\frac{1}{2}\log\left(\frac{1+\rho_{X}(x,y)}{1-\rho_{X}(x,y)}\right).$
### 1.2. Automorphisms and pseudohyperbolic distance on $B_{E}$
If we deal with a complex Hilbert space $E$, we will denote by $Aut(B_{E})$
the space of automorphisms of $B_{E}$, that is, the maps $\varphi:B_{E}\to
B_{E}$ which are bijective and bianalytic. These automorphisms are well-known
(see [9]) and we will need to make use of them during the sequel. For any
$x\in B_{E}$, the automorphism $\varphi_{x}:B_{E}\longrightarrow B_{E}$ is
defined according to:
(1.1) $\displaystyle\varphi_{x}(y)=(s_{x}Q_{x}+P_{x})(m_{x}(y))$
where $s_{x}=\sqrt{1-\|x\|^{2}},$ $m_{x}:B_{E}\longrightarrow B_{E}$ is the
analytic self-map:
$m_{x}(y)=\frac{x-y}{1-\langle y,x\rangle},$
$P_{x}:E\longrightarrow E$ is the orthogonal projection along the one-
dimensional subspace spanned by $x$, that is:
$P_{x}(y)=\frac{\langle y,x\rangle}{\langle x,x\rangle}x$
and $Q_{x}:E\longrightarrow E$ is the orthogonal complement
$Q_{x}=Id_{E}-P_{x}$, where $Id_{E}$ denotes the identity operator on $E$. It
is clear that $\varphi_{x}(0)=x$ and $\varphi_{x}(x)=0$. The automorphisms of
the unit ball $B_{E}$ turn to be compositions of these $\varphi_{x}$ with
unitary transformations $U$ of $E$.
It is well-known (see [9]) that the pseudohyperbolic distance on $B_{E}$ is
given by:
(1.2) $\displaystyle\rho_{E}(x,y)=\|\varphi_{y}(x)\|\mbox{ for any }x,y\in
B_{E}.$
and:
(1.3)
$\displaystyle\rho_{E}(x,y)^{2}=1-\frac{(1-\|x\|^{2})(1-\|y\|^{2})}{|1-\langle
x,y\rangle|^{2}}.$
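Formulas (1.1)-(1.3) can be sanity-checked numerically. The following is our own check in $E=\mathbb{C}^{3}$ (numpy; `np.vdot` conjugates its first argument, so `np.vdot(y, x)` computes $\langle x,y\rangle$): we build $\varphi_{y}$ literally from (1.1) and compare $\|\varphi_{y}(x)\|^{2}$, i.e. $\rho_{E}(x,y)^{2}$ via (1.2), with the closed form (1.3).

```python
import numpy as np

def inner(x, y):
    """<x, y>: linear in x, conjugate-linear in y."""
    return np.vdot(y, x)

def phi(x, y):
    """The automorphism phi_x of (1.1), evaluated at y."""
    s = np.sqrt(1 - np.linalg.norm(x) ** 2)          # s_x
    m = (x - y) / (1 - inner(y, x))                  # m_x(y)
    P = inner(m, x) / inner(x, x) * x                # projection onto span{x}
    Q = m - P                                        # orthogonal complement
    return s * Q + P

rng = np.random.default_rng(1)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
x *= 0.5 / np.linalg.norm(x)                         # ||x|| = 0.5
y = rng.normal(size=3) + 1j * rng.normal(size=3)
y *= 0.7 / np.linalg.norm(y)                         # ||y|| = 0.7

lhs = np.linalg.norm(phi(y, x)) ** 2                 # (1.2)
rhs = 1 - (1 - np.linalg.norm(x) ** 2) * (1 - np.linalg.norm(y) ** 2) \
        / abs(1 - inner(x, y)) ** 2                  # (1.3)
```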
### 1.3. The Bloch space
The classical Bloch space $\mathcal{B}$ is the set of holomorphic functions
$f:\mathbf{D}\to\mathbb{C}$ such that
$\|f\|_{B}=\sup_{z\in\mathbf{D}}(1-|z|^{2})|f^{\prime}(z)|$ is finite. This
supremum defines a semi-norm, which becomes a norm by adding the value of $f$
at the origin: $|f(0)|+\sup_{z\in\mathbf{D}}(1-|z|^{2})|f^{\prime}(z)|$.
Hence, $\mathcal{B}$
becomes a complex Banach space. The semi-norm $\|\cdot\|_{B}$ is invariant by
automorphisms, that is, $\|f\circ\varphi\|_{B}=\|f\|_{B}$ for any
$f\in\mathcal{B}$ and $\varphi:\mathbf{D}\to\mathbf{D}$ an automorphism of
$\mathbf{D}$.
Timoney extended Bloch functions to the finite dimensional setting (see
[14]), and Blasco, Galindo and Miralles extended them to the infinite
dimensional one (see [5]). For a complex finite or infinite
dimensional Hilbert space $E$, the analytic function $f:B_{E}\to\mathbb{C}$ is
said to belong to the Bloch space $\mathcal{B}(B_{E})$ if:
$\|f\|_{\mathcal{B}}=\sup_{x\in B_{E}}(1-\|x\|^{2})\|\nabla f(x)\|<+\infty,$
where $\nabla f(x)$ is identified with the derivative $f^{\prime}(x)$ or,
equivalently, if:
$\|f\|_{\mathcal{R}}=\sup_{x\in B_{E}}(1-\|x\|^{2})|\mathcal{R}f(x)|<+\infty,$
where $\mathcal{R}f(x)$ is the radial derivative of $f$ at $x$ given by
$\mathcal{R}f(x)=\langle x,\overline{\nabla f(x)}\rangle$. These semi-norms
are equivalent to the following one:
(1.4) $\displaystyle\|f\|_{\mathcal{I}}=\sup_{x\in
B_{E}}\|\widetilde{\nabla}f(x)\|,$
where $\widetilde{\nabla}f(x)$ denotes the invariant gradient of $f$ at $x$
which is given by $\widetilde{\nabla}f(x)=\nabla(f\circ\varphi_{x})(0)$, where
$\varphi_{x}$ is the automorphism given in (1.1).
The three semi-norms $\|\cdot\|_{\mathcal{B}},\|\cdot\|_{\mathcal{R}}$ and
$\|\cdot\|_{\mathcal{I}}$ define equivalent Banach space norms (modulo the
constant functions) on $\mathcal{B}(B_{E})$ (see [5]). In particular, there
exists a constant $A_{0}>0$ such that:
(1.5)
$\displaystyle\|f\|_{\mathcal{R}}\leq\|f\|_{\mathcal{B}}\leq\|f\|_{\mathcal{I}}\leq
A_{0}\|f\|_{\mathcal{R}}.$
Hence, the space $\mathcal{B}(B_{E})$ can be endowed with any of the norms:
$\|\cdot\|_{\mathcal{B}-Bloch}=|f(0)|+\|\cdot\|_{\mathcal{B}}$
or:
$\|\cdot\|_{\mathcal{R}-Bloch}=|f(0)|+\|\cdot\|_{\mathcal{R}}$
or:
$\|\cdot\|_{\mathcal{I}-Bloch}=|f(0)|+\|\cdot\|_{\mathcal{I}}$
and $\mathcal{B}(B_{E})$ becomes a Banach space. We will make use of these
three semi-norms and norms throughout the sequel. We will also need the
following result, which states that Bloch functions on $B_{E}$ are Lipschitz
with respect to the hyperbolic distance (see [3]):
###### Proposition 1.1.
Let $E$ be a complex Hilbert space and let $f\in\mathcal{B}(B_{E})$. Then for
any $x,y\in B_{E}$:
$|f(x)-f(y)|\leq\|f\|_{\mathcal{I}}\beta_{E}(x,y).$
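For a concrete feel of this inequality, here is our own one-dimensional spot check: on the disk, $\beta=\operatorname{artanh}\circ\rho$, and for $f(z)=z$ the Bloch seminorm (and hence $\|f\|_{\mathcal{I}}$) equals $1$, so $|x-y|\leq\operatorname{artanh}(\rho(x,y))$ should hold for all $x,y\in\mathbf{D}$.

```python
import numpy as np

def rho(z, w):
    return abs((z - w) / (1 - np.conj(z) * w))

def beta(z, w):
    r = rho(z, w)
    return 0.5 * np.log((1 + r) / (1 - r))   # = artanh(rho(z, w))

rng = np.random.default_rng(2)
ok = True
for _ in range(1000):
    z = (rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)) * 0.7
    w = (rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)) * 0.7
    ok = ok and abs(z - w) <= beta(z, w) + 1e-12   # Prop. 1.1 with f(z) = z
```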
## 2\. Inequalities with the pseudohyperbolic distance
Let $X$ be a complex Banach space. If $\varphi:B_{X}\to B_{X}$ is an analytic
self-map, it is well-known that
$\rho_{X}(\varphi(x),\varphi(y))\leq\rho_{X}(x,y)$ for any $x,y\in B_{X}$ and
the equality is attained if and only if $\varphi$ is an automorphism of
$B_{X}$. Hence, if we consider $z\in\mathbb{C}$, $|z|\leq 1$ and $x,y\in
B_{X}$, it is clear that:
$\displaystyle\frac{\rho_{X}(zx,zy)}{\rho_{X}(x,y)}\leq 1$
since the map $\varphi:B_{X}\to B_{X}$ given by $\varphi(x)=zx$ is analytic on
$B_{X}$. However, this situation changes dramatically if we consider the
expression:
(2.1) $\displaystyle\frac{\rho_{X}(zx,zy)}{|z|\rho_{X}(x,y)}$
for any $z\in\mathbb{C}$ such that $zx,zy\in B_{X}$. We show that, in general,
this expression is unbounded. However, if $X$ is a complex Hilbert
space or $C_{0}(S)$ and $z\in\mathbb{C}$ satisfies:
$|z|\leq\frac{1+\max\\{\|x\|,\|y\|\\}}{2\max\\{\|x\|,\|y\|\\}},$
then expression (2.1) is bounded by $2$, and this will permit us to provide
several applications in Section 3.
### 2.1. Unboundedness
In this section we prove that expression (2.1) is unbounded in general.
###### Proposition 2.1.
Let $E$ be a complex Hilbert space. There exist a sequence
$(z_{n})\subset\mathbb{C}$, $x\in B_{E}$ and a sequence $(y_{n})\subset B_{E}$
such that $z_{n}x,z_{n}y_{n}\in B_{E}$ but:
$\frac{\rho_{E}(z_{n}x,z_{n}y_{n})}{|z_{n}|\rho_{E}(x,y_{n})}$
is unbounded.
Proof. We prove it for $E=\mathbb{C}$. Take for instance $x=1/2$,
$y_{n}=1/2-\frac{1}{n}$ and $z_{n}=2-\frac{1}{n}$. It is clear that
$|z_{n}x|<1$ and $|z_{n}y_{n}|<1$. However:
$\displaystyle\rho(z_{n}x,z_{n}y_{n})=\frac{\frac{1}{n}\left(2-\frac{1}{n}\right)}{1-\frac{1}{2}\left(\frac{1}{2}-\frac{1}{n}\right)\left(2-\frac{1}{n}\right)^{2}}=\frac{\frac{2n-1}{n^{2}}}{\frac{2-9n+12n^{2}}{4n^{3}}}=\frac{4n(2n-1)}{12n^{2}-9n+2}$
and:
$\displaystyle|z_{n}|\rho(x,y_{n})=\left(2-\frac{1}{n}\right)\frac{\frac{1}{n}}{1-\frac{1}{2}\left(\frac{1}{2}-\frac{1}{n}\right)}=\frac{\frac{2n-1}{n^{2}}}{\frac{3n+2}{4n}}=\frac{4(2n-1)}{3n^{2}+2n}.$
Hence:
$\displaystyle\frac{\rho(z_{n}x,z_{n}y_{n})}{|z_{n}|\rho(x,y_{n})}=\frac{\frac{4n(2n-1)}{12n^{2}-9n+2}}{\frac{4(2n-1)}{3n^{2}+2n}}=\frac{3n^{3}+2n^{2}}{12n^{2}-9n+2}$
which is clearly unbounded since it tends to $\infty$ when
$n\rightarrow\infty$. The result remains true if we deal with any complex
Hilbert space $E$ since we can take $x_{0}\in E$ such that $\|x_{0}\|=1$ and
take $u=\frac{1}{2}x_{0}$ and $v_{n}=y_{n}x_{0}$. We have that:
$\displaystyle\rho_{E}(z_{n}u,z_{n}v_{n})^{2}=1-\frac{(1-\|z_{n}u\|^{2})(1-\|z_{n}v_{n}\|^{2})}{|1-\langle
z_{n}u,z_{n}v_{n}\rangle|^{2}}=$ $\displaystyle
1-\frac{(1-|z_{n}x|^{2})(1-|z_{n}y_{n}|^{2})}{|1-\overline{z_{n}x}\,z_{n}y_{n}|^{2}}=\rho(z_{n}x,z_{n}y_{n})^{2}$
and similarly we have $\rho_{E}(u,v_{n})=\rho(x,y_{n})$, so we apply the
case $E=\mathbb{C}$ and we are done. ∎
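The computation in the proof above is easy to reproduce numerically; the following script (ours) confirms that the ratio agrees with the closed form $(3n^{3}+2n^{2})/(12n^{2}-9n+2)$ and grows with $n$.

```python
def rho(z, w):
    """Pseudohyperbolic distance; real arguments suffice for this example."""
    return abs((z - w) / (1 - z * w))

def ratio(n):
    """rho(z_n x, z_n y_n) / (|z_n| rho(x, y_n)) for the sequences above."""
    x, y, z = 0.5, 0.5 - 1.0 / n, 2.0 - 1.0 / n
    return rho(z * x, z * y) / (z * rho(x, y))

def closed_form(n):
    return (3.0 * n**3 + 2 * n**2) / (12 * n**2 - 9 * n + 2)
```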
An easy consequence is the well-known fact that the pseudohyperbolic distance
is not induced by a norm on $E$, since:
$\frac{\rho_{E}(zx,zy)}{|z|\rho_{E}(x,y)}$
is unbounded, so $\rho_{E}(zx,zy)\neq|z|\rho_{E}(x,y)$ in general.
### 2.2. Boundedness
The main result of this section is Theorem 2.7, which states that under the
above condition on $|z|$, expression (2.1) is bounded and the best possible
bound is $2$. The following lemma will be used to prove this result:
###### Lemma 2.2.
Let $E$ be a finite or infinite dimensional Hilbert space, $z\in\mathbb{C}$
and $x,y\in B_{E}$ such that:
$\displaystyle|z|\leq\frac{1+\max\\{\|x\|,\|y\|\\}}{2\max\\{\|x\|,\|y\|\\}}.$
Then $|1-p|\leq 2|1-|z|^{2}p|$ where $p$ denotes the scalar product $\langle
x,y\rangle$.
Proof. Suppose without loss of generality that $\|x\|\geq\|y\|$ and $x\neq 0$.
Otherwise, the inequality is clearly true for any $z\in\mathbb{C}$. Notice
that:
$|1-p|=|1-|z|^{2}p+|z|^{2}p-p|\leq|1-|z|^{2}p|+||z|^{2}-1||p|,$
so it is sufficient to prove that $|1-|z|^{2}p|+||z|^{2}-1||p|\leq
2|1-|z|^{2}p|$, which is equivalent to $|1-|z|^{2}p|\geq||z|^{2}-1||p|$. We
consider two cases:
i) if $|z|^{2}\leq 1$: then $1-|z|^{2}\geq 0$ so we need to prove
$|1-|z|^{2}p|\geq(1-|z|^{2})|p|$ which is clearly satisfied since:
$|1-|z|^{2}p|\geq 1-|z|^{2}|p|\geq|p|-|z|^{2}|p|=(1-|z|^{2})|p|$
where the second inequality holds since $|p|\leq\|x\|\|y\|<1$.
ii) On the other hand, suppose that $|z|^{2}>1$: we need to prove that
$|1-|z|^{2}p|\geq(|z|^{2}-1)|p|$. Since $|1-|z|^{2}p|\geq 1-|z|^{2}|p|$, it is
sufficient to prove that:
(2.2) $\displaystyle 1-|z|^{2}|p|\geq(|z|^{2}-1)|p|$
Notice that $1-|z|^{2}|p|>0$ since $|z|^{2}|p|\leq\|zx\|\|zy\|<1$ because
$zx,zy\in B_{E}$. Inequality (2.2) is equivalent to $2|z|^{2}|p|<1+|p|$ which
is true since:
$2|z|^{2}|p|\leq 2\left(\frac{1+\|x\|}{2\|x\|}\right)^{2}|p|$
so we need to prove that:
$\displaystyle 2\left(\frac{1+\|x\|}{2\|x\|}\right)^{2}|p|\leq 1+|p|\mbox{ if
and only if }\left(\frac{(1+\|x\|)^{2}}{2\|x\|^{2}}-1\right)|p|\leq 1.$
But:
$\displaystyle\left(\frac{(1+\|x\|)^{2}}{2\|x\|^{2}}-1\right)|p|=\frac{1+2\|x\|-\|x\|^{2}}{2\|x\|^{2}}|p|=\frac{1+(2-\|x\|)\|x\|}{2\|x\|^{2}}|p|\leq$
$\displaystyle\frac{1+1}{2\|x\|^{2}}\|x\|^{2}=1$
where the last inequality holds by the arithmetic mean-geometric mean
inequality, which gives $(2-\|x\|)\|x\|\leq 1$, and since
$|p|\leq\|x\|\|y\|\leq\|x\|^{2}$. ∎
#### 2.2.1. The case $E=\mathbb{C}$
If we deal with $E=\mathbb{C}$, it is easy to prove that (2.1) is bounded:
###### Proposition 2.3.
Let $x,y\in\mathbf{D}$ and $z\in\mathbb{C}$ such that:
$|z|\leq\frac{1+\max\\{|x|,|y|\\}}{2\max\\{|x|,|y|\\}}.$
Then $zx,zy\in\mathbf{D}$ and:
$\frac{\rho(zx,zy)}{|z|\rho(x,y)}\leq 2.$
Proof. Suppose without loss of generality that $|x|\geq|y|$. Notice that
$zx,zy\in\mathbf{D}$ since:
$|zy|\leq|zx|\leq\frac{1+|x|}{2|x|}|x|=\frac{1+|x|}{2}<1.$
We have:
$\rho(zx,zy)=\left|\frac{zx-
zy}{1-\overline{zx}zy}\right|=|z|\left|\frac{x-y}{1-|z|^{2}\overline{x}y}\right|=|z|\frac{|x-y|}{|1-|z|^{2}\overline{x}y|}$
and:
$|z|\rho(x,y)=|z|\left|\frac{x-y}{1-\overline{x}y}\right|=|z|\frac{|x-y|}{|1-\overline{x}y|},$
so the inequality is equivalent to:
$|z|\frac{|x-y|}{|1-|z|^{2}\overline{x}y|}\leq
2|z|\frac{|x-y|}{|1-\overline{x}y|}$
or equivalently:
$|1-\overline{x}y|\leq 2|1-|z|^{2}\overline{x}y|.$
Calling $p=\overline{x}y$, we have to prove that $|1-p|\leq 2|1-|z|^{2}p|$.
Apply Lemma 2.2 for $E=\mathbb{C}$ and we are done. ∎
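Proposition 2.3 lends itself to a randomized sanity check (our own script): we sample $x,y\in\mathbf{D}$ and $z$ with modulus up to the stated bound, and the ratio should never exceed $2$.

```python
import numpy as np

def rho(z, w):
    return abs((z - w) / (1 - np.conj(z) * w))

rng = np.random.default_rng(3)
max_ratio = 0.0
for _ in range(2000):
    x = (rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)) * 0.7
    y = (rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)) * 0.7
    M = max(abs(x), abs(y))
    r = rng.uniform(0.01, (1 + M) / (2 * M))          # admissible modulus
    z = r * np.exp(1j * rng.uniform(0.0, 2 * np.pi))
    max_ratio = max(max_ratio, rho(z * x, z * y) / (abs(z) * rho(x, y)))
```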
###### Remark 2.4.
Notice that the bound $2$ is the best possible. Indeed, take
$x_{n},y_{n}\in\mathbf{D}$ such that $x_{n}\rightarrow 1$ and
$y_{n}\rightarrow-1$. It is clear that for $z_{n}\rightarrow 0$, the
expression $|1-\overline{x_{n}}y_{n}|$ tends to $2$ when $n\rightarrow\infty$
and the expression $2|1-|z_{n}|^{2}\overline{x_{n}}y_{n}|$ also tends to $2$,
so the inequality above is sharp. ∎
#### 2.2.2. The case when $E$ is any complex Hilbert space
In the following lemma, we will consider $x,y\in B_{E}$ and
$z\in\mathbb{C}$ such that $zx,zy\in B_{E}$ and we will denote by $r=\|x\|$,
$s=\|y\|$ and $p=\langle x,y\rangle$. We will also denote $m=\Re p$ and
$u=r^{2}+s^{2}$. Finally, we will also consider
$A=\|x-y\|^{2}=r^{2}+s^{2}-2\Re p=u-2m$ and $B=r^{2}s^{2}-|p|^{2}$. This
notation will also be used in Theorem 2.7. Notice that $A-B\geq 0$: indeed,
$A-B=r^{2}+s^{2}-2m-r^{2}s^{2}+|p|^{2}=|1-p|^{2}-(1-r^{2})(1-s^{2})\geq 0$ if
and only if $|1-p|^{2}\geq(1-r^{2})(1-s^{2})$, if and only if
$1-\frac{(1-r^{2})(1-s^{2})}{|1-p|^{2}}=\rho_{E}(x,y)^{2}\geq 0$, which is
clearly true.
###### Lemma 2.5.
Let $E$ be a complex Hilbert space and $x,y\in B_{E}$. We have that:
$\frac{\|x-y\|^{2}|1-p|^{2}}{\|x-y\|^{2}-(\|x\|^{2}\|y\|^{2}-|p|^{2})}\leq
4\mbox{ or, equivalently: }\frac{A|1-p|^{2}}{A-B}\leq 4.$
Proof. The inequality is equivalent to:
$\displaystyle(1-2\Re p+|p|^{2})\|x-y\|^{2}\leq
4\|x-y\|^{2}-4r^{2}s^{2}+4|p|^{2}\mbox{ if and only if:}$
$\displaystyle(1-2\Re p+|p|^{2})(r^{2}+s^{2}-2\Re p)\leq 4(r^{2}+s^{2})-8\Re
p-4r^{2}s^{2}+4|p|^{2}.$
Bearing in mind that $m=\Re p$ and $u=r^{2}+s^{2}$, we need to prove:
$\displaystyle(1-2m+|p|^{2})(u-2m)\leq
4u-8m-4r^{2}s^{2}+4|p|^{2}\leftrightarrow$ $\displaystyle
u-2um+u|p|^{2}-2m+4m^{2}-2m|p|^{2}\leq
4u-8m-4r^{2}s^{2}+4|p|^{2}\leftrightarrow$ $\displaystyle
4u-8m-4r^{2}s^{2}+4|p|^{2}-u+2um-u|p|^{2}+2m-4m^{2}+2m|p|^{2}\geq 0.$
Notice that $|p|\geq|m|$ and, since $4+2m-u\geq 0$, we have
$(4+2m-u)|p|^{2}\geq(4+2m-u)m^{2}$, so it is sufficient to prove:
$4u-8m-4r^{2}s^{2}+(4+2m-u)m^{2}-u+2um+2m-4m^{2}\geq 0.$
It is also clear that $u\geq 2rs$, so $u^{2}\geq 4r^{2}s^{2}$, that is,
$-4r^{2}s^{2}\geq-u^{2}$, and it suffices to prove:
$\displaystyle 4u-8m-u^{2}+(4+2m-u)m^{2}-u+2um+2m-4m^{2}\geq 0\leftrightarrow$
$\displaystyle 2m^{3}-um^{2}+2(u-3)m+3u-u^{2}\geq 0$
The left-hand side can be easily factorized and equals:
$(u-2m)(3-u-m^{2})$
where both factors are clearly greater than or equal to $0$ (note that
$u-2m=\|x-y\|^{2}\geq 0$ and $u+m^{2}\leq 2+1=3$), and we are done. ∎
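Lemma 2.5 can be checked numerically; the following snippet (ours, purely illustrative) samples random points of the unit ball of $\mathbb{C}^{3}$ and evaluates the quotient $A|1-p|^{2}/(A-B)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_ball(n):
    # random point of the open unit ball of C^n (uniform direction, uniform radius)
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v) * rng.uniform(0.0, 0.999)

worst = 0.0
for _ in range(20000):
    x, y = sample_ball(3), sample_ball(3)
    p = np.vdot(x, y)                        # p = <x, y>
    A = np.linalg.norm(x - y) ** 2           # A = ||x - y||^2
    B = (np.linalg.norm(x) * np.linalg.norm(y)) ** 2 - abs(p) ** 2
    if A - B < 1e-12:                        # skip the degenerate case x ~ y
        continue
    worst = max(worst, A * abs(1 - p) ** 2 / (A - B))

print(worst)
```

Only $|p|$ and $|1-p|$ enter the quotient, so the check is insensitive to the conjugation convention of the inner product.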
The following lemma will be used at the end of the proof of Theorem 2.7:
###### Lemma 2.6.
Let $f(a,b,c)=(3-b^{2})(a-c)-(a^{2}-b^{2})(2-c)$. Then $f(a,b,c)\geq 0$ for
any $0\leq c\leq b\leq a\leq 1$.
Proof. Notice that $f(a,b,c)=(3-b^{2})a-2(a^{2}-b^{2})-(3-a^{2})c$, so $f$ is
affine with respect to $c$. Hence it is enough to prove the inequality for
$c=b$. The function becomes
$f(a,b,b)=(3-b^{2})(a-b)-(a^{2}-b^{2})(2-b)=(a-b)((3-b^{2})-(a+b)(2-b))$ and
since $a-b\geq 0$, it is enough to prove that $(3-b^{2})-(a+b)(2-b)\geq 0$.
The expression $g(a,b)=(3-b^{2})-(a+b)(2-b)$ is affine with respect to $a$, so
it is enough to prove it for $a=1$. Notice that
$g(1,b)=(3-b^{2})-(1+b)(2-b)=1-b$, which is clearly greater than or equal to
$0$, so we are done. ∎
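A quick grid check of Lemma 2.6 (ours, illustrative only):

```python
import numpy as np

def f(a, b, c):
    # the function of Lemma 2.6
    return (3 - b**2) * (a - c) - (a**2 - b**2) * (2 - c)

grid = np.linspace(0.0, 1.0, 60)
# minimum of f over the region 0 <= c <= b <= a <= 1
worst = min(f(a, b, c) for a in grid for b in grid for c in grid if c <= b <= a)

print(worst)  # 0.0: the minimum is attained, e.g., at a = b = c
```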
###### Theorem 2.7.
Let $E$ be a finite or infinite dimensional complex Hilbert space,
$z\in\mathbb{C}$ and $x,y\in B_{E}$. If:
$|z|\leq\frac{1+\max\\{\|x\|,\|y\|\\}}{2\max\\{\|x\|,\|y\|\\}},$
then $zx,zy\in B_{E}$ and:
$\frac{\rho_{E}(zx,zy)}{|z|\rho_{E}(x,y)}\leq 2.$
Proof. Suppose without loss of generality that $\|x\|\geq\|y\|$ and $z\neq 0$.
We will denote $\rho=\rho_{E}(x,y)$ and $\rho_{z}=\rho_{E}(zx,zy)$. If
$\frac{1}{2}\leq|z|<1$, then the result is clear since:
$\rho_{E}(zx,zy)\leq\rho_{E}(x,y)\leq 2|z|\rho_{E}(x,y)$
where the first inequality is true because of the contractivity of the
pseudohyperbolic distance under the self-map $g:B_{E}\to B_{E}$ given by
$g(x)=zx$.
So let us prove it for $|z|<1/2$ or $|z|\geq 1$. Taking squares, the
inequality to prove is equivalent to:
$\frac{\rho_{z}^{2}}{|z|^{2}\rho^{2}}\leq 4.$
Bear in mind the expression (1.3) for the pseudohyperbolic distance and call
$t=|z|^{2}$. So we have:
$\displaystyle\displaystyle\frac{\rho_{z}^{2}}{|z|^{2}\rho^{2}}=\frac{\frac{|1-tp|^{2}-(1-tr^{2})(1-ts^{2})}{|1-tp|^{2}}}{\frac{t(|1-p|^{2}-(1-r^{2})(1-s^{2}))}{|1-p|^{2}}}=$
$\displaystyle\frac{(|1-tp|^{2}-(1-tr^{2})(1-ts^{2}))|1-p|^{2}}{t(|1-p|^{2}-(1-r^{2})(1-s^{2}))|1-tp|^{2}}=$
$\displaystyle\frac{(1+t^{2}|p|^{2}-2t\Re
p-1-t^{2}r^{2}s^{2}+t(r^{2}+s^{2}))|1-p|^{2}}{t(1+|p|^{2}-2\Re
p-1-r^{2}s^{2}+r^{2}+s^{2})|1-tp|^{2}}=$
$\displaystyle\frac{(t^{2}|p|^{2}-2t\Re
p-t^{2}r^{2}s^{2}+t(r^{2}+s^{2}))|1-p|^{2}}{t(|p|^{2}-2\Re
p-r^{2}s^{2}+r^{2}+s^{2})|1-tp|^{2}}$
and dividing numerator and denominator by $t$:
$\displaystyle\frac{\rho_{z}^{2}}{|z|^{2}\rho^{2}}=\frac{(t|p|^{2}-2\Re
p-tr^{2}s^{2}+r^{2}+s^{2})|1-p|^{2}}{(|p|^{2}-2\Re
p-r^{2}s^{2}+r^{2}+s^{2})|1-tp|^{2}}=$
$\displaystyle\frac{(\|x-y\|^{2}-t(r^{2}s^{2}-|p|^{2}))|1-p|^{2}}{(\|x-y\|^{2}-(r^{2}s^{2}-|p|^{2}))|1-tp|^{2}}.$
Using the notation above, the desired bound $\frac{\rho_{z}^{2}}{|z|^{2}\rho^{2}}\leq 4$ becomes:
(2.3) $\displaystyle\frac{(A-tB)|1-p|^{2}}{(A-B)|1-tp|^{2}}\leq 4$
so we need to prove (2.3) for $t\geq 1$ and $t\leq 1/4$.
If $t\geq 1$, then $A-tB\leq A-B$ and the result is clear since:
$\displaystyle\frac{(A-tB)|1-p|^{2}}{(A-B)|1-tp|^{2}}\leq\frac{|1-p|^{2}}{|1-tp|^{2}}\leq
4$
where the last inequality is true by Lemma 2.2. So we need to prove inequality
(2.3) for $0\leq t\leq 1/4$. This inequality is clearly equivalent to:
(2.4) $\displaystyle 4(A-B)|1-tp|^{2}\geq(A-tB)|1-p|^{2}$
which is satisfied if and only if:
$\displaystyle 4(A-B)(1-2mt+t^{2}|p|^{2})\geq(A-tB)|1-p|^{2}.$
This is equivalent to:
$\displaystyle 4(A-B)|p|^{2}t^{2}+(B|1-p|^{2}-8m(A-B))t+4(A-B)-A|1-p|^{2}\geq
0.$
Since $B,t\geq 0$ and $4(A-B)-A|1-p|^{2}\geq 0$ by Lemma 2.5, the inequality
is clearly true if $m<0$. So we can suppose without loss of generality that
$m\geq 0$. We will prove inequality (2.4) for $m\geq 0$. The inequality is
equivalent to:
$\displaystyle 4(A-B)|1-tp|^{2}-(A-tB)|1-p|^{2}\geq 0$
so we will prove last inequality. Notice that:
$\displaystyle 4(A-B)|1-tp|^{2}-(A-tB)|1-p|^{2}=$ $\displaystyle
4(A-B)(1-2mt+t^{2}|p|^{2})-(A-B)(1-2m+|p|^{2})-B(1-t)|1-p|^{2}=$
$\displaystyle 4(A-B)(1-2m+|p|^{2}+2m(1-t)-(1-t^{2})|p|^{2})$
$\displaystyle-(A-B)(1-2m+|p|^{2})-B(1-t)|1-p|^{2}=$ $\displaystyle
3(A-B)(1-2m+|p|^{2})+8m(1-t)(A-B)$
$\displaystyle-4(1-t^{2})|p|^{2}(A-B)-B(1-t)|1-p|^{2}$
Since $0\leq t\leq 1/4$, we have that $3/4\leq 1-t\leq 1$. Hence:
$\displaystyle 4(A-B)|1-tp|^{2}-(A-tB)|1-p|^{2}\geq$ $\displaystyle
3(A-B)(1-2m+|p|^{2})+8m(1-t)(A-B)$
$\displaystyle-4(1-t^{2})|p|^{2}(A-B)-B(1-t)|1-p|^{2}\geq$ $\displaystyle
3(A-B)-6m(A-B)+3|p|^{2}(A-B)+8m\cdot\frac{3}{4}(A-B)$
$\displaystyle-4|p|^{2}(A-B)-B|1-p|^{2}=$ $\displaystyle
3(A-B)-6m(A-B)+6m(A-B)-|p|^{2}(A-B)-B|1-p|^{2}=$
$\displaystyle(3-|p|^{2})(A-B)-B|1-p|^{2}.$
Notice that:
$\displaystyle(3-|p|^{2})(A-B)-B|1-p|^{2}=$
$\displaystyle(3-|p|^{2})A-B(3-|p|^{2}+1-2m+|p|^{2})=$
$\displaystyle(3-|p|^{2})A-B(4-2m)=$
$\displaystyle(3-|p|^{2})(r^{2}+s^{2}-2m)-(4-2m)(r^{2}s^{2}-|p|^{2})\geq$
$\displaystyle(3-|p|^{2})(2rs-2m)-(4-2m)(r^{2}s^{2}-|p|^{2})=$ $\displaystyle
2(3-|p|^{2})(rs-m)-2(2-m)(r^{2}s^{2}-|p|^{2})=$ $\displaystyle
2(3-b^{2})(a-c)-2(a^{2}-b^{2})(2-c).$
To finish the proof, notice that if we call $a=rs$, $b=|p|$ and $c=m$, then
$0\leq c\leq b\leq a\leq 1$ so we use Lemma 2.6 and:
$\displaystyle(3-|p|^{2})(A-B)-B|1-p|^{2}\geq$ $\displaystyle
2(3-b^{2})(a-c)-2(a^{2}-b^{2})(2-c)=2f(a,b,c)\geq 0$
and we are done. ∎
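The bound of Theorem 2.7 can be illustrated numerically in $E=\mathbb{C}^{3}$, using expression (1.3) for the pseudohyperbolic distance. The snippet below is a sanity check of ours, not a proof; the helpers `rho_E` and `sample_ball` are our own.

```python
import numpy as np

rng = np.random.default_rng(2)

def rho_E(x, y):
    # pseudohyperbolic distance on B_E for E = C^n, expression (1.3)
    num = (1 - np.linalg.norm(x) ** 2) * (1 - np.linalg.norm(y) ** 2)
    return np.sqrt(1 - num / abs(1 - np.vdot(x, y)) ** 2)

def sample_ball(n):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v) * rng.uniform(0.05, 0.999)

worst = 0.0
for _ in range(20000):
    x, y = sample_ball(3), sample_ball(3)
    m = max(np.linalg.norm(x), np.linalg.norm(y))
    # |z| within the hypothesis of Theorem 2.7
    r = rng.uniform(1e-3, (1 + m) / (2 * m))
    z = r * np.exp(1j * rng.uniform(0, 2 * np.pi))
    worst = max(worst, rho_E(z * x, z * y) / (abs(z) * rho_E(x, y)))

print(worst)
```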
#### 2.2.3. Results for $X=C_{0}(S)$
Let $S$ be a locally compact topological space and consider $X=C_{0}(S)$, the
space of continuous functions $f:S\to\mathbb{C}$ such that for any
$\varepsilon>0$ there exists a compact subset $K\subset S$ such that
$|f(x)|<\varepsilon$ for any $x\in S\setminus K$. Endowed with the sup-norm,
$C_{0}(S)$ becomes a Banach space, and the pseudohyperbolic distance for
$x,y\in B_{X}$ is well-known (see [1]); it is given by:
(2.5) $\displaystyle\rho_{X}(x,y)=\sup_{t\in S}\rho(x(t),y(t)).$
We prove that expression (2.1) is also bounded by $2$ when we deal with the
space $X=C_{0}(S)$:
###### Proposition 2.8.
Let $X=C_{0}(S)$ and $x,y\in B_{X}$. If $z\in\mathbb{C}$ satisfies:
$|z|\leq\frac{1+\max\\{\|x\|,\|y\|\\}}{2\max\\{\|x\|,\|y\|\\}},$
then:
$\frac{\rho_{X}(zx,zy)}{|z|\rho_{X}(x,y)}\leq 2.$
Proof. Suppose without loss of generality that $\|x\|\geq\|y\|$. For any $t\in
S$, we have that $x(t),y(t)\in\mathbf{D}$ since $\|x\|=\sup_{t\in S}|x(t)|<1$
and $\|y\|=\sup_{t\in S}|y(t)|<1$. The result is clear since:
$\displaystyle\rho_{X}(zx,zy)=\sup_{t\in S}\rho(zx(t),zy(t))\leq\sup_{t\in
S}2|z|\rho(x(t),y(t))=$ $\displaystyle 2|z|\sup_{t\in
S}\rho(x(t),y(t))=2|z|\rho_{X}(x,y)$
where the first inequality is clear because of Proposition 2.3 and because for
any $t\in S$ we have that:
$|z|\leq\frac{1+\|x\|}{2\|x\|}=\frac{\frac{1}{\|x\|}+1}{2}\leq\inf_{t\in
S}\left\\{\frac{1+|x(t)|}{2|x(t)|}\right\\}\leq\frac{1+|x(t)|}{2|x(t)|}$
and we are done. ∎
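For a finite $S$, $C_{0}(S)$ is just $\mathbb{C}^{|S|}$ with the sup-norm and (2.5) becomes a maximum over coordinates, which makes Proposition 2.8 easy to probe numerically. The check below is ours and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def rho(x, y):
    # coordinatewise pseudohyperbolic distance on D
    return np.abs(x - y) / np.abs(1 - np.conj(x) * y)

worst = 0.0
for _ in range(20000):
    # model C_0(S) with |S| = 5: vectors under the sup-norm
    x = (rng.uniform(-1, 1, 5) + 1j * rng.uniform(-1, 1, 5)) / 2
    y = (rng.uniform(-1, 1, 5) + 1j * rng.uniform(-1, 1, 5)) / 2
    m = max(np.abs(x).max(), np.abs(y).max())
    r = rng.uniform(1e-3, (1 + m) / (2 * m))
    z = r * np.exp(1j * rng.uniform(0, 2 * np.pi))
    # rho_X is the sup over S, as in (2.5)
    worst = max(worst, rho(z * x, z * y).max() / (abs(z) * rho(x, y).max()))

print(worst)
```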
## 3\. Applications
Now we give some applications related to Theorem 2.7. First, we will show that
the function $x\mapsto(1-\|x\|^{2})|\mathcal{R}f(x)|$ for $x\in B_{E}$ is
Lipschitz with respect to the pseudohyperbolic distance. Hence, we derive some
results about interpolating sequences for $\mathcal{B}(B_{E})$ in Subsection
3.2. Indeed, we provide a new proof that these sequences are separated for the
pseudohyperbolic distance. We also prove that these sequences can be slightly
perturbed and they remain interpolating. Finally, in Subsection 3.3 we will
take an in-depth look at the automorphisms of $B_{E}$. This will permit us to
give necessary and sufficient conditions for a composition operator on
$\mathcal{B}(B_{E})$ to be bounded below.
### 3.1. The Lipschitz continuity of $(1-\|x\|^{2})|\mathcal{R}f(x)|$
We will denote by $\Pi$ the unit circle of the complex plane $\mathbb{C}$,
that is, the set of complex numbers $u$ such that $|u|=1$.
###### Lemma 3.1.
Let $f\in\mathcal{B}(B_{E})$. Fix $\varepsilon>0$ and $x,y\in B_{E}$. If
$(1+\varepsilon u)x$ and $(1+\varepsilon u)y$ belongs to $B_{E}$ for any
$u\in\Pi$, then there exists $u_{0}\in\Pi$ such that:
(3.1)
$\displaystyle|\mathcal{R}f(x)-\mathcal{R}f(y)|\leq\frac{1}{\varepsilon}\|f\|_{\mathcal{I}}\beta((1+\varepsilon
u_{0})x,(1+\varepsilon u_{0})y).$
Proof. Fix $x,y\in B_{E}$ and $\varepsilon>0$. Notice that the function
$u\mapsto|f(x+\varepsilon ux)-f(y+\varepsilon uy)|$ is continuous on $\Pi$.
Since $\Pi$ is a compact set, there exists $u_{0}\in\Pi$ such that:
$|f(x+\varepsilon u_{0}x)-f(y+\varepsilon u_{0}y)|=\max\\{|f(x+\varepsilon
ux)-f(y+\varepsilon uy)|:u\in\Pi\\}.$
Consider $g(u)=f(x+\varepsilon ux)$ for $u$ defined on an open disk of the
complex plane $\mathbb{C}$ which contains $\Pi$. It is clear that:
$g^{\prime}(u)=\nabla f(x+\varepsilon ux)(\varepsilon x),$
so $g^{\prime}(0)=\varepsilon\mathcal{R}f(x)$. Similarly, if
$h(u)=f(y+\varepsilon uy)$, then $h^{\prime}(0)=\varepsilon\mathcal{R}f(y)$.
By Cauchy’s integral formula we have:
$\displaystyle|\mathcal{R}f(x)-\mathcal{R}f(y)|=|\langle x,\overline{\nabla
f(x)}\rangle-\langle y,\overline{\nabla f(y)}\rangle|=$
$\displaystyle\left|\frac{1}{\varepsilon}\frac{1}{2\pi
i}\int_{|u|=1}f(x+\varepsilon ux)-f(y+\varepsilon
uy)\frac{du}{u^{2}}\right|\leq$
$\displaystyle\frac{2\pi}{2\pi\varepsilon}|f(x+\varepsilon
u_{0}x)-f(y+\varepsilon
u_{0}y)|\leq\frac{1}{\varepsilon}\|f\|_{\mathcal{I}}\beta((1+\varepsilon
u_{0})x,(1+\varepsilon u_{0})y)$
where the last inequality is true by Proposition 1.1. ∎
The proof of the following lemma is an easy calculation. It will be used in
Lemma 3.3.
###### Lemma 3.2.
For any $0\leq t<1$ we have:
$\frac{1}{2}\log\left(\frac{1+t}{1-t}\right)\leq\frac{t}{1-t}.$
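Lemma 3.2 is what lets one pass from the hyperbolic metric $\beta$ to the bound $\rho/(1-\rho)$ in the proof of Lemma 3.3 below; a short numerical check (ours, illustrative only):

```python
import numpy as np

t = np.linspace(0.0, 0.999, 100000)
lhs = 0.5 * np.log((1 + t) / (1 - t))   # (1/2) log((1+t)/(1-t))
rhs = t / (1 - t)
ok = bool((lhs <= rhs + 1e-12).all())

print(ok)  # True
```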
###### Lemma 3.3.
Let $f\in\mathcal{B}(B_{E})$ and $x,y\in B_{E}$ such that $\|x\|\geq\|y\|$.
Then:
$\displaystyle(1-\|x\|^{2})|\mathcal{R}f(x)-\mathcal{R}f(y)|\leq
12\|f\|_{\mathcal{I}}\rho_{E}(x,y).$
Proof. We may assume $x\neq 0$ (otherwise $y=0$ and the inequality is
trivial). Take:
$\varepsilon=\frac{1-\|x\|}{2\|x\|}>0.$
Notice that for any $u\in\Pi$ we have that $(1+\varepsilon u)x$ and
$(1+\varepsilon u)y$ belong to $B_{E}$ since:
$(1+\varepsilon)\|x\|\leq\left(1+\frac{1-\|x\|}{2\|x\|}\right)\|x\|=\frac{1+\|x\|}{2\|x\|}\|x\|=\frac{1+\|x\|}{2}<1$
so clearly $\|(1+\varepsilon u)x\|\leq(1+\varepsilon)\|x\|<1$ and, since
$\|y\|\leq\|x\|$:
$\|(1+\varepsilon u)y\|\leq(1+\varepsilon)\|y\|\leq(1+\varepsilon)\|x\|<1.$
By Lemma 3.1, there exists $u_{0}\in\Pi$ such that:
$\displaystyle|\mathcal{R}f(x)-\mathcal{R}f(y)|\leq\frac{2\|x\|}{1-\|x\|}\|f\|_{\mathcal{I}}\beta_{E}((1+\varepsilon
u_{0})x,(1+\varepsilon u_{0})y).$
Take $z_{0}=1+\varepsilon u_{0}$, which satisfies:
$|z_{0}|\leq 1+\varepsilon=1+\frac{1-\|x\|}{2\|x\|}=\frac{1+\|x\|}{2\|x\|}.$
By Theorem 2.7 we have that $z_{0}x,z_{0}y\in B_{E}$ and:
(3.2) $\displaystyle\rho_{E}(z_{0}x,z_{0}y)\leq 2|z_{0}|\rho_{E}(x,y).$
Denote $B=(1-\|x\|^{2})|\mathcal{R}f(x)-\mathcal{R}f(y)|$. We obtain:
$B\leq(1-\|x\|^{2})\frac{2\|x\|}{1-\|x\|}\|f\|_{\mathcal{I}}\beta_{E}(z_{0}x,z_{0}y).$
By Lemma 3.2 we also have:
$\beta_{E}(z_{0}x,z_{0}y)\leq\frac{\rho_{E}(z_{0}x,z_{0}y)}{1-\rho_{E}(z_{0}x,z_{0}y)}$
We obtain:
$\displaystyle
B\leq(1+\|x\|)(1-\|x\|)\frac{2\|x\|}{1-\|x\|}\|f\|_{\mathcal{I}}\frac{\rho_{E}(z_{0}x,z_{0}y)}{1-\rho_{E}(z_{0}x,z_{0}y)}\leq$
$\displaystyle
4\|x\|\|f\|_{\mathcal{I}}\frac{\rho_{E}(z_{0}x,z_{0}y)}{1-\rho_{E}(z_{0}x,z_{0}y)}=\frac{4\|x\|\|f\|_{\mathcal{I}}}{\frac{1}{\rho_{E}(z_{0}x,z_{0}y)}-1}$
so:
$\left(\frac{1}{\rho_{E}(z_{0}x,z_{0}y)}-1\right)B\leq
4\|x\|\|f\|_{\mathcal{I}}$
which is equivalent to:
(3.3) $\displaystyle\frac{B}{\rho_{E}(z_{0}x,z_{0}y)}-B\leq
4\|x\|\|f\|_{\mathcal{I}}.$
Bear in mind (see (1.5)) that $\|f\|_{\mathcal{B}}\leq\|f\|_{\mathcal{I}}$. We
have:
$\displaystyle
B\leq(1-\|x\|^{2})|\mathcal{R}f(x)|+(1-\|x\|^{2})|\mathcal{R}f(y)|\leq$
$\displaystyle(1-\|x\|^{2})\|\nabla f(x)\|\|x\|+(1-\|y\|^{2})\|\nabla
f(y)\|\|y\|\leq$ $\displaystyle 2\|f\|_{\mathcal{B}}\|x\|\leq
2\|x\|\|f\|_{\mathcal{I}}$
so from inequality (3.3) we have:
$\frac{B}{\rho_{E}(z_{0}x,z_{0}y)}\leq 4\|x\|\|f\|_{\mathcal{I}}+B\leq
4\|x\|\|f\|_{\mathcal{I}}+2\|x\|\|f\|_{\mathcal{I}}=6\|x\|\|f\|_{\mathcal{I}}$
and we conclude $B\leq 6\|x\|\|f\|_{\mathcal{I}}\rho_{E}(z_{0}x,z_{0}y)$.
Finally, we apply inequality (3.2) and since
$|z_{0}|\leq\frac{1+\|x\|}{2\|x\|}$ we obtain:
$\displaystyle B\leq 6\|x\|\|f\|_{\mathcal{I}}2|z_{0}|\rho_{E}(x,y)\leq
12\|x\|\|f\|_{\mathcal{I}}\frac{1+\|x\|}{2\|x\|}\rho_{E}(x,y)=$ $\displaystyle
12\|f\|_{\mathcal{I}}\frac{1+\|x\|}{2}\rho_{E}(x,y)\leq
12\|f\|_{\mathcal{I}}\rho_{E}(x,y)$
and we are done. ∎
###### Theorem 3.4.
Let $f\in\mathcal{B}(B_{E})$ and $x,y\in B_{E}$. Then:
$|(1-\|x\|^{2})\mathcal{R}f(x)-(1-\|y\|^{2})\mathcal{R}f(y)|\leq
14\|f\|_{\mathcal{I}}\rho_{E}(x,y).$
Proof. Call $F=|(1-\|x\|^{2})\mathcal{R}f(x)-(1-\|y\|^{2})\mathcal{R}f(y)|$
and suppose without loss of generality that $\|x\|\geq\|y\|$. We have that:
$\displaystyle\
F=|(1-\|x\|^{2})(\mathcal{R}f(x)-\mathcal{R}f(y))-(\|x\|^{2}-\|y\|^{2})\mathcal{R}f(y)|\leq$
(3.4)
$\displaystyle(1-\|x\|^{2})|\mathcal{R}f(x)-\mathcal{R}f(y)|+(\|x\|^{2}-\|y\|^{2})|\mathcal{R}f(y)|.$
Since $\|x\|^{2}-\|y\|^{2}=(\|x\|+\|y\|)(\|x\|-\|y\|)\leq 2(\|x\|-\|y\|)$ and
bearing in mind that $\rho_{E}(\|x\|,\|y\|)\leq\rho_{E}(x,y)$ we obtain:
$\displaystyle(\|x\|^{2}-\|y\|^{2})|\mathcal{R}f(y)|\leq
2\frac{\|x\|-\|y\|}{1-\|x\|\|y\|}(1-\|x\|\|y\|)|\mathcal{R}f(y)|\leq$
$\displaystyle 2\rho_{E}(\|x\|,\|y\|)(1-\|y\|^{2})|\mathcal{R}f(y)|\leq
2\|y\|\|f\|_{\mathcal{B}}\rho_{E}(x,y)\leq 2\|f\|_{\mathcal{I}}\rho_{E}(x,y).$
By Lemma 3.3 we know that:
$(1-\|x\|^{2})|\mathcal{R}f(x)-\mathcal{R}f(y)|\leq
12\|f\|_{\mathcal{I}}\rho_{E}(x,y)$
so from (3.4) we conclude:
$F\leq
12\|f\|_{\mathcal{I}}\rho_{E}(x,y)+2\|f\|_{\mathcal{I}}\rho_{E}(x,y)=14\|f\|_{\mathcal{I}}\rho_{E}(x,y)$
and we are done. ∎
The following corollary extends to the finite and infinite dimensional setting
results given by Attele in [2] and improved by Xiong in [15] for the classical
Bloch space $\mathcal{B}$.
###### Corollary 3.5.
Let $E$ be a complex Hilbert space and $f\in\mathcal{B}(B_{E})$. The function
$x\mapsto(1-\|x\|^{2})|\mathcal{R}f(x)|$ for $x\in B_{E}$ is Lipschitz with
respect to the pseudohyperbolic distance and the following inequality holds:
$|(1-\|x\|^{2})|\mathcal{R}f(x)|-(1-\|y\|^{2})|\mathcal{R}f(y)||\leq
14\|f\|_{\mathcal{I}}\rho_{E}(x,y).$
Proof. Applying Theorem 3.4 it is clear that:
$\displaystyle|(1-\|x\|^{2})|\mathcal{R}f(x)|-(1-\|y\|^{2})|\mathcal{R}f(y)||\leq$
$\displaystyle|(1-\|x\|^{2})\mathcal{R}f(x)-(1-\|y\|^{2})\mathcal{R}f(y)|\leq
14\|f\|_{\mathcal{I}}\rho_{E}(x,y).$
and we are done. ∎
### 3.2. Results on interpolating sequences for the Bloch space
Let $E$ be a finite or infinite dimensional complex Hilbert space.
Recall that $(x_{n})\subset B_{E}\setminus\\{0\\}$ is said to be interpolating
for the Bloch space $\mathcal{B}(B_{E})$ if for any bounded sequence $(a_{n})$
of complex numbers, there exists $f\in\mathcal{B}(B_{E})$ such that
$(1-\|x_{n}\|^{2})\mathcal{R}f(x_{n})=a_{n}$. Attele studied in [2] this kind
of interpolation for the classical Bloch space $\mathcal{B}$ and the finite
and infinite dimensional setting was studied in [4]. We provide a new approach
to prove that a necessary condition for a sequence $(x_{n})\subset B_{E}$ to
be interpolating for $\mathcal{B}(B_{E})$ is to be separated for the
pseudohyperbolic distance:
###### Proposition 3.6.
Let $E$ be a complex Hilbert space. If $(x_{n})\subset B_{E}\setminus\\{0\\}$
is interpolating for $\mathcal{B}(B_{E})$, then there exists $C>0$ such that
$\rho_{E}(x_{k},x_{j})\geq C$ for any $k\neq j$, $k,j\in\mathbb{N}$.
Proof. Since $(x_{n})\subset B_{E}\setminus\\{0\\}$ is interpolating, there
exists a sequence $(f_{n})\subset\mathcal{B}(B_{E})$ such that:
$\displaystyle(1-\|x_{n}\|^{2})\mathcal{R}f_{n}(x_{n})=1\ \mbox{ and }\
(1-\|x_{k}\|^{2})\mathcal{R}f_{n}(x_{k})=0\ \mbox{ if }\ k\neq n.$
The operator $T:\mathcal{B}(B_{E})\to\ell_{\infty}$ given by
$T(f)=((1-\|x_{n}\|^{2})\mathcal{R}f(x_{n}))$ is surjective, so by the Open
Mapping Theorem there exists $M>0$ such that $\|f\|_{\mathcal{R}}\leq
M\sup_{j\in\mathbb{N}}(1-\|x_{j}\|^{2})|\mathcal{R}f(x_{j})|$, and hence
$\|f_{n}\|_{\mathcal{R}}\leq M$ for any $n\in\mathbb{N}$. Applying Theorem
3.4, for $k\neq n$ we have:
$\displaystyle|(1-\|x_{n}\|^{2})\mathcal{R}f_{n}(x_{n})-(1-\|x_{k}\|^{2})\mathcal{R}f_{n}(x_{k})|\leq
14\|f_{n}\|_{\mathcal{I}}\rho_{E}(x_{n},x_{k})\leq$ $\displaystyle
14A_{0}\|f_{n}\|_{\mathcal{R}}\rho_{E}(x_{n},x_{k})\leq
14A_{0}M\rho_{E}(x_{n},x_{k}).$
Hence, $1-0\leq 14A_{0}M\rho_{E}(x_{n},x_{k})$ and we conclude that:
$\rho_{E}(x_{n},x_{k})\geq\frac{1}{14A_{0}M}$
so we are done. ∎
Attele (see [2]) also proved that any interpolating sequence
$(z_{n})\subset\mathbf{D}$ for $\mathcal{B}$ can be slightly perturbed and the
sequence remains interpolating. By means of Theorem 3.4, we adapt his proof
and generalize the result to the case when we deal with any complex Hilbert
space $E$.
###### Theorem 3.7.
If $(x_{n})\subset B_{E}\setminus\\{0\\}$ is an interpolating sequence for
$\mathcal{B}(B_{E})$, then there exists $\delta>0$ such that if
$(y_{n})\subset B_{E}$ satisfies that
$\sup_{n\in\mathbb{N}}\rho_{E}(x_{n},y_{n})<\delta$ then $(y_{n})$ is also an
interpolating sequence for $\mathcal{B}(B_{E})$.
Proof. Since $(x_{n})$ is interpolating, the operator
$T:\mathcal{B}(B_{E})\to\ell_{\infty}$ given by
$T(f)=((1-\|x_{n}\|^{2})\mathcal{R}f(x_{n}))$ is surjective. Hence, its
adjoint $T^{*}:\ell_{\infty}^{*}\to(\mathcal{B}(B_{E}))^{*}$ is injective and
it has closed range. In particular, $T^{*}$ is left-invertible. The set of
left-invertible elements is open in the Banach algebra of linear operators
from $\ell_{\infty}^{*}$ to $(\mathcal{B}(B_{E}))^{*}$. So there exists
$\delta>0$ such that if $\|T^{*}-R\|<14A_{0}\delta$, then $R$ is left-
invertible. If we consider $S(f)=((1-\|y_{n}\|^{2})\mathcal{R}f(y_{n}))$, then
by Theorem 3.4:
$\displaystyle\|(T-S)(f)\|_{\infty}=\sup_{n\in\mathbb{N}}|(1-\|x_{n}\|^{2})\mathcal{R}f(x_{n})-(1-\|y_{n}\|^{2})\mathcal{R}f(y_{n})|\leq$
$\displaystyle 14\sup_{n\in\mathbb{N}}\rho_{E}(x_{n},y_{n})\|f\|_{\mathcal{I}}\leq
14A_{0}\|f\|_{\mathcal{R}}\sup_{n\in\mathbb{N}}\rho_{E}(x_{n},y_{n})<14A_{0}\delta\|f\|_{\mathcal{R}}$
so $\|T-S\|<14A_{0}\delta$ and hence $\|T^{*}-S^{*}\|=\|T-S\|<14A_{0}\delta$.
We conclude that $S^{*}$ is left-invertible and hence $S$ is surjective, as we
wanted. ∎
### 3.3. Bounded below composition operators
Recall that a linear operator between Banach spaces $T:X\to Y$ is bounded
below if there exists $k>0$ such that $\|x\|\leq k\|T(x)\|$ for all $x\in X$.
It is well-known that a bounded linear operator $T$ is bounded below if and
only if $T$ is injective and has closed range.
Let $\varphi:\mathbf{D}\to\mathbf{D}$ be an analytic map. The composition
operator $C_{\varphi}:\mathcal{B}\to\mathcal{B}$ is given by
$C_{\varphi}(f)=f\circ\varphi$. It is well-known that $C_{\varphi}$ is bounded
for any $\varphi$. Denote by:
(3.5)
$\displaystyle\tau_{\varphi}(z)=\frac{1-|z|^{2}}{1-|\varphi(z)|^{2}}\varphi^{\prime}(z).$
In [8], the authors investigated conditions under which $\varphi$ induces a
composition operator with closed range on the Bloch space $\mathcal{B}$. In
particular, they proved the following necessary condition:
###### Proposition 3.8.
If $C_{\varphi}$ is bounded below, then there exist $\varepsilon,r>0$ with
$r<1$ such that for any $z\in\mathbf{D}$ we have $\rho(\varphi(w),z)\leq r$
for all $w\in\mathbf{D}$ satisfying $|\tau_{\varphi}(w)|>\varepsilon$.
In order to provide sufficient conditions, the authors studied the function
given by $z\mapsto(1-|z|^{2})|f^{\prime}(z)|$ for $z\in\mathbf{D}$ and proved
that this function is Lipschitz with respect to the pseudohyperbolic distance
if $f\in\mathcal{B}$. Indeed, they proved:
(3.6) $\displaystyle\ \
|(1-|z|^{2})|f^{\prime}(z)|-(1-|w|^{2})|f^{\prime}(w)||\leq
3.31\|f\|_{\mathcal{B}}\rho(z,w)$
for any $z,w\in\mathbf{D}$ and $f\in\mathcal{B}$. This result improves a
previous one given by Attele in [2], whose constant was $9$ instead of
$3.31$; Xiong further improved the constant in [15], obtaining
$3\sqrt{3}/2\approx 2.6$. Inequality (3.6) made it possible to give the
following sufficient condition for $C_{\varphi}$ to be bounded below (see [8]):
###### Theorem 3.9.
Let $\varphi:\mathbf{D}\to\mathbf{D}$ be an analytic map. Suppose that there
exist $0<r<\frac{1}{4}$ and $\varepsilon>0$ such that for any $w\in\mathbf{D}$
there exists $z_{w}\in\mathbf{D}$ satisfying $\rho(\varphi(z_{w}),w)<r$ and
$|\tau_{\varphi}(z_{w})|>\varepsilon$. Then,
$C_{\varphi}:\mathcal{B}\to\mathcal{B}$ is bounded below.
Some authors (see [6] and [7]) extended these results by considering analytic
maps $\varphi:B_{n}\to B_{n}$, where $B_{n}$ denotes the open unit ball in the
finite dimensional Hilbert space $(\mathbb{C}^{n},\|\cdot\|_{2})$.
Nevertheless, the authors substituted $\tau_{\varphi}(z)$ given in (3.5) by
the expression:
(3.7)
$\displaystyle\left(\frac{1-\|z\|^{2}}{1-\|\varphi(z)\|^{2}}\right)^{(n+1)/2}|\det(J_{\varphi}(z))|$
where $J_{\varphi}(z)$ denotes the Jacobian $n\times n$ matrix of $\varphi$.
Indeed, $\tau_{\varphi}(z)=1$ if $\varphi$ is an automorphism of $B_{n}$.
Furthermore, the authors also based their proofs on the definition of Bloch
function on $B_{n}$ introduced by Timoney (see [14]) using the Bergman metric.
We want to extend the classical results on $\mathcal{B}$ to the finite and
infinite dimensional setting, so we will provide necessary and sufficient
conditions avoiding expression (3.7) or the use of the Bergman metric. So,
consider a complex finite or infinite dimensional Hilbert space $E$ and let
$\psi:B_{E}\to B_{E}$ be an analytic map. In order to extend the
classical results, we introduce $\tau_{\psi}(x)$ and
$\widetilde{\tau_{\psi}}(x)$ for $x\in B_{E}$ given by:
(3.8)
$\displaystyle{\tau_{\psi}}(x)=\frac{1-\|x\|^{2}}{1-\|\psi(x)\|^{2}}\|\psi^{\prime}(x)\|.$
and:
(3.9)
$\displaystyle\widetilde{\tau_{\psi}}(x)=\frac{\sqrt{1-\|x\|^{2}}}{1-\|\psi(x)\|^{2}}\|\psi^{\prime}(x)\|$
for $x\in B_{E}$. Notice that $\widetilde{\tau_{\psi}}(x)\geq{\tau_{\psi}}(x)$
for any $x\in B_{E}$.
The boundedness and compactness of the composition operator
$C_{\psi}:\mathcal{B}(B_{E})\to\mathcal{B}(B_{E})$ given by
$C_{\psi}(f)=f\circ\psi$ were studied in [3]. In particular, the authors
proved that $C_{\psi}$ is bounded for any analytic map $\psi:B_{E}\to B_{E}$.
In addition, the authors proved that
$\|f\circ\psi\|_{\mathcal{I}}\leq\|f\|_{\mathcal{I}}$.
The following result will be used in Lemma 3.11:
###### Lemma 3.10.
Let $E$ be a complex Hilbert space and $f\in\mathcal{B}(B_{E})$. Then for any
$x\in B_{E}$:
$|f(x)-f(0)|\leq\|x\|\frac{\|f\|_{\mathcal{B}}}{1-\|x\|^{2}}.$
Proof. We have that:
$\displaystyle|f(x)-f(0)|=\left|\left(\int_{0}^{1}f^{\prime}(xt)dt\right)(x)\right|\leq\|x\|\left\|\int_{0}^{1}\frac{f^{\prime}(xt)(1-\|tx\|^{2})}{1-\|tx\|^{2}}dt\right\|\leq$
$\displaystyle\|x\|\|f\|_{\mathcal{B}}\int_{0}^{1}\left|\frac{1}{1-\|tx\|^{2}}\right|dt\leq\|x\|\|f\|_{\mathcal{B}}\int_{0}^{1}\frac{1}{1-\|x\|^{2}}dt=\|x\|\frac{\|f\|_{\mathcal{B}}}{1-\|x\|^{2}}$
and we are done. ∎
Since the semi-norms $\|\cdot\|_{\mathcal{R}}$, $\|\cdot\|_{\mathcal{I}}$ and
$\|\cdot\|_{\mathcal{B}}$ and their corresponding norms
$\|\cdot\|_{\mathcal{R}-Bloch}$, $\|\cdot\|_{\mathcal{I}-Bloch}$ and
$\|\cdot\|_{\mathcal{B}-Bloch}$ are equivalent, we can consider any of them in
order to study if $C_{\psi}:\mathcal{B}(B_{E})\to\mathcal{B}(B_{E})$ is
bounded below.
###### Lemma 3.11.
Let $E$ be a complex Hilbert space and let $\psi:B_{E}\to B_{E}$ be an
analytic map. The composition operator
$C_{\psi}:\mathcal{B}(B_{E})\to\mathcal{B}(B_{E})$ is bounded below if and
only if there exists $k>0$ such that:
$\|C_{\psi}(f)\|_{\mathcal{I}}\geq k\|f\|_{\mathcal{I}}.$
Proof. Suppose that $C_{\psi}$ is bounded below and let
$f\in\mathcal{B}(B_{E})$. There exists $k>0$ such that
$\|C_{\psi}(f)\|_{\mathcal{I}-Bloch}\geq k\|f\|_{\mathcal{I}-Bloch}$. Consider
$g(x)=f(x)-f(\psi(0))$. It is clear that $g(\psi(0))=0$ so:
$\displaystyle\|C_{\psi}(f)\|_{\mathcal{I}}=\|f\circ\psi\|_{\mathcal{I}}=\|g\circ\psi\|_{\mathcal{I}}=\|g\circ\psi\|_{\mathcal{I}-Bloch}\geq$
$\displaystyle k\|g\|_{\mathcal{I}-Bloch}\geq
k\|g\|_{\mathcal{I}}=k\|f\|_{\mathcal{I}}.$
Now suppose that $\|C_{\psi}(f)\|_{\mathcal{I}}\geq k\|f\|_{\mathcal{I}}$ for
some $0<k\leq 1$. We will prove that there exists $k^{\prime}>0$ such that
$\|C_{\psi}(f)\|_{\mathcal{I}-Bloch}\geq k^{\prime}\|f\|_{\mathcal{I}-Bloch}.$
By Lemma 3.10 we have:
$|f(\psi(0))-f(0)|\leq\|\psi(0)\|\frac{\|f\|_{\mathcal{B}}}{1-\|\psi(0)\|^{2}}$
so:
$\displaystyle|f(\psi(0))|\geq|f(0)|-\|\psi(0)\|\frac{\|f\|_{\mathcal{B}}}{1-\|\psi(0)\|^{2}}\geq|f(0)|-\frac{\|f\|_{\mathcal{I}}}{1-\|\psi(0)\|^{2}}.$
and we get:
$|f(\psi(0))|+\frac{1}{(1-\|\psi(0)\|^{2})}\|f\|_{\mathcal{I}}\geq|f(0)|.$
Hence:
$\displaystyle
k(1-\|\psi(0)\|^{2})|f(\psi(0))|+\|C_{\psi}(f)\|_{\mathcal{I}}\geq$
$\displaystyle k(1-\|\psi(0)\|^{2})|f(\psi(0))|+k\|f\|_{\mathcal{I}}\geq
k(1-\|\psi(0)\|^{2})|f(0)|$
so we conclude:
$\displaystyle
2(|f(\psi(0))|+\|C_{\psi}(f)\|_{\mathcal{I}})=2|f(\psi(0))|+\|C_{\psi}(f)\|_{\mathcal{I}}+\|C_{\psi}(f)\|_{\mathcal{I}}\geq$
$\displaystyle
k(1-\|\psi(0)\|^{2})|f(\psi(0))|+\|C_{\psi}(f)\|_{\mathcal{I}}+\|C_{\psi}(f)\|_{\mathcal{I}}\geq$
$\displaystyle k(1-\|\psi(0)\|^{2})|f(0)|+\|C_{\psi}(f)\|_{\mathcal{I}}\geq
k(1-\|\psi(0)\|^{2})(|f(0)|+\|C_{\psi}(f)\|_{\mathcal{I}})$
and we conclude:
$\displaystyle\|C_{\psi}(f)\|_{\mathcal{I}-Bloch}\geq\frac{k(1-\|\psi(0)\|^{2})}{2}\|f\|_{\mathcal{I}-Bloch}$
so we take $k^{\prime}=k(1-\|\psi(0)\|^{2})/2$ and we conclude that $C_{\psi}$
is bounded below as we wanted. ∎
#### 3.3.1. Study of the automorphisms $\varphi_{x}$
In order to study necessary and sufficient conditions for bounded below
composition operators, we need to provide several calculations related to the
automorphisms $\varphi_{x}$ of $B_{E}$ introduced in (1.1). It is well-known
that if $E$ is a finite dimensional Hilbert space, then $\varphi_{x}$ is an
involution (see Theorem 2.2.2 in [13]). Nevertheless, the proof of this result
makes use of Cartan’s uniqueness theorem. Our first result provides a
new proof of this assertion without the use of Cartan’s theorem; that is,
we prove that for any finite or infinite dimensional complex Hilbert space $E$
and any $x\in B_{E}$, the automorphism $\varphi_{x}:B_{E}\to B_{E}$ is an
involution:
###### Lemma 3.12.
Let $E$ be a complex Hilbert space and $x\in B_{E}$. Then $\varphi_{x}$ is an
involution, that is, $\varphi_{x}\circ\varphi_{x}=Id_{E}$.
Proof. By (1.1), we know that:
$\displaystyle\varphi_{x}(\varphi_{x}(y))=(s_{x}Q_{x}+P_{x})(m_{x}(\varphi_{x}(y)))=(s_{x}Q_{x}+P_{x})\left(\frac{x-\varphi_{x}(y)}{1-\langle\varphi_{x}(y),x\rangle}\right)$
and bearing in mind (see Lemma 3.6 in [11]) that:
$1-\langle\varphi_{x}(y),x\rangle=1-\langle\varphi_{x}(y),\varphi_{x}(0)\rangle=\frac{1-\|x\|^{2}}{1-\langle
y,x\rangle}$
then:
$\displaystyle\varphi_{x}(\varphi_{x}(y))=\frac{1-\langle
y,x\rangle}{1-\|x\|^{2}}(s_{x}Q_{x}+P_{x})\left(x-\varphi_{x}(y)\right)=$
$\displaystyle\frac{1-\langle
y,x\rangle}{1-\|x\|^{2}}\left((s_{x}Q_{x}+P_{x})(x)-(s_{x}Q_{x}+P_{x})((s_{x}Q_{x}+P_{x})(m_{x}(y)))\right).$
Since $P_{x}$ is an orthogonal projection and $Q_{x}$ is its orthogonal
complement, we have that $P_{x}\circ Q_{x}=Q_{x}\circ P_{x}=0$ and
$P_{x}+Q_{x}=Id_{E}$ and $P_{x}^{2}=P_{x}$, $Q_{x}^{2}=Q_{x}$ so:
$\displaystyle\varphi_{x}(\varphi_{x}(y))=\frac{1-\langle
y,x\rangle}{1-\|x\|^{2}}\left(x-(s_{x}^{2}Q_{x}+P_{x})\left(\frac{x-y}{1-\langle
y,x\rangle}\right)\right)=$ $\displaystyle\frac{1-\langle
y,x\rangle}{(1-\|x\|^{2})(1-\langle y,x\rangle)}\left((1-\langle
y,x\rangle)x-(s_{x}^{2}Q_{x}+P_{x})\left(x-y\right)\right)=$
$\displaystyle\frac{1}{(1-\|x\|^{2})}\left(x-\|x\|^{2}P_{x}(y)-x+(1-\|x\|^{2})Q_{x}(y)+P_{x}(y)\right)=$
$\displaystyle\frac{1}{(1-\|x\|^{2})}(1-\|x\|^{2})(P_{x}(y)+Q_{x}(y))=y$
and we are done. ∎
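Lemma 3.12 can also be verified numerically in $E=\mathbb{C}^{2}$. The snippet below is an illustration of ours; it assumes the expressions used in the proof, namely $m_{x}(y)=(x-y)/(1-\langle y,x\rangle)$, $P_{x}$ the orthogonal projection onto the span of $x$, $Q_{x}=Id_{E}-P_{x}$ and $s_{x}=\sqrt{1-\|x\|^{2}}$, with the inner product linear in its first argument.

```python
import numpy as np

rng = np.random.default_rng(4)

def phi(x, y):
    # phi_x(y) = (s_x Q_x + P_x)(m_x(y)), as in (1.1)
    s = np.sqrt(1 - np.linalg.norm(x) ** 2)
    m = (x - y) / (1 - np.vdot(x, y))                   # np.vdot(x, y) = <y, x>
    P = (np.vdot(x, m) / np.linalg.norm(x) ** 2) * x    # P_x(m), projection onto span{x}
    return s * (m - P) + P                              # s_x Q_x(m) + P_x(m)

err = 0.0
for _ in range(1000):
    v, w = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    x = v / np.linalg.norm(v) * rng.uniform(0.05, 0.95)
    y = w / np.linalg.norm(w) * rng.uniform(0.0, 0.95)
    err = max(err, np.linalg.norm(phi(x, phi(x, y)) - y))

print(err)  # numerically zero, so phi_x(phi_x(y)) = y
```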
###### Lemma 3.13.
If $x\in B_{E}$ then $\varphi_{x}^{\prime}(0)$ is an invertible operator and
$\varphi_{x}^{\prime}(0)^{-1}=\varphi_{x}^{\prime}(x)$.
Proof. By Lemma 3.12, it is clear that
$(\varphi_{x}\circ\varphi_{x})^{\prime}(0)=Id_{E}^{\prime}(0)=Id_{E}$ so:
$\displaystyle\varphi_{x}^{\prime}(\varphi_{x}(0))\circ\varphi_{x}^{\prime}(0)=\varphi_{x}^{\prime}(x)\circ\varphi_{x}^{\prime}(0)=Id_{E}$
and we conclude the result. ∎
As we mentioned in (1.4), $\|f\|_{\mathcal{I}}=\sup_{x\in
B_{E}}\|\widetilde{\nabla}f(x)\|$. Notice that for any $x\in B_{E}$ we have:
(3.10)
$\displaystyle\|\widetilde{\nabla}f(x)\|=\sup_{u\in\overline{B_{E}}}\|f^{\prime}(\varphi_{x}(0))\circ\varphi_{x}^{\prime}(0)(u)\|=\sup_{w\in
E\setminus\\{0\\}}\frac{|f^{\prime}(x)(w)|}{\|\varphi_{x}^{\prime}(0)^{-1}(w)\|}$
and for any $w\in E$ the expression $\|\varphi_{x}^{\prime}(0)^{-1}(w)\|^{2}$
is given (see [5]) by:
(3.11)
$\displaystyle\|\varphi_{x}^{\prime}(0)^{-1}(w)\|^{2}=\frac{(1-\|x\|^{2})\|w\|^{2}+|\langle
w,x\rangle|^{2}}{(1-\|x\|^{2})^{2}}.$
The authors also proved in [5] that:
(3.12)
$\displaystyle\|\widetilde{\nabla}f(x)\|^{2}=(1-\|x\|^{2})\left(\|\nabla
f(x)\|^{2}-|\mathcal{R}f(x)|^{2}\right).$
In order to simplify notation, for an analytic self-map $\psi:B_{E}\to B_{E}$,
$x\in B_{E}$ and $w\in E$ we will denote:
(3.13) $\displaystyle
B(x,w)=\|\varphi_{\psi(x)}^{\prime}(0)^{-1}(\psi^{\prime}(x)(w))\|$
$\displaystyle\mbox{and}\ C(x,w)=\|\varphi_{x}^{\prime}(0)^{-1}(w)\|.$
The following lemma only requires some easy calculations:
###### Lemma 3.14.
Let $E$ be a complex Hilbert space, $\psi:B_{E}\to B_{E}$ an analytic self-map
and $x\in B_{E}$. We have:
* a)
If $w\in E$ then:
(3.14) $\displaystyle\frac{\|w\|^{2}}{1-\|x\|^{2}}\leq
C(x,w)^{2}\leq\frac{\|w\|^{2}}{(1-\|x\|^{2})^{2}}$
and:
(3.15) $\displaystyle\frac{\|\psi^{\prime}(x)(w)\|^{2}}{1-\|\psi(x)\|^{2}}\leq
B(x,w)^{2}\leq\frac{\|\psi^{\prime}(x)(w)\|^{2}}{(1-\|\psi(x)\|^{2})^{2}}$
* b)
If there exists $w_{x}\in E$ satisfying
$\psi^{\prime}(x)(w_{x})=\|\psi^{\prime}(x)\|\psi(x)$ then:
(3.16)
$\displaystyle\frac{\|\psi^{\prime}(x)\|\|\psi(x)\|}{1-\|\psi(x)\|^{2}}=B(x,w_{x})\leq\frac{\|\psi^{\prime}(x)\|}{1-\|\psi(x)\|^{2}}$
and if, in addition, $w_{x}\neq 0$, then:
(3.17) $\displaystyle\ \ \ \ \ \ \
\frac{B(x,w_{x})}{C(x,w_{x})}\geq\tau_{\psi}(x)\frac{\|\psi(x)\|}{\|w_{x}\|}.$
Proof. To prove a), by (3.13) and (3.11) we have:
$\displaystyle C(x,w)^{2}=\frac{(1-\|x\|^{2})\|w\|^{2}+|\langle
w,x\rangle|^{2}}{(1-\|x\|^{2})^{2}}$
so:
$\displaystyle\frac{\|w\|^{2}}{(1-\|x\|^{2})}\leq
C(x,w)^{2}\leq\frac{\|w\|^{2}}{(1-\|x\|^{2})^{2}}$
where the last inequality is clear since $|\langle w,x\rangle|\leq\|w\|\|x\|$.
Hence we conclude inequalities (3.14). The proof of (3.15) follows the same
pattern.
To prove b), making calculations and bearing in mind the expression of
$B(x,w_{x})$ in (3.13) and (3.11) we have:
$\displaystyle
B(x,w_{x})^{2}=\frac{(1-\|\psi(x)\|^{2})\|\psi^{\prime}(x)(w_{x})\|^{2}+|\langle\psi^{\prime}(x)(w_{x}),\psi(x)\rangle|^{2}}{(1-\|\psi(x)\|^{2})^{2}}=$
$\displaystyle\frac{(1-\|\psi(x)\|^{2})\|\psi^{\prime}(x)\|^{2}\|\psi(x)\|^{2}+\|\psi(x)\|^{4}\|\psi^{\prime}(x)\|^{2}}{(1-\|\psi(x)\|^{2})^{2}}=$
$\displaystyle\frac{\|\psi^{\prime}(x)\|^{2}\|\psi(x)\|^{2}}{(1-\|\psi(x)\|^{2})^{2}}$
and we conclude inequality (3.16). This inequality together with inequality
(3.14) result in inequality (3.17) since:
$\displaystyle\frac{B(x,w_{x})}{C(x,w_{x})}\geq\frac{1-\|x\|^{2}}{1-\|\psi(x)\|^{2}}\frac{\|\psi^{\prime}(x)\|\|\psi(x)\|}{\|w_{x}\|}$
and we are done. ∎
From Lemma 3.14 we easily conclude:
###### Lemma 3.15.
For any $x\in B_{E}$ and $w\in E\setminus\\{0\\}$:
(3.18)
$\displaystyle\frac{B(x,w)}{C(x,w)}\leq\frac{\sqrt{1-\|x\|^{2}}}{1-\|\psi(x)\|^{2}}\left\|\psi^{\prime}(x)\left(\frac{w}{\|w\|}\right)\right\|$
and:
(3.19)
$\displaystyle\frac{B(x,w)}{C(x,w)}\geq\frac{1-\|x\|^{2}}{\sqrt{1-\|\psi(x)\|^{2}}}\left\|\psi^{\prime}(x)\left(\frac{w}{\|w\|}\right)\right\|$
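Since the paper states that these bounds follow easily from Lemma 3.14, a possible derivation (our reconstruction, not from the source) combines the one-sided bounds in (3.14) and (3.15):

```latex
% Upper bound (3.18): upper bound on B(x,w) from (3.15) and
% lower bound on C(x,w) from (3.14):
\frac{B(x,w)}{C(x,w)}
\leq \frac{\|\psi'(x)(w)\|}{1-\|\psi(x)\|^{2}}
     \cdot \frac{\sqrt{1-\|x\|^{2}}}{\|w\|}
= \frac{\sqrt{1-\|x\|^{2}}}{1-\|\psi(x)\|^{2}}
  \left\|\psi'(x)\!\left(\frac{w}{\|w\|}\right)\right\|.
% Lower bound (3.19): lower bound on B(x,w) from (3.15) and
% upper bound on C(x,w) from (3.14):
\frac{B(x,w)}{C(x,w)}
\geq \frac{\|\psi'(x)(w)\|}{\sqrt{1-\|\psi(x)\|^{2}}}
     \cdot \frac{1-\|x\|^{2}}{\|w\|}
= \frac{1-\|x\|^{2}}{\sqrt{1-\|\psi(x)\|^{2}}}
  \left\|\psi'(x)\!\left(\frac{w}{\|w\|}\right)\right\|.
```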
We will also need the following result:
###### Lemma 3.16.
Let $\psi:B_{E}\to B_{E}$ be an analytic map. Then for any $x\in B_{E}$ and
$w\in E\setminus\\{0\\}$ we have:
$\frac{B(x,w)}{C(x,w)}=\frac{\|\varphi_{\psi(x)}^{\prime}(0)^{-1}(\psi^{\prime}(x)(w))\|}{\|\varphi_{x}^{\prime}(0)^{-1}(w)\|}\leq
1.$
Proof. First suppose that $\psi(0)=0$ and consider the analytic function
$f:\mathbf{D}\to\mathbb{C}$ given by:
$f(z)=\frac{\langle\psi(zw),\psi^{\prime}(0)(w)\rangle}{\|\psi^{\prime}(0)(w)\|}.$
We suppose without loss of generality that $w$ belongs to the unit sphere
$S_{E}$ of $E$, since otherwise we can divide both numerator and denominator by
$\|w\|$. It is clear that $|f(z)|<1$ and $f(0)=0$. By the Schwarz lemma we
have that $|f^{\prime}(0)|=\|\psi^{\prime}(0)(w)\|\leq 1$ or equivalently,
$\|\psi^{\prime}(0)(w)\|\leq\|w\|$ for any $w\in E$. Consider
$\mu=\varphi_{\psi(x)}\circ\psi\circ\varphi_{x}$. Notice that $\mu$ is a well-
defined analytic self-map on $B_{E}$ and
$\mu(0)=\varphi_{\psi(x)}(\psi(x))=0$, so $\|\mu^{\prime}(0)(w)\|\leq\|w\|$.
So for any $w\in E$ we have:
$\|\varphi_{\psi(x)}^{\prime}(\psi(x))\circ\psi^{\prime}(x)\circ\varphi_{x}^{\prime}(0)(w)\|\leq\|w\|$
and since $\varphi_{x}^{\prime}(0)^{-1}$ is a bijection on $E$, take $v\in
E$ such that $w=\varphi_{x}^{\prime}(0)^{-1}(v)$ and we obtain for any $v\in
E$:
$\|\varphi_{\psi(x)}^{\prime}(\psi(x))\circ\psi^{\prime}(x)(v)\|\leq\|\varphi_{x}^{\prime}(0)^{-1}(v)\|$
By Lemma 3.13 we know that
$\varphi_{\psi(x)}^{\prime}(\psi(x))=\varphi_{\psi(x)}^{\prime}(0)^{-1}$ and
we conclude the result. ∎
The following corollary generalizes a result of Kalaj (see [10]) to the
infinite-dimensional setting:
###### Corollary 3.17.
Let $E$ be a complex Hilbert space and $\psi:B_{E}\to B_{E}$ an analytic map.
Then for any $x\in B_{E}$:
$\frac{1-\|x\|^{2}}{\sqrt{1-\|\psi(x)\|^{2}}}\|\psi^{\prime}(x)\|\leq 1$
Proof. It is sufficient to apply Lemma 3.16 and inequality (3.19) in Lemma
3.14. ∎
###### Remark 3.18.
Kalaj proved that Corollary 3.17 is sharp by considering for any
$t\in(0,\pi/2)$ the analytic self-map $\psi_{t}:B_{2}\to B_{2}$ given by
$\psi_{t}(z,w)=(z\sin t,\cos t)$ (see [10]).
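As a quick sanity check of sharpness (our computation, not taken from [10]), one can evaluate the left-hand side of Corollary 3.17 at the origin for $\psi_{t}$:

```latex
% For \psi_t(z,w) = (z\sin t, \cos t) at x = 0:
\psi_t(0,0) = (0,\cos t), \qquad
1-\|\psi_t(0)\|^{2} = 1-\cos^{2}t = \sin^{2}t,
% the derivative acts as
\psi_t'(0)(u,v) = (u\sin t,\,0), \qquad \|\psi_t'(0)\| = \sin t,
% so the bound of Corollary 3.17 is attained with equality:
\frac{1-\|0\|^{2}}{\sqrt{1-\|\psi_t(0)\|^{2}}}\,\|\psi_t'(0)\|
= \frac{\sin t}{\sqrt{\sin^{2}t}} = 1.
```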
#### 3.3.2. Main results on bounded below composition operators on
$\mathcal{B}(B_{E})$
We will apply results on the automorphisms $\varphi_{x}$ to the study of
bounded below composition operators. First we provide a necessary condition by
adapting the proof of Theorem 2 in [7]:
###### Theorem 3.19.
Let $E$ be a complex Hilbert space and $\psi:B_{E}\to B_{E}$ be an analytic
map. Suppose that $C_{\psi}:\mathcal{B}(B_{E})\to\mathcal{B}(B_{E})$ is
bounded below. Then there exist $\varepsilon>0$ and $0<r<1$ such that for every
$y\in B_{E}$ we have $\rho(\psi(x_{y}),y)\leq r$ for any $x_{y}\in B_{E}$
satisfying $\widetilde{\tau_{\psi}}(x_{y})\geq\varepsilon$.
Proof. Suppose that $C_{\psi}$ is bounded below. Let $y\in B_{E}$ and consider
the analytic function $f_{y}:B_{E}\to\mathbb{C}$ given by $f_{y}(x)=1/(1-\langle
x,y\rangle).$ It is easy to see that:
$f_{y}^{\prime}(x)=\frac{y}{(1-\langle x,y\rangle)^{2}}$
so:
$\displaystyle\|f_{y}\|_{\mathcal{B}}=\sup_{x\in
B_{E}}(1-\|x\|^{2})\|f_{y}^{\prime}(x)\|=\sup_{x\in
B_{E}}(1-\|x\|^{2})\frac{\|y\|}{|1-\langle x,y\rangle|^{2}}=$
$\displaystyle\sup_{x\in
B_{E}}\|y\|\frac{1-\|\varphi_{y}(x)\|^{2}}{1-\|y\|^{2}}=\frac{\|y\|}{1-\|y\|^{2}}.$
The analytic function $g_{y}:B_{E}\to\mathbb{C}$ given by
$g_{y}(x)=\displaystyle f_{y}(x)/\|f_{y}\|_{\mathcal{B}}$ satisfies
$\|g_{y}\|_{\mathcal{I}}\geq\|g_{y}\|_{\mathcal{B}}=1$. By Lemma 3.11, there
exists $k>0$ such that $\|g_{y}\circ\psi\|_{\mathcal{I}}\geq
k\|g_{y}\|_{\mathcal{I}}$ so bearing in mind that:
$\|g_{y}\circ\psi\|_{\mathcal{I}}=\sup_{x\in
B_{E}}\|\widetilde{\nabla}(g_{y}\circ\psi)(x)\|,$
there exists $x_{y}\in B_{E}$ such that
$\|\widetilde{\nabla}(g_{y}\circ\psi)(x_{y})\|\geq k/2$. So:
$\displaystyle\frac{k}{2}\leq\|\widetilde{\nabla}(g_{y}\circ\psi)(x_{y})\|=\sup_{w\in
E\setminus\\{0\\}}\|\widetilde{\nabla}(g_{y}\circ\psi)(x_{y})(w)\|=$
$\displaystyle\sup_{w\in
E\setminus\\{0\\}}\frac{|g_{y}^{\prime}(\psi(x_{y}))(\psi^{\prime}(x_{y})(w))|}{\|\varphi_{x_{y}}^{\prime}(0)^{-1}(w)\|}=$
(3.20) $\displaystyle\sup_{w\in
E\setminus\\{0\\}}\frac{|g_{y}^{\prime}(\psi(x_{y}))(\psi^{\prime}(x_{y})(w))|}{B(x_{y},w)}\frac{B(x_{y},w)}{C(x_{y},w)}\leq\|\widetilde{\nabla}g_{y}(\psi(x_{y}))\|\widetilde{\tau_{\psi}}(x_{y})$
where the last inequality is clear by (3.10) and (3.18) in Lemma 3.15. By (3.12)
we have that:
$\displaystyle\|\widetilde{\nabla}g_{y}(\psi(x_{y}))\|^{2}=(1-\|\psi(x_{y})\|^{2})(\|\nabla
g_{y}(\psi(x_{y}))\|^{2}-|\mathcal{R}g_{y}(\psi(x_{y}))|^{2})=$
$\displaystyle(1-\|\psi(x_{y})\|^{2})\frac{(1-\|y\|^{2})^{2}}{\|y\|^{2}}\left(\frac{\|y\|^{2}}{|1-\langle\psi(x_{y}),y\rangle|^{4}}-\frac{|\langle\psi(x_{y}),y\rangle|^{2}}{|1-\langle\psi(x_{y}),y\rangle|^{4}}\right)=$
$\displaystyle(1-\|\psi(x_{y})\|^{2})(1-\|y\|^{2})^{2}\frac{1-\left|\Big{\langle}\psi(x_{y}),\frac{y}{\|y\|}\Big{\rangle}\right|^{2}}{|1-\langle\psi(x_{y}),y\rangle|^{4}}.$
Notice that $|1-\langle a,b/\|b\|\rangle|\leq 2|1-\langle a,b\rangle|$ for any
$a,b\in B_{E}$ because:
$\displaystyle|1-\langle a,b/\|b\|\rangle|\leq|1-\langle a,b\rangle|+|\langle
a,b-b/\|b\|\rangle|\leq$ $\displaystyle|1-\langle
a,b\rangle|+1-\|b\|\leq|1-\langle a,b\rangle|+1-|\langle
a,b\rangle|=2|1-\langle a,b\rangle|.$
Since:
$1-\left|\Big{\langle}\psi(x_{y}),\frac{y}{\|y\|}\Big{\rangle}\right|^{2}\leq\left(1+\left|\Big{\langle}\psi(x_{y}),\frac{y}{\|y\|}\Big{\rangle}\right|\right)\left(1-\left|\Big{\langle}\psi(x_{y}),\frac{y}{\|y\|}\Big{\rangle}\right|\right)$
we have:
$\displaystyle\|\widetilde{\nabla}g_{y}(\psi(x_{y}))\|^{2}\leq
4(1-\|\psi(x_{y})\|^{2})(1-\|y\|^{2})\frac{1}{|1-\langle\psi(x_{y}),y\rangle|^{2}}=$
$\displaystyle
4(1-\|\varphi_{y}(\psi(x_{y}))\|^{2})=4(1-\rho(y,\psi(x_{y}))^{2}).$
Hence:
$\displaystyle\frac{k}{2}\leq
2(1-\rho(y,\psi(x_{y}))^{2})^{1/2}\widetilde{\tau_{\psi}}(x_{y})$
which is satisfied if and only if:
$\displaystyle\frac{k}{4}\leq(1-\rho(y,\psi(x_{y}))^{2})^{1/2}\widetilde{\tau_{\psi}}(x_{y})\leq\widetilde{\tau_{\psi}}(x_{y})$
and we conclude that $\widetilde{\tau_{\psi}}(x_{y})\geq\frac{k}{4}$.
From (3.20) we have:
$\frac{k}{2}\leq 2(1-\rho(y,\psi(x_{y}))^{2})^{1/2}\sup_{w\in
E\setminus\\{0\\}}\frac{B(x_{y},w)}{C(x_{y},w)}$
and by Lemma 3.16 we conclude:
$(1-\rho(y,\psi(x_{y}))^{2})^{1/2}\geq k/4$
which is satisfied if and only if:
$\rho(y,\psi(x_{y}))\leq\sqrt{1-k^{2}/16}.$
Notice that if we choose another $x_{y}^{\prime}$ satisfying
$\widetilde{\tau_{\psi}}(x_{y}^{\prime})\geq\frac{k}{4}$, then we have
$\rho(y,\psi(x_{y}^{\prime}))\leq\sqrt{1-k^{2}/16}$ by following the same
argument from (3.20). Take $r=\sqrt{1-k^{2}/16}$ and $\varepsilon=k/4$ and we
are done. ∎
Now we provide a sufficient condition for a composition operator
$C_{\psi}:\mathcal{B}(B_{E})\to\mathcal{B}(B_{E})$ to be bounded below. In
order to extend the classical result given in Theorem 3.9, we will add a
condition: we will need that $\psi(x_{y})$ belongs to the range of
$\psi^{\prime}(x_{y})$.
###### Theorem 3.20.
Let $E$ be a complex Hilbert space and $\psi:B_{E}\to B_{E}$ be an analytic
map. Suppose that there exist constants $0<r<\frac{1}{15A_{0}}$ and
$\varepsilon>0$ such that for each $y\in B_{E}$ there is a point
$x_{y}\in B_{E}$ satisfying $\rho(\psi(x_{y}),y)<r$ and
${\tau_{\psi}}(x_{y})>\varepsilon$. Suppose in addition that
$\psi(x_{y})\in\psi^{\prime}(x_{y})(E)$. Then
$C_{\psi}:\mathcal{B}(B_{E})\to\mathcal{B}(B_{E})$ is bounded below.
Proof. Let $f\in\mathcal{B}(B_{E})$ such that $\|f\|_{\mathcal{I}}=1$. We will
prove that there exists $k>0$ such that $\|f\circ\psi\|_{\mathcal{I}}\geq k$.
By (1.5) we have that $\|f\|_{\mathcal{R}}\geq\|f\|_{\mathcal{I}}/A_{0}$ so
$\|f\|_{\mathcal{R}}\geq 1/A_{0}$. Take $y\in B_{E}$ such that
$|\mathcal{R}f(y)|(1-\|y\|^{2})\geq 14/(15A_{0})$. Then there exists $x_{y}\in B_{E}$
such that $\rho(y,\psi(x_{y}))<r$ and $\tau_{\psi}(x_{y})>\varepsilon$. By
(1.4) and (3.10) and bearing in mind (3.13), we have that for any $w\in
E\setminus\\{0\\}$:
$\displaystyle\|f\circ\psi\|_{\mathcal{I}}=\sup_{x\in
B_{E}}\|\widetilde{\nabla}(f\circ\psi)(x)\|\geq$
$\displaystyle\frac{|(f\circ\psi)^{\prime}(x_{y})(w)|}{\|\varphi_{x_{y}}^{\prime}(0)^{-1}(w)\|}=\frac{|f^{\prime}(\psi(x_{y}))(\psi^{\prime}(x_{y})(w))|}{B(x_{y},w)}\frac{B(x_{y},w)}{C(x_{y},w)}.$
Since $\psi(x_{y})\in\psi^{\prime}(x_{y})(E)$, there exists $w_{x}\in E$ such
that $\psi^{\prime}(x_{y})(w_{x})=\|\psi^{\prime}(x_{y})\|\psi(x_{y})$ so, in
particular, the inequality above is true if we take $w_{x}$. By (3.16) in
Lemma 3.14 we have:
$\displaystyle\frac{|f^{\prime}(\psi(x_{y}))(\psi^{\prime}(x_{y})(w_{x}))|}{B(x_{y},w_{x})}=\frac{|f^{\prime}(\psi(x_{y}))(\|\psi^{\prime}(x_{y})\|\psi(x_{y}))|}{B(x_{y},w_{x})}=$
$\displaystyle\frac{\|\psi^{\prime}(x_{y})\||f^{\prime}(\psi(x_{y}))(\psi(x_{y}))|(1-\|\psi(x_{y})\|^{2})}{\|\psi^{\prime}(x_{y})\|\|\psi(x_{y})\|}=\frac{|\mathcal{R}f(\psi(x_{y}))|(1-\|\psi(x_{y})\|^{2})}{\|\psi(x_{y})\|}$
so:
$\displaystyle\|f\circ\psi\|_{\mathcal{I}}\geq\frac{|\mathcal{R}f(\psi(x_{y}))|(1-\|\psi(x_{y})\|^{2})}{\|\psi(x_{y})\|}\frac{B(x_{y},w_{x})}{C(x_{y},w_{x})}$
and by (3.17) in Lemma 3.14 we get:
$\displaystyle\|f\circ\psi\|_{\mathcal{I}}\geq\frac{|\mathcal{R}f(\psi(x_{y}))|(1-\|\psi(x_{y})\|^{2})}{\|\psi(x_{y})\|}\frac{\|\psi(x_{y})\|\tau_{\psi}(x_{y})}{\|w_{x}\|}\geq$
$\displaystyle|\mathcal{R}f(\psi(x_{y}))|(1-\|\psi(x_{y})\|^{2})\frac{\varepsilon}{\|w_{x}\|}.$
Using Corollary 3.5, we get:
$\displaystyle||\mathcal{R}f(\psi(x_{y}))|(1-\|\psi(x_{y})\|^{2})-|\mathcal{R}f(y)|(1-\|y\|^{2})|\leq
14\|f\|_{\mathcal{I}}\rho_{E}(\psi(x_{y}),y)$
and since $\|f\|_{\mathcal{I}}=1$, we have:
$\displaystyle\|f\circ\psi\|_{\mathcal{I}}\geq\left(|\mathcal{R}f(y)|(1-\|y\|^{2})-14\rho(\psi(x_{y}),y)\right)\frac{\varepsilon}{\|w_{x}\|}\geq$
$\displaystyle\left(\frac{14}{15A_{0}}-14r\right)\frac{\varepsilon}{\|w_{x}\|}$
so we take:
$k=\displaystyle
14\left(\frac{1}{15A_{0}}-r\right)\frac{\varepsilon}{\|w_{x}\|}>0$
and we conclude $\|C_{\psi}(f)\|_{\mathcal{I}}\geq k$. ∎
Finally we check that for any $a\in B_{E}$, the automorphisms $\varphi_{a}$ of
$B_{E}$ satisfy the conditions of Theorem 3.20. First, we prove the following
lemma, which asserts that $\tau_{\varphi_{a}}(x)\geq 1$ for any $x\in
B_{E}$.
###### Lemma 3.21.
If $a\in B_{E}$ then $\tau_{\varphi_{a}}(x)\geq 1$ for any $x\in B_{E}$.
Proof. By (1.3) we have:
$\displaystyle\frac{1-\|x\|^{2}}{1-\|\varphi_{a}(x)\|^{2}}=\frac{|1-\langle
x,a\rangle|^{2}}{1-\|a\|^{2}}$
and since $\varphi_{a}(x)=(P_{a}+s_{a}Q_{a})\left(m_{a}(x)\right)$, then:
$\displaystyle\varphi_{a}^{\prime}(x)=(P_{a}+s_{a}Q_{a})^{\prime}(m_{a}(x))\circ
m_{a}^{\prime}(x)=(P_{a}+s_{a}Q_{a})(m_{a}^{\prime}(x))$
so:
$\displaystyle\|\varphi_{a}^{\prime}(x)\|^{2}=\|P_{a}(m_{a}^{\prime}(x))\|^{2}+s_{a}^{2}\|Q_{a}(m_{a}^{\prime}(x))\|^{2}\geq\|P_{a}(m_{a}^{\prime}(x))\|^{2}.$
It is clear that:
$\displaystyle m_{a}^{\prime}(x)(y)=\frac{-(1-\langle x,a\rangle)y+\langle
y,a\rangle(a-x)}{(1-\langle x,a\rangle)^{2}}$
so:
$\displaystyle\|\varphi_{a}^{\prime}(x)\|\geq\|P_{a}(m_{a}^{\prime}(x))\|=\sup_{y\in\overline{B}_{E}}\|P_{a}(m_{a}^{\prime}(x))(y))\|\geq$
$\displaystyle\left\|P_{a}\left(m_{a}^{\prime}(x)\left(\frac{a}{\|a\|}\right)\right)\right\|=\left\|P_{a}\left(\frac{-(1-\langle
x,a\rangle)\frac{a}{\|a\|}+\langle\frac{a}{\|a\|},a\rangle(a-x)}{(1-\langle
x,a\rangle)^{2}}\right)\right\|$
and we deduce:
$\displaystyle\tau_{\varphi_{a}}(x)\geq\frac{|1-\langle
x,a\rangle|^{2}}{1-\|a\|^{2}}\left\|P_{a}\left(\frac{-(1-\langle
x,a\rangle)\frac{a}{\|a\|}+\langle\frac{a}{\|a\|},a\rangle(a-x)}{(1-\langle
x,a\rangle)^{2}}\right)\right\|=$
$\displaystyle\frac{1}{1-\|a\|^{2}}\left\|P_{a}\left(-(1-\langle
x,a\rangle)\frac{a}{\|a\|}+\langle\frac{a}{\|a\|},a\rangle(a-x)\right)\right\|=$
$\displaystyle\frac{1}{1-\|a\|^{2}}\left\|\left(-(1-\langle
x,a\rangle)\frac{a}{\|a\|}+\|a\|a-\frac{\langle
x,a\rangle}{\|a\|^{2}}\|a\|a\right)\right\|=$
$\displaystyle\frac{1}{1-\|a\|^{2}}\left\|-\frac{1-\|a\|^{2}}{\|a\|}a\right\|=1$
and we get $\tau_{\varphi_{a}}(x)\geq 1$ as we wanted. ∎
###### Remark 3.22.
For any $a\in B_{E}$, the automorphisms $\varphi_{a}$ of $B_{E}$ satisfy
the conditions of Theorem 3.20 since by Lemma 3.21 we have:
$\frac{1-\|x\|^{2}}{1-\|\varphi_{a}(x)\|^{2}}\|\varphi_{a}^{\prime}(x)\|\geq
1$
so we can take $\varepsilon=1$, any $r\in(0,\frac{1}{15A_{0}})$, and for any $y\in B_{E}$ we choose
$x_{y}=\varphi_{a}(y)$. In addition,
$\varphi_{a}(x_{y})=\varphi_{a}(\varphi_{a}(y))=y\in\varphi_{a}^{\prime}(x_{y})(E)$
since $\varphi_{a}^{\prime}(x_{y})$ is an invertible operator on $E$.
## References
* [1] R. M. Aron, P. Galindo and M. Lindström, _Connected components in the space of composition operators of $H^{\infty}$ functions of many variables_, Integr. equ. oper. theory 45 (2003), 1–14.
* [2] K. R. M. Attele, _Interpolating sequences for the derivatives of the Bloch functions_ , Glasgow Math. J. 34 (1992), 35–41.
* [3] O. Blasco, P. Galindo, M. Lindström and A. Miralles, _Composition operators on the Bloch space of the unit ball of a Hilbert space_ , Banach J. of Math. Anal. 11 n.2 (2017), 311–334.
* [4] O. Blasco, P. Galindo, M. Lindström and A. Miralles, _Interpolating sequences for weighted spaces of analytic functions on the unit ball of a Hilbert space_ , Rev. Mat. Complut. 32 n.1 (2019), 115–139.
* [5] O. Blasco, P. Galindo and A. Miralles, _Bloch functions on the unit ball of an infinite dimensional Hilbert space_ , J. Funct. Anal. 267 (2014), 1188–1204.
* [6] H. Chen, _Boundness from below of composition operators on the Bloch spaces_ , Sci. China Ser. A 46 n.6 (2003), 838–846.
* [7] F. Deng, L. Jiang and C. Ouyang, _Closed range composition operators on the Bloch space in the unit ball of $\mathbb{C}^{n}$_, Complex Var. Elliptic Equ. 52 n. 10-11 (2007), 1029–1037.
* [8] P. Ghatage, J. Yan and D. Zheng, _Composition operators with closed range on the Bloch space_ , Proc. Amer. Math. Soc. 129, 7 (2000), 2039–2044.
* [9] K. Goebel and S. Reich, _Uniform convexity, hyperbolic geometry, and nonexpansive mappings_ , Marcel Dekker, Inc., New York and Basel (1984).
* [10] D. Kalaj, _Schwarz lemma for holomorphic mappings in the unit ball_ , Glasg. Math. J. 60 n.1 (2018) 219–224.
* [11] A. Miralles, _Interpolating sequences for $H^{\infty}(B_{H})$_ , Quaest. Math. 39 n.6 (2016) 785–795.
* [12] J. Mujica, _Complex analysis in Banach spaces_ , Math. Studies 120, North-Holland, Amsterdam (1986).
* [13] W. Rudin, _Function theory in the unit ball of $C^{n}$, Reprint of the $1980$ edition_, Classics in Mathematics, Springer-Verlag, Berlin (2008).
* [14] R. M. Timoney, _Bloch functions in several complex variables I_ , Bull. London Math. Soc. 12 (1980), 241–267.
* [15] C. Xiong, _On the Lipschitz continuity of the dilation of Bloch functions_ , Per. Math. Hung. 47, 1–2 (2003), 233–238.
# Insulator-to-metal transition in the pyrochlore iridates series
(Eu1-xBix)2Ir2O7 probed using Hard X-ray Photoemission Spectroscopy
Prachi Telang, Kshiti Mishra, and Rabindranath Bag, Department of Physics,
Indian Institute of Science Education and Research, Pune, Maharashtra-411008,
India; A. Gloskovskii and Yu. Matveyev, Photon Science, Deutsches
Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany; Surjeet
Singh, Department of Physics and Center for Energy Science, Indian Institute
of Science Education and Research, Pune, Maharashtra-411008, India
###### Abstract
Eu2Ir2O7, a candidate Weyl semimetal, shows an insulator-to-metal transition
as a function of Bi substitution at the Eu site. In this work, we investigate
the (Eu1-xBix)2Ir2O7 series, in which substitution of the larger Bi3+ for Eu3+
is reported to result in an anomalous lattice contraction for $x\leqslant
0.035$, via Hard X-ray Photoelectron Spectroscopy (HAXPES). Using HAXPES we
confirm that all the cations retain their nominal valence state throughout the
series. The asymmetric nature of Bi core-level spectra for compositions in the
metallic region indicates that Bi contributes to the density of states at the
Fermi energy in this doping range. The valence band spectra show that the Bi
$6s$ peak is unaltered throughout the series and is situated deep within the
valence band. Instead, we argue that Bi $6p$–Ir $5d$ hybridisation drives the
insulator-to-metal transition.
## I Introduction
In pyrochlore oxides, A2B2O7, with a $5d$ transition metal ion (e.g., Ir4+) at
the B-site, the competing strengths of relativistic spin-orbit (SO)
interaction, on-site Coulomb repulsion and crystal-field splitting give rise
to non-trivial topological phases that have attracted considerable attention
in recent years WanWeyl ; Krempa2014 ; Wang2017 .
the $t_{2g}$ level of Ir${}^{4+}(d^{5})$ in an octahedral crystal field splits
into a completely-filled quadruplet (J${}_{\textit{eff}}=3/2$), and a higher-
lying half-filled doublet (J${}_{\textit{eff}}=1/2$). Thus, an effective
J${}_{\textit{eff}}=1/2$ resides on the frustrated pyrochlore lattice, which
is one of the key ingredients for realizing the interesting physical
properties predicted for these compounds. The pyrochlore iridates exhibit a
change of ground state from an antiferromagnetic (AFM) insulator for smaller
and heavier rare-earths (i.e., A$=$ Gd, Tb, Dy, Ho, Er and Yb, including Y) to
a correlated metal for the end member Pr2Ir2O7, which remains magnetically
unordered down to the lowest measurement temperature Tokiwa . The intermediate
members with A $=$ Nd, Sm and Eu, however, show a sharp metal-insulator
transition (MI) concomitant with the onset of AFM long-range ordering of Ir4+
moments Matsuhira2011 . The AFM state is of the all-in/all-out (AIAO) type, where all the
Ir moments point into or away from the center of the tetrahedron formed by the
nearest-neighbor Ir atoms. This strong dependence of the ground state on the
A-site ionic size is not so well understood. Also, while a variety of
experiments have pointed to indirect signatures of exotic electronic phases in
pyrochlore iridates (see for example Ref. [6; 7; 8]), there exists very little
direct experimental evidence. This underscores the importance of performing a
detailed electronic structure study to throw light on the dependence of ground
state properties on the A-site ionic radius or the lattice constant vis-à-vis
corresponding changes in the electronic structure.
From among these pyrochlores, Eu2Ir2O7 is of particular interest as it is the
only member in this series with a non-magnetic A-site and yet located in the
close proximity of the MI phase boundary. The non-magnetic nature of A-site is
a consequence of the fact that in an Eu3+ (f6) ion the spin ($S$) and orbital
angular momenta ($L$) compensate each other, leading to a $J=0$ ground state.
Recently, some of us reported the magnetic and transport properties of
(Eu1-xBix)2Ir2O7 series Telang2019 . Similar to Eu2Ir2O7, Bi2Ir2O7 also
crystallizes in the pyrochlore structure, with the nonmagnetic but larger
Bi3+ ion occupying the A-site. We found that, contrary to normal
expectations, substitution of the larger Bi ion for Eu results in an anomalous
lattice contraction for $x\leqslant 0.035$ without any change in the lattice
symmetry. In this region, the ground state remains insulating and the MI
transition, as well as the coincident AIAO ordering, becomes even more
pronounced. In the range $0.05\leqslant x\leqslant 0.1$, the MI or AIAO
transition is rapidly suppressed, and for $x\geqslant 0.1$ a metallic ground
state, akin to Pr2Ir2O7, persists Telang2019 . While the transition from
insulator to metal with Bi doping is not unexpected, the manner in which it
occurs is rather interesting. In particular, we were intrigued by the
observation of a large ($\simeq 20\%$) negative lattice expansion, against
which the ground state of Eu2Ir2O7 remains robust. This motivated us to
investigate and understand what drives the insulator-to-metal transition in
this series, and what is the underlying reason for the anomalous lattice
contraction. In particular, one of the questions we are asking is: Could the
observed behavior be driven by a valence change of the Eu or Bi ions? Here, we
study the oxidation state of cations and valence band spectra for different
compositions using HAXPES, which is a powerful technique with a significant
probing depth to overcome the surface effects, and hence the spectra obtained
are representative of the bulk material. We establish that the anomalous
lattice contraction in the range $0\leqslant x\leqslant 0.035$ is not caused
by a change in the oxidation state of either Bi or Eu. From the asymmetric
shape of core-level spectra for Bi and Ir and restructuring of the valence
band spectra in the metallic region, we argue that Bi $6p$–Ir $5d$
hybridisation drives the insulator-to-metal transition in this series.
## II Experimental Details
All the samples in the series (Eu1-xBix)2Ir2O7 were synthesized in air using
the solid-state synthesis method, similar to the previous report Telang2019 . The
phase purity of all the studied samples and their structural characterization
was done using high-quality synchrotron data Telang2019 . As shown in Ref. 9,
the low-temperature resistivity of samples ($0\leqslant x\leqslant 0.035$) in
the insulating region is of the order of $10^{3}$ m$\Omega$ cm; it decreases to
about $10$ m$\Omega$ cm ($0.05\leqslant x\leqslant 0.1$) in the cross-over
region, and finally to $1$ m$\Omega$ cm in the metallic region. HAXPES
measurements were carried out at the P22 beamline XPS of PETRA III (DESY)
utilizing a Si$(311)$ double-crystal monochromator. The excitation energy was
$E=5$ keV and the beam size at the sample was $40\times 20~{}\mu$m2. Spectra were
acquired with Specs $225$ HV analyzer, the overall energy resolution was set
to $0.28$ eV.
## III Results & discussion
### III.1 Europium core-level spectra
We start by presenting the core-level spectra for Europium. Similar to the
rare-earth elements Ce, Pr, Sm, and Yb, Eu can also exhibit variable oxidation
states. In the case of Eu, apart from the more commonly observed Eu3+
(f${}^{6},J=0$) oxidation state, Eu2+ (f${}^{7},J=7/2$) oxidation state is
also known to exist in some compounds Mercier . Here, we probe the Eu $3$d
core-level in the binding energy ranging from $1100$ to $1200$ eV. Fig. 1
shows the $3d$ core-level spectra of Eu for various Bi doping concentrations.
The precise values of the binding energies of the various peaks are given in Table 1.
Figure 1: 3d core-level spectra of Europium in the series (Eu1-xBix)2Ir2O7. The
spectra consist of the standard 3d3/2 and 3d5/2 peaks along with four
satellite peaks denoted by S1, S2, S3, and S4. The shape and intensity of the
peaks remain unchanged across the series, implying a lack of participation of Eu
orbitals in driving the insulator-to-metal transition.
Due to the spin-orbit splitting, the $3$d doublet corresponding to $3$d3/2 and
$3$d5/2 is observed. These two peaks are separated in energy by $29.7$ eV, and
are located at $\sim 1163.6$ eV ($3$d3/2) and $\sim 1133.9$ eV ($3$d5/2).
Additionally, we also observe four small satellite peaks located at $1127.9$
eV (S1), $1142.4$ eV (S2), $1159$ eV (S3), and $1168.1$ eV (S4). From Fig. 1,
one can see that the peak shape and peak positions remain unaltered with
changing Bi doping concentration within the energy resolution of the detector.
Also, the values of the peak binding energies agree well with those reported for
Eu3+ [12]. This shows that the Eu oxidation state, as well as the chemical environment,
is not affected by Bi substitution. The presence of satellite peaks can be
attributed to the final state effect which arises due to the fact that in
rare-earths, the empty 4f-subshells are lowered in energy through their
interaction with the photoionized hole states. These 4f-subshells, thus, can
trap the electrons, which leads to the appearance of “shake-up” or “shake-
down” satellite peaks, depending on whether the electrons gain or lose energy,
respectively. In europium compounds, the degeneracy of $4$f65d1 and $4$f75d0
configurations leads to the observation of both features in the final-state
Mercier . Moreover, the absence of any peak corresponding to divalent Eu in the d
region, where the divalent and trivalent peaks are well separated (data not
shown for brevity), confirms that the additional peaks correspond to
satellite peaks rather than divalent Eu. Magnetic susceptibility data further
supports this conclusion as for all compositions the measured susceptibility
conforms to the trivalent nature of the Eu ion Telang .
Table 1: Eu 3d core-level peaks for various compositions ($x$) of the series
(Eu1-xBix)2Ir2O7. In addition to the core-level lines, there are four
satellite peaks corresponding to S1, S2, S3, and S4.
$x$ | 3d3/2 | 3d5/2 | S1 | S2 | S3 | S4
---|---|---|---|---|---|---
$0$ | $1163.6$ | $1133.9$ | $1127.9$ | $1142.4$ | $1159.0$ | $1168.1$
$0.02$ | $1163.6$ | $1133.9$ | $1127.9$ | $1142.3$ | $1158.8$ | $1168.0$
$0.035$ | $1163.7$ | $1134.0$ | $1127.8$ | $1142.5$ | $1159.0$ | $1168.1$
$0.05$ | $1163.6$ | $1133.9$ | $1127.9$ | $1142.3$ | $1159.0$ | $1168.1$
$0.1$ | $1163.6$ | $1133.9$ | $1127.9$ | $1142.4$ | $1158.8$ | $1167.9$
$0.25$ | $1163.8$ | $1134.0$ | $1127.9$ | $1142.5$ | $1159.0$ | $1168.0$
$0.5$ | $1163.9$ | $1134.0$ | $1127.9$ | $1142.3$ | $1158.8$ | $1168.0$
### III.2 Iridium core-level spectra
The photoemission spectra of a number of Ir based compounds exhibit a distinct
asymmetric shape, with satellite peaks associated with the core lines
appearing in their neighborhood Kahk ; Kennedy02 ; Freakley2017 . In previous
XPS studies on pyrochlore iridates, these satellites were often attributed to
the presence of higher oxidation states of Ir Kennedy02 . However, in a recent
study on iridium oxide (IrO2), Kahk et al. Kahk reinvestigated this issue
using HAXPES and density-functional calculations. They showed that the peaks
at higher binding energy do not originate from higher oxidation
states. They ascribed these peaks to the final state effect, which we will
discuss after showing our results and pointing out similarities and
differences between our data and XPS or HAXPES data published previously.
Fig. 2 shows Ir 4f core-level photoelectron spectra for various samples in the
series. The spin-orbit coupling results in peaks corresponding to 4f5/2 and
4f7/2. However, each of these peaks is accompanied by a satellite peak at
slightly lower binding energy. Altogether, we find that a complete
deconvolution of the Ir photoelectron spectra requires five different peaks,
which includes an additional satellite peak (S1) at relatively higher binding
energy. Appropriate constraints corresponding to FWHM and area ratio were
imposed while fitting the spin-orbit doublets. The obtained binding energies
across the series for the twin Ir 4f5/2 peaks fall in the energy range from
$\approx 65.0$ eV to $\approx 65.9$ eV (4f5/2 (I)) and from $\sim 64.4$ eV to
$\approx 64.6$ eV (4f5/2 (II)), whereas the Ir 4f7/2 peaks ranged from
$\approx 62.1$ eV to $\approx 63$ eV (4f7/2 (I)) and $61.5$ eV to $61.7$ eV
(4f7/2 (II)) for all the compositions. The additional satellite peak S1 is
observed around $\approx 66.6$ eV for all the compositions (see Table 2 for
details).
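The constrained deconvolution described above (spin-orbit doublets fitted with FWHM and area-ratio constraints) can be illustrated with a minimal sketch. This is not the authors' fitting code: the Gaussian line shape, the 3 eV splitting, the 4:3 (from 2j+1 degeneracy) 4f7/2:4f5/2 area ratio, and all numerical values below are illustrative assumptions, and real Ir 4f spectra would additionally need the asymmetric satellite components discussed in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir4f_doublet(E, amp, pos, fwhm, split=3.0, ratio=4 / 3):
    """Gaussian Ir 4f spin-orbit doublet (illustrative model only).

    `pos`/`amp` describe the 4f7/2 line; its 4f5/2 partner is shifted by
    `split` eV with its amplitude fixed by the 4f7/2:4f5/2 = 4:3 area ratio.
    """
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))  # convert FWHM to Gaussian sigma
    peak = lambda e0, a: a * np.exp(-((E - e0) ** 2) / (2 * sigma**2))
    return peak(pos, amp) + peak(pos + split, amp / ratio)

# Synthetic "spectrum": a doublet near 61.7 eV plus noise (values made up).
E = np.linspace(58.0, 70.0, 600)
rng = np.random.default_rng(0)
y = ir4f_doublet(E, amp=1.0, pos=61.7, fwhm=1.1) + rng.normal(0, 0.01, E.size)

# Only amp/pos/fwhm are free parameters; `split` and `ratio` stay fixed at
# their defaults, mimicking the constraints imposed during fitting.
popt, _ = curve_fit(ir4f_doublet, E, y, p0=[0.8, 61.0, 1.0])
```

Fixing `split` and `ratio` here plays the role of the FWHM and area-ratio constraints mentioned in the text; a real analysis would also include a background and the extra screened/unscreened satellite components.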
In the insulating region ($0\leq x\leq 0.035$) the spectra remain
qualitatively unchanged, yielding a satisfactory fit using the five peaks.
Upon entering the crossover region ($0.05\leq x\leq 0.1$), a slight
increase in the intensity around $61.5$ eV and $64.5$ eV is observed but the
spectra could still be fitted using the five peaks as before. In the core-
level spectra for $x~{}\geqslant~{}0.1$ (i.e. for the highly metallic
samples), the peak shape becomes increasingly asymmetric, mandating inclusion
of two additional peaks to achieve a satisfactory fit. These peaks are labeled
as S2 and S3 and they appear around $63$ eV and $66$ eV, respectively. Also,
the components $4$f5/2(II) and $4$f7/2(II) of the spin-orbit doublet, that are
broad and less intense up to $x=0.035$, dominate the spectra for the metallic
($x$ $\geq$ $0.1$) samples. On the other hand, the components $4$f5/2(I) and
$4$f7/2(I) dominant in the insulating region but are significantly suppressed
in the metallic region. The HAXPES data of our Bi2Ir2O7 agrees very well with
that reported previouslySardar ; Sun , which rules out any extrinsic origin
for these additional satellite peaks that are observed only for the metallic
samples.
Figure 2: 4f core-level photoemission spectra of Iridium for different
compositions in the series (Eu1-xBix)2Ir2O7. The spectrum shows significant
changes in both shape and intensity of the peaks across the insulator-to-metal
transition as discussed in the text.
The changes upon going from insulating to metallic regime outlined above are
analogous to those previously observed for some insulating and metallic
ruthenate pyrochlores Cox . A qualitative understanding of these changes can
be gained by considering that when a photoelectron leaves the core-level, the
core-level gets ionized and creates an electrostatic perturbation at the
ionized center. If the core-valence coulomb interaction exceeds the width of
the one-electron conduction band, the ionized core will disengage one of the
valence orbitals of the ionized atom from the conduction band Campagna ;
Beatham . The resulting localized atomic state lies below the Fermi energy,
and can trap the emitted electron. Thus, photoelectrons with two different
energies are detected, depending on whether the photoelectron was trapped
(screened) or not (unscreened) in the localized energy state. This is also in
agreement with a comprehensive DMFT study Kim on various ruthenates with
varying degrees of electronic correlations, where it was shown that the PE
spectra exhibit a twin-peak structure corresponding to the “screened” and
“unscreened” components. Further, it was shown that the screened peak
disappears in the Mott insulating state, but progressively develops as the
band width increases and the sample turns metallic. Similar changes in our Ir 4f
core spectra are suggestive of a progressive conduction-band widening as the
Bi doping concentration increases, driving the system into a metallic state. To
summarize this section, our experimental results find an excellent match with
previous XPS studies on pyrochlore iridates Pfeifer ; Sun ; Yang and show
that throughout the series iridium retains its ${+4}$ oxidation state and the
complex nature of the measured spectra arises, at least partly, due to
“screened” and “unscreened” components rather than due to the presence of
different oxidation states of Ir.
Table 2: Ir 4f core-level peaks for all the measured compositions ($x$) in the series (Eu1-xBix)2Ir2O7. In addition to the core-level peaks corresponding to 4f5/2(I & II) and 4f7/2(I & II), there are additional satellite peaks named S1, S2, and S3. The peaks S2 and S3 were required when fitting samples deep in the metallic regime, i.e. $x\geq 0.25$, and can be ascribed to the final state effect (see text for details).
$x$ | $4f_{5/2}$(I) | $4f_{5/2}$(II) | $4f_{7/2}$(I) | $4f_{7/2}$(II) | S1 | S2 | S3
---|---|---|---|---|---|---|---
$0$ | $65.88$ | $64.65$ | $62.95$ | $61.72$ | $66.53$ | - | -
$0.02$ | $65.87$ | $64.60$ | $62.94$ | $61.68$ | $66.50$ | - | -
$0.035$ | $65.86$ | $64.59$ | $62.93$ | $61.66$ | $66.71$ | - | -
$0.05$ | $65.86$ | $64.54$ | $62.93$ | $61.60$ | $66.58$ | - | -
$0.1$ | $65.84$ | $64.55$ | $62.95$ | $61.55$ | $66.57$ | - | -
$0.25$ | $65.02$ | $64.52$ | $62.04$ | $61.59$ | $66.50$ | $66.05$ | $63.30$
$0.5$ | $64.86$ | $64.42$ | $61.93$ | $61.49$ | $66.50$ | $65.95$ | $63.17$
$1$ | $65.04$ | $64.59$ | $62.11$ | $61.66$ | $66.49$ | $66.04$ | $63.22$
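As a quick consistency check on the fitted values in Table 2, the 4f spin-orbit splitting can be computed for both the screened (I) and unscreened (II) pairs; a minimal Python sketch using the $x=0$ row is shown below. That the two pairs share the same splitting ($\approx 2.93$ eV) is consistent with the interpretation that the twin peaks are screened/unscreened features of a single Ir oxidation state rather than distinct valences.

```python
# Spin-orbit splitting of the Ir 4f doublet, computed from the binding
# energies (in eV) listed in the x = 0 row of Table 2.
peaks_x0 = {
    "4f5/2(I)": 65.88,
    "4f5/2(II)": 64.65,
    "4f7/2(I)": 62.95,
    "4f7/2(II)": 61.72,
}

# Splitting between the 4f5/2 and 4f7/2 components, for the screened (I)
# and unscreened (II) pairs separately.
splitting_I = round(peaks_x0["4f5/2(I)"] - peaks_x0["4f7/2(I)"], 2)
splitting_II = round(peaks_x0["4f5/2(II)"] - peaks_x0["4f7/2(II)"], 2)

print(splitting_I, splitting_II)  # → 2.93 2.93
```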
## V Bismuth core-level spectra
Figure 3: 4f core-level spectra of Bismuth for various compositions ($x$) of
the series (Eu1-xBix)2Ir2O7. Changes in peak shape and peak intensity with Bi
doping are due to the insulator-to-metal transition, as discussed in the text.
Next, we probe the Bi 4f core-level to gain knowledge about its valence state.
Bi can take valence states ranging from -3 to +5 Whitmire . The spectra,
plotted in Fig. 3, show interesting changes with increasing Bi concentration.
For all the compositions, the spectra could be deconvoluted using four peaks,
two each for the spin-orbit doublets 4f5/2 and 4f7/2. This is similar to
reports on other Bi-based pyrochlore iridates, where such twin peaks are not
observed for insulating samples like Bi2O3 but are a typical feature of
conducting samples, in which hybridization, as discussed further below, plays a
role Cox ; Sardar ; Sun . The peak position remains unchanged within the
energy resolution throughout the series and matches well with previous reports
for Bi3+ Abdullah . However, the relative peak intensity and peak shape
exhibit significant changes upon going from insulating to metallic
compositions. The peak positions for all the observed peaks are listed in
Table 3.
Table 3: Bi 4f core-level peaks (in eV) for all the measured compositions ($x$) of the
series (Eu1-xBix)2Ir2O7. E denotes an extraneous feature in the spectra,
which was held fixed at the same value while fitting the spectra for all the
compositions.
$x$ | $4f_{5/2}$(I) | $4f_{5/2}$(II) | $4f_{7/2}$(I) | $4f_{7/2}$(II) | E
---|---|---|---|---|---
$0.02$ | $164.48$ | $163.42$ | $159.18$ | $158.12$ | $166.30$
$0.035$ | $164.65$ | $163.39$ | $159.35$ | $158.09$ | $166.30$
$0.05$ | $164.47$ | $163.62$ | $159.17$ | $158.32$ | $166.30$
$0.1$ | $164.32$ | $163.50$ | $159.02$ | $158.20$ | $166.30$
$0.25$ | $164.32$ | $163.50$ | $159.02$ | $158.20$ | $166.30$
$0.5$ | $164.26$ | $163.47$ | $158.96$ | $158.17$ | $166.30$
$1$ | $164.36$ | $163.50$ | $159.06$ | $158.20$ | $166.30$
For the insulating compositions ($x=0.02$ and $0.035$), the peak shape is
rather symmetric and the intensity of the components at lower binding energy
(designated by 4f5/2(I) and 4f7/2(I)) is small compared to the higher binding
energy components (designated by 4f5/2(II) and 4f7/2(II)). On the other hand,
the peak shape becomes asymmetric and the intensity of 4f5/2(I) and 4f7/2(I)
increases significantly upon entering the metallic regime, and at the same
time the intensity of 4f5/2(II) and 4f7/2(II) decreases.
Figure 4: (a)Valence band spectra for different compositions of the series
(Eu1-xBix)2Ir2O7. The spectra show significant changes in shape and intensity
upon Bi substitution. (b) The region of the VBS in the vicinity of Fermi
energy is shown on an enlarged scale.
Apart from the standard Bi 4f doublets, we also observed an additional peak at
around $166.3$ eV. As a peak at the same energy is observed even for Eu2Ir2O7
(not shown here), which does not contain any Bi, we ascribe this feature to
some extrinsic contribution. The peak position was kept fixed while fitting
the spectra for all the compositions. Concerning the asymmetry of the core
lines, the core photoelectron spectra of simple metals are known to display
characteristic asymmetry due to electron-hole excitation in the final state
Cox ; Wertheim . Further, for metallic samples, it has been shown that the
asymmetry of the core peak of a constituting element is proportional to the
square of the partial density of states at the Fermi energy provided by the
valence orbitals of that element Folmer . The increasing asymmetry of the core
doublets in the Bi XPS spectra with insulator-to-metal transition suggests
increasing contribution due to Bi at the Fermi energy.
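The proportionality invoked here (following Folmer) can be written schematically as below, where the notation is ours: $\alpha_i$ is the asymmetry (singularity) index of the core line of element $i$, and $\rho_i(E_F)$ is the partial density of states at the Fermi energy contributed by the valence orbitals of that element.

```latex
\alpha_i \;\propto\; \left[\rho_i(E_F)\right]^{2}
```

The growing asymmetry of the Bi 4f doublets across the series is then read as a growing $\rho_{\mathrm{Bi}}(E_F)$.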
## VI Valence Band Spectra
We measured the valence band spectra (VBS) of all the samples to gain further
insight into the insulator-to-metal transition with Bi substitution for Eu in
Eu2Ir2O7. The VBS represents all the contributing bands near the Fermi level (EF),
which is marked at $0$ eV in the spectra. The Fermi energy in our
experiments was calibrated by measuring the VBS for a gold metal foil. In the
VBS shown in Fig. 4, normalized intensities are plotted. The normalization is
done by dividing the measured intensity by the total integrated intensity for
a given composition. The region from $0$ eV to $9$ eV is expected to be
dominated by the $5d$ orbitals of Ir Sun . On the other hand, the peak in the VBS near
$11$ eV, which becomes progressively more pronounced as the Bi doping
increases, can be attributed to Bi($6s^{2}$), as reported, for example, for
Bi2O3 Walsh2006 . Since the $6s$ contribution is embedded deep below the Fermi
energy, and its position remains invariant with Bi-doping, we can infer that
Bi $6s$ hybridizes only weakly with Ir $5d$ orbitals. This suggestion is in
line with a previous photoemission and first-principles study on the Bi-based
pyrochlore ruthenate Bi2Ru2O7 Hsu . Any significant contribution from O($2p$)
in the measured VBS can also be ruled out due to its small photoionization
cross-section at $5$ keV incident radiation. Now, with the $6s$ contribution due to
Bi located deep within the valence band at 11 eV, the significant changes in
the VBS near the Fermi energy on Bi doping, highlighted in Fig. 4b, can be
attributed to possible hybridization between Ir($5d$) and Bi($6p$) orbitals, in
agreement with Hsu . A closer look near the Fermi energy (Fig. 4b)
reveals that the spectral weight gradually increases at the Fermi energy with
increasing Bi doping, consistent with the observation that the electrical
conductivity increases with Bi concentration, i.e., the samples become
increasingly conducting. In addition, a shoulder peak appears near the Fermi
energy, indicated with an arrow in Fig. 4b, for $x\geq 0.1$, which is where the
insulator-to-metal transition takes place in the series.
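The normalization applied to the VBS in Fig. 4, dividing each spectrum by its total integrated intensity, can be sketched as follows; the array names are ours and the example spectrum is a toy input, not measured data.

```python
import numpy as np

def normalize_vbs(binding_energy, intensity):
    """Normalize a valence-band spectrum by its total integrated intensity
    (trapezoidal rule), so spectra of different compositions are comparable."""
    widths = np.diff(binding_energy)
    total = np.sum(0.5 * (intensity[1:] + intensity[:-1]) * widths)
    return intensity / total

# Toy example: a constant spectrum over a 9 eV window integrates to 9 * I0,
# so every normalized point equals 1/9.
E = np.linspace(0.0, 9.0, 901)
I = np.full_like(E, 5.0)
I_norm = normalize_vbs(E, I)
print(round(float(I_norm[0]), 4))  # → 0.1111
```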
## VII Conclusion
We present the core-level spectra of the cations and valence band spectra for
all compositions in the series (Eu1-xBix)2Ir2O7. We observed that the peak
positions in the core-level spectra of all the cations vary only within the
energy resolution of the spectrometer. This shows that all the cations retain
their ideal valence states and, importantly, that the anomalous lattice contraction
for $x\leq 0.035$ in the series (Eu1-xBix)2Ir2O7 is not a result of a varying
oxidation state of any of the cations involved. However, the relative peak
intensity and the peak shape vary considerably in the core-level spectra of
both Ir and Bi.
Going from insulating compositions at low Bi doping to metallic compositions
at high Bi doping, the core-level spectra of Ir and Bi show pronounced
asymmetry. The asymmetries of the Bi 4f and Ir 4f core signals provide direct
evidence of an increasing partial density of states from these two elements at
the Fermi energy. In addition, the appearance of the Bi 6s peak in the valence
band spectra, without any change in shape or peak position, further supports
our conjecture that the metallicity of compositions above $x\gtrsim 0.1$ is driven
by hybridization of the Bi $6p$ and Ir $5d$ orbitals.
Our study elucidates the mechanism of the metal-insulator transition in the series
(Eu1-xBix)2Ir2O7. This systematic study tracks the changes in the core lines
of the cations as isovalent substitution of Bi results in significant
changes in physical and structural properties. Apart from ruling out the
possibility of variable oxidation states, our study clarifies the debated role
of Bi ($6s/6p$) hybridization in Bi-based metallic pyrochlore oxides. As Bi
substitution preserves the Ir$^{4+}$ sublattice, it offers an avenue to realize
non-trivial and possibly exotic metallic ground states in pyrochlore iridates
and ruthenates.
## Acknowledgments
SS and PT are thankful to Prof. Sugata Ray for fruitful discussions. The
authors acknowledge financial support from DST/SERB India under grant nos.
EMR/2016/003792/PHY and SR/NM/TP-13/2016. PT and SS thank DST for financial
support to perform experiments at Petra III, DESY. Funding for the HAXPES
instrument at beamline P22 by the Federal Ministry of Education and Research
(BMBF) under contracts 05KS7UM1 and 05K10UMA with Universität Mainz; 05KS7WW3,
05K10WW1 and 05K13WW1 with Universität Würzburg is gratefully acknowledged.
## References
* (1) X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Phys. Rev. B 83, 205101 (May 2011), http://link.aps.org/doi/10.1103/PhysRevB.83.205101.
* (2) W. Witczak-Krempa, G. Chen, Y. B. Kim, and L. Balents, Annual Review of Condensed Matter Physics 5(1), 57 (2014).
* (3) R. Wang, A. Go, and A. J. Millis, Phys. Rev. B 95, 045133 (Jan 2017).
* (4) Y. Tokiwa, J. J. Ishikawa, S. Nakatsuji, and P. Gegenwart, Nature Materials 13, 356 (2014).
* (5) K. Matsuhira, M. Wakeshima, Y. Hinatsu, and S. Takagi, Journal of the Physical Society of Japan 80(9), 094701 (2011), http://dx.doi.org/10.1143/JPSJ.80.094701.
* (6) F. F. Tafti, J. J. Ishikawa, A. McCollam, S. Nakatsuji, and S. R. Julian, Phys. Rev. B 85, 205104 (May 2012).
* (7) A. B. Sushkov, J. B. Hofmann, G. S. Jenkins, J. Ishikawa, S. Nakatsuji, S. Das Sarma, and H. D. Drew, Phys. Rev. B 92, 241108 (Dec 2015).
* (8) K. Ueda, T. Oh, B. J. Yang, R. Kaneko, J. Fujioka, N. Nagaosa, and Y. Tokura, Nature Communications 8, 15515 (May 2017).
* (9) P. Telang, K. Mishra, G. Prando, A. K. Sood, and S. Singh, Phys. Rev. B 99, 201112 (May 2019), https://link.aps.org/doi/10.1103/PhysRevB.99.201112.
* (10) C. Schlueter, A. Gloskovskii, K. Ederer, I. Schostak, S. Piec, I. Sarkar, Y. Matveyev, P. Lömker, M. Sing, R. Claessen, _et al._ , AIP Conference Proceedings 2054(1), 040010 (2019), https://aip.scitation.org/doi/abs/10.1063/1.5084611.
* (11) F. Mercier, C. Alliot, L. Bion, N. Thromat, and P. Toulhoat, Journal of Electron Spectroscopy and Related Phenomena 150(1), 21 (2006), http://www.sciencedirect.com/science/article/pii/S0368204805004366.
* (12) C. Caspers, M. Müller, A. X. Gray, A. M. Kaiser, A. Gloskovskii, C. S. Fadley, W. Drube, and C. M. Schneider, Phys. Rev. B 84, 205217 (Nov 2011).
* (13) P. Telang, K. Mishra, A. K. Sood, and S. Singh, Phys. Rev. B 97, 235118 (Jun 2018).
* (14) J. M. Kahk, C. G. Poll, F. E. Oropeza, J. M. Ablett, D. Céolin, J.-P. Rueff, S. Agrestini, Y. Utsumi, K. D. Tsuei, Y. F. Liao, _et al._ , Phys. Rev. Lett. 112, 117601 (Mar 2014).
* (15) B. J. Kennedy, Physica B: Condensed Matter 241, 303 (1997).
* (16) S. J. Freakley, J. Ruiz-Esquius, and D. J. Morgan, Surface and Interface Analysis 49(8), 794 (2017), eprint https://onlinelibrary.wiley.com/doi/pdf/10.1002/sia.6225, https://onlinelibrary.wiley.com/doi/abs/10.1002/sia.6225.
* (17) K. Sardar, S. C. Ball, J. D. Sharman, D. Thompsett, J. M. Fisher, R. A. Smith, P. K. Biswas, M. R. Lees, R. J. Kashtiban, J. Sloan, _et al._ , Chemistry of Materials 24(21), 4192 (2012), eprint https://doi.org/10.1021/cm302468b, https://doi.org/10.1021/cm302468b.
* (18) W. Sun, J.-Y. Liu, X.-Q. Gong, W.-Q. Zaman, L.-M. Cao, and J. Yang, Scientific Reports 6, 38429 (Dec 2016).
* (19) P. A. Cox, R. G. Egdell, J. B. Goodenough, A. Hamnett, and C. C. Naish, Journal of Physics C: Solid State Physics 16(32), 6221 (nov 1983).
* (20) M. Campagna, G. K. Wertheim, H. R. Shanks, F. Zumsteg, and E. Banks, Phys. Rev. Lett. 34, 738 (Mar 1975), https://link.aps.org/doi/10.1103/PhysRevLett.34.738.
* (21) N. Beatham, P. Cox, R. Egdell, and A. Orchard, Chemical Physics Letters 69(3), 479 (1980).
* (22) B. J. Kim, H. Jin, S. J. Moon, J.-Y. Kim, B.-G. Park, C. S. Leem, J. Yu, T. W. Noh, C. Kim, S.-J. Oh, _et al._ , Phys. Rev. Lett. 101, 076402 (Aug 2008).
* (23) V. Pfeifer, T. E. Jones, J. J. Velasco Velez, C. Massue, R. Arrigo, D. Teschner, F. Girgsdies, M. Scherzer, M. T. Greiner, J. Allan, _et al._ , Surface and Interface Analysis 48(5), 261 (2016).
* (24) W. C. Yang, Y. T. Xie, W. K. Zhu, K. Park, A. P. Chen, Y. Losovyj, Z. Li, H. M. Liu, M. Starr, J. A. Acosta, _et al._ , Scientific Reports 7740 (Aug 2017).
* (25) K. H. Whitmire, _Bismuth: Inorganic Chemistry_ (American Cancer Society, 2014), ISBN 9781119951438.
* (26) E. A. Abdullah and T. K. Ban, E-Journal of Chemistry 9, 2429 (Jan 2012).
* (27) P. Day, _Emission and Scattering Techniques: Studies of Inorganic Molecules, Solids, and Surfaces_ , Nato Science Series C: (Springer Netherlands, 2012), ISBN 9789400985254.
* (28) J. Folmer and D. de Boer, Solid State Communications 38(12), 1135 (1981).
* (29) A. Walsh, G. W. Watson, D. J. Payne, R. G. Edgell, J. Guo, P.-A. Glans, T. Learmonth, and K. E. Smith, Phys. Rev. B 73, 235104 (Jun 2006), https://link.aps.org/doi/10.1103/PhysRevB.73.235104.
* (30) W. Y. Hsu, R. V. Kasowski, T. Miller, and T. Chiang, Applied Physics Letters 52(10), 792 (1988).
Also at Harbor Branch Oceanographic Institute, Florida Atlantic University,
Fort Pierce, FL 34946, USA
# Aerosol generation in public restrooms
Jesse H. Schreck<EMAIL_ADDRESS>(Department of Ocean and Mechanical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA)
Masoud Jahandar Lashaki<EMAIL_ADDRESS>(Department of Civil, Environmental and Geomatics Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA)
Javad Hashemi<EMAIL_ADDRESS>(Department of Ocean and Mechanical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA)
Manhar Dhanak<EMAIL_ADDRESS>(Department of Ocean and Mechanical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA)
Siddhartha Verma<EMAIL_ADDRESS>http://www.computation.fau.edu (Department of Ocean and Mechanical Engineering, Florida Atlantic University, Boca Raton, FL 33431, USA)
###### Abstract
Aerosolized droplets play a central role in the transmission of various
infectious diseases, including Legionnaire’s disease, gastroenteritis-causing
norovirus, and most recently COVID-19. Respiratory droplets are known to be
the most prominent source of transmission for COVID-19; however, alternative
routes may exist, given the discovery of small numbers of viable viruses in
urine and stool samples. Flushing biomatter can lead to the aerosolization of
microorganisms; thus, bioaerosols generated in public restrooms may pose a
concern for the transmission of COVID-19,
especially since these areas are relatively confined, experience heavy foot
traffic, and may suffer from inadequate ventilation. To quantify the extent of
aerosolization, we measure the size and number of droplets generated by
flushing toilets and urinals in a public restroom. The results indicate that
the particular designs tested in the study generate a large number of droplets
in the size range $0.3\mu m$ to $3\mu m$, which can reach heights of at least
$1.52m$. Covering the toilet reduced aerosol levels but did not eliminate them
completely, suggesting that aerosolized droplets escaped through small gaps
between the cover and the seat. In addition to consistent increases in aerosol
levels immediately after flushing, there was a notable rise in ambient aerosol
levels due to the accumulation of droplets from multiple flushes conducted
during the tests. This highlights the need for incorporating adequate
ventilation in the design and operation of public spaces, which can help
prevent aerosol accumulation in high occupancy areas and mitigate the risk of
airborne disease transmission.
## I Introduction
The aerosolization of biomatter caused by flushing toilets has long been known
to be a potential source of transmission of infectious microorganisms Darlow
and Bale (1959); Gerba, Wallis, and Melnick (1975). Toilet flushing can
generate large quantities of microbe-containing aerosols Johnson _et al._
(2013a) depending on the design and water pressure or flushing energy of the
toilet Bound and Atkinson (1966); Johnson _et al._ (2013b); Lai _et al._
(2018). A variety of different pathogens which are found in stagnant water or
in waste products (e.g., urine, feces, and vomit) can get dispersed widely via
such aerosolization, including the legionella bacterium responsible for
causing Legionnaire’s disease Hamilton _et al._ (2018); Couturier _et al._
(2020), the Ebola virus Lin and Marr (2017), the norovirus which causes severe
gastroenteritis (food poisoning) Caul (1994); Marks _et al._ (2000), and the
Middle East Respiratory Syndrome coronavirus (MERS-CoV) Zhou _et al._ (2017).
Such airborne dispersion is suspected to have played a key role in the
outbreak of viral gastroenteritis aboard a cruise ship, where infection was
twice as prevalent among passengers who used shared toilets compared to those
who had private bathrooms Ho _et al._ (1989). Similarly, transmission of
norovirus via aerosolized droplets was linked to the occurrence of vomiting or
diarrhea within an aircraft restroom Widdowson _et al._ (2005), as passengers
and crew who got infected were more likely to have visited restrooms than
those that were not infected. The participants in the study reported that all
of the restroom surfaces appeared to be clean, which indicates that infection
is likely to have occurred via bioaerosols suspended within the restroom.
In more controlled studies investigating toilet-generated aerosols, Barker &
Bloomfield Barker and Bloomfield (2000) isolated salmonella bacteria from air
samples collected after flushing. Bacteria and viruses could be isolated from
settle plates for up to 60 to 90 minutes after flushing Barker and Jones
(2005); Best, Sandoe, and Wilcox (2012), suggesting that the microorganisms
were present in aerosolized droplets and droplet nuclei. An experimental study
in a hospital-based setting measured bioaerosol generation when fecal matter
was flushed by patients Knowlton _et al._ (2018). A significant increase in
bioaerosols was observed right after flushing, and the droplets remained
detectable for up to 30 minutes afterwards. Notably, flushing does not remove
all of the microorganisms which may be present in the bowl. In various studies
where the toilet bowl was seeded with microorganisms, sequential flushes led
to a drop in microbe count, however, some residual microbes remained in the
bowl even after up to 24 flushes Gerba, Wallis, and Melnick (1975); Barker and
Bloomfield (2000); Barker and Jones (2005); Johnson _et al._ (2017); Aithinne
_et al._ (2019). In some cases, residual microbial contamination was shown to
persist in biofilm formed within the toilet bowl for several days to weeks
Barker and Bloomfield (2000).
In an effort to reduce aerosol dispersal, certain studies conducted
measurements with the toilet seat lid closed Barker and Jones (2005); Best,
Sandoe, and Wilcox (2012). Closing the lid led to a decrease, but not a
complete absence of bacteria recovered from air samples. This suggests that
smaller aerosolized droplets were able to escape through the gap between the
seat and the lid. In addition to the experiment-based studies mentioned here,
numerical simulations have been used recently to investigate the ejection of
aerosolized particles from toilets and urinals, specifically in the context of
COVID-19 transmission Li, Wang, and Chen (2020); Wang _et al._ (2020a).
The issue of aerosolization is particularly acute for viruses compared to
bacteria, given their different response to levels of relative-humidity (RH).
High RH levels result in slower evaporation of aerosolized droplets, whereas
lower levels accelerate the phenomenon, leading to the formation of extremely
small droplet nuclei which can remain airborne for long periods of time and
can deposit deep into the lungs Mallik, Mukherjee, and Panchagnula (2020);
Wang _et al._ (2020b). Various studies have indicated that the viability of
bacteria decreases at low RH levels Won and Ross (1966); Lin and Marr (2020),
which makes them less likely to retain their infectivity in droplet nuclei
form. On the other hand, viruses exhibit lowest viability at intermediate RH
levels, and retain their viability at either low or high RH values Songer
(1967); Benbough (1971); Schaffer, Soergel, and Straube (1976); Donaldson and
Ferris (1976); Lin and Marr (2020), making them more likely to remain intact
in droplet nuclei which can stay suspended from hours to days. Viruses are
also more likely to aerosolize easily, as indicated by Lee et al. Lee, Pruden,
and Marr (2016), who used wastewater sludge (both synthetic and real) to
demonstrate that when viruses were seeded into the sludge, $94\%$ stayed
mobile in the liquid phase while only a small fraction adhered to the solid
biomatter or to the surfaces of the toilet. This suggests that the presence of
solid biomatter, which is more difficult to aerosolize, might not reduce the
potential for virus transmission, since viruses are more likely to be
aerosolized with the liquid phase.
Apart from gastrointestinal diseases, viruses associated with respiratory
illnesses have also been detected in patients’ stool and urine samples. For
instance, the SARS-CoV (Severe Acute Respiratory Syndrome Coronavirus)
responsible for the SARS outbreak of 2003 was found in patients’ urine and
stool specimens for longer than 4 weeks Xu _et al._ (2005). Similarly, recent
studies have confirmed the presence of SARS-CoV-2 (the virus associated with
COVID-19) viral RNA in patients’ stool samples Xiao _et al._ (2020a); Wu _et
al._ (2020a); Chen _et al._ (2020); Xiao _et al._ (2020a); Zhang _et al._
(2020); Foladori _et al._ (2020); Gupta _et al._ (2020), even if they did
not experience gastrointestinal symptoms and regardless of the severity of
their respiratory symptoms Wu _et al._ (2020a); Chen _et al._ (2020); Zhang,
Wang, and Xue (2020); Ling _et al._ (2020). Surprisingly, viral RNA could be
detected in feces for several days to weeks after it was no longer detectable
in respiratory samples from nasal and oral swabs Xiao _et al._ (2020a); Wu
_et al._ (2020a); Chen _et al._ (2020); Gupta _et al._ (2020). Moreover, Wu
et al. Wu _et al._ (2020b) recovered large quantities of viral RNA from urban
wastewater treatment facilities. The levels detected were several orders of
magnitude higher than would be expected for the number of clinically confirmed
cases in the region, which suggests that there was a high prevalence of
asymptomatic and undetected cases.
Although enveloped viruses like SARS-CoV-2 are susceptible to the acids and
bile salts found in digestive juices, it has been shown that they can survive
when engulfed within mucus produced by the digestive system. Hirose et al.
Hirose _et al._ (2017) demonstrated that influenza viruses could be protected
from degradation by simulated digestive juices using both artificial and
natural mucus. This might help explain why recent studies have been able to
isolate viable SARS-CoV-2 virus particles (i.e., those able to infect new
cells) that remained intact when passing through the digestive and urinary
systems, albeit in smaller quantities compared to respiratory fluids Jones
_et al._ (2020). Wang et al. Wang _et al._ (2020c) detected live virus in
feces from patients who did not have diarrhea, and Xiao et al. Xiao _et al._
(2020b) demonstrated the infectivity of intact virions isolated from a
patient’s stool samples. In urine specimens, SARS-CoV-2 RNA is found less
frequently than in fecal and respiratory samples Peng _et al._ (2020); Ling
_et al._ (2020); Xiao _et al._ (2020a). However, Sun et al. Sun _et al._
(2020) managed to isolate the virus from a severely infected patient’s urine,
and showed that these virions were capable of infecting new susceptible cells.
As with fecal samples, viral RNA has been found in urine even after the virus
is no longer detectable in respiratory swabs Ling _et al._ (2020).
These findings suggest that the aerosolization of biomatter could play a
potential role in the transmission of SARS-CoV-2, which is known to remain
viable in aerosol form van Doremalen _et al._ (2020); Fears _et al._ (2020).
Environmental samples taken by Ding et al. Ding _et al._ (2021) in a hospital
designated specifically for COVID-19 patients indicated high prevalence of the
virus within bathrooms used by the patients, both on surfaces and in air
samples. The authors hypothesized that aerosolized fecal matter may have
dispersed the virus within the bathroom, since viral samples were not detected
on surfaces in the patients’ rooms.
Given the potential role of aerosolized biomatter in spreading a wide variety
of gastrointestinal and respiratory illnesses, we investigate droplet
generation from toilets and urinals in a public restroom operating under
normal ventilation condition. We examine the size, number, and various heights
to which the droplets rise when generated by the flushing water. The main aim
is to better understand the risk of infection transmission that the droplets
pose in public restrooms, since these relatively confined locations often
experience heavy foot traffic. The experimental methodology is described in
Section II followed by results and discussion in Section III and conclusion in
Section IV.
## II Methods
The flush-generated aerosol measurements were recorded in a medium-sized
restroom on the university campus, consisting of 3 bathroom cubicles, 6
urinals, and 3 sinks. The restroom was deep cleaned and closed twenty-four
hours prior to conducting the experiments, with the ventilation system
operating normally to remove any aerosols generated during cleaning. The
temperature and relative humidity within the restroom were measured to be
$21\degree C$ and $52\%$, respectively. For the measurements reported here,
one particular toilet and one urinal were selected, both equipped with
flushometer type flushing systems. The urinal used 3.8 liters of water per
flush whereas the toilet used 4.8 liters per flush.
The size and concentration of aerosols generated by flushing were measured
using a handheld particle counter (9306-V2 - TSI Incorporated). The sensor’s
size resolution is less than $15\%$, which is indicative of the uncertainty in
the measured particle diameter. More specifically, the resolution is specified
as the ratio of standard deviation to the mean size of the particles being
sampled. The counting efficiency of the sensor is $50\%$ at $0.3\mu m$ and
$100\%$ for particles larger than $0.45\mu m$. These values denote the ratio
of particle numbers measured by the counter to those measured using a
reference instrument. Handheld counters with comparable specifications have
been used for estimating the likelihood of aerosol transmission in typical
public spaces Somsen _et al._ (2020).
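To make the counting-efficiency specification concrete, a raw count can in principle be divided by the per-bin efficiency to estimate the true count. The bin-to-efficiency assignment below is our own illustrative assumption (the specification quoted above anchors only $0.3\mu m$ and $0.45\mu m$), and the counts are made up.

```python
# Illustrative efficiency correction for raw particle counts.
# Per-bin efficiencies are an assumption for this sketch, based on the
# stated 50% efficiency at 0.3 um and 100% for particles >= 0.45 um.
raw_counts = {"0.3-0.5": 3000, "0.5-1.0": 800, "1.0-3.0": 120}  # made-up counts
efficiency = {"0.3-0.5": 0.5, "0.5-1.0": 1.0, "1.0-3.0": 1.0}

# Estimated true counts = measured counts / counting efficiency
corrected = {b: raw_counts[b] / efficiency[b] for b in raw_counts}
print(corrected["0.3-0.5"])  # → 6000.0
```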
The particle counter was positioned at various heights close to the toilet and
the urinal as shown in Figure 1.
(a)
(b)
Figure 1: Measurement locations where the aerosol sensor was placed for
(1(a)) the toilet and (1(b)) the urinal. Measurements for the toilet were
taken at heights of $0.43m$ from the ground ($1ft\ 5in$), $1.22m$ ($4ft$), and
$1.52m$ ($5ft$), whereas those for the urinal were taken at $0.53m$ ($1ft\
9in$), $0.97m$ ($3ft\ 2in$), and $1.22m$ ($4ft$).
Measurements for the toilet were taken at 3 different heights, at
approximately $0.43m$ from the ground ($1ft\ 5in$), $1.22m$ ($4ft$), and
$1.52m$ ($5ft$), with the toilet seat raised up. The lowest level corresponds
to the distance between the ground and the toilet seat, and represents the
scenario where the particle counter was placed level with the seat.
Measurements for the urinal were taken at 3 different heights, at
approximately $0.53m$ from the ground ($1ft\ 9in$), $0.97m$ ($3ft\ 2in$), and
$1.22m$ ($4ft$). The particle counter’s intake probe was oriented parallel to
the floor and perpendicular to the back wall, with the inlet pointing in the
direction of the flushing water. The probe was centered laterally for both
toilet and urinal measurements. The placement and orientation were selected to
be representative of a person breathing in while flushing the toilet/urinal
after use, since different choices were observed to have a notable impact on the
measured droplet count. The probe inlet was positioned $5cm$ inside the rim of
the toilet, and it was placed $5cm$ outside the edge of the urinal, as
depicted in Figure 1. In addition to measurements taken during normal
operation of the toilet, aerosol measurements were recorded after a large flat
plate was placed over the toilet opening, to assess the impact of flushing
with the lid closed. The use of a separate cover was necessary since public
restrooms in the United States often do not come equipped with toilet seat
lids.
The particle counter drew air samples at a volume flow rate of 2.83 liters per
minute (0.1 Cubic Feet per Minute - CFM), and measured aerosol concentrations
in six different size ranges, namely, (0.3 to 0.5)$\mu m$, (0.5 to 1.0)$\mu
m$, (1.0 to 3.0)$\mu m$, (3.0 to 5.0)$\mu m$, (5.0 to 10.0)$\mu m$, and (10.0
to 25.0)$\mu m$. For the tests reported here, air samples were recorded at a
sampling frequency of $1Hz$ for a total of 300 seconds at each of the levels
depicted in Figure 1. We note that although it is feasible to compute droplet
concentration at a given measurement location, it is difficult to determine
overall characteristic droplet production rates for the toilet or urinal,
since the measured values depend on both the location and orientation of the
probe. During the 300-second sampling, the toilet and urinal were flushed
manually 5 different times at the 30, 90, 150, 210, and 270 second mark, with
the flushing handle held down for five consecutive seconds. The data obtained
from the three different scenarios, i.e., toilet flushing, covered toilet
flushing, and urinal flushing, were analyzed to determine the increase in
aerosol concentration. The behavior of droplets of different sizes, the
heights that they rose to, and the impact of covering the toilet are discussed
in detail in Section III.
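As noted above, a droplet concentration at a given measurement location can be computed from the per-sample counts and the counter's flow rate; a minimal sketch follows, where the count values are illustrative placeholders, not measured data.

```python
# Converting per-sample particle counts to a number concentration, given
# the counter's volume flow rate of 2.83 L/min sampled at 1 Hz.
FLOW_L_PER_MIN = 2.83
SAMPLE_SECONDS = 1.0

# Air volume drawn during each 1 s sample (~0.047 L)
volume_per_sample_L = FLOW_L_PER_MIN / 60.0 * SAMPLE_SECONDS

counts = [12, 15, 40, 33, 20]  # particles per 1 s sample (illustrative)
concentrations = [c / volume_per_sample_L for c in counts]  # particles per liter

print(round(volume_per_sample_L, 4))   # → 0.0472
print(round(concentrations[0], 1))     # → 254.4
```

As the text cautions, such a value characterizes only the probe location and orientation, not an overall production rate for the fixture.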
## III Results and Discussion
The measurements from the particle counter were analyzed to determine the
extent of aerosolization, and the various heights to which the droplets rise
after flushing. Figure 2 shows the time-variation of the total number of
particles recorded by the sensor from measurements for the uncovered toilet.
(a)
(b)
(c)
Figure 2: Particle-count from the toilet-flushing test, measured at a height
of $0.43m$ ($1ft5in$). The time series plots are shown for particles in
various size ranges: (2(a)) (0.3 to 0.5)$\mu m$ \- black, and (0.5 to 1)$\mu
m$ \- blue; (2(b)) (1 to 3)$\mu m$ \- black, and (3 to 5)$\mu m$ \- blue;
(2(c)) (5 to 10)$\mu m$ \- black, and (10 to 25)$\mu m$ \- blue. The black
curves in (2(a)) and (2(b)) correspond to the left vertical axes, whereas the
blue curves correspond to the right vertical axes. The dashed gray lines
indicate the instances when the flushing handle was depressed and held down
for 5 seconds.
The plotted data have been smoothed using a moving-average window of size 4 to
reduce noise levels. Figure 2(a) depicts the time series for particles of size
(0.3 to 0.5)$\mu m$ and (0.5 to 1)$\mu m$, whereas the size groups (1 to
3)$\mu m$ and (3 to 5)$\mu m$ are shown in Figure 2(b), and size groups (5 to
10)$\mu m$ and (10 to 25)$\mu m$ are shown in Figure 2(c). We observe a
noticeable increase in particle count for all of the size ranges a few seconds
after flushing. This indicates that flushing the toilet generates droplets in
significant numbers, which can be detected at seat-level for up to 30 seconds
after initiating the flush.
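The moving-average smoothing described above can be sketched as follows; the count series is an illustrative toy input, not measured data.

```python
import numpy as np

def moving_average(x, window=4):
    """Simple moving average used to reduce noise in a particle-count
    time series (window size 4, matching the smoothing in the text)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Illustrative 1 Hz count series with a surge after a "flush"
counts = np.array([2, 2, 2, 2, 10, 12, 8, 4, 2, 2], dtype=float)
smoothed = moving_average(counts, window=4)
print(smoothed[0], smoothed[3])  # → 2.0 8.0
```

With `mode="valid"` the smoothed series is shorter than the input by `window - 1` samples, which avoids edge artifacts at the start and end of the record.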
In Figure 2(a) we observe a large variation in the measured levels of the
smallest particles, i.e., those smaller than $1\mu m$. These particles are
highly susceptible to flow disturbances in the ambient environment due to
their low mass, which may account for the high variability. The time series
for particles larger than 1$\mu m$ (Figures 2(b) and 2(c)) exhibit distinctive
surges in particle count after each flushing event. Importantly, the total
number of droplets generated in the smaller size ranges is considerably larger
than that generated in the larger ranges, even though the surges appear to be
less prominent for the smaller droplets. We note that for the smallest
aerosols (i.e., those smaller than 1$\mu m$), ambient levels in the restroom
were relatively high prior to starting the experiment ($\sim O(3000)$). Thus,
in these size ranges the flush-generated droplets comprise a small fraction of
the total particle count. On the other hand, ambient levels for particle sizes
larger than 1$\mu m$ were negligible in the restroom ($\sim O(1)$ to $O(10)$),
resulting in the distinctive surges observed after flushing.
Similar plots depicting the time-variation of droplet counts for the covered
toilet test and the urinal-flushing test are shown in Figure 3 and Figure 4,
respectively.
Figure 3: Particle-count from the flushing test when the toilet was covered using a large flat plate. Measurements taken at a height of $0.43m$ ($1ft5in$). The time series plots are shown for particles in various size ranges: (3(a)) (0.3 to 0.5)$\mu m$ in black and (0.5 to 1)$\mu m$ in blue; (3(b)) (1 to 3)$\mu m$ in black and (3 to 5)$\mu m$ in blue; (3(c)) (5 to 10)$\mu m$ in black and (10 to 25)$\mu m$ in blue. The black curves in (3(a)) and (3(b)) correspond to the left vertical axes, whereas the blue curves correspond to the right vertical axes. The dashed gray lines indicate the instances when the flushing handle was depressed and held down for 5 seconds.
Figure 4: Particle-count from the urinal-flushing test, measured at a height of $0.53m$ ($1ft9in$). The time series plots are shown for particles in various size ranges: (4(a)) (0.3 to 0.5)$\mu m$ in black and (0.5 to 1)$\mu m$ in blue; (4(b)) (1 to 3)$\mu m$ in black and (3 to 5)$\mu m$ in blue; (4(c)) (5 to 10)$\mu m$ in black and (10 to 25)$\mu m$ in blue. The black curves in (4(a)) and (4(b)) correspond to the left vertical axes, whereas the blue curves correspond to the right vertical axes. The dashed gray lines indicate the instances when the flush was activated using the proximity sensor.
For the covered toilet, Figure 3(a) displays a large variation in the number of the smallest droplets, and comparatively small surges relative to ambient levels because the background count was high. Importantly, the
observed peak values of the surges are lower for the covered toilet compared
to the uncovered tests. This is evident in Figure 3(b), where the peak values
are approximately 35 droplets on average for the (1 to 3)$\mu m$ range, and 3
droplets for the (3 to 5)$\mu m$ range. The same numbers for the uncovered
toilet are approximately 50 droplets and 5 droplets, respectively, in Figure
2(b). Notably, there is a significant reduction in the number of droplets
larger than 5$\mu m$ for the covered toilet (Figure 3(c)) compared to the
uncovered toilet (Figure 2(c)). This indicates that the covering helps to
reduce the dispersion of flush-generated droplets, especially those larger
than 5$\mu m$, but it does not completely contain the escape of droplets
smaller than 5$\mu m$.
The data from the urinal-flushing tests in Figure 4 indicate a large number of
droplets generated in all size ranges observed; the post-flush surges are much
more pronounced than those for the toilet-flushing tests, even for droplets
smaller than 1$\mu m$ (Figure 4(a)). This may be related to the closer
proximity of the sensor to the water drain in the urinal, compared to the
toilet-flushing tests for which the sensor was placed at the outer edge of the
toilet bowl. We observe that there is no consistent increasing or decreasing
trend in either the peaks or the baseline levels with subsequent flushes in
the time series plots. The same holds true for data from the toilet-flushing
tests in Figures 2 and 3. Thus, any short term changes in temperature and RH
at the measurement location due to flushing do not have a noticeable impact on
the droplet count. Furthermore, while the smallest droplets will remain
suspended for longer than $300s$, the time series plots indicate that droplet
counts at the sensor location return to ambient levels within approximately
half a minute. Nonetheless, as these droplets move past the particle counter
they become part of the ambient environment, leading to a measurable increase
in background levels as demonstrated later in this section.
To compare the increase in droplet concentration for the three different
scenarios at various measurement heights, the time series data were examined
manually to identify the time delay between flush initiation and the observed
rise in particle count, as well as the total time span for which the particle
counts remained elevated. The corresponding values are provided in Table 1.
Table 1: Average time delay between flush initiation and the observed rise in particle count. The last column indicates the average time taken for the particle count to return to ambient levels.

Fixture | Height | Time Delay [s] | Time Span [s]
---|---|---|---
Toilet | $0.43m$ ($1ft5in$) | 10 | 20
Toilet | $1.22m$ ($4ft$) | 10 | 20
Toilet | $1.52m$ ($5ft$) | 10 | 20
Covered Toilet | $0.43m$ ($1ft5in$) | 0 | 20
Covered Toilet | $1.22m$ ($4ft$) | 5 | 20
Urinal | $0.53m$ ($1ft9in$) | 0 | 15
Urinal | $0.97m$ ($3ft2in$) | 5 | 15
Urinal | $1.22m$ ($4ft$) | 6 | 20
We note that the time delay between flush initiation and the measured surge
for the uncovered toilet at seat-level was 10 seconds, whereas that for the
covered toilet was 0 seconds. Furthermore, the delay was smaller for the covered toilet at a height of $1.22m$ ($5s$ versus $10s$), suggesting that the aerosols were forced through gaps between the seat and the covering plate. In both cases, the droplet counts remained elevated for a
further 20 seconds after first detection of the surge. For the covered toilet
and the urinal, we observe a consistent increase in time delay with increasing
height, which also corresponds to increasing distance from the flushing water,
but the observed delay remained nearly constant for the uncovered toilet at
$10s$. We remark that the time delay and detection duration are expected to be
influenced strongly by the placement of the sensor, the fixture geometry, the
flushing mechanism, as well as the water volume and pressure.
The number of droplets produced during the flushes was determined by numerically integrating the ‘surge’ segments of the unfiltered time series. More specifically, within each 1-minute window
associated with a particular flush, the start of the surge was identified
using the time-delay values specified in Table 1. Starting at this time, the area under the particle-count curve was computed numerically up to the end of the surge, whose duration is also specified in Table 1. The average surge count was determined by dividing this area by the
corresponding time span. The area under the remaining parts of the curve,
i.e., the segments lying outside the surge but within the 1-minute time
window, was determined similarly to obtain the average ambient droplet count.
This ambient count was subtracted from the surge count to yield the average
number of flush-generated droplets measured per second by the particle
counter. The resulting values from the 4 different full-minute flush
measurements were averaged to obtain the increase in droplet count per second,
and the standard deviation was computed to determine the uncertainty. The
resulting data for the flushing toilet is depicted graphically in Figure 5,
and the corresponding numerical values are provided in Table 2.
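The surge-minus-ambient procedure described above can be sketched in Python. For uniformly sampled data, the area under the curve divided by the time span reduces to the mean of the samples in that span; the input below is synthetic and purely illustrative.

```python
import numpy as np

def flush_increase(counts, delay, span, dt=1.0):
    """Average flush-generated droplets per second for one 1-minute
    window (a sketch of the procedure in the text).

    counts: particle counts sampled every `dt` seconds, flush at t = 0.
    delay:  seconds between flush initiation and the surge (Table 1).
    span:   seconds the counts remain elevated (Table 1).
    """
    t = np.arange(len(counts)) * dt
    in_surge = (t >= delay) & (t < delay + span)
    surge_avg = counts[in_surge].mean()     # average count during the surge
    ambient_avg = counts[~in_surge].mean()  # average ambient count
    return surge_avg - ambient_avg

# Hypothetical 60 s window: ambient level 5/s, surge of +50/s from 10-30 s.
counts = np.full(60, 5.0)
counts[10:30] += 50.0
print(flush_increase(counts, delay=10, span=20))  # -> 50.0
```

Repeating this over the four full-minute flush windows and taking the mean and standard deviation of the results yields the values and uncertainties reported in Tables 2 to 4.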
Figure 5: Average increase in the number of droplets measured per second after flushing the toilet. The error bars indicate the standard deviation of the measured increase from multiple flushes. Each bar cluster corresponds to particles in a given size range, and indicates how the droplet count varies with measurement height. The corresponding values are provided in Table 2.

Table 2: Numerical values for the average increase in droplet count per second from the toilet-flushing tests, with the standard deviation provided in parentheses. The data corresponds to the bar graphs shown in Figure 5.

Height | (0.3 to 0.5)$\mu m$ | (0.5 to 1)$\mu m$ | (1 to 3)$\mu m$
---|---|---|---
$0.43m$ | 186 ($\pm 25$) | 51 ($\pm 20$) | 17 ($\pm 3$)
$1.22m$ | 27 ($\pm 24$) | 14 ($\pm 7$) | 7 ($\pm 2$)
$1.52m$ | 29 ($\pm 5$) | 13 ($\pm 5$) | 5 ($\pm 2$)
We note that droplets larger than 3$\mu m$ were excluded from this analysis
since very few droplets in these size ranges were detected at the higher
locations, which made it difficult to distinguish between the measured values
and background noise.
The bar graphs in Figure 5 indicate that a significant number of droplets
smaller than $0.5\mu m$ were generated by the flushing toilet. If these
droplets contain infectious microorganisms from aerosolized biomatter, they
can pose a significant transmission risk since they remain suspended for long
periods of time. For instance, in a poorly ventilated location where
gravitational settling is the only means of removing suspended particles, the
Stokes settling time for a spherical water droplet of size $0.5\mu m$ from a
height of $1.52m$ ($5ft$) would be approximately 56 hours, or more than 2
days. Apart from the smallest aerosols, comparatively larger aerosols also
pose a risk in poorly ventilated areas even though they experience stronger
gravitational settling. They often undergo rapid evaporation in the ambient
environment and the resulting decreases in size and mass, or the eventual
formation of droplet nuclei, can allow microbes to remain suspended for
several hours Wells (1934); Duguid (2020); Basu _et al._ (2020).
In Figure 5, we observe a large variation for aerosols in the size range (0.3
to 0.5)$\mu m$. This may be attributed to the small droplets’ high sensitivity
to ambient flow fluctuations, as well as to the sensor’s limited counting
efficiency in this range. Notably, droplets smaller than $3\mu m$ are
detectable in significant numbers even at a height of $1.52m$ ($5ft$). We
observe a consistent decline in droplet count with increasing height; there is
a significant drop in droplet count going from seat-level to $1.22m$, and a
very small decrease with a further move up to $1.52m$ for droplets larger than
$0.5\mu m$. The smallest aerosols exhibit some variation in the trend, which
is likely due to the sensor limitations mentioned above. The observed decrease
in droplet count with increasing measurement height is expected, since the
droplet concentration is highest when the probe is placed closer to the
flushing water, and it decreases at farther locations due to dispersal of the
droplets over a wider area. We remark that gravitational forces are not
expected to play a dominant role in the observed behavior, given the extremely
small mass of the aerosols being considered here. Rather, it is aerodynamic
drag that dominates. The Stokes settling speed for the largest aerosol being
considered, i.e., a $3\mu m$ droplet, is approximately $0.00027m/s$. This
amounts to a settling time of $1589s$ from a height of $0.43m$, and even
longer for the smaller droplets. Thus, the effects of gravitational forces are
not dominant at the time scales being considered ($\sim O(10s)$). Finally, the
monotonic decrease in particle count with increasing particle size is similar
to the trend observed by Johnson et al. Johnson _et al._ (2013b) for various
toilet designs and flushing mechanisms.
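The settling estimates quoted above follow from the Stokes terminal velocity $v = \rho g d^{2}/(18\mu)$ for a sphere of diameter $d$ and density $\rho$ in air of viscosity $\mu$. A minimal check, assuming $\mu \approx 1.81\times 10^{-5}\,Pa\,s$ and neglecting the slip correction (as the quoted figures appear to, although it becomes relevant below $1\mu m$):

```python
def stokes_settling_speed(d, rho=1000.0, mu=1.81e-5, g=9.81):
    """Stokes terminal settling speed (m/s) of a spherical water
    droplet of diameter d (m) in still air."""
    return rho * g * d**2 / (18.0 * mu)

# 3 um droplet: settling speed quoted in the text as ~0.00027 m/s.
v3 = stokes_settling_speed(3e-6)
# 0.5 um droplet falling from 1.52 m: quoted as ~56 hours.
t_half_um = 1.52 / stokes_settling_speed(0.5e-6) / 3600.0
print(f"{v3:.2e} m/s, {t_half_um:.0f} h")  # -> 2.71e-04 m/s, 56 h
```

Both values reproduce the figures in the text, confirming that gravitational settling is negligible on the $O(10s)$ time scales of the flush measurements.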
The data collected after flushing the covered toilet and the urinal were also
processed in a similar manner to determine the corresponding increases in
droplet count per second. The results for the covered toilet are presented in
Figure 6 and Table 3, whereas those from flushing the urinal are presented in
Figure 7 and Table 4. Results from measurements at $1.52m$ height were not
included in the analysis for the covered toilet, since it was difficult to
discern droplet counts from background noise due to the extremely low measured
values. This indicates that the covering plate prevented the aerosols from
rising upward and instead deflected them to lower levels, also resulting in
shorter time delays compared to the uncovered toilet (Table 1). Over the long
term however, these aerosols could rise up with updrafts created by the
ventilation system or by the movement of people in the restroom.
We observe a large number of aerosolized droplets smaller than 1$\mu m$ in
Figure 6, and an appreciable number of droplets in the (1 to 3)$\mu m$ range.
This suggests that while the covering is able to suppress the dispersion of
droplets to some extent, it does not eliminate them completely. Thus, although
a toilet lid may appear to be a straightforward solution for reducing aerosol
dispersal, other alternatives may need to be evaluated when designing public
restrooms, such as modifying the fixture design, water pressure, vent
placement, airflow rate, or even employing a liquid ‘curtain’ incorporated
into the fixture Wu _et al._ (2020c).
Figure 6: Average increase in the number of droplets measured per second from flushing the covered toilet. The error bars indicate the standard deviation of the measured increase from multiple flushes. Each bar cluster corresponds to particles in a given size range, and indicates how the droplet count varies with measurement height. The corresponding values are provided in Table 3.

Table 3: Numerical values for the average increase in droplet count per second from the covered toilet-flushing tests, with the standard deviation provided in parentheses. The data corresponds to the bar graphs shown in Figure 6.

Height | (0.3 to 0.5)$\mu m$ | (0.5 to 1)$\mu m$ | (1 to 3)$\mu m$
---|---|---|---
$0.43m$ | 147 ($\pm 47$) | 35 ($\pm 9$) | 9 ($\pm 2$)
$1.22m$ | 80 ($\pm 36$) | 27 ($\pm 7$) | 7 ($\pm 3$)
The bars in Figure 6 display a consistent decline in droplet count with
increasing height, similar to the trend observed for the uncovered toilet. One
unexpected observation is the occurrence of higher droplet counts for the
covered toilet at $1.22m$, compared to analogous measurements for the
uncovered toilet in Table 2. We remark that this does not indicate that the
covering led to an increase in droplet count, but rather that the aerosols
were redirected in higher concentrations to the position where the counter was
located, after being forced through gaps between the seat and the cover.
Examining the data from the urinal-flushing tests in Figure 7, we observe
a similar decline in droplet count with increasing height as for the other two
cases. A large number of droplets were detected in the (0.3 to 0.5)$\mu m$
size range (approximately 300 droplets per second on average) at the lowest
measurement level, which can be attributed to the close proximity of the
sensor to the flushing water. Moreover, a significant number of droplets
reached heights of up to $1.22m$ ($4ft$) from the ground, similar to the
toilet-flushing tests.
Figure 7: Average increase in the number of droplets measured per second from flushing the urinal. The error bars indicate the standard deviation of the measured increase from multiple flushes. Each bar cluster corresponds to particles in a given size range, and indicates how the droplet count varies with measurement height. The corresponding values are provided in Table 4.

Table 4: Numerical values for the average increase in droplet count per second from the urinal-flushing tests, with the standard deviation provided in parentheses. The data corresponds to the bar graphs shown in Figure 7.

Height | (0.3 to 0.5)$\mu m$ | (0.5 to 1)$\mu m$ | (1 to 3)$\mu m$
---|---|---|---
$0.53m$ | 315 ($\pm 209$) | 80 ($\pm 47$) | 17 ($\pm 8$)
$0.97m$ | 46 ($\pm 23$) | 14 ($\pm 5$) | 8 ($\pm 2$)
$1.22m$ | 34 ($\pm 12$) | 10 ($\pm 2$) | 5 ($\pm 2$)
We remark that the total number of droplets generated in each flushing test described here can run into the tens of thousands. The numbers reported here
indicate average droplet count per second, for cases where the time span for
each surge varies from $15s$ to $20s$ (Table 1). Thus, an average count of 50
droplets per second for one size range would amount to a total of 750 to 1000
droplets at one particular measurement location. Considering that similar
measurements could be made all around the periphery of the fixtures, and that
droplets are generated in several different size ranges, the overall total
count would likely end up being significantly higher. Furthermore, droplet
generation and accumulation depend on a variety of factors, such as the design
of the toilet fixtures, the water pressure, the ventilation positioning,
airflow, temperature, and RH, to name a few. The aim of the present work is
not to present detailed characterizations of the influence of these factors on
droplet dynamics, but instead to highlight the occurrence of aerosol
generation and accumulation within public restrooms. These observations can
help stimulate further studies to investigate steps to mitigate the issues
involved. We further note that while the results presented here are restricted
to specific measurement heights, there is a high likelihood of the aerosols
getting dispersed throughout the room over time due to updrafts created by the
ventilation system or by the movement of people.
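The per-location estimate quoted above is simple arithmetic over the surge durations in Table 1:

```python
rate = 50           # average flush-generated droplets per second (one size range)
spans = (15, 20)    # range of surge durations from Table 1, in seconds
totals = [rate * s for s in spans]
print(totals)  # -> [750, 1000] droplets per flush at one measurement point
```

Summing such contributions over all size ranges and over the fixture periphery is what pushes the overall count per flush far higher.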
In addition to the flush-generated aerosol measurements, ambient aerosol
levels were measured prior to starting the experiments and again after
completing all of the tests. After approximately 3 hours of tests involving
over 100 flushes, there was a substantial increase in the measured aerosol
levels in the ambient environment. The corresponding data is presented in
Figure 8 and Table 5.
Figure 8: Particle-count from ambient measurements within the restroom. The plot indicates the time-variation of particles in two different size ranges: (0.3 to 0.5)$\mu m$ in black and (0.5 to 1)$\mu m$ in blue. The black curves correspond to the left vertical axis, whereas the blue curves correspond to the right vertical axis. The dashed lines indicate initial background readings before conducting any flushing tests, whereas the solid lines indicate measurements taken at the conclusion of all tests, approximately 3 hours and 100 flushes later.

Table 5: Average values for the background measurements shown in Figure 8. Additionally, average measurements for the (1 to 3)$\mu m$ size group are also provided below. The ‘Before’ column indicates the average ambient levels measured within a 5-minute time window before conducting any flushing experiments, and the ‘After’ column indicates similar measurements taken after concluding all the experiments.

Particle Size Group | Before | After | Percent Change
---|---|---|---
0.3 to 0.5 $\mu$m | 2537 | 4301 | 69.5%
0.5 to 1 $\mu$m | 201 | 621 | 209%
1 to 3 $\mu$m | 8 | 12 | 50%
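The percent changes in Table 5 are the relative increases over the initial background counts, which can be verified directly:

```python
# Average ambient counts before and after ~3 hours and ~100 flushes (Table 5).
before = {"0.3 to 0.5 um": 2537, "0.5 to 1 um": 201, "1 to 3 um": 8}
after  = {"0.3 to 0.5 um": 4301, "0.5 to 1 um": 621, "1 to 3 um": 12}
change = {k: 100.0 * (after[k] - before[k]) / before[k] for k in before}
for size, pct in change.items():
    print(f"{size}: {pct:.1f}%")
```

The computed values (69.5%, 209.0%, 50.0%) match the last column of Table 5.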
There was a $69.5\%$ increase in measured levels for particles of size (0.3 to
0.5)$\mu m$, a $209\%$ increase for the (0.5 to 1)$\mu m$ particles, and a
$50\%$ increase for the (1 to 3)$\mu m$ particles. Particles larger than 3$\mu m$ were excluded from the analysis due to the impact of background noise on the extremely low measured values. The results point to significant
accumulation of flush-generated aerosolized droplets within the restroom over
time, which indicates that the ventilation system was not effective in
removing them from the enclosed space, although there was no perceptible lack
of airflow within the restroom; the room was equipped with two vents rated at
volume flow rates of $7.5m^{3}/min$ (265 CFM) and $5.66m^{3}/min$ (200 CFM).
Furthermore, a comparison with ambient levels outside the restroom (a few
meters away from the closed restroom door, but within the same building)
indicated that the levels of droplets smaller than $1\mu m$ were more than 10
times higher within the restroom compared to ambient levels outside the
restroom. This was unexpected since the restroom had been closed off for more
than 24 hours after deep cleaning, with the ventilation system operating
normally. While it is difficult to ascertain the exact source of the droplets
that contributed to high background levels within the restroom, it is likely
that they were generated during the cleaning operation. There were no other
readily apparent sources, since both locations, i.e., inside and outside the
restroom, employed the same centralized air-conditioning system, and the RH
and temperature were maintained at comparable levels (Figure 9). These
observations further highlight the importance of employing adequate
ventilation in enclosed spaces to extract suspended droplets effectively, in
order to reduce the chances of infection transmission via aerosolized
droplets.
Figure 9: Relative humidity (black) and temperature measurements (blue)
inside and outside the restroom. Solid lines indicate measurements taken
inside the restroom, whereas dashed lines correspond to measurements outside
the restroom, a few meters away from the closed restroom door.
The results presented here indicate that although the likelihood of infection with respiratory illnesses via bioaerosols may be low compared to the risk posed by respiratory droplets (since virions are detected in larger quantities in respiratory samples), bioaerosols present a viable transmission route, especially in public restrooms, which often experience heavy foot traffic within a relatively confined area. As demonstrated here, multiple flush-use over time can lead to
an accumulation of potentially infectious aerosols, which poses a measurable
risk considering the large number of individuals who may visit a public
restroom and subsequently disperse into the broader community. Moreover, apart
from flush-generated bioaerosols, the accumulation of respiratory aerosols
also poses a concern in public restrooms if adequate ventilation is not
available. Overall, the results presented here highlight the crucial need for
ensuring effective aerosol removal capability in high density and frequently
visited public spaces.
## IV Conclusion
The aerosolization of biomatter from flushing toilets is known to play a
potential role in spreading a wide variety of gastrointestinal and respiratory
illnesses. To better understand the risk of infection transmission that such
droplets may pose in confined spaces, this paper investigates droplet-
generation by flushing toilets and urinals in a public restroom operating
under normal ventilation conditions. The measurements were conducted inside a
medium-sized public restroom, with a particle counter placed at various
heights to determine the size and number of droplets generated upon flushing.
The results indicate that both toilets and urinals generate large quantities
of droplets smaller than 3$\mu m$ in size, which can pose a significant
transmission risk if they contain infectious microorganisms from aerosolized
biomatter. The droplets were detected at heights of up to $1.52m$ ($5ft$) for
20 seconds or longer after initiating the flush. Owing to their small size,
these droplets can remain suspended for long periods of time, as is
demonstrated in the present study via ambient measurements taken before and
after conducting the experiments. When a large flat plate was used to cover the toilet opening, it reduced droplet dispersion but did not completely eliminate the measured aerosols. This indicates that installing
toilet seat lids in public restrooms may help reduce droplet dispersal to some
extent, but it may not sufficiently address the risk posed by the smallest
aerosolized droplets. Ambient aerosol levels measured before and after
conducting the experiments indicated a substantial increase in particle count,
pointing to significant accumulation of flush-generated aerosols within the
restroom over time. This indicates that the ventilation system was not
effective in removing the aerosols, although there was no perceptible lack of
airflow within the restroom. Importantly, this suggests that multiple flush-
use over time can lead to the accumulation of high levels of potentially
infectious aerosols within public restrooms, which poses an elevated risk of
airborne disease transmission. In addition to flush-generated bioaerosols, the
accumulation of respiratory aerosols also poses a concern in public restrooms
in the absence of adequate ventilation. Overall, the results presented here
indicate that ensuring adequate ventilation in public restrooms is essential,
since these relatively confined areas often experience heavy foot traffic and
could pose a risk for widespread community transmission of various
gastrointestinal and respiratory illnesses.
## Data Availability
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* Darlow and Bale (1959) H. Darlow and W. Bale, “Infective hazards of water-closets,” The Lancet 273, 1196 – 1200 (1959), originally published as Volume 1, Issue 7084.
* Gerba, Wallis, and Melnick (1975) C. P. Gerba, C. Wallis, and J. L. Melnick, “Microbiological Hazards of Household Toilets: Droplet Production and the Fate of Residual Organisms,” Applied Microbiology 30, 229–237 (1975).
* Johnson _et al._ (2013a) D. L. Johnson, K. R. Mead, R. A. Lynch, and D. V. Hirst, “Lifting the lid on toilet plume aerosol: A literature review with suggestions for future research,” American Journal of Infection Control 41, 254–258 (2013a).
* Bound and Atkinson (1966) W. Bound and R. Atkinson, “Bacterial aerosol from water closets: A comparison of two types of pan and two types of cover,” The Lancet 287, 1369 – 1370 (1966), originally published as Volume 1, Issue 7451.
* Johnson _et al._ (2013b) D. Johnson, R. Lynch, C. Marshall, K. Mead, and D. Hirst, “Aerosol generation by modern flush toilets,” Aerosol Science and Technology 47, 1047–1057 (2013b).
* Lai _et al._ (2018) A. C. Lai, T. F. Tan, W. S. Li, and D. K. Ip, “Emission strength of airborne pathogens during toilet flushing,” Indoor Air 28, 73–79 (2018).
* Hamilton _et al._ (2018) K. A. Hamilton, M. T. Hamilton, W. Johnson, P. Jjemba, Z. Bukhari, M. LeChevallier, and C. N. Haas, “Health risks from exposure to Legionella in reclaimed water aerosols: Toilet flushing, spray irrigation, and cooling towers,” Water Research 134, 261–279 (2018).
* Couturier _et al._ (2020) J. Couturier, C. Ginevra, D. Nesa, M. Adam, C. Gouot, G. Descours, C. Campèse, G. Battipaglia, E. Brissot, L. Beraud, A. G. Ranc, S. Jarraud, and F. Barbut, “Transmission of Legionnaires’ Disease through Toilet Flushing,” Emerging Infectious Diseases 26, 1526–1528 (2020).
* Lin and Marr (2017) K. Lin and L. C. Marr, “Aerosolization of Ebola Virus Surrogates in Wastewater Systems,” Environmental Science and Technology 51, 2669–2675 (2017).
* Caul (1994) E. Caul, “Small round structured viruses: airborne transmission and hospital control,” The Lancet 343, 1240 – 1242 (1994), originally published as Volume 1, Issue 8908.
* Marks _et al._ (2000) P. J. Marks, I. B. Vipond, D. Carlisle, D. Deakin, R. E. Fey, and E. O. Caul, “Evidence for airborne transmission of Norwalk-like virus (NLV) in a hotel restaurant,” Epidemiology and Infection 124, 481–487 (2000).
* Zhou _et al._ (2017) J. Zhou, C. Li, G. Zhao, H. Chu, D. Wang, H. H. N. Yan, V. K. M. Poon, L. Wen, B. H. Y. Wong, X. Zhao, M. C. Chiu, D. Yang, Y. Wang, R. K. Au-Yeung, I. H. Y. Chan, S. Sun, J. F. W. Chan, K. K. W. To, Z. A. Memish, V. M. Corman, C. Drosten, I. F. N. Hung, Y. Zhou, S. Y. Leung, and K. Y. Yuen, “Human intestinal tract serves as an alternative infection route for Middle East respiratory syndrome coronavirus,” Science Advances 3 (2017), 10.1126/sciadv.aao4966.
* Ho _et al._ (1989) M.-S. Ho, S. Monroe, S. Stine, D. Cubitt, R. Glass, H. Madore, P. Pinsky, C. Ashley, and E. Caul, “Viral gastroenteritis aboard a cruise ship,” The Lancet 334, 961 – 965 (1989).
* Widdowson _et al._ (2005) M.-A. Widdowson, R. Glass, S. Monroe, R. S. Beard, J. W. Bateman, P. Lurie, and C. Johnson, “Probable Transmission of Norovirus on an Airplane,” JAMA - Journal of the American Medical Association 293, 1855–1860 (2005).
* Barker and Bloomfield (2000) J. Barker and S. Bloomfield, “Survival of Salmonella in bathrooms and toilets in domestic homes following salmonellosis,” Journal of Applied Microbiology 89, 137–144 (2000).
* Barker and Jones (2005) J. Barker and M. V. Jones, “The potential spread of infection caused by aerosol contamination of surfaces after flushing a domestic toilet,” Journal of Applied Microbiology 99, 339–347 (2005).
* Best, Sandoe, and Wilcox (2012) E. L. Best, J. A. T. Sandoe, and M. H. Wilcox, Journal of Hospital Infection 80, 1–5 (2012).
* Knowlton _et al._ (2018) S. D. Knowlton, C. L. Boles, E. N. Perencevich, D. J. Diekema, M. W. Nonnenmann, and CDC Epicenters Program, “Bioaerosol concentrations generated from toilet flushing in a hospital-based patient care setting,” Antimicrobial Resistance & Infection Control 7, 16 (2018).
* Johnson _et al._ (2017) D. L. Johnson, R. A. Lynch, S. M. Villanella, J. F. Jones, H. Fang, K. R. Mead, and D. V. L. Hirst, “Persistence of Bowl Water Contamination during Sequential Flushes of Contaminated Toilets,” Journal of environmental health 80, 34–49 (2017).
* Aithinne _et al._ (2019) K. A. Aithinne, C. W. Cooper, R. A. Lynch, and D. L. Johnson, “Toilet plume aerosol generation rate and environmental contamination following bowl water inoculation with Clostridium difficile spores,” American Journal of Infection Control 47, 515–520 (2019).
* Li, Wang, and Chen (2020) Y. Y. Li, J. X. Wang, and X. Chen, “Can a toilet promote virus transmission? From a fluid dynamics perspective,” Physics of Fluids 32 (2020), 10.1063/5.0013318.
* Wang _et al._ (2020a) J. X. Wang, Y. Y. Li, X. D. Liu, and X. Cao, “Virus transmission from urinals,” Physics of Fluids 32 (2020a), 10.1063/5.0021450.
# Grading and Filtrations of Gamma Rings
Shadi Shaqaqha and Afnan Dagher Department of Mathematics, Yarmouk
University, Irbid, Jordan<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
The aim of this paper is to introduce and study graded and filtered gamma
rings and gamma modules. We show that the notion of a filtered $\Gamma$-ring
(module) generalizes that of a graded $\Gamma$-ring (module). Also, we
construct a graded $\Gamma$-ring from a filtered $\Gamma$-ring. We investigate
some properties of graded and filtered Gamma rings and Gamma modules. Finally,
we define and study strongly graded gamma rings.
###### Key words and phrases:
Gamma rings; Gamma ring homomorphism; graded $\Gamma$-rings; Graded
$\Gamma$-modules; Filtered Gamma rings; Filtered Gamma modules; Strongly
graded Gamma rings.
###### 2000 Mathematics Subject Classification:
16W50, 16W70, 16U80
## 1\. INTRODUCTION
The concept of $\Gamma$-rings was first introduced by Nobusawa in 1964 ([13]).
Later, Barnes generalized the notion of Nobusawa’s $\Gamma$-rings and
established a new definition of a $\Gamma$-ring ([4]). Nowadays, the term
$\Gamma$-ring usually refers to Barnes's definition. Many mathematicians
have been involved in extending the concepts and results of ring theory to the
broader framework of the $\Gamma$-ring setting (see e.g. [2, 16, 18, 19, 20]
and references therein).
Graded and filtered rings play a role in various branches of mathematics,
and they have become increasingly important. Consequently, graded
analogues of various concepts are widely studied (see [3, 5, 6, 8, 12, 10,
14]). For example, S. Shaqaqha, in his dissertation, considered gradings and
filtrations on colour Lie superalgebras and obtained Schreier-type formulas
for subalgebras of free colour Lie superalgebras ([14]).
So, our results are expected to be useful in various applications.
In [9], a group graded gamma ring is defined to be a $\Gamma$-ring $R$ which
is the direct sum of additive subgroups $R_{g}$, $g\in K$, such that
$R_{g}\Gamma R_{h}\subseteq R_{gh}$, for all $g,h\in K$, where $K$ is a
multiplicatively written group. In this paper we replace the group grading
with a semigroup grading, and then we introduce the notion of filtered
$\Gamma$-rings. We extend known results on graded rings to graded
$\Gamma$-rings.
The article is organized as follows. In Section 2, we recall some definitions
and notations that will be used in our work. In Section 3, we introduce the
notion of graded gamma rings and offer some examples. In Section 4, we
obtain some properties of semigroup graded gamma rings and present ways to
obtain new graded gamma rings from old ones. In Section 5, we define filtered
gamma rings and give a recipe to form a graded gamma ring from a filtered
gamma ring. In Section 6, we study semigroup graded gamma modules. In Section
7, we obtain a way to produce a graded ring from graded gamma modules. In
Section 8, we study filtered gamma modules and obtain some of their
properties. Strongly graded gamma rings are introduced and studied in
Section 9.
## 2\. Preliminaries
The notion of a gamma ring is one of the generalizations of that of a ring.
###### Definition 2.1.
Let $\Gamma$ be an additive abelian group. A $\Gamma$-ring (in the sense of
Barnes) is an additive abelian group $R$ together with a map
$R\times\Gamma\times R\rightarrow R:(x,\alpha,y)\mapsto x\alpha y$
and satisfying the following two identities, for any $x,y,z\in R$ and
$\alpha,\beta\in\Gamma$:
* (i)
$(x+y)\alpha z=x\alpha z+y\alpha z$, $x(\alpha+\beta)z=x\alpha z+x\beta z$,
$x\alpha(y+z)=x\alpha y+x\alpha z$.
* (ii)
$(x\alpha y)\beta z=x\alpha(y\beta z)$.
Clearly, every ring $R$ can be regarded as an $R$-ring, i.e. a $\Gamma$-ring
with $\Gamma=R$. Let $G$ be an additive group. We denote by $G_{m,n}$ the set
of all $m\times n$ matrices over $G$. Let $M$ be a $\Gamma$-ring. Then
$M_{m,n}$ forms a $\Gamma_{n,m}$-ring (see [19]).
Let $R$ be a $\Gamma$-ring. An element $1\in R$ is called a unity of $R$ if
$a\gamma 1=1\gamma a=a$ for all $a\in R$ and for some $\gamma\in\Gamma$. In
this case, we say $R$ is a $\Gamma$-ring with unity. One can easily show that
if a $\Gamma$-ring has a unity with respect to $\gamma\in\Gamma$, then it is
unique.
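The uniqueness claim admits a one-line verification, which we sketch for the reader's convenience:

```latex
% Suppose 1 and 1' are both unities of R with respect to the same
% \gamma \in \Gamma. Applying the unity property of 1' (with a = 1)
% and then of 1 (with a = 1'):
1 \;=\; 1\gamma 1' \;=\; 1',
% where the first equality uses a\gamma 1' = a and the second uses
% 1\gamma a = a.
```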
A nonempty subset $S$ of a $\Gamma$-ring $R$ is a sub-$\Gamma$-ring of $R$ if
$a-b\in S$ and $a\gamma b\in S$ for any $a,b\in S$ and $\gamma\in\Gamma$. A
subset $I$ of the $\Gamma$-ring $R$ is a left (right) ideal of $R$ if $I$ is
an additive subgroup of $R$ and $R\Gamma I=\\{r\alpha a~{}|~{}r\in
R,\alpha\in\Gamma,a\in I\\}$ ($I\Gamma R$) is contained in $I$. If $I$ is both
a left and a right ideal of $R$, then we say that $I$ is an ideal (or
two-sided ideal) of $R$. It is clear that the intersection of any number of left
(respectively right or two-sided) ideals of $R$ is also a left (respectively
right or two-sided) ideal of $R$.
Let $I$ be a (two-sided) ideal of a $\Gamma$-ring $R$. Then the factor group
$R/I=\\{x+I:x\in R\\}$ of all cosets of $I$ is a $\Gamma$-ring with respect to
the following multiplication:
$\displaystyle(x+I)\gamma(y+I)$ $\displaystyle=$ $\displaystyle(x\gamma
y)+I$
for $x,y\in R$ and $\gamma\in\Gamma$. Let $R_{1},R_{2},\ldots,R_{n}$ be
$\Gamma$-rings. Then the direct product
$\prod_{i=1}^{n}R_{i}=R_{1}\times\cdots\times R_{n}$ has the structure of a
$\Gamma$-ring under the componentwise addition and the following
multiplication
$(r_{1},\ldots,r_{n})\gamma(s_{1},\ldots,s_{n})=(r_{1}\gamma
s_{1},\ldots,r_{n}\gamma s_{n})$
for all $r_{i},s_{i}\in R_{i}~{}(i=1,\ldots,n)$ and $\gamma\in\Gamma$.
Suppose that $R_{1}$ and $R_{2}$ are $\Gamma$-rings and let
$\varphi:\Gamma\rightarrow\Gamma$ be a group isomorphism. Then
$f:R_{1}\rightarrow R_{2}$ is a $\varphi$-homomorphism if $f(x+y)=f(x)+f(y)$
and $f(x\gamma y)=f(x)\varphi(\gamma)f(y)$ for all $x,y\in R_{1}$ and
$\gamma\in\Gamma$. In particular, if $\varphi$ is the identity map, then $f$
is a homomorphism.
## 3\. Basic Definition and Examples
###### Definition 3.1.
Suppose that $G$ is an abelian semigroup written multiplicatively. Let $R$ be
a $\Gamma$-ring. Then $R$ is called a graded $\Gamma$-ring of type $G$ if
there exist additive subgroups $R_{g}$ of $R$ such that $R=\oplus_{g\in
G}R_{g}$ and $R_{g}\Gamma R_{h}\subseteq R_{gh}$ for all $g,h\in G$. A nonzero
element $r\in R$ is called homogeneous if there is $g\in G$ such that $r\in
R_{g}$; in this case we say the degree of $r$ is $g$ and we write
$\deg(r)=g$.
So, any nonzero element $r\in R$, where $R$ is a graded $\Gamma$-ring of type
$G$, has a unique representation as a finite sum of homogeneous
elements $r=r_{g_{1}}+\cdots+r_{g_{k}}$. In this case, we say that
$r_{g_{1}},r_{g_{2}},\ldots$, and $r_{g_{k}}$ are the homogeneous components
of $r$. The set $G_{R}=\\{g\in G~{}|~{}R_{g}\neq\\{0\\}\\}$ is called the
support of $R$.
Throughout the paper we denote by $G$ an abelian semigroup (written
multiplicatively) unless otherwise stated.
A two-sided ideal $I$ of a graded $\Gamma$-ring $R$ of type $G$ is called a
graded $\Gamma$-ideal (or homogeneous $\Gamma$-ideal) if $I=\bigoplus_{g\in
G}I_{g}$, where $I_{g}=I\cap R_{g}$. In other words, $I$ is a graded
$\Gamma$-ideal if and only if, for each $x\in I$, its homogeneous components
also belong to $I$.
###### Example 3.1.
* (i)
Let $G$ be a semigroup with identity $e$ (or a monoid). Any $\Gamma$-ring $R$
may be considered as a graded $\Gamma$-ring of type $G$ by putting $R_{e}=R$,
and $R_{g}=\\{0\\}$ for any $g\neq e$ in $G$ (i.e. $R$ has trivial
support). Such a graded $\Gamma$-ring is called trivial.
* (ii)
If $I$ is a graded $\Gamma$-ideal of a graded $\Gamma$-ring $R$ of type $G$,
then $R/I$ is also a graded $\Gamma$-ring and has the following decomposition:
$R/I=\bigoplus_{g\in G}\left(R_{g}+I\right)/I\cong\bigoplus_{g\in
G}\left(R_{g}/\left(R_{g}\cap I_{g}\right)\right).$
(The last isomorphism follows from the second isomorphism theorem for groups).
* (iii)
Given a semigroup $G$ and a $\Gamma$-ring $R$. Set $RG$ to be the set of all
linear combinations
$r=\sum_{g\in G}r_{g}g,$
where $r_{g}\in R$ and only finitely many of the $r_{g}$ are
nonzero. Next, we define the addition and the multiplication on $RG$ as
follows: for $\alpha=\sum_{g\in G}a_{g}g,\beta=\sum_{g\in G}b_{g}g\in RG$ and
$\gamma\in\Gamma$, we have
$\displaystyle\alpha+\beta$ $\displaystyle=$ $\displaystyle\sum_{g\in
G}(a_{g}+b_{g})g,$ $\displaystyle\alpha\gamma\beta$ $\displaystyle=$
$\displaystyle\sum_{g,h\in G}(a_{g}\gamma b_{h})gh.$
Then $RG$ is a graded $\Gamma$-ring of type $G$. Such a graded $\Gamma$-ring
is called a semigroup $\Gamma$-ring (and if $G$ is a group, it is called a
group Gamma ring).
* (iv)
Let $R$ be a $\Gamma$-ring. Then $R^{0}=R$ is a $\Gamma$-ring too under the
same addition on $R$, but the multiplication is defined by $x\circ\gamma\circ
y=y\gamma x$ for $x,y\in R^{0}$, and $\gamma\in\Gamma$. Furthermore, if $R$ is
a graded $\Gamma$-ring of type $G$, where $G$ is a group, then $R^{0}$ is a
graded $\Gamma$-ring of type $G$ too by setting $(R^{0})_{g}=R_{g^{-1}}$,
$g\in G$. Indeed for $x\in(R^{0})_{g}$, $y\in(R^{0})_{h}$, and
$\alpha\in\Gamma$, we have $x\circ\alpha\circ y=y\alpha x\in
R_{h^{-1}g^{-1}}=R_{(gh)^{-1}}=(R^{0})_{gh}$.
* (v)
Let $R=\bigoplus_{g\in G}R_{g}$ and $S=\bigoplus_{g\in G}S_{g}$ be graded
$\Gamma$-rings of type $G$. Then the $\Gamma$-ring $R\times S$ may be made
into a graded $\Gamma$-ring by setting:
$(R\times S)_{g}=R_{g}\times S_{g}~{}\forall g\in G.$
More generally, if $(R_{i})_{i\in I}$, where $I$ is finite, is a family of
graded $\Gamma$-rings of type $G$, then $R=\prod_{i\in I}R_{i}$ is a graded
$\Gamma$-ring of type $G$ by setting
$\left(\prod_{i\in I}R_{i}\right)_{g}=\prod_{i\in I}(R_{i})_{g}.$
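As a concrete illustration of the semigroup $\Gamma$-ring of Example 3.1(iii) (the specific choices below are ours): take $R=\Gamma=\mathbb{Z}$ with $x\alpha y$ the ordinary product of integers, and let $G=\\{t^{n}:n\geq 0\\}$ be the free monoid on one generator.

```latex
% In RG, with \gamma = 1 \in \Gamma = \mathbb{Z}:
\alpha = 2t + t^{2}, \qquad \beta = 3t^{0} + t,
\qquad
\alpha\,\gamma\,\beta
  = (2\cdot 3)\,t + (2\cdot 1)\,t^{2} + (1\cdot 3)\,t^{2} + (1\cdot 1)\,t^{3}
  = 6t + 5t^{2} + t^{3}.
% Each homogeneous product a_g \gamma b_h lands in the component indexed
% by gh, so RG = \bigoplus_{n \ge 0} \mathbb{Z}t^{n} is graded of type G.
```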
## 4\. Some Properties of Graded Gamma Rings
The following two theorems give us recipes for forming new graded gamma rings
from old ones.
###### Theorem 4.1.
Let $R$ be a graded $\Gamma$-ring of type $G$ where $G$ is a semigroup. If
$\varphi:G\rightarrow H$ is an onto homomorphism (epimorphism) of semigroups,
then the $\Gamma$-ring $S=R$ with gradation
$S_{h}=\bigoplus_{g\in\varphi^{-1}(h)}R_{g}$
for all $h\in H$ is a graded $\Gamma$-ring of type $H$ (such graded
$\Gamma$-ring is denoted by $R_{(H)}$).
Proof. Let $a,b\in H$, $x\in S_{a}$ and $y\in S_{b}$. Then
$x=x_{a_{1}}+\cdots+x_{a_{r}}$ and $y=y_{b_{1}}+\cdots+y_{b_{s}}$, where
$\varphi(a_{i})=a$ and $\varphi(b_{j})=b$ for all $i,j$. Now, for
$\gamma\in\Gamma$, we obtain
$x\gamma y=\sum_{1\leq i\leq r,1\leq j\leq s}x_{a_{i}}\gamma y_{b_{j}}.$
Also, for $1\leq i\leq r$ and $1\leq j\leq s$, we have
$\varphi(a_{i}b_{j})=\varphi(a_{i})\varphi(b_{j})=ab$. Thus $x\gamma y\in
S_{ab}$. $\Box$
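For example (our illustration), if $R=\bigoplus_{n\in\mathbb{Z}}R_{n}$ is graded of type the additive group $\mathbb{Z}$ and $\varphi:\mathbb{Z}\rightarrow\mathbb{Z}/2\mathbb{Z}$ is the canonical epimorphism, Theorem 4.1 coarsens the grading into even and odd parts:

```latex
S_{\bar{0}} = \bigoplus_{n\ \mathrm{even}} R_{n},
\qquad
S_{\bar{1}} = \bigoplus_{n\ \mathrm{odd}} R_{n},
% and the grading condition becomes
S_{\bar{0}}\Gamma S_{\bar{0}} \subseteq S_{\bar{0}},\quad
S_{\bar{0}}\Gamma S_{\bar{1}} \subseteq S_{\bar{1}},\quad
S_{\bar{1}}\Gamma S_{\bar{1}} \subseteq S_{\bar{0}},
% since even+even and odd+odd are even while even+odd is odd.
```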
Also, we have the following result. We omit the proof since the proof is
straightforward.
###### Theorem 4.2.
Let $R$ be a graded $\Gamma$-ring of type $G$, and let $H$ be a subsemigroup
of $G$. Then
* (i)
$R^{\prime}=\bigoplus_{h\in H}R_{h}$ is a graded $\Gamma$-ring of type $H$
(such a graded $\Gamma$-ring is denoted by $R^{(H)}$). In particular, if $e$
is the identity of $G$, then $R_{e}$ corresponds to the trivial subsemigroup
of $G$.
* (ii)
If $N$ is a normal subsemigroup of $G$, then
$R=\bigoplus_{gN\in G/N}R_{gN}$
is a graded $\Gamma$-ring of type $G/N$, where
$R_{gN}=\bigoplus_{n\in N}R_{gn}.$
The following theorem extends a well-known result on group graded rings.
###### Theorem 4.3.
Let $G$ be a monoid with the identity element $e$, and let $R$ be a graded
$\Gamma$-ring of type $G$. Then $R_{e}$ is a sub-$\Gamma$-ring of $R$. Also if
$1$ is the unity element with respect to $\gamma_{0}\in\Gamma$, then $1\in
R_{e}$.
Proof. For $a,b\in R_{e}$, we have $a-b\in R_{e}$ since $R_{e}$ is an
additive subgroup of $R$, and also $a\gamma b\in R_{ee}=R_{e}$ for any
$\gamma\in\Gamma$. It follows that $R_{e}$ is a sub-$\Gamma$-ring of $R$.
Next, as $R=\bigoplus_{g\in G}R_{g}$, we may assume
$1=r_{g_{1}}+r_{g_{2}}+\cdots+r_{g_{n}}$. Pick $\tau\in G$, and a nonzero
element $\lambda_{\tau}\in R_{\tau}$; then
$1\gamma_{0}\lambda_{\tau}=\lambda_{\tau}=r_{g_{1}}\gamma_{0}\lambda_{\tau}+r_{g_{2}}\gamma_{0}\lambda_{\tau}+\cdots+r_{g_{n}}\gamma_{0}\lambda_{\tau},$
and so, for all $g_{i}\in G$ with $g_{i}\neq e$, we have
$r_{g_{i}}\gamma_{0}\lambda_{\tau}=0$. Hence,
$1\gamma_{0}\lambda_{\tau}=r_{e}\gamma_{0}\lambda_{\tau}$. Therefore, by the
uniqueness of the unity, we have $1=r_{e}\in R_{e}$. $\Box$
Let $R$ be a $\Gamma$-ring with a unity $1$ corresponding to
$\gamma_{0}\in\Gamma$. An element $r\in R$ is called invertible with respect
to $1$ and $\gamma_{0}$ if there is a (unique) element $r^{-1}\in R$ such
that $r^{-1}\gamma_{0}r=r\gamma_{0}r^{-1}=1$.
###### Theorem 4.4.
Let $G$ be a group with the identity element $e$, and let $R$ be a graded
$\Gamma$-ring of type $G$. If $r\in R_{g}$ is an invertible (corresponding to
$\gamma_{0}\in\Gamma$) homogeneous element, then $r^{-1}$ is homogeneous too
of degree $g^{-1}$.
Proof. Let $s=s_{g_{1}}+\cdots+s_{g_{k}}$ be the inverse of $r$. Then
$1=r\gamma_{0}s=r\gamma_{0}s_{g_{1}}+\cdots+r\gamma_{0}s_{g_{k}},$
where $r\gamma_{0}s_{g_{i}}\in R_{gg_{i}}$ for all $i=1,\ldots,k$. By Theorem
4.3, $1$ is homogeneous of degree $e$, and also, the decomposition is unique,
so that $r\gamma_{0}s_{g_{i}}=0$ for each $g_{i}\neq g^{-1}$. Since $r$ is
invertible, $s_{g^{-1}}\neq 0$, and so $s=s_{g^{-1}}\in R_{g^{-1}}$ as
desired. $\Box$
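Theorem 4.4 can be illustrated in a group $\Gamma$-ring (this example is ours): let $R$ be a $\Gamma$-ring with unity $1$ with respect to $\gamma_{0}$, let $G$ be a group with identity $e$, and consider the group $\Gamma$-ring $RG$ of Example 3.1(iii), whose unity is $1e$ with respect to $\gamma_{0}$.

```latex
% For g \in G, the element 1g is homogeneous of degree g, and
(1g)\,\gamma_{0}\,(1g^{-1}) = (1\gamma_{0}1)\,(gg^{-1}) = 1e,
% so its inverse 1g^{-1} is homogeneous of degree g^{-1},
% exactly as Theorem 4.4 predicts.
```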
## 5\. Filtered Gamma Rings
The following definition introduces the notion of filtered $\Gamma$-rings. It
is a generalization of the notion of filtered rings [11].
###### Definition 5.1.
Let $R$ be a $\Gamma$-ring and let $R^{0}\subseteq R^{1}\subseteq
R^{2}\subseteq\cdots$ be a chain of additive subgroups of $R$. This chain is
called an (ascending) filtration of $R$ if
$\bigcup_{m\geq 0}R^{m}=R~{}\mathrm{and}~{}R^{i}\Gamma R^{j}\subseteq
R^{i+j}~{}\forall i,j\geq 0.$
In this case we say that $R$ is a filtered $\Gamma$-ring.
Let $R$ and $S$ be filtered $\Gamma$-rings. A homomorphism
$\varphi:R\rightarrow S$ is called a filtered homomorphism if
$\varphi(R^{i})\subseteq S^{i}$ for $i=0,1,\ldots$.
###### Example 5.1.
* (i)
Any $\Gamma$-ring $R$ can be made into a filtered $\Gamma$-ring by setting
$R^{k}=R$ for all $k\geq 0$. Such a filtration is called trivial.
* (ii)
Let $R$ be a $\Gamma$-ring, and let
$R[x]=\\{p(x)=a_{0}+a_{1}x+\cdots+a_{n}x^{n}:a_{0},a_{1},\cdots,a_{n}\in
R\\}.$
Then it is easy to see that $R[x]$ is a $\Gamma$-ring under the ordinary
addition and the multiplication defined as follows: if
$p(x)=a_{0}+a_{1}x+\cdots+a_{n}x^{n},q(x)=b_{0}+b_{1}x+\cdots+b_{m}x^{m}\in
R[x]$, then
$p(x)\alpha q(x)=\sum_{k=0}^{m+n}\left(\sum_{i+j=k}a_{i}\alpha
b_{j}\right)x^{k}.$
Moreover, $R[x]$ has a filtration $R^{0}\subseteq R^{1}\subseteq
R^{2}\subseteq\cdots$, where $R^{k}$ is spanned by all monomials of degree
less than or equal to $k$.
* (iii)
Given a graded $\Gamma$-ring $R=\bigoplus_{i\geq 0}R_{i}$ of type
$\mathbb{Z}$, then there is a corresponding filtration
$\bigcup_{k=0}^{\infty}R^{k},$
where $R^{k}=\bigoplus_{j=0}^{k}R_{j}$.
Conversely, the following result gives us a way to produce a graded
$\Gamma$-ring out of a filtered $\Gamma$-ring.
###### Theorem 5.1.
Let $R$ be a $\Gamma$-ring with filtration $R^{0}\subseteq R^{1}\subseteq
R^{2}\subseteq\cdots$. Set $R^{-1}=0$, and let
$\mathrm{gr}R=\bigoplus_{k=0}^{\infty}\left(R^{k}/R^{k-1}\right)$. Then define
multiplication
$R^{m}/R^{m-1}\times\Gamma\times R^{n}/R^{n-1}\rightarrow R^{m+n}/R^{m+n-1}$
by $\left(r+R^{m-1}\right)\gamma\left(s+R^{n-1}\right)=(r\gamma s)+R^{m+n-1}$.
Extending this multiplication by multilinearity makes $\mathrm{gr}R$ a
graded $\Gamma$-ring.
Proof. We must show that the map is well defined. Suppose that
$x+R^{m-1}=x^{\prime}+R^{m-1}$, $y+R^{n-1}=y^{\prime}+R^{n-1}$, and
$\gamma\in\Gamma$. Then $x-x^{\prime}=r_{0}$ and $y-y^{\prime}=r_{1}$ for
some $r_{0}\in R^{m-1}$ and $r_{1}\in R^{n-1}$. Thus
$\displaystyle x\gamma y-x^{\prime}\gamma y^{\prime}$ $\displaystyle=$
$\displaystyle(x^{\prime}+r_{0})\gamma(y^{\prime}+r_{1})-x^{\prime}\gamma
y^{\prime}$ $\displaystyle=$ $\displaystyle x^{\prime}\gamma
r_{1}+r_{0}\gamma y^{\prime}+r_{0}\gamma r_{1}$
is a member of $R^{m+n-1}$. The remainder of the proof is quite
straightforward.
$\Box$
###### Remark 1.
If the filtration $R^{k}$, given in the theorem above, comes from a grading
(i.e. $R^{k}=\bigoplus_{i=0}^{k}R_{i}$), then $R^{k}/R^{k-1}\cong R_{k}$ as
additive groups, and so $\mathrm{gr}R\cong R$ as $\Gamma$-rings.
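As a sanity check of Remark 1 (our computation, under the assumption that $R$ is a $\Gamma$-ring), consider the degree filtration of $R[x]$ from Example 5.1(ii):

```latex
% R^{k} consists of the polynomials of degree at most k, and the map
p(x) + R^{k-1} \;\longmapsto\; a_{k}
\quad \bigl(p(x) = a_{0} + a_{1}x + \cdots + a_{k}x^{k}\bigr)
% is an isomorphism of additive groups R^{k}/R^{k-1} \cong R. Under the
% multiplication of Theorem 5.1,
\left(ax^{m} + R^{m-1}\right)\gamma\left(bx^{n} + R^{n-1}\right)
  = (a\gamma b)\,x^{m+n} + R^{m+n-1},
% so \mathrm{gr}\,R[x] \cong R[x] as graded \Gamma-rings, in accordance
% with Remark 1.
```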
## 6\. Graded Gamma Modules
Let $R$ be a $\Gamma$-ring. An abelian group $M$ together with a mapping
$R\times\Gamma\times M\rightarrow M;~{}(r,\gamma,m)\mapsto r\gamma m,$
such that the following identities are satisfied for all $r_{1},r_{2},r\in R$,
$\gamma_{1},\gamma_{2},\gamma\in\Gamma$, and $m_{1},m_{2},m\in M$:
* (i)
$r\gamma(m_{1}+m_{2})=r\gamma m_{1}+r\gamma m_{2}$,
* (ii)
$(r_{1}+r_{2})\gamma m=r_{1}\gamma m+r_{2}\gamma m$,
* (iii)
$r(\gamma_{1}+\gamma_{2})m=r\gamma_{1}m+r\gamma_{2}m$,
* (iv)
$(r_{1}\gamma_{1})(r_{2}\gamma_{2}m)=(r_{1}\gamma_{1}r_{2})\gamma_{2}m$
is called a (left) $R_{\Gamma}$-module. A right $R_{\Gamma}$-module is defined
in an analogous manner ([1]). If $R$ and $S$ are $\Gamma$-rings, then we say
$M$ is an $(R,S)_{\Gamma}$-bimodule if it is both a left $R_{\Gamma}$-module
and a right $S_{\Gamma}$-module and simultaneously $(r\alpha m)\beta
s=r\alpha(m\beta s)$ for all $r\in R$, $m\in M$, $s\in S$, and
$\alpha,\beta\in\Gamma$. A (left) $R_{\Gamma}$-module $M$ is unitary if there
exists an element, say $1$, in $R$ and $\gamma_{0}\in\Gamma$ such that
$1\gamma_{0}m=m~{}\forall m\in M$. Note that if $M$ is a left
$R_{\Gamma}$-module then it is easy to verify that $0\gamma m=r0m=r\gamma 0=0$
for any $r\in R,\gamma\in\Gamma$ and $m\in M$ (where the three zeros on the
left are those of $R$, $\Gamma$, and $M$, respectively).
###### Example 6.1.
* (i)
If $R$ is a $\Gamma$-ring and $M$ is an abelian group, then $M$ is an
$R_{\Gamma}$-module by letting $r\gamma m=0~{}\forall r\in
R,\gamma\in\Gamma,m\in M$.
* (ii)
Every $\Gamma$-ring $R$ is an $R_{\Gamma}$-module by
$R\times\Gamma\times R\rightarrow R;~{}(r_{1},\gamma,r_{2})\mapsto r_{1}\gamma
r_{2}.$
* (iii)
Let $R$ be a graded $\Gamma$-ring of type $G$, where $G$ is a monoid with
identity $e$. Then $R$ can be considered as an
$\left(R_{e},R_{e}\right)_{\Gamma}$-bimodule.
Let $R$ be a $\Gamma$-ring and $M$ an $R_{\Gamma}$-module. A nonempty subset
$M_{1}$ of $M$ is called a (left) $R_{\Gamma}$-submodule of $M$ if $M_{1}$ is
a subgroup of $M$ and $R\Gamma M_{1}\subseteq M_{1}$. In this case we write
$M_{1}\leq M$. For a $\Gamma$-ring $R$, $\\{0\\}$ and $R$ are
$R_{\Gamma}$-submodules of the $R_{\Gamma}$-module $R$. Also, every ideal of
$R$ is a submodule of $R$.
The following definition is a generalization of the notion of a graded module.
###### Definition 6.1.
Let $R$ be a graded $\Gamma$-ring of type $G$, and $M$ be an
$R_{\Gamma}$-module. Then $M$ is said to be graded (left) $R_{\Gamma}$-module
if there is a family $\\{M_{g}~{}:~{}g\in G\\}$ of additive subgroups of $M$
such that $M=\bigoplus_{g\in G}M_{g}$ and $R_{g}\Gamma M_{h}\subseteq M_{gh}$
for all $g,h\in G$. A nonzero element $m\in M$ is called homogeneous of degree
$g$, and we write $\mathrm{deg}(m)=g$, if there is $g\in G$ such that $m\in
M_{g}$. A submodule $N$ of $M$ is a graded submodule if $N=\bigoplus_{g\in
G}(N\cap M_{g})$ (equivalently, for any $x\in N$ the homogeneous components
of $x$ are again in $N$).
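A basic example, added here for illustration: every graded $\Gamma$-ring is a graded module over itself.

```latex
% If R = \bigoplus_{g \in G} R_{g} is a graded \Gamma-ring, view R as a
% left R_\Gamma-module via (r,\gamma,m) \mapsto r\gamma m and set
M_{g} = R_{g} \quad (g \in G).
% Then the module grading condition
R_{g}\Gamma M_{h} \subseteq M_{gh}
% is exactly the ring grading condition R_{g}\Gamma R_{h} \subseteq R_{gh}.
```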
###### Example 6.2.
Let $R$ be a graded $\Gamma$-ring of type $G$, $M$ be a graded
$R_{\Gamma}$-module, and $K$ be a submodule of $M$. Then
$K^{\prime}=\bigoplus_{g\in G}K_{g}$, where $K_{g}$ is the additive subgroup
of $K$ generated by $K\cap(h(M))_{g}$ and $(h(M))_{g}$ is the set of all
homogeneous elements of degree $g$ (i.e. $K_{g}$ is the smallest subgroup of
$K$ that contains $K\cap(h(M))_{g}$), is a graded submodule of $M$. One can
easily show that $K^{\prime}$ is the largest submodule of $K$ that is a
graded submodule of $M$.
It is known that if $R$ is a $\Gamma$-ring, $M$ is an $R_{\Gamma}$-module, and
$K$ is a submodule of $M$, then the factor group $M/K$ is an
$R_{\Gamma}$-module under the mapping
$R\times\Gamma\times M/K\rightarrow M/K;~{}(r,\gamma,m+K)\mapsto(r\gamma
m)+K$
(see [1]). Using the same arguments as in the graded ring case we can prove the
following result.
###### Proposition 6.1.
Let $R$ be a graded $\Gamma$-ring of type $G$ and $M$ be a graded
$R_{\Gamma}$-module. If $K$ is a submodule of $M$, then the factor module
$M/K$ is a graded $R_{\Gamma}$-module by setting
$(M/K)_{g}=(M_{g}+K)/K\cong M_{g}/(K\cap M_{g})$
for any $g\in G$.
## 7\. On Homomorphisms of Gamma Modules
Suppose that $R$ is a graded $\Gamma$-ring of type $G$ and that $M$ and $K$
are (left) graded $R_{\Gamma}$-modules. Let $\varphi:\Gamma\rightarrow\Gamma$
be a group isomorphism. Then $f:M\rightarrow K$ is a $\varphi$-homomorphism if
$f(x+y)=f(x)+f(y)$ and $f(r\gamma x)=r\varphi(\gamma)f(x)$ for all $x,y\in M$,
$r\in R$, and $\gamma\in\Gamma$. In particular, if $\varphi$ is the identity
map, then $f$ is a homomorphism. It is called an isomorphism if it is
one-to-one and onto.
For graded (left) $R_{\Gamma}$-modules $M$ and $K$, a ($\varphi$-)homomorphism
$f:M\rightarrow K$ is called homogeneous of degree $h\in G$ if for all $g\in
G$, we have $f(M_{g})\subseteq K_{hg}$. Similarly, for graded
$\Gamma$-rings $R$ and $K$, a ($\varphi$-)homomorphism of $\Gamma$-rings
$f:R\rightarrow K$ is called homogeneous of degree $h\in G$ if for all $g\in
G$, we have $f(R_{g})\subseteq K_{hg}$. A homogeneous ($\varphi$-)homomorphism
of $R_{\Gamma}$-modules of degree $e\in G$ (the identity element of $G$) will
be called a degree preserving map. Clearly the set $\mathrm{Hom}_{R}(M,K)$ of
all homomorphisms of $R_{\Gamma}$-modules from $M$ into $K$ forms a group
under the ordinary addition. Also, for graded $R_{\Gamma}$-modules
$M_{1},M_{2}$, and $M_{3}$, the composition of a homogeneous
($\varphi$-)homomorphism $f:M_{1}\rightarrow M_{2}$ of degree $h\in G$ with a
homogeneous ($\varphi$-)homomorphism $g:M_{2}\rightarrow M_{3}$ of degree
$k\in G$ is a ($\varphi$-)homomorphism of degree $hk$.
###### Definition 7.1.
Let $R$ be a $\Gamma$-ring and $M$ be an $R_{\Gamma}$-module. Then we say $M$
is finitely generated if there exist $m_{1},\ldots,m_{k}$ in $M$ such that if
$x\in M$, then there exist $r_{1},\ldots,r_{k}\in R$ and
$\gamma_{1},\ldots,\gamma_{k}\in\Gamma$ with
$x=r_{1}\gamma_{1}m_{1}+\cdots+r_{k}\gamma_{k}m_{k}$. In this case the set
$\\{m_{1},\ldots,m_{k}\\}$ is said to be a generating set of $M$.
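For instance (our observation), a $\Gamma$-ring $R$ with unity is finitely generated as an $R_{\Gamma}$-module:

```latex
% If 1 is the unity of R with respect to \gamma_{0} \in \Gamma, then for
% every x \in R,
x = x\,\gamma_{0}\,1,
% so \{1\} is a generating set of the R_\Gamma-module R.
```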
The following theorem gives us a procedure to produce a graded ring out of
graded $\Gamma$-modules.
###### Theorem 7.1.
Let $G$ be a group, and let $M$ and $K$ be graded $R_{\Gamma}$-modules of
type $G$. If $M$ is finitely generated, then
$\mathrm{Hom}_{R}(M,K)=\bigoplus_{g\in G}\left(\mathrm{Hom}_{R}(M,K)\right)_{g}$
is a graded abelian group, where $\left(\mathrm{Hom}_{R}(M,K)\right)_{g}$ is
the subgroup of $\mathrm{Hom}_{R}(M,K)$ consisting of all homomorphisms from
$M$ to $K$ of degree $g$. Moreover, if $M=K$, then $\mathrm{Hom}_{R}(M,M)$ is
a graded ring of type $G$.
Proof. Suppose that $f\in\mathrm{Hom}_{R}(M,K)$. Let $g\in G$. Define a map
$f_{g}:M\rightarrow K$
as follows: for $m=m_{g_{1}}+m_{g_{2}}+\cdots+m_{g_{k}}\in M$
($g_{1},g_{2},\ldots,g_{k}\in G$), set
$f_{g}(m)=\left(f(m_{g_{1}g^{-1}})\right)_{g_{1}}+\cdots+\left(f(m_{g_{k}g^{-1}})\right)_{g_{k}}.$
Then $f_{g}\in\mathrm{Hom}_{R}(M,K)$. Indeed if $x=\sum_{h\in G}x_{h}$,
$y=\sum_{h\in G}y_{h}\in M$, then
$\displaystyle f_{g}(x+y)$ $\displaystyle=$ $\displaystyle\sum_{h\in
G}\left(f((x+y)_{hg^{-1}})\right)_{h}$ $\displaystyle=$ $\displaystyle\sum_{h\in
G}\left(f(x_{hg^{-1}}+y_{hg^{-1}})\right)_{h}$ $\displaystyle=$
$\displaystyle\sum_{h\in G}\left(f(x_{hg^{-1}})+f(y_{hg^{-1}})\right)_{h}$
$\displaystyle=$ $\displaystyle\sum_{h\in
G}\left(\left(f(x_{hg^{-1}})\right)_{h}+\left(f(y_{hg^{-1}})\right)_{h}\right)$
$\displaystyle=$ $\displaystyle f_{g}(x)+f_{g}(y).$
Also, for $r\in R$ and $\gamma\in\Gamma$, we have
$\displaystyle f_{g}(r\gamma x)$ $\displaystyle=$ $\displaystyle\sum_{h\in
G}\left(f((r\gamma x)_{hg^{-1}})\right)_{h}$ $\displaystyle=$
$\displaystyle\sum_{h\in G}\left(r\gamma f(x_{hg^{-1}})\right)_{h}$
$\displaystyle=$ $\displaystyle r\gamma\sum_{h\in
G}\left(f(x_{hg^{-1}})\right)_{h}$ $\displaystyle=$ $\displaystyle r\gamma
f_{g}(x).$
In addition, $f_{g}\in\left(\mathrm{Hom}_{R}(M,K)\right)_{g}$. Indeed for any
$h\in G$ and $m\in M_{h}$, we have $f_{g}(m)=(f(m))_{hg}\in K_{hg}$.
Let us assume that $\\{m_{1},\ldots,m_{s}\\}$ is a generating set of $M$. For
$x\in M$, there exist $r_{1},\ldots,r_{s}\in R$ and
$\gamma_{1},\ldots,\gamma_{s}\in\Gamma$ with
$x=r_{1}\gamma_{1}m_{1}+\cdots+r_{s}\gamma_{s}m_{s}$. Thus
$f(x)=r_{1}\gamma_{1}f(m_{1})+\cdots+r_{s}\gamma_{s}f(m_{s})$. On the other
hand, for each $t=1,2,\ldots,s$, $f_{g}(m_{t})=0$ for all but a finite number
of $g\in G$ and
$\sum_{g\in G}f_{g}(m_{t})=f(m_{t}).$
It follows that $f_{g}(x)=0$ for all but finitely many $g\in G$,
and $f=\sum_{g\in G}f_{g}$. We have so far shown that
$\mathrm{Hom}_{R}(M,K)=\sum_{g\in G}(\mathrm{Hom}_{R}(M,K))_{g}$. On the other
hand if $f\in(\mathrm{Hom}_{R}(M,K))_{h_{1}}\cap(\mathrm{Hom}_{R}(M,K))_{h_{2}}$
where $h_{1},h_{2}\in G$ and $h_{1}\neq h_{2}$, then for any $g\in G$ we have
$f(M_{g})\subseteq K_{h_{1}g}\cap K_{h_{2}g}=\\{0\\}$. Hence $f=0$. This shows
that $\mathrm{Hom}_{R}(M,K)=\bigoplus_{g\in G}(\mathrm{Hom}_{R}(M,K))_{g}$. In
particular, if $M=K$, we obtain $\mathrm{Hom}_{R}(M,M)=\bigoplus_{g\in
G}(\mathrm{Hom}_{R}(M,M))_{g}$. For $\rho\in(\mathrm{Hom}_{R}(M,M))_{h_{1}}$
and $\sigma\in(\mathrm{Hom}_{R}(M,M))_{h_{2}}$ we have
$\rho\sigma\in(\mathrm{Hom}_{R}(M,M))_{h_{1}h_{2}}$. It is clear now that
$\mathrm{Hom}_{R}(M,M)$ is a graded ring of type $G$. $\Box$
## 8\. Filtered $\Gamma$-Modules
Let $R$ be a filtered $\Gamma$-ring and $M$ be a (left) $R_{\Gamma}$-module.
Then $M$ is called a filtered $R_{\Gamma}$-module if there is an (ascending)
chain
$M^{0}\subseteq M^{1}\subseteq M^{2}\subseteq\cdots$
of additive subgroups of $M$ such that $\bigcup_{k\geq 0}M^{k}=M$ and
$R^{i}\Gamma M^{j}\subseteq M^{i+j}$ for any $i,j$.
###### Example 8.1.
* (i)
It is clear that if $R$ is a filtered $\Gamma$-ring, then $R$ is a filtered
$R_{\Gamma}$-module.
* (ii)
If the filtration of a graded $\Gamma$-ring $R$ is trivial and $M$ is an
$R_{\Gamma}$-module, then any ascending chain $K^{0}\subseteq
K^{1}\subseteq\cdots$ of submodules of $M$ with $\bigcup_{s\geq 0}K^{s}=M$
defines a filtration of $M$.
* (iii)
Let $R$ be a $\Gamma$-ring and $I\subseteq R$ be an ideal of $R$. Then $R$ has
a descending filtration $R=R^{0}\supseteq R^{1}\supseteq
R^{2}\supseteq\cdots$, where
$R^{0}=R,R^{1}=I,R^{2}=I\Gamma I,~{}\mathrm{and}~{}R^{k}=R^{k-1}\Gamma
I~{}\mathrm{for}~{}k\geq 3.$
Such a filtration is called the $I$-adic filtration. Let $M$ be an
$R_{\Gamma}$-module. The corresponding descending filtration of $M$ is
$M\supseteq R^{1}\Gamma M\supseteq R^{2}\Gamma M\supseteq\cdots.$
Let $R$ be a graded $\Gamma$-ring and let $M=\bigoplus_{i=0}^{\infty}M_{i}$ be
a graded $R_{\Gamma}$-module. Consider the corresponding filtration of $R$ as
in Example 5.1(iii), then there is a corresponding filtration on $M$
$M^{0}\subseteq M^{1}\subseteq M^{2}\subseteq\cdots$
where $M^{k}=\bigoplus_{j=0}^{k}M_{j}$. Conversely, one can prove the
following result.
###### Theorem 8.1.
If $M$ is a filtered $R_{\Gamma}$-module, then
$\mathrm{gr}(M)=\bigoplus_{i=0}^{\infty}M^{i}/M^{i-1}$
is a graded $(\mathrm{gr}R)_{\Gamma}$-module, where $M^{-1}=\\{0\\}$,
$R^{-1}=\\{0\\}$, and $\mathrm{gr}R=\bigoplus_{i=0}^{\infty}R^{i}/R^{i-1}$.
Let $R$ be a filtered $\Gamma$-ring and $M=\bigcup_{k\geq 0}M^{k}$ be a
filtered $R_{\Gamma}$-module. It is trivial that $\bigcup_{k\geq 0}M^{k}$ is
an $R_{\Gamma}$-submodule of $M$. The following theorem answers the question:
Is $\bigcap_{k\geq 0}M^{k}$ an $R_{\Gamma}$-submodule?
###### Theorem 8.2.
Let $R$ be a filtered $\Gamma$-ring and $M$ be a filtered $R_{\Gamma}$-module
by $M^{k}$. Then $\bigcap M^{k}$ is an $R_{\Gamma}$-submodule of $M$.
Proof. The intersection of any collection of subgroups of a group is a
subgroup, so $\bigcap M^{k}$ is a subgroup of $M$. Let $\lambda\in R$,
$\gamma\in\Gamma$, and $x\in\bigcap M^{k}$. There exists $p\in\mathbb{N}_{0}$
such that $\lambda\in R^{p}$. As $x\in M^{k-p}$ for all $k\geq p$, we have
$\lambda\gamma x\in M^{k}$ for all $k$. That is, $\lambda\gamma x\in\bigcap
M^{k}$. $\Box$
###### Remark 2.
One can focus on studying graded $\Gamma$-bimodules. Let $R$ and $S$ be graded
$\Gamma$-rings of type $G$ and let $M$ be an abelian group. Then
$M=\bigoplus_{g\in G}M_{g}$ is a graded $(R,S)_{\Gamma}$-bimodule of type $G$
if the following conditions are satisfied:
* (i)
$M$ is an $(R,S)_{\Gamma}$-bimodule,
* (ii)
$M$ is a graded left $R_{\Gamma}$-module,
* (iii)
$M$ is a graded right $S_{\Gamma}$-module.
Therefore $R_{g}\Gamma M_{h}\Gamma S_{k}\subseteq M_{ghk}$ for $g,h,k\in G$.
## 9\. Strongly Graded Gamma Rings
Let $M$ be a graded $\Gamma$-ring of type $G$ with identity $1$, where $G$ is
a monoid. According to Theorem 4.3, we have $1\in M_{e}$. Thus $M_{g}\Gamma
M_{e}=M_{g}$ and $M_{e}\Gamma M_{g}=M_{g}$ for all $g\in G$. If these
equalities hold for any two elements $g,h\in G$, that is, $M_{g}\Gamma
M_{h}=M_{gh}$, we get a strongly graded $\Gamma$-ring. In other words, we have
the following definition.
###### Definition 9.1.
Let $G$ be a semigroup. A graded $\Gamma$-ring $M$ of type $G$ is called a
strongly graded $\Gamma$-ring if $M_{g}\Gamma M_{h}=M_{gh}$ for all $g,h\in
G$.
###### Example 9.1.
* (i)
Let $G$ be a group and $M$ be a $\Gamma$-ring. The group $\Gamma$-ring $MG$ is
a strongly graded $\Gamma$-ring of type $G$.
* (ii)
If $M$ is a strongly graded $\Gamma$-ring, then so is $M/I$ for every graded
ideal $I$ of $M$.
* (iii)
If $M$ is a strongly graded $\Gamma$-ring of type $G$ and $\phi:G\rightarrow
H$ is an epimorphism, then $M_{(H)}$ is strongly graded.
* (iv)
If $M$ is a strongly graded $\Gamma$-ring of type $G$, and $H$ is a
sub(semi-)group of $G$, then $M^{(H)}$ is strongly graded.
* (v)
Let $(M_{i})_{i\in I}$ be a family of graded $\Gamma$-rings of type $G$ such
that $I$ is finite. Then $M=\prod_{i\in I}M_{i}$ is strongly graded if and
only if $M_{i}$ is strongly graded for each $i\in I$.
Let $M$ be a $\Gamma$-ring with unity $1_{\gamma_{0}}$, and let $M^{*}$
denote the set of invertible elements in $M$ with respect to $\gamma_{0}$.
Define multiplication on $M^{*}$ by $mn=m\gamma_{0}n$ for $m,n\in M^{*}$.
Then $M^{*}$ forms a group: if $m,n\in M^{*}$, then
$(mn)(n^{-1}m^{-1})=(m\gamma_{0}n)\gamma_{0}(n^{-1}\gamma_{0}m^{-1})=1_{\gamma_{0}}$.
Also, the associativity follows from the defining properties of $\Gamma$-rings.
The support of invertible homogeneous elements of $M$ is defined by
$G_{M}^{*}=\\{g\in G~{}|~{}M_{g}^{*}\neq\emptyset\\}$
where $M_{g}^{*}=M_{g}\cap M^{*}$. Using Theorem 4.4 it is easy to see that
$G_{M}^{*}$ is a group. Also $G_{M}^{*}\subseteq G_{M}$ where $G_{M}=\\{g\in
G~{}|~{}M_{g}\neq\\{0\\}\\}$ is the support of $M$.
###### Definition 9.2.
A graded $\Gamma$-ring $M=\bigoplus_{g\in G}M_{g}$ with unity
$1_{\gamma_{0}}$, where $G$ is a group, is called a crossed product if every
homogeneous component of $M$ contains an invertible element, that is,
$G_{M}^{*}=G$.
###### Theorem 9.1.
Let $M=\bigoplus_{g\in G}M_{g}$ be a graded $\Gamma$-ring of type $G$, where
$G$ is a group, with unity $1_{\gamma_{0}}$. Then
* (i)
$M$ is strongly graded if and only if $1_{\gamma_{0}}\in M_{g}\Gamma
M_{g^{-1}}$ for all $g\in G$,
* (ii)
if $M$ is strongly graded, then the support of $M$ is $G$,
* (iii)
any crossed product $\Gamma$-ring is strongly graded,
* (iv)
if $N$ is a graded $\Gamma$-ring, $\varphi:M\rightarrow N$ is a
degree-preserving homomorphism of $\Gamma$-rings, and $M$ is a strongly
graded $\Gamma$-ring, then $N$ is strongly graded too.
Proof.
* (i)
Suppose that $M$ is a strongly graded $\Gamma$-ring. Then $M_{e}=M_{g}\Gamma
M_{g^{-1}}$ for any $g\in G$. Now the result follows from Theorem 4.3.
Conversely, let $x\in M_{e}$ and $g\in G$. Then
$x=1_{\gamma_{0}}\gamma_{0}x\in\left(M_{g}\Gamma
M_{g^{-1}}\right)\gamma_{0}M_{e}=M_{g}\Gamma\left(M_{g^{-1}}\gamma_{0}M_{e}\right)\subseteq
M_{g}\Gamma M_{g^{-1}}$. Hence $M_{e}=M_{g}\Gamma M_{g^{-1}}$ for all $g\in
G$. Next for $g,h\in G$ we have
$\displaystyle M_{gh}$ $\displaystyle\subseteq$ $\displaystyle M_{e}\Gamma
M_{gh}$ $\displaystyle=$ $\displaystyle\left(M_{g}\Gamma
M_{g^{-1}}\right)\Gamma M_{gh}$ $\displaystyle=$ $\displaystyle
M_{g}\Gamma\left(M_{g^{-1}}\Gamma M_{gh}\right)$ $\displaystyle\subseteq$
$\displaystyle M_{g}\Gamma M_{h}.$
This proves that $M_{gh}=M_{g}\Gamma M_{h}$ for all $g,h\in G$. Therefore $M$
is strongly graded.
* (ii)
Let $g\in G$. Then using (i), we have $1_{\gamma_{0}}\in M_{g}\Gamma
M_{g^{-1}}$. Thus $M_{g}\neq\\{0\\}$.
* (iii)
Suppose that $M$ is a crossed product $\Gamma$-ring. Let $g\in G$. Then there
exists an invertible element (homogeneous) $m\in M_{g}$. According to Theorem
4.4 we have $m^{-1}\in M_{g^{-1}}$. So $1_{\gamma_{0}}=m\gamma_{0}m^{-1}\in
M_{g}\Gamma M_{g^{-1}}$. Now the result follows from (i).
* (iv)
$M$ is strongly graded, so for all $g\in G$ we have $1_{\gamma_{0}}\in
M_{g}\Gamma M_{g^{-1}}$. It follows
$\varphi(1_{\gamma_{0}})=1_{\gamma_{0}}\in\varphi(M_{g})\Gamma\varphi(M_{g^{-1}})\subseteq
N_{g}\Gamma N_{g^{-1}}$ for all $g\in G$.
$\Box$
###### Remark 3.
Let $R$ be a graded $\Gamma$-ring, and $M$ be a graded $R_{\Gamma}$-module.
Then $M$ is a strongly graded $R_{\Gamma}$-module if $R_{g}\Gamma
M_{h}=M_{gh}$ for all $g,h\in G$. One can focus on studying such strongly
graded modules. Also, future work will focus on studying the tensor product of
graded $\Gamma$-modules.
## References
* [1] R. Ameri, R. Sadeghi, Gamma modules, Ratio Mathematica, 20 (2010), 127-147.
* [2] D. D. Anderson, Some remarks on multiplication ideals, Math Japon 25 (1) (1980), 463-469.
* [3] Y. Bahturin, S. Sehgal and M. Zaicev, Group gradings on associative algebras, J. Algebra 241 (2001), 677-698.
* [4] W. E. Barnes, On the gamma rings of Nobusawa, Pacific J. Math 18 (1966), 411-422.
* [5] M. Cohen and L. H. Rowen, Group graded rings, Communications in Algebra 11 (11) (1983), 1253-1270.
* [6] E. Dade, Group graded rings and modules, Math. Z 174 (3) (1980), 241-262 .
* [7] A. Estaji, A. Khorasani and S. Baghdari, Multiplication ideals in $\Gamma$-rings, Journal of Hyperstructures 2 (1) (2013), 30-39.
* [8] Fida. M. and Maisa. K., Graded modules over first strongly graded rings, Malaysian Journal of Mathematical Sciences 11 (2) (2018), 205-220.
* [9] F. Fusheng, G. Lingzhong, Group graded gamma rings, Northeast. Math. J. 14 (2) (1998), 177-186.
* [10] R. Hazrat, Graded rings and graded Grothendieck groups, London Mathematical Society Lecture Note Series 435, Cambridge University Press (2016).
* [11] C. Nastasescu and F. van Oystaeyen, Graded and filtered rings and modules, Lecture notes in mathematics 758, Springer, Berlin (1979).
* [12] C. Nastasescu and F. van Oystaeyen, Graded ring theory, North-Holland, Amsterdam (1982).
* [13] N. Nobusawa, On a generalization of the ring theory, Osaka J. Math. 1 (1964), 81-89.
* [14] S. Shaqaqha, Hilbert series for free lie superalgebras and related topics. Ph.D. Thesis, MUN (2015).
* [15] M. S. Uddin, M. S. Islam, Gamma rings of gamma endomorphisms, Pure and Applied Mathematics 3 (1) (2013), 94-99.
* [16] M. Dumitru, Gamma-ring: some interpretations used in the study of their radicals, U.P.B. Sci. Bull., Series A 71 (3) (2009), 9-22.
* [17] A. C. Paul and Md. Sabur, Decomposition in noetherian gamma rings, International Archive of Applied Sciences and Technology, 2 (2) (2011), 38-42.
* [18] S. Kyuno, On the radicals of $\Gamma$-rings, Osaka J. Math., 12 (1975), 639-645.
* [19] S. Kyuno, On prime gamma rings, Pacific J. Math., 75 (1) (1978), 185-190.
* [20] S. Kyuno, Prime ideals in gamma rings, Pacific J. Math., 98 (2) (1982), 375-379.
# BIP! DB: A Dataset of Impact Measures for Scientific Publications
Thanasis Vergoulis (IMSI, ATHENA RC, Athens, Greece), Ilias Kanellos (IMSI,
ATHENA RC, Athens, Greece), Claudio Atzori (ISTI, CNR, Pisa, Italy), Andrea
Mannocci (ISTI, CNR, Pisa, Italy), Serafeim Chatzopoulos (IMSI, ATHENA RC,
Athens, Greece), Sandro La Bruzzo (ISTI, CNR, Pisa, Italy), Natalia Manola
(OpenAIRE, Athens, Greece) and Paolo Manghi (ISTI, CNR, Pisa, Italy)
(2021)
###### Abstract.
The growth rate of the number of scientific publications is constantly
increasing, creating important challenges in the identification of valuable
research and in various scholarly data management applications, in general. In
this context, measures which can effectively quantify the scientific impact
could be invaluable. In this work, we present BIP! DB, an open dataset that
contains a variety of impact measures calculated for a large collection of
more than $100$ million scientific publications from various disciplines.
scientometrics, impact, research assessment
journalyear: 2021; copyright: iw3c2w3; conference: Companion Proceedings of
the Web Conference 2021, April 19–23, 2021, Ljubljana, Slovenia; booktitle:
Companion Proceedings of the Web Conference 2021 (WWW ’21 Companion), April
19–23, 2021, Ljubljana, Slovenia; doi: 10.1145/3442442.3451369; isbn:
978-1-4503-8313-4/21/04
## 1\. Introduction
The growth rate of the number of published scientific articles is constantly
increasing (Bornmann and Mutz, 2015). At the same time, studies suggest that,
among the vast number of published works, many are of low impact or may even
contain research of questionable quality (Ioannidis, 2005). Consequently,
identifying the most valuable publications for any given research topic has
become extremely tedious and time-consuming.
Quantifying the impact of scientific publications could facilitate this and
other related tasks, which make up the daily routine of researchers and other
professionals of the broader scientific and academic community. For instance,
most contemporary search engines for research publications (e.g., Google
Scholar, Semantic Scholar) combine keyword-based relevance with a scientific
impact measure (usually citation counts) to rank their search results, in an
attempt to help their users prioritise reading for literature review.
However, many impact measures, which are widely used in various applications,
have inherent drawbacks. For instance, citation counts cannot differentiate
citations based on the importance of the citing articles. This is an important
drawback since citation counts may, for example, present a publication in a
predatory journal, heavily cited by other trivial works, as invaluable, while
disregarding the seminal importance of an otherwise sparsely cited publication
that has influenced (and is cited by) a breakthrough article. For the same
reason, citation counts are also vulnerable to various malpractices (e.g.,
excessive self-citation, citation cartels).
Another important issue, often overlooked by most existing academic search
engines, is the importance of capturing publication impact from a broader set
of perspectives. It is an oversimplification to rely on one impact measure
only, since there are many different aspects of scientific impact (Bollen et
al., 2009; Kanellos et al., 2019). Indicatively, a publication’s _influence_
is its overall, long-term importance, which can be calculated based on its
whole citation history; its _popularity_ , on the other hand, is the
publication’s current attention in the research community, which is indicative
of its expected short-term impact. These impact aspects are not entirely
correlated and each one may be preferable for different applications.
Consider, for example, an experienced researcher who needs to reexamine a
topic of interest to learn about its latest developments. Ranking articles
based on their popularity would be preferable for her. On the other hand, a
young researcher wanting to delve into the same topic to prepare a survey
would prefer to rank the relevant articles based on their influence. Although
using citation counts would satisfy the needs of the young researcher to an
extent, they would fail to serve the needs of the experienced one, since
citation counts are biased against recent articles. This is because any recent
article (irrespective of its current attention in the research community)
usually requires months or even years to receive its first citations (Smith,
2009), and eventually gain momentum.
Overlooking the above aspects is problematic. On the one hand, recently
published research may be at the center of the current attention of the
scientific community, although this is not reflected by the traditional
measures. On the other hand, scientific impact should not be examined through
a limited set of measures. This is important not only because a larger set of
measures captures a wider range of impact aspects, providing a more complete
picture of a publication’s impact, but also because, since any individual
measure used for research assessment is bound to be abused by Goodhart’s law
(also known as Campbell’s law, an example of the “Cobra effect”), consulting
multiple measures can work as a countermeasure that reduces the effects of the
relevant attacks and malpractices. Taking all these into consideration, it is part of the
good practices in research assessment to consider a range of quantitative
measures as inclusive as possible (e.g., this is also emphasized in the
Declaration on Research Assessment (DORA), https://sfdora.org/read).
In this work, we present BIP! DB, an open dataset that contains various impact
measures calculated for more than $104M$ scientific articles, taking into
consideration more than $1.25B$ citation links between them. The production of
this dataset is based on the integration of three major datasets, which
provide citation data: OpenCitation’s (Peroni and Shotton, 2020) COCI dataset,
Microsoft Academic Graph, and Crossref. Based on these data, we perform
citation network analysis to produce five useful impact measures, which
capture three distinct aspects of scientific impact. This set of measures
makes up a valuable resource that could be leveraged by various applications
in the field of scholarly data management.
The remainder of this manuscript is structured as follows. In Section 2 we
discuss various aspects of scientific impact and present the impact measures
in the BIP! DB dataset. In Section 3 we provide technical details on the
production of our dataset and in Section 4 we empirically show how distinct
the different measures are and elaborate on some interesting issues about the
dataset’s potential uses and extensions. Finally, in Section 5 we conclude the
work.
## 2\. Impact Aspects & Measures
As mentioned in Section 1, there are many perspectives from which one can
study scientific impact (Bollen et al., 2009; Kanellos et al., 2019).
Consequently, a multitude of impact measures and indicators have been
introduced in the literature, each of them better capturing a different
aspect. In this work, we present an open dataset of different impact measures,
which we freely provide. We focus on measures that quantify three aspects of
scientific impact, which can be useful for various real-life applications:
_popularity_ , which reflects a publication’s current attention (and its
expected impact in the near future), _influence_ which reflects its overall,
long-term importance, and _impulse_ , which better captures its initial impact
during its “incubation phase”, i.e., during the first years following its
publication.
We focus on the first two aspects because they are well-studied: a recent
experimental study (Kanellos et al., 2019) has revealed the strengths and
weaknesses of various measures in terms of quantifying popularity and
influence. Based on this analysis, and on a set of subsequent experiments
(Kanellos et al., 2020), we select the following measures: Citation Count,
PageRank, RAM, and AttRank. We also include the impulse, since it is a
distinct impact aspect that is reflected by a Citation Count variant, which is
utilised for the production of some useful metrics, like the FWCI (more info
on this measure can be found in the Snowball Metrics cookbook:
https://snowballmetrics.com). This variant considers only citations received
during the “incubation” phase of a publication, i.e., during the first $y$
years after its publication (usually, $y=3$); we refer to this useful Citation
Count variant as _Incubation Citation Count_ , and we select it for inclusion
in the BIP! DB collection, as well. In the next paragraphs, we elaborate on
all the impact measures of our dataset.
Citation Count (CC). This is the most widely used scientific impact indicator,
which sums all citations received by each article. The citation count of a
publication $i$ corresponds to the in-degree of the corresponding node in the
underlying citation network: $s_{i}=\sum_{j}A_{i,j}$, where $A$ is the
adjacency matrix of the network (i.e., $A_{i,j}=1$ when paper $j$ cites paper
$i$, while $A_{i,j}=0$ otherwise). Citation count can be viewed as a measure
of a publication’s overall impact, since it conveys the number of other works
that directly drew on it.
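For concreteness, the citation count reduces to an in-degree computation over the raw citation edges. The sketch below is our own illustration (with hypothetical paper identifiers), not code from BIP! DB:

```python
from collections import Counter

def citation_counts(citations):
    """In-degree per cited paper: s_i = sum_j A_{i,j}.

    `citations` is an iterable of (citing, cited) pairs, i.e. the
    edges of the citation network.
    """
    return Counter(cited for _citing, cited in citations)

# Toy network: b and c cite a; c also cites b.
edges = [("b", "a"), ("c", "a"), ("c", "b")]
cc = citation_counts(edges)  # a has 2 citations, b has 1
```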
“Incubation” Citation Count (iCC). This measure is essentially a time-
restricted version of the citation count, where the time window is distinct
for each paper, i.e., only citations received within $y$ years after its
publication are counted (usually, $y=3$). The “incubation” citation count of a
paper $i$ is calculated as: $s_{i}=\sum_{j,t_{j}\leq t_{i}+y}A_{i,j}$, where $A$ is the
adjacency matrix and $t_{j},t_{i}$ are the citing and cited paper’s
publication years, respectively. iCC can be seen as an indicator of a paper’s
initial momentum (impulse) directly after its publication.
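The incubation window can be applied as a simple filter on the citing paper's publication year; again a sketch of ours with made-up identifiers, not the BIP! DB implementation:

```python
from collections import Counter

def incubation_citation_counts(citations, pub_year, y=3):
    """iCC: count only citations made within y years of the cited
    paper's publication (t_j <= t_i + y)."""
    counts = Counter()
    for citing, cited in citations:
        if pub_year[citing] <= pub_year[cited] + y:
            counts[cited] += 1
    return counts

# Toy data: c cites a six years after a's publication, so that
# citation falls outside a's incubation window.
years = {"a": 2010, "b": 2012, "c": 2016}
edges = [("b", "a"), ("c", "a"), ("c", "b")]
icc = incubation_citation_counts(edges, years)
```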
PageRank (PR). Originally developed to rank Web pages (Page et al., 1999),
PageRank has been also widely used to rank publications in citation networks
(e.g., (Chen et al., 2007; Ma et al., 2008; Vergoulis et al., 2019)). In this
latter context, a publication’s PageRank score also serves as a measure of its
influence. In particular, the PageRank score of a publication is calculated as
its probability of being read by a researcher that either randomly selects
publications to read or selects publications based on the references of her
latest read. Formally, the score of a publication $i$ is given by:
(1) $s_{i}=\alpha\cdot\sum_{j}P_{i,j}\cdot s_{j}+(1-\alpha)\cdot\frac{1}{N}$
where $P$ is the stochastic transition matrix, which corresponds to the column
normalised version of adjacency matrix $A$, $\alpha\in[0,1]$, and $N$ is the
number of publications in the citation network. The first addend of Equation 1
corresponds to the selection (with probability $\alpha$) of following a
reference, while the second one to the selection of randomly choosing any
publication in the network. It should be noted that the score of each
publication relies on the scores of the publications citing it (the algorithm is
executed iteratively until all scores converge). As a result, PageRank
differentiates citations based on the importance of citing articles, thus
alleviating the corresponding issue of the Citation Count.
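A minimal power-iteration sketch of Equation 1 follows. It is a simplification of our own: contributions from papers with no outgoing references are simply dropped, and the parameter values are illustrative, not necessarily those used for BIP! DB:

```python
from collections import Counter

def pagerank(citations, alpha=0.5, iters=100):
    """Iterate s_i = alpha * sum_j P_{i,j} * s_j + (1 - alpha) / N,
    where P is the column-normalised adjacency matrix."""
    nodes = set()
    for citing, cited in citations:
        nodes.update((citing, cited))
    outdeg = Counter(citing for citing, _cited in citations)
    n = len(nodes)
    s = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - alpha) / n for v in nodes}
        for citing, cited in citations:
            nxt[cited] += alpha * s[citing] / outdeg[citing]
        s = nxt
    return s

edges = [("b", "a"), ("c", "a"), ("c", "b")]
pr = pagerank(edges)  # a is cited most, and by b, which is itself cited
```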
RAM. RAM (Ghosh et al., 2011) is essentially a modified Citation Count, where
recent citations are considered of higher importance compared to older ones.
Hence, it better captures the popularity of publications. This “time-
awareness” of citations alleviates the bias of methods like Citation Count and
PageRank against recently published articles, which have not had “enough” time
to gather as many citations. The RAM score of each paper $i$ is calculated as
follows:
(2) $s_{i}=\sum_{j}{R_{i,j}}$
where $R$ is the so-called Retained Adjacency Matrix (RAM) and
$R_{i,j}=\gamma^{t_{c}-t_{j}}$ when publication $j$ cites publication $i$, and
$R_{i,j}=0$ otherwise. Parameter $\gamma\in(0,1)$, $t_{c}$ corresponds to the
current year and $t_{j}$ corresponds to the publication year of citing article
$j$.
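Equation 2 amounts to a weighted in-degree; in the following sketch (our own, with an arbitrary choice of $\gamma=0.5$ within $(0,1)$), the single recent citation to paper `a` dominates its score:

```python
def ram_scores(citations, pub_year, t_c, gamma=0.5):
    """RAM: a citation made in year t_j is worth gamma ** (t_c - t_j),
    so recent citations dominate older ones."""
    scores = {}
    for citing, cited in citations:
        w = gamma ** (t_c - pub_year[citing])
        scores[cited] = scores.get(cited, 0.0) + w
    return scores

years = {"a": 2010, "b": 2012, "c": 2020}
edges = [("b", "a"), ("c", "a"), ("c", "b")]
ram = ram_scores(edges, years, t_c=2020)
```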
AttRank. AttRank (Kanellos et al., 2020) is a PageRank variant that alleviates
its bias against recent publications (i.e., it is tailored to capture
popularity). AttRank achieves this by modifying PageRank’s probability of
randomly selecting a publication. Instead of using a uniform probability,
AttRank defines it based on a combination of the publication’s age and the
citations it received in recent years. The AttRank score of each publication
$i$ is calculated based on:
(3) $s_{i}=\alpha\cdot\sum_{j}P_{i,j}\cdot s_{j}+\beta\cdot Att(i)+\gamma\cdot
c\cdot e^{-\rho\cdot(t_{c}-t_{i})}$
where $\alpha+\beta+\gamma=1$ and $\alpha,\beta,\gamma\in[0,1]$. $Att(i)$
denotes a recent attention-based score for publication $i$, which reflects its
share of citations in the $y$ most recent years, $t_{i}$ is the publication
year of article $i$, $t_{c}$ denotes the current year, and $c$ is a
normalisation constant. Finally, $P$ is the stochastic transition matrix.
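Equation 3 can be sketched as below. The parameter values, the attention window, and the handling of papers without references are our own illustrative choices, not the configuration used to produce BIP! DB:

```python
import math
from collections import Counter

def attrank(citations, pub_year, t_c, alpha=0.2, beta=0.5, gamma=0.3,
            rho=0.8, y=3, iters=100):
    """Iterate s_i = alpha * sum_j P_{i,j} * s_j + beta * Att(i)
    + gamma * c * exp(-rho * (t_c - t_i))."""
    nodes = set()
    for citing, cited in citations:
        nodes.update((citing, cited))
    outdeg = Counter(citing for citing, _cited in citations)
    n = len(nodes)
    # Att(i): share of all citations made during the y most recent years.
    recent = Counter(cited for citing, cited in citations
                     if pub_year[citing] >= t_c - y)
    total = sum(recent.values()) or 1
    att = {v: recent[v] / total for v in nodes}
    # c normalises the age-based term so that it sums to 1.
    age = {v: math.exp(-rho * (t_c - pub_year[v])) for v in nodes}
    c = 1.0 / sum(age.values())
    s = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: beta * att[v] + gamma * c * age[v] for v in nodes}
        for citing, cited in citations:
            nxt[cited] += alpha * s[citing] / outdeg[citing]
        s = nxt
    return s

years = {"a": 2010, "b": 2012, "c": 2019, "d": 2020}
edges = [("b", "a"), ("c", "a"), ("d", "c")]
at = attrank(edges, years, t_c=2020)  # a keeps attracting recent citations
```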
## 3\. The BIP! DB Dataset
### 3.1. Data Collection & Integration
BIP! DB’s impact measures all rely on citation network analysis. Hence, a
major challenge in our work was to construct an interdisciplinary and as
inclusive as possible citation network on which all impact measures would be
calculated. To achieve this, we gathered citation data and metadata from three
data sources: OpenCitations’ (Peroni and Shotton, 2020) COCI dataset,
Microsoft’s Academic Graph (MAG) (Sinha et al., 2015; Wang et al., 2020), and
Crossref (Hendricks et al., 2020). The current dataset version (regular
updates are scheduled in the future) exploits the latest version of the COCI
dataset (Sep 2020), and recent snapshots of MAG (Aug 2020) and Crossref (May
2020). Our dataset production workflow collects, cleans, and integrates data
from these sources to produce a citation graph based on the distinct DOI-to-
DOI relationships found. Since the publication year is required for some of
the measures to be calculated, publications lacking this information were
excluded from the final network. Table 1 summarises some statistics for the
original data sources and the complete, integrated dataset.
### 3.2. Calculation of Impact Measures
As discussed in Section 3.1, the volume of processed data exceeds $100$
million publications and $1$ billion references. Hence, the algorithms that
calculate the required impact scores must be designed with particular care, to
allow for time-efficient and scalable computation and updates.
By examining the formulas of the impact measures presented in Section 2, we
can observe that all of them rely on the analysis of the underlying citation
network, by calculating a (possibly weighted) sum of scores, which are
received by each publication by its citing articles. Hence, we can take
advantage of data parallelism when implementing the algorithms calculating
these measures. In particular, each measure can be implemented as a set of
MapReduce operations where each publication _maps_ its score (e.g., a
citation, its PageRank score, etc) to its cited papers. The final score of
each publication, in turn, results from an aggregation (_reduce_) of all
scores mapped to it. Additionally, PageRank, and AttRank in particular, are
iterative processes, which require such sets of operations to repeat until the
calculated publication scores converge. Therefore, we chose Spark, which is
particularly suitable for data parallel iterative processes, as our
development platform of reference. In particular, we implemented all
algorithms as PySpark scripts, running on Spark version 2.3. All impact
measure calculations are performed on a cluster of $10$ VMs, each with $4$
cores and $8$GB RAM, and each script runs with up to 35 Spark workers.
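The map/reduce shape of these computations can be illustrated in plain Python (the actual implementation is a set of PySpark scripts, which we do not reproduce here; function and variable names are our own):

```python
from collections import defaultdict

def pagerank_step(score, references, alpha, n):
    """One PageRank-style iteration written as explicit map and reduce
    phases, mirroring the MapReduce formulation described above."""
    # Map: each citing paper j emits (cited paper i, alpha * s_j / outdeg_j).
    emitted = [(cited, alpha * score[citing] / len(refs))
               for citing, refs in references.items() if refs
               for cited in refs]
    # Reduce: aggregate all contributions mapped to each publication.
    reduced = defaultdict(float)
    for cited, contribution in emitted:
        reduced[cited] += contribution
    return {p: (1.0 - alpha) / n + reduced[p] for p in score}

# Toy reference lists; paper "a" cites nothing (its mass is dropped here).
refs = {"a": [], "b": ["a"], "c": ["a", "b"]}
s = {p: 1.0 / 3 for p in refs}
s = pagerank_step(s, refs, alpha=0.5, n=3)
```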
### 3.3. Published Data Records
Dataset | DOIs | Citations
---|---|---
COCI | $59,455,882$ | $733,366,727$
CrossRef | $96,703,144$ | $596,803,579$
MAG | $90,224,789$ | $1,177,733,277$
Unified Graph | $104,769,307$ | $1,254,817,030$
Table 1. Distinct DOIs and citations per data source.
The current version of the BIP! DB dataset consists of five compressed TSV
files, one for each impact measure provided. All files follow the same format:
each line contains two data columns, where the first corresponds to the DOI of
a publication, followed by the column which corresponds to the score of the
measure.
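Reading such a file is straightforward; a sketch with made-up DOIs and scores (the published files are additionally gzip-compressed, which we omit here):

```python
import csv
import io

def load_scores(tsv_text):
    """Parse the two-column '<DOI>\t<score>' format into a dict."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return {doi: float(score) for doi, score in reader}

sample = "10.1000/example.1\t0.8317\n10.1000/example.2\t0.0042\n"
scores = load_scores(sample)
```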
For the sake of clarity, each of the files published contains in its name the
algorithm configuration that produced the respective measure scores (the
parameters were selected according to previous experiments, e.g., (Kanellos et
al., 2019)). For example, the file named
“PR_graph_universe2_1.txt_a0.5_error1e-12.gz” contains the PageRank scores
calculated with parameter $\alpha=0.5$ and with a convergence error set to
$\epsilon\leq 10^{-12}$. All files published are freely available in
Zenodo (BIP! DB dump, https://doi.org/10.5281/zenodo.4386934) under the
Creative Commons Attribution 4.0 International license.
### 3.4. Updated BIP! API
Extra effort was given to update the existing BIP! API (Vergoulis et al.,
2019) with the most recent version of the BIP! DB dataset so as to provide
programmatic access to the impact measures of the same set of publications. As
a result, all calculated impact measures are also accessible via a public REST
API (BIP! API documentation, https://bip-api.imsi.athenarc.gr/documentation).
It supports retrieving impact scores for a given article or for a number of
articles given a list of DOI identifiers. The API response includes all five
impact scores in a simple JSON object, one for each requested DOI.
## 4\. Discussion
To highlight the fact that the different measures capture semantically diverse
impact aspects, we present, in Table 2, the pairwise top-$k$ correlations (we
use Spearman’s top-$k$ rank correlation $\rho_{min}$ as defined in (Fagin et
al., 2003); $\rho\in[-1,1]$, with $-1,0,1$ indicating perfect inverse, no, and
perfect correlation, respectively) of the top-ranking $1,000,000$ papers
(corresponding roughly to the top-$1\%$ of papers) for each pair of impact
measures we calculated. Intuitively, we expect to see high correlations
between measures that capture the same impact aspect. In general, our findings
confirm this intuition since the popularity measures (AttRank & RAM) appear to
be highly correlated ($\rho>0.9$), while the influence ones (CC & PR) appear
to have a moderate correlation ($\rho>0.4$). Of course, as discussed, there
are differences between measures of the same aspect, thus we do not expect
pairs of measures for the same impact aspect to correlate perfectly. In
addition, in this experiment we only examine the correlation of the top-$1\%$
publications; the full set would reveal larger correlations for the measures
of the same aspect (e.g., using the top $10\%$ publications we measured $\rho$
values greater than $0.6$ for CC and PR). Finally, based on the same
measurements, iCC at best correlates weakly ($\rho<0.4$) to other measures,
supporting the intuition that it captures a distinct impact aspect (impulse).
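For reference, a plain Spearman $\rho$ over two complete, tie-free rankings can be computed as follows; this is a simplification of the top-$k$ variant of Fagin et al. that the paper actually uses:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for tie-free score lists:
    rho = 1 - 6 * sum(d_i ** 2) / (n * (n ** 2 - 1))."""
    n = len(xs)

    def ranks(values):
        order = sorted(range(n), key=lambda i: values[i], reverse=True)
        r = [0] * n
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

identical = spearman_rho([5, 4, 3, 2, 1], [50, 40, 30, 20, 10])  # 1.0
inverted = spearman_rho([5, 4, 3, 2, 1], [10, 20, 30, 40, 50])   # -1.0
```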
| iCC | CC | PR | AttRank | RAM
---|---|---|---|---|---
iCC | $1$ | $0.0985$ | $-0.3468$ | $0.3141$ | $0.3042$
CC | | $1$ | $0.4144$ | $0.4583$ | $0.2774$
PR | | | $1$ | $-0.0675$ | $-0.2598$
AttRank | | | | $1$ | $0.9056$
RAM | | | | | $1$
Table 2. Top-$1\%$ pairwise correlations of impact measures.
Our data are openly available both on Zenodo and through an open API, so as to
facilitate their utilization by third-party research teams and to enable
building useful services on top of them. Furthermore, the files in the
repository are available in TSV format, to allow for easy editing and/or
processing for import in various database management systems. Finally, in line
with the need for any relevant applications to use fresh data, we plan to
update the provided files regularly, taking into consideration the update rate
of the integrated data sources.
There are many possible applications that can leverage the data provided by
BIP! DB. For instance, academic search engines may utilise impact measures to
rank publications based on different impact aspects (our academic search
engine, BIP! Finder (Vergoulis et al., 2019), is an attempt in this direction),
while science monitors (like the Open Science Observatory (Papastefanatos et
al., 2020)) may use them to produce intuitive reports and visualisations.
Furthermore, publication-level impact measures can be propagated and
aggregated to related entities (e.g., datasets, software packages, individual
researchers) to quantify their expected impact, which may be useful for
various applications (e.g., planning for officers in funding organisations,
decision support for HR departments of research institutions). Finally, the
calculated impact measures may be used as features in various machine learning
applications that apply data mining on citation networks.
At this point, we would like to highlight that researchers should always have
a large toolbox of impact measures available, in order to get the full picture
about a publication’s impact and to successfully secure themselves from
various types of attacks and malpractices in the field of research assessment.
This is why we plan to continuously update and extend the BIP! DB dataset so
that it always contains up-to-date data and remains in line with the latest
developments in the field of research assessment and scientometrics. To this
end, we plan to update our set of included measures with new ones performing
better in terms of effectiveness, and to extend BIP! DB to include measures
that reflect additional impact aspects. Finally, since scientific impact is
not always entirely correlated with publication quality, we intend to also
include measures that capture other aspects of a publication’s merit (e.g.,
novelty, readability). In general, contrary to other works that construct and
make available unified citation graphs (e.g., (Peroni and Shotton, 2020;
Herzog et al., 2020)), the focus of our work is not on providing the unified
graph itself, but on producing an open dataset of easily interpretable
scientific impact measures, calculated on a unified graph, which can satisfy
diverse needs and are ready to use.
## 5\. Conclusions
We presented BIP! DB, an open dataset containing various impact measures,
calculated for hundreds of millions of scientific articles. Our dataset
provides a multidimensional view of article impact, and thus may be
potentially beneficial for many different applications and diverse
stakeholders. Furthermore, we aim to deliver regular updates of our dataset in
line with the updates of the data sources we use. Finally, in the future, we
additionally plan to extend the published dataset not only with further impact
measures, but also with other indicators capturing aspects other than the ones
strictly related to scientific impact, such as readability and novelty.
###### Acknowledgements.
This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 101017452.
## References
* Bollen et al. (2009) Johan Bollen, Herbert Van de Sompel, Aric Hagberg, and Ryan Chute. 2009. A principal component analysis of 39 scientific impact measures. _PloS one_ 4, 6 (2009), e6022.
* Bornmann and Mutz (2015) Lutz Bornmann and Rüdiger Mutz. 2015. Growth rates of modern science: a bibliometric analysis based on the number of publications and cited references. _JASIST_ 66, 11 (2015), 2215–2222.
* Chen et al. (2007) Peng Chen, Huafeng Xie, Sergei Maslov, and Sidney Redner. 2007. Finding scientific gems with Google’s PageRank algorithm. _Journal of Informetrics_ 1, 1 (2007), 8–15.
* Fagin et al. (2003) Ronald Fagin, Ravi Kumar, and Dakshinamurthi Sivakumar. 2003. Comparing top k lists. _SIAM Journal on discrete mathematics_ 17, 1 (2003), 134–160.
* Ghosh et al. (2011) Rumi Ghosh, Tsung-Ting Kuo, Chun-Nan Hsu, Shou-De Lin, and Kristina Lerman. 2011. Time-aware ranking in dynamic citation networks. In _Data Mining Workshops (ICDMW), 2011 IEEE 11th International Conference on_. IEEE, 373–380.
* Hendricks et al. (2020) Ginny Hendricks, Dominika Tkaczyk, Jennifer Lin, and Patricia Feeney. 2020. Crossref: The sustainable source of community-owned scholarly metadata. _Quant. Sci. Stud._ 1, 1 (2020), 414–427. https://doi.org/10.1162/qss_a_00022
* Herzog et al. (2020) Christian Herzog, Daniel W. Hook, and Stacy R. Konkiel. 2020. Dimensions: Bringing down barriers between scientometricians and data. _Quant. Sci. Stud._ 1, 1 (2020), 387–395. https://doi.org/10.1162/qss_a_00020
* Ioannidis (2005) John PA Ioannidis. 2005\. Why most published research findings are false. _PLoS medicine_ 2, 8 (2005), e124.
* Kanellos et al. (2019) Ilias Kanellos, Thanasis Vergoulis, Dimitris Sacharidis, Theodore Dalamagas, and Yannis Vassiliou. 2019. Impact-Based Ranking of Scientific Publications: A Survey and Experimental Evaluation. _IEEE TKDE_ (2019).
* Kanellos et al. (2020) Ilias Kanellos, Thanasis Vergoulis, Dimitris Sacharidis, Theodore Dalamagas, and Yannis Vassiliou. 2020. Ranking Papers by their Short-Term Scientific Impact. arXiv:2006.00951 [cs.DL]
* Ma et al. (2008) Nan Ma, Jiancheng Guan, and Yi Zhao. 2008. Bringing PageRank to the citation analysis. _Information Processing & Management_ 44, 2 (2008), 800–810.
* Page et al. (1999) Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank citation ranking: Bringing order to the web. (1999).
* Papastefanatos et al. (2020) George Papastefanatos, Elli Papadopoulou, Marios Meimaris, Antonis Lempesis, Stefania Martziou, Paolo Manghi, and Natalia Manola. 2020. Open Science Observatory: Monitoring Open Science in Europe. In _AIMinScience 2020_ _(Communications in Computer and Information Science, Vol. 1260)_. Springer, 341–346.
* Peroni and Shotton (2020) Silvio Peroni and David M. Shotton. 2020. OpenCitations, an infrastructure organization for open scholarship. _Quant. Sci. Stud._ 1, 1 (2020), 428–444.
* Sinha et al. (2015) Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June (Paul) Hsu, and Kuansan Wang. 2015. An Overview of Microsoft Academic Service (MAS) and Applications. In _Proceedings of WWW ’15 Companion_. 243–246.
* Smith (2009) Derek R Smith. 2009. A 30-year citation analysis of bibliometric trends at the Archives of Environmental Health, 1975–2004. _Archives of environmental & occupational health_ 64, sup1 (2009), 43–54.
* Vergoulis et al. (2019) Thanasis Vergoulis, Serafeim Chatzopoulos, Ilias Kanellos, Panagiotis Deligiannis, Christos Tryfonopoulos, and Theodore Dalamagas. 2019. BIP! Finder: Facilitating Scientific Literature Search by Exploiting Impact-Based Ranking. In _Proceedings of CIKM 2019_. ACM, 2937–2940. https://doi.org/10.1145/3357384.3357850
* Wang et al. (2020) Kuansan Wang, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, Yuxiao Dong, and Anshul Kanakia. 2020. Microsoft Academic Graph: When experts are not enough. _QSS_ 1, 1 (2020), 396–413.
# Copula-based conformal prediction for Multi-Target Regression
Soundouss Messoudi
HEUDIASYC - UMR CNRS 7253
Université de Technologie de Compiègne
60203 COMPIEGNE - FRANCE
<EMAIL_ADDRESS>
Sébastien Destercke
HEUDIASYC - UMR CNRS 7253
Université de Technologie de Compiègne
60203 COMPIEGNE - FRANCE
<EMAIL_ADDRESS>
Sylvain Rousseau
HEUDIASYC - UMR CNRS 7253
Université de Technologie de Compiègne
60203 COMPIEGNE - FRANCE
<EMAIL_ADDRESS>
###### Abstract
There are relatively few works dealing with conformal prediction for multi-
task learning issues, and this is particularly true for multi-target
regression. This paper focuses on the problem of providing valid (i.e.,
frequency calibrated) multi-variate predictions. To do so, we propose to use
copula functions applied to deep neural networks for inductive conformal
prediction. We show that the proposed method ensures efficiency and validity
for multi-target regression problems on various data sets.
_Keywords:_ Inductive conformal prediction $\cdot$ Copula functions $\cdot$
Multi-target regression $\cdot$ Deep neural networks.
## 1 Introduction
The most common supervised task in machine learning is to learn a single-task,
single-output prediction model. However, such a setting can be ill-adapted to
some problems and applications.
On the one hand, producing a single output can be undesirable when data is
scarce and when producing reliable, possibly set-valued predictions is
important (for instance in the medical domain where examples are very hard to
collect for specific targets, and where predictions are used for critical
decisions). Such an issue can be solved by using conformal prediction
approaches [1]. It was initially proposed as a transductive online learning
approach to provide set predictions (in the classification case) or interval
predictions (in the case of regression) with a statistical guarantee depending
on the probability of error tolerated by the user, but was then extended to
handle inductive processes [2]. On the other hand, there are many situations
where there are multiple, possibly correlated output variables to predict at
once, and it is then natural to try to leverage such correlations to improve
predictions. Such learning tasks are commonly called Multi-task in the
literature [3].
Most research work on conformal prediction for multi-task learning focuses on
the problem of multi-label prediction [4, 5], where each task is a binary
classification one. Conformal prediction for multi-target regression has been
less explored, with only a few studies dealing with it: Kuleshov _et al._ [6]
provide a theoretical framework to use conformal predictors within manifolds
(e.g., to provide a mono-dimensional embedding of the multi-variate output),
while Neeven and Smirnov [7] use a straightforward multi-target extension of a
conformal single-output $k$-nearest neighbor regressor [8] to provide weather
forecasts. However, the latter essentially verifies validity (i.e., having
well-calibrated outputs) only for each individual target. Recently, we proposed a
simple method to obtain approximate validity for the multi-variate prediction
[9], which generally provided overly conservative results.
In this paper, we propose a new conformal prediction method fitted to multi-
target regression, that makes use of copulas [10] (a common tool to model
dependence between multi-variate random variables) to provide valid multi-
variate predictions. The interest of such a framework is that it remains very
easy to apply while linking multi-variate conformal predictions to the
theoretically sound framework of copulas. Experiments also show that it
works quite well and improves upon previous heuristics [9].
Section 2 provides a general overview of our problem: a brief introduction to
conformal prediction and multi-target regression will be presented in Sections
2.1 and 2.2, before introducing the problem of applying conformal prediction
to the multi-target regression setting in Section 2.3. We will then present
our setting in Section 3: we will first recall the needed basic principles and
theorems of copulas in Section 3.1, before detailing our conformal multi-
target approach in Section 3.2. The experiments and their results are
described in Section 4.
## 2 Inductive conformal prediction (ICP) for Multi-Target Regression
This section recalls the basics of inductive conformal regression and multi-
target regression, before introducing the issues we will tackle in this paper.
### 2.1 Inductive conformal regression
In regression tasks, conformal prediction provides a statistical guarantee on
the predictions by giving an interval prediction instead of a point
prediction. By statistical guarantee, it is meant that the set-valued
predictions cover the true value with a given frequency, i.e., they are
calibrated. It was first introduced as
a transductive online learning approach [11] and then adapted to the inductive
framework [2] where one uses a model induced from training examples to get
conformal predictions for the new instances. The two desirable features in
conformal regressors are (a) validity, i.e. the error rate does not exceed
$\epsilon$ for each chosen confidence level $1-\epsilon$, and (b) efficiency,
meaning prediction intervals are as small as possible.
Let $\lbag
z_{1}=(x_{1},y_{1}),z_{2}=(x_{2},y_{2}),\dots,z_{n}=(x_{n},y_{n})\rbag$ be the
successive pairs of an object $x_{i}\in X$ and its real-valued label
$y_{i}\in\mathbb{R}$, which constitute the observed examples. Assuming that
the underlying random variables are exchangeable (a weaker condition than
i.i.d.), we can predict $y_{n+1}\in\mathbb{R}$ for any new object $x_{n+1}\in
X$ by following the inductive conformal framework.
The first step consists of splitting the original data set $Z=\lbag
z_{1},\dots,z_{n}\rbag$ into a training set $Z^{tr}=\lbag
z_{1},\dots,z_{l}\rbag$ and a calibration set $Z^{cal}=\lbag
z_{l+1},\dots,z_{n}\rbag$, with $|Z^{cal}|=n-l$. Then, an underlying algorithm
is trained on $Z^{tr}$ to obtain the non-conformity measure $A_{l}$, which
evaluates the strangeness of an example compared to the other examples of a
bag; the resulting value is called the non-conformity score. Hence, the non-
conformity score ${\alpha}_{k}$ of an example $z_{k}$ compared to the other
examples in the bag $\lbag z_{1},\dots,z_{l}\rbag$ is given by
${\alpha}_{k}=A_{l}(\lbag z_{1},\dots,z_{l}\rbag,z_{k})$.
By computing the non-conformity score ${\alpha}_{i}$ for each example $z_{i}$
of $Z^{cal}$ using this equation, we get the sequence
${\alpha}_{l+1},\ldots,{\alpha}_{n}$. When making a prediction for a new
example $x_{n+1}$, we use the underlying algorithm to associate to any
possible prediction $\hat{y}$ its non-conformity score
${\alpha}^{\hat{y}}_{n+1}$, and calculate its p-value which indicates the
proportion of less conforming examples than $z_{n+1}$, with:
$p(\hat{y}_{n+1})=\frac{|\\{i=l+1,\dots,n,n+1:{\alpha}_{i}\geq{\alpha}^{\hat{y}}_{n+1}\\}|}{n-l+1}.$
(1)
The final step before producing the conformal prediction consists of choosing
the significance level $\epsilon\in(0,1)$ to get a prediction set with a
confidence level of $1-\epsilon$, which is the statistical guarantee of
coverage of the true value $y_{n+1}$ by the interval prediction
$\hat{\mathbf{y}}_{n+1}$ such that
$\hat{\mathbf{y}}_{n+1}=\\{\hat{y}_{n+1}\in\mathbb{R}:p(\hat{y}_{n+1})>\epsilon\\}.$
The most basic non-conformity measure in a regression setting is the absolute
difference between the actual value $y_{i}$ and the predicted value
$\hat{y}_{i}$ by the underlying algorithm. The non-conformity score is then
calculated as follows:
${\alpha}_{i}=|y_{i}-\hat{y}_{i}|.$ (2)
The sequence of non-conformity scores ${\alpha}_{l+1},\ldots,{\alpha}_{n}$ for
all examples in $Z^{cal}$ is obtained and sorted in descending order. Then,
we compute the index of the $(1-\epsilon)$-percentile non-conformity score
${\alpha}_{s}$, based on the chosen significance level $\epsilon$, such that:
$\mathbb{P}(|y_{i}-\hat{y_{i}}|\leq\alpha_{s})\geq 1-\epsilon.$ (3)
Finally, the prediction interval for each new example $x_{n+1}$, which covers
the true output $y_{n+1}$ with probability $1-\epsilon$ is calculated as:
$\hat{\mathbf{y}}_{n+1}=[{\hat{y}}_{n+1}-{\alpha}_{s},{\hat{y}}_{n+1}+{\alpha}_{s}].$
(4)
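The whole inductive procedure of eqs. (2)-(4) can be sketched in a few lines. The synthetic data, the ordinary-least-squares underlying algorithm, and the split sizes below are illustrative assumptions, not part of the method itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: y = 2x + unit Gaussian noise.
x = rng.uniform(0, 10, size=1000)
y = 2.0 * x + rng.normal(0, 1, size=1000)

# Split into proper training set Z^tr and calibration set Z^cal.
x_tr, y_tr, x_cal, y_cal = x[:600], y[:600], x[600:], y[600:]

# Underlying algorithm: ordinary least squares (an arbitrary choice).
slope, intercept = np.polyfit(x_tr, y_tr, deg=1)

def predict(u):
    return slope * u + intercept

# Non-conformity scores on the calibration set, eq. (2).
alphas = np.abs(y_cal - predict(x_cal))

# alpha_s: the (1 - eps)-percentile score of eq. (3).
eps = 0.1
alpha_s = np.quantile(alphas, 1 - eps)

# Prediction interval for a new example, eq. (4).
x_new = 5.0
lo, hi = predict(x_new) - alpha_s, predict(x_new) + alpha_s
```

Note that every interval produced this way has the same width $2\alpha_s$, which is exactly the drawback the normalized measure discussed next addresses.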
The drawback of this standard non-conformity measure is that all prediction
intervals are equally sized ($2{\alpha}_{s}$) for a given confidence level.
Adopting a normalized non-conformity measure instead provides personalized
individual bounds for each new example by scaling the standard non-conformity
measure with ${\sigma}_{i}$, a term that estimates the difficulty of
predicting $y_{i}$. This means that using a normalized non-conformity measure
gives a smaller prediction interval for “easy” examples, and a bigger one for
“hard” examples. Thus, two distinct examples sharing the same ${\alpha}_{s}$
will have two different interval predictions depending on their difficulty. In
this case, the normalized non-conformity score is as follows:
${\alpha}_{i}=\frac{|y_{i}-\hat{y}_{i}|}{{\sigma}_{i}}.$ (5)
Thus, we have:
$\mathbb{P}\left(\frac{|y_{i}-\hat{y_{i}}|}{\sigma_{i}}\leq\alpha_{s}\right)\geq
1-\epsilon,$ (6)
which becomes an equality if the method is perfectly calibrated. For a new
example $x_{n+1}$, the prediction interval becomes:
$\hat{\mathbf{y}}_{n+1}=\left[{\hat{y}}_{n+1}-{\alpha}_{s}{\sigma}_{n+1},{\hat{y}}_{n+1}+{\alpha}_{s}{\sigma}_{n+1}\right].$
(7)
The value ${\sigma}_{i}$ can be defined in various ways. A popular approach
proposed by Papadopoulos and Haralambous [12] consists of training a small
neural network to estimate the error of the underlying algorithm by predicting
the value ${\mu}_{i}=\ln(|y_{i}-\hat{y}_{i}|)$. In this case, the non-
conformity score is defined as:
${\alpha}_{i}=\frac{|y_{i}-\hat{y}_{i}|}{\exp({\mu}_{i})+\beta},$ (8)
where $\beta\geq 0$ is a sensitivity parameter. With the significance level
$\epsilon$, we have:
$\mathbb{P}\left(\frac{|y_{i}-\hat{y_{i}}|}{\exp({\mu}_{i})+\beta}\leq\alpha_{s}\right)\geq
1-\epsilon.$ (9)
For a new example $x_{n+1}$, the prediction interval is:
$\hat{\mathbf{y}}_{n+1}=\left[{\hat{y}}_{n+1}-{\alpha}_{s}(\exp({\mu}_{n+1})+\beta),{\hat{y}}_{n+1}+{\alpha}_{s}(\exp({\mu}_{n+1})+\beta)\right].$
(10)
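The normalized variant of eqs. (8)-(10) can be sketched in the same way. Here a second least-squares fit on the log absolute training errors stands in for the small error-estimating neural network of Papadopoulos and Haralambous; that substitution, like the data, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Heteroscedastic data: the noise grows with x, so difficulty varies.
x = rng.uniform(0, 10, size=1200)
y = 2.0 * x + rng.normal(0, 0.2 + 0.2 * x, size=1200)
x_tr, y_tr, x_cal, y_cal = x[:800], y[:800], x[800:], y[800:]

slope, intercept = np.polyfit(x_tr, y_tr, 1)

def predict(u):
    return slope * u + intercept

# Error model: fit mu = ln|y - yhat| on the training residuals
# (a linear stand-in for the small neural network of eq. (8)).
log_err = np.log(np.abs(y_tr - predict(x_tr)) + 1e-8)
a, b = np.polyfit(x_tr, log_err, 1)

beta = 0.1  # sensitivity parameter

def sigma(u):
    return np.exp(a * u + b) + beta

# Normalized non-conformity scores, eq. (8).
alphas = np.abs(y_cal - predict(x_cal)) / sigma(x_cal)
eps = 0.1
alpha_s = np.quantile(alphas, 1 - eps)

# Eq. (10): the interval width now adapts to the estimated difficulty.
def width(u):
    return 2 * alpha_s * sigma(u)
```

On this data, `width(9.0)` exceeds `width(1.0)`: hard (noisy) regions get wider intervals while overall calibration is preserved.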
Other approaches use different algorithms to normalize the non-conformity
scores, such as regression trees [13] and $k$-nearest neighbors [8]. Before
introducing the problem of multi-target regression, let us first note that,
assuming that our method is well-calibrated and that
$|y_{i}-\hat{y_{i}}|/\sigma_{i}$ is associated to a random variable $Q$, (6)
can be rewritten as
$\mathbb{P}(Q\leq\alpha_{s})=1-\epsilon:=F_{Q}(\alpha_{s}),$ (11)
which will be instrumental when dealing with copulas and multi-variate outputs
later on. Also note that this means that specifying a confidence $\epsilon$
uniquely defines a value $\alpha_{s}$.
### 2.2 Multi-target regression (MTR)
In multi-target regression, the feature space $X$ is the same as in standard
regression, but the target space $Y\subset\mathbb{R}^{m}$ is made of $m$ real-
valued targets. This means that observations are i.i.d. pairs $(x_{i},y_{i})$
drawn from a probability distribution on $X\times Y$, where each instance
$x_{i}\in X$ is associated to an $m$ dimensional real-valued target
$y_{i}=(y_{i}^{1},\ldots,y_{i}^{m})\in Y$. The usual objective of multi-target
regression is then to learn a predictor $h:X\rightarrow Y$, i.e. to predict
multiple outputs based on the input features characterizing the data set,
which generalizes standard regression. There are two distinct families of
approaches to MTR, called algorithm adaptation and problem transformation methods.
For algorithm adaptation approaches, standard single-output regression
algorithms are extended to the multi-target regression problem. Many models
were adapted to the MTR problem, such as Support Vector Regressors [14],
regression trees [15], kernel methods [16] and rule ensembles [17].
In problem transformation, one usually decomposes the initial multi-variate
problem into several simpler problems, thus allowing the use of standard
regression methods without the need for an adaptation that can be tricky
or computationally costly. A prototypical example of such a transformation is
the chaining method [18], where one predicts each target sequentially, using
the output and predictions of previous targets as inputs for the next one,
thus capturing some correlations between the targets.
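The chaining transformation can be illustrated with two targets and two least-squares learners; both the data and the choice of learner are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two correlated targets: y2 depends on y1.
X = rng.normal(size=(500, 1))
y1 = 3.0 * X[:, 0] + rng.normal(0, 0.1, 500)
y2 = 2.0 * y1 + rng.normal(0, 0.1, 500)

# Chain: model 1 predicts y1 from X; model 2 predicts y2 from [X, y1].
w1, *_ = np.linalg.lstsq(np.c_[X, np.ones(500)], y1, rcond=None)
w2, *_ = np.linalg.lstsq(np.c_[X, y1, np.ones(500)], y2, rcond=None)

def predict_chain(x_new):
    """Predict (y1, y2) sequentially, feeding y1's prediction to model 2."""
    y1_hat = np.array([x_new, 1.0]) @ w1
    y2_hat = np.array([x_new, y1_hat, 1.0]) @ w2
    return y1_hat, y2_hat

y1_hat, y2_hat = predict_chain(1.0)
```

The second model sees the first target's prediction as a feature, which is how chaining captures (some of) the correlation between targets.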
As our goal here is not to produce a new MTR method, but rather to propose a
flexible means to make their predictions reliable through conformal
prediction, we will not make a more detailed review of those methods. The
reader interested in different methods can consult for instance [18]; let us
just mention that exploiting the possible relationships between targets
generally improves the performance of the methods [19, 20]. We will now detail
how conformal prediction and MTR can be combined.
### 2.3 Inductive conformal prediction for Multi-Target Regression
As said before, previous studies about conformal MTR focused on providing
valid and efficient inferences target-wise [7], thus neglecting the
potential advantages of exploiting target relations. Our main goal in this
paper is to provide an easy conformal MTR method that does so.
Within the MTR setting, we have a multi-dimensional output
$\\{Y^{1},\ldots,Y^{m}\\}$ (we will use superscripts to denote the dimensions,
and subscripts to denote sample indices) with
$Y^{j}\in\mathbb{R},j\in\\{1,\ldots,m\\}$ the $m$ different individual real-valued
targets. Let $\underline{\hat{y}}_{n+1}^{j},\overline{\hat{y}}_{n+1}^{j}$
be respectively the lower and upper bounds of the interval predictions given
by the non-conformity measure for each target $Y^{j}$ given a new instance
$x_{n+1}$. We define the hyper-rectangle $[\hat{\mathbf{y}}_{n+1}]$ as the
following Cartesian product:
$[\hat{\mathbf{y}}_{n+1}]=\times_{j=1}^{m}[\underline{\hat{y}}_{n+1}^{j},\overline{\hat{y}}_{n+1}^{j}].$
(12)
This hyper-rectangle forms the volume
$\prod_{j=1}^{m}(\overline{\hat{y}}_{n+1}^{j}-\underline{\hat{y}}_{n+1}^{j})$
to which a global prediction $y_{n+1}$ of a new example $x_{n+1}$ should
belong in order to be valid, i.e. each single prediction $y_{n+1}^{j}$ for
each individual target $Y^{j}$ should be between the bounds
$\underline{\hat{y}}^{j}_{n+1},\overline{\hat{y}}^{j}_{n+1}$ of its interval
prediction. With this view, the objective of the conformal prediction
framework for MTR in the normalized setting is to satisfy a global
significance level $\epsilon_{g}$ required by the user such that:
$\mathbb{P}(y_{n+1}\in[\hat{\mathbf{y}}_{n+1}])\geq 1-\epsilon_{g}.$ (13)
This probability can also be written as follows:
$\displaystyle\mathbb{P}(y_{n+1}^{1}\in[\underline{y_{n+1}^{1}},\overline{y_{n+1}^{1}}],\ldots,y_{n+1}^{m}\in[\underline{y_{n+1}^{m}},\overline{y_{n+1}^{m}}])$
$\displaystyle=\mathbb{P}\left(\frac{|y_{n+1}^{1}-\hat{y}_{n+1}^{1}|}{\sigma_{n+1}^{1}}\leq\alpha^{1}_{s},\ldots,\frac{|y_{n+1}^{m}-\hat{y}_{n+1}^{m}|}{\sigma_{n+1}^{m}}\leq\alpha^{m}_{s}\right)\geq
1-\epsilon_{g}.$ (14)
Thus, we need to find the individual non-conformity scores
$\alpha^{1}_{s},\ldots,\alpha^{m}_{s}$, defined for instance by target-wise
confidence levels $\epsilon_{j}$, such that we ensure a global confidence
level $1-\epsilon_{g}$. Extending (11) and considering the random variables
$Q^{j}=|y^{j}-\hat{y}^{j}|/\sigma^{j}$, $j\in\\{1,\ldots,m\\}$, we get:
$\mathbb{P}(Q^{1}\leq\alpha^{1}_{s},\ldots,Q^{m}\leq\alpha^{m}_{s})\geq
1-\epsilon_{g}.$ (15)
Should we know the joint distribution in (15), and therefore the dependence
relations between target predictions, it would be relatively easy to get the
individual significance levels $\epsilon_{j}$ (note that there may be multiple
choices for such individual levels; here we will fix them to be equal for
simplicity) associated with the individual non-conformity scores
$\alpha^{j}_{s}$ such that we satisfy the chosen confidence level
$1-\epsilon_{g}$. Yet, such a joint distribution is usually unknown. The next
section proposes a simple and efficient method to do so, leveraging the
connection between (15) and copulas. Before doing that, note again that under
the assumption that we are well calibrated, we can transform (15) into
$F(\alpha^{1}_{s},\ldots,\alpha^{m}_{s})=1-\epsilon_{g},$ (16)
where $F$ denotes here the joint cumulative distribution induced by
$\mathbb{P}$.
## 3 Copula-based conformal Multi-Target Regression
This section introduces our approach to obtain valid or better conformal
prediction in the multi-variate regression setting. We first recall some
basics of copulas and refer to Nelsen [10] for a full introduction, before
detailing how we apply them to conformal approaches.
### 3.1 Overview on copulas
A copula is a mathematical function that can describe the dependence between
multiple random variables. The term “copula” was first introduced by Sklar
[21] in his famous theorem, now known as Sklar’s theorem, which is one of the
foundations of copula theory. However, such tools had already been used
before, for instance in Fréchet’s paper [22] and Höffding’s work [23, 24]
(reprinted as [25]). Copulas are popular in the statistical and financial
fields [26], but they are nowadays more and more used in other domains as
well, such as hydrology [27], medicine [28], and machine learning [29].
Let $\mathbf{Q}=(Q^{1},\ldots,Q^{m})$ be an $m$-dimensional random vector
composed of the random variables $Q^{1},\ldots,Q^{m}$. Let its cumulative
distribution function (c.d.f.) be $F=F_{Q}:\mathbb{R}^{m}\rightarrow[0,1]$.
This c.d.f. carries two important pieces of information:
* The c.d.f. of each random variable $Q^{j}$, i.e., $F_{j}(q^{j})=\mathbb{P}(Q^{j}\leq q^{j})$ for all $j\in\\{1,\ldots,m\\}$.
* The dependence structure between them.
The objective of copulas is to isolate the dependence structure from the
marginals $Q^{j}$ by transforming them into uniformly distributed random
variables $U^{j}$ and then expressing the dependence structure between the
$U^{j}$’s. In other words, an $m$-dimensional copula
$C:[0,1]^{m}\rightarrow[0,1]$ is a c.d.f. with standard uniform marginals. It
is characterized by the following properties:
1. $C$ is grounded, i.e. if $u^{j}=0$ for at least one $j\in\\{1,\ldots,m\\}$, then $C(u^{1},\ldots,u^{m})=0$.
2. If all arguments of $C$ are equal to 1 except the $j$-th one, then $C(1,\ldots,1,u^{j},1,\ldots,1)=u^{j}$ for all $u^{j}\in[0,1]$ and $j\in\\{1,\ldots,m\\}$.
3. $C$ is $m$-increasing, i.e., for all $\mathbf{a},\mathbf{b}\in[0,1]^{m}$ with $\mathbf{a}\leq\mathbf{b}$:
${\Delta}_{(\mathbf{a},\mathbf{b}]}C=\sum_{j\in\\{0,1\\}^{m}}(-1)^{\sum_{k=1}^{m}j_{k}}C(a_{1}^{j_{1}}b_{1}^{1-j_{1}},\dots,a_{m}^{j_{m}}b_{m}^{1-j_{m}})\geq 0.$
The last inequality simply ensures that the copula is a well-defined c.d.f.
inducing non-negative probability for every event. The idea of copulas is
based on probability and quantile transformations [30]. Using the latter, we
can see that every multivariate distribution function embeds a copula, and
that we can combine univariate marginal distributions with a suitable
copula to produce a multivariate distribution function. This is described in
Sklar’s theorem [21] as follows:
###### Theorem 3.1 (Sklar’s theorem)
For any $m$-dimensional cumulative distribution function (c.d.f.) $F$ with
marginal distributions $F_{1},\dots,F_{m}$, there exists a copula
$C:[0,1]^{m}\rightarrow[0,1]$ such that:
$F(\mathbf{q})=F(q^{1},\ldots,q^{m})=C(F_{1}(q^{1}),\ldots,F_{m}(q^{m})),\quad\mathbf{q}\in\mathbb{R}^{m}.$
(17)
If $F_{j}$ is continuous for all $j\in\\{1,\ldots,m\\}$, then $C$ is unique.
Denoting the pseudo inverse of $F_{j}$ as $F^{\leftarrow}_{j}$ [30], we can
get from (17) that
$C(\mathbf{u})=C(u^{1},\ldots,u^{m})=F(F^{\leftarrow}_{1}(u^{1}),\ldots,F^{\leftarrow}_{m}(u^{m})).$
(18)
There are a few noticeable copulas, among which are:
* the product copula: $\Pi(\mathbf{u})=\prod_{j=1}^{m}u^{j}$;
* the Fréchet-Höffding upper bound copula ($M$ is a copula for all $m\geq 2$): $M(\mathbf{u})=\min_{1\leq j\leq m}\\{u^{j}\\}$;
* the Fréchet-Höffding lower bound copula ($W$ is a copula if and only if $m=2$): $W(\mathbf{u})=\max\\{\sum_{j=1}^{m}u^{j}-m+1,0\\}$.
While the product copula corresponds to classical stochastic independence, the
Fréchet-Höffding bound copulas play an important role as they correspond to
extreme cases of dependence [31]. Indeed, any $m$-dimensional copula $C$ is
such that $W(\mathbf{u})\leq C(\mathbf{u})\leq
M(\mathbf{u}),\mathbf{u}\in[0,1]^{m}.$
Another important class of copulas are so-called Archimedean copulas, which
are based on generator functions $\phi$ of specific kinds. More precisely, a
continuous, strictly decreasing, convex function
$\phi:[0,1]\rightarrow[0,\infty]$ satisfying $\phi(1)=0$ is known as an
Archimedean copula generator. It is known as a strict generator if
$\phi(0)=\infty$. The generated copula is then given by
$C(u^{1},\ldots,u^{m})={\phi}^{[-1]}(\phi(u^{1})+\ldots+\phi(u^{m})).$ (19)
Table 1 provides details of three one-parameter Archimedean copula families
[30], which are particularly convenient in estimation problems (being based on
a single parameter).
Family | Generator $\phi(t)$ | $\theta$ range | Strict | Lower | Upper
---|---|---|---|---|---
Gumbel [32] | $(-\ln t)^{\theta}$ | $\theta\geq 1$ | Yes | $\Pi$ | $M$
Clayton [33] | $\frac{1}{\theta}(t^{-\theta}-1)$ | $\theta\geq-1$ | $\theta\geq 0$ | $W$ | $M$
Frank [34] | $-\ln\left(\frac{e^{-\theta t}-1}{e^{-\theta}-1}\right)$ | $\theta\in\mathbb{R}$ | Yes | $W$ | $M$
Table 1: Archimedean copula families.
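Equation (19) together with the generators of Table 1 gives a direct recipe for evaluating these copulas. A sketch with the Clayton generator (the choice of family and of $\theta$ is illustrative):

```python
import numpy as np

def archimedean(u, phi, phi_inv):
    """Eq. (19): C(u^1, ..., u^m) = phi^{-1}(phi(u^1) + ... + phi(u^m))."""
    return phi_inv(sum(phi(t) for t in u))

# Clayton generator and its inverse, valid for theta > 0 (Table 1).
theta = 2.0

def phi(t):
    return (t ** (-theta) - 1.0) / theta

def phi_inv(s):
    return (1.0 + theta * s) ** (-1.0 / theta)

c = archimedean([0.4, 0.6], phi, phi_inv)
# For m = 2 this reduces to the closed form (u^-theta + v^-theta - 1)^(-1/theta).
```

Since $\theta>0$ induces positive dependence, the value lies between the product copula $\Pi(0.4,0.6)=0.24$ and the upper bound $M(0.4,0.6)=0.4$.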
### 3.2 Copula-based conformal Multi-Target Regression
Let us now revisit our previous problem of finding the significance levels
$\epsilon_{j}$ for each target so that the hyper-rectangle prediction
$[\hat{\mathbf{y}}]$ covers the true value with confidence $1-\epsilon_{g}$.
Let us first consider (16). Following Sklar’s theorem, we have
$\displaystyle F(\alpha^{1}_{s},\ldots,\alpha^{m}_{s})$
$\displaystyle=C(F_{1}(\alpha^{1}_{s}),\ldots,F_{m}(\alpha^{m}_{s}))$
$\displaystyle=C(1-\epsilon^{1},\ldots,1-\epsilon^{m})$
$\displaystyle=1-\epsilon_{g}$
where the second line is obtained from (6). Clearly, if we knew the copula
$C$, then we could search for values $\epsilon_{j}$ providing the desired
global confidence.
A major issue is then to obtain or estimate the copula modelling the
dependence structure between the targets and their confidence levels. As
copulas are classically estimated from multi-variate observations, a simple
means that we will use here is to estimate them from the non-conformity scores
generated from the calibration set $Z^{cal}$. Namely, if $\alpha_{i}^{j}$ is
the non-conformity score corresponding to the $j^{th}$ target of the $z_{i}$
example of $Z^{cal}$ for $i\in\\{l+1,\ldots,n\\}$, we simply propose to
estimate a copula $C$ from the matrix
$A=\begin{bmatrix}\alpha_{l+1}^{1}&\alpha_{l+1}^{2}&\dots\\\ \vdots&\ddots&\\\
\alpha_{n}^{1}&&\alpha_{n}^{m}\end{bmatrix}.$ (20)
### 3.3 On three specific copulas
We will now provide some detail about the copulas we performed experiments on.
They have been chosen to go from the one requiring the most assumptions to the
one requiring the least assumptions.
#### 3.3.1 The Independent copula
The Independent copula means that the $m$ targets are considered as being
independent, with no relationship between them. It is a strong assumption, but
it does not require any estimation of the copula. In this case, (15) becomes:
$\displaystyle\Pi(F_{1}(\alpha^{1}_{s}),\ldots,F_{m}(\alpha^{m}_{s}))$
$\displaystyle=\prod_{j=1}^{m}F_{j}(\alpha^{j}_{s})=\prod_{j=1}^{m}\mathbb{P}(Q^{j}\leq\alpha^{j}_{s})$
$\displaystyle\geq\prod_{j=1}^{m}(1-\epsilon^{j})=1-\epsilon_{g},$
If we assume that all $\epsilon^{1},\ldots,\epsilon^{m}$ equal the same value
$\epsilon_{t}$, then:
$\prod_{j=1}^{m}(1-\epsilon^{j})=(1-\epsilon_{t})^{m}=1-\epsilon_{g}.$
Thus, we simply obtain
$\epsilon_{t}=1-\sqrt[m]{1-\epsilon_{g}}.$ (21)
This individual significance level $\epsilon_{t}$ is then used to calculate
the different non-conformity scores $\alpha^{j}_{s}$ for each target in the
multi-target regression problem for the Independent copula.
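Equation (21) is straightforward to compute; a minimal sketch:

```python
def target_eps_independent(eps_g, m):
    """Eq. (21): per-target significance level under the Independent copula."""
    return 1.0 - (1.0 - eps_g) ** (1.0 / m)

eps_t = target_eps_independent(0.10, 3)
# Sanity check: m targets at level eps_t recover the global level.
assert abs((1.0 - eps_t) ** 3 - 0.90) < 1e-12
```

For example, a global level $\epsilon_g=0.10$ with $m=3$ targets requires each target to be calibrated at the stricter level $\epsilon_t\approx 0.0345$.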
#### 3.3.2 The Gumbel copula
The Gumbel copula is a member of the Archimedean copula family which depends
on only one parameter, and in this sense is a good representative of
parametric copulas. It comes down to applying the generator function
$\phi(t)=(-\ln t)^{\theta}$ and its inverse
${\phi}^{[-1]}(s)=\exp(-s^{1/\theta})$
in (19), resulting in the expression
$C^{\theta}_{G}(F_{1}(\alpha^{1}_{s}),\ldots,F_{m}(\alpha^{m}_{s}))=\exp{-\left(\sum_{j=1}^{m}\left(-\ln
F_{j}(\alpha^{j}_{s})\right)^{\theta}\right)^{1/\theta}}.$ (22)
In this case, we need to estimate the parameter $\theta$. Since the marginals
$F_{j}(\alpha^{j})$ are unknown, we also need to estimate them. In our case,
we will simply use the empirical c.d.f. induced by the non-conformity scores
$\alpha_{i}^{j}$ of matrix $A$. An alternative would be to also assume a
parametric form of the $F_{j}$, but this seems in contradiction with the very
spirit of non-conformity scores. In particular, we will denote by
$\hat{F}_{j}$ the empirical cumulative distribution such that
$\hat{F}_{j}(\beta)=\frac{|\\{\alpha^{j}_{i}:\alpha^{j}_{i}\leq\beta,i\in\\{l+1,\ldots,n\\}\\}|}{n-l},\quad\beta\in\mathbb{R}.$
The parameter $\theta$ can then be estimated from matrix $A$ using the Maximum
Pseudo-Likelihood Estimator [35] with a numerical optimization, for instance
by using the Python library “copulae” (https://pypi.org/project/copulae/).
Once this is obtained, we then get for a particular choice of $\epsilon_{j}$
that
$\displaystyle C_{G}^{\hat{\theta}}$
$\displaystyle=\exp{-\left(\sum_{j=1}^{m}\left(-\ln(1-\epsilon_{j})\right)^{\hat{\theta}}\right)}^{1/{\hat{\theta}}}$
(23) $\displaystyle=\exp{-\left(\sum_{j=1}^{m}\left(-\ln
F_{j}(\alpha^{j}_{s})\right)^{\hat{\theta}}\right)}^{1/{\hat{\theta}}}$ (24)
We can then search for values $\epsilon_{j}$ that make this equation equal
to $1-\epsilon_{g}$, using the estimates $\hat{F}_{j}$. The solution is
especially easy to obtain analytically if we consider that
$\epsilon^{1}=\ldots=\epsilon^{m}=\epsilon_{t}$, as we then have that
$\epsilon_{t}=1-(1-\epsilon_{g})^{1/\sqrt[\theta]{m}},$
and one can then obtain the corresponding non-conformity scores
$\alpha^{1}_{s},\ldots,\alpha^{m}_{s}$ by replacing $F_{j}$ by $\hat{F}_{j}$.
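This closed form is easy to evaluate (recall that $\sqrt[\theta]{m}=m^{1/\theta}$); a minimal sketch:

```python
def target_eps_gumbel(eps_g, m, theta):
    """Per-target level for the Gumbel copula with equal eps_j:
    eps_t = 1 - (1 - eps_g)^(1 / m^(1/theta))."""
    return 1.0 - (1.0 - eps_g) ** (1.0 / m ** (1.0 / theta))

# theta = 1 recovers the Independent-copula formula (21).
assert abs(target_eps_gumbel(0.1, 4, 1.0) - (1 - 0.9 ** 0.25)) < 1e-12
```

Note that larger $\theta$ (stronger positive dependence) yields a larger, i.e. less strict, per-target level, approaching $\epsilon_g$ itself in the comonotone limit.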
We chose this particular family of Archimedean copulas because its lower bound
is the Independent copula (as seen in Table 1). We can easily verify this by
taking $\hat{\theta}=1$. Thus, we can capture independence if it is verified,
and otherwise search in the direction of positive dependence. One reason for
such a choice is that previous experiments [9] indicate that the product
copula gives overly conservative results.
#### 3.3.3 The Empirical copula
Parametric copulas, like all parametric models, have the advantage of requiring
less data to be well estimated, but the possibly important disadvantage that
they induce some bias in the estimation, which is likely to grow as the number
of targets increases. The Empirical copula offers a non-parametric way of
estimating the copula directly from the observations [36, 37]. It is defined
as follows [35]:
$C_{E}(\mathbf{u})=\frac{1}{n-l}\sum_{i=l+1}^{n}\mathbbm{1}_{\mathbf{u}_{i}\leq\mathbf{u}}=\frac{1}{n-l}\sum_{i=l+1}^{n}\prod_{j=1}^{m}\mathbbm{1}_{u_{i}^{j}\leq
u^{j}},\quad\mathbf{u}\in[0,1]^{m},$ (25)
where $\mathbbm{1}_{A}$ is the indicator function of event $A$, and the
inequalities $\mathbf{u}_{i}\leq\mathbf{u}$ for $i\in\\{l+1,\ldots,n\\}$ need
to be understood component-wise. $\mathbf{u}_{i}$ are the pseudo-observations
that replace the unknown marginal distributions, which are defined as:
$\mathbf{u}_{i}=(u_{i}^{1},\ldots,u_{i}^{m})=(\hat{F}_{1}(\alpha_{i}^{1}),\ldots,\hat{F}_{m}(\alpha_{i}^{m})),\quad
i\in\\{l+1,\ldots,n\\},$ (26)
where distributions $\hat{F}_{j}$ are defined as before. Simply put, the
Empirical copula amounts to taking the empirical joint cumulative distribution
as our joint probability. We then have that
$C_{E}(F_{1}(\alpha^{1}_{s}),\ldots,F_{m}(\alpha^{m}_{s}))=\frac{1}{n-l}\sum_{i=l+1}^{n}\prod_{j=1}^{m}\mathbbm{1}_{u_{i}^{j}\leq
F_{j}(\alpha^{j}_{s})}.$ (27)
Using that $F_{j}(\alpha^{j}_{s})=1-\epsilon_{j}$, we can then search for
values of $\epsilon_{j}$, $j=1,\ldots,m$ that make (27) equal to
$1-\epsilon_{g}$. Note that in this case, even assuming that
$\epsilon_{1}=\ldots=\epsilon_{m}=\epsilon_{t}$ requires an algorithmic
search; this is however easy, as $C_{E}$ is an increasing function, so a
simple dichotomic search suffices.
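A minimal sketch of this search in Python (function names ours; `U` is the matrix of pseudo-observations of (26)). Since $C_{E}$ is non-decreasing in each argument, the map $\epsilon_{t}\mapsto C_{E}(1-\epsilon_{t},\ldots,1-\epsilon_{t})$ is non-increasing, so bisection applies:

```python
import numpy as np

def empirical_copula(U, u):
    # C_E(u): fraction of pseudo-observation rows componentwise <= u.
    return np.mean(np.all(U <= u, axis=1))

def search_eps_t(U, eps_g, tol=1e-6):
    # Largest common eps_t with C_E(1-eps_t, ..., 1-eps_t) >= 1-eps_g,
    # found by dichotomic (bisection) search on [0, 1].
    lo, hi = 0.0, 1.0
    m = U.shape[1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if empirical_copula(U, np.full(m, 1.0 - mid)) >= 1.0 - eps_g:
            lo = mid  # coverage still sufficient: eps_t can grow
        else:
            hi = mid
    return lo
```

For fully dependent targets (identical pseudo-observation columns) this returns $\epsilon_{t}\approx\epsilon_{g}$, as one would expect: a single interval then controls all targets at once.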
## 4 Evaluation
In this section, we describe the experimental setting (underlying algorithm,
data sets and performance metrics) and the results of our study.
### 4.1 Experimental setting
We choose to work with a deep neural network as the underlying algorithm. We
keep the same underlying algorithm for all non-conformity measures, since our
focus is to compare between the three copula functions chosen to get the
different non-conformity scores.
To compute the non-conformity scores over the calibration set, we use the
normalized non-conformity score given by (8) as described in [12], and predict
${\mu}_{i}=\ln(|y_{i}-\hat{y}_{i}|)$ simultaneously for all targets by a
single multivariate multi-layer perceptron. In this case, ${\mu}_{i}$
represents the estimation of the underlying algorithm’s error. As mentioned
before, the approach can be adapted to any conformal regression approach.
Experiments are conducted on normalized data with a mean of 0 and a standard
deviation of 1 to simplify the deep neural network optimization, with a
10-fold cross validation to avoid the impact of biased results, and with a
calibration set equal to $10\%$ of the training examples for all data sets. We
take the value $\beta=0.1$ for the sensitivity parameter and do not optimize
it when calculating the normalizing coefficient ${\mu}_{i}$. After getting the
proper training data $(X^{tr},Y^{tr})$, calibration data $(X^{cal},Y^{cal})$
and test data $(X^{ts},Y^{ts})$ for each fold, we follow the steps described
below:
1. Train the underlying algorithm (a deep neural network) on the proper training data $(X^{tr},Y^{tr})$. Its architecture is composed of a first dense layer applied to the input with “selu” activation (scaled exponential linear units [38]), three hidden dense layers with dropouts and “selu” activation, and a final dense layer with $m$ outputs and a linear activation.
2. Predict $\hat{Y}^{cal}$ and $\hat{Y}^{ts}$ for calibration and test data respectively using the underlying algorithm.
3. Train the normalizing multi-layer perceptron on the proper training data $(X^{tr},\mu_{tr}=\ln(|Y^{tr}-\hat{Y}^{tr}|))$, corresponding to the error estimation of the underlying algorithm. The normalizing MLP consists of three hidden dense layers with “selu” activation and dropouts and a final dense layer with $m$ outputs for predicting all targets simultaneously.
4. Predict $\mu_{cal}$ and $\mu_{ts}$ for calibration and test data respectively using the normalizing MLP.
5. If needed, get an estimation of the copula $C$ from the matrix $A$ of calibration non-conformity scores (in the case of the Gumbel copula, we use a Maximum Pseudo-Likelihood Estimator with a numerical optimization using the BFGS algorithm).
6. For each global significance level $\epsilon_{g}$:
   * Get the individual significance level $\epsilon_{j}=\epsilon_{t}$ for $j\in\\{1,\ldots,m\\}$ and calculate $\alpha_{s}=\\{\alpha^{1}_{s},\ldots,\alpha^{m}_{s}\\}$ for all targets using calibration data, according to the methods mentioned in Section 3.3.
   * Get the interval predictions for the test data with:
$\left[{\hat{Y}}^{ts}-{\alpha}_{s}(\exp({\mu}_{ts})+\beta),{\hat{Y}}^{ts}+{\alpha}_{s}(\exp({\mu}_{ts})+\beta)\right].$ (28)
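The interval construction of (28) is a one-liner; a sketch assuming NumPy (function name and array shapes are ours, with `y_hat` and `mu_hat` standing for the outputs of the two prediction steps):

```python
import numpy as np

def prediction_intervals(y_hat, mu_hat, alpha_s, beta=0.1):
    # Per-target half-width alpha_s[j] * (exp(mu_hat[:, j]) + beta)
    # around the point prediction, as in Eq. (28).
    half = alpha_s * (np.exp(mu_hat) + beta)
    return y_hat - half, y_hat + half
```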
###### Remark 4.1
We choose $\epsilon_{j}=\epsilon_{t}$ for $j\in\\{1,\ldots,m\\}$ as we have no
indication that individual targets should be treated with different degrees of
caution. However, since copulas are functions from $[0,1]^{m}$ to
$[0,1]$, there is in principle no problem in considering different confidence
degrees for different tasks, if an application calls for it. How to determine
and elicit such degrees is, however, to our knowledge an open question.
The implementation was done using Python and Tensorflow. The copula part of
our experiments was based on the book [35] and the Python library “copulae”.
Names | Examples | Features | Targets
---|---|---|---
music origin [39] | 1059 | 68 | 2
indoor loc [40] | 21049 | 520 | 3
scpf [41] | 1137 | 23 | 3
sgemm [42] | 241600 | 14 | 4
rf1 [41] | 9125 | 64 | 8
rf2 [41] | 9125 | 576 | 8
scm1d [41] | 9803 | 280 | 16
scm20d [41] | 8966 | 61 | 16
Table 2: Information on the used multi-target regression data sets.
We use eight data sets with different numbers of targets and varying sizes.
They are summarized in Table 2.
### 4.2 Results
This section presents the results of our experiments, investigating in
particular the validity and efficiency of the proposed approaches. Figures 1
and 2 detail these results for “music origin” and “sgemm”. The figures for all
other data sets can be found in A.
To verify the validity of each non-conformity measure, we calculate the
accuracy of each one and compare it with the calibration line. This line
represents the case where the error rate is exactly equal to $\epsilon_{g}$
for a confidence level $1-\epsilon_{g}$, which is the desired outcome of using
conformal prediction. In multi-target regression, the accuracy is computed
based on whether the observation $y$ belongs to the hyper-rectangle
$[\hat{\mathbf{y}}]$ or not depending on the significance level
$\epsilon_{g}$. Thus, for an example to count as correctly predicted, the
observation $y_{i}$ of each individual target $Y_{i}$ must lie in its
corresponding individual interval prediction. Concretely, for each considered
confidence level $\epsilon_{g}$ and test example $x\in X^{ts}$, we obtain a
prediction $[\hat{\mathbf{y}}]_{\epsilon_{g}}$. From this, we can compute the
empirical validity as the percentage of times that
$[\hat{\mathbf{y}}]_{\epsilon_{g}}$ contains the true observed value, i.e.,
$\frac{\sum_{(x,y)\in
Z^{ts}}\mathbbm{1}_{y\in[\hat{\mathbf{y}}]_{\epsilon_{g}}}}{|Z^{ts}|}.$
Doing it for several values of $\epsilon_{g}$, we obtain a calibration curve
that should be as close as possible to the identity function.
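This empirical validity can be computed directly from the interval bounds; a small sketch assuming NumPy (function name ours):

```python
import numpy as np

def empirical_validity(y_true, lower, upper):
    # A test example counts as covered only if every target lies
    # inside its interval, i.e. y is inside the hyper-rectangle.
    inside = np.all((y_true >= lower) & (y_true <= upper), axis=1)
    return float(inside.mean())
```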
(a) Empirical validity
(b) Hyper-rectangle median volume
Figure 1: Results for music origin.
(a) Empirical validity
(b) Hyper-rectangle median volume
Figure 2: Results for sgemm.
The results of the error rate or accuracy curves are shown in sub-figure a of
each figure for the Independent, Gumbel and Empirical multivariate non-
conformity measures. The outcomes clearly show that the best performance is
obtained by using the Empirical copula, where the model is well calibrated.
For most of the studied data sets, the Empirical copula accuracy curve is
almost perfectly aligned with the calibration line, and thus almost exactly
valid. This is due to the fact that the Empirical copula estimates the
dependence structure non-parametrically from the observations, which enables
the model to better adapt to each data set. This
dependence structure is neglected when using an Independent copula-based non-
conformity measure, as the $m$ targets are treated as if they were
independent, and so the link between them is not exploited when computing
$\epsilon_{t}$. This also means that the difference between the Empirical and
the Independent copula-based non-conformity measures is bigger when there is a
strong dependence between the non-conformity scores, and is an indication of
the strength of this dependence. For instance, the large gap between the
Independent and Empirical accuracy curves for “sgemm” (sub-figure 2(a))
indicates that its targets are strongly related. For the Gumbel copula, the
accuracy curve is generally closer to the calibration line than the one for
the Independent copula. This supports the existence of a dependence structure
between the targets, since the lower bound of the Gumbel copula is the
Independent copula, which means that if the targets were in fact independent,
the two curves would perfectly match. This can be seen in sub-figure 1(a) for
“music origin”, where the accuracy curves almost overlap all the time, meaning
that the targets are likely to be independent.
From the empirical validity results, we also noticed that the Empirical copula
non-conformity measure can be slightly invalid sometimes (sub-figure 4(a) for
“scpf”). We explain this by the small number of examples, in which case one
could use a more regularized form than the Empirical copula. However, when a
lot of examples are available (for instance, more than 20000 observations for
“sgemm”), the validity curve of the Empirical copula non-conformity measure is
perfectly aligned with the calibration line, meaning that this measure is
exactly valid (sub-figure 2(a)).
In single-output regression, efficiency is measured by the size of the
intervals: a method is more efficient when its predicted intervals are
smaller. To assess efficiency in multi-target regression, we can simply compute
the volume of the obtained predictions $[\hat{\mathbf{y}}]_{\epsilon_{g}}$,
following (12). For each experiment, we then compute the median value of those
hyper-rectangle volumes (for the estimation to be robust against very large
hyper-rectangles).
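A sketch of this efficiency measure (function name ours, assuming NumPy):

```python
import numpy as np

def median_volume(lower, upper):
    # Volume of each hyper-rectangle = product of per-target widths;
    # the median is robust to occasional very large rectangles.
    return float(np.median(np.prod(upper - lower, axis=1)))
```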
Efficiency results are shown in sub-figure b for all data sets for
$\epsilon_{g}=0.1$. They show that, in general, the Independent copula has a
bigger median hyper-rectangle volume compared to the Gumbel and Empirical
copulas, especially in those cases where the existence of a dependence
structure is confirmed by the calibration curves. This is due to the fact that
using an Independent copula ignores the dependence between the non-conformity
scores, which leads to an over-estimation of the global hyper-rectangle error.
This impact is avoided when using the Empirical copula because it takes
advantage of the dependence structure to construct better interval
predictions. Another remark concerning efficiency is that the box plots for
the Empirical copula are tighter than those of the other two, showing that its
values are homogeneous across all folds, whereas the Independent copula, for
instance, shows much more visible variation.
The empirical validity and hyper-rectangle median volume results are
summarized in Tables 3 and 4. The validity simply provides the average
difference between a perfect calibration (the identity function) and the
observed curve for each copula. This means, in particular, that a negative
value indicates that the observed frequency is on average below the specified
confidence degree.
Data sets | Independent | Gumbel | Empirical
---|---|---|---
music origin | $7.06\times 10^{1}\pm 5.12$ | $8.48\times 10^{1}\pm 5.72$ | $\mathbf{2.90\times 10^{1}\pm 5.48}$
indoor loc | $2.99\pm 1.17$ | $2.00\pm 1.28$ | $\mathbf{3.24\times 10^{-1}\pm 1.28}$
scpf | $9.04\pm 5.07$ | $2.73\pm 5.64$ | $\mathbf{-1.42\pm 4.16}$
sgemm | $2.54\times 10^{1}\pm 1.00$ | $3.26\pm 6.53\times 10^{-1}$ | $\mathbf{-1.35\times 10^{1}\pm 3.00\times 10^{-1}}$
rf1 | $5.60\pm 1.59$ | $3.46\pm 1.56$ | $\mathbf{-9.35\times 10^{-3}\pm 1.51}$
rf2 | $6.09\pm 1.86$ | $2.19\pm 2.27$ | $\mathbf{-3.61\times 10^{-1}\pm 2.14}$
scm1d | $1.44\times 10^{1}\pm 1.82$ | $1.03\times 10^{1}\pm 2.98$ | $\mathbf{-7.03\times 10^{-1}\pm 2.32}$
scm20d | $1.68\times 10^{1}\pm 1.43$ | $1.02\times 10^{1}\pm 2.35$ | $\mathbf{-1.34\pm 2.25}$

Table 3: Validity (average gap between the empirical validity curve and the
calibration line in percentage) summarized results for all data sets.
Data sets | Independent | Gumbel | Empirical
---|---|---|---
music origin | $\mathbf{1.97\times 10^{1}\pm 2.99}$ | $3.19\times 10^{1}\pm 1.73\times 10^{1}$ | $2.90\times 10^{1}\pm 1.39\times 10^{1}$
indoor loc | $1.70\times 10^{-1}\pm 5.12\times 10^{-2}$ | $9.54\times 10^{-2}\pm 2.04\times 10^{-2}$ | $\mathbf{8.69\times 10^{-2}\pm 1.86\times 10^{-2}}$
scpf | $5.10\pm 5.31$ | $3.06\pm 3.7$ | $\mathbf{2.39\pm 3.67}$
sgemm | $1.17\times 10^{-3}\pm 5.69\times 10^{-4}$ | $2.56\times 10^{-4}\pm 1.95\times 10^{-4}$ | $\mathbf{2.20\times 10^{-4}\pm 1.60\times 10^{-4}}$
rf1 | $1.18\times 10^{-2}\pm 1.52\times 10^{-2}$ | $5.52\times 10^{-3}\pm 1.05\times 10^{-2}$ | $\mathbf{3.61\times 10^{-3}\pm 6.00\times 10^{-3}}$
rf2 | $2.56\times 10^{-3}\pm 1.87\times 10^{-3}$ | $7.48\times 10^{-4}\pm 8.44\times 10^{-4}$ | $\mathbf{7.00\times 10^{-4}\pm 8.48\times 10^{-4}}$
scm1d | $3.49\times 10^{4}\pm 2.89\times 10^{4}$ | $1.28\times 10^{4}\pm 1.20\times 10^{4}$ | $\mathbf{1.15\times 10^{3}\pm 1.22\times 10^{3}}$
scm20d | $5.43\times 10^{6}\pm 4.43\times 10^{6}$ | $1.80\times 10^{5}\pm 2.15\times 10^{5}$ | $\mathbf{4.14\times 10^{4}\pm 6.66\times 10^{4}}$

Table 4: Efficiency (hyper-rectangle median volume for $\epsilon_{g}=0.1$)
summarized results for all data sets.
The numbers confirm our previous observations on the graphs: the average
gap is systematically higher for the Independent copula and lower for the
Empirical one, with Gumbel in between. We can however notice that while the
Empirical copula provides the best results, it is also often slightly under the
calibration line, indicating that if conservativeness is sought, one
should perhaps prefer the Gumbel copula. Much the same conclusions hold
regarding efficiency, with the Empirical copula giving the best results and
the Independent one the worst.
## 5 Conclusion and discussion
In this paper, we provided an easy and flexible way to obtain valid
conformal predictions in a multi-variate regression setting. We did so by
exploiting a link between non-conformity scores and copulas, a commonly used
tool to model multi-variate distributions.
Experiments on various data sets for a small choice of representative copulas
show that the method indeed allows us to improve upon the naive independence
assumption. These first results indicate in particular that while parametric,
simple copulas may provide valid results for some data sets, more complex
copulas may be needed in general to obtain well-calibrated predictions, with
the cost that good estimations of such copulas require a lot of calibration
data.
As future lines of work, we would like to explore further the flexibility of
our framework, for instance by exploring the possibility of using vines [43]
to model complex dependencies, or by proposing protocols allowing us to obtain
$\epsilon_{g}$ from different individual, user-defined confidence degrees,
following up on Remark 4.1.
Finally, while we mostly focused on multi-variate regression in the present
paper, it would be interesting to try to extend the current approach to other
multi-task settings, such as multi-label problems. A possibility could be to
make such problems continuous, as proposed for instance by Liu [29].
## 6 Acknowledgments
This research was supported by the UTC foundation.
## References
* [1] Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, 9(Mar):371–421, 2008.
* [2] Harris Papadopoulos, Kostas Proedrou, Volodya Vovk, and Alex Gammerman. Inductive confidence machines for regression. In European Conference on Machine Learning, pages 345–356. Springer, 2002.
* [3] Rich Caruana. A dozen tricks with multitask learning. In Neural networks: tricks of the trade, pages 165–191. Springer, 1998.
* [4] Huazhen Wang, Xin Liu, Ilia Nouretdinov, and Zhiyuan Luo. A comparison of three implementations of multi-label conformal prediction. In International Symposium on Statistical Learning and Data Sciences, pages 241–250. Springer, 2015.
* [5] Ran Wang, Sam Kwong, Xu Wang, and Yuheng Jia. Active k-labelsets ensemble for multi-label classification. Pattern Recognition, page 107583, 2020.
* [6] Alexander Kuleshov, Alexander Bernstein, and Evgeny Burnaev. Conformal prediction in manifold learning. In Conformal and Probabilistic Prediction and Applications, pages 234–253, 2018.
* [7] Jelmer Neeven and Evgueni Smirnov. Conformal stacked weather forecasting. In Conformal and Probabilistic Prediction and Applications, pages 220–233, 2018.
* [8] Harris Papadopoulos, Vladimir Vovk, and Alexander Gammerman. Regression conformal prediction with nearest neighbours. Journal of Artificial Intelligence Research, 40:815–840, 2011.
* [9] Soundouss Messoudi, Sébastien Destercke, and Sylvain Rousseau. Conformal multi-target regression using neural networks. In Conformal and Probabilistic Prediction and Applications, pages 65–83. PMLR, 2020.
* [10] Roger B Nelsen. An introduction to copulas, volume 139 of Lecture Notes in Statistics. Springer, 1999.
* [11] Alex Gammerman, Volodya Vovk, and Vladimir Vapnik. Learning by transduction. Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, page 148–155, 1998.
* [12] Harris Papadopoulos and Haris Haralambous. Reliable prediction intervals with regression neural networks. Neural Networks, 24(8):842–851, 2011.
* [13] Ulf Johansson, Henrik Linusson, Tuve Löfström, and Henrik Boström. Interpretable regression trees using conformal prediction. Expert systems with applications, 97:394–404, 2018.
* [14] Matilde Sánchez-Fernández, Mario de Prado-Cumplido, Jerónimo Arenas-García, and Fernando Pérez-Cruz. Svm multiregression for nonlinear channel estimation in multiple-input multiple-output systems. IEEE transactions on signal processing, 52(8):2298–2307, 2004.
* [15] Glenn De’Ath. Multivariate regression trees: a new technique for modeling species–environment relationships. Ecology, 83(4):1105–1117, 2002.
* [16] Luca Baldassarre, Lorenzo Rosasco, Annalisa Barla, and Alessandro Verri. Multi-output learning via spectral filtering. Machine learning, 87(3):259–301, 2012.
* [17] Timo Aho, Bernard Ženko, and Sašo Džeroski. Rule ensembles for multi-target regression. In 2009 Ninth IEEE International Conference on Data Mining, pages 21–30. IEEE, 2009.
* [18] Eleftherios Spyromitros-Xioufis, Grigorios Tsoumakas, William Groves, and Ioannis Vlahavas. Multi-target regression via input space expansion: treating targets as inputs. Machine Learning, 104(1):55–98, 2016.
* [19] Sebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017.
* [20] Rich Caruana. Multitask learning: A knowledge-based source of inductive bias. In Proceedings of ICML, 1993.
* [21] M Sklar. Fonctions de repartition an dimensions et leurs marges. Publ. inst. statist. univ. Paris, 8:229–231, 1959.
* [22] Maurice Fréchet. Sur les tableaux de corrélation dont les marges sont données. Ann. Univ. Lyon, 3e série, Sciences, Sect. A, 14:53–77, 1951.
* [23] Wassilij Höffding. Masstabinvariante korrelationstheorie. Schriften des Mathematischen Instituts und Instituts fur Angewandte Mathematik der Universitat Berlin, 5:181–233, 1940.
* [24] Wassily Höffding. Masstabinvariante korrelationsmasse für diskontinuierliche verteilungen. Archiv für mathematische Wirtschafts-und Sozialforschung, 7:49–70, 1941.
* [25] Wassily Höffding. Scale—invariant correlation theory. In The collected works of Wassily Höffding, pages 57–107. Springer, 1994.
* [26] Paul Embrechts, Alexander McNeil, and Daniel Straumann. Correlation and dependence in risk management: properties and pitfalls. Risk management: value at risk and beyond, 1:176–223, 2002.
* [27] Anne-Catherine Favre, Salaheddine El Adlouni, Luc Perreault, Nathalie Thiémonge, and Bernard Bobée. Multivariate hydrological frequency analysis using copulas. Water resources research, 40(1), 2004.
* [28] Aristidis K Nikoloulopoulos and Dimitris Karlis. Multivariate logit copula model with an application to dental data. Statistics in Medicine, 27(30):6393–6406, 2008.
* [29] Weiwei Liu. Copula multi-label learning. In Advances in Neural Information Processing Systems, pages 6337–6346, 2019.
* [30] Alexander J McNeil, Rüdiger Frey, and Paul Embrechts. Quantitative risk management: concepts, techniques and tools-revised edition. Princeton university press, 2015.
* [31] Thorsten Schmidt. Coping with copulas. In Copulas: From Theory to Application in Finance, pages 3–34, 2007.
* [32] Emil Julius Gumbel. Distributions des valeurs extremes en plusiers dimensions. Publ. Inst. Statist. Univ. Paris, 9:171–173, 1960.
* [33] Christian Genest and Louis-Paul Rivest. Statistical inference procedures for bivariate archimedean copulas. Journal of the American statistical Association, 88(423):1034–1043, 1993.
* [34] Maurice J Frank. On the simultaneous associativity off (x, y) andx+y- f (x, y). Aequationes mathematicae, 19(1):194–226, 1979.
* [35] Marius Hofert, Ivan Kojadinovic, Martin Mächler, and Jun Yan. Elements of copula modeling with R. Springer, 2019.
* [36] Ludger Ruschendorf. Asymptotic distributions of multivariate rank order statistics. The Annals of Statistics, pages 912–923, 1976.
* [37] Frederik Hendrik Ruymgaart. Asymptotic theory of rank tests for independence. MC Tracts, 1978.
* [38] Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. In Advances in neural information processing systems, pages 971–980, 2017.
* [39] Fang Zhou, Q Claire, and Ross D King. Predicting the geographical origin of music. In 2014 IEEE International Conference on Data Mining, pages 1115–1120. IEEE, 2014.
* [40] Joaquín Torres-Sospedra, Raúl Montoliu, Adolfo Martínez-Usó, Joan P Avariento, Tomás J Arnau, Mauri Benedito-Bordonau, and Joaquín Huerta. Ujiindoorloc: A new multi-building and multi-floor database for wlan fingerprint-based indoor localization problems. In 2014 international conference on indoor positioning and indoor navigation (IPIN), pages 261–270. IEEE, 2014.
* [41] Grigorios Tsoumakas, Eleftherios Spyromitros-Xioufis, Jozef Vilcek, and Ioannis Vlahavas. Mulan: A java library for multi-label learning. Journal of Machine Learning Research, 12(71):2411–2414, 2011.
* [42] Cedric Nugteren and Valeriu Codreanu. Cltune: A generic auto-tuner for opencl kernels. In 2015 IEEE 9th International Symposium on Embedded Multicore/Many-core Systems-on-Chip, pages 195–202. IEEE, 2015.
* [43] Harry Joe and Dorota Kurowicka. Dependence modeling: vine copula handbook. World Scientific, 2011.
## Appendix A Validity and efficiency figures
This appendix contains the figures for empirical validity and hyper-rectangle
median volume for all remaining data sets.
(a) Empirical validity
(b) Hyper-rectangle median volume
Figure 3: Results for indoor loc.
(a) Empirical validity
(b) Hyper-rectangle median volume
Figure 4: Results for scpf.
(a) Empirical validity
(b) Hyper-rectangle median volume
Figure 5: Results for rf1.
(a) Empirical validity
(b) Hyper-rectangle median volume
Figure 6: Results for rf2.
(a) Empirical validity
(b) Hyper-rectangle median volume
Figure 7: Results for scm1d.
(a) Empirical validity
(b) Hyper-rectangle median volume
Figure 8: Results for scm20d.
# Multigrid as an exact solver
Adem Kaya, <EMAIL_ADDRESS>, Institut für Mathematik, Universität Potsdam,
Karl-Liebknecht-Str. 24-25, 14476 Potsdam/Golm, Germany
###### Abstract
We provide an alternative Fourier analysis for multigrid applied to the
Poisson problem in 1D, based on an explicit derivation of the spectrum of the
iteration matrix. The new Fourier analysis has advantages over the existing
one: it is easy to understand and enables us to write the error equation in
terms of the eigenvectors of the stiffness matrix. We show that when
weighted-Jacobi is used as a smoother with two different weights, multigrid is
an exact solver.
###### keywords:
Multigrid, Fourier analysis, smoother, weighted-Jacobi
## 1 Introduction
We consider the Poisson problem with Dirichlet boundary conditions given by
$\displaystyle\left\\{\begin{array}[]{ll}-u^{\prime\prime}(x)=f(x),\qquad
0<x<1,\\\ u(0)=u(1)=0.\end{array}\right.$ (3)
The domain of the problem is partitioned into $n-1$ uniform subintervals using
the grid points $x_{j}=jh$ where $h=1/(n-1)$ is the grid size, and $n$ is an
odd integer. Discretizing the Poisson problem (3) with the central finite
difference scheme gives the linear system of equations
$A\mathbf{u}=\mathbf{f}$
where
$A=1/h^{2}\text{tridiag}\left(-1,2,-1\right)\in\mathbb{R}^{(n-2)\times(n-2)}$.
The matrix $A$ assumes the eigenvectors
$\mathbf{v}_{k}^{j}=\sin\left(\frac{jk\pi}{n-1}\right)$ with the corresponding
eigenvalues
$\lambda_{k}(A)=\frac{4}{h^{2}}\sin^{2}\left(\frac{k\pi}{2(n-1)}\right)$ for
$k=1,...,n-2$. Note that $\mathbf{v}_{k}^{j}$ represents the $j$-th entry of
the eigenvector $\mathbf{v}_{k}$.
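These eigenpairs are easy to check numerically; a small sketch assuming NumPy (variable names ours):

```python
import numpy as np

n = 9  # odd, as assumed in the text
h = 1.0 / (n - 1)
size = n - 2
# A = (1/h^2) tridiag(-1, 2, -1), acting on the interior grid points
A = (2.0 * np.eye(size) - np.eye(size, k=1) - np.eye(size, k=-1)) / h**2

k = 3
j = np.arange(1, n - 1)
v_k = np.sin(j * k * np.pi / (n - 1))
lam_k = 4.0 / h**2 * np.sin(k * np.pi / (2 * (n - 1)))**2
# v_k is an eigenvector of A with eigenvalue lam_k
assert np.allclose(A @ v_k, lam_k * v_k)
```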
## 2 Smoother
We use weighted-Jacobi relaxation as a smoother. Let $a_{i,j}$,
$i,j=1,...,n-2$, represent the entries of $A$. We split the matrix $A$ as
follows.
$A=D+K$
where $D$ is the diagonal matrix with entries $d_{i,i}=a_{i,i}$,
($i=1,...,n-2$). Weighted-Jacobi is defined as
$\mathbf{x}^{k+1}=(I-\omega D^{-1}A)\mathbf{x}^{k}+\omega D^{-1}\mathbf{f},$
(4)
where $\omega$ is the weight to be determined. There is no restriction on
$\omega$ because it is not necessary for a smoother to be convergent for a
multigrid method to be convergent. Let $R_{J}^{\omega}$ represent the
iteration matrix of the weighted-Jacobi iteration given in (4):
$R_{J}^{\omega}\equiv I-\omega D^{-1}A.$
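A single sweep of this iteration is tiny in code; a minimal NumPy sketch (function name ours):

```python
import numpy as np

def weighted_jacobi_sweep(A, x, f, omega):
    # x <- (I - omega D^{-1} A) x + omega D^{-1} f,
    # written equivalently as x + omega D^{-1} (f - A x).
    return x + omega * (f - A @ x) / np.diag(A)
```

Note that the exact solution of $A\mathbf{u}=\mathbf{f}$ is a fixed point of the sweep for any $\omega$.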
We can apply the weighted-Jacobi more than once with different $\omega$’s to
further accelerate the convergence. To this end, we use the following
notation.
$S_{J}^{m}\equiv S_{J}(\omega_{1},...,\omega_{m})\equiv
R_{J}^{\omega_{1}}....R_{J}^{\omega_{m}},$
which means that the weighted-Jacobi is applied $m$ times with the parameters
$\omega_{i}$, $i=1,...,m$. Note that the matrix $S_{J}^{m}$ assumes the same
eigenvectors as the matrix $A$. The best way to find the optimal weights of
the weighted-Jacobi method as a smoother is spectral analysis. To this
end, we carry out a spectral analysis based on an explicit derivation of the
spectrum of the two-grid iteration matrix.
## 3 Other elements of the multigrid method and derivation of the spectrum of
the iteration matrix of the two-grid
In this section, we introduce interpolation and restriction operators and show
some equalities related to them which are necessary to obtain the spectrum of
the iteration matrix of the two-grid. We start by setting $A=A^{h}$,
$\mathbf{u}=\mathbf{u}^{h}$ and $\mathbf{f}=\mathbf{f}^{h}$, where the
superscript $h$, corresponding to the grid size $h$, stands for the fine
grid. The two-grid iteration matrix with only pre-smoothing by damped Jacobi
relaxation is given by [2]
$\displaystyle R^{TG}=(I-I_{2h}^{h}(A^{2h})^{-1}I_{h}^{2h}A^{h})S_{J}^{m}.$
(5)
Our aim is to find the spectrum of $R^{TG}$. For ease of analysis, we
assumed that $n$ is an odd integer. In Equation (5), the
prolongation (interpolation) operator $I_{2h}^{h}$ is the linear interpolation
which has the matrix form
$I_{2h}^{h}=\frac{1}{2}\begin{bmatrix}1&&&\\ 2&&&\\ 1&1&&\\ &2&&\\ &1&\ddots&\\ &&\ddots&1\\ &&&2\\ &&&1\end{bmatrix}\in\mathbb{R}^{(n-2)\times(n-3)/2}$
and restriction operator $I_{h}^{2h}$ is the transpose of the prolongation
operator
$\displaystyle I_{h}^{2h}=\left(I_{2h}^{h}\right)^{T}.$
Coarse grid matrix $A^{2h}$ is defined by Galerkin projection
$\displaystyle A^{2h}=I_{h}^{2h}A^{h}I_{2h}^{h}.$ (6)
From the above definition, it is easy to show that $A^{2h}$ is also symmetric.
We apply only pre-smoothing and do not apply post-smoothing.
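The Galerkin construction (6) can be checked numerically. A sketch assuming NumPy (function names ours); consistent with (20) below, the Galerkin coarse matrix comes out as a constant multiple, namely twice, of the rediscretized operator on the $2h$ grid:

```python
import numpy as np

def poisson_matrix(num_points, h):
    # (1/h^2) tridiag(-1, 2, -1) of the given interior size
    return (2.0 * np.eye(num_points) - np.eye(num_points, k=1)
            - np.eye(num_points, k=-1)) / h**2

def prolongation(n):
    # Linear interpolation I_{2h}^h of size (n-2) x (n-3)/2, n odd:
    # each coarse value is spread to fine neighbours with weights 1/2, 1, 1/2.
    P = np.zeros((n - 2, (n - 3) // 2))
    for c in range(P.shape[1]):
        P[2 * c, c] = 0.5
        P[2 * c + 1, c] = 1.0
        P[2 * c + 2, c] = 0.5
    return P

n = 9
h = 1.0 / (n - 1)
Ah = poisson_matrix(n - 2, h)
P = prolongation(n)
A2h = P.T @ Ah @ P  # Galerkin projection (6)
# A2h equals 2 times the rediscretized operator on the 2h grid
assert np.allclose(A2h, 2.0 * poisson_matrix((n - 3) // 2, 2 * h))
```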
The prolongation operator $I_{2h}^{h}$ satisfies
$\displaystyle
I_{2h}^{h}\mathbf{v}_{k}^{2h}=\cos^{2}\left(\frac{k\pi}{2(n-1)}\right)\mathbf{v}_{k}^{h}-\sin^{2}\left(\frac{k\pi}{2(n-1)}\right)\mathbf{v}_{n-1-k}^{h},\qquad
1\leq k\leq\frac{n-3}{2}$ (7)
where
$\displaystyle\mathbf{v}_{k,j}^{2h}=\sin\left(\frac{2jk\pi}{n-1}\right),\qquad
1\leq j\leq\frac{n-3}{2}.$ (8)
The restriction operator $I_{h}^{2h}$ which is the transpose of the
prolongation operator has the following properties.
$\displaystyle
I_{h}^{2h}\mathbf{v}_{k}^{h}=2\cos^{2}\left(\frac{k\pi}{2(n-1)}\right)\mathbf{v}_{k}^{2h}\quad\textmd{for}\quad
1\leq k\leq\frac{n-3}{2}$ (9)
and
$\displaystyle
I_{h}^{2h}\mathbf{v}_{n-1-k}^{h}=-2\sin^{2}\left(\frac{k\pi}{2(n-1)}\right)\mathbf{v}_{k}^{2h}\quad\textmd{for}\quad
1\leq k\leq\frac{n-3}{2}.$ (10)
As we stated before, the coarse grid matrix is obtained by Galerkin
projection. That is,
$\displaystyle A^{2h}=I_{h}^{2h}A^{h}I_{2h}^{h}.$
Using this definition and properties of the restriction and prolongation
operators in (7), (9) and (10), we obtain the spectrum of the coarse matrix
$A^{2h}$.
$\displaystyle
A^{2h}\mathbf{v}_{k}^{2h}=\left(2\lambda_{k}(A^{h})\cos^{4}(k\pi
h/2)+2\lambda_{n-1-k}(A^{h})\sin^{4}(k\pi
h/2)\right)\mathbf{v}_{k}^{2h},\qquad 1\leq k\leq\frac{n-3}{2}.$ (11)
From the above observations, it is very reasonable to expect that the
eigenvectors of the matrix $R^{TG}$ are linear combinations of
$\mathbf{v}_{k}$ and $\mathbf{v}_{n-1-k}$. We assume that
$\mathbf{b}_{k}=\mathbf{v}_{k}+c\mathbf{v}_{n-1-k}$ is an eigenvector of the
matrix $R^{TG}$, where $c$ is to be determined. Inserting $\mathbf{b}_{k}$ into
the definition of $R^{TG}$ in (5) and using the properties of the smoother,
the prolongation and restriction operators and the coarse grid matrix, we end up with
$\displaystyle
R^{TG}\mathbf{b}_{k}=\mathbf{v}_{k}\left(\lambda_{k}(S_{j}^{m})-\frac{2\cos^{4}(k\pi
h/2)\lambda_{k}(A^{h})\lambda_{k}(S_{j}^{m})}{\lambda_{k}(A^{2h})}+c\frac{2\sin^{2}(k\pi
h/2)\cos^{2}(k\pi
h/2)\lambda_{n-1-k}(A^{h})\lambda_{n-1-k}(S_{j}^{m})}{\lambda_{k}(A^{2h})}\right)$
$\displaystyle+c\mathbf{v}_{n-1-k}\left(\lambda_{n-1-k}(S_{j}^{m})-\frac{2\sin^{4}(k\pi
h/2)\lambda_{n-1-k}(A^{h})\lambda_{n-1-k}(S_{j}^{m})}{\lambda_{k}(A^{2h})}+\frac{1}{c}\frac{2\sin^{2}(k\pi
h/2)\cos^{2}(k\pi
h/2)\lambda_{k}(A^{h})\lambda_{k}(S_{j}^{m})}{\lambda_{k}(A^{2h})}\right).$
Using $\lambda_{k}(A^{2h})$ given in (11) and equating the coefficients of
$\mathbf{v}_{k}$ and $c\mathbf{v}_{n-1-k}$ in above equation, we get the
following quadratic equation.
$\displaystyle c^{2}\left(2\sin^{2}(k\pi h/2)\cos^{2}(k\pi
h/2)\lambda_{n-1-k}(A^{h})\lambda_{n-1-k}(S_{j}^{m})\right)$
$\displaystyle+c\left(2\sin^{4}(k\pi
h/2)\lambda_{n-1-k}(A^{h})\lambda_{k}(S_{j}^{m})-2\cos^{4}(k\pi
h/2)\lambda_{k}(A^{h})\lambda_{n-1-k}(S_{j}^{m})\right)$
$\displaystyle-2\sin^{2}(k\pi h/2)\cos^{2}(k\pi
h/2)\lambda_{k}(A^{h})\lambda_{k}(S_{j}^{m})=0.$
Solving the above equation for $c$, we obtain
$\displaystyle c_{1}=\frac{\cos^{2}(k\pi h/2)\lambda_{k}(A^{h})}{\sin^{2}(k\pi
h/2)\lambda_{n-1-k}(A^{h})}$ (12)
and
$\displaystyle c_{2}=-\frac{\sin^{2}(k\pi
h/2)\lambda_{k}(S_{j}^{m})}{\cos^{2}(k\pi h/2)\lambda_{n-1-k}(S_{j}^{m})}.$
(13)
Note that the eigenvalues associated with $c_{2}$ are all zero. More precisely, the
two-grid iteration matrix $R^{TG}$ has the eigenvectors
$\displaystyle\mathbf{b}_{k}=\left\\{\begin{array}[]{ll}\mathbf{v}_{k}+c_{1}\mathbf{v}_{n-1-k},\qquad
1\leq k\leq\frac{n-1}{2}\\\
\mathbf{v}_{k}+c_{2}\mathbf{v}_{n-1-k},\qquad\frac{n-1}{2}<k\leq
n-2\end{array}\right.$ (16)
with the corresponding eigenvalues
$\displaystyle\lambda_{k}(R^{TG})=\left\\{\begin{array}[]{ll}\frac{\sin^{4}(k\pi
h/2)\lambda_{n-1-k}(A^{h})\lambda_{k}(S_{j}^{m})+\cos^{4}(k\pi
h/2)\lambda_{k}(A^{h})\lambda_{n-1-k}(S_{j}^{m})}{\lambda_{k}(A^{2h})},\qquad
1\leq k\leq\frac{n-1}{2},\\\ 0,\qquad\frac{n-1}{2}<k\leq
n-2.\end{array}\right.$ (19)
Note that the coarse grid matrix $A^{2h}$ obtained by Galerkin
projection is just a constant multiple of the matrix obtained by
rediscretizing the problem (3) on the coarse grid. Since
$\lambda_{k}(A^{h})=\frac{4}{h^{2}}\sin^{2}(k\pi h/2)$ and
$\lambda_{n-1-k}(A^{h})=\frac{4}{h^{2}}\cos^{2}(k\pi h/2)$, Equation (11)
reduces to
$\displaystyle A^{2h}\mathbf{v}_{k}^{2h}=\left(\frac{8}{h^{2}}\sin^{2}(k\pi
h/2)\cos^{2}(k\pi h/2)\right)\mathbf{v}_{k}^{2h}=\frac{2}{h^{2}}\sin^{2}(k\pi
h)\mathbf{v}_{k}^{2h},\qquad 1\leq k\leq\frac{n-3}{2}$ (20)
where $\mathbf{v}_{k}^{2h}$ is given in (8). Using the explicit expressions of
$\lambda_{k}(A^{h})$ and $\lambda_{n-1-k}(A^{h})$, it is easy to show that
$c_{1}$ given in (12) is equal to one: indeed,
$c_{1}=\frac{\cos^{2}(k\pi h/2)\cdot\frac{4}{h^{2}}\sin^{2}(k\pi h/2)}{\sin^{2}(k\pi h/2)\cdot\frac{4}{h^{2}}\cos^{2}(k\pi h/2)}=1$.
Hence, in a more compact form, $R^{TG}$ has the eigenvectors
$\displaystyle\mathbf{b}_{k}=\left\\{\begin{array}[]{ll}\mathbf{v}_{k}+\mathbf{v}_{n-1-k},\qquad
1\leq k\leq\frac{n-1}{2},\\\
\mathbf{v}_{k}+c_{2}\mathbf{v}_{n-1-k},\qquad\frac{n-1}{2}<k\leq
n-2\end{array}\right.$
with the corresponding eigenvalues
$\displaystyle\lambda_{k}(R^{TG})=\left\\{\begin{array}[]{ll}\lambda_{k}(S_{j}^{m})\sin^{2}(k\pi
h/2)+\lambda_{n-1-k}(S_{j}^{m})\cos^{2}(k\pi h/2),\qquad 1\leq
k\leq\frac{n-1}{2},\\\ 0,\qquad\frac{n-1}{2}<k\leq n-2\end{array}\right.$ (24)
where $c_{2}$ is given in (13).
If we apply only one pre-smoothing step with weighted Jacobi, that is, for
$S_{j}^{1}$, the nonzero eigenvalues of $R^{TG}$ become
$\displaystyle\lambda_{k}(R^{TG})=1-2\omega\left(\sin^{4}(k\pi
h/2)+\cos^{4}(k\pi h/2)\right),\qquad 1\leq k\leq\frac{n-1}{2}.$
Note that $\sin^{4}(k\pi h/2)+\cos^{4}(k\pi h/2)$ attains its minimum value
$0.5$ at $k=(n-1)/2$ and its maximum value at $k=1$. The maximum value is
strictly less than one but very close to it; assuming it equals one, we find
the optimal $\omega$. To minimize the spectral radius of $R^{TG}$ we set
$\displaystyle 1-2\omega\left(\sin^{4}(k\pi h/2)+\cos^{4}(k\pi
h/2)\right)|_{k=1}=-1+2\omega\left(\sin^{4}(k\pi h/2)+\cos^{4}(k\pi
h/2)\right)|_{k=(n-1)/2}.$
Solving the above equation (under the assumption $\left(\sin^{4}(k\pi
h/2)+\cos^{4}(k\pi h/2)\right)|_{k=1}=1$), we get $\omega=2/3$. This value
coincides with the value proposed in [1, 2]; the difference is that our
derivation is entirely algebraic. Furthermore, for $\omega=\frac{2}{3}$,
$\rho(R^{TG})=1/3$.
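These values are easy to check numerically. The following sketch (assuming NumPy, with the grid spacing $h=1/(n-1)$ that matches the eigenvalue expressions above) evaluates the nonzero eigenvalues $1-2\omega\left(\sin^{4}(k\pi h/2)+\cos^{4}(k\pi h/2)\right)$ for $\omega=2/3$ and confirms that the spectral radius equals $1/3$:

```python
import numpy as np

n = 33
h = 1.0 / (n - 1)                     # grid spacing, chosen so that (n-1)h = 1
k = np.arange(1, (n - 1) // 2 + 1)    # indices of the nonzero eigenvalues
s4 = np.sin(k * np.pi * h / 2)**4 + np.cos(k * np.pi * h / 2)**4

omega = 2.0 / 3.0
lam = 1.0 - 2.0 * omega * s4          # nonzero eigenvalues of R^TG for m = 1
rho = np.max(np.abs(lam))             # spectral radius
```

The maximum of $|\lambda_{k}|$ is attained at $k=(n-1)/2$, where $s_{4}=1/2$ and hence $\lambda_{k}=1/3$ exactly.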
This algebraic derivation also justifies the following classification: the
Fourier modes (eigenvectors) $\mathbf{v}_{k}$ in the range
$1\leq k<\frac{n-1}{2}$ are called low-frequency or smooth modes, and the
Fourier modes in the range $\frac{n-1}{2}\leq k\leq n-1$ are called
high-frequency or oscillatory modes.
We now consider the case $m=2$, in which pre-smoothing is applied twice with
weighted Jacobi, possibly with different $\omega$’s. This case has received
little attention in the literature: when more than one pre-smoothing step is
applied, the weight found to be optimal for a single step is generally reused.
In this case ($m=2$), the nonzero eigenvalues of $R^{TG}$
are given by
$\displaystyle\lambda_{k}(R^{TG})=1-2(\omega_{1}+\omega_{2})\left(\sin^{4}(k\pi
h/2)+\cos^{4}(k\pi h/2)\right)+4\omega_{1}\omega_{2}\left(\sin^{6}(k\pi
h/2)+\cos^{6}(k\pi h/2)\right),\qquad 1\leq k\leq\frac{n-1}{2}.$ (25)
###### Lemma 1
The following equality holds
$\displaystyle
3(\sin^{4}(x)+\cos^{4}(x))-2(\sin^{6}(x)+\cos^{6}(x))=1,\qquad\text{for
all}\quad x\in\mathbb{R}.$
###### Proof 1
$\displaystyle
3(\sin^{4}(x)+\cos^{4}(x))-2(\sin^{6}(x)+\cos^{6}(x))=3(\sin^{4}(x)+\cos^{4}(x))-2(\sin^{2}(x)$
$\displaystyle+\cos^{2}(x))(\sin^{4}(x)-\sin^{2}(x)\cos^{2}(x)+\cos^{4}(x))=\sin^{4}(x)+\cos^{4}(x)+2\sin^{2}(x)\cos^{2}(x)$
$\displaystyle=(\sin^{2}(x)+\cos^{2}(x))^{2}=1.$
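The identity of Lemma 1 can also be verified numerically; the following sketch (assuming NumPy) checks it on a sample of points:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)
s, c = np.sin(x), np.cos(x)
lhs = 3 * (s**4 + c**4) - 2 * (s**6 + c**6)   # left-hand side of Lemma 1
```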
First, let us observe what happens if we apply weighted Jacobi twice with the
optimal weight found for $m=1$. Substituting
$\omega_{1}=\omega_{2}=\frac{2}{3}$ into (25), we get
$\lambda_{k}(R^{TG})=1-\frac{8}{9}=0.\overline{1}$ for all $k$; that is,
$\rho(R^{TG})=0.\overline{1}$. We now look for different $\omega$’s for which
the spectral radius of $R^{TG}$ is reduced further. By Lemma 1, for the
choices $\omega_{1}=1$ and $\omega_{2}=\frac{1}{2}$, the eigenvalues in (25)
become all zero. This means that all eigenvalues of $R^{TG}$ are zero. In
other words, two-grid is an exact solver with only one iteration. Moreover,
since the coarse matrix obtained by Galerkin projection is just a constant
multiple of the rediscretized matrix on the coarse grid, multigrid is also an exact
solver with only one iteration. Eigenvalues of the smoothers
$S_{j}(\frac{2}{3},\frac{2}{3})$ and $S_{j}(1,\frac{1}{2})$ for $n=33$ are
presented in Figure 1. Although the two-grid method with
$S_{j}(1,\frac{1}{2})$ is an exact solver, we see from Figure 1 that the
corresponding eigenvalues of the oscillatory modes are not zero. Furthermore,
the maximum eigenvalue in magnitude of $S_{j}(1,\frac{1}{2})$ in the
oscillatory region, $|\lambda_{21}(S_{j}(1,\frac{1}{2}))|=0.124581$, is
greater than the maximum eigenvalue in magnitude of
$S_{j}(\frac{2}{3},\frac{2}{3})$ in that region, which is
$\lambda_{31}(S_{j}(\frac{2}{3},\frac{2}{3}))=0.\overline{1}$.
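Both weight choices are immediate to check from (25). The sketch below (assuming NumPy) evaluates the nonzero eigenvalues of $R^{TG}$ and confirms that they all equal $1/9$ for $\omega_{1}=\omega_{2}=2/3$ and all vanish for $\omega_{1}=1$, $\omega_{2}=1/2$:

```python
import numpy as np

n = 33
h = 1.0 / (n - 1)
k = np.arange(1, (n - 1) // 2 + 1)
s4 = np.sin(k * np.pi * h / 2)**4 + np.cos(k * np.pi * h / 2)**4
s6 = np.sin(k * np.pi * h / 2)**6 + np.cos(k * np.pi * h / 2)**6

def lam_tg(w1, w2):
    # nonzero eigenvalues of R^TG in (25) for two weighted-Jacobi steps
    return 1 - 2 * (w1 + w2) * s4 + 4 * w1 * w2 * s6

lam_repeat = lam_tg(2 / 3, 2 / 3)   # reuse the m = 1 optimal weight twice
lam_exact  = lam_tg(1.0, 0.5)       # the choice that annihilates the spectrum
```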
Figure 1: Eigenvalues of weighted Jacobi with two steps, $S_{j}^{2}$, for
different $\omega$’s and $n=33$. The eigenvalues are plotted as continuous
functions of $k$.
## 4 The error equation
Since we have explicit expressions for the eigenvalues of the two-grid
iteration matrix $R^{TG}$ in (19) and of the corresponding eigenvectors in
(16), which are linear combination of the eigenvectors of $A$, we can see
which modes are damped more rapidly. To this end, we write the error
$\mathbf{e}$ in terms of the eigenvectors of $R^{TG}$.
$\displaystyle\mathbf{e}=\sum_{k=1}^{n-2}d_{k}\mathbf{b}_{k}=\sum_{k=1}^{(n-1)/2}d_{k}(\mathbf{v}_{k}+c_{1}(k)\mathbf{v}_{n-1-k})+\sum_{k=(n+1)/2}^{n-2}d_{k}(\mathbf{v}_{k}+c_{2}(k)\mathbf{v}_{n-1-k})$
where the $d_{k}$ are arbitrary constants and $c_{1}=c_{1}(k)$,
$c_{2}=c_{2}(k)$ are given in (12) and (13), respectively. Since
$\lambda_{k}(R^{TG})=0$ for $k=(n+1)/2,\ldots,n-2$, after $m$ iterations the
error becomes
$\displaystyle\mathbf{e}^{m}=\sum_{k=1}^{(n-1)/2}d_{k}\lambda_{k}^{m}(R^{TG})(\mathbf{v}_{k}+c_{1}(k)\mathbf{v}_{n-1-k})=\sum_{k=1}^{(n-1)/2}d_{k}\lambda_{k}^{m}(R^{TG})\mathbf{v}_{k}+\sum_{k=(n-1)/2}^{n-2}l_{k}\lambda_{n-1-k}^{m}(R^{TG})c_{1}(n-1-k)\mathbf{v}_{k}$
where $\mathbf{e}^{m}$ stands for the error after $m$ iterations and the
$l_{k}$ are constants. On the right-hand side of the above equation, the first
sum contains the smooth modes and the second sum the oscillatory modes. The
eigenvalue $\lambda_{1}(R^{TG})$ is associated with both the smoothest and the
most oscillatory mode. The eigenvalue $\lambda_{(n-1)/2}(R^{TG})$ is
associated only with the eigenvector $\mathbf{v}_{(n-1)/2}$.
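Since the whole spectrum of $R^{TG}$ vanishes for $\omega_{1}=1$, $\omega_{2}=\frac{1}{2}$, and $R^{TG}$ leaves each pair $\{\mathbf{v}_{k},\mathbf{v}_{n-1-k}\}$ invariant, at most two two-grid cycles annihilate any initial error. The sketch below (assuming NumPy; the linear-interpolation prolongation, full-weighting restriction and Galerkin coarse matrix are our reconstruction of the operators referenced in (5), (7), (9) and (10)) checks this on the 1D Poisson matrix:

```python
import numpy as np

n = 33
N = n - 2                              # number of fine-grid unknowns
h = 1.0 / (n - 1)
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

Nc = (n - 3) // 2                      # number of coarse-grid unknowns
P = np.zeros((N, Nc))                  # linear interpolation
for jc in range(Nc):
    fi = 2 * jc + 1                    # coarse point jc+1 sits at fine point 2(jc+1)
    P[fi - 1, jc] = 0.5
    P[fi, jc] = 1.0
    P[fi + 1, jc] = 0.5
R = 0.5 * P.T                          # full-weighting restriction
A2h = R @ A @ P                        # Galerkin coarse-grid matrix

def jacobi(x, b, w):
    """One weighted-Jacobi step; diag(A) = (2/h^2) I."""
    return x + w * (h**2 / 2.0) * (b - A @ x)

def two_grid(x, b):
    x = jacobi(x, b, 1.0)              # pre-smoothing, omega_1 = 1
    x = jacobi(x, b, 0.5)              # pre-smoothing, omega_2 = 1/2
    r = b - A @ x
    return x + P @ np.linalg.solve(A2h, R @ r)   # coarse-grid correction

rng = np.random.default_rng(0)
x_true = rng.standard_normal(N)
b = A @ x_true
x = two_grid(np.zeros(N), b)
x = two_grid(x, b)
err = np.linalg.norm(x - x_true)       # error after two cycles
```

The coarse-grid correction is invariant under a rescaling of $R$, so the conclusion does not depend on the normalization of the restriction operator.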
## 5 Conclusion
In this work, we provided an alternative, purely algebraic Fourier analysis
for multigrid applied to the Poisson problem in 1D, and related multigrid to
an exact solver.
Note: This work is not going to be submitted to any journal. It is free to
download and disseminate.
## References
* [1] W Briggs, V Henson, and S McCormick. A Multigrid Tutorial, Second Edition. Society for Industrial and Applied Mathematics, second edition, 2000.
* [2] Wolfgang Hackbusch. Multi-Grid Methods and Applications. Springer-Verlag Berlin Heidelberg, 1 edition, 1985.
# Pure gravity traveling quasi-periodic
water waves with constant vorticity
M. Berti111 SISSA, Via Bonomea 265, 34136, Trieste, Italy. Email:
<EMAIL_ADDRESS>, L. Franzoi222 NYUAD Research Institute, New York University
Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates. Email:
<EMAIL_ADDRESS>, A. Maspero333 SISSA, Via Bonomea 265, 34136, Trieste, Italy.
Email<EMAIL_ADDRESS>
Abstract. We prove the existence of small amplitude time quasi-periodic
solutions of the pure gravity water waves equations with constant vorticity,
for a bidimensional fluid over a flat bottom delimited by a space periodic
free interface. Using a Nash-Moser implicit function iterative scheme we
construct traveling nonlinear waves which pass through each other slightly
deforming and retaining forever a quasiperiodic structure. These solutions
exist for any fixed value of depth and gravity and restricting the vorticity
parameter to a Borel set of asymptotically full Lebesgue measure.
Keywords: Traveling waves, Water waves, vorticity, KAM for PDEs, quasi-
periodic solutions.
MSC 2010: 76B15, 37K55, 35C07, (37K50, 35S05).
###### Contents
1. 1 Introduction
2. 2 Hamiltonian structure and linearization at the origin
1. 2.1 Linearization at the equilibrium
2. 2.2 Tangential and normal subspaces of the phase space
3. 3 Functional setting
1. 3.1 Pseudodifferential calculus
2. 3.2 ${\mathcal{D}}^{k_{0}}$-tame and $(-\tfrac{1}{2})$-modulo-tame operators
3. 3.3 Hamiltonian, Reversible and Momentum preserving operators
4. 4 Transversality of linear frequencies
5. 5 Proof of Theorem 1.2
1. 5.1 Nash-Moser theorem of hypothetical conjugation
2. 5.2 Measure estimates: proof of Theorem 1.2
6. 6 Approximate inverse
7. 7 The linearized operator in the normal subspace
1. 7.1 Linearized good unknown of Alinhac
2. 7.2 Almost-straightening of the first order transport operator
3. 7.3 Symmetrization of the order $1/2$
4. 7.4 Symmetrization up to smoothing remainders
5. 7.5 Reduction of the order 1/2
6. 7.6 Reduction of the order 0
7. 7.7 Conclusion: reduction of ${\mathcal{L}}_{\omega}$
8. 8 Almost-diagonalization and invertibility of ${\mathcal{L}}_{\omega}$
9. 9 Proof of Theorem 5.1
10. A Almost straightening of a transport operator
## 1 Introduction
A problem of fundamental importance in fluid mechanics regards the search for
traveling surface waves. Since the pioneering work of Stokes [33] in 1847, a
huge literature has established the existence of steady traveling waves,
namely solutions (either periodic or localized in space) which look stationary
in a moving frame. The majority of the results concern bidimensional fluids.
At the end of the section we shortly report on the vast literature on this
problem.
In the recent work [7] we proved the first bifurcation result of time quasi-
periodic traveling solutions of the water waves equations under the effects of
gravity and constant vorticity, exploiting the capillarity effects at the
free surface. These solutions cannot be reduced to steady solutions in any
moving frame. For pure gravity irrotational water waves in infinite depth,
quasi-periodic traveling waves have been obtained by Feola-Giuliani [16].
The goal of this paper is to prove the existence of time quasi-periodic
traveling water waves, also in the physically important case of the pure
gravity equations with non zero constant vorticity, for any value of the depth
of the water, finite or infinite. In this work we are able to use the
vorticity as a parameter: the solutions that we construct exist for any value
of gravity and depth of the fluid, provided the vorticity is restricted to a
Borel set of asymptotically full measure, see Theorem 1.2. We also remark
that, in case of non zero vorticity, one can not expect the bifurcation of
standing waves since they are not allowed by the linear theory.
It is well known that this is a subtle small divisor problem. Major
difficulties are that: ($i$) the vorticity parameter enters the dispersion
relation only at the zero order; ($ii$) there are resonances among the linear
frequencies which can be avoided only for traveling waves; ($iii$) the
dispersion relation of the pure gravity equations is sublinear at infinity;
($iv$) the nonlinear transport term is a singular perturbation of the
unperturbed linear water waves vector field. Related difficulties appear in
the search of pure gravity time periodic standing waves which have been
constructed in the last years for irrotational fluids by Iooss, Plotnikov,
Toland [31, 24, 21], extended to time quasi-periodic standing waves solutions
in Baldi-Berti-Haus-Montalto [2]. In presence of surface tension, time
periodic standing waves solutions were constructed by Alazard-Baldi [1],
extended to time quasi-periodic solutions by Berti-Montalto [9]. We mention
that the construction of steady gravity traveling waves periodic in space also
presents small divisor difficulties for three-dimensional fluids. These
solutions, which in a moving frame look like steady bi-periodic waves, have been
constructed for irrotational fluids by Iooss-Plotnikov [22, 23] using the
speed as a bidimensional parameter (for the capillary waves in [13] this is not
a small divisor problem).
We now recall the pure gravity water waves equations with constant vorticity.
#### The water waves equations.
We consider the Euler equations of hydrodynamics for a 2-dimensional
incompressible and inviscid fluid with constant vorticity $\gamma$, under the
action of pure gravity. The fluid occupies the region
${\mathcal{D}}_{\eta,{\mathtt{h}}}:=\big{\\{}(x,y)\in{\mathbb{T}}\times{\mathbb{R}}\
:\
-{\mathtt{h}}<y<\eta(t,x)\big{\\}}\,,\quad{\mathbb{T}}:={\mathbb{T}}_{x}:={\mathbb{R}}/(2\pi{\mathbb{Z}})\,,$
(1.1)
with a (possibly infinite) depth ${\mathtt{h}}>0$ and space periodic boundary
conditions. The unknowns of the problem are the free surface $y=\eta(t,x)$ of
the time dependent domain ${\mathcal{D}}_{\eta,{\mathtt{h}}}$ and the
divergence free velocity field ${\bigl{(}\begin{smallmatrix}u(t,x,y)\\\
v(t,x,y)\end{smallmatrix}\bigr{)}}$. If the fluid has constant vorticity
$v_{x}-u_{y}=\gamma\,,$
the velocity field is the sum of the Couette flow
${\bigl{(}\begin{smallmatrix}-\gamma y\\\ 0\end{smallmatrix}\bigr{)}}$
(recently studied in [5], [38] and references therein), which carries all the
vorticity $\gamma$ of the fluid, and an irrotational field, expressed as the
gradient of a harmonic function $\Phi$, called the generalized velocity
potential. Denoting $\psi(t,x):=\Phi(t,x,\eta(t,x))$ the evaluation of the
generalized velocity potential at the free interface, one recovers $\Phi$ by
solving the elliptic problem
$\Delta\Phi=0\ \mbox{ in }{\mathcal{D}}_{\eta,{\mathtt{h}}}\,,\quad\Phi=\psi\
\mbox{ at }y=\eta(t,x)\,,\quad\Phi_{y}\to 0\ \mbox{ as }y\to-{\mathtt{h}}\,.$
(1.2)
The third condition in (1.2) expresses the impermeability of the bottom:
$\Phi_{y}(t,x,-{\mathtt{h}})=0$ if ${\mathtt{h}}<\infty$, and
$\lim\limits_{y\to-\infty}\Phi_{y}(t,x,y)=0$, if ${\mathtt{h}}=+\infty$.
Imposing that the fluid particles at the free surface remain on it along the
evolution (kinematic boundary condition), and that the pressure of the fluid
is equal to the constant atmospheric pressure at the free surface (dynamic
boundary condition), the time evolution of the fluid is determined by the
following system of equations
$\begin{cases}\eta_{t}=G(\eta)\psi+\gamma\eta\eta_{x}\\\
\displaystyle{\psi_{t}=-g\eta-\frac{\psi_{x}^{2}}{2}+\frac{(\eta_{x}\psi_{x}+G(\eta)\psi)^{2}}{2(1+\eta_{x}^{2})}+\gamma\eta\psi_{x}+\gamma\partial_{x}^{-1}G(\eta)\psi}\,.\end{cases}$
(1.3)
Here $g$ is the gravity and $G(\eta)$ is the Dirichlet-Neumann operator
$G(\eta)\psi:=G(\eta,{\mathtt{h}})\psi:=\sqrt{1+\eta_{x}^{2}}\,(\partial_{\vec{n}}\Phi)|_{y=\eta(x)}=(-\Phi_{x}\eta_{x}+\Phi_{y})|_{y=\eta(x)}\,.$
(1.4)
As observed in the irrotational case by Zakharov [40], and in the presence of
constant vorticity by Wahlén [37], the water waves equations (1.3) are the
Hamiltonian system
$\eta_{t}=\nabla_{\psi}H(\eta,\psi)\,,\quad\psi_{t}=(-\nabla_{\eta}+\gamma\partial_{x}^{-1}\nabla_{\psi})H(\eta,\psi)\,,$
(1.5)
where $\nabla$ denotes the $L^{2}$-gradient, with Hamiltonian
$H(\eta,\psi)=\frac{1}{2}\int_{{\mathbb{T}}}\Big{(}\psi\,G(\eta)\psi+g\eta^{2}\Big{)}\,{\rm
d}{x}+\frac{\gamma}{2}\int_{{\mathbb{T}}}\Big{(}-\psi_{x}\eta^{2}+\frac{\gamma}{3}\eta^{3}\Big{)}\,{\rm
d}{x}\,.$ (1.6)
For any value of the vorticity $\gamma\neq 0$, the system (1.5) is endowed
with a non canonical Poisson structure, discussed in detail in Section 2. The
equations (1.3) enjoy two important symmetries. First of all, they are time
reversible. We say that a solution of (1.3) is _reversible_ if
$\eta(-t,-x)=\eta(t,x)\,,\quad\psi(-t,-x)=-\psi(t,x)\,.$ (1.7)
Second, since the bottom of the fluid domain is flat, they are _invariant by
space translations_.
The variables $(\eta,\psi)$ of system (1.3) belong to some Sobolev space
$H^{s}_{0}({\mathbb{T}})\times\dot{H}^{s}({\mathbb{T}})$ for some $s$ large.
Here $H^{s}_{0}({\mathbb{T}})$, $s\in{\mathbb{R}}$, denotes the Sobolev space
of functions with zero average $H^{s}_{0}({\mathbb{T}}):=\big{\\{}u\in
H^{s}({\mathbb{T}})\ \colon\ \int_{\mathbb{T}}u(x){\rm d}x=0\big{\\}}$ and
$\dot{H}^{s}({\mathbb{T}})$, $s\in{\mathbb{R}}$, the corresponding homogeneous
Sobolev space, namely the quotient space obtained by identifying the functions
in $H^{s}({\mathbb{T}})$ which differ by a constant. This choice of the phase
space is allowed because $\int_{\mathbb{T}}\eta(t,x)\,{\rm d}{x}$ is a prime
integral of (1.3) and the right hand side of (1.3) depends only on $\eta$ and
$\psi-\frac{1}{2\pi}\int_{\mathbb{T}}\psi\,{\rm d}x$.
#### Linear water waves.
Linearizing (1.3) at the equilibrium $(\eta,\psi)=(0,0)$ gives the system
$\begin{cases}\partial_{t}\eta&=G(0)\psi\\\
\partial_{t}\psi&=-g\eta+\gamma\partial_{x}^{-1}G(0)\psi\,,\end{cases}$ (1.8)
where $G(0)$ is the Dirichlet-Neumann operator at the flat surface $\eta=0$. A
direct computation reveals that $G(0)$ is the Fourier multiplier operator
$G(0):=G(0,{\mathtt{h}})=\begin{cases}D\,\tanh({\mathtt{h}}D)&{\rm if}\
{\mathtt{h}}<\infty\\\ |D|&{\rm if}\
{\mathtt{h}}=+\infty\,,\end{cases}\qquad{\rm where}\qquad D:=\frac{1}{{\rm
i}}\partial_{x}\,,$ (1.9)
with symbol, for any $j\in{\mathbb{Z}}$,
$G_{j}(0):=G_{j}(0,{\mathtt{h}})=\begin{cases}j\tanh({\mathtt{h}}j)&\text{ if
}{\mathtt{h}}<\infty\\\ \left|j\right|&\text{ if
}{\mathtt{h}}=+\infty\,.\end{cases}$ (1.10)
As we will show in Section 2.1, all reversible solutions, i.e. satisfying
(1.7), of (1.8) are the linear superposition of plane waves, traveling either
to the right or to the left, given by
$\displaystyle\begin{pmatrix}\eta(t,x)\\\ \psi(t,x)\end{pmatrix}$
$\displaystyle=\sum_{n\in{\mathbb{N}}}\begin{pmatrix}M_{n}\rho_{n}\cos(nx-\Omega_{n}(\gamma)t)\\\
P_{n}\rho_{n}\sin(nx-\Omega_{n}(\gamma)t)\end{pmatrix}+\begin{pmatrix}M_{n}\rho_{-n}\cos(nx+\Omega_{-n}(\gamma)t)\\\
P_{-n}\rho_{-n}\sin(nx+\Omega_{-n}(\gamma)t)\end{pmatrix}\,,$ (1.11)
where $\rho_{n}\geq 0$ are arbitrary amplitudes and $M_{n}$, $P_{\pm n}$ are
the real coefficients
$M_{j}:=\left(\frac{G_{j}(0)}{g+\frac{\gamma^{2}}{4}\frac{G_{j}(0)}{j^{2}}}\right)^{\frac{1}{4}},\
j\in{\mathbb{Z}}\setminus\\{0\\}\,,\quad P_{\pm
n}:=\frac{\gamma}{2}\frac{M_{n}}{n}\pm M_{n}^{-1},\ n\in{\mathbb{N}}\,.$
(1.12)
The frequencies $\Omega_{\pm n}(\gamma)$ in (1.11) are
$\Omega_{j}(\gamma):=\sqrt{\Big{(}g+\frac{\gamma^{2}}{4}\frac{G_{j}(0)}{j^{2}}\Big{)}G_{j}(0)}+\frac{\gamma}{2}\frac{G_{j}(0)}{j}\,,\quad
j\in{\mathbb{Z}}\setminus\\{0\\}\,.$ (1.13)
Note that the map $j\mapsto\Omega_{j}(\gamma)$ is not even due to the
vorticity term $\gamma G_{j}(0)/j$, which is odd in $j$.
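The asymmetry of the dispersion relation can be checked numerically. The sketch below (assuming NumPy; the values $g=9.81$ and ${\mathtt{h}}=1$ are illustrative) evaluates (1.13) for finite depth and verifies that $\Omega_{j}(\gamma)-\Omega_{-j}(\gamma)=\gamma\,G_{j}(0)/j$, which vanishes only in the irrotational case $\gamma=0$:

```python
import numpy as np

def Omega(j, gamma, g=9.81, depth=1.0):
    """Linear frequency Omega_j(gamma) of (1.13) for finite depth."""
    G = j * np.tanh(depth * j)        # Dirichlet-Neumann symbol G_j(0), see (1.10)
    return np.sqrt((g + 0.25 * gamma**2 * G / j**2) * G) + 0.5 * gamma * G / j

j, gamma = 3, 2.0
asym = Omega(j, gamma) - Omega(-j, gamma)   # equals gamma * G_j(0) / j
```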
All the linear solutions (1.11) are either time periodic, quasi-periodic or
almost-periodic, depending on the irrationality properties of the frequencies
$\Omega_{\pm n}(\gamma)$ and the number of non zero amplitudes $\rho_{\pm n}$.
The problem of the existence of the traveling quasi-periodic in time water
waves is formulated as follows.
###### Definition 1.1.
(Quasi-periodic traveling wave) We say that $(\eta(t,x),\psi(t,x))$ is a time
quasi-periodic traveling wave with irrational frequency vector
$\omega=(\omega_{1},\ldots,\omega_{\nu})\in{\mathbb{R}}^{\nu}$,
$\nu\in{\mathbb{N}}$, i.e. $\omega\cdot\ell\neq 0$ for any
$\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$, and “wave vectors”
$(j_{1},\ldots,j_{\nu})\in{\mathbb{Z}}^{\nu}$, if there exist functions
$(\breve{\eta},\breve{\psi}):{\mathbb{T}}^{\nu}\to{\mathbb{R}}^{2}$ such that
$\begin{pmatrix}\eta(t,x)\\\
\psi(t,x)\end{pmatrix}=\begin{pmatrix}\breve{\eta}(\omega_{1}t-j_{1}x,\ldots,\omega_{\nu}t-j_{\nu}x)\\\
\breve{\psi}(\omega_{1}t-j_{1}x,\ldots,\omega_{\nu}t-j_{\nu}x)\end{pmatrix}\,.$
(1.14)
Note that, if $\nu=1$, such functions are time periodic and indeed stationary
in a moving frame with speed $\omega_{1}/j_{1}$. If the number of irrational
frequencies is greater than or equal to $2$, the waves (1.14) cannot
be reduced to steady waves by any choice of the moving frame.
We shall construct traveling quasi-periodic solutions of the nonlinear
equations (1.3) with a diophantine frequency vector $\omega$ belonging to an
open bounded subset ${\mathtt{\Omega}}$ in ${\mathbb{R}}^{\nu}$, namely, for
some $\upsilon\in(0,1)$, $\tau>\nu-1$,
${\mathtt{D}}{\mathtt{C}}(\upsilon,\tau):=\Big{\\{}\omega\in{\mathtt{\Omega}}\subset{\mathbb{R}}^{\nu}\
:\ \left|\omega\cdot\ell\right|\geq\upsilon\braket{\ell}^{-\tau}\ ,\
\forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\Big{\\}}\,,\quad\langle\ell\rangle:=\max\\{1,|\ell|\\}\,.$
(1.15)
Regarding regularity, we will prove the existence of quasi-periodic traveling
waves $(\breve{\eta},\breve{\psi})$ belonging to some Sobolev space
$H^{s}({\mathbb{T}}^{\nu},{\mathbb{R}}^{2})=\Big{\\{}\breve{f}({\varphi})=\sum_{\ell\in{\mathbb{Z}}^{\nu}}f_{\ell}\,e^{{\rm
i}\ell\cdot{\varphi}}\ ,\ \ \ f_{\ell}\in{\mathbb{R}}^{2}\ \
:\,\|\breve{f}\|_{s}^{2}:=\sum_{\ell\in{\mathbb{Z}}^{\nu}}|f_{\ell}|^{2}\langle\ell\rangle^{2s}<\infty\Big{\\}}\,.$
(1.16)
Fixed finitely many arbitrary distinct natural numbers
${\mathbb{S}}^{+}:=\\{\overline{n}_{1},\ldots,\overline{n}_{\nu}\\}\subset{\mathbb{N}}\
,\quad 1\leq\overline{n}_{1}<\ldots<\overline{n}_{\nu}\,,$ (1.17)
and signs
$\Sigma:=\\{\sigma_{1},\ldots,\sigma_{\nu}\\},\quad\sigma_{a}\in\\{-1,1\\}\,,\quad
a=1,\ldots,\nu\,,$ (1.18)
we consider reversible quasi-periodic traveling wave solutions of the linear
system (1.8), given by
$\displaystyle\begin{pmatrix}\eta(t,x)\\\ \psi(t,x)\end{pmatrix}$
$\displaystyle=\sum_{a\in\\{1,\ldots,\nu\colon\sigma_{a}=+1\\}}\begin{pmatrix}M_{\overline{n}_{a}}\sqrt{\xi_{\overline{n}_{a}}}\cos(\overline{n}_{a}x-\Omega_{\overline{n}_{a}}(\gamma)t)\\\
P_{\overline{n}_{a}}\sqrt{\xi_{\overline{n}_{a}}}\sin(\overline{n}_{a}x-\Omega_{\overline{n}_{a}}(\gamma)t)\end{pmatrix}$
(1.19)
$\displaystyle+\sum_{a\in\\{1,\ldots,\nu\colon\sigma_{a}=-1\\}}\begin{pmatrix}M_{\overline{n}_{a}}\sqrt{\xi_{-\overline{n}_{a}}}\cos(\overline{n}_{a}x+\Omega_{-\overline{n}_{a}}(\gamma)t)\\\
P_{-\overline{n}_{a}}\sqrt{\xi_{-\overline{n}_{a}}}\sin(\overline{n}_{a}x+\Omega_{-\overline{n}_{a}}(\gamma)t)\
\end{pmatrix}$
where $\xi_{\pm\overline{n}_{a}}>0$, $a=1,\ldots,\nu$. The frequency vector of
(1.19) is given by
$\vec{\Omega}(\gamma):=(\Omega_{\sigma_{a}\overline{n}_{a}}(\gamma))_{a=1,\ldots,\nu}\in{\mathbb{R}}^{\nu}\,.$
(1.20)
Theorem 1.2 shows that the linear solutions (1.19) can be continued to quasi-
periodic traveling wave solutions of the nonlinear water waves equations
(1.3), for most values of the vorticity
$\gamma\in[\gamma_{1},\gamma_{2}]\subset{\mathbb{R}}$, with a frequency vector
$\widetilde{\Omega}:=(\widetilde{\Omega}_{\sigma_{a}\overline{n}_{a}})_{a=1,\ldots,\nu}$,
close to
$\vec{\Omega}(\gamma):=(\Omega_{\sigma_{a}\overline{n}_{a}}(\gamma))_{a=1,\ldots,\nu}$.
###### Theorem 1.2.
(KAM for traveling gravity water waves with constant vorticity) Consider
finitely many tangential sites ${\mathbb{S}}^{+}\subset{\mathbb{N}}$ as in
(1.17) and signs $\Sigma$ as in (1.18). Then there exist $\overline{s}>0$,
$\varepsilon_{0}\in(0,1)$ such that, for any $|\xi|\leq\varepsilon_{0}^{2}$,
$\xi:=(\xi_{\sigma_{a}{\overline{n}}_{a}})_{a=1,\ldots,\nu}\in{\mathbb{R}}_{+}^{\nu}$,
the following hold:
1. 1.
there exists a Cantor-like set ${\cal G}_{\xi}\subset[\gamma_{1},\gamma_{2}]$
with asymptotically full measure as $\xi\to 0$, i.e. $\lim_{\xi\to 0}|{\cal
G}_{\xi}|={\gamma}_{2}-{\gamma}_{1}$;
2. 2.
for any $\gamma\in{\cal G}_{\xi}$, the gravity water waves equations (1.3)
have a reversible quasi-periodic traveling wave solution (according to
Definition 1.1) of the form
$\displaystyle\begin{pmatrix}\eta(t,x)\\\ \psi(t,x)\end{pmatrix}$
$\displaystyle=\sum_{a\in\\{1,\ldots,\nu\\}\colon\sigma_{a}=+1}\begin{pmatrix}M_{\overline{n}_{a}}\sqrt{\xi_{\overline{n}_{a}}}\cos(\overline{n}_{a}x-\widetilde{\Omega}_{\overline{n}_{a}}(\gamma)t)\\\
P_{\overline{n}_{a}}\sqrt{\xi_{\overline{n}_{a}}}\sin(\overline{n}_{a}x-\widetilde{\Omega}_{\overline{n}_{a}}(\gamma)t)\end{pmatrix}$
(1.21)
$\displaystyle+\sum_{a\in\\{1,\ldots,\nu\\}\colon\sigma_{a}=-1}\begin{pmatrix}M_{\overline{n}_{a}}\sqrt{\xi_{-\overline{n}_{a}}}\cos(\overline{n}_{a}x+\widetilde{\Omega}_{-\overline{n}_{a}}(\gamma)t)\\\
P_{-\overline{n}_{a}}\sqrt{\xi_{-\overline{n}_{a}}}\sin(\overline{n}_{a}x+\widetilde{\Omega}_{-\overline{n}_{a}}(\gamma)t)\end{pmatrix}+r(t,x)$
where
$r(t,x)=\breve{r}({\widetilde{\Omega}}_{\sigma_{1}\overline{n}_{1}}(\gamma)t-\sigma_{1}\overline{n}_{1}x,\ldots,{\widetilde{\Omega}}_{\sigma_{\nu}\overline{n}_{\nu}}(\gamma)t-\sigma_{\nu}\overline{n}_{\nu}x)\,,\quad\breve{r}\in
H^{\overline{s}}({\mathbb{T}}^{\nu},{\mathbb{R}}^{2})\,,\quad\lim_{\xi\to
0}\frac{\|\breve{r}\|_{\overline{s}}}{\sqrt{|\xi|}}=0\,,$
with a Diophantine frequency vector
$\widetilde{\Omega}:=(\widetilde{\Omega}_{\sigma_{a}\overline{n}_{a}})_{a=1,\ldots,\nu}\in{\mathbb{R}}^{\nu}$,
depending on $\gamma,\xi$, and satisfying $\lim_{\xi\to
0}{\widetilde{\Omega}}=\vec{\Omega}(\gamma)$. In addition these quasi-periodic
solutions are linearly stable.
Let us make some comments about the result.
1) Vorticity as parameter and irrotational quasi-periodic traveling waves. We
are able to use the vorticity $\gamma$ as a parameter, even though the
dependence of the linear frequencies $\Omega_{j}(\gamma)$ in (1.13) with
respect to $\gamma$ affects only the order $0$. In Section 4 we prove the non-
degeneracy and the transversality of the linear frequencies
$\Omega_{j}(\gamma)$ with respect to $\gamma$. If $\gamma_{1}<0<\gamma_{2}$ we
do not know if the value $\gamma=0$ belongs to the set ${\mathcal{G}}_{\xi}$
for which the quasi periodic solutions (1.21) exist. Nevertheless,
irrotational quasi-periodic traveling solutions for the gravity water waves
equations (1.3) exist for most values of the depth
${\mathtt{h}}\in[{\mathtt{h}}_{1},{\mathtt{h}}_{2}]$, see Remark 4.6. These
traveling wave solutions clearly do not reduce to the standing wave solutions
constructed in [2], which are even in the space variable.
2) More general traveling solutions. The Diophantine condition (5.14) could be
weakened requiring only $|\omega\cdot\ell|\geq\
\upsilon\langle\ell\rangle^{-\tau}$ for any
$\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$ with
$\ell_{1}\,\sigma_{1}\overline{n}_{1}+...+\ell_{\nu}\,\sigma_{\nu}\overline{n}_{\nu}=0$.
In such a case the vector $\omega$ could admit one non-trivial resonance. This
is the natural minimal requirement to look for traveling solutions of the form
$U(\omega t-\vec{\jmath}x)$, see Definition 3.1 and Remark 5.2. For $\nu=2$,
solutions of this kind could be time periodic, with a shape clearly completely
different from that of the classical Stokes traveling waves [33].
Let us make some comments about the proof.
3) Symmetrization and reduction in order of the linearized operator. The
leading order of the linearization of the water waves system (1.3) at any
quasi-periodic traveling wave is given by the Hamiltonian transport operator
(see (7.18))
${\mathcal{L}}_{\rm
TR}:=\omega\cdot\partial_{\varphi}+\begin{pmatrix}\partial_{x}{\widetilde{V}}&0\\\
0&{\widetilde{V}}\partial_{x}\end{pmatrix}$
where ${\widetilde{V}}({\varphi},x)$ is a small quasi-periodic traveling wave.
By the almost-straightening result of Lemma 7.7, for any $(\omega,\gamma)$
satisfying suitable non-resonance conditions as in (5.15), we conjugate
${\mathcal{L}}_{\rm TR}$ via a symplectic transformation to a transport
operator of the form
$\omega\cdot\partial_{\varphi}+\begin{pmatrix}{\mathtt{m}}_{1}\partial_{x}&0\\\
0&{\mathtt{m}}_{1}\partial_{x}\end{pmatrix}+\begin{pmatrix}\partial_{y}\,p_{\overline{\mathtt{n}}}&0\\\
0&p_{\overline{\mathtt{n}}}\,\partial_{y}\end{pmatrix}\,,$
where ${\mathtt{m}}_{1}\in{\mathbb{R}}$ is a constant to be determined and
$p_{\overline{\mathtt{n}}}(\varphi,x)$ is an exponentially small function, see
(7.29). For the standing waves problem in [2] we have that
${\mathtt{m}}_{1}=0$ and the complete conjugation of ${\cal L}_{\rm TR}$ is
proved for any $\omega$ diophantine. The almost-straightening Theorem A.2,
which implies the conjugation in Lemma 7.7, is performed in the same spirit of
the almost-reducibility Theorem 8.2. The KAM algebraic reduction scheme is
like in [17] and in [3]. Here we do not perform the full straightening of the
transport operator ${\mathcal{L}}_{\rm TR}$ (i.e. we have
$\overline{\mathtt{n}}<\infty$) in order to formulate a simple non-resonance
condition as in (5.15). The resulting almost-remainders are then considered
along the Nash-Moser nonlinear iteration (the estimates obtained in the proof
in [17] after finitely many iterative steps are not sufficient for our
purposes).
As for the almost-straightening above, we also perform in a symplectic way the
other steps of the reduction to constant coefficients of the lower order terms
of the linearized operator. This is needed to prevent the appearance of
unstable operators. From Section 7.4 onward we preserve only the reversible
structure. Due to the pseudo-differential nature of the vorticity vector field
in (1.3), smoothing tame remainders (cf. (7.50)) appear along the reduction
process from Section 7.2 onward, unlike in [2].
4) Traveling waves and Melnikov non-resonance conditions. We strongly use the
invariance under space translations of the Hamiltonian nonlinear water waves
vector field (1.3), i.e. the “momentum conservation”, in the construction of
the traveling quasi-periodic waves. We list the main points in which it
occurs:
(i) The Floquet exponents (5.12) of the quasi-periodic solutions (1.21) are a
singular perturbation of the unperturbed linear frequencies in (1.13), with
leading terms of order $1$. The Melnikov non-resonance conditions formulated
in the Cantor-like set ${\cal C}_{\infty}^{\upsilon}$ in (5.14)-(5.18) hold on
a set of large measure only thanks to the conservation of the momentum, see
Section 5.2.
(ii) Thanks to the restriction on the Fourier indexes coming from the space
translation invariance, we can impose Melnikov conditions that _do not lose_
space derivatives, see (5.17). This simplifies considerably the reduction in
decreasing orders of Section 7 and the KAM reducibility scheme of Section 8.
Indeed, it is enough to reduce the linearized operator to constant
coefficients up to order $0$ included (in order to have a sufficiently good
asymptotic expansion of the perturbed frequencies to prove the inclusion Lemma
5.7). Conversely, in [2] the second order Melnikov conditions verified for the
standing pure gravity waves lose several space derivatives and many more steps
of regularization are needed.
(iii) The invariance by space translations in the construction of the
quasi-periodic traveling waves allows us to avoid resonances between the linear
frequencies. For example, with infinite depth ${\mathtt{h}}=+\infty$, these
are given by $\Omega_{j}(\gamma)=\omega_{j}(\gamma)+\frac{\gamma}{2}{\rm
sign}(j)$. In this case there exist
$\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$ and
$j,j^{\prime}\not\in\\{\sigma_{a}\overline{n}_{a}\\}_{a=1,\ldots,\nu}$, with
$j\neq j^{\prime}$, such that
$\sum_{a=1}^{\nu}\ell_{a}\,\Omega_{\sigma_{a}\overline{n}_{a}}(\gamma)+\Omega_{j}(\gamma)-\Omega_{j^{\prime}}(\gamma)\equiv
0\quad\forall\,\gamma\,.$ (1.22)
For example if $\sigma_{1}=\sigma_{2}$, it is sufficient to take
$\ell=(\ell_{1},\ell_{2},0,\ldots,0)=(-1,1,0,\ldots,0)$ and
$j=-\sigma_{1}\overline{n}_{1}$, $j^{\prime}=-\sigma_{2}\overline{n}_{2}$. To
exclude this resonance we exploit the conservation of momentum, which
guarantees that resonances of the form (1.22) have to be checked only on
indexes fulfilling
$\sum_{a=1}^{\nu}\ell_{a}\,\sigma_{a}\overline{n}_{a}+j-j^{\prime}=0$. The
indexes above violate this constraint, as
$\overline{n}_{1}\neq\overline{n}_{2}$ by (1.17). Throughout the proof, we
systematically use arguments of this kind to exclude nontrivial resonances.
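The exclusion argument above can be illustrated numerically. The Python sketch below is only a sanity check, not part of the proof: it assumes the infinite-depth frequencies $\Omega_{j}(\gamma)=\omega_{j}(\gamma)+\frac{\gamma}{2}{\rm sign}(j)$ with $\omega_{j}(\gamma)=\sqrt{g|j|+\gamma^{2}/4}$ (the form obtained from (2.18) with $G(0)=|D|$), and the sample sites $\overline{n}_{1}=2$, $\overline{n}_{2}=5$, $\sigma_{1}=\sigma_{2}=1$ are illustrative.

```python
import numpy as np

def Omega(j, gamma, g=9.81):
    # Infinite-depth linear frequencies: omega_j + (gamma/2) sign(j),
    # with omega_j = sqrt(g|j| + gamma^2/4) (assumed form, from (2.18) with G(0)=|D|).
    return np.sqrt(g * abs(j) + gamma**2 / 4) + 0.5 * gamma * np.sign(j)

# Illustrative tangential sites: nu = 2, sigma_1 = sigma_2 = 1, nbar = (2, 5).
sigma, nbar = [1, 1], [2, 5]
ell = [-1, 1]                                       # integer vector of the example
j, jp = -sigma[0] * nbar[0], -sigma[1] * nbar[1]    # j = -2, j' = -5

for gamma in np.linspace(-3.0, 3.0, 13):
    # The combination in (1.22) vanishes identically in gamma ...
    res = sum(l * Omega(s * n, gamma) for l, s, n in zip(ell, sigma, nbar))
    res += Omega(j, gamma) - Omega(jp, gamma)
    assert abs(res) < 1e-12

# ... but the momentum constraint is violated, so the resonance is excluded.
momentum = sum(l * s * n for l, s, n in zip(ell, sigma, nbar)) + j - jp
print(momentum)  # 6
```

The combination (1.22) vanishes identically in $\gamma$, while the momentum sum evaluates to $2\sigma_{1}(\overline{n}_{2}-\overline{n}_{1})\neq 0$, so this resonance never has to be imposed as a Melnikov condition.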
Before concluding this introduction, we briefly describe the vast literature
regarding time periodic traveling wave solutions, which are steady in a moving
frame.
Literature about time periodic traveling wave solutions. After the pioneering
work of Stokes [33], the first rigorous constructions of small amplitude space
periodic steady traveling waves go back to the 1920s with the papers of
Nekrasov [30], Levi-Civita [26] and Struik [34], in the case of irrotational
two-dimensional flows under the action of pure gravity. In the presence of
vorticity, Gerstner [19] gave in 1802 an explicit example of a periodic
traveling wave, in infinite depth and with non-zero vorticity, but it was only
with Dubreil-Jacotin [15] in 1934 that the first bifurcation result for periodic
traveling waves with small vorticity was obtained; it was subsequently extended
by Goyon [20] and Zeidler [41] to large vorticity. More recently we point out
the works of Wahlén [36] for capillary-gravity waves and non-constant vorticity,
and of Martin [28] and Wahlén [37] for constant vorticity. All these results
deal with 2d water waves, and can ultimately be deduced from the classical
Crandall-Rabinowitz bifurcation theorem from a simple eigenvalue.
We also mention that these local bifurcation results can be extended to global
branches of steady traveling waves by the theory of global analytic, or
topological, bifurcation. We refer to Keady-Norbury [27], Toland [35], McLeod
[29] for irrotational flows, to Constantin-Strauss [12] for fluids with
non-constant vorticity, and to [10] for further results.
We finally mention the recent numerical work of Wilkening-Zhao [39] on
spatially quasi-periodic gravity-capillary $1$d water waves.
## 2 Hamiltonian structure and linearization at the origin
The Hamiltonian formulation of the water waves equations (1.3) with non-zero
constant vorticity was obtained by Constantin-Ivanov-Prodanov [11] and Wahlén
[37] in the case of finite depth. For irrotational flows it reduces to the
classical Craig-Sulem-Zakharov formulation in [40], [14].
On the phase space $H^{1}_{0}({\mathbb{T}})\times\dot{H}^{1}({\mathbb{T}})$,
endowed with the non canonical Poisson tensor
$J_{M}(\gamma):=\begin{pmatrix}0&{\rm Id}\\\ -{\rm
Id}&\gamma\partial_{x}^{-1}\end{pmatrix}\,,$ (2.1)
we consider the Hamiltonian $H$ defined in (1.6). This Hamiltonian is well
defined on $H^{1}_{0}({\mathbb{T}})\times\dot{H}^{1}({\mathbb{T}})$ since
$G(\eta)[1]=0$ and $\int_{{\mathbb{T}}}G(\eta)\psi\,{\rm d}x=0$. It turns out
[11, 37] that equations (1.3) are the Hamiltonian system generated by
$H(\eta,\psi)$ with respect to the Poisson tensor $J_{M}(\gamma)$, namely
$\partial_{t}\begin{pmatrix}\eta\\\
\psi\end{pmatrix}=J_{M}(\gamma)\begin{pmatrix}\nabla_{\eta}H\\\
\nabla_{\psi}H\end{pmatrix}$ (2.2)
where $(\nabla_{\eta}H,\nabla_{\psi}H)\in\dot{L}^{2}({\mathbb{T}})\times
L^{2}_{0}({\mathbb{T}})$ denote the $L^{2}$-gradients. The non canonical
Poisson tensor $J_{M}(\gamma)$ in (2.1) has to be regarded as an operator from
(subspaces of) $(L_{0}^{2}\times\dot{L}^{2})^{*}=\dot{L}^{2}\times L_{0}^{2}$
to $L_{0}^{2}\times\dot{L}^{2}$, that is
$J_{M}(\gamma)=\begin{pmatrix}0&{\rm Id}_{L_{0}^{2}\to L_{0}^{2}}\\\ -{\rm
Id}_{\dot{L}^{2}\to\dot{L}^{2}}&\gamma\partial_{x}^{-1}\end{pmatrix}\,.$
For the sake of simplicity, throughout the paper we omit this detail; see [7]
for a more precise analysis.
We describe now some symmetries of the Hamiltonian (1.6).
Reversible structure. Defining on the phase space
$H_{0}^{1}({\mathbb{T}})\times\dot{H}^{1}({\mathbb{T}})$ the involution
${\mathcal{S}}\left(\begin{matrix}\eta\\\
\psi\end{matrix}\right):=\left(\begin{matrix}\eta^{\vee}\\\
-\psi^{\vee}\end{matrix}\right)\,,\quad\eta^{\vee}(x):=\eta(-x)\,,$ (2.3)
the Hamiltonian (1.6) is invariant under ${\mathcal{S}}$, that is
$H\circ{\mathcal{S}}=H$. Equivalently, the water waves vector field $X$ on the
right hand side of (1.3) satisfies
$X\circ{\mathcal{S}}=-{\mathcal{S}}\circ X\,.$ (2.4)
This property follows since the Dirichlet-Neumann operator satisfies
$G(\eta^{\vee})[\psi^{\vee}]=\left(G(\eta)[\psi]\right)^{\vee}$.
Translation invariance. Since the bottom of the fluid domain (1.1) is flat (or,
in the case of infinite depth, there is no bottom), the water waves equations
(1.3) are invariant under space translations. Specifically, defining the
translation
operator
$\tau_{\varsigma}\colon u(x)\mapsto
u(x+\varsigma)\,,\quad\varsigma\in{\mathbb{R}}\,,$ (2.5)
the Hamiltonian (1.6) satisfies $H\circ\tau_{\varsigma}=H$ for any
$\varsigma\in{\mathbb{R}}$. Equivalently, the water waves vector field $X$ on
the right hand side of (1.3) satisfies
$X\circ\tau_{\varsigma}=\tau_{\varsigma}\circ
X\,,\quad\forall\,\varsigma\in{\mathbb{R}}\,.$ (2.6)
This property follows since $\tau_{\varsigma}\circ
G(\eta)=G(\tau_{\varsigma}\eta)\circ\tau_{\varsigma}$ for any
$\varsigma\in{\mathbb{R}}$.
Wahlén coordinates. We introduce the Wahlén [37] coordinates $(\eta,\zeta)$
via the map
$\left(\begin{matrix}\eta\\\
\psi\end{matrix}\right)=W\left(\begin{matrix}\eta\\\
\zeta\end{matrix}\right)\,,\quad W:=\left(\begin{matrix}{\rm Id}&0\\\
\frac{\gamma}{2}\partial_{x}^{-1}&{\rm Id}\end{matrix}\right)\,,\quad
W^{-1}:=\left(\begin{matrix}{\rm Id}&0\\\
-\frac{\gamma}{2}\partial_{x}^{-1}&{\rm Id}\end{matrix}\right)\ .$ (2.7)
The change of coordinates $W$ maps the phase space
$H^{1}_{0}\times\dot{H}^{1}$ into itself, and it conjugates the Poisson tensor
$J_{M}(\gamma)$ to the canonical one
$W^{-1}J_{M}(\gamma)(W^{-1})^{*}=J\,,\quad J:=\begin{pmatrix}0&{\rm Id}\\\
-{\rm Id}&0\end{pmatrix}\,,$ (2.8)
so that $(\eta,\zeta)$ are Darboux coordinates. The Hamiltonian (1.6) becomes
${\mathcal{H}}:=H\circ W\,,\quad\text{ i.e. }\quad{\cal
H}(\eta,\zeta):=H\Big{(}\eta,\zeta+\frac{\gamma}{2}\partial_{x}^{-1}\eta\Big{)}\,,$
(2.9)
and the Hamiltonian equations (2.2) (i.e. (1.3)) are transformed into
$\partial_{t}\left(\begin{matrix}\eta\\\ \zeta\end{matrix}\right)=X_{\cal
H}(\eta,\zeta)\,,\quad X_{\cal
H}(\eta,\zeta):=J\begin{pmatrix}\nabla_{\eta}{\cal H}\\\ \nabla_{\zeta}{\cal
H}\end{pmatrix}(\eta,\zeta)\,.$ (2.10)
By (2.8), the symplectic form of (2.10) is the standard one,
$\displaystyle{\cal W}\left(\begin{pmatrix}\eta_{1}\\\
\zeta_{1}\end{pmatrix},\begin{pmatrix}\eta_{2}\\\
\zeta_{2}\end{pmatrix}\right):=\left(J^{-1}\left(\begin{matrix}\eta_{1}\\\
\zeta_{1}\end{matrix}\right),\left(\begin{matrix}\eta_{2}\\\
\zeta_{2}\end{matrix}\right)\right)_{L^{2}}=(-\zeta_{1},\eta_{2})_{L^{2}}+(\eta_{1},\zeta_{2})_{L^{2}}\,,$
(2.11)
where $J^{-1}=\begin{pmatrix}0&-{\rm Id}\\\ {\rm Id}&0\end{pmatrix}$ is
regarded as a map from $L_{0}^{2}\times\dot{L}^{2}$ into $\dot{L}^{2}\times
L_{0}^{2}$.
The transformation $W$ defined in (2.7) is reversibility preserving, namely it
commutes with the involution ${\cal S}$ in (2.3) (see Definition 3.19 below),
and thus also the Hamiltonian ${\mathcal{H}}$ in (2.9) is invariant under the
involution ${\mathcal{S}}$. For this reason we look for solutions
$(\eta(t,x),\zeta(t,x))$ of (2.10) that are reversible, i.e., see (1.7),
$\left(\begin{matrix}\eta\\\
\zeta\end{matrix}\right)(-t)={\mathcal{S}}\left(\begin{matrix}\eta\\\
\zeta\end{matrix}\right)(t)\,.$ (2.12)
The corresponding solutions $(\eta(t,x),\psi(t,x))$ of (1.3) induced by (2.7)
are reversible as well.
We finally note that the transformation $W$ defined in (2.7) commutes with the
translation operator $\tau_{\varsigma}$, therefore the Hamiltonian
${\mathcal{H}}$ in (2.9) is invariant under $\tau_{\varsigma}$.
### 2.1 Linearization at the equilibrium
We now prove that the reversible solutions of the linear system (1.8) have the
form (1.11); we proceed similarly to [7], Section 2.1, to which we refer for
details. The linear system (1.8) is Hamiltonian and it is generated
by the quadratic Hamiltonian
$H_{L}(\eta,\psi):=\frac{1}{2}\int_{\mathbb{T}}\left(\psi
G(0)\psi+g\eta^{2}\right)\,{\rm
d}{x}=\frac{1}{2}\left({\bf{\Omega}}_{L}\begin{pmatrix}\eta\\\
\psi\end{pmatrix},\begin{pmatrix}\eta\\\ \psi\end{pmatrix}\right)_{L^{2}}\,.$
Thus, recalling (2.2), the linear system (1.8) is
$\partial_{t}\begin{pmatrix}\eta\\\
\psi\end{pmatrix}=J_{M}(\gamma){\bf{\Omega}}_{L}\begin{pmatrix}\eta\\\
\psi\end{pmatrix}\ ,\qquad{\bf{\Omega}}_{L}:=\begin{pmatrix}g&0\\\
0&G(0)\end{pmatrix}\,.$ (2.13)
In the Wahlén coordinates (2.7), system (2.13) is transformed into the linear
Hamiltonian system
$\displaystyle\partial_{t}\begin{pmatrix}\eta\\\
\zeta\end{pmatrix}=J{\bf{\Omega}}_{W}\begin{pmatrix}\eta\\\
\zeta\end{pmatrix}\ ,$ (2.14)
$\displaystyle{\bf{\Omega}}_{W}:=W^{*}{\bf{\Omega}}_{L}W=\begin{pmatrix}g-\left(\frac{\gamma}{2}\right)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}&-\frac{\gamma}{2}\partial_{x}^{-1}G(0)\\\
\frac{\gamma}{2}G(0)\partial_{x}^{-1}&G(0)\end{pmatrix}$
generated by the quadratic Hamiltonian
${\mathcal{H}}_{L}(\eta,\zeta):=(H_{L}\circ
W)(\eta,\zeta)=\frac{1}{2}\left({\bf{\Omega}}_{W}\left(\begin{matrix}\eta\\\
\zeta\end{matrix}\right),\left(\begin{matrix}\eta\\\
\zeta\end{matrix}\right)\right)_{L^{2}}\,.$ (2.15)
Let us diagonalize (2.14). We first conjugate (2.14) under the symplectic
transformation (with respect to the standard symplectic form ${\cal W}$ in
(2.11)) of the phase space
$\begin{pmatrix}\eta\\\ \zeta\end{pmatrix}={\mathcal{M}}\begin{pmatrix}u\\\
v\end{pmatrix}$
where ${\mathcal{M}}$ is the diagonal matrix of self-adjoint Fourier
multipliers
${\mathcal{M}}:=\left(\begin{matrix}M(D)&0\\\ 0&M(D)^{-1}\end{matrix}\right)\
,\quad
M(D):=\left(\frac{G(0)}{g-\frac{\gamma^{2}}{4}\partial_{x}^{-1}G(0)\partial_{x}^{-1}}\right)^{1/4}\,,$
(2.16)
with the real valued symbol $M_{j}$ defined in (1.12). The map $\cal{M}$ is
reversibility preserving.
###### Remark 2.1.
$M(D)^{-1}$ denotes the Fourier multiplier operator in $\dot{H}^{1}$ defined
as $M(D)^{-1}[\zeta]:=\big{[}\sum_{j\neq 0}M_{j}^{-1}\zeta_{j}e^{{\rm
i}jx}\big{]}$, $\zeta(x)=\sum_{j\in{\mathbb{Z}}}\zeta_{j}e^{{\rm i}jx}$ where
$[\zeta]$ is the element in $\dot{H}^{1}$ with representative $\zeta(x)$.
By a direct computation, the Hamiltonian system (2.14) assumes the symmetric
form
$\displaystyle\partial_{t}\begin{pmatrix}u\\\
v\end{pmatrix}=J{\bf{\Omega}}_{S}\begin{pmatrix}u\\\ v\end{pmatrix}\,,\ \
{\bf{\Omega}}_{S}:={\mathcal{M}}^{*}{\bf{\Omega}}_{W}{\mathcal{M}}=\left(\begin{matrix}\omega(\gamma,D)&-\frac{\gamma}{2}\partial_{x}^{-1}G(0)\\\
\frac{\gamma}{2}G(0)\partial_{x}^{-1}&\omega(\gamma,D)\end{matrix}\right)\,,$
(2.17)
where
$\omega(\gamma,D):=\sqrt{g\,G(0)-\left(\frac{\gamma}{2}\partial_{x}^{-1}G(0)\right)^{2}}\,.$
(2.18)
Now we introduce complex coordinates by the transformation
$\begin{pmatrix}u\\\ v\end{pmatrix}={\mathcal{C}}\begin{pmatrix}z\\\
\overline{z}\end{pmatrix}\,,\qquad{\mathcal{C}}:=\frac{1}{\sqrt{2}}\left(\begin{matrix}{\rm
Id}&{\rm Id}\\\ -{\rm i}&{\rm i}\end{matrix}\right)\
,\quad{\mathcal{C}}^{-1}:=\frac{1}{\sqrt{2}}\left(\begin{matrix}{\rm Id}&{\rm
i}\\\ {\rm Id}&-{\rm i}\end{matrix}\right)\,.$ (2.19)
In these variables, the Hamiltonian system (2.17) becomes the diagonal system
$\displaystyle\partial_{t}\left(\begin{matrix}z\\\
\overline{z}\end{matrix}\right)=\begin{pmatrix}-{\rm i}&0\\\ 0&{\rm
i}\end{pmatrix}{\bf{\Omega}}_{D}\left(\begin{matrix}z\\\
\overline{z}\end{matrix}\right)\,,\quad{\bf{\Omega}}_{D}:={\mathcal{C}}^{*}{\bf{\Omega}}_{S}{\mathcal{C}}=\begin{pmatrix}\Omega(\gamma,D)&0\\\
0&\overline{\Omega}(\gamma,D)\end{pmatrix}\,,$ (2.20)
where
$\Omega(\gamma,D):=\omega(\gamma,D)+{\rm
i}\,\frac{\gamma}{2}\partial_{x}^{-1}G(0)$ (2.21)
is the Fourier multiplier with symbol $\Omega_{j}(\gamma)$ defined in (1.13)
and $\overline{\Omega}(\gamma,D)$ is defined by
$\overline{\Omega}(\gamma,D)z:=\overline{\Omega(\gamma,D)\overline{z}}\,,\quad\overline{\Omega}(\gamma,D)=\omega(\gamma,D)-{\rm
i}\,\frac{\gamma}{2}\partial_{x}^{-1}G(0)\,.$
Note that $\overline{\Omega}(\gamma,D)$ is the Fourier multiplier with symbol
$\\{\Omega_{-j}(\gamma)\\}_{j\in{\mathbb{Z}}\setminus\\{0\\}}$.
We regard the system (2.20) in $\dot{H}^{1}\times\dot{H}^{1}$. The diagonal
system (2.20) amounts to the scalar equation
$\partial_{t}z=-{\rm i}\Omega(\gamma,D)z\,,\quad
z(x)=\sum_{j\in{\mathbb{Z}}\setminus\\{0\\}}z_{j}e^{{\rm i}jx}\,,$ (2.22)
which, written in the exponential Fourier basis, is an infinite collection of
decoupled harmonic oscillators
$\dot{z}_{j}=-{\rm i}\Omega_{j}(\gamma)z_{j}\,,\quad
j\in{\mathbb{Z}}\setminus\\{0\\}\,.$ (2.23)
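Since every transformation in the chain (2.7), (2.16), (2.19) is a matrix of Fourier multipliers, the whole diagonalization can be sanity-checked on $2\times 2$ symbol matrices: the eigenvalues of the symbol of $J_{M}(\gamma){\bf\Omega}_{L}$ at the mode $j$ must be $-{\rm i}\Omega_{j}(\gamma)$ and $+{\rm i}\Omega_{-j}(\gamma)$. The sketch below is an illustration only; it assumes the standard finite-depth symbol $|j|\tanh({\mathtt h}|j|)$ for $G(0)$, and the values of $g$, ${\mathtt h}$ are arbitrary.

```python
import numpy as np

g, h = 9.81, 1.0   # gravity and depth, illustrative values

def mode_data(j, gamma):
    # Finite-depth symbols at mode j (assumed standard forms):
    # G(0) ~ |j| tanh(h|j|),  d_x^{-1} ~ 1/(ij).
    Gj = abs(j) * np.tanh(h * abs(j))
    inv_dx = 1.0 / (1j * j)
    J_M  = np.array([[0, 1], [-1, gamma * inv_dx]])   # Poisson tensor (2.1)
    Om_L = np.array([[g, 0], [0, Gj]])                # symbol of Omega_L in (2.13)
    # Frequencies from (2.18), (2.21): Omega_{±j} = omega_j ± gamma*Gj/(2j).
    om = np.sqrt(g * Gj + (gamma * Gj / (2 * j))**2)
    return J_M @ Om_L, om + gamma * Gj / (2 * j), om - gamma * Gj / (2 * j)

ok = True
for j in [1, 2, -3, 5]:
    for gamma in [-1.0, 0.0, 2.5]:
        L, Om_p, Om_m = mode_data(j, gamma)   # Om_p = Omega_j, Om_m = Omega_{-j}
        eig = sorted(np.linalg.eigvals(L), key=lambda z: z.imag)
        # Spectrum of the mode-j linearization: {-i Omega_j, +i Omega_{-j}}.
        ok &= np.allclose(eig, sorted([-1j * Om_p, 1j * Om_m], key=lambda z: z.imag))
print(ok)  # True
```

In particular, for $\gamma=0$ one recovers the eigenvalues $\mp{\rm i}\sqrt{g\,G_{j}}$ of the irrotational problem.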
Note that, in these complex coordinates, the involution ${\mathcal{S}}$
defined in (2.3) reads as the map
$\begin{pmatrix}z(x)\\\
\overline{z(x)}\end{pmatrix}\mapsto\begin{pmatrix}\,\overline{z(-x)}\\\
z(-x)\,\end{pmatrix}\ ,$ (2.24)
whereas, in the Fourier coordinates introduced in (2.22), it amounts to
$z_{j}\mapsto\overline{z_{j}}\,,\quad\forall
j\in{\mathbb{Z}}\setminus\\{0\\}\,.$ (2.25)
In view of (2.23) and (2.25) any reversible solution (which is characterized
as in (2.12)) of (2.22) has the form
$z(t,x):=\frac{1}{\sqrt{2}}\sum_{j\in{\mathbb{Z}}\setminus\\{0\\}}\rho_{j}\,e^{-{\rm
i}\,\left(\Omega_{j}(\gamma)t-j\,x\right)}\quad{\rm
with}\quad\rho_{j}\in{\mathbb{R}}\,.$ (2.26)
Let us express these solutions back in the original variables $(\eta,\psi)$.
First, by (2.16) and (2.19),
$\begin{pmatrix}\eta\\\
\zeta\end{pmatrix}={\mathcal{M}}\,{\mathcal{C}}\begin{pmatrix}z\\\
\overline{z}\end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix}M(D)&M(D)\\\ -{\rm
i}M(D)^{-1}&{\rm i}M(D)^{-1}\end{pmatrix}\begin{pmatrix}z\\\
\overline{z}\end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix}M(D)(z+\overline{z})\\\
-{\rm i}M(D)^{-1}(z-\overline{z})\end{pmatrix}\,,$ (2.27)
and the solutions (2.26) assume the form
$\displaystyle\begin{pmatrix}\eta(t,x)\\\ \zeta(t,x)\end{pmatrix}$
$\displaystyle=\sum_{n\in{\mathbb{N}}}\begin{pmatrix}M_{n}\rho_{n}\cos(nx-\Omega_{n}(\gamma)t)\\\
M_{n}^{-1}\rho_{n}\sin(nx-\Omega_{n}(\gamma)t)\end{pmatrix}+\begin{pmatrix}M_{n}\rho_{-n}\cos(nx+\Omega_{-n}(\gamma)t)\\\
-M_{n}^{-1}\rho_{-n}\sin(nx+\Omega_{-n}(\gamma)t)\end{pmatrix}\,.$
Going back to the variables $(\eta,\psi)$ via the change of coordinates (2.7),
one obtains formula (1.11).
#### Decomposition of the phase space in symplectic subspaces invariant under
(2.14).
We express the Fourier coefficients $z_{j}\in{\mathbb{C}}$ in (2.22) as
$z_{j}=\frac{\alpha_{j}+{\rm
i}\beta_{j}}{\sqrt{2}},\quad(\alpha_{j},\beta_{j})\in{\mathbb{R}}^{2}\,,\quad
j\in{\mathbb{Z}}\setminus\\{0\\}\,.$
In the new coordinates
$(\alpha_{j},\beta_{j})_{j\in{\mathbb{Z}}\setminus\\{0\\}}$, we write (2.27)
as (recall that $M_{j}=M_{-j}$)
$\begin{pmatrix}\eta(x)\\\
\zeta(x)\end{pmatrix}=\sum_{j\in{\mathbb{Z}}\setminus\\{0\\}}\begin{pmatrix}M_{j}(\alpha_{j}\cos(jx)-\beta_{j}\sin(jx))\\\
M_{j}^{-1}(\beta_{j}\cos(jx)+\alpha_{j}\sin(jx))\end{pmatrix}$ (2.28)
with
$\displaystyle\alpha_{j}=\frac{1}{2\pi}\Big{(}M_{j}^{-1}(\eta,\cos(jx))_{L^{2}}+M_{j}(\zeta,\sin(jx))_{L^{2}}\Big{)}\,,$
(2.29)
$\displaystyle\beta_{j}=\frac{1}{2\pi}\Big{(}M_{j}(\zeta,\cos(jx))_{L^{2}}-M_{j}^{-1}(\eta,\sin(jx))_{L^{2}}\Big{)}\,.$
The symplectic form (2.11) then becomes
$2\pi\sum_{j\in{\mathbb{Z}}\setminus\\{0\\}}{\rm d}\alpha_{j}\wedge{\rm
d}\beta_{j}$. Each $2$-dimensional subspace in the sum (2.28), spanned by
$(\alpha_{j},\beta_{j})\in{\mathbb{R}}^{2}$, is therefore a symplectic
subspace. The quadratic Hamiltonian ${\mathcal{H}}_{L}$ in (2.15) reads
$2\pi\sum_{j\in{\mathbb{Z}}\setminus\\{0\\}}\frac{\Omega_{j}(\gamma)}{2}(\alpha_{j}^{2}+\beta_{j}^{2})\,.$
(2.30)
In view of (2.28), the involution ${\mathcal{S}}$ defined in (2.3) reads
$(\alpha_{j},\beta_{j})\mapsto(\alpha_{j},-\beta_{j})$, $\forall
j\in{\mathbb{Z}}\setminus\\{0\\}$.
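The inversion formulas (2.29) can be tested numerically against the representation (2.28). The sketch below uses illustrative positive weights with $M_{j}=M_{-j}$ (not the actual $M_{j}$ of (2.16): the identity is purely algebraic and holds for any even positive sequence), and the rectangle rule on $N$ equispaced nodes, which is exact on trigonometric polynomials of degree $<N/2$.

```python
import numpy as np

rng = np.random.default_rng(0)
jmax = 4
js = [j for j in range(-jmax, jmax + 1) if j != 0]
M = {j: 1.0 + 0.3 * abs(j) for j in js}            # illustrative, even in j
alpha = {j: rng.normal() for j in js}
beta  = {j: rng.normal() for j in js}

N = 512                                            # quadrature nodes on T
x = 2 * np.pi * np.arange(N) / N
# Representation (2.28):
eta  = sum(M[j]   * (alpha[j] * np.cos(j*x) - beta[j]  * np.sin(j*x)) for j in js)
zeta = sum(1/M[j] * (beta[j]  * np.cos(j*x) + alpha[j] * np.sin(j*x)) for j in js)

def L2(f, g):
    # L^2(T) pairing; rectangle rule, exact for these trigonometric polynomials
    return 2 * np.pi * np.mean(f * g)

# Recovery via (2.29):
ok = all(
    np.isclose(alpha[j], (L2(eta, np.cos(j*x))/M[j] + M[j]*L2(zeta, np.sin(j*x))) / (2*np.pi))
    and
    np.isclose(beta[j],  (M[j]*L2(zeta, np.cos(j*x)) - L2(eta, np.sin(j*x))/M[j]) / (2*np.pi))
    for j in js)
print(ok)  # True
```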
We may also enumerate the independent variables
$(\alpha_{j},\beta_{j})_{j\in{\mathbb{Z}}\setminus\\{0\\}}$ as
$\big{(}\alpha_{-n},\beta_{-n},\alpha_{n},\beta_{n}\big{)}$,
$n\in{\mathbb{N}}$. Thus the phase space
$\mathfrak{H}:=L^{2}_{0}\times\dot{L}^{2}$ of (2.10) decomposes as the direct
sum
$\mathfrak{H}=\sum_{n\in{\mathbb{N}}}V_{n,+}\oplus V_{n,-}$ (2.31)
of $2$-dimensional symplectic subspaces
$\displaystyle V_{n,+}$ $\displaystyle:=\left\\{\begin{pmatrix}\eta\\\
\zeta\end{pmatrix}=\begin{pmatrix}M_{n}(\alpha_{n}\cos(nx)-\beta_{n}\sin(nx))\\\
M_{n}^{-1}(\beta_{n}\cos(nx)+\alpha_{n}\sin(nx))\end{pmatrix}\,,(\alpha_{n},\beta_{n})\in{\mathbb{R}}^{2}\right\\}\,,$
(2.32) $\displaystyle V_{n,-}$ $\displaystyle:=\left\\{\begin{pmatrix}\eta\\\
\zeta\end{pmatrix}=\begin{pmatrix}M_{n}(\alpha_{-n}\cos(nx)+\beta_{-n}\sin(nx))\\\
M_{n}^{-1}(\beta_{-n}\cos(nx)-\alpha_{-n}\sin(nx))\end{pmatrix}\,,(\alpha_{-n},\beta_{-n})\in{\mathbb{R}}^{2}\right\\}\,,$
(2.33)
which are invariant for the linear Hamiltonian system (2.14), namely
$J{\bf{\Omega}}_{W}:V_{n,\sigma}\mapsto V_{n,\sigma}$. Note that the
involution ${\mathcal{S}}$ defined in (2.3) and the translation operator
$\tau_{\varsigma}$ in (2.5) leave the subspaces $V_{n,\sigma}$,
$\sigma\in\\{\pm\\}$, invariant.
### 2.2 Tangential and normal subspaces of the phase space
We split the phase space $\mathfrak{H}$ in (2.31) into a direct sum of
tangential and normal symplectic subspaces
$\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\intercal}$ and
$\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$. Note that the main part of
the solutions (1.21) that we shall obtain in Theorem 1.2 is the component in
the tangential subspace $\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\intercal}$,
whereas the component in the normal subspace
$\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$ is much smaller.
Recalling the definition of the sets ${\mathbb{S}}^{+}$ and $\Sigma$ defined
in (1.17) respectively (1.18), we split
$\mathfrak{H}=\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\intercal}\oplus\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$
(2.34)
where $\mathfrak{H}^{\intercal}_{{\mathbb{S}}^{+},\Sigma}$ is the finite
dimensional tangential subspace
$\mathfrak{H}^{\intercal}_{{\mathbb{S}}^{+},\Sigma}:=\sum_{a=1}^{\nu}V_{{\overline{n}}_{a},\sigma_{a}}$
(2.35)
and $\mathfrak{H}^{\angle}_{{\mathbb{S}}^{+},\Sigma}$ is the normal subspace
defined as its symplectic orthogonal
$\mathfrak{H}^{\angle}_{{\mathbb{S}}^{+},\Sigma}:=\sum_{a=1}^{\nu}V_{{\overline{n}}_{a},-\sigma_{a}}\oplus\sum_{n\in{\mathbb{N}}\setminus{\mathbb{S}}^{+}}\big{(}V_{n,+}\oplus
V_{n,-}\big{)}\,.$ (2.36)
Both the subspaces $\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\intercal}$ and
$\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$ are symplectic. We denote by
$\Pi_{{\mathbb{S}}^{+},\Sigma}^{\intercal}$ and
$\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}$ the symplectic projections on the
subspaces $\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\intercal}$ and
$\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$, respectively. The
restricted symplectic form
${\mathcal{W}}|_{\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}}$ is
represented by the symplectic structure
$J_{\angle}^{-1}:\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\to\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\,,\
\quad
J_{\angle}^{-1}:=\Pi^{L^{2}}_{\angle}\,J^{-1}_{|\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}}\,,$
where $\Pi^{L^{2}}_{\angle}$ is the $L^{2}$-projector on the subspace
$\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$. Its associated Poisson
tensor is
$J_{\angle}:\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\to\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\,,\quad
J_{\angle}:=\Pi^{\angle}_{{\mathbb{S}}^{+},\Sigma}\,J_{|\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}}\,.$
By Lemma 2.6 in [7], we have $J^{-1}_{\angle}\,J_{\angle}=$
$J_{\angle}\,J^{-1}_{\angle}=$ ${\rm
Id}_{\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}}$.
Action-angle coordinates. We introduce action-angle coordinates on the
tangential subspace $\mathfrak{H}^{\intercal}_{{\mathbb{S}}^{+},\Sigma}$
defined in (2.35). Given the sets ${\mathbb{S}}^{+}$ and $\Sigma$ defined
respectively in (1.17) and (1.18), we define the set
${\mathbb{S}}:=\\{\overline{\jmath}_{1},\ldots,\overline{\jmath}_{\nu}\\}\subset{\mathbb{Z}}\,\setminus\\{0\\}\,,\quad\overline{\jmath}_{a}:=\sigma_{a}\overline{n}_{a}\,,\quad
a=1,\ldots,\nu\,,$ (2.37)
and the action-angle coordinates $(\theta_{j},I_{j})_{j\in{\mathbb{S}}}$, by
the relations
$\alpha_{j}=\sqrt{\frac{1}{\pi}(I_{j}+\xi_{j})}\cos(\theta_{j})\,,\
\beta_{j}=-\sqrt{\frac{1}{\pi}(I_{j}+\xi_{j})}\sin(\theta_{j})\,,\quad\xi_{j}>0\,,\
|I_{j}|<\xi_{j}\,,\ \forall j\in{\mathbb{S}}\,.$ (2.38)
In view of (2.34)-(2.36), we represent any function of the phase space
$\mathfrak{H}$ as
$\displaystyle A(\theta,I,w)$ $\displaystyle:=v^{\intercal}(\theta,I)+w\,,$
$\displaystyle:=\frac{1}{\sqrt{\pi}}\sum_{j\in{\mathbb{S}}}\left[\begin{pmatrix}M_{j}\sqrt{I_{j}+\xi_{j}}\cos(\theta_{j})\\\
-M_{j}^{-1}\sqrt{I_{j}+\xi_{j}}\sin(\theta_{j})\end{pmatrix}\cos(jx)+\begin{pmatrix}M_{j}\sqrt{I_{j}+\xi_{j}}\sin(\theta_{j})\\\
M_{j}^{-1}\sqrt{I_{j}+\xi_{j}}\cos(\theta_{j})\end{pmatrix}\sin(jx)\right]+w$
$\displaystyle=\frac{1}{\sqrt{\pi}}\sum_{j\in{\mathbb{S}}}\left[\begin{pmatrix}M_{j}\sqrt{I_{j}+\xi_{j}}\cos(\theta_{j}-jx)\\\
-M_{j}^{-1}\sqrt{I_{j}+\xi_{j}}\sin(\theta_{j}-jx)\end{pmatrix}\right]+w$
(2.39)
where $\theta:=(\theta_{j})_{j\in{\mathbb{S}}}\in{\mathbb{T}}^{\nu}$,
$I:=(I_{j})_{j\in{\mathbb{S}}}\in{\mathbb{R}}^{\nu}$ and
$w\in\mathfrak{H}^{\angle}_{{\mathbb{S}}^{+},\Sigma}$.
In view of (2.39), the involution ${\mathcal{S}}$ in (2.3) reads
$\vec{\mathcal{S}}:(\theta,I,w)\mapsto\left(-\theta,I,{\mathcal{S}}w\right)\,,$
(2.40)
the translation operator $\tau_{\varsigma}$ in (2.5) reads
$\vec{\tau}_{\varsigma}:(\theta,\,I,\,w)\mapsto(\theta-\vec{\jmath}\varsigma,\,I,\,\tau_{\varsigma}w),\quad\forall\varsigma\in{\mathbb{R}}\,,$
(2.41)
where
$\vec{\jmath}:=(j)_{j\in{\mathbb{S}}}=(\overline{\jmath}_{1},\ldots,\overline{\jmath}_{\nu})\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,,$
(2.42)
and the symplectic 2-form (2.11) becomes
${\cal W}=\sum_{j\in{\mathbb{S}}}({\rm d}\theta_{j}\wedge{\rm
d}I_{j})\,\oplus\,{\cal
W}|_{\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}}\,.$ (2.43)
We also note that ${\cal W}$ is exact, namely
${\cal W}=d\Lambda\,,\qquad{\rm
where}\qquad\Lambda_{(\theta,I,w)}[{\widehat{\theta}},{\widehat{I}},{\widehat{w}}]:=-\sum_{j\in{\mathbb{S}}}I_{j}{\widehat{\theta}}_{j}+\tfrac{1}{2}\left(J_{\angle}^{-1}w,{\widehat{w}}\right)_{L^{2}}$
(2.44)
is the associated Liouville 1-form. Finally, given a Hamiltonian
$K\colon{\mathbb{T}}^{\nu}\times{\mathbb{R}}^{\nu}\times\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\to{\mathbb{R}}$,
the associated Hamiltonian vector field (with respect to the symplectic form
(2.43)) is
$X_{K}:=\big{(}\partial_{I}K,-\partial_{\theta}K,J_{\angle}\nabla_{w}K\big{)}=\big{(}\partial_{I}K,-\partial_{\theta}K,\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}J\nabla_{w}K\big{)}\,,$
where $\nabla_{w}K$ denotes the $L^{2}$ gradient of $K$ with respect to
$w\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$.
Tangential and normal subspaces in complex variables. Each $2$-dimensional
symplectic subspace $V_{n,\sigma}$, $n\in{\mathbb{N}}$, $\sigma=\pm 1$,
defined in (2.32)-(2.33) is isomorphic, through the linear map ${\cal M}{\cal
C}$ defined in (2.27), to the complex subspace
${\bf H}_{j}:=\Big{\\{}\begin{pmatrix}z_{j}e^{{\rm i}jx}\\\
\overline{z_{j}}e^{-{\rm i}jx}\end{pmatrix}\,,\
z_{j}\in{\mathbb{C}}\Big{\\}}\qquad{\rm with}\qquad
j=n\sigma\in{\mathbb{Z}}\,.$
Denoting by $\Pi_{j}$ the $L^{2}$-projection on ${\bf H}_{j}$, we have that
$\Pi_{V_{n,\sigma}}={\cal M}{\cal C}\,\Pi_{j}\,({\cal M}{\cal C})^{-1}$. Thus
${\cal M}{\cal C}$ is an isomorphism between the tangential subspace
$\mathfrak{H}^{\intercal}_{{\mathbb{S}}^{+},\Sigma}$ defined in (2.35) and
${\bf H}_{\mathbb{S}}:=\Big{\\{}\begin{pmatrix}z\\\
\overline{z}\end{pmatrix}\,:\,z(x)=\sum_{j\in{\mathbb{S}}}z_{j}e^{{\rm
i}jx}\Big{\\}}$
and between the normal subspace
$\mathfrak{H}^{\angle}_{{\mathbb{S}}^{+},\Sigma}$ defined in (2.36) and
${\bf H}_{{\mathbb{S}}_{0}}^{\bot}:=\Big{\\{}\begin{pmatrix}z\\\
\overline{z}\end{pmatrix}\,:\,z(x)=\sum_{j\in{\mathbb{S}}_{0}^{c}}z_{j}e^{{\rm
i}jx}\in
L^{2}\Big{\\}}\,,\quad{\mathbb{S}}_{0}^{c}:={\mathbb{Z}}\setminus({\mathbb{S}}\cup\\{0\\})\,.$
(2.45)
Denoting by $\Pi_{{\mathbb{S}}}^{\intercal}$,
$\Pi_{{\mathbb{S}}_{0}}^{\perp}$, the $L^{2}$-orthogonal projections on the
subspaces ${\bf H}_{\mathbb{S}}$ and ${\bf H}_{{\mathbb{S}}_{0}}^{\perp}$, we
have that
$\Pi_{{\mathbb{S}}^{+},\Sigma}^{\intercal}={\cal M}{\cal
C}\,\Pi_{{\mathbb{S}}}^{\intercal}\,({\cal M}{\cal
C})^{-1}\,,\quad\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}={\cal M}{\cal
C}\,\Pi_{{\mathbb{S}}_{0}}^{\perp}\,({\cal M}{\cal C})^{-1}\,.$ (2.46)
From this analysis, it follows that (cf. Lemma 2.9 in [7])
$\left(v^{\intercal},{\bf{\Omega}}_{W}w\right)_{L^{2}}=0\ ,\qquad\forall
v^{\intercal}\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\intercal},\ \ \forall
w\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\ .$ (2.47)
Notation. The notation $a\lesssim_{s}b$ means that $a\leq C(s)b$ for some positive
constant $C(s)$. We denote ${\mathbb{N}}:=\\{1,2,\ldots\\}$ and
${\mathbb{N}}_{0}:=\\{0\\}\cup{\mathbb{N}}$.
## 3 Functional setting
In this section we report basic notation, definitions, and results used
throughout the paper, concerning traveling waves, pseudo-differential
operators, tame
operators, and the algebraic properties of Hamiltonian, reversible and
momentum preserving operators.
We consider functions $u({\varphi},x)\in
L^{2}\left({\mathbb{T}}^{\nu+1},{\mathbb{C}}\right)$ depending on the space
variable $x\in{\mathbb{T}}={\mathbb{T}}_{x}$ and the angles
${\varphi}\in{\mathbb{T}}^{\nu}={\mathbb{T}}_{\varphi}^{\nu}$ (so that
${\mathbb{T}}^{\nu+1}={\mathbb{T}}_{\varphi}^{\nu}\times{\mathbb{T}}_{x}$)
which we expand in Fourier series as
$u({\varphi},x)=\sum_{j\in{\mathbb{Z}}}u_{j}({\varphi})e^{{\rm
i}\,jx}=\sum_{\ell\in{\mathbb{Z}}^{\nu},j\in{\mathbb{Z}}}u_{\ell,j}e^{{\rm
i}(\ell\cdot{\varphi}+jx)}\,.$ (3.1)
We also consider real valued functions $u({\varphi},x)\in{\mathbb{R}}$, as
well as vector valued functions $u({\varphi},x)\in{\mathbb{C}}^{2}$ (or
$u({\varphi},x)\in{\mathbb{R}}^{2}$). When no confusion arises, we denote
simply by $L^{2}$, $L^{2}({\mathbb{T}}^{\nu+1})$,
$L_{x}^{2}:=L^{2}({\mathbb{T}}_{x})$,
$L_{\varphi}^{2}:=L^{2}({\mathbb{T}}^{\nu})$ the spaces of real/complex
valued, scalar/vector valued $L^{2}$-functions.
#### Quasi-periodic traveling waves.
We first provide the following definition:
###### Definition 3.1.
(Quasi-periodic traveling waves) Let
$\vec{\jmath}:=(\overline{\jmath}_{1},\ldots,\overline{\jmath}_{\nu})\in{\mathbb{Z}}^{\nu}$
be the vector defined in (2.42). A function $u({\varphi},x)$ is called a
quasi-periodic traveling wave if it has the form
$u({\varphi},x)=U({\varphi}-\vec{\jmath}x)$ where
$U:{\mathbb{T}}^{\nu}\to{\mathbb{C}}^{K}$, $K\in{\mathbb{N}}$, is a
$(2\pi)^{\nu}$-periodic function.
Comparing with Definition 1.1, we find it convenient to call quasi-periodic
traveling wave both the function $u({\varphi},x)=U({\varphi}-\vec{\jmath}x)$
and the function of time $u(\omega t,x)=U(\omega t-\vec{\jmath}x)$.
Quasi-periodic traveling waves are characterized by the relation
$u({\varphi}-\vec{\jmath}\varsigma,\cdot)=\tau_{\varsigma}u$ for any
$\varsigma\in{\mathbb{R}}$, where $\tau_{\varsigma}$ is the translation
operator in (2.5).
Product and composition of quasi-periodic traveling waves are quasi-periodic
traveling waves. Expanded in Fourier series as in (3.1), a quasi-periodic
traveling wave has the form
$u({\varphi},x)=\sum_{\ell\in{\mathbb{Z}}^{\nu},j\in{\mathbb{Z}},j+\vec{\jmath}\cdot\ell=0}u_{\ell,j}e^{{\rm
i}(\ell\cdot{\varphi}+jx)}\,,$ (3.2)
namely, comparing with Definition 3.1,
$u({\varphi},x)=U({\varphi}-\vec{\jmath}x)\,,\quad
U(\psi)=\sum_{\ell\in{\mathbb{Z}}^{\nu}}U_{\ell}e^{{\rm
i}\ell\cdot\psi}\,,\quad U_{\ell}=u_{\ell,-\vec{\jmath}\cdot\ell}\,.$ (3.3)
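The Fourier-support characterization (3.2) and the relation $u({\varphi}-\vec{\jmath}\varsigma,\cdot)=\tau_{\varsigma}u$ can be checked on a sample. The Python sketch below is illustrative only: the vector $\vec{\jmath}\in{\mathbb{Z}}^{2}$ and the random coefficients, supported on $j+\vec{\jmath}\cdot\ell=0$, are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
jvec = np.array([2, -5])                              # illustrative jvec in Z^2
ells = [np.array(l) for l in [(1, 0), (0, 1), (1, 1), (-2, 1), (3, -1)]]
coeffs = rng.normal(size=len(ells)) + 1j * rng.normal(size=len(ells))

def u(phi, x):
    # Sum over (ell, j) with j = -jvec . ell, the support condition in (3.2).
    return sum(c * np.exp(1j * (l @ phi - (jvec @ l) * x))
               for c, l in zip(coeffs, ells))

# Characterization of traveling waves: u(phi - jvec*s, x) = (tau_s u)(phi, x).
phi = np.array([0.7, -1.3])
ok = all(np.isclose(u(phi - jvec * s, 0.4), u(phi, 0.4 + s))
         for s in [0.0, 0.3, -1.1, 2.7])
print(ok)  # True
```

Each term satisfies the relation exactly because its phase $\ell\cdot{\varphi}+jx$ shifts by $-\vec{\jmath}\cdot\ell\,\varsigma=j\varsigma$ under both operations.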
The quasi-periodic traveling waves $u({\varphi},x)=U({\varphi}-\vec{\jmath}x)$
where $U(\cdot)$ belongs to the Sobolev space
$H^{s}({\mathbb{T}}^{\nu},{\mathbb{C}}^{K})$ in (1.16) (with values in
${\mathbb{C}}^{K}$, $K\in{\mathbb{N}}$), form a subspace of the Sobolev space
$H^{s}({\mathbb{T}}^{\nu+1})=\Big{\\{}u=\sum_{(\ell,j)\in{\mathbb{Z}}^{\nu+1}}u_{\ell,j}\,e^{{\rm
i}(\ell\cdot{\varphi}+jx)}\,:\,\|u\|_{s}^{2}:=\sum_{(\ell,j)\in{\mathbb{Z}}^{\nu+1}}|u_{\ell,j}|^{2}\langle\ell,j\rangle^{2s}<\infty\Big{\\}}$
(3.4)
where $\langle\ell,j\rangle:=\max\\{1,|\ell|,|j|\\}$. Note the equivalence of
the norms
$\|u\|_{H^{s}({\mathbb{T}}^{\nu}_{\varphi}\times{\mathbb{T}}_{x})}\simeq_{s}\|U\|_{H^{s}({\mathbb{T}}^{\nu})}$.
For $s\geq s_{0}:=\big{[}\frac{\nu+1}{2}\big{]}+1\in{\mathbb{N}}$ one has
$H^{s}({\mathbb{T}}^{\nu+1})\subset C({\mathbb{T}}^{\nu+1})$, and
$H^{s}({\mathbb{T}}^{\nu+1})$ is an algebra. Throughout the paper we denote by
$\|\ \|_{s}$ both the Sobolev norms in (1.16) and (3.4).
For $K\geq 1$ we define the smoothing operator $\Pi_{K}$ on the quasi-periodic
traveling waves
$\Pi_{K}:u=\sum_{\ell\in{\mathbb{Z}}^{\nu},\,j\in{\mathbb{S}}_{0}^{c},\,j+\vec{\jmath}\cdot\ell=0}u_{\ell,j}e^{{\rm
i}(\ell\cdot{\varphi}+jx)}\mapsto\Pi_{K}u=\sum_{\braket{\ell}\leq
K,\,j\in{\mathbb{S}}_{0}^{c},\,j+\vec{\jmath}\cdot\ell=0}u_{\ell,j}e^{{\rm
i}(\ell\cdot{\varphi}+jx)}\,,$ (3.5)
and $\Pi_{K}^{\perp}:={\rm Id}-\Pi_{K}$. Writing a traveling wave as in (3.3),
the projector $\Pi_{K}$ in (3.5) is equal to
$(\Pi_{K}u)({\varphi},x)=U_{K}({\varphi}-\vec{\jmath}x)\,,\quad
U_{K}(\psi):=\sum_{\ell\in{\mathbb{Z}}^{\nu},\,\langle\ell\rangle\leq
K}U_{\ell}e^{{\rm i}\ell\cdot\psi}\,.$
For a function $u(\varphi,x)$ we define the averages
$\langle
u\rangle_{\varphi,x}:=\frac{1}{(2\pi)^{\nu+1}}\int_{{\mathbb{T}}^{\nu+1}}u(\varphi,x)\,{\rm
d}{\varphi}\,{\rm d}{x}\,,\quad\langle
u\rangle_{\varphi}(x):=\frac{1}{(2\pi)^{\nu}}\int_{{\mathbb{T}}^{\nu+1}}u(\varphi,x)\,{\rm
d}{\varphi}\,,$ (3.6)
and we note that, if $u(\varphi,x)$ is a quasi-periodic traveling wave then
$\langle u\rangle_{\varphi}=\langle u\rangle_{\varphi,x}$.
Whitney-Sobolev functions. Throughout the paper we consider families of Sobolev
functions $\lambda\mapsto u(\lambda)\in H^{s}({\mathbb{T}}^{\nu+1})$ and
$\lambda\mapsto U(\lambda)\in H^{s}({\mathbb{T}}^{\nu})$ which are
$k_{0}$-times differentiable in the sense of Whitney with respect to the
parameter $\lambda:=(\omega,\gamma)\in
F\subset{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ where
$F\subset{\mathbb{R}}^{\nu+1}$ is a closed set. We refer to Definition 2.1 in
[2], for the definition of a Whitney-Sobolev function $u:F\to H^{s}$ where
$H^{s}$ may be either the Hilbert space
$H^{s}({\mathbb{T}}^{\nu}\times{\mathbb{T}})$ or $H^{s}({\mathbb{T}}^{\nu})$.
Here we mention that, given $\upsilon\in(0,1)$, we can identify a Whitney-
Sobolev function $u:F\to H^{s}$ with $k_{0}$ derivatives with the equivalence
class of functions $f\in
W^{k_{0},\infty,\upsilon}({\mathbb{R}}^{\nu+1},H^{s})/\sim$ with respect to
the equivalence relation $f\sim g$ when
$\partial_{\lambda}^{j}f(\lambda)=\partial_{\lambda}^{j}g(\lambda)$ for all
$\lambda\in F$, $\left|j\right|\leq k_{0}-1$, with equivalence of the norms
$\|u\|_{s,F}^{k_{0},\upsilon}\sim_{\nu,k_{0}}\left\|u\right\|_{W^{k_{0},\infty,\upsilon}({\mathbb{R}}^{\nu+1},H^{s})}:=\sum_{\left|\alpha\right|\leq
k_{0}}\upsilon^{\left|\alpha\right|}\|\partial_{\lambda}^{\alpha}u\|_{L^{\infty}({\mathbb{R}}^{\nu+1},H^{s})}\,.$
The key result is the Whitney extension theorem, which associates to a
Whitney-Sobolev function $u:F\to H^{s}$ with $k_{0}$-derivatives a function
${\widetilde{u}}:{\mathbb{R}}^{\nu+1}\to H^{s}$, ${\widetilde{u}}$ in
$W^{k_{0},\infty}({\mathbb{R}}^{\nu+1},H^{s})$ (independently of the target
Sobolev space $H^{s}$) with an equivalent norm. For the sake of simplicity we
often denote $\|\ \|_{s,F}^{k_{0},\upsilon}=\|\ \|_{s}^{k_{0},\upsilon}$.
Thanks to this equivalence, all the tame estimates which hold for Sobolev
spaces carry over to Whitney-Sobolev functions. For example, the following
classical tame estimate for the product holds (see e.g. Lemma 2.4 in [2]):
for all $s\geq s_{0}>(\nu+1)/2$,
$\|uv\|_{s}^{k_{0},\upsilon}\leq
C(s,k_{0})\|u\|_{s}^{k_{0},\upsilon}\|v\|_{s_{0}}^{k_{0},\upsilon}+C(s_{0},k_{0})\|u\|_{s_{0}}^{k_{0},\upsilon}\|v\|_{s}^{k_{0},\upsilon}\,.$
(3.7)
Moreover the following estimates hold for the smoothing operators defined in
(3.5): for any quasi-periodic traveling wave $u$
$\|\Pi_{K}u\|_{s}^{k_{0},\upsilon}\leq
K^{\alpha}\|u\|_{s-\alpha}^{k_{0},\upsilon}\,,\ 0\leq\alpha\leq
s\,,\quad\|\Pi_{K}^{\perp}u\|_{s}^{k_{0},\upsilon}\leq
K^{-\alpha}\|u\|_{s+\alpha}^{k_{0},\upsilon}\,,\ \alpha\geq 0\,.$ (3.8)
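The first bound in (3.8) can be seen directly from (3.5); here is a heuristic sketch, suppressing constants coming from the norm convention and from the momentum constraint $j=-\vec{\jmath}\cdot\ell$ (which makes $\braket{\ell,j}$ comparable to $\braket{\ell}$ for traveling waves):

```latex
\|\Pi_{K}u\|_{s}^{2}
  \simeq \sum_{\braket{\ell}\leq K,\; j=-\vec{\jmath}\cdot\ell} \braket{\ell}^{2s}\,|u_{\ell,j}|^{2}
  \leq K^{2\alpha} \sum_{\braket{\ell}\leq K,\; j=-\vec{\jmath}\cdot\ell} \braket{\ell}^{2(s-\alpha)}\,|u_{\ell,j}|^{2}
  \lesssim K^{2\alpha}\,\|u\|_{s-\alpha}^{2}\,.
```

The bound for $\Pi_{K}^{\perp}$ is symmetric: on $\braket{\ell}>K$ one writes $\braket{\ell}^{2s}\leq K^{-2\alpha}\braket{\ell}^{2(s+\alpha)}$.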
We also state a standard Moser tame estimate for the nonlinear composition
operator, see e.g. Lemma 2.6 in [2],
$u({\varphi},x)\mapsto{\mathtt{f}}(u)({\varphi},x):=f({\varphi},x,u({\varphi},x))$.
Since the variables $({\varphi},x)=:y$ have the same role, we state it for a
generic Sobolev space $H^{s}({\mathbb{T}}^{d})$.
###### Lemma 3.2.
(Composition operator) Let
$f\in{\mathcal{C}}^{\infty}({\mathbb{T}}^{d}\times{\mathbb{R}},{\mathbb{R}})$.
If $u(\lambda)\in H^{s}({\mathbb{T}}^{d})$ is a family of Sobolev functions
satisfying $\|u\|_{s_{0}}^{k_{0},\upsilon}\leq 1$, then, for all $s\geq
s_{0}:=(d+1)/2$,
$\|{\mathtt{f}}(u)\|_{s}^{k_{0},\upsilon}\leq
C(s,k_{0},f)\big{(}1+\|u\|_{s}^{k_{0},\upsilon}\big{)}\,.$
If $f(\varphi,x,0)=0$ then $\|{\mathtt{f}}(u)\|_{s}^{k_{0},\upsilon}\leq
C(s,k_{0},f)\|u\|_{s}^{k_{0},\upsilon}$.
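As a simple illustration (not taken from the source), for the quadratic nonlinearity $f(\varphi,x,u)=u^{2}$ the Moser bound reduces to the product estimate (3.7):

```latex
% with \|u\|_{s_0}^{k_0,\upsilon} \le 1 and f(\varphi,x,0)=0:
\|u^{2}\|_{s}^{k_{0},\upsilon}
  \leq C(s,k_{0})\|u\|_{s}^{k_{0},\upsilon}\|u\|_{s_{0}}^{k_{0},\upsilon}
      + C(s_{0},k_{0})\|u\|_{s_{0}}^{k_{0},\upsilon}\|u\|_{s}^{k_{0},\upsilon}
  \leq C(s,s_{0},k_{0})\,\|u\|_{s}^{k_{0},\upsilon}\,,
```

which is the second estimate of Lemma 3.2 in this special case.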
#### Constant transport equation on quasi-periodic traveling waves.
Let
${\mathtt{m}}:{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]\to{\mathbb{R}}$,
$(\omega,\gamma)\mapsto{\mathtt{m}}(\omega,\gamma)$ be a real function. For
any $(\omega,\gamma)$ in
${\mathtt{T}}{\mathtt{C}}({\mathtt{m}};\upsilon,\tau):=\big{\\{}(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]\,:\,|\omega\cdot\ell+{\mathtt{m}}\,j|\geq\upsilon\braket{\ell}^{-\tau},\forall(\ell,j)\in{\mathbb{Z}}^{\nu+1}\setminus\\{0\\}\,,\text{
with }\vec{\jmath}\cdot\ell+j=0\big{\\}}\,,$ (3.9)
for a quasi-periodic traveling wave $u({\varphi},x)$ with zero average
with respect to ${\varphi}$ (and therefore with respect to $({\varphi},x)$),
the transport equation
$(\omega\cdot\partial_{\varphi}+{\mathtt{m}}\,\partial_{x})v=u$ has the quasi-
periodic traveling wave solution (see (3.2))
$(\omega\cdot\partial_{\varphi}+{\mathtt{m}}\,\partial_{x})^{-1}u:=\sum_{(\ell,j)\in{\mathbb{Z}}^{\nu+1}\setminus\\{0\\}\atop\vec{\jmath}\cdot\ell+j=0}\frac{u_{\ell,j}}{{\rm
i}(\omega\cdot\ell+{\mathtt{m}}\,j)}e^{{\rm i}(\ell\cdot{\varphi}+jx)}\,.$
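That this series solves the transport equation can be checked term by term:

```latex
(\omega\cdot\partial_{\varphi}+{\mathtt{m}}\,\partial_{x})\,
\frac{u_{\ell,j}}{{\rm i}(\omega\cdot\ell+{\mathtt{m}}\,j)}\,e^{{\rm i}(\ell\cdot\varphi+jx)}
  = \frac{{\rm i}(\omega\cdot\ell+{\mathtt{m}}\,j)}{{\rm i}(\omega\cdot\ell+{\mathtt{m}}\,j)}\,
    u_{\ell,j}\,e^{{\rm i}(\ell\cdot\varphi+jx)}
  = u_{\ell,j}\,e^{{\rm i}(\ell\cdot\varphi+jx)}\,,
```

and summing over $(\ell,j)\neq 0$ with $\vec{\jmath}\cdot\ell+j=0$ recovers $u$, since the zero mode is absent by the zero-average assumption and each denominator is bounded below by the non-resonance condition in (3.9).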
For any $(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$,
we define its extension
$(\omega\cdot\partial_{\varphi}+{\mathtt{m}}\,\partial_{x})_{\rm
ext}^{-1}u({\varphi},x):=\sum_{(\ell,j)\in{\mathbb{Z}}^{\nu+1}\atop\vec{\jmath}\cdot\ell+j=0}\frac{\chi((\omega\cdot\ell+{\mathtt{m}}\,j)\upsilon^{-1}\braket{\ell}^{\tau})}{{\rm
i}(\omega\cdot\ell+{\mathtt{m}}\,j)}u_{\ell,j}e^{{\rm
i}(\ell\cdot{\varphi}+jx)}\,,$ (3.10)
where $\chi\in{\mathcal{C}}^{\infty}({\mathbb{R}},{\mathbb{R}})$ is an even
positive ${\mathcal{C}}^{\infty}$ cut-off function such that
$\chi(\xi)=\begin{cases}0&\text{ if }\ \left|\xi\right|\leq\frac{1}{3}\\\
1&\text{ if }\
\left|\xi\right|\geq\frac{2}{3}\end{cases}\,,\qquad\partial_{\xi}\chi(\xi)>0\quad\forall\,\xi\in(\tfrac{1}{3},\tfrac{2}{3})\,.$
(3.11)
Note that $(\omega\cdot\partial_{\varphi}+{\mathtt{m}}\,\partial_{x})_{\rm
ext}^{-1}u=(\omega\cdot\partial_{\varphi}+{\mathtt{m}}\,\partial_{x})^{-1}u$
for all
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}({\mathtt{m}};\upsilon,\tau)$.
If $|{\mathtt{m}}|^{k_{0},\upsilon}\leq C$ then the following estimate holds
$\|(\omega\cdot\partial_{\varphi}+{\mathtt{m}}\,\partial_{x})_{\rm
ext}^{-1}u\|_{s,{\mathbb{R}}^{\nu+1}}^{k_{0},\upsilon}\leq
C(k_{0})\upsilon^{-1}\|u\|_{s+\mu,{\mathbb{R}}^{\nu+1}}^{k_{0},\upsilon}\,,\quad\mu:=k_{0}+\tau(k_{0}+1)\,.$
(3.12)
Furthermore one has the estimate, for any $\omega\in{\mathbb{R}}^{\nu}$,
${\mathtt{m}}_{1},{\mathtt{m}}_{2}\in{\mathbb{R}}$ and $s\geq 0$
$\left\|\left((\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1}\,\partial_{x})_{\rm
ext}^{-1}-(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{2}\,\partial_{x})_{\rm
ext}^{-1}\right)u\right\|_{s}\leq
C\,\upsilon^{-2}\,\left|{\mathtt{m}}_{1}-{\mathtt{m}}_{2}\right|\,\left\|u\right\|_{s+2\tau+1}\,.$
(3.13)
#### Linear operators.
We consider ${\varphi}$-dependent families of linear operators
$A:{\mathbb{T}}^{\nu}\mapsto{\mathcal{L}}(L^{2}({\mathbb{T}}_{x}))$,
${\varphi}\mapsto A({\varphi})$, acting on subspaces of
$L^{2}({\mathbb{T}}_{x})$. We also regard $A$ as an operator (which for
simplicity we denote by $A$ as well) that acts on functions $u({\varphi},x)$
of space and time, that is
$(Au)({\varphi},x):=\left(A({\varphi})u({\varphi},\,\cdot\,)\right)(x)\,.$
(3.14)
The action of an operator $A$ as in (3.14) on a scalar function
$u({\varphi},x)\in L^{2}$ expanded as in (3.1) is
$\displaystyle Au({\varphi},x)$
$\displaystyle=\sum_{j,j^{\prime}\in{\mathbb{Z}}}A_{j}^{j^{\prime}}({\varphi})u_{j^{\prime}}({\varphi})e^{{\rm
i}\,jx}=\sum_{j,j^{\prime}\in{\mathbb{Z}}}\sum_{\ell,\ell^{\prime}\in{\mathbb{Z}}^{\nu}}A_{j}^{j^{\prime}}(\ell-\ell^{\prime})u_{\ell^{\prime},j^{\prime}}e^{{\rm
i}\left(\ell\cdot{\varphi}+jx\right)}\,.$ (3.15)
We identify an operator $A$ with its matrix
$\big{(}A_{j}^{j^{\prime}}(\ell-\ell^{\prime})\big{)}_{j,j^{\prime}\in{\mathbb{Z}},\ell,\ell^{\prime}\in{\mathbb{Z}}^{\nu}}$,
which is Töplitz with respect to the index $\ell$. In this paper we always
consider Töplitz operators as in (3.14), (3.15).
#### Real operators.
A linear operator $A$ is real if $A=\overline{A}$, where
$\overline{A}$ is defined by $\overline{A}(u):=\overline{A(\overline{u})}$. We
represent a real operator acting on $(\eta,\zeta)$ belonging to (a subspace
of) $L^{2}({\mathbb{T}}_{x},{\mathbb{R}}^{2})$ by a matrix
${\mathcal{R}}=\begin{pmatrix}A&B\\\ C&D\end{pmatrix}$ (3.16)
where $A,B,C,D$ are real operators acting on the scalar valued components
$\eta,\zeta\in L^{2}({\mathbb{T}}_{x},{\mathbb{R}})$.
The change of coordinates (2.19) transforms a real operator ${\mathcal{R}}$
into a complex one acting on the variables $(z,\overline{z})$, given by the
matrix
${\bf
R}:={\mathcal{C}}^{-1}{\mathcal{R}}{\mathcal{C}}=\left(\begin{matrix}{\mathcal{R}}_{1}&{\mathcal{R}}_{2}\\\
\overline{\mathcal{R}}_{2}&\overline{\mathcal{R}}_{1}\end{matrix}\right)\ ,\\\
\qquad\begin{matrix}{\mathcal{R}}_{1}:=\left\\{(A+D)-{\rm
i}(B-C)\right\\}/2\,,\\\ {\mathcal{R}}_{2}:=\left\\{(A-D)+{\rm
i}(B+C)\right\\}/2\,.\end{matrix}$ (3.17)
We call _real_ a matrix operator acting on the complex variables
$(z,\overline{z})$ of the form (3.17). We shall also consider real operators
${\bf R}$ of the form (3.17) acting on subspaces of $L^{2}$.
#### Lie expansion.
Let $X({\varphi})$ be a linear operator with associated flow
$\Phi^{\tau}({\varphi})$ defined by
$\partial_{\tau}\Phi^{\tau}({\varphi})=X({\varphi})\Phi^{\tau}({\varphi})\,,\quad\Phi^{0}({\varphi})={\rm
Id}\,,\quad\tau\in[0,1]\,.$
Let $\Phi({\varphi}):=\Phi^{\tau}({\varphi})_{|\tau=1}$ denote the time-$1$
flow. Given a linear operator $A({\varphi})$, the conjugated operator
$A^{+}({\varphi}):=\Phi({\varphi})^{-1}A({\varphi})\Phi({\varphi})$ admits the
Lie expansion, for any $M\in{\mathbb{N}}_{0}$,
$\displaystyle A^{+}({\varphi})=\sum_{m=0}^{M}\frac{(-1)^{m}}{m!}{\rm
ad}_{X({\varphi})}^{m}(A({\varphi}))+R_{M}({\varphi})\,,$ (3.18)
$\displaystyle R_{M}({\varphi})$
$\displaystyle=\frac{(-1)^{M+1}}{M!}\int_{0}^{1}(1-\tau)^{M}\,(\Phi^{\tau}({\varphi}))^{-1}{\rm
ad}_{X({\varphi})}^{M+1}(A({\varphi}))\Phi^{\tau}({\varphi})\,\,{\rm
d}{\tau}\,,$
where ${\rm
ad}_{X({\varphi})}(A({\varphi})):=[X({\varphi}),A({\varphi})]=X({\varphi})A({\varphi})-A({\varphi})X({\varphi})$
and ${\rm ad}_{X({\varphi})}^{0}:={\rm Id}$.
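The expansion (3.18) is Taylor's formula at $\tau=1$ for $A(\tau):=(\Phi^{\tau})^{-1}A\Phi^{\tau}$; the key step is the derivative computation

```latex
\partial_{\tau}\big((\Phi^{\tau})^{-1}A\,\Phi^{\tau}\big)
  = -(\Phi^{\tau})^{-1}X A\,\Phi^{\tau} + (\Phi^{\tau})^{-1}A\,X\,\Phi^{\tau}
  = -(\Phi^{\tau})^{-1}\,{\rm ad}_{X}(A)\,\Phi^{\tau}\,,
```

using $\partial_{\tau}\Phi^{\tau}=X\Phi^{\tau}$ and $\partial_{\tau}(\Phi^{\tau})^{-1}=-(\Phi^{\tau})^{-1}X$; iterating gives the terms $\frac{(-1)^{m}}{m!}{\rm ad}_{X}^{m}(A)$ and the integral remainder $R_{M}$.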
In particular, for $A=\omega\cdot\partial_{\varphi}$, since
$[X({\varphi}),\omega\cdot\partial_{\varphi}]=-(\omega\cdot\partial_{\varphi}X)({\varphi})$,
we obtain
$\displaystyle\Phi({\varphi})^{-1}\circ\omega\cdot\partial_{\varphi}\circ\Phi({\varphi})=$
$\displaystyle\,\omega\cdot\partial_{\varphi}+\sum_{m=1}^{M}\frac{(-1)^{m+1}}{m!}{\rm
ad}_{X({\varphi})}^{m-1}(\omega\cdot\partial_{\varphi}X({\varphi}))$ (3.19)
$\displaystyle+\frac{(-1)^{M}}{M!}\int_{0}^{1}(1-\tau)^{M}(\Phi^{\tau}({\varphi}))^{-1}{\rm
ad}_{X({\varphi})}^{M}(\omega\cdot\partial_{\varphi}X({\varphi}))\Phi^{\tau}({\varphi})\,{\rm
d}{\tau}\,.$
For matrices of operators ${\bf X}({\varphi})$ and ${\bf A}({\varphi})$ as in
(3.17), the same formula (3.18) holds.
### 3.1 Pseudodifferential calculus
In this section we report fundamental notions of pseudodifferential calculus,
following [9].
###### Definition 3.3.
($\Psi$DO) A _pseudodifferential_ symbol $a(x,j)$ of order $m$ is the
restriction to ${\mathbb{R}}\times{\mathbb{Z}}$ of a function $a(x,\xi)$ which
is ${\mathcal{C}}^{\infty}$-smooth on ${\mathbb{R}}\times{\mathbb{R}}$,
$2\pi$-periodic in $x$, and satisfies,
$\forall\alpha,\beta\in{\mathbb{N}}_{0}$,
$|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}a(x,\xi)|\leq
C_{\alpha,\beta}\langle\xi\rangle^{m-\beta}$. We denote by $S^{m}$ the class
of symbols of order $m$ and $S^{-\infty}:=\cap_{m\geq 0}S^{m}$. To a symbol
$a(x,\xi)$ in $S^{m}$ we associate its quantization acting on a
$2\pi$-periodic function $u(x)=\sum_{j\in{\mathbb{Z}}}u_{j}\,e^{{\rm i}jx}$ as
$[{\rm Op}(a)u](x):=\sum_{j\in{\mathbb{Z}}}a(x,j)u_{j}\,e^{{\rm i}jx}\,.$
We denote by ${\rm OP}S^{m}$ the set of pseudodifferential operators of order
$m$ and ${\rm OP}S^{-\infty}:=\bigcap_{m\in{\mathbb{R}}}{\rm OP}S^{m}$. For a
matrix of pseudodifferential operators
${\bf A}=\begin{pmatrix}A_{1}&A_{2}\\\ A_{3}&A_{4}\end{pmatrix},\quad
A_{i}\in{\rm OP}S^{m},\quad i=1,\ldots,4$ (3.20)
we say that ${\bf A}\in{\rm OP}S^{m}$.
When the symbol $a(x)$ is independent of $\xi$, the operator ${\rm Op}(a)$ is
the multiplication operator by the function $a(x)$, i.e. ${\rm
Op}(a):u(x)\mapsto a(x)u(x)$. In such a case we also denote ${\rm
Op}(a)=a(x)$.
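As an elementary check of the quantization formula (illustrative, not from the source), the symbol $a(x,\xi)={\rm i}\xi$ belongs to $S^{1}$ and quantizes to $\partial_{x}$:

```latex
[{\rm Op}({\rm i}\xi)u](x)
  = \sum_{j\in{\mathbb{Z}}} {\rm i}\,j\,u_{j}\,e^{{\rm i}jx}
  = \partial_{x}u(x)\,,
```

and the symbol estimates hold since $|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}({\rm i}\xi)|\leq C_{\beta}\langle\xi\rangle^{1-\beta}$ for every $\alpha,\beta\in{\mathbb{N}}_{0}$.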
We shall use the following notation, adopted also in [1, 9, 2]. For any
$m\in{\mathbb{R}}\setminus\\{0\\}$, we set
$|D|^{m}:={\rm Op}\big{(}\chi(\xi)|\xi|^{m}\big{)}\,,$
where $\chi$ is an even, positive ${\mathcal{C}}^{\infty}$ cut-off satisfying
(3.11). We also identify the Hilbert transform ${\mathcal{H}}$, acting on the
$2\pi$-periodic functions, defined by
${\mathcal{H}}(e^{{\rm i}jx}):=-{\rm i}\,{\rm sign}\,(j)e^{{\rm
i}jx}\,\quad\forall j\neq 0\,,\quad{\mathcal{H}}(1):=0\,,$ (3.21)
with the Fourier multiplier ${\rm Op}(-{\rm i}\,{\rm sign}\,(\xi)\chi(\xi))$.
Similarly we regard the operator
$\partial_{x}^{-1}\left[e^{{\rm i}jx}\right]:=-\,{\rm i}\,j^{-1}\,e^{{\rm
i}jx}\,\quad\forall\,j\neq 0\,,\quad\partial_{x}^{-1}[1]:=0\,,$ (3.22)
as the Fourier multiplier $\partial_{x}^{-1}={\rm Op}\left(-{\rm
i}\,\chi(\xi)\xi^{-1}\right)$ and the projector $\pi_{0}$, defined on the
$2\pi$-periodic functions as
$\pi_{0}u:=\frac{1}{2\pi}\int_{\mathbb{T}}u(x)\,dx\,,$ (3.23)
with the Fourier multiplier ${\rm Op}\big{(}1-\chi(\xi)\big{)}$. Finally we
define, for any $m\in{\mathbb{R}}\setminus\\{0\\}$,
$\langle D\rangle^{m}:=\pi_{0}+|D|^{m}:={\rm
Op}\big{(}(1-\chi(\xi))+\chi(\xi)|\xi|^{m}\big{)}\,.$
Throughout the paper we consider families of pseudodifferential operators with a
symbol $a(\lambda;{\varphi},x,\xi)$ which is $k_{0}$-times differentiable with
respect to a parameter $\lambda:=(\omega,\gamma)$ in an open subset
$\Lambda_{0}\subset{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$. Note that
$\partial_{\lambda}^{k}A={\rm Op}\left(\partial_{\lambda}^{k}a\right)$ for any
$k\in{\mathbb{N}}_{0}^{\nu+1}$.
We recall the pseudodifferential norm introduced in Definition 2.11 in [9].
###### Definition 3.4.
(Weighted $\Psi$DO norm) Let $A(\lambda):=a(\lambda;{\varphi},x,D)\in{\rm
OP}S^{m}$ be a family of pseudodifferential operators with symbol
$a(\lambda;{\varphi},x,\xi)\in S^{m}$, $m\in{\mathbb{R}}$, which are
$k_{0}$-times differentiable with respect to
$\lambda\in\Lambda_{0}\subset{\mathbb{R}}^{\nu+1}$. For $\upsilon\in(0,1)$,
$\alpha\in{\mathbb{N}}_{0}$, $s\geq 0$, we define
$\left\|A\right\|_{m,s,\alpha}^{k_{0},\upsilon}:=\sum_{|k|\leq
k_{0}}\upsilon^{|k|}\sup_{\lambda\in{\Lambda}_{0}}\left\|\partial_{\lambda}^{k}A(\lambda)\right\|_{m,s,\alpha}$
where
$\left\|A(\lambda)\right\|_{m,s,\alpha}:=\max_{0\leq\beta\leq\alpha}\,\sup_{\xi\in{\mathbb{R}}}\|\partial_{\xi}^{\beta}a(\lambda,\cdot,\cdot,\xi)\|_{s}\
\langle\xi\rangle^{-m+\beta}$. For a matrix of pseudodifferential operators
${\bf A}\in{\rm OP}S^{m}$ as in (3.20), we define $\left\|{\bf
A}\right\|_{m,s,\alpha}^{k_{0},\upsilon}:=\max_{i=1,\ldots,4}\left\|A_{i}\right\|_{m,s,\alpha}^{k_{0},\upsilon}\,.$
Given a function $a(\lambda;{\varphi},x)\in{\mathcal{C}}^{\infty}$ which is
$k_{0}$-times differentiable with respect to $\lambda$, the weighted norm of
the corresponding multiplication operator is $\|{\rm
Op}(a)\|_{0,s,\alpha}^{k_{0},\upsilon}=\|a\|_{s}^{k_{0},\upsilon}$,
$\forall\alpha\in{\mathbb{N}}_{0}$.
#### Composition of pseudodifferential operators.
If ${\rm Op}(a)$, ${\rm Op}(b)$ are pseudodifferential operators with symbols
$a\in S^{m}$, $b\in S^{m^{\prime}}$, $m,m^{\prime}\in{\mathbb{R}}$, then the
composition operator ${\rm Op}(a){\rm Op}(b)$ is a pseudodifferential operator
${\rm Op}(a\\#b)$ with symbol $a\\#b\in S^{m+m^{\prime}}$. It admits the
asymptotic expansion: for any $N\geq 1$
$\displaystyle(a\\#b)(\lambda;{\varphi},x,\xi)$
$\displaystyle=\sum_{\beta=0}^{N-1}\frac{1}{{\rm
i}^{\beta}\beta!}\partial_{\xi}^{\beta}a(\lambda;{\varphi},x,\xi)\partial_{x}^{\beta}b(\lambda;{\varphi},x,\xi)+(r_{N}(a,b))(\lambda;{\varphi},x,\xi)$
(3.24)
where $r_{N}(a,b)\in S^{m+m^{\prime}-N}$. The following result is proved in
Lemma 2.13 in [9].
###### Lemma 3.5.
(Composition) Let $A=a(\lambda;{\varphi},x,D)$, $B=b(\lambda;{\varphi},x,D)$
be pseudodifferential operators with symbols $a(\lambda;{\varphi},x,\xi)\in
S^{m}$, $b(\lambda;{\varphi},x,\xi)\in S^{m^{\prime}}$,
$m,m^{\prime}\in{\mathbb{R}}$. Then $A\circ B\in{\rm OP}S^{m+m^{\prime}}$
satisfies, for any $\alpha\in{\mathbb{N}}_{0}$, $s\geq s_{0}$,
$\begin{split}\left\|AB\right\|_{m+m^{\prime},s,\alpha}^{k_{0},\upsilon}&\lesssim_{m,\alpha,k_{0}}C(s)\left\|A\right\|_{m,s,\alpha}^{k_{0},\upsilon}\left\|B\right\|_{m^{\prime},s_{0}+|m|+\alpha,\alpha}^{k_{0},\upsilon}\\\
&\
\quad\qquad+C(s_{0})\left\|A\right\|_{m,s_{0},\alpha}^{k_{0},\upsilon}\left\|B\right\|_{m^{\prime},s+|m|+\alpha,\alpha}^{k_{0},\upsilon}\,.\end{split}$
(3.25)
Moreover, for any integer $N\geq 1$, the remainder $R_{N}:={\rm Op}(r_{N})$ in
(3.24) satisfies
$\displaystyle\left\|{\rm
Op}(r_{N}(a,b))\right\|_{m+m^{\prime}-N,s,\alpha}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{m,N,\alpha,k_{0}}C(s)\left\|A\right\|_{m,s,N+\alpha}^{k_{0},\upsilon}\left\|B\right\|_{m^{\prime},s_{0}+\left|m\right|+2N+\alpha,N+\alpha}^{k_{0},\upsilon}$
(3.26) $\displaystyle\
\qquad\qquad+C(s_{0})\left\|A\right\|_{m,s_{0},N+\alpha}^{k_{0},\upsilon}\left\|B\right\|_{m^{\prime},s+|m|+2N+\alpha,N+\alpha}^{k_{0},\upsilon}.$
Both (3.25) and (3.26) also hold with the constants $C(s_{0})$ and $C(s)$ interchanged.
The commutator between two pseudodifferential operators ${\rm Op}(a)\in{\rm
OP}S^{m}$ and ${\rm Op}(b)\in{\rm OP}S^{m^{\prime}}$ is a pseudodifferential
operator in ${\rm OP}S^{m+m^{\prime}-1}$ with symbol $a\star b\in
S^{m+m^{\prime}-1}$, namely $\left[{\rm Op}(a),{\rm Op}(b)\right]={\rm
Op}\left(a\star b\right)$, that admits, by (3.24), the expansion
$\displaystyle a\star b=-{\rm
i}\left\\{a,b\right\\}+{\widetilde{r}}_{2}(a,b)\,,\quad{\widetilde{r}}_{2}(a,b):=r_{2}(a,b)-r_{2}(b,a)\in
S^{m+m^{\prime}-2}\,,$ (3.27) $\displaystyle{\rm
where}\quad\\{a,b\\}:=\partial_{\xi}a\partial_{x}b-\partial_{x}a\partial_{\xi}b\,,$
is the Poisson bracket between $a(x,\xi)$ and $b(x,\xi)$. As a corollary of
Lemma 3.5 we have:
###### Lemma 3.6.
(Commutator) Let $A={\rm Op}(a)$ and $B={\rm Op}(b)$ be pseudodifferential
operators with symbols $a(\lambda;{\varphi},x,\xi)\in S^{m}$,
$b(\lambda;{\varphi},x,\xi)\in S^{m^{\prime}}$, $m,m^{\prime}\in{\mathbb{R}}$.
Then the commutator $[A,B]:=AB-BA\in{\rm OP}S^{m+m^{\prime}-1}$ satisfies
$\displaystyle\left\|[A,B]\right\|_{m+m^{\prime}-1,s,\alpha}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{m,m^{\prime},\alpha,k_{0}}C(s)\left\|A\right\|_{m,s+|m^{\prime}|+\alpha+2,\alpha+1}^{k_{0},\upsilon}\left\|B\right\|_{m^{\prime},s_{0}+|m|+\alpha+2,\alpha+1}^{k_{0},\upsilon}$
(3.28) $\displaystyle\qquad\quad\
+C(s_{0})\left\|A\right\|_{m,s_{0}+|m^{\prime}|+\alpha+2,\alpha+1}^{k_{0},\upsilon}\left\|B\right\|_{m^{\prime},s+|m|+\alpha+2,\alpha+1}^{k_{0},\upsilon}\,.$
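A simple worked instance of (3.27) (illustrative): take $a={\rm i}\xi\in S^{1}$, so ${\rm Op}(a)=\partial_{x}$, and a multiplication symbol $b=b(x)\in S^{0}$. Then

```latex
[\partial_{x},\,b(x)]u = \partial_{x}(b\,u) - b\,\partial_{x}u = b_{x}\,u\,,
\qquad
-{\rm i}\{a,b\} = -{\rm i}\,\partial_{\xi}({\rm i}\xi)\,\partial_{x}b = b_{x}\,,
```

so the commutator is exactly the multiplication operator by $b_{x}$, of order $1+0-1=0$ as predicted, and the remainder $\widetilde{r}_{2}(a,b)$ vanishes because $\partial_{\xi}^{2}a=0$ and $\partial_{\xi}b=0$.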
Finally we consider the exponential of a pseudodifferential operator of order
$0$. The following lemma follows as in Lemma 2.12 of [8] (or Lemma 2.17 in
[9]).
###### Lemma 3.7.
(Exponential map) If $A:={\rm Op}(a(\lambda;{\varphi},x,\xi))$ is in
${\rm OP}S^{0}$, then $e^{A}$ is in ${\rm OP}S^{0}$ and, for any $s\geq s_{0}$,
$\alpha\in{\mathbb{N}}_{0}$, there is a constant $C(s,\alpha)>0$ so that
$\|e^{A}-{\rm
Id}\|_{0,s,\alpha}^{k_{0},\upsilon}\leq\|A\|_{0,s+\alpha,\alpha}^{k_{0},\upsilon}\,{\rm
exp}\big{(}C(s,\alpha)\|A\|_{0,s_{0}+\alpha,\alpha}^{k_{0},\upsilon}\big{)}\,.$
The same holds for a matrix ${\bf A}$ of the form (3.20) in ${\rm OP}S^{0}$.
#### Egorov Theorem.
Consider the family of $\varphi$-dependent diffeomorphisms of
${\mathbb{T}}_{x}$ defined by $y=x+\beta({\varphi},x)$, with inverse
$x=y+\breve{\beta}({\varphi},y)$, where $\beta({\varphi},x)$ is a small smooth
function, and the induced operators
$({\mathcal{B}}u)({\varphi},x):=u({\varphi},x+\beta({\varphi},x))$ and
$({\mathcal{B}}^{-1}u)({\varphi},y):=u({\varphi},y+\breve{\beta}({\varphi},y))$.
###### Lemma 3.8.
(Composition) Assume
$\|\beta\|_{2s_{0}+k_{0}+2}^{k_{0},\upsilon}\leq\delta(s_{0},k_{0})$ with $\delta(s_{0},k_{0})$ small
enough. Then the composition operator ${\mathcal{B}}$ satisfies the tame
estimates, for any $s\geq s_{0}$,
$\|{\mathcal{B}}u\|_{s}^{k_{0},\upsilon}\lesssim_{s,k_{0}}\|u\|_{s+k_{0}}^{k_{0},\upsilon}+\|\beta\|_{s}^{k_{0},\upsilon}\|u\|_{s_{0}+k_{0}+1}^{k_{0},\upsilon}\,,$
(3.29)
and the function $\breve{\beta}$ defined by the inverse diffeomorphism
satisfies
$\|\breve{\beta}\|_{s}^{k_{0},\upsilon}\lesssim_{s,k_{0}}\|\beta\|_{s+k_{0}}^{k_{0},\upsilon}$.
The following result is a small variation of Proposition 2.28 of [8].
###### Proposition 3.9.
(Egorov) Let $N\in{\mathbb{N}}$, ${\mathtt{q}}_{0}\in{\mathbb{N}}_{0}$,
$S>s_{0}$ and assume that $\partial_{\lambda}^{k}\beta(\lambda;\cdot,\cdot)$
are ${\mathcal{C}}^{\infty}$ for all $|k|\leq k_{0}$. There exist constants
$\sigma_{N},\sigma_{N}({\mathtt{q}}_{0})>0$,
$\delta=\delta(S,N,{\mathtt{q}}_{0},k_{0})\in(0,1)$ such that, if
$\|\beta\|_{s_{0}+\sigma_{N}({\mathtt{q}}_{0})}^{k_{0},\upsilon}\leq\delta$,
then the conjugated operator
${\mathcal{B}}^{-1}\circ\partial_{x}^{m}\circ{\mathcal{B}}$,
$m\in{\mathbb{Z}}$, is a pseudodifferential operator of order $m$ with an
expansion of the form
${\mathcal{B}}^{-1}\circ\partial_{x}^{m}\circ{\mathcal{B}}=\sum_{i=0}^{N}p_{m-i}(\lambda;{\varphi},y)\partial_{y}^{m-i}+{\mathcal{R}}_{N}({\varphi})$
with the following properties:
1\. The principal symbol is
$p_{m}(\lambda;{\varphi},y)=\Big{(}[1+\beta_{x}(\lambda;{\varphi},x)]^{m}\Big{)}|_{x=y+\breve{\beta}(\lambda;{\varphi},y)}$.
For any $s\geq s_{0}$ and $i=1,\ldots,N$,
$\|p_{m}-1\|_{s}^{k_{0},\upsilon}\,,\
\|p_{m-i}\|_{s}^{k_{0},\upsilon}\lesssim_{s,N}\|\beta\|_{s+\sigma_{N}}^{k_{0},\upsilon}\,.$
(3.30)
2\. For any ${\mathtt{q}}\in{\mathbb{N}}^{\nu}_{0}$ with
$|{\mathtt{q}}|\leq{\mathtt{q}}_{0}$, $n_{1},n_{2}\in{\mathbb{N}}_{0}$ with
$n_{1}+n_{2}+{\mathtt{q}}_{0}\leq N+1-k_{0}-m$, the operator $\langle
D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\cal R}_{N}(\varphi)\langle
D\rangle^{n_{2}}$ is ${\mathcal{D}}^{k_{0}}$-tame with a tame constant
satisfying, for any $s_{0}\leq s\leq S$,
${\mathfrak{M}}_{\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\cal
R}_{N}(\varphi)\langle
D\rangle^{n_{2}}}(s)\lesssim_{S,N,{\mathtt{q}}_{0}}\|\beta\|_{s+\sigma_{N}({\mathtt{q}}_{0})}^{k_{0},\upsilon}\,.$
(3.31)
3\. Let $s_{0}<s_{1}$ and assume that
$\|\beta_{j}\|_{s_{1}+\sigma_{N}({\mathtt{q}}_{0})}\leq\delta,$ $j=1,2$. Then
$\|\Delta_{12}p_{m-i}\|_{s_{1}}\lesssim_{s_{1},N}\|\Delta_{12}\beta\|_{s_{1}+\sigma_{N}}$,
$i=0,\ldots,N$, and, for any $|{\mathtt{q}}|\leq{\mathtt{q}}_{0}$,
$n_{1},n_{2}\in{\mathbb{N}}_{0}$ with $n_{1}+n_{2}+{\mathtt{q}}_{0}\leq N-m$,
$\|\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}{\cal
R}_{N}(\varphi)\langle D\rangle^{n_{2}}\|_{{\cal
B}(H^{s_{1}})}\lesssim_{s_{1},N,n_{1},n_{2}}\|\Delta_{12}\beta\|_{s_{1}+\sigma_{N}({\mathtt{q}}_{0})}\,.$
Finally, if $\beta({\varphi},x)$ is a quasi-periodic traveling wave, then
${\mathcal{B}}$ is momentum preserving (we refer to Definition 3.22), as well
as the conjugated operator
${\mathcal{B}}^{-1}\circ\partial_{x}^{m}\circ{\mathcal{B}}$, and each function
$p_{m-i}$, $i=0,\ldots,N$, is a quasi-periodic traveling wave.
#### Dirichlet-Neumann operator.
We finally recall the following decomposition of
the Dirichlet-Neumann operator, proved in [9] in the case of infinite depth
and in [2] for finite depth.
###### Lemma 3.10.
(Dirichlet-Neumann) Assume that
$\partial_{\lambda}^{k}\eta(\lambda,\cdot,\cdot)$ is
${\mathcal{C}}^{\infty}({\mathbb{T}}^{\nu}\times{\mathbb{T}}_{x})$ for all
$|k|\leq k_{0}$. There exists $\delta(s_{0},k_{0})>0$ such that, if
$\|\eta\|_{2s_{0}+2k_{0}+1}^{k_{0},\upsilon}\leq\delta(s_{0},k_{0})$, then the
Dirichlet-Neumann operator $G(\eta)=G(\eta,{\mathtt{h}})$ may be written as
$G(\eta,{\mathtt{h}})=G(0,{\mathtt{h}})+{\mathcal{R}}_{G}(\eta)$ (3.32)
where ${\mathcal{R}}_{G}(\eta):={\mathcal{R}}_{G}(\eta,{\mathtt{h}})\in{\rm
OP}S^{-\infty}$ satisfies, for all $m,s,\alpha\in{\mathbb{N}}_{0}$, the
estimate
$\displaystyle\|{\mathcal{R}}_{G}(\eta)\|_{-m,s,\alpha}^{k_{0},\upsilon}\leq
C(s,m,\alpha,k_{0})\|\eta\|_{s+s_{0}+2k_{0}+m+\alpha+3}^{k_{0},\upsilon}\,.$
(3.33)
### 3.2 ${\mathcal{D}}^{k_{0}}$-tame and $(-\tfrac{1}{2})$-modulo-tame
operators
Tame and modulo-tame operators were introduced in [9]. Let $A:=A(\lambda)$ be
a linear operator as in (3.14), $k_{0}$-times differentiable with respect to
the parameter $\lambda$ in an open set
$\Lambda_{0}\subset{\mathbb{R}}^{\nu+1}$.
###### Definition 3.11.
(${\mathcal{D}}^{k_{0}}$-$\sigma$-tame) Let $\sigma\geq 0$. A linear operator
$A:=A(\lambda)$ is ${\mathcal{D}}^{k_{0}}$-$\sigma$-tame if there exists a
non-decreasing function $[s_{0},S]\rightarrow[0,+\infty)$,
$s\mapsto{\mathfrak{M}}_{A}(s)$, with possibly $S=+\infty$, such that, for all
$s_{0}\leq s\leq S$ and $u\in H^{s+\sigma}$,
$\sup_{\left|k\right|\leq
k_{0}}\sup_{\lambda\in\Lambda_{0}}\upsilon^{\left|k\right|}\left\|(\partial_{\lambda}^{k}A(\lambda))u\right\|_{s}\leq{\mathfrak{M}}_{A}(s_{0})\left\|u\right\|_{s+\sigma}+{\mathfrak{M}}_{A}(s)\left\|u\right\|_{s_{0}+\sigma}\,.$
We say that ${\mathfrak{M}}_{A}(s)$ is a _tame constant_ of the operator $A$.
The constant ${\mathfrak{M}}_{A}(s)={\mathfrak{M}}_{A}(k_{0},\sigma,s)$ may
also depend on $k_{0},\sigma$ but we shall often omit to write them. When the
"loss of derivatives" $\sigma$ is zero, we simply write
${\mathcal{D}}^{k_{0}}$-tame instead of ${\mathcal{D}}^{k_{0}}$-$0$-tame. For
a matrix operator as in (3.17), we denote the tame constant
${\mathfrak{M}}_{{\bf
R}}(s):=\max\left\\{{\mathfrak{M}}_{{\mathcal{R}}_{1}}(s),{\mathfrak{M}}_{{\mathcal{R}}_{2}}(s)\right\\}$.
The class of ${\mathcal{D}}^{k_{0}}$-$\sigma$-tame operators is closed under
composition, see Lemma 2.20 in [9].
###### Lemma 3.12.
(Composition) Let $A,B$ be respectively
${\mathcal{D}}^{k_{0}}$-$\sigma_{A}$-tame and
${\mathcal{D}}^{k_{0}}$-$\sigma_{B}$-tame operators with tame constants
respectively ${\mathfrak{M}}_{A}(s)$ and ${\mathfrak{M}}_{B}(s)$. Then the
composed operator $A\circ B$ is
${\mathcal{D}}^{k_{0}}$-$(\sigma_{A}+\sigma_{B})$-tame with a tame constant
${\mathfrak{M}}_{AB}(s)\leq
C(k_{0})\left({\mathfrak{M}}_{A}(s){\mathfrak{M}}_{B}(s_{0}+\sigma_{A})+{\mathfrak{M}}_{A}(s_{0}){\mathfrak{M}}_{B}(s+\sigma_{A})\right)\,.$
It is proved in Lemma 2.22 in [9] that the action of a
${\mathcal{D}}^{k_{0}}$-$\sigma$-tame operator $A(\lambda)$ on a Sobolev
function $u=u(\lambda)\in H^{s+\sigma}$ is bounded by
$\|Au\|_{s}^{k_{0},\upsilon}\lesssim_{k_{0}}{\mathfrak{M}}_{A}(s_{0})\|u\|_{s+\sigma}^{k_{0},\upsilon}+{\mathfrak{M}}_{A}(s)\|u\|_{s_{0}+\sigma}^{k_{0},\upsilon}$.
Pseudodifferential operators are tame operators. We use in particular the
following lemma which is Lemma 2.21 in [9].
###### Lemma 3.13.
Let $A=a(\lambda;{\varphi},x,D)\in{\rm OP}S^{0}$ be a family of
pseudodifferential operators satisfying
$\|A\|_{0,s,0}^{k_{0},\upsilon}<\infty$ for $s\geq s_{0}$. Then $A$ is
${\mathcal{D}}^{k_{0}}$-tame with a tame constant satisfying
${\mathfrak{M}}_{A}(s)\leq C(s)\|A\|_{0,s,0}^{k_{0},\upsilon}$, for any $s\geq
s_{0}$.
In view of the KAM reducibility scheme of Section 8 we also consider the
notion of ${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame operator. We
first recall that, given a linear operator $A$ acting as in (3.15), the
majorant operator $|A|$ is defined to have the matrix elements
$(|A_{j}^{j^{\prime}}(\ell-\ell^{\prime})|)_{\ell,\ell^{\prime}\in{\mathbb{Z}}^{\nu},j,j^{\prime}\in{\mathbb{Z}}}$.
###### Definition 3.14.
(${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame) A linear operator
$A=A(\lambda)$ is ${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-_modulo-tame_ if
there exists a non-decreasing function $[s_{0},S]\rightarrow[0,+\infty]$,
$s\mapsto{\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)$, such that for all
$k\in{\mathbb{N}}_{0}^{\nu+1}$, $\left|k\right|\leq k_{0}$, the majorant
operator $\langle D\rangle^{\frac{1}{4}}|\partial_{\lambda}^{k}A|\langle
D\rangle^{\frac{1}{4}}$ satisfies, for all $s_{0}\leq s\leq S$ and $u\in
H^{s}$,
$\sup_{|k|\leq k_{0}}\sup_{\lambda\in{\Lambda}_{0}}\upsilon^{|k|}\|\langle
D\rangle^{\frac{1}{4}}|\partial_{\lambda}^{k}A|\langle
D\rangle^{\frac{1}{4}}u\|_{s}\leq{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s_{0})\left\|u\right\|_{s}+{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\left\|u\right\|_{s_{0}}\,.$ (3.34)
For a matrix as in (3.17), we denote ${\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}{\bf R}\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s):=\max\big{\\{}{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}{\mathcal{R}}_{1}\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s),{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}{\mathcal{R}}_{2}\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\big{\\}}$.
Given a linear operator $A$ acting as in (3.15), we define the operator
$\braket{\partial_{{\varphi}}}^{\mathtt{b}}A$, ${\mathtt{b}}\in{\mathbb{R}}$,
whose matrix elements are
$\braket{\ell-\ell^{\prime}}^{\mathtt{b}}A_{j}^{j^{\prime}}(\ell-\ell^{\prime})$.
From Lemma A.5-(iv) in [18], we deduce the following lemma.
###### Lemma 3.15.
(Sum and composition) Let $A$, $B$,
$\braket{\partial_{{\varphi}}}^{\mathtt{b}}A$,
$\braket{\partial_{{\varphi}}}^{\mathtt{b}}B$ be
${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame operators. Then $A+B$,
$A\circ B$ and $\braket{\partial_{{\varphi}}}^{\mathtt{b}}(AB)$ are
${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame with
$\displaystyle{\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}(A+B)\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\leq{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)+{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}B\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)$
$\displaystyle{\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}AB\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{k_{0}}\Big{(}{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s){\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}B\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s_{0})+{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s_{0}){\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}B\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)\Big{)}$
$\displaystyle{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}(AB)\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{{\mathtt{b}},k_{0}}$
$\displaystyle\quad\quad\Big{(}{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s){\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}B\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s_{0})+{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s_{0}){\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}B\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)$
$\displaystyle\quad\quad+{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s){\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}B\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s_{0})+{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s_{0}){\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}B\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\Big{)}\,.$
From the proof of Lemma 2.22 in [8], we deduce the following lemma.
###### Lemma 3.16.
(Exponential) Let $A$, $\braket{\partial_{\varphi}}^{\mathtt{b}}A$ be
${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame and assume
${\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s_{0})\leq 1$. Then $e^{\pm A}-{\rm Id}$ and
$\braket{\partial_{\varphi}}^{\mathtt{b}}(e^{\pm A}-{\rm Id})$ are
${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame with
$\displaystyle{\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}(e^{\pm A}-{\rm
Id})\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{k_{0}}{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}A\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)\,,$
$\displaystyle{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{\varphi}\rangle^{\mathtt{b}}(e^{\pm
A}-{\rm Id})\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{k_{0},{\mathtt{b}}}{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{\varphi}\rangle^{\mathtt{b}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)+{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s){\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{\varphi}\rangle^{\mathtt{b}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s_{0})\,.$
Given a linear operator $A$ acting as in (3.15), we define the _smoothed
operator_ $\Pi_{N}A$, $N\in{\mathbb{N}}$ whose matrix elements are
$(\Pi_{N}A)_{j}^{j^{\prime}}(\ell-\ell^{\prime}):=\begin{cases}A_{j}^{j^{\prime}}(\ell-\ell^{\prime})&\text{if
}\braket{\ell-\ell^{\prime}}\leq N\\\ 0&\text{otherwise}\,.\end{cases}$ (3.35)
We also denote $\Pi_{N}^{\perp}:={\rm Id}-\Pi_{N}$. Arguing as in Lemma 2.27
in [9], we have that
${\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}\Pi_{N}^{\perp}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\leq
N^{-{\mathtt{b}}}{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\,,\ \ {\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\Pi_{N}^{\perp}A\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\leq{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}A\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)\,.$ (3.36)
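The gain $N^{-\mathtt{b}}$ in the first estimate of (3.36) comes from the elementary bound $1\leq N^{-{\mathtt{b}}}\braket{\ell-\ell^{\prime}}^{\mathtt{b}}$ on the high modes $\braket{\ell-\ell^{\prime}}>N$ cut by $\Pi_{N}^{\perp}$. As a purely illustrative numerical check (not part of the formal argument; the function names are ours):

```python
import math

def jb(k):
    # Japanese bracket <k> = (1 + k^2)^{1/2}
    return math.sqrt(1.0 + k * k)

def smoothing_gain_holds(N, b, ks):
    # on the high modes <k> > N cut by Pi_N^perp, 1 <= N^{-b} <k>^b
    return all(1.0 <= N ** (-b) * jb(k) ** b for k in ks if jb(k) > N)

assert smoothing_gain_holds(10, 2.5, range(-200, 201))
print("ok")
```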
In the next lemma we provide a sufficient condition for an operator to be
${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame.
###### Lemma 3.17.
Let the operators $\langle D\rangle^{\frac{1}{4}}R\langle
D\rangle^{\frac{1}{4}}$, $\langle
D\rangle^{\frac{1}{4}}[R,\partial_{x}]\langle D\rangle^{\frac{1}{4}}$,
$\langle D\rangle^{\frac{1}{4}}\partial_{{\varphi}_{m}}^{s_{0}}R\langle
D\rangle^{\frac{1}{4}}$, $\langle
D\rangle^{\frac{1}{4}}[\partial_{{\varphi}_{m}}^{s_{0}}R,\partial_{x}]\langle
D\rangle^{\frac{1}{4}}$ and $\langle
D\rangle^{\frac{1}{4}}\partial_{{\varphi}_{m}}^{s_{0}+{\mathtt{b}}}R\langle
D\rangle^{\frac{1}{4}}$, $\langle
D\rangle^{\frac{1}{4}}[\partial_{{\varphi}_{m}}^{s_{0}+{\mathtt{b}}}R,\partial_{x}]\langle
D\rangle^{\frac{1}{4}}$, with $m=1,\ldots,\nu$, be ${\mathcal{D}}^{k_{0}}$-tame.
Set
$\displaystyle{\widetilde{\mathbb{M}}}(s):=\max\big{\\{}{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}R\langle
D\rangle^{\frac{1}{4}}}(s),{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}[R,\partial_{x}]\langle D\rangle^{\frac{1}{4}}}(s),$
(3.37) $\displaystyle\quad\quad\quad\quad\quad\quad{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\partial_{{\varphi}_{m}}^{s_{0}}R\langle
D\rangle^{\frac{1}{4}}}(s),{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}[\partial_{{\varphi}_{m}}^{s_{0}}R,\partial_{x}]\langle
D\rangle^{\frac{1}{4}}}(s)\,:\,m=1,...,\nu\big{\\}}\,,$
$\displaystyle{\widetilde{\mathbb{M}}}(s,{\mathtt{b}}):=\max\big{\\{}{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\partial_{{\varphi}_{m}}^{s_{0}+{\mathtt{b}}}R\langle
D\rangle^{\frac{1}{4}}}(s),{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}[\partial_{{\varphi}_{m}}^{s_{0}+{\mathtt{b}}}R,\partial_{x}]\langle
D\rangle^{\frac{1}{4}}}(s)\,:\,m=1,...,\nu\big{\\}}\,,$
$\displaystyle{\widetilde{{\mathfrak{M}}}}(s,{\mathtt{b}}):=\max\big{\\{}{\widetilde{\mathbb{M}}}(s),{\widetilde{\mathbb{M}}}(s,{\mathtt{b}})\big{\\}}\,.$
(3.38)
Then $R$ and $\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}R$ are
${\mathcal{D}}^{k_{0}}$-$(-\tfrac{1}{2})$-modulo-tame, with
${\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}R\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\,,\ {\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}R\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{s_{0}}{\widetilde{{\mathfrak{M}}}}(s,{\mathtt{b}})\,.$
###### Proof.
The matrix elements of $\langle
D\rangle^{\frac{1}{4}}[\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}R,\partial_{x}]\langle
D\rangle^{\frac{1}{4}}$ are given, for any
$\ell,\ell^{\prime}\in{\mathbb{Z}}^{\nu}$, $j,j^{\prime}\in{\mathbb{Z}}$, by
$\langle
j\rangle^{\frac{1}{4}}\langle\ell-\ell^{\prime}\rangle^{{\mathtt{b}}}{\rm
i}(j-j^{\prime})\langle
j^{\prime}\rangle^{\frac{1}{4}}R_{j}^{j^{\prime}}(\ell-\ell^{\prime})$. From
Definition 3.11 with $\sigma=0$, we have, for any $|k|\leq k_{0}$,
$\ell^{\prime}\in{\mathbb{Z}}^{\nu}$, $j^{\prime}\in{\mathbb{Z}}$,
$\upsilon^{2|k|}\sum_{\ell,j}\langle\ell,j\rangle^{2s}|(\partial_{\lambda}^{k}R)_{j}^{j^{\prime}}(\ell-\ell^{\prime})|^{2}\leq
2({\mathfrak{M}}_{R}(s))^{2}\langle\ell^{\prime},j^{\prime}\rangle^{2s_{0}}+2({\mathfrak{M}}_{R}(s_{0}))^{2}\langle\ell^{\prime},j^{\prime}\rangle^{2s}\,.$
Using the inequality
$\langle\ell-\ell^{\prime}\rangle^{2(s_{0}+{\mathtt{b}})}\langle
j-j^{\prime}\rangle^{2}\lesssim_{s_{0}+{\mathtt{b}}}1+|\ell-\ell^{\prime}|^{2(s_{0}+{\mathtt{b}})}+|j-j^{\prime}|^{2}+|\ell-\ell^{\prime}|^{2(s_{0}+{\mathtt{b}})}|j-j^{\prime}|^{2}$,
we therefore obtain, for any $\ell^{\prime}\in{\mathbb{Z}}^{\nu}$, $j^{\prime}\in{\mathbb{Z}}$, recalling (3.38),
$\displaystyle\upsilon^{2|k|}\sum_{\ell,j}\langle\ell,j\rangle^{2s}\langle
j\rangle^{\frac{1}{2}}\langle\ell-\ell^{\prime}\rangle^{2(s_{0}+{\mathtt{b}})}\langle
j-j^{\prime}\rangle^{2}\langle
j^{\prime}\rangle^{\frac{1}{2}}|(\partial_{\lambda}^{k}R)_{j}^{j^{\prime}}(\ell-\ell^{\prime})|^{2}$
$\displaystyle\quad\lesssim_{s_{0}+{\mathtt{b}}}({\widetilde{{\mathfrak{M}}}}(s,{\mathtt{b}}))^{2}\langle\ell^{\prime},j^{\prime}\rangle^{2s_{0}}+({\widetilde{{\mathfrak{M}}}}(s_{0},{\mathtt{b}}))^{2}\langle\ell^{\prime},j^{\prime}\rangle^{2s}\,.$
For any $s_{0}\leq s\leq S$ and any $|k|\leq k_{0}$, by the Cauchy-Schwarz inequality, we finally deduce
$\displaystyle\|\langle
D\rangle^{\frac{1}{4}}|\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}\partial_{\lambda}^{k}R|\langle
D\rangle^{\frac{1}{4}}h\|_{s}^{2}\leq\sum_{\ell,j}\langle\ell,j\rangle^{2s}\Big{(}\sum_{\ell^{\prime},j^{\prime}}\langle
j\rangle^{\frac{1}{4}}\langle\ell-\ell^{\prime}\rangle^{{\mathtt{b}}}\langle
j^{\prime}\rangle^{\frac{1}{4}}|(\partial_{\lambda}^{k}R)_{j}^{j^{\prime}}(\ell-\ell^{\prime})||h_{\ell^{\prime},j^{\prime}}|\Big{)}^{2}$
$\displaystyle\quad=\sum_{\ell,j}\langle\ell,j\rangle^{2s}\Big{(}\sum_{\ell^{\prime},j^{\prime}}\langle
j\rangle^{\frac{1}{4}}\langle\ell-\ell^{\prime}\rangle^{s_{0}+{\mathtt{b}}}\langle
j-j^{\prime}\rangle\langle
j^{\prime}\rangle^{\frac{1}{4}}|(\partial_{\lambda}^{k}R)_{j}^{j^{\prime}}(\ell-\ell^{\prime})||h_{\ell^{\prime},j^{\prime}}|\frac{1}{\langle\ell-\ell^{\prime}\rangle^{s_{0}}\langle
j-j^{\prime}\rangle}\Big{)}^{2}$
$\displaystyle\lesssim_{s_{0}}\sum_{\ell,j}\langle\ell,j\rangle^{2s}\sum_{\ell^{\prime},j^{\prime}}\langle
j\rangle^{\frac{1}{2}}\langle\ell-\ell^{\prime}\rangle^{2(s_{0}+{\mathtt{b}})}\langle
j-j^{\prime}\rangle^{2}\langle
j^{\prime}\rangle^{\frac{1}{2}}|(\partial_{\lambda}^{k}R)_{j}^{j^{\prime}}(\ell-\ell^{\prime})|^{2}|h_{\ell^{\prime},j^{\prime}}|^{2}$
$\displaystyle\lesssim_{s_{0},{\mathtt{b}}}\upsilon^{-2|k|}\sum_{\ell^{\prime},j^{\prime}}|h_{\ell^{\prime},j^{\prime}}|^{2}\big{(}({\widetilde{{\mathfrak{M}}}}(s,{\mathtt{b}}))^{2}\langle\ell^{\prime},j^{\prime}\rangle^{2s_{0}}+({\widetilde{{\mathfrak{M}}}}(s_{0},{\mathtt{b}}))^{2}\langle\ell^{\prime},j^{\prime}\rangle^{2s}\big{)}\,.$
This proves that $\langle\partial_{\varphi}\rangle^{\mathtt{b}}R$ is ${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame, with ${\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}\langle\partial_{{\varphi}}\rangle^{\mathtt{b}}R\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{s_{0}}{\widetilde{{\mathfrak{M}}}}(s,{\mathtt{b}})$. The estimate for ${\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}R\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)$ follows by the same argument, replacing $\langle\ell-\ell^{\prime}\rangle^{\mathtt{b}}$ by $1$.
∎
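The Cauchy-Schwarz step in the proof above relies on the elementary inequality $\langle a\rangle^{2p}\langle c\rangle^{2}\lesssim_{p}1+|a|^{2p}+|c|^{2}+|a|^{2p}|c|^{2}$ with $p=s_{0}+{\mathtt{b}}$. It can be verified numerically with the explicit constant $2^{p}$, valid for $p\geq 1$ (the constant is our choice; this is an illustration only):

```python
def br2(k):
    # squared Japanese bracket <k>^2 = 1 + k^2
    return 1.0 + k * k

def inequality_holds(p, vals):
    # <a>^{2p} <c>^2 <= 2^p (1 + |a|^{2p} + |c|^2 + |a|^{2p} |c|^2), for p >= 1
    C = 2.0 ** p
    return all(br2(a) ** p * br2(c)
               <= C * (1 + abs(a) ** (2 * p) + c * c + abs(a) ** (2 * p) * c * c)
               for a in vals for c in vals)

assert inequality_holds(3.5, [0.5 * i for i in range(-20, 21)])
print("ok")
```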
### 3.3 Hamiltonian, Reversible and Momentum preserving operators
Throughout the paper we crucially exploit several algebraic properties of the water waves equations: the Hamiltonian and the reversible structure, as well as the invariance under space translations. We characterize these properties following [7].
###### Definition 3.18.
(Hamiltonian and Symplectic operators) A matrix operator ${\mathcal{R}}$ as in
(3.16) is
1. 1.
Hamiltonian if the matrix $J^{-1}{\mathcal{R}}$ is self-adjoint, namely
$B^{*}=B$, $C^{*}=C$, $A^{*}=-D$ and $A,B,C,D$ are real;
2. 2.
symplectic if ${\cal W}({\mathcal{R}}u,{\mathcal{R}}v)={\cal W}(u,v)$ for any
$u,v\in L^{2}({\mathbb{T}}_{x},{\mathbb{R}}^{2})$, where the symplectic 2-form
${\cal W}$ is defined in (2.11).
Let ${\mathcal{S}}$ be an involution as in (2.3) acting on the real variables
$(\eta,\zeta)\in{\mathbb{R}}^{2}$, or as in (2.40) acting on the action-angle-
normal variables $(\theta,I,w)$, or as in (2.24) acting in the
$(z,\overline{z})$ complex variables introduced in (2.19).
###### Definition 3.19.
(Reversible and reversibility preserving operators) A ${\varphi}$-dependent
family of operators ${\mathcal{R}}({\varphi})$,
${\varphi}\in{\mathbb{T}}^{\nu}$, is _reversible_ if
${\mathcal{R}}(-{\varphi})\circ{\mathcal{S}}=-{\mathcal{S}}\circ{\mathcal{R}}({\varphi})$
for all ${\varphi}\in{\mathbb{T}}^{\nu}$. It is _reversibility preserving_ if
${\mathcal{R}}(-{\varphi})\circ{\mathcal{S}}={\mathcal{S}}\circ{\mathcal{R}}({\varphi})$
for all ${\varphi}\in{\mathbb{T}}^{\nu}$.
Since in the complex coordinates $(z,\overline{z})$ the involution ${\mathcal{S}}$ defined in (2.3) reads as in (2.24), an operator ${\bf R}({\varphi})$ as in (3.17) is reversible, respectively reversibility preserving, if, for any $i=1,2$,
${\mathcal{R}}_{i}(-{\varphi})\circ{\mathcal{S}}=-{\mathcal{S}}\circ{\mathcal{R}}_{i}({\varphi})\,,\quad{\rm
resp.}\ \
{\mathcal{R}}_{i}(-{\varphi})\circ{\mathcal{S}}={\mathcal{S}}\circ{\mathcal{R}}_{i}({\varphi})\,,$
(3.39)
where, with a small abuse of notation, we still denote
$({\mathcal{S}}u)(x)=\overline{u(-x)}$. Moreover, recalling that in Fourier coordinates this involution reads as in (2.25), we obtain the following lemma (cf. Lemmata 3.18 and 3.19 of [7]).
###### Lemma 3.20.
A ${\varphi}$-dependent family of operators ${\bf R}({\varphi})$,
${\varphi}\in{\mathbb{T}}^{\nu}$, as in (3.17) is reversible if, for any
$i=1,2$,
$\left({\mathcal{R}}_{i}\right)_{j}^{j^{\prime}}(-{\varphi})=-\overline{\left({\mathcal{R}}_{i}\right)_{j}^{j^{\prime}}({\varphi})}\quad\forall\,{\varphi}\in{\mathbb{T}}^{\nu}\,,\
\ i.e.\
\left({\mathcal{R}}_{i}\right)_{j}^{j^{\prime}}(\ell)=-\overline{\left({\mathcal{R}}_{i}\right)_{j}^{j^{\prime}}(\ell)}\,\quad\forall\,\ell\in{\mathbb{Z}}^{\nu}\,;$
(3.40)
it is reversibility preserving if, for any $i=1,2$,
$\left({\mathcal{R}}_{i}\right)_{j}^{j^{\prime}}(-{\varphi})=\overline{\left({\mathcal{R}}_{i}\right)_{j}^{j^{\prime}}({\varphi})}\
\ \forall\,{\varphi}\in{\mathbb{T}}^{\nu}\,,\ \ i.e.\
\left({\mathcal{R}}_{i}\right)_{j}^{j^{\prime}}(\ell)=\overline{\left({\mathcal{R}}_{i}\right)_{j}^{j^{\prime}}(\ell)}\,\
\ \forall\,\ell\in{\mathbb{Z}}^{\nu}\,.$
A pseudodifferential operator ${\rm Op}(a({\varphi},x,\xi))$ is reversible,
respectively reversibility preserving, if and only if its symbol satisfies
$a(-{\varphi},-x,\xi)=-\overline{a({\varphi},x,\xi)}$, resp.
$a(-{\varphi},-x,\xi)=\overline{a({\varphi},x,\xi)}$.
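The symbol characterization above can be sanity-checked on concrete symbols. In the following sketch (an illustration only; the sample symbols are ours) the real symbol $\cos({\varphi})\sin(x)\xi$, odd in $({\varphi},x)$, and the purely imaginary symbol ${\rm i}\cos({\varphi})\cos(x)$, even in $({\varphi},x)$, are reversible, while the real even symbol $\cos({\varphi})\cos(x)$ is reversibility preserving:

```python
import math

def is_reversible(a, grid, tol=1e-12):
    # check a(-phi, -x, xi) = -conj(a(phi, x, xi)) on a grid
    return all(abs(a(-p, -x, xi) + a(p, x, xi).conjugate()) < tol
               for p in grid for x in grid for xi in grid)

def is_rev_preserving(a, grid, tol=1e-12):
    # check a(-phi, -x, xi) = conj(a(phi, x, xi)) on a grid
    return all(abs(a(-p, -x, xi) - a(p, x, xi).conjugate()) < tol
               for p in grid for x in grid for xi in grid)

grid = [0.3, 1.1, -0.7]
a1 = lambda p, x, xi: complex(math.cos(p) * math.sin(x) * xi)  # real, odd in (phi, x)
a2 = lambda p, x, xi: 1j * math.cos(p) * math.cos(x)           # imaginary, even in (phi, x)
a3 = lambda p, x, xi: complex(math.cos(p) * math.cos(x))       # real, even in (phi, x)

assert is_reversible(a1, grid) and is_reversible(a2, grid)
assert is_rev_preserving(a3, grid)
print("ok")
```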
Note that the composition of a reversible operator with a reversibility
preserving operator is reversible. The flow generated by a reversibility
preserving operator is reversibility preserving. If ${\mathcal{R}}({\varphi})$
is reversibility preserving, then
$(\omega\cdot\partial_{\varphi}{\mathcal{R}})({\varphi})$ is reversible.
We shall say that a linear operator of the form
$\omega\cdot\partial_{\varphi}+A({\varphi})$ is reversible if $A({\varphi})$
is reversible. Conjugating the linear operator
$\omega\cdot\partial_{\varphi}+A({\varphi})$ by a family of invertible linear
maps $\Phi({\varphi})$, we get the transformed operator
$\displaystyle\Phi^{-1}({\varphi})\circ\big{(}\omega\cdot\partial_{\varphi}+A({\varphi})\big{)}\circ\Phi({\varphi})=\omega\cdot\partial_{\varphi}+A_{+}({\varphi})\,,$
(3.41) $\displaystyle
A_{+}({\varphi}):=\Phi^{-1}({\varphi})\left(\omega\cdot\partial_{\varphi}\Phi({\varphi})\right)+\Phi^{-1}({\varphi})A({\varphi})\Phi({\varphi})\,.$
The conjugation of a reversible operator by a reversibility preserving operator is reversible.
A function $u({\varphi},\cdot)$ is called reversible if
${\mathcal{S}}u({\varphi},\cdot)=u(-{\varphi},\cdot)$ and antireversible if
$-{\mathcal{S}}u({\varphi},\cdot)=u(-{\varphi},\cdot)$. The same definition
holds in the action-angle-normal variables $(\theta,I,w)$ with the involution
$\vec{\mathcal{S}}$ defined in (2.40) and in the $(z,\overline{z})$ complex
variables with the involution in (2.24).
A reversibility preserving operator maps reversible, respectively anti-
reversible, functions into reversible, respectively anti-reversible,
functions, see Lemma 3.22 in [7].
We also remark that, if $X$ is a reversible vector field, according to (2.4),
and $u({\varphi},x)$ is a reversible quasi-periodic function, then the
linearized operator ${\rm d}_{u}X(u({\varphi},\cdot))$ is reversible,
according to Definition 3.19 (see e.g. Lemma 3.22 in [7]).
Finally we recall that the projections
$\Pi^{\intercal}_{{\mathbb{S}}^{+},\Sigma}$,
$\Pi^{\angle}_{{\mathbb{S}}^{+},\Sigma}$ of Section 2.2 are reversibility
preserving.
###### Lemma 3.21.
(Lemma 3.23 in [7]) The projections
$\Pi^{\intercal}_{{\mathbb{S}}^{+},\Sigma}$,
$\Pi^{\angle}_{{\mathbb{S}}^{+},\Sigma}$ defined in Section 2.2 commute with
the involution ${\mathcal{S}}$ defined in (2.3), i.e. are reversibility
preserving. The orthogonal projectors $\Pi_{{\mathbb{S}}}$ and
$\Pi_{{\mathbb{S}}_{0}}^{\bot}$ commute with the involution in (2.24), i.e.
are reversibility preserving.
Next we define momentum preserving operators.
###### Definition 3.22.
(Momentum preserving operators) A ${\varphi}$-dependent family of linear
operators $A({\varphi})$, ${\varphi}\in{\mathbb{T}}^{\nu}$, is momentum
preserving if
$A({\varphi}-\vec{\jmath}\varsigma)\circ\tau_{\varsigma}=\tau_{\varsigma}\circ
A({\varphi})\,,\quad\forall\,{\varphi}\in{\mathbb{T}}^{\nu}\,,\
\varsigma\in{\mathbb{R}}\,,$
where the translation operator $\tau_{\varsigma}$ is defined in (2.5). A
linear matrix operator ${\bf A}({\varphi})$ of the form (3.16) or (3.17) is
momentum preserving if each of its components is momentum preserving.
If $X$ is a translation invariant vector field, i.e. (2.6) holds, and $u$ is a quasi-periodic traveling wave, then the linearized operator ${\rm d}_{u}X(u({\varphi},\cdot))$ is momentum preserving.
Momentum preserving operators are closed under several operations (cf. Lemma 3.25 in [7]):
###### Lemma 3.23.
Let $A({\varphi}),B({\varphi})$ be momentum preserving operators. Then the
composition $A({\varphi})\circ B({\varphi})$ and the adjoint
$(A({\varphi}))^{*}$ are momentum preserving. If $A({\varphi})$ is invertible,
then $A({\varphi})^{-1}$ is momentum preserving. Assume that
$\partial_{t}\Phi^{t}({\varphi})=A({\varphi})\Phi^{t}({\varphi})$,
$\Phi^{0}({\varphi})={\rm Id}$, has a unique propagator $\Phi^{t}({\varphi})$,
$t\in[0,1]$. Then $\Phi^{t}({\varphi})$ is momentum preserving.
We shall say that a linear operator of the form
$\omega\cdot\partial_{\varphi}+A({\varphi})$ is momentum preserving if
$A({\varphi})$ is momentum preserving. In particular, conjugating a momentum
preserving operator $\omega\cdot\partial_{\varphi}+A({\varphi})$ by a family
of invertible linear momentum preserving maps $\Phi({\varphi})$, we obtain the
transformed operator $\omega\cdot\partial_{\varphi}+A_{+}({\varphi})$ in
(3.41) which is momentum preserving.
If $A({\varphi})$ is a momentum preserving linear operator and $u$ is a quasi-periodic traveling wave according to Definition 3.1, then $A({\varphi})u$ is a quasi-periodic traveling wave.
The characterizations of the momentum preserving property in Fourier space and for pseudodifferential operators are given below (see Lemmata 3.28 and 3.29 in [7]).
###### Lemma 3.24.
A ${\varphi}$-dependent family of operators $A({\varphi})$, ${\varphi}\in{\mathbb{T}}^{\nu}$, is momentum preserving if and only if the
matrix elements of $A({\varphi})$, defined by (3.15), fulfill
$A_{j}^{j^{\prime}}(\ell)\neq
0\quad\Rightarrow\quad\vec{\jmath}\cdot\ell+j-j^{\prime}=0\,,\quad\forall\,\ell\in{\mathbb{Z}}^{\nu},\
\ j,j^{\prime}\in{\mathbb{Z}}\,.$
A pseudodifferential operator ${\rm Op}(a({\varphi},x,\xi))$ is momentum
preserving if and only if its symbol satisfies
$a({\varphi}-\vec{\jmath}\varsigma,x,\xi)=a({\varphi},x+\varsigma,\xi)$ for
any $\varsigma\in{\mathbb{R}}$.
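The Fourier-space characterization of Lemma 3.24 can be illustrated numerically: any operator supported on $\vec{\jmath}\cdot\ell+j-j^{\prime}=0$ commutes with translations in the sense of Definition 3.22. A minimal sketch with $\nu=1$, a hypothetical single tangential site $\overline{\jmath}=2$, random coefficients and a truncated mode set (the identity is exact in exact arithmetic; the tolerance only absorbs rounding):

```python
import cmath, random

jbar = 2                      # hypothetical single tangential site (nu = 1)
modes = range(-4, 5)
rng = random.Random(0)

# matrix elements A_j^{j'}(ell), supported only on jbar*ell + j - j' = 0
A = {(j, jp, ell): complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
     for j in modes for jp in modes for ell in modes
     if jbar * ell + j - jp == 0}

def apply_A(phi, u):
    # (A(phi) u)_j = sum_{j', ell} A_j^{j'}(ell) e^{i ell phi} u_{j'}
    out = {j: 0j for j in modes}
    for (j, jp, ell), c in A.items():
        out[j] += c * cmath.exp(1j * ell * phi) * u[jp]
    return out

def tau(sigma, u):
    # translation tau_sigma in Fourier: (tau_sigma u)_j = e^{i j sigma} u_j
    return {j: cmath.exp(1j * j * sigma) * u[j] for j in modes}

u = {j: complex(rng.uniform(-1, 1), rng.uniform(-1, 1)) for j in modes}
phi, sigma = 0.7, 1.3
lhs = apply_A(phi - jbar * sigma, tau(sigma, u))
rhs = tau(sigma, apply_A(phi, u))
err = max(abs(lhs[j] - rhs[j]) for j in modes)
assert err < 1e-12
print("ok")
```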
We finally note that the symplectic projections
$\Pi^{\intercal}_{{\mathbb{S}}^{+},\Sigma}$,
$\Pi^{\angle}_{{\mathbb{S}}^{+},\Sigma}$, are momentum preserving.
###### Lemma 3.25.
(Lemma 3.31 in [7]) The symplectic projections
$\Pi^{\intercal}_{{\mathbb{S}}^{+},\Sigma}$,
$\Pi^{\angle}_{{\mathbb{S}}^{+},\Sigma}$, the $L^{2}$-projections
$\Pi^{L^{2}}_{\angle}$ and $\Pi_{{\mathbb{S}}}$,
$\Pi_{{\mathbb{S}}_{0}}^{\bot}$ defined in Section 2.2 commute with the
translation operators $\tau_{\varsigma}$ defined in (2.5), i.e. are momentum
preserving.
#### Quasi-periodic traveling waves in action-angle-normal coordinates.
We now discuss how the momentum preserving condition reads in the coordinates
$(\theta,I,w)$ introduced in (2.2). Recalling (2.41), if $u({\varphi},x)$ is a
quasi-periodic traveling wave with action-angle-normal components
$(\theta({\varphi}),I({\varphi}),w({\varphi},x))$, the condition
$\tau_{\varsigma}u=u({\varphi}-\vec{\jmath}\varsigma,\cdot)$ becomes
$\begin{pmatrix}\theta({\varphi})-\vec{\jmath}\varsigma\\\ I({\varphi})\\\
\tau_{\varsigma}w({\varphi},\cdot)\end{pmatrix}=\begin{pmatrix}\theta({\varphi}-\vec{\jmath}\varsigma)\\\
I({\varphi}-\vec{\jmath}\varsigma)\\\
w({\varphi}-\vec{\jmath}\varsigma,\cdot)\end{pmatrix}\,,\quad\forall\,\varsigma\in{\mathbb{R}}\,.$
As we look for $\theta({\varphi})$ of the form $\theta({\varphi})={\varphi}+\Theta({\varphi})$, with a $(2\pi)^{\nu}$-periodic function $\Theta:{\mathbb{R}}^{\nu}\to{\mathbb{R}}^{\nu}$, ${\varphi}\mapsto\Theta({\varphi})$, the traveling wave condition becomes
$\begin{pmatrix}\Theta({\varphi})\\\ I({\varphi})\\\
\tau_{\varsigma}w({\varphi},\cdot)\end{pmatrix}=\begin{pmatrix}\Theta({\varphi}-\vec{\jmath}\varsigma)\\\
I({\varphi}-\vec{\jmath}\varsigma)\\\
w({\varphi}-\vec{\jmath}\varsigma,\cdot)\end{pmatrix}\,,\quad\forall\,\varsigma\in{\mathbb{R}}\,.$
(3.42)
###### Definition 3.26.
(Traveling wave variation) We call _traveling wave variation_ a function $g({\varphi})=(g_{1}({\varphi}),g_{2}({\varphi}),g_{3}({\varphi},\cdot))\in{\mathbb{R}}^{\nu}\times{\mathbb{R}}^{\nu}\times\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$ satisfying (3.42), i.e.
$g_{1}({\varphi})=g_{1}({\varphi}-\vec{\jmath}\varsigma)$,
$g_{2}({\varphi})=g_{2}({\varphi}-\vec{\jmath}\varsigma)$,
$\tau_{\varsigma}g_{3}({\varphi})=g_{3}({\varphi}-\vec{\jmath}\varsigma)$ for
any $\varsigma\in{\mathbb{R}}$, or equivalently
$D\vec{\tau}_{\varsigma}g({\varphi})=g({\varphi}-\vec{\jmath}\varsigma)$ for
any $\varsigma\in{\mathbb{R}}$, where $D\vec{\tau}_{\varsigma}$ is the
differential of $\vec{\tau}_{\varsigma}$, namely
$D\vec{\tau}_{\varsigma}\begin{pmatrix}\Theta\\\ I\\\
w\end{pmatrix}=\begin{pmatrix}\Theta\\\ I\\\
\tau_{\varsigma}w\end{pmatrix}\,,\quad\forall\,\varsigma\in{\mathbb{R}}\,.$
According to Definition 3.22, a linear operator acting in
${\mathbb{R}}^{\nu}\times{\mathbb{R}}^{\nu}\times\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$
is momentum preserving if
$A({\varphi}-\vec{\jmath}\varsigma)\circ
D\vec{\tau}_{\varsigma}=D\vec{\tau}_{\varsigma}\circ
A({\varphi})\,,\quad\forall\,\varsigma\in{\mathbb{R}}\,.$
If $A({\varphi})$ is a momentum preserving linear operator acting on
${\mathbb{R}}^{\nu}\times{\mathbb{R}}^{\nu}\times\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$
and
$g\in{\mathbb{R}}^{\nu}\times{\mathbb{R}}^{\nu}\times\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$
is a traveling wave variation, then $A({\varphi})g({\varphi})$ is a traveling
wave variation.
## 4 Transversality of linear frequencies
In this section we extend the KAM theory approach used in [4, 9, 2, 7] in
order to deal with the linear frequencies $\Omega_{j}(\gamma)$ defined in
(1.13), of the pure gravity water waves with constant vorticity. We use the
vorticity as a parameter. In the proof of the key transversality Proposition
4.5, it is necessary to exploit the momentum condition for avoiding
resonances. We shall also exploit that the tangential sites
${\mathbb{S}}:=\\{\,\overline{\jmath}_{1},\ldots,\overline{\jmath}_{\nu}\\}\subset{\mathbb{Z}}\setminus\\{0\\}$
defined in (2.37), have all distinct modulus
$|\overline{\jmath}_{a}|=\overline{n}_{a}$, see assumption (1.17).
We first introduce the following definition of non-degenerate function.
###### Definition 4.1.
A function
$f=(f_{1},\dots,f_{N}):[\gamma_{1},\gamma_{2}]\rightarrow{\mathbb{R}}^{N}$ is
_non-degenerate_ if, for any $c\in{\mathbb{R}}^{N}\setminus\\{0\\}$, the
scalar function $f\cdot c$ is not identically zero on the whole interval
$[\gamma_{1},\gamma_{2}]$.
From a geometric point of view, the function $f$ is non-degenerate if and only
if the image curve $f([\gamma_{1},\gamma_{2}])\subset{\mathbb{R}}^{N}$ is not
contained in any hyperplane of ${\mathbb{R}}^{N}$.
We shall use in the sequel that the maps $\gamma\mapsto\Omega_{j}(\gamma)$ are
analytic in $[\gamma_{1},\gamma_{2}]$. For any
$j\in{\mathbb{Z}}\setminus\\{0\\}$, we decompose the linear frequencies
$\Omega_{j}(\gamma)$ as
$\Omega_{j}(\gamma)=\omega_{j}(\gamma)+\frac{\gamma}{2}\frac{G_{j}(0)}{j}\,,\quad\omega_{j}(\gamma):=\sqrt{g\,G_{j}(0)+\Big{(}\frac{\gamma}{2}\frac{G_{j}(0)}{j}\Big{)}^{2}}\,,$
(4.1)
where $G_{j}(0)$ is the Dirichlet-Neumann operator defined in (1.10). Note
that $j\mapsto\omega_{j}(\gamma)$ is even in $j$, whereas the component due to the vorticity $j\mapsto\frac{\gamma}{2}\frac{G_{j}(0)}{j}$ is odd.
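The parity claim and the equivalence of (4.1) with the factorized form (4.2) below can be checked numerically, assuming illustrative values of $g,{\mathtt{h}}$ and the finite-depth formula $G_{j}(0)=|j|\tanh({\mathtt{h}}|j|)$, which is consistent with the identity used in the proof of Lemma 4.4 (an illustration only, outside the formal argument):

```python
import math

g, h = 9.81, 1.0   # illustrative gravity and depth (assumptions)

def G(j):
    # assumed finite-depth form of G_j(0)
    return abs(j) * math.tanh(h * abs(j))

def omega(j, gamma):
    # even part of the frequency, as in (4.1)
    return math.sqrt(g * G(j) + (0.5 * gamma * G(j) / j) ** 2)

def Omega(j, gamma):
    # full linear frequency (4.1)
    return omega(j, gamma) + 0.5 * gamma * G(j) / j

gamma = 0.8
for j in range(1, 6):
    # omega_j is even in j, the vorticity component is odd
    assert abs(omega(j, gamma) - omega(-j, gamma)) < 1e-12
    assert abs(0.5 * gamma * G(j) / j + 0.5 * gamma * G(-j) / (-j)) < 1e-12
    # factorized form (4.2), for j > 0 so that sgn(j) = 1
    c = 0.5 / abs(j) * math.sqrt(G(j) / g)
    fact = math.sqrt(g * G(j)) * (math.sqrt(1 + (gamma * c) ** 2) + gamma * c)
    assert abs(Omega(j, gamma) - fact) < 1e-10
print("ok")
```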
###### Lemma 4.2.
(Non-degeneracy-I) The following frequency vectors are non-degenerate:
1. 1.
$\vec{\Omega}(\gamma):=(\Omega_{j}(\gamma))_{j\in{\mathbb{S}}}\in{\mathbb{R}}^{\nu}$;
2. 2.
$\big{(}\vec{\Omega}(\gamma),1\big{)}\in{\mathbb{R}}^{\nu+1}$;
3. 3.
$\big{(}\vec{\Omega}(\gamma),\Omega_{j}(\gamma)\big{)}\in{\mathbb{R}}^{\nu+1}$,
for any
$j\in{\mathbb{Z}}\setminus\left(\\{0\\}\cup{\mathbb{S}}\cup(-{\mathbb{S}})\right)$;
4. 4.
$\big{(}\vec{\Omega}(\gamma),\Omega_{j}(\gamma),\Omega_{j^{\prime}}(\gamma)\big{)}\in{\mathbb{R}}^{\nu+2}$,
for any
$j,j^{\prime}\in{\mathbb{Z}}\setminus\left(\\{0\\}\cup{\mathbb{S}}\cup(-{\mathbb{S}})\right)$
and $|j|\neq|j^{\prime}|$.
###### Proof.
We first compute the jets of the functions $\gamma\mapsto\Omega_{j}(\gamma)$
at $\gamma=0$. Using that $G_{j}(0)=G_{|j|}(0)>0$, see (1.10), we write (4.1)
as
$\Omega_{j}(\gamma)=\sqrt{g\,G_{|j|}(0)}\left(\sqrt{1+\gamma^{2}{\mathtt{c}}_{j}^{2}}+\gamma{\rm
sgn}(j){\mathtt{c}}_{j}\right)\,,\quad{\mathtt{c}}_{j}:=\frac{1}{2|j|}\,\sqrt{\frac{G_{|j|}(0)}{g}}\,,$
(4.2)
for any $j\in{\mathbb{Z}}\setminus\\{0\\}$. Each function $\gamma\mapsto\sqrt{1+\gamma^{2}{\mathtt{c}}_{j}^{2}}+\gamma\,{\rm sgn}(j){\mathtt{c}}_{j}$ is real analytic on the whole real line ${\mathbb{R}}$ and, in a neighborhood of $\gamma=0$, we have the power series expansion
$\displaystyle\Omega_{j}(\gamma)$
$\displaystyle=\sqrt{g\,G_{|j|}(0)}\Big{(}1+\ \sum_{n\geq
1}a_{n}(\gamma^{2}{\mathtt{c}}_{j}^{2})^{n}+\gamma\,{\rm
sgn}(j){\mathtt{c}}_{j}\Big{)}$ (4.3)
$\displaystyle=\sqrt{g\,G_{|j|}(0)}+\frac{{\rm
sgn}(j)}{2}\frac{G_{|j|}(0)}{|j|}\gamma+\sum_{n\geq
1}\frac{a_{n}}{g^{n-\frac{1}{2}}2^{2n}}\frac{(G_{|j|}(0))^{n+\frac{1}{2}}}{|j|^{2n}}\gamma^{2n}$
where $a_{n}:=\binom{1/2}{n}\neq 0$ for any $n\geq 1$ are binomial
coefficients. From (4.3), we deduce that, for any
$j\in{\mathbb{Z}}\setminus\\{0\\}$, for any $n\geq 1$,
$\partial_{\gamma}^{2n}\Omega_{j}(0)=b_{2n}g_{j}\Big{(}\frac{G_{|j|}(0)}{|j|^{2}}\Big{)}^{n}\quad{\rm
with}\quad g_{j}:=\sqrt{g\,G_{|j|}(0)}>0\,,\
b_{2n}:=\frac{(2n)!\,a_{n}}{g^{n}2^{2n}}\neq 0\,.$ (4.4)
We now prove that, for any $N$ and integers
$1\leq|j_{1}|<|j_{2}|<\ldots<|j_{N}|$, the function
$[\gamma_{1},\gamma_{2}]\ni\gamma\mapsto(\Omega_{j_{1}}(\gamma),...,\Omega_{j_{N}}(\gamma))\in{\mathbb{R}}^{N}$
is non-degenerate according to Definition 4.1. Suppose, by contradiction, that
$(\Omega_{j_{1}}(\gamma),...,\Omega_{j_{N}}(\gamma))$ is degenerate, i.e.
there exists $c\in{\mathbb{R}}^{N}\setminus\\{0\\}$ such that
$c_{1}\Omega_{j_{1}}(\gamma)+...+c_{N}\Omega_{j_{N}}(\gamma)=0\quad\forall\,\gamma\in[\gamma_{1},\gamma_{2}]\,,$
(4.5)
hence, by analyticity, it is identically zero for any $\gamma\in{\mathbb{R}}$.
Differentiating (4.5) $2n$ times, for $n=1,\ldots,N$, we get
$\begin{cases}c_{1}(\partial_{\gamma}^{2}\Omega_{j_{1}})(\gamma)+...+c_{N}(\partial_{\gamma}^{2}\Omega_{j_{N}})(\gamma)=0\\\ \vdots\\\ c_{1}(\partial_{\gamma}^{2N}\Omega_{j_{1}})(\gamma)+...+c_{N}(\partial_{\gamma}^{2N}\Omega_{j_{N}})(\gamma)=0\,.\end{cases}$
As a consequence the $N\times N$ matrix
${\mathcal{A}}(\gamma):=\begin{pmatrix}(\partial_{\gamma}^{2}\Omega_{j_{1}})(\gamma)&\cdots&(\partial_{\gamma}^{2}\Omega_{j_{N}})(\gamma)\\\
\vdots&\ddots&\vdots\\\
(\partial_{\gamma}^{2N}\Omega_{j_{1}})(\gamma)&\cdots&(\partial_{\gamma}^{2N}\Omega_{j_{N}})(\gamma)\end{pmatrix}$
(4.6)
is singular for any $\gamma\in{\mathbb{R}}$ and
$\det{\mathcal{A}}(\gamma)=0\quad\forall\,\gamma\in{\mathbb{R}}\ .$ (4.7)
In particular, at $\gamma=0$ we have $\det{\mathcal{A}}(0)=0$. On the other
hand, by (4.4) and the multi-linearity of the determinant, we compute
$\det{\mathcal{A}}(0)=b_{2}...b_{2N}\prod_{a=1}^{N}g_{j_{a}}f(j_{a})\,\det\begin{pmatrix}1&\cdots&1\\\
f(j_{1})&\cdots&f(j_{N})\\\ \vdots&\ddots&\vdots\\\
f(j_{1})^{N-1}&\cdots&f(j_{N})^{N-1}\end{pmatrix}\,,\
f(j):=\frac{G_{|j|}(0)}{|j|^{2}}\,.$ (4.8)
This is a Vandermonde determinant, which is therefore given by
$\det{\mathcal{A}}(0)=b_{2}...b_{2N}\prod_{a=1}^{N}g_{j_{a}}f(j_{a})\,\prod_{1\leq
p<q\leq N}(f(j_{q})-f(j_{p}))\,.$ (4.9)
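The factorization (4.8) and the Vandermonde evaluation (4.9) can be verified numerically for a small choice of sites (an illustration only; the values of $g,{\mathtt{h}}$ and the finite-depth form of $G_{|j|}(0)$ are assumptions):

```python
import math
from functools import reduce

g, h = 9.81, 1.0   # illustrative gravity and depth (assumptions)

def G(j):
    # assumed finite-depth form of G_{|j|}(0)
    return abs(j) * math.tanh(h * abs(j))

def f(j):
    # f(j) = G_{|j|}(0) / |j|^2
    return G(j) / j ** 2

def binom_half(n):
    # binomial coefficient (1/2 choose n)
    return reduce(lambda p, k: p * (0.5 - k) / (k + 1), range(n), 1.0)

def det(M):
    # Laplace expansion, fine for small matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([r[:c] + r[c + 1:] for r in M[1:]])
               for c in range(len(M)))

js = [1, 2, 4]                      # sites with distinct moduli 1 <= |j_1| < |j_2| < |j_3|
N = len(js)
b = {2 * n: math.factorial(2 * n) * binom_half(n) / (g ** n * 4 ** n)
     for n in range(1, N + 1)}      # b_{2n} from (4.4)
gj = {j: math.sqrt(g * G(j)) for j in js}

# matrix A(0) of (4.6), entries from (4.4)
A0 = [[b[2 * n] * gj[j] * f(j) ** n for j in js] for n in range(1, N + 1)]
lhs = det(A0)
prod = lambda vals: reduce(lambda p, q: p * q, vals, 1.0)
# right-hand side of (4.9)
rhs = prod(b[2 * n] for n in range(1, N + 1)) * prod(gj[j] * f(j) for j in js) \
    * prod(f(js[q]) - f(js[p]) for p in range(N) for q in range(p + 1, N))
assert abs(lhs - rhs) < 1e-12 * abs(rhs)
print("ok")
```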
Note that the function $f(j)=|j|^{-2}G_{|j|}(0)>0$ is even in $j\in{\mathbb{Z}}\setminus\\{0\\}$. We claim that the function $f(j)$ is strictly monotone in $j>0$, from which, together with (4.4) and the assumption $1\leq|j_{1}|<...<|j_{N}|$, we obtain $\det{\mathcal{A}}(0)\neq 0$, in contradiction with (4.7).
We now prove the strict monotonicity of the function $f:(0,+\infty)\to(0,+\infty)$,
$f(y):=y^{-2}G_{y}(0)\stackrel{(1.10)}{=}\begin{cases}y^{-1}\tanh({\mathtt{h}}y)&\text{ if }{\mathtt{h}}<+\infty\\\ y^{-1}&\text{ if }{\mathtt{h}}=+\infty\,.\end{cases}$
For ${\mathtt{h}}=+\infty$ the function $f(y)=y^{-1}$ is trivially monotone
decreasing. We then consider the case ${\mathtt{h}}<+\infty$, when
$f(y)=y^{-1}\tanh({\mathtt{h}}y)$. We compute
$\partial_{y}f(y)=y^{-2}\big{(}-\tanh({\mathtt{h}}y)+{\mathtt{h}}y(1-\tanh^{2}({\mathtt{h}}y))\big{)}=y^{-2}g({\mathtt{h}}y)\,,$
where $g(x):=-\tanh(x)+x(1-\tanh^{2}(x))$. Then $\partial_{y}f(y)<0$ for any
$y>0$ if and only if $g(x)<0$ for any $x>0$. We note that $\lim_{x\to
0^{+}}g(x)=0$, $\lim_{x\to+\infty}g(x)=-1$ and $g(x)$ is monotone decreasing
for $x>0$ because $\partial_{x}g(x)=-2x\tanh(x)(1-\tanh^{2}(x))<0$,
$\forall\,x>0$.
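The sign and monotonicity of $g(x)=-\tanh(x)+x(1-\tanh^{2}(x))$ can also be confirmed numerically on a grid (an illustration, not a substitute for the derivative computation above):

```python
import math

def gfun(x):
    # g(x) = -tanh(x) + x (1 - tanh(x)^2), as in the proof of Lemma 4.2
    t = math.tanh(x)
    return -t + x * (1.0 - t * t)

xs = [0.01 * k for k in range(1, 1001)]   # grid on (0, 10]
assert all(gfun(x) < 0.0 for x in xs)                                   # g < 0
assert all(gfun(xs[k + 1]) <= gfun(xs[k]) for k in range(len(xs) - 1))  # nonincreasing
print("ok")
```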
We have proved items 1, 3 and 4 of the Lemma. We now show item 2, proving that
the function
$[\gamma_{1},\gamma_{2}]\ni\gamma\mapsto(1,\Omega_{\overline{\jmath}_{1}}(\gamma),...,\Omega_{\overline{\jmath}_{\nu}}(\gamma))$
is non-degenerate according to Definition 4.1. By contradiction, suppose that
there exists
$c=(c_{0},c_{1},...,c_{\nu})\in{\mathbb{R}}^{\nu+1}\setminus\\{0\\}$ such that
$c_{0}+c_{1}\Omega_{\overline{\jmath}_{1}}(\gamma)+...+c_{\nu}\Omega_{\overline{\jmath}_{\nu}}(\gamma)=0\quad\forall\,\gamma\in[\gamma_{1},\gamma_{2}]\,,$
(4.10)
and thus, by analyticity, for all $\gamma\in{\mathbb{R}}$. Differentiating
(4.10) with respect to $\gamma$ we find that the $(\nu+1)\times(\nu+1)$-matrix
${\mathcal{B}}(\gamma):=\begin{pmatrix}1&\Omega_{\overline{\jmath}_{1}}(\gamma)&\cdots&\Omega_{\overline{\jmath}_{\nu}}(\gamma)\\\
0&(\partial_{\gamma}^{2}\Omega_{\overline{\jmath}_{1}})(\gamma)&\cdots&(\partial_{\gamma}^{2}\Omega_{\overline{\jmath}_{\nu}})(\gamma)\\\
\vdots&\vdots&\ddots&\vdots\\\
0&(\partial_{\gamma}^{2\nu}\Omega_{\overline{\jmath}_{1}})(\gamma)&\cdots&(\partial_{\gamma}^{2\nu}\Omega_{\overline{\jmath}_{\nu}})(\gamma)\end{pmatrix}$
(4.11)
is singular for all $\gamma\in{\mathbb{R}}$, and so
$\det{\mathcal{B}}(\gamma)=0$ for all $\gamma\in{\mathbb{R}}$. By the
structure of the matrix (4.11), we get that
$\det{\mathcal{B}}(\gamma)=\det{\mathcal{A}}(\gamma)$, where the matrix
${\mathcal{A}}(\gamma)$ is given in (4.6), with $N=\nu$ and
$j_{p}=\overline{\jmath}_{p}$ for any $p=1,..,\nu$. We have already proved
that $\det{\mathcal{A}}(0)\neq 0$ and this gives the claimed contradiction. ∎
Note that in items 3 and 4 of Lemma 4.2 we require that $j$ and $j^{\prime}$ do not belong to $\\{0\\}\cup{\mathbb{S}}\cup(-{\mathbb{S}})$. In order to deal, in Proposition 4.5, with the case when $j$ and $j^{\prime}$ belong to $-{\mathbb{S}}$, we also need the following lemma. It is actually a direct consequence of the proof of Lemma 4.2, noting that $\Omega_{j}(\gamma)-\omega_{j}(\gamma)$ is linear in $\gamma$ (cf. (4.1)), so that its derivatives of order two and higher identically vanish.
###### Lemma 4.3.
(Non-degeneracy-II) Let
$\vec{\omega}(\gamma):=\left(\omega_{\overline{\jmath}_{1}}(\gamma),\ldots,\omega_{\overline{\jmath}_{\nu}}(\gamma)\right)$.
The following vectors are non-degenerate:
1. 1.
$(\vec{\omega}(\gamma),\gamma)\in{\mathbb{R}}^{\nu+1}$;
2. 2.
$\left(\vec{\omega}(\gamma),\omega_{j}(\gamma),\gamma\right)\in{\mathbb{R}}^{\nu+2}$
for any
$j\in{\mathbb{Z}}\setminus\left(\\{0\\}\cup{\mathbb{S}}\cup(-{\mathbb{S}})\right)$.
For later use, we provide the following asymptotic estimate of the linear
frequencies.
###### Lemma 4.4.
(Asymptotics) For any $j\in{\mathbb{Z}}\setminus\\{0\\}$ we have
$\omega_{j}(\gamma)=\sqrt{g}|j|^{\frac{1}{2}}+\frac{c_{j}(\gamma)}{\sqrt{g}|j|^{\frac{1}{2}}}\,,$
(4.12)
where, for any $n\in{\mathbb{N}}_{0}$, there exists a constant
$C_{n,{\mathtt{h}}}>0$ such that
$\sup_{j\in{\mathbb{Z}}\setminus\\{0\\}\atop\gamma\in[\gamma_{1},\gamma_{2}]}|\partial_{\gamma}^{n}c_{j}(\gamma)|\leq
C_{n,{\mathtt{h}}}\,.$ (4.13)
###### Proof.
By (4.1), we deduce (4.12) with
$c_{j}(\gamma):=\frac{g|j|\big{(}\frac{G_{|j|}(0)}{|j|}-1\big{)}+\big{(}\frac{\gamma}{2}\frac{G_{|j|}(0)}{|j|}\big{)}^{2}}{1+\sqrt{\frac{G_{|j|}(0)}{|j|}+\frac{1}{g|j|}\Big{(}\frac{\gamma}{2}\frac{G_{|j|}(0)}{|j|}\Big{)}^{2}}}\,.$
(4.14)
The bounds (4.13) follow by exploiting the identity $\frac{G_{|j|}(0)}{|j|}-1=-\frac{2}{1+e^{2{\mathtt{h}}|j|}}$, which follows from (1.10). ∎
The next proposition is the main result of this section. We recall that $\vec{\jmath}=(\overline{\jmath}_{1},\ldots,\overline{\jmath}_{\nu})$ denotes the vector in ${\mathbb{Z}}^{\nu}\setminus\\{0\\}$ of tangential sites, cf. (2.42) and (2.37). We also recall that ${\mathbb{S}}_{0}^{c}={\mathbb{Z}}\setminus({\mathbb{S}}\cup\\{0\\})$.
###### Proposition 4.5.
(Transversality) There exist $m_{0}\in{\mathbb{N}}$ and $\rho_{0}>0$ such
that, for any $\gamma\in[\gamma_{1},\gamma_{2}]$, the following hold:
$\displaystyle\max_{0\leq n\leq
m_{0}}|\partial_{\gamma}^{n}\vec{\Omega}(\gamma)\cdot\ell|\geq\rho_{0}\braket{\ell}\,,\quad\forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,;$
(4.15) $\displaystyle\begin{cases}\max\limits_{0\leq n\leq
m_{0}}|\partial_{\gamma}^{n}\,(\vec{\Omega}(\gamma)\cdot\ell+\Omega_{j}(\gamma))|\geq\rho_{0}\braket{\ell}\\\
\vec{\jmath}\cdot\ell+j=0\,,\quad\ell\in{\mathbb{Z}}^{\nu}\,,\
j\in{\mathbb{S}}_{0}^{c}\,;\end{cases}$ (4.16)
$\displaystyle\begin{cases}\max\limits_{0\leq n\leq
m_{0}}|\partial_{\gamma}^{n}\,(\vec{\Omega}(\gamma)\cdot\ell+\Omega_{j}(\gamma)-\Omega_{j^{\prime}}(\gamma))|\geq\rho_{0}\braket{\ell}\\\
\vec{\jmath}\cdot\ell+j-j^{\prime}=0\,,\quad\ell\in{\mathbb{Z}}^{\nu}\,,\
j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\,,\
(\ell,j,j^{\prime})\neq(0,j,j)\,;\end{cases}$ (4.17)
$\displaystyle\begin{cases}\max\limits_{0\leq n\leq
m_{0}}|\partial_{\gamma}^{n}\,(\vec{\Omega}(\gamma)\cdot\ell+\Omega_{j}(\gamma)+\Omega_{j^{\prime}}(\gamma))|\geq\rho_{0}\braket{\ell}\\\
\vec{\jmath}\cdot\ell+j+j^{\prime}=0\,,\ \ell\in{\mathbb{Z}}^{\nu}\,,\
j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\,.\end{cases}$ (4.18)
We call $\rho_{0}$ the amount of non-degeneracy and $m_{0}$ the index of non-
degeneracy.
###### Proof.
We prove (4.15)-(4.18) separately. For brevity, set $\Gamma:=[\gamma_{1},\gamma_{2}]$.
Proof of (4.15). By contradiction, assume that for any $m\in{\mathbb{N}}$
there exist $\gamma_{m}\in\Gamma$ and
$\ell_{m}\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$ such that
$\Big{|}\partial_{\gamma}^{n}\vec{\Omega}(\gamma_{m})\cdot\frac{\ell_{m}}{\braket{\ell_{m}}}\Big{|}<\frac{1}{\braket{m}}\,,\quad\forall\,0\leq
n\leq m\,.$ (4.19)
The sequences $(\gamma_{m})_{m\in{\mathbb{N}}}\subset\Gamma$ and
$(\ell_{m}/\braket{\ell_{m}})_{m\in{\mathbb{N}}}\subset{\mathbb{R}}^{\nu}\setminus\\{0\\}$
are both bounded. By compactness, up to subsequences
$\gamma_{m}\to\overline{\gamma}\in\Gamma$ and
$\ell_{m}/\braket{\ell_{m}}\rightarrow\overline{c}\neq 0$. Therefore, for any
$n\in{\mathbb{N}}_{0}$, passing to the limit for $m\rightarrow+\infty$ in
(4.19), we get
$\partial_{\gamma}^{n}\vec{\Omega}(\overline{\gamma})\cdot\overline{c}=0$. By
the analyticity of $\vec{\Omega}(\gamma)$, we deduce that the function
$\gamma\mapsto\vec{\Omega}(\gamma)\cdot\overline{c}$ is identically zero on
$\Gamma$, which contradicts Lemma 4.2-1, since $\overline{c}\neq 0$.
Proof of (4.16). By contradiction, assume that, for any $m\in{\mathbb{N}}$,
there exist $\gamma_{m}\in\Gamma$, $\ell_{m}\in{\mathbb{Z}}^{\nu}$ and
$j_{m}\in{\mathbb{S}}_{0}^{c}$, such that, for any $n\in{\mathbb{N}}_{0}$ with
$n\leq m$,
$\begin{cases}\big{|}\partial_{\gamma}^{n}\big{(}\vec{\Omega}(\gamma)\cdot\frac{\ell_{m}}{\braket{\ell_{m}}}+\frac{1}{\braket{\ell_{m}}}\Omega_{j_{m}}(\gamma)\big{)}_{|\gamma=\gamma_{m}}\big{|}<\frac{1}{\braket{m}}\\\
\vec{\jmath}\cdot\ell_{m}+j_{m}=0\,.\end{cases}$ (4.20)
Up to subsequences $\gamma_{m}\rightarrow\overline{\gamma}\in\Gamma$ and
$\ell_{m}/\braket{\ell_{m}}\rightarrow\overline{c}\in{\mathbb{R}}^{\nu}$.
Step 1. We consider first the case when the sequence
$(\ell_{m})_{m\in{\mathbb{N}}}\subset{\mathbb{Z}}^{\nu}$ is bounded. Up to
subsequences, we eventually have
$\ell_{m}=\overline{\ell}\in{\mathbb{Z}}^{\nu}$. Moreover, since $j_{m}$ and
$\ell_{m}$ satisfy the momentum restriction
$\vec{\jmath}\cdot\ell_{m}+j_{m}=0$ also the sequence
$(j_{m})_{m\in{\mathbb{N}}}$ is bounded and, up to subsequences, eventually
$j_{m}=\overline{\jmath}\in{\mathbb{S}}_{0}^{c}$. Therefore, for any
$n\in{\mathbb{N}}_{0}$, taking $m\rightarrow\infty$ in (4.20) we obtain
$\partial_{\gamma}^{n}\big{(}\vec{\Omega}(\gamma)\cdot\overline{\ell}+\Omega_{\overline{\jmath}}(\gamma)\big{)}_{|\gamma=\overline{\gamma}}=0\
,\
\forall\,n\in{\mathbb{N}}_{0}\,,\quad\vec{\jmath}\cdot\overline{\ell}+\overline{\jmath}=0\,.$
By analyticity this implies
$\vec{\Omega}(\gamma)\cdot\overline{\ell}+\Omega_{\overline{\jmath}}(\gamma)=0\,,\
\forall\,\gamma\in\Gamma\,,\quad\vec{\jmath}\cdot\overline{\ell}+\overline{\jmath}=0\,.$
(4.21)
We distinguish two cases:
* •
Let $\overline{\jmath}\notin-{\mathbb{S}}$. By (4.21) the vector
$\big{(}\vec{\Omega}(\gamma),\Omega_{\overline{\jmath}}(\gamma)\big{)}$ is
degenerate according to Definition 4.1 with $c:=(\overline{\ell},1)\neq 0$.
This contradicts Lemma 4.2-3.
* •
Let $\overline{\jmath}\in-{\mathbb{S}}$. With no loss of generality suppose
$\overline{\jmath}=-\overline{\jmath}_{1}$. Then, denoting
$\overline{\ell}=(\overline{\ell}_{1},\ldots,\overline{\ell}_{\nu})$,
(4.21) reads, for any $\gamma\in\Gamma$,
$(\overline{\ell}_{1}+1)\omega_{\overline{\jmath}_{1}}(\gamma)+\sum_{a=2}^{\nu}\overline{\ell}_{a}\omega_{\overline{\jmath}_{a}}(\gamma)+\frac{\gamma}{2}\Big{(}(\overline{\ell}_{1}-1)\frac{G_{\overline{\jmath}_{1}}(0)}{\overline{\jmath}_{1}}+\sum_{a=2}^{\nu}\overline{\ell}_{a}\frac{G_{\overline{\jmath}_{a}}(0)}{\overline{\jmath}_{a}}\Big{)}=0\,.$
By Lemma 4.3-1 the vector $(\vec{\omega}(\gamma),\gamma)$ is non-degenerate.
Therefore $\overline{\ell}_{1}=-1$ and $\overline{\ell}_{a}=0$ for any
$a=2,\ldots,\nu$, and
$-2\frac{G_{\overline{\jmath}_{1}}(0)}{\overline{\jmath}_{1}}=0$, which is a
contradiction.
Step 2. We consider now the case when the sequence
$(\ell_{m})_{m\in{\mathbb{N}}}$ is unbounded. Up to subsequences
$\ell_{m}\rightarrow\infty$ as $m\rightarrow\infty$ and
$\lim_{m\rightarrow\infty}\ell_{m}/\braket{\ell_{m}}=:\overline{c}\neq 0$. By
(4.1), Lemma 4.4, (1.10), and since the momentum condition implies
$|j_{m}|^{\frac{1}{2}}=|\vec{\jmath}\cdot\ell_{m}|^{\frac{1}{2}}\leq
C|\ell_{m}|^{\frac{1}{2}}$, we deduce, for any $n\in{\mathbb{N}}_{0}$,
$\begin{split}\partial_{\gamma}^{n}\frac{1}{\braket{\ell_{m}}}\Omega_{j_{m}}(\gamma_{m})&=\partial_{\gamma}^{n}\Big{(}\frac{1}{\braket{\ell_{m}}}\sqrt{g}\left|j_{m}\right|^{\frac{1}{2}}+\frac{c_{j_{m}}(\gamma)}{\braket{\ell_{m}}\sqrt{g}\left|j_{m}\right|^{\frac{1}{2}}}+\frac{\gamma}{2\braket{\ell_{m}}}\frac{G_{j_{m}}(0)}{j_{m}}\Big{)}_{|\gamma=\gamma_{m}}\to
0\end{split}$
for $m\rightarrow\infty$. Therefore (4.20) becomes, in the limit
$m\rightarrow\infty$,
$\partial_{\gamma}^{n}\vec{\Omega}(\overline{\gamma})\cdot\overline{c}\,=0$
for any $n\in{\mathbb{N}}_{0}$. By analyticity, this implies
$\vec{\Omega}(\gamma)\cdot\overline{c}=0$ for any $\gamma\in\Gamma$,
contradicting the non-degeneracy of $\vec{\Omega}(\gamma)$ in Lemma 4.2-1,
since $\overline{c}\neq 0$.
Proof of (4.17). We may assume $j\neq j^{\prime}$, since for $j=j^{\prime}$
condition (4.17) reduces to (4.15). By contradiction, we assume
that, for any $m\in{\mathbb{N}}$, there exist $\gamma_{m}\in\Gamma$,
$\ell_{m}\in{\mathbb{Z}}^{\nu}$ and
$j_{m},j_{m}^{\prime}\in{\mathbb{S}}_{0}^{c}$,
$(\ell_{m},j_{m},j_{m}^{\prime})\neq(0,j_{m},j_{m})$, such that, for any
$0\leq n\leq m$,
$\begin{cases}\big{|}\partial_{\gamma}^{n}\big{(}\vec{\Omega}(\gamma)\cdot\frac{\ell_{m}}{\braket{\ell_{m}}}+\frac{1}{\braket{\ell_{m}}}\big{(}\Omega_{j_{m}}(\gamma)-\Omega_{j_{m}^{\prime}}(\gamma)\big{)}\big{)}_{|\gamma=\gamma_{m}}\big{|}<\frac{1}{\braket{m}}\\\
\vec{\jmath}\cdot\ell_{m}+j_{m}-j_{m}^{\prime}=0\,.\end{cases}$ (4.22)
We have that $\ell_{m}\neq 0$, otherwise, by the momentum condition
$j_{m}=j_{m}^{\prime}$. Up to subsequences
$\gamma_{m}\rightarrow\overline{\gamma}\in\Gamma$ and
$\ell_{m}/\braket{\ell_{m}}\rightarrow\overline{c}\in{\mathbb{R}}^{\nu}\setminus\\{0\\}$.
Step 1. We start with the case when
$(\ell_{m})_{m\in{\mathbb{N}}}\subset{\mathbb{Z}}^{\nu}$ is bounded. Up to
subsequences, we eventually have
$\ell_{m}=\overline{\ell}\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$. The sequences
$(j_{m})_{m\in{\mathbb{N}}}$ and $(j^{\prime}_{m})_{m\in{\mathbb{N}}}$ may be
bounded or unbounded. Up to subsequences, we consider the different cases:
Case (a). $|j_{m}|,|j^{\prime}_{m}|\to+\infty$ for $m\to\infty$. We have that
$j_{m}\cdot j^{\prime}_{m}>0$, because, otherwise,
$|j_{m}-j_{m}^{\prime}|=|j_{m}|+|j_{m}^{\prime}|\to+\infty$ contradicting that
$|j_{m}-j_{m}^{\prime}|=|\vec{\jmath}\cdot\ell_{m}|\leq C$. Recalling (1.10)
we have, for any $j\cdot j^{\prime}>0$, that
$\Big{|}\frac{G_{j}(0)}{j}-\frac{G_{j^{\prime}}(0)}{j^{\prime}}\Big{|}\leq|{\rm
sgn}(j)-{\rm
sgn}(j^{\prime})|+\frac{2e^{-2{\mathtt{h}}|j|}}{1+e^{-2{\mathtt{h}}|j|}}+\frac{2e^{-2{\mathtt{h}}|j^{\prime}|}}{1+e^{-2{\mathtt{h}}|j^{\prime}|}}\leq
C_{\mathtt{h}}\Big{(}\frac{1}{|j|^{\frac{1}{2}}}+\frac{1}{|j^{\prime}|^{\frac{1}{2}}}\Big{)}\,.$
(4.23)
Moreover, by the momentum condition
$\vec{\jmath}\cdot\ell_{m}+j_{m}-j_{m}^{\prime}=0$, we deduce
$|\sqrt{|j_{m}|}-\sqrt{|j_{m}^{\prime}|}|=\frac{||j_{m}|-|j_{m}^{\prime}||}{\sqrt{|j_{m}|}+\sqrt{|j_{m}^{\prime}|}}\leq\frac{|j_{m}-j_{m}^{\prime}|}{\sqrt{|j_{m}|}+\sqrt{|j_{m}^{\prime}|}}\leq\frac{C|\ell_{m}|}{\sqrt{|j_{m}|}+\sqrt{|j_{m}^{\prime}|}}\,.$
(4.24)
By (4.1), Lemma 4.4, $j_{m}\cdot j^{\prime}_{m}>0$, (4.23), (4.24), we
conclude that
$\displaystyle\partial_{\gamma}^{n}(\Omega_{j_{m}}(\gamma)-\Omega_{j_{m}^{\prime}}(\gamma))$
$\displaystyle=\sqrt{g}\partial_{\gamma}^{n}\big{(}\sqrt{|j_{m}|}-\sqrt{|j_{m}^{\prime}|}\big{)}$
$\displaystyle+\partial_{\gamma}^{n}\Big{(}\frac{c_{j_{m}}(\gamma)}{\sqrt{g}|j_{m}|^{\frac{1}{2}}}-\frac{c_{j_{m}^{\prime}}(\gamma)}{\sqrt{g}|j_{m}^{\prime}|^{\frac{1}{2}}}+\frac{\gamma}{2}\Big{(}\frac{G_{j_{m}}(0)}{j_{m}}-\frac{G_{j_{m}^{\prime}}(0)}{j_{m}^{\prime}}\Big{)}\Big{)}\to
0$
as $m\to+\infty$. Passing to the limit in (4.22), we obtain
$\partial_{\gamma}^{n}\\{\vec{\Omega}(\gamma)\cdot\overline{\ell}\\}_{|\gamma=\overline{\gamma}}=0$
for any $n\in{\mathbb{N}}_{0}$. Hence the analytic function
$\gamma\mapsto\vec{\Omega}(\gamma)\cdot\overline{\ell}$ is identically zero,
contradicting Lemma 4.2-1, since $\overline{\ell}\neq 0$.
Case (b). $(j_{m})_{m\in{\mathbb{N}}}$ is bounded and
$|j_{m}^{\prime}|\to\infty$ (or vice versa): this case is excluded by the
momentum condition $\vec{\jmath}\cdot\ell_{m}+j_{m}-j_{m}^{\prime}=0$ in
(4.22) and since $(\ell_{m})$ is bounded.
Case (c). Both $(j_{m})_{m\in{\mathbb{N}}}$,
$(j_{m}^{\prime})_{m\in{\mathbb{N}}}$ are bounded: we eventually have
$j_{m}=\overline{\jmath}$ and $j_{m}^{\prime}=\overline{\jmath}^{\prime}$,
with $\overline{\jmath},\overline{\jmath}^{\prime}\in{\mathbb{S}}_{0}^{c}$
and, since $j_{m}\neq j_{m}^{\prime}$,
$\overline{\jmath}\neq\overline{\jmath}^{\prime}\,.$ (4.25)
Therefore (4.22) becomes, in the limit $m\rightarrow\infty$,
$\partial_{\gamma}^{n}\big{(}\vec{\Omega}(\gamma)\cdot\overline{\ell}+\Omega_{\overline{\jmath}}(\gamma)-\Omega_{\overline{\jmath}^{\prime}}(\gamma)\big{)}_{|\gamma=\overline{\gamma}}=0\,,\
\forall\,n\in{\mathbb{N}}_{0}\,,\quad\vec{\jmath}\cdot\overline{\ell}+\overline{\jmath}-\overline{\jmath}^{\prime}=0\,.$
By analyticity, we obtain that
$\vec{\Omega}(\gamma)\cdot\overline{\ell}+\Omega_{\overline{\jmath}}(\gamma)-\Omega_{\overline{\jmath}^{\prime}}(\gamma)=0\quad\forall\,\gamma\in\Gamma\,,\quad\vec{\jmath}\cdot\overline{\ell}+\overline{\jmath}-\overline{\jmath}^{\prime}=0\,.$
(4.26)
We distinguish several cases:
* •
Let $\overline{\jmath},\overline{\jmath}^{\prime}\notin-{\mathbb{S}}$ and
$|\overline{\jmath}|\neq|\overline{\jmath}^{\prime}|$. By (4.26) the vector
$(\vec{\Omega}(\gamma),\Omega_{\overline{\jmath}}(\gamma),\Omega_{\overline{\jmath}^{\prime}}(\gamma))$
is degenerate with $c:=(\overline{\ell},1,-1)\neq 0$, contradicting Lemma
4.2-4.
* •
Let $\overline{\jmath},\overline{\jmath}^{\prime}\notin-{\mathbb{S}}$ and
$\overline{\jmath}^{\prime}=-\overline{\jmath}$. In view of (4.1), the first
equation in (4.26) becomes
$\vec{\omega}(\gamma)\cdot\overline{\ell}+\frac{\gamma}{2}\Big{(}\sum_{a=1}^{\nu}\overline{\ell}_{a}\frac{G_{\overline{\jmath}_{a}}(0)}{\overline{\jmath}_{a}}+2\frac{G_{\overline{\jmath}}(0)}{\overline{\jmath}}\Big{)}=0\quad\forall\gamma\in\Gamma\,.$
By Lemma 4.3-1 the vector $(\vec{\omega}(\gamma),\gamma)$ is non-degenerate,
thus $\overline{\ell}=0$ and
$2\frac{G_{\overline{\jmath}}(0)}{\overline{\jmath}}=0$, which is a
contradiction.
* •
Let $\overline{\jmath}^{\prime}\notin-{\mathbb{S}}$ and
$\overline{\jmath}\in-{\mathbb{S}}$. With no loss of generality suppose
$\overline{\jmath}=-\overline{\jmath}_{1}$. In view of (4.1), the first
equation in (4.26) implies that, for any $\gamma\in\Gamma$,
$(\overline{\ell}_{1}+1)\omega_{\overline{\jmath}_{1}}(\gamma)+\sum_{a=2}^{\nu}\overline{\ell}_{a}\omega_{\overline{\jmath}_{a}}(\gamma)-\omega_{\overline{\jmath}^{\prime}}(\gamma)+\frac{\gamma}{2}\Big{(}(\overline{\ell}_{1}-1)\frac{G_{\overline{\jmath}_{1}}(0)}{\overline{\jmath}_{1}}+\sum_{a=2}^{\nu}\overline{\ell}_{a}\frac{G_{\overline{\jmath}_{a}}(0)}{\overline{\jmath}_{a}}-\frac{G_{\overline{\jmath}^{\prime}}(0)}{\overline{\jmath}^{\prime}}\Big{)}=0\,.$
By Lemma 4.3-2 the vector
$\big{(}\vec{\omega}(\gamma),\omega_{\overline{\jmath}^{\prime}}(\gamma),\gamma\big{)}$
is non-degenerate, which is a contradiction.
* •
Last, let $\overline{\jmath},\overline{\jmath}^{\prime}\in-{\mathbb{S}}$ and
$\overline{\jmath}\neq\overline{\jmath}^{\prime}$, by (4.25). With no loss of
generality suppose $\overline{\jmath}=-\overline{\jmath}_{1}$ and
$\overline{\jmath}^{\prime}=-\overline{\jmath}_{2}$. Then the first equation
in (4.26) reads, for any $\gamma\in\Gamma$,
$\displaystyle(\overline{\ell}_{1}+1)\omega_{\overline{\jmath}_{1}}(\gamma)+\left(\overline{\ell}_{2}-1\right)\omega_{\overline{\jmath}_{2}}(\gamma)+\sum_{a=3}^{\nu}\overline{\ell}_{a}\omega_{\overline{\jmath}_{a}}(\gamma)$
$\displaystyle\ \ \ \
+\frac{\gamma}{2}\Big{(}(\overline{\ell}_{1}-1)\frac{G_{\overline{\jmath}_{1}}(0)}{\overline{\jmath}_{1}}+(\overline{\ell}_{2}+1)\frac{G_{\overline{\jmath}_{2}}(0)}{\overline{\jmath}_{2}}+\sum_{a=3}^{\nu}\overline{\ell}_{a}\frac{G_{\overline{\jmath}_{a}}(0)}{\overline{\jmath}_{a}}\Big{)}=0\,.$
Since the vector $(\vec{\omega}(\gamma),\gamma)$ is non-degenerate by Lemma
4.3-1, it implies $\overline{\ell}_{1}=-1$, $\overline{\ell}_{2}=1$,
$\overline{\ell}_{3}=\ldots=\overline{\ell}_{\nu}=0$. Inserting these values
in the momentum condition in (4.26) we obtain
$-2\overline{\jmath}_{1}+2\overline{\jmath}_{2}=0$. This contradicts
$\overline{\jmath}\neq\overline{\jmath}^{\prime}$.
Step 2. We finally consider the case when $(\ell_{m})_{m\in{\mathbb{N}}}$ is
unbounded. Up to subsequences $\ell_{m}\rightarrow\infty$ as
$m\rightarrow\infty$ and
$\lim_{m\to\infty}\ell_{m}/\braket{\ell_{m}}=:\overline{c}\neq 0$. By (4.1),
Lemma 4.4, (4.23), we have, for any $n\geq 1$,
$\displaystyle\partial_{\gamma}^{n}\frac{1}{\braket{\ell_{m}}}\Big{(}\Omega_{j_{m}}(\gamma)-\Omega_{j_{m}^{\prime}}(\gamma)\Big{)}_{|\gamma=\gamma_{m}}$
$\displaystyle=\partial_{\gamma}^{n}\Big{(}\frac{1}{\braket{\ell_{m}}\sqrt{g}}\Big{(}\frac{c_{j_{m}}(\gamma)}{|j_{m}|^{\frac{1}{2}}}-\frac{c_{j_{m}^{\prime}}(\gamma)}{|j_{m}^{\prime}|^{\frac{1}{2}}}\Big{)}$
$\displaystyle\qquad+\frac{\gamma}{2\braket{\ell_{m}}}\Big{(}\frac{G_{j_{m}}(0)}{j_{m}}-\frac{G_{j_{m}^{\prime}}(0)}{j_{m}^{\prime}}\Big{)}_{|\gamma=\gamma_{m}}\Big{)}\to
0$
as $m\to\infty$. Therefore, for any $n\geq 1$, taking $m\rightarrow\infty$ in
(4.22) we get
$\partial_{\gamma}^{n}\big{(}\vec{\Omega}(\gamma)\cdot\overline{c}\big{)}_{|\gamma=\overline{\gamma}}=0$.
By analyticity this implies that
$\vec{\Omega}(\gamma)\cdot\overline{c}=\overline{d}$ for some constant
$\overline{d}\in{\mathbb{R}}$ and all
$\gamma\in\Gamma$, contradicting Lemma 4.2-2, since $\overline{c}\neq 0$.
Proof of (4.18). The proof is analogous to that of (4.17) and we omit it. ∎
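To make the quantitative content of Proposition 4.5 concrete, the following toy numerical sketch (our own illustration, not taken from the text) probes a condition of type (4.15) for an affine curve $\gamma\mapsto\vec{\Omega}(\gamma)=c+\gamma d$: here $\partial_{\gamma}\vec{\Omega}(\gamma)\cdot\ell=d\cdot\ell$ and all higher $\gamma$-derivatives vanish, so the index of non-degeneracy is $m_{0}=1$.

```python
import itertools, math

def transversality_constant(c, d, L=15, gamma_grid=21, g1=0.0, g2=1.0):
    """For the affine curve Omega(gamma) = c + gamma*d, estimate the largest
    rho_0 such that max_{0<=n<=1} |d^n/dgamma^n (Omega(gamma).ell)| >= rho_0*<ell>
    for all 0 < |ell| <= L and gamma in [g1, g2], with <ell> = max(1, |ell|)."""
    nu = len(c)
    rho0 = math.inf
    for ell in itertools.product(range(-L, L + 1), repeat=nu):
        if all(e == 0 for e in ell):
            continue
        c_ell = sum(ci * e for ci, e in zip(c, ell))   # Omega(0) . ell
        d_ell = sum(di * e for di, e in zip(d, ell))   # d/dgamma (Omega . ell)
        bracket = max(1.0, math.sqrt(sum(e * e for e in ell)))
        for k in range(gamma_grid):
            gamma = g1 + (g2 - g1) * k / (gamma_grid - 1)
            deriv_max = max(abs(c_ell + gamma * d_ell), abs(d_ell))
            rho0 = min(rho0, deriv_max / bracket)
    return rho0

# Non-degenerate pair (the matrix with rows c, d is invertible): rho_0 > 0.
rho_good = transversality_constant(c=(1.0, 1.3), d=(0.7, -1.1))
# Degenerate pair (d = 2c): ell = (2, -1) kills both c.ell and d.ell, so rho_0 = 0.
rho_bad = transversality_constant(c=(1.0, 2.0), d=(2.0, 4.0))
```

The finite grid in $\gamma$ and the truncation $L$ can of course only probe the uniform condition; the point is merely that an affinely independent pair of rows $c,d$ produces a strictly positive amount of non-degeneracy $\rho_{0}$, while a degenerate pair does not.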
###### Remark 4.6.
For the irrotational gravity water waves equations (1.3) with $\gamma=0$,
quasi-periodic traveling wave solutions exist for most values of the _depth_
${\mathtt{h}}\in[{\mathtt{h}}_{1},{\mathtt{h}}_{2}]$. In detail, the non-
degeneracy of the linear frequencies with respect to the parameter
${\mathtt{h}}$ as in Lemma 4.2 is proved precisely in Lemma 3.2 in [2],
whereas the transversality properties hold by restricting the bounds in Lemma
3.4 in [2] to the Fourier sites satisfying the momentum conditions. We are not
able to use ${\mathtt{h}}$ as a parameter when $\gamma\neq 0$, since in that
case we do not know whether the non-degeneracy properties of Lemma 4.2 hold
with respect to ${\mathtt{h}}$.
## 5 Proof of Theorem 1.2
Under the rescaling $(\eta,\zeta)\mapsto(\varepsilon\eta,\varepsilon\zeta)$,
the Hamiltonian system (2.10) transforms into the Hamiltonian system generated
by
${\mathcal{H}}_{\varepsilon}(\eta,\zeta):=\varepsilon^{-2}{\mathcal{H}}(\varepsilon\eta,\varepsilon\zeta)={\mathcal{H}}_{L}(\eta,\zeta)+\varepsilon
P_{\varepsilon}(\eta,\zeta)\,,$ (5.1)
where ${\mathcal{H}}$ is the water waves Hamiltonian (2.9) expressed in the
Wahlén coordinates (2.7), ${\mathcal{H}}_{L}$ is defined in (2.15) and,
denoting by ${\mathcal{H}}_{\geq 3}:={\mathcal{H}}-{\mathcal{H}}_{L}$ the
part of the Hamiltonian of cubic and higher order,
$P_{\varepsilon}(\eta,\zeta):=\varepsilon^{-3}{\mathcal{H}}_{\geq
3}(\varepsilon\eta,\varepsilon\zeta)\,.$
We now study the Hamiltonian system generated by the Hamiltonian
${\mathcal{H}}_{\varepsilon}(\eta,\zeta)$, in the action-angle and normal
coordinates $(\theta,I,w)$ defined in Section 2.2. Thus we consider the
Hamiltonian $H_{\varepsilon}(\theta,I,w)$ defined by
$H_{\varepsilon}:={\mathcal{H}}_{\varepsilon}\circ
A=\varepsilon^{-2}{\mathcal{H}}\circ\varepsilon A$ (5.2)
where $A$ is the map defined in (2.2). The associated symplectic form is given
in (2.43).
By (2.47) (see also (2.30), (2.38)), in the variables $(\theta,I,w)$ the
quadratic Hamiltonian ${\mathcal{H}}_{L}$ defined in (2.15) simply reads, up
to a constant,
${\mathcal{N}}:={\mathcal{H}}_{L}\circ A=\vec{\Omega}(\gamma)\cdot
I+\tfrac{1}{2}\left({\bf{\Omega}}_{W}w,w\right)_{L^{2}}$
where $\vec{\Omega}(\gamma)\in{\mathbb{R}}^{\nu}$ is defined in (1.20) and
${\bf{\Omega}}_{W}$ in (2.14). Thus the Hamiltonian $H_{\varepsilon}$ in (5.2)
is
$H_{\varepsilon}={\mathcal{N}}+\varepsilon P\qquad{\rm with}\qquad
P:=P_{\varepsilon}\circ A\,.$ (5.3)
We look for an embedded invariant torus
$i:{\mathbb{T}}^{\nu}\rightarrow{\mathbb{R}}^{\nu}\times{\mathbb{R}}^{\nu}\times\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\,,\quad{\varphi}\mapsto
i({\varphi}):=(\theta({\varphi}),I({\varphi}),w({\varphi}))\,,$
of the Hamiltonian vector field
$X_{H_{\varepsilon}}:=(\partial_{I}H_{\varepsilon},-\partial_{\theta}H_{\varepsilon},\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}J\nabla_{w}H_{\varepsilon})$
filled by quasi-periodic solutions with frequency vector
$\omega\in{\mathbb{R}}^{\nu}$.
### 5.1 Nash-Moser theorem of hypothetical conjugation
Instead of looking directly for quasi-periodic solutions of
$X_{H_{\varepsilon}}$ we look for quasi-periodic solutions of the family of
modified Hamiltonians, where $\alpha\in{\mathbb{R}}^{\nu}$ are additional
parameters,
$H_{\alpha}:={\mathcal{N}}_{\alpha}+\varepsilon
P\,,\quad{\mathcal{N}}_{\alpha}:=\alpha\cdot
I+\tfrac{1}{2}\left(w,{\bf{\Omega}}_{W}w\right)_{L^{2}}\,.$ (5.4)
We consider the nonlinear operator
$\displaystyle{\mathcal{F}}(i,\alpha)$
$\displaystyle:={\mathcal{F}}(\omega,\gamma,\varepsilon;i,\alpha):=\omega\cdot\partial_{\varphi}i({\varphi})-X_{H_{\alpha}}(i({\varphi}))$
$\displaystyle=\begin{pmatrix}\omega\cdot\partial_{\varphi}\theta({\varphi})&-\alpha-\varepsilon\partial_{I}P(i({\varphi}))\\\
\omega\cdot\partial_{\varphi}I({\varphi})&+\varepsilon\partial_{\theta}P(i({\varphi}))\\\
\omega\cdot\partial_{\varphi}w({\varphi})&-\,\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}J({\bf{\Omega}}_{W}w({\varphi})+\varepsilon\nabla_{w}P(i({\varphi})))\end{pmatrix}\,.$
(5.5)
If ${\mathcal{F}}(i,\alpha)=0$, then the embedding ${\varphi}\mapsto
i({\varphi})$ is an invariant torus for the Hamiltonian vector field
$X_{H_{\alpha}}$, filled with quasi-periodic solutions with frequency
$\omega$.
Each Hamiltonian $H_{\alpha}$ in (5.4) is invariant under the involution
$\vec{\mathcal{S}}$ and the translations $\vec{\tau}_{\varsigma}$,
$\varsigma\in{\mathbb{R}}$, defined respectively in (2.40) and in (2.41):
$H_{\alpha}\circ\vec{\mathcal{S}}=H_{\alpha}\,,\qquad
H_{\alpha}\circ\vec{\tau}_{\varsigma}=H_{\alpha}\,,\quad\forall\,\varsigma\in{\mathbb{R}}\,.$
(5.6)
We look for a reversible traveling torus embedding
$i({\varphi})=(\theta({\varphi}),I({\varphi}),w({\varphi}))$, namely satisfying
$\vec{\mathcal{S}}i({\varphi})=i(-{\varphi})\,,\qquad\vec{\tau}_{\varsigma}i({\varphi})=i({\varphi}-\vec{\jmath}\varsigma)\,,\quad\forall\,\varsigma\in{\mathbb{R}}\,.$
(5.7)
Note that, by (5.1) and (5.6), the operator ${\mathcal{F}}(\cdot,\alpha)$ maps
a reversible, respectively traveling, wave into an anti-reversible,
respectively traveling, wave variation, according to Definition 3.26.
The norm of the periodic components of the embedded torus
${\mathfrak{I}}({\varphi}):=i({\varphi})-({\varphi},0,0):=\left(\Theta({\varphi}),I({\varphi}),w({\varphi})\right)\,,\quad\Theta({\varphi}):=\theta({\varphi})-{\varphi}\,,$
(5.8)
is
$\left\|{\mathfrak{I}}\right\|_{s}^{k_{0},\upsilon}:=\left\|\Theta\right\|_{H_{\varphi}^{s}}^{k_{0},\upsilon}+\left\|I\right\|_{H_{\varphi}^{s}}^{k_{0},\upsilon}+\left\|w\right\|_{s}^{k_{0},\upsilon}$,
where
$k_{0}:=m_{0}+2$ (5.9)
and $m_{0}\in{\mathbb{N}}$ is the index of non-degeneracy provided by
Proposition 4.5, which only depends on the linear unperturbed frequencies. We
will often omit the dependence of the various constants on $k_{0}$, which is
regarded as an absolute constant. We look for quasi-
periodic solutions of frequency $\omega$ belonging to a $\delta$-neighbourhood
(independent of $\varepsilon$)
${\mathtt{\Omega}}:=\big{\\{}\omega\in{\mathbb{R}}^{\nu}\ :\
\operatorname{dist}\big{(}\omega,\vec{\Omega}[\gamma_{1},\gamma_{2}]\big{)}<\delta\big{\\}}\,,\quad\delta>0\,,$
of the curve $\vec{\Omega}[\gamma_{1},\gamma_{2}]$ defined by (1.20).
The next theorem, whose proof is based on an implicit function iterative
scheme of Nash-Moser type, provides, for $\varepsilon$ small enough, a
solution $(i_{\infty},\alpha_{\infty})(\omega,\gamma;\varepsilon)$ of the
equation ${\mathcal{F}}(\omega,\gamma,\varepsilon;i,\alpha)=0$ for all
the values of $(\omega,\gamma)$ in the Cantor-like set
${\mathcal{C}}_{\infty}^{\upsilon}$ below.
###### Theorem 5.1.
(Theorem of hypothetical conjugation) There exist positive constants ${\rm
a}_{0},\varepsilon_{0},C$ depending on ${\mathbb{S}}$, $k_{0}$ and $\tau\geq
1$ such that, for all $\upsilon=\varepsilon^{\rm a}$, ${\rm a}\in(0,{\rm
a}_{0})$ and for all $\varepsilon\in(0,\varepsilon_{0})$, there exist
1. 1.
a $k_{0}$-times differentiable function of the form
$\alpha_{\infty}:\,{\mathtt{\Omega}}\times[\gamma_{1},\gamma_{2}]\mapsto{\mathbb{R}}^{\nu}$,
$\displaystyle\alpha_{\infty}(\omega,\gamma):=\omega+r_{\varepsilon}(\omega,\gamma)\quad\text{
with }\quad|r_{\varepsilon}|^{k_{0},\upsilon}\leq
C\varepsilon\upsilon^{-1}\,;$ (5.10)
2. 2.
a family of embedded reversible traveling tori $i_{\infty}({\varphi})$ (cf.
(5.7)), defined for all
$(\omega,\gamma)\in{\mathtt{\Omega}}\times[\gamma_{1},\gamma_{2}]$, satisfying
$\|i_{\infty}({\varphi})-({\varphi},0,0)\|_{s_{0}}^{k_{0},\upsilon}\leq
C\varepsilon\upsilon^{-1}\,;$ (5.11)
3. 3.
a sequence of $k_{0}$-times differentiable functions
$\mu_{j}^{\infty}:{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]\rightarrow{\mathbb{R}}$,
$j\in{\mathbb{S}}_{0}^{c}={\mathbb{Z}}\,\setminus\,({\mathbb{S}}\cup\\{0\\})$,
of the form
$\mu_{j}^{\infty}(\omega,\gamma)={\mathtt{m}}_{1}^{\infty}(\omega,\gamma)j+{\mathtt{m}}_{\frac{1}{2}}^{\infty}(\omega,\gamma)\Omega_{j}(\gamma)-{\mathtt{m}}_{0}^{\infty}(\omega,\gamma){\rm
sgn}(j)+{\mathfrak{r}}_{j}^{\infty}(\omega,\gamma)\,,$ (5.12)
with $\Omega_{j}(\gamma)$ defined in (1.13), satisfying
$|{\mathtt{m}}_{1}^{\infty}|^{k_{0},\upsilon}\leq C\varepsilon\,,\
|{\mathtt{m}}_{\frac{1}{2}}^{\infty}-1|^{k_{0},\upsilon}+|{\mathtt{m}}_{0}^{\infty}|^{k_{0},\upsilon}\leq
C\varepsilon\upsilon^{-1}\,,\quad\sup_{j\in{\mathbb{S}}_{0}^{c}}|j|^{\frac{1}{2}}|{\mathfrak{r}}_{j}^{\infty}|^{k_{0},\upsilon}\leq
C\varepsilon\upsilon^{-3}\,,$ (5.13)
such that, for all $(\omega,\gamma)$ in the Cantor-like set
$\displaystyle{\mathcal{C}}_{\infty}^{\upsilon}:=$
$\displaystyle\Big{\\{}(\omega,\gamma)\in{\mathtt{\Omega}}\times[\gamma_{1},\gamma_{2}]\
:\ |\omega\cdot\ell|\geq\ 8\upsilon\langle\ell\rangle^{-\tau}\,,\ \
\forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,,$ (5.14) $\displaystyle\
\left|\omega\cdot\ell-{\mathtt{m}}_{1}^{\infty}(\omega,\gamma)j\right|\geq
8\upsilon\braket{\ell}^{-\tau}\,,\
\forall\,\ell\in{\mathbb{Z}}^{\nu},\,j\in{\mathbb{S}}_{0}^{c}\text{ with
}\vec{\jmath}\cdot\ell+j=0;$ (5.15) $\displaystyle\
\left|\omega\cdot\ell+\mu_{j}^{\infty}(\omega,\gamma)\right|\geq
4\upsilon\left|j\right|^{\frac{1}{2}}\braket{\ell}^{-\tau}\,,\forall\,\ell\in{\mathbb{Z}}^{\nu},\,j\in{\mathbb{S}}_{0}^{c}\text{
with }\vec{\jmath}\cdot\ell+j=0\,;$ (5.16) $\displaystyle\
\left|\omega\cdot\ell+\mu_{j}^{\infty}(\omega,\gamma)-\mu_{j^{\prime}}^{\infty}(\omega,\gamma)\right|\geq
4\upsilon\,\braket{\ell}^{-\tau}\,,$ (5.17) $\displaystyle\
\quad\quad\forall\ell\in{\mathbb{Z}}^{\nu},\,j,j^{\prime}\in{\mathbb{S}}_{0}^{c},\,(\ell,j,j^{\prime})\neq(0,j,j)\text{
with }\vec{\jmath}\cdot\ell+j-j^{\prime}=0\,,$ $\displaystyle\
\left|\omega\cdot\ell+\mu_{j}^{\infty}(\omega,\gamma)+\mu_{j^{\prime}}^{\infty}(\omega,\gamma)\right|\geq
4\upsilon\,\big{(}\left|j\right|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}\braket{\ell}^{-\tau}\,,$
(5.18) $\displaystyle\
\quad\quad\forall\,\ell\in{\mathbb{Z}}^{\nu},\,j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\,,\text{
with }\vec{\jmath}\cdot\ell+j+j^{\prime}=0\,\Big{\\}}\,,$
the function
$i_{\infty}({\varphi}):=i_{\infty}(\omega,\gamma,\varepsilon;{\varphi})$ is a
solution of
${\mathcal{F}}(\omega,\gamma,\varepsilon;(i_{\infty},\alpha_{\infty})(\omega,\gamma))=0$.
As a consequence, the embedded torus ${\varphi}\mapsto i_{\infty}({\varphi})$
is invariant for the Hamiltonian vector field
$X_{H_{\alpha_{\infty}(\omega,\gamma)}}$ as it is filled by quasi-periodic
reversible traveling wave solutions with frequency $\omega$.
Note that the Cantor-like set ${\cal C}_{\infty}^{\upsilon}$ in (5.14)-(5.18)
is defined in terms of the functions
${\mathtt{m}}_{1}^{\infty}(\omega,\gamma)$ and the “final” perturbed normal
frequencies $\mu_{j}^{\infty}(\omega,\gamma)$, $j\in{\mathbb{S}}_{0}^{c}$,
which are defined for all the values of the parameters $(\omega,\gamma)$. This
formulation completely decouples the Nash-Moser implicit function theorem
construction of $(\alpha_{\infty},i_{\infty})(\omega,\gamma)$ (in Sections
6-9) from the discussion about the measure of the parameters where all the
required “non-resonance" conditions are verified (Section 5.2). This approach
simplifies considerably the presentation because the measure estimates
required to build $(i_{\infty},\alpha_{\infty})(\omega,\gamma)$ are not
verified at each step along the Nash-Moser iteration (the set ${\cal
C}_{\infty}^{\upsilon}$ in (5.14)-(5.18) could be empty; in that case the
functions $(\alpha_{\infty},i_{\infty})(\omega,\gamma)$ constructed in Theorem
5.1 are obtained by just finitely many sums). In order to define the extended
functions $(i_{\infty},\alpha_{\infty})$ for all the values of
$(\omega,\gamma)$, preserving the weighted norm $\|\,\cdot\,\|^{k_{0},\upsilon}$, we
use the Whitney extension theory reported in Section 3.
We also recall that the conditions on the indices in (5.14)-(5.18) (where
$\vec{\jmath}\in{\mathbb{Z}}^{\nu}$ is the vector in (2.42)) are due to the
fact that we look for traveling wave solutions. These restrictions are
essential to prove the measure estimates of the next section.
###### Remark 5.2.
The Diophantine condition (5.14) could be weakened by requiring
$|\omega\cdot\ell|\geq\upsilon\langle\ell\rangle^{-\tau}$ only for the
$\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$ with $\ell\cdot\vec{\jmath}=0$. In such a case the vector $\omega$ could admit one
non-trivial resonance, i.e.
$\overline{\ell}\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$ such that
$\omega\cdot\overline{\ell}=0$, thus the orbit $\\{\omega
t\\}_{t\in{\mathbb{R}}}$ would densely fill a ($\nu-1$)-dimensional torus,
orthogonal to $\overline{\ell}$. In any case
$\vec{\jmath}\cdot\overline{\ell}\neq 0$ (otherwise
$|\omega\cdot\overline{\ell}|\geq\upsilon\langle\overline{\ell}\rangle^{-\tau}>0$,
contradicting that $\omega\cdot\overline{\ell}=0$) and then the closure of the
set $\\{\omega t-\vec{\jmath}x\\}_{t\in{\mathbb{R}},x\in{\mathbb{R}}}$ is
dense in ${\mathbb{T}}^{\nu}$. This is the natural minimal requirement to look
for traveling quasi-periodic solutions $U(\omega t-\vec{\jmath}x)$ (Definition
3.1).
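As a purely numerical illustration of the Diophantine condition (5.14) (our own example, not part of the argument), one can compute the margin $\min_{0<|\ell|\leq L}|\omega\cdot\ell|\,\braket{\ell}^{\tau}$ over a truncated lattice; a frequency vector satisfies (5.14) up to the truncation $L$ with constant $\upsilon$ exactly when this margin is at least $8\upsilon$:

```python
import itertools, math

def diophantine_margin(omega, tau, L):
    """Smallest value of |omega . ell| * <ell>^tau over 0 < |ell| <= L,
    where <ell> := max(1, |ell|) and |ell| is the Euclidean norm.
    Condition (5.14) holds up to the truncation L with constant upsilon
    exactly when the returned margin is >= 8 * upsilon."""
    margin = math.inf
    for ell in itertools.product(range(-L, L + 1), repeat=len(omega)):
        if all(e == 0 for e in ell):
            continue
        dot = abs(sum(w * e for w, e in zip(omega, ell)))
        bracket = max(1.0, math.sqrt(sum(e * e for e in ell)))
        margin = min(margin, dot * bracket ** tau)
    return margin

phi = (1 + math.sqrt(5)) / 2                           # golden mean: badly approximable
good = diophantine_margin((1.0, phi), tau=2.0, L=30)   # strictly positive margin
bad = diophantine_margin((1.0, 2.0), tau=2.0, L=30)    # resonant: (1,2).(2,-1) = 0
```

For the resonant vector $(1,2)$ the margin collapses to zero already at $\ell=(2,-1)$, while for $(1,(1+\sqrt{5})/2)$ with $\tau=2$ it stays bounded away from zero however large $L$ is taken.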
The next goal is to deduce Theorem 1.2 from Theorem 5.1.
### 5.2 Measure estimates: proof of Theorem 1.2
We now want to prove the existence of quasi-periodic solutions of the original
Hamiltonian system $H_{\varepsilon}$ in (5.3), which is equivalent after a
rescaling to (2.10), and not just of the Hamiltonian system generated by
the modified Hamiltonian $H_{\alpha_{\infty}}$. We proceed as follows. By
(5.10), the function $\alpha_{\infty}(\,\cdot\,,\gamma)$ from
${\mathtt{\Omega}}$ into its image $\alpha_{\infty}({\mathtt{\Omega}},\gamma)$
is invertible and
$\displaystyle\beta=\alpha_{\infty}(\omega,\gamma)=\omega+r_{\varepsilon}(\omega,\gamma)\
\Leftrightarrow$ (5.19)
$\displaystyle\omega=\alpha_{\infty}^{-1}(\beta,\gamma)=\beta+\breve{r}_{\varepsilon}(\beta,\gamma)\,,\quad\left|\breve{r}_{\varepsilon}\right|^{k_{0},\upsilon}\leq
C\varepsilon\upsilon^{-1}\,.$
Then, for any $\beta\in\alpha_{\infty}({\mathcal{C}}_{\infty}^{\upsilon})$,
Theorem 5.1 proves the existence of an embedded invariant torus filled by
quasi-periodic solutions with Diophantine frequency
$\omega=\alpha_{\infty}^{-1}(\beta,\gamma)$ for the Hamiltonian
$H_{\beta}=\beta\cdot I+\tfrac{1}{2}(w,{\bf{\Omega}}_{W}w)_{L^{2}}+\varepsilon
P\,.$
Consider the curve of the unperturbed tangential frequency vector
$\vec{\Omega}(\gamma)$ in (1.20). In Theorem 5.3 below we prove that for
"most" values of $\gamma\in[\gamma_{1},\gamma_{2}]$ the vector
$(\alpha_{\infty}^{-1}(\vec{\Omega}(\gamma),\gamma),\gamma)$ is in
${\mathcal{C}}_{\infty}^{\upsilon}$, obtaining an embedded torus for the
Hamiltonian $H_{\varepsilon}$ in (5.2), filled by quasi-periodic solutions
with Diophantine frequency vector
$\omega=\alpha_{\infty}^{-1}(\vec{\Omega}(\gamma),\gamma)$, denoted
${\widetilde{\Omega}}$ in Theorem 1.2. Thus $\varepsilon
A(i_{\infty}({\widetilde{\Omega}}t))$, where $A$ is defined in (2.2), is a
quasi-periodic traveling wave solution of the water waves equations (2.10)
written in the Wahlén variables. Finally, going back to the original Zakharov
variables via (2.7) we obtain solutions of (1.3). This proves Theorem 1.2
together with the following measure estimates.
###### Theorem 5.3.
(Measure estimates) Let
$\upsilon=\varepsilon^{\rm a}\,,\quad 0<{\rm a}<\min\\{{\rm
a}_{0},1/(4m_{0}^{2})\\}\,,\quad\tau>m_{0}(2m_{0}\nu+\nu+2)\,,$ (5.20)
where $m_{0}$ is the index of non-degeneracy given in Proposition 4.5 and
$k_{0}:=m_{0}+2$. Then, for $\varepsilon\in(0,\varepsilon_{0})$ small enough,
the measure of the set
${\mathcal{G}}_{\varepsilon}:=\big{\\{}\gamma\in[\gamma_{1},\gamma_{2}]\ :\
\big{(}\alpha_{\infty}^{-1}(\vec{\Omega}(\gamma),\gamma),\gamma\big{)}\in{\mathcal{C}}_{\infty}^{\upsilon}\big{\\}}$
(5.21)
satisfies $|{\mathcal{G}}_{\varepsilon}|\rightarrow\gamma_{2}-\gamma_{1}$ as
$\varepsilon\rightarrow 0$.
The rest of this section is devoted to prove Theorem 5.3. By (5.19) we have
$\vec{\Omega}_{\varepsilon}(\gamma):=\alpha_{\infty}^{-1}(\vec{\Omega}(\gamma),\gamma)=\vec{\Omega}(\gamma)+\vec{r}_{\varepsilon}\,,$
(5.22)
where
$\vec{r}_{\varepsilon}(\gamma):=\breve{r}_{\varepsilon}(\vec{\Omega}(\gamma),\gamma)$
satisfies
$|\partial_{\gamma}^{k}{\vec{r}}_{\varepsilon}(\gamma)|\leq
C\varepsilon\upsilon^{-(1+k)}\,,\quad\forall\,0\leq k\leq k_{0}\,,\
\text{uniformly on }[\gamma_{1},\gamma_{2}]\,.$ (5.23)
We also denote, with a small abuse of notation, for all
$j\in{\mathbb{S}}_{0}^{c}$,
$\mu_{j}^{\infty}(\gamma):=\mu_{j}^{\infty}\big{(}\vec{\Omega}_{\varepsilon}(\gamma),\gamma\big{)}:={\mathtt{m}}_{1}^{\infty}(\gamma)j+{\mathtt{m}}_{\frac{1}{2}}^{\infty}(\gamma)\Omega_{j}(\gamma)-{\mathtt{m}}_{0}^{\infty}(\gamma){\rm
sgn}(j)+{\mathfrak{r}}_{j}^{\infty}(\gamma)\,,$ (5.24)
where
${\mathtt{m}}_{1}^{\infty}(\gamma):={\mathtt{m}}_{1}^{\infty}(\vec{\Omega}_{\varepsilon}(\gamma),\gamma)$,
${\mathtt{m}}_{\frac{1}{2}}^{\infty}(\gamma):={\mathtt{m}}_{\frac{1}{2}}^{\infty}(\vec{\Omega}_{\varepsilon}(\gamma),\gamma)$,
${\mathtt{m}}_{0}^{\infty}(\gamma):={\mathtt{m}}_{0}^{\infty}(\vec{\Omega}_{\varepsilon}(\gamma),\gamma)$
and
${\mathfrak{r}}_{j}^{\infty}(\gamma):={\mathfrak{r}}_{j}^{\infty}(\vec{\Omega}_{\varepsilon}(\gamma),\gamma)$.
By (5.13) and (5.23) we get the estimates
$\displaystyle|\partial_{\gamma}^{k}{\mathtt{m}}_{1}^{\infty}(\gamma)|\leq
C\varepsilon\upsilon^{-k}\,,\,\big{|}\partial_{\gamma}^{k}\big{(}{\mathtt{m}}_{\frac{1}{2}}^{\infty}(\gamma)-1\big{)}\big{|}+|\partial_{\gamma}^{k}{\mathtt{m}}_{0}^{\infty}(\gamma)|\leq
C\varepsilon\upsilon^{-k-1},$ (5.25)
$\displaystyle\sup_{j\in{\mathbb{S}}_{0}^{c}}|j|^{\frac{1}{2}}\left|\partial_{\gamma}^{k}{\mathfrak{r}}_{j}^{\infty}(\gamma)\right|\leq
C\varepsilon\upsilon^{-3-k}\,,\quad\forall\,0\leq k\leq k_{0}\,.$ (5.26)
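The estimates (5.25)-(5.26), combined with Proposition 4.5, feed into a classical Rüssmann-type measure estimate; for the reader's convenience we record a typical formulation (the exact constants and norms below are our own normalization, not taken from the text):

```latex
\text{If } f\in C^{m_{0}+1}(\Gamma,{\mathbb{R}}) \text{ satisfies }
\max_{0\leq n\leq m_{0}}|\partial_{\gamma}^{n}f(\gamma)|\geq\rho
\quad\forall\,\gamma\in\Gamma\,,
\text{ then, for any } \rho'>0\,,\quad
\big|\big\{\gamma\in\Gamma\,:\,|f(\gamma)|\leq\rho'\big\}\big|
\leq C\Big(\frac{\rho'}{\rho}\Big)^{\frac{1}{m_{0}}}\,,
```

with $C$ depending only on $m_{0}$, $|\Gamma|$ and $\|f\|_{C^{m_{0}+1}(\Gamma)}/\rho$. Applied, for instance, to $f(\gamma)=\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell$ with $\rho\sim\rho_{0}\braket{\ell}$ and $\rho'=8\upsilon\braket{\ell}^{-\tau}$, this gives $|R_{\ell}^{(0)}|\lesssim(\upsilon\braket{\ell}^{-\tau-1})^{1/m_{0}}$, which is summable over $\ell\in{\mathbb{Z}}^{\nu}$ for $\tau$ as in (5.20).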
Recalling (5.14)-(5.18), the Cantor set in (5.21) becomes
$\displaystyle{\mathcal{G}}_{\varepsilon}:=$
$\displaystyle\Big{\\{}\gamma\in[\gamma_{1},\gamma_{2}]\ :\
|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell|\geq
8\upsilon\braket{\ell}^{-\tau}\,,\
\forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,;$ $\displaystyle\ \
|(\vec{\Omega}_{\varepsilon}(\gamma)-{\mathtt{m}}_{1}^{\infty}(\gamma)\vec{\jmath})\cdot\ell|\geq
8\upsilon\braket{\ell}^{-\tau}\,,\
\forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,;$ $\displaystyle\ \
|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)|\geq
4\upsilon|j|^{\frac{1}{2}}\braket{\ell}^{-\tau}\,,\
\forall\,\ell\in{\mathbb{Z}}^{\nu}\,,\,j\in{\mathbb{S}}_{0}^{c}\,,\text{ with
}\vec{\jmath}\cdot\ell+j=0\,;$ $\displaystyle\ \
|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma)|\geq
4\upsilon\,\braket{\ell}^{-\tau}\,,$ $\displaystyle\ \
\forall\ell\in{\mathbb{Z}}^{\nu},\,j,j^{\prime}\in{\mathbb{S}}_{0}^{c},\,(\ell,j,j^{\prime})\neq(0,j,j)\text{
with }\vec{\jmath}\cdot\ell+j-j^{\prime}=0\,;$ $\displaystyle\ \
|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)|\geq
4\upsilon\,\big{(}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}\braket{\ell}^{-\tau}\,,$
$\displaystyle\ \
\forall\,\ell\in{\mathbb{Z}}^{\nu},\,j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\text{
with }\vec{\jmath}\cdot\ell+j+j^{\prime}=0\Big{\\}}\,.$
We estimate the measure of the complementary set
$\displaystyle{\mathcal{G}}_{\varepsilon}^{c}$
$\displaystyle:=[\gamma_{1},\gamma_{2}]\setminus{\mathcal{G}}_{\varepsilon}$
(5.27) $\displaystyle=\left(\bigcup_{\ell\neq 0}R_{\ell}^{(0)}\cup
R_{\ell}^{(T)}\right)\cup\left(\bigcup_{\ell\in{\mathbb{Z}}^{\nu},\,j\in{\mathbb{S}}_{0}^{c}\atop\vec{\jmath}\cdot\ell+j=0}R_{\ell,j}^{(I)}\right)\cup\left(\bigcup_{(\ell,j,j^{\prime})\neq(0,j,j),j\neq
j^{\prime}\atop\vec{\jmath}\cdot\ell+j-j^{\prime}=0}R_{\ell,j,j^{\prime}}^{(II)}\right)\cup\left(\bigcup_{\ell\in{\mathbb{Z}}^{\nu},j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\,,\atop\vec{\jmath}\cdot\ell+j+j^{\prime}=0}Q_{\ell,j,j^{\prime}}^{(II)}\right)\,,$
where the “nearly-resonant sets” are, recalling the notation
$\Gamma=[\gamma_{1},\gamma_{2}]$,
$\displaystyle R_{\ell}^{(0)}:=R_{\ell}^{(0)}(\upsilon,\tau):=$
$\displaystyle\big{\\{}\gamma\in\Gamma\,:\,|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell|<8\upsilon\braket{\ell}^{-\tau}\big{\\}}\,,$
(5.28) $\displaystyle R_{\ell}^{(T)}:=R_{\ell}^{(T)}(\upsilon,\tau):=$
$\displaystyle\big{\\{}\gamma\in\Gamma\,:\,|(\vec{\Omega}_{\varepsilon}(\gamma)-{\mathtt{m}}_{1}^{\infty}(\gamma)\vec{\jmath})\cdot\ell|<8\upsilon\braket{\ell}^{-\tau}\big{\\}}\,,$
(5.29) $\displaystyle R_{\ell,j}^{(I)}:=R_{\ell,j}^{(I)}(\upsilon,\tau):=$
$\displaystyle\big{\\{}\gamma\in\Gamma\,:\,|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)|<4\upsilon|j|^{\frac{1}{2}}\braket{\ell}^{-\tau}\big{\\}}\,,$
(5.30) $\displaystyle
R_{\ell,j,j^{\prime}}^{(II)}:=R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau):=$
$\displaystyle\big{\\{}\gamma\in\Gamma\,:\,|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma)|<4\upsilon\,\braket{\ell}^{-\tau}\big{\\}}\,,$
(5.31) $\displaystyle
Q_{\ell,j,j^{\prime}}^{(II)}:=Q_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau):=$
$\displaystyle\Big{\\{}\gamma\in\Gamma\,:\,|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)|<\frac{4\upsilon\big{(}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}}{\braket{\ell}^{\tau}}\Big{\\}}\,.$
(5.32)
Note that in the third union in (5.27) we may require $j\neq j^{\prime}$
because $R_{\ell,j,j}^{(II)}\subset R_{\ell}^{(0)}$. In the sequel we shall
always assume the momentum conditions on the indices $\ell,j,j^{\prime}$
stated in (5.27). Some of the above sets are in fact empty; the next lemma gives a
necessary condition for $Q_{\ell,j,j^{\prime}}^{(II)}$ to be nonempty.
###### Lemma 5.4.
For $\varepsilon\in(0,\varepsilon_{0})$ small enough, if
$Q_{\ell,j,j^{\prime}}^{(II)}\neq\emptyset$ then
$|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\leq C\braket{\ell}$.
###### Proof.
If $Q_{\ell,j,j^{\prime}}^{(II)}\neq\emptyset$ then there exists
$\gamma\in[\gamma_{1},\gamma_{2}]$ such that, since
$|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell|\leq C|\ell|$,
$\left|\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)\right|<\frac{4\upsilon\big{(}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}}{\braket{\ell}^{\tau}}+C|\ell|\,.$
(5.33)
By (5.24) we have
$\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)={\mathtt{m}}_{1}^{\infty}(\gamma)(j+j^{\prime})+{\mathtt{m}}_{\frac{1}{2}}^{\infty}(\gamma)(\Omega_{j}(\gamma)+\Omega_{j^{\prime}}(\gamma))-{\mathtt{m}}_{0}^{\infty}(\gamma)({\rm
sgn}(j)+{\rm
sgn}(j^{\prime}))+{\mathfrak{r}}_{j}^{\infty}(\gamma)+{\mathfrak{r}}_{j^{\prime}}^{\infty}(\gamma)\,.$
Then, by (5.25)-(5.26) with $k=0$, Lemma 4.4 and the momentum condition
$j+j^{\prime}=-\vec{\jmath}\cdot\ell$, we deduce, for $\varepsilon$ small
enough,
$\displaystyle|\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma)|$
$\displaystyle\geq-C\varepsilon|\ell|+\tfrac{\sqrt{g}}{2}\,\big{|}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{|}-C^{\prime}-C\varepsilon\upsilon^{-3}\,.$
(5.34)
Combining (5.33) and (5.34), we deduce
$||j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}|\leq C\braket{\ell}$, for
$\varepsilon$ small enough. ∎
In order to estimate the measure of the sets (5.28)-(5.32), the key point is
to prove that the perturbed frequencies satisfy transversality properties
similar to those in (4.15)-(4.18) satisfied by the unperturbed frequencies. By
Proposition 4.5, (5.22), and the estimates (5.23), (5.25)-(5.26) we deduce the
following lemma (cf. Lemma 5.5 in [7]).
###### Lemma 5.5.
(Perturbed transversality) For $\varepsilon\in(0,\varepsilon_{0})$ small
enough and for all $\gamma\in[\gamma_{1},\gamma_{2}]$,
$\displaystyle\max_{0\leq n\leq
m_{0}}|\partial_{\gamma}^{n}\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell|\geq\frac{\rho_{0}}{2}\braket{\ell}\,,\quad\forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,;$
(5.35) $\displaystyle\max_{0\leq n\leq
m_{0}}|\partial_{\gamma}^{n}(\vec{\Omega}_{\varepsilon}(\gamma)-{\mathtt{m}}_{1}^{\infty}(\gamma)\vec{\jmath})\cdot\ell|\geq\frac{\rho_{0}}{2}\braket{\ell}\,,\quad\forall\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$
(5.36) $\displaystyle\begin{cases}\max_{0\leq n\leq
m_{0}}|\partial_{\gamma}^{n}(\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma))|\geq\frac{\rho_{0}}{2}\braket{\ell}\,,\\\
\vec{\jmath}\cdot\ell+j=0\,,\quad\ell\in{\mathbb{Z}}^{\nu}\,,\
j\in{\mathbb{S}}_{0}^{c}\,;\end{cases}$ (5.37)
$\displaystyle\begin{cases}\max_{0\leq n\leq
m_{0}}|\partial_{\gamma}^{n}(\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma))|\geq\frac{\rho_{0}}{2}\braket{\ell}\\\
\vec{\jmath}\cdot\ell+j-j^{\prime}=0\,,\quad\ell\in{\mathbb{Z}}^{\nu}\,,\
j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\,,\
(\ell,j,j^{\prime})\neq(0,j,j)\,;\end{cases}$ (5.38)
$\displaystyle\begin{cases}\max_{0\leq n\leq
m_{0}}|\partial_{\gamma}^{n}(\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)+\mu_{j^{\prime}}^{\infty}(\gamma))|\geq\frac{\rho_{0}}{2}\braket{\ell}\\\
\vec{\jmath}\cdot\ell+j+j^{\prime}=0\,,\quad\ell\in{\mathbb{Z}}^{\nu}\,,\
j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\,.\end{cases}$ (5.39)
The transversality estimates (5.35)-(5.39) and an application of the Rüssmann
Theorem 17.1 in [32] directly imply the following bounds for the sets in
(5.28)-(5.32) (cf. Lemma 5.6 in [7]).
###### Lemma 5.6.
(Estimates of the resonant sets) The measures of the sets (5.28)-(5.32)
satisfy
$\displaystyle|R_{\ell}^{(0)}|,|R_{\ell}^{(T)}|\lesssim(\upsilon\braket{\ell}^{-(\tau+1)})^{\frac{1}{m_{0}}}\,,$
$\displaystyle\quad|R_{\ell,j}^{(I)}|\lesssim\big{(}\upsilon|j|^{\frac{1}{2}}\braket{\ell}^{-(\tau+1)}\big{)}^{\frac{1}{m_{0}}}\,,$
$\displaystyle|R_{\ell,j,j^{\prime}}^{(II)}|\lesssim\big{(}\upsilon\braket{\ell}^{-(\tau+1)}\big{)}^{\frac{1}{m_{0}}}\,,$
$\displaystyle\quad|Q_{\ell,j,j^{\prime}}^{(II)}|\lesssim\big{(}\upsilon\,\big{(}|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}\braket{\ell}^{-(\tau+1)}\big{)}^{\frac{1}{m_{0}}}\,.$
We now estimate the measure of all the sets in (5.27). By Lemma 5.6, and the
choice of $\tau$ in (5.20), we have
$\displaystyle\Big{|}\bigcup_{\ell\neq 0}R^{(0)}_{\ell}\cup
R^{(T)}_{\ell}\Big{|}\leq\sum_{\ell\neq
0}|R^{(0)}_{\ell}|+|R^{(T)}_{\ell}|\lesssim\sum_{\ell\neq
0}\Big{(}\frac{\upsilon}{\braket{\ell}^{\tau+1}}\Big{)}^{\frac{1}{m_{0}}}\lesssim\upsilon^{\frac{1}{m_{0}}}\,,$
(5.40) $\displaystyle\left|\bigcup_{\ell\neq
0,j=-\vec{\jmath}\cdot\ell}R_{\ell,j}^{(I)}\right|\leq\sum_{\ell\neq
0}|R_{\ell,-\vec{\jmath}\cdot\ell}^{(I)}|\lesssim\sum_{\ell}\Big{(}\frac{\upsilon}{\braket{\ell}^{\tau+\frac{1}{2}}}\Big{)}^{\frac{1}{m_{0}}}\lesssim\upsilon^{\frac{1}{m_{0}}}\,,$
(5.41)
and using also Lemma 5.4,
$\displaystyle\left|\bigcup_{\ell,\,j,j^{\prime}\in{\mathbb{S}}_{0}^{c}\atop\vec{\jmath}\cdot\ell+j+j^{\prime}=0}Q_{\ell,j,j^{\prime}}^{(II)}\right|\leq\sum_{\ell,\left|j\right|\leq
C\braket{\ell}^{2},\atop
j^{\prime}=-\vec{\jmath}\cdot\ell-j}|Q_{\ell,j,j^{\prime}}^{(II)}|\lesssim\sum_{\ell,\left|j\right|\leq
C\braket{\ell}^{2}}\left(\frac{\upsilon}{\braket{\ell}^{\tau}}\right)^{\frac{1}{m_{0}}}\lesssim\upsilon^{\frac{1}{m_{0}}}\,.$
(5.42)
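The convergence of lattice sums such as those in (5.40)-(5.42) rests only on the decay exponent $(\tau+1)/m_{0}$ exceeding the lattice dimension $\nu$, which is exactly what the choice of $\tau$ in (5.20) guarantees. The following sketch (with purely illustrative values $\nu=2$, $m_{0}=1$, $\tau=3$, not the actual parameters of (5.20)) checks numerically that the partial sums of $\sum_{\ell\neq 0}\braket{\ell}^{-(\tau+1)/m_{0}}$ stabilize:

```python
def shell_count(r: int, nu: int) -> int:
    # number of lattice points in Z^nu with sup-norm exactly r (r >= 1)
    return (2 * r + 1) ** nu - (2 * r - 1) ** nu

def partial_sum(exponent: float, nu: int, R: int) -> float:
    # sum of |l|_inf^{-exponent} over 0 < |l|_inf <= R
    return sum(shell_count(r, nu) * r ** (-exponent) for r in range(1, R + 1))

nu, m0, tau = 2, 1, 3            # illustrative values only, not those of (5.20)
p = (tau + 1) / m0               # decay exponent (tau + 1)/m0 = 4 > nu = 2
S20 = partial_sum(p, nu, 20)
S40 = partial_sum(p, nu, 40)
S80 = partial_sum(p, nu, 80)
print(S20, S40 - S20, S80 - S40)  # successive tails shrink: the sum converges
```

Doubling the truncation radius adds less and less mass, consistent with the absolute convergence used above.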
We are left with estimating the measure of
$\displaystyle\bigcup_{(\ell,j,j^{\prime})\neq(0,j,j),j\neq
j^{\prime}\atop\vec{\jmath}\cdot\ell+j-j^{\prime}=0}\\!\\!\\!\\!\\!\\!\\!\\!R_{\ell,j,j^{\prime}}^{(II)}$
$\displaystyle=\left(\bigcup_{j\neq j^{\prime}\,,\ j\cdot
j^{\prime}<0\atop\vec{\jmath}\cdot\ell+j-j^{\prime}=0}R_{\ell,j,j^{\prime}}^{(II)}\right)\cup\left(\bigcup_{j\neq
j^{\prime}\,,\ j\cdot
j^{\prime}>0\atop\vec{\jmath}\cdot\ell+j-j^{\prime}=0}R_{\ell,j,j^{\prime}}^{(II)}\right)=:{\mathtt{I}}_{1}\cup{\mathtt{I}}_{2}\,.$
(5.43)
We first estimate the measure of ${\mathtt{I}}_{1}$. For $j\cdot
j^{\prime}<0$, the momentum condition reads $j-j^{\prime}={\rm
sgn}(j)(|j|+|j^{\prime}|)=-\vec{\jmath}\cdot\ell$, thus $|j|,|j^{\prime}|\leq
C\left\langle\ell\right\rangle$. Hence, by Lemma 5.6 and the choice of $\tau$
in (5.20), we have
$\displaystyle|{\mathtt{I}}_{1}|\leq\sum_{\ell,|j|\leq
C\left\langle\ell\right\rangle,j^{\prime}=j+\vec{\jmath}\cdot\ell}|R_{\ell,j,j^{\prime}}^{(II)}|\lesssim\sum_{\ell,\left|j\right|\leq
C\braket{\ell}}\left(\frac{\upsilon}{\braket{\ell}^{\tau+1}}\right)^{\frac{1}{m_{0}}}\lesssim\upsilon^{\frac{1}{m_{0}}}\,.$
(5.44)
Then we estimate the measure of ${\mathtt{I}}_{2}$ in (5.43). The key step is
given in the next lemma. Recall the definition of the sets
$R_{\ell,j,j^{\prime}}^{(II)}$ and $R_{\ell}^{(T)}$ in (5.31) and (5.29).
###### Lemma 5.7.
Let $\upsilon_{0}\geq\upsilon$ and $\tau\geq\tau_{0}\geq 1$. There is a
constant $C_{1}>0$ such that, for $\varepsilon$ small enough, for any
$\vec{\jmath}\cdot\ell+j-j^{\prime}=0$, $j\cdot j^{\prime}>0$,
$\min\\{|j|,|j^{\prime}|\\}\geq
C_{1}\upsilon_{0}^{-2}\braket{\ell}^{2(\tau_{0}+1)}\quad\Longrightarrow\quad
R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau)\subset\bigcup_{\ell\neq
0}R_{\ell}^{(T)}(\upsilon_{0},\tau_{0})\,.$ (5.45)
###### Proof.
If $\gamma\in[\gamma_{1},\gamma_{2}]\setminus\bigcup_{\ell\neq
0}R_{\ell}^{(T)}(\upsilon_{0},\tau_{0})$, then
$|(\vec{\Omega}_{\varepsilon}(\gamma)-{\mathtt{m}}_{1}^{\infty}(\gamma)\vec{\jmath})\cdot\ell|\geq
8\upsilon_{0}\braket{\ell}^{-\tau_{0}}\,,\quad\forall\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,.$
(5.46)
Then, by (5.24), the momentum condition $j-j^{\prime}=-\vec{\jmath}\cdot\ell$,
(5.25), (5.26), Lemma 4.4, the condition $j\cdot j^{\prime}>0$, (4.23), and
(5.46), we deduce that
$\displaystyle|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma)|\geq|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+{\mathtt{m}}_{1}^{\infty}(j-j^{\prime})|-|{\mathtt{m}}_{\frac{1}{2}}^{\infty}||\Omega_{j}(\gamma)-\Omega_{j^{\prime}}(\gamma)|-|{\mathfrak{r}}_{j}^{\infty}(\gamma)-{\mathfrak{r}}_{j^{\prime}}^{\infty}(\gamma)|$
$\displaystyle\geq|(\vec{\Omega}_{\varepsilon}(\gamma)-{\mathtt{m}}_{1}^{\infty}\vec{\jmath})\cdot\ell|-(1-C\varepsilon\upsilon^{-1})\big{|}|j|^{\frac{1}{2}}-|j^{\prime}|^{\frac{1}{2}}\big{|}-C\Big{(}\frac{1}{|j|^{\frac{1}{2}}}+\frac{1}{|j^{\prime}|^{\frac{1}{2}}}\Big{)}-C\frac{\varepsilon}{\upsilon^{3}}\Big{(}\frac{1}{|j|^{\frac{1}{2}}}+\frac{1}{|j^{\prime}|^{\frac{1}{2}}}\Big{)}$
$\displaystyle\geq\frac{8\upsilon_{0}}{\braket{\ell}^{\tau_{0}}}-\frac{1}{2}\frac{|j-j^{\prime}|}{|j|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}}-C\Big{(}\frac{1}{|j|^{\frac{1}{2}}}+\frac{1}{|j^{\prime}|^{\frac{1}{2}}}\Big{)}\geq\frac{8\upsilon_{0}}{\braket{\ell}^{\tau_{0}}}-C\Big{(}\frac{\braket{\ell}}{|j|^{\frac{1}{2}}}+\frac{\braket{\ell}}{|j^{\prime}|^{\frac{1}{2}}}\Big{)}$
$\displaystyle\geq\frac{4\upsilon_{0}}{\braket{\ell}^{\tau_{0}}}$
for any
$|j|,|j^{\prime}|>C_{1}\upsilon_{0}^{-2}\braket{\ell}^{2(\tau_{0}+1)}$, for
$C_{1}>C^{2}/64$. Since $\upsilon_{0}\geq\upsilon$ and $\tau\geq\tau_{0}$ we
deduce that
$|\vec{\Omega}_{\varepsilon}(\gamma)\cdot\ell+\mu_{j}^{\infty}(\gamma)-\mu_{j^{\prime}}^{\infty}(\gamma)|\geq
4\upsilon\braket{\ell}^{-\tau}\,,$
namely that $\gamma\not\in R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau)$. ∎
Note that the set of indices $(\ell,j,j^{\prime})$ such that
$\vec{\jmath}\cdot\ell+j-j^{\prime}=0$ and
$\min\\{|j|,|j^{\prime}|\\}<C_{1}\upsilon_{0}^{-2}\braket{\ell}^{2(\tau_{0}+1)}$
is contained, for $\upsilon_{0}$ small enough, in the set
${\cal I}_{\ell}:=\Big{\\{}(\ell,j,j^{\prime})\
:\,\vec{\jmath}\cdot\ell+j-j^{\prime}=0\,,\
|j|,|j^{\prime}|\leq\upsilon_{0}^{-3}\langle\ell\rangle^{2(\tau_{0}+1)}\Big{\\}}$
(5.47)
because
$\max\\{|j|,|j^{\prime}|\\}\leq\min\\{|j|,|j^{\prime}|\\}+|j-j^{\prime}|<C_{1}\upsilon_{0}^{-2}\langle\ell\rangle^{2(\tau_{0}+1)}+C\langle\ell\rangle\leq\upsilon_{0}^{-3}\langle\ell\rangle^{2(\tau_{0}+1)}$.
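The inclusion into ${\cal I}_{\ell}$ is elementary arithmetic: $\max\{|j|,|j'|\}\leq\min\{|j|,|j'|\}+|j-j'|$ together with $|j-j'|=|\vec{\jmath}\cdot\ell|\leq C\braket{\ell}$. A small numerical sketch (with hypothetical constants $C_{1}=C=1$ and $\tau_{0}=1$, chosen only for illustration) confirms that $C_{1}\upsilon_{0}^{-2}\braket{\ell}^{2(\tau_{0}+1)}+C\braket{\ell}\leq\upsilon_{0}^{-3}\braket{\ell}^{2(\tau_{0}+1)}$ once $\upsilon_{0}$ is small:

```python
C1, C, tau0 = 1.0, 1.0, 1   # hypothetical constants, for illustration only
u0 = 0.1                    # a "small" upsilon_0

def bracket(l: int) -> int:
    # <l> = max(1, |l|)
    return max(1, abs(l))

for l in range(0, 30):
    L = bracket(l)
    lhs = C1 * u0 ** -2 * L ** (2 * (tau0 + 1)) + C * L  # bound on max{|j|,|j'|}
    rhs = u0 ** -3 * L ** (2 * (tau0 + 1))               # threshold in (5.47)
    assert lhs <= rhs
print("inclusion verified on the sample range")
```

The margin grows with $\braket{\ell}$, since the left-hand side gains only one extra power of $\braket{\ell}$ while the right-hand side gains a factor $\upsilon_{0}^{-1}$.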
As a consequence, by Lemma 5.7 we deduce that
$\displaystyle{\mathtt{I}}_{2}=\bigcup_{j\neq j^{\prime}\,,\ j\cdot
j^{\prime}>0\atop\vec{\jmath}\cdot\ell+j-j^{\prime}=0}R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau)\subset\Big{(}\bigcup_{\ell\neq
0}R_{\ell}^{(T)}(\upsilon_{0},\tau_{0})\Big{)}\bigcup\Big{(}\bigcup_{(\ell,j,j^{\prime})\in{\cal
I}_{\ell}}R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau)\Big{)}\,.$ (5.48)
###### Lemma 5.8.
Let $\tau_{0}:=m_{0}\nu$ and $\upsilon_{0}=\upsilon^{\frac{1}{4m_{0}}}$. Then
$|{\mathtt{I}}_{2}|\leq C\upsilon^{\frac{1}{4m_{0}^{2}}}\,.$ (5.49)
###### Proof.
By (5.40) (applied with $\upsilon_{0},\tau_{0}$ instead of $\upsilon,\tau$)
and $\tau_{0}=m_{0}\nu$, we obtain
$\Big{|}\bigcup_{\ell\neq
0}R^{(T)}_{\ell}(\upsilon_{0},\tau_{0})\Big{|}\lesssim\upsilon_{0}^{\frac{1}{m_{0}}}\lesssim\upsilon^{\frac{1}{4m_{0}^{2}}}\,.$
(5.50)
Moreover, recalling (5.47),
$\Big{|}\bigcup_{(\ell,j,j^{\prime})\in{\cal
I}_{\ell}}R_{\ell,j,j^{\prime}}^{(II)}(\upsilon,\tau)\Big{|}\lesssim\sum_{\ell\in{\mathbb{Z}}^{\nu}\atop|j|\leq
C_{1}\upsilon_{0}^{-3}\braket{\ell}^{2(\tau_{0}+1)}}\left(\frac{\upsilon}{\braket{\ell}^{\tau+1}}\right)^{\frac{1}{m_{0}}}\lesssim\sum_{\ell\in{\mathbb{Z}}^{\nu}}\frac{\upsilon^{\frac{1}{m_{0}}}\upsilon_{0}^{-3}}{\braket{\ell}^{\frac{\tau+1}{m_{0}}-2(\tau_{0}+1)}}\leq
C\upsilon^{\frac{1}{4m_{0}}}\,,$ (5.51)
by the choice of $\tau$ in (5.20) and $\upsilon_{0}$. The bound (5.49) follows
by (5.50) and (5.51). ∎
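The exponents in Lemma 5.8 reduce to bookkeeping of rational powers: with $\upsilon_{0}=\upsilon^{1/(4m_{0})}$ one has $\upsilon_{0}^{1/m_{0}}=\upsilon^{1/(4m_{0}^{2})}$ and $\upsilon^{1/m_{0}}\upsilon_{0}^{-3}=\upsilon^{1/(4m_{0})}$. A quick check of this bookkeeping with exact rational arithmetic (for an illustrative value of $m_{0}$):

```python
from fractions import Fraction

m0 = 3                                # illustrative; any positive integer works
e_u0 = Fraction(1, 4 * m0)            # upsilon_0 = upsilon ** e_u0

# (upsilon ** e_u0) ** (1/m0) -> exponent e_u0 / m0, as used in (5.50)
exp_550 = e_u0 * Fraction(1, m0)
assert exp_550 == Fraction(1, 4 * m0 ** 2)

# upsilon ** (1/m0) * upsilon_0 ** (-3) -> exponent 1/m0 - 3*e_u0, as in (5.51)
exp_551 = Fraction(1, m0) - 3 * e_u0
assert exp_551 == Fraction(1, 4 * m0)
print(exp_550, exp_551)
```

Since $\upsilon\in(0,1)$ and $1/(4m_{0}^{2})<1/(4m_{0})$, the bound (5.50) dominates (5.51), which is why (5.49) carries the exponent $1/(4m_{0}^{2})$.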
###### Proof of Theorem 5.3 completed.
By (5.27), (5.40), (5.41), (5.42), (5.43), (5.44) and (5.49) we deduce that
$\left|{\mathcal{G}}_{\varepsilon}^{c}\right|\leq
C\upsilon^{\frac{1}{4m_{0}^{2}}}\,.$
For $\upsilon=\varepsilon^{\mathtt{a}}$ as in (5.20), we get
$|{\mathcal{G}}_{\varepsilon}|\geq\gamma_{2}-\gamma_{1}-C\varepsilon^{{\mathtt{a}}/4m_{0}^{2}}$.
The proof of Theorem 5.3 is concluded. ∎
###### Remark 5.9.
We have actually imposed in Lemma 5.8 the stronger non-resonance condition
(5.15) with $\upsilon_{0}=\upsilon^{\frac{1}{4m_{0}}}>\upsilon$. Since this
has no significant bearing on Lemma 7.7, we keep $\upsilon$.
## 6 Approximate inverse
In order to implement a convergent Nash-Moser scheme that leads to a solution
of ${\mathcal{F}}(i,\alpha)=0$, where ${\mathcal{F}}(i,\alpha)$ is the
nonlinear operator defined in (5.1), we construct an _almost approximate right
inverse_ of the linearized operator
${\rm
d}_{i,\alpha}{\mathcal{F}}(i_{0},\alpha_{0})[\widehat{\imath},{\widehat{\alpha}}]=\omega\cdot\partial_{\varphi}\widehat{\imath}-{\rm
d}_{i}X_{H_{\alpha}}\left(i_{0}({\varphi})\right)[\widehat{\imath}]-\left({\widehat{\alpha}},0,0\right)\,.$
Note that ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{0},\alpha_{0})={\rm
d}_{i,\alpha}{\mathcal{F}}(i_{0})$ is independent of $\alpha_{0}$. We assume
that the torus
$i_{0}({\varphi})=(\theta_{0}({\varphi}),I_{0}({\varphi}),w_{0}({\varphi}))$
is reversible and traveling, according to (5.7).
In the sequel we shall assume the smallness condition, for some
${\mathtt{k}}:={\mathtt{k}}(\tau,\nu)>0$,
$\varepsilon\upsilon^{-{\mathtt{k}}}\ll 1$.
We closely follow the strategy presented in [6] and implemented for the water
waves equations in [9, 2, 7]. As shown in [7], this construction preserves the
momentum-preserving properties needed in the search for traveling waves, and
the estimates are very similar. We shall therefore be brief.
First of all, we state tame estimates for the composition operator induced by
the Hamiltonian vector field
$X_{P}=(\partial_{I}P,-\partial_{\theta}P,\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}J\nabla_{w}P)$
in (5.1) (see Lemma 6.1 of [7]).
###### Lemma 6.1.
(Estimates of the perturbation $P$) Let ${\mathfrak{I}}({\varphi})$ in (5.8)
satisfy $\left\|{\mathfrak{I}}\right\|_{3s_{0}+2k_{0}+5}^{k_{0},\upsilon}\leq
1$. Then, for any $s\geq s_{0}$,
$\left\|X_{P}(i)\right\|_{s}^{k_{0},\upsilon}\lesssim_{s}1+\left\|{\mathfrak{I}}\right\|_{s+2s_{0}+2k_{0}+3}^{k_{0},\upsilon}$,
and, for all
$\widehat{\imath}:=({\widehat{\theta}},{\widehat{I}},{\widehat{w}})$,
$\displaystyle\left\|{\rm
d}_{i}X_{P}(i)[\widehat{\imath}]\right\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{s}\left\|\widehat{\imath}\right\|_{s+1}^{k_{0},\upsilon}+\left\|{\mathfrak{I}}\right\|_{s+2s_{0}+2k_{0}+4}^{k_{0},\upsilon}\left\|\widehat{\imath}\right\|_{s_{0}+1}^{k_{0},\upsilon}\,,$
$\displaystyle\left\|{\rm
d}_{i}^{2}X_{P}(i)[\widehat{\imath},\widehat{\imath}]\right\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{s}\left\|\widehat{\imath}\right\|_{s+1}^{k_{0},\upsilon}\left\|\widehat{\imath}\right\|_{s_{0}+1}^{k_{0},\upsilon}+\left\|{\mathfrak{I}}\right\|_{s+2s_{0}+2k_{0}+5}^{k_{0},\upsilon}(\left\|\widehat{\imath}\right\|_{s_{0}+1}^{k_{0},\upsilon})^{2}\,.$
Throughout this section we assume the following hypothesis, which is verified
by the approximate solutions obtained at each step of the Nash-Moser Theorem 9.1.
* •
ANSATZ. The map
$(\omega,\gamma)\mapsto{\mathfrak{I}}_{0}(\omega,\gamma)=i_{0}({\varphi};\omega,\gamma)-({\varphi},0,0)$
is $k_{0}$-times differentiable with respect to the parameters
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ and, for
some $\mu:=\mu(\tau,\nu)>0$, $\upsilon\in(0,1)$,
$\left\|{\mathfrak{I}}_{0}\right\|_{s_{0}+\mu}^{k_{0},\upsilon}+\left|\alpha_{0}-\omega\right|^{k_{0},\upsilon}\leq
C\varepsilon\upsilon^{-1}\,.$ (6.1)
We first modify the approximate torus $i_{0}({\varphi})$ to obtain a nearby
isotropic torus $i_{\delta}({\varphi})$, namely such that the pull-back 1-form
$i_{\delta}^{*}\Lambda$ is closed, where $\Lambda$ is the Liouville 1-form
defined in (2.44). Consider the pull-back $1$-form
$\displaystyle i_{0}^{*}\Lambda$
$\displaystyle=\sum_{k=1}^{\nu}a_{k}({\varphi}){\rm d}{\varphi}_{k}\,,\quad
a_{k}({\varphi}):=-\big{(}[\partial_{\varphi}\theta_{0}({\varphi})]^{\top}I_{0}({\varphi})\big{)}_{k}+\tfrac{1}{2}\big{(}J_{\angle}^{-1}w_{0}({\varphi}),\partial_{{\varphi}_{k}}w_{0}({\varphi})\big{)}_{L^{2}}\,,$
(6.2)
and define
$A_{kj}({\varphi}):=\partial_{{\varphi}_{k}}a_{j}({\varphi})-\partial_{{\varphi}_{j}}a_{k}({\varphi})$.
The next lemma follows as in Lemma 5.3 of [2] and Lemma 6.2 of [7]. Let
$Z({\varphi}):={\mathcal{F}}(i_{0},\alpha_{0})({\varphi})=\omega\cdot\partial_{\varphi}i_{0}({\varphi})-X_{H_{\alpha_{0}}}(i_{0}({\varphi}))$.
###### Lemma 6.2.
(Isotropic torus) The torus
$i_{\delta}({\varphi}):=(\theta_{0}({\varphi}),I_{\delta}({\varphi}),w_{0}({\varphi}))$,
defined by
$I_{\delta}({\varphi}):=I_{0}({\varphi})+[\partial_{\varphi}\theta_{0}({\varphi})]^{-\top}\rho({\varphi})\,,\quad\rho=(\rho_{j})_{j=1,\ldots,\nu}\,,\quad\rho_{j}({\varphi}):=\Delta_{\varphi}^{-1}\sum_{k=1}^{\nu}\partial_{{\varphi}_{k}}A_{kj}({\varphi})\,,$
is isotropic. Moreover, there is $\sigma:=\sigma(\nu,\tau)$ such that, for all
$s\geq s_{0}$,
$\displaystyle\left\|I_{\delta}-I_{0}\right\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{s}\left\|{\mathfrak{I}}_{0}\right\|_{s+1}^{k_{0},\upsilon}\,,\quad\left\|I_{\delta}-I_{0}\right\|_{s}^{k_{0},\upsilon}\lesssim_{s}\upsilon^{-1}\big{(}\left\|Z\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|Z\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\sigma}^{k_{0},\upsilon}\big{)}$
(6.3)
$\displaystyle\left\|{\mathcal{F}}(i_{\delta},\alpha_{0})\right\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{s}\left\|Z\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|Z\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\sigma}^{k_{0},\upsilon}\,,\quad\left\|{\rm
d}_{i}(i_{\delta})[\widehat{\imath}]\right\|_{s_{1}}\lesssim_{s_{1}}\left\|\widehat{\imath}\right\|_{s_{1}+1}\,,$
(6.4)
for $s_{1}\leq s_{0}+\mu$ (cf. (6.1)). Furthermore $i_{\delta}({\varphi})$ is
a reversible and traveling torus, cf. (5.7).
We first find an approximate inverse of the linearized operator ${\rm
d}_{i,\alpha}{\mathcal{F}}(i_{\delta})$. We introduce the symplectic
diffeomorphism $G_{\delta}:(\phi,y,{\mathtt{w}})\rightarrow(\theta,I,w)$ of
the phase space
${\mathbb{T}}^{\nu}\times{\mathbb{R}}^{\nu}\times\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$,
$\begin{pmatrix}\theta\\\ I\\\
w\end{pmatrix}:=G_{\delta}\begin{pmatrix}\phi\\\ y\\\
{\mathtt{w}}\end{pmatrix}:=\begin{pmatrix}\theta_{0}(\phi)\\\
I_{\delta}(\phi)+\left[\partial_{\phi}\theta_{0}(\phi)\right]^{-\top}y+\left[(\partial_{\theta}{\widetilde{w}}_{0})(\theta_{0}(\phi))\right]^{\top}J_{\angle}^{-1}{\mathtt{w}}\\\
w_{0}(\phi)+{\mathtt{w}}\end{pmatrix}\,,$ (6.5)
where ${\widetilde{w}}_{0}(\theta):=w_{0}(\theta_{0}^{-1}(\theta))$. It is
proved in Lemma 2 of [6] that $G_{\delta}$ is symplectic, because the torus
$i_{\delta}$ is isotropic (Lemma 6.2). In the new coordinates, $i_{\delta}$ is
the trivial embedded torus $(\phi,y,{\mathtt{w}})=(\phi,0,0)$. Moreover the
diffeomorphism $G_{\delta}$ in (6.5) is reversibility and momentum preserving,
in the sense that (Lemma 6.3 in [7]) $\vec{\mathcal{S}}\circ
G_{\delta}=G_{\delta}\circ\vec{\mathcal{S}}$, $\vec{\tau}_{\varsigma}\circ
G_{\delta}=G_{\delta}\circ\vec{\tau}_{\varsigma}$,
$\forall\,\varsigma\in{\mathbb{R}}$, where $\vec{\mathcal{S}}$ and
$\vec{\tau}_{\varsigma}$ are defined respectively in (2.40), (2.41).
Under the symplectic diffeomorphism $G_{\delta}$, the Hamiltonian vector field
$X_{H_{\alpha}}$ changes into
$X_{K_{\alpha}}=\left(DG_{\delta}\right)^{-1}X_{H_{\alpha}}\circ
G_{\delta}\qquad{\rm where}\qquad K_{\alpha}:=H_{\alpha}\circ G_{\delta}$
is reversible and momentum preserving, in the sense that
$K_{\alpha}\circ\vec{\mathcal{S}}=K_{\alpha}$,
$K_{\alpha}\circ\vec{\tau}_{\varsigma}=K_{\alpha}$,
$\forall\,\varsigma\in{\mathbb{R}}$.
The Taylor expansion of $K_{\alpha}$ at the trivial torus $(\phi,0,0)$ is
$\displaystyle K_{\alpha}(\phi,y,{\mathtt{w}})=$ $\displaystyle\
K_{00}(\phi,\alpha)+K_{10}(\phi,\alpha)\cdot
y+(K_{01}(\phi,\alpha),{\mathtt{w}})_{L^{2}}+\tfrac{1}{2}K_{20}(\phi)y\cdot y$
(6.6)
$\displaystyle+(K_{11}(\phi)y,{\mathtt{w}})_{L^{2}}+\tfrac{1}{2}(K_{02}(\phi){\mathtt{w}},{\mathtt{w}})_{L^{2}}+K_{\geq
3}(\phi,y,{\mathtt{w}})\,,$
where $K_{\geq 3}$ collects all terms at least cubic in the variables
$(y,{\mathtt{w}})$. Here $K_{00}\in{\mathbb{R}}$,
$K_{10}\in{\mathbb{R}}^{\nu}$,
$K_{01}\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$, whereas $K_{20}$
is a $\nu\times\nu$ symmetric matrix,
$K_{11}\in{\mathcal{L}}({\mathbb{R}}^{\nu},\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle})$
and $K_{02}$ is a self-adjoint operator acting on
$\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$.
The Hamilton equations associated to (6.6) are
$\begin{cases}\dot{\phi}=K_{10}(\phi,\alpha)+K_{20}(\phi)y+[K_{11}(\phi)]^{\top}{\mathtt{w}}+\partial_{y}K_{\geq
3}(\phi,y,{\mathtt{w}})\\\
\dot{y}=-\partial_{\phi}K_{00}(\phi,\alpha)-[\partial_{\phi}K_{10}(\phi,\alpha)]^{\top}y-[\partial_{\phi}K_{01}(\phi,\alpha)]^{\top}{\mathtt{w}}\\\
\ \ \ \ \ -\partial_{\phi}\left(\tfrac{1}{2}K_{20}(\phi)y\cdot
y+\left(K_{11}(\phi)y,{\mathtt{w}}\right)_{L^{2}}+\tfrac{1}{2}\left(K_{02}(\phi){\mathtt{w}},{\mathtt{w}}\right)_{L^{2}}+K_{\geq
3}(\phi,y,{\mathtt{w}})\right)\\\
\dot{\mathtt{w}}=J_{\angle}\,\left(K_{01}(\phi,\alpha)+K_{11}(\phi)y+K_{02}(\phi){\mathtt{w}}+\nabla_{{\mathtt{w}}}K_{\geq
3}(\phi,y,{\mathtt{w}})\right)\,,\end{cases}$ (6.7)
where $\partial_{\phi}K_{10}^{\top}$ is the $\nu\times\nu$ transposed matrix
and
$\partial_{\phi}K_{01}^{\top},K_{11}^{\top}:\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\rightarrow{\mathbb{R}}^{\nu}$
are defined by the duality relation
$(\partial_{\phi}K_{01}[{\widehat{\phi}}],{\mathtt{w}})_{L^{2}}={\widehat{\phi}}\cdot[\partial_{\phi}K_{01}]^{\top}{\mathtt{w}}$
for any ${\widehat{\phi}}\in{\mathbb{R}}^{\nu}$,
${\mathtt{w}}\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$.
The terms $K_{00},K_{01}$, $K_{10}-\omega$ in the Taylor expansion (6.6)
vanish at an exact solution: indeed, arguing as in Lemma 5.4 in [2], there is
$\sigma:=\sigma(\nu,\tau)>0$, such that, for all $s\geq s_{0}$,
$\left\|\partial_{\phi}K_{00}(\cdot,\alpha_{0})\right\|_{s}^{k_{0},\upsilon}+\left\|K_{10}(\cdot,\alpha_{0})-\omega\right\|_{s}^{k_{0},\upsilon}+\left\|K_{01}(\cdot,\alpha_{0})\right\|_{s}^{k_{0},\upsilon}\lesssim_{s}\left\|Z\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|Z\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\sigma}^{k_{0},\upsilon}\,.$
(6.8)
Under the linear change of variables
$DG_{\delta}({\varphi},0,0)\begin{pmatrix}{\widehat{\phi}}\\\ {\widehat{y}}\\\
{\widehat{{\mathtt{w}}}}\end{pmatrix}:=\begin{pmatrix}\partial_{\phi}\theta_{0}({\varphi})&0&0\\\
\partial_{\phi}I_{\delta}({\varphi})&[\partial_{\phi}\theta_{0}({\varphi})]^{-\top}&[(\partial_{\theta}{\widetilde{w}}_{0})(\theta_{0}({\varphi}))]^{\top}J_{\angle}^{-1}\\\
\partial_{\phi}w_{0}({\varphi})&0&{\rm
Id}\end{pmatrix}\begin{pmatrix}{\widehat{\phi}}\\\ {\widehat{y}}\\\
{\widehat{{\mathtt{w}}}}\end{pmatrix}\,,$
the linearized operator ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{\delta})$ is
approximately transformed into the one obtained when one linearizes the
Hamiltonian system (6.7) at $(\phi,y,{\mathtt{w}})=({\varphi},0,0)$,
differentiating also in $\alpha$ at $\alpha_{0}$ and changing
$\partial_{t}\rightsquigarrow\omega\cdot\partial_{\varphi}$, namely
$\begin{pmatrix}\widehat{\phi}\\\ \widehat{y}\\\ \widehat{\mathtt{w}}\\\
\widehat{\alpha}\end{pmatrix}\mapsto\begin{pmatrix}\omega\cdot\partial_{\varphi}{\widehat{\phi}}-\partial_{\phi}K_{10}({\varphi})[{\widehat{\phi}}]-\partial_{\alpha}K_{10}({\varphi})[{\widehat{\alpha}}]-K_{20}({\varphi}){\widehat{y}}-[K_{11}({\varphi})]^{\top}{\widehat{{\mathtt{w}}}}\\\
\omega\cdot\partial_{\varphi}{\widehat{y}}+\partial_{\phi\phi}K_{00}({\varphi})[{\widehat{\phi}}]+\partial_{\alpha}\partial_{\phi}K_{00}({\varphi})[{\widehat{\alpha}}]+[\partial_{\phi}K_{10}({\varphi})]^{\top}{\widehat{y}}+[\partial_{\phi}K_{01}({\varphi})]^{\top}{\widehat{{\mathtt{w}}}}\\\
\omega\cdot\partial_{\varphi}{\widehat{{\mathtt{w}}}}-J_{\angle}\,\big{(}\partial_{\phi}K_{01}({\varphi})[{\widehat{\phi}}]+\partial_{\alpha}K_{01}({\varphi})[{\widehat{\alpha}}]+K_{11}({\varphi}){\widehat{y}}+K_{02}({\varphi}){\widehat{{\mathtt{w}}}}\big{)}\end{pmatrix}.$
(6.9)
In order to construct an “almost approximate” inverse of (6.9), we need that
${\mathcal{L}}_{\omega}:=\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}\left(\omega\cdot\partial_{\varphi}-JK_{02}({\varphi})\right)|_{\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}}$
(6.10)
is “almost invertible” (on traveling waves) up to remainders of size
$O(N_{{\mathtt{n}}-1}^{-{{\mathtt{a}}}})$, where, for
${\mathtt{n}}\in{\mathbb{N}}_{0}$
$N_{\mathtt{n}}:=K_{\mathtt{n}}^{p}\,,\quad
K_{\mathtt{n}}:=K_{0}^{\chi^{\mathtt{n}}}\,,\quad\chi=3/2\,.$ (6.11)
The sequence $(K_{\mathtt{n}})_{{\mathtt{n}}\geq 0}$ is the scale used in the
nonlinear Nash-Moser iteration of Section 9, and $(N_{\mathtt{n}})_{{\mathtt{n}}\geq 0}$
is the one appearing in Lemma 7.7 and Theorem 8.2. Let
$H_{\angle}^{s}({\mathbb{T}}^{\nu+1}):=H^{s}({\mathbb{T}}^{\nu+1})\cap\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$.
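The scale (6.11) grows superexponentially and satisfies the exact recursion $K_{{\mathtt{n}}+1}=K_{\mathtt{n}}^{\chi}$, which is what drives the quadratic-type convergence of the Nash-Moser scheme. A short sketch (with illustrative values $K_{0}=3$, $p=2$, not tied to the actual choices of Section 9):

```python
import math

K0, p, chi, steps = 3.0, 2, 1.5, 8   # illustrative values only

K = [K0 ** (chi ** n) for n in range(steps)]  # K_n = K_0^{chi^n}
N = [k ** p for k in K]                       # N_n = K_n^p

# the recursion K_{n+1} = K_n^chi holds up to floating-point error
for n in range(steps - 1):
    assert math.isclose(K[n + 1], K[n] ** chi, rel_tol=1e-9)
print([round(k, 2) for k in K[:4]])
```

Each step raises the truncation threshold to the power $\chi=3/2$, so the sequence grows far faster than geometrically, as required by the iteration of Section 9.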
* (AI)
Almost invertibility of ${\mathcal{L}}_{\omega}$: There exist positive real
numbers $\sigma$, $\mu({\mathtt{b}})$, ${\mathtt{a}}$, $p$, $K_{0}$ and a
subset
${\mathtt{\Lambda}}_{o}\subset{\mathtt{D}}{\mathtt{C}}(\upsilon,\tau)\times[\gamma_{1},\gamma_{2}]$
such that, for all $(\omega,\gamma)\in{\mathtt{\Lambda}}_{o}$, the operator
${\mathcal{L}}_{\omega}$ may be decomposed as
${\mathcal{L}}_{\omega}={\mathcal{L}}_{\omega}^{<}+{\mathcal{R}}_{\omega}+{\mathcal{R}}_{\omega}^{\perp}\,,$
(6.12)
where, for any traveling wave function $g\in
H_{\angle}^{s+\sigma}({\mathbb{T}}^{\nu+1},{\mathbb{R}}^{2})$ and for any
$(\omega,\gamma)\in{\mathtt{\Lambda}}_{o}$, there is a traveling wave solution
$h\in H_{\angle}^{s}({\mathbb{T}}^{\nu+1},{\mathbb{R}}^{2})$ of
${\mathcal{L}}_{\omega}^{<}h=g$ satisfying, for all $s_{0}\leq s\leq
S-\mu({\mathtt{b}})-\sigma$,
$\left\|({\mathcal{L}}_{\omega}^{<})^{-1}g\right\|_{s}^{k_{0},\upsilon}\lesssim_{S}\upsilon^{-1}\big{(}\left\|g\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|g\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\mu({{\mathtt{b}}})+\sigma}^{k_{0},\upsilon}\big{)}\,.$
(6.13)
In addition, if $g$ is anti-reversible, then $h$ is reversible. Moreover, for
any $s_{0}\leq s\leq S-\mu({\mathtt{b}})-\sigma$, for any traveling wave
$h\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$, the operators
${\mathcal{R}}_{\omega},{\mathcal{R}}_{\omega}^{\perp}$ satisfy the estimates
$\displaystyle\left\|{\mathcal{R}}_{\omega}h\right\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S}\varepsilon\upsilon^{-3}N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}\big{(}\left\|h\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|h\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\mu({\mathtt{b}})+\sigma}^{k_{0},\upsilon}\big{)}\,,$
(6.14)
$\displaystyle\left\|{\mathcal{R}}_{\omega}^{\perp}h\right\|_{s_{0}}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S}K_{\mathtt{n}}^{-{\rm
b}}\big{(}\left\|h\right\|_{s_{0}+{\rm
b}+\sigma}^{k_{0},\upsilon}+\left\|h\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s_{0}+\mu({\mathtt{b}})+\sigma+{\rm
b}}^{k_{0},\upsilon}\big{)}\,,\ \forall\,{\rm b}>0\,,$ (6.15)
$\displaystyle\left\|{\mathcal{R}}_{\omega}^{\perp}h\right\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S}\left\|h\right\|_{s+\sigma}^{k_{0},\upsilon}+\left\|h\right\|_{s_{0}+\sigma}^{k_{0},\upsilon}\left\|{\mathfrak{I}}_{0}\right\|_{s+\mu({\mathtt{b}})+\sigma}^{k_{0},\upsilon}\,.$
(6.16)
This assumption shall be verified by Theorem 8.9 at each step of the Nash-
Moser iteration.
In order to find an almost approximate inverse of the linear operator in (6.9)
(and so of ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{\delta})$), it is sufficient to
invert the operator
${\mathbb{D}}\big{[}{\widehat{\phi}},{\widehat{y}},{\widehat{{\mathtt{w}}}},{\widehat{\alpha}}\big{]}:=\begin{pmatrix}\omega\cdot\partial_{\varphi}{\widehat{\phi}}-\partial_{\alpha}K_{10}({\varphi})[{\widehat{\alpha}}]-K_{20}({\varphi}){\widehat{y}}-K_{11}^{\top}({\varphi}){\widehat{{\mathtt{w}}}}\\\
\omega\cdot\partial_{\varphi}{\widehat{y}}+\partial_{\alpha}\partial_{\phi}K_{00}({\varphi})[{\widehat{\alpha}}]\\\
{\mathcal{L}}_{\omega}^{<}{\widehat{{\mathtt{w}}}}-J_{\angle}\left(\partial_{\alpha}K_{01}({\varphi})[{\widehat{\alpha}}]+K_{11}({\varphi}){\widehat{y}}\right)\end{pmatrix}$
(6.17)
obtained by neglecting in (6.9) the terms $\partial_{\phi}K_{10}$,
$\partial_{\phi\phi}K_{00}$, $\partial_{\phi}K_{00}$, $\partial_{\phi}K_{01}$
(they vanish at an exact solution by (6.8)) and the small remainders
${\mathcal{R}}_{\omega}$, ${\mathcal{R}}_{\omega}^{\perp}$ appearing in
(6.12).
As in Section 6 of [7] we have the following result, where we denote
$\|(\phi,y,{\mathtt{w}},\alpha)\|_{s}^{k_{0},\upsilon}:=\max\big{\\{}\|(\phi,y,{\mathtt{w}})\|_{s}^{k_{0},\upsilon},\left|\alpha\right|^{k_{0},\upsilon}\big{\\}}$
(see [7, Proposition 6.5]):
###### Proposition 6.3.
Assume (6.1) (with $\mu=\mu({{\mathtt{b}}})+\sigma$) and (AI). Then, for all
$(\omega,\gamma)\in{\mathtt{\Lambda}}_{o}$, for any anti-reversible traveling
wave variation $g=(g_{1},g_{2},g_{3})$, there exists a unique solution
${\mathbb{D}}^{-1}g:=({\widehat{\phi}},{\widehat{y}},{\widehat{{\mathtt{w}}}},{\widehat{\alpha}})$
of
${\mathbb{D}}({\widehat{\phi}},{\widehat{y}},{\widehat{{\mathtt{w}}}},{\widehat{\alpha}})=g$
where $({\widehat{\phi}},{\widehat{y}},{\widehat{{\mathtt{w}}}})$ is a
reversible traveling wave variation. Moreover, for any $s_{0}\leq s\leq
S-\mu({\mathtt{b}})-\sigma$,
$\|{\mathbb{D}}^{-1}g\|_{s}^{k_{0},\upsilon}\lesssim_{S}\upsilon^{-1}\big{(}\|g\|_{s+\sigma}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\mu({{\mathtt{b}}})+\sigma}^{k_{0},\upsilon}\|g\|_{s_{0}+\sigma}^{k_{0},\upsilon}\big{)}$.
Finally we conclude that the operator
${\bf T}_{0}:={\bf
T}_{0}(i_{0}):=(D{\widetilde{G}}_{\delta})({\varphi},0,0)\circ{\mathbb{D}}^{-1}\circ(DG_{\delta})({\varphi},0,0)^{-1}$
(6.18)
is an almost approximate right inverse for ${\rm
d}_{i,\alpha}{\mathcal{F}}(i_{0})$, where
${\widetilde{G}}_{\delta}(\phi,y,{\mathtt{w}},\alpha):=\left(G_{\delta}(\phi,y,{\mathtt{w}}),\alpha\right)$
is the identity on the $\alpha$-component. Arguing exactly as in Theorem 6.6
in [7] we deduce the following.
###### Theorem 6.4.
(Almost approximate inverse) Assume (AI). Then there is
$\overline{\sigma}:=\overline{\sigma}(\tau,\nu,k_{0})>0$ such that, if (6.1)
holds with $\mu=\mu({\mathtt{b}})+\overline{\sigma}$, then, for all
$(\omega,\gamma)\in{\mathtt{\Lambda}}_{o}$ and for any anti-reversible
traveling wave variation $g:=(g_{1},g_{2},g_{3})$, the operator ${\bf T}_{0}$
defined in (6.18) satisfies, for all $s_{0}\leq s\leq
S-\mu({\mathtt{b}})-\overline{\sigma}$,
$\|{\bf
T}_{0}g\|_{s}^{k_{0},\upsilon}\lesssim_{S}\upsilon^{-1}\big{(}\|g\|_{s+\overline{\sigma}}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})+\overline{\sigma}}^{k_{0},\upsilon}\|g\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\big{)}\,.$
(6.19)
Moreover, the first three components of ${\bf T}_{0}g$ form a reversible
traveling wave variation. Finally, ${\bf T}_{0}$ is an almost approximate
right inverse of ${\rm d}_{i,\alpha}{\mathcal{F}}(i_{0})$, namely
${\rm d}_{i,\alpha}{\mathcal{F}}(i_{0})\circ{\bf T}_{0}-{\rm
Id}={\mathcal{P}}(i_{0})+{\mathcal{P}}_{\omega}(i_{0})+{\mathcal{P}}_{\omega}^{\perp}(i_{0})\,,$
where, for any traveling wave variation $g$, for all $s_{0}\leq s\leq
S-\mu({\mathtt{b}})-\overline{\sigma}$,
$\displaystyle\|{\mathcal{P}}g\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S}\upsilon^{-1}\Big{(}\|{\mathcal{F}}(i_{0},\alpha_{0})\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\|g\|_{s+\overline{\sigma}}^{k_{0},\upsilon}$
(6.20)
$\displaystyle\qquad+\,\big{(}\|{\mathcal{F}}(i_{0},\alpha_{0})\|_{s+\overline{\sigma}}^{k_{0},\upsilon}+\|{\mathcal{F}}(i_{0},\alpha_{0})\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})+\overline{\sigma}}^{k_{0},\upsilon}\big{)}\|g\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\Big{)}\,,$
(6.21) $\displaystyle\|{\mathcal{P}}_{\omega}g\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S}\varepsilon\upsilon^{-4}N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}\big{(}\|g\|_{s+\overline{\sigma}}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})+\overline{\sigma}}^{k_{0},\upsilon}\|g\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\big{)}\,,$
(6.22)
$\displaystyle\|{\mathcal{P}}_{\omega}^{\perp}g\|_{s_{0}}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S,b}\upsilon^{-1}K_{\mathtt{n}}^{-b}\left(\|g\|_{s_{0}+\overline{\sigma}+b}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s_{0}+\mu({\mathtt{b}})+b+\overline{\sigma}}^{k_{0},\upsilon}\|g\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\right)\,,\quad\forall\,b>0\,,$
(6.23) $\displaystyle\|{\mathcal{P}}_{\omega}^{\perp}g\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S}\upsilon^{-1}\big{(}\|g\|_{s+\overline{\sigma}}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})+\overline{\sigma}}^{k_{0},\upsilon}\|g\|_{s_{0}+\overline{\sigma}}^{k_{0},\upsilon}\big{)}\,.$
(6.24)
## 7 The linearized operator in the normal subspace
We now write an explicit expression of the linear operator
${\mathcal{L}}_{\omega}$ defined in (6.10). As in Lemma 7.1 in [7], since the
diffeomorphism $G_{\delta}$ in (6.5) is just a translation along the infinite
dimensional normal variable $w$, we have the following structural result.
###### Lemma 7.1.
The Hamiltonian operator ${\mathcal{L}}_{\omega}$ defined in (6.10), acting on
the normal subspace $\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$, has the
form
${\mathcal{L}}_{\omega}=\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}({\mathcal{L}}-\varepsilon
JR)|_{\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}}\,,$ (7.1)
where:
1\. ${\mathcal{L}}$ is the Hamiltonian operator
${\mathcal{L}}:=\omega\cdot\partial_{\varphi}-J\partial_{u}\nabla_{u}{\mathcal{H}}(T_{\delta}(\varphi))\,,$
(7.2)
where ${\mathcal{H}}$ is the water waves Hamiltonian in the Wahlén variables
defined in (2.9), evaluated at the reversible traveling wave
$T_{\delta}(\phi):=\varepsilon A(i_{\delta}(\phi))=\varepsilon
A\left(\theta_{0}(\phi),I_{\delta}(\phi),w_{0}(\phi)\right)=\varepsilon
v^{\intercal}\left(\theta_{0}(\phi),I_{\delta}(\phi)\right)+\varepsilon
w_{0}(\phi)\,,$ (7.3)
the torus
$i_{\delta}({\varphi}):=(\theta_{0}({\varphi}),I_{\delta}({\varphi}),w_{0}({\varphi}))$
is defined in Lemma 6.2 and $A(\theta,I,w)$, $v^{\intercal}(\theta,I)$ in (2.2);
2\. $R(\phi)$ has the “finite rank” form
$R(\phi)[h]=\sum_{j=1}^{\nu}\left(h,g_{j}\right)_{L^{2}}\chi_{j}\,,\quad\forall\,h\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}\,,$
(7.4)
for functions
$g_{j},\chi_{j}\in\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$ which
satisfy, for some $\sigma:=\sigma(\tau,\nu,k_{0})>0$, for all
$j=1,\ldots,\nu$, for all $s\geq s_{0}$,
$\displaystyle\left\|g_{j}\right\|_{s}^{k_{0},\upsilon}+\left\|\chi_{j}\right\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{s}1+\left\|{\mathfrak{I}}_{\delta}\right\|_{s+\sigma}^{k_{0},\upsilon}\,,$
(7.5) $\displaystyle\left\|{\rm
d}_{i}g_{j}[\widehat{\imath}]\right\|_{s}+\left\|{\rm
d}_{i}\chi_{j}[\widehat{\imath}]\right\|_{s}$
$\displaystyle\lesssim_{s}\left\|\widehat{\imath}\right\|_{s+\sigma}+\left\|\widehat{\imath}\right\|_{s_{0}+\sigma}\left\|{\mathfrak{I}}_{\delta}\right\|_{s+\sigma}\,.$
The operator ${\mathcal{L}}_{\omega}$ is reversible and momentum preserving.
In order to compute $dX$ we use the “shape derivative” formula, see e.g. [25],
$G^{\prime}(\eta)[{\widehat{\eta}}]\psi:=\lim_{\epsilon\rightarrow
0}\tfrac{1}{\epsilon}\big{(}G(\eta+\epsilon{\widehat{\eta}})\psi-G(\eta)\psi\big{)}=-G(\eta)(B{\widehat{\eta}})-\partial_{x}(V{\widehat{\eta}})\,,$
(7.6)
where
$B(\eta,\psi):=\frac{G(\eta)\psi+\eta_{x}\psi_{x}}{1+\eta_{x}^{2}}\,,\quad
V(\eta,\psi):=\psi_{x}-B(\eta,\psi)\eta_{x}\,.$ (7.7)
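As a consistency check (not part of the original argument), one can verify that (7.6)-(7.7) reproduce the well-known first-order Taylor expansion of the Dirichlet-Neumann operator at the flat surface in infinite depth, where $G(0)=|D|$:

```latex
% At \eta = 0, {\mathtt h} = \infty one has G(0)\psi = |D|\psi, so by (7.7)
% B(0,\psi) = |D|\psi and V(0,\psi) = \psi_x, and (7.6) gives
G'(0)[{\widehat{\eta}}]\psi
  = -G(0)\big({\widehat{\eta}}\,|D|\psi\big) - \partial_x\big({\widehat{\eta}}\,\psi_x\big)
  = -\,|D|\,{\widehat{\eta}}\,|D|\psi \;-\; \partial_x\,{\widehat{\eta}}\,\partial_x\psi\,,
```

in agreement with the Craig-Sulem expansion $G(\eta)=|D|-|D|\eta|D|-\partial_{x}\eta\partial_{x}+O(\eta^{2})$ (here $|D|\eta|D|$ denotes composition with multiplication by $\eta$).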
Then, recalling (2.9), (2.7), (1.6) and (7.6) the operator ${\mathcal{L}}$ in
(7.2) is given by
$\displaystyle{\mathcal{L}}=\omega\cdot\partial_{\varphi}$
$\displaystyle+\begin{pmatrix}\partial_{x}{\widetilde{V}}+G(\eta)B&-G(\eta)\\\
g+B{\widetilde{V}}_{x}+BG(\eta)B&{\widetilde{V}}\partial_{x}-BG(\eta)\end{pmatrix}$
(7.8)
$\displaystyle+\frac{\gamma}{2}\begin{pmatrix}-G(\eta)\partial_{x}^{-1}&0\\\
\partial_{x}^{-1}G(\eta)B-BG(\eta)\partial_{x}^{-1}-\frac{\gamma}{2}\partial_{x}^{-1}G(\eta)\partial_{x}^{-1}&-\partial_{x}^{-1}G(\eta)\end{pmatrix}\,,$
where
${\widetilde{V}}:=V-\gamma\eta\,,$ (7.9)
and the functions $B:=B(\eta,\psi)$, $V:=V(\eta,\psi)$ in (7.8)-(7.9) are
evaluated at the reversible traveling wave
$(\eta,\psi):=WT_{\delta}({\varphi})$ where $T_{\delta}({\varphi})$ is defined
in (7.3).
Notation. In (7.8) and hereafter the function $B$ is identified with the corresponding multiplication operator $h\mapsto Bh$ and, where there are no parentheses, composition of operators is understood. For example $BG(\eta)B$ means $B\circ G(\eta)\circ B$.
###### Remark 7.2.
We consider the operator ${\mathcal{L}}$ in (7.8) acting on (a dense subspace
of) the whole $L^{2}({\mathbb{T}})\times L^{2}({\mathbb{T}})$. In particular
we extend the operator $\partial_{x}^{-1}$ to act on the whole
$L^{2}({\mathbb{T}})$ as in (3.22).
The following algebraic properties are a direct consequence of the reversibility and space-invariance properties of the water waves equations explained in Section 2 and of the fact that the approximate solution $(\eta,\zeta)=T_{\delta}({\varphi})$ is a reversible traveling wave (cf. Lemma 7.3 in [7]).
###### Lemma 7.3.
The functions $(\eta,\zeta)=T_{\delta}({\varphi})$ and $B,{\widetilde{V}}$
defined in (7.7), (7.9) are quasi-periodic traveling waves. The functions
$(\eta,\zeta)=T_{\delta}({\varphi})$ are $({\rm even}({\varphi},x),{\rm
odd}({\varphi},x))$, $B$ is ${\rm odd}({\varphi},x)$ and ${\widetilde{V}}$ is
${\rm even}({\varphi},x)$. The Hamiltonian operator ${\mathcal{L}}$ is
reversible and momentum preserving.
For the sequel we will always assume the following ansatz (satisfied by the
approximate solutions obtained along the nonlinear Nash-Moser iteration of
Section 9): for some constants $\mu_{0}:=\mu_{0}(\tau,\nu)>0$,
$\upsilon\in(0,1)$ (cf. Lemma 6.2)
$\left\|{\mathfrak{I}}_{0}\right\|_{s_{0}+\mu_{0}}^{k_{0},\upsilon}\,,\
\left\|{\mathfrak{I}}_{\delta}\right\|_{s_{0}+\mu_{0}}^{k_{0},\upsilon}\leq
1\,.$ (7.10)
In order to estimate the variation of the eigenvalues with respect to the approximate invariant torus, we also need to estimate the variation with respect to the torus $i({\varphi})$ in another low norm $\left\|\,\cdot\,\right\|_{s_{1}}$ for all Sobolev indices $s_{1}$ such that
$s_{1}+\sigma_{0}\leq s_{0}+\mu_{0}\,,\quad\text{ for some }\
\sigma_{0}:=\sigma_{0}(\tau,\nu)>0\,.$ (7.11)
Thus, by (7.10), we have
$\left\|{\mathfrak{I}}_{0}\right\|_{s_{1}+\sigma_{0}}^{k_{0},\upsilon}$,
$\left\|{\mathfrak{I}}_{\delta}\right\|_{s_{1}+\sigma_{0}}^{k_{0},\upsilon}\leq
1$. The constants $\mu_{0}$ and $\sigma_{0}$ represent the _loss of
derivatives_ accumulated along the reduction procedure of the next sections.
What is important is that they are independent of the Sobolev index $s$. In
the following sections we shall denote by $\sigma:=\sigma(\tau,\nu,k_{0})>0$,
$\sigma_{N}({\mathtt{q}}_{0}):=\sigma_{N}({\mathtt{q}}_{0},\tau,\nu,k_{0})$,
$\sigma_{M}:=\sigma_{M}(k_{0},\tau,\nu)>0$, $\aleph_{M}(\alpha)$ constants
(which possibly increase from lemma to lemma) representing losses of
derivatives along the finitely many steps of the reduction procedure.
###### Remark 7.4.
In the next sections $\mu_{0}:=\mu_{0}(\tau,\nu,M,\alpha)>0$ will also depend on the indices $M,\alpha$, whose maximal values will be fixed depending only on
$\tau$ and $\nu$ (and $k_{0}$ which is however considered an absolute constant
along the paper). In particular $M$ is fixed in (8.5), whereas the maximal
value of $\alpha$ depends on $M$, as explained in Remark 7.16.
###### Remark 7.5.
Starting from Section 7.2, we introduce in the estimates upper bounds on the
regularity $s\geq s_{0}$. We shall control the terms in Sobolev spaces $H^{s}$
with $s_{0}\leq s\leq S-\sigma$, where $\sigma$ denotes a loss of derivatives
of the finitely many steps of the reduction (possibly increasing along the
steps), whereas $S>s_{0}+k_{0}$ is any finite Sobolev index. The index $S$ has
to be taken finite in view of Lemma 7.7 (see also Appendix A). The largest
regularity index $S$ will be fixed in (9.3). In particular, it is compatible
with the condition (7.11), namely $s_{1}+\sigma_{0}\leq s_{0}+\mu_{0}<S$.
As a consequence of the Moser composition Lemma 3.2 and of (6.3), the Sobolev norm of
the function $u=T_{\delta}({\varphi})$ defined in (7.3) satisfies for all
$s\geq s_{0}$
$\left\|u\right\|_{s}^{k_{0},\upsilon}=\left\|\eta\right\|_{s}^{k_{0},\upsilon}+\left\|\zeta\right\|_{s}^{k_{0},\upsilon}\leq\varepsilon
C(s)\big{(}1+\left\|{\mathfrak{I}}_{0}\right\|_{s}^{k_{0},\upsilon}\big{)}$
(7.12)
(the map $A$ defined in (2.2) is smooth). Similarly, using (6.4),
$\left\|\Delta_{12}u\right\|_{s_{1}}\lesssim_{s_{1}}\varepsilon\left\|i_{2}-i_{1}\right\|_{s_{1}}\,,\quad\text{
where }\quad\Delta_{12}u:=u(i_{2})-u(i_{1})\,.$
We finally recall that ${\mathfrak{I}}_{0}={\mathfrak{I}}_{0}(\omega,\gamma)$
is defined for all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ and that
the functions $B,{\widetilde{V}}$ and $c$ appearing in ${\mathcal{L}}$ in
(7.8) are ${\mathcal{C}}^{\infty}$ in $({\varphi},x)$, as
$u=(\eta,\zeta)=T_{\delta}({\varphi})$ is.
In Sections 7.1-7.6 we are going to make several transformations, whose aim is
to conjugate the operator ${\cal L}$ in (7.8) to a constant coefficients
Fourier multiplier, up to a pseudo-differential operator of order $-1/2$ plus
a remainder that satisfies tame estimates, see ${\cal L}_{8}$ in (7.138).
Finally, in Section 7.7 we shall conjugate the restricted operator ${\cal
L}_{\omega}$ in (7.1).
### 7.1 Linearized good unknown of Alinhac
The first step is to conjugate the linear operator ${\mathcal{L}}$ in (7.8) by
the symplectic (Definition 3.18) multiplication matrix operator
${\mathcal{Z}}:=\left(\begin{matrix}{\rm Id}&0\\\ B&{\rm
Id}\end{matrix}\right)\ ,\qquad{\mathcal{Z}}^{-1}=\left(\begin{matrix}{\rm
Id}&0\\\ -B&{\rm Id}\end{matrix}\right)\,,$
obtaining
$\displaystyle{\mathcal{L}}_{1}$
$\displaystyle:={\mathcal{Z}}^{-1}{\mathcal{L}}{\mathcal{Z}}=\omega\cdot\partial_{\varphi}+\begin{pmatrix}\partial_{x}{\widetilde{V}}&-G(\eta)\\\
a&{\widetilde{V}}\partial_{x}\end{pmatrix}-\frac{\gamma}{2}\begin{pmatrix}G(\eta)\partial_{x}^{-1}&0\\\
\frac{\gamma}{2}\partial_{x}^{-1}G(\eta)\partial_{x}^{-1}&\partial_{x}^{-1}G(\eta)\end{pmatrix}\,,$
(7.13)
where $a$ is the function
$a:=g+{\widetilde{V}}B_{x}+\omega\cdot\partial_{\varphi}B\,.$ (7.14)
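As a minimal sketch of where the last summand of (7.14) comes from: since ${\mathcal{Z}}$ depends on ${\varphi}$ through $B$, the transport part of ${\mathcal{L}}$ alone transforms as

```latex
{\mathcal{Z}}^{-1}\,(\omega\cdot\partial_{\varphi})\,{\mathcal{Z}}
  \;=\; \omega\cdot\partial_{\varphi}
  \;+\; \begin{pmatrix} 0 & 0 \\ \omega\cdot\partial_{\varphi}B & 0 \end{pmatrix}\,,
```

which produces the term $\omega\cdot\partial_{\varphi}B$ in the $(2,1)$-entry; a direct computation on the first matrix in (7.8) shows that its conjugation contributes the remaining terms $g+{\widetilde{V}}B_{x}$, the terms quadratic in $B$ cancelling out.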
The matrix ${\mathcal{Z}}$ amounts to introducing, as in [25] and [9, 2], a linearized version of the “good unknown of Alinhac”.
###### Lemma 7.6.
The maps ${\mathcal{Z}}^{\pm 1}-{\rm Id}$ are ${\mathcal{D}}^{k_{0}}$-tame
with tame constants satisfying, for some $\sigma:=\sigma(\tau,\nu,k_{0})>0$,
for all $s\geq s_{0}$,
${\mathfrak{M}}_{{\mathcal{Z}}^{\pm 1}-{\rm Id}}(s)\,,\
{\mathfrak{M}}_{({\mathcal{Z}}^{\pm 1}-{\rm
Id})^{*}}(s)\lesssim_{s}\varepsilon\big{(}1+\left\|{\mathfrak{I}}_{0}\right\|_{s+\sigma}^{k_{0},\upsilon}\big{)}\,.$
The function $a$ in (7.14) is a quasi-periodic traveling wave ${\rm
even}({\varphi},x)$. There is $\sigma:=\sigma(\tau,\nu,k_{0})>0$ such that,
for all $s\geq s_{0}$,
$\left\|a-g\right\|_{s}^{k_{0},\upsilon}+\|{\widetilde{V}}\|_{s}^{k_{0},\upsilon}+\left\|B\right\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon\big{(}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon}\big{)}\,.$
(7.15)
Moreover, for any $s_{1}$ as in (7.11),
$\displaystyle\left\|\Delta_{12}a\right\|_{s_{1}}+\|\Delta_{12}{\widetilde{V}}\|_{s_{1}}+\left\|\Delta_{12}B\right\|_{s_{1}}\lesssim_{s_{1}}\varepsilon\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,$
(7.16) $\displaystyle\|\Delta_{12}({\mathcal{Z}}^{\pm
1})h\|_{s_{1}},\|\Delta_{12}({\mathcal{Z}}^{\pm
1})^{*}h\|_{s_{1}}\lesssim_{s_{1}}\varepsilon\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\left\|h\right\|_{s_{1}}\,.$
(7.17)
The operator ${\mathcal{L}}_{1}$ is Hamiltonian, reversible and momentum
preserving.
###### Proof.
The estimates follow from the expressions of $B,\widetilde{V},a$ in (7.7), (7.9), (7.14), the composition estimates of Lemma 3.2, (3.7) and the bounds for the Dirichlet-Neumann operator in Lemma 3.10. Since $B$ is an ${\rm odd}(\varphi,x)$ quasi-periodic traveling wave, the matrix operator ${\mathcal{Z}}$ is reversibility and momentum preserving (Definitions 3.19 and 3.22). ∎
### 7.2 Almost-straightening of the first order transport operator
We now write the operator ${\mathcal{L}}_{1}$ in (7.13) as
${\mathcal{L}}_{1}=\omega\cdot\partial_{\varphi}+\begin{pmatrix}\partial_{x}{\widetilde{V}}&0\\\
0&{\widetilde{V}}\partial_{x}\end{pmatrix}+\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{x}^{-1}&-G(0)\\\
a-\left(\frac{\gamma}{2}\right)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}&-\frac{\gamma}{2}\partial_{x}^{-1}G(0)\end{pmatrix}+{\bf
R}_{1}\,,$ (7.18)
where, using the decomposition (3.32) of the Dirichlet-Neumann operator,
${\bf
R}_{1}:=-\begin{pmatrix}\frac{\gamma}{2}{\mathcal{R}}_{G}(\eta)\partial_{x}^{-1}&{\mathcal{R}}_{G}(\eta)\\\
\left(\frac{\gamma}{2}\right)^{2}\partial_{x}^{-1}{\mathcal{R}}_{G}(\eta)\partial_{x}^{-1}&\frac{\gamma}{2}\partial_{x}^{-1}{\mathcal{R}}_{G}(\eta)\end{pmatrix}$
(7.19)
is a small remainder in ${\rm OP}S^{-\infty}$. The aim of this section is to
conjugate the variable coefficients quasi-periodic transport operator
${\mathcal{L}}_{\rm
TR}:=\omega\cdot\partial_{\varphi}+\begin{pmatrix}\partial_{x}{\widetilde{V}}&0\\\
0&{\widetilde{V}}\partial_{x}\end{pmatrix}$ (7.20)
to a constant coefficients transport operator
$\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\,\partial_{y}$,
up to an exponentially small remainder, see (7.28)-(7.29), where
${\mathtt{n}}\in{\mathbb{N}}_{0}$ and the scale
$(N_{{\mathtt{n}}})_{{\mathtt{n}}\in{\mathbb{N}}_{0}}$ is defined, for
$N_{0}>1$, by
$N_{{\mathtt{n}}}:=N_{0}^{\chi^{{\mathtt{n}}}}\,,\quad\chi=3/2\,,\quad
N_{-1}:=1\,.$ (7.21)
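For illustration (the value of $N_{0}$ is not fixed here), the scale (7.21) satisfies $N_{{\mathtt{n}}+1}=N_{{\mathtt{n}}}^{3/2}$ and grows super-exponentially; e.g. with $N_{0}=10$,

```latex
N_{0}=10\,,\quad N_{1}=10^{3/2}\approx 31.6\,,\quad
N_{2}=10^{(3/2)^{2}}=10^{2.25}\approx 178\,,\quad
N_{3}=10^{3.375}\approx 2371\,.
```

This growth is what makes a remainder of size $O(N_{\overline{\mathtt{n}}-1}^{-{\mathtt{a}}})$, as in (7.29) below, exponentially small in $\overline{\mathtt{n}}$.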
Such a small remainder is left because we assume only finitely many non-resonance conditions, see (7.26). This enables us to deduce Lemma 7.9, and then to formulate the non-resonance condition (5.15), stated in terms of the “final” function ${\mathtt{m}}_{1}^{\infty}(\omega,\gamma)$, which implies (7.26) at any step of the nonlinear Nash-Moser iteration of Section 9.
In the next lemma we conjugate ${\mathcal{L}}_{\rm TR}$ by a _symplectic_
(Definition 3.18) transformation
${\mathcal{E}}:=\begin{pmatrix}(1+\beta_{x}({\varphi},x))\circ{\mathcal{B}}&0\\\
0&{\mathcal{B}}\end{pmatrix}\,,\quad{\mathcal{E}}^{-1}:=\begin{pmatrix}{\mathcal{B}}^{-1}\circ(1+\beta_{x}({\varphi},x))^{-1}&0\\\
0&{\mathcal{B}}^{-1}\end{pmatrix}$ (7.22)
where the composition operator
$({\mathcal{B}}u)({\varphi},x):=u\left({\varphi},x+\beta({\varphi},x)\right)$
(7.23)
is induced by a ${\varphi}$-dependent diffeomorphism $y=x+\beta({\varphi},x)$
of the torus ${\mathbb{T}}_{x}$, for some small quasi-periodic traveling wave
$\beta:{\mathbb{T}}_{\varphi}^{\nu}\times{\mathbb{T}}_{x}\to{\mathbb{R}}$,
${\rm odd}({\varphi},x)$. Let
${\mathtt{b}}:=[{\mathtt{a}}]+2\in{\mathbb{N}}\,,\quad{\mathtt{a}}:=3(\tau_{1}+1)\geq
1\,,\quad\tau_{1}:=k_{0}+(k_{0}+1)\tau\,.$ (7.24)
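For concreteness (illustrative values only, not fixed by the paper), with $k_{0}=3$ and $\tau=4$ the constants in (7.24) are

```latex
\tau_{1}=k_{0}+(k_{0}+1)\tau=3+4\cdot 4=19\,,\qquad
{\mathtt{a}}=3(\tau_{1}+1)=60\,,\qquad
{\mathtt{b}}=[{\mathtt{a}}]+2=62\,.
```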
###### Lemma 7.7.
(Almost-Straightening of the transport operator) There exists
$\tau_{2}(\tau,\nu)>\tau_{1}(\tau,\nu)+1+{\mathtt{a}}$ such that, for all
$S>s_{0}+k_{0}$, there are $N_{0}:=N_{0}(S,{\mathtt{b}})\in{\mathbb{N}}$ and
$\updelta:=\updelta(S,{\mathtt{b}})\in(0,1)$ such that, if
$N_{0}^{\tau_{2}}\varepsilon\upsilon^{-1}<\updelta$ the following holds true.
For any $\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$:
1\. There exist a constant
${\mathtt{m}}_{1,\overline{\mathtt{n}}}:={\mathtt{m}}_{1,\overline{\mathtt{n}}}(\omega,\gamma)\in{\mathbb{R}}$,
where ${\mathtt{m}}_{1,0}=0$, defined for any
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, and a
quasi-periodic traveling wave
$\beta({\varphi},x):=\beta_{\overline{\mathtt{n}}}({\varphi},x)$, ${\rm
odd}({\varphi},x)$, satisfying, for some $\sigma=\sigma(\tau,\nu,k_{0})>0$,
the estimates
$|{\mathtt{m}}_{1,\overline{\mathtt{n}}}|^{k_{0},\upsilon}\lesssim\varepsilon\,,\quad\|\beta\|_{s}^{k_{0},\upsilon}\lesssim_{S}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma+{\mathtt{b}}}^{k_{0},\upsilon})\,,\quad\forall\,s_{0}\leq
s\leq S\,,$ (7.25)
independently of $\overline{\mathtt{n}}$;
2\. For any $(\omega,\gamma)$ in
$\displaystyle{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$
$\displaystyle:={\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}({\mathtt{m}}_{1,\overline{\mathtt{n}}},2\upsilon,\tau)$
(7.26)
$\displaystyle:=\Big{\\{}(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]\,:\,|(\omega-{\mathtt{m}}_{1,\overline{\mathtt{n}}}\vec{\jmath})\cdot\ell|\geq
2\upsilon\braket{\ell}^{-\tau}\,\ \forall\,0<|\ell|\leq
N_{\overline{\mathtt{n}}}\Big{\\}}$
the operator ${\mathcal{L}}_{\rm TR}$ in (7.20) is conjugated to
${\mathcal{E}}^{-1}{\mathcal{L}}_{\rm
TR}{\mathcal{E}}=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\,\partial_{y}+{\bf
P}_{2}^{\perp}\,,$ (7.27)
where
${\bf
P}_{2}^{\perp}:=\begin{pmatrix}\partial_{y}p_{\overline{\mathtt{n}}}&0\\\
0&p_{\overline{\mathtt{n}}}\partial_{y}\end{pmatrix}\,,$ (7.28)
and the real, quasi-periodic traveling wave function
$p_{\overline{\mathtt{n}}}({\varphi},y)$, ${\rm even}({\varphi},y)$,
satisfies, for some $\sigma=\sigma(\tau,\nu,k_{0})>0$ and for any $s_{0}\leq
s\leq S$,
$\|p_{\overline{\mathtt{n}}}\|_{s}^{k_{0},\upsilon}\lesssim_{s,{\mathtt{b}}}\varepsilon\,N_{\overline{\mathtt{n}}-1}^{-{\mathtt{a}}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma+{\mathtt{b}}}^{k_{0},\upsilon})\,;$
(7.29)
3\. The operators ${\mathcal{E}}^{\pm 1}$ are ${\mathcal{D}}^{k_{0}}$-$(k_{0}+1)$-tame, and the operators ${\mathcal{E}}^{\pm 1}-{\rm Id}$, $({\mathcal{E}}^{\pm 1}-{\rm Id})^{*}$ are ${\mathcal{D}}^{k_{0}}$-$(k_{0}+2)$-tame, with tame constants satisfying, for
some $\sigma:=\sigma(\tau,\nu,k_{0})>0$ and for all $s_{0}\leq s\leq
S-\sigma$,
$\displaystyle{\mathfrak{M}}_{{\mathcal{E}}^{\pm
1}}(s)\lesssim_{S}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon}\,,\
{\mathfrak{M}}_{{\mathcal{E}}^{\pm 1}-{\rm
Id}}(s)+{\mathfrak{M}}_{\left({\mathcal{E}}^{\pm 1}-{\rm
Id}\right)^{*}}(s)\lesssim_{S}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma+{\mathtt{b}}}^{k_{0},\upsilon})\,.$
(7.30)
4\. Furthermore, for any $s_{1}$ as in (7.11),
$\displaystyle|\Delta_{12}{\mathtt{m}}_{1,\overline{\mathtt{n}}}|\lesssim\varepsilon\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,\quad\|\Delta_{12}\beta\|_{s_{1}}\lesssim_{s_{1}}\varepsilon\upsilon^{-1}\|i_{1}-i_{2}\|_{s_{1}+\sigma+{\mathtt{b}}}\,,$
(7.31)
$\displaystyle\|\Delta_{12}({\mathcal{A}})h\|_{s_{1}}\lesssim_{s_{1}}\varepsilon\upsilon^{-1}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma+{\mathtt{b}}}\left\|h\right\|_{s_{1}+\sigma+{\mathtt{b}}}\,,\quad{\mathcal{A}}\in\\{{\mathcal{E}}^{\pm
1},({\mathcal{E}}^{\pm 1})^{*}\\}\,.$ (7.32)
###### Proof.
We apply Theorem A.2 and Corollary A.4 to the transport operator
$X_{0}=\omega\cdot\partial_{\varphi}+\widetilde{V}\partial_{x}$, which has the
form (A.1) with $p_{0}=\widetilde{V}$. By (7.15) and (7.10), the smallness
conditions (A.3) and (A.10) hold for
$N_{0}^{\tau_{2}}\varepsilon\upsilon^{-1}$ sufficiently small. Therefore there
exist a constant ${\mathtt{m}}_{1,\overline{\mathtt{n}}}\in{\mathbb{R}}$ and a
quasi-periodic traveling wave
$\beta({\varphi},x):=\beta_{\overline{\mathtt{n}}}({\varphi},x)$, ${\rm
odd}({\varphi},x)$, such that, for any $(\omega,\gamma)$ in
${\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)\subseteq{\mathtt{\Lambda}}_{\overline{\mathtt{n}}+1}^{\upsilon,\rm
T}\subseteq{\mathtt{\Lambda}}_{\overline{\mathtt{n}}}^{\upsilon,\rm T}$ (see
Corollary A.3) we have
${\mathcal{B}}_{\overline{\mathtt{n}}}^{-1}(\omega\cdot\partial_{\varphi}+\widetilde{V}\partial_{x}){\mathcal{B}}_{\overline{\mathtt{n}}}=\omega\cdot\partial_{\varphi}+({\mathtt{m}}_{1,\overline{\mathtt{n}}}+p_{\overline{\mathtt{n}}}({\varphi},y))\partial_{y}$
where the function $p_{\overline{\mathtt{n}}}$ satisfies (7.29) by (A.5) and
(7.15). The estimates (A.6), (A.15), (7.15) imply (7.25) and (7.30). The
conjugated operator of ${\mathcal{L}}_{\rm TR}$ in (7.20) is
${\mathcal{E}}^{-1}{\mathcal{L}}_{\rm
TR}{\mathcal{E}}=\omega\cdot\partial_{\varphi}+\begin{pmatrix}A_{1}&0\\\
0&({\mathtt{m}}_{1,\overline{\mathtt{n}}}+p_{\overline{\mathtt{n}}})\partial_{y}\end{pmatrix}$
where
$\omega\cdot\partial_{\varphi}+A_{1}={\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\big{(}\omega\cdot\partial_{\varphi}+\partial_{x}{\widetilde{V}}\big{)}(1+\beta_{x}){\mathcal{B}}$.
Since ${\mathcal{L}}_{\rm TR}$ is Hamiltonian (Definition 3.18), and the map
${\mathcal{E}}$ is symplectic, we have that
${\mathcal{E}}^{-1}{\mathcal{L}}_{\rm TR}{\mathcal{E}}$ is Hamiltonian as
well. In particular
$A_{1}=-(({\mathtt{m}}_{1,\overline{\mathtt{n}}}+p_{\overline{\mathtt{n}}})\partial_{y})^{*}={\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{y}+\partial_{y}p_{\overline{\mathtt{n}}}$.
This proves (7.27)-(7.28). The estimates (7.31)-(7.32) follow by
(A.11)-(A.12), the bound for
$\|\Delta_{12}\beta_{\overline{\mathtt{n}}}\|_{s_{1}}$ in Corollary A.4 and
(7.16)-(7.17). ∎
###### Remark 7.8.
Actually, for any
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$
in (7.26), Theorem A.2 and Corollary A.3 would imply also the conjugation of
${\mathcal{L}}_{\rm TR}$ to the operator
$\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}+1}\partial_{y}$
for some ${\mathtt{m}}_{1,\overline{\mathtt{n}}+1}\in{\mathbb{R}}$, up to a
remainder ${\bf P}_{2}^{\perp}=O(\varepsilon
N_{\overline{\mathtt{n}}}^{-{\mathtt{a}}})$. For simplicity we stated only the conjugation in (7.27). We shall use the non-resonance condition in (7.26) also later, in Sections 7.5 and 7.6.
The next lemma is needed in order to prove the inclusion of the Cantor sets
associated to two nearby approximate solutions.
###### Lemma 7.9.
Let $i_{1},i_{2}$ be close enough and $0<2\upsilon-\rho<2\upsilon<1$. Then
$\varepsilon
C(s_{1})N_{\overline{\mathtt{n}}}^{\tau+1}\|i_{1}-i_{2}\|_{s_{1}+\sigma}\leq\rho\quad\Rightarrow\quad{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)(i_{1})\subseteq{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon-\rho,\tau)(i_{2})\,.$
###### Proof.
For any
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)(i_{1})$,
using also (7.31), we have, for any
$\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}$, $|\ell|\leq
N_{\overline{\mathtt{n}}}$,
$\displaystyle|(\omega-{\mathtt{m}}_{1,\overline{\mathtt{n}}}(i_{2})\vec{\jmath})\cdot\ell|$
$\displaystyle\geq|(\omega-{\mathtt{m}}_{1,\overline{\mathtt{n}}}(i_{1})\vec{\jmath})\cdot\ell|-C|\Delta_{12}{\mathtt{m}}_{1,\overline{\mathtt{n}}}||\ell|$
$\displaystyle\geq\frac{2\upsilon}{\braket{\ell}^{\tau}}-C(s_{1})\varepsilon N_{\overline{\mathtt{n}}}\|i_{1}-i_{2}\|_{s_{1}+\sigma}\geq\frac{2\upsilon-\rho}{\braket{\ell}^{\tau}}\,,$
where the last inequality uses the hypothesis together with $\braket{\ell}^{\tau}\leq N_{\overline{\mathtt{n}}}^{\tau}$ for $0<|\ell|\leq N_{\overline{\mathtt{n}}}$.
We conclude that
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon-\rho,\tau)(i_{2})$.
∎
We now conjugate the whole operator ${\mathcal{L}}_{1}$ in (7.18)-(7.19) by
the operator ${\mathcal{E}}$ in (7.22).
We first compute the conjugation of the matrix
$\displaystyle{\mathcal{E}}^{-1}$
$\displaystyle\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{x}^{-1}&-G(0)\\\
a-\left(\frac{\gamma}{2}\right)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}&-\frac{\gamma}{2}\partial_{x}^{-1}G(0)\end{pmatrix}{\mathcal{E}}$
$\displaystyle=\begin{pmatrix}-\frac{\gamma}{2}{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}G(0)\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}}&-{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}G(0){\mathcal{B}}\\\
{\mathcal{B}}^{-1}\big{(}a-\left(\frac{\gamma}{2}\right)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}(1+\beta_{x}){\mathcal{B}}&-\frac{\gamma}{2}{\mathcal{B}}^{-1}\partial_{x}^{-1}G(0){\mathcal{B}}\end{pmatrix}\,.$
The multiplication operator by $a({\varphi},x)$ is transformed into the multiplication operator by the function
${\mathcal{B}}^{-1}a(1+\beta_{x}){\mathcal{B}}={\mathcal{B}}^{-1}\big{(}a(1+\beta_{x})\big{)}\,.$
(7.33)
We write the Dirichlet-Neumann operator $G(0)$ in (1.9) as
$G(0)=G(0,{\mathtt{h}})=\partial_{x}{\mathcal{H}}T({\mathtt{h}})\,,$ (7.34)
where ${\mathcal{H}}$ is the Hilbert transform defined in (3.21) and
$T({\mathtt{h}}):=\begin{cases}\tanh({\mathtt{h}}|D|)={\rm Id}+{\rm
Op}(r_{\mathtt{h}})&\text{ if }{\mathtt{h}}<+\infty\,,\qquad
r_{{\mathtt{h}}}(\xi):=-\frac{2}{1+e^{2{\mathtt{h}}|\xi|\chi(\xi)}}\in
S^{-\infty}\,,\\\ {\rm Id}&\text{ if }{\mathtt{h}}=\infty\,.\end{cases}$
(7.35)
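That $r_{\mathtt{h}}\in S^{-\infty}$ in (7.35) follows from the elementary identity

```latex
\tanh({\mathtt{h}}|\xi|)
  = \frac{e^{2{\mathtt{h}}|\xi|}-1}{e^{2{\mathtt{h}}|\xi|}+1}
  = 1-\frac{2}{1+e^{2{\mathtt{h}}|\xi|}}\,,
```

so that, assuming (as is standard) that the cut-off $\chi(\xi)=1$ for $|\xi|$ large, the symbol $r_{\mathtt{h}}(\xi)$ and all its derivatives decay faster than any negative power of $|\xi|$.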
We have the conjugation formula (see formula (7.42) in [2])
${\mathcal{B}}^{-1}G(0){\mathcal{B}}=\left\\{{\mathcal{B}}^{-1}(1+\beta_{x})\right\\}G(0)+{\mathcal{R}}_{1}\,,$
(7.36)
where
${\mathcal{R}}_{1}:=\left\\{{\mathcal{B}}^{-1}(1+\beta_{x})\right\\}\partial_{y}\big{(}{\mathcal{H}}\left({\mathcal{B}}^{-1}{\rm Op}(r_{\mathtt{h}}){\mathcal{B}}-{\rm Op}(r_{\mathtt{h}})\right)+\left({\mathcal{B}}^{-1}{\mathcal{H}}{\mathcal{B}}-{\mathcal{H}}\right)({\mathcal{B}}^{-1}T({\mathtt{h}}){\mathcal{B}})\big{)}\,.$
The operator ${\mathcal{R}}_{1}$ is in ${\rm OP}S^{-\infty}$ because both
${\mathcal{B}}^{-1}{\rm Op}(r_{\mathtt{h}}){\mathcal{B}}-{\rm
Op}(r_{\mathtt{h}})$ and
${\mathcal{B}}^{-1}{\mathcal{H}}{\mathcal{B}}-{\mathcal{H}}$ are in ${\rm
OP}S^{-\infty}$ and there is ${\sigma}>0$ such that, for any
$m\in{\mathbb{N}}$, $\alpha\in{\mathbb{N}}_{0}$ and $s\geq s_{0}$,
$\displaystyle\|{\mathcal{B}}^{-1}{\mathcal{H}}{\mathcal{B}}-{\mathcal{H}}\|_{-m,s,\alpha}^{k_{0},\upsilon}\lesssim_{m,s,\alpha,k_{0}}\|\beta\|_{s+m+\alpha+\sigma}^{k_{0},\upsilon}\,,$
(7.37) $\displaystyle\|{\mathcal{B}}^{-1}{\rm
Op}(r_{\mathtt{h}}){\mathcal{B}}-{\rm
Op}(r_{\mathtt{h}})\|_{-m,s,\alpha}^{k_{0},\upsilon}\lesssim_{m,s,\alpha,k_{0}}\|\beta\|_{s+m+\alpha+\sigma}^{k_{0},\upsilon}\,.$
The first estimate is given in Lemmata 2.36 and 2.32 in [9], whereas the second one follows from the fact that $r_{\mathtt{h}}\in S^{-\infty}$ (see (7.35)), Lemma 2.18 in [2] and Lemmata 2.34 and 2.32 in [9]. Therefore by (7.36) we obtain
(7.36) we obtain
${\mathcal{B}}^{-1}(1+\beta_{x})^{-1}G(0){\mathcal{B}}=\\{{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\\}{\mathcal{B}}^{-1}G(0){\mathcal{B}}=G(0)+{\mathcal{R}}_{B}\,,$
(7.38)
where
${\mathcal{R}}_{B}:=\\{{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\\}\,{\mathcal{R}}_{1}\,.$
(7.39)
Next we transform $G(0)\partial_{x}^{-1}$. By (7.34) and using the identities
${\mathcal{H}}\partial_{x}\partial_{x}^{-1}={\mathcal{H}}$ and
${\mathcal{H}}T({\mathtt{h}})=\partial_{y}^{-1}G(0)$ on the periodic
functions, we have that
$\displaystyle{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}G(0)\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}}=G(0)\partial_{y}^{-1}+{\mathcal{R}}_{A}$
(7.40)
$\displaystyle{\mathcal{B}}^{-1}\partial_{x}^{-1}G(0){\mathcal{B}}=\partial_{y}^{-1}G(0)+{\mathcal{R}}_{D}\,,$
where
$\displaystyle{\mathcal{R}}_{D}$
$\displaystyle=({\mathcal{B}}^{-1}{\mathcal{H}}{\mathcal{B}}-{\mathcal{H}})({\mathcal{B}}^{-1}T({\mathtt{h}}){\mathcal{B}})+{\mathcal{H}}\big{(}{\mathcal{B}}^{-1}{\rm
Op}(r_{\mathtt{h}}){\mathcal{B}}-{\rm Op}(r_{\mathtt{h}})\big{)}\,,$ (7.41)
$\displaystyle{\mathcal{R}}_{A}$
$\displaystyle=\\{{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\\}\big{[}{\mathcal{H}}T({\mathtt{h}}),\\{{\mathcal{B}}^{-1}(1+\beta_{x})\\}-1\big{]}$
$\displaystyle\ \
+\\{{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\\}{\mathcal{R}}_{D}\\{{\mathcal{B}}^{-1}(1+\beta_{x})\\}\,.$
The operator ${\mathcal{R}}_{D}$ is in ${\rm OP}S^{-\infty}$ by (7.37),
(7.35). Also ${\mathcal{R}}_{A}$ is in ${\rm OP}S^{-\infty}$ using that, by
Lemma 2.35 of [9] and (7.35), there is ${\sigma}>0$ such that, for any
$m\in{\mathbb{N}}$, $s\geq s_{0}$, and $\alpha\in{\mathbb{N}}_{0}$,
$\|[{\mathcal{H}}T({\mathtt{h}}),{\widetilde{a}}]\|_{-m,s,\alpha}^{k_{0},\upsilon}\lesssim_{m,s,\alpha,k_{0}}\|{\widetilde{a}}\|_{s+m+\alpha+\sigma}^{k_{0},\upsilon}\,.$
(7.42)
Finally we conjugate $\partial_{x}^{-1}G(0)\partial_{x}^{-1}$. By the Egorov
Proposition 3.9 applied to $\partial_{x}^{-1}$, we have that, for any
$N\in{\mathbb{N}}$,
${\mathcal{B}}^{-1}\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}}={\mathcal{B}}^{-1}\partial_{x}^{-1}{\mathcal{B}}\,\\{{\mathcal{B}}^{-1}(1+\beta_{x})\\}=\partial_{y}^{-1}+P^{(1)}_{-2,N}(\varphi,x,D)+{\mathtt{R}}_{N}\,,$
(7.43)
where $P^{(1)}_{-2,N}(\varphi,x,D)\in{\rm OP}S^{-2}$ is given by
$P^{(1)}_{-2,N}(\varphi,x,D):=\big{[}\\{{\mathcal{B}}^{-1}(1+\beta_{x})^{-1}\\},\partial_{y}^{-1}\big{]}\\{{\mathcal{B}}^{-1}(1+\beta_{x})\\}+\sum_{j=1}^{N}p_{-1-j}\partial_{y}^{-1-j}\\{{\mathcal{B}}^{-1}(1+\beta_{x})\\}$
with functions $p_{-1-j}(\lambda;\varphi,y)$, $j=0,\ldots,N$, satisfying
(3.30) and ${\mathtt{R}}_{N}$ is a regularizing operator satisfying the
estimate (3.31). So, using (7.40) and (7.43), we obtain
$\displaystyle{\mathcal{B}}^{-1}\partial_{x}^{-1}G(0)\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}}$
$\displaystyle=({\mathcal{B}}^{-1}\partial_{x}^{-1}G(0){\mathcal{B}})({\mathcal{B}}^{-1}\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}})$
(7.44)
$\displaystyle=\partial_{y}^{-1}G(0)\partial_{y}^{-1}+P_{-2,N}^{(2)}+{\mathtt{R}}_{2,N}$
where
$\displaystyle P_{-2,N}^{(2)}$
$\displaystyle:=\partial_{y}^{-1}G(0)P^{(1)}_{-2,N}(\varphi,x,D)\in{\rm
OP}S^{-2}$ (7.45)
and ${\mathtt{R}}_{2,N}$ is the regularizing operator
${\mathtt{R}}_{2,N}:={\mathcal{R}}_{D}({\mathcal{B}}^{-1}\partial_{x}^{-1}(1+\beta_{x}){\mathcal{B}})+G(0)\partial_{y}^{-1}{\mathtt{R}}_{N}\,.$
(7.46)
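The factorizations in the first identity of (7.43) and in (7.44) use only that conjugation by ${\mathcal{B}}$ is multiplicative; we record the elementary identity for the reader's convenience:

```latex
{\mathcal{B}}^{-1}(XY){\mathcal{B}}
=({\mathcal{B}}^{-1}X{\mathcal{B}})\,({\mathcal{B}}^{-1}Y{\mathcal{B}})\,,
```

applied with $X=\partial_{x}^{-1}$, $Y=(1+\beta_{x})$ in (7.43), and with $X=\partial_{x}^{-1}G(0)$, $Y=\partial_{x}^{-1}(1+\beta_{x})$ in (7.44).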
In conclusion, by Lemma 7.7, (7.33), (7.38), (7.40) and (7.44) we obtain the
following lemma, which summarizes the main result of this section.
###### Lemma 7.10.
Let $N\in{\mathbb{N}}$. For any $\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$ and
for all
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$,
the operator ${\mathcal{L}}_{1}$ in (7.18) is conjugated to the real,
Hamiltonian, reversible and momentum preserving operator
$\displaystyle{\mathcal{L}}_{2}:={\mathcal{E}}^{-1}{\mathcal{L}}_{1}{\mathcal{E}}=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{y}\,+$
$\displaystyle\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{y}^{-1}&-G(0)\\\
a_{1}-\left(\frac{\gamma}{2}\right)^{2}\partial_{y}^{-1}G(0)\partial_{y}^{-1}&-\frac{\gamma}{2}\partial_{y}^{-1}G(0)\end{pmatrix}$
(7.47) $\displaystyle+\begin{pmatrix}0&0\\\
-\left(\frac{\gamma}{2}\right)^{2}P_{-2,N}^{(2)}&0\end{pmatrix}+{\bf
R}_{2}^{\Psi}+{\bf T}_{2,N}+{\bf P}_{2}^{\perp}\,,$
defined for any
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, where:
1\. The constant
${\mathtt{m}}_{1,\overline{\mathtt{n}}}={\mathtt{m}}_{1,\overline{\mathtt{n}}}(\omega,\gamma)\in{\mathbb{R}}$
satisfies
$|{\mathtt{m}}_{1,\overline{\mathtt{n}}}|^{k_{0},\upsilon}\lesssim\varepsilon$,
independently of $\overline{\mathtt{n}}$;
2\. The real quasi-periodic traveling wave
$a_{1}:={\mathcal{B}}^{-1}\big{(}a(1+\beta_{x})\big{)}$, ${\rm
even}({\varphi},x)$, satisfies, for some $\sigma:=\sigma(k_{0},\tau,\nu)>0$
and for all $s_{0}\leq s\leq S-\sigma$,
$\|a_{1}-g\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,;$
(7.48)
3\. The operator $P_{-2,N}^{(2)}$ is a pseudodifferential operator in ${\rm
OP}S^{-2}$, reversibility and momentum preserving, and satisfies, for some
$\sigma_{N}:=\sigma_{N}(\tau,\nu,N)>0$, for finitely many
$0\leq\alpha\leq\alpha(M)$ (fixed in Remark 7.16) and for all $s_{0}\leq s\leq
S-\sigma_{N}-\alpha$,
$\|P_{-2,N}^{(2)}\|_{-2,s,\alpha}^{k_{0},\upsilon}\lesssim_{s,N,\alpha}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}+\alpha}^{k_{0},\upsilon})\,;$
(7.49)
4\. For any ${\mathtt{q}}\in{\mathbb{N}}^{\nu}_{0}$ with
$|{\mathtt{q}}|\leq{\mathtt{q}}_{0}$, $n_{1},n_{2}\in{\mathbb{N}}_{0}$ with
$n_{1}+n_{2}\leq N-(k_{0}+{\mathtt{q}}_{0})+2$, the operator $\langle
D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}({\bf
R}_{2}^{\Psi}(\varphi)+{\bf T}_{2,N}({\varphi}))\langle D\rangle^{n_{2}}$ is
${\mathcal{D}}^{k_{0}}$-tame with a tame constant satisfying, for some
$\sigma_{N}({\mathtt{q}}_{0}):=\sigma_{N}({\mathtt{q}}_{0},k_{0},\tau,\nu)>0$
and for any $s_{0}\leq s\leq S-\sigma_{N}({\mathtt{q}}_{0})$,
${\mathfrak{M}}_{\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}({\bf
R}_{2}^{\Psi}(\varphi)+{\bf T}_{2,N}({\varphi}))\langle
D\rangle^{n_{2}}}(s)\lesssim_{S,N,{\mathtt{q}}_{0}}\varepsilon\upsilon^{-1}\big{(}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}({\mathtt{q}}_{0})}^{k_{0},\upsilon}\big{)}\,;$
(7.50)
5\. The operator ${\bf P}_{2}^{\perp}$ is defined in (7.28) and the function
$p_{\overline{\mathtt{n}}}$ satisfies (7.29);
6\. Furthermore, for any $s_{1}$ as in (7.11), finitely many
$0\leq\alpha\leq\alpha(M)$, ${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with
$\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$, and
$n_{1},n_{2}\in{\mathbb{N}}_{0}$, with $n_{1}+n_{2}\leq N-{\mathtt{q}}_{0}+1$,
$\displaystyle|\Delta_{12}{\mathtt{m}}_{1,\overline{\mathtt{n}}}|\lesssim_{s_{1}}\varepsilon\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,\
\|\Delta_{12}a_{1}\|_{s_{1}}\lesssim\varepsilon\upsilon^{-1}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,$
(7.51)
$\displaystyle\|\Delta_{12}P_{-2,N}^{(2)}\|_{-2,s_{1},\alpha}\lesssim_{s_{1},N,\alpha}\varepsilon\upsilon^{-1}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}+\alpha}\,,$
(7.52)
$\displaystyle\left\|\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}({\bf
R}_{2}^{\Psi}(\varphi)+{\bf
T}_{2,N}({\varphi}))\braket{D}^{n_{2}}\right\|_{{\mathcal{L}}(H^{s_{1}})}\lesssim_{s_{1},N,{\mathtt{q}}_{0}}\varepsilon\upsilon^{-1}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}({\mathtt{q}}_{0})}\,.$
(7.53)
###### Proof.
Item 1 follows by Lemma 7.7. The function $a_{1}$ satisfies (7.48) by (7.14),
(3.7), (7.15), (7.30), (7.25). The estimate (7.49) follows by (7.45),
Proposition 3.9 and Lemmata 3.5, 3.6, 3.8, 7.7. The operators ${\bf
R}_{2}^{\Psi}$ and ${\bf T}_{2,N}$ in (7.47) are
${\bf
R}_{2}^{\Psi}:=-\begin{pmatrix}\frac{\gamma}{2}{\mathcal{R}}_{A}&{\mathcal{R}}_{B}\\\
0&\frac{\gamma}{2}{\mathcal{R}}_{D}\end{pmatrix}+{\mathcal{E}}^{-1}{\bf
R}_{1}{\mathcal{E}}\,,\qquad{\bf
T}_{2,N}:=-\left(\frac{\gamma}{2}\right)^{2}\begin{pmatrix}0&0\\\
{\mathtt{R}}_{2,N}&0\end{pmatrix}\,,$
where ${\mathcal{R}}_{B}$, ${\mathcal{R}}_{A}$, ${\mathcal{R}}_{D}$ are
defined in (7.39), (7.41), and ${\bf R}_{1}$, ${\mathtt{R}}_{2,N}$ in (7.19),
(7.46). Thus the estimate (7.50) holds by Lemmata 3.12, 3.13, 7.7, (7.37),
(7.42), Proposition 3.9, Lemma 3.10, (7.25) and Lemmata 2.34, 2.32 in [9]. The
estimates (7.51)-(7.53) are proved similarly. ∎
### 7.3 Symmetrization of the order $1/2$
The goal of this section is to symmetrize the order $1/2$ of the quasi-
periodic Hamiltonian operator ${\mathcal{L}}_{2}$ in (7.47). From now on we
neglect the contribution of the operator ${\bf P}_{2}^{\perp}$, which will be
conjugated in Section 7.7; for simplicity of notation we still denote the
resulting operator by ${\mathcal{L}}_{2}$.
Step 1: We first conjugate the operator ${\mathcal{L}}_{2}$ in (7.47), where
we relabel the space variable $y\rightsquigarrow x$, by the real, symplectic,
reversibility preserving and momentum preserving transformation
${\widetilde{{\mathcal{M}}}}:=\begin{pmatrix}\Lambda&0\\\
0&\Lambda^{-1}\end{pmatrix}\,,\quad{\widetilde{{\mathcal{M}}}}^{-1}:=\begin{pmatrix}\Lambda^{-1}&0\\\
0&\Lambda\end{pmatrix}\,,$ (7.54)
where $\Lambda\in{\rm OP}S^{\frac{1}{4}}$ is the Fourier multiplier
$\Lambda:=\tfrac{1}{\sqrt{g}}\pi_{0}+M(D)\,,\quad\text{with
inverse}\quad\Lambda^{-1}:=\sqrt{g}\pi_{0}+M(D)^{-1}\in{\rm
OP}S^{-\frac{1}{4}}\,,$ (7.55)
with $\pi_{0}$ defined in (3.23) and (cf. (2.16))
$M(D):=G(0)^{\frac{1}{4}}\big{(}g-(\tfrac{\gamma}{2})^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}^{-\frac{1}{4}}\in{\rm
OP}S^{\frac{1}{4}}\,.$ (7.56)
We have the identities $\Lambda^{-1}G(0)\Lambda^{-1}=\omega(\gamma,D)$ and
$\Lambda\big{(}g-\big{(}\tfrac{\gamma}{2}\big{)}^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}\Lambda=\Lambda^{-1}G(0)\Lambda^{-1}+\pi_{0}=\omega(\gamma,D)+\pi_{0}\,,$
(7.57)
where $\omega(\gamma,D)\in{\rm OP}S^{\frac{1}{2}}$ is defined in (2.18).
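Since ${\widetilde{{\mathcal{M}}}}$ is a diagonal matrix of Fourier multipliers, its conjugation of a $2\times 2$ matrix of operators acts entrywise; this is the elementary identity behind the computation (7.58) below:

```latex
\begin{pmatrix}\Lambda^{-1}&0\\ 0&\Lambda\end{pmatrix}
\begin{pmatrix}A&B\\ C&D\end{pmatrix}
\begin{pmatrix}\Lambda&0\\ 0&\Lambda^{-1}\end{pmatrix}
=
\begin{pmatrix}\Lambda^{-1}A\Lambda & \Lambda^{-1}B\Lambda^{-1}\\
\Lambda C\Lambda & \Lambda D\Lambda^{-1}\end{pmatrix}.
```

In particular the diagonal entries $-\frac{\gamma}{2}G(0)\partial_{x}^{-1}$, being Fourier multipliers, commute with $\Lambda$ and are left unchanged.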
By (7.47) we compute
$\displaystyle{\mathcal{L}}_{3}:={\widetilde{{\mathcal{M}}}}^{-1}{\mathcal{L}}_{2}{\widetilde{{\mathcal{M}}}}=$
$\displaystyle\
\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{x}^{-1}&-\Lambda^{-1}G(0)\Lambda^{-1}\\\
\Lambda\big{(}a_{1}-(\frac{\gamma}{2})^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}\Lambda&-\frac{\gamma}{2}G(0)\partial_{x}^{-1}\end{pmatrix}$
(7.58) $\displaystyle+\begin{pmatrix}0&0\\\ -(\frac{\gamma}{2})^{2}\Lambda
P_{-2,N}^{(2)}\Lambda&0\end{pmatrix}+{\widetilde{{\mathcal{M}}}}^{-1}{\bf
R}_{2}^{\Psi}{\widetilde{{\mathcal{M}}}}+{\widetilde{{\mathcal{M}}}}^{-1}{\bf
T}_{2,N}{\widetilde{{\mathcal{M}}}}\,.$
By (7.57), (7.55) and (7.56), we get
$\displaystyle\Lambda\big{(}a_{1}-(\tfrac{\gamma}{2})^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}\Lambda=\Lambda\big{(}g-(\tfrac{\gamma}{2})^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}\big{)}\Lambda+\Lambda(a_{1}-g)\Lambda$
(7.59)
$\displaystyle=\omega(\gamma,D)+(a_{1}-g)\Lambda^{2}+[\Lambda,a_{1}]\Lambda+\pi_{0}$
$\displaystyle=\big{(}1+\tfrac{a_{1}-g}{g}\big{)}\omega(\gamma,D)+\tfrac{a_{1}-g}{g}\big{(}g\Lambda^{2}-\omega(\gamma,D)\big{)}+[\Lambda,a_{1}]\Lambda+\pi_{0}$
$\displaystyle=a_{2}^{2}\omega(\gamma,D)+\tfrac{a_{1}-g}{g}(\tfrac{\gamma}{2})^{2}M(D)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}+[\Lambda,a_{1}]\Lambda+\pi_{0}+\tfrac{a_{1}-g}{g}\pi_{0}$
where $a_{2}$ is the real quasi-periodic traveling wave function (with $a_{1}$
defined in Lemma 7.10)
$a_{2}:=\sqrt{\tfrac{a_{1}}{g}}=\sqrt{1+\tfrac{a_{1}-g}{g}}\,,\quad{\rm
even}({\varphi},x)\,\,.$ (7.60)
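The last step in (7.59) uses the following identity, a direct consequence of (7.57), since $\Lambda$, $\partial_{x}^{-1}G(0)\partial_{x}^{-1}$ and $\pi_{0}$ are commuting Fourier multipliers and $\Lambda$ coincides with $M(D)$ on the nonzero modes:

```latex
g\Lambda^{2}-\omega(\gamma,D)
=\Big(\tfrac{\gamma}{2}\Big)^{2}\Lambda\,\partial_{x}^{-1}G(0)\,\partial_{x}^{-1}\Lambda+\pi_{0}
=\Big(\tfrac{\gamma}{2}\Big)^{2}M(D)^{2}\,\partial_{x}^{-1}G(0)\,\partial_{x}^{-1}+\pi_{0}\,,
```

which produces both the term $\tfrac{a_{1}-g}{g}(\tfrac{\gamma}{2})^{2}M(D)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}$ and the extra projector $\tfrac{a_{1}-g}{g}\pi_{0}$ in the last line of (7.59).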
Therefore, by (7.58), (7.57), (7.59) we obtain
$\displaystyle{\mathcal{L}}_{3}$
$\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{x}^{-1}&-\omega(\gamma,D)\\\
a_{2}\omega(\gamma,D)a_{2}&-\frac{\gamma}{2}\partial_{x}^{-1}G(0)\end{pmatrix}+\begin{pmatrix}0&0\\\
\pi_{0}&0\end{pmatrix}$ (7.61) $\displaystyle+\begin{pmatrix}0&0\\\
C_{3}&0\end{pmatrix}+{\bf R}_{3}^{\Psi}+{\bf T}_{3,N}\,,$
where
$C_{3}:=a_{2}[a_{2},\omega(\gamma,D)]+\tfrac{a_{1}-g}{g}(\tfrac{\gamma}{2})^{2}M(D)^{2}\partial_{x}^{-1}G(0)\partial_{x}^{-1}+[\Lambda,a_{1}]\Lambda-(\tfrac{\gamma}{2})^{2}\Lambda
P_{-2,N}^{(2)}\Lambda$ (7.62)
is in ${\rm OP}S^{-\frac{1}{2}}$ and
${\bf R}_{3}^{\Psi}:={\widetilde{{\mathcal{M}}}}^{-1}{\bf
R}_{2}^{\Psi}{\widetilde{{\mathcal{M}}}}+\begin{pmatrix}0&0\\\
(\tfrac{a_{1}}{g}-1)\pi_{0}&0\end{pmatrix}\,,\quad{\bf
T}_{3,N}:={\widetilde{{\mathcal{M}}}}^{-1}{\bf
T}_{2,N}{\widetilde{{\mathcal{M}}}}\,.$ (7.63)
The operator ${\mathcal{L}}_{3}$ in (7.61) is Hamiltonian, reversible and
momentum preserving.
Step 2: We now conjugate the operator ${\mathcal{L}}_{3}$ in (7.61) with the
symplectic matrix of multiplication operators
${\mathcal{Q}}:=\begin{pmatrix}q&0\\\ 0&q^{-1}\end{pmatrix}\
,\qquad{\mathcal{Q}}^{-1}:=\begin{pmatrix}q^{-1}&0\\\ 0&q\end{pmatrix}\,,$
where $q$ is a real function, close to $1$, to be determined, see (7.69). We
have that
$\displaystyle{\mathcal{L}}_{4}:={\mathcal{Q}}^{-1}{\mathcal{L}}_{3}{\mathcal{Q}}=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+\begin{pmatrix}A&B\\\
C&D\end{pmatrix}+{\mathcal{Q}}^{-1}({\bf R}_{3}^{\Psi}+{\bf
T}_{3,N}){\mathcal{Q}}\,,$ (7.64)
where (actually $D=-A^{*}$, see Definition 3.18)
$\displaystyle
A:=-\tfrac{\gamma}{2}q^{-1}G(0)\partial_{x}^{-1}q+{\mathtt{m}}_{1,\overline{\mathtt{n}}}q^{-1}q_{x}+q^{-1}(\omega\cdot\partial_{\varphi}q)\,,$
(7.65) $\displaystyle B:=-q^{-1}\omega(\gamma,D)q^{-1}\,,$ (7.66)
$\displaystyle C:=qa_{2}\omega(\gamma,D)a_{2}q+q\pi_{0}q+qC_{3}q\,,$ (7.67)
$\displaystyle
D:=-\tfrac{\gamma}{2}q\partial_{x}^{-1}G(0)q^{-1}-{\mathtt{m}}_{1,\overline{\mathtt{n}}}q^{-1}q_{x}-q^{-1}(\omega\cdot\partial_{\varphi}q)\,.$
(7.68)
We choose the function $q$ so that the coefficients of the highest order terms
of the off-diagonal self-adjoint operators $B$ and $C$ coincide, namely
$q^{-1}=qa_{2}$, defining the real quasi-periodic traveling wave, ${\rm
even}({\varphi},x)$,
$q({\varphi},x):=a_{2}({\varphi},x)^{-\frac{1}{2}}\,.$ (7.69)
Thus ${\mathcal{Q}}$ is reversibility and momentum preserving.
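A one-line check of (7.69): the matching condition for the highest order off-diagonal coefficients determines $q$ explicitly,

```latex
q^{-1}=q\,a_{2}
\;\Longleftrightarrow\;
q^{2}=a_{2}^{-1}
\;\Longrightarrow\;
q({\varphi},x)=a_{2}({\varphi},x)^{-\frac{1}{2}}\,,
```

taking the positive square root since $a_{2}$ is close to $1$. With this choice the principal parts of $B$ and $C$ in (7.66)-(7.67) both reduce, up to sign, to $a_{2}^{\frac{1}{2}}\omega(\gamma,D)\,a_{2}^{\frac{1}{2}}$, as displayed in (7.70).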
In view of (7.65)-(7.68) and (7.69) the operator ${\mathcal{L}}_{4}$ in (7.64)
becomes
$\displaystyle{\mathcal{L}}_{4}$
$\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+\begin{pmatrix}-\frac{\gamma}{2}G(0)\partial_{x}^{-1}&-a_{2}^{\frac{1}{2}}\omega(\gamma,D)a_{2}^{\frac{1}{2}}\\\
a_{2}^{\frac{1}{2}}\omega(\gamma,D)a_{2}^{\frac{1}{2}}&-\frac{\gamma}{2}\partial_{x}^{-1}G(0)\end{pmatrix}$
(7.70) $\displaystyle\ \ \ +\begin{pmatrix}0&0\\\
\pi_{0}&0\end{pmatrix}+\begin{pmatrix}a_{3}&0\\\
C_{4}&-a_{3}\end{pmatrix}+{\bf R}_{4}^{\Psi}+{\bf T}_{4,N}\,,$
where $a_{3}$ is the real quasi-periodic traveling wave function, ${\rm
odd}({\varphi},x)$,
$\displaystyle a_{3}$
$\displaystyle:={\mathtt{m}}_{1,\overline{\mathtt{n}}}q^{-1}q_{x}+q^{-1}(\omega\cdot\partial_{\varphi}q)\,,\quad
C_{4}:=qC_{3}q\in{\rm OP}S^{-\frac{1}{2}}\,,$ (7.71)
and ${\bf R}_{4}^{\Psi},{\bf T}_{4,N}$ are the smoothing remainders (recall
that $G(0)\partial_{x}^{-1}={\mathcal{H}}T({\mathtt{h}})$)
$\displaystyle{\bf
R}_{4}^{\Psi}:=\begin{pmatrix}-\frac{\gamma}{2}q^{-1}[{\mathcal{H}}T({\mathtt{h}}),q-1]&0\\\
q\pi_{0}q-\pi_{0}&-\frac{\gamma}{2}[q-1,{\mathcal{H}}T({\mathtt{h}})]q^{-1}\end{pmatrix}+{\mathcal{Q}}^{-1}{\bf
R}_{3}^{\Psi}{\mathcal{Q}}\in{\rm OP}S^{-\infty}\,,$ (7.72) $\displaystyle{\bf
T}_{4,N}:={\mathcal{Q}}^{-1}{\bf T}_{3,N}{\mathcal{Q}}\,.$
The operator ${\mathcal{L}}_{4}$ in (7.70) is Hamiltonian, reversible and
momentum preserving.
Step 3: We finally move in complex coordinates, conjugating the operator
${\mathcal{L}}_{4}$ in (7.70) via the transformation ${\mathcal{C}}$ defined
in (2.19). The main result of this section is the following lemma.
###### Lemma 7.11.
Let $N\in{\mathbb{N}}$, ${\mathtt{q}}_{0}\in{\mathbb{N}}_{0}$. We have that
$\displaystyle{\mathcal{L}}_{5}:=({\widetilde{{\mathcal{M}}}}{\mathcal{Q}}{\mathcal{C}})^{-1}{\mathcal{L}}_{2}{\widetilde{{\mathcal{M}}}}{\mathcal{Q}}{\mathcal{C}}$
(7.73)
$\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm i}\,a_{2}({\varphi},x){\bf{\Omega}}(\gamma,D)+{\rm i}\,{\bf{\Pi}}_{0}+a_{4}{\mathcal{H}}+{\bf R}_{5}^{(-\frac{1}{2},d)}+{\bf R}_{5}^{(0,o)}+{\bf T}_{5,N}\,,$
where:
1\. The real quasi-periodic traveling wave $a_{2}({\varphi},x)$ defined in
(7.60), ${\rm even}({\varphi},x)$, satisfies, for some
$\sigma=\sigma(k_{0},\tau,\nu)>0$ and for any $s_{0}\leq s\leq S-\sigma$,
$\|a_{2}-1\|_{s}^{k_{0},\upsilon}\lesssim\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,;$
(7.74)
2\. ${\bf{\Omega}}(\gamma,D)$ is the matrix of Fourier multipliers (see
(2.20), (2.21))
${\bf{\Omega}}(\gamma,D)=\begin{pmatrix}\Omega(\gamma,D)&0\\\
0&-\overline{\Omega(\gamma,D)}\end{pmatrix},\quad\Omega(\gamma,D)=\omega(\gamma,D)+{\rm
i}\,\frac{\gamma}{2}\partial_{x}^{-1}G(0)\,;$ (7.75)
3\. The operator
${\bf{\Pi}}_{0}:=\frac{1}{2}\begin{pmatrix}\pi_{0}&\pi_{0}\\\
-\pi_{0}&-\pi_{0}\end{pmatrix}\,.$
4\. The real quasi-periodic traveling wave
$a_{4}({\varphi},x):=\tfrac{\gamma}{2}(a_{2}({\varphi},x)-1)$, ${\rm
even}({\varphi},x)$, satisfies, for some $\sigma:=\sigma(k_{0},\tau,\nu)>0$
and for all $s_{0}\leq s\leq S-\sigma$,
$\|a_{4}\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,;$
(7.76)
5\. ${\bf R}_{5}^{(-\frac{1}{2},d)}\in{\rm OP}S^{-\frac{1}{2}}$ and ${\bf
R}_{5}^{(0,o)}\in{\rm OP}S^{0}$ are pseudodifferential operators of the form
$\displaystyle\footnotesize{\bf
R}_{5}^{(-\frac{1}{2},d)}:=\begin{pmatrix}r_{5}^{(d)}({\varphi},x,D)&0\\\
0&\overline{r_{5}^{(d)}({\varphi},x,D)}\end{pmatrix},\quad{\bf
R}_{5}^{(0,o)}:=\begin{pmatrix}0&r_{5}^{(o)}({\varphi},x,D)\\\
\overline{r_{5}^{(o)}({\varphi},x,D)}&0\end{pmatrix}\,,$
reversibility and momentum preserving, satisfying, for some
$\sigma_{N}:=\sigma(\tau,\nu,N)>0$, for finitely many
$0\leq\alpha\leq\alpha(M)$ (fixed in Remark 7.16), and for all $s_{0}\leq
s\leq S-\sigma_{N}-3\alpha$,
$\|{\bf
R}_{5}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s,\alpha}^{k_{0},\upsilon}+\|{\bf
R}_{5}^{(0,o)}\|_{0,s,\alpha}^{k_{0},\upsilon}\lesssim_{s,N,\alpha}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}+3\alpha}^{k_{0},\upsilon})\,;$
(7.77)
6\. For any ${\mathtt{q}}\in{\mathbb{N}}^{\nu}_{0}$ with
$|{\mathtt{q}}|\leq{\mathtt{q}}_{0}$, $n_{1},n_{2}\in{\mathbb{N}}_{0}$ with
$n_{1}+n_{2}\leq N-(k_{0}+{\mathtt{q}}_{0})+\frac{3}{2}$, the operator
$\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf
T}_{5,N}(\varphi)\langle D\rangle^{n_{2}}$ is ${\mathcal{D}}^{k_{0}}$-tame
with a tame constant satisfying, for some
$\sigma_{N}({\mathtt{q}}_{0}):=\sigma_{N}({\mathtt{q}}_{0},k_{0},\tau,\nu)>0$
and for any $s_{0}\leq s\leq S-\sigma_{N}({\mathtt{q}}_{0})$,
${\mathfrak{M}}_{\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf
T}_{5,N}(\varphi)\langle
D\rangle^{n_{2}}}(s)\lesssim_{S,N,{\mathtt{q}}_{0}}\varepsilon\upsilon^{-1}\big{(}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}({\mathtt{q}}_{0})}^{k_{0},\upsilon}\big{)}\,;$
(7.78)
7\. The operators ${\mathcal{Q}}^{\pm 1}$, ${\mathcal{Q}}^{\pm 1}-{\rm Id}$,
$({\mathcal{Q}}^{\pm 1}-{\rm Id})^{*}$ are ${\mathcal{D}}^{k_{0}}$-tame with
tame constants satisfying, for some $\sigma:=\sigma(\tau,\nu,k_{0})>0$ and for
all $s_{0}\leq s\leq S-\sigma$,
$\displaystyle{\mathfrak{M}}_{{\mathcal{Q}}^{\pm
1}}(s)\lesssim_{S}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon}\,,\ \
{\mathfrak{M}}_{{\mathcal{Q}}^{\pm 1}-{\rm
Id}}(s)+{\mathfrak{M}}_{\left({\mathcal{Q}}^{\pm 1}-{\rm
Id}\right)^{*}}(s)\lesssim_{S}\varepsilon\upsilon^{-1}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,.$
(7.79)
8\. Furthermore, for any $s_{1}$ as in (7.11), finitely many
$0\leq\alpha\leq\alpha(M)$, ${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with
$\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$, and
$n_{1},n_{2}\in{\mathbb{N}}_{0}$, with $n_{1}+n_{2}\leq
N-{\mathtt{q}}_{0}+\frac{1}{2}$,
$\displaystyle\|\Delta_{12}({\mathcal{A}})h\|_{s_{1}}\lesssim_{s_{1}}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\left\|h\right\|_{s_{1}+\sigma}\,,\quad{\mathcal{A}}\in\\{{\mathcal{Q}}^{\pm
1}=({\mathcal{Q}}^{\pm 1})^{*}\\}\,,$ (7.80)
$\displaystyle\|\Delta_{12}a_{2}\|_{s_{1}}\lesssim_{s_{1}}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,\
\|\Delta_{12}a_{4}\|_{s_{1}}\lesssim\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,$
(7.81) $\displaystyle\|\Delta_{12}{\bf
R}_{5}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s_{1},\alpha}+\|\Delta_{12}{\bf
R}_{5}^{(0,o)}\|_{0,s_{1},\alpha}\lesssim_{s_{1},N,\alpha}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}+2\alpha}\,,$
(7.82)
$\displaystyle\left\|\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}{\bf
T}_{5,N}({\varphi})\braket{D}^{n_{2}}\right\|_{{\mathcal{L}}(H^{s_{1}})}\lesssim_{s_{1},N,{\mathtt{q}}_{0}}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}({\mathtt{q}}_{0})}\,.$
(7.83)
The real operator ${\mathcal{L}}_{5}$ is Hamiltonian, reversible and momentum
preserving.
###### Proof.
By the expression of ${\mathcal{L}}_{4}$ in (7.70) and (3.17) we obtain that
${\cal L}_{5}$ has the form (7.73) with
$\displaystyle r_{5}^{(d)}$
$\displaystyle:=\tfrac{\gamma}{2}(a_{2}-1){\mathcal{H}}(T({\mathtt{h}})-1)+{\rm
i}\big{(}\tfrac{1}{2}C_{4}+a_{2}^{\frac{1}{2}}[\omega(\gamma,D),a_{2}^{\frac{1}{2}}]\big{)}\in{\rm
OP}S^{-\frac{1}{2}}\,,$ (7.84) $\displaystyle r_{5}^{(o)}$
$\displaystyle:=a_{3}+\tfrac{{\rm i}}{2}C_{4}\in{\rm OP}S^{0}$
(with $C_{4}$ given in (7.71)) and ${\bf T}_{5,N}:={\mathcal{C}}^{-1}({\bf
R}_{4}^{\Psi}+{\bf T}_{4,N}){\mathcal{C}}$. The function $q$ defined in
(7.69), with $a_{2}$ in (7.60), satisfies, by (7.48) and Lemma 3.2, for all
$s_{0}\leq s\leq S-\sigma$,
$\|q^{\pm
1}-1\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon{\upsilon^{-1}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,.$
(7.85)
The estimates (7.74) and (7.76) follow by (7.60) and (7.85). The estimate
(7.77) follows by (7.84), (7.74), (7.69), (7.62), (7.60), (7.48), (7.49),
(7.71), (7.55), (2.16) and Lemma 7.10. The estimate (7.78) follows by (7.72),
(7.63), (7.42), (7.50), (7.48), Lemmata 3.12, 3.13 and (7.85). The estimates
(7.79) follow by Lemma 3.13 and (7.85). The estimates (7.80)-(7.83) are
proved similarly. ∎
### 7.4 Symmetrization up to smoothing remainders
The goal of this section is to transform the operator ${\mathcal{L}}_{5}$ in
(7.73) into the operator ${\mathcal{L}}_{6}$ in (7.88), which is block diagonal
up to a regularizing remainder. From this step on we no longer preserve the
Hamiltonian structure, but only the reversible and momentum preserving one
(which is sufficient for proving Theorem 5.1).
###### Lemma 7.12.
Fix ${\mathfrak{m}},N\in{\mathbb{N}}$, ${\mathtt{q}}_{0}\in{\mathbb{N}}_{0}$.
There exist real, reversibility and momentum preserving operator matrices
$\\{{\bf X}_{k}\\}_{k=1}^{{\mathfrak{m}}}$ of the form
${\bf X}_{k}:=\begin{pmatrix}0&\chi_{k}({\varphi},x,D)\\\
\overline{\chi_{k}({\varphi},x,D)}&0\end{pmatrix},\qquad\chi_{k}({\varphi},x,\xi)\in
S^{-\frac{k}{2}}\,,$ (7.86)
such that, conjugating the operator ${\mathcal{L}}_{5}$ in (7.73) via the map
${\bf{\Phi}}_{{\mathfrak{m}}}:=e^{{\bf X}_{1}}\circ\cdots\circ e^{{\bf
X}_{{\mathfrak{m}}}}\,,$ (7.87)
we obtain the real, reversible and momentum preserving operator
$\displaystyle{\mathcal{L}}_{6}:={\mathcal{L}}_{6}^{({\mathfrak{m}})}:={\bf{\Phi}}_{{\mathfrak{m}}}^{-1}\,{\mathcal{L}}_{5}\,{\bf{\Phi}}_{{\mathfrak{m}}}$
(7.88)
$\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm i}\,a_{2}{\bf{\Omega}}(\gamma,D)+{\rm i}{\bf{\Pi}}_{0}+a_{4}{\mathcal{H}}+{\bf R}_{6}^{(-\frac{1}{2},d)}+{\bf R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}+{\bf T}_{6,N}\,,$
where:
1\. ${\bf R}_{6}^{(-\frac{1}{2},d)}$ is a block-diagonal operator
$\displaystyle{\bf R}_{6}^{(-\frac{1}{2},d)}:={\bf
R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},d)}$
$\displaystyle:=\begin{pmatrix}r_{6}^{(d)}({\varphi},x,D)&0\\\
0&\overline{r_{6}^{(d)}({\varphi},x,D)}\end{pmatrix}\in{\rm
OP}S^{-\frac{1}{2}}\,,$
${\bf R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}$ is a smoothing off-diagonal
remainder
$\displaystyle{\bf R}_{6}^{(-\frac{{\mathtt{m}}}{2},o)}:={\bf
R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}$
$\displaystyle:=\begin{pmatrix}0&r_{6}^{(o)}({\varphi},x,D)\\\
\overline{r_{6}^{(o)}({\varphi},x,D)}&0\end{pmatrix}\in{\rm
OP}S^{-\frac{{\mathfrak{m}}}{2}}\,,$ (7.89)
satisfying for finitely many $0\leq\alpha\leq\alpha({\mathfrak{m}})$ (fixed in
Remark 7.16), for some $\sigma_{N}:=\sigma_{N}(k_{0},\tau,\nu,N)>0$,
$\aleph_{{\mathfrak{m}}}(\alpha)>0$ and for all $s_{0}\leq s\leq
S-\sigma_{N}-\aleph_{{\mathfrak{m}}}(\alpha)$,
$\displaystyle\|{\bf
R}_{6}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s,\alpha}^{k_{0},\upsilon}+\|{\bf
R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}\|_{-\frac{{\mathfrak{m}}}{2},s,\alpha}^{k_{0},\upsilon}\lesssim_{s,{\mathfrak{m}},N,\alpha}\varepsilon{\upsilon^{-1}}\big{(}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}+\aleph_{{\mathfrak{m}}}(\alpha)}^{k_{0},\upsilon}\big{)}\,.$
(7.90)
Both ${\bf R}_{6}^{(-\frac{1}{2},d)}$ and ${\bf
R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}$ are reversible and momentum
preserving;
2\. For any ${\mathtt{q}}\in{\mathbb{N}}^{\nu}_{0}$ with
$|{\mathtt{q}}|\leq{\mathtt{q}}_{0}$, $n_{1},n_{2}\in{\mathbb{N}}_{0}$ with
$n_{1}+n_{2}\leq N-(k_{0}+{\mathtt{q}}_{0})+\frac{3}{2}$, the operator
$\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf
T}_{6,N}(\varphi)\langle D\rangle^{n_{2}}$ is ${\mathcal{D}}^{k_{0}}$-tame
with a tame constant satisfying, for some
$\sigma_{N}({\mathtt{q}}_{0}):=\sigma_{N}(k_{0},\tau,\nu,{\mathtt{q}}_{0})$,
for any $s_{0}\leq s\leq
S-\sigma_{N}({\mathtt{q}}_{0})-\aleph_{{\mathfrak{m}}}(0)$,
${\mathfrak{M}}_{\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf
T}_{6,N}(\varphi)\langle
D\rangle^{n_{2}}}(s)\lesssim_{S,{\mathfrak{m}},N,{\mathtt{q}}_{0}}\varepsilon{\upsilon^{-1}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}({\mathtt{q}}_{0})+\aleph_{{\mathfrak{m}}}(0)}^{k_{0},\upsilon})\,.$
(7.91)
3\. The conjugation map ${\bf{\Phi}}_{{\mathfrak{m}}}$ in (7.87) satisfies,
for all $s_{0}\leq s\leq S-\sigma_{N}-\aleph_{{\mathfrak{m}}}(0)$,
$\|{\bf{\Phi}}_{{\mathfrak{m}}}^{\pm 1}-{\rm
Id}\|_{0,s,0}^{k_{0},\upsilon}+\|\left({\bf{\Phi}}_{{\mathfrak{m}}}^{\pm
1}-{\rm
Id}\right)^{*}\|_{0,s,0}^{k_{0},\upsilon}\lesssim_{s,{\mathfrak{m}},N}\varepsilon{\upsilon^{-1}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}+\aleph_{{\mathfrak{m}}}(0)}^{k_{0},\upsilon})\,.$
(7.92)
4\. Furthermore, for any $s_{1}$ as in (7.11), finitely many
$0\leq\alpha\leq\alpha({\mathfrak{m}})$,
${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with
$\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$, and
$n_{1},n_{2}\in{\mathbb{N}}_{0}$, with $n_{1}+n_{2}\leq
N-{\mathtt{q}}_{0}+\frac{1}{2}$, we have
$\displaystyle\|\Delta_{12}{\bf
R}_{6}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s_{1},\alpha}+\|\Delta_{12}{\bf
R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}\|_{-\frac{{\mathfrak{m}}}{2},s_{1},\alpha}\lesssim_{s_{1},{\mathfrak{m}},N,\alpha}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}+\aleph_{{\mathfrak{m}}}(\alpha)}\,,$
(7.93)
$\displaystyle\|\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}{\bf
T}_{6,N}\braket{D}^{n_{2}}\|_{{\mathcal{L}}(H^{s_{1}})}\lesssim_{s_{1},{\mathfrak{m}},N,{\mathtt{q}}_{0}}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}({\mathtt{q}}_{0})+\aleph_{{\mathfrak{m}}}(0)}\,,$
(7.94) $\displaystyle\|\Delta_{12}{\bf{\Phi}}_{{\mathfrak{m}}}^{\pm
1}\|_{0,s_{1},0}+\|\Delta_{12}({\bf{\Phi}}_{{\mathfrak{m}}}^{\pm
1})^{*}\|_{0,s_{1},0}\lesssim_{s_{1},{\mathfrak{m}},N}\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{N}+\aleph_{{\mathfrak{m}}}(0)}\,.$
(7.95)
###### Proof.
The proof is inductive. The operator
${\mathcal{L}}_{6}^{(0)}:={\mathcal{L}}_{5}$ satisfies (7.90)-(7.91) with
$\aleph_{0}(\alpha):=3\alpha$, by (7.77)-(7.78). Suppose we have already
performed ${\mathfrak{m}}$ steps, obtaining an operator
${\mathcal{L}}_{6}^{({\mathfrak{m}})}$ as in (7.88) with ${\bf
R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},d)}:={\bf R}_{6}^{(-\frac{1}{2},d)}$,
${\bf R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}:={\bf
R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}$, and with the remainder
${\bf\Phi}_{{\mathfrak{m}}}^{-1}{\bf T}_{5,N}{\bf\Phi}_{{\mathfrak{m}}}$
instead of ${\bf T}_{6,N}$. We now show how to define
${\mathcal{L}}_{6}^{({\mathfrak{m}}+1)}$. Let
$\chi_{{\mathfrak{m}}+1}({\varphi},x,\xi):=-\big{(}2{\rm
i}\,a_{2}({\varphi},x)\omega(\gamma,\xi)\big{)}^{-1}r_{6,{\mathfrak{m}}}^{(o)}({\varphi},x,\xi)\chi(\xi)\in
S^{-\frac{{\mathfrak{m}}}{2}-\frac{1}{2}}\,,$ (7.96)
where $\chi$ is the cut-off function defined in (3.11) and
$\omega(\gamma,\xi)$ is the symbol (cf. (2.18))
$\omega(\gamma,\xi):=\sqrt{G(0;\xi)\Big{(}g+\frac{\gamma^{2}}{4}\frac{G(0;\xi)}{\xi^{2}}\Big{)}}\in
S^{\frac{1}{2}}\,,\ \
G(0;\xi):=\begin{cases}\chi(\xi)|\xi|\tanh({\mathtt{h}}|\xi|)\,,\
{\mathtt{h}}<+\infty\cr\chi(\xi)|\xi|\,,\qquad\qquad\ \,\
{\mathtt{h}}=+\infty\,.\end{cases}$
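Explicitly, the choice (7.96) solves, at the level of principal symbols, the homological equation that cancels the off-diagonal term on the support of the cut-off $\chi$:

```latex
2{\rm i}\,a_{2}({\varphi},x)\,\omega(\gamma,\xi)\,
\chi_{{\mathfrak{m}}+1}({\varphi},x,\xi)
+ r_{6,{\mathfrak{m}}}^{(o)}({\varphi},x,\xi)\,\chi(\xi)=0\,.
```

What survives of $r_{6,{\mathfrak{m}}}^{(o)}$ is the low-frequency part $r_{6,{\mathfrak{m}}}^{(o)}(1-\chi(\xi))$, together with lower order corrections coming from the symbolic calculus.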
Note that $\chi_{{\mathfrak{m}}+1}$ in (7.96) is well defined because
$\omega(\gamma,\xi)$ is positive on the support of $\chi(\xi)$ and
$a_{2}({\varphi},x)$ is close to 1. We conjugate the operator
${\mathcal{L}}_{6}^{({\mathfrak{m}})}$ in (7.88) by the flow generated by
${\bf X}_{{\mathfrak{m}}+1}$ of the form (7.86) with
$\chi_{{\mathfrak{m}}+1}(\varphi,x,\xi)$ defined in (7.96). By (7.90) and
(7.75), for suitable constants
$\aleph_{{\mathfrak{m}}+1}(\alpha)>\aleph_{{\mathfrak{m}}}(\alpha)$, for
finitely many $\alpha\in{\mathbb{N}}_{0}$ and for any $s_{0}\leq s\leq
S-\sigma_{N}-\aleph_{{\mathfrak{m}}+1}(\alpha)$,
$\|{\bf
X}_{{\mathfrak{m}}+1}\|_{-\frac{{\mathfrak{m}}}{2}-\frac{1}{2},s,\alpha}^{k_{0},\upsilon}\lesssim_{s,{\mathfrak{m}},\alpha}\varepsilon{\upsilon^{-1}}\big{(}1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{N}+\aleph_{{\mathfrak{m}}+1}(\alpha)}^{k_{0},\upsilon}\big{)}\,.$
(7.97)
Therefore, by Lemmata 3.7, 3.5 and the induction assumption (7.92) for
${\bf{\Phi}}_{{\mathfrak{m}}}$, the conjugation map
${\bf{\Phi}}_{{\mathfrak{m}}+1}:={\bf{\Phi}}_{{\mathfrak{m}}}e^{{\bf
X}_{{\mathfrak{m}}+1}}$ is well defined and satisfies estimate (7.92) with
${\mathfrak{m}}+1$. By the Lie expansion (3.18) we have
$\displaystyle{\mathcal{L}}_{6}^{({\mathfrak{m}}+1)}$ $\displaystyle:=e^{-{\bf
X}_{{\mathfrak{m}}+1}}\,{\mathcal{L}}_{6}^{({\mathfrak{m}})}\,e^{{\bf
X}_{{\mathfrak{m}}+1}}$ (7.98)
$\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm
i}a_{2}{\bf{\Omega}}(\gamma,D)+{\rm i}{\bf{\Pi}}_{0}+a_{4}{\mathcal{H}}+{\bf
R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},d)}$ $\displaystyle-\big{[}{\bf
X}_{{\mathfrak{m}}+1},{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm
i}\,a_{2}{\bf{\Omega}}(\gamma,D)\big{]}+{\bf
R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}+{\bf\Phi}_{{\mathfrak{m}}+1}^{-1}{\bf
T}_{5,N}{\bf\Phi}_{{\mathfrak{m}}+1}$ $\displaystyle-\int_{0}^{1}e^{-\tau{\bf
X}_{{\mathfrak{m}}+1}}\big{[}{\bf
X}_{{\mathfrak{m}}+1}\,,\,\omega\cdot\partial_{\varphi}+{\rm
i}{\bf{\Pi}}_{0}+a_{4}{\mathcal{H}}+{\bf
R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},d)}\big{]}e^{\tau{\bf
X}_{{\mathfrak{m}}+1}}\,{\rm d}{\tau}$ (7.99)
$\displaystyle-\int_{0}^{1}e^{-\tau{\bf X}_{{\mathfrak{m}}+1}}\left[{\bf
X}_{{\mathfrak{m}}+1},{\bf
R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}\right]e^{\tau{\bf
X}_{{\mathfrak{m}}+1}}\,{\rm d}{\tau}$ (7.100)
$\displaystyle+\int_{0}^{1}(1-\tau)e^{-\tau{\bf
X}_{{\mathfrak{m}}+1}}\left[{\bf X}_{{\mathfrak{m}}+1},\left[{\bf
X}_{{\mathfrak{m}}+1},{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm
i}\,a_{2}{\bf{\Omega}}(\gamma,D)\right]\right]e^{\tau{\bf
X}_{{\mathfrak{m}}+1}}\,{\rm d}{\tau}\,.$ (7.101)
In view of (7.86), (7.75) and (7.89), we have that
$-\big{[}{\bf
X}_{{\mathfrak{m}}+1},{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm
i}\,a_{2}{\bf{\Omega}}(\gamma,D)\big{]}+{\bf
R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}=\begin{pmatrix}0&Z_{{\mathfrak{m}}+1}\\\
\overline{Z_{{\mathfrak{m}}+1}}&0\end{pmatrix}=:{\bf Z}_{{\mathfrak{m}}+1}\,,$
where, denoting for brevity
$\chi_{{\mathfrak{m}}+1}:=\chi_{{\mathfrak{m}}+1}({\varphi},x,\xi)$, we have
$\displaystyle Z_{{\mathfrak{m}}+1}$ $\displaystyle={\rm i}\left({\rm
Op}(\chi_{{\mathfrak{m}}+1})a_{2}\,\omega(\gamma,D)+a_{2}\,\omega(\gamma,D){\rm
Op}(\chi_{{\mathfrak{m}}+1})\right)$ $\displaystyle\quad+\left[{\rm
Op}(\chi_{{\mathfrak{m}}+1}),-{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+a_{2}\,\tfrac{\gamma}{2}\partial_{x}^{-1}G(0)\right]+{\rm
Op}(r_{6,{\mathfrak{m}}}^{(o)})\,.$
By (3.24), (3.26) and since $\chi_{{\mathfrak{m}}+1}\in
S^{-\frac{{\mathfrak{m}}}{2}-\frac{1}{2}}$ by (7.96), we have that
${\rm
Op}(\chi_{{\mathfrak{m}}+1})a_{2}\omega(\gamma,D)+a_{2}\omega(\gamma,D){\rm
Op}(\chi_{{\mathfrak{m}}+1})={\rm
Op}\big{(}2a_{2}\omega(\gamma,\xi)\chi_{{\mathfrak{m}}+1}\big{)}+{\mathtt{r}}_{{\mathfrak{m}}+1}\,,$
where ${\mathtt{r}}_{{\mathfrak{m}}+1}$ is in ${\rm
OP}S^{-\frac{{\mathfrak{m}}}{2}-1}$. By (7.96) and (7.4)
$Z_{{\mathfrak{m}}+1}={\rm i}{\mathtt{r}}_{{\mathfrak{m}}+1}+\left[{\rm
Op}(\chi_{{\mathfrak{m}}+1}),-{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+a_{2}\tfrac{\gamma}{2}\partial_{x}^{-1}G(0)\right]+{\rm
Op}(r_{6,{\mathfrak{m}}}^{(o)}(1-\chi(\xi)))\in{\rm
OP}S^{-\frac{{\mathfrak{m}}}{2}-\frac{1}{2}}\,.$
The remaining pseudodifferential operators in (7.99)-(7.101) are in ${\rm
OP}S^{-\frac{{\mathfrak{m}}+1}{2}}$. Therefore the operator
${\mathcal{L}}_{6}^{({\mathfrak{m}}+1)}$ in (7.98) has the form (7.88) at
${\mathfrak{m}}+1$ with
${\bf R}_{6,{\mathfrak{m}}+1}^{(-\frac{1}{2},d)}+{\bf
R}_{6,{\mathfrak{m}}+1}^{(-\frac{{\mathfrak{m}}+1}{2},o)}:={\bf
R}_{6,{\mathfrak{m}}}^{(-\frac{1}{2},d)}+{\bf
Z}_{{\mathfrak{m}}+1}+(7.99)+(7.100)+(7.101)$ (7.102)
and a smoothing remainder ${\bf\Phi}_{{\mathfrak{m}}+1}^{-1}{\bf
T}_{5,N}{\bf\Phi}_{{\mathfrak{m}}+1}$. By Lemmata 3.5, 3.6, (7.90), (7.97),
(7.76), we conclude that ${\bf R}_{6,{\mathfrak{m}}+1}^{(-\frac{1}{2},d)}$ and
${\bf R}_{6,{\mathfrak{m}}+1}^{(-\frac{{\mathfrak{m}}+1}{2},o)}$ satisfy
(7.90) at order ${\mathfrak{m}}+1$ for suitable constants
$\aleph_{{\mathfrak{m}}+1}(\alpha)>\aleph_{{\mathfrak{m}}}(\alpha)$. Moreover
the operator ${\bf{\Phi}}_{{\mathfrak{m}}+1}^{-1}{\bf
T}_{5,N}{\bf{\Phi}}_{{\mathfrak{m}}+1}$ satisfies (7.91) at order
${\mathfrak{m}}+1$ by Lemmata 3.12, 3.13 and (7.78), (7.92). Estimates
(7.93)-(7.95) follow similarly. By (7.96), Lemmata 3.20, 3.24, and the
induction assumption that ${\bf
R}_{6,{\mathfrak{m}}}^{(-\frac{{\mathfrak{m}}}{2},o)}$ is reversible and
momentum preserving, we get that ${\bf X}_{{\mathfrak{m}}+1}$ is reversibility
and momentum preserving, and so are $e^{\pm{\bf X}_{{\mathfrak{m}}+1}}$. We
deduce that ${\mathcal{L}}_{6}^{({\mathfrak{m}}+1)}$ is reversible and
momentum preserving, in particular ${\bf
R}_{6,{\mathfrak{m}}+1}^{(-\frac{{\mathfrak{m}}+1}{2},o)}$ in (7.102). ∎
###### Remark 7.13.
The number of regularizing iterations ${\mathfrak{m}}\in{\mathbb{N}}$ will be
fixed by the KAM reduction scheme in Section 8; more precisely, we take
${\mathfrak{m}}=2M$ with $M$ in (8.5). Note that it is independent of the
Sobolev index $s$.
So far the operator ${\mathcal{L}}_{6}$ of Lemma 7.12 depends on two indices
${\mathfrak{m}},N$, which provide respectively the order of the regularizing
off-diagonal remainder ${\bf R}_{6}^{(-\frac{{\mathfrak{m}}}{2},o)}$ and of
the smoothing tame operator ${\bf T}_{6,N}$. From now on we fix
${\mathfrak{m}}:=2M\,,\ M\in{\mathbb{N}}\,,\quad N=M\,.$ (7.103)
### 7.5 Reduction of the order 1/2
The goal of this section is to transform the operator ${\mathcal{L}}_{6}$ in
(7.88) with ${\mathfrak{m}}:=2M$, $N=M$ (cfr. (7.103)), into the operator
${\mathcal{L}}_{7}$ in (7.117) whose coefficient in front of
${\bf{\Omega}}(\gamma,D)$ is constant. First we rewrite
${\mathcal{L}}_{6}=\omega\cdot\partial_{\varphi}+\begin{pmatrix}P_{6}&0\\\
0&\overline{P_{6}}\end{pmatrix}+{\rm i}{\bf{\Pi}}_{0}+{\bf
R}_{6}^{(-M,o)}+{\bf T}_{6,M}\,,$
having denoted
$P_{6}:=P_{6}({\varphi},x,D):={\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm
i}a_{2}({\varphi},x)\Omega(\gamma,D)+a_{4}{\mathcal{H}}+r_{6}^{(d)}({\varphi},x,D)\,.$
(7.104)
We conjugate ${\mathcal{L}}_{6}$ through the real operator
${\bf{\Phi}}({\varphi}):=\begin{pmatrix}\Phi({\varphi})&0\\\
0&\overline{\Phi}({\varphi})\end{pmatrix}$ (7.105)
where $\Phi({\varphi}):=\Phi^{\tau}({\varphi})|_{\tau=1}$ is the time $1$-flow
of the PDE
$\begin{cases}\partial_{\tau}\Phi^{\tau}({\varphi})={\rm
i}A({\varphi})\Phi^{\tau}({\varphi})\,,\\\ \Phi^{0}({\varphi})={\rm
Id}\,,\end{cases}\qquad A({\varphi}):=b({\varphi},x)|D|^{\frac{1}{2}}\,,$
(7.106)
and $b({\varphi},x)$ is a real quasi-periodic traveling wave, ${\rm
odd}({\varphi},x)$, chosen later, see (7.114). Thus ${\rm i}b({\varphi},x)|D|^{\frac{1}{2}}$ is reversibility and momentum preserving, and so is ${\bf{\Phi}}({\varphi})$. Moreover
$\Phi\pi_{0}=\pi_{0}=\Phi^{-1}\pi_{0}$, which implies
${\bf{\Phi}}^{-1}{\bf{\Pi}}_{0}{\bf{\Phi}}={\bf{\Pi}}_{0}{\bf{\Phi}}\,.$
(7.107)
By the Lie expansion (3.18) we have
$\displaystyle\Phi^{-1}P_{6}\Phi$ $\displaystyle=P_{6}-{\rm
i}[A,P_{6}]-\frac{1}{2}[A,[A,P_{6}]]+\sum_{n=3}^{2M+1}\frac{(-{\rm
i})^{n}}{n!}{\rm ad}_{A({\varphi})}^{n}(P_{6})+T_{M}\,,$ (7.108)
$\displaystyle T_{M}$ $\displaystyle:=\frac{(-{\rm
i})^{2M+2}}{(2M+1)!}\int_{0}^{1}(1-\tau)^{2M+1}\Phi^{-\tau}({\varphi})\,{\rm
ad}_{A({\varphi})}^{2M+2}(P_{6})\,\Phi^{\tau}({\varphi}){\rm d}\tau\,,$
and, by (3.19),
$\displaystyle\Phi^{-1}\circ\omega\cdot\partial_{\varphi}\circ\Phi$
$\displaystyle=\omega\cdot\partial_{\varphi}+{\rm
i}(\omega\cdot\partial_{\varphi}A)+\frac{1}{2}[A,\omega\cdot\partial_{\varphi}A]-\sum_{n=3}^{2M+1}\frac{(-{\rm
i})^{n}}{n!}{\rm
ad}_{A({\varphi})}^{n-1}(\omega\cdot\partial_{\varphi}A({\varphi}))+T_{M}^{\prime}\,,$
$\displaystyle T_{M}^{\prime}$ $\displaystyle:=-\frac{(-{\rm
i})^{2M+2}}{(2M+1)!}\int_{0}^{1}(1-\tau)^{2M+1}\Phi^{-\tau}({\varphi})\,{\rm
ad}_{A({\varphi})}^{2M+1}(\omega\cdot\partial_{\varphi}A({\varphi}))\,\Phi^{\tau}({\varphi}){\rm
d}\tau\,.$ (7.109)
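The algebraic identity behind the Lie expansion (7.108) can be sanity-checked on finite-dimensional matrices. The sketch below is an illustration only: matrices stand in for the pseudo-differential operators $A({\varphi})$ and $P_{6}$, none of the symbol estimates are visible at this level, and the sizes and scalings are arbitrary.

```python
import math
import numpy as np
from scipy.linalg import expm

# Finite-dimensional check of the Lie expansion used in (7.108):
#   Phi^{-1} P Phi = sum_{n>=0} ((-i)^n / n!) ad_A^n(P) + remainder,
# where Phi = e^{iA} and ad_A(P) = [A, P]. Matrices replace the operators
# A(phi), P_6; all choices below are illustrative.

rng = np.random.default_rng(0)
d = 6
A = 0.2 * rng.standard_normal((d, d))
P = rng.standard_normal((d, d)).astype(complex)

exact = expm(-1j * A) @ P @ expm(1j * A)   # Phi^{-1} P Phi

ad = P.copy()                              # ad_A^0(P) = P
series = P.copy()
for n in range(1, 21):
    ad = A @ ad - ad @ A                   # ad_A^n(P) = [A, ad_A^{n-1}(P)]
    series += (-1j) ** n / math.factorial(n) * ad

print(np.max(np.abs(exact - series)))      # tiny: the tail plays the role of T_M
```

Truncating the sum at order $n$ leaves a remainder of the integral form $T_{M}$, which shrinks rapidly with the truncation order, mirroring the smoothing remainders in (7.108)-(7.109).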
Note that ${\rm ad}_{A({\varphi})}^{2M+2}(P_{6})$ and ${\rm
ad}_{A({\varphi})}^{2M+1}(\omega\cdot\partial_{\varphi}A({\varphi}))$ are in
${\rm OP}S^{-M}$. We now determine the pseudo-differential term of order $1/2$
in (7.108)-(7.109). We use the expansion of the linear dispersion operator
$\Omega(\gamma,D)$, defined by (4.1), (1.10), and, since $j\to
c_{j}(\gamma)\in S^{0}$ (see (4.14)),
$\Omega(\gamma,D)=\sqrt{g}|D|^{\frac{1}{2}}+{\rm
i}\,\tfrac{\gamma}{2}{\mathcal{H}}+r_{-\frac{1}{2}}(\gamma,D)\,,\quad
r_{-\frac{1}{2}}(\gamma,D)\in{\rm OP}S^{-\frac{1}{2}}\,,$ (7.110)
where ${\mathcal{H}}$ is the Hilbert transform in (3.21). By (7.104), that
$A=b|D|^{\frac{1}{2}}$, (3.27), (7.110) we get
$\displaystyle[A,P_{6}]$
$\displaystyle=\big{[}b|D|^{\frac{1}{2}},{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm
i}\,\sqrt{g}a_{2}|D|^{\frac{1}{2}}+(a_{4}-\tfrac{\gamma}{2}a_{2}){\mathcal{H}}+r_{6}^{(d)}(x,D)+{\rm
i}\,a_{2}r_{-\frac{1}{2}}(\gamma,D)\big{]}$
$\displaystyle=-{\mathtt{m}}_{1,\overline{\mathtt{n}}}b_{x}|D|^{\frac{1}{2}}-{\rm
i}\tfrac{\sqrt{g}}{2}(b_{x}a_{2}-(a_{2})_{x}b){\mathcal{H}}+{\rm
Op}(r_{b,-\frac{1}{2}})\,,$ (7.111)
where $r_{b,-\frac{1}{2}}\in S^{-\frac{1}{2}}$ is small with $b$. As a
consequence, the contribution at order $\frac{1}{2}$ of the operator ${\rm
i}\,\omega\cdot\partial_{\varphi}A+P_{6}-{\rm i}[A,P_{6}]$ is ${\rm
i}\big{(}\omega\cdot\partial_{\varphi}b+{\mathtt{m}}_{1,\overline{\mathtt{n}}}b_{x}+\sqrt{g}\,a_{2}\big{)}|D|^{\frac{1}{2}}$.
We choose $b({\varphi},x)$ as the solution of
$(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x})b+\sqrt{g}\,\Pi_{N_{\overline{\mathtt{n}}}}\,a_{2}=\sqrt{g}\,{\mathtt{m}}_{\frac{1}{2}}$
(7.112)
where ${\mathtt{m}}_{\frac{1}{2}}$ is the average (see (3.6))
${\mathtt{m}}_{\frac{1}{2}}:=\braket{a_{2}}_{{\varphi},x}\,.$ (7.113)
We define $b({\varphi},x)$ to be the real, ${\rm odd}({\varphi},x)$, quasi-
periodic traveling wave
$b({\varphi},x):=-\sqrt{g}(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x})_{\rm
ext}^{-1}\big{(}\Pi_{N_{\overline{\mathtt{n}}}}a_{2}({\varphi},x)-{\mathtt{m}}_{\frac{1}{2}}\big{)}$
(7.114)
recall (3.10). Note that $b({\varphi},x)$ and ${\mathtt{m}}_{\frac{1}{2}}$ are
defined for any
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ and that,
for any
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$
defined in (7.26), it solves (7.112).
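The mode-by-mode inversion behind (7.114) can be illustrated numerically. The sketch below is a toy check, assuming a single $\varphi$-frequency, $\omega=\sqrt{2}$, $g=9.81$, and numpy's FFT conventions; the coefficient $a_{2}$ is an illustrative trigonometric polynomial, not the function of the paper.

```python
import numpy as np

# Toy check of the transport equation (7.112) and its Fourier solution
# (7.114): solve (omega.d_phi + m d_x) b = -sqrt(g)*(a2 - <a2>) on the
# 2-torus by dividing mode-by-mode by the symbol i*(omega*l + m*j).
# With omega = sqrt(2) irrational the divisors never vanish for
# (l, j) != (0, 0) on this small mode set.

N = 16
phi = 2 * np.pi * np.arange(N) / N
x = 2 * np.pi * np.arange(N) / N
P, X = np.meshgrid(phi, x, indexing="ij")

omega, m, g = np.sqrt(2.0), 1.0, 9.81
a2 = 1.0 + 0.1 * np.cos(P + X) + 0.05 * np.sin(2 * P - X)  # illustrative

A2 = np.fft.fft2(a2)
l = np.fft.fftfreq(N, d=1.0 / N)           # integer phi-modes
j = np.fft.fftfreq(N, d=1.0 / N)           # integer x-modes
L, J = np.meshgrid(l, j, indexing="ij")

m_half = A2[0, 0].real / N**2              # the average <a2>_{phi,x}, cf. (7.113)

sym = 1j * (omega * L + m * J)             # symbol of omega.d_phi + m d_x
B = np.zeros_like(A2)
nz = (L != 0) | (J != 0)
B[nz] = -np.sqrt(g) * A2[nz] / sym[nz]     # mode-by-mode inversion, cf. (7.114)
b = np.fft.ifft2(B).real

db = np.fft.ifft2(sym * B).real            # (omega.d_phi + m d_x) b
res = db + np.sqrt(g) * (a2 - m_half)      # residual of (7.112)
print(np.max(np.abs(res)))                 # at roundoff level
```

On the toy grid the residual is at roundoff level; in the paper the same inversion pays a small-divisor cost, which is why (7.112) is only solved for $(\omega,\gamma)$ in the set ${\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$.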
We deduce by (7.108), (7.109), (7.104), (7.5)-(7.114), that, for any
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$,
$\displaystyle L_{7}$
$\displaystyle:=\Phi^{-1}({\varphi})\left(\omega\cdot\partial_{\varphi}+P_{6}\right)\Phi({\varphi})$
$\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm
i}\,{\mathtt{m}}_{\frac{1}{2}}\Omega(\gamma,D)+a_{5}{\mathcal{H}}+{\rm
Op}(r_{7}^{(d)})+T_{M}+T_{M}^{\prime}+{\rm
i}\sqrt{g}(\Pi_{N_{\overline{\mathtt{n}}}}^{\perp}a_{2})|D|^{\frac{1}{2}}\,,$
where $a_{5}({\varphi},x)$ is the real function (using that
$a_{4}=\frac{\gamma}{2}(a_{2}-1)$)
$\displaystyle a_{5}:=$
$\displaystyle\,\tfrac{\gamma}{2}({\mathtt{m}}_{\frac{1}{2}}-1)-\tfrac{\sqrt{g}}{2}(b_{x}a_{2}-(a_{2})_{x}b)$
(7.115)
$\displaystyle+\tfrac{{\mathtt{m}}_{1,\overline{\mathtt{n}}}}{4}\big{(}b_{xx}b-b_{x}^{2}\big{)}+\tfrac{1}{4}\big{(}b(\omega\cdot\partial_{\varphi}b)_{x}-(\omega\cdot\partial_{\varphi}b)b_{x}\big{)}\,,$
and
$\displaystyle{\rm Op}(r_{7}^{(d)}):={\rm Op}(-{\rm i}r_{b,-\frac{1}{2}}+{\rm
i}\,(a_{2}-{\mathtt{m}}_{\frac{1}{2}})r_{-\frac{1}{2}}(\gamma,D)+r_{6}^{(d)})$
$\displaystyle\ \ \ +\tfrac{1}{2}\big{[}b|D|^{\frac{1}{2}},{\rm
i}\tfrac{\sqrt{g}}{2}(b_{x}a_{2}-(a_{2})_{x}b){\mathcal{H}}-{\rm
Op}(r_{b,-\frac{1}{2}})\big{]}+\tfrac{1}{2}{\rm
Op}({\widetilde{r}}_{2}(b|\xi|^{\frac{1}{2}},({\mathtt{m}}_{1,\overline{\mathtt{n}}}b_{x}+\omega\cdot\partial_{\varphi}b)|\xi|^{\frac{1}{2}}))$
$\displaystyle\ \ \ +\sum_{n=3}^{2M+1}\frac{(-{\rm i})^{n}}{n!}{\rm
ad}_{A({\varphi})}^{n}(P_{6})-\sum_{n=3}^{2M+1}\frac{(-{\rm i})^{n}}{n!}{\rm
ad}_{A({\varphi})}^{n-1}(\omega\cdot\partial_{\varphi}A({\varphi}))\in{\rm
OP}S^{-\frac{1}{2}}\,,$ (7.116)
with ${\widetilde{r}}_{2}(\,\cdot\,,\,\cdot\,)$ defined in (3.27). In
conclusion we have the following lemma.
###### Lemma 7.14.
Let $M\in{\mathbb{N}}$, ${\mathtt{q}}_{0}\in{\mathbb{N}}_{0}$. Let
$b({\varphi},x)$ be the quasi-periodic traveling wave function ${\rm
odd}({\varphi},x)$, defined in (7.114). Then, for any
$\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$, conjugating ${\mathcal{L}}_{6}$ in
(7.88) via the invertible, real, reversibility and momentum preserving map
${\bf{\Phi}}$ defined in (7.105)-(7.106), we obtain, for any
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$,
the real, reversible and momentum preserving operator
$\displaystyle{\mathcal{L}}_{7}$
$\displaystyle:={\bf\Phi}^{-1}{\mathcal{L}}_{6}{\bf\Phi}$ (7.117)
$\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\,\partial_{x}+{\rm
i}\,{\mathtt{m}}_{\frac{1}{2}}{\bf{\Omega}}(\gamma,D)+a_{5}{\mathcal{H}}+{\rm
i}{\bf{\Pi}}_{0}+{\bf R}_{7}^{(-\frac{1}{2},d)}+{\bf T}_{7,M}+{\bf
Q}_{7}^{\perp}\,,$
defined for any
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, where:
1\. The real constant ${\mathtt{m}}_{\frac{1}{2}}$ defined in (7.113)
satisfies
$|{\mathtt{m}}_{\frac{1}{2}}-1|^{k_{0},\upsilon}\lesssim\varepsilon{\upsilon^{-1}}$;
2\. The real, quasi-periodic traveling wave function $a_{5}({\varphi},x)$
defined in (7.115), ${\rm even}({\varphi},x)$, satisfies, for some
$\sigma=\sigma(\tau,\nu,k_{0})>0$, for all $s_{0}\leq s\leq S-\sigma$,
$\displaystyle\|a_{5}\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon{\upsilon^{-2}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,,\quad|\langle
a_{5}\rangle_{\varphi,x}|^{k_{0},\upsilon}\lesssim\varepsilon{\upsilon^{-1}}\,;$
(7.118)
3\. ${\bf R}_{7}^{(-\frac{1}{2},d)}$ is the block-diagonal operator
$\displaystyle{\bf R}_{7}^{(-\frac{1}{2},d)}$
$\displaystyle:=\begin{pmatrix}r_{7}^{(d)}({\varphi},x,D)&0\\\
0&\overline{r_{7}^{(d)}({\varphi},x,D)}\end{pmatrix}\in{\rm
OP}S^{-\frac{1}{2}}$
with $r_{7}^{(d)}({\varphi},x,D)$ defined in (7.116), that satisfies for
finitely many $0\leq\alpha\leq\alpha(M)$ (fixed in Remark 7.16), for some
$\sigma_{M}(\alpha):=\sigma_{M}(k_{0},\tau,\nu,\alpha)>0$ and for all
$s_{0}\leq s\leq S-\sigma_{M}(\alpha)$,
$\displaystyle\|{\bf
R}_{7}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s,\alpha}^{k_{0},\upsilon}\lesssim_{s,M,\alpha}\varepsilon{\upsilon^{-2}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{M}(\alpha)}^{k_{0},\upsilon})\,\,;$
(7.119)
4\. For any ${\mathtt{q}}\in{\mathbb{N}}^{\nu}_{0}$ with
$|{\mathtt{q}}|\leq{\mathtt{q}}_{0}$, $n_{1},n_{2}\in{\mathbb{N}}_{0}$ with
$n_{1}+n_{2}\leq M-\frac{3}{2}(k_{0}+{\mathtt{q}}_{0})+\frac{3}{2}$, the
operator $\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf
T}_{7,M}(\varphi)\langle D\rangle^{n_{2}}$ is ${\mathcal{D}}^{k_{0}}$-tame
with a tame constant satisfying, for some
$\sigma_{M}({\mathtt{q}}_{0}):=\sigma_{M}(k_{0},\tau,\nu,{\mathtt{q}}_{0})$,
for any $s_{0}\leq s\leq S-\sigma_{M}({\mathtt{q}}_{0})$,
${\mathfrak{M}}_{\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf
T}_{7,M}(\varphi)\langle
D\rangle^{n_{2}}}(s)\lesssim_{S,M,{\mathtt{q}}_{0}}\varepsilon{\upsilon^{-2}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{M}({\mathtt{q}}_{0})}^{k_{0},\upsilon})\,;$
(7.120)
5\. The operator ${\bf Q}_{7}^{\perp}$ is
${\bf Q}_{7}^{\perp}:={\rm
i}\sqrt{g}(\Pi_{N_{\overline{\mathtt{n}}}}^{\perp}a_{2})|D|^{\frac{1}{2}}\begin{pmatrix}1&0\\\
0&-1\end{pmatrix}\,,$ (7.121)
where $a_{2}({\varphi},x)$ is defined in (7.60) and satisfies (7.74);
6\. The operators ${\bf{\Phi}}^{\pm 1}-{\rm Id}$, $({\bf{\Phi}}^{\pm 1}-{\rm
Id})^{*}$ are ${\mathcal{D}}^{k_{0}}$-$\frac{1}{2}(k_{0}+1)$-tame, with tame
constants satisfying, for some $\sigma>0$ and for all $s_{0}\leq s\leq
S-\sigma$,
$\displaystyle{\mathfrak{M}}_{{\bf{\Phi}}^{\pm 1}-{\rm
Id}}(s)+{\mathfrak{M}}_{({\bf{\Phi}}^{\pm 1}-{\rm
Id})^{*}}(s)\lesssim_{S}\varepsilon{\upsilon^{-2}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,.$
(7.122)
7\. Furthermore, for any $s_{1}$ as in (7.11), finitely many
$0\leq\alpha\leq\alpha(M)$, ${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with
$\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$, and
$n_{1},n_{2}\in{\mathbb{N}}_{0}$, with $n_{1}+n_{2}\leq
M-\frac{3}{2}{\mathtt{q}}_{0}$, we have
$\displaystyle\|\Delta_{12}a_{5}\|_{s_{1}}\lesssim_{s_{1}}\varepsilon{\upsilon^{-2}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\,,\
|\Delta_{12}{\mathtt{m}}_{\frac{1}{2}}|\lesssim\varepsilon{\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{0}+\sigma}\,,$
(7.123) $\displaystyle\|\Delta_{12}{\bf
R}_{7}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s_{1},\alpha}\lesssim_{s_{1},M,\alpha}\varepsilon{\upsilon^{-2}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{M}(\alpha)}\,,$
(7.124)
$\displaystyle\|\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}{\bf
T}_{7,M}\braket{D}^{n_{2}}\|_{{\mathcal{L}}(H^{s_{1}})}\lesssim_{s_{1},M,{\mathtt{q}}_{0}}\varepsilon{\upsilon^{-2}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{M}({\mathtt{q}}_{0})}\,,$
(7.125)
$\displaystyle\|\Delta_{12}({\mathcal{A}})h\|_{s_{1}}\lesssim_{s_{1}}\varepsilon{\upsilon^{-2}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\left\|h\right\|_{s_{1}+\sigma}\,,\quad{\mathcal{A}}\in\\{{\bf{\Phi}}^{\pm
1},({\bf{\Phi}}^{\pm 1})^{*}\\}\,.$ (7.126)
###### Proof.
The estimate
$|{\mathtt{m}}_{\frac{1}{2}}-1|^{k_{0},\upsilon}\lesssim\varepsilon\upsilon^{-1}$
follows by (7.113) and (7.74). The function $b({\varphi},x)$ defined in
(7.114) satisfies, by (3.12) and (7.74),
$\|b\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon\upsilon^{-2}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})$
(7.127)
for some $\sigma>0$ and for all $s_{0}\leq s\leq S-\sigma$. The estimate
(7.118) is deduced by (7.115),
$|{\mathtt{m}}_{\frac{1}{2}}-1|^{k_{0},\upsilon}\lesssim\varepsilon\upsilon^{-1}$,
(7.127), (7.74), (7.10). The estimate (7.119) follows by (7.116), (7.104),
Lemmata 3.5, 3.6, 3.8 and (7.127), (7.90), (7.74), (7.76). The smoothing term
${\bf T}_{7,M}$ in (7.117) is, using also (7.107),
${\bf T}_{7,M}:={\bf{\Phi}}^{-1}{\bf T}_{6,M}{\bf{\Phi}}+{\rm
i}{\bf{\Pi}}_{0}({\bf{\Phi}}-{\rm Id})+{\bf{\Phi}}^{-1}{\bf
R}_{6}^{(-M,o)}{\bf{\Phi}}+\begin{pmatrix}T_{M}+T_{M}^{\prime}&0\\\
0&\overline{T_{M}}+\overline{T_{M}^{\prime}}\end{pmatrix}$
with $T_{M}$ and $T_{M}^{\prime}$ defined in (7.108), (7.109). The estimate
(7.120) follows by (7.104), Lemmata 3.12, 3.13, the tame estimates of
${\bf{\Phi}}$ in Proposition 2.37 in [2], and (7.76), (7.127), (7.122),
(7.91). The estimate (7.122) follows by Lemma 2.38 in [2] and (7.127). The
estimates (7.123), (7.124), (7.125), (7.126) are proved in the same fashion,
using also (3.13). ∎
### 7.6 Reduction of the order 0
The goal of this section is to transform the operator ${\mathcal{L}}_{7}$ in
(7.117) into the operator ${\mathcal{L}}_{8}$ in (7.138) whose coefficient in
front of the Hilbert transform ${\mathcal{H}}$ is a real constant. From now
on, we neglect the contribution of ${\bf Q}_{7}^{\perp}$ in (7.117), which
will be conjugated in Section 7.7. For simplicity of notation we still denote
the resulting operator by ${\mathcal{L}}_{7}$. We first write
${\mathcal{L}}_{7}=\omega\cdot\partial_{\varphi}+\begin{pmatrix}P_{7}&0\\\
0&\overline{P_{7}}\end{pmatrix}+{\rm i}{\bf{\Pi}}_{0}+{\bf T}_{7,M}\,,$
where
$P_{7}:={\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm
i}{\mathtt{m}}_{\frac{1}{2}}\Omega(\gamma,D)+a_{5}(\varphi,x){\mathcal{H}}+{\rm
Op}(r_{7}^{(d)})\,.$ (7.128)
We conjugate ${\mathcal{L}}_{7}$ through the time-$1$ flow
$\Psi({\varphi}):=\Psi^{\tau}({\varphi})|_{\tau=1}$ generated by
$\partial_{\tau}\Psi^{\tau}({\varphi})=B({\varphi})\Psi^{\tau}({\varphi})\,,\
\Psi^{0}({\varphi})={\rm Id}\,,\quad
B({\varphi}):=b_{1}({\varphi},x){\mathcal{H}}\,,$ (7.129)
where $b_{1}({\varphi},x)$ is a real quasi-periodic traveling wave ${\rm
odd}(\varphi,x)$ chosen later (see (7.136)) and ${\mathcal{H}}$ is the Hilbert
transform in (3.21). Thus by Lemmata 3.20, 3.24 the operator
$b_{1}({\varphi},x){\mathcal{H}}$ is reversibility and momentum preserving and
so is its flow $\Psi^{\tau}(\varphi)$. Note that, since ${\mathcal{H}}(1)=0$,
$\Psi(\varphi)\pi_{0}=\pi_{0}=\Psi^{-1}(\varphi)\pi_{0}\,.$ (7.130)
By the Lie expansion in (3.18) we have
$\displaystyle\Psi^{-1}P_{7}\Psi$
$\displaystyle=P_{7}-[B,P_{7}]+\sum_{n=2}^{M}\frac{(-1)^{n}}{n!}{\rm
ad}_{B({\varphi})}^{n}(P_{7})+L_{M}\,,$ (7.131) $\displaystyle L_{M}$
$\displaystyle:=\frac{(-1)^{M+1}}{M!}\int_{0}^{1}(1-\tau)^{M}\Psi^{-\tau}({\varphi})\,{\rm
ad}_{B({\varphi})}^{M+1}(P_{7})\,\Psi^{\tau}({\varphi}){\rm d}\tau\,,$
and, by (3.19),
$\displaystyle\Psi^{-1}\circ\omega\cdot\partial_{\varphi}\circ\Psi$
$\displaystyle=\omega\cdot\partial_{\varphi}+(\omega\cdot\partial_{\varphi}B({\varphi}))-\sum_{n=2}^{M}\frac{(-1)^{n}}{n!}{\rm
ad}_{B({\varphi})}^{n-1}(\omega\cdot\partial_{\varphi}B({\varphi}))+L_{M}^{\prime}\,,$
(7.132) $\displaystyle L_{M}^{\prime}$
$\displaystyle:=\frac{(-1)^{M}}{M!}\int_{0}^{1}(1-\tau)^{M}\Psi^{-\tau}({\varphi})\,{\rm
ad}_{B({\varphi})}^{M}(\omega\cdot\partial_{\varphi}B({\varphi}))\,\Psi^{\tau}({\varphi}){\rm
d}\tau\,.$
The number $M$ will be fixed in (8.5). The contributions at order $0$ come
from $(\omega\cdot\partial_{\varphi}B)+P_{7}-[B,P_{7}]$. Since
$B=b_{1}{\mathcal{H}}$, by (7.128), (3.27) and (7.110) we have
$\displaystyle[B,P_{7}]$ $\displaystyle=-{\mathtt{m}}_{1,\overline{\mathtt{n}}}(b_{1})_{x}{\mathcal{H}}+{\rm
Op}(r_{b_{1},-\frac{1}{2}})\,,$ (7.133)
where ${\rm Op}(r_{b_{1},-\frac{1}{2}})\in{\rm OP}S^{-\frac{1}{2}}$ is small
with $b_{1}$. As a consequence, the order $0$ term of the operator
$\omega\cdot\partial_{\varphi}B+P_{7}-[B,P_{7}]$ is
$\big{(}\omega\cdot\partial_{\varphi}b_{1}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}(b_{1})_{x}+a_{5}\big{)}{\mathcal{H}}$.
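The leading term of the commutator in (7.133) comes from an identity that is exact for the Hilbert transform, since ${\mathcal{H}}$ commutes with $\partial_{x}$. The sketch below checks $[b_{1}{\mathcal{H}},\partial_{x}]f=-(b_{1})_{x}{\mathcal{H}}f$ on the $x$-torus, assuming ${\mathcal{H}}$ is the periodic Hilbert transform with Fourier multiplier $-{\rm i}\,{\rm sign}(j)$, ${\rm sign}(0)=0$ (the usual convention; the functions $b_{1}$, $f$ are arbitrary band-limited choices, not the paper's).

```python
import numpy as np

# Check of the exact identity behind the leading term of (7.133):
#   [b1*H, d_x] f = -(b1)_x * (H f),
# valid because H commutes with d_x. H is taken with multiplier
# -i*sign(j), sign(0) = 0 (assumed convention for (3.21)).

N = 128
x = 2 * np.pi * np.arange(N) / N
j = np.fft.fftfreq(N, d=1.0 / N)

def hilbert(f):
    # periodic Hilbert transform via its Fourier multiplier
    return np.fft.ifft(-1j * np.sign(j) * np.fft.fft(f)).real

def dx(f):
    # spectral x-derivative
    return np.fft.ifft(1j * j * np.fft.fft(f)).real

b1 = 0.2 * np.sin(x) + 0.1 * np.cos(3 * x)   # illustrative coefficient
f = np.cos(2 * x) + 0.5 * np.sin(5 * x)      # illustrative test function

lhs = b1 * hilbert(dx(f)) - dx(b1 * hilbert(f))   # [b1 H, d_x] f
rhs = -dx(b1) * hilbert(f)
print(np.max(np.abs(lhs - rhs)))                  # at roundoff level
```

The remaining contributions to $[B,P_{7}]$, from the order $\frac{1}{2}$ and lower order parts of $P_{7}$, produce the term ${\rm Op}(r_{b_{1},-\frac{1}{2}})\in{\rm OP}S^{-\frac{1}{2}}$.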
We choose $b_{1}$ as the solution of
$(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x})b_{1}+\Pi_{N_{\overline{\mathtt{n}}}}a_{5}={\mathtt{m}}_{0}$
(7.134)
where ${\mathtt{m}}_{0}$ is the average (see (3.6))
${\mathtt{m}}_{0}:=\braket{a_{5}}_{{\varphi},x}\,.$ (7.135)
We define $b_{1}({\varphi},x)$ to be the real, ${\rm odd}(\varphi,x)$, quasi-
periodic traveling wave
$b_{1}({\varphi},x):=-(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x})_{\rm
ext}^{-1}\big{(}\Pi_{N_{\overline{\mathtt{n}}}}a_{5}({\varphi},x)-{\mathtt{m}}_{0}\big{)}\,,$
(7.136)
recall (3.10). Note that $b_{1}({\varphi},x)$ is defined for any
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ and that,
for any
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$
defined in (7.26), it solves (7.134).
We deduce by (7.131)-(7.132) and (7.133), (7.136), that, for any
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$,
$\displaystyle L_{8}:=$ $\displaystyle\
\Psi^{-1}({\varphi})\left(\omega\cdot\partial_{\varphi}+P_{7}\right)\Psi({\varphi})$
$\displaystyle=$ $\displaystyle\
\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm
i}\,{\mathtt{m}}_{\frac{1}{2}}\Omega(\gamma,D)+{\mathtt{m}}_{0}{\mathcal{H}}+{\rm
Op}(r_{8}^{(d)})+L_{M}+L_{M}^{\prime}+(\Pi_{N_{\overline{\mathtt{n}}}}^{\perp}a_{5}){\mathcal{H}}\,,$
where
$\displaystyle{\rm Op}(r_{8}^{(d)})$ $\displaystyle:={\rm
Op}(-r_{b_{1},-\frac{1}{2}}+r_{7}^{(d)})$ (7.137)
$\displaystyle+\sum_{n=2}^{M}\frac{(-1)^{n}}{n!}{\rm
ad}_{B({\varphi})}^{n}(P_{7})-\sum_{n=2}^{M}\frac{(-1)^{n}}{n!}{\rm
ad}_{B({\varphi})}^{n-1}(\omega\cdot\partial_{\varphi}B({\varphi}))\in{\rm
OP}S^{-\frac{1}{2}}\,.$
In conclusion we have the following lemma.
###### Lemma 7.15.
Let $M\in{\mathbb{N}}$, ${\mathtt{q}}_{0}\in{\mathbb{N}}_{0}$. Let $b_{1}$ be
the quasi-periodic traveling wave defined in (7.136). Then, for any
$\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$, conjugating the operator
${\mathcal{L}}_{7}$ in (7.117) via the invertible, real, reversibility and
momentum preserving map $\Psi(\varphi)$ (cfr. (7.129)), we obtain, for any
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$,
the real, reversible and momentum preserving operator
$\displaystyle{\mathcal{L}}_{8}$
$\displaystyle:=\Psi^{-1}{\mathcal{L}}_{7}\Psi$ (7.138)
$\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,\overline{\mathtt{n}}}\partial_{x}+{\rm
i}\,{\mathtt{m}}_{\frac{1}{2}}{\bf{\Omega}}(\gamma,D)+{\mathtt{m}}_{0}{\mathcal{H}}+{\rm
i}{\bf{\Pi}}_{0}+{\bf R}_{8}^{(-\frac{1}{2},d)}+{\bf T}_{8,M}+{\bf
Q}_{8}^{\perp}\,,$
defined for any
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, where
1\. The constant ${\mathtt{m}}_{0}$ defined in (7.135) satisfies
$|{\mathtt{m}}_{0}|^{k_{0},\upsilon}\lesssim\varepsilon{\upsilon^{-1}}$;
2\. ${\bf R}_{8}^{(-\frac{1}{2},d)}$ is the block-diagonal operator
$\displaystyle{\bf R}_{8}^{(-\frac{1}{2},d)}$
$\displaystyle=\begin{pmatrix}r_{8}^{(d)}({\varphi},x,D)&0\\\
0&\overline{r_{8}^{(d)}({\varphi},x,D)}\end{pmatrix}\in{\rm
OP}S^{-\frac{1}{2}}\,,$
with $r_{8}^{(d)}({\varphi},x,D)$ defined in (7.137) that satisfies, for some
$\sigma_{M}:=\sigma_{M}(k_{0},\tau,\nu)>0$ and for all $s_{0}\leq s\leq
S-\sigma_{M}$,
$\displaystyle\|{\bf
R}_{8}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s,1}^{k_{0},\upsilon}\lesssim_{s,M}{\varepsilon\upsilon^{-3}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{M}}^{k_{0},\upsilon})\,;$
(7.139)
3\. For any ${\mathtt{q}}\in{\mathbb{N}}^{\nu}_{0}$ with
$|{\mathtt{q}}|\leq{\mathtt{q}}_{0}$, $n_{1},n_{2}\in{\mathbb{N}}_{0}$ with
$n_{1}+n_{2}\leq M-\frac{3}{2}(k_{0}+{\mathtt{q}}_{0})+\frac{3}{2}$, the
operator $\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf
T}_{8,M}(\varphi)\langle D\rangle^{n_{2}}$ is ${\mathcal{D}}^{k_{0}}$-tame
with a tame constant satisfying, for some
$\sigma_{M}({\mathtt{q}}_{0}):=\sigma_{M}(k_{0},\tau,\nu,{\mathtt{q}}_{0})$,
for any $s_{0}\leq s\leq S-\sigma_{M}({\mathtt{q}}_{0})$,
${\mathfrak{M}}_{\langle D\rangle^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\bf
T}_{8,M}(\varphi)\langle
D\rangle^{n_{2}}}(s)\lesssim_{S,M,{\mathtt{q}}_{0}}{\varepsilon\upsilon^{-3}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{M}({\mathtt{q}}_{0})}^{k_{0},\upsilon})\,;$
(7.140)
4\. The operator ${\bf Q}_{8}^{\perp}$ is
${\bf
Q}_{8}^{\perp}:=(\Pi_{N_{\overline{\mathtt{n}}}}^{\perp}a_{5}){\mathcal{H}}\begin{pmatrix}1&0\\\
0&1\end{pmatrix}\,,$ (7.141)
where $a_{5}({\varphi},x)$ is defined in (7.115) and satisfies (7.118);
5\. The operators $\Psi^{\pm 1}-{\rm Id}$, $(\Psi^{\pm 1}-{\rm Id})^{*}$ are
${\mathcal{D}}^{k_{0}}$-tame, with tame constants satisfying, for some
$\sigma:=\sigma(k_{0},\tau,\nu)>0$ and for all $s_{0}\leq s\leq S-\sigma$,
$\displaystyle{\mathfrak{M}}_{\Psi^{\pm 1}-{\rm
Id}}(s)+{\mathfrak{M}}_{(\Psi^{\pm 1}-{\rm
Id})^{*}}(s)\lesssim_{s}{\varepsilon\upsilon^{-3}}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,;$
(7.142)
6\. Furthermore, for any $s_{1}$ as in (7.11),
${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with
$\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$, and
$n_{1},n_{2}\in{\mathbb{N}}_{0}$, with $n_{1}+n_{2}\leq
M-\frac{3}{2}{\mathtt{q}}_{0}$, we have
$\displaystyle\|\Delta_{12}{\bf
R}_{8}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s_{1},1}\lesssim_{s_{1},M}{\varepsilon\upsilon^{-3}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{M}}\,,\
|\Delta_{12}{\mathtt{m}}_{0}|\lesssim{\varepsilon\upsilon^{-1}}\left\|i_{1}-i_{2}\right\|_{s_{0}+\sigma}\,,$
(7.143)
$\displaystyle\|\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}{\bf
T}_{8,M}\braket{D}^{n_{2}}\|_{{\mathcal{L}}(H^{s_{1}})}\lesssim_{s_{1},M,{\mathtt{q}}_{0}}{\varepsilon\upsilon^{-3}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{M}({\mathtt{q}}_{0})}\,,$
(7.144) $\displaystyle\|\Delta_{12}(\Psi^{\pm
1})h\|_{s_{1}}+\|\Delta_{12}(\Psi^{\pm
1})^{*}h\|_{s_{1}}\lesssim_{s_{1}}{\varepsilon\upsilon^{-3}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma}\left\|h\right\|_{s_{1}+\sigma}\,.$
(7.145)
###### Proof.
The function $b_{1}({\varphi},x)$ defined in (7.136), satisfies, by (7.118),
(3.12), for some $\sigma>0$ and for all $s_{0}\leq s\leq S-\sigma$,
$\|b_{1}\|_{s}^{k_{0},\upsilon}\lesssim_{s}\varepsilon\upsilon^{-3}(1+\|{\mathfrak{I}}_{0}\|_{s+\sigma}^{k_{0},\upsilon})\,.$
(7.146)
The estimate for ${\mathtt{m}}_{0}$ follows by (7.135) and (7.118). The
estimate (7.139) follows by (7.137), (7.128), Lemmata 3.5, 3.6, and (7.118),
(7.119), (7.146). Using (7.130), the smoothing term ${\bf T}_{8,M}$ in (7.138)
is
${\bf T}_{8,M}:={\Psi}^{-1}{\bf T}_{7,M}{\Psi}+{\rm i}{\bf{\Pi}}_{0}(\Psi-{\rm
Id})+\begin{pmatrix}L_{M}+L_{M}^{\prime}&0\\\
0&\overline{L_{M}}+\overline{L_{M}^{\prime}}\end{pmatrix}$
with $L_{M}$ and $L_{M}^{\prime}$ introduced in (7.131), (7.132). The estimate
(7.140) follows by Lemmata 3.12, 3.13, 3.7, (7.128), (7.118), (7.120),
(7.146), (7.142). The estimate (7.142) follows by Lemmata 3.7, 3.13 and
(7.146). The estimates (7.143), (7.144), (7.145) are proved in the same
fashion. ∎
###### Remark 7.16.
In Proposition 7.20 we shall estimate $\|[\partial_{x},{\bf
R}_{8}^{(-\frac{1}{2},d)}]\|_{-\frac{1}{2},s,0}^{k_{0},\upsilon}$ using
(7.139) and (3.28). In order to control $\|{\bf
R}_{8}^{(-\frac{1}{2},d)}\|_{-\frac{1}{2},s,1}^{k_{0},\upsilon}$ we used the
estimates (7.119) for finitely many $\alpha\in{\mathbb{N}}_{0}$,
$\alpha\leq\alpha(M)$, depending on $M$, as well as similar estimates for ${\bf
R}_{6}^{(-\frac{1}{2},d)}$, ${\bf R}_{5}^{(-\frac{1}{2},d)}$, etc. In
Proposition 7.20 we shall use (7.143)-(7.144) only for $s_{1}=s_{0}$.
### 7.7 Conclusion: reduction of ${\mathcal{L}}_{\omega}$
By Sections 7.1-7.6, the linear operator ${\mathcal{L}}$ in (7.8) is
conjugated, under the map
${\mathcal{W}}:={\mathcal{Z}}{\mathcal{E}}{\widetilde{{\mathcal{M}}}}{\mathcal{Q}}{\mathcal{C}}{\bf{\Phi}}_{2M}{\bf{\Phi}}\Psi\,,$
(7.147)
for any
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$,
$\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$, into the real, reversible and
momentum preserving operator
${\mathcal{W}}^{-1}{\mathcal{L}}{\mathcal{W}}={\mathcal{L}}_{8}-{\bf
Q}_{8}^{\perp}+{\bf P}_{\overline{\mathtt{n}}}^{\perp}+{\bf
Q}_{\overline{\mathtt{n}}}^{\perp}\,,$ (7.148)
where ${\mathcal{L}}_{8}$ is defined in (7.138), and
${\bf
P}_{\overline{\mathtt{n}}}^{\perp}:=\big{(}{\widetilde{{\mathcal{M}}}}{\mathcal{Q}}{\mathcal{C}}{\bf{\Phi}}_{2M}{\bf{\Phi}}\Psi\big{)}^{-1}{\bf
P}_{2}^{\perp}{\widetilde{{\mathcal{M}}}}{\mathcal{Q}}{\mathcal{C}}{\bf{\Phi}}_{2M}{\bf{\Phi}}\Psi\,,\quad{\bf
Q}_{\overline{\mathtt{n}}}^{\perp}:=\Psi^{-1}{\bf Q}_{7}^{\perp}\Psi+{\bf
Q}_{8}^{\perp}\,,$ (7.149)
with ${\bf P}_{2}^{\perp}$, ${\bf Q}_{7}^{\perp}$ and ${\bf Q}_{8}^{\perp}$
defined respectively in (7.28), (7.121) and (7.141); these operators are
exponentially small, and will contribute to the remainders estimated in Lemma
7.19. Moreover, ${\mathcal{L}}_{8}$ is defined for any
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$.
Now we deduce a similar conjugation result for the projected operator
${\mathcal{L}}_{\omega}$ in (6.10), i.e. (7.1), which acts in the normal
subspace $\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$. We first introduce
some notation. We denote by $\Pi_{{\mathbb{S}}^{+},\Sigma}^{\intercal}$ and
$\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}$ the projections on the subspaces
$\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\intercal}$ and
$\mathfrak{H}_{{\mathbb{S}}^{+},\Sigma}^{\angle}$ defined in Section 2.2. In
view of Remark 7.2, we denote, with a small abuse of notation,
$\Pi_{{\mathbb{S}}_{0}^{+},\Sigma}^{\intercal}:=\Pi_{{\mathbb{S}}^{+},\Sigma}^{\intercal}+\pi_{0}$,
so that
$\Pi_{{\mathbb{S}}_{0}^{+},\Sigma}^{\intercal}+\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}={\rm
Id}$ on the whole $L^{2}\times L^{2}$. We recall that
${\mathbb{S}}_{0}={\mathbb{S}}\cup\\{0\\}$, where ${\mathbb{S}}$ is the set
defined in (2.37). We denote by
$\Pi_{{{\mathbb{S}}}_{0}}:=\Pi_{\mathbb{S}}^{\intercal}+\pi_{0}$, where
$\Pi_{\mathbb{S}}^{\intercal}$ is defined below (2.45). We have
$\Pi_{{{\mathbb{S}}}_{0}}+\Pi_{{{\mathbb{S}}}_{0}}^{\perp}={\rm Id}$. Arguing
as in Lemma 7.15 in [7] we have the following.
###### Lemma 7.17.
Let $M>0$. There is $\sigma_{M}>0$ (depending also on $k_{0},\tau,\nu$) such
that, assuming (7.10) with $\mu_{0}\geq\sigma_{M}$, the following holds: the
map ${\mathcal{W}}$ defined in (7.147) has the form
${\mathcal{W}}={\widetilde{{\mathcal{M}}}}{\mathcal{C}}+{\mathcal{R}}(\varepsilon)\,,$
(7.150)
where, for all $s_{0}\leq s\leq S-\sigma_{M}$,
$\displaystyle\|{\mathcal{R}}(\varepsilon)h\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S,M}{\varepsilon\upsilon^{-3}}\big{(}\|h\|_{s+\sigma_{M}}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{M}}^{k_{0},\upsilon}\|h\|_{s_{0}+\sigma_{M}}^{k_{0},\upsilon}\big{)}\,.$
(7.151)
Moreover
${\mathcal{W}}^{\perp}:=\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}{\mathcal{W}}\Pi_{{{\mathbb{S}}}_{0}}^{\perp}$
(7.152)
is invertible and, for all $s_{0}\leq s\leq S-\sigma_{M}$,
$\displaystyle\|({\mathcal{W}}^{\perp})^{\pm 1}h\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S,M}\|h\|_{s+\sigma_{M}}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{M}}^{k_{0},\upsilon}\|h\|_{s_{0}+\sigma_{M}}^{k_{0},\upsilon}\,\,,$
(7.153) $\displaystyle\|\Delta_{12}({\mathcal{W}}^{\perp})^{\pm 1}h\|_{s_{1}}$
$\displaystyle\lesssim_{s_{1},M}{\varepsilon\upsilon^{-3}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\sigma_{M}}\left\|h\right\|_{s_{1}+\sigma_{M}}\,.$
The operator ${\mathcal{W}}^{\perp}$ maps (anti)-reversible, respectively
traveling, waves, into (anti)-reversible, respectively traveling, waves.
For any
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$,
$\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$, the operator
${\mathcal{L}}_{\omega}$ in (6.10) (i.e. (7.1)) is conjugated under the map
${\mathcal{W}}^{\perp}$ to
${\mathcal{L}}_{\bot}:=({\mathcal{W}}^{\perp})^{-1}{\mathcal{L}}_{\omega}{\mathcal{W}}^{\perp}=\Pi_{{{\mathbb{S}}}_{0}}^{\perp}\,({\mathcal{L}}_{8}-{\bf
Q}_{8}^{\perp})\,\Pi_{{{\mathbb{S}}}_{0}}^{\perp}+{\bf
P}_{\perp,\overline{\mathtt{n}}}+{\bf
Q}_{\perp,\overline{\mathtt{n}}}+{\mathcal{R}}^{f}$ (7.154)
where
${\bf P}_{\perp,\overline{\mathtt{n}}}:=\Pi_{{{\mathbb{S}}}_{0}}^{\perp}{\bf
P}_{\overline{\mathtt{n}}}^{\perp}\Pi_{{{\mathbb{S}}}_{0}}^{\perp}\,,\quad{\bf
Q}_{\perp,\overline{\mathtt{n}}}:=\Pi_{{{\mathbb{S}}}_{0}}^{\perp}{\bf
Q}_{\overline{\mathtt{n}}}^{\perp}\Pi_{{{\mathbb{S}}}_{0}}^{\perp}$ (7.155)
and ${\mathcal{R}}^{f}$ is, by (7.152), (7.148), (7.150) and (2.46),
$\displaystyle{\mathcal{R}}^{f}$
$\displaystyle:=({\mathcal{W}}^{\perp})^{-1}\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}{\mathcal{R}}(\varepsilon)\Pi_{{\mathbb{S}}_{0}}\big{(}{\mathcal{L}}_{8}-{\bf
Q}_{8}^{\perp}+{\bf P}_{\overline{\mathtt{n}}}^{\perp}+{\bf
Q}_{\overline{\mathtt{n}}}^{\perp}\big{)}\Pi_{{\mathbb{S}}_{0}}^{\bot}$
(7.156)
$\displaystyle-({\mathcal{W}}^{\perp})^{-1}\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}{\mathcal{L}}\Pi_{{\mathbb{S}}_{0}^{+},\Sigma}^{\intercal}{\mathcal{R}}(\varepsilon)\Pi_{{\mathbb{S}}_{0}}^{\bot}-\varepsilon({\mathcal{W}}^{\perp})^{-1}\Pi_{{\mathbb{S}}^{+},\Sigma}^{\angle}JR{\mathcal{W}}^{\perp}\,.$
###### Lemma 7.18.
The operator ${\mathcal{R}}^{f}$ in (7.156) has the finite rank form (7.4),
(7.5). Moreover, let ${\mathtt{q}}_{0}\in{\mathbb{N}}_{0}$ and
$M\geq\frac{3}{2}(k_{0}+{\mathtt{q}}_{0})+\frac{3}{2}$. There exists
$\aleph(M,{\mathtt{q}}_{0})>0$ (depending also on $k_{0}$, $\tau$, $\nu$) such
that, for any $n_{1},n_{2}\in{\mathbb{N}}_{0}$, with $n_{1}+n_{2}\leq
M-\frac{3}{2}(k_{0}+{\mathtt{q}}_{0})+\frac{3}{2}$, and any
${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with
$\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$, the operator
$\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\mathcal{R}}^{f}\braket{D}^{n_{2}}$
is ${\mathcal{D}}^{k_{0}}$-tame, with a tame constant satisfying
$\displaystyle{\mathfrak{M}}_{\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}{\mathcal{R}}^{f}\braket{D}^{n_{2}}}(s)\lesssim_{S,M,{\mathtt{q}}_{0}}{\varepsilon\upsilon^{-3}}(1+\|{\mathfrak{I}}_{0}\|_{s+\aleph(M,{\mathtt{q}}_{0})}^{k_{0},\upsilon})\,,\
\forall s_{0}\leq s\leq S-\aleph(M,{\mathtt{q}}_{0})\,,$ (7.157)
$\displaystyle\|\braket{D}^{n_{1}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}{\mathcal{R}}^{f}\braket{D}^{n_{2}}\|_{{\mathcal{L}}(H^{s_{1}})}\lesssim_{s_{1},M,{\mathtt{q}}_{0}}{\varepsilon\upsilon^{-3}}\left\|i_{1}-i_{2}\right\|_{s_{1}+\aleph(M,{\mathtt{q}}_{0})}\,,$
(7.158)
for any $s_{1}$ as in (7.11).
###### Proof.
The first two terms in (7.156) have the finite rank form (7.4) because of the
presence of the finite dimensional projectors $\Pi_{{\mathbb{S}}_{0}}$ and
$\Pi_{{\mathbb{S}}_{0}^{+},\Sigma}^{\intercal}$. In the last term, the
operator $R$ has the finite rank form (7.4). The estimate (7.157) follows by
(7.156), (7.147), (7.152), (7.138), (7.4), (3.7) and (7.151), (7.153),
(7.139), (7.140), (7.5). The estimate (7.158) follows similarly. ∎
###### Lemma 7.19.
The operators ${\bf P}_{\perp,\overline{\mathtt{n}}}$ and ${\bf
Q}_{\perp,\overline{\mathtt{n}}}$ defined in (7.155), (7.149) satisfy, for
some $\sigma_{M}=\sigma_{M}(k_{0},\tau,\nu)>0$, for all $s_{0}\leq s\leq
S-\sigma_{M}$,
$\displaystyle\|{\bf P}_{\perp,\overline{\mathtt{n}}}h\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S}\varepsilon
N_{\overline{\mathtt{n}}-1}^{-{\mathtt{a}}}\big{(}\|h\|_{s+\sigma_{M}}^{k_{0},\upsilon}+{\|{\mathfrak{I}}_{0}\|_{s+\sigma_{M}+{\mathtt{b}}}^{k_{0},\upsilon}\|h\|_{s_{0}+\sigma_{M}}^{k_{0},\upsilon}}\big{)}\,,\
\ \forall\,s_{0}\leq s\leq S-\sigma_{M}\,,$ (7.159) $\displaystyle\|{\bf
Q}_{\perp,\overline{\mathtt{n}}}h\|_{s_{0}}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S}{\varepsilon\upsilon^{-2}}N_{\overline{\mathtt{n}}}^{-{\rm
b}}\big{(}{1+\|{\mathfrak{I}}_{0}\|_{s_{0}+\sigma_{M}+{\rm
b}}^{k_{0},\upsilon}\big{)}\|h\|_{s_{0}+\frac{1}{2}}^{k_{0},\upsilon}}\,,\,\forall\,{\rm
b}>0\,,$ (7.160) $\displaystyle\|{\bf
Q}_{\perp,\overline{\mathtt{n}}}h\|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{S}{\varepsilon\upsilon^{-2}}\big{(}\|h\|_{s+\frac{1}{2}}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\sigma_{M}}^{k_{0},\upsilon}\|h\|_{s_{0}+\frac{1}{2}}^{k_{0},\upsilon}\big{)}\,.$
(7.161)
###### Proof.
The estimates (7.159), (7.160), (7.161) follow from (7.155), (7.149), (7.28),
(7.121), (7.141), using the estimates (7.29), (7.74), (7.118), (3.8), (7.153),
(7.142), (7.122), (7.92), (7.79). ∎
The next proposition summarizes the main result of this section.
###### Proposition 7.20.
(Reduction of ${\cal L}_{\omega}$ up to smoothing operators) For any
$\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$ and for all
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)$
(cf. (7.26)), the operator ${\mathcal{L}}_{\omega}$ in (6.10) (i.e. (7.1)) is
conjugated as in (7.154) to the real, reversible and momentum preserving
operator ${\mathcal{L}}_{\perp}$. For all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ the
extended operator defined by the right hand side in (7.154), has the form
${\mathcal{L}}_{\perp}=\omega\cdot\partial_{\varphi}\mathds{1}_{\perp}+{\rm
i}\,{\bf D}_{\perp}+{\bf R}_{\perp}+{\bf P}_{\perp,\overline{\mathtt{n}}}+{\bf
Q}_{\perp,\overline{\mathtt{n}}}\,,$ (7.162)
where $\mathds{1}_{\perp}$ denotes the identity map of ${\bf
H}_{{\mathbb{S}}_{0}}^{\bot}$ (cf. (2.45)) and:
1\. ${\bf D}_{\perp}$ is the diagonal operator
${\bf D}_{\perp}:=\begin{pmatrix}{\mathcal{D}}_{\perp}&0\\\
0&-\overline{{\mathcal{D}}_{\perp}}\end{pmatrix}\,,\quad{\mathcal{D}}_{\perp}:=\operatorname{diag}_{j\in{\mathbb{S}}_{0}^{c}}\mu_{j}\,,\quad{\mathbb{S}}_{0}^{c}:={\mathbb{Z}}\setminus({\mathbb{S}}\cup\\{0\\})\,,$
with eigenvalues
$\mu_{j}:={\mathtt{m}}_{1,\overline{\mathtt{n}}}j+{\mathtt{m}}_{\frac{1}{2}}\Omega_{j}(\gamma)-{\mathtt{m}}_{0}\,{\rm
sgn}(j)\in{\mathbb{R}}\,,$ where $\Omega_{j}(\gamma)$ is the dispersion
relation (1.13) and the real constants
${\mathtt{m}}_{1,\overline{\mathtt{n}}},{\mathtt{m}}_{\frac{1}{2}},{\mathtt{m}}_{0}$,
defined respectively in Lemma 7.7, (7.113), (7.135), satisfy
$\displaystyle|{\mathtt{m}}_{1,\overline{\mathtt{n}}}|^{k_{0},\upsilon}\lesssim\varepsilon\,,\quad|{\mathtt{m}}_{\frac{1}{2}}-1|^{k_{0},\upsilon}+|{\mathtt{m}}_{0}|^{k_{0},\upsilon}\lesssim\varepsilon\upsilon^{-1}\,.$
(7.163)
In addition, for some $\sigma>0$,
$|\Delta_{12}{\mathtt{m}}_{1,\overline{\mathtt{n}}}|\lesssim\varepsilon\left\|i_{1}-i_{2}\right\|_{s_{0}+\sigma}\,,\quad|\Delta_{12}{\mathtt{m}}_{\frac{1}{2}}|+|\Delta_{12}{\mathtt{m}}_{0}|\lesssim\varepsilon\upsilon^{-1}\left\|i_{1}-i_{2}\right\|_{s_{0}+\sigma}\,;$
(7.164)
2\. For any ${\mathtt{q}}_{0}\in{\mathbb{N}}_{0}$,
$M>\frac{3}{2}(k_{0}+{\mathtt{q}}_{0})+\frac{3}{2}$, there is a constant
$\aleph(M,{\mathtt{q}}_{0})>0$ (depending also on $k_{0}$, $\tau$, $\nu$) such
that, assuming (7.10) with $\mu_{0}\geq\aleph(M,{\mathtt{q}}_{0})$, for any
$s_{0}\leq s\leq S-\aleph(M,{\mathtt{q}}_{0})$,
${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with
$\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$, the operators $\langle
D\rangle^{\frac{1}{4}}\partial_{\varphi}^{\mathtt{q}}{\bf R}_{\perp}\langle
D\rangle^{\frac{1}{4}}$, $\langle
D\rangle^{\frac{1}{4}}[\partial_{\varphi}^{\mathtt{q}}{\bf
R}_{\perp},\partial_{x}]\langle D\rangle^{\frac{1}{4}}$ are
${\mathcal{D}}^{k_{0}}$-tame with tame constants satisfying
$\displaystyle{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\partial_{\varphi}^{\mathtt{q}}{\bf R}_{\perp}\langle
D\rangle^{\frac{1}{4}}}(s),\ {\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}[\partial_{\varphi}^{\mathtt{q}}{\bf
R}_{\perp},\partial_{x}]\langle
D\rangle^{\frac{1}{4}}}(s)\lesssim_{S,M,{\mathtt{q}}_{0}}{\varepsilon\upsilon^{-3}}(1+\|{\mathfrak{I}}_{0}\|_{s+\aleph(M,{\mathtt{q}}_{0})}^{k_{0},\upsilon})\,.$
(7.165)
Moreover, for any ${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with
$\left|{\mathtt{q}}\right|\leq{\mathtt{q}}_{0}$,
$\|\langle
D\rangle^{\frac{1}{4}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}{\bf
R}_{\perp}\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}+\|\langle
D\rangle^{\frac{1}{4}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}[{\bf
R}_{\perp},\partial_{x}]\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}\lesssim_{M}{\varepsilon\upsilon^{-3}}\left\|i_{1}-i_{2}\right\|_{s_{0}+\aleph(M,{\mathtt{q}}_{0})}\,.$
(7.166)
The operator ${\bf R}_{\perp}:={\bf R}_{\perp}(\varphi)$ is real, reversible
and momentum preserving.
3\. The remainders ${\bf P}_{\perp,\overline{\mathtt{n}}},{\bf
Q}_{\perp,\overline{\mathtt{n}}}$ are defined in (7.155) and satisfy the
estimates (7.159)-(7.161).
###### Proof.
By (7.154) and (7.138) we deduce (7.162) with ${\bf
R}_{\perp}:=\Pi_{{{\mathbb{S}}}_{0}}^{\perp}({\bf
R}_{8}^{(-\frac{1}{2},d)}+{\bf
T}_{8,M})\Pi_{{{\mathbb{S}}}_{0}}^{\perp}+{\mathcal{R}}^{f}$. The estimates
(7.163)-(7.164) follow by Lemmata 7.11, 7.14, 7.15. The estimate (7.165)
follows by Lemmata 3.5, 3.6, 3.13, (7.139) and (7.140), (7.157), choosing
$(n_{1},n_{2})=(1,2),(2,1)$. The estimate (7.166) follows similarly. ∎
## 8 Almost-diagonalization and invertibility of ${\mathcal{L}}_{\omega}$
In this section we diagonalize the operator
$\omega\cdot\partial_{\varphi}\mathds{1}_{\perp}+{\rm i}\,{\bf D}_{\bot}+{\bf
R}_{\perp}(\varphi)$ obtained neglecting from ${\mathcal{L}}_{\perp}$ in
(7.162) the remainders ${\bf P}_{\perp,\overline{\mathtt{n}}}$ and ${\bf
Q}_{\perp,\overline{\mathtt{n}}}$. We implement a KAM iterative scheme. As
starting point, we consider the real, reversible and momentum preserving
operator, acting in ${\bf H}_{{\mathbb{S}}_{0}}^{\bot}$,
${\bf L}_{0}:={\bf
L}_{0}(i):=\omega\cdot\partial_{\varphi}\mathds{1}_{\perp}+{\rm i}\,{\bf
D}_{0}+{\bf R}_{\perp}^{(0)}\,,$ (8.1)
defined for all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, with
diagonal part (with respect to the exponential basis)
$\displaystyle{\bf D}_{0}$
$\displaystyle:=\begin{pmatrix}{\mathcal{D}}_{0}&0\\\
0&-\overline{{\mathcal{D}}_{0}}\end{pmatrix}\,,\quad{\mathcal{D}}_{0}:=\operatorname{diag}_{j\in{\mathbb{S}}_{0}^{c}}\mu_{j}^{(0)}\,,\quad\mu_{j}^{(0)}:={\mathtt{m}}_{1,\overline{\mathtt{n}}}j+{\mathtt{m}}_{\frac{1}{2}}\Omega_{j}(\gamma)-{\mathtt{m}}_{0}\,{\rm
sgn}(j)\,,$ (8.2)
where ${\mathbb{S}}_{0}^{c}={\mathbb{Z}}\setminus{\mathbb{S}}_{0}$,
${\mathbb{S}}_{0}={\mathbb{S}}\cup\\{0\\}$, the real constants
${\mathtt{m}}_{1,\overline{\mathtt{n}}}$, ${\mathtt{m}}_{\frac{1}{2}}$,
${\mathtt{m}}_{0}$ satisfy (7.163)-(7.164) and
${\bf R}_{\perp}^{(0)}:={\bf
R}_{\perp}:=\begin{pmatrix}R_{\perp}^{(0,d)}&R_{\perp}^{(0,o)}\\\
\overline{R_{\perp}^{(0,o)}}&\overline{R_{\perp}^{(0,d)}}\end{pmatrix}\,,\
\quad R_{\perp}^{(0,d)}:H_{{\mathbb{S}}_{0}}^{\perp}\rightarrow
H_{{\mathbb{S}}_{0}}^{\perp}\,,\
R_{\perp}^{(0,o)}:H_{-{\mathbb{S}}_{0}}^{\perp}\rightarrow
H_{{\mathbb{S}}_{0}}^{\perp}\,,$ (8.3)
which is a real, reversible, momentum preserving operator satisfying (7.165),
(7.166). We denote
$H_{\pm{\mathbb{S}}_{0}}^{\bot}=\\{h(x)=\sum_{j\not\in\pm{\mathbb{S}}_{0}}h_{j}e^{\pm{\rm
i}jx}\in L^{2}\\}$. Note that
$\overline{{\mathcal{D}}_{0}}:H_{-{\mathbb{S}}_{0}}^{\bot}\to
H_{-{\mathbb{S}}_{0}}^{\bot}\,,\quad\overline{{\mathcal{D}}_{0}}={\rm
diag}_{j\in-{\mathbb{S}}_{0}^{c}}(\mu_{-j}^{(0)})\,.$ (8.4)
Proposition 7.20 implies that the operator ${\bf R}_{\perp}^{(0)}$ satisfies
the estimates of Lemma 8.1 below by fixing the constant $M$ large enough
(which means performing sufficiently many regularizing steps in Section 7.4),
namely
$M:=\big{[}\tfrac{3}{2}(k_{0}+s_{0}+{\mathtt{b}})+\tfrac{3}{2}\big{]}+1\in{\mathbb{N}}\,,$
(8.5)
where ${\mathtt{b}}$ is defined in (7.24). We also set
$\mu({\mathtt{b}}):=\aleph(M,s_{0}+{\mathtt{b}})\,,$ (8.6)
where the constant $\aleph(M,{\mathtt{q}}_{0})$ is given in Proposition 7.20,
with ${\mathtt{q}}_{0}=s_{0}+{\mathtt{b}}$.
###### Lemma 8.1.
(Smallness of ${\bf R}_{\perp}^{(0)}$) Assume (7.10) with
$\mu_{0}\geq\mu({\mathtt{b}})$. Then the operators $\langle
D\rangle^{\frac{1}{4}}{\bf R}_{\perp}^{(0)}\langle D\rangle^{\frac{1}{4}}$,
$\langle D\rangle^{\frac{1}{4}}[{\bf R}_{\perp}^{(0)},\partial_{x}]\langle
D\rangle^{\frac{1}{4}}$, and $\langle
D\rangle^{\frac{1}{4}}\partial_{{\varphi}_{m}}^{s_{0}}{\bf
R}_{\perp}^{(0)}\langle D\rangle^{\frac{1}{4}}$, $\langle
D\rangle^{\frac{1}{4}}[\partial_{{\varphi}_{m}}^{s_{0}}{\bf
R}_{\perp}^{(0)},\partial_{x}]\langle D\rangle^{\frac{1}{4}}$, $\langle
D\rangle^{\frac{1}{4}}\partial_{{\varphi}_{m}}^{s_{0}+{\mathtt{b}}}{\bf
R}^{(0)}_{\perp}\langle D\rangle^{\frac{1}{4}}$,
$\langle
D\rangle^{\frac{1}{4}}[\partial_{{\varphi}_{m}}^{s_{0}+{\mathtt{b}}}{\bf
R}^{(0)}_{\perp},\partial_{x}]\langle D\rangle^{\frac{1}{4}}$,
$m=1,\ldots,\nu$, are ${\mathcal{D}}^{k_{0}}$-tame. Defining
$\displaystyle\mathbb{M}_{0}(s):=\max\big{\\{}{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}{\bf R}^{(0)}_{\perp}\langle
D\rangle^{\frac{1}{4}}}(s),\,{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}[{\bf R}^{(0)}_{\perp},\partial_{x}]\langle
D\rangle^{\frac{1}{4}}}(s),\,$
$\displaystyle\quad\quad\quad\quad\quad{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\partial_{{\varphi}_{m}}^{s_{0}}{\bf
R}^{(0)}_{\perp}\langle D\rangle^{\frac{1}{4}}}(s),\,{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}[\partial_{{\varphi}_{m}}^{s_{0}}{\bf
R}^{(0)}_{\perp},\partial_{x}]\langle
D\rangle^{\frac{1}{4}}}(s),m=1,\ldots,\nu\big{\\}}$ (8.7)
$\displaystyle\mathbb{M}_{0}(s,{\mathtt{b}}):=\max\big{\\{}{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\partial_{{\varphi}_{m}}^{s_{0}+{\mathtt{b}}}{\bf
R}^{(0)}_{\perp}\langle D\rangle^{\frac{1}{4}}}(s),\ {\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}[\partial_{{\varphi}_{m}}^{s_{0}+{\mathtt{b}}}{\bf
R}^{(0)}_{\perp},\partial_{x}]\langle
D\rangle^{\frac{1}{4}}}(s)\,,\,m=1,\ldots,\nu\big{\\}}\,,$ (8.8)
we have, for all $s_{0}\leq s\leq S-\mu({\mathtt{b}})$,
${\mathfrak{M}}_{0}(s,{\mathtt{b}}):=\max\Set{\mathbb{M}_{0}(s),\mathbb{M}_{0}(s,{\mathtt{b}})}\leq
C(S){\frac{\varepsilon}{\upsilon^{3}}}(1+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})}^{k_{0},\upsilon})\,,\
{\mathfrak{M}}_{0}(s_{0},{\mathtt{b}})\leq
C(S){\frac{\varepsilon}{\upsilon^{3}}}\,.$ (8.9)
Moreover, for all ${\mathtt{q}}\in{\mathbb{N}}_{0}^{\nu}$, with
$\left|{\mathtt{q}}\right|\leq s_{0}+{\mathtt{b}}$,
$\|\langle
D\rangle^{\frac{1}{4}}\partial_{\varphi}^{\mathtt{q}}\Delta_{12}{\bf
R}^{(0)}_{\perp}\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}\,,\ \|\langle
D\rangle^{\frac{1}{4}}\Delta_{12}[\partial_{\varphi}^{\mathtt{q}}{\bf
R}^{(0)}_{\perp},\partial_{x}]\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}\leq
C(S){\varepsilon\upsilon^{-3}}\left\|i_{1}-i_{2}\right\|_{s_{0}+\mu({\mathtt{b}})}\,.$
(8.10)
###### Proof.
Recalling (8.7), (8.8), the bounds (8.9)-(8.10) follow by (7.165), (8.5),
(8.6), (7.166). ∎
We perform the almost-reducibility of ${\bf L}_{0}$ along the scale
$(N_{{\mathtt{n}}})_{{\mathtt{n}}\in{\mathbb{N}}_{0}}$, defined in (7.21).
###### Theorem 8.2.
(Almost-diagonalization of ${\bf L}_{0}$: KAM iteration) There exists
$\tau_{2}(\tau,\nu)>\tau_{1}(\tau,\nu)+1+{\mathtt{a}}$ (with
$\tau_{1},{\mathtt{a}}$ defined in (7.24)) such that, for all $S>s_{0}$, there
is $N_{0}:=N_{0}(S,{\mathtt{b}})\in{\mathbb{N}}$ such that, if
$N_{0}^{\tau_{2}}{\mathfrak{M}}_{0}(s_{0},{\mathtt{b}})\upsilon^{-1}\leq 1\,,$
(8.11)
then, for all $\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$,
${\mathtt{n}}=0,1,\ldots,\overline{\mathtt{n}}$:
$({\bf S1})_{\mathtt{n}}$ There exists a real, reversible and momentum
preserving operator
$\displaystyle{\bf
L}_{\mathtt{n}}:=\omega\cdot\partial_{\varphi}\mathds{1}_{\perp}+{\rm i}\,{\bf
D}_{\mathtt{n}}+{\bf R}_{\perp}^{({\mathtt{n}})}\,,$ (8.12) $\displaystyle{\bf
D}_{\mathtt{n}}:=\begin{pmatrix}{\mathcal{D}}_{\mathtt{n}}&0\\\
0&-\overline{{\mathcal{D}}_{\mathtt{n}}}\end{pmatrix}\,,\quad{\mathcal{D}}_{\mathtt{n}}:=\operatorname{diag}_{j\in{\mathbb{S}}_{0}^{c}}\mu_{j}^{({\mathtt{n}})}\,,$
defined for all $(\omega,\gamma)$ in
${\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, where
$\mu_{j}^{({\mathtt{n}})}$ are $k_{0}$-times differentiable real functions
$\mu_{j}^{({\mathtt{n}})}(\omega,\gamma):=\mu_{j}^{(0)}(\omega,\gamma)+{\mathfrak{r}}_{j}^{({\mathtt{n}})}(\omega,\gamma)\,,\quad\mu_{j}^{(0)}={\mathtt{m}}_{1,\overline{\mathtt{n}}}\,j+{\mathtt{m}}_{\frac{1}{2}}\,\Omega_{j}(\gamma)-{\mathtt{m}}_{0}\,{\rm
sgn}(j)\,,$ (8.13)
satisfying ${\mathfrak{r}}_{j}^{(0)}=0$ and, for ${\mathtt{n}}\geq 1$,
$\displaystyle|j|^{\frac{1}{2}}|{\mathfrak{r}}_{j}^{({\mathtt{n}})}|^{k_{0},\upsilon}\leq
C(S,{\mathtt{b}})\varepsilon\upsilon^{-3}\,,\quad|j|^{\frac{1}{2}}|\mu_{j}^{({\mathtt{n}})}-\mu_{j}^{({\mathtt{n}}-1)}|^{k_{0},\upsilon}\leq
C(S,{\mathtt{b}})\varepsilon\upsilon^{-3}N_{{\mathtt{n}}-2}^{-{\mathtt{a}}}\,,\
\ \forall j\in{\mathbb{S}}_{0}^{c}\,\,.$ (8.14)
The remainder
${\bf
R}_{\perp}^{({\mathtt{n}})}:=\begin{pmatrix}R_{\perp}^{({\mathtt{n}},d)}&R_{\perp}^{({\mathtt{n}},o)}\\\
\overline{R_{\perp}^{({\mathtt{n}},o)}}&\overline{R_{\perp}^{({\mathtt{n}},d)}}\end{pmatrix},\
\quad R_{\perp}^{({\mathtt{n}},d)}:H_{{\mathbb{S}}_{0}}^{\perp}\rightarrow
H_{{\mathbb{S}}_{0}}^{\perp}\,,\
R_{\perp}^{({\mathtt{n}},o)}:H_{-{\mathbb{S}}_{0}}^{\perp}\rightarrow
H_{{\mathbb{S}}_{0}}^{\perp}$ (8.15)
is ${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame with a modulo-tame
constant
${\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s):={\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}{\bf R}_{\perp}^{({\mathtt{n}})}\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\,,\quad{\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s,{\mathtt{b}}):={\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{\varphi}\rangle^{\mathtt{b}}{\bf
R}_{\perp}^{({\mathtt{n}})}\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)\,,$
(8.16)
which satisfy, for some constant $C_{*}(s_{0},{\mathtt{b}})>0$, for all
$s_{0}\leq s\leq S-\mu({\mathtt{b}})$,
${\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s)\leq
C_{*}(s_{0},{\mathtt{b}}){\mathfrak{M}}_{0}(s,{\mathtt{b}})N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}\,,\quad{\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s,{\mathtt{b}})\leq
C_{*}(s_{0},{\mathtt{b}}){\mathfrak{M}}_{0}(s,{\mathtt{b}})N_{{\mathtt{n}}-1}\,.$
(8.17)
Define the sets
${\mathtt{\Lambda}}_{\mathtt{n}}^{\upsilon}={\mathtt{\Lambda}}_{\mathtt{n}}^{\upsilon}(i)$
by
${\mathtt{\Lambda}}_{0}^{\upsilon}:={\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$
and, for ${\mathtt{n}}=1,...,\overline{\mathtt{n}}$,
$\displaystyle{\mathtt{\Lambda}}_{\mathtt{n}}^{\upsilon}:=$
$\displaystyle\big{\\{}\lambda=(\omega,\gamma)\in{\mathtt{\Lambda}}_{{\mathtt{n}}-1}^{\upsilon}\,:\,$
(8.18) $\displaystyle\ \
\big{|}\omega\cdot\ell+\mu_{j}^{({\mathtt{n}}-1)}-\mu_{j^{\prime}}^{({\mathtt{n}}-1)}\big{|}\geq\upsilon\,\braket{\ell}^{-\tau}$
$\displaystyle\ \ \forall\,\left|\ell\right|\leq N_{{\mathtt{n}}-1}\,,\
j,j^{\prime}\notin{\mathbb{S}}_{0}\,,\ (\ell,j,j^{\prime})\neq(0,j,j),\text{
with }\vec{\jmath}\cdot\ell+j-j^{\prime}=0\,,$ $\displaystyle\ \
\big{|}\omega\cdot\ell+\mu_{j}^{({\mathtt{n}}-1)}+\mu_{j^{\prime}}^{({\mathtt{n}}-1)}\big{|}\geq\upsilon\,\big{(}\left|j\right|^{\frac{1}{2}}+|j^{\prime}|^{\frac{1}{2}}\big{)}\braket{\ell}^{-\tau}$
$\displaystyle\ \ \forall\,\left|\ell\right|\leq N_{{\mathtt{n}}-1}\,,\
j,j^{\prime}\notin{\mathbb{S}}_{0}\text{ with
}\vec{\jmath}\cdot\ell+j+j^{\prime}=0\big{\\}}\,.$
For ${\mathtt{n}}\geq 1$ there exists a real, reversibility and momentum
preserving map, defined for all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, of the
form
${\bf{\Phi}}_{{\mathtt{n}}-1}=e^{{\bf X}_{{\mathtt{n}}-1}}\,,\quad{\bf
X}_{{\mathtt{n}}-1}:=\begin{pmatrix}X_{{\mathtt{n}}-1}^{(d)}&X_{{\mathtt{n}}-1}^{(o)}\\\
\overline{X_{{\mathtt{n}}-1}^{(o)}}&\overline{X_{{\mathtt{n}}-1}^{(d)}}\end{pmatrix}\,,\
\ X_{{\mathtt{n}}-1}^{(d)}:H_{{\mathbb{S}}_{0}}^{\perp}\rightarrow
H_{{\mathbb{S}}_{0}}^{\perp}\,,\
X_{{\mathtt{n}}-1}^{(o)}:H_{-{\mathbb{S}}_{0}}^{\perp}\rightarrow
H_{{\mathbb{S}}_{0}}^{\perp}\,,$
such that, for all $\lambda\in{\mathtt{\Lambda}}_{{\mathtt{n}}}^{\upsilon}$,
the following conjugation formula holds:
${\bf L}_{\mathtt{n}}={\bf{\Phi}}_{{\mathtt{n}}-1}^{-1}{\bf
L}_{{\mathtt{n}}-1}{\bf{\Phi}}_{{\mathtt{n}}-1}\,.$ (8.19)
The operators ${\bf X}_{{\mathtt{n}}-1}$,
$\braket{\partial_{\varphi}}^{\mathtt{b}}{\bf X}_{{\mathtt{n}}-1}$, are
${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame with modulo tame
constants satisfying, for all $s_{0}\leq s\leq S-\mu({\mathtt{b}})$,
$\displaystyle{\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}{\bf
X}_{{\mathtt{n}}-1}\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)$
$\displaystyle\leq
C(s_{0},{\mathtt{b}})\upsilon^{-1}N_{{\mathtt{n}}-1}^{\tau_{1}}N_{{\mathtt{n}}-2}^{-{\mathtt{a}}}{\mathfrak{M}}_{0}(s,{\mathtt{b}})\,,$
(8.20) $\displaystyle{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{\varphi}\rangle^{\mathtt{b}}{\bf
X}_{{\mathtt{n}}-1}\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)$
$\displaystyle\leq
C(s_{0},{\mathtt{b}})\upsilon^{-1}N_{{\mathtt{n}}-1}^{\tau_{1}}N_{{\mathtt{n}}-2}{\mathfrak{M}}_{0}(s,{\mathtt{b}})\,.$
$({\bf S2})_{\mathtt{n}}$ Let $i_{1}(\omega,\gamma)$, $i_{2}(\omega,\gamma)$
such that ${\bf R}_{\perp}^{({\mathtt{n}})}(i_{1})$, ${\bf
R}_{\perp}^{({\mathtt{n}})}(i_{2})$ satisfy (8.9), (8.10). Then, for all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times{\mathbb{R}}$
$\displaystyle\|\langle D\rangle^{\frac{1}{4}}|\Delta_{12}{\bf
R}_{\perp}^{({\mathtt{n}})}|\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}\lesssim_{S,{\mathtt{b}}}\varepsilon\upsilon^{-3}N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}\left\|i_{1}-i_{2}\right\|_{s_{0}+\mu({\mathtt{b}})}\,,$
(8.21) $\displaystyle\|\langle
D\rangle^{\frac{1}{4}}|\braket{\partial_{\varphi}}^{\mathtt{b}}\Delta_{12}{\bf
R}_{\perp}^{({\mathtt{n}})}|\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}\lesssim_{S,{\mathtt{b}}}\varepsilon\upsilon^{-3}N_{{\mathtt{n}}-1}\left\|i_{1}-i_{2}\right\|_{s_{0}+\mu({\mathtt{b}})}\,.$
(8.22)
Furthermore, for ${\mathtt{n}}\geq 1$, for all $j\in{\mathbb{S}}_{0}^{c}$,
$\displaystyle|j|^{\frac{1}{2}}|\Delta_{12}({\mathfrak{r}}_{j}^{({\mathtt{n}})}-{\mathfrak{r}}_{j}^{({\mathtt{n}}-1)})|\leq
C\|\langle D\rangle^{\frac{1}{4}}|\Delta_{12}{\bf
R}_{\perp}^{({\mathtt{n}})}|\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}\,,$ (8.23)
$\displaystyle|j|^{\frac{1}{2}}|\Delta_{12}{\mathfrak{r}}_{j}^{({\mathtt{n}})}|\leq
C(S,{\mathtt{b}})\varepsilon\upsilon^{-3}\left\|i_{1}-i_{2}\right\|_{s_{0}+\mu({\mathtt{b}})}\,.$
(8.24)
$({\bf S3})_{\mathtt{n}}$ Let $i_{1},i_{2}$ be as in $({\bf
S2})_{\mathtt{n}}$ and let $0<\rho<\upsilon/2$. Then
$\varepsilon\upsilon^{-3}C(S)N_{{\mathtt{n}}-1}^{\tau+1}\left\|i_{1}-i_{2}\right\|_{s_{0}+\mu({\mathtt{b}})}\leq\rho\quad\Rightarrow\quad{\mathtt{\Lambda}}_{\mathtt{n}}^{\upsilon}(i_{1})\subseteq{\mathtt{\Lambda}}_{\mathtt{n}}^{\upsilon-\rho}(i_{2})\,.$
(8.25)
Theorem 8.2 also implies that the invertible operator
${\bf U}_{0}:=\mathds{1}_{\perp}\,,\quad{\bf
U}_{\overline{\mathtt{n}}}:={\bf{\Phi}}_{0}\circ\ldots\circ{\bf{\Phi}}_{\overline{\mathtt{n}}-1}\,,\quad\overline{\mathtt{n}}\geq
1\,,$ (8.26)
has almost diagonalized ${\bf L}_{0}$. Indeed, we have the following result.
###### Theorem 8.3.
(Almost-diagonalization of ${\bf L}_{0}$) Assume (7.10) with
$\mu_{0}\geq\mu({\mathtt{b}})$. For all $S>s_{0}$, there exist
$N_{0}=N_{0}(S,{\mathtt{b}})>0$ and $\delta_{0}=\delta_{0}(S)>0$ such that, if
the smallness condition
$N_{0}^{\tau_{2}}\varepsilon\upsilon^{-4}\leq\delta_{0}$ (8.27)
holds, where $\tau_{2}=\tau_{2}(\tau,\nu)$ is defined in Theorem 8.2, then,
for all $\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$ and for all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ the
operator ${\bf U}_{\overline{\mathtt{n}}}$ in (8.26) is well-defined, the
operators ${\bf U}_{\overline{\mathtt{n}}}^{\pm 1}-\mathds{1}_{\perp}$ are
${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame with modulo-tame
constants satisfying, for all $s_{0}\leq s\leq S-\mu({\mathtt{b}})$,
${\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}({\bf
U}_{\overline{\mathtt{n}}}^{\pm 1}-\mathds{1}_{\perp})\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{S}\varepsilon\upsilon^{-4}N_{0}^{\tau_{1}}(1+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})}^{k_{0},\upsilon})\,,$
(8.28)
where $\tau_{1}$ is given by (7.24). Moreover ${\bf
U}_{\overline{\mathtt{n}}}$, ${\bf U}_{\overline{\mathtt{n}}}^{-1}$ are real,
reversibility and momentum preserving. The operator ${\bf
L}_{\overline{\mathtt{n}}}=\omega\cdot\partial_{\varphi}\mathds{1}_{\perp}+{\rm
i}\,{\bf D}_{\overline{\mathtt{n}}}+{\bf
R}_{\perp}^{(\overline{\mathtt{n}})}$, defined in (8.12) with
${\mathtt{n}}=\overline{\mathtt{n}}$ is real, reversible and momentum
preserving. The operator ${\bf R}_{\perp}^{(\overline{\mathtt{n}})}$ is
${\mathcal{D}}^{k_{0}}$-$(-\tfrac{1}{2})$-modulo-tame with a modulo-tame
constant satisfying, for all $s_{0}\leq s\leq S-\mu({\mathtt{b}})$,
${\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}{\bf
R}_{\perp}^{(\overline{\mathtt{n}})}\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{S}\varepsilon\upsilon^{-3}N_{\overline{\mathtt{n}}-1}^{-{\mathtt{a}}}(1+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})}^{k_{0},\upsilon})\,.$
(8.29)
Moreover, for all $(\omega,\gamma)$ in
${\mathtt{\Lambda}}_{\overline{\mathtt{n}}}^{\upsilon}={\mathtt{\Lambda}}_{\overline{\mathtt{n}}}^{\upsilon}(i)=\bigcap_{{\mathtt{n}}=0}^{\overline{\mathtt{n}}}{\mathtt{\Lambda}}_{\mathtt{n}}^{\upsilon}$,
where the sets ${\mathtt{\Lambda}}_{\mathtt{n}}^{\upsilon}$ are defined in
(8.18), the conjugation formula ${\bf L}_{\overline{\mathtt{n}}}:={\bf
U}_{\overline{\mathtt{n}}}^{-1}{\bf L}_{0}{\bf U}_{\overline{\mathtt{n}}}$
holds.
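A heuristic for the bound (8.28), sketched here for orientation only (it is not part of the proof, and it assumes, as is standard in such schemes, that the exponent ${\mathtt{a}}$ in (7.24) is large enough for the series below to converge): by (8.26) and the estimates (8.20), the deviation of ${\bf U}_{\overline{\mathtt{n}}}^{\pm 1}$ from the identity is controlled by the summable sizes of the generators ${\bf X}_{{\mathtt{n}}-1}$,

```latex
% Sketch: composition of the maps \Phi_n = e^{X_n} and summability of (8.20).
{\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}({\bf U}_{\overline{\mathtt{n}}}^{\pm 1}-\mathds{1}_{\perp})\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)
\lesssim\sum_{{\mathtt{n}}=1}^{\overline{\mathtt{n}}}
{\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}{\bf X}_{{\mathtt{n}}-1}\langle D\rangle^{\frac{1}{4}}}^{\sharp}(s)
\lesssim\upsilon^{-1}{\mathfrak{M}}_{0}(s,{\mathtt{b}})
\sum_{{\mathtt{n}}\geq 1}N_{{\mathtt{n}}-1}^{\tau_{1}}N_{{\mathtt{n}}-2}^{-{\mathtt{a}}}
\lesssim\upsilon^{-1}N_{0}^{\tau_{1}}{\mathfrak{M}}_{0}(s,{\mathtt{b}})\,,
```

which, combined with the first bound in (8.9), ${\mathfrak{M}}_{0}(s,{\mathtt{b}})\lesssim_{S}\varepsilon\upsilon^{-3}(1+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})}^{k_{0},\upsilon})$, accounts for the size $\varepsilon\upsilon^{-4}N_{0}^{\tau_{1}}$ appearing in (8.28).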
### Proof of Theorem 8.2
The proof of Theorem 8.2 is inductive. We first show that $({\bf
S1})_{\mathtt{n}}$-$({\bf S3})_{\mathtt{n}}$ hold when ${\mathtt{n}}=0$.
#### Proof of $({\bf S1})_{0}$-$({\bf S3})_{0}$.
Properties (8.12)-(8.13), (8.15) for ${\mathtt{n}}=0$ hold by (8.1), (8.2),
(8.3) with ${\mathfrak{r}}_{j}^{(0)}=0$. Moreover, by Lemma 3.17, we deduce
that, for any $s_{0}\leq s\leq S-\mu({\mathtt{b}})$, we have
${\mathfrak{M}}_{0}^{\sharp}(s),{\mathfrak{M}}_{0}^{\sharp}(s,{\mathtt{b}})\lesssim_{s_{0},{\mathtt{b}}}{\mathfrak{M}}_{0}(s,{\mathtt{b}})$
and (8.17) for ${\mathtt{n}}=0$ holds. The estimates (8.21), (8.22) at
${\mathtt{n}}=0$ follow similarly from (8.10). Finally, $({\bf S3})_{0}$ is
trivial since
${\mathtt{\Lambda}}_{0}^{\upsilon}(i_{1})={\mathtt{\Lambda}}_{0}^{\upsilon-\rho}(i_{2})={\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$.
#### The reducibility step.
We now describe the generic inductive step, showing how to transform ${\bf
L}_{\mathtt{n}}$ into ${\bf L}_{{\mathtt{n}}+1}$ by the conjugation with
${\bf{\Phi}}_{{\mathtt{n}}}$. For the sake of simplicity, we drop the index
${\mathtt{n}}$ and write $+$ instead of ${\mathtt{n}}+1$, so that
${\bf L}:={\bf L}_{\mathtt{n}}$, ${\bf L}_{+}:={\bf L}_{{\mathtt{n}}+1}$,
${\bf R}_{\perp}:={\bf R}_{\perp}^{({\mathtt{n}})}$, ${\bf
R}_{\perp}^{(+)}:={\bf R}_{\perp}^{({\mathtt{n}}+1)}$, $N:=N_{\mathtt{n}}$,
etc. We conjugate ${\bf L}$ in (8.12) by a transformation of the form
${\bf{\Phi}}:=e^{{\bf X}}\,,\quad{\bf X}:=\begin{pmatrix}X^{(d)}&X^{(o)}\\\
\overline{X^{(o)}}&\overline{X^{(d)}}\end{pmatrix},\
X^{(d)}:H_{{\mathbb{S}}_{0}}^{\perp}\rightarrow
H_{{\mathbb{S}}_{0}}^{\perp}\,,\
X^{(o)}:H_{-{\mathbb{S}}_{0}}^{\perp}\rightarrow
H_{{\mathbb{S}}_{0}}^{\perp}\,,$ (8.30)
where ${\bf X}$ is a bounded linear operator, chosen below in (8.35), (8.36).
By the Lie expansions (3.18)-(3.19) we have
$\displaystyle{\bf L}_{+}:={\bf{\Phi}}^{-1}{\bf L}{\bf{\Phi}}$
$\displaystyle=\omega\cdot\partial_{\varphi}\mathds{1}_{\perp}+{\rm i}\,{\bf
D}+((\omega\cdot\partial_{\varphi}{\bf X})-{\rm i}[{\bf X},{\bf
D}]+\Pi_{N}{\bf R}_{\perp})+\Pi_{N}^{\perp}{\bf R}_{\perp}$ (8.31)
$\displaystyle\ \ -\int_{0}^{1}e^{-\tau{\bf X}}[{\bf X},{\bf
R}_{\perp}]e^{\tau{\bf X}}\,{\rm d}{\tau}-\int_{0}^{1}(1-\tau)e^{-\tau{\bf
X}}[{\bf X},(\omega\cdot\partial_{\varphi}{\bf X})-{\rm i}[{\bf X},{\bf
D}]]e^{\tau{\bf X}}\,{\rm d}{\tau}$
where $\Pi_{N}$ is defined in (3.35) and $\Pi_{N}^{\perp}:={\rm Id}-\Pi_{N}$.
We want to solve the homological equation
$\omega\cdot\partial_{\varphi}{\bf X}-{\rm i}[{\bf X},{\bf D}]+\Pi_{N}{\bf
R}_{\perp}=[{\bf R}_{\perp}]$ (8.32)
where
$[{\bf R}_{\perp}]:=\begin{pmatrix}[R_{\perp}^{(d)}]&0\\\
0&[\overline{R_{\perp}^{(d)}}]\end{pmatrix}\,,\quad[R_{\perp}^{(d)}]:={\rm
diag}_{j\in{\mathbb{S}}_{0}^{c}}(R_{\perp}^{(d)})_{j}^{j}(0)\,.$ (8.33)
By (8.12), (8.15) and (8.30), the homological equation (8.32) is equivalent to
the two scalar homological equations
$\displaystyle\omega\cdot\partial_{\varphi}X^{(d)}-{\rm
i}(X^{(d)}{\mathcal{D}}-{\mathcal{D}}X^{(d)})+\Pi_{N}R_{\perp}^{(d)}=[R_{\perp}^{(d)}]\,$
(8.34) $\displaystyle\omega\cdot\partial_{\varphi}X^{(o)}+{\rm
i}(X^{(o)}\overline{{\mathcal{D}}}+{\mathcal{D}}X^{(o)})+\Pi_{N}R_{\perp}^{(o)}=0\,.$
Recalling (8.12) and that $\overline{{\mathcal{D}}}={\rm
diag}_{j\in-{\mathbb{S}}_{0}^{c}}(\mu_{-j})$ acts in
$H_{-{\mathbb{S}}_{0}}^{\bot}$ (see (8.4)), the solutions of (8.34) are, for
all $(\omega,\gamma)\in{\mathtt{\Lambda}}_{{\mathtt{n}}+1}^{\upsilon}$ (see
(8.18) with ${\mathtt{n}}\rightsquigarrow{\mathtt{n}}+1$),
$\displaystyle(X^{(d)})_{j}^{j^{\prime}}(\ell):=\begin{cases}-\dfrac{(R_{\perp}^{(d)})_{j}^{j^{\prime}}(\ell)}{{\rm
i}(\omega\cdot\ell+\mu_{j}-\mu_{j^{\prime}})}&\ \text{ if
}\begin{cases}(\ell,j,j^{\prime})\neq(0,j,j),\
j,j^{\prime}\in{\mathbb{S}}_{0}^{c},\ \braket{\ell}\leq N\\\
\ell\cdot\vec{\jmath}+j-j^{\prime}=0\end{cases}\\\ 0&\ \text{
otherwise}\,,\end{cases}$ (8.35)
$\displaystyle(X^{(o)})_{j}^{j^{\prime}}(\ell):=\begin{cases}-\dfrac{(R_{\perp}^{(o)})_{j}^{j^{\prime}}(\ell)}{{\rm
i}(\omega\cdot\ell+\mu_{j}+\mu_{-j^{\prime}})}&\ \text{ if
}\begin{cases}j,-j^{\prime}\in{\mathbb{S}}_{0}^{c},\ \braket{\ell}\leq N\\\
\ell\cdot\vec{\jmath}+j-j^{\prime}=0\end{cases}\\\ 0&\ \text{
otherwise}\,.\end{cases}$ (8.36)
Note that, since $-j^{\prime}\in{\mathbb{S}}_{0}^{c}$, we can apply the bounds
(8.18) for $(\omega,\gamma)\in{\mathtt{\Lambda}}_{{\mathtt{n}}+1}^{\upsilon}$.
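To see where (8.35) comes from, one can write the first equation of (8.34) in Fourier coefficients; the following display is a sketch of this standard computation (here $\delta$ denotes the Kronecker delta). Since ${\mathcal{D}}={\rm diag}_{j}(\mu_{j})$, one has $(X^{(d)}{\mathcal{D}})_{j}^{j^{\prime}}(\ell)=\mu_{j^{\prime}}(X^{(d)})_{j}^{j^{\prime}}(\ell)$ and $({\mathcal{D}}X^{(d)})_{j}^{j^{\prime}}(\ell)=\mu_{j}(X^{(d)})_{j}^{j^{\prime}}(\ell)$, so (8.34) amounts to

```latex
% Fourier-coefficient form of the first homological equation in (8.34):
{\rm i}\big{(}\omega\cdot\ell+\mu_{j}-\mu_{j^{\prime}}\big{)}(X^{(d)})_{j}^{j^{\prime}}(\ell)
+(R_{\perp}^{(d)})_{j}^{j^{\prime}}(\ell)
=(R_{\perp}^{(d)})_{j}^{j}(0)\,\delta_{\ell,0}\,\delta_{j,j^{\prime}}\,,
\quad\braket{\ell}\leq N\,,
```

and solving for $(X^{(d)})_{j}^{j^{\prime}}(\ell)$ when $(\ell,j,j^{\prime})\neq(0,j,j)$, with the divisor bounded from below on ${\mathtt{\Lambda}}_{{\mathtt{n}}+1}^{\upsilon}$ by (8.18), yields exactly (8.35).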
###### Lemma 8.4.
(Homological equations) The real operator ${\bf X}$ defined in (8.30), (8.35),
(8.36), (which for all
$(\omega,\gamma)\in{\mathtt{\Lambda}}_{{\mathtt{n}}+1}^{\upsilon}$ solves the
homological equation (8.32)) admits an extension to the whole parameter space
${\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$. Such extended operator is
${\mathcal{D}}^{k_{0}}$-$(-\tfrac{1}{2})$-modulo-tame satisfying, for all
$s_{0}\leq s\leq S-\mu({\mathtt{b}})$,
${\mathfrak{M}}_{\langle D\rangle^{\frac{1}{4}}{\bf X}\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{k_{0}}N^{\tau_{1}}\upsilon^{-1}{\mathfrak{M}}^{\sharp}(s)\,,\quad{\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{\varphi}\rangle^{\mathtt{b}}{\bf
X}\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{k_{0}}N^{\tau_{1}}\upsilon^{-1}{\mathfrak{M}}^{\sharp}(s,{\mathtt{b}})\,,$
(8.37)
where $\tau_{1}:=\tau(k_{0}+1)+k_{0}$. For all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times{\mathbb{R}}$,
$\displaystyle\|\langle D\rangle^{\frac{1}{4}}|\Delta_{12}{\bf X}|\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}\lesssim$ $\displaystyle
N^{2\tau+1}{\upsilon^{-1}}(\|\langle D\rangle^{\frac{1}{4}}|{\bf
R}_{\perp}(i_{2})|\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}\left\|i_{1}-i_{2}\right\|_{s_{0}+\mu({\mathtt{b}})}+\|\langle
D\rangle^{\frac{1}{4}}|\Delta_{12}{\bf R}_{\perp}|\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})})\,,$ (8.38)
$\displaystyle\|\langle
D\rangle^{\frac{1}{4}}|\langle\partial_{\varphi}\rangle^{\mathtt{b}}\Delta_{12}{\bf
X}|\langle D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}\lesssim$
$\displaystyle N^{2\tau+1}{\upsilon^{-1}}(\|\langle
D\rangle^{\frac{1}{4}}|\langle\partial_{\varphi}\rangle^{\mathtt{b}}{\bf
R}_{\perp}(i_{2})|\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}\left\|i_{1}-i_{2}\right\|_{s_{0}+\mu({\mathtt{b}})}+\|\langle
D\rangle^{\frac{1}{4}}|\langle\partial_{\varphi}\rangle^{\mathtt{b}}\Delta_{12}{\bf
R}_{\perp}|\langle D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})})\,.$
(8.39)
The operator ${\bf X}$ is reversibility and momentum preserving.
###### Proof.
We prove that (8.37) holds for $X^{(d)}$; the proof for $X^{(o)}$ is
analogous. First, we extend the solution in (8.35) to all $\lambda$ in
${\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ by setting (without any
further relabeling) $(X^{(d)})_{j}^{j^{\prime}}(\ell)={\rm
i}\,g_{\ell,j,j^{\prime}}(\lambda)(R_{\perp}^{(d)})_{j}^{j^{\prime}}(\ell)$,
where
$g_{\ell,j,j^{\prime}}(\lambda):=\frac{\chi(f(\lambda)\rho^{-1})}{f(\lambda)}\,,\quad
f(\lambda):=\omega\cdot\ell+\mu_{j}-\mu_{j^{\prime}}\,,\quad\rho:=\upsilon\braket{\ell}^{-\tau}\,,$
and $\chi$ is the cut-off function (3.11). By (8.13), (8.14), (7.163), (8.18),
Lemma 4.4, together with (3.11), we deduce that, for any
$k_{1}\in{\mathbb{N}}_{0}^{\nu}$, $\left|k_{1}\right|\leq k_{0}$,
$\sup_{\left|k_{1}\right|\leq
k_{0}}\big{|}\partial_{\lambda}^{k_{1}}g_{\ell,j,j^{\prime}}\big{|}\lesssim_{k_{0}}\braket{\ell}^{\tau_{1}}\upsilon^{-1-\left|k_{1}\right|}\,,\quad\tau_{1}=\tau(k_{0}+1)+k_{0}\,,$
and we deduce, for all $0\leq\left|k\right|\leq k_{0}$,
$\displaystyle|\partial_{\lambda}^{k}(X^{(d)})_{j}^{j^{\prime}}(\ell)|$
$\displaystyle\lesssim_{k_{0}}\sum_{k_{1}+k_{2}=k}|\partial_{\lambda}^{k_{1}}g_{\ell,j,j^{\prime}}(\lambda)||\partial_{\lambda}^{k_{2}}(R_{\perp}^{(d)})_{j}^{j^{\prime}}(\ell)|$
$\displaystyle\lesssim_{k_{0}}\braket{\ell}^{\tau_{1}}\upsilon^{-1-\left|k\right|}\sum_{\left|k_{2}\right|\leq\left|k\right|}\upsilon^{\left|k_{2}\right|}|\partial_{\lambda}^{k_{2}}(R_{\perp}^{(d)})_{j}^{j^{\prime}}(\ell)|\,.$
(8.40)
By (8.35) we have that $(X^{(d)})_{j}^{j^{\prime}}(\ell)=0$ for all
$\langle\ell\rangle>N$. Therefore, for all $|k|\leq k_{0}$, we have
$\displaystyle\|\langle
D\rangle^{\frac{1}{4}}|\braket{\partial_{\varphi}}^{\mathtt{b}}\partial_{\lambda}^{k}X^{(d)}|\langle
D\rangle^{\frac{1}{4}}h\|_{s}^{2}\leq\sum_{\ell,j}\braket{\ell,j}^{2s}\Big{(}\sum_{\braket{\ell-\ell^{\prime}}\leq
N,j^{\prime}}|\braket{\ell-\ell^{\prime}}^{\mathtt{b}}\langle
j\rangle^{\frac{1}{4}}\langle
j^{\prime}\rangle^{\frac{1}{4}}\partial_{\lambda}^{k}(X^{(d)})_{j}^{j^{\prime}}(\ell-\ell^{\prime})||h_{\ell^{\prime},j^{\prime}}|\Big{)}^{2}$
$\displaystyle\stackrel{{\scriptstyle(8.40)}}{{\lesssim_{k_{0}}}}N^{2\tau_{1}}\upsilon^{-2(1+\left|k\right|)}\sum_{\left|k_{2}\right|\leq\left|k\right|}\upsilon^{2\left|k_{2}\right|}\sum_{\ell,j}\braket{\ell,j}^{2s}\Big{(}\sum_{\ell^{\prime},j^{\prime}}|\braket{\ell-\ell^{\prime}}^{\mathtt{b}}\langle
j\rangle^{\frac{1}{4}}\langle
j^{\prime}\rangle^{\frac{1}{4}}\partial_{\lambda}^{k_{2}}(R_{\perp}^{(d)})_{j}^{j^{\prime}}(\ell-\ell^{\prime})||h_{\ell^{\prime},j^{\prime}}|\Big{)}^{2}$
$\displaystyle\lesssim_{k_{0}}N^{2\tau_{1}}\upsilon^{-2(1+\left|k\right|)}\sum_{\left|k_{2}\right|\leq\left|k\right|}\upsilon^{2\left|k_{2}\right|}\|\langle
D\rangle^{\frac{1}{4}}|\braket{\partial_{\varphi}}^{\mathtt{b}}\partial_{\lambda}^{k_{2}}R_{\perp}^{(d)}|\langle
D\rangle^{\frac{1}{4}}|h|\|_{s}^{2}$
$\displaystyle\stackrel{{\scriptstyle\ref{Dk0-modulo-12},\eqref{Mn.sharp}}}{{\lesssim_{k_{0}}}}N^{2\tau_{1}}\upsilon^{-2(1+\left|k\right|)}\big{(}{\mathfrak{M}}^{\sharp}(s,{\mathtt{b}})^{2}\left\|h\right\|_{s_{0}}^{2}+{\mathfrak{M}}^{\sharp}(s_{0},{\mathtt{b}})^{2}\left\|h\right\|_{s}^{2}\big{)}\,,$
and, by Definition 3.14, we conclude that ${\mathfrak{M}}_{\langle
D\rangle^{\frac{1}{4}}\langle\partial_{\varphi}\rangle^{\mathtt{b}}X^{(d)}\langle
D\rangle^{\frac{1}{4}}}^{\sharp}(s)\lesssim_{k_{0}}N^{\tau_{1}}\upsilon^{-1}{\mathfrak{M}}^{\sharp}(s,{\mathtt{b}})$.
The analogous estimates for $\braket{\partial_{\varphi}}^{\mathtt{b}}X^{(o)}$,
$X^{(d)}$, $X^{(o)}$ and (8.38), (8.39) follow similarly. By induction, the
operator ${\bf R}_{\perp}$ is reversible and momentum preserving. Therefore,
by (8.30), (8.35), (8.36) and Lemmata 3.20, 3.24, it follows that ${\bf X}$ is
reversibility and momentum preserving. ∎
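The core step of the proof above is the division by the small divisors $f(\lambda)=\omega\cdot\ell+\mu_{j}-\mu_{j^{\prime}}$, cut off on the resonant modes. The following self-contained Python sketch is purely illustrative: it uses made-up scalar parameters, a sharp cutoff in place of the smooth $\chi$ of (3.11), and a scalar toy divisor $\omega\ell+\mu$ in place of the matrix one, in order to reproduce the elementary bound $|X_{\ell}|\leq\upsilon^{-1}\braket{\ell}^{\tau}|R_{\ell}|$ behind (8.40).

```python
import math
import random

def solve_homological(R, omega, mu, upsilon, tau):
    # Divide each Fourier coefficient R[l] by the divisor omega*l + mu,
    # keeping only non-resonant modes |omega*l + mu| >= upsilon / <l>^tau
    # (a sharp cutoff standing in for the smooth cutoff chi of (3.11)).
    X = {}
    for l, Rl in R.items():
        bracket = math.sqrt(1.0 + l * l)      # Japanese bracket <l>
        divisor = omega * l + mu
        if abs(divisor) >= upsilon / bracket ** tau:
            X[l] = Rl / divisor
        else:
            X[l] = 0.0                        # resonant mode: left uncorrected
    return X

random.seed(0)
# Illustrative toy parameters: omega is tuned so that the mode l = -10 is
# resonant (omega*(-10) + mu = -1e-6 lies below the Diophantine threshold).
omega, mu, upsilon, tau = 0.0300001, 0.3, 0.05, 2.0
R = {l: random.uniform(-1.0, 1.0) * math.exp(-abs(l)) for l in range(-30, 31)}
X = solve_homological(R, omega, mu, upsilon, tau)
```

On the kept modes one has, by construction, $|X_{\ell}|\leq\upsilon^{-1}\braket{\ell}^{\tau}|R_{\ell}|$, which is the elementary mechanism behind the loss $\braket{\ell}^{\tau_{1}}$ in (8.40).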
By (8.31), (8.32), for all
$\lambda\in{\mathtt{\Lambda}}_{{\mathtt{n}}+1}^{\upsilon}$, we have
${\bf L}_{+}={\bf{\Phi}}^{-1}{\bf
L}{\bf{\Phi}}=\omega\cdot\partial_{\varphi}\mathds{1}_{\perp}+{\rm i}\,{\bf
D}_{+}+{\bf R}_{\perp}^{(+)}\,,$ (8.41)
where
$\displaystyle{\bf D}_{+}:={\bf D}-{\rm i}[{\bf R}_{\perp}]\,,$ (8.42)
$\displaystyle{\bf R}_{\perp}^{(+)}:=\Pi_{N}^{\perp}{\bf
R}_{\perp}-\int_{0}^{1}e^{-\tau{\bf X}}[{\bf X},{\bf R}_{\perp}]e^{\tau{\bf
X}}\,{\rm d}{\tau}+\int_{0}^{1}(1-\tau)e^{-\tau{\bf X}}[{\bf X},\Pi_{N}{\bf
R}_{\perp}-[{\bf R}_{\perp}]]e^{\tau{\bf X}}\,{\rm d}{\tau}\,.$
The right-hand sides of (8.41)-(8.42) define an extension of ${\bf L}_{+}$ to
the whole parameter space ${\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$,
since ${\bf R}_{\perp}$ and ${\bf X}$ are defined on
${\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$.
The new operator ${\bf L}_{+}$ in (8.41) has the same form as ${\bf L}$ in
(8.12), with a non-diagonal remainder ${\bf R}_{\perp}^{(+)}$ which is the
sum of a term $\Pi_{N}^{\perp}{\bf R}_{\perp}$ supported on the high
frequencies and a quadratic function of ${\bf X}$ and ${\bf R}_{\perp}$. The
new normal form ${\bf D}_{+}$ is diagonal:
###### Lemma 8.5.
(New diagonal part) For all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, the new
normal form is
$\displaystyle{\rm i}\,{\bf D}_{+}={\rm i}\,{\bf D}+[{\bf R}_{\perp}]={\rm
i}\begin{pmatrix}{\mathcal{D}}_{+}&0\\\
0&-\overline{{\mathcal{D}}_{+}}\end{pmatrix}\,,\
{\mathcal{D}}_{+}:=\operatorname{diag}_{j\in{\mathbb{S}}_{0}^{c}}\mu_{j}^{(+)}\,,\
\mu_{j}^{(+)}:=\mu_{j}+{\mathtt{r}}_{j}\in{\mathbb{R}}\,,$
where each ${\mathtt{r}}_{j}$ satisfies, on
${\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$,
$|j|^{\frac{1}{2}}|{\mathtt{r}}_{j}|^{k_{0},\upsilon}=|j|^{\frac{1}{2}}|\mu_{j}^{(+)}-\mu_{j}|^{k_{0},\upsilon}\lesssim{\mathfrak{M}}^{\sharp}(s_{0})\,.$
(8.43)
Moreover, given tori $i_{1}(\omega,\gamma),i_{2}(\omega,\gamma)$, we have
$|j|^{\frac{1}{2}}|{\mathtt{r}}_{j}(i_{1})-{\mathtt{r}}_{j}(i_{2})|\lesssim\|\langle
D\rangle^{\frac{1}{4}}|\Delta_{12}{\bf R}_{\perp}|\langle
D\rangle^{\frac{1}{4}}\|_{{\mathcal{L}}(H^{s_{0}})}$.
###### Proof.
Recalling (8.33), we have that ${\mathtt{r}}_{j}:=-{\rm
i}(R_{\perp}^{(d)})_{j}^{j}(0)$, for all $j\in{\mathbb{S}}_{0}^{c}$. By the
reversibility of $R_{\perp}^{(d)}$ and (3.40) we deduce that
${\mathtt{r}}_{j}\in{\mathbb{R}}$. Recalling the definition of
${\mathfrak{M}}^{\sharp}(s_{0})$ in (8.16) (with $s=s_{0}$) and Definition
3.14, we have, for all $0\leq\left|k\right|\leq k_{0}$, $\|\langle
D\rangle^{\frac{1}{4}}|\partial_{\lambda}^{k}R_{\perp}^{(d)}|\langle
D\rangle^{\frac{1}{4}}h\|_{s_{0}}\leq
2\upsilon^{-|k|}{\mathfrak{M}}^{\sharp}(s_{0})\left\|h\right\|_{s_{0}}$, and
therefore
$|j|^{\frac{1}{2}}|\partial_{\lambda}^{k}(R_{\perp}^{(d)})_{j}^{j}(0)|\lesssim\upsilon^{-\left|k\right|}{\mathfrak{M}}^{\sharp}(s_{0})\,.$
Hence (8.43) follows. The last bound for
$|j|^{\frac{1}{2}}|{\mathtt{r}}_{j}(i_{1})-{\mathtt{r}}_{j}(i_{2})|$ follows
analogously. ∎
#### The iterative step.
Assume that the statements $({\bf S1})_{{\mathtt{n}}}$-$({\bf
S3})_{{\mathtt{n}}}$ hold. We now prove $({\bf
S1})_{{\mathtt{n}}+1}$-$({\bf S3})_{{\mathtt{n}}+1}$. For simplicity, as in
other parts of the paper, we omit writing the dependence on $k_{0}$, which is
regarded as a fixed constant.
Proof of $({\bf S1})_{{\mathtt{n}}+1}$. The real operator ${\bf
X}_{{\mathtt{n}}}$ defined in Lemma 8.4 is defined for all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ and, by
(8.37), (8.17), satisfies the estimates (8.20) at the step ${\mathtt{n}}+1$.
The flow maps ${\bf{\Phi}}_{{\mathtt{n}}}^{\pm 1}=e^{\pm{\bf
X}_{{\mathtt{n}}}}$ are well defined by Lemma 3.16. By (8.41), for all
$\lambda\in{\mathtt{\Lambda}}_{{\mathtt{n}}+1}^{\upsilon}$, the conjugation
formula (8.19) holds at the step ${\mathtt{n}}+1$. The operator ${\bf
X}_{{\mathtt{n}}}$ is reversibility and momentum preserving, and so are the
operators ${\bf{\Phi}}_{{\mathtt{n}}}^{\pm 1}=e^{\pm{\bf X}_{{\mathtt{n}}}}$.
By Lemma 8.5, the operator ${\bf D}_{{\mathtt{n}}+1}$ is diagonal with
eigenvalues
$\mu_{j}^{({\mathtt{n}}+1)}:{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]\rightarrow{\mathbb{R}}$,
$\mu_{j}^{({\mathtt{n}}+1)}=\mu_{j}^{(0)}+{\mathfrak{r}}_{j}^{({\mathtt{n}}+1)}$,
where
${\mathfrak{r}}_{j}^{({\mathtt{n}}+1)}:={\mathfrak{r}}_{j}^{({\mathtt{n}})}+{\mathtt{r}}_{j}^{({\mathtt{n}})}$
satisfies, using also (8.17), the estimate (8.14) at the step ${\mathtt{n}}+1$. The next
lemma provides the estimates of the remainder ${\bf
R}_{\perp}^{({\mathtt{n}}+1)}={\bf R}_{\perp}^{(+)}$ defined in (8.42).
###### Lemma 8.6.
The operators ${\bf R}_{\perp}^{({\mathtt{n}}+1)}$ and
$\braket{\partial_{\varphi}}^{\mathtt{b}}{\bf R}_{\perp}^{({\mathtt{n}}+1)}$
are ${\mathcal{D}}^{k_{0}}$-$(-\frac{1}{2})$-modulo-tame with modulo-tame
constants satisfying, for any $s_{0}\leq s\leq S-\mu({\mathtt{b}})$,
$\displaystyle{\mathfrak{M}}_{{\mathtt{n}}+1}^{\sharp}(s)\lesssim_{s}N_{\mathtt{n}}^{-{\mathtt{b}}}{\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s,{\mathtt{b}})+N_{\mathtt{n}}^{\tau_{1}}\upsilon^{-1}{\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s){\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s_{0})\,,$
(8.44)
$\displaystyle{\mathfrak{M}}_{{\mathtt{n}}+1}^{\sharp}(s,{\mathtt{b}})\lesssim_{s,{\mathtt{b}}}{\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s,{\mathtt{b}})+N_{\mathtt{n}}^{\tau_{1}}\upsilon^{-1}\big{(}{\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s,{\mathtt{b}}){\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s_{0})+{\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s_{0},{\mathtt{b}}){\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s)\big{)}\,.$
(8.45)
###### Proof.
The estimates (8.44), (8.45) follow by (8.42), Lemmata 3.15, 3.16, (3.36) and
(8.37), (8.17), (7.24), (7.21), (8.11). ∎
###### Lemma 8.7.
The estimates (8.17) hold at the step ${\mathtt{n}}+1$.
###### Proof.
They follow from (8.44), (8.45), (8.17) at the step ${\mathtt{n}}$, (7.24), and the
smallness condition (8.11), with $N_{0}=N_{0}(S,s_{0},{\mathtt{b}})>0$ large
enough and taking $\tau_{2}>\tau_{1}+1+{\mathtt{a}}$. ∎
Finally, ${\bf R}_{\perp}^{({\mathtt{n}}+1)}$ is real, reversible and momentum
preserving, like ${\bf R}_{\perp}^{({\mathtt{n}})}$, since ${\bf X}_{\mathtt{n}}$
is real, reversibility and momentum preserving. This concludes the proof of
$({\bf S1})_{{\mathtt{n}}+1}$.
Proof of $({\bf S2})_{{\mathtt{n}}+1}$. It follows by similar arguments, which
we omit.
Proof of $({\bf S3})_{{\mathtt{n}}+1}$. Use (8.13), (7.163)-(7.164), $({\bf
S2})_{{\mathtt{n}}}$, and the momentum conditions in (8.18).
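The convergence mechanism of the iteration, namely the smoothing term $N_{\mathtt{n}}^{-{\mathtt{b}}}{\mathfrak{M}}_{\mathtt{n}}^{\sharp}(s,{\mathtt{b}})$ plus the quadratic term in (8.44)-(8.45), can be sketched numerically. The toy recursion below uses made-up constants (not those of the paper) and shows the low norm decaying super-exponentially along the scales $N_{\mathtt{n}}=N_{0}^{\chi^{\mathtt{n}}}$ while the high norm stays bounded, which is the content of Lemma 8.7, i.e. (8.17) at every step.

```python
# Toy iteration mimicking (8.44)-(8.45): m[n] plays the role of the low
# norm M_n^#(s) and M[n] of the high norm M_n^#(s, b), along the scales
# N_n = N_0^((3/2)^n). All constants are illustrative choices.
N0, chi = 10.0, 1.5
b, tau1, upsilon = 2.0, 1.0, 0.1

m, M = [1e-4], [1e-4]          # initial modulo-tame constants
for n in range(6):
    Nn = N0 ** (chi ** n)
    # low norm: smoothing term + quadratic term, as in (8.44)
    m.append(Nn ** (-b) * M[-1] + Nn ** tau1 / upsilon * m[-1] ** 2)
    # high norm: grows only through quadratic corrections, as in (8.45)
    M.append(M[-1] + Nn ** tau1 / upsilon * 2.0 * M[-1] * m[-1])
```

The smoothing term dominates after the first step, so the low norm collapses super-exponentially while the high norm increases only by a convergent series of quadratic corrections.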
### Almost invertibility of ${\mathcal{L}}_{\omega}$
By (7.162), (8.1), (7.154) and Theorem 8.3, we obtain
${\mathcal{L}}_{\omega}={\bf W}_{\overline{\mathtt{n}}}{\bf
L}_{\overline{\mathtt{n}}}{\bf
W}_{\overline{\mathtt{n}}}^{-1}+{\mathcal{W}}^{\perp}{\bf
P}_{\perp,\overline{\mathtt{n}}}({\mathcal{W}}^{\perp})^{-1}+{\mathcal{W}}^{\perp}{\bf
Q}_{\perp,\overline{\mathtt{n}}}({\mathcal{W}}^{\perp})^{-1}\,,\quad{\bf
W}_{\overline{\mathtt{n}}}:={\mathcal{W}}^{\perp}{\bf
U}_{\overline{\mathtt{n}}}\,,$ (8.46)
where the operator ${\bf L}_{\overline{\mathtt{n}}}$ is defined in (8.12) with
${\mathtt{n}}=\overline{\mathtt{n}}$ and ${\bf
P}_{\perp,\overline{\mathtt{n}}}$, ${\bf Q}_{\perp,\overline{\mathtt{n}}}$
satisfy the estimates in Lemma 7.19. By (7.153) and (8.28), we have, for some
$\sigma:=\sigma(\tau,\nu,k_{0})>0$, for any $s_{0}\leq s\leq
S-\mu({\mathtt{b}})-\sigma$,
$\|{\bf W}_{\overline{\mathtt{n}}}^{\pm
1}h\|_{s}^{k_{0},\upsilon}\lesssim_{S}\|h\|_{s+\sigma}^{k_{0},\upsilon}+\|{\mathfrak{I}}_{0}\|_{s+\mu({\mathtt{b}})+\sigma}^{k_{0},\upsilon}\|h\|_{s_{0}+\sigma}^{k_{0},\upsilon}\,.$
(8.47)
In order to verify the almost invertibility assumption (AI) of
${\mathcal{L}}_{\omega}$ in Section 6, we decompose the operator ${\bf
L}_{\overline{\mathtt{n}}}$ in (8.12) (with $\overline{\mathtt{n}}$ instead of
${\mathtt{n}}$) as
${\bf L}_{\overline{\mathtt{n}}}={\bf D}_{\overline{\mathtt{n}}}^{<}+{\bf
Q}_{\perp}^{(\overline{\mathtt{n}})}+{\bf
R}_{\perp}^{(\overline{\mathtt{n}})}$ (8.48)
where ${\bf R}_{\perp}^{(\overline{\mathtt{n}})}$ satisfies (8.29), whereas
${\bf
D}_{\overline{\mathtt{n}}}^{<}:=\Pi_{K_{\overline{\mathtt{n}}}}(\omega\cdot\partial_{\varphi}\mathds{1}_{\perp}+{\rm
i}\,{\bf
D}_{\overline{\mathtt{n}}})\Pi_{K_{\overline{\mathtt{n}}}}+\Pi_{K_{\overline{\mathtt{n}}}}^{\perp}\,,\quad{\bf
Q}_{\perp}^{(\overline{\mathtt{n}})}:=\Pi_{K_{\overline{\mathtt{n}}}}^{\perp}(\omega\cdot\partial_{\varphi}\mathds{1}_{\perp}+{\rm
i}\,{\bf
D}_{\overline{\mathtt{n}}})\Pi_{K_{\overline{\mathtt{n}}}}^{\perp}-\Pi_{K_{\overline{\mathtt{n}}}}^{\perp}\,,$
(8.49)
and the smoothing operator $\Pi_{K}$ on the traveling waves is defined in
(3.5), and $\Pi_{K}^{\perp}:={\rm Id}-\Pi_{K}$. The constants $K_{\mathtt{n}}$
in (8.49) are $K_{\mathtt{n}}:=K_{0}^{\chi^{\mathtt{n}}}$, $\chi=3/2$ (cf.
(6.11)), and $K_{0}$ will be fixed in (9.5).
###### Lemma 8.8.
(First order Melnikov non-resonance conditions) For all
$\lambda=(\omega,\gamma)$ in
$\displaystyle{\mathtt{\Lambda}}_{\overline{\mathtt{n}}+1}^{\upsilon,I}:=\Big{\\{}\lambda\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]\,:\,|\omega\cdot\ell+\mu_{j}^{(\overline{\mathtt{n}})}|\geq\upsilon\frac{\left|j\right|^{\frac{1}{2}}}{\braket{\ell}^{\tau}}\,,\,\forall\left|\ell\right|\leq
K_{\overline{\mathtt{n}}},\,j\in{\mathbb{S}}_{0}^{c}\,,j+\vec{\jmath}\cdot\ell=0\Big{\\}}\,,$
(8.50)
on the subspace of the traveling waves
$\tau_{\varsigma}g({\varphi})=g({\varphi}-\vec{\jmath}\varsigma)$,
$\varsigma\in{\mathbb{R}}$, such that $g({\varphi},\cdot)\in{\bf
H}_{{\mathbb{S}}_{0}}^{\bot}$, the operator ${\bf
D}_{\overline{\mathtt{n}}}^{<}$ in (8.49) is invertible and there exists an
extension of the inverse operator (that we denote in the same way) to the
whole ${\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$ satisfying the
estimate
$\|({\bf
D}_{\overline{\mathtt{n}}}^{<})^{-1}g\|_{s}^{k_{0},\upsilon}\lesssim_{k_{0}}\upsilon^{-1}\|g\|_{s+\tau_{1}}^{k_{0},\upsilon}\,,\quad\tau_{1}=k_{0}+\tau(k_{0}+1)\,.$
(8.51)
Moreover $({\bf D}_{\overline{\mathtt{n}}}^{<})^{-1}g$ is a traveling wave.
###### Proof.
The estimate (8.51) follows arguing as in Lemma 8.4. ∎
Standard smoothing properties imply that the operator ${\bf
Q}_{\perp}^{(\overline{\mathtt{n}})}$ in (8.49) satisfies, for any traveling
wave $h\in{\bf H}_{{\mathbb{S}}_{0}}^{\bot}$, for all $b>0$,
$\|{\bf
Q}_{\perp}^{(\overline{\mathtt{n}})}h\|_{s_{0}}^{k_{0},\upsilon}\lesssim
K_{\overline{\mathtt{n}}}^{-b}\|h\|_{s_{0}+b+1}^{k_{0},\upsilon}\,,\quad\|{\bf
Q}_{\perp}^{(\overline{\mathtt{n}})}h\|_{s}^{k_{0},\upsilon}\lesssim\|h\|_{s+1}^{k_{0},\upsilon}\,.$
(8.52)
By the decompositions (8.46), (8.48), Theorem 8.3 (note that (6.1) and Lemma
6.2 imply (7.10)), Proposition 7.20, the fact that ${\bf
W}_{\overline{\mathtt{n}}}$ maps (anti)-reversible, respectively traveling,
waves into (anti)-reversible, respectively traveling, waves (Lemma 7.17), and
the estimates (8.47), (8.51), (8.52), (3.8), we deduce the following theorem.
###### Theorem 8.9.
(Almost invertibility of ${\mathcal{L}}_{\omega}$) Assume (6.1). Let
${\mathtt{a}},{\mathtt{b}}$ be as in (7.24) and $M$ be as in (8.5). Let
$S>s_{0}+k_{0}$ and assume the smallness condition (8.27). Then the almost
invertibility assumption (AI) in Section 6 holds with ${\mathtt{\Lambda}}_{o}$
replaced by
${\bf{\Lambda}}_{\overline{\mathtt{n}}+1}^{\upsilon}:={\bf{\Lambda}}_{\overline{\mathtt{n}}+1}^{\upsilon}(i):={\mathtt{\Lambda}}_{\overline{\mathtt{n}}+1}^{\upsilon}\cap{\mathtt{\Lambda}}_{\overline{\mathtt{n}}+1}^{\upsilon,I}\cap{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}(2\upsilon,\tau)\,,$
(8.53)
(see (8.18), (8.50), (7.26)) and, with $\mu({\mathtt{b}})$ defined in (8.6),
$\displaystyle{\mathcal{L}}_{\omega}^{<}:={\bf W}_{\overline{\mathtt{n}}}{\bf
D}_{\overline{\mathtt{n}}}^{<}{\bf
W}_{\overline{\mathtt{n}}}^{-1}\,,\quad{\mathcal{R}}_{\omega}:={\bf
W}_{\overline{\mathtt{n}}}{\bf R}_{\perp}^{(\overline{\mathtt{n}})}{\bf
W}_{\overline{\mathtt{n}}}^{-1}+{\mathcal{W}}^{\perp}{\bf
P}_{\perp,\overline{\mathtt{n}}}({\mathcal{W}}^{\perp})^{-1}\,,$
$\displaystyle{\mathcal{R}}_{\omega}^{\perp}:={\bf
W}_{\overline{\mathtt{n}}}{\bf Q}_{\perp}^{(\overline{\mathtt{n}})}{\bf
W}_{\overline{\mathtt{n}}}^{-1}+{\mathcal{W}}^{\perp}{\bf
Q}_{\perp,\overline{\mathtt{n}}}({\mathcal{W}}^{\perp})^{-1}\,.$
In particular ${\mathcal{R}}_{\omega}$, ${\mathcal{R}}_{\omega}^{\perp}$
satisfy (6.14), (6.15), (6.16).
## 9 Proof of Theorem 5.1
Theorem 5.1 is a consequence of Theorem 9.1 below. In turn, Theorem 9.1 is
deduced, in a by now standard way, from the almost invertibility of
${\mathcal{L}}_{\omega}$ established in Theorem 8.9, as in [9, 2, 7]. Note that
the estimates (6.20), (6.22), (6.23), (6.24) coincide with (5.49)-(5.52) in [2]
with $M=1/2$. Therefore this section is short.
We consider the finite dimensional subspaces of traveling wave variations
$E_{\mathtt{n}}:=\big{\\{}{\mathfrak{I}}({\varphi})=(\Theta,I,w)({\varphi})\
{\rm such\ that}\ \eqref{mompres_aa1}\ {\rm holds}\ :\
\Theta=\Pi_{\mathtt{n}}\Theta\,,\ I=\Pi_{\mathtt{n}}I\,,\
w=\Pi_{\mathtt{n}}w\big{\\}}$
where $\Pi_{\mathtt{n}}w:=\Pi_{K_{\mathtt{n}}}w$ are defined as in (3.5) with
$K_{\mathtt{n}}$ in (6.11), and we denote with the same symbol
$\Pi_{\mathtt{n}}g({\varphi}):=\sum_{\left|\ell\right|\leq
K_{\mathtt{n}}}g_{\ell}e^{{\rm i}\ell\cdot{\varphi}}$. Note that the projector
$\Pi_{{\mathtt{n}}}$ maps (anti)-reversible traveling variations into
(anti)-reversible traveling variations.
In view of the Nash-Moser Theorem 9.1 we introduce the constants
$\displaystyle{\mathtt{a}}_{1}:=\max\\{6\sigma_{1}+13,\chi(p(\tau+1)+\mu({\mathtt{b}})+2\sigma_{1})+1\\}\,,\quad{\mathtt{a}}_{2}:=\chi^{-1}{\mathtt{a}}_{1}-\mu({\mathtt{b}})-2\sigma_{1}\,,$
(9.1)
$\displaystyle\mu_{1}:=3(\mu({\mathtt{b}})+2\sigma_{1})+1\,,\quad{\mathtt{b}}_{1}:={\mathtt{a}}_{1}+2\mu({\mathtt{b}})+4\sigma_{1}+3+\chi^{-1}\mu_{1}\,,\quad\chi=3/2$
(9.2)
$\displaystyle\sigma_{1}:=\max\\{\overline{\sigma},2s_{0}+2k_{0}+5\\}\,,\quad
S-\mu({\mathtt{b}})-\overline{\sigma}=s_{0}+{\mathtt{b}}_{1}\,,$ (9.3)
where $\overline{\sigma}=\overline{\sigma}(\tau,\nu,k_{0})>0$ is defined by
Theorem 6.4, $2s_{0}+2k_{0}+5$ is the largest loss of regularity in the
estimates of the Hamiltonian vector field $X_{P}$ in Lemma 6.1,
$\mu({\mathtt{b}})$ is defined in (8.6), and ${\mathtt{b}}=[{\mathtt{a}}]+2$
is defined in (7.24). The exponent $p$ in (6.11) is required to satisfy
$p{\mathtt{a}}>\tfrac{1}{2}{\mathtt{a}}_{1}+\tfrac{3}{2}\sigma_{1}\,.$ (9.4)
By (7.24) and the definition of ${\mathtt{a}}_{1}$ in (9.1), there exists
$p=p(\tau,\nu,k_{0})$ such that (9.4) holds; for example, we fix
$p:=\frac{3(\mu({\mathtt{b}})+4\sigma_{1}+1)}{{\mathtt{a}}}$.
Given a function $W=({\mathfrak{I}},\beta)$ where ${\mathfrak{I}}$ is the
periodic component of a torus as in (5.8) and $\beta\in{\mathbb{R}}^{\nu}$, we
denote
$\|W\|_{s}^{k_{0},\upsilon}:=\|{\mathfrak{I}}\|_{s}^{k_{0},\upsilon}+\left|\beta\right|^{k_{0},\upsilon}$.
###### Theorem 9.1.
(Nash-Moser) There exist $\delta_{0},C_{*}>0$ such that, if
$K_{0}^{\tau_{3}}\varepsilon\upsilon^{-4}<\delta_{0}\,,\
\tau_{3}:=\max\\{p\tau_{2},2\sigma_{1}+{\mathtt{a}}_{1}+4\\}\,,\
K_{0}:=\upsilon^{-1}\,,\ \upsilon:=\varepsilon^{\rm a}\,,\ 0<{\rm
a}<(4+\tau_{3})^{-1}\,,$ (9.5)
where $\tau_{2}=\tau_{2}(\tau,\nu)$ is given by Theorem 8.2, then, for all
${\mathtt{n}}\geq 0$:
* $({\mathcal{P}}1)_{\mathtt{n}}$
There exists a $k_{0}$-times differentiable function
${\widetilde{W}}_{\mathtt{n}}:{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]\rightarrow
E_{{\mathtt{n}}-1}\times{\mathbb{R}}^{\nu}$,
$\lambda=(\omega,\gamma)\mapsto{\widetilde{W}}_{\mathtt{n}}(\lambda):=({\widetilde{{\mathfrak{I}}}}_{\mathtt{n}},{\widetilde{\alpha}}_{\mathtt{n}}-\omega)$,
for ${\mathtt{n}}\geq 1$, and ${\widetilde{W}}_{0}:=0$, satisfying
$\|{\widetilde{W}}_{\mathtt{n}}\|_{s_{0}+\mu({\mathtt{b}})+\sigma_{1}}^{k_{0},\upsilon}\leq
C_{*}\varepsilon\upsilon^{-1}\,.$ (9.6)
Let ${\widetilde{U}}_{\mathtt{n}}:=U_{0}+{\widetilde{W}}_{\mathtt{n}}$, where
$U_{0}:=({\varphi},0,0,\omega)$. The difference
${\widetilde{H}}_{\mathtt{n}}:={\widetilde{U}}_{\mathtt{n}}-{\widetilde{U}}_{{\mathtt{n}}-1}$,
for ${\mathtt{n}}\geq 1$, satisfies
$\displaystyle\|{\widetilde{H}}_{1}\|_{s_{0}+\mu({\mathtt{b}})+\sigma_{1}}^{k_{0},\upsilon}\leq
C_{*}\varepsilon\upsilon^{-1}\,,\quad\|{\widetilde{H}}_{\mathtt{n}}\|_{s_{0}+\mu({\mathtt{b}})+\sigma_{1}}^{k_{0},\upsilon}\leq
C_{*}\varepsilon\upsilon^{-1}K_{{\mathtt{n}}-1}^{-{\mathtt{a}}_{2}}\,,\
\forall\,{\mathtt{n}}\geq 2\,.$ (9.7)
The torus embedding
$\widetilde{\imath}_{\mathtt{n}}:=({\varphi},0,0)+{\widetilde{{\mathfrak{I}}}}_{\mathtt{n}}$
is reversible and traveling, i.e. (5.7) holds.
* $({\mathcal{P}}2)_{\mathtt{n}}$
We define
${\mathcal{G}}_{0}:={\mathtt{\Omega}}\times[\gamma_{1},\gamma_{2}]\,,\quad{\mathcal{G}}_{{\mathtt{n}}+1}:={\mathcal{G}}_{{\mathtt{n}}}\cap{\bf{\Lambda}}_{{\mathtt{n}}+1}^{\upsilon}(\widetilde{\imath}_{\mathtt{n}})\,,\quad\forall\,{\mathtt{n}}\geq
0\,,$ (9.8)
where
${\bf{\Lambda}}_{{\mathtt{n}}+1}^{\upsilon}(\widetilde{\imath}_{\mathtt{n}})$
is defined in (8.53). Then, for all $\lambda\in{\mathcal{G}}_{{\mathtt{n}}}$,
setting $K_{-1}:=1$, we have
$\|{\mathcal{F}}({\widetilde{U}}_{\mathtt{n}})\|_{s_{0}}^{k_{0},\upsilon}\leq
C_{*}\varepsilon K_{{\mathtt{n}}-1}^{-{\mathtt{a}}_{1}}\,.$
* $({\mathcal{P}}3)_{\mathtt{n}}$
(High norms) For all $\lambda\in{\mathcal{G}}_{{\mathtt{n}}}$, we have
$\|{\widetilde{W}}_{\mathtt{n}}\|_{s_{0}+{\mathtt{b}}_{1}}^{k_{0},\upsilon}\leq
C_{*}\varepsilon\upsilon^{-1}K_{{\mathtt{n}}-1}^{\mu_{1}}$.
###### Proof.
The inductive proof follows exactly as in [9, 2]. The verification that each
approximate torus $\widetilde{\imath}_{\mathtt{n}}$ is reversible and
traveling is given in [7]. ∎
Theorem 5.1 is a by now standard corollary of Theorem 9.1, as in [9, 2, 7].
Let $\upsilon=\varepsilon^{\rm a}$, with $0<{\rm a}<{\rm
a_{0}}:=1/(4+\tau_{3})$. Then, the smallness condition in (9.5) is verified
for $0<\varepsilon<\varepsilon_{0}$ small enough and Theorem 9.1 holds. By
(9.7), the sequence of functions
${\widetilde{W}}_{\mathtt{n}}={\widetilde{U}}_{\mathtt{n}}-({\varphi},0,0,\omega)=({\widetilde{{\mathfrak{I}}}}_{\mathtt{n}},{\widetilde{\alpha}}_{\mathtt{n}}-\omega)$
converges to a function
$W_{\infty}:{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]\rightarrow
H_{\varphi}^{s_{0}}\times H_{\varphi}^{s_{0}}\times
H^{s_{0}}\times{\mathbb{R}}^{\nu}$, and we define
$U_{\infty}:=(i_{\infty},\alpha_{\infty}):=({\varphi},0,0,\omega)+W_{\infty}\,.$
The torus $i_{\infty}$ is reversible and traveling, i.e. (5.7) holds. By
(9.6), (9.7), we also deduce the bounds
$\displaystyle\|U_{\infty}-U_{0}\|_{s_{0}+\mu({\mathtt{b}})+\sigma_{1}}^{k_{0},\upsilon}\leq
C_{*}\varepsilon\upsilon^{-1}\,,\quad\|U_{\infty}-{\widetilde{U}}_{\mathtt{n}}\|_{s_{0}+\mu({\mathtt{b}})+\sigma_{1}}^{k_{0},\upsilon}\leq
C\varepsilon\upsilon^{-1}K_{\mathtt{n}}^{-{\mathtt{a}}_{2}}\,,\
\forall\,{\mathtt{n}}\geq 1\,.$ (9.9)
In particular (5.10)-(5.11) hold. By Theorem
9.1-$({\mathcal{P}}2)_{\mathtt{n}}$, we deduce that
${\mathcal{F}}(\lambda;U_{\infty}(\lambda))=0$ for any $\lambda$ in the set
$\bigcap_{{\mathtt{n}}\in{\mathbb{N}}_{0}}{\mathcal{G}}_{{\mathtt{n}}}={\mathcal{G}}_{0}\cap\bigcap_{{\mathtt{n}}\geq
1}{\bf{\Lambda}}_{\mathtt{n}}^{\upsilon}(\widetilde{\imath}_{{\mathtt{n}}-1})\stackrel{{\scriptstyle\eqref{bLambdan}}}{{=}}{\mathcal{G}}_{0}\cap\Big{[}\bigcap_{{\mathtt{n}}\geq
1}{\mathtt{\Lambda}}_{\mathtt{n}}^{\upsilon}(\widetilde{\imath}_{{\mathtt{n}}-1})\Big{]}\cap\Big{[}\bigcap_{{\mathtt{n}}\geq
1}{\mathtt{\Lambda}}_{\mathtt{n}}^{\upsilon,I}(\widetilde{\imath}_{{\mathtt{n}}-1})\Big{]}\cap\Big{[}\bigcap_{{\mathtt{n}}\geq
1}{\mathtt{T}}{\mathtt{C}}_{{\mathtt{n}}}(2\upsilon,\tau)(\widetilde{\imath}_{{\mathtt{n}}-1})\Big{]}\,,$
where ${\mathcal{G}}_{0}:={\mathtt{\Omega}}\times[\gamma_{1},\gamma_{2}]$. To
conclude the proof of Theorem 5.1 it remains only to define the
$\mu_{j}^{\infty}$ in (5.12) and prove that the set
${\mathcal{C}}_{\infty}^{\upsilon}$ in (5.14)-(5.18) is contained in
$\cap_{{\mathtt{n}}\geq 0}{\mathcal{G}}_{{\mathtt{n}}}$. We first define
${\mathcal{G}}_{\infty}:={\mathcal{G}}_{0}\cap\Big{[}\bigcap_{{\mathtt{n}}\geq
1}{\mathtt{\Lambda}}_{\mathtt{n}}^{2\upsilon}(i_{\infty})\Big{]}\cap\Big{[}\bigcap_{{\mathtt{n}}\geq
1}{\mathtt{\Lambda}}_{\mathtt{n}}^{2\upsilon,I}(i_{\infty})\Big{]}\cap\Big{[}\bigcap_{{\mathtt{n}}\geq
1}{\mathtt{T}}{\mathtt{C}}_{\mathtt{n}}(4\upsilon,\tau)(i_{\infty})\Big{]}\,.$
(9.10)
Using that the approximate solution ${\widetilde{U}}_{\mathtt{n}}$ is
exponentially close to the limit $U_{\infty}$ according to (9.9), and relying
on the inclusion properties of the sets of non-resonant parameters stated
precisely in Lemma 7.9 and in (8.25), one directly deduces the following lemma
(cf., e.g., Lemma 8.6 in [9]).
###### Lemma 9.2.
${\mathcal{G}}_{\infty}\subseteq\cap_{{\mathtt{n}}\geq
0}{\mathcal{G}}_{{\mathtt{n}}}$, where ${\mathcal{G}}_{{\mathtt{n}}}$ are
defined in (9.8).
Then we define the $\mu_{j}^{\infty}$ in (5.12) with
${\mathtt{m}}_{1,{\mathtt{n}}}^{\infty}:={\mathtt{m}}_{1,{\mathtt{n}}}(i_{\infty})$,
${\mathtt{m}}_{\frac{1}{2}}^{\infty}={\mathtt{m}}_{\frac{1}{2}}(i_{\infty})$,
${\mathtt{m}}_{0}^{\infty}={\mathtt{m}}_{0}(i_{\infty})$, and
${\mathtt{m}}_{1,{\mathtt{n}}},{\mathtt{m}}_{\frac{1}{2}},{\mathtt{m}}_{0}$
are provided in Proposition 7.20. By (8.14), the sequence
$({\mathfrak{r}}_{j}^{({\mathtt{n}})}(i_{\infty}))_{{\mathtt{n}}\in{\mathbb{N}}}$,
with ${\mathfrak{r}}_{j}^{({\mathtt{n}})}$ given by Theorem 8.2-$({\bf
S1})_{\mathtt{n}}$ (evaluated at $i=i_{\infty}$), is a Cauchy sequence in
$|\,\cdot\,|^{k_{0},\upsilon}$. Then we define
${\mathfrak{r}}_{j}^{\infty}:=\lim_{{\mathtt{n}}\to\infty}{\mathfrak{r}}_{j}^{({\mathtt{n}})}(i_{\infty})$,
for any $j\in{\mathbb{S}}_{0}^{c}$, which satisfies
$|j|^{\frac{1}{2}}|{\mathfrak{r}}_{j}^{\infty}-{\mathfrak{r}}_{j}^{({\mathtt{n}})}(i_{\infty})|^{k_{0},\upsilon}\leq
C\varepsilon\upsilon^{-3}N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}$ for any
${\mathtt{n}}\geq 0$. Then, recalling ${\mathfrak{r}}_{j}^{(0)}(i_{\infty})=0$
and (7.163), the estimates (5.13) hold (here $C=C(S)$ with $S$ fixed in
(9.3)). Finally one checks (see e.g. Lemma 8.7 in [9]) that the Cantor set
${\mathcal{C}}_{\infty}^{\upsilon}$ in (5.14)-(5.18) satisfies
${\mathcal{C}}_{\infty}^{\upsilon}\subseteq{\mathcal{G}}_{\infty}$, with
${\mathcal{G}}_{\infty}$ defined in (9.10), and Lemma 9.2 implies that
${\mathcal{C}}_{\infty}^{\upsilon}\subseteq\cap_{{\mathtt{n}}\geq
0}{\mathcal{G}}_{{\mathtt{n}}}$. This concludes the proof of Theorem 5.1.
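The telescoping argument used in this section to pass from the increment bounds (9.7) to the error bounds (9.9) relies only on the super-exponential growth of the scales $K_{\mathtt{n}}=K_{0}^{\chi^{\mathtt{n}}}$. A minimal numerical sketch, with illustrative values $K_{0}=10$ and ${\mathtt{a}}_{2}=1$ (not the constants of (9.1)-(9.3)), checks that each tail sum $\sum_{m>{\mathtt{n}}}K_{m}^{-{\mathtt{a}}_{2}}$ is dominated by its first missing term $K_{\mathtt{n}}^{-{\mathtt{a}}_{2}}$:

```python
# Scales K_n = K_0^(chi^n), chi = 3/2, grow super-exponentially (cf. (6.11)).
K0, chi, a2 = 10.0, 1.5, 1.0
K = [K0 ** (chi ** n) for n in range(10)]

# Tail of the series sum_m K_m^{-a2}: each tail starting at n+1 is already
# smaller than the single term K_n^{-a2}, which is why the limit error
# ||U_inf - U_n|| inherits the rate K_n^{-a2} as in (9.9).
tails = [sum(Km ** (-a2) for Km in K[n + 1:]) for n in range(9)]
```

This is the only property of the sequence $(K_{\mathtt{n}})$ used: any $\chi>1$ would produce the same domination of the tails.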
## Appendix A Almost straightening of a transport operator
The main results of this appendix are Theorem A.2 and Corollary A.4. The goal
is to almost-straighten a linear quasi-periodic transport operator of the form
$X_{0}:=\omega\cdot\partial_{\varphi}+p_{0}(\varphi,x)\partial_{x}\,,$ (A.1)
to a constant coefficient one
$\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,{\mathtt{n}}}\partial_{x}$, up
to a small term $p_{{\mathtt{n}}}\partial_{x}$, see (A.4) and (A.5). We follow
the scheme of Section 4 in [3].
We first introduce a weighted, graded Sobolev norm: for any
$u=u(\lambda)\in H^{s}({\mathbb{T}}^{\nu+1})$, $s\in{\mathbb{R}}$,
$k_{0}$-times differentiable with respect to
$\lambda=(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$,
we define
$|u|_{s}^{k_{0},\upsilon}:=\sum_{k\in{\mathbb{N}}^{\nu+1}\atop 0\leq|k|\leq
k_{0}}\upsilon^{|k|}\sup_{\lambda\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]}\|\partial_{\lambda}^{k}u(\lambda)\|_{s-|k|}\,.$
This norm satisfies the usual tame and interpolation estimates. The main reason
to use it is the estimate (A.2) for the composition operator, where there is no
loss of $k_{0}$ derivatives in the highest norm $|u|_{s}^{k_{0},\upsilon}$,
unlike the corresponding estimate (3.29) for the norm
$\|\ \|_{s}^{k_{0},\upsilon}$. This is used in a crucial way to prove (A.25)
and then to deduce the a priori bound (A.5) on the divergence of the high norms
of the functions $p_{{\mathtt{n}}}$. In the following we consider
$\mathfrak{s}_{0}:=s_{0}+k_{0}>\frac{1}{2}(\nu+1)+k_{0}\,.$
We record the following estimates, which can be proved by adapting the
arguments of [9].
###### Lemma A.1.
The following hold:
(i) For any $s\in{\mathbb{R}}$, we have
$|u|_{s}^{k_{0},\upsilon}\leq\|u\|_{s}^{k_{0},\upsilon}\leq|u|_{s+k_{0}}^{k_{0},\upsilon}$.
(ii) For any $s\geq\mathfrak{s}_{0}$, we have $|uv|_{s}^{k_{0},\upsilon}\leq
C(s)|u|_{s}^{k_{0},\upsilon}|v|_{\mathfrak{s}_{0}}^{k_{0},\upsilon}+C(\mathfrak{s}_{0})|u|_{\mathfrak{s}_{0}}^{k_{0},\upsilon}|v|_{s}^{k_{0},\upsilon}.$
The tame constant $C(s):=C(s,k_{0})$ is monotone in $s\geq\mathfrak{s}_{0}$.
(iii) For $N\geq 1$ and $\alpha\geq 0$ we have
$|\Pi_{N}u|_{s}^{k_{0},\upsilon}\leq
N^{\alpha}|u|_{s-\alpha}^{k_{0},\upsilon}$ and
$|\Pi_{N}^{\perp}u|_{s}^{k_{0},\upsilon}\leq
N^{-\alpha}|u|_{s+\alpha}^{k_{0},\upsilon}$, $\forall s\in{\mathbb{R}}$.
(iv) Let
$|\beta|_{2\mathfrak{s}_{0}+1}^{k_{0},\upsilon}\leq\delta(\mathfrak{s}_{0})$
small enough. Then the composition operator ${\mathcal{B}}$ defined as in
(7.23) satisfies the tame estimate, for any $s\geq\mathfrak{s}_{0}+1$,
$|{\mathcal{B}}u|_{s}^{k_{0},\upsilon}\leq
C(s)(|u|_{s}^{k_{0},\upsilon}+|\beta|_{s}^{k_{0},\upsilon}|u|_{\mathfrak{s}_{0}+1}^{k_{0},\upsilon}).$
(A.2)
The tame constant $C(s):=C(s,k_{0})$ is monotone in $s\geq\mathfrak{s}_{0}$.
Moreover the diffeomorphism $x\mapsto x+\beta({\varphi},x)$ is invertible and
the inverse diffeomorphism $y\mapsto y+\breve{\beta}({\varphi},y)$ satisfies,
for any $s\geq\mathfrak{s}_{0}$, $|\breve{\beta}|_{s}^{k_{0},\upsilon}\leq
C(s)|\beta|_{s}^{k_{0},\upsilon}$.
(v) For any $\epsilon>0$, $a_{0},b_{0}\geq 0$ and $p,q>0$, there exists
$C_{\epsilon}=C_{\epsilon}(p,q)>0$, with $C_{1}<1$, such that
$|u|_{a_{0}+p}^{k_{0},\upsilon}|v|_{b_{0}+q}^{k_{0},\upsilon}\leq\epsilon|u|_{a_{0}+p+q}^{k_{0},\upsilon}|v|_{b_{0}}^{k_{0},\upsilon}+C_{\epsilon}|u|_{a_{0}}^{k_{0},\upsilon}|v|_{b_{0}+p+q}^{k_{0},\upsilon}\,.$
We now state the almost straightening result for the quasi-periodic transport
operator. Recall that $N_{{\mathtt{n}}}:=N_{0}^{\chi^{{\mathtt{n}}}}$,
$\chi=3/2$, $N_{-1}:=1$, see (7.21).
###### Theorem A.2 (Almost straightening).
Consider the quasi-periodic transport operator $X_{0}$ in (A.1) where
$p_{0}(\varphi,x)$ is a quasi-periodic traveling wave, ${\rm
even}(\varphi,x)$, defined for all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$. For any
$S>\mathfrak{s}_{0}$, there exist $\tau_{2}>\tau_{1}+1+{\mathtt{a}}$,
$\updelta:=\updelta(S,\mathfrak{s}_{0},k_{0},{\mathtt{b}})\in(0,1)$ and
$N_{0}:=N_{0}(S,\mathfrak{s}_{0},k_{0},{\mathtt{b}})\in{\mathbb{N}}$ (with
$\tau_{1}$, ${\mathtt{a}}$, ${\mathtt{b}}$ defined in (7.24)) such that, if
$N_{0}^{\tau_{2}}\,|p_{0}|_{2\mathfrak{s}_{0}+{\mathtt{b}}+1}^{k_{0},\upsilon}\,\upsilon^{-1}\leq\updelta\,,$
(A.3)
then, for any $\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$, for any
${\mathtt{n}}=0,\ldots,\overline{\mathtt{n}}$, the following holds true:
$\bf(S1)_{\mathtt{n}}$ There exists a linear quasi-periodic transport operator
$X_{{\mathtt{n}}}:=\omega\cdot\partial_{\varphi}+({\mathtt{m}}_{1,{\mathtt{n}}}+p_{{\mathtt{n}}}({\varphi},x))\partial_{x}\,,$
(A.4)
defined for all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, where
$p_{\mathtt{n}}({\varphi},x)$ is a quasi-periodic traveling wave function,
${\rm even}({\varphi},x)$, such that, for any $\mathfrak{s}_{0}\leq s\leq S$,
$|p_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}\leq
C(s,{\mathtt{b}})N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}\,,\quad|p_{{\mathtt{n}}}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}\leq
C(s,{\mathtt{b}})N_{{\mathtt{n}}-1}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}\,,$
(A.5)
for some constant $C(s,{\mathtt{b}})\geq 1$ monotone in
$s\in[\mathfrak{s}_{0},S]$, and ${\mathtt{m}}_{1,{\mathtt{n}}}$ is a real
constant satisfying
$|{\mathtt{m}}_{1,{\mathtt{n}}}|^{k_{0},\upsilon}\leq
2\,|p_{0}|_{\mathfrak{s}_{0}+{\mathtt{b}}}^{k_{0},\upsilon}\,,\quad|{\mathtt{m}}_{1,{\mathtt{n}}}-{\mathtt{m}}_{1,{\mathtt{n}}-1}|^{k_{0},\upsilon}\leq
C(\mathfrak{s}_{0},{\mathtt{b}})N_{{\mathtt{n}}-2}^{-{\mathtt{a}}}|p_{0}|_{\mathfrak{s}_{0}+{\mathtt{b}}}^{k_{0},\upsilon}\,,\,\forall{\mathtt{n}}\geq
2\,.$ (A.6)
Let ${\mathtt{\Lambda}}_{0}^{\rm
T}:={\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, and, for
${\mathtt{n}}\geq 1$,
$\displaystyle{\mathtt{\Lambda}}_{{\mathtt{n}}}^{\rm T}$
$\displaystyle:={\mathtt{\Lambda}}_{{\mathtt{n}}}^{\upsilon,\rm T}(p_{0})$
(A.7)
$\displaystyle:=\big{\\{}(\omega,\gamma)\in{\mathtt{\Lambda}}_{{\mathtt{n}}-1}^{\rm
T}\,:\,|(\omega-{\mathtt{m}}_{1,{\mathtt{n}}-1}\vec{\jmath})\cdot\ell|\geq\upsilon\braket{\ell}^{-\tau}\
\forall\,\ell\in{\mathbb{Z}}^{\nu}\setminus\\{0\\}\,,\ |\ell|\leq
N_{{\mathtt{n}}-1}\big{\\}}\,.$
For ${\mathtt{n}}\geq 1$, there exists a quasi-periodic traveling wave
function $g_{{\mathtt{n}}-1}({\varphi},x)$, ${\rm odd}({\varphi},x)$, defined
for all $(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$,
fulfilling for any $\mathfrak{s}_{0}\leq s\leq S$,
$|g_{{\mathtt{n}}-1}|_{s}^{k_{0},\upsilon}\leq
C(s)N_{{\mathtt{n}}-1}^{\tau_{1}}\upsilon^{-1}|\Pi_{N_{{\mathtt{n}}-1}}p_{{\mathtt{n}}-1}|_{s}^{k_{0},\upsilon}\,,$
(A.8)
for some constant $C(s)\geq 1$ monotone in $s\in[\mathfrak{s}_{0},S]$, such
that, defining the composition operators
$({\mathcal{G}}_{{\mathtt{n}}-1}u)({\varphi},x):=u({\varphi},x+g_{{\mathtt{n}}-1}({\varphi},x))\,,\
\
({\mathcal{G}}_{{\mathtt{n}}-1}^{-1}u)({\varphi},y):=u({\varphi},y+\breve{g}_{{\mathtt{n}}-1}({\varphi},y))\,,$
where $x=y+\breve{g}_{{\mathtt{n}}-1}({\varphi},y)$ is the inverse
diffeomorphism of $y=x+g_{{\mathtt{n}}-1}({\varphi},x)$, the following
conjugation formula holds: for any $(\omega,\gamma)$ in the set
${\mathtt{\Lambda}}_{{\mathtt{n}}}^{\rm T}$ (cf. (A.7)) we have
$X_{{\mathtt{n}}}={\mathcal{G}}_{{\mathtt{n}}-1}^{-1}\,X_{{\mathtt{n}}-1}\,{\mathcal{G}}_{{\mathtt{n}}-1}\,.$
(A.9)
$\bf(S2)_{\mathtt{n}}$ Let $\Delta_{12}p_{0}:=p_{0,1}-p_{0,2}$. For any
$s_{1}\in[s_{0}+1,S]$, there exist $C(s_{1})>0$ and
$\updelta^{\prime}(s_{1})\in(0,1)$ such that if
$N_{0}^{\tau_{2}}\sup_{(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]}\big{(}\|p_{0,1}\|_{s_{1}+{\mathtt{b}}}+\|p_{0,2}\|_{s_{1}+{\mathtt{b}}}\big{)}\upsilon^{-1}\leq\updelta^{\prime}(s_{1})\,,$
(A.10)
then, for all $(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times{\mathbb{R}}$,
$\displaystyle\|\Delta_{12}p_{\mathtt{n}}\|_{s_{1}-1}\leq
C({s_{1}})N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}\|\Delta_{12}p_{0}\|_{s_{1}+{\mathtt{b}}}\,,\quad\|\Delta_{12}p_{\mathtt{n}}\|_{s_{1}+{\mathtt{b}}}\leq
C({s_{1}})N_{{\mathtt{n}}-1}\|\Delta_{12}p_{0}\|_{s_{1}+{\mathtt{b}}}$ (A.11)
$\displaystyle|\Delta_{12}({\mathtt{m}}_{1,{\mathtt{n}}+1}-{\mathtt{m}}_{1,{\mathtt{n}}})|\leq\|\Delta_{12}p_{\mathtt{n}}\|_{s_{0}}\,,\quad|\Delta_{12}{\mathtt{m}}_{1,{\mathtt{n}}}|\leq
C({s_{1}})\|\Delta_{12}p_{0}\|_{s_{0}}\,.$ (A.12)
Moreover for any $s\geq s_{0}$,
$\displaystyle\|\Delta_{12}g_{\mathtt{n}}\|_{s}\lesssim_{s}\upsilon^{-1}\big{(}\|\Pi_{N_{{\mathtt{n}}}}\Delta_{12}p_{\mathtt{n}}\|_{s+\tau}+\upsilon^{-1}|\Delta_{12}{\mathtt{m}}_{1,{\mathtt{n}}}|\|\Pi_{N_{{\mathtt{n}}}}p_{{\mathtt{n}},2}\|_{s+2\tau+1}\big{)}\,.$
(A.13)
We deduce the following corollaries.
###### Corollary A.3.
For any $\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$ we have the inclusion
${\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}({\mathtt{m}}_{1,\overline{\mathtt{n}}},2\upsilon,\tau)\subset{\mathtt{\Lambda}}_{\overline{\mathtt{n}}+1}^{\upsilon,\rm
T}$ where the set
${\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}({\mathtt{m}}_{1,\overline{\mathtt{n}}},2\upsilon,\tau)$
is defined in (7.26).
###### Proof.
When $\overline{\mathtt{n}}=0$, by definition we have
${\mathtt{T}}{\mathtt{C}}_{1}(2\upsilon,\tau)\subset{\mathtt{\Lambda}}_{1}^{\upsilon,\rm
T}$. Let
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}({\mathtt{m}}_{1,\overline{\mathtt{n}}},2\upsilon,\tau)$.
For any $k=0,\ldots,\overline{\mathtt{n}}-1$ we have, by (A.6),
$|{\mathtt{m}}_{1,\overline{\mathtt{n}}}-{\mathtt{m}}_{1,k}|\lesssim_{\mathfrak{s}_{0},{\mathtt{b}}}N_{k-1}^{-{\mathtt{a}}}|p_{0}|_{\mathfrak{s}_{0}+{\mathtt{b}}}^{k_{0},\upsilon}\,.$
(A.14)
By (7.26) and (A.14), for all $0<|\ell|\leq N_{k}$,
$\displaystyle|(\omega-{\mathtt{m}}_{1,k}\vec{\jmath})\cdot\ell|$
$\displaystyle\geq|(\omega-{\mathtt{m}}_{1,\overline{\mathtt{n}}}\vec{\jmath})\cdot\ell|-|{\mathtt{m}}_{1,\overline{\mathtt{n}}}-{\mathtt{m}}_{1,k}||\vec{\jmath}||\ell|$
$\displaystyle\geq
2\upsilon\braket{\ell}^{-\tau}-CN_{k-1}^{-{\mathtt{a}}}|p_{0}|_{\mathfrak{s}_{0}+{\mathtt{b}}}^{k_{0},\upsilon}|\ell|\geq\upsilon\braket{\ell}^{-\tau}$
if
$CN_{k}^{\tau+1}N_{k-1}^{-{\mathtt{a}}}|p_{0}|_{\mathfrak{s}_{0}+{\mathtt{b}}}^{k_{0},\upsilon}\upsilon^{-1}<1$,
which is satisfied by (A.3) and (7.24). Thus, recalling (A.7), we have proved
that
$(\omega,\gamma)\in{\mathtt{\Lambda}}_{\overline{\mathtt{n}}+1}^{\upsilon,\rm
T}$. ∎
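The membership condition in (A.7) involves only finitely many Fourier sites, so it can be tested directly. The following sketch is purely illustrative (the parameter values and the choice of the sup-norm truncation $|\ell|_{\infty}\leq N$ are ours): it checks the non-resonance inequality $|(\omega-{\mathtt{m}}_{1}\vec{\jmath})\cdot\ell|\geq\upsilon\braket{\ell}^{-\tau}$ for all $0<|\ell|\leq N$.

```python
import itertools
import numpy as np

def satisfies_diophantine(omega, m1, jvec, upsilon, tau, N):
    """Check the non-resonance condition of (A.7):
    |(omega - m1 * jvec) . ell| >= upsilon * <ell>^(-tau)
    for every nonzero integer vector ell with |ell|_inf <= N
    (the sup-norm truncation is an arbitrary choice here)."""
    shifted = np.asarray(omega, dtype=float) - m1 * np.asarray(jvec, dtype=float)
    nu = len(shifted)
    for ell in itertools.product(range(-N, N + 1), repeat=nu):
        if all(e == 0 for e in ell):
            continue
        ell = np.asarray(ell, dtype=float)
        bracket = np.sqrt(1.0 + ell @ ell)  # Japanese bracket <ell>
        if abs(shifted @ ell) < upsilon * bracket ** (-tau):
            return False
    return True
```

For instance, a frequency vector with a golden-ratio component passes the test for small $\upsilon$, while replacing that component by $1/2$ produces an exact resonance at $\ell=(1,-2)$.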
###### Corollary A.4.
For any $\overline{\mathtt{n}}\in{\mathbb{N}}_{0}$ and
$(\omega,\gamma)\in{\mathtt{T}}{\mathtt{C}}_{\overline{\mathtt{n}}+1}({\mathtt{m}}_{1,\overline{\mathtt{n}}},2\upsilon,\tau)$
we have the conjugation formula
$X_{\overline{\mathtt{n}}}={\mathcal{B}}_{\overline{\mathtt{n}}}^{-1}X_{0}{\mathcal{B}}_{\overline{\mathtt{n}}}\qquad\text{
where}\qquad{\mathcal{B}}_{0}:={\rm
Id}\,,\quad{\mathcal{B}}_{\overline{\mathtt{n}}}:={\mathcal{G}}_{0}\circ\cdots\circ{\mathcal{G}}_{\overline{\mathtt{n}}-1}\,,\
\overline{\mathtt{n}}\geq 1\,,$
and $X_{\overline{\mathtt{n}}}$ is given in (A.4) with
${\mathtt{n}}=\overline{\mathtt{n}}$. Moreover, when
$\overline{\mathtt{n}}\geq 1$, for any
${\mathtt{n}}=1,\ldots,\overline{\mathtt{n}}$, each
${\mathcal{B}}_{{\mathtt{n}}}$ is the composition operator induced by the
diffeomorphism of the torus $x\mapsto x+\beta_{{\mathtt{n}}}({\varphi},x)$,
$({\mathcal{B}}_{{\mathtt{n}}}u)({\varphi},x)=u({\varphi},x+\beta_{{\mathtt{n}}}({\varphi},x))$,
where the function $\beta_{{\mathtt{n}}}$ is a quasi-periodic traveling wave,
${\rm odd}({\varphi},x)$, satisfying, for any $\mathfrak{s}_{0}\leq s\leq S$,
for some constant $\underline{C}(S)\geq 1$,
$|\beta_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}\leq\underline{C}(S)\upsilon^{-1}N_{0}^{\tau_{1}}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}\,.$
(A.15)
Furthermore, for $p_{0,1},p_{0,2}$ fulfilling (A.10), we have
$\|\Delta_{12}\beta_{\overline{\mathtt{n}}}\|_{s_{1}}\leq\overline{C}({S})\upsilon^{-1}N_{0}^{\tau}\|\Delta_{12}p_{0}\|_{s_{1}+{\mathtt{b}}}$.
###### Proof.
Let $\overline{\mathtt{n}}\geq 1$; we argue by induction on
${\mathtt{n}}=1,\ldots,\overline{\mathtt{n}}$. For ${\mathtt{n}}=1$ we have
that $\beta_{1}=g_{0}$. Hence, using (A.8), we get, for any
$\mathfrak{s}_{0}\leq s\leq S$,
$|\beta_{1}|_{s}^{k_{0},\upsilon}\leq
C(S)\upsilon^{-1}N_{0}^{\tau_{1}}|p_{0}|_{s}^{k_{0},\upsilon}\,,$ (A.16)
which proves (A.15) for ${\mathtt{n}}=1$ and provided $\underline{C}(S)\geq
C(S)$. If $\overline{\mathtt{n}}\geq 2$, for
${\mathtt{n}}=2,\ldots,\overline{\mathtt{n}}$ the operator
${\mathcal{B}}_{{\mathtt{n}}}={\mathcal{B}}_{{\mathtt{n}}-1}\circ{\mathcal{G}}_{{\mathtt{n}}-1}$
is the composition operator induced by the diffeomorphism
$\beta_{{\mathtt{n}}}({\varphi},x)=\beta_{{\mathtt{n}}-1}({\varphi},x)+g_{{\mathtt{n}}-1}({\varphi},x+\beta_{{\mathtt{n}}-1}({\varphi},x))=\beta_{{\mathtt{n}}-1}({\varphi},x)+\\{{\mathcal{B}}_{{\mathtt{n}}-1}g_{{\mathtt{n}}-1}\\}({\varphi},x)\,.$
(A.17)
Since $g_{0}({\varphi},x)$ is a quasi-periodic traveling wave ${\rm odd}({\varphi},x)$, each $\beta_{{\mathtt{n}}}({\varphi},x)$ is a quasi-periodic traveling wave ${\rm odd}({\varphi},x)$. We now assume by induction that (A.15) holds up to ${\mathtt{n}}-1$. We first prove that, for any
$k=2,...,{\mathtt{n}}$, we have, for any $\mathfrak{s}_{0}\leq s\leq S$,
$\displaystyle|\beta_{k}-\beta_{k-1}|_{s}^{k_{0},\upsilon}$
$\displaystyle\stackrel{{\scriptstyle\eqref{diff.n}}}{{=}}|{\mathcal{B}}_{k-1}g_{k-1}|_{s}^{k_{0},\upsilon}\stackrel{{\scriptstyle\eqref{est:compo}}}{{\leq}}C(s)\left(|g_{k-1}|_{s}^{k_{0},\upsilon}+|\beta_{k-1}|_{s}^{k_{0},\upsilon}|g_{k-1}|_{\mathfrak{s}_{0}+1}^{k_{0},\upsilon}\right)$
$\displaystyle\stackrel{{\scriptstyle\eqref{gtn.est.better},\eqref{stime.pn.w},\eqref{stima.B2}}}{{\leq}}C(S,{\mathtt{b}})N_{k-1}^{\tau_{1}}\upsilon^{-1}N_{k-2}^{-{\mathtt{a}}}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}$
$\displaystyle\qquad\qquad\qquad+C(S,{\mathtt{b}})\underline{C}(S)\upsilon^{-2}N_{0}^{\tau_{1}}N_{k-1}^{\tau_{1}}N_{k-2}^{-{\mathtt{a}}}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}|p_{0}|_{\mathfrak{s}_{0}+{\mathtt{b}}+1}^{k_{0},\upsilon}$
$\displaystyle\stackrel{{\scriptstyle\eqref{small.V.as.AP}}}{{\leq}}C(S,{\mathtt{b}})\,(1+\underline{C}(S))\,\upsilon^{-1}N_{k-1}^{\tau_{1}}N_{k-2}^{-{\mathtt{a}}}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}\,.$
(A.18)
By (A.18) and (A.16), we derive, for any
${\mathtt{n}}=2,\ldots,\overline{\mathtt{n}}$ and setting
$b:={\mathtt{a}}-\frac{1}{2}\tau_{1}\geq 1$ (see (7.24))
$\displaystyle|\beta_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}$
$\displaystyle\leq\sum_{k=2}^{{\mathtt{n}}}|\beta_{k}-\beta_{k-1}|_{s}^{k_{0},\upsilon}+|\beta_{1}|_{s}^{k_{0},\upsilon}$
$\displaystyle\leq\big{(}C(S,{\mathtt{b}})\,(1+\underline{C}(S))\,N_{0}^{-b}+C(S)\big{)}\upsilon^{-1}N_{0}^{\tau_{1}}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}$
$\displaystyle\leq\underline{C}(S)\upsilon^{-1}N_{0}^{\tau_{1}}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}\,$
provided $C(S,{\mathtt{b}})\,N_{0}^{-b}\leq\frac{1}{2}$ and
$\underline{C}(S):=1+2C(S)$. This proves (A.15) at the step ${\mathtt{n}}$.
The estimate for $\Delta_{12}\beta_{\overline{\mathtt{n}}}$ follows similarly
by (A.11)–(A.13), (A.5), (A.10). ∎
###### Remark A.5.
If the function $p_{0}(\varphi,x)$ in (A.1) is not a quasi-periodic traveling wave, the same kind of conjugation result holds, requiring in (A.7) the non-resonance conditions
$|\omega\cdot\ell+{\mathtt{m}}_{1,{\mathtt{n}}-1}j|\geq\upsilon\braket{\ell}^{-\tau},\
\forall\,(\ell,j)\in({\mathbb{Z}}^{\nu}\times{\mathbb{Z}})\setminus\\{0\\}\,,\
|(\ell,j)|\leq N_{{\mathtt{n}}-1}\,.$
###### Proof of Theorem A.2.
The proof is inductive. In Lemma A.6 we prove that the norms
$|p_{\mathtt{n}}|_{s}^{k_{0},\upsilon}$ satisfy inequalities typical of a
Nash-Moser iterative scheme, which converges under the low-norm smallness condition (A.3).
The step ${\mathtt{n}}=0$. The items $\bf(S1)_{0}$, $\bf(S2)_{0}$, hold with
${\mathtt{m}}_{1,0}:=0$ (the estimates (A.6), (A.5) are trivial, as well as
(A.11)-(A.12)).
The reducibility step. We now describe the generic inductive step, showing how
to transform $X_{{\mathtt{n}}}$ in (A.4) into $X_{{\mathtt{n}}+1}$ by
conjugating with the composition operator ${\mathcal{G}}_{{\mathtt{n}}}$
induced by the diffeomorphism $x\mapsto x+g_{{\mathtt{n}}}(\varphi,x)$ for a periodic
function $g_{{\mathtt{n}}}(\varphi,x)$ (defined in (A.20)). A direct
computation gives
$\displaystyle{\mathcal{G}}_{{\mathtt{n}}}^{-1}\,X_{{\mathtt{n}}}\,{\mathcal{G}}_{{\mathtt{n}}}$
$\displaystyle=\omega\cdot\partial_{\varphi}+\\{{\mathcal{G}}_{{\mathtt{n}}}^{-1}\big{(}\omega\cdot\partial_{\varphi}g_{{\mathtt{n}}}+({\mathtt{m}}_{1,{\mathtt{n}}}+p_{{\mathtt{n}}})(1+(g_{{\mathtt{n}}})_{x})\big{)}\\}\partial_{y}$
$\displaystyle=\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,{\mathtt{n}}}\partial_{y}+\\{{\mathcal{G}}_{{\mathtt{n}}}^{-1}\big{(}(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,{\mathtt{n}}}\partial_{x})g_{{\mathtt{n}}}+p_{{\mathtt{n}}}+p_{{\mathtt{n}}}(g_{{\mathtt{n}}})_{x}\big{)}\\}\partial_{y}\,.$
We choose $g_{{\mathtt{n}}}({\varphi},x)$ as the solution of the homological
equation
$(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,{\mathtt{n}}}\partial_{x})g_{{\mathtt{n}}}({\varphi},x)+\Pi_{N_{{\mathtt{n}}}}p_{{\mathtt{n}}}=\braket{p_{{\mathtt{n}}}}_{{\varphi},x}$
(A.19)
where $\braket{p_{{\mathtt{n}}}}_{{\varphi},x}$ is the average of
$p_{{\mathtt{n}}}$ defined as in (3.6). So we define
$g_{{\mathtt{n}}}({\varphi},x):=-(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,{\mathtt{n}}}\partial_{x})_{\rm
ext}^{-1}(\Pi_{N_{{\mathtt{n}}}}p_{{\mathtt{n}}}-\braket{p_{{\mathtt{n}}}}_{{\varphi},x})$
(A.20)
where the operator
$(\omega\cdot\partial_{\varphi}+{\mathtt{m}}_{1,{\mathtt{n}}}\partial_{x})_{\rm
ext}^{-1}$ is introduced in (3.10). The function
$g_{{\mathtt{n}}}({\varphi},x)$ is defined for all parameters
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$, it is a
quasi-periodic traveling wave, ${\rm odd}({\varphi},x)$, fulfills (A.8) at the
step ${\mathtt{n}}$ (by (3.12)), and for any $(\omega,\gamma)$ in the set
${\mathtt{\Lambda}}_{{\mathtt{n}}+1}^{\rm T}$ defined in (A.7), it solves the
homological equation (A.19). By (A.8) at the step ${\mathtt{n}}$, (A.5),
(A.3), ${\mathtt{a}}\geq\chi\tau_{1}+3$ (see (7.24))
$|g_{{\mathtt{n}}}|_{2\mathfrak{s}_{0}+1}^{k_{0},\upsilon}\leq
C(\mathfrak{s}_{0})N_{{\mathtt{n}}}^{\tau_{1}}N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}|p_{0}|_{2\mathfrak{s}_{0}+{\mathtt{b}}+1}^{k_{0},\upsilon}\upsilon^{-1}<\delta(\mathfrak{s}_{0})\,$
(A.21)
provided $N_{0}$ is large enough. By Lemma A.1-item 4, the diffeomorphism
$y=x+g_{{\mathtt{n}}}(\varphi,x)$ is invertible and its inverse
$x=y+\breve{g}_{{\mathtt{n}}}(\varphi,y)$ (which induces the operator
${\mathcal{G}}_{{\mathtt{n}}}^{-1}$) satisfies
$|\breve{g}_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}\leq
C(s)|g_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}\,.$ (A.22)
For any $(\omega,\gamma)$ in ${\mathtt{\Lambda}}_{{\mathtt{n}}+1}^{\rm T}$,
the operator
$X_{{\mathtt{n}}+1}={\mathcal{G}}_{{\mathtt{n}}}^{-1}\,X_{{\mathtt{n}}}\,{\mathcal{G}}_{{\mathtt{n}}}$
takes the form (A.4) at step ${\mathtt{n}}+1$ with
${\mathtt{m}}_{1,{\mathtt{n}}+1}:={\mathtt{m}}_{1,{\mathtt{n}}}+\braket{p_{{\mathtt{n}}}}_{{\varphi},x}\in{\mathbb{R}}\,,\quad
p_{{\mathtt{n}}+1}({\varphi},y):=\\{{\mathcal{G}}_{{\mathtt{n}}}^{-1}\big{(}\Pi_{N_{{\mathtt{n}}}}^{\perp}p_{{\mathtt{n}}}+p_{{\mathtt{n}}}(g_{{\mathtt{n}}})_{x}\big{)}\\}({\varphi},y)\,.$
(A.23)
This verifies (A.9) at step ${\mathtt{n}}+1$. Note that the constant
${\mathtt{m}}_{1,{\mathtt{n}}+1}\in{\mathbb{R}}$ and the function
$p_{{\mathtt{n}}+1}({\varphi},y)$ in (A.23) are defined for all
$(\omega,\gamma)\in{\mathbb{R}}^{\nu}\times[\gamma_{1},\gamma_{2}]$.
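The core of each step is the Fourier-diagonal solve of the homological equation (A.19). The following toy sketch (ours, not part of the proof) works with one angle $\nu=1$ and the non-resonance conditions of Remark A.5; near-resonant sites are simply zeroed, a crude stand-in for the smooth extension (3.10).

```python
def solve_homological(p_hat, omega, m1, N, upsilon, tau):
    """Solve (omega . d_phi + m1 d_x) g = <p> - Pi_N p in Fourier space,
    cf. (A.19)-(A.20), for a single angle (nu = 1).  p_hat maps a mode
    (l, j) to the Fourier coefficient of p.  Divisors i*(omega*l + m1*j)
    below the threshold of Remark A.5 are set to zero -- a crude stand-in
    for the smooth extended inverse (3.10)."""
    g_hat = {}
    for (l, j), c in p_hat.items():
        if (l, j) == (0, 0) or max(abs(l), abs(j)) > N:
            continue  # average subtracted; high modes cut by Pi_N
        divisor = omega * l + m1 * j
        bracket = (1.0 + l * l + j * j) ** 0.5
        if abs(divisor) >= upsilon * bracket ** (-tau):
            g_hat[(l, j)] = -c / (1j * divisor)
        else:
            g_hat[(l, j)] = 0.0  # near-resonant site: extension zeroed here
    return g_hat
```

On the non-resonant sites one checks directly that $\mathrm{i}(\omega\ell+{\mathtt{m}}_{1}j)\,\widehat{g}(\ell,j)=-\widehat{p}(\ell,j)$, i.e. (A.19) holds mode by mode.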
In order to prove the inductive estimates (A.5), we first show the following
iterative estimates of Nash-Moser type.
###### Lemma A.6.
The function $p_{{\mathtt{n}}+1}$ defined in (A.23) satisfies, for any
$\mathfrak{s}_{0}\leq s\leq S$,
$\displaystyle|p_{{\mathtt{n}}+1}|_{s}^{k_{0},\upsilon}\leq
C_{1}(s)\big{(}N_{{\mathtt{n}}}^{-{\mathtt{b}}}|p_{{\mathtt{n}}}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}+N_{{\mathtt{n}}}^{\tau_{1}+1}\upsilon^{-1}|p_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}|p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}}^{k_{0},\upsilon}\big{)}$
(A.24)
$\displaystyle|p_{{\mathtt{n}}+1}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}\leq
C_{2}(s,{\mathtt{b}})\big{(}|p_{{\mathtt{n}}}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}+N_{{\mathtt{n}}}^{\tau_{1}+1}\upsilon^{-1}|p_{{\mathtt{n}}}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}|p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}}^{k_{0},\upsilon}\big{)}$
(A.25)
where the positive constants $C_{1}(s),C_{2}(s,{\mathtt{b}})$ are monotone in
$\mathfrak{s}_{0}\leq s\leq S$.
###### Proof.
We first show the estimate (A.24). We write $p_{{\mathtt{n}}+1}$ in (A.23) as
$p_{{\mathtt{n}}+1}:={\mathcal{G}}_{{\mathtt{n}}}^{-1}F_{{\mathtt{n}}}$ with
$F_{{\mathtt{n}}}:=\Pi_{N_{{\mathtt{n}}}}^{\perp}p_{{\mathtt{n}}}+p_{{\mathtt{n}}}(g_{{\mathtt{n}}})_{x}$.
By Lemma A.1-item 2, we get
$|F_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}\leq|\Pi_{N_{{\mathtt{n}}}}^{\perp}p_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}+C(s)|p_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}|g_{{\mathtt{n}}}|_{\mathfrak{s}_{0}+1}^{k_{0},\upsilon}+C(\mathfrak{s}_{0})|p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}}^{k_{0},\upsilon}|g_{{\mathtt{n}}}|_{s+1}^{k_{0},\upsilon}\,.$
(A.26)
By (A.2), (A.22), (A.26), (A.8) at step ${\mathtt{n}}$, Lemma A.1 and (A.21),
we have
$\displaystyle|p_{{\mathtt{n}}+1}|_{s}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{s}|F_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}+|g_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}|F_{{\mathtt{n}}}|_{\mathfrak{s}_{0}+1}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{s}|\Pi_{N_{{\mathtt{n}}}}^{\perp}p_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}+|p_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}|g_{{\mathtt{n}}}|_{\mathfrak{s}_{0}+1}^{k_{0},\upsilon}+|p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}}^{k_{0},\upsilon}|g_{{\mathtt{n}}}|_{s+1}^{k_{0},\upsilon}$
$\displaystyle\ \
+|g_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}\big{(}|\Pi_{N_{{\mathtt{n}}}}^{\perp}p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}+1}^{k_{0},\upsilon}+|p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}+1}^{k_{0},\upsilon}|g_{{\mathtt{n}}}|_{\mathfrak{s}_{0}+1}^{k_{0},\upsilon}+|p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}}^{k_{0},\upsilon}|g_{{\mathtt{n}}}|_{\mathfrak{s}_{0}+2}^{k_{0},\upsilon}\big{)}$
$\displaystyle\lesssim_{s}|\Pi_{N_{{\mathtt{n}}}}^{\perp}p_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}+|p_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}|g_{{\mathtt{n}}}|_{\mathfrak{s}_{0}+1}^{k_{0},\upsilon}+|p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}}^{k_{0},\upsilon}|g_{{\mathtt{n}}}|_{s+1}^{k_{0},\upsilon}+N_{{\mathtt{n}}}^{\tau_{1}+1}\upsilon^{-1}|p_{{\mathtt{n}}}|_{s-1}^{k_{0},\upsilon}|p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}+1}^{k_{0},\upsilon}$
$\displaystyle\lesssim_{s}N_{{\mathtt{n}}}^{-{\mathtt{b}}}|p_{{\mathtt{n}}}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}+N_{{\mathtt{n}}}^{\tau_{1}+1}\upsilon^{-1}|p_{{\mathtt{n}}}|_{s}^{k_{0},\upsilon}|p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}}^{k_{0},\upsilon}\,,$
(A.27)
which is (A.24). The estimate (A.25) follows as for (A.27) (with
$s\rightsquigarrow s+{\mathtt{b}}$). ∎
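The mechanism behind (A.24)-(A.25) can be illustrated by iterating the corresponding scalar inequalities. All constants below are invented (the actual values are those fixed in (7.24), not reproduced here), and the superexponential scales $N_{{\mathtt{n}}}=N_{0}^{\chi^{\mathtt{n}}}$ are assumed with a toy $\chi$.

```python
# Toy constants: chi, tau1, b, C, upsilon are invented, not the values
# fixed in (7.24); eps and E stand in for the low and high norms
# |p_n|_{s0} and |p_n|_{s0+b}.
chi, tau1, b, C, upsilon = 1.5, 2.0, 6.0, 2.0, 0.1
eps, E = 1e-8, 1e-8
history = []
for n in range(8):
    N = 10.0 ** (chi ** n)  # assumed scales N_n = N_0^(chi^n)
    eps, E = (
        C * (N ** (-b) * E + N ** (tau1 + 1) / upsilon * eps ** 2),  # cf. (A.24)
        C * (E + N ** (tau1 + 1) / upsilon * E * eps),               # cf. (A.25)
    )
    history.append(eps)
```

With these toy values the low norm collapses superexponentially while the high norm stays under control, which is exactly the dichotomy exploited in Lemma A.7.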
As a corollary of the previous lemma we deduce the following lemma.
###### Lemma A.7.
The estimates (A.5)-(A.6) hold at the step ${\mathtt{n}}+1$.
###### Proof.
By (A.24) and (A.5) we have, for any $\mathfrak{s}_{0}\leq s\leq S$,
$\displaystyle|p_{{\mathtt{n}}+1}|_{s}^{k_{0},\upsilon}$ $\displaystyle\leq
C_{1}(S)\,C(s,{\mathtt{b}})\big{(}N_{{\mathtt{n}}}^{-{\mathtt{b}}}N_{{\mathtt{n}}-1}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}+C(\mathfrak{s}_{0},{\mathtt{b}})\upsilon^{-1}N_{{\mathtt{n}}}^{\tau_{1}+1}N_{{\mathtt{n}}-1}^{-2{\mathtt{a}}}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}|p_{0}|_{\mathfrak{s}_{0}+{\mathtt{b}}}^{k_{0},\upsilon}\big{)}$
$\displaystyle\leq
C(s,{\mathtt{b}})N_{{\mathtt{n}}}^{-{\mathtt{a}}}|p_{0}|_{s+{\mathtt{b}}}^{k_{0},\upsilon}$
asking that
$C_{1}(S)N_{{\mathtt{n}}}^{-{\mathtt{b}}}N_{{\mathtt{n}}-1}\leq\tfrac{1}{2}N_{{\mathtt{n}}}^{-{\mathtt{a}}}$
and
$C_{1}(S)C(\mathfrak{s}_{0},{\mathtt{b}})\upsilon^{-1}N_{{\mathtt{n}}}^{\tau_{1}+1}N_{{\mathtt{n}}-1}^{-2{\mathtt{a}}}|p_{0}|_{\mathfrak{s}_{0}+{\mathtt{b}}}^{k_{0},\upsilon}\leq\tfrac{1}{2}N_{{\mathtt{n}}}^{-{\mathtt{a}}}$,
which both follow by (7.24), the smallness assumption (A.3) and taking
$N_{0}:=N_{0}(S)>0$ sufficiently large. This proves the first estimate of
(A.5) at step ${\mathtt{n}}+1$. The second follows in a similar way,
possibly increasing $N_{0}$.
Finally we have, by (A.23) and the first inequality in (A.5),
$|{\mathtt{m}}_{1,{\mathtt{n}}+1}-{\mathtt{m}}_{1,{\mathtt{n}}}|^{k_{0},\upsilon}=|\braket{p_{{\mathtt{n}}}}_{\varphi,x}|^{k_{0},\upsilon}\leq|p_{{\mathtt{n}}}|_{\mathfrak{s}_{0}}^{k_{0},\upsilon}\leq
C(\mathfrak{s}_{0},{\mathtt{b}})N_{{\mathtt{n}}-1}^{-{\mathtt{a}}}|p_{0}|_{\mathfrak{s}_{0}+{\mathtt{b}}}^{k_{0},\upsilon}\,,$
(A.28)
proving the second estimate (A.6) at step ${\mathtt{n}}+1$. Writing
${\mathtt{m}}_{1,{\mathtt{n}}+1}=\sum_{j=0}^{{\mathtt{n}}}({\mathtt{m}}_{1,j+1}-{\mathtt{m}}_{1,j})$
and recalling that ${\mathtt{m}}_{1,0}=0$, we deduce by (A.28) the first
estimate (A.6) at step ${\mathtt{n}}+1$. ∎
The proof of $\bf(S1)_{{\mathtt{n}}+1}$ is complete. The item
$\bf(S2)_{{\mathtt{n}}+1}$ follows by similar inductive arguments. The proof
of Theorem A.2 is concluded. ∎
#### Acknowledgements.
We thank Riccardo Montalto for many useful discussions. The work of the author
L.F. is supported by Tamkeen under the NYU Abu Dhabi Research Institute grant
CG002.
11institutetext: Santa Clara University, Santa Clara CA 95050, USA
11email<EMAIL_ADDRESS>
# Investors Embrace Gender Diversity, Not Female CEOs: The Role of Gender in
Startup Fundraising
Christopher Cassion Yuhang Qian Constant Bossou
Margareta Ackerman
###### Abstract
The allocation of venture capital is one of the primary factors determining
who takes products to market, which startups succeed or fail, and as such who
gets to participate in the shaping of our collective economy. While gender
diversity contributes to startup success, most funding is allocated to male-
only entrepreneurial teams. In the wake of COVID-19, 2020 is seeing a notable
decline in funding to female and mixed-gender teams, giving rise to an urgent
need to study and correct the longstanding gender bias in startup funding
allocation.
We conduct an in-depth data analysis of over 48,000 companies on Crunchbase,
comparing funding allocation based on the gender composition of founding
teams. Detailed findings across diverse industries and geographies are
presented. Further, we construct machine learning models to predict whether
startups will reach an equity round, revealing the surprising finding that the
CEO’s gender is _the_ primary determining factor for attaining funding. Policy
implications for this pressing issue are discussed.
###### Keywords:
gender bias, venture capital, diversity, entrepreneurship.
As gender equality continues to make strides across a wide range of industries
from STEM to medicine, there is a critical sphere where bias persists:
Compared to their male counterparts, women have little access to startup
funds, restricting them from engaging in our economy at this critical level.
According to Pitchbook, in 2019, female founders raised just 2.7% of the total
venture capital funding invested, and mixed-gender founding teams received
12.9% [1].
The COVID-19 pandemic is having severe economic consequences for female entrepreneurs. Compared with 2019, the first quarter of 2020 saw a
decline in the proportion of deals made with female and mixed-gender teams and
funding allocated to female teams. In the third quarter of 2020, funding given
to female-only teams dropped to 1.8% with mixed-gender teams receiving just
11.1% [1]. There is an urgent need for understanding the nature of this
persistent bias and uncovering effective solutions for systemic change.
In the United States, only 10-15% of startups are founded by women [2]. Yet the number of women starting companies is not the primary issue; the far more important problem is their lack of access to capital [3]. The funding gap
between male and female founders is particularly high at the early stage of a
venture, with an analysis of California and Massachusetts startups revealing
that female-led ventures are 63% less likely than male-led ones to obtain VC
funding [4].
While it is generally well-known in the venture community that men have an
easier time raising funds, much remains unclear. In order to ascertain
effective solutions, it is necessary to gain insight into the nature of the
problem. For instance, does having a woman on a founding team increase or
decrease fundraising outcomes? What role does the gender of the CEO play
compared to the gender of other founders? If funding is successfully raised,
how does gender impact the amount raised? How much does gender matter in
different geographic regions and across industries?
We perform an in-depth data analysis of over 48,000 companies on Crunchbase.
Our analysis suggests the presence of bias against women across geographies
and industries, which extends not only to female-only but also to mixed-gender
teams with female CEOs. We also construct machine-learning models (Decision
Tree, Random Forest, Logistic Regression, Gradient Boosted Trees, and Multi-
layer Perceptron (MLP)) to predict whether a founding team will reach a priced
funding round.111Raising a priced round is a major milestone that offers
startups the means to succeed. Our findings show the CEO’s gender (in startups, the role of CEO is most often taken by one of the founders, nearly ubiquitously at early stages) to be the most important founder
characteristic for predicting fundraising success, beating critical features
including whether the founders attended top universities and the number of
prior exits. We discuss the implications of these findings to the utilization
of machine learning models in venture capital allocation, and make
recommendations for systemic change.
## 1 Background
Gender plays a key role across the lifetime of an entrepreneurial journey:
Women are less likely to become entrepreneurs than men [5] and less likely to
get external funding once a new venture is founded [6]. The funding gap
between male and female founders is higher at the early stage of the venture
than at later stages [7]. Women are 65% less likely to get funded at early
stages and 35% less likely to be funded at later stages, when strong signals
of growth are available [4].
Consequently, women-owned businesses rely heavily on internal funding (e.g., personal finances) rather than funding from others, both debt and equity, to finance their firms [7]. Even though the number of women-owned firms is increasing rapidly [8], they still lag behind their male counterparts in receiving external funding.
Previous work provides valuable insight into the role of gender in the
allocation of Venture Capital funds. However, the data used in previous
studies, such as those above, tends to be geographically limited (focusing on
individual countries, often the US), or to consist of only several hundred
instances. Many questions remain to be answered: How wide is the gender gap in
the allocation of venture capital funding across geographic regions and
industries? Does gender diversity help or hinder fundraising outcomes? Does
the gender of the CEO play a special role compared to other founders?
In order to gain a broader understanding of the nature and prevalence of gender
bias in VC, we perform the most comprehensive analysis to date on the impact
of gender on startup funding across geographies and industry verticals,
utilizing both statistical methods and machine learning techniques. We are
careful to account for the potential influence of the pipeline problem,
whereby fewer women seek to engage in entrepreneurship.333The pipeline
problem is often perceived as the primary cause of the gender gap in startup
funding allocation, suggesting that the gap would be eliminated if women were
as interested as men in pursuing entrepreneurship. We devise and apply
analysis methods that shed light on these issues in a manner that cannot be
reduced to the pipeline problem. The data analysis helps inform our policy
recommendations, and we hope that it will support future research on resolving
gender bias in startup funding allocation.
## 2 Methodology
We rely on Crunchbase data to attain a data set of over 48,000 companies along
with founder information. We consider four gender compositions: founding teams
consisting entirely of male founders (male-only), founding teams consisting
entirely of female founders (female-only), teams with at least one female and
at least one male founder led by a male CEO (mixed male-led), and teams with
at least one female and at least one male founder led by a female CEO (mixed
female-led). Companies with these gender and leadership compositions are
subsequently compared, with emphasis on funding raised across a variety of
industries and geographies. We then construct machine learning models to
ascertain the importance of the team’s gender composition and the leader’s
gender in funding outcomes.
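The four-way split described above can be sketched as a small classification helper (a sketch only; the label strings and input representation are our own):

```python
def team_category(founder_genders, ceo_gender):
    """Assign one of the four gender compositions compared in the paper.
    founder_genders: one label ('female'/'male') per founder;
    ceo_gender: gender of the CEO (or of the sole founder)."""
    genders = set(founder_genders)
    if genders == {"male"}:
        return "male-only"
    if genders == {"female"}:
        return "female-only"
    # Mixed team: the CEO's gender decides the leadership label.
    return "mixed female-led" if ceo_gender == "female" else "mixed male-led"
```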
### 2.1 Data Collection
The data was obtained from Crunchbase, which prides itself on being “the leading destination for company insights from early-stage startups to the Fortune 1000.”444https://www.crunchbase.com/ Crunchbase provides two major
types of data: Information on companies and data on individuals in leadership
positions. We separately retrieved both types of data as they include some
non-overlapping features. For instance, gender information is only available
as a founder attribute and is absent from the company description.
Our final dataset is an integration of the company and founder data. We first
downloaded data of 224,000 companies and 175,000 founders with attributes of
interest to our analysis. We then combined the two datasets by matching the
Company’s Website attribute in both the founders and companies datasets to
produce a new dataset of 63,462 data points. The combined dataset contains all
the attributes of the companies and all the aggregated attributes of the
founders dataset. We dropped all rows with missing values in the key
attributes (headquarters region, total funding raised, and industry) and
obtained a final dataset of 48,676 entries. 58.14% of the companies are led by
multi-member founding teams. It is worth noting that companies ranked higher
by Crunchbase tend to have fewer missing values.
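The merge-and-filter pipeline just described can be sketched with pandas on a toy table. The column names and miniature data below are assumptions for illustration, not the actual Crunchbase schema:

```python
import pandas as pd

# Hypothetical miniature stand-ins for the Crunchbase exports;
# column names are assumptions, not the actual Crunchbase schema.
companies = pd.DataFrame({
    "company_website": ["a.com", "b.com", "c.com"],
    "total_funding_usd": [5_000_000, None, 12_000_000],
    "industry": ["Data", "Commerce", "Apps"],
    "hq_region": ["North America", "Europe", None],
})
founders = pd.DataFrame({
    "company_website": ["a.com", "a.com", "b.com", "c.com"],
    "founder_gender": ["female", "male", "male", "male"],
})

# Aggregate founder attributes per company, then join on the shared
# website key, mirroring the merge described in the text.
founder_agg = (founders.groupby("company_website")["founder_gender"]
                       .agg(list).reset_index())
merged = companies.merge(founder_agg, on="company_website", how="inner")

# Drop rows missing any of the key attributes (region, funding, industry).
clean = merged.dropna(subset=["hq_region", "total_funding_usd", "industry"])
print(len(clean))  # only a.com survives both the merge and the dropna
```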
While not all founders are present in the founders dataset, founders’ names
are present in the company dataset as comma-separated attributes. Whenever a
founder’s gender is missing from the founder dataset, we rely on a machine-
learning model for gender classification based on names (we used the
name-based gender classifier at https://github.com/clintval/gender-predictor,
retrained to an accuracy of 97.10%).
Another important aspect is identifying who is leading the startup. We define
the leader as the CEO, or the sole founder for one-person founding teams. To
determine leadership, we inspect the job titles of all of the founders on
Crunchbase who have the company as their primary organization. We classify
the company as male-led or female-led based on the gender of the identified
leader. Female-only and male-only companies are respectively female-led and
male-led.
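The leadership rule can be expressed as a small function; the data shapes here are hypothetical:

```python
# Sketch of the leadership rule described above: the leader is the CEO,
# or the sole founder for one-person teams. Input shape is hypothetical.
def company_led_by(founders):
    """founders: list of (job_title, gender) tuples for one company."""
    if len(founders) == 1:
        return founders[0][1]           # sole founder leads
    for title, gender in founders:
        if "CEO" in title.upper():
            return gender               # CEO of a multi-founder team
    return None                         # leader could not be identified

print(company_led_by([("Founder & CEO", "female"), ("CTO", "male")]))  # female
print(company_led_by([("Founder", "male")]))                           # male
```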
We reclassified the industries attribute by reducing over one hundred
industries down to thirty through combining closely related industries, from
amongst which twenty industries with over 300 companies each were selected. We
then took the first industry that each company lists as its primary industry.
The company’s headquarters region was used to identify its location.
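A sketch of the consolidation and primary-industry choice; the mapping entries below are invented examples, not the paper's actual thirty-bucket mapping:

```python
# Map fine-grained Crunchbase industries onto broader buckets, then take
# the first listed industry as primary. Mapping entries are invented.
CONSOLIDATE = {
    "E-Commerce": "Commerce", "Retail": "Commerce",
    "Big Data": "Data", "Analytics": "Data",
}

def primary_industry(industry_field):
    """industry_field: comma-separated Crunchbase industries string."""
    first = industry_field.split(",")[0].strip()
    return CONSOLIDATE.get(first, first)

print(primary_industry("Big Data, Retail"))  # Data
```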
### 2.2 Attribute Statistics
Before delving into extensive analysis, we share some basic statistics about
the data. As shown in Figure 1, the overall gender distribution of the 48,676
companies in our dataset consists of 7.13% female-only companies, 80.22%
male-only companies, 3.26% mixed female-led companies, and 9.39% mixed
male-led companies.
Figure 1: Number of companies of each gender composition type.

Figure 2: Average funding by gender composition of founding teams. Values in
tens of millions of USD.
Our analysis includes 20 industries, each consisting of at least 300 companies
(see Figure 3 for the list of industries). We omit locations with fewer than
1,500 companies, resulting in three major geographic regions: North America,
Europe, and Asia-Pacific. Since 64.01% of companies are located in North
America, we also include a detailed analysis focusing on companies based in
the top four US startup hubs: the Silicon Valley Bay Area, Greater New York
Area, Greater Los Angeles Area, and Greater Boston Area. Lastly, 94.84% of the
startups in our data were founded in or after the year 2000. Please see the
Appendix for additional information about the data set.
## 3 Data Analysis
In this section, we analyse the funding allocated to founding teams with
different gender compositions. Results are reported across diverse industries
and geographic regions.
### 3.1 Analysis by industry
We begin with an analysis of funding allocation across the 20 most dominant
industries identified in our data. As shown in Figure 3, there are far more
male-only companies than female-only and mixed-gender companies across all
industries. The industries Data, Commerce, and Apps have the largest number of
companies, while Gaming, Agriculture and Farming, and Administrative Services
have the fewest. In all but 5 of the 20 industries, the next biggest category
is male-led mixed-gender teams.
Figure 3: Number of companies for each gender composition type by industry.
All 20 industries are dominated by male-only teams.

Figure 4: Total funding allocation for founding teams with different gender
compositions across industries. Values in hundreds of billions of USD. Total
funding across all twenty industries is dominated by companies founded by
male-only founders, except in the Food industry, where mixed-gender male-led
teams raised more funding than any other group type.

Figure 5: Average funding allocation for founding teams with different gender
compositions across industries. Values in hundreds of millions of USD. Of the
20 industries, average funding is highest for mixed-gender male-led teams in
11 industries and for male-only teams in 7 industries.
As shown in Figure 4, male-only and male-led mixed-gender teams receive the
great majority of funding. In particular, male-only teams receive 31 times
more funding than female-only teams, a significant difference
(statistic=11.0715, _P_ $<0.0001$). Male-only teams also get 47 times more
funding than mixed-gender female-led teams, again significant
(statistic=5.8197, _P_ $<0.0001$). Similarly, mixed-gender male-led teams
receive 8.5 times more funding than mixed-gender female-led teams
(statistic=3.5987, _P_ $<0.0001$) and also significantly more than female-only
teams (statistic=4.2129, _P_ $<0.0001$).
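Each comparison above pairs a funding ratio with a test statistic and P value. The source does not name the test, so the sketch below assumes a Welch two-sample t statistic, computed with the standard library on invented funding amounts:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances, unequal sizes)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Invented funding amounts in millions of USD for two team types; the real
# analysis would use the Crunchbase totals per gender composition.
male_only = [10, 12, 14, 30, 8, 50]
female_only = [2, 3, 1, 4]

ratio = sum(male_only) / sum(female_only)
print(f"ratio={ratio:.1f}x, statistic={welch_t(male_only, female_only):.4f}")
```

A positive statistic indicates the first group's mean exceeds the second's; the P value would follow from the t distribution with Welch-Satterthwaite degrees of freedom.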
Male-only teams receive more funding in 19 of the 20 industries. In 18 of the
20 industries, there is a significant difference between the amount raised by
male-only teams compared with female-only teams, the exceptions being
agriculture & farming and biotechnology, where the difference is not
significant. On the other hand, the difference between male-only and mixed
male-led teams is often insignificant, with a significant difference found in
only 4 of the 20 industries.
For example, in the Data industry, male-only teams get significantly more
funding than mixed female-led teams (statistic=6.0181, _P_ $<0.0001$) and
female-only teams (statistic=4.6942, _P_ $<0.0001$), but not significantly
more than mixed male-led teams (statistic=0.6319, _P_ $=0.5276$). Similarly,
in Commerce, male-only teams get significantly more funding than mixed
female-led teams (statistic=3.4581, _P_ $=0.0005$) and female-only teams
teams (statistic=2.6360, _P_ $=0.0084$). Of the industries studies, Food is
the only one where male-only teams did not raise the largest amount of total
funding, however, the difference between male-only and mixed male-led teams
was not significant (statistic=0.9567, _P_ $=0.3427$).
### 3.2 Average funding by industry
The pipeline problem, the fact that fewer women engage in entrepreneurship, is
often perceived as the primary factor in the discrepancy in funding
allocation. In order to gain insight into the nature of the issue beyond the
pipeline problem, we consider the average funding allocated to teams that have
successfully raised funds, comparing the amounts raised against the gender of
the founding teams. This analysis helps gain insight while offering an
accessible demonstration of a potential gender gap to lay audiences.
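The average-funding comparison amounts to a grouped mean over funded startups only. A toy sketch, with invented amounts and hypothetical column names:

```python
import pandas as pd

# Toy funding table: team_type follows the paper's four compositions,
# amounts (USD) and column names are invented for illustration.
df = pd.DataFrame({
    "industry":  ["Data", "Data", "Data", "Food", "Food"],
    "team_type": ["male-only", "male-only", "female-only",
                  "mixed male-led", "mixed female-led"],
    "funding":   [20e6, 40e6, 5e6, 80e6, 10e6],
})

# Mean funding per gender composition within each industry, computed over
# funded startups only; group sizes do not enter the mean, which is what
# lets this comparison sidestep the pipeline problem.
avg = df.groupby(["industry", "team_type"])["funding"].mean()
print(avg)
```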
As shown in Figure 5, male-only and male-led founding teams lead in average
funding, receiving the highest average amounts across most industries (18 out
of 20). In 11 of the 20 industries, mixed-gender male-led teams achieve the
highest average funding, compared with 7 industries where male-only teams
raise the most. In industries including Food and Administrative Services,
there is a substantial gap between the average funding given to mixed male-led
teams and to male-only teams, with the mixed teams raising a greater amount.
Of the twenty industries, there are only two (Energy and Education) where
female-led teams receive more average funding. Notably, there are no
industries where female-only teams raise the greatest amount of average
funding. Unlike total funding, this persistent gap in average funding to
startups that successfully raise funds cannot be explained by low numbers of
women entrepreneurs.
Comparing companies led by women, we find that in 9 of the 20 industries
female-only teams receive more average funding than mixed-gender female-led
startups.
### 3.3 Analysis by geography
Figure 6: Average funding allocation to founding teams with different gender
compositions across dominant continents. Values in hundreds of millions of
USD. All continents under consideration reveal the same ranking, with male-led
mixed-gender teams receiving the highest average funding, followed by
male-only teams, then female-led mixed-gender teams, and finally female-only
teams receiving the lowest average funding.

Figure 7: Average funding allocation to founding teams with different gender
compositions across US startup hubs. Values in tens of millions of USD. In the
US, the Silicon Valley Bay Area and New York allocate the highest average
funding to male-only teams, whereas male-led mixed-gender groups in the
Greater Los Angeles Area and Boston receive more average funding than all
other group types.

Figure 8: Total funding by major geographic regions. Values in hundreds of
millions of USD. Male-only founding teams receive the greatest amount of
funding in all regions considered, while female-only and female-led teams
receive the least funding.

Figure 9: Total funding by major US startup hubs. Values in hundreds of
billions of USD. Male-only founding teams receive the greatest amount of
funding in all hubs considered, while female-only and female-led teams receive
the least.
Considering the average funding allocation shown in Figure 6, we see the same
ranking across all continents, with female-only teams receiving the lowest
average funding, followed by female-led mixed-gender teams, then male-only
teams, and finally mixed-gender male-led teams receiving the highest average
capital.
Analyzing startup hubs in the United States, shown in Figure 7, we discover
that in Silicon Valley and New York male-only teams receive the highest
average funding, narrowly beating mixed-gender male-led teams. LA and Boston
follow the global trend of giving mixed-gender male-led teams the highest
average funding, followed by male-only teams. In Silicon Valley, New York, and
LA, female-only teams receive the least average funding, followed by mixed
female-led teams. However, in Boston, female-only teams receive more average
funding than mixed-gender female-led teams.
In summary, companies with male CEOs receive greater funding across all
continents and US startup hubs compared with companies with female CEOs.
Mixed-gender teams perform well with respect to fundraising, often better than
male-only teams, when they are led by male CEOs.
When comparing total funding for teams of different gender compositions across
continents (see Figure 8), we find that male-only teams receive the great
majority of funding in all three continents considered. Europe appears to
exhibit the greatest preference for male-only teams: such groups receive over
65.5 times more funding than female-only companies, a significant difference
(statistic=6.5061, _P_ $<0.0001$). European male-only teams also raise
significantly more money, 92.9 times as much, than mixed female-led teams
(statistic=7.6873, _P_ $<0.0001$). Comparing mixed-gender teams, those led by
men raise 15.3 times more funding than female-led groups (statistic=2.0306,
_P_ $=0.0427$). Male-only companies in Europe raise 6 times more total funding
than mixed-gender teams led by men, though the difference is not significant
(statistic=0.6232, _P_ $=0.5327$).
In Asia-Pacific, male-only companies receive over 37 times more funding than
female-only companies, a significant difference (statistic=5.1950, _P_
$<0.0001$). Male-only teams raise over 43 times more than mixed-gender
female-led teams, though not significantly (statistic=1.0063, _P_ $=0.3157$).
Comparing mixed-gender teams, those led by men raise 12 times more funding
than female-led groups, again not significantly (statistic=1.8761, _P_
$=0.0611$). Finally, male-only teams raise 3.5 times more than male-led
mixed-gender teams, a difference that is not statistically significant
(statistic=1.6322, _P_ $=0.1032$).
Looking at total funding for teams of different gender compositions across US
startup hubs (see Figure 9), and similar to the continental analysis,
male-only teams receive the great majority of funding in all four major hubs.
Silicon Valley exhibits some of the strongest preference for male founders,
with male-only companies receiving over 27 times more funding than female-only
companies (statistic=2.9081, _P_ $=0.0038$) and over 35 times more than
mixed-gender female-led companies (statistic=2.7921, _P_ $=0.0054$), both
differences significant.
In the Greater Los Angeles Area, male-only companies receive over 23 times
more funding than female-only companies, a significant difference
(statistic=3.3092, _P_ $=0.0010$). Comparing male-only to mixed female-led
companies, male-only teams raised over 30 times more, though not significantly
(statistic=1.0888, _P_ $=0.2787$). New York gives male-only companies over 21
times more funding than female-only companies (statistic=6.8847, _P_
$<0.0001$) and over 37 times more than mixed-gender female-led companies
(statistic=3.4533, _P_ $=0.0006$), both results being significant.
Finally, analysis of the Boston area shows that male-only companies receive
about 16 times more funding than female-only companies (statistic=1.8629, _P_
$=0.0639$) and about 41 times more than mixed-gender female-led companies
(statistic=1.4243, _P_ $=0.1599$); here, however, the results were not
significant.
## 4 Predictive Models
Venture capitalists’ primary aim is to identify startups that will become
successful in the future. As such, machine learning models have been playing
an increasingly important role in the venture capital space (see, for example,
[9], [10] and [11]). Indeed, many venture capital firms build their own custom
models, which they do not make public in order to maintain a competitive
advantage. While predictive models can be used at any stage of investment, the
problem is particularly challenging for early-stage startups, prior to the
availability of qualitative data on company performance. Prediction for
later-stage startups benefits from information on factors such as revenue and
growth, making it significantly more accurate. Early-stage investments, on the
other hand, which are often pre-revenue and precede product-market fit, rely
primarily on founder characteristics.
One of the primary risks of using machine learning models, from an ethical
perspective, is the perpetuation and even amplification of existing biases.
For instance, in the context of credit markets, Black and Hispanic borrowers
are disproportionately less likely to gain from the introduction of machine
learning [12].
How much gender bias is present in startup data? To what degree does the
utilization of machine learning models stand to perpetuate, or even amplify,
gender bias in venture capital? We explore this direction by creating several
machine learning models based on founder characteristics, the dominant
characteristics available for early stage investments. We then analyze feature
importance to ascertain how much the predictions rely on the gender
composition of founding teams and the gender of the CEOs. Note that our
exploration differs significantly from prior work in predictive modeling for
startup success, since we are interested specifically in the importance of
gender for attaining a priced funding round. By contrast, most work in the
field aims to predict startup success by incorporating information about the
startup itself, including quantitative success indicators such as total
funding raised and number of employees.
### 4.1 Feature Selection
In order to ascertain investor behaviour prior to having clear success
indicators available, we focus exclusively on founder characteristics. (As
mentioned above, while a gender gap exists at all startup stages, investors
are most reluctant to invest in women in the early stages, where female
ventures are 65% less likely to receive funding [4].) However, it is essential
to avoid including features that would be heavily altered by the target
variable. For instance, social media presence stands to change for founders
who have successfully raised funding. Similarly, information about investments
made by the founders is heavily influenced by their entrepreneurial success
and is therefore also omitted.
#### 4.1.1 Training Features
To build the model, we first extract a set of features related to the founders
from the aggregated dataset discussed in Methodology. The following features
have been selected:
* •
Male Led: Boolean variable that is True if the CEO or sole founder is Male,
False otherwise
* •
Gender Composition: If the founding team is male-only, female-only, mixed
male-led, or mixed female-led
* •
Total Previously Founded Organizations: Total number of companies previously
founded by members of the founding team
* •
Average Previously Founded Organizations: The average number of companies
previously founded by the founders
* •
Has Previously Founded Organizations: Boolean variable indicating True if any
of the founders previously founded an organization
* •
Total Number of Exits: Total number of exit events in which the founders
participated
* •
Average Number of Exits: Average number of exit events for the company
founders
* •
Has Exits: Boolean variable indicating True if any of the founders had
previously founded a company that had an exit event
* •
Total Number of Founders: Number of founders of the company
* •
Multiple Founders: Boolean variable set to True if the founding team consists
of two or more founders
* •
Same Alma Mater: Boolean variable indicating True if all of the founders went
to the same university, False otherwise
* •
% from Top School: Percentage of founders that went to a top 100 school [13]
* •
Top School Attended: Boolean variable set to True if any of the founders went
to a top 100 school [13]
#### 4.1.2 Target Feature
The goal of these experiments is to determine, based on its founders, whether
a startup reached an equity funding round. An equity round is when a startup
sells shares of the startup in exchange for a large investment (generally well
over a million dollars). Equity rounds are important to the life cycle of
startups largely because they provide a significant monetary influx into the
company and represent a vote of confidence from the venture capital community,
which helps with subsequent rounds.
We separate the dataset into two funding-stage groups: pre-equity rounds and
post-equity rounds. We define pre-equity rounds as those whose latest funding
stage is an Angel Round, Pre-Seed, Seed Round, or Convertible Note, and
post-equity rounds as those whose latest funding stage is Series A, Series B
or beyond, or a Corporate Round. Using this split, we construct models to
predict whether a founding team has reached a priced round.
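The stage-to-label mapping described above can be sketched as follows; the stage names follow the text's groupings, with a "Series C+" bucket standing in for "Series B or beyond":

```python
# Map each company's latest funding stage to a binary target:
# 1 = reached a priced (equity) round, 0 = pre-equity only.
# Stage names follow the groupings in the text; "Series C+" is a
# stand-in bucket for "Series B or beyond".
PRE_EQUITY = {"Angel Round", "Pre-Seed", "Seed Round", "Convertible Note"}
POST_EQUITY = {"Series A", "Series B", "Series C+", "Corporate Round"}

def reached_priced_round(latest_stage):
    if latest_stage in POST_EQUITY:
        return 1
    if latest_stage in PRE_EQUITY:
        return 0
    return None  # stage outside either grouping: excluded from training

print(reached_priced_round("Series A"))    # 1
print(reached_priced_round("Seed Round"))  # 0
```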
### 4.2 Model Analysis
Using a hyperparameter grid search to obtain the best model of each type, we
constructed the following models for each of the worldwide and US datasets:
(1) Decision Tree (DT), (2) Random Forest, (3) Logistic Regression (LR), (4)
Gradient Boosted Trees (GBT), and (5) Multi-Layer Perceptron (MLP). MLP had
the highest accuracy on worldwide data, at $63.73\%$. Similar results were
found for US data, with MLP giving the highest accuracy of $63.61\%$. Table 1
and Table 2 summarize the results for worldwide and US data, respectively. All
models performed comparably, with worldwide accuracies varying by only 0.86%.
Model | AUC | Precision | Recall | Accuracy
---|---|---|---|---
Decision Tree | 53.30 | 59.90 | 53.23 | 62.91
Random Forest | 53.20 | 59.78 | 53.17 | 62.86
Logistic Regression | 52.70 | 61.38 | 52.73 | 63.00
Gradient Boosted Trees | 53.00 | 60.04 | 53.04 | 62.88
Multi-Layer Perceptron | 53.80 | 63.92 | 53.80 | 63.72
Table 1: Predictive model results for worldwide data.
Early stage predictions are known to be highly challenging. It is essential to
emphasize that no information about the companies has been provided beyond
founder features, in order to ascertain the impact of gender on early stage
investing. It is unlikely that much higher accuracy is possible without
incorporating features beyond the scope of founder characteristics.
Model | AUC | Precision | Recall | Accuracy
---|---|---|---|---
Decision Tree | 54.20 | 58.36 | 54.17 | 60.68
Random Forest | 54.70 | 58.90 | 54.70 | 61.00
Logistic Regression | 55.80 | 60.97 | 55.83 | 62.08
Gradient Boosted Trees | 55.00 | 60.15 | 55.00 | 61.52
Multi-Layer Perceptron | 57.60 | 63.63 | 57.59 | 63.61
Table 2: Predictive model results for the US dataset.
#### 4.2.1 Feature Importance in Tree-Based Models
Considering feature importance enables us to ascertain how significant
gender-related characteristics are compared with other founder attributes,
such as prior exits or whether founders attended top schools. Table 3 and
Table 4 detail the feature importance for the Decision Tree and Random Forest
models. (We report feature importance for the interpretable tree-based models,
emphasizing that all models obtained comparable accuracy; importance analysis
for the other models is left for future work.) The results show that whether a
company is led by a male CEO is by far the most important feature in the
decision tree model, and also the top feature in the random forest model for
both the worldwide and US-only datasets.
Feature | Decision Tree | Feature | Random Forest
---|---|---|---
Male Led | 25.62$\%$ | Male Led | 15.50$\%$
% from top school | 18.81$\%$ | Total Number of Founders | 12.97$\%$
Has Exits | 18.81$\%$ | % from top school | 11.26$\%$
Same Alma Mater | 10.10$\%$ | Same Alma Mater | 9.12$\%$
Total Number of Founders | 6.99$\%$ | Has Exits | 7.86$\%$
Table 3: Decision Tree and Random Forest feature importance for the top 5
features on worldwide data. Whether the company is led by a male CEO is the
most important feature for both the decision tree and random forest models,
more important than the features addressing the number of prior exits and the
founders’ alma mater.

Feature | Decision Tree | Feature | Random Forest
---|---|---|---
Male Led | 19.34$\%$ | Male Led | 18.40$\%$
Number of Exits | 19.29$\%$ | Number of Founders | 13.78$\%$
% from top school | 14.81$\%$ | % from top school | 9.38$\%$
Same Alma Mater | 12.46$\%$ | Same Alma Mater | 7.91$\%$
Number of Founders | 8.22$\%$ | Avg Number of Exits | 7.21$\%$
Table 4: Predictive model feature importance on US data. According to both the
decision tree and random forest models, the most important feature for
reaching a priced round in the US is whether the company is led by a male CEO.
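Importance rankings like those reported above can be read from a fitted tree's `feature_importances_` attribute. The sketch below uses synthetic data in which the label is deliberately constructed to depend mainly on a `male_led` flag, so the resulting ranking is illustrative only, not a reproduction of the paper's result (scikit-learn assumed):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 1000
# Synthetic founder features; the names mirror the paper's, but the label
# below is constructed to depend mostly on male_led, so the resulting
# importance ranking is illustrative, not a finding.
male_led = rng.integers(0, 2, n)
has_exits = rng.integers(0, 2, n)
pct_top_school = rng.random(n)
X = np.column_stack([male_led, has_exits, pct_top_school])
y = (0.7 * male_led + 0.2 * has_exits + rng.random(n) > 0.8).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# Impurity-based importances, normalized to sum to 1 across features.
for name, imp in zip(["Male Led", "Has Exits", "% from top school"],
                     tree.feature_importances_):
    print(f"{name}: {imp:.2%}")
```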
For the Gradient Boosted Tree (GBT) model on US data, the top 5 features are
ranked as follows: whether the founding team is male-only (14.46$\%$), number
of founders (13.04$\%$), percent of founders from top schools (12.03$\%$),
number of exits (11.46$\%$), and if all the founders have graduated from the
same university (10.07$\%$). For the worldwide GBT model, where GBT achieved
the second lowest accuracy of the models created, the top feature is the
number of founders.
In summary, feature importance analysis for the tree-based models indicates
that in most instances, for both worldwide and US-only datasets, gender is key
to fundraising success. We find that the main indicator of whether a startup
will reach a priced funding round centers on gender, either the gender of the
CEO or whether the team consists entirely of male founders.
## 5 Conclusions and Recommendations
Our analysis suggests the presence of a pervasive and substantial bias against
female-led startups across geographies and industries. Looking at average
funding, where only startups that have raised funds are considered, lets us
eliminate the pipeline problem as a primary explanation for discrepancies in
funding allocation. The analysis reveals that, across all but 2 of the 20
industries considered, male-led teams received the highest average funding.
Across all three continents in our analysis, North America, Europe, and Asia-
Pacific, the highest average funding went to mixed-gender teams led by male
CEOs. In all three continents, the least average funding went to female-only
teams, followed by female-led mixed-gender teams.
Among US startup hubs, Silicon Valley and New York gave the highest average
funding to male-only teams, while LA and Boston gave the greatest average
support to mixed-gender teams led by male CEOs. As in the continental
analysis, female-only teams receive the lowest average funding, followed by
mixed-gender teams with female CEOs.
Our ML-based analysis reveals gender characteristics to be of the highest
importance amongst founder features for reaching a priced round, in
particular, more important than traditionally prized characteristics
pertaining to whether the founders attended top universities or had prior
exits. The worldwide analysis reveals the gender of the CEO to be the most
important feature. On US data, machine learning modelling shows that the most
important characteristic tends to be whether all founders are male.
In summary, we find that across all geographic regions and the great majority
of industries, companies with a male CEO have much better funding outcomes
than those with a female CEO. The gender of the CEO appears to be _the most_
important factor in fundraising. With no exceptions across geographies and
industries, our results show that startups led by male CEOs raise more money
than startups led by female CEOs, irrespective of the gender of the rest of
the founding team. On the other hand, having women as founders but not CEOs
improves funding results in some (but not all) cases, sometimes by a
substantial margin. Across all geographies (but not all industries),
female-led companies achieve better funding outcomes when they include a male
co-founder.
### 5.1 Implications for machine learning modeling for investment decisions
Our machine-learning analysis reveals the CEO’s gender to be the most
important founder characteristic for attaining a priced funding round. In
particular, gender composition was found to be more important than
characteristics that are known to be prized by venture capitalists, such as
the number of prior exits or the founders’ alma mater. This surprising finding
not only reveals the primary role of gender in venture capital allocation, but
also warns of potential pitfalls when applying machine learning models to
investment decisions.
Machine learning models in other spheres, such as credit markets, have already
been shown to inflate biases [12]. We recommend exercising caution when
building machine learning models for startup success prediction, in order to
reduce the impact of gender on the resultant decision making. Most
importantly, features directly capturing the gender of the founders should be
omitted. (Gender information has been incorporated into previous ML models for
startup success; see, for example, [11].)
Further, we observe that features such as prior exits, while not directly
capturing bias, may play an important role in perpetuating it. With
longstanding low access to startup funding, women have much lower chances of
having had previous exits.
### 5.2 Discussion and policy implications
One of the most important findings of this analysis is the critical role of
the CEO’s gender in fundraising outcomes, even for mixed-gender teams. This is
notable because in startups, particularly at early stages, the division of
labour amongst founders is less clear-cut than in mature companies. Thus, it
is unlikely that a mixed-gender company’s performance in terms of investor
returns would be impacted by whether a male or female co-founder is designated
as the CEO.
Yet, fundraising is often handled by the CEO, making the founder identified as
such the primary link between the startup and any potential investors. Even
when other founders are present, the CEO is expected to lead the discussion on
behalf of their startup. Consequently, any bias against women, implicit or
otherwise, is likely to manifest most strongly if the CEO is female.
The critical role of the CEO in funding outcomes may thus come down to
differences in how investors treat men and women. Prior research points to
disparate treatment of men and women during startup pitches. In a study of
interactions at TechCrunch Disrupt in New York City, investors asked men to
expand on how they plan to reach success, whereas women were asked to defend
themselves against failure [14], which hindered women’s ability to raise
funds. Notably, both male and female investors exhibited this bias against
female founders. (There is a prevalent notion that the key to eradicating
gender bias in startup investing lies in increasing the number of female
investors. This view is oversimplified and potentially misleading: both men
and women are highly prone to bias against women [15]. While increasing gender
diversity amongst investors is important for a variety of reasons, tackling
gender bias against female founders calls for more comprehensive solutions.)
Further research is needed to elucidate the impact of implicit gender bias on
fundraising outcomes and to uncover the reasons behind the ubiquitously lower
propensity to invest in female CEOs across the globe.
Our findings show that across all continents considered, and in some US
startup hubs, mixed-gender teams are given higher average funding, but only
when they are male-led. This likely stems from the inherent advantage of
mixed-gender teams. Gender-balanced teams perform better than male-dominated
teams in terms of sales and profits [16], and gender-diverse executive teams
are 21% more likely to yield
higher financial returns [17]. The venture capital firm First Round reported
that their investments in companies with at least one female founder were
meaningfully outperforming their investments in all-male teams [18]. In fact,
their investment in companies with a female founder performed $63\%$ better
than their investments with all-male founding teams [18].
The benefit of gender diversity helps mixed-gender teams raise funding, but
only when they are led by a male CEO. The consistently lower funding
allocation, both in total and on average, to mixed-gender teams led by female
CEOs points to the severity of gender bias in startup capital allocation.
In recent years, a number of venture capital firms have emerged with the
mandate to invest in teams with at least one female founder. However, these
investment firms tend to (1) be late stage, and (2) utilize a “follow”
investment strategy, investing only after another VC firm makes a substantial
investment and sets the deal terms. This does little to help increase the
number of female founders.
Our findings suggest the importance of investing in female-led companies to
correct the gender bias in capital allocation. We recommend the formation of
venture capital firms with a mandate to invest in companies with female CEOs.
Investing in women-led, mixed-gender teams should allow investors to benefit
from the performance boost of gender diversity, while helping to correct the
long-standing bias against female business leaders. Investors with the
expertise to lead early-stage deals applying such practices can further reap
the benefits of early investing, receiving large equity stakes in promising
deals.
## References
* [1] Pitchbook. The VC female founders dashboard. https://pitchbook.com/news/articles/the-vc-female-founders-dashboard, 2020. Accessed: 2020-10-25.
* [2] Michael Ewens and Richard R Townsend. Are early stage investors biased against women? Journal of Financial Economics, 135(3):653–677, 2020.
* [3] Eventerprise. Does gender bias have an impact on venture funding? https://bit.ly/3pjiP1q, 2019. Accessed: 2019-10-25.
* [4] Jorge Guzman and Aleksandra Olenka Kacperczyk. Gender gap in entrepreneurship. Research Policy, 48(7):1666–1680, 2019.
* [5] Martin Ruef, Howard E Aldrich, and Nancy M Carter. The structure of founding teams: Homophily, strong ties, and isolation among us entrepreneurs. American sociological review, 68(2):195–222, 2003.
* [6] Tiantian Yang and Howard E Aldrich. Who’s the boss? explaining gender inequality in entrepreneurial teams. American Sociological Review, 79(2):303–327, 2014.
* [7] Susan Coleman and Alicia Robb. Sources of funding for new women-owned firms. W. New Eng. L. Rev., 32:497, 2010.
* [8] Patricia G Greene, Candida G Brush, Myra M Hart, and Patrick Saparito. Patterns of venture capital funding: is gender a factor? Venture Capital: An international journal of entrepreneurial finance, 3(1):63–83, 2001.
* [9] Amar Krishna, Ankit Agrawal, and Alok Choudhary. Predicting the outcome of startups: less failure, more success. In 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), pages 798–805. IEEE, 2016.
* [10] Claudia E Halabí and R Lussier. A model for predicting small firm performance: Increasing the probability of entrepreneurial success. Documentos de Trabajo, 3, 2010.
* [11] Javier Arroyo, Francesco Corea, Guillermo Jimenez-Diaz, and Juan A Recio-Garcia. Assessment of machine learning performance for decision support in venture capital investments. Ieee Access, 7:124233–124243, 2019.
* [12] Andreas Fuster, Paul Goldsmith-Pinkham, Tarun Ramadorai, and Ansgar Walther. Predictably unequal? the effects of machine learning on credit markets. SSRN Electronic Journal, 2017.
* [13] The world’s top 100 universities. https://www.topuniversities.com, June 2020.
* [14] Dana Kanze, Laura Huang, Mark A Conley, and E Tory Higgins. We ask men to win and women not to lose: Closing the gender gap in startup funding. Academy of Management Journal, 61(2):586–614, 2018.
* [15] United Nations Development Programme (UNDP). Tackling Social Norms: A Game Changer for Gender Inequalities. United Nations Development Programme (UNDP), 2020.
* [16] Sander Hoogendoorn, Hessel Oosterbeek, and Mirjam Van Praag. The impact of gender diversity on the performance of business teams: Evidence from a field experiment. Management Science, 59(7):1514–1528, 2013.
* [17] Vivian Hunt, Sara Prince, Sundiatu Dixon-Fyle, and Lareina Yee. Delivering through diversity. McKinsey & Company Report. Retrieved April, 3:2018, 2018.
* [18] First Round. 10 years. http://10years.firstround.com/, 2019. Accessed: 2019-11-1.
## 6 Appendix
This appendix includes additional information on the data used in our
analysis.
Table 5: Total Number and Percentage of Companies per Region

Geography | # of companies | Percentage
---|---|---
North America | 30,212 | 62.07%
Europe | 8,932 | 18.35%
Asia-Pacific | 7,968 | 16.37%
Latin America | 1,308 | 2.69%
Gulf Cooperation Council | 256 | 0.53%
Total | 48,676 | 100.0%
Table 6: Total Number and Percentage of Companies per US Metropolitan Area

Geography | # of companies | Percentage
---|---|---
Silicon Valley Bay Area | 7,611 | 46.98%
Greater New York Area | 4,420 | 27.28%
Greater Los Angeles Area | 2,416 | 14.91%
Greater Boston | 1,754 | 10.83%
Total | 16,201 | 100.0%
Table 7: Total and Percentage of Companies per Industry

Verticals | # of companies | Percentage
---|---|---
Data | 6,200 | 12.74%
Commerce | 5,008 | 10.29%
Apps | 4,789 | 9.84%
Finance | 3,679 | 7.56%
Information Technology | 2,822 | 5.80%
Health Care | 2,755 | 5.66%
Biotechnology | 2,663 | 5.47%
Advertising | 2,442 | 5.02%
Consumer Electronics | 2,236 | 4.59%
Hardware | 1,826 | 3.75%
Education | 1,690 | 3.47%
Content Design | 1,670 | 3.43%
Internet | 1,575 | 3.24%
Community and Lifestyle | 1,388 | 2.85%
Clothing and Apparel | 1,107 | 2.27%
Energy | 884 | 1.82%
Food | 828 | 1.70%
Administrative Services | 729 | 1.50%
Gaming | 613 | 1.26%
Agriculture and Farming | 612 | 1.26%
Others | 3,160 | 6.49%
Total | 48,676 | 100.0%
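As a quick arithmetic check, the regional counts in Table 5 can be verified to sum to the stated total and to reproduce the quoted percentages. A minimal Python sketch (numbers copied from the table):

```python
# Consistency check of Table 5: the regional counts should sum to the stated
# total of 48,676 and reproduce the quoted percentages at two decimals.
counts = {
    "North America": 30212,
    "Europe": 8932,
    "Asia-Pacific": 7968,
    "Latin America": 1308,
    "Gulf Cooperation Council": 256,
}
total = sum(counts.values())
pct = {region: round(100.0 * n / total, 2) for region, n in counts.items()}
print(total, pct)  # 48676, with North America -> 62.07, Europe -> 18.35, ...
```

The same check applies to Tables 6 and 7 with their respective totals.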
# A Peculiar ICME Event in August 2018 Observed with the Global Muon Detector
Network
###### Abstract
We demonstrate that global observations of high-energy cosmic rays contribute
to understanding unique characteristics of a large-scale magnetic flux rope
causing a magnetic storm in August 2018. Following a weak interplanetary shock
on 25 August 2018, a magnetic flux rope caused an unexpectedly large
geomagnetic storm. It is likely that this event became geoeffective because
the flux rope was accompanied by a corotating interaction region and
compressed by high-speed solar wind following the flux rope. In fact, a
Forbush decrease was observed in cosmic-ray data inside the flux rope as
expected, and a significant cosmic-ray density increase exceeding the
unmodulated level before the shock was also observed near the trailing edge of
the flux rope. The cosmic-ray density increase can be interpreted in terms of
the adiabatic heating of cosmic rays near the trailing edge of the flux rope,
as the corotating interaction region prevents free expansion of the flux rope
and results in the compression near the trailing edge. A northeast-directed
spatial gradient in the cosmic-ray density was also derived during the cosmic-
ray density increase, suggesting that the center of the heating near the
trailing edge is located northeast of Earth. This is one of the best examples
demonstrating that observations of high-energy cosmic rays provide
information, unavailable from any other measurement, for observationally
constraining the three-dimensional macroscopic picture of the interaction
between coronal mass ejections and the ambient solar wind, which is essential
for the prediction of large magnetic storms.
Space Weather
Physics Department, Shinshu University, Matsumoto, Japan
National Institute of Polar Research, Tachikawa, Japan
Department of Electrical and Electronic Systems Engineering, National Institute of Technology, Ibaraki College, Japan
Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, Sagamihara, Japan
Graduate School of Science, Chiba University, Chiba City, Japan
Institute for Space-Earth Environmental Research, Nagoya University, Nagoya, Japan
National Institute for Space Research, São José dos Campos, Brazil
George Mason University, Fairfax, VA, USA
Physics Department, Kuwait University, Kuwait City, Kuwait
School of Natural Sciences, University of Tasmania, Hobart, Australia
Bartol Research Institute, Department of Physics and Astronomy, University of Delaware, Newark, DE, USA
Department of Applied Sciences, College of Technological Studies, Public Authority for Applied Education and Training, Shuwaikh, Kuwait
Lunar and Planetary Laboratory, University of Arizona, Tucson, AZ, USA
Corresponding author: Wataru<EMAIL_ADDRESS>
(accepted for publication in Space Weather)
Key Points:

* We derived the spatial distribution of cosmic rays associated with a peculiar ICME event that caused a large magnetic storm in August 2018.
* We found a cosmic-ray density increase possibly resulting from the MFR compression by the following faster solar wind.
* The Global Muon Detector Network observed this density increase as a macroscopic modification of this geoeffective flux rope.
## 1 Introduction
Solar eruptions such as coronal mass ejections (CMEs) cause environmental
changes in various ways in near Earth space. It is known that major
geomagnetic storms can be triggered by the arrival of an interplanetary
counterpart of a CME (ICME) at Earth along with a strong southward
interplanetary magnetic field (IMF), which allows solar wind energy and plasma
to enter the magnetosphere. A magnetic flux rope (MFR), which is often
observed in an ICME with magnetic field lines winding about the central axis,
is recognized as a key factor making an ICME such a powerful driver of an
intense space weather storm. While ICMEs accompanied by a strong
interplanetary shock (IP-shock) in a fast solar wind have attracted attention
as geoeffective storms, the interaction of moderate or slower ICMEs with
ambient solar wind structure and the interaction among a series of CMEs also
play an essential role in producing an ICME causing a larger-than-expected
magnetic storm (Dal Lago et al., 2006; Liu et al., 2014; Kataoka et al., 2015).
On its course in interplanetary space, an ICME driving a strong IP-shock forms
a depleted region of the galactic cosmic rays (GCRs) behind the shock. When
Earth enters this depleted region, cosmic-ray detectors at Earth’s orbit
detect a decrease of GCR intensity, which is known as a Forbush Decrease (FD)
after S. E. Forbush (Forbush, 1937). The IP-shock accompanied by a turbulent
magnetic sheath inhibits GCR transport into the inner heliosphere and sweeps
GCRs away from Earth’s orbit. The MFR behind the magnetic sheath, rapidly
expanding in interplanetary space after the eruption from the Sun, also
reduces GCR density inside the MFR by adiabatic cooling. At the same time, the
GCR depletion either behind the IP-shock or in the MFR promotes the inward
diffusion of GCRs. Due to the closed-field-line configuration of the MFR (in
which both ends of each field line are anchored on the solar surface), GCRs
enter the MFR through drift and/or cross-field diffusion, the latter of which
is largely suppressed in the highly ordered strong IMF in the MFR even for
high-energy particles.
By modeling the local part of an MFR with a straight cylinder, Munakata et al.
(2006) numerically solved the GCR transport equation and found that the
spatial distribution of GCR density in MFRs rapidly reaches a stationary state
due to the balance between adiabatic cooling and inward cross-field diffusion.
By assuming an axisymmetric straight cylinder for individual MFRs, Kuwabara et
al. (2004) and Kuwabara et al. (2009) successfully derived from the observed
GCR data the orientation and geometry of each MFR that were consistent with
in-situ observations of IMF and the interplanetary scintillation (IPS)
observations (Tokumaru et al., 2007). This demonstrates that cosmic-ray
observations provide a useful tool for space weather studies (Rockenbach et
al., 2014). In this paper, we study a particular ICME event observed in
August 2018 by analyzing the cosmic-ray data from the Global Muon Detector
Network (GMDN).
## 2 Overview of the August 2018 event
Figure 1 summarizes solar wind parameters measured in an ICME over four days
between 24 and 27 August, 2018 (https://omniweb.gsfc.nasa.gov/ow.html). Both
the magnetic field and plasma data are observed by the Wind spacecraft and
time-shifted to Earth’s location. According to the list by Richardson and Cane
(column “o” in http://www.srl.caltech.edu/ACE/ASC/DATA/level3/icmetable2.htm),
this ICME event is caused by a CME eruption recorded at 21:24 UT on 20 August
by the LASCO coronagraphs on board the SOHO satellite. Following a weak IP-
shock recorded at 03:00 UT on 25 August (see the pink vertical line in Figure
1), the sheath period can be identified by the enhanced fluctuation of IMF (a
period delimited by the pink and the first blue vertical lines of about 12
hours after the IP-shock). After the sheath period, a significant enhancement
of the IMF magnitude is recorded until 09:09 UT on 26 August (panels a and c)
in association with a systematic rotation of IMF orientation (panel d)
indicating Earth’s entrance into the MFR. Following Chen et al. (2019), we
define the MFR period as a period between 14:10 UT on 25 August and 09:09 UT
on 26 August, delimited by a pair of blue vertical lines in Figure 1.
A significant southward field is recorded in the MFR, causing a gradual
decrease of the $D_{ST}$ index of the geomagnetic field down to a minimum of
-174 nT at 06:00 UT on 26 August (panel e)
(http://wdc.kugi.kyoto-u.ac.jp/index.html). Following the MFR period showing
the clear rotation of IMF orientation in Figure 1d, the gradual increase of
solar wind speed is recorded along with significant fluctuations of IMF
magnitude and orientation. We also note in Figure 1d that the IMF sector
polarity is toward in the period before the IP-shock as indicated by the GSE-
longitude of IMF orientation (BGSE-long) around $300^{\circ}$, while it is
away after the MFR period as indicated by BGSE-long around $150^{\circ}$. This
implies that this storm may also involve heliospheric current sheet(s).
This event occurred in 2018 close to the solar activity minimum of solar cycle
24. The CME was relatively slow, and occurred in slow solar wind, taking about
five days to arrive at Earth after the CME eruption on the Sun. The solar wind
velocity enhancement after the IP-shock is also weak and seems to be
insufficient to cause the large solar wind compression and significant
enhancement of the southward IMF that triggered a major geomagnetic storm.
Chen et al. (2019) attributed peculiarities of this storm to the MFR
compression by the following faster solar wind and Dal Lago et al. (2006) also
presented a similar idea of MFR compression for an event that occurred in
October 1999.
In this paper, we analyze the directional anisotropy of high-energy GCRs
observed during this event. Since the GCR anisotropy arises from the diffusion
and drift streamings, which are both proportional to the spatial gradient of
GCR density, we can deduce from the observed anisotropy the three dimensional
spatial distribution of GCRs which reflects the average magnetic field
geometry extending over the large scale comparable to Larmor radii of high-
energy GCRs in the IMF. Our derivation of the GCR density gradient is based on
the observational finding by Bieber and Evenson (1998) that the drift is a
primary source of the ICME-related anisotropy observed with neutron monitors.
This also holds at the higher rigidities to which the GMDN responds, and it
has been recognized that the GCR density gradient derived from the observed
anisotropy is rather insensitive to assumptions about the parallel and
perpendicular diffusion. As already shown in a series of our papers, this allowed us to
deduce from the observed anisotropy the orientation of cosmic ray density
minimum viewed from Earth. Readers can find examples of such analyses in
Rockenbach et al. (2014) and references therein.
Figure 1: Solar wind parameters and $D_{ST}$ index for 24-27 August, 2018.
From top to bottom, each panel shows one minute solar wind parameters; (a)
magnitude of solar wind velocity (black curve) and “flow angle” of solar wind
($\phi_{SW}=\tan^{-1}(V_{y}/|V_{x}|$) (blue curve), (b) proton density (black)
and temperature (blue), (c) IMF magnitude (black) and its fluctuation (blue),
(d) GSE-longitude (black) and latitude (blue) of IMF orientation, (e) GSM-z
component of IMF (blue) and hourly value of the $D_{ST}$ index (black). The
pink vertical line indicates the timing of IP-shock identified by the shock of
IMF at 03:00 UT on 25 August, while a pair of blue vertical lines delimit the
MFR period reported by Chen et al. (2019). The blue shaded area indicates six
hours between 03:00 UT and 09:00 UT on 26 August when an increase is observed
in the cosmic-ray density (see Figure 2a and Section 4). The orange vertical
line indicates the second stream interface at 13:00 UT on 26 August. As
indicated at the top of the figure, we define the “MFR period” delimited by a
pair of blue vertical lines and the “sheath period” between the pink and the
first blue vertical lines (see text). Figure 2: Cosmic-ray density, anisotropy
and density gradient at 60 GV derived from GMDN data for 24-27 August, 2018.
Each panel displays; (a) cosmic-ray density $I_{0}(t)$, (b) magnitude (black
curve) and GSE-$x$ (blue), -$y$ (purple) and -$z$ (red) components of the
anisotropy vector $\bm{\xi}^{w}(t)$ in the solar wind frame, (c) magnitudes of
components of $\bm{\xi}^{w}(t)$ parallel (red) and perpendicular (blue) to
IMF, (d) a bubble plot showing the magnitude of hourly anisotropy vector
$|\bm{\xi}^{w}(t)|$ as a function of the pitch angle ($\theta$) between
$\bm{\xi}^{w}(t)$ and IMF vector, while (e)-(g) three GSE components of the
density gradient vector $\bm{G}(t)$. The area of each circle in (d) is
proportional to $|\bm{\xi}^{w}(t)|$ as indicated in the legend in the right
top corner of the panel. In each of panels (e)-(g), the contribution from the
drift represented by the last term of Eq.(9) is shown by the blue curve on the
left vertical axis together with the total gradient component (black solid
circles), while contributions from the parallel and perpendicular diffusions
represented by the first and second terms of Eq.(9) are shown by purple and
red curves on the right vertical axis, respectively (note the scale of the
right vertical axis is expanded four times the left axis). Each hourly value
and error are deduced from the average and dispersion of 10-minute values in
the corresponding one hour shown by a thin curve, respectively. Open red
circles in panel (e) show $\frac{1}{V_{SW}}\frac{dI_{0}(t)}{dt}$ calculated
with $V_{SW}=400$ km/s for a test of $\bm{G}(t)$ derived from
$\bm{\xi}^{w}(t)$ (see text). The blue shaded area indicates six hours between
03:00 UT and 09:00 UT on 26 August when the increase of $I_{0}(t)$ is observed
in panel (a). The pink, blue and orange vertical lines are the same as in Figure 1.
All data used for producing this figure are available in the Supporting
Information (S2).
## 3 Cosmic-ray data and analyses
### 3.1 Global Muon Detector Network (GMDN)
The GMDN, which is designed for accurate observation of the GCR anisotropy,
comprises four multidirectional muon detectors, “Nagoya” in Japan, “Hobart” in
Australia, “Kuwait City” in Kuwait and “São Martinho da Serra” in Brazil,
recording muon count rates in 60 directional channels viewing almost the
entire sky around Earth. Basic characteristics of directional channels of the
GMDN are also available in the Supporting Information (S1). The median
rigidity ($P_{m}$) of primary GCRs recorded by the GMDN, which we calculate by
using the response function of the atmospheric muons to the primary GCRs given
by numerical solutions of the hadronic cascade in the atmosphere (Murakami et
al., 1979), ranges from about 50 GV for the vertical directional channel to
about 100 GV for the most inclined directional channel, while the asymptotic
viewing directions (corrected for geomagnetic bending of cosmic-ray orbits) at
$P_{m}$ covers the asymptotic viewing latitude ($\lambda_{\rm asymp}$) from
$72^{\circ}$N to $77^{\circ}$S. The representative $P_{m}$ of the entire GMDN
is about 60 GV.
### 3.2 Derivation of the GCR density and anisotropy
We analyze the percent deviation of the 10-minute muon count rate $I_{i,j}(t)$
from an average over 27 days between 12 August and 7 September, 2018 in the
$j$-th directional channel of the $i$-th detector ($i=1$ for Nagoya, $i=2$ for
Hobart, $i=3$ for Kuwait and $i=4$ for São Martinho da Serra) in the GMDN at
universal time $t$, after correcting for local atmospheric pressure and
temperature effects. For our correction method of the atmospheric effects
using the on-site measurement of pressure and the mass weighted temperature
from the vertical profile of the atmospheric temperature provided by the
Global Data Assimilation System (GDAS) of the National Center for
Environmental Prediction, readers can refer to Mendonça et al. (2016).
Since the observed temporal variation of $I_{i,j}(t)$ at the universal time
$t$ includes contributions from variations of the GCR density (or
omnidirectional intensity) $I_{0}(t)$ and anisotropy vector $\bm{\xi}(t)$, it
is necessary to analyze each contribution separately. An accurate analysis of
$I_{0}(t)$ and $\bm{\xi}(t)$ is possible only with global observations using
multidirectional detectors. For such analyses, we model $I_{i,j}(t)$ in terms
of $I_{0}(t)$ and three components ($\xi^{\rm GEO}_{x}(t),\xi^{\rm
GEO}_{y}(t),\xi^{\rm GEO}_{z}(t)$) of $\bm{\xi}(t)$ in a geocentric (GEO)
coordinate system, as
$I^{fit}_{i,j}(t)=I_{0}(t)c_{0i,j}^{0}+\xi^{\rm GEO}_{x}(t)(c_{1i,j}^{1}\cos\omega t_{i}-s_{1i,j}^{1}\sin\omega t_{i})+\xi^{\rm GEO}_{y}(t)(s_{1i,j}^{1}\cos\omega t_{i}+c_{1i,j}^{1}\sin\omega t_{i})+\xi^{\rm GEO}_{z}(t)c_{1i,j}^{0},$ (1)
where $t_{i}$ is the local time in hours at the $i$-th detector,
$c^{0}_{0i,j}$, $c^{1}_{1i,j}$, $s^{1}_{1i,j}$ and $c^{0}_{1i,j}$ are coupling
coefficients which relate (or “couple”) the observed intensity in each
directional channel with the cosmic ray density and anisotropy in space and
$\omega=\pi/12$. In the GEO coordinate system, we set the $x$-axis to the
anti-sunward direction in the equatorial plane, the $z$-axis to the
geographical north perpendicular to the equatorial plane and the $y$-axis
completing the right-handed coordinate system. The coupling coefficients in
Eq.(1) are calculated by using the response function of the atmospheric muon
intensity to primary GCRs (Murakami et al., 1979) and given in the Supporting
Information (S1). Note that the anisotropy vector $\bm{\xi}(t)$ in Eq.(1) is
defined to direct opposite to the GCR streaming, pointing toward the upstream
direction of the streaming (see also Eq.(6) in the next section). We derive
the best-fit set of four parameters $\left(I_{0}(t),\xi^{\rm
GEO}_{x}(t),\xi^{\rm GEO}_{y}(t),\xi^{\rm GEO}_{z}(t)\right)$ by solving the
following linear equations.
$\frac{\partial\chi^{2}}{\partial I_{0}(t)}=\frac{\partial\chi^{2}}{\partial\xi^{\rm GEO}_{x}(t)}=\frac{\partial\chi^{2}}{\partial\xi^{\rm GEO}_{y}(t)}=\frac{\partial\chi^{2}}{\partial\xi^{\rm GEO}_{z}(t)}=0,$ (2)
where $\chi^{2}$ is the residual of the fit, defined as
$\chi^{2}=\sum_{i,j}{(I_{i,j}(t)-I^{fit}_{i,j}(t))^{2}/\sigma_{ci,j}^{2}}$ (3)
with $\sigma_{ci,j}$ denoting the count rate error of $I_{i,j}(t)$. The
best-fit anisotropy vector $\bm{\xi}^{\rm GEO}(t)$ in the GEO coordinate
system is then transformed to $\bm{\xi}^{\rm GSE}(t)$ in the geocentric solar
ecliptic (GSE) coordinate system for comparison with the solar wind and IMF data.
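The fit of Eqs. (1)-(3) is an ordinary weighted linear least-squares problem with four unknowns per time step. A minimal Python sketch follows; the coupling coefficients, local times and count rates are synthetic placeholders (the real coefficients come from the response-function tables in the Supporting Information, S1), so it only illustrates the structure of the fit:

```python
import numpy as np

# Weighted least-squares fit of Eqs. (1)-(3) for one universal time t.
# All inputs below are synthetic placeholders for illustration only.
rng = np.random.default_rng(0)
n_ch = 60                                 # 60 directional channels in the GMDN
omega = np.pi / 12.0
t_loc = rng.uniform(0.0, 24.0, n_ch)      # local time at each channel [h]
c00, c11, s11, c10 = rng.normal(size=(4, n_ch))  # placeholder coupling coefficients

# One row per channel, one column per fitted parameter (I0, xi_x, xi_y, xi_z), Eq. (1)
A = np.column_stack([
    c00,
    c11 * np.cos(omega * t_loc) - s11 * np.sin(omega * t_loc),
    s11 * np.cos(omega * t_loc) + c11 * np.sin(omega * t_loc),
    c10,
])

true_params = np.array([-0.20, 0.05, -0.03, 0.08])  # synthetic (I0, xi) in %
sigma = np.full(n_ch, 0.02)                          # count-rate errors [%]
I_obs = A @ true_params + rng.normal(0.0, sigma)     # simulated I_{i,j}(t)

# Minimizing the chi^2 of Eq. (3), i.e. solving Eq. (2), is equivalent to an
# ordinary least-squares solve after scaling each row by 1/sigma.
fit, *_ = np.linalg.lstsq(A / sigma[:, None], I_obs / sigma, rcond=None)
print(fit)   # close to true_params
```

In the actual analysis this solve is repeated for every 10-minute interval, and the resulting $\bm{\xi}^{\rm GEO}(t)$ is rotated into GSE coordinates.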
Eq.(1) does not include contributions from the second order anisotropy such as
the bidirectional counter-streaming sometimes observed in the MFR in MeV
electron/ion intensities. We also performed best-fit analyses adding five more
best-fit parameters in Eq.(1) necessary to express the second order anisotropy
and actually found an enhancement of the second order anisotropy in the MFR.
However, we verified that the inclusion of the second order anisotropy does
not change the obtained $I_{0}(t)$ and $\bm{\xi}(t)$ significantly, leaving
the conclusions of the present paper unchanged. In this paper, therefore, we
analyze only $I_{0}(t)$ and $\bm{\xi}(t)$ derived from Eq.(1). We will present
our analyses and discussion of the second-order anisotropy elsewhere.
### 3.3 Derivation of the spatial gradient of GCR density
Diffusive propagation of GCRs in the heliosphere is described by the following
transport equation (Parker, 1965; Gleeson, 1969).
$\frac{\partial U}{\partial t}+\bm{\nabla}\cdot\bm{S}=-\frac{\partial}{\partial p}(\frac{1}{3}p\bm{V}_{SW}\cdot\bm{\nabla}U),$ (4)
where $U(\bm{r},p,t)$ is the GCR density at position $\bm{r}$, momentum $p$
and time $t$, and $\bm{V}_{SW}$ is the solar wind velocity. $\bm{S}(\bm{r},p,t)$
in Eq.(4) is the GCR streaming vector consisting of the solar wind convection
and the diffusion terms, as
$\bm{S}=CU\bm{V}_{SW}-\bm{\kappa}\cdot\bm{\nabla}U$ (5)
where $\bm{\kappa}$ is the diffusion tensor and $C$ is the Compton-Getting
(CG) factor given by $C=1-\frac{1}{U}\frac{\partial}{\partial p}(\frac{1}{3}pU)=(2+\gamma)/3$ under the assumption that $U$ is proportional to
$p^{-\gamma}$ with power-law index $\gamma=2.7$. The diffusion and drift
anisotropy $\bm{\xi^{D}}$ is given as
${\bm{\xi^{D}}(t)}\equiv-\frac{3\bm{S}}{vU}=\frac{3}{v}(\bm{\kappa}\cdot\bm{G}-C\bm{V}_{SW})$
(6)
where $v$ is the speed of GCR particle, which is approximately equal to the
speed of light $c$, and $\bm{G}=\bm{\nabla}U/U$ is the spatial gradient of GCR
density.
We transform the observed anisotropy ${\bm{\xi}}^{\rm GSE}(t)$ by subtracting
the solar wind convection and an apparent anisotropy arising from Earth’s
orbital motion around the Sun, as
$\bm{\xi}^{w}(t)={\bm{\xi}}^{\rm GSE}(t)+(2+\gamma)\frac{\bm{V}_{SW}(t)-\bm{v}_{E}}{c}$ (7)
where $\bm{v}_{E}$ is the velocity of Earth (30 km/s, directed opposite to
the GSE-$y$ axis). We replace $\bm{\xi}^{w}(t)$ with $\bm{\xi}^{D}(t)$ as
$\bm{\xi}^{w}(t)=\bm{\xi}^{D}(t)$ (8)
by ignoring contributions to $\bm{\xi}^{w}(t)$ from other possible
non-diffusion/drift anisotropies, such as that recently reported by Tortermpun
et al. (2018) from an observation in an MFR (Krittinatham and Ruffolo, 2009).
Then, we can deduce the density gradient $\bm{G}$ by solving Eqs. (7) and (8)
for $\bm{G}$ as
$\bm{G}(t)=\frac{1}{R_{L}(t)\alpha_{\parallel}}\bm{\xi}^{w}_{\parallel}(t)+\frac{\alpha_{\perp}}{R_{L}(t)(1+\alpha^{2}_{\perp})}\bm{\xi}^{w}_{\perp}(t)+\frac{1}{R_{L}(t)(1+\alpha^{2}_{\perp})}\frac{\bm{B}(t)}{B(t)}\times\bm{\xi}^{w}_{\perp}(t)$ (9)
where $R_{L}(t)=\frac{P}{c|\bm{B}(t)|}$ is the Larmor radius of particles with
rigidity $P$ in magnetic field $\bm{B}(t)$ and $\bm{\xi}^{w}_{\parallel}(t)$
and $\bm{\xi}^{w}_{\perp}(t)$ are components of $\bm{\xi}^{w}(t)$ parallel and
perpendicular to $\bm{B}(t)$, respectively (Kozai et al., 2016).
$\alpha_{\parallel}$ and $\alpha_{\perp}$ in Eq.(9) are mean-free-paths of
parallel and perpendicular diffusions, respectively, normalized by $R_{L}(t)$,
as
$\alpha_{\parallel}=\lambda_{\parallel}(t)/R_{L}(t)$ (10)
$\alpha_{\perp}=\lambda_{\perp}(t)/R_{L}(t).$ (11)
According to current understanding that GCRs at neutron monitor and muon
detector energies are in the “weak-scattering” regime (Bieber et al., 2004),
we assume $\lambda_{\perp}(t)\ll\lambda_{\parallel}(t)$. Following models
widely used in the study of the large-scale GCR transport in the heliosphere
(Wibberenz et al., 1998; Miyake et al., 2017), we assume constant
$\alpha_{\perp}=0.36$ for a period outside the MFR in this paper. We also
assume $\lambda_{\parallel}=1.9$ AU for the entire period. For 60 GV cosmic
rays in $|\bm{B}(t)|\sim 5~{}\rm{nT}$ average magnetic field, $R_{L}$ is 0.27
AU resulting in $\lambda_{\perp}=0.096~{}\rm{AU}$ and
$\alpha_{\parallel}=7.2$. For a period inside the MFR where the magnetic field
is exceptionally strong, we use a constant $\lambda_{\perp}=0.010~{}\rm{AU}$
without changing $\lambda_{\parallel}$. Note that this $\lambda_{\perp}$ was
obtained as an upper limit by Munakata et al. (2006).
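The quoted parameter values follow from $R_{L}=P/(cB)$ for rigidity $P$ in volts; a short numerical check, reproducing the numbers stated above:

```python
# Numerical check of the Larmor-radius and mean-free-path values quoted in
# the text, using R_L = P / (c B) with rigidity P in volts and B in tesla.
C_LIGHT = 2.998e8   # speed of light [m/s]
AU = 1.496e11       # astronomical unit [m]

def larmor_radius_au(rigidity_gv, b_nt):
    """Larmor radius [AU] of a particle of given rigidity [GV] in a field B [nT]."""
    return rigidity_gv * 1e9 / (C_LIGHT * b_nt * 1e-9) / AU

# 60 GV particles in the ~5 nT average field outside the MFR
r_l = larmor_radius_au(60.0, 5.0)       # ~0.27 AU, as quoted
lam_perp = 0.36 * r_l                   # ~0.096 AU for alpha_perp = 0.36
alpha_par = 1.9 / r_l                   # ~7.1, matching the quoted 7.2 to rounding

# 60 GV particles in the ~15 nT field inside the MFR (used in Section 4)
r_l_mfr = larmor_radius_au(60.0, 15.0)  # ~0.089 AU
hours = r_l_mfr * AU / 400e3 / 3600.0   # ~9 h solar wind transit at 400 km/s

print(round(r_l, 3), round(lam_perp, 3), round(alpha_par, 1),
      round(r_l_mfr, 3), round(hours, 1))
```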
We are aware that our ad-hoc assumptions of $\lambda_{\parallel}(t)$ and
$\lambda_{\perp}(t)$ above are difficult to validate directly from
observations. However, it will be shown in the next section that $\bm{G}(t)$
derived from the observed anisotropy $\bm{\xi}^{w}(t)$ in Eq.(9) is
significantly dominated by the contribution from the drift anisotropy
represented by the last term on the right-hand side of Eq.(9) and is
insensitive to our ad-hoc assumptions of $\lambda_{\parallel}(t)$ and
$\lambda_{\perp}(t)$.
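The anisotropy-to-gradient inversion of Eqs. (7)-(9) can be sketched as follows; the input anisotropy, solar wind and IMF vectors are illustrative numbers, not GMDN data, and the mean free paths follow the assumptions stated above:

```python
import numpy as np

# Sketch of Eqs. (7)-(9): correct the observed GSE anisotropy for solar wind
# convection and Earth's orbital motion (Compton-Getting), then invert the
# diffusion/drift relation for the fractional density gradient G = grad(U)/U.
# The numerical inputs below are illustrative, not GMDN data.
C_LIGHT = 2.998e8    # speed of light [m/s]
AU = 1.496e11        # astronomical unit [m]

def density_gradient(xi_gse, v_sw, b_nt, rigidity_gv=60.0,
                     gamma=2.7, lam_par_au=1.9, alpha_perp=0.36):
    """Return G(t) [1/AU] in GSE coordinates from the anisotropy (Eq. 9)."""
    v_earth = np.array([0.0, -30e3, 0.0])  # Earth's orbital velocity (GSE) [m/s]
    # Eq. (7): subtract solar wind convection and Earth's orbital motion
    xi_w = np.asarray(xi_gse) + (2.0 + gamma) * (np.asarray(v_sw) - v_earth) / C_LIGHT
    b = np.asarray(b_nt)
    b_hat = b / np.linalg.norm(b)
    xi_par = np.dot(xi_w, b_hat) * b_hat   # component parallel to B
    xi_perp = xi_w - xi_par                # component perpendicular to B
    r_l = rigidity_gv * 1e9 / (C_LIGHT * np.linalg.norm(b) * 1e-9) / AU  # [AU]
    alpha_par = lam_par_au / r_l           # Eq. (10)
    # Eq. (9): parallel diffusion + perpendicular diffusion + drift terms
    return (xi_par / (r_l * alpha_par)
            + alpha_perp * xi_perp / (r_l * (1.0 + alpha_perp**2))
            + np.cross(b_hat, xi_perp) / (r_l * (1.0 + alpha_perp**2)))

g = density_gradient(xi_gse=[0.002, -0.001, 0.0005],   # ~0.2 % anisotropy
                     v_sw=[-400e3, 0.0, 0.0],          # 400 km/s solar wind
                     b_nt=[2.0, -4.0, 1.0])            # ~4.6 nT IMF
print(g)   # GSE components of the density gradient [1/AU]
```

With the assumed $\lambda_{\perp}\ll\lambda_{\parallel}$, the drift (cross-product) term dominates the perpendicular part of the inversion, which is why the derived $\bm{G}(t)$ is insensitive to the mean-free-path assumptions.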
## 4 Results
Figure 2 shows the GCR density ($I_{0}(t)$ in panel a), anisotropy
($\bm{\xi}^{w}(t)$ in panels b-d) and density gradient ($\bm{G}(t)$ in panels e-g)
at 60 GV obtained from our analyses of the GMDN data described in the
preceding section using the solar wind velocity $\bm{V}_{SW}(t)$ and IMF
$\bm{B}(t)$ in Figure 1. While we derived the best-fit parameters in Eq. (1)
for every 10-minute interval, in this paper we use only the hourly average of
six 10-minute values: one hour is much shorter than the time scale
($R_{L}/V_{SW}\sim$ 9 hours) for the solar wind, at the average velocity
($V_{SW}\sim$ 400 km/s), to travel across the Larmor radius ($R_{L}$ = 0.089
AU) of 60 GV GCRs in the IMF ($B\sim$ 15 nT), and is therefore sufficient for
analyzing the spatial distribution of 60 GV GCRs. We also calculated the error of the hourly value
of each parameter from the dispersion of 10-minute values. All data used for
producing this figure are given in the Supporting Information (S2).
Besides the random error of the best-fit parameters in Figure 2, there are
possible sources of systematic error. For instance, the atmospheric effect
results in the day-to-day offset of $I_{i,j}(t)$ which is almost the same for
all directional channels in one detector, but generally different between
detectors at different locations. We corrected for the effect by using the
barometric and temperature coefficients ($\beta$ and $\alpha$ in the
Supporting Information (S1)) derived in August 2018, instead of using nominal
(or average) coefficients derived from the long-term observations. We verified
that the local effect in $I_{i,j}(t)$ is significantly reduced with smaller
$\chi^{2}$ in this way. Another source of systematic error is the second order
anisotropy which is not included in Eq.(1), but we verified that the inclusion
of the second order anisotropy does not change the obtained $I_{0}(t)$ and
$\bm{\xi}(t)$ significantly, as mentioned in the preceding section. We
conclude, therefore, that systematic error is similar to or smaller than the
random error in Figure 2.
The cosmic-ray density $I_{0}(t)$ in Figure 2a starts decreasing a few hours
before the IP-shock early on 25 August, reaches a minimum of -0.28% at 16:30
UT, and recovers to the pre-shock level early on 26 August.
This is a well-known feature of a moderate Forbush decrease indicating Earth’s
entrance into the cosmic-ray depleted region formed behind the shock and in
the magnetic flux rope (MFR). A small local maximum is seen in $I_{0}(t)$ at
around 10:00 UT on 25 August, superposed on the gradual decrease during the
sheath period. This is probably due to the discontinuity
recorded in the IMF magnitude and longitude seen in Figures 1c-d, because such
a local increase of $I_{0}(t)$ is often observed by the GMDN at the IMF
discontinuity (Munakata et al., 2018).
A marked feature of this event is a “hump” in the density in which $I_{0}(t)$
increased to a maximum at around 06:00 UT on 26 August, exceeding the
unmodulated level before the shock. As discussed later, this increase is
likely caused by the compression of the trailing edge of the MFR by the faster
solar wind following the MFR (see the blue shaded period in Figures 1 and 2).
Figure 2a also shows a significant variation of $I_{0}(t)$ around noon on 24
August when the disturbance is recorded in the solar wind parameters. We do
not analyze this variation in this paper, but Abunin et al. (2020) considered
it as an indication of another ICME which they identified.
Figure 2c shows magnitudes of $\bm{\xi}^{w}_{\parallel}(t)$ and
$\bm{\xi}^{w}_{\perp}(t)$, the parallel and perpendicular components of the
anisotropy in Eq.(9), while Figure 2d displays $|\bm{\xi}^{w}(t)|$ as a
function of the pitch angle ($\theta$) between $\bm{\xi}^{w}(t)$ and the IMF.
During a period between $\sim$12:00 UT on 25 August and $\sim$0:00 UT on 26
August in the MFR period when $I_{0}(t)$ recovers from its minimum and the
amplitude of $\bm{\xi}^{w}(t)$ increases, the perpendicular component
($\bm{\xi}^{w}_{\perp}(t)$) exceeds the parallel component
($\bm{\xi}^{w}_{\parallel}(t)$) except for a few hours prior to the minimum of
$I_{0}(t)$. This indicates the dominant contributions from drift anisotropy
inside the magnetic flux rope.
During a few hours after $\sim$03:00 UT on 26 August in the blue shaded
period, on the other hand, $\bm{\xi}^{w}(t)$ is dominated by the parallel
anisotropy. Tortermpun et al. (2018) reported a strong anisotropy parallel to
the magnetic field observed in a MFR, as predicted from a theory by
Krittinatham and Ruffolo (2009). Such parallel anisotropy cannot be expressed
by $\bm{\xi}^{D}(t)$ in Eq.(6) based on the diffusion and drift picture
particularly in an MFR where $\lambda_{\parallel}(t)$ can be comparable to or
even longer than the scale size of the MFR. The theory predicts a parallel
anisotropy inside MFRs, due to an inflow of cosmic rays along one leg of the
MFR, caused by guiding center drifts, and an outflow along the other. Cosmic
rays enter the MFR along a leg of the MFR where the field line winding is
counterclockwise (viewed from the wide part of the field line cone) and exit
along the other leg with clockwise winding (Krittinatham and Ruffolo,
2009). Since Chen et al. (2019) reported that the MFR in Figure 1 is left-
handed and its mean magnetic field directs southward, the theory predicts
cosmic rays to enter the MFR at the southern leg and exit at the northern leg,
resulting in a negative $\xi^{w}_{z}$ (north directing flow) at Earth’s orbit.
However, the observed $\xi^{w}_{z}$ in Figure 2b (red curve) is positive
(south directing flow) during the blue shaded period, in contradiction to the
prediction. We conclude therefore that the contribution from such
unidirectional parallel flow to the observed anisotropy is not dominant in
this particular event, even if it exists.
Figures 2e-g show three GSE-components of the density gradient ($\bm{G}(t)$)
displayed by black curves. Blue curves in panels (e)-(g) show contributions to
$\bm{G}(t)$ from the drift represented by the last term of Eq.(9) on the left
vertical axis, while purple and red curves display contributions from the
parallel and perpendicular diffusions represented by the first and second
terms, respectively, on the right vertical axis. It is clear that the derived
$\bm{G}(t)$ is significantly dominated by the contribution from the drift
anisotropy and contributions from parallel and perpendicular diffusions are
small (Bieber and Evenson, 1998). By changing assumptions of
$\lambda_{\parallel}(t)$ and $\lambda_{\perp}(t)$ in Eq.(9) each in a wide
range, we also verified that the derived $\bm{G}(t)$ is insensitive to these
parameters. For instance, we calculated $\bm{G}(t)$ by assuming a constant
$\lambda_{\perp}(t)=0.010$ AU during the MFR period between the two vertical
blue lines and verified that the difference of $\bm{G}(t)$ from the black
solid circles in panels (e)-(g) is well within errors. As already shown in a
series of our papers, this insensitivity allowed us to successfully deduce the
CME geometry viewed from Earth from the observed anisotropy, in accordance
with other observations (Kuwabara et al., 2004, 2009).
As a test of our $\bm{G}(t)$ derived from $\bm{\xi}^{w}(t)$, we also calculate
$G_{x}$ from $\frac{1}{V_{SW}}\frac{dI_{0}(t)}{dt}$ which is expected to be
observed at Earth in the case of the stationary $G_{x}$ passing Earth with the
solar wind velocity $V_{SW}$. Red open circles superposed in Figure 2e show
$\frac{1}{V_{SW}}\frac{dI_{0}(t)}{dt}$ calculated with $V_{SW}=400$ km/s. It
is seen that $G_{x}(t)$ derived from $\bm{\xi}^{w}(t)$ shown by black solid
circles is consistent within errors with
$\frac{1}{V_{SW}}\frac{dI_{0}(t)}{dt}$ independently derived from $I_{0}(t)$,
particularly during the MFR period including the blue shaded period. This
supports the validity of our $\bm{G}(t)$ derived from $\bm{\xi}^{w}(t)$.
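This convection test can be sketched numerically as follows. The snippet is our own illustration with hypothetical hourly data, not the authors' code; the only nontrivial step is the unit conversion of $V_{SW}$ from km/s to AU/hour so that $G_{x}$ comes out in %/AU.

```python
# Estimate the radial density gradient G_x from the time derivative of the
# cosmic-ray density I_0(t), assuming a stationary structure convected past
# Earth at V_SW = 400 km/s, so that G_x ~ (1/V_SW) dI_0/dt.
import numpy as np

AU_KM = 1.495978707e8          # 1 AU in km
V_SW = 400.0 * 3600.0 / AU_KM  # solar wind speed in AU/hour (~9.6e-3)

def gradient_from_density(i0_percent, dt_hours=1.0):
    """Return G_x in %/AU from hourly cosmic-ray density I_0 (in %)."""
    di0_dt = np.gradient(i0_percent, dt_hours)  # %/hour
    return di0_dt / V_SW                        # %/AU

# Hypothetical example: a linear 0.05 %/hour rise in I_0
i0 = 0.05 * np.arange(6.0)
gx = gradient_from_density(i0)
# 0.05 %/hour / (9.6e-3 AU/hour) gives roughly 5.2 %/AU
```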
It is seen in the black curves in Figures 2e-f that $G_{x}(t)$ and $G_{y}(t)$
change their signs from negative to positive around the time of the minimum
$I_{0}(t)$ in the MFR period, while $G_{z}(t)$ remains negative. This is
qualitatively consistent with the MFR orientation (the elevation angle of the
MFR is about $-51^{\circ}$ and the azimuthal angle is about $299^{\circ}$) and
the Grad-Shafranov plot presented in Chen et al. (2019), indicating that the
cosmic-ray density minimum region formed along the MFR axis passed north of
Earth approaching from the sunward direction and then leaving (Kuwabara et
al., 2004).
Other notable features of $\bm{G}(t)$ are the positive enhancements of
$G_{z}(t)$ and $G_{y}(t)$ in the later MFR period between 03:00 UT and 09:00
UT on 26 August (blue shaded period in Figures 1 and 2) when the orientation
of the strongest IMF turned northeast after the maximum of $I_{0}(t)$ observed
in the hump in Figure 2a. This indicates that a region with higher $I_{0}(t)$
exists northeast of Earth ($y_{GSE}>0$, $z_{GSE}>0$) and a significant
diffusion anisotropy from there is observed along the IMF.
Finally, we note a large variation of $G_{x}(t)$ around the maximum of
$I_{0}(t)$ in the hump. In particular, $G_{x}(t)$ with a magnitude of 4%/AU
changes its sign from positive to negative in four hours. In the next section,
we will discuss the physical origin of the hump of $I_{0}(t)$ and associated
$\bm{G}(t)$.
## 5 Discussion
As discussed in Section 2, the MFR in this event is followed by a gradual
increase of the solar wind speed ($V_{SW}$) starting at $\sim$09:00 UT on 26
August. This increase of $V_{SW}$ is consistent with a typical stream
interface, as identified by the proton temperature increase, proton density
decrease, and a negative-to-positive flip of the flow angle seen in Figure 1
(Kataoka and Miyoshi, 2006). There are two particularly interesting points in
this event. Firstly, stream interfaces are usually formed in corotating interaction
regions, which clearly separate the slow and fast solar wind streams. In this
event, however, the slow solar wind was replaced by a slow MFR. The existence
of such a clear discontinuity at the trailing edge of the MFR therefore
suggests a large-scale compression of the trailing part of the MFR by the
following ambient solar wind. Secondly, a secondary enhancement is seen in the
solar wind speed at $\sim$13:00 UT on 26 August as indicated by the orange
vertical lines in Figures 1 and 2, which again shows a stream interface-like
variation, as identified by an increase of proton temperature and a decrease
of proton density. Such doublet structures in the solar wind speed enhancement
are often observed in corotating interaction regions in association with
longitudinally elongated and complex coronal hole(s), as was also discussed in
Abunin et al. (2020). We also note in Figure 2a that the cosmic-ray density
$I_{0}(t)$ starts decreasing after the MFR period. This is again consistent
with cosmic-ray intensity variation observed in the corotating interaction
regions where $I_{0}(t)$ peaks near the stream interface and then starts
falling in the leading edge of the high-speed stream (Richardson, 2004).
In the preceding section, we reported a significant increase (a hump) of
cosmic-ray density $I_{0}(t)$ observed in the GMDN data near the end of the
MFR period. Recently, Abunin et al. (2020) have also analyzed cosmic-ray data
observed by neutron monitors in August 2018. Neutron monitors have their
maximum response to cosmic rays with $P_{m}\sim 10$ GV which is $\sim 1/5$ of
$P_{m}$ for muon detectors. While they found no clear increase in $I_{0}(t)$,
they reported large increases, called “bursts”, in count rates of some neutron
monitors near the trailing edge of the MFR and attributed these “bursts” to
the enhancement of $\xi_{z}(t)$ and the geomagnetic storm occurring at the
same time (Abunin et al., 2020). They did not present the density gradient
($\bm{G}(t)$) deduced from the observed anisotropy. When the geomagnetic field
is weakened during the storm and the geomagnetic cutoff rigidity ($P_{c}$) of
cosmic rays is reduced, allowing more low-energy particles to reach the ground
level detectors, the asymptotic viewing direction of a neutron monitor is
changed due to the reduced magnetic deflection of cosmic-ray orbits in the
magnetosphere. A similar idea was also presented by Mohanty et al. (2016) to
interpret the “cosmic-ray short burst” observed by a muon detector in June
2015. Based on calculations of cosmic-ray trajectory in the latest model of
geomagnetic field, however, analyses of the same event by Munakata et al.
(2018) showed that the reductions of $P_{c}$ and the magnetic deflection of
cosmic-ray orbits are not enough to cause the observed intensity increase of
60 GeV cosmic rays monitored by the GMDN, because the GMDN only has a small
response to cosmic rays with rigidities around $P_{c}$. They attributed the
burst to enhancements of $I_{0}(t)$ and $\bm{\xi}(t)$ outside the
magnetosphere caused by Earth’s crossing the heliospheric current sheet
(Munakata et al., 2018).
The observation that, in the blue shaded region, $I_{0}$ rose above its
undisturbed level suggests that GCRs gained energy relative to the quiet
period. A plausible cause of the energy gain may be a compression of the
plasma occurring in the MFR. In Appendix A we outline a preliminary model
considering this option. The model considers the energy gain in a slab of
thickness $2d$ parallel to the $yz$ plane, which contains the magnetic field,
and perpendicular to the solar wind flowing in the $-x$ direction. The model also assumes
that the global expansion of the MFR is not affected by the local compression
because the expansion continues on the leading side of the MFR. This model
quantitatively reproduces the observed time profiles of $I_{0}(t)$ and
$G_{x}(t)$ during the blue shaded period by using the plasma density
enhancement in Figure 1b as a proxy of the local compression rate (see Figure
A1 in Appendix). However, it does not take into account transport to/from the
remaining part of the MFR which is magnetically connected to the trailing
edge. The model in Appendix A, therefore, would be unphysical if the MFR of
interest is an “ideal” axisymmetric expanding cylinder such as those analyzed
by Kuwabara et al. (2004) (Munakata et al., 2006). Figure 2, however, suggests
that this event is quite peculiar and far from an “ideal” MFR.
By analyzing GMDN observations of 11 CME events using the expanding
axisymmetric cylinder model, Kuwabara et al. (2009) showed that $I_{0}(t)$
typically starts decreasing after the beginning of the MFR period and reaches
its minimum at about one half or one third of the MFR period when Earth passes
the closest point to the MFR axis, while $G_{x}(t)$ changes its sign from
negative to positive at the time. $I_{0}(t)$ in Figure 2a, however, reaches
its minimum much earlier, only a few hours after the start of the MFR period
and $G_{x}(t)$ in Figure 2e changes sign at the same time. $I_{0}(t)$ then
starts recovering almost monotonically toward the peak in the blue shaded
period. We think that these are indications of cosmic-ray heating (and
cooling) in operation over an entire MFR which cannot be reproduced by the
simple slab model in this paper. The stark peculiarity of this MFR is also
seen in Figure 1 and in the Grad-Shafranov plot in Figure 9 of Chen et al.
(2019) in which the MFR core is shifted close to the trailing edge being
surrounded by tightly wound magnetic field lines. To fully understand the
observed $I_{0}(t)$ and $\bm{G}(t)$ reflecting the modification of the MFR,
therefore, a more realistic and detailed model is needed, taking into account
adiabatic heating/cooling together with parallel and perpendicular diffusion
of cosmic rays in an entire three-dimensional MFR. This is planned for our
future investigation.
Dynamic evolution of the MFR, i.e., simple adiabatic heating of cosmic rays,
is a possible cause of the cosmic-ray increase observed near the trailing edge
of the MFR in this paper, as discussed above, although we are not aware of any
literature reporting compressive heating of plasma or energetic
ions/electrons near the trailing edge of an MFR. On the other hand, there
are other possibilities to observe the cosmic-ray increase, because steady
structures such as the heliospheric current sheet and corotating interaction
region affect the drift of cosmic rays and can also modulate the large-scale
spatial distribution of cosmic-ray density and anisotropy (Okazaki et al.,
2008; Fushishita et al., 2010). This study therefore provides an important
clue to examine cosmic-ray diffusions parallel and perpendicular to the IMF,
and the dependence on the IMF magnitude in great detail, via close
collaborations with the drift-model simulations of cosmic-ray transport
(Miyake et al., 2017) and the cutting-edge MHD simulations (Shiota and
Kataoka, 2016; Matsumoto et al., 2019). After all, the weak solar wind
condition provided a unique opportunity to study the cosmic-ray increase in
this paper.
We learned from this study that the GMDN observed the cosmic-ray density
increase and the associated gradient as evidence of compression of the MFR by
the faster following solar wind, which made this peculiar event geoeffective.
This evidence is unique and independent of other observations, including
in-situ measurements, and demonstrates the value of cosmic-ray measurements
for understanding the physics behind space weather forecasting.
## Appendix A Slab model of the adiabatic heating of cosmic rays
In this section, we consider the cosmic ray transport in the blue shaded
period in Figures 1 and 2 when a “hump” of $I_{0}(t)$ is observed in Figure
2a. We model the solar wind velocity $\bm{V}_{SW}$ as
$\bm{V}_{SW}=\bar{\bm{V}}_{SW}+\bm{u}$ (12)
where $\bar{\bm{V}}_{SW}$ is the average velocity and $\bm{u}$ is an
additional compression or expansion velocity. The observed temporal variation
of $\bm{V}_{SW}$ during the blue shaded period in Figure 1a shows first a
gradual decrease and then an increase near the end of the period. This is
qualitatively consistent with a compression velocity $\bm{u}$ directed inward
into the blue shaded area from both sides of a point where $\bm{u}$ becomes
zero, because such a $\bm{u}$ increases (decreases) $\bm{V}_{SW}$ on the side
of that point closer to (farther from) the Sun. The point where $\bm{u}$
becomes zero appears shifted toward the later part of the period, probably
because $\bar{\bm{V}}_{SW}$ is increasing (accelerating). By using the
phase-space density of cosmic rays
$f(\bm{r},p,t)=U(\bm{r},p,t)/(4\pi p^{2})$, Eq.(4) is written as
$\frac{\partial f}{\partial
t}=\bm{\nabla}\cdot(\bm{\kappa}\cdot\bm{\nabla}f)-\bm{u}\cdot\bm{\nabla}f+\frac{1}{3}(\bm{\nabla}\cdot\bm{u})p\frac{\partial
f}{\partial p}.$ (13)
For $f(x,p,t)$ in one-dimension perpendicular to the mean magnetic field, this
equation becomes
$\frac{\partial f}{\partial t}=\kappa_{\perp}\frac{\partial^{2}f}{\partial
x^{2}}-u\frac{\partial f}{\partial x}+\frac{1}{3}\frac{\partial u}{\partial
x}p\frac{\partial f}{\partial p},$ (14)
where $x$ is the position measured from the center of the slab toward the Sun
and $\kappa_{\perp}=\frac{1}{3}\lambda_{\perp}v$ is the spatially uniform
perpendicular diffusion coefficient.
By assuming the self-similar compression of the slab, we set
$u(x,t)=-u_{c}\frac{x}{d(t)},$ (15)
where $d(t)$ is the half-thickness of the slab at time $t$ defined as
$d(t)=d_{0}(1-\frac{t}{t_{r}})$ (16)
with $d_{0}$ denoting $d(t)$ at $t=0$. $t_{r}(>0)$ is the reference time at
which $d(t)$ would shrink to zero, but we assume that this does not happen
during the event period discussed here. The physical role of $t_{r}$ is that
its inverse sets the relative compression rate, giving the magnitude of the
constant compression velocity at $x=d(t)$ as
$u_{c}=|\frac{dd(t)}{dt}|=\frac{d_{0}}{t_{r}}$. In this paper, we do not
include the adiabatic cooling due to the large-scale expansion of the solar
wind, which operates on a longer time scale, but focus on the effect of the
local compression at $0<t\ll t_{r}$. By substituting Eq.(15) into Eq.(14), we
obtain
$\frac{\partial f}{\partial t}=\kappa_{\perp}\frac{\partial^{2}f}{\partial
x^{2}}+u_{c}\frac{x}{d}\frac{\partial f}{\partial
x}-\frac{u_{c}}{d}\frac{1}{3}p\frac{\partial f}{\partial p}.$ (17)
We replace $x$ with a dimensionless variable $s$, defined as
$s(x,t)=\frac{x}{d(t)},$ (18)
where $-1\leq s\leq 1~{}(-d\leq x\leq d)$ and we get an equation for
$f_{s}(s,p,t)=f(x,p,t)$ as
$\frac{\partial f_{s}}{\partial
t}=\frac{\kappa_{\perp}}{d^{2}}\frac{\partial^{2}f_{s}}{\partial
s^{2}}-\frac{u_{c}}{d}\frac{1}{3}p\frac{\partial f_{s}}{\partial p},$ (19)
by using
$\frac{\partial f}{\partial t}=\frac{\partial f_{s}}{\partial
t}+\frac{\partial s}{\partial t}\frac{\partial f_{s}}{\partial
s}=\frac{\partial f_{s}}{\partial t}+\frac{s}{d}u_{c}\frac{\partial
f_{s}}{\partial s}.$ (20)
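The chain-rule step in Eq.(20) can be spot-checked numerically. This is our own illustration: the slab parameters and the smooth test profile $f_{s}(s,t)$ are hypothetical, chosen only to verify that $\partial f/\partial t$ at fixed $x$ equals $\partial f_{s}/\partial t$ plus the advective term $(su_{c}/d)\,\partial f_{s}/\partial s$.

```python
# Numerical check of Eqs.(18)-(20): for f(x,t) = f_s(x/d(t), t) with
# d(t) = d0*(1 - t/t_r), df/dt at fixed x picks up (s*u_c/d) * df_s/ds.
import math

d0, t_r = 0.05, 10.0                # hypothetical slab parameters (AU, hours)
u_c = d0 / t_r                      # constant compression speed |dd/dt|
d = lambda t: d0 * (1.0 - t / t_r)  # Eq.(16)

# hypothetical smooth test profile f_s(s, t)
f_s = lambda s, t: math.exp(-s * s) * (1.0 + 0.3 * t)
f = lambda x, t: f_s(x / d(t), t)

x0, t0, h = 0.02, 2.0, 1e-6
s0 = x0 / d(t0)

# left-hand side of Eq.(20): df/dt at fixed x (central difference)
df_dt = (f(x0, t0 + h) - f(x0, t0 - h)) / (2 * h)
# right-hand side: df_s/dt at fixed s, plus (s*u_c/d) * df_s/ds
dfs_dt = (f_s(s0, t0 + h) - f_s(s0, t0 - h)) / (2 * h)
dfs_ds = (f_s(s0 + h, t0) - f_s(s0 - h, t0)) / (2 * h)
rhs = dfs_dt + (s0 * u_c / d(t0)) * dfs_ds
# the two sides agree to numerical precision
```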
For $f_{s}$ in the steady state, we get
$\frac{\partial^{2}f_{s}}{\partial
s^{2}}=\frac{u_{c}d}{\kappa_{\perp}}\frac{1}{3}p\frac{\partial f_{s}}{\partial
p}.$ (21)
We finally assume a single power-law dependence on $p$ for $f_{s}$ outside the
slab, as
$f_{s}(s,p)=p^{-(2+\gamma)}F(s)$ (22)
with $\gamma$ denoting the exponent of the momentum spectrum ($U\propto
p^{-\gamma}$) set equal to 2.7 and obtain the equation to be solved, as
$\frac{d^{2}F}{ds^{2}}=-\frac{2+\gamma}{3\kappa_{0}}F$ (23)
where $\kappa_{0}$ is a dimensionless parameter defined as
$\kappa_{0}=\frac{\kappa_{\perp}}{u_{c}d}.$ (24)
By assuming a uniform compression of the plasma with the density $N_{p}$ in
the slab, which keeps $d(t)N_{p}$ constant, we replace $u_{c}$ in Eq.(24)
with $d(t)\frac{1}{N_{p}}\frac{dN_{p}}{dt}$ and get
$\kappa_{0}=\frac{\kappa_{\perp}N_{p}}{d^{2}\frac{dN_{p}}{dt}}=\frac{1}{3}\lambda_{\perp}v\frac{1}{d(t)^{2}\frac{d\log
N_{p}}{dt}}$ (25)
where $\lambda_{\perp}$ is the mean free path of perpendicular diffusion and
$v$ is the velocity of cosmic-ray particles. Eq.(23) can be written using the
spatial density gradient $G_{x}=\frac{1}{F}\frac{dF}{dx}$ with $x$ as
$-\frac{{(}2+\gamma{)}}{\lambda_{\perp}v}\frac{d\log
N_{p}}{dt}=\frac{1}{F}\frac{d^{2}F}{dx^{2}}=\frac{1}{F}\frac{d(FG_{x})}{dx}=G_{x}^{2}+\frac{dG_{x}}{dx}\approx\frac{dG_{x}}{dx},$
(26)
because $G_{x}$ is at most 5%/AU$=5\times 10^{-2}$/AU, so that $G_{x}^{2}$ is
much smaller than $\frac{\partial G_{x}}{\partial x}$, which is
$\sim 1$/AU${}^{2}$ (see Figure A1b below). By integrating Eq.(26) over $x$, we get
$G_{x}\approx-\frac{{(}2+\gamma{)}}{\lambda_{\perp}v}\frac{d\log N_{p}}{dt}x,$
(27)
and
${I_{0}}\approx\frac{{(}2+\gamma{)}}{2\lambda_{\perp}v}\frac{d\log
N_{p}}{dt}{(d^{2}-x^{2})},$ (28)
where we set $I_{0}=0$ at $x=\pm d$ as a boundary condition.
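That Eq.(28) solves $dI_{0}/dx=G_{x}$ with $G_{x}=-kx$ from Eq.(27) and the boundary condition $I_{0}=0$ at $x=\pm d$ can be verified with a quick numerical check. This is our own sketch; here $k$ stands for $\frac{2+\gamma}{\lambda_{\perp}v}\frac{d\log N_{p}}{dt}$ and its value is chosen only for illustration.

```python
# Verify that I0(x) = (k/2)*(d^2 - x^2) (Eq.28) has dI0/dx = -k*x (Eq.27)
# and vanishes at the slab boundaries x = +/- d.
import numpy as np

k = 1.47          # hypothetical gradient slope in 1/AU^2
d = 4.3e-2        # slab half-thickness in AU

x = np.linspace(-d, d, 2001)
G_x = -k * x                      # Eq.(27)
I0 = 0.5 * k * (d**2 - x**2)      # Eq.(28)

# dI0/dx should reproduce G_x (central differences are exact for a quadratic),
# and I0 should vanish at the boundaries
dI0_dx = np.gradient(I0, x)
```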
The steady-state solution of $G_{x}$ in Eq.(27) is a linear function of the
distance $x=V_{SW}dt$, where $dt$ is the time relative to the passage of the
center of the slab. In the discussion here, we assume the thickness of the
slab to be constant, ignoring its temporal variation during the blue shaded
period. This linear behavior is seen in Figure A1b, showing the observed
$G_{x}$, which can be fitted by a linear function of $x$ calculated with a
constant $V_{SW}=400$ km/s and $dt$ measured relative to 05:16 UT on 26
August, when the best-fit line crosses the horizontal axis. The slope of this
best-fit line is $-1.47\times
10^{2}(\%/\rm{AU}^{2})$. By using $\lambda_{\perp}=$0.010 AU and $\gamma=2.7$
in Eq.(27), we get $\frac{d\log N_{p}}{dt}=2.2\times 10^{-2}$/hour, necessary
for the heating, while the average $\frac{d\log N_{p}}{dt}$ calculated from
hourly mean of the observed $N_{p}$ in Figure A1b is $(5.5\pm 6.4)\times
10^{-2}$/hour, being consistent with the value for the heating within errors
(hourly mean $N_{p}$ is available in S2). Eq.(28), on the other hand, predicts
$I_{0}$ to be $I_{0}\propto G_{x}x$, while the observed $I_{0}$ and its peak
value of $\sim 0.13$ % can be fitted by a quadratic function of $x$ with a
best-fit parameter $d=4.3\times 10^{-2}$ AU as shown in Figure A1a.
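The numbers quoted above can be reproduced in a few lines. This is our own check, not the authors' code; the only extra assumption is $v\approx c$ for the $\sim$60 GeV cosmic rays observed by the GMDN.

```python
# Compression rate inferred from the best-fit slope of G_x via Eq.(27),
# slope = -(2+gamma)/(lam_perp*v) * dlogNp/dt, and the peak density from
# Eq.(28) at x = 0.
AU_KM = 1.495978707e8
v = 2.99792458e5 * 3600.0 / AU_KM   # speed of light in AU/hour (~7.2)

slope = -1.47e2 / 100.0   # best-fit slope of G_x: %/AU^2 -> 1/AU^2
lam_perp = 0.010          # perpendicular mean free path (AU)
gamma = 2.7               # momentum spectral index

dlogNp_dt = -slope * lam_perp * v / (2.0 + gamma)
# roughly 2.2e-2 per hour, as quoted in the text

d = 4.3e-2                                # best-fit half-thickness (AU)
I0_peak = 0.5 * (-slope) * d**2 * 100.0   # Eq.(28) at x = 0, back to %
# roughly 0.14 %, close to the observed peak of ~0.13 %
```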
Figure A1: Cosmic-ray density and density gradient compared with expectations
from the adiabatic heating in a slab. The panels display (a) the cosmic-ray
density $I_{0}$ and (b) the density gradient $G_{x}$, as a function of $x$ measured
from the center of the slab which is defined at 05:16 UT on 26 August from the
zero-cross point of the best-fit curve in (b) (see the observed time on the
upper horizontal axis). In this figure, we converted the time $dt$ measured
from the center of the slab to the distance $x$ by $x=V_{SW}\times dt$
assuming a constant solar wind velocity of $V_{SW}=400$ km/s. The best-fit
curves (blue curves) shown in panels (a) and (b) are $I_{0}(\%)=7.3\times
10^{1}(\%/\rm{AU}^{2})\times((4.3\times 10^{-2}\rm{AU})^{2}$$-x^{2})$ and
$G_{x}(\%/\rm{AU})=-1.47\times 10^{2}(\%/\rm{AU}^{2})$$\times x$ (AU),
respectively (see text).
###### Acknowledgements.
This work is supported in part by the joint research programs of the National
Institute of Polar Research, in Japan, the Institute for Space-Earth
Environmental Research (ISEE), Nagoya University, and the Institute for Cosmic
Ray Research (ICRR), University of Tokyo. The observations are supported by
Nagoya University with the Nagoya muon detector, by INPE and UFSM with the São
Martinho da Serra muon detector, by the Australian Antarctic Division with the
Hobart muon detector, and by project SP01/09 of the Research Administration of
Kuwait University with the Kuwait City muon detector. Global Muon Detector
Network data are available at the website
(http://cosray.shinshu-u.ac.jp/crest/DB/Public/main.php) of the Cosmic Ray
Experimental Science Team (CREST) of Shinshu University. The authors
gratefully acknowledge the NOAA Air Resources Laboratory (ARL) for the
provision of GDAS data, which are available at READY website
(http://www.ready.noaa.gov) and used in this paper. The Wind spacecraft data
were obtained via the NASA homepage and the hourly $D_{ST}$ index is provided
by the WDC for Geomagnetism, Kyoto, Japan. N. J. S. thanks the Brazilian
Agency - CNPq for the fellowship under grant number 300886/2016-0 and C.R.B.
acknowledges grants #2014/24711-6 and #2017/21270-7 from São Paulo Research
Foundation (FAPESP). EE would like to thank Brazilian funding agencies for
research grants FAPESP (2018/21657-1) and CNPq (PQ-301883/2019-0).
## References
* Abunin et al. (2020) Abunin, A. A., M. A. Abunina, A. V. Belov, and I. M. Chertok (2020), Peculiar Solar Sources and Geospace Disturbances on 20-26 August 2018, Solar Phys., 295, 7, doi:10.1007/s11207-019-1574-8.
* Bieber and Evenson (1998) Bieber, J. W., and P. Evenson (1998), CME Geometry in Relation to Cosmic Ray Anisotropy, Geophys. Res. Lett, 25, 2955, doi:10.1029/98GL51232.
* Bieber et al. (2004) Bieber, J. W., W. H. Matthaeus, and A. Shalchi (2004), Nonlinear guiding center theory of perpendicular diffusion: General properties and comparison with observation, Geophys. Res. Lett, 31, L10805, doi:10.1029/2004GL020007.
* Cane (2000) Cane, H. (2000), Coronal mass ejections and Forbush decreases, Space Sci. Rev., 93, 55, doi:10.1023/A:1026532125747.
* Chen et al. (2019) Chen, C., Y. D. Liu, R. Wang, X. Zhao, H. Hu, and B. Zhu (2019), Characteristics of a Gradual Filament Eruption and Subsequent CME Propagation in Relation to a Strong Geomagnetic Storm, Astrophys. J., 884, 90, doi:10.3847/1538-4357/ab3f36.
* Dal Lago et al. (2006) Dal Lago, A., W. D. Gonzalez, L. A. Balmaceda, L. E. A. Vieira, E. Echer, F. L. Guarnieri, J. Santos, M. R. da Silva, A. de Lucas, A. L. C. de Gonzalez, R. Schwenn, and N. J. Schuch (2006), The 17-22 October (1999) solar-interplanetary-geomagnetic event: Very intense geomagmetic storm associated with a pressure balance between interplanetary coronal mass ejection and a high-speed stream, J. Geophys. Res., 111, A07S14, doi:10.1029/2005JA011394.
* Fushishita et al. (2010) Fushishita, A., Y. Okazaki, T. Narumi, C. Kato, S. Yasue, T. Kuwabara, J. W. Bieber, P. Evenson, M. R. Da Silva, A. Dal Lago, N. J. Schuch, M. Tokumaru, M. L. Duldig, J. E. Humble, I. Sabbah, J. Kóta, and K. Munakata (2010), Drift effects and the average features of cosmic ray density gradient in CIRs during successive two solar minimum periods, Advances in Geosciences, 21, 199, doi:10.1142/9789812838209_0016.
* Forbush (1937) Forbush, S. E. (1937), On the effects in cosmic-ray intensity observed during the recent magnetic storm, Phys. Rev., 51, 1108, doi:10.1103/PhysRev.51.1108.3.
* Gleeson (1969) Gleeson, L. J. (1969), The equations describing the cosmic-ray gas in the interplanetary region, Planet. Space Sci., 17, 31, doi:10.1016/0032-0633(69)90121-4.
* Kataoka and Miyoshi (2006) Kataoka, R., and Y. Miyoshi (2006), Flux enhancement of radiation belt electrons during geomagnetic storms driven by coronal mass ejections and corotating interaction regions, Space Weather, 4, S09004, doi:10.1029/2005SW000211.
* Kataoka et al. (2015) Kataoka, R., D. Shiota, E. Kilpua, and K. Keika (2015), Pileup accident hypothesis of magnetic storm on 2015 March 17, Geophys. Res. Lett, 42, 5155-5161, doi:10.1002/2015GL064816.
* Kozai et al. (2016) Kozai, M., K. Munakata, C. Kato, T. Kuwabara, M. Rockenbach, A. Dal Lago, N. J. Schuch, C. R. Braga, R. R. S. Mendonça, H. K. Al Jassar, M. M. Sharma, M. L. Duldig, J. E. Humble, P. Evenson, I. Sabbah, and M. Tokumaru (2016), Average spatial distribution of cosmic rays behind the interplanetary shock, Astrophys. J., 825, 100, doi:10.3847/0004-637X/825/2/100.
* Krittinatham and Ruffolo (2009) Krittinatham, W., and D. Ruffolo (2009), Drift orbits of energetic particles in an interplanetary magnetic flux rope, Astrophys. J., 704, 831, doi:10.1088/0004-637X/704/1/831.
* Kuwabara et al. (2004) Kuwabara, T., K. Munakata, S. Yasue, C. Kato, S. Akahane, M. Koyama, J. W. Bieber, P. Evenson, R. Pyle, Z. Fujii, M. Tokumaru, M. Kojima, K. Marubashi, M. L. Duldig, J. E. Humble, M. Silva, N. Trivedi, W. Gonzalez, and N. J. Schuch (2004), Geometry of an interplanetary CME on October 29, 2003 deduced from cosmic rays, Geophys. Res. Lett, 31, L19803, doi:10.1029/2004GL020803.
* Kuwabara et al. (2009) Kuwabara, T., J. W. Bieber, P. Evenson, K. Munakata, S. Yasue, C. Kato, A. Fushishita, M. Tokumaru, M. L. Duldig, J. E. Humble, M. R. Silva, A. Dal Lago, and N. J. Schuch (2009), Determination of ICME Geometry and Orientation from Ground Based Observations of Galactic Cosmic Rays, J. Geophys. Res., 114, A05109, doi:10.1029/2008JA013717.
* Liu et al. (2014) Liu, Y., J. Luhmann, P. Kajdič, E. K. J. Kilpua, N. Lugaz, N. V. Nitta, C. Möstl, B. Lavraud, S. D. Bale, C. J. Farrugia, and A. B. Galvin (2014), Observations of an extreme storm in interplanetary space caused by successive coronal mass ejections, Nat Commun 5, 348, doi:10.1038/ncomms4481.
* Matsumoto et al. (2019) Matsumoto, T., D. Shiota, R. Kataoka, H. Miyahara, and S. Miyake (2019), A dynamical model of the heliosphere with the Adaptive Mesh Refinement, Phys.: Conf. Ser., 1225, 1225, 012008, doi:10.1088/1742-6596/1225/1/012008.
* Mendonça et al. (2016) Mendonça, R. R. S., C. R. Braga, E. Echer, A. Dal Lago, K. Munakata, T. Kuwabara, M. Kozai, C. Kato, M. Rockenbach, N. J. Schuch, H. K. Al Jassar, M. M. Sharma, M. Tokumaru, M. L. Duldig, J. E. Humble, P. Evenson, and I. Sabbah (2016), Temperature effect in secondary cosmic rays (muons) observed at ground: analysis of the global muon detector network data, Astrophys. J., 830, 88, doi:10.3847/0004-637X/830/2/88.
* Miyake et al. (2017) Miyake, S., R. Kataoka, and T. Sato (2017), Cosmic ray modulation and radiation dose of aircrews during the solar cycle 24/25, Space Weather, 15(4), 589-605, doi:10.1002/2016SW001588.
* Mohanty et al. (2016) Mohanty, P. K., K. P. Arunbabu, T. Aziz, S. R. Dugad, S. K. Gupta, B. Hariharan, P. Jagadeesan, A. Jain, S. D. Morris, B. S. Rao, Y. Hayashi, S. Kawakami, A. Oshima, S. Shibata, S. Raha, P. Subramanian, and H. Kojima (2016), Transient Weakening of Earth’s Magnetic Shield Probed by a Cosmic Ray Burst, Phys. Rev. Lett., 117, 171101, doi:10.1103/PhysRevLett.117.171101.
* Munakata et al. (2006) Munakata, K., S. Yasue, C. Kato, J. Kota, M. Tokumaru, M. Kojima, A. A. Darwish, T. Kuwabara, and J. W. Bieber (2006), On the cross-field diffusion of galactic cosmic rays into the magnetic flux rope of a CME, Advances in Geosciences, 21, 115, doi:10.1142/9789812707185.
* Munakata et al. (2018) Munakata, K., M. Kozai, P. Evenson, T. Kuwabara, C. Kato, M. Tokumaru, M. Rockenbach, A. Dal Lago, R. R. S. Mendonca, C. R. Braga, N. J. Schuch, H. K. Al Jassar, M. M. Sharma, M. L. Duldig, J. E. Humble, I. Sabbah, and J. Kota (2018), Cosmic Ray Short Burst Observed with the Global Muon Detector Network (GMDN) on June 22, 2015, Astrophys. J., 862, 170, doi:10.3847/1538-4357/aacdfe.
* Murakami et al. (1979) Murakami, K., K. Nagashima, S. Sagisaka, Y. Mishima, and A. Inoue (1979), Response Functions for Cosmic-Ray Muons at Various Depths Underground, IL NUOVO CIM., 2C, 635, doi:10.1007/BF02557762.
* Okazaki et al. (2008) Okazaki, Y., A. Fushishita, T. Narumi, C. Kato, S. Yasue, T. Kuwabara, J. W. Bieber, P. Evenson, M. R. Da Silva, A. Dal Lago, N. J. Schuch, Z. Fujii, M. L. Duldig, J. E. Humble, I. Sabbah, J. Kóta, and K. Munakata (2008), Drift effects and the cosmic ray density gradient in a solar rotation period: First observation with the Global Muon Detector Network (GMDN), Astrophys. J., 681, 693, doi:10.1086/588277.
* Parker (1965) Parker, E. N. (1965), The passage of energetic charged particles through interplanetary space, Planet. Space Sci., 13, 9, doi:10.1016/0032-0633(65)90131-5.
* Richardson (2004) Richardson, I. G. (2004), Energetic particles and corotating interaction regions in the solar wind, Space Sci Rev., 111, 267, doi:10.1023/B:SPAC.0000032689.52830.3e.
* Rockenbach et al. (2014) Rockenbach, M., A. Dal Lago, N. J. Schuch, K. Munakata, T. Kuwabara, A. G. Oliveira, E. Echer, C. R. Braga, R. R. S. Mendonça, C. Kato, M. Kozai, M. Tokumaru, J. W. Bieber, P. Evenson, M. L. Duldig, J. E. Humble, H. K. Al Jassar, M. M. Sharma, and I. Sabbah (2014), Global muon detector network used for space weather applications, Space Sci Rev., 182, 1, doi:10.1007/s11214-014-0048-4.
* Shiota and Kataoka (2016) Shiota, D., and R. Kataoka (2016), Magnetohydrodynamic simulation of interplanetary propagation of multiple coronal mass ejections with internal magnetic flux rope (SUSANOO-CME), Space Weather, 14, 56-75, doi:10.1002/2015SW001308.
* Tokumaru et al. (2007) Tokumaru, M., M. Kojima, K. Fujiki, M. Yamashita, and B. V. Jackson (2007), The source and propagation of the interplanetary disturbance associated with the full-halo coronal mass ejection on 28 October 2003, J. Geophys. Res., 112, A05106, doi:10.1029/2006JA012043.
* Tortermpun et al. (2018) Tortermpun, U., D. Ruffolo, and J. W. Bieber (2018), Galactic cosmic-ray anisotropy during the Forbush decrease starting 2013 April 13, Astrophys. J. Lett., 852, L26, doi:10.3847/2041-8213/aaa407.
* Wibberenz et al. (1998) Wibberenz, G., J. A. Le Roux, M. S. Potgieter, and J. W. Bieber (1998), Transient effects and disturbed conditions, Space Sci. Rev., 83, 309, doi:10.1023/A:1005083109827.
# COVID-19 propagation by diffusion - a two-dimensional approach for Germany
Günter<EMAIL_ADDRESS>
Technische Universität Berlin
###### Abstract
Diffusion occurs at all times and everywhere. Whenever there is a gradient or
a potential difference of some quantity, a diffusion process takes place, and
it ends only when an equilibrium is reached. The concentration of a chemical
species may be such a quantity, or a voltage; an electric current, for
example, is driven by a voltage difference.
In the COVID-19 pandemic one observes both regions with low incidence and
regions with high incidence. Locally differing population density could be
one reason for this: in populous areas such as big cities or congested urban
areas, higher COVID-19 incidences are observed than in rural regions.
The aim of this paper is to apply a diffusion concept to describe one
possible aspect of COVID-19 propagation.
This will be discussed for the German situation based on the quite different
incidence data for the different federal states of Germany.
With this ansatz some phenomena of the current development of the pandemic
can be reproduced. The model also provides a way to investigate certain
scenarios, such as border crossings or local spreading events, and their
influence on COVID-19 propagation.
## 1 Introduction and the mathematical model
The mathematical modeling of COVID-19 with SIR-type models ([5], [6]) leads
to averaged results and does not take an uneven population distribution into
account. It is well known, however, that this plays an important role in the
local evolution of the pandemic. By taking the spatially varying population
density into account in a diffusion model, we try to resolve the COVID-19
propagation in a finer manner.
What is a good choice of a quantity to describe the COVID-19 spread? The WHO
and national health institutions measure the COVID-19 spread with the
seven-day incidence (sometimes also the fourteen-day incidence) of infected
people per 100000 inhabitants. In Germany, local health authorities can
control and trace the contact history of infected people as long as the
seven-day incidence is below 50. At the end of December 2020 and the
beginning of January 2021, however, the average incidence was about 140, and
in some hotspot federal states like Saxony greater than 300. If social and
economic life is to be sustained, several routes for transmitting the
COVID-19 virus remain. The following ones should be mentioned:
* •
Commuters and employees on their way to the office or workplace, especially
including medical and nursing staff.
* •
Pupils and teachers at school and on the way to school.
* •
People buying everyday necessities in shopping centers.
* •
Postmen, suppliers and deliverers.
All these activities take place during the so-called lockdown in Germany,
with the result of ongoing propagation of the pandemic. In addition,
decentralized federal Germany lacks a single center of power, which often
leads to solo efforts by individual federal states.
From authoritarian countries like China or Singapore, with quite different
civilizations and cultural traditions than the German ones, it is known that
the virus propagation could be stopped with very rigorous measures like the
strict prohibition of social and economic life (see Fig. 2), i.e. the
activities mentioned above are absolutely forbidden.
Figure 1: German lockdown
Figure 2: Chinese lockdown
This is inconceivable in countries like Germany, Austria, the Netherlands or
other so-called democratic states with a western understanding of freedom and
self-determination (see Fig. 1). But as a consequence of such a western
lifestyle, they have to live with a more or less continuous activity of the
COVID-19 pandemic. And that is the reason for the following attempt to
describe one aspect of the pandemic by a diffusion model. In another context
a similar model was discussed in [4].
In the following diffusion concept the seven-day incidence is denoted by $s$,
and $s$ serves as the quantity that is driven by its gradients between the
different incidence levels of the federal states of Germany. The mathematical
model of diffusion leads to a partial differential equation for the
considered quantity (here $s$)
$\frac{\partial s}{\partial t}=\nabla\cdot(D\nabla
s)+q\quad\mbox{in}\;[t_{0},T]\times\Omega,$ (1)
where $\Omega\subset\mathbb{R}^{2}$ is the region which will be investigated,
for example the national territory of Germany, $D$ is a diffusion coefficient,
depending on the locality $x\in\Omega$, $[t_{0},T]$ is the time interval of
interest, and $q$ is a term which describes sources or sinks of possible
infections.
Besides equation (1), one needs initial conditions for $s$, for example
$s(x,t_{0})=s_{0}(x),\quad x\in\Omega$ (2)
and boundary conditions
$\alpha s+\beta\nabla
s\cdot\vec{n}=\gamma\quad\mbox{in}\;[t_{0},T]\times\partial\Omega\,,$ (3)
where $\alpha,\beta$ and $\gamma$ are real coefficients and by
$\partial\Omega=:\Gamma$ the boundary of the region $\Omega$ will be denoted.
$\nabla_{n}s=\nabla s\cdot\vec{n}$ is the directional derivative of $s$ in the
direction of the outer normal vector $\vec{n}$ on $\Gamma$. The choice of
$\alpha=0$, $\beta=1$ and $\gamma=0$ for example leads to the homogeneous
Neumann boundary condition
$\nabla_{n}s=0\;,$ (4)
which means no flux of $s$ across the boundary $\Gamma$. In other words, (4)
describes closed borders to the surrounding countries outside $\Omega$.
The diffusion coefficient function $D:\Omega\to\mathbb{R}$ determines the
intensity or velocity of the diffusion process. From fluid or gas dynamics
one knows the formula [1]
$D=\frac{2}{3}\bar{v}l$
with the averaged particle velocity $\bar{v}$ and the mean free path $l$.
Applying this ansatz to the movement of people in a certain area requires
assumptions for $\bar{v}$ and $l$. If we consider a circular or quadratic
region of area $A$ with $N$ inhabitants distributed uniformly, $l$ can be
approximated by
$l=\frac{\sqrt{A}}{\sqrt{N}}\;.$
For the velocity $\bar{v}$ we assume $\bar{v}=5000\frac{km}{day}$ ([3]).
Because of the different areas and numbers of inhabitants of the federal
states of Germany, $D$ will be a spatially varying, non-constant function.
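As a numerical illustration, the state-wise diffusion coefficients can be computed directly from the table data. The sketch below is a Python version of the formulas above; it assumes the inhabitant figures of table 1 are meant in thousands (Berlin: 3669, i.e. about 3.669 million) and uses the stated $\bar{v}=5000\,km/day$:

```python
# Diffusion coefficient D = (2/3) * v_bar * l per federal state, with the
# mean free path l = sqrt(A / N). The inhabitant numbers are interpreted
# as thousands (an assumption about the intended unit in table 1).
from math import sqrt

V_BAR = 5000.0  # averaged velocity [km/day], as assumed in the text

STATES = {
    # name: (inhabitants in thousands, area in km^2), values from table 1
    "Schleswig-Holstein": (2904, 15804),
    "Berlin": (3669, 891),
    "Saxony": (4072, 18450),
}

def diffusion_coefficient(inhabitants_thousands, area_km2):
    n = inhabitants_thousands * 1e3
    l = sqrt(area_km2 / n)          # mean free path [km]
    return (2.0 / 3.0) * V_BAR * l  # D [km^2/day]

for name, (n, a) in STATES.items():
    print(f"{name:20s} D = {diffusion_coefficient(n, a):7.1f} km^2/day")
```

Sparsely populated states get a larger mean free path and hence a larger $D$; densely populated city states like Berlin get a smaller one.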
If there are no sources or sinks for $s$, i.e. $q=0$, and the borders are
closed which means the boundary condition (4), the initial boundary value
problem (1), (2), (4) has the steady state solution
$s_{st}=\frac{\int_{\Omega}s_{0}(x)\,dx}{\int_{\Omega}dx}=\mbox{const.}\;.$
(5)
This is easy to verify, and this property is characteristic of diffusion
processes, which tend to an equilibrium. It is quite complicated to model the
source-sink function $q$ in an appropriate way: $q$ depends on the behavior
of the population and on the health policies of the different federal states,
so it is only possible to work with very coarse guesses. It is known that the
people of Schleswig-Holstein are exemplary in following the recommendations
to avoid infection with the COVID-19 virus, which means $q<0$ there. On the
other hand, it is known that in Saxony many people believe there is not a jot
of truth in the pandemic, which meant $q>0$ for a long time (the government
of Saxony has since changed its policy, which leads to $q<0$).
But regardless of these uncertainties one can get information about the
pandemic propagation, for example the influence of hotspots of high incidence
(Saxony) on regions of low incidence (the south of Brandenburg).
## 2 Data of the different federal states of Germany
At the beginning of the year 2021 (January 14th) the Robert-Koch-Institut,
which is responsible for the daily COVID-19 data collection in Germany,
published the seven-day incidence data (of January 14th, 2021, see [2])
summarized in table 1.
The values of table 1 are used as initial data for the function $s_{0}$ of
(2) and as the basis for the determination of the diffusion coefficient
function.
states | 7-day incidence | density | inhabitants | area
---|---|---|---|---
Schleswig-Holstein | 92 | 183 | 2904 | 15804
Hamburg | 115 | 2438 | 1847 | 755
Mecklenburg-West Pomerania | 117 | 69 | 1608 | 23295
Lower Saxony | 100 | 167 | 7994 | 47710
Brandenburg | 212 | 85 | 2522 | 29654
Berlin | 180 | 4090 | 3669 | 891
Bremen | 84 | 1629 | 681 | 419
Saxony-Anhalt | 241 | 109 | 2195 | 20454
Thuringia | 310 | 132 | 2133 | 16202
Saxony | 292 | 221 | 4072 | 18450
Bavaria | 160 | 185 | 13125 | 70542
Baden-Wuerttemberg | 133 | 310 | 11100 | 35784
North Rhine-Westphalia | 131 | 526 | 17947 | 34112
Hesse | 141 | 297 | 6288 | 21116
Saarland | 160 | 385 | 987 | 2571
Rhineland-Palatinate | 122 | 206 | 4094 | 19858
Munich | 156 | 4700 | 1540 | 310
Table 1: 7-day incidence, population density $[/km^{2}]$, inhabitants
$[/100000]$, area of the federal states of Germany $[km^{2}]$
The unit of the diffusion function $D$ is $[km^{2}/day]$. Possible sources or
sinks are gauged in $[/day]$. The incidence $s$ is dimensionless.
## 3 The numerical solution of the initial boundary value problem (1),(2),(4)
Based on the subdivision of $\Omega$ (area of Germany) into finite rectangular
cells $\omega_{j},\,j\in I_{\Omega}$ and $\Omega=\cup_{j\in
I_{\Omega}}\omega_{j}$ the equation (1) will be spatial discretized with a
finite volume method ($I_{\Omega}$ is the index set of the finite volume
cells) . Together with the discrete boundary condition (4) we get a semi-
discrete system continuous in time
$\frac{\partial s_{j}}{\partial
t}=\nabla_{h}\cdot(D\nabla_{h}s_{j})+q_{j}\;,\;j\in I_{\Omega},$ (6)
where the index $h$ denotes the discrete versions of the $\nabla$-operator.
The time discretization is done with an implicit Euler scheme. This allows us
to work without strict restrictions on the choice of the discrete time-step
$\Delta_{t}$. At every time level the linear equation system
$\frac{1}{\Delta_{t}}s_{j}^{n+1}-\nabla_{h}\cdot(D\nabla_{h}s_{j}^{n+1})=\frac{1}{\Delta_{t}}s_{j}^{n}+q_{j}\;,\;j\in
I_{\Omega}\;,$ (7)
has to be solved for $n=0,\dots,N,\,N=(T-t_{0})/\Delta_{t}$. The start value
$s_{j}^{0}$ was set to the initial incidence $s_{0}(x)$ for $x\in\omega_{j}$,
$j\in I_{\Omega}$. The solution of the equation system (7) at a given time
level $n$ is obtained with an iterative method.
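A minimal one-dimensional sketch of one step of scheme (7), assuming a uniform grid, constant $D$, a per-cell source term $q$, the homogeneous Neumann condition (4), and a Jacobi iteration as the iterative solver (the paper does not specify which iterative method is used):

```python
# One implicit Euler step for ds/dt = D * d2s/dx2 + q on a uniform 1D grid
# with homogeneous Neumann (closed) boundaries, solved by Jacobi iteration.

def implicit_euler_step(s, D, q, dt, dx, iters=2000):
    n = len(s)
    r = D * dt / dx ** 2
    new = list(s)
    for _ in range(iters):
        nxt = [0.0] * n
        for j in range(n):
            diag, acc = 1.0, s[j] + dt * q[j]
            if j > 0:            # flux to the left neighbor
                diag += r
                acc += r * new[j - 1]
            if j < n - 1:        # flux to the right neighbor
                diag += r
                acc += r * new[j + 1]
            nxt[j] = acc / diag
        new = nxt
    return new

# Two regions: a high-incidence area next to a low-incidence neighbor.
s0 = [100.0] * 10 + [300.0] * 10
s1 = implicit_euler_step(s0, D=50.0, q=[0.0] * 20, dt=1.0, dx=8.0)
print(round(sum(s1) / len(s1), 6))  # the mean stays 200: mass conservation
```

With closed boundaries and $q=0$ the total amount of $s$ is conserved, consistent with the steady state (5): the incidence merely flows from the high side to the low side of the interface.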
## 4 Numerical simulation results
In fig. 4 the region $\Omega$ is adumbrated. The size of finite volume cells
is $\Delta_{x}\times\Delta_{y}=(8km\times 8km)$.
Figure 3: Germany map
Figure 4: Coarse contour of $\Omega$ and it’s discretization
First we start with the case $q=0$; $\Delta_{t}$ was set to one day. In the
following figs. 5-10 the initial state and the results of the diffusion
process for the development of the seven-day incidence after 100 and 200
time-steps (days) are shown. The left figures show a view from west to east,
and the right figures show the view from north to south. The initial state is
a piecewise constant function with the seven-day incidence values of the 16
federal states, where we consider Munich, as a city with over a million
inhabitants, separately (it was excluded from Bavaria).
Figure 5: Initial distribution of $s$, $n=0$, $q=0$
Figure 6: Initial distribution of $s$, $n=0$, $q=0$
Figure 7: $s$ after 100 time-steps, $q=0$
Figure 8: $s$ after 100 time-steps, $q=0$
Figure 9: $s$ after 200 time-steps, $q=0$
Figure 10: $s$ after 200 time-steps, $q=0$
Especially in the border regions (Saxony - Brandenburg, Saxony - Bavaria,
Saxony - Thuringia) one can observe a transfer of incidence from the high
incidence level of Saxony to the neighboring federal states. Also the high
incidence level of Berlin is transferred to the surrounding area of
Brandenburg. The northern states with a low incidence level are only weakly
influenced by the other states.
With the parameters $\alpha$, $\beta$ and $\gamma$ of the boundary condition
(3) it is possible to describe several situations at the boundary $\Gamma$ of
$\Omega$. $\alpha=0$, $\beta=-D$ and $\gamma\neq 0$ describe a flux through
the border. In the next example such a scenario is used to describe infected
people returning home from Austria to Bavaria. The boundary condition at the
border crossing reads
$-D\nabla s\cdot\vec{n}=\gamma\;.$
The initial state $s_{0}$ was the same as in the example above. $\gamma>0$
means an ”inflow” of infected people, $\gamma<0$ a loss of infected people
($\gamma=0$ describes a closed border). In the following figures 11 and 12
the case $\gamma=0$ is compared with the case $\gamma=50$ ($km/day$). At the
southern boundary of Bavaria one can observe the increase of $s$ caused by
the flux of $s$ from Austria to Bavaria.
Figure 11: $s$ after 200 time-steps, $\gamma=0km/day$, $q=0$
Figure 12: $s$ after 200 time-steps, $\gamma=50km/day$, $q=0$
Figure 13: $s$ after 10 time-steps, $q\neq 0$, based on table 2
Figure 14: $s$ after 20 time-steps, $q\neq 0$, based on table 2
Figure 15: $s$ after 30 time-steps, $q\neq 0$, based on table 2
Figure 16: $s$ after 50 time-steps, $q\neq 0$, based on table 2
states | 12.1.2021 | 13.1.2021 | 14.1.2021 | $\Delta s$ per day
---|---|---|---|---
Schleswig-Holstein | 98 | 94 | 92 | -3
Hamburg | 127 | 120 | 115 | -1
Mecklenburg-West Pomerania | 129 | 122 | 117 | -6
Lower Saxony | 114 | 108 | 100 | -7
Brandenburg | 258 | 230 | 212 | -23
Berlin | 178 | 184 | 180 | 1
Bremen | 86 | 84 | 84 | -1
Saxony-Anhalt | 238 | 232 | 241 | 1.5
Thuringia | 326 | 324 | 310 | -8
Saxony | 342 | 304 | 292 | -25
Bavaria | 159 | 148 | 169 | 0.5
Baden-Wuerttemberg | 139 | 130 | 133 | -3
North Rhine-Westphalia | 149 | 142 | 131 | -9
Hesse | 157 | 150 | 141 | -8
Saarland | 184 | 176 | 160 | -12
Rhineland-Palatinate | 139 | 132 | 132 | -8.5
Munich | 157 | 156 | 156 | 0
Table 2: 7-day incidence, estimated change of $s$ per day $[/day]$
Figure 17: $s$ after 200 time-steps, $q=0$
Figure 18: Infected people after 200 time-steps $q=0$
To get an idea of an appropriate approach for the source-sink term $q$, let
us consider the development of the 7-day incidence on three successive days
in the federal states. The estimated values of the change of $s$ per day are
the results of a linear regression. The change of $s$ per day can be divided
into a part coming from diffusion and another part coming from local
transmission of the virus. We assume a 10% share coming from local
transmission. The figures 13-16 show the simulation results with these
assumptions for $q$ after 10, 20, 30 and 50 time-steps (no border-crossing).
The initial values are taken from table 2, i.e. the $s$-data of 12.01.2021.
Because of the coarse approximation of $q$ it is advisable to update $q$
based on the current data, especially if the local pandemic progression
changes rapidly.
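The estimate described above can be sketched in a few lines. The three-day incidence values come from table 2; attributing 10% of the regression slope to local transmission follows the assumption in the text (the function names are illustrative):

```python
# Estimate the source-sink term q per state: fit a least-squares line to
# the 7-day incidence of three successive days and take 10% of the slope
# as the local-transmission share (assumption from the text; the other
# 90% is attributed to diffusion).

def regression_slope(values):
    """Least-squares slope of values measured at days 0, 1, 2, ..."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

INCIDENCE = {  # 12.1. / 13.1. / 14.1.2021, from table 2
    "Schleswig-Holstein": [98, 94, 92],
    "Berlin": [178, 184, 180],
    "Saxony": [342, 304, 292],
}

for state, vals in INCIDENCE.items():
    q = 0.1 * regression_slope(vals)
    print(f"{state:20s} q = {q:+.2f} per day")
```

The least-squares slopes reproduce the $\Delta s$ per day column of table 2, e.g. $-25$ for Saxony, $-3$ for Schleswig-Holstein and $+1$ for Berlin.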
## 5 Discussion and Conclusion
The examples show the impact of diffusion effects on the propagation of the
COVID-19 pandemic. It must be remarked that these processes are very slow
compared to the virus transmission in a local hotspot cluster. But with the
presented model it is possible to describe the creeping processes that occur
alongside lax measures such as leaky lockdowns.
Especially the pandemic propagation in regions with high incidence gradients
can be described with the discussed diffusion concept.
It would be interesting to continue the work with diffusion models,
especially to investigate spreading events through a refined approximation of
$q$ and the influence of border traffic. The presented model is well suited
for such investigations. But it is important to note that diffusion modeling
captures only a small part of the understanding of the expansion of the
SARS-CoV-2 virus.
Finally, it should be noted that the distribution and propagation of the
incidence must be converted back to the distribution and propagation of the
infected people. Doing this, one can see that the dense peaks of infected
people are located in the metropolitan areas like Berlin, Munich or Hamburg.
Figs. 17 and 18 show the distribution of the seven-day incidence and of the
resulting numbers of infected people (obtained by converting back using the
population densities and the areas of the federal states).
This research received no external funding.
## Acknowledgements
Considering the topic of pandemic modeling, I had some interesting exchanges
of ideas with my friends and colleagues F. Bechstedt, physicist at the
Friedrich-Schiller University Jena, and Reinhold Schneider, mathematician at
the Technical University Berlin. Hearty thanks for this.
## References
* [1] E. L. Cussler, Diffusion - Mass Transfer in Fluid Systems. Cambridge University Press, Cambridge, New York, 1997
* [2] Dashboard of the Robert-Koch-Institut
(www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Fallzahlen.html) 2021.
* [3] E. Bontempi et al., Understanding COVID-19 diffusion requires an interdisciplinary, multi-dimensional approach. Environmental Research 188 (2020), doi: https://doi.org/10.1016/j.envres.2020.109814.
* [4] M. Braack, Quaas, M.F., Tews, B. et al., Optimization of Fishing Strategies in Space and Time as a Non-convex Optimal Control Problem. J Optim Theory Appl 178, 950–972 (2018). https://doi.org/10.1007/s10957-018-1304-7
* [5] W.O. Kermack and A.G. McKendrick, A contribution to the mathematical theory of epidemics. Proc. R. Soc. London A 115(1927)700.
* [6] G. Bärwolff, A Contribution to the Mathematical Modeling of the Corona/COVID-19 Pandemic. medRxiv.preprint 2020, doi: https://doi.org/10.1101/2020.04.01.20050229.
Holographic Entanglement Entropy in flat limit of the Generalized Minimal
Massive Gravity model
M. R. Setare, M. Koohgard
Department of Science,
Campus of Bijar, University of Kurdistan, Bijar, Iran
Previously we studied the Generalized Minimal Massive Gravity (GMMG) in an
asymptotically $AdS_{3}$ background and showed that the theory is free of
negative-energy bulk modes and avoids the aforementioned “bulk-boundary
unitarity clash”. Here, instead of $AdS_{3}$ space, we consider
asymptotically flat space and study this model in the flat limit. The dual
field theory of GMMG in the flat limit is a $BMS_{3}$-invariant field theory,
dubbed BMSFT, and asymptotically we have the BMS algebra instead of the
Virasoro algebra. In this paper we present evidence for this claim. The
entanglement entropy of GMMG is calculated in backgrounds with flat null
infinity. Our evidence for the claim is the agreement between the
entanglement entropy computed on the field theory side and in the bulk (on
the gravity side). First, using the Cardy formula and a Rindler
transformation, we calculate the entanglement entropy of the BMSFT in three
different cases: zero temperature on the plane, zero temperature on the
cylinder, and the non-zero temperature case. Then we obtain the entanglement
entropy in the bulk. Our results on the gravity side are in exact agreement
with the field theory calculations.
Keywords: Entanglement Entropy, Cardy formula, GMMG model, BTZ black hole
## 1 Introduction
Holography [1, 2] is a general property of quantum gravity that relates a
gravity theory living in a bulk to a quantum field theory living on its
boundary. A good probe of holography is the entanglement entropy, which
describes the correlation between the theory in the bulk and the CFT on the
boundary. Entanglement entropy calculations can be carried out in 2D
conformal field theories because the infinite-dimensional Virasoro algebra,
which generates 2D conformal transformations, can be used to simplify the
calculations [3, 4, 5]. For this purpose, the entropy is calculated for a
regularized interval to avoid ultraviolet divergences. The role of holography
is to create an equivalence between a gravity theory and a quantum field
theory without gravity on its boundary. As a result, it is conjectured that
the entanglement entropy of the conformal field theory is given by the area
of an extremal codimension-two surface in AdS [6].
The Rindler transformation method [7] maps entanglement entropy to thermal
entropy with the help of symmetry transformations. In this method the thermal
entropy is computed by the Bekenstein-Hawking entropy, and the thermal
entropy equals the entanglement entropy. This works because, on the boundary
and in the bulk respectively, suitable transformations map the entanglement
entropy to the thermal entropy on a hyperbolic spacetime and map the AdS
vacuum to black holes with a hyperbolic horizon. Then, with the help of the
AdS/CFT dictionary [8, 9, 10], we can calculate the thermal entropy using the
Bekenstein-Hawking entropy and obtain the entanglement entropy.
In recent years, some interesting models of massive gravity in three
dimensions have been introduced. Among these models, a well known one is that
of the generalized minimal massive gravity (GMMG) [11]. This model is realized
by adding the CS deformation term, the higher derivative deformation term, and
an extra term to pure Einstein gravity with a negative cosmological constant.
Usually the theories including the terms given by the square of the curvatures
have the massive spin 2 mode and the massive scalar mode in addition to the
massless graviton. Also the theory has ghosts due to negative energy
excitations of the massive tensor. In [11] it is discussed that the GMMG is
free of negative-energy bulk modes, and also avoids the aforementioned _bulk-
boundary unitarity clash_. Such clash is present in the previously constructed
gravity theories with local degrees of freedom in 2+1-dimensions, namely
Topologically Massive Gravity [12, 13] and the cosmological extension of New
Massive Gravity [14]. By a Hamiltonian analysis one can show that the GMMG
model has no Boulware-Deser ghosts and this model propagate only two physical
modes. Since this theory avoid the bulk-boundary clash, it define excellent
arenas to explore the structure of asymptotically AdS solutions, asymptotic
symmetries, its algebra and other holographically inspired questions. In this
paper we study a holographic calculation of entanglement entropy in
asymptotically flat geometry solutions of this model.
It is well known that the group of asymptotic symmetries of asymptotically
flat spacetimes at future null infinity is the BMS group [15, 16, 17]. In
extending the AdS/CFT correspondence to flat space holography, the BMS
algebra has been investigated extensively in recent years [18, 19, 20, 21,
22, 23, 24, 25]. In this paper we use the BMS gauge for the computations. The
entanglement entropy is taken to be proportional to the length of the
spacelike geodesic, which is one of the three geodesic classes characteristic
of three-dimensional flat spacetime.
The paper is organized as follows. In section 2 we focus on the field theory
with BMS symmetry and, with the help of the Cardy formula and a Rindler
transformation, we calculate the entanglement entropy of the GMMG model. In
section 3 the spacetime metric is described on the gravity side and the flat
spacetimes are divided into three classes. The entanglement entropy on the
gravity side is then calculated, where we take the entropy proportional to
the spacelike geodesic length. We present some conclusions in section 4.
## 2 Entanglement entropy in field theory side
It has already been shown that the GMMG model, written in the background of
$AdS_{3}$ space, is dual to a two-dimensional conformal field theory, at
least in part of the parameter space of the model. In this paper we study
this model in the flat limit and provide an important piece of evidence for
this correspondence and duality, namely the calculation of the entanglement
entropy on both sides of the duality. We know that the model, when rewritten
in the flat limit and in the background of an asymptotically flat spacetime,
is dual to a field theory with $BMS_{3}$ symmetry called the BMSFT. We look
at the field theory in three different cases. Using the Cardy formula and the
known results for the entanglement entropy in these field theory cases, we
find the entanglement entropy of the field theory. Then we find the general
form of the entanglement entropy in the asymptotically flat limit and
consider the duality in the three cases. By computing the entanglement
entropy, we show that the result in each of the cases is exactly the same on
both sides of the duality. In this way we obtain strong and well-reasoned
evidence for the gauge/gravity duality. The BMS symmetry is generated by
transformations whose charge algebra is as follows [26]
$\displaystyle\big{[}\mathcal{L}_{n},\mathcal{L}_{m}\big{]}$ $\displaystyle=$
$\displaystyle(n-m)\mathcal{L}_{n+m}+\frac{c_{L}}{12}(n^{3}-n)\delta_{m+n,0}$
$\displaystyle\big{[}\mathcal{L}_{n},\mathcal{M}_{m}\big{]}$ $\displaystyle=$
$\displaystyle(n-m)\mathcal{M}_{n+m}+\frac{c_{M}}{12}(n^{3}-n)\delta_{m+n,0}$
$\displaystyle\big{[}\mathcal{M}_{n},\mathcal{M}_{m}\big{]}$ $\displaystyle=$
$\displaystyle 0$ (2.1)
where $c_{L}$ and $c_{M}$ are the central charges and $\mathcal{L}_{n}$ and
$\mathcal{M}_{n}$ are the conserved charges of the currents that generate the
following $BMS$-transformations on the boundary coordinates $(u,\phi)$
$\displaystyle\tilde{\phi}$ $\displaystyle=$ $\displaystyle f(\phi)$
$\displaystyle\tilde{u}$ $\displaystyle=$
$\displaystyle\partial_{\phi}f(\phi)u+g(\phi),$ (2.2)
where $f(\phi)$ and $g(\phi)$ are two arbitrary functions.
In the Bondi gauge, the $2+1$-dimensional solution of Einstein's equations
with vanishing cosmological constant in the flat case is as follows [23]
$ds^{2}=Mdu^{2}-2dudr+Jdud\phi+r^{2}d\phi^{2}$ (2.3)
The flat metric Eq. (2.3) is categorized into three cases based on the values
of $J$ and $M$. When $M=-1$ and $J=0$, the metric Eq. (2.3) reduces to the
Minkowski metric in three dimensions (called the global Minkowski solution),
as follows
$ds^{2}=-du^{2}-2dudr+r^{2}d\phi^{2}$ (2.4)
The second case comprises the solutions with $M=J=0$, which describe a
manifold called the null orbifold. This orbifold is defined by a null boost
and was first used in string theory [27]. In this case the metric has the
following form
$ds^{2}=-2dudr+r^{2}d\phi^{2}$ (2.5)
The third case, with $M>0$, is the flat space cosmological solution (FSC).
This is a time-dependent solution of the Einstein equations in three
spacetime dimensions [28]. The flat space metric Eq. (2.3) has a Cauchy
horizon, which is best seen in the ADM form, as follows
$ds^{2}=-(\frac{J^{2}}{4r^{2}}-M)^{2}dt^{2}+(\frac{J^{2}}{4r^{2}}-M)^{-1}dr^{2}+r^{2}(d\phi+\frac{J}{2r^{2}}dt)^{2}$
(2.6)
where
$r_{H}=\mid r_{c}\mid\equiv\mid\frac{J}{2\sqrt{M}}\mid$ (2.7)
is the location of the horizon. We obtain the thermal circle of the FSC
metric Eq. (2.3) by defining the parameters as follows
$(u,\phi)\sim(u+i\beta_{u},\phi-i\beta_{\phi});~{}~{}~{}\beta_{u}=\frac{\pi
J}{M^{3/2}},~{}~{}~{}\beta_{\phi}=\frac{2\pi}{\sqrt{M}}$ (2.8)
In the Rindler transformation, the symmetry translations are generated along
the thermal circle. This circle is produced by imaginary identifications
between the coordinates.
The second metric case mentioned above is very similar to the Poincare patch
of $AdS_{3}$ spacetime, as can easily be seen from the following
transformation between the Poincare coordinates and the Cartesian
coordinates.
$\displaystyle t$ $\displaystyle=$
$\displaystyle\frac{l_{\phi}}{r}+\frac{2}{l_{\phi}}(u+\frac{r\phi^{2}}{2}),$
(2.9) $\displaystyle x$ $\displaystyle=$
$\displaystyle\frac{l_{u}}{l_{\phi}}+r\phi,$ (2.10) $\displaystyle y$
$\displaystyle=$
$\displaystyle\frac{l_{\phi}}{r}-\frac{2}{l_{\phi}}(u+\frac{r\phi^{2}}{2}).$
(2.11)
Using the coordinate transformations Eqs. (2.9)-(2.11), the metric Eq. (2.5)
takes the form of flat spacetime as follows
$ds^{2}=-2dudr+r^{2}d\phi^{2}=-dt^{2}+dx^{2}+dy^{2}$ (2.12)
Since there are three cases for the background spacetime Eq. (2.3), we
consider the field theory on the boundary in three cases: zero temperature on
the plane, zero temperature on the cylinder, and non-zero temperature. The
periodicity along the imaginary time axis characterizes the temperature of a
quantum field theory; by considering imaginary time, the thermal circle
changes to a spatial circle.
We want to find a Rindler transformation that relates the entanglement
entropy to the thermal entropy. The entanglement entropy of the BMSFT is
calculated for the following regularized interval
$(-l_{u}/2+\epsilon_{u},~{}-l_{\phi}/2+\epsilon_{\phi})~{}\to~{}(l_{u}/2-\epsilon_{u},~{}l_{\phi}/2-\epsilon_{\phi})$
(2.13)
where $l_{u}$ and $l_{\phi}$ are the extents of the interval along the $u$
and $\phi$ directions. The Cardy formula is used to calculate the entropy of
two-dimensional CFTs; it gives the thermal entropy as an asymptotic density
of states in a statistical manner [29]. To calculate the thermal entropy of
the field theory, we use the following form of the Cardy formula
$S_{\bar{b}|b}(\bar{a}|a)=-\frac{\pi^{2}}{3}\bigg{(}c_{L}\frac{b}{a}+c_{M}\frac{(\bar{a}b-a\bar{b})}{a^{2}}\bigg{)}$
(2.14)
where $(\bar{a},a)$ specifies the thermal circle and $(\bar{b},b)$ specifies
the spatial circle; the Rindler transformation implements the coordinate
change between these two circles in the field theory. The Cardy formula is
obtained by counting the states subject to the identifications of the spatial
circle and the thermal circle; the manifold of the BMSFT is defined on the
following torus
$(\tilde{u},\tilde{\phi})\sim(\tilde{u}+i\bar{a},~{}\tilde{\phi}-ia)\sim(\tilde{u}+2\pi\bar{b},~{}\tilde{\phi}-2\pi
b)$ (2.15)
The coordinate identification of the form
$\tilde{x}^{i}\sim\tilde{x}^{i}+i\tilde{\beta}^{i}$ is called the thermal
identification. The second identification in Eq. (2.15) is a spatial circle
that preserves the form of the metric
$ds^{2}=\tilde{M}d\tilde{u}^{2}-2d\tilde{u}d\tilde{r}+\tilde{J}d\tilde{u}d\tilde{\phi}+\tilde{r}^{2}d\tilde{\phi}^{2}$
(2.16)
The coordinates used are the Bondi coordinates. The BMSFT is invariant under
the Rindler transformations, which at the same time preserve the asymptotic
$BMS_{3}$ symmetry of the theory. The Rindler transformation is the
coordinate transformation [30]
$\displaystyle\tilde{\phi}$ $\displaystyle=$ $\displaystyle f(\phi)$
$\displaystyle\tilde{u}$ $\displaystyle=$
$\displaystyle\partial_{\phi}f(\phi)u+g(\phi)$ (2.17)
where its generators are considered as a combination of BMS generators as
follows
$\displaystyle\partial_{\tilde{\phi}}$ $\displaystyle=$
$\displaystyle\Sigma_{n=-1}^{1}(b_{n}\mathcal{L}_{n}+d_{n}\mathcal{M}_{n})$
$\displaystyle\partial_{\tilde{u}}$ $\displaystyle=$
$\displaystyle-\Sigma_{n=-1}^{1}(b_{n}\mathcal{M}_{n})$ (2.18)
where $\mathcal{L}_{n}$ and $\mathcal{M}_{n}$ are $BMS_{3}$ generators as
follows [26]
$\displaystyle\mathcal{L}_{n}$ $\displaystyle=$
$\displaystyle-u(n+1)\phi^{n}\partial_{u}-\phi^{n+1}\partial_{\phi},$
$\displaystyle\mathcal{M}_{n}$ $\displaystyle=$
$\displaystyle\phi^{n+1}\partial_{u}$ (2.19)
Using the Rindler generators and
$k_{t}\equiv\tilde{\beta}^{i}\partial_{\tilde{x}^{i}}$, we can generate a
translation along the thermal circle as
$\tilde{x}^{i}(s)=\tilde{x}^{i}+\tilde{\beta}^{i}s$, and in general an
identification is imposed as $\tilde{x}^{i}\sim\tilde{x}^{i}(i)$. On the
plane these transformations are obtained in general as follows [31]
$\tilde{\phi}=\frac{\tilde{\beta}_{\phi}}{\pi}\tanh^{-1}(\frac{2\phi}{l_{\phi}})$
(2.20)
$\tilde{u}+\frac{\tilde{\beta}_{u}}{\tilde{\beta}_{\phi}}\tilde{\phi}=\frac{2\tilde{\beta}_{\phi}(ul_{\phi}-l_{u}\phi)}{\pi(l^{2}_{\phi}-4\phi^{2})}$
(2.21)
By substituting the beginning and the end of the regularized interval Eq.
(2.13) into Eqs. (2.20) and (2.21), we obtain the interval lengths as follows
$\displaystyle\Delta\tilde{\phi}$ $\displaystyle=$
$\displaystyle\frac{\tilde{\beta}_{\phi}}{\pi}\big{(}\tanh^{-1}(\frac{2\phi_{max}}{l_{\phi}})-\tanh^{-1}(\frac{2\phi_{min}}{l_{\phi}})\big{)}$
(2.22) $\displaystyle=$
$\displaystyle\frac{\tilde{\beta}_{\phi}}{2\pi}\big{(}\log(\frac{l_{\phi}+2\phi_{max}}{l_{\phi}-2\phi_{max}})-\log(\frac{l_{\phi}+2\phi_{min}}{l_{\phi}-2\phi_{min}})\big{)}$
$\displaystyle=$
$\displaystyle\frac{\tilde{\beta}_{\phi}}{2\pi}\big{(}\log(\frac{2l_{\phi}-2\epsilon_{\phi}}{-2\epsilon_{\phi}})-\log(\frac{2\epsilon_{\phi}}{2l_{\phi}-2\epsilon_{\phi}})\big{)}$
$\displaystyle\simeq$
$\displaystyle\frac{\tilde{\beta}_{\phi}}{\pi}\log(\frac{l_{\phi}}{\epsilon_{\phi}})$
and
$\displaystyle\Delta\tilde{u}$ $\displaystyle=$
$\displaystyle\frac{2\tilde{\beta}_{\phi}(u_{max}l_{\phi}-\phi_{max}l_{u})}{\pi(l_{\phi}-2\phi_{max})(l_{\phi}+2\phi_{max})}-\frac{2\tilde{\beta}_{\phi}(u_{min}l_{\phi}-\phi_{min}l_{u})}{\pi(l_{\phi}-2\phi_{min})(l_{\phi}+2\phi_{min})}-\Delta\tilde{\phi}$
(2.23) $\displaystyle=$
$\displaystyle\frac{\tilde{\beta}_{\phi}}{\pi}\frac{\epsilon_{\phi}l_{u}-\epsilon_{u}l_{\phi}}{\epsilon_{\phi}(l_{\phi}-\epsilon_{\phi})}-\Delta\tilde{\phi}$
$\displaystyle\simeq$
$\displaystyle\frac{\tilde{\beta}_{\phi}}{\pi}(\frac{l_{u}}{l_{\phi}}-\frac{\epsilon_{u}}{\epsilon_{\phi}}-\log(\frac{l_{\phi}}{\epsilon_{\phi}}))$
Therefore, the regularized interval expressed in the new coordinates on the
plane is as follows
$\mathcal{I}_{reg}:~{}~{}~{}\big{(}-\frac{\Delta\tilde{u}}{2},-\frac{\Delta\tilde{\phi}}{2}\big{)}~{}\to~{}\big{(}\frac{\Delta\tilde{u}}{2},\frac{\Delta\tilde{\phi}}{2}\big{)}$
(2.24)
We use the regularized interval Eq. (2.24) for the BMSFT at zero temperature
on the plane. Here $\tilde{\beta}_{\phi}$ and $\tilde{\beta}_{u}$ determine
the thermal circle (i.e. $a$ and $\bar{a}$) as follows
$a=\tilde{\beta}_{\phi},~{}~{}~{}~{}\bar{a}=\tilde{\beta}_{u}$ (2.25)
The extents $\Delta\tilde{u}$ and $-\Delta\tilde{\phi}$ are identified with
$2\pi\bar{b}$ and $2\pi b$ respectively to determine the spatial circle, as
follows
$2\pi b=-\Delta\tilde{\phi},~{}~{}~{}~{}2\pi\bar{b}=\Delta\tilde{u}$ (2.26)
On the other hand, the field theory is defined on a canonical torus, which is
characterized by $\phi\sim\phi+2\pi$. Using the BMS transformation as follows
$\hat{\phi}=\frac{\tilde{\phi}}{b},~{}~{}~{}~{}~{}\hat{u}=\frac{\tilde{u}}{b}+\frac{\bar{b}}{b^{2}}\tilde{\phi}$
(2.27)
the manifold of the field theory is characterized by the following identifications
$\displaystyle(\hat{u},\hat{\phi})$ $\displaystyle\sim$
$\displaystyle(\hat{u}+i\hat{\beta}_{u},\hat{\phi}+i\hat{\beta}_{\phi})\sim(\hat{u},\hat{\phi}-2\pi)$
$\displaystyle\hat{\beta}_{\phi}$ $\displaystyle=$
$\displaystyle\frac{a}{b}~{}~{}~{}~{}\hat{\beta}_{u}=\frac{\bar{a}b-a\bar{b}}{b^{2}}$
(2.28)
Thus, according to the regularized interval extensions Eqs. (2.22) and (2.23),
we have the following relations
$\displaystyle\hat{\beta}_{\phi}$ $\displaystyle=$
$\displaystyle-\frac{2\pi^{2}}{\log(\frac{l_{\phi}}{\epsilon_{\phi}})}$
$\displaystyle\hat{\beta}_{u}$ $\displaystyle=$
$\displaystyle-\frac{\hat{\beta}_{\phi}^{2}}{2\pi^{2}}(\frac{l_{u}}{l_{\phi}}-\frac{\epsilon_{u}}{\epsilon_{\phi}})$
(2.29)
where we have used the identifications Eqs. (2.25) and (2.26). By substituting
these into the Cardy formula (2.14) for the thermal entropy, we have the
following result
$S=-\frac{\pi^{2}}{3}\bigg{(}c_{L}\frac{1}{\hat{\beta}_{\phi}}+c_{M}\frac{\hat{\beta}_{u}}{\hat{\beta}_{\phi}^{2}}\bigg{)}$
(2.30)
where substituting the results of Eq. (2.29) into this entropy formula yields
the general form of the entanglement entropy for the BMSFT at zero temperature
on the plane, as follows
$S_{EE}=\frac{c_{L}}{6}\log(\frac{l_{\phi}}{\epsilon_{\phi}})+\frac{c_{M}}{6}(\frac{l_{u}}{l_{\phi}}-\frac{\epsilon_{u}}{\epsilon_{\phi}})$
(2.31)
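As a minimal numerical sanity check (our own sketch, not part of the paper, with arbitrary sample values), substituting the inverse temperatures of Eq. (2.29) into the Cardy-type entropy Eq. (2.30) reproduces the plane result Eq. (2.31):

```python
import math

# Sample values (arbitrary, for illustration only)
c_L, c_M = 1.3, -0.7
l_u, l_phi = 0.5, 2.0
eps_u, eps_phi = 1.0e-4, 1.0e-3

# Eq. (2.29): effective inverse temperatures
beta_phi_hat = -2.0 * math.pi**2 / math.log(l_phi / eps_phi)
beta_u_hat = -(beta_phi_hat**2 / (2.0 * math.pi**2)) * (l_u / l_phi - eps_u / eps_phi)

# Eq. (2.30): thermal (Cardy) entropy
S_cardy = -(math.pi**2 / 3.0) * (c_L / beta_phi_hat + c_M * beta_u_hat / beta_phi_hat**2)

# Eq. (2.31): entanglement entropy on the plane
S_plane = (c_L / 6.0) * math.log(l_phi / eps_phi) \
        + (c_M / 6.0) * (l_u / l_phi - eps_u / eps_phi)
```

The two expressions agree identically, term by term, for any cutoffs and central charges.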
In the flat limit of GMMG, where the cosmological constant is set to zero, the
central charges are as follows [32]
$\displaystyle c_{L}$ $\displaystyle=$ $\displaystyle-\frac{3}{\mu G}$
$\displaystyle c_{M}$ $\displaystyle=$
$\displaystyle-\frac{3}{G}(\bar{\sigma}+\frac{\alpha H}{\mu}+\frac{F}{m^{2}})$
(2.32)
By substituting the central charges Eq. (2.32) into the entanglement entropy
Eq. (2.31), the entanglement entropy for the zero-temperature BMSFT on the
plane is as follows
$\displaystyle S_{EE}$ $\displaystyle=$ $\displaystyle-\frac{1}{2\mu
G}\log(\frac{l_{\phi}}{\epsilon_{\phi}})-\frac{1}{2G}(\bar{\sigma}+\frac{\alpha
H}{\mu}+\frac{F}{m^{2}})(\frac{l_{u}}{l_{\phi}}-\frac{\epsilon_{u}}{\epsilon_{\phi}})$
(2.33) $\displaystyle=$ $\displaystyle-\frac{1}{4G}(\bar{\sigma}+\frac{\alpha
H}{\mu}+\frac{F}{m^{2}})(\sqrt{\tilde{M}}\Delta\tilde{u}+\frac{\tilde{J}}{2\sqrt{\tilde{M}}}\Delta\tilde{\phi})-\frac{1}{4\mu
G}\sqrt{\tilde{M}}\Delta\tilde{\phi}$
In the last line of this equation we have used Eqs. (2.22) and (2.23). We now
repeat the calculation for the other two cases of the BMSFT. For the
finite-temperature case we have the following identification
$(u,\phi)\sim(u+i\beta_{u},\phi-i\beta_{\phi})$ (2.34)
Following similar steps as for the zero-temperature BMSFT on the plane, the
parameters of the Cardy formula are obtained from the Rindler transformation
as follows [31].
$\displaystyle a$ $\displaystyle=$
$\displaystyle\tilde{\beta}_{\phi},~{}~{}~{}~{}\bar{a}=\tilde{\beta}_{u}$
$\displaystyle 2\pi b$ $\displaystyle=$
$\displaystyle-\frac{\tilde{\beta}_{\phi}}{\pi}\zeta,~{}~{}~{}~{}2\pi\bar{b}=-\frac{\tilde{\beta}_{u}}{\pi}\zeta+\frac{\tilde{\beta}_{\phi}}{\pi\beta_{\phi}}\big{[}\pi\big{(}l_{u}+\frac{\beta_{u}}{\beta_{\phi}}l_{\phi}\big{)}\coth\frac{\pi
l_{\phi}}{\beta_{\phi}}-\beta_{u}\big{]},$ (2.35)
where, for the finite-temperature case,
$\zeta=\log\big{(}\frac{\beta_{\phi}}{\pi\epsilon_{\phi}}\sinh\frac{\pi l_{\phi}}{\beta_{\phi}}\big{)}$.
Substituting these parameters into the Cardy formula (2.14), the entanglement
entropy of the BMSFT with finite temperature is as follows
$S_{EE}=\frac{c_{L}}{6}\log\big{(}\frac{\beta_{\phi}}{\pi\epsilon_{\phi}}\sinh\frac{\pi
l_{\phi}}{\beta_{\phi}}\big{)}+\frac{c_{M}}{6}\frac{1}{\beta_{\phi}}\big{[}\pi(l_{u}+\frac{\beta_{u}}{\beta_{\phi}}l_{\phi})\coth\frac{\pi
l_{\phi}}{\beta_{\phi}}-\beta_{u}\big{]}-\frac{c_{M}}{6}\frac{\epsilon_{u}}{\epsilon_{\phi}}$
(2.36)
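As a hedged numerical check (our own sketch, with arbitrary sample values), Eq. (2.36) reduces to the zero-temperature plane result Eq. (2.31) as $\beta_{\phi}\to\infty$, since $\frac{\beta_{\phi}}{\pi\epsilon_{\phi}}\sinh\frac{\pi l_{\phi}}{\beta_{\phi}}\to\frac{l_{\phi}}{\epsilon_{\phi}}$ and $\coth x\to 1/x$:

```python
import math

def s_finite_T(c_L, c_M, l_u, l_phi, eps_u, eps_phi, beta_u, beta_phi):
    """Eq. (2.36): entanglement entropy of the finite-temperature BMSFT."""
    x = math.pi * l_phi / beta_phi
    term_L = (c_L / 6.0) * math.log(beta_phi / (math.pi * eps_phi) * math.sinh(x))
    bracket = math.pi * (l_u + beta_u / beta_phi * l_phi) / math.tanh(x) - beta_u
    term_M = (c_M / 6.0) * (bracket / beta_phi - eps_u / eps_phi)
    return term_L + term_M

def s_plane(c_L, c_M, l_u, l_phi, eps_u, eps_phi):
    """Eq. (2.31): zero-temperature result on the plane."""
    return (c_L / 6.0) * math.log(l_phi / eps_phi) \
         + (c_M / 6.0) * (l_u / l_phi - eps_u / eps_phi)

# Low-temperature limit: beta_phi -> infinity recovers the plane result.
args = dict(c_L=1.0, c_M=1.0, l_u=0.7, l_phi=2.0, eps_u=1.0e-4, eps_phi=1.0e-3)
S_T = s_finite_T(beta_u=0.3, beta_phi=1.0e7, **args)
S_0 = s_plane(**args)
```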
By substituting the following parameters
$\beta_{u}=\frac{\pi J}{M^{3/2}},~{}~{}~{}\beta_{\phi}=\frac{2\pi}{\sqrt{M}},$
(2.37)
into Eq. (2.36) we have
$S_{EE}=\frac{c_{L}}{6}\log\big{(}\frac{2}{\sqrt{M}\epsilon_{\phi}}\sinh\frac{\sqrt{M}l_{\phi}}{2}\big{)}+\frac{c_{M}}{6}\big{(}\frac{(l_{u}M+l_{\phi}\sqrt{M}r_{c})\coth\frac{\sqrt{M}l_{\phi}}{2}-2r_{c}}{2\sqrt{M}}\big{)}-\frac{c_{M}}{6}\frac{\epsilon_{u}}{\epsilon_{\phi}}$
(2.38)
By inserting the coordinate extensions as follows [31]
$\displaystyle\log\big{(}\frac{2}{\sqrt{M}\epsilon_{\phi}}\sinh\frac{\sqrt{M}l_{\phi}}{2}\big{)}$
$\displaystyle=$ $\displaystyle\frac{\sqrt{\tilde{M}}}{2}\Delta\tilde{\phi},$
(2.39)
$\displaystyle\frac{(l_{u}M+l_{\phi}\sqrt{M}r_{c})\coth\frac{\sqrt{M}l_{\phi}}{2}-2r_{c}}{2\sqrt{M}}-\frac{\epsilon_{u}}{\epsilon_{\phi}}$
$\displaystyle=$
$\displaystyle\sqrt{\tilde{M}}\Delta\tilde{u}+\tilde{r}_{c}\Delta\tilde{\phi}$
(2.40)
into Eq. (2.38), we obtain the following expression for the entanglement entropy
$S_{EE}=\frac{c_{M}}{12}(\sqrt{\tilde{M}}\Delta\tilde{u}+\tilde{r}_{c}\Delta\tilde{\phi})+\frac{c_{L}}{12}\sqrt{\tilde{M}}\Delta\tilde{\phi}$
(2.41)
By substituting the central charges Eq. (2.32) into the above equation, we
obtain an expression similar to the entanglement entropy for the
zero-temperature BMSFT on the plane presented in Eq. (2.33). (Note that
$\tilde{r}_{c}=\frac{\tilde{J}}{2\sqrt{\tilde{M}}}$.) The last case is the
BMSFT on a cylinder at zero temperature. The cylinder identification is as
follows
$\phi\sim\phi+2\pi$ (2.42)
By similar steps as in the former cases, the parameters of the Cardy formula
in this case are obtained as follows [26]
$\displaystyle a$ $\displaystyle=$
$\displaystyle\tilde{\beta}_{\phi},~{}~{}~{}~{}\bar{a}=\tilde{\beta}_{u}$
$\displaystyle 2\pi b$ $\displaystyle=$
$\displaystyle-\frac{\tilde{\beta}_{\phi}}{\pi}\zeta,~{}~{}~{}~{}2\pi\bar{b}=-\frac{\tilde{\beta}_{u}}{\pi}\zeta+\frac{\tilde{\beta}_{\phi}l_{u}\cot(l_{\phi}/2)}{2\pi}-\frac{\tilde{\beta}_{\phi}\epsilon_{u}}{\pi\epsilon_{\phi}},$
(2.43)
where now $\zeta=\log\big{(}\frac{2}{\epsilon_{\phi}}\sin\frac{l_{\phi}}{2}\big{)}$.
Substituting these parameters in the Cardy formula (2.14), the entanglement
entropy with zero temperature on a cylinder is obtained as follows
$S_{EE}=\frac{c_{L}}{6}\log\big{(}\frac{2}{\epsilon_{\phi}}\sin\frac{l_{\phi}}{2}\big{)}+\frac{c_{M}}{12}(l_{u}\cot\frac{l_{\phi}}{2}-2\frac{\epsilon_{u}}{\epsilon_{\phi}})$
(2.44)
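As another illustrative check (ours, with arbitrary sample values), for a small interval $l_{\phi}\ll 2\pi$ the cylinder result Eq. (2.44) approaches the plane result Eq. (2.31), since $\sin\frac{l_{\phi}}{2}\to\frac{l_{\phi}}{2}$ and $\cot\frac{l_{\phi}}{2}\to\frac{2}{l_{\phi}}$:

```python
import math

c_L, c_M = 1.0, 1.0
l_u, l_phi = 0.5, 1.0e-3     # small interval on the cylinder
eps_u, eps_phi = 1.0e-8, 1.0e-7

# Eq. (2.44): zero temperature on the cylinder
S_cyl = (c_L / 6.0) * math.log(2.0 / eps_phi * math.sin(l_phi / 2.0)) \
      + (c_M / 12.0) * (l_u / math.tan(l_phi / 2.0) - 2.0 * eps_u / eps_phi)

# Eq. (2.31): zero temperature on the plane
S_plane = (c_L / 6.0) * math.log(l_phi / eps_phi) \
        + (c_M / 6.0) * (l_u / l_phi - eps_u / eps_phi)
```

The two entropies agree up to corrections suppressed by powers of $l_{\phi}$.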
The coordinate extensions in this case are as follows
$\displaystyle\frac{1}{2}\sqrt{\tilde{M}}\Delta\tilde{\phi}$ $\displaystyle=$
$\displaystyle\log\big{(}\frac{2}{\epsilon_{\phi}}\sin\frac{l_{\phi}}{2}\big{)}$
(2.45)
$\displaystyle\sqrt{\tilde{M}}\Delta\tilde{u}+\tilde{r}_{c}\Delta\tilde{\phi}$
$\displaystyle=$ $\displaystyle
l_{u}\cot\frac{l_{\phi}}{2}-2\frac{\epsilon_{u}}{\epsilon_{\phi}}$ (2.46)
Substituting these coordinate extensions into the entropy Eq. (2.44), we
obtain the same result for the entanglement entropy as Eq. (2.41).
## 3 Holographic entanglement entropy in gravity side
On the field theory side, a Rindler transformation is used to map the BMSFT to
the $\widetilde{BMSFT}$; thus the entanglement entropy of the former is
obtained from the thermal entropy of the latter. On the gravity side, we can
find a coordinate transformation that transforms the spacetime Eq. (2.3) into
a $\widetilde{FSC}$, and consider the latter as the dual of the
$\widetilde{BMSFT}$ on the boundary. An important condition is that the new
coordinates satisfy the following Bondi gauge conditions
$\displaystyle g_{\tilde{u},\tilde{u}}$ $\displaystyle=$
$\displaystyle\tilde{M},~{}~{}~{}~{}g_{\tilde{u},\tilde{\phi}}=\tilde{J}$
$\displaystyle g_{\tilde{\phi},\tilde{\phi}}$ $\displaystyle=$
$\displaystyle\tilde{r}^{2},~{}~{}~{}~{}g_{\tilde{r},\tilde{r}}=0$
$\displaystyle g_{\tilde{r},\tilde{\phi}}$ $\displaystyle=$ $\displaystyle
0,~{}~{}~{}~{}g_{\tilde{u},\tilde{r}}=g_{\tilde{u},\tilde{r}}(\tilde{r})$
(3.1)
In Poincare coordinates in the limit $r\to 0$, the transformations between
Poincare coordinates and the $\widetilde{FSC}$ are as follows [31]
$\displaystyle\tilde{\phi}$ $\displaystyle=$
$\displaystyle\frac{2}{\sqrt{\tilde{M}}}\tanh^{-1}\frac{2\phi}{l_{\phi}}$
(3.2) $\displaystyle\tilde{u}$ $\displaystyle=$
$\displaystyle\frac{4}{\sqrt{\tilde{M}}}\frac{(ul_{\phi}-l_{u}\phi)}{(l^{2}_{\phi}-4\phi^{2})}-\frac{\tilde{r}_{c}}{\sqrt{\tilde{M}}}\tilde{\phi}$
(3.3)
With these transformations and the regularized interval Eq. (2.13), we obtain
the extensions in $u$ and $\phi$ directions as follows
$\displaystyle\Delta\tilde{u}$ $\displaystyle=$
$\displaystyle\frac{2}{\sqrt{\tilde{M}}}(\frac{l_{u}}{l_{\phi}}-\frac{\epsilon_{u}}{\epsilon_{\phi}})-\frac{\tilde{J}}{2\tilde{M}}\Delta\tilde{\phi}$
(3.4) $\displaystyle\Delta\tilde{\phi}$ $\displaystyle=$
$\displaystyle\frac{2}{\sqrt{\tilde{M}}}\log\frac{l_{\phi}}{\epsilon_{\phi}}$
(3.5)
Generalized Minimal Massive Gravity (GMMG) was introduced in [11] as a new
example of a theory that avoids the bulk-boundary clash; the theory therefore
possesses both positive-energy excitations around the maximally symmetric
$AdS_{3}$ vacuum and a positive central charge in the dual CFT. The Lagrangian
of the GMMG model is as follows [11]
$L_{GMMG}=L_{GMG}-\frac{\alpha}{2}e.h\times h$ (3.6)
where
$L_{GMG}=L_{TMG}-\frac{1}{m^{2}}\big{(}f.R+\frac{1}{2}e.f\times f\big{)}$
(3.7)
and
$L_{TMG}=-\sigma e.R+\frac{\Lambda_{0}}{6}e.e\times
e+h.T(\omega)+\frac{1}{2\mu}\big{(}\omega.d\omega+\frac{1}{3}\omega.\omega\times\omega\big{)}$
(3.8)
Here, $L_{TMG}$ is the Lagrangian of topologically massive gravity, whose last
term is a Lorentz Chern-Simons term. $\Lambda_{0}$ is a cosmological parameter
and $\sigma$ is a sign; $\alpha$ is a dimensionless parameter, $e$ is the
dreibein, $h$ is an auxiliary field, $\omega$ is the dualized spin-connection,
and $T(\omega)$ and $R(\omega)$ are the Lorentz-covariant torsion and
curvature 2-forms, respectively. We need the central charges in the flat
limit, which we used in the previous section. The GMMG model admits a rotating
BTZ black hole solution [33]
$ds^{2}=-\frac{(r^{2}-r_{+}^{2})(r^{2}-r_{-}^{2})}{l^{2}r^{2}}dt^{2}+\frac{l^{2}r^{2}}{(r^{2}-r_{+}^{2})(r^{2}-r_{-}^{2})}dr^{2}+r^{2}(d\phi-\frac{r_{+}r_{-}}{lr^{2}}dt)^{2}$
(3.9)
where $r_{+}$ and $r_{-}$ are outer and inner horizons respectively. The
energy and the angular momentum of the BTZ black hole are as follows [33]
$\displaystyle E$ $\displaystyle=$
$\displaystyle(\sigma+\frac{\gamma}{2\mu^{2}l^{2}}+\frac{s}{2m^{2}l^{2}})\frac{r_{+}^{2}+r_{-}^{2}}{l^{2}}-\frac{2r_{+}r_{-}}{\mu
l^{3}}$ $\displaystyle J$ $\displaystyle=$
$\displaystyle(\sigma+\frac{\gamma}{2\mu^{2}l^{2}}+\frac{s}{2m^{2}l^{2}})\frac{2r_{+}r_{-}}{l}-\frac{r_{+}^{2}+r_{-}^{2}}{\mu
l^{2}}$ (3.10)
If one writes the BTZ black hole in terms of the mass parameter $M$ and the
angular momentum parameter $a$, the energy and the angular momentum can be
written as follows
$\displaystyle E$ $\displaystyle=$
$\displaystyle(\sigma+\frac{\gamma}{2\mu^{2}l^{2}}+\frac{s}{2m^{2}l^{2}})M-\frac{a}{\mu
l^{2}}$ $\displaystyle J$ $\displaystyle=$
$\displaystyle(\sigma+\frac{\gamma}{2\mu^{2}l^{2}}+\frac{s}{2m^{2}l^{2}})a-\frac{M}{\mu}$
(3.11)
By comparing $E$ and $J$ in Eq. (3) with corresponding quantities in Eq. (3),
we find the following result for the horizons
$\displaystyle r_{+}r_{-}$ $\displaystyle=$ $\displaystyle\frac{al}{2}$ (3.12)
$\displaystyle r_{+}^{2}+r_{-}^{2}$ $\displaystyle=$ $\displaystyle Ml^{2}$
(3.13)
In the flat limit $l\to\infty$, we have the result as follows
$\displaystyle r_{+}$ $\displaystyle\to$ $\displaystyle l\sqrt{M}$
$\displaystyle r_{-}$ $\displaystyle\to$
$\displaystyle\frac{a}{2\sqrt{M}}\equiv r_{c}$ (3.14)
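As a quick numerical sketch (an illustrative check of ours; the function name and sample values are assumptions), one can solve Eqs. (3.12)-(3.13) for the horizon radii and verify the flat-limit behavior of Eq. (3.14):

```python
import math

def btz_horizons(M, a, l):
    """Solve r+ r- = a l / 2 and r+^2 + r-^2 = M l^2 (Eqs. (3.12)-(3.13))."""
    # r+^2 and r-^2 are the two roots of x^2 - M l^2 x + (a l / 2)^2 = 0.
    disc = math.sqrt((M * l**2) ** 2 - (a * l) ** 2)
    r_plus = math.sqrt((M * l**2 + disc) / 2.0)
    r_minus = math.sqrt((M * l**2 - disc) / 2.0)
    return r_plus, r_minus

# Flat limit l -> infinity: r+ -> l*sqrt(M) and r- -> a/(2*sqrt(M)) = r_c, Eq. (3.14).
M, a, l = 2.0, 0.6, 1.0e4
r_plus, r_minus = btz_horizons(M, a, l)
```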
The induced metrics on the outer and inner horizons, written in terms of the
horizon radii, take the following form in the flat limit
$\displaystyle ds^{2}_{outer}$ $\displaystyle=$
$\displaystyle(r_{+}d\phi+\frac{r_{-}dt}{l})^{2}~{}~{}\to~{}~{}(\sqrt{M}ld\phi+r_{c}\frac{dt}{l})^{2}$
(3.15) $\displaystyle ds^{2}_{inner}$ $\displaystyle=$
$\displaystyle(r_{-}d\phi+\frac{r_{+}dt}{l})^{2}~{}~{}\to~{}~{}(r_{c}d\phi+\sqrt{M}dt)^{2}$
(3.16)
which we can use to calculate the entropy. The thermal entropy of the inner
horizon of the BTZ black hole is as follows [34, 35, 36]
$S_{inner}=\frac{l_{inner~{}horizon}}{4G}+\frac{l_{outer~{}horizon}}{4G\mu l}$
(3.17)
We do not write the entropy of the outer horizon because we want to do the
calculations in the flat limit. So we have
$S_{inner}|_{l\to\infty}=\frac{\sqrt{M}\Delta
t+r_{c}\Delta\phi}{4G}+\frac{\sqrt{M}l\Delta\phi}{4G\mu l}\equiv
S_{BH}+S_{CS}$ (3.18)
where the first term is the Bekenstein-Hawking entropy and the second term is
related to Chern-Simons contributions in the Lagrangian. By comparing the
entanglement entropy Eq. (2.33) of BMSFT with zero temperature on the plane
with Eq. (3.18), the CS correction to the entanglement entropy in Poincare
patch is obtained as follows
$S_{CS}=-\frac{1}{2\mu G}\log\frac{l_{\phi}}{\epsilon_{\phi}}$ (3.19)
We can find the holographic entanglement entropy for the other cases of the
duality. To this end, we first consider the FSC case. The gravity theory in
this case has a holographic dual on the boundary which is a BMS field theory
with finite temperature. The entanglement entropy on the field theory side is
obtained in Eq. (2.41) for a BMSFT with finite temperature. In this case the
coordinate extensions Eqs. (2.39) and (2.40) are used to find the entropy in
the $\tilde{u}$ and $\tilde{\phi}$ coordinates. Substituting the central
charges of GMMG in the flat limit, Eq. (2.32), into Eq. (2.41), we obtain the
entropy as follows
$S_{EE}=-\frac{1}{4G}(\bar{\sigma}+\frac{\alpha
H}{\mu}+\frac{F}{m^{2}})(\sqrt{\tilde{M}}\Delta\tilde{u}+\tilde{r}_{c}\Delta\tilde{\phi})-\frac{1}{4\mu
G}\sqrt{\tilde{M}}\Delta\tilde{\phi}$ (3.20)
Comparing with its holographic dual on the gravity side, Eq. (3.18), we obtain
the CS correction to the entropy in the FSC as follows
$\displaystyle S_{CS}$ $\displaystyle=$ $\displaystyle-\frac{1}{4\mu
G}\sqrt{\tilde{M}}\Delta\tilde{\phi}$ (3.21) $\displaystyle=$
$\displaystyle-\frac{1}{2\mu
G}\log\big{(}\frac{2}{\sqrt{M}\epsilon_{\phi}}\sinh\frac{\sqrt{M}l_{\phi}}{2}\big{)}\
$
where we have used Eq. (2.39) to obtain the second line. The holographic dual
of the GMMG model in global Minkowski space is a BMSFT with zero temperature
on a cylinder, whose entanglement entropy is obtained in Eq. (2.44).
Substituting the coordinate extensions Eqs. (2.45) and (2.46), we find the
following result for the entanglement entropy on the field theory side
$S_{EE}=\frac{c_{M}}{12}(\sqrt{\tilde{M}}\Delta\tilde{u}+\tilde{r}_{c}\Delta\tilde{\phi})+\frac{c_{L}}{12}\sqrt{\tilde{M}}\Delta\tilde{\phi}$
(3.22)
Substituting the central charges Eq. (2.32) into this equation, the
entanglement entropy is obtained as follows
$S_{EE}=-\frac{1}{4G}(\bar{\sigma}+\frac{\alpha
H}{\mu}+\frac{F}{m^{2}})(\sqrt{\tilde{M}}\Delta\tilde{u}+\tilde{r}_{c}\Delta\tilde{\phi})-\frac{1}{4\mu
G}\sqrt{\tilde{M}}\Delta\tilde{\phi}$ (3.23)
This result coincides with Eq. (3.20). Comparing it with the entropy on the
gravity side, we get the CS correction to the entropy in global Minkowski
space as follows
$\displaystyle S_{CS}$ $\displaystyle=$ $\displaystyle-\frac{1}{4\mu
G}\sqrt{\tilde{M}}\Delta\tilde{\phi}$ (3.24) $\displaystyle=$
$\displaystyle-\frac{1}{2\mu
G}\log\big{(}\frac{2}{\epsilon_{\phi}}\sin\frac{l_{\phi}}{2}\big{)}$
In this section we considered the GMMG model and obtained the holographic
entanglement entropy for three different asymptotically flat solutions of this
model: the Poincare patch, global Minkowski space, and the FSC. Our results
coincide exactly with the field theory calculations of the previous section.
The asymptotic symmetry algebra of the GMMG model, in contrast with Einstein
gravity and similarly to TMG, has two non-vanishing central charges $c_{L}$
and $c_{M}$. In this study, we have taken into account the effect of the
non-vanishing central charge $c_{L}$ on the holographic entanglement entropy.
Equations (3.19), (3.21), and (3.24) give this effect in the Poincare patch,
FSC, and global Minkowski solutions of GMMG in the flat limit, respectively.
As one can see from these equations, the effect appears as a Chern-Simons
correction to the thermal entropy.
## 4 Conclusion
We know that pure Einstein-Hilbert gravity in three dimensions (also in the
presence of a negative cosmological constant) exhibits no propagating physical
degrees of freedom [37, 38]. Adding the gravitational Chern-Simons term
produces a propagating massive graviton [39]; the resulting theory is called
topologically massive gravity (TMG). Unfortunately, TMG suffers from a
bulk-boundary unitarity conflict. But, as mentioned in the introduction, GMMG
avoids this _bulk-boundary unitarity clash_. The calculation of the GMMG
action to quadratic order about the $AdS_{3}$ space shows that the theory is
free of negative-energy bulk modes. So this model is a viable candidate for
the semi-classical limit of a unitary quantum 3D massive gravity. Our
motivation in this paper is to give further evidence for the duality between
GMMG in the bulk and a quantum field theory on the boundary. Here, instead of
$AdS_{3}$ space, we consider asymptotically flat space and study the
holographic duality in the flat limit of GMMG.
The authors of [31] obtained the holographic entanglement entropy of a single
interval in the dual BMSFT located at the null infinity of the $3$-dimensional
asymptotically flat bulk spacetime in the framework of Einstein gravity and
TMG. In the present work we have extended that study to the GMMG model. In
other words, to give interesting and important evidence for this duality, we
calculate the entanglement entropy both in the bulk and on the boundary. To
calculate the entanglement entropy on the boundary, we consider a BMSFT on the
boundary and then, by placing its manifold on the thermal circle Eq. (2.8), we
calculate the entropy on the interval of the coordinate extensions using the
Cardy formula Eq. (2.14). To calculate the coordinate extensions, the Rindler
transformation is used, and these extensions are computed in three cases: zero
temperature on the plane, finite temperature, and zero temperature on the
cylinder. What characterizes the GMMG model are the central charges Eq.
(2.32), which we substitute into the entropy formula to get the result Eq.
(2.33).
Using the expressions for the energy and angular momentum, Eqs. (3.10) and
(3.11), for the BTZ black hole solution of GMMG, we obtained the outer and
inner horizons, Eq. (3.14), in the flat limit, which we need to compute the
entanglement entropy in the bulk. We have obtained corrections to the
Bekenstein-Hawking entropy that come not only from the Chern-Simons term in
the Lagrangian of the model, but also from the higher-order curvature terms.
The contribution of the Chern-Simons term to the entanglement entropy in the
bulk is exactly the same as that obtained by Jiang et al. [31] for
topologically massive gravity. It is interesting that the contributions of the
other terms in the Lagrangian of the GMMG model appear as a coefficient of the
usual Bekenstein-Hawking term in the entanglement entropy.
## References
* [1] G. 't Hooft, _Dimensional reduction in quantum gravity_ , gr-qc/9310026
* [2] L. Susskind, _The world as a hologram_ , _J. Math. Phys._ 36 (1995) 6377
* [3] P. Calabrese and J. L. Cardy, _Entanglement entropy and quantum field theory_ , _J.Stat.Mech._ 0406 (2004) P06002
* [4] C. Holzhey, F. Larsen, and F. Wilczek, _Geometric and renormalized entropy in conformal field theory_ , _Nucl.Phys. B_ 424 (1994) 443 - 467
* [5] G. Vidal, J. Latorre, E. Rico, and A. Kitaev, _Entanglement in quantum critical phenomena_ , _Phys.Rev.Lett._ 90 (2003) 227902
* [6] S. Ryu and T. Takayanagi, _Holographic derivation of entanglement entropy from AdS/CFT_ , _Phys.Rev.Lett._ 96 (2006) 181602
* [7] H. Casini, M. Huerta and R.C. Myers, _Towards a derivation of holographic entanglement entropy_ , _JHEP_ 05 (2011) 036
* [8] J.M. Maldacena, _The large-N limit of superconformal field theories and supergravity_ , _Int. J. Theor. Phys._ 38 (1999) 1113
* [9] S.S. Gubser, I.R. Klebanov and A.M. Polyakov, _Gauge theory correlators from noncritical string theory_ , _Phys. Lett. B_ 428 (1998) 105
* [10] E. Witten, _Anti-de Sitter space and holography_ , _Adv. Theor. Math. Phys._ 2 (1998) 253
* [11] M. R. Setare, _On the Generalized Minimal Massive Gravity_ , _Nucl. Phys. B_ 898 (2015) 259.
* [12] S. Deser, R. Jackiw and S. Templeton, Phys. Rev. Lett. 48, 975, (1982).
* [13] S. Deser, R. Jackiw and S. Templeton, Annals Phys. 140, 372, (1982) [erratum: Annals Phys. 185, 406 (1988)].
* [14] E. A. Bergshoeff, O. Hohm and P. K. Townsend, Phys. Rev. Lett. 102, 201301 (2009).
* [15] H. Bondi, M.G.J. van der Burg, A.W.K. Metzner, _Gravitational waves in general relativity, VII. Waves from axi-symmetric isolated system_ , _Proc. R. Soc. Lond. Ser. A_ 269 (Aug., 1962) 21 - 52
* [16] R. K. Sachs, _Gravitational waves in general relativity VIII. Waves in asymptotically flat space-time_ , _Proc. R. Soc. Lond. Ser. A_ 270 (Oct., 1962) 103 - 126
* [17] R. K. Sachs, _Asymptotic Symmetries in Gravitational Theory_ , _Phys. Rev._ 128 (1962) 2851.
* [18] G. Barnich, G. Compere, _Class. Quantum Gravity_ 24 (2007) F15; Corrigendum: _Class. Quantum Gravity_ 24 (2007) 3139.
* [19] J. de Boer, S.N. Solodukhin, _Nucl. Phys. B_ 665 (2003) 545.
* [20] G. Arcioni, C. Dappiaggi, _Class. Quantum Gravity_ 21 (2004) 5655.
* [21] G. Arcioni, C. Dappiaggi, _Nucl. Phys. B_ 674 (2003) 553.
* [22] G. Barnich, C. Troessaert, _Phys. Rev. Lett._ 105 (2010) 111103.
* [23] G. Barnich, C. Troessaert, _JHEP_ 05 (2010) 062.
* [24] G. Barnich, C. Troessaert, _JHEP_ 12 (2011) 105.
* [25] G. Barnich, C. Troessaert, _JHEP_ 1311 (2013) 003;
* [26] A. Bagchi, R. Gopakumar, I. Mandal and A. Miwa, _GCA in 2d_ , _JHEP_ 08 (2010) 004
* [27] G. T. Horowitz and R. Steif, _Singular string solutions with nonsingular initial data_ , _Phys.Lett.B_ 258 (1991) 91-96
* [28] L. Cornalba and M.S. Costa, _A new cosmological scenario in string theory_ , _Phys. Rev. D_ 66 (2002) 066001; L. Cornalba and M.S. Costa, _Time dependent orbifolds and string cosmology_ , _Fortsch. Phys._ 52 (2004) 145
* [29] T. Hartman, C.A. Keller and B. Stoica, _Universal Spectrum of 2d Conformal Field Theory in the Large c Limit_ , _JHEP_ 09 (2014) 118
* [30] G. Barnich, JHEP 10 (2012) 095.
* [31] H. Jiang, W. Song and Q. Wen, _Entanglement entropy in flat holography_ , _JHEP_ 07 (2017) 142
* [32] M. R. Setare and S. N. Sajadi, _Phase Transition Between Flat Space Cosmology Spacetimes and Hot Flat Spacetimes_ , arXiv:2012.00002 [gr-qc]
* [33] M. R. Setare and H. Adami, _Black hole conserved charges in Generalized Minimal Massive Gravity_ , _Phys.Lett.B_ 744 (2015) 280-283
* [34] S. N. Solodukhin, _Holography with gravitational Chern-Simons_ , Phys. Rev. D 74 (2006) 024015.
* [35] B. Sahoo and A. Sen, _BTZ black hole with Chern-Simons and higher derivative terms_ , JHEP 07 (2006) 008.
* [36] M.-I. Park, _BTZ black hole with gravitational Chern-Simons: Thermodynamics and statistical entropy_ , Phys. Rev. D 77 (2008) 026011.
* [37] L. F. Abbott and S. Deser, _Stability of gravity with a cosmological constant_ , _Nucl. Phys. B_ 195 (1982) 76; L. F. Abbott and S. Deser, _Charge Definition in Nonabelian Gauge Theories_ , _Phys. Lett. B_ 116 (1982) 259.
* [38] S. Deser and B. Tekin, _Energy in generic higher curvature gravity theories_ , _Phys. Rev. D_ 67 (2003) 084009; S. Deser and B. Tekin, _Gravitational Energy in Quadratic-Curvature Gravities_ , _Phys. Rev. Lett._ 89 (2002) 101101.
* [39] Bergshoeff E. A., Hohm O., Merbis W., Routh A. J. and Townsend P. K., _Lect. Notes Phys._ 892; (Springer) 2015, p. 181. ISBN 978-3-319-10069-2, proceedings of the Seventh Aegean Summer School Beyond Einstein’s Theory of Gravity, Paros (Greece), September 2013
# Baseline Pruning-Based Approach
to Trojan Detection in Neural Networks
Peter Bajcsy1 and Michael Majurski Information Technology Laboratory
National Institute of Standards and Technology
100 Bureau Drive, Gaithersburg, MD 20899
1Email<EMAIL_ADDRESS>
###### Abstract
This paper addresses the problem of detecting trojans in neural networks (NNs)
by analyzing systematically pruned NN models. Our pruning-based approach
consists of three main steps. First, detect any deviations from the reference
look-up tables of model file sizes and model graphs. Next, measure the
accuracy of a set of systematically pruned NN models following multiple
pruning schemas. Finally, classify a NN model as clean or poisoned by applying
a mapping between accuracy measurements and NN model labels. This work
outlines a theoretical and experimental framework for finding the optimal
mapping over a large search space of pruning parameters. Based on our
experiments using Round 1 and Round 2 TrojAI Challenge datasets, the approach
achieves average classification accuracy of $69.73\>\%$ and $82.41\>\%$
respectively with an average processing time of less than $60\>s$ per model.
For both datasets random guessing would produce $50\>\%$ classification
accuracy. Reference model graphs and source code are available from GitHub.
## 1 Introduction
This work addresses classifying neural network (NN) models into two classes:
(1) models trained without trojans (clean) and (2) models trained with trojans
(poisoned). Trojans in NNs are defined as triggers inserted into the inputs
that cause misclassification into a class (or classes) unintended by the
design of the model [1]. For example, trojans can be polygons inserted as
innocuous objects (triggers) into traffic sign images (foreground) to change
the classification result as shown in Figure 1. Such triggers have been used
to generate the datasets for multiple rounds of the Intelligence Advanced
Research Projects Agency (IARPA) challenge [2].
The overarching motivation for designing trojan detection algorithms is the
defense against a variety of adversarial attacks during NN training. NN
training might be outsourced to a third party with unknown malicious intent.
It might also leverage NN models pre-trained by an unknown untrusted third
party. In many life-critical applications, such as self-driving cars or
medical diagnoses, deployment of NN models depends on establishing trust in
model performance. To build that trust, trojan detection algorithms must
operate on a variety of model architectures, with limited prior knowledge
about the model, and for a wide range of trojan types. Our work is motivated
by the need to establish a baseline approach for new and innovative algorithms
tested on the IARPA challenge datasets. In addition, we are motivated to lower
the trojan detection computational requirements to the level of a simple phone
app.
Figure 1: Illustration of injecting a polygon trojan (trigger) into a traffic
sign region, causing a shift in classification from class A to class B.
The goal of this work is to design a baseline approach for detecting (a)
possible tampering with the reference model architecture (changing task-
specific NN architecture called a reference model) and (b) the presence of
trojans in a spectrum of architectures. Our approach is illustrated in Figure
2. The “quality assurance” computations in Figure 2 are based on our prior
knowledge about model files and architecture graphs in order to detect
deviations from reference models. The “signal measurement” computations in
Figure 2 focus on measuring accuracies of systematically pruned models.
Finally, the “NN model classification” computations derive and apply a mapping
between accuracies of pruned models and labels denoting the presence of an
embedded trojan. The main challenges lie in estimating the optimal mapping, in
collecting signal measurements within a time limit, and in making the mapping
robust to many architectures and to complex trojan characteristics.
Figure 2: Overview of NN model classification workflow
Our contributions lie in the design of a baseline trojan detection approach
that
* •
leverages well-established filter pruning approaches and their existing
implementations (provides a baseline),
* •
evaluates multiple pruning, ranking, and sampling methods into model pruning
(includes optimization),
* •
collects model accuracy measurements over a wide spectrum of architectures and
with varying number of input images (delivers robustness), and
* •
includes classification accuracy and execution speed tradeoffs into the trojan
detection design (measures scalability).
## 2 Related Work
The design of trojan detection algorithms is a relatively new area of
research. According to the statistics derived from the publications listed at
[3] in 2020, two related publications appeared in Arxiv before 2017, eight in
2017, 15 in 2018, 31 in 2019, and 57 in 2020. The research interest increased
as IARPA and Defense Advanced Research Projects Agency (DARPA) announced the
TrojAI [2] and Guaranteeing AI Robustness Against Deception (GARD) [4]
programs in 2019. With more research efforts invested into designs of trojan
detectors [5, 6, 7], there is a need to establish a baseline method that is
simple but generally applicable, and that provides results better than chance
[8].
Model pruning approaches have been very popular in the past NN research [9,
10, 11, 12, 13, 14]. The past model pruning objectives were focused on
reducing file size and computations needed for storing, training and
inferencing models [11, 15, 12, 16, 13, 17, 18, 19]. The plethora of pruning
approaches was documented in a survey of 81 papers that led the authors in [9]
to design a framework for evaluations of pruning methods. Such a wide use of
the model pruning approach motivated us to leverage this approach for a design
of the baseline trojan detector. In addition, model capacity, efficiency, and
model pruning were mentioned as factors and a possible solution that can
increase robustness and resiliency to attacks [20, 14].
Our survey of available GitHub pruning-based solutions [21] highlighted the
existing challenges in terms of the limited number of supported model
architectures, long execution times, and dependencies on outdated libraries.
For example, the GitHub implementation from [18] is applicable to VGG16
architectures and has been adapted to ConvNet, AlexNet, ResNet18,
InceptionV3, and ResNet50 in limited settings [21]. There is no pruning
implementation that would work with the 22 model architectures presented in
the TrojAI challenge. Thus, our work could only partially leverage the GitHub
implementation linked from [13].
## 3 Methods
Classification Problem: The problem can be formulated as follows:
Classify a set of NN models $M$ as clean or poisoned such that the
classification is robust to
* •
architecture,
* •
the number of provided sample images without trojans, and
* •
trojan type;
while execution time is limited on a variety of computational hardware.
Formally, given the following inputs:
* •
a set of clean images $D_{i}$ that represent samples from each predicted class
$C_{l}\in C$;
* •
a model $M_{i}\in M$ of an architecture $G_{n}\in G$ that predicts $|C|$
classes
* •
a corresponding label for each model $M_{i}$:
* –
$L_{i}=0\rightarrow$ clean or Trained without Trojan,
* –
$L_{i}=1\rightarrow$ poisoned or Trained with Trojan,
Classify the model $M_{i}$ as either clean or poisoned while minimizing the
trojan detection error within an allocated execution time $T_{i}\leq T_{max}$
on a variety of computational platforms. Note: $|C|$ refers to the number of
classes (cardinality of the set of labels $C$).
Pruning-based Approach: To solve the classification problem, we introduced
quality assurance (QA) based classification criteria and designed a supervised
pruning-based classifier. The QA-based classification assumes reference
measurements about file size and model graphs are known. The pruning-based
classifier assumes that a trojan is encoded in convolutional filters. Thus,
one can discriminate NN models into clean and poisoned categories by
systematic pruning of convolutional filters across all layers, measuring
accuracies of pruned models $\vec{A_{i}}$, and estimating some function
$f(\vec{A_{i}})\rightarrow L_{i}$.
The pruning-based approach is characterized by the search for an optimal
mapping function $f(\vec{A_{i}})$ and optimal parameters
$\theta_{opt}{(G_{n},D)}$ used for computing the vector of pruned model
accuracies $\vec{A_{i}}$. The set of optimal parameters
$\theta_{opt}{(G_{n},D)}$ is specific to each NN architecture $G_{n}$ and
depends on available clean images $D_{i}\in D$. The optimization task can be
formally defined as follows:
Given the pruning-based approach and the following inputs:
* •
a NN model $M_{i}{(G_{n},C)}\in M$,
* •
a set of clean images $D_{i}{(C)}$, and
* •
a clean or poisoned label for each model $L_{i}$;
Find an optimal configuration of parameters $\theta_{opt}{(G_{n},D)}$ for each
model architecture $G_{n}\in G$ that minimizes the NN classification error
$\mathcal{L}_{i}^{error}$ subject to allocated execution time
$\mathcal{L}_{i}^{exec}$ per NN model as shown in Equation 1.
$\begin{split}\min_{\theta(G_{n},D)}{\sum_{i=1}^{|M(G_{n})|}\frac{1}{|M(G_{n})|}*\mathcal{L}_{i}^{error}(\theta(G_{n},D))}\\ \textrm{subject to}\quad\mathcal{L}_{i}^{exec}(\theta(G_{n},D))\leq 1\end{split}$ (1)
where $|M(G_{n})|$ is the number of NN models of the $G_{n}$ architecture type
and $\theta(G_{n},D)$ is a set of algorithmic configurations evaluated for
each NN architecture type $G_{n}$. The term for classification error
$\mathcal{L}_{i}^{error}$ is defined as
$\mathcal{L}_{i}^{error}=1.0-\mathcal{L}_{i}^{AC}$ defined in Equation 2 or as
a cross entropy (CE) loss $\mathcal{L}_{i}^{CE}$ according to Equation 3 (see
also [22]). In these equations,
$\vec{A_{i}}=\vec{A_{i}}{(M_{i},\theta(G_{n},D))}$ is a vector of accuracy
measurements over pruned models, $f(\vec{A_{i}})$ is the probability of
predicting a poisoned model, $\lfloor\;\rceil$ denotes rounding to the nearest
integer, and $\big{[}\;\big{]}$ is the Iverson bracket. The term for execution
time $\mathcal{L}_{i}^{exec}$ is defined as a percentage of maximum allocated
execution time $T_{max}$ according to Equation 4.
$\mathcal{L}_{i}^{AC}=\Big{[}L_{i}=\lfloor f(\vec{A_{i}})\rceil\Big{]}$ (2)
$\mathcal{L}_{i}^{CE}=-(L_{i}*ln(f(\vec{A_{i}}))+(1-L_{i})*ln(1-f(\vec{A_{i}})))$
(3)
$\mathcal{L}_{i}^{exec}=\frac{T_{i}}{T_{max}}$ (4)
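Assuming the predicted probability $f(\vec{A_{i}})$ and the label $L_{i}$ are available, Equations 2–4 can be sketched directly; the function names and the example values below are illustrative, not part of the original implementation:

```python
import math

def accuracy_term(L_i: int, f_Ai: float) -> int:
    """Iverson bracket [L_i = round(f(A_i))] from Equation 2."""
    return int(L_i == round(f_Ai))

def cross_entropy(L_i: int, f_Ai: float) -> float:
    """Binary cross-entropy loss from Equation 3."""
    return -(L_i * math.log(f_Ai) + (1 - L_i) * math.log(1 - f_Ai))

def exec_term(T_i: float, T_max: float) -> float:
    """Fraction of the allocated execution time from Equation 4."""
    return T_i / T_max

# Example: a poisoned model (L_i = 1) predicted with probability 0.8
print(accuracy_term(1, 0.8))            # 1 -> classified correctly
print(round(cross_entropy(1, 0.8), 4))  # -ln(0.8) ~ 0.2231
print(exec_term(41.0, 60.0))            # fraction of T_max = 60 s
```

The classification error term is then $\mathcal{L}_{i}^{error}=1-\mathcal{L}_{i}^{AC}$, i.e. `1 - accuracy_term(...)`.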
Pruning configurations: The space of pruning configurations can be
characterized by six parameters: $\theta(G_{n},D)=\\{PM,SM,RM,p,|S|,|D|\\}$.
Pruning methods $PM$ consist of {Remove, Reset, Trim}, sampling methods $SM$
can be {Random, Uniform, Targeted}, ranking methods $RM$ include {$l_{1}$,
$l_{2}$, $l_{\infty}$, stdev (standard deviation)}, and a sampling probability
$p$ can in general be any real value $p\in(0,1)\subset\mathbb{R}$ per NN layer.
The number of evaluated pruned models per configuration is
$|S|\in\mathbb{Z}^{>0}$, and the number of used evaluation images
$|D|\in\mathbb{Z}^{>0}$ can be any integer value smaller than the number of
all available clean images in $D_{i}$. Note that we excluded the pruned module
type as a pruning parameter as we focus on trojan feature formation in
convolutional layers represented by Conv2D and BatchNorm modules. We also
excluded the run-time parameters (software and hardware) as they could be
optimized based on the requirements for $T_{i}$.
The differences between pruning methods in $PM$ are illustrated in Figure 3.
The Remove method completely removes the convolutional filter and re-connects
inputs and outputs. The Reset method sets all filter coefficients to zero and
the Trim method clamps the coefficients to the mean $\pm k*stdev$, where the
mean and stdev are computed from the convolutional filter coefficients, and
$k\in(0,1]$. The sampling methods differ in choosing the set of filters for
pruning. Figure 4 shows the Targeted sampling method applied after the $l_{1}$ norm
was used to rank all filters in one layer (Inception v3 architecture, Layer
175). In Figure 4, $l_{1}$ norm is applied to all convolutional filters (top
right) and the filters are sorted accordingly (top left). The Targeted sampling
method selects $|S|=5$ sample sets of filters that are pruned (bottom left).
For each of the $|S|=5$ pruned models, the model accuracy is evaluated using
$|D|=10$ clean example images. Figure 4 (bottom right) shows an example of
accuracies measured over $S1,S2,S3,S4$ and $S5$ pruned models for clean and
poisoned models. While Targeted sampling selects contiguous filters from a
sorted list, the Uniform sampling method chooses uniformly distributed filters
after ranking them. The Random sampling method selects filters randomly and
the sampling is repeated $|S|$ times.
Figure 3: Differences between Remove (top), Reset (middle), and Trim (bottom)
pruning methods. Figure 4: Illustration of targeted sampling method and
$l_{1}$ ranking method.
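The ranking, targeted sampling, and Reset/Trim pruning steps described above can be sketched on a single toy layer. This is an illustrative reimplementation (the filter coefficients and set sizes are made up), not the authors' code, and the Remove method is omitted because it requires graph surgery:

```python
import statistics

def l1_rank(filters):
    """Sort filter indices by ascending l1 norm of their coefficients."""
    return sorted(range(len(filters)), key=lambda i: sum(abs(c) for c in filters[i]))

def targeted_sample_sets(ranked, num_sets):
    """Split the ranked filter list into |S| contiguous chunks (Targeted sampling)."""
    size = len(ranked) // num_sets
    return [ranked[s * size:(s + 1) * size] for s in range(num_sets)]

def reset(filt):
    """Reset pruning: set all filter coefficients to zero."""
    return [0.0] * len(filt)

def trim(filt, k=0.5):
    """Trim pruning: clamp coefficients to mean +/- k*stdev of the filter."""
    mu, sd = statistics.mean(filt), statistics.stdev(filt)
    lo, hi = mu - k * sd, mu + k * sd
    return [min(max(c, lo), hi) for c in filt]

# Toy layer with 6 "filters" of 4 coefficients each (hypothetical values)
layer = [[0.1, -0.2, 0.0, 0.1], [1.0, -1.2, 0.8, 0.9],
         [0.3, 0.2, -0.1, 0.0], [2.0, 1.5, -1.8, 2.2],
         [0.05, 0.0, 0.1, -0.05], [0.6, -0.4, 0.5, 0.3]]
ranked = l1_rank(layer)                      # least -> most significant
sets = targeted_sample_sets(ranked, 3)       # |S| = 3 contiguous sample sets
pruned = [reset(layer[i]) for i in sets[0]]  # prune the lowest-norm set
```

Each of the $|S|$ pruned variants would then be evaluated on $|D|$ clean images to produce one entry of the accuracy vector $\vec{A_{i}}$.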
Reduction of Search Space: The challenge of the pruning-based approach applied
to convolutional filters on NN models lies in the cost of searching the space
of all possible pruned models and pruning methods per architecture $G_{n}$.
Theoretically, the number of possible pruned models is
$\prod_{j=1}^{|L(G_{n})|}(2^{|F_{j}|}-1)$ for a NN architecture $G_{n}$ that
consists of $|L|$ convolutional layers with a varying number of convolutional
filters $|F_{j}|$ within each layer. We assume that _the significance of a
convolutional filter to class predictions is related to the norm of the filter
coefficients_ [12, 13, 19]. Thus, the number of pruned models can be reduced
to $\prod_{j=1}^{|L(G_{n})|}|F_{j}|$ by ranking filters; therefore ranking
methods are included in the pruning configurations. Unfortunately, there is no
theory nor guidelines about how to rank NN convolutional layers based on their
influence on the output [19]. Thus, in order to reduce the search space, we
assumed that all layers are equally significant to class predictions and
applied the same sampling probability $p$ of removed filters to all layers.
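The magnitude of this reduction, from $\prod_{j}(2^{|F_{j}|}-1)$ filter subsets to $\prod_{j}|F_{j}|$ ranked prefixes, can be illustrated numerically; the per-layer filter counts below are hypothetical:

```python
from math import prod

def unranked_count(filters_per_layer):
    """All non-empty filter subsets per layer, combined across layers."""
    return prod(2 ** f - 1 for f in filters_per_layer)

def ranked_count(filters_per_layer):
    """After norm-based ranking, only |F_j| prefix choices remain per layer."""
    return prod(filters_per_layer)

layers = [8, 16, 4]  # hypothetical |F_j| for three convolutional layers
print(unranked_count(layers))  # (2^8-1)*(2^16-1)*(2^4-1) = 250,671,375
print(ranked_count(layers))    # 8 * 16 * 4 = 512
```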
The challenge of optimizing the pruning-approach parameters lies in the
additional cost of evaluating all possible parameter configurations.
Theoretically, the space of parameters is infinite as it consists of all
pruning configuration parameters $\theta_{n}$ per architecture and all models
for the functional mapping $f$. The pruning configuration space is illustrated
in Figure 5 with six parameters $\theta(G_{n},D)=\\{PM,SM,RM,p,|S|,|D|\\}$ and
one unknown classifier function $f(\vec{A_{i}})$. To reduce the search space,
we first restricted the function $f(\vec{A_{i}}{(M_{i},\theta_{n})})$ to a
multiple linear regression. The mathematical expression is shown in Equation
5. The coefficients are derived by using the pairs of accuracy vectors
$\vec{A_{i}}$ and labels $L_{i}$.
$f(\vec{A_{i}}{(M_{i},\theta_{n})})=b_{0}+\sum_{k=1}^{|S|}b_{k}*A_{i,k}{(M_{i},\theta_{n})}$
(5)
where $|S|$ is the size of vector $\vec{A_{i}}$ and $b_{k}$ are the
coefficients derived from a linear regression fit.
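The linear-regression fit of Equation 5 can be sketched by ordinary least squares over pairs of accuracy vectors and labels. The training set below is a tiny synthetic stand-in (the accuracy values are invented, not TrojAI measurements), and the clamping to $[0,1]$ is an added convenience:

```python
import numpy as np

# Accuracy vectors A_i over |S| = 3 pruned models, one row per NN model
A = np.array([[0.90, 0.80, 0.70],   # clean models keep higher pruned accuracy
              [0.85, 0.80, 0.75],
              [0.40, 0.20, 0.10],   # poisoned models degrade faster
              [0.50, 0.30, 0.20]])
L = np.array([0, 0, 1, 1])          # clean = 0, poisoned = 1

# Fit f(A_i) = b_0 + sum_k b_k * A_{i,k} by least squares (Equation 5)
X = np.hstack([np.ones((len(A), 1)), A])
b, *_ = np.linalg.lstsq(X, L.astype(float), rcond=None)

def f(a_vec):
    """Predicted probability of a poisoned model (clamped to [0, 1])."""
    return float(np.clip(b[0] + b[1:] @ np.asarray(a_vec), 0.0, 1.0))

print(f([0.90, 0.80, 0.70]))  # near 0 for a clean accuracy profile
print(f([0.40, 0.20, 0.10]))  # near 1 for a poisoned accuracy profile
```

Rounding $f(\vec{A_{i}})$ to the nearest integer then yields the predicted label $L_{i}$, as in Equation 2.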
Next, we decomposed the configuration parameters $\theta_{n}$ into two
disjoint subsets $\theta_{n}^{error}=\\{PM,SM,RM,p\\}$ and
$\theta_{n}^{exec}=\\{|S|,|D|\\}$. The split of the parameters is based on our
observations that the number of pruned models $|S|$ and the number of images
to evaluate each pruned model with $|D|$ are the key contributors to
increasing classification time per model. This parameter decomposition allows
us to lower the search cost by first optimizing the four parameters in
$\theta_{n}^{error}$ for fixed low values in $\theta_{n}^{exec}$, and then by
completing the optimization of the two parameters in $\theta_{n}^{exec}$ with
fixed optimal values in $\theta_{n}^{error}$.
Finally, we reduce the search space by introducing relationships between the
$p\in(0,1)\subset\mathbb{R}$ and $|S|\in\mathbb{Z}^{>0}$ parameters under two
assumptions: (1) At least one convolutional filter per layer must be removed
in each pruned model, and therefore the layer with the smallest number of
filters defines the sampling probability as $p=1/\min_{j\in[1,|L|]}{|F_{j}|}$.
(2) Each filter must be removed at least once in the set of $|S|$ pruned
models and therefore $p=k/|S|$, where $k$ is a multiplier defining how many
times the same filter could be removed in a set of $|S|$ pruned models
yielding $\vec{A_{i}}$.
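The two assumptions can be written out directly; the filter counts and $|S|$ below are hypothetical:

```python
def p_from_smallest_layer(filters_per_layer):
    """Assumption (1): p = 1 / min_j |F_j|, so at least one filter is
    removed from the smallest layer in each pruned model."""
    return 1.0 / min(filters_per_layer)

def p_from_coverage(k, num_sets):
    """Assumption (2): p = k / |S|, so each filter is removed k times
    across the set of |S| pruned models."""
    return k / num_sets

print(p_from_smallest_layer([64, 128, 16]))  # 1/16 = 0.0625
print(p_from_coverage(k=1, num_sets=5))      # 1/5  = 0.2
```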
Figure 5: Pruning configurations generating measurements.
## 4 Experimental Results
The quality assurance and measurements were implemented in Python using the
PyTorch and sklearn libraries. The code, installation instructions, and the
reference model architecture graphs are available from GitHub [23]. Next, we
summarize the input datasets, quality control and performance results.
### 4.1 Input Datasets
TrojAI challenge datasets are described at [22]. Given the notation in Section
3, the Round 1 dataset can be characterized by the number of models
$|M|=1000$, the number of architectures $|G|=3$ with $G=${ResNet50,
InceptionV3, DenseNet121}, the number of predicted classes $|C|=5$, the number
of clean images per class $|D|=100$, and $50:50$ split of labels $L_{i}$
between clean and poisoned.
Similarly, the Round 2 dataset is described by the number of models
$|M|=1104$, the number of architectures $|G|=22$, the number of predicted
classes randomly varying $|C|=10\pm 5$ or $|C|=20\pm 5$, the number of clean
images per class randomly varying $|D|=|C|*10$ or $|D|=|C|*20$, and $50:50$
split between clean and poisoned labels $L_{i}$. The datasets are summarized
in Table I.
Table I: Summary of Input Datasets

Inputs | $|M|$ | $|G|$ | $|C|$ | $|D|$
---|---|---|---|---
Round 1 | 1000 | 3 | 5 | $|C|*100$
Round 2 | 1104 | 22 | $10\pm 5$ or $20\pm 5$ | $|C|*10$ or $|C|*20$
### 4.2 Quality Assurance
The input datasets were processed to compute the average and standard
deviation of model file size per architecture. The variations in model file
sizes are due to the use of PyTorch (version 3.4 and up) and its dependency on
the Python Pickle library [24] for data format optimizations and for saving
models as serialized objects to disk. As a sanity check, by analyzing the
model file sizes and model clean/poisoned labels, we confirmed that model file
sizes and their variations do not predict clean or poisoned labels.
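Such a file-size sanity check amounts to simple per-architecture statistics; the sizes and labels below are fabricated placeholders rather than TrojAI data:

```python
import statistics
from collections import defaultdict

# (architecture, file size in bytes, clean/poisoned label) -- hypothetical
models = [("ResNet50", 102_503_211, 0), ("ResNet50", 102_503_198, 1),
          ("InceptionV3", 95_101_377, 0), ("InceptionV3", 95_101_402, 1)]

by_arch = defaultdict(list)
for arch, size, _ in models:
    by_arch[arch].append(size)

# Reference mean and stdev of file size per architecture, used to flag
# models whose size deviates from the architecture's reference range
stats = {a: (statistics.mean(s), statistics.stdev(s)) for a, s in by_arch.items()}
for arch in stats:
    print(arch, stats[arch])
```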
For trojan detection, we extracted diagrams of abstract model graphs from the
Round 1 and Round 2 datasets using the Graphviz library [25]. The reference
graphs can be found in the reference_data folder of the GitHub repository [23]
and are used for detecting graph deviations.
### 4.3 Performance Results
All performance benchmarks were collected on a desktop running Ubuntu 18.04,
with 8 CPU cores (Intel(R) Xeon(R) Silver 4114 CPU @ $2.20\;\textmd{GHz}$),
and $192\;\textmd{GB}$ RAM. The implementation only utilizes CPU resources.
The evaluations over the set of parameters $\theta_{n}$ were skewed towards
$SM=\texttt{Targeted}$ and $RM=l_{1}$. The Targeted sampling method is
expected to outperform Uniform and Random sampling methods by design because
we anticipate two distinct trends of accuracy values in the vectors
$\vec{A_{i}}$ for clean and poisoned models (see Figure 4, bottom right). As
the evaluation metric, we used classification accuracy and average cross
entropy loss over all models in each dataset.
Round 1 dataset: We evaluated 31 pruning configurations for 286 DenseNet121
models, 395 ResNet50 models, and 319 InceptionV3 models in 254 h of compute
time. The 31 evaluations per NN model have the following distribution of
configuration parameters:
$PM = \{\texttt{Remove}(14x),\texttt{Trim}(12x),\texttt{Reset}(5x)\}$ (6)
$SM = \{\texttt{Targeted}(27x),\texttt{Uniform}(2x),\texttt{Random}(2x)\}$
$RM = \{l_{1}(19x),l_{\infty}(1x),\texttt{stdev}(11x)\}$
$p \in [0.075,0.9]$
$|S| = \{5(20x),10(4x),15(7x)\}$
$|D| = \{10(27x),20(1x),30(1x),40(1x),100(1x)\}$
The sampling probability $p$ was selected based on the assumptions about
pruning filters and explored for a wide range of values for the Trim pruning
method. The evaluations are staged first for the values of $|S|=5$, $|D|=10$,
and then for other values. We concluded that the smallest classification
errors were for ResNet50: $27.85\>\%$, for InceptionV3: $30.09\>\%$, and for
DenseNet121: $32.87\>\%$ (average of the three is $30.27\>\%$). When sorted by
average cross entropy loss, the smallest values were for ResNet50: $0.5169$,
for InceptionV3: $0.5969$, and for DenseNet121: $0.6251$ (average of the three
is $0.5796$), where the value of $0.6931$ corresponds to random guessing.
Figure 6 presents the distribution of false positive and false negative error
rates in the top three configurations sorted by average CE loss. For these top
results, the parameter distribution is skewed towards $PM=\texttt{Remove}$
(6x), $SM=\texttt{Targeted}$ (7x), $RM=l_{1}$ (9x), and $p=0.02$ (3x),
$|S|=15$ (6x), and $|D|=10$ (9x).
Figure 6: False positive (FP) and false negative (FN) error rates from the top
three parameter configurations for Round 1 dataset sorted by cross entropy
loss.
Figure 7 illustrates the key dependencies of execution time on the number of
pruned models $|S|$ and the number of images used for evaluations $|D|$. For
fixed $|D|=10$ in Figure 7 (top), the average of all standard deviations of
execution times is 1.45 s for DenseNet121, 1.09 s for InceptionV3, and 1.04 s
for ResNet50. For fixed $|S|=5$ in Figure 7 (bottom), the average of all
standard deviations of execution times is 0.78 s for DenseNet121, 0.81 s for
InceptionV3, and 0.61 s for ResNet50. These values indicate that the execution
times vary more for the variable $|S|$ than for the variable $|D|$ in our set
of explored configurations.
To meet the constraint on $\mathcal{L}_{i}^{exec}$ in Equation 1 for
$T_{max}=60$ s, we estimated the values of $|S|\leq 15$ and $|D|\leq 60$ given
our hardware specifications. We also observe that the total classification
error decreases much faster with increasing $|S|$ ($\approx 0.49\,\>\%$ per
$\Delta|S|=1$) than with increasing number of clean evaluation images $|D|$
($\approx 0.05\,\>\%$ per $\Delta|D|=1$). The execution times could also be
ranked based on NN architectures to ResNet50, InceptionV3, and DenseNet121
from the least to the most time consuming classification.
Figure 7: Average execution time of model classification for varying numbers
of pruned models $|S|$ (top) and evaluated images $|D|$ (bottom). Averages are
computed over $|M|=1000$ models and for a variety of the six parameters in
$\theta_{n}$. The line is the least squared linear fit to all average
execution times for the ResNet50 architecture.
Round 2 dataset: We evaluated 20 unique pruning configurations applied to 22
model architectures in 165 h of compute time. The Remove pruning method could
not be applied to three ShuffleNet architectures because of implementation
challenges; the ShuffleNet architecture makes it difficult to remove a single
filter in grouped convolutions from the dependency graph of pruned modules as
input channels and output channels must both be divisible by filter groups
[26, 27].
The evaluations were applied with the following distribution of parameters:
$PM = \{\texttt{Remove}(8x),\texttt{Trim}(7x),\texttt{Reset}(5x)\}$ (7)
$SM = \{\texttt{Targeted}(20x)\}$
$RM = \{l_{1}(19x),\texttt{stdev}(1x)\}$
$p \in [0.1,0.4]$
$|S| = \{5(12x),15(8x)\}$
$|D| = \{10(15x),30(3x),40(1x),\texttt{All}(1x)\}$
We mostly evaluated pruning configurations with $SM=\texttt{Targeted}$ and
$RM=l_{1}$ based on the results from the Round 1 dataset analyses. The values
of sampling probability $p$ were set according to our assumptions in Section
3. Figure 8 shows a histogram of model counts in the Round 2 dataset. Due to
the approximately $10\times$ fewer models per architecture than in the Round 1
dataset, the estimate of the mapping
$f(\vec{A_{i}}{(M_{i},\theta_{n})})\rightarrow L_{i}$ has likely a larger
margin of error.
Figure 8: Histogram of model architectures in the Round 2 dataset.
Figure 9 shows the classification accuracy ($\mathcal{L}_{i}^{AC}$) and
average CE loss for all found optimal parameters $\theta_{n}$ per
architecture. The average classification error over all 22 architectures for
the found optimal configurations is $17.59\>\%$ (False Positive = $8.88\>\%$
and False Negative = $8.71\>\%$) and the average CE loss is $0.3888$ (compared
to random guessing value $0.6931$). The trojan detection algorithm meets the
accuracy requirements [22] set to CE loss = $0.3465$ for 8 out of 22
architectures (i.e., InceptionV3, ResNet18, ResNet101, SqueezeNet1.0,
SqueezeNet1.1, VGG 13, Wide ResNet50, and Wide ResNet101). The average
execution time per model is $41$ s. The execution limit of $60$ s is met by 19
out of 22 model architectures (i.e., it is not met by DenseNet169, DenseNet201,
and Wide ResNet101).
The parameter values over the optimal settings found confirm that larger $|S|$
values improve detection accuracy. The choice of a pruning method appears to
be specific to the architecture, i.e., $|S|=\\{5(1x),15(21x)\\}$ and
$PM=\\{\texttt{Remove}(12x),\texttt{Trim}(6x),\texttt{Reset}(4x)\\}$ in the 22
optimal configurations $\theta_{n}$.
Figure 9: Classification accuracy and average cross entropy loss metrics
applied to Round 2 dataset for the found optimal parameters $\theta_{n}$ per
architecture. The line at $0.6931$ corresponds to random guessing for CE loss
values.
## 5 Conclusion
We presented a baseline pruning-based approach to trojan detection that was
evaluated on 2104 NN models from TrojAI Challenge (Round 1 and Round 2
datasets). The approach achieved average classification accuracy of
$69.73\>\%$ over Round 1 dataset and $82.41\>\%$ over Round 2 dataset with an
average processing time of less than $60$ s per model on a CPU hardware. The
code for such experimentations is available in GitHub [23].
## Acknowledgments
The funding for all authors was provided by IARPA:
IARPA-20001-D2020-2007180011
## Disclaimer
Commercial products are identified in this document in order to specify the
experimental procedure adequately. Such identification is not intended to
imply recommendation or endorsement by the National Institute of Standards and
Technology, nor is it intended to imply that the products identified are
necessarily the best available for the purpose.
## References
* [1] P. Bajcsy, N. Schaub, M. Majurski, Scientific Calculator for Designing Trojan Detectors in Neural Networks, Association for the Advancement of Artificial Intelligence (AAAI), Fall Symposium Series (FSS), AI in Government and Public Sector Applications (2020) 8.
* [2] IARPA, Intelligence Advanced Research Projects Agency: Trojans in Artificial Intelligence (TrojAI), https://pages.nist.gov/trojai/ (1 2020).
* [3] T. Kulp-McDowall, A. Dima, M. Majurski, TrojAI Literature Review., https://github.com/usnistgov/trojai-literature (12 2020).
* [4] H. Siegelmann, Guaranteeing AI Robustness against Deception (GARD), https://www.darpa.mil/program/guaranteeing-ai-robustness-against-deception (2019).
* [5] X. Xu, Q. Wang, H. Li, N. Borisov, C. A. Gunter, B. Li, Detecting AI Trojans Using Meta Neural Analysis (2019).
* [6] S. Jha, S. Raj, S. L. Fernandes, S. K. Jha, S. Jha, B. Jalaian, G. Verma, A. Swami, Attribution-based confidence metric for deep neural networks, Advances in Neural Information Processing Systems 32 (NeurIPS) (2019).
* [7] N. B. Erichson, D. Taylor, Q. Wu, M. W. Mahoney, Noise-response analysis for rapid detection of backdoors in deep neural networks, arXiv (2020). arXiv:2008.00123.
* [8] E. Ameisen, Always start with a stupid model, no exceptions., https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa (3 2018).
* [9] D. Blalock, J. J. G. Ortiz, J. Frankle, J. Guttag, What is the state of neural network pruning?, arXiv (2020). arXiv:2003.03033.
* [10] B. Hassibi, D. G. Stork, Second Order Derivatives for Network Pruning: Optimal Brain Surgeon, in: Advances in Neural Information Processing Systems 5 (NIPS 1992), Neural Information Processing Systems Foundation, Inc., 1992, pp. 164–172.
* [11] S. Han, J. Pool, J. Tran, W. J. Dally, Learning both Weights and Connections for Efficient Neural Networks (2015). arXiv:1506.02626.
* [12] H. Hu, R. Peng, Y.-w. Tai, S. G. Limited, C.-k. Tang, Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures (2016). arXiv:1607.03250.
* [13] H. Li, A. Kadav, I. Durdanovic, H. Samet, H. P. Graf, Pruning Filters for Efficient ConvNets, in: International Conference on Learning Representations, Palais des Congrès Neptune, Toulon, France, 2017, pp. 1–13.
* [14] K. Liu, B. Dolan-Gavitt, S. Garg, Fine-pruning: Defending against backdooring attacks on deep neural networks, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 11050 LNCS (2018) 273–294. doi:10.1007/978-3-030-00470-5-13.
* [15] S. Anwar, K. Hwang, W. Sung, Structured pruning of deep convolutional neural networks (2015). arXiv:1512.08571.
* [16] A. See, M. T. Luong, C. D. Manning, Compression of neural machine translation models via pruning, CoNLL 2016 - 20th SIGNLL Conference on Computational Natural Language Learning, Proceedings (2016) 291–301doi:10.18653/v1/k16-1029.
* [17] Z. Mariet, S. Sra, Diversity networks: Neural network compression using determinantal point processes (2017). arXiv:1511.05077.
* [18] P. Molchanov, S. Tyree, T. Karras, T. Aila, J. Kautz, Pruning convolutional neural networks for resource efficient inference, 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings (2015) (2017) 1–17. arXiv:1611.06440.
* [19] J. Ye, X. Lu, Z. Lin, J. Z. Wang, Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers, arXiv (2017) (2018) 1–11. arXiv:1802.00124.
* [20] Y. Liu, S. Ma, Y. Aafer, W.-C. Lee, J. Zhai, W. Wang, X. Zhang, Trojaning Attack on Neural Networks, in: NDSS, Internet Society, Network and Distributed Systems Security (NDSS) Symposium 2018, San Diego, CA, 2018, pp. 1–15. doi:10.14722/ndss.2018.23291.
* [21] jacobgil, wanglouis49, zepx, eeric, insomnia250, Model Pruning Implementations in GitHub by the listed GitHub users, https://github.com (12 2020).
* [22] NIST, Datasets for Trojans in Artificial Intelligence (TrojAI), https://pages.nist.gov/trojai/ (12 2020).
* [23] P. Bajcsy, Implementations of Pruning-Based Trojan Detection in GitHub, https://github.com/usnistgov/trojai-baseline-pruning (1 2021).
* [24] Python, Software, Foundation, Pickle - Python object serialization, https://docs.python.org/3/library/index.html (12 2020).
* [25] J. Ellson, E. Gansner, Y. Hu, S. North, Graphviz - Graph Visualization Software, https://graphviz.org/ (12 2020).
* [26] Python, Software, Foundation, PyTorch Conv2d Class, https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html (12 2020).
* [27] VainF-GitHub, Grouped Convolution Issue, https://github.com/VainF/Torch-Pruning/issues/9 (12 2020).
This brief introduction to Model Predictive Control specifically addresses stochastic Model Predictive Control, where probabilistic constraints are considered. A simple linear system subject to uncertainty serves as an example. The Matlab code for this stochastic Model Predictive Control example is available online.
§ INTRODUCTION
In the following, we provide details on an (S)MPC simulation example. The Matlab code for this simulation example is available at <https://github.com/tim283/smpc_example>. The main purpose of this document is to introduce the idea of considering uncertainty in constraints within MPC. This document is not necessarily a step-by-step manual, nor does it explain code in detail.
In Section <ref> we first introduce the deterministic system, constraints, and MPC optimal control problem. We then introduce uncertainty into the system dynamics and provide a brief overview of how to handle constraints subject to uncertainty, also called chance constraints.
Section <ref> provides a more elaborate derivation of the chance constraint reformulation, both for normally distributed uncertainties and general probability distributions.
§ SIMULATION EXAMPLE
We consider the system example described in [8] to illustrate the concept of MPC. Based on the MPC toolbox of [7], the introduced system may then be controlled.
While this example system is considered here without any physical context, the system represents a linearization of a Buck-Boost DC–DC converter [9, 4].
§.§ System and Constraints
We first consider the system
x_k+1 = Ax_k + Bu_k
with states $\bm{x}_k = [ x_{1,k},~x_{2,k}]^\top$ and input $u_k$. Here, we choose
$\bm{A} = \begin{bmatrix} 1 & 0.0075 \\ -0.143 & 0.996 \end{bmatrix}, \qquad \bm{B} =$
for the system and input matrix.
-0.2 ≤u_k ≤0.2
and a state constraint limiting the state $x_1$ by
x_1,k ≤x_1,lim
where $x_{1,\text{lim}} = 2.8$ is the limit.
§.§ Model Predictive Control
The MPC optimal control problem, solved at each time step, subject to only input constraints is given by
min_U ∑_k=0^N-1 x_k^⊤Q x_k + u_k^⊤R u_k
s.t. x_k+1 = Ax_k + Bu_k, k ∈{ 0, ..., N-1 }
|u_k| ≤0.2, k ∈{ 0, ..., N-1 }
with $\bm{U} = [u_0, ... u_{N-1}]$, horizon $N=11$ and sampling time $\Delta t = 0.1$, where the weighting matrices are
$\bm{Q} = \begin{bmatrix} 1 & 0 \\ 0 & 10 \end{bmatrix}, \qquad R = 1 .$
If no state constraints are present, starting at the initial state $\bm{x}_0 = [2.5,~4.8]^\top$ results in a curved motion towards the origin, where $x_1$ first increases (beyond the value $2.8$). By introducing the state constraint (<ref>), the MPC optimal control problem is extended to
min_U ∑_k=0^N-1 x_k^⊤Q x_k + u_k^⊤R u_k
s.t. x_k+1 = Ax_k + Bu_k, k ∈{ 0, ..., N-1 }
|u_k| ≤0.2, k ∈{ 0, ..., N-1 }
x_1,k ≤2.8, k ∈{1, ..., N }
causing $x_1$ to increase until the value $2.8$ is reached. Then, the states move towards the origin along the constraint.
§.§ Stochastic MPC for Systems with Uncertainty
Now, uncertainty is introduced into the system, yielding
x_k+1 = Ax_k + Bu_k + D w_k
D =
1 0
0 1
and the Gaussian uncertainty $\bm{w}_k \sim \mathcal{N} (\bm{0}, \bm{\Sigma}_w)$ with covariance matrix
Σ_w =
0.08 0
0 0.08
Assuming we now used (<ref>) to control the actual system (<ref>), the MPC would not account for uncertainty and the state constraint may be violated, as the prediction model does not account for the actual uncertainty. In order to cope with this, an SMPC approach is employed, accounting for the uncertainty $\bm{w}_k$. Therefore, the state constraint is transformed into a chance constraint
Pr(x_1,k ≤2.8) ≥rp
where rp is a risk parameter. The constraint (<ref>) is not required to hold always, but only up to a level specified by the predefined risk parameter. The higher the risk parameter is chosen, the lower the allowed risk. Sometimes, the risk parameter is defined as $\tilde{rp} = 1 - rp$, where rp in (<ref>) is replaced with $1-\tilde{rp}$. It then holds that increasing the risk parameter $\tilde{rp}$ also increases risk.
This chance constraint is required to be reformulated in order to be used within the optimal control problem, yielding
x_1,k ≤2.8 - γ
γ= √(2 [1,0] Σ^e_k [1, 0]^⊤) erf^-1(2rp -1)
which is an approximation of (<ref>) [6]. In other words, the state constraint is tightened by the tightening parameter $\gamma$, which itself depends on the risk parameter rp and the error covariance matrix $\bm{\Sigma}^\text{e}_k$ that is derived in the following. The vector $[1,0]$ is required as only the error affecting $x_1$ is important. A more extensive derivation of the chance constraint reformulation is provided in Section <ref>. In short, the term $\sqrt{[1,0]\, \bm{\Sigma}^\text{e}_k\, [1,~0]^\top}$ accounts for the variance of the error, and $\sqrt{2}~\mathrm{erf}^{-1}(2\,rp -1)$ follows from the uncertainty distribution (here the normal distribution) and the risk parameter.
The state of the system dynamics (<ref>) may be split into a deterministic and a probabilistic part
x_k = z_k + e_k
and the input is adapted to
u_k = -Kx_k + v_k
where $\bm{K}$ is a stabilizing state feedback matrix obtained by an LQR approach. Here, $\bm{K}\bm{x}_k$ controls the deterministic part of the system dynamics, while the new decision variable $v_k$ accounts for uncertainty.
MPC requires propagating the error for the prediction, yielding the error covariance matrix
Σ^e_k+1 = (A - B K ) Σ^e_k (A - B K )^⊤+ D Σ_w D^⊤
for every prediction step with
Σ^e_0 = diag(0,0).
Therefore, the error covariance matrix depends on the previous error propagated through the system, as well as the covariance of the uncertainty additionally added at each step. Note the difference between the uncertainty covariance matrix $\bm{\Sigma}_w$ and the error covariance matrix $\bm{\Sigma}^\text{e}_{k}$, which is computed for every prediction step $k$, based on the prediction model and on $\bm{\Sigma}_w$.
The full SMPC optimal control problem is
min_V ∑_k=0^N-1 x_k^⊤Q x_k + (v_k - Kx_k)^⊤R (v_k - Kx_k)
s.t. x_k+1 = Ax_k + B (v_k - Kx_k), k ∈{ 0, ..., N-1 }
|v_k - Kx_k| ≤0.2, k ∈{ 0, ..., N-1 }
x_1,k ≤2.8 - γ_k, k ∈{1, ..., N }
γ_k = √(2 [1,0] Σ^e_k [1, 0]^⊤) erf^-1(2rp -1), k ∈{1, ..., N }
where $\bm{V} = [v_0, ... v_{N-1}]$ replaces the decision variables $\bm{U}$.
If (<ref>) is applied to control the actual system (<ref>), the states do not exactly reach the state constraint; instead, a margin is established, providing space to account for uncertainty (the constraint is tightened). Increasing the risk parameter rp increases the required distance to the state constraint, again resulting in less constraint violation.
§ DETAILS ON SMPC WITH CHANCE CONSTRAINTS
In the following, a brief derivation of chance constraints in SMPC is provided. First, the system, constraint, and chance constraint setup are introduced.
We consider a linear system with additive uncertainty $\bm{w}_k$, i.e.,
x_k+1 = A x_k + B u_k + D w_k .
The system state is split into a deterministic state $\bm{z}_k$ and a probabilistic error $\bm{e}_k$. The input is also split into two parts, a feedback law stabilizing the deterministic system, as well as a new input $\bm{v}_k$ accounting for uncertainty. These considerations yield
x_k = z_k + e_k
u_k = Kx_k + v_k
Splitting the system state results in the new system models
z_k+1 = Φ z_k + B v_k
e_k+1 = Φ e_k + D w_k
with $\bm{\Phi} = \bm{A} + \bm{B}\bm{K}$.
We consider the linear constraint
g^⊤_k x_k ≤h_k.
Comparing (<ref>) to the simulation example in Section <ref>, we obtain $\bm{g}_k = [1, 0]^\top$ and $h_k = 2.8$.
As the system state is subject to (unbounded) uncertainty, it is not possible to guarantee constraint satisfaction in all cases. Therefore, we introduce a probabilistic constraint (chance constraint), which bounds the constraint violation probability depending on a risk parameter rp, yielding
Pr (g^⊤_k x_k ≤h_k ) ≥rp .
With the deterministic and error states, we obtain
Pr (g^⊤_k z_k + g^⊤_k e_k ≤h_k ) ≥rp .
The chance constraint (<ref>) is not a deterministic expression. It is necessary to reformulate the chance constraint, such that a tractable expression is obtained, which is then used to solve the optimization problem. For zero uncertainty, the deterministic part of the state must satisfy the state constraint. However, if uncertainty is present, the constraint must be tightened by a tightening parameter $\gamma_k$. This tightening parameter is determined depending on the uncertainty and the risk parameter rp. We therefore reformulate the chance constraint (<ref>) into
\[
\bm{g}^\top_k \bm{z}_k \leq h_k - \gamma_k
\]
\[
\Pr\left( \bm{g}^\top_k \bm{e}_k \leq \gamma_k \right) = \mathrm{rp}
\]
where (<ref>) ensures that the tightening parameter is chosen such that the uncertainty only causes constraint violations as often as allowed by the risk parameter.
Note that (<ref>) is still not a deterministic expression. In the following, we derive how (<ref>) is reformulated into a deterministic approximation, based on the underlying uncertainty distribution.
§.§ Normally Distributed Uncertainty
First, we analyze the chance constraint reformulation for an uncertainty $\bm{w}_k$ subject to a normal distribution with zero mean, i.e., $\bm{w}_k \sim \mathcal{N}\left(\bm{0},\bm{\Sigma}_w\right)$. Similar reformulations are also used in [2, 10, 5, 3, 1].
Given the normal distribution with zero mean, the error is also normally distributed according to
\[
\bm{e}_k \sim \mathcal{N}\left(\bm{0},\bm{\Sigma}^e_k\right)
\]
with $\bm{\Sigma}^e_0 = \bm{0}$ and
\[
\bm{\Sigma}_{k+1}^{e} = \bm{\Phi}\, \bm{\Sigma}_k^{e}\, \bm{\Phi}^\top + \bm{D}\, \bm{\Sigma}_w\, \bm{D}^\top .
\]
It follows that the constraint state expression is also normally distributed according to
\[
\bm{g}^\top \bm{e}_k \sim \mathcal{N}\left(0,\, \bm{g}^\top \bm{\Sigma}^e_k \bm{g}\right)
\]
where we assume that $\bm{g}^\top \bm{e}_k$ is scalar and abbreviate the variance with $\sigma^2 = \bm{g}^\top \bm{\Sigma}^e_k \bm{g}$.
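The covariance recursion above is easy to sketch numerically. The system matrices, feedback gain $\bm{K}$, and uncertainty covariance below are illustrative assumptions (chosen so that $\bm{\Phi} = \bm{A} + \bm{B}\bm{K}$ is stable), not values from the text:

```python
import numpy as np

# Sketch of the error-covariance recursion
#   Sigma^e_{k+1} = Phi Sigma^e_k Phi^T + D Sigma_w D^T,  Sigma^e_0 = 0.
# A, B, K, D, Sigma_w are assumed illustrative values.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
K = np.array([[-0.5, -1.0]])       # assumed stabilizing feedback gain
D = np.eye(2)
Sigma_w = 0.01 * np.eye(2)         # assumed uncertainty covariance

Phi = A + B @ K                    # closed-loop matrix, spectral radius < 1 here

def error_covariances(steps):
    """Return the list [Sigma^e_0, ..., Sigma^e_steps]."""
    Sigma_e = np.zeros((2, 2))
    covs = [Sigma_e]
    for _ in range(steps):
        Sigma_e = Phi @ Sigma_e @ Phi.T + D @ Sigma_w @ D.T
        covs.append(Sigma_e)
    return covs

covs = error_covariances(10)
```

Because $\bm{\Phi}$ is stable, the error covariance grows over the prediction horizon but converges toward a steady state; the per-step tightening derived next grows with it.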
§.§.§ Chance constraint reformulation
Based on the cdf of a normal distribution,
\[
\Pr\left( \bm{g}^\top_k \bm{e}_k \leq \gamma_k \right) = \frac{1}{2}\left[ 1 + \operatorname{erf}\left( \frac{\gamma_k}{\sigma\sqrt{2}} \right) \right] = \mathrm{rp},
\]
it is possible to find a deterministic expression for the tightening parameter $\gamma_k$ with the following reformulation:
\[
\begin{aligned}
\frac{1}{2}\left[ 1 + \operatorname{erf}\left( \frac{\gamma_k}{\sigma\sqrt{2}} \right) \right] &= \mathrm{rp} \\
\operatorname{erf}\left( \frac{\gamma_k}{\sigma\sqrt{2}} \right) &= 2\,\mathrm{rp} - 1 \\
\frac{\gamma_k}{\sigma\sqrt{2}} &= \operatorname{erf}^{-1}\left(2\,\mathrm{rp} - 1\right) \\
\gamma_k &= \sqrt{2}\, \sigma\, \operatorname{erf}^{-1}\left(2\,\mathrm{rp} - 1\right) .
\end{aligned}
\]
Details on the cdf of a normal distribution are found in Appendix <ref>.
Inserting the variance $\sigma^2 = \bm{g}^\top \bm{\Sigma}^e_k \bm{g}$, we obtain the tightening parameter
\[
\begin{aligned}
\gamma_k &= \sqrt{2}\, \sqrt{\sigma^2}\, \operatorname{erf}^{-1}\left(2\,\mathrm{rp} - 1\right) \\
&= \sqrt{2}\, \sqrt{\bm{g}^\top \bm{\Sigma}^e_k \bm{g}}\, \operatorname{erf}^{-1}\left(2\,\mathrm{rp} - 1\right) \\
&= \sqrt{2\, \bm{g}^\top \bm{\Sigma}^e_k \bm{g}}\, \operatorname{erf}^{-1}\left(2\,\mathrm{rp} - 1\right) .
\end{aligned}
\]
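As a numeric sanity check of this tightening formula, the following sketch evaluates $\gamma_k$ for the constraint direction $\bm{g}_k = [1, 0]^\top$ of the simulation example; the error covariance and risk parameter are assumed illustrative values:

```python
import numpy as np
from scipy.special import erfinv

# Sketch: gamma_k = sqrt(2 g^T Sigma^e_k g) * erfinv(2 rp - 1).
g = np.array([1.0, 0.0])                 # constraint direction, g^T x <= h
Sigma_e = np.array([[0.04, 0.00],
                    [0.00, 0.09]])       # assumed error covariance at step k
rp = 0.95                                # required satisfaction probability

sigma = np.sqrt(g @ Sigma_e @ g)         # std. dev. of g^T e_k
gamma = np.sqrt(2.0) * sigma * erfinv(2.0 * rp - 1.0)
```

For $\mathrm{rp} = 0.95$ the factor $\sqrt{2}\,\operatorname{erf}^{-1}(0.9)$ is the familiar Gaussian $95\%$ quantile $\approx 1.645$, scaled here by $\sigma$.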
§.§.§ Discussion
In general, the risk parameter is bounded by $0.5 \leq \mathrm{rp} < 1$. If the state lies exactly on the constraint, given the normal distribution, the probability of violating the constraint in the next step without tightening is exactly $50\%$. Therefore, no constraint tightening corresponds to a risk parameter of $\mathrm{rp} = 0.5$. A risk parameter $\mathrm{rp} = 1$ would guarantee constraint satisfaction; however, given the unbounded support of the normal distribution, this would require $\gamma_k = \infty$ (as $\operatorname{erf}^{-1}(1) = \infty$), which is clearly not practical. While it is mathematically possible to choose $\mathrm{rp} < 0.5$, this does not make sense, as it amounts to loosening the original hard constraint (constraint loosening instead of constraint tightening).
Furthermore, note that the realized constraint satisfaction rate may be larger than specified by the risk parameter, depending on the system and constraints. At first glance this is surprising, as a normal distribution allows an exact reformulation of the chance constraint. However, the tightening parameter $\gamma_k$ increases with the number of prediction steps, so the tightening realized over the predicted horizon may be larger than the tightening $\gamma_1$ required for the next step alone. The chance constraint considers only individual prediction steps, not the probability of violating the constraint over an entire trajectory.
§.§ General Probability Distributions
A chance constraint reformulation is also possible for general probability distributions. Here, we consider univariate distributions with zero mean and variance $\sigma_w^2$. While a normal distribution allows an exact reformulation, chance constraints for general distributions may only be approximated. An overview is found in [6].
Based on Cantelli's inequality (see Appendix <ref>), it is possible to determine a bound on the cdf, yielding
\[
\Pr\left( \bm{g}^\top_k \bm{e}_k < \gamma_k \right) \geq 1 - \frac{\sigma^2}{\gamma_k^2 + \sigma^2} = \mathrm{rp}
\]
where the risk parameter represents the required bound and $\sigma^2 = \bm{g}^\top_k \bm{\Sigma}^e_k \bm{g}_k$ denotes the variance of $\bm{g}^\top_k \bm{e}_k$, as before.
Reformulating results in
\[
\begin{aligned}
1 - \frac{\sigma^2}{\gamma_k^2 + \sigma^2} &= \mathrm{rp} \\
\frac{\sigma^2}{\gamma_k^2 + \sigma^2} &= 1 - \mathrm{rp} \\
\sigma^2 &= (1-\mathrm{rp})\left(\gamma_k^2 + \sigma^2\right) \\
\sigma^2 &= \gamma_k^2 (1-\mathrm{rp}) + \sigma^2 (1-\mathrm{rp}) \\
\sigma^2\, \mathrm{rp} &= \gamma_k^2 (1-\mathrm{rp}) \\
\gamma_k^2 &= \sigma^2\, \frac{\mathrm{rp}}{1-\mathrm{rp}} \\
\gamma_k &= \sigma \sqrt{\frac{\mathrm{rp}}{1-\mathrm{rp}}}
\end{aligned}
\]
representing a constraint tightening parameter for general probability distributions. We can rewrite this expression as
\[
\gamma_k = \sigma\, Q(\mathrm{rp})
\]
where the quantile function $Q(\mathrm{rp}) = \sqrt{\mathrm{rp}/(1-\mathrm{rp})}$ may be considered as the inverse cdf.
Unlike the reformulation for the normal distribution, the risk parameter here may take values $0 \leq \mathrm{rp} < 1$.
A comparison between constraint tightening based on (<ref>), i.e., $\sqrt{2}\,\operatorname{erf}^{-1}(2\,\mathrm{rp} - 1)$, and (<ref>), i.e., $\sqrt{\mathrm{rp} / (1-\mathrm{rp})}$, is shown in Figure <ref>. Note that the constraint tightening in Figure <ref> is not $\gamma_k$, as the variance is not considered. The constraint tightening for general probability distributions according to (<ref>) is clearly more conservative.
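The comparison can be reproduced numerically. A minimal sketch evaluating both tightening factors (variance factored out, i.e., $\sigma = 1$) over a few risk-parameter values:

```python
import numpy as np
from scipy.special import erfinv

# Exact Gaussian tightening factor vs. distribution-free Cantelli factor.
def tighten_normal(rp):
    return np.sqrt(2.0) * erfinv(2.0 * rp - 1.0)

def tighten_cantelli(rp):
    return np.sqrt(rp / (1.0 - rp))

rps = np.array([0.5, 0.8, 0.9, 0.95, 0.99])   # sample risk parameters
t_normal = tighten_normal(rps)
t_cantelli = tighten_cantelli(rps)
```

The Cantelli factor dominates the Gaussian one for every risk parameter, mirroring the conservatism visible in the figure.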
Constraint tightening comparison
§.§ Normal Distributions with Non-zero Mean
For non-zero mean probability distributions, the chance constraint must be reformulated slightly.
For a normal distribution with non-zero mean $\bm{\mu}$, (<ref>) must be adapted to
\[
\bm{g}^\top_k \bm{z}_k \leq h_k - \mathrm{E}\!\left[\bm{g}^\top_k \bm{e}_k\right] - \gamma_k
\]
\[
\Pr\left( \bm{g}^\top_k \bm{e}_k - \mathrm{E}\!\left[\bm{g}^\top_k \bm{e}_k\right] \leq \gamma_k \right) = \mathrm{rp}
\]
where the scalar expectation value $\mathrm{E}\!\left[\bm{g}^\top_k \bm{e}_k\right]$ represents an additional constraint tightening to account for the non-zero mean $\bm{\mu}$.
§ PROBABILITY THEORY BASICS
§.§ Normal Distributions
A normal distribution (also known as a Gaussian distribution) is characterized by its mean $\bm{\mu}$ and its covariance $\bm{\Sigma}$, abbreviated by $\mathcal{N}\left(\bm{\mu},\bm{\Sigma}\right)$. A special case is given by the standard normal distribution, where $\bm{\mu} = \bm{0}$ and $\bm{\Sigma} = \bm{I}$ with identity matrix $\bm{I}$. In the following, we consider the univariate normal distribution with mean $\mu$, variance $\sigma^2$, and standard deviation $\sigma$.
The pdf is given by
\[
f(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^{2} \right)
\]
where it holds that
\[
\int_{-\infty}^{\infty} f(x)\, \mathrm{d}x = 1 .
\]
The factor $\frac{1}{\sigma}$ accounts for the adjusted variance compared to the standard normal distribution.
For a continuous distribution, the pdf at a specific point $x$ does not yield a probability. Therefore, we introduce the cdf with random variable $X$,
\[
\Pr(X \leq x) = \Phi(x) = \int_{-\infty}^{x} f(t)\, \mathrm{d}t = \frac{1}{2}\left[ 1 + \operatorname{erf}\left( \frac{x - \mu}{\sigma\sqrt{2}} \right) \right] = \frac{1}{2} + \frac{1}{2} \operatorname{erf}\left( \frac{x - \mu}{\sigma\sqrt{2}} \right) .
\]
For $x = \mu$, the cumulative probability is exactly $0.5$.
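A minimal check of this erf-based cdf expression:

```python
import math

# The Gaussian cdf written via the error function,
#   Phi(x) = 1/2 * (1 + erf((x - mu) / (sigma * sqrt(2)))).
def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
```

At $x = \mu$ the error function vanishes and the expression gives exactly $0.5$, as stated above.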
In the cdf, only one side of the distribution is considered. If we consider both sides of a zero-mean distribution ($\mu = 0$), we obtain
\[
\begin{aligned}
\Pr(-x \leq X \leq x) &= \int_{-x}^{x} f(t)\, \mathrm{d}t \\
&= \int_{-x}^{0} f(t)\, \mathrm{d}t + \int_{0}^{x} f(t)\, \mathrm{d}t \\
&= 2 \int_{0}^{x} f(t)\, \mathrm{d}t \\
&= 2 \left( \int_{-\infty}^{x} f(t)\, \mathrm{d}t - \int_{-\infty}^{0} f(t)\, \mathrm{d}t \right) \\
&= 2 \left( \left( \frac{1}{2} + \frac{1}{2} \operatorname{erf}\left( \frac{x}{\sigma\sqrt{2}} \right) \right) - \frac{1}{2} \right) \\
&= \operatorname{erf}\left( \frac{x}{\sigma\sqrt{2}} \right) .
\end{aligned}
\]
Note that the symmetry argument in the third step requires $\mu = 0$.
§.§ Chebyshev's Inequality
Chebyshev's inequality provides a bound valid for any probability distribution with finite variance: it bounds how likely it is that a random variable deviates from its mean by at least a given threshold $c$.
For zero mean distributions, i.e., $\mu = 0$, and variance $\sigma^2$, it holds that
\[
\Pr\left( |X| \geq c \right) \leq \frac{\sigma^2}{c^2}
\]
where $c > 0$. Complementarily, it holds that
\[
\Pr\left( |X| < c \right) \geq 1 - \frac{\sigma^2}{c^2} .
\]
Note the change from $\geq$ to $<$.
The zero-mean results may be extended to distributions with mean $\mu \neq 0$, yielding
\[
\Pr\left( |X - \mu| \geq c \right) \leq \frac{\sigma^2}{c^2}, \qquad
\Pr\left( |X - \mu| < c \right) \geq 1 - \frac{\sigma^2}{c^2} .
\]
§.§ Cantelli's Inequality
The one-sided Chebyshev's inequality is also known as Cantelli's inequality, where only a single tail of the distribution is considered. Cantelli's inequality for zero-mean and non-zero-mean distributions is given by
\[
\Pr\left( X \geq c \right) \leq \frac{\sigma^2}{\sigma^2 + c^2}, \qquad
\Pr\left( X - \mu \geq c \right) \leq \frac{\sigma^2}{\sigma^2 + c^2},
\]
from which it follows that
\[
\Pr\left( X < c \right) = 1 - \Pr\left( X \geq c \right) \geq 1 - \frac{\sigma^2}{\sigma^2 + c^2}, \qquad
\Pr\left( X - \mu < c \right) = 1 - \Pr\left( X - \mu \geq c \right) \geq 1 - \frac{\sigma^2}{\sigma^2 + c^2} .
\]
Note that $a \leq b \Longleftrightarrow -a \geq -b$.
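Both bounds are easy to verify empirically. The choice of a standard normal sample below is an illustrative assumption; any zero-mean distribution with finite variance obeys the same bounds:

```python
import numpy as np

# Empirical check of Chebyshev's and Cantelli's bounds.
rng = np.random.default_rng(0)
sigma, c = 1.0, 2.0
samples = rng.normal(0.0, sigma, size=200_000)

p_one_sided = np.mean(samples >= c)           # Pr(X >= c)
p_two_sided = np.mean(np.abs(samples) >= c)   # Pr(|X| >= c)
cantelli = sigma**2 / (sigma**2 + c**2)       # one-sided bound
chebyshev = sigma**2 / c**2                   # two-sided bound
```

For the normal distribution the empirical tail probabilities sit far below both bounds, which reflects how conservative the distribution-free inequalities are.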
[1]
T. Brüdigam, M. Olbrich, M. Leibold, and D. Wollherr.
Combining stochastic and scenario model predictive control to handle
target vehicle uncertainty in autonomous driving.
In 21st IEEE International Conference on Intelligent
Transportation Systems, Maui, USA, 2018.
[2]
L. Blackmore, M. Ono, and B. C. Williams.
Chance-constrained optimal path planning with obstacles.
IEEE Transactions on Robotics, 27(6):1080–1094, 2011.
[3]
A. Carvalho, Y. Gao, S. Lefevre, and F. Borrelli.
Stochastic predictive control of autonomous vehicles in uncertain
environments.
In 12th International Symposium on Advanced Vehicle Control,
Tokyo, Japan, 2014.
[4]
M. Cannon, B. Kouvaritakis, S. V. Rakovic, and Q. Cheng.
Stochastic tubes in model predictive control with probabilistic
constraints.
IEEE Transactions on Automatic Control, 56(1):194–200, Jan 2011.
[5]
M. Farina, L. Giulioni, L. Magni, and R. Scattolini.
An approach to output-feedback MPC of stochastic linear discrete-time
systems.
Automatica, 55:140–149, 2015.
[6]
M. Farina, L. Giulioni, and R. Scattolini.
Stochastic linear model predictive control with chance constraints
– a review.
Journal of Process Control, 44:53–67, 2016.
[7]
L. Grüne and J. Pannek.
Nonlinear Model Predictive Control.
Springer-Verlag, London, 2017.
[8]
M. Lorenzen, F. Dabbene, R. Tempo, and F. Allgoewer.
Constraint-tightening and stability in stochastic model predictive
control.
IEEE Transactions on Automatic Control, 62(7):3165–3177, July 2017.
[9]
M. Lazar, W.P.M.H. Heemels, B.J.P. Roset, H. Nijmeijer, and P.P.J. van den
Bosch.
Input-to-state stabilizing sub-optimal NMPC with an application to
DC–DC converters.
International Journal of Robust and Nonlinear Control,
18(8):890–904, 2008.
[10]
M. Ono and B.C. Williams.
Iterative risk allocation: A new approach to robust model predictive
control with a joint chance constraint.
In 2008 47th IEEE Conference on Decision and Control, pages
3427–3432, 2008.
# Detection of supernova remnants in NGC 4030
R. Cid Fernandes1, M. S. Carvalho1, S. F. Sánchez2, A. de Amorim1, D. Ruschel-
Dutra1
1Departamento de Física - CFM - Universidade Federal de Santa Catarina, C.P.
476, 88040-900, Florianópolis, SC, Brazil
2Instituto de Astronomía, Universidad Nacional Autónoma de México, A. P.
70-264, C.P. 04510 México, D.F., México
###### Abstract
MUSE-based emission-line maps of the spiral galaxy NGC 4030 reveal the
existence of unresolved sources with forbidden line emission enhanced with
respect to those seen in its own H ii regions. This study reports our efforts
to detect and isolate these objects and identify their nature. Candidates are
first detected as unresolved sources on an image of the second principal
component of the H$\beta$, [O iii] 5007, H$\alpha$, [N ii] 6584, [S ii] 6716,
6731 emission-line data cube, where they stand out clearly against both the
dominant H ii region population and the widespread diffuse emission. The
intrinsic emission is then extracted accounting for the highly inhomogeneous
emission-line “background” throughout the field of view. Collisional to
recombination line ratios like [S ii]/H$\alpha$, [N ii]/H$\alpha$, and [O
i]/H$\alpha$ tend to increase when the background emission is corrected for.
We find that many (but not all) sources detected with the principal component
analysis have properties compatible with supernova remnants (SNRs). Applying a
combined [S ii]/H$\alpha$ and [N ii]/H$\alpha$ classification criterion leads
to a list of 59 sources with SNR-like emission lines. Many of them exhibit
conspicuous spectral signatures of SNRs around 7300 Å, and a stacking analysis
shows that these features are also present, though weaker, in other cases. At
nearly 30 Mpc, these are the most distant SNRs detected by optical means to
date. We further report the serendipitous discovery of a luminous
($M_{V}\sim-12.5$), blue, and variable source, possibly associated with a
supernova impostor.
###### keywords:
ISM: supernova remnants – methods: data analysis – galaxies: ISM.
## 1 Introduction
NGC 4030 is an Sbc spiral with multiple arms (Buta et al., 2015), inclined by
$47^\circ$ (Crowther, 2013) at a distance of 29 Mpc. This distance is a compromise
between the redshift-based distance of $27\pm 2$ Mpc (considering a recession
velocity of $1826\pm 26$ km s$^{-1}$ relative to the CMB; Koribalski et al., 2004,
and adopting $H_{0}=67.8$ km s$^{-1}$ Mpc$^{-1}$) and the $29.9\pm 0.2$ Mpc
Tully-Fisher distance derived by Tully et al. (2013). Ganda et al. (2006,
2007) analysed the properties and kinematics of the gas and stellar population
in the context of the SAURON survey, finding regular rotation in both gaseous
and stellar components, and a prevalent young, metal-rich stellar population.
Its central region contains a nuclear star cluster and also an unresolved
X-ray source (Seth et al., 2008). The galaxy was the host of SN 2007aa. As
such it appears in several studies about the relation between SNe and their
host galaxies (e.g. Smartt et al., 2009; Chornock et al., 2010; Anderson et
al., 2012; Im et al., 2019).
NGC 4030 is one of the star-forming galaxies targeted by the MUSE Atlas of
Disks (MAD) survey (Erroz-Ferrer et al., 2019; den Brok et al., 2020), where
its effective radius, stellar mass, star formation rate, and gas phase
metallicity are estimated to be 31 arcsec (4.4 kpc),
$10^{11.2}\mathrm{M_{\odot}}$, $11\,\mathrm{M_{\odot}\,yr^{-1}}$, and
$12+\log{\rm O/H}\sim 9.0$, respectively. More recently, these same data were
part of the study by López-Cobá et al. (2020) on outflows in MUSE galaxies,
although no outflow was found for NGC 4030.
In this paper, we explore these same MUSE data, but with a very different
focus. Inspection of emission-line maps reveals a population of compact
sources whose pattern of line emission differs from that of the plethora of H
ii regions throughout the disc of NGC 4030. The sources that caught our eye
are seen in Fig. 1, whose main panel shows a RGB composite where red codes for
H$\alpha$, green for the [N ii] line at 6584 Å, and blue for a combination of
[S ii] 6716, 6731 and [O iii] 5007. Besides the impressive collection of H ii
regions (in red), this image shows numerous blue/green point-like sources
scattered all over the galaxy.
What are these sources? Are they planetary nebulae (PNe), like those found in
MUSE data for NGC 628 by Kreckel et al. (2017), supernova remnants (SNRs) like
the ones recently found in NGC 6946 by Long et al. (2019), or some other kind
of source? Though the title of the paper spoils the suspense, let us pretend
we do not know the answer and report the steps that lead to it, mimicking our
own discovery process.
In order to identify the nature of these sources one first needs to devise
ways of detecting them and quantifying their observational properties. These
are the two central goals of this paper, which is organized as follows:
Section 2 describes the data and the processing steps. Section 3 presents a
spaxel-based emission-line analysis using both conventional tools and
principal component analysis (PCA). Section 4 describes how we detect and
extract the observational properties of the sources. It also contains the
first direct evidence that at least some of them are SNRs. This is further
explored in Section 5, where we tailor the search for SNRs by means of
emission-line-based criteria. Section 6 discusses what impact these results
might have on studies of star-forming regions in galaxies. Our main findings
are summarized in Section 7.
Figure 1: Bottom panels (b, c, d): Maps of the H$\alpha$, [N ii] 6584, and [S
ii] 6716+6731 emission in NGC 4030. The intensity scale is given in units of
the median spaxel luminosity (421, 164, and 94 $L_{\odot}$ for H$\alpha$, [N
ii], and [S ii], respectively). Panel (e) shows an RGB composite based on the
mean continuum flux densities at $\lambda=8100\pm 50$ (red), $5635\pm 45$
(green), and $4860\pm 60$ Å (blue, but excluding the $\pm 25$ Å around
H$\beta$). Main panel: RGB composite mixing H$\alpha$ (in red), [N ii]
(green), and [S ii] + [O iii] (blue) images as explained in the text. Right-
hand panels (f, g, h) : Zoomed-in $50\times 50$ spaxels cut-outs of panel (a)
highlighting some of the blue/green compact sources we seek to identify.
## 2 Data and processing steps
NGC 4030 was observed with the Multi Unit Spectroscopic Explorer (MUSE) at
ESO’s Very Large Telescope as part of the MAD survey (PI: Carollo; see Erroz-
Ferrer et al., 2019). Two 1800 s observations taken in February and March of
2016 were combined into a single $326\times 326$ datacube with a spatial
sampling of 0.2 arcsec covering a field of view (FoV) of $\sim 1$ arcmin2, and
a spectral sampling of 1.25 Å from 4750 to 9350 Å. The reduction procedures
are those from the MUSE pipeline (Weilbacher et al., 2012). The seeing full
width at half-maximum (FWHM) was 0.6 arcsec. At $d=29$ Mpc, each spaxel covers
28 pc, and the seeing disc spans 84 pc.
The data cube went through our full analysis “pipeline”, comprising basic pre-
processing steps, fitting of the stellar continuum, and measurement of the
emission lines. We refer the reader to Vale Asari et al. (2019), where these
same steps were applied to MaNGA galaxies, for details. Briefly, after masking
foreground stars and low signal-to-noise spaxels, correcting for Galactic
extinction (we used a Galactic $E(B-V)=0.023$ from Schlafly & Finkbeiner,
2011, and a Cardelli et al. 1989 extinction law with $R_{V}=3.1$), rest-
framing and re-sampling the spectra, each spaxel had its spectrum fitted with
the spectral synthesis code starlight (Cid Fernandes et al., 2005) in order to
map the stellar populations of the galaxy, and, more importantly for this
work, to aid the measurement of nebular emission. The starlight model is then
subtracted from the observed spectrum, leaving a “pure emission-line” residual
spectrum $R_{\lambda}$ from which emission lines are measured by fitting
Gaussians.
This pipeline produces a series of stellar population and nebular properties,
but this study relies almost exclusively on the emission line fluxes. The main
set of lines consists of H$\beta$, [O iii] 5007, H$\alpha$, [N ii] 6584, [S
ii] 6716, and 6731, as these are measurable throughout most of the data cube.
The median signal-to-noise ratio for these lines over the FoV are 26, 8, 115,
45, 15, and 11, respectively. Many other transitions are detected as well, as
will be shown below. Indeed, these fainter lines play an important role in
this paper.
## 3 Emission-line analysis
We start our analysis by examining the emission-line maps obtained from our
pipeline (Section 3.1), as this is how the sources that motivate this study
were originally spotted. We then move to a less conventional space where
individual lines are replaced by eigen line spectra (3.2), a change of
variables which proved useful in the context of this paper.
### 3.1 Emission-line maps
The maps at the bottom of Fig. 1 (panels b, c, and d) show the H$\alpha$, [N
ii] and [S ii] images obtained from our pipeline. The RGB composite (panel a)
was built as follows: The red channel corresponds to the H$\alpha$ flux in
units of its median value among all $93456$ usable spaxels. For the green one
we multiply this same H$\alpha$ flux by
$([\mathrm{N}\,\textsc{ii}]/\mathrm{H}\alpha)/0.39$, where 0.39 corresponds to
the median value of the [N ii]/H$\alpha$ ratio in the FoV. The blue channel is
built analogously, but using the average of
$([\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha)/0.22$ and
$([\rm{O}\,\textsc{iii}]/\mathrm{H}\beta)/0.26$, where [S ii] denotes the sum
of the 6716 and 6731 lines. These median line ratios lie in between those
typical of metal rich H ii regions and those found in low-ionization nuclear
emission-line regions (LINERs) and in the diffuse ionized gas (see, for
instance, Lacerda et al., 2018). H ii regions thus appear in red in our colour
scheme, while spaxels with larger collisional to recombination line flux
ratios are painted in shades of green and blue – Sánchez (2020) and López-Cobá
et al. (2020) employ similar color schemes.
H ii regions dominate the line emission everywhere in NGC 4030 except for the
inner regions, where, as is common in bulges, the pattern of line emission is
typical of LINERs. Its median $[\mathrm{N}\,\textsc{ii}]/\mathrm{H}\alpha$ of
0.74 (nearly twice the value for the whole FoV) within 2 arcsec of the nucleus
is what gives its green look in Fig. 1. We note in passing that there is no
evidence of nuclear activity in this galaxy (but see Seth et al., 2008). In
fact, we find that its central regions are well within the “retired galaxy”
regime (Stasińska et al., 2008; Cid Fernandes et al., 2011), with an H$\alpha$
equivalent width of just 1.4 Å. In any case, to avoid confusion we exclude the
inner 4 arcsec in diameter from the analysis that follows.
The sources we are interested in are the blue/green dots scattered throughout
the galaxy. The zooms in panels f, g, and h of Fig. 1 show a few examples.
Close inspection of these images corroborates the visual impression that these
are indeed unresolved, with sizes compatible with the ${\rm FWHM}=0.6$ arcsec
seeing. From here on we will refer to these sources as UFLOs, for unidentified
forbidden line objects. The central goal of this paper is to remove the “U”
from this acronym for as many sources as possible. From the very title of this
paper one already knows that we claim that SNRs are amongst these UFLOs. We
nonetheless keep this nomenclature until this is properly demonstrated, and
also because this identification may not apply to all cases.
As also seen in these images, the same pattern of line emission also appears
in diffuse form, both in the bulge and inter-arm regions, whose colours are
similar to those of the unresolved UFLOs we aim to study. Whether this
indicates that at least part of this diffuse emission comes not from genuinely
diffuse ionized gas (DIG), but from a sparse collection of sources similar to
the ones we see in Fig. 1, except weaker, is an interesting question, but one
that will not be addressed here. As stated before, we seek to properly isolate
and inspect the properties of the UFLOs in NGC 4030.
Fig. 1 encapsulates the rationale behind classical diagnostic diagrams
involving collisional to recombination line flux ratios (e.g. Baldwin et al.,
1981, hereinafter BPT). We refer the reader to López-Cobá et al. (2020), where
several such diagrams are presented for NGC 4030 and many other galaxies. Red
spaxels in Fig. 1 have line ratios typical of metal rich H ii regions, while
green/blue spaxels occupy loci in diagnostic diagrams which overlap with those
of SNRs, PNe, active galactic nuclei, and DIG-like emission.
The latter two of these possibilities can be discarded right away, since (1)
UFLOs appear all through the disc of NGC 4030, and (2) they are compact. PNe
can also be ruled out on the grounds that, besides not being strong [O iii]
emitters, our UFLOs are far too luminous to be associated with PNe, whose
H$\alpha$ luminosities range from $\sim 1$ to 100 $L_{\odot}$ (Stasińska et
al., 1998). This is much fainter than even the spaxel luminosities of our
UFLOs, which (due to atmospheric seeing) account for just part of the total
output of any unresolved source. PNe are just too faint to be detected in
these data. SNRs, on the other hand, remain a viable possibility. With
expected diameters of the order of 40 pc (Roth et al., 2018), SNRs would
appear unresolved at the 84 pc resolution of the MUSE data.
### 3.2 PCA tomography
Let us now explore a less canonical approach, where the choice of line
properties and diagrams to use is guided not by pre-conceived ideas on how to
best combine emission-line data to gain physical insight, but by algebra.
Figure 2: The first three eigen line spectra obtained from the PCA. The
dashed line shows the rescaled mean line-spectrum. Figure 3: Tomograms of
the first three principal components. The intensity scales (all in
$L_{\odot}$) were set to span the range from the 2.5 to the 97.5 percentile of
the PC values in each image. Missing points (in white) mark the masked spaxels
(bad data or foreground stars). The image on the right shows a composite
mapping PC1, PC2, and PC3 (each rescaled to its 2.5–97.5 percentile range) to
red, green, and blue, respectively.
Inspired by the pioneering work of Steiner et al. (2009), we performed a
series of PCA tomography experiments with the MUSE data on NGC 4030. Here we
focus on the results obtained for a $326\times 326\times 6$ cube where the six
layers correspond to the luminosities of H$\beta$, [O iii] 5007, H$\alpha$, [N
ii] 6584, [S ii] 6716, 6731. The analysis is carried over the spaxels where
all six lines were detected by our pipeline, regardless of signal-to-noise.
The terminology “line-spectrum” will be used to denote a set of these six
emission-line luminosities. (PCA tomography was also applied to the full data
cube, but in this paper we focus on the $326\times 326\times 6$ emission-line
data cube only. Incidentally, a faint foreground star at pixel
$(x,y)=(46,302)$ was first identified by us in a full-spectrum PCA, and masked
from the analysis thereafter.)
We obtain that the first, second and third PCs account for 98.5, 1.2, and 0.2
per cent of the total variance, respectively. Fig. 2 shows the first three
eigen line spectra derived from the data (in red, green and blue lines,
respectively), along with the mean line spectrum (shown as a black dashed
line, scaled to unit norm). Unsurprisingly, this mean line-spectrum is typical
of the H ii regions in NGC 4030. Moreover, the first eigen line spectrum is
very similar to the mean line-spectrum, which is just what one expects since
the spaxel line-spectra were not rescaled prior to the PCA. The main role of
PC1 is thus to set the overall flux scale.
The second principal component has an eigen line spectrum that contrasts
forbidden ([O iii], [N ii], and [S ii]) with recombination (H$\alpha$,
H$\beta$) lines. Spaxels with positive values of PC2 map into those with
enhanced forbidden line emission, like the blue/green spaxels of Fig. 1a.
Finally, the third eigen line spectrum essentially boosts H$\beta$ (and [O
iii]) with respect to H$\alpha$. We attribute this behaviour to the effects of
dust. Spaxels with PC3 $>0$ have larger H$\beta$/H$\alpha$ and are thus
presumably less attenuated by dust, and vice versa.
Attributing meaning to PCs is often a dangerous exercise. In the case of data
cubes, however, the spatial organization of the PCs gives valuable insight on
the astrophysical meaning of this otherwise purely mathematical construct.
This is the beauty of PCA tomography.
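The computation itself is compact. A minimal sketch of PCA tomography on a line cube, with a small synthetic cube standing in for the real $326\times 326\times 6$ data and the PCA done via an SVD of the centred spaxel line-spectra:

```python
import numpy as np

# PCA tomography sketch on an (ny, nx, 6) emission-line cube.
rng = np.random.default_rng(2)
ny, nx, nlines = 20, 20, 6
cube = rng.lognormal(mean=2.0, sigma=0.5, size=(ny, nx, nlines))  # placeholder

X = cube.reshape(-1, nlines)              # one 6-line "line-spectrum" per spaxel
Xc = X - X.mean(axis=0)                   # centre only (no rescaling, as in the text)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)           # variance fraction per component
scores = Xc @ Vt.T                        # PC values per spaxel
tomograms = scores.reshape(ny, nx, nlines)  # tomogram k = tomograms[:, :, k]
```

Rows of `Vt` play the role of the eigen line spectra, and each `tomograms[:, :, k]` is the image of PC$k{+}1$ over the field of view.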
Fig. 3 shows tomograms of the first three PCs. Tomogram 1 evidently traces the
H ii regions in the galaxy. In fact, this image of PC1 is very similar to the
H$\alpha$ map in panel b of Fig. 1. Tomogram 2, on the other hand, highlights
regions of enhanced forbidden line emission. H ii regions appear as dark
zones, while UFLOs stand out, along with faint diffuse emission both in the
central and inter-arm regions. In tomogram 3 the spiral arms alternate from
positive to negative values of PC3 as one crosses them in azimuth. It is also
visible that PC3 $<0$ regions are more concentrated than PC3 $>0$ ones. Inspecting the
image one further finds several dark (PC3 $<0$) spots surrounded by bright
ones (PC3 $>0$), but not the other way around. Recalling that negative PC3
maps to an increase in H$\alpha$/H$\beta$, both these facts are consistent
with our association of PC3 with dust.
The rightmost panel of Fig. 3 combines tomograms 1, 2, and 3 into a RGB
composite. H ii regions appear as mixtures of red (PC1) and blue (PC3). UFLOs
stand out as compact green/yellow sources. These same colours also appear in a
diffuse component all over the FoV, brighter in the central regions.
Given its association with UFLOs, PC2 is the most relevant component in the
context of this paper. Accordingly, in the next section we make use of
tomogram 2 to select sources to study more closely.
## 4 Detecting and extracting the sources
This section presents our efforts to produce a list of candidate UFLOs in NGC
4030, and examine their properties. Section 4.1 describes the source detection
method, while in Section 4.2 we deal with the difficult problem of how to
isolate them from the intense and inhomogeneous background they are immersed
in. A diagnostic diagram analysis is also presented, which suggests a link
between UFLOs and SNRs. Spectral extraction is dealt with in Section 4.3,
which also presents direct evidence that many of our UFLOs are in fact SNRs.
Section 4.4 presents our star case of this association.
### 4.1 Detecting UFLOs
Figure 4: RGB composites like those in Figs. 1 (top) and 3 (bottom), but re-
processed to darken the diffuse emission, marking the loci of candidate UFLOs.
Large circles mark the 26 sources in the primary sample. Small and
intermediate size circles are sources also detected for threshold values of
$T=100$ and $200L_{\odot}$, respectively.
Our strategy to identify UFLOs is to search for unresolved sources in the PC2
image. We use the photutils implementation of DAOStarFinder on tomogram 2 after
masking spaxels with PC2 $<0$ and excluding regions within 2 arcsec from the
nucleus. The expected FWHM of the sources is fixed at three spaxels (the
seeing). The DAOStarFinder threshold parameter ($T$) is first set to
$T=300L_{\odot}$. This configuration produces a list of 27 candidate UFLOs.
Naturally, lower threshold levels lead to more detections: 53 sources for
$T=200L_{\odot}$ and 147 for $T=100L_{\odot}$.
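The detection step can be sketched as follows; a simple local-maximum filter stands in for DAOStarFinder, and the planted Gaussian sources, noise level, and image size are illustrative assumptions rather than real detections:

```python
import numpy as np
from scipy import ndimage

# Sketch: threshold + local-maximum detection on a synthetic PC2-like image.
rng = np.random.default_rng(3)
pc2 = rng.normal(0.0, 20.0, size=(120, 120))          # diffuse "background"
yy, xx = np.mgrid[0:120, 0:120]
true_positions = [(30, 40), (80, 90), (60, 15)]       # planted sources (assumed)
for (y0, x0) in true_positions:
    # unresolved source: Gaussian of FWHM ~ 3 px, peak 500 (assumed units)
    pc2 += 500.0 * np.exp(-((yy - y0)**2 + (xx - x0)**2) / (2 * 1.3**2))

pc2 = np.where(pc2 < 0, 0.0, pc2)                     # mask PC2 < 0 spaxels
threshold = 300.0                                     # T = 300 L_sun in the text
local_max = ndimage.maximum_filter(pc2, size=5) == pc2
detections = np.argwhere(local_max & (pc2 > threshold))
```

DAOStarFinder additionally fits sharpness/roundness statistics to reject cosmic rays and extended blobs, which a bare local-maximum filter does not.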
Fig. 4 marks the detected sources on the RGB composites of Figs. 1 (top) and 3
(bottom), both of which (especially the bottom one) were modified to darken
the diffuse emission in order to facilitate the visualization of the sources.
Candidates detected with ${\rm PC2}>T=100$, 200, and $300L_{\odot}$ are marked
with circles of radius 3, 5 and 7 pixels, respectively, but only sources that
satisfy the criteria outlined in the next section are actually marked. In
order to focus first on the clearest detections, we shall treat the sources
obtained with $T=300L_{\odot}$ as our primary UFLO sample.
Figure 5: $18\times 18$ spaxels cut-outs centred on a few example candidate
UFLOs. Columns show the images in (from left to right) H$\beta$, [O iii],
H$\alpha$, [N ii], [S ii] 6716, [S ii] 6731, and tomogram 2. The intensity
scale in each map goes from 0 to its 95th percentile level. The rightmost
panel shows the RGB composite of tomograms 1, 2, and 3. Circles with radii of
$r_{\rm src}=2.55$, $r_{\rm in}=3$, and $r_{\rm out}=6$ spaxels are drawn.
Crosses mark spaxels between $r=3$ and 6 that are brighter in H$\alpha$ than
the mean H$\alpha$ surface brightness within $r<r_{\rm src}$ source aperture.
With the detection method established, we now need to have a closer look at
each source and verify whether its properties can be reliably extracted. This
brings us to the background problem.
It is critical to realize that Fig. 4 gives the false impression that UFLOs
are easily separable from the surrounding emission. This impression comes
about because of the image processing done to suppress diffuse emission and
especially H ii regions. Many of our candidate sources are much harder to spot
in the original images, particularly those in H$\alpha$ and H$\beta$. To
illustrate this, Fig. 5 shows cut-outs around some of the detected sources for
our six main set emission lines, as well as for tomogram 2 and the PC RGB
composite. The examples shown span cases where the source is clearly detected
in all lines to others where it is only visible in forbidden lines (sometimes
barely so in [O iii]). The environment of the sources also varies, with some
sitting in relative isolation but most having intense line emission within a
few spaxels of the source. The structure in this “background” is more clearly
seen in the H$\alpha$ and H$\beta$ panels, where, unlike for the forbidden
lines, the brightest spaxels within the cut-out area are often not on the
source itself. These issues become more severe for samples defined with lower
$T$ values, as exemplified by cases G and H in Fig. 5, which (unlike the
others in this plot) are not part of our primary sample.
None of this invalidates our detection method. In fact the tomogram 2 cut-outs
in Fig. 5 always show a cleanly isolated source, which is what makes it an
excellent detection image. Yet, these examples show that extracting the source
properties from such an inhomogeneous and bright background is a complex task.
To produce a workable sample of UFLOs we must therefore first implement a
method to extract their properties.
### 4.2 Source extraction
Our first attempt to extract the line luminosities of the sources was to
perform a simple aperture photometry, with the source contained within a
circle of radius $r_{\rm src}$, and the background level defined over an
external annulus between $r_{\rm in}$ and $r_{\rm out}$. For the source
aperture we use $r_{\rm src}=2.55$ spaxels (corresponding to $2\sigma$ of a
gaussian of FWHM $=3$), while the background region is defined by $r_{\rm
in}=3$ and $r_{\rm out}=6$. The astropy-affiliated photutils package was used for this
task.
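The aperture scheme above can be sketched in pure numpy (the paper itself uses the photutils package; the function name and the toy image below are ours):

```python
import numpy as np

def aperture_photometry_simple(img, x0, y0, r_src=2.55, r_in=3.0, r_out=6.0):
    """Background-subtracted flux in a circular aperture.

    A minimal sketch of the basic scheme described in the text (the
    actual measurements use photutils); coordinates are in spaxels.
    """
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    src = r < r_src                     # source aperture
    ann = (r >= r_in) & (r < r_out)     # background annulus
    bg_level = np.mean(img[ann])        # mean background per spaxel
    # total flux minus the background expected over the source area
    return img[src].sum() - bg_level * src.sum()

# toy example: flat background of 1 plus a bright central spaxel
img = np.ones((19, 19))
img[9, 9] += 100.0
flux = aperture_photometry_simple(img, 9, 9)
```

With a perfectly flat background the recovered flux equals the injected source flux; real cut-outs, of course, are far from flat, which is precisely the problem discussed next.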
As one could anticipate from Fig. 5, this simple scheme often fails for the
Balmer lines because the background level outshines the source itself. In some
cases this also happens for forbidden lines (particularly [O iii], which is
generally weak in NGC 4030). Alternative methods involving the fitting of
complex 2D surfaces were tried (e.g., a gaussian point source plus a 2D
Legendre polynomial to represent the background), with limited success.
After some experimentation, we settled on a recipe that removes from the background
all spaxels that are brighter in H$\alpha$ than the mean H$\alpha$ brightness
within the source aperture. Examples of such masked spaxels are marked with a
cross in the cut-outs in Fig. 5. Cases E, F and H, for instance, have a
substantial fraction of the spaxels between $r_{\rm in}$ and $r_{\rm out}$
masked, while the others have none or just a few. A second (and less
important) adjustment in the method is that in the computation of the
background level we now give less weight to spaxels that are farther away from
the source. We stress that this ad hoc extraction method is not meant to be
optimal in any sense, but just a simple workaround to deal with the complex
background landscape in NGC 4030. While it is likely that more sophisticated
tools developed to deal with crowded field spectroscopy (e.g., Fabrika et al.,
2005; Lehmann et al., 2005; Kamann et al., 2016; Roth et al., 2018) can be
used or adapted to this case, the method outlined above suits our needs for
the current paper.
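A minimal sketch of this masked, weighted background estimate follows. The exact distance weighting is not specified in the text, so the $1/r$ weights below are an assumed example, and all names are ours:

```python
import numpy as np

def background_level(ha, line, x0, y0, r_src=2.55, r_in=3.0, r_out=6.0):
    """Masked, distance-weighted background for one emission-line image.

    Sketch of the recipe described above: annulus spaxels brighter in
    H-alpha than the mean H-alpha brightness inside the source aperture
    are masked, and the survivors are averaged with weights that
    decrease with distance (1/r here, an assumed choice).
    Returns None when more than half the annulus is masked, mimicking
    the rejection cut applied to the candidate list.
    """
    yy, xx = np.indices(ha.shape)
    r = np.hypot(xx - x0, yy - y0)
    src = r < r_src
    ann = (r >= r_in) & (r < r_out)
    ha_src_mean = ha[src].mean()
    keep = ann & (ha <= ha_src_mean)      # mask bright H II-like spaxels
    if keep.sum() < 0.5 * ann.sum():      # too much masking: reject source
        return None
    w = 1.0 / r[keep]                     # down-weight distant spaxels
    return np.average(line[keep], weights=w)

# toy check: uniform H-alpha, constant line background of 2
bg = background_level(np.ones((19, 19)), np.full((19, 19), 2.0), 9, 9)
```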
After applying this revised extraction method to our candidate sources, we
remove from the list those where more than half of the background spaxels had
to be masked for being too bright in H$\alpha$. We further reject sources
where any of the six emission lines in our main set comes out with negative
flux after background subtraction. Of the 27 sources in our primary sample,
only one is rejected by these cuts. The survival fractions are smaller for
samples culled with lower $T$ values. For $T=200$ (100) $L_{\odot}$ the number
of sources goes from 53 (147) to 45 (106).
Figure 6: Diagnostic diagrams before (red stars) and after (black circles)
background subtraction for the 26 sources in our primary UFLO sample (see
text). The point marked with a blue circle is the peculiar source discussed in
Section 5.5. Dots correspond to individual spaxels in the whole FoV, colour-
coded according to the H$\alpha$, [N ii], [S ii] $+$ [O iii]-based RGB
composite in Fig. 1. The BPT diagram (top panel) includes the Stasińska et al.
(2006) demarcation line (in blue), while in the bottom panel the
$[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha=0.4$ line (in gray) marks the
conventional borderline to separate SNRs from H ii regions. Black dashed lines
in both diagrams show the SNR/H ii region separation lines proposed by
Kopsacheili et al. (2020).
#### 4.2.1 Diagnostic diagrams
Fig. 6 shows the [O iii]/H$\beta$ and [S ii]/H$\alpha$ vs. [N ii]/H$\alpha$
diagrams for the 26 sources in the primary sample. Red stars show the results
obtained integrating all the line emission within the source aperture (i.e.,
without background correction). Each star is connected to a black circle
located at the ratios obtained after background subtraction. Ignoring sources
that do not move much and the obvious outlier (marked with a blue open circle;
see Section 5.5), the tendency is for all line ratios to increase. This occurs
because in most cases the “background” line emission is associated with H ii
regions. Subtracting H ii region-like lines from sources that are defined
(through PC2) in terms of excess forbidden line emission naturally leads to
even larger collisional to recombination line ratios. Some of our UFLOs are
thus even more different from H ii regions than initially thought.
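A toy calculation (with made-up numbers) illustrates why subtracting an H ii-region-like background raises the forbidden-to-recombination ratios:

```python
# Flux within the source aperture is source + background; the background
# is H II-region-like, i.e. it has a low [S ii]/H-alpha ratio.
ha_tot, sii_tot = 10.0, 4.0    # totals within the source aperture
ha_bg, sii_bg = 6.0, 1.2       # background contribution ([S ii]/Ha = 0.2)

ratio_raw = sii_tot / ha_tot                       # 0.40
ratio_sub = (sii_tot - sii_bg) / (ha_tot - ha_bg)  # 0.70
```

Because the background removes proportionally more H$\alpha$ than [S ii], the corrected ratio is pushed upward, exactly the systematic drift seen in Fig. 6.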
Dashed lines in Fig. 6 mark borderlines for different kinds of sources in
these diagrams. In the BPT diagram (top panel) the blue line outlines the
Stasińska et al. (2006) limit of pure star-forming galaxies, while the black
line comes from the recent work by Kopsacheili et al. (2020), who trace
optimal demarcation lines on the basis of models for H ii regions and SNRs.
The bottom diagram includes the
$[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha=0.4$ boundary commonly used to
identify SNRs candidates (e.g. Blair & Long, 1997, 2004; Long et al., 2019).
Some variation on this value exists in the literature; for instance, Dopita et
al. (2010) and Leonidaki et al. (2013) use a less stringent
$[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha=0.3$ criterion, while Matonick &
Fesen (1997) use 0.45 instead, and Long et al. (2018) further note that the
separation between H ii and SNRs becomes increasingly blurred at low surface
brightness. Kopsacheili et al. (2020) show that a simple
$[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha\geq 0.4$ demarcation line may
induce biases against slow shocks. Their optimal separator in the [S
ii]/H$\alpha$-[N ii]/H$\alpha$ space is shown as a black dashed line, which
runs well below the $[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha=0.4$ boundary.
Regardless of which demarcation line one chooses to use, Fig. 6 shows that the
background corrections make the line ratios of some UFLOs more SNR-like. Focusing on
the bottom plot, one sees that 9 UFLOs cross the
$[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha=0.4$ boundary, while 16 fall into
the SNR region as defined by Kopsacheili et al. (2020).
#### 4.2.2 [O i]$\lambda 6300$
Figure 7: Top: Map of the [OI]6300 line luminosity (in $L_{\odot}$). Circles
mark UFLO detections as in Fig. 4. Bottom: As Fig. 6, but for the [S
ii]/H$\alpha$ vs. [O i]/H$\alpha$ diagram. The conventional
$[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha=0.4$ and the Kopsacheili et al.
(2020) SNR/H ii region separation lines are shown in dashed gray and black,
respectively.
The analysis presented so far has overlooked the [O i] line at 6300 Å, a well-
known tracer of partially ionized zones, and thus a good indicator of the
presence of shocks. This transition is usually very weak and hence prone to
large measurement uncertainties, particularly in H ii regions, the dominant
source of emission lines in NGC 4030. This is reflected in its median signal-
to-noise ratio of just 3 over the FoV, compared to 8 for [O iii], the weakest
of the six lines in our main set. Now that we have identified a population of
sources likely associated with SNRs, let us examine the [O i] data to verify
whether they are indeed strong [O i] emitters.
Fig. 7 (top) shows the [O i] map for NGC 4030. Primary sample UFLOs are marked
with large circles. Intermediate and small circles are used for UFLOs in the
$T=200$ and $100L_{\odot}$ samples (as in Fig. 4). The source at pixel
$(x,y)=(287,126)$ stands out as the brightest [O i] emitter in the galaxy –
this extreme case is further discussed in Section 4.4 (see also Fig. 9).
Several of the other primary sample UFLOs and many of those in the other
samples are also coincident with peaks in the [O i] image.
The bottom panel in Fig. 7 shows the [O i]/H$\alpha$ vs. [S ii]/H$\alpha$
diagram. As in Fig. 6, red stars and black circles mark the line ratios
obtained prior to and after background subtraction, respectively. Most UFLOs
line up along the highly correlated cloud of spaxel values. As for other
forbidden lines, [O i]/H$\alpha$ tends to increase with the background
subtraction, populating the upper right part of the diagram. The plot also
shows the conventional SNR/H ii division at
$[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha=0.4$ as well as the optimal model
separator line according to Kopsacheili et al. (2020). Sources that lie above
the Kopsacheili et al. line in the [S ii]/H$\alpha$ vs. [N ii]/H$\alpha$
diagram also lie above the corresponding line in this plot.
#### 4.2.3 Many (but not all) UFLOs have SNR-like line ratios
The title of this subsection summarizes the result of the diagnostic diagram
analysis in Figs. 6 and 7. On the one hand, the diagnostic diagrams presented
above support the interpretation that many of our UFLOs are associated with
SNRs. On the other, however, these same diagrams show that several of our
sources remain within or close to the H ii region areas even after background
subtraction.
For instance, of the 11 sources in the bottom plot of Fig. 6 initially located
within the star-forming zone as defined by the Stasińska et al. (2006)
criterion (blue dashed line in the BPT diagram), 8 remain there after
background subtraction, and similarly for Fig. 7. These non-SNR sources have
line ratios slightly offset from the main body of H ii region-dominated
spaxels (red dots) in these diagnostic diagrams, which explains why they are
picked by PC2.
### 4.3 Spectral extraction
Figure 8: Examples of the spectral extraction for two sources. Top panels show
the spectra in the source aperture (in blue) and its estimated background
(red), with insets zooming on the H$\alpha$, [N ii], and [S ii] lines. The
source minus background difference spectra are shown in the bottom panels,
with insets zooming in the 6250–7500 Å window. All spectra were slightly
smoothed (with a $\sigma=2$ Å gaussian) for visualization purposes.
Up to this point we have worked exclusively with the integrated emission-line
measurements. Now that we have managed to produce a sensible list of UFLOs,
let us examine their spectra.
Just as with the line images, to extract a source’s spectrum one needs to
properly subtract its background. We do this in exactly the same way we did
for the individual line images, except now operating on a $\lambda$ by
$\lambda$ basis. We perform the extraction on the residual $R_{\lambda}$
obtained after subtracting the starlight model fit from the observed spectrum,
as this represents our best estimate of the pure emission-line spectrum. Prior
to the spectral extraction the $R_{\lambda}$ for every individual spaxel is
shifted to the rest frame using the H$\alpha$-based velocities, thus
eliminating possible effects of the galaxy’s rotation.
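The per-spaxel rest-frame shift can be sketched as follows. This is a minimal illustration; the resampling scheme (linear interpolation back onto the original wavelength grid) is our choice, not necessarily the one used in the paper:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def to_rest_frame(wave, flux, v_kms):
    """Shift one spaxel's residual spectrum R_lambda to the rest frame
    using its H-alpha-based velocity, then resample onto the original
    wavelength grid (linear interpolation; our choice of scheme)."""
    wave_rest = wave / (1.0 + v_kms / C_KMS)  # de-redshift the axis
    return np.interp(wave, wave_rest, flux)

# toy check: a gaussian H-alpha line redshifted by 300 km/s
wave = np.linspace(6400.0, 6700.0, 601)
obs_peak = 6562.8 * (1.0 + 300.0 / C_KMS)
flux = np.exp(-0.5 * ((wave - obs_peak) / 2.0) ** 2)
rest = to_rest_frame(wave, flux, 300.0)
```

After the shift the line peak lands back at its rest wavelength, so spectra from spaxels on opposite sides of the rotating disc can be co-added without velocity smearing.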
Fig. 8 exemplifies the spectral extraction process for two UFLOs. Blue and red
lines in the top panels represent the spectra in the source ($r<r_{\rm
src}=2.55$) and background ($3<r<6$) apertures, respectively. The inset
panels, which zoom into the [N ii]-H$\alpha$-[S ii] window, show that (1) the
background contribution is significant in both cases, and that (2) the sources
have visibly larger [N ii]/H$\alpha$ and [S ii]/H$\alpha$ ratios than their
surroundings.
The intrinsic (i.e., background-subtracted) source spectrum is shown in the
bottom panels. Weak lines such as [N i] 5199, [O i] 6300, 6364, and [S iii]
9069 show up clearly even in these autoscale plots. The insets reveal several
other features, the most striking of which are the lines around 7300 Å. These
are telltale signs of SNRs.
### 4.4 The smoking gun
Figure 9: Left: Spectrum of the most extreme SNR identified in NGC 4030. The
bottom spectrum zooms in on the $y$-axis to facilitate the visualization of
weaker features. All lines marked in the bottom panel are from [Fe ii]. Right:
Cut-outs of Fig. 4 around the source.
Several other UFLOs in our list exhibit the features at $\sim 7300$ Å seen in
the examples shown in Fig. 8, some more clearly, others weaker. The most
spectacular example is shown in Fig. 9. The top panel shows its spectrum in an
autoscale, while the bottom one expands the $y$-axis to allow a clearer view
of the weaker features.
Its long collection of emission lines includes numerous [Fe ii] transitions
(from 5158 to 9033 Å), signaling the existence of an extended partially
ionized zone typical of shocks, but not of H ii regions, where the transition
from ionized to neutral phases is sharp (Osterbrock & Ferland, 2006). The
anomalous strength of the [O i] 6300, 6364 doublet conveys the same message.
Transitions from [Ni ii], [Cr ii], and [Cl ii] are also detected. These lines
are often detected in Galactic and extra-galactic SNR studies (e.g. Russell &
Dopita, 1990; Levenson et al., 1995; Fesen & Hurford, 1996; Dopita et al.,
2019).
The region around 7300 Å is of particular interest in this paper, as it
contains strong features that we have seen before in the examples in Fig. 8.
The three peaks seen in Fig. 9 in this range contain more than three lines.
The blue peak at 7291 Å is due to [Ca ii], though its blue wing seems
distorted by a bit of He i 7281. The red one at $\sim 7381\,$Å is likely due
to a blend of [Ni ii] 7378 and [Fe ii] 7388. The central and strongest peak at
$\sim 7323$ Å blends a couple of [O ii] doublets (7319, 7320, and 7330, 7331)
and possibly [Ca ii] 7324. Disentangling these blended lines would require a
detailed modelling (say, as in Dopita et al., 2019). Instead, we shall use the
mere visual detection of these features as a signpost for SNRs and check how
many more of our sources exhibit them.
The RGB composites in the right-hand panels of Fig. 9 show that this SNR lives
in a region of ongoing star formation (at the tip of a spiral arm), suggesting
that it had a massive progenitor star. These same images show another UFLO
candidate at $(x,y)=(289,112)$, just $\sim 13$ spaxels (2.6 arcsec) down from
the main source. This weaker source is not among the 26 in our primary sample,
but it is picked up when the detection threshold is lowered to
$T=100L_{\odot}$. Though its spectrum at $\sim 7300$ Å is too noisy to make
any conclusive statement about the presence or otherwise of SNR features, this
source does have line ratios typical of SNRs. Indeed, it is included in the
SNR sample introduced in the next section.
## 5 Supernova Remnants in NGC 4030
The analysis in the previous section provided strong evidence that many of our
UFLOs are associated with SNRs. Some, however, are not, and hence our purely
PCA-based detection criterion does not produce a clean list of SNRs. This is
hardly surprising, as the method was not designed to achieve this specific
goal. It is, nevertheless, easy to adapt our analysis to focus on SNRs.
This section starts by examining whether SNR candidates defined in terms of a
canonical $[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha$-based criterion are
picked by our PCA-based method (Section 5.1). After this basic test, we
produce a list of SNR candidates which are detected through PC2 and satisfy a
combined [S ii]/H$\alpha$ and [N ii]/H$\alpha$ criterion (Section 5.2). The
mean spectrum of the sources is presented in Section 5.3, where we also
examine more closely the 7300 Å features seen in the SNRs in Figs. 8 and 9.
The density-sensitive
$[\mathrm{S}\,\textsc{ii}]6716/[\mathrm{S}\,\textsc{ii}]6731$ ratio and line
widths of our SNRs are briefly discussed in Section 5.4. Finally, Section 5.5
is dedicated to a peculiar source that is definitely not an SNR, but a very
interesting source nonetheless.
### 5.1 [SII]-based search for SNRs
SNRs are traditionally identified by their enhanced [S ii] lines, with [S
ii]/H$\alpha$ ratios in excess of 0.4 (e.g., Long et al., 2019). To search for
SNRs using this criterion we have first searched for unresolved sources in a
[S ii] map. In order to facilitate the identification of peaks we first
subtract from the [S ii] image a smoothed version of itself and set negative
entries to zero. As in Section 4.1, DAOStarFinder was used to identify
potential sources. Sources were then extracted and only those with
$[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha\geq 0.4$ were retained. Candidates
with very noisy spectra were manually removed from the list.
This process yielded 20 SNR-like sources. Only 8 of these are amongst the 26
UFLOs in our primary sample, but others are picked for lower detection
thresholds. All but one are included in the $T=100L_{\odot}$ sample and the
remaining source is picked when lowering $T$ to $50L_{\odot}$. To conclude, this
exercise shows that our PCA-based definition of UFLOs can detect SNRs defined
in a traditional way as long as the PC2 detection threshold is adjusted to
$T=50$–$100L_{\odot}$.
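The unsharp-masking and peak-finding steps can be sketched as follows. This is a simplified stand-in: the paper uses photutils' DAOStarFinder on the high-pass-filtered [S ii] image, whereas this sketch uses a plain local-maximum filter; the smoothing scale and threshold are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_peaks(img, smooth_sigma=5.0, threshold=0.0):
    """Unsharp-mask an image and locate local maxima above a threshold.

    Simplified stand-in for the detection step in the text: subtract a
    smoothed version of the image, clip negative residuals to zero, and
    keep spaxels that are the maximum of their 5x5 neighbourhood.
    """
    hp = img - gaussian_filter(img, smooth_sigma)  # suppress diffuse emission
    hp[hp < 0] = 0.0                               # set negative entries to zero
    peaks = (hp == maximum_filter(hp, size=5)) & (hp > threshold)
    return np.argwhere(peaks)

# toy check: a single point source on an empty field
img = np.zeros((64, 64))
img[30, 40] = 10.0
peaks = find_peaks(img)
```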
### 5.2 A refined list of SNR candidates
The flip side of PCA as a technique to find SNRs is that, as already pointed
out, it picks many non-SNR sources as well. In this section we combine our PCA
methodology with conventional emission-line ratio criteria to produce a list
of SNR candidates.
We keep tomogram 2 as a source detection image because of its much higher
contrast in comparison to the [S ii] images, as illustrated in Fig. 5.
Moreover, tomogram 2 is not based on any single emission-line, but on all of
them simultaneously. Correlations such as those between [N ii] and [S ii]
emission (Fig. 6) are thus automatically built in PC2, making it a robust
image for source detection purposes.
Guided by the results of the previous section, let us now search for sources
with SNR-like line ratios in the $T=100L_{\odot}$ UFLO sample. Only $\sim 1/5$
of these UFLOs satisfy $[\mathrm{S}\,\textsc{ii}]/\mathrm{H}\alpha>0.4$, but
this threshold is likely too stringent. Many other sources have clearly
enhanced [S ii]/H$\alpha$ and yet do not make this cut. In order to select SNR
candidates we instead use the new Kopsacheili et al. (2020) classification
scheme based on the combination of the [S ii]/H$\alpha$ and [N ii]/H$\alpha$
ratios (the dashed black curve in the bottom panel of Fig. 6). This more
inclusive criterion produces a list of 59 SNR candidates. Lowering the PC2
detection threshold to $T=50L_{\odot}$ adds 40 other sources to this list.
Figure 10: SNR sample sources plotted on the emission lines RGB composite
(top, same as in Fig. 4) and tomogram 2 (bottom, as in Fig. 3). Large circles
mark the 59 SNRs detected with a $T=100L_{\odot}$ threshold on tomogram 2. The
40 small circles are the ones further detected lowering $T$ to $50L_{\odot}$.
Figure 11: Diagnostic diagrams for the SNR sample. Large circles mark the 59
SNRs detected with a $T=100L_{\odot}$ threshold. Small circles mark the 40
extra sources detected with $T=50L_{\odot}$. Black dashed lines in all
diagrams show the Kopsacheili et al. (2020) SNR/H ii region demarcation lines.
Other lines as in Figs. 6 and 7.
Fig. 10 marks these sources in our emission-line RGB composite (top) as well
as on tomogram 2 (bottom). Large circles correspond to the 59 sources found
with $T=100L_{\odot}$, while small circles mark those found only in the
$T=50L_{\odot}$ sample. Note that a few sources are hardly visible in the RGB
frame, though all are clearly seen in tomogram 2.
Fig. 11 shows the loci of our sources in three diagnostic diagrams. By
construction, all sources are above the Kopsacheili et al. (2020) SNR/H ii
region division line in the [S ii]/H$\alpha$ vs. [N ii]/H$\alpha$ plane (left-
hand panel). Most are also located above their division line in the [O
i]/H$\alpha$ vs. [S ii]/H$\alpha$ diagram (middle). Unsurprisingly, the top-
right regions of these diagrams are populated exclusively by the subset of
SNRs found with $T=100L_{\odot}$. Sources identified with a less stringent
threshold in PC2 tend to have smaller forbidden-to-recombination line
intensity ratios. For instance, the mean value of $\log\
[\mathrm{N}\,\textsc{ii}]/\mathrm{H}\alpha$ for the $T=100L_{\odot}$ subsample
is $-0.16$, while for sources present only in the $T=50L_{\odot}$ list this average
drops to $-0.29$. As expected, these subsets also differ in absolute terms.
Line luminosities are on average about twice as large for the
$T=100L_{\odot}$ sources as for those only present in the $T=50\,L_{\odot}$
list.
The right-hand panel in Fig. 11 shows our sources in the BPT diagram. Unlike
in the [O i]/H$\alpha$ vs. [S ii]/H$\alpha$ plane, the Kopsacheili et al.
(2020) line in the [O iii]/H$\beta$ vs. [N ii]/H$\alpha$ space does not
separate our objects well, with only 27 out of the 59 sources in the $T=100L_{\odot}$
sample lying in the SNR domain. As explained by Kopsacheili et al. (2020)
themselves, this apparent inconsistency is to some extent expected, given the
partial overlap in emission-line properties of H ii regions and SNRs. Still,
this result raises an important question: which diagnostic diagram should be
used to pick SNR candidates?
Observationally, the [S ii]/H$\alpha$ vs. [N ii]/H$\alpha$ diagram is
undoubtedly the best choice for this particular study, as it comprises the
strongest emission lines in NGC 4030. Theoretically, however, this is not an
optimal choice. Of the diagrams in Fig. 11 the [O i]/H$\alpha$ vs. [S
ii]/H$\alpha$ one offers the best combination of completeness and
contamination in separating SNRs from H ii regions according to Kopsacheili et
al. (2020), but the weakness of [O i] is an obvious disadvantage in practical
work. Similarly, the BPT diagram has a better performance in terms of
contamination, but the [O iii] flux is generally low in NGC 4030 ($\sim 3$
times weaker than [S ii] in the median), and hence prone to uncertainties.
These considerations (1) justify our choice of [S ii]/H$\alpha$ vs. [N
ii]/H$\alpha$ diagram as a means to identify SNRs in NGC 4030, and (2) explain
why we have not applied the theoretically more powerful 3D diagnostic criteria
proposed by Kopsacheili et al. (2020). They also serve as a reminder that our
list of SNRs may well be contaminated at some level due both to inherent
diagnostic ambiguities and the aforementioned source extraction issues
(Section 4.2). Future work will be required to confirm the less obvious
candidates.
With these caveats in mind, let us henceforth focus on the 59 sources detected
with $T=100L_{\odot}$, hereafter the “SNR sample”. Table 1 lists some
information on these sources.
ID | RA | Dec. | $\frac{[\mathrm{N}\,\textsc{ii}]}{\mathrm{H}\alpha}$ | $\frac{[\mathrm{S}\,\textsc{ii}]}{\mathrm{H}\alpha}$ | $\frac{[\rm{O}\,\textsc{i}]}{\mathrm{H}\alpha}$ | $\frac{[\rm{O}\,\textsc{iii}]}{\mathrm{H}\beta}$ | $\log\frac{L_{\mathrm{H}\alpha}}{L_{\odot}}$
---|---|---|---|---|---|---|---
1 | 22.90s | $6^{\prime}23.8^{\prime\prime}$ | 0.93 | 0.53 | 0.11 | 0.94 | 3.23
2 | 24.16s | $6^{\prime}23.2^{\prime\prime}$ | 0.60 | 0.41 | 0.06 | 0.17 | 3.61
3 | 21.77s | $6^{\prime}23.2^{\prime\prime}$ | 0.46 | 0.17 | 0.00 | 0.15 | 3.93
4 | 22.28s | $6^{\prime}22.2^{\prime\prime}$ | 0.44 | 0.19 | 0.02 | 0.18 | 4.63
5 | 23.69s | $6^{\prime}21.7^{\prime\prime}$ | 1.59 | 0.64 | 0.16 | 3.10 | 3.09
6 | 22.81s | $6^{\prime}21.7^{\prime\prime}$ | 0.90 | 0.58 | 0.13 | 0.80 | 4.13
7 | 24.64s | $6^{\prime}20.9^{\prime\prime}$ | 0.46 | 0.23 | 0.04 | 0.15 | 3.72
8 | 23.08s | $6^{\prime}20.5^{\prime\prime}$ | 1.86 | 1.06 | 0.11 | 4.46 | 3.14
9 | 22.63s | $6^{\prime}20.5^{\prime\prime}$ | 0.65 | 0.29 | 0.04 | 0.54 | 4.54
10 | 24.67s | $6^{\prime}19.4^{\prime\prime}$ | 0.67 | 0.26 | 0.04 | 1.08 | 3.71
11 | 22.22s | $6^{\prime}18.9^{\prime\prime}$ | 0.38 | 0.22 | 0.02 | 0.23 | 4.14
12 | 23.81s | $6^{\prime}16.9^{\prime\prime}$ | 0.95 | 0.42 | 0.06 | 1.27 | 3.66
13 | 24.46s | $6^{\prime}16.8^{\prime\prime}$ | 0.67 | 0.34 | 0.07 | 0.85 | 3.82
14 | 22.92s | $6^{\prime}16.2^{\prime\prime}$ | 0.42 | 0.27 | 0.03 | 0.25 | 3.91
15 | 22.15s | $6^{\prime}14.0^{\prime\prime}$ | 0.60 | 0.24 | 0.05 | 0.53 | 3.66
16 | 24.50s | $6^{\prime}13.5^{\prime\prime}$ | 1.86 | 0.77 | 0.22 | 3.55 | 3.81
17 | 22.42s | $6^{\prime}12.8^{\prime\prime}$ | 1.07 | 0.51 | 0.09 | 2.10 | 3.77
18 | 22.86s | $6^{\prime}11.6^{\prime\prime}$ | 0.50 | 0.21 | 0.01 | 0.53 | 3.75
19 | 24.09s | $6^{\prime}11.1^{\prime\prime}$ | 0.78 | 0.48 | 0.07 | 0.05 | 3.54
20 | 25.38s | $6^{\prime}10.5^{\prime\prime}$ | 1.73 | 0.85 | 0.22 | 3.07 | 3.12
21 | 22.09s | $6^{\prime}10.2^{\prime\prime}$ | 0.61 | 0.37 | 0.04 | 0.50 | 3.87
22 | 24.49s | $6^{\prime}9.6^{\prime\prime}$ | 1.15 | 0.55 | 0.10 | 0.91 | 3.40
23 | 22.54s | $6^{\prime}9.4^{\prime\prime}$ | 0.46 | 0.27 | 0.03 | 0.33 | 3.84
24 | 23.96s | $6^{\prime}8.5^{\prime\prime}$ | 0.41 | 0.27 | 0.05 | 0.30 | 3.60
25 | 22.96s | $6^{\prime}8.3^{\prime\prime}$ | 0.75 | 0.39 | 0.02 | 1.18 | 3.40
26 | 23.78s | $6^{\prime}7.9^{\prime\prime}$ | 0.77 | 0.35 | 0.05 | 0.49 | 3.96
27 | 22.11s | $6^{\prime}7.5^{\prime\prime}$ | 1.85 | 0.55 | 0.92 | 3.64 | 3.95
28 | 24.30s | $6^{\prime}5.8^{\prime\prime}$ | 0.54 | 0.19 | 0.03 | 0.61 | 3.85
29 | 24.15s | $6^{\prime}5.9^{\prime\prime}$ | 1.32 | 0.53 | 0.03 | 1.63 | 3.44
30 | 24.72s | $6^{\prime}3.2^{\prime\prime}$ | 0.42 | 0.22 | 0.03 | 0.45 | 4.15
31 | 23.17s | $6^{\prime}1.9^{\prime\prime}$ | 0.94 | 0.35 | 0.07 | 1.08 | 3.48
32 | 24.37s | $6^{\prime}1.9^{\prime\prime}$ | 0.61 | 0.30 | 0.04 | 0.75 | 3.96
33 | 21.80s | $6^{\prime}1.7^{\prime\prime}$ | 1.63 | 0.67 | 0.20 | 1.49 | 3.22
34 | 25.09s | $6^{\prime}1.4^{\prime\prime}$ | 0.39 | 0.25 | 0.02 | 0.42 | 3.88
35 | 24.23s | $6^{\prime}0.7^{\prime\prime}$ | 0.63 | 0.24 | 0.03 | 0.27 | 4.02
36 | 22.66s | $5^{\prime}59.1^{\prime\prime}$ | 0.38 | 0.21 | 0.03 | 0.19 | 3.94
37 | 24.92s | $5^{\prime}58.9^{\prime\prime}$ | 0.88 | 0.48 | 0.05 | 1.09 | 3.35
38 | 23.99s | $5^{\prime}57.2^{\prime\prime}$ | 0.52 | 0.23 | 0.04 | 0.24 | 3.91
39 | 24.68s | $5^{\prime}56.8^{\prime\prime}$ | 0.49 | 0.21 | 0.04 | 0.45 | 3.83
40 | 24.64s | $5^{\prime}56.7^{\prime\prime}$ | 0.53 | 0.29 | 0.02 | 0.39 | 3.79
41 | 23.29s | $5^{\prime}55.7^{\prime\prime}$ | 0.56 | 0.23 | 0.03 | 0.27 | 3.90
42 | 21.74s | $5^{\prime}55.0^{\prime\prime}$ | 0.95 | 0.76 | 0.16 | 0.71 | 3.78
43 | 24.78s | $5^{\prime}53.9^{\prime\prime}$ | 0.47 | 0.26 | 0.03 | 0.31 | 3.95
44 | 22.07s | $5^{\prime}53.8^{\prime\prime}$ | 0.72 | 0.45 | 0.08 | 0.82 | 3.57
45 | 21.98s | $5^{\prime}53.7^{\prime\prime}$ | 1.09 | 0.70 | 0.14 | 1.06 | 3.64
46 | 23.15s | $5^{\prime}53.6^{\prime\prime}$ | 0.57 | 0.31 | 0.04 | 0.21 | 3.61
47 | 23.57s | $5^{\prime}51.6^{\prime\prime}$ | 0.58 | 0.20 | 0.06 | 0.18 | 3.79
48 | 23.03s | $5^{\prime}49.2^{\prime\prime}$ | 0.61 | 0.29 | 0.04 | 0.51 | 3.87
49 | 22.99s | $5^{\prime}49.2^{\prime\prime}$ | 1.22 | 0.48 | 0.10 | 1.99 | 3.77
50 | 22.34s | $5^{\prime}47.7^{\prime\prime}$ | 0.37 | 0.21 | 0.02 | 0.63 | 4.18
51 | 23.77s | $5^{\prime}47.2^{\prime\prime}$ | 0.48 | 0.22 | 0.03 | 0.20 | 3.49
52 | 24.56s | $5^{\prime}46.6^{\prime\prime}$ | 0.32 | 0.20 | 0.01 | 0.20 | 3.86
53 | 23.34s | $5^{\prime}43.8^{\prime\prime}$ | 1.34 | 0.56 | 0.11 | 2.37 | 3.31
54 | 24.17s | $5^{\prime}41.3^{\prime\prime}$ | 0.75 | 0.31 | 0.06 | 1.00 | 3.43
55 | 21.71s | $5^{\prime}41.2^{\prime\prime}$ | 0.48 | 0.20 | 0.01 | 0.14 | 3.96
56 | 25.33s | $5^{\prime}39.6^{\prime\prime}$ | 0.94 | 0.62 | 0.13 | 0.69 | 3.91
57 | 24.94s | $5^{\prime}39.2^{\prime\prime}$ | 0.46 | 0.33 | 0.05 | 0.23 | 3.83
58 | 23.43s | $5^{\prime}34.9^{\prime\prime}$ | 0.47 | 0.24 | 0.04 | 0.26 | 4.25
59 | 23.16s | $5^{\prime}33.9^{\prime\prime}$ | 0.34 | 0.20 | 0.01 | 0.08 | 4.05
Table 1: Coordinates and emission-line properties for sources in the SNR
sample. Coordinates are given as offsets from RA $=$ 12h00m00s and Dec.
$=-01^{\circ}00^{\prime}00^{\prime\prime}$.
### 5.3 Spectra and the 7300 Å features
Figure 12: Mean spectrum of the 59 sources in the SNR sample.
Fig. 12 shows the mean spectrum of the 59 sources in the SNR sample. The
meaning of such an average is of course not clear, since it mixes not only
SNRs with different excitation conditions but also differently reddened by
dust. Nonetheless, the increased signal-to-noise is useful for picking out weak
emission lines that are not obvious in the individual spectra.
Zooming in on the $y$-scale, one can identify most of the lines discussed in
connection with Fig. 9, where our most extreme SNR was first presented, albeit
with smaller intensity. We note in passing that that source, together with the
two other examples shown in Fig. 8, satisfies the emission-line based
criteria applied to clean our PC2-based sample of non-SNR sources.
Figure 13: Average spectra obtained by dividing our sorted list of 59 sources
into six bins. The left-hand panel focuses on the 7300 Å features. Numbers
towards the right indicate which of the sources in the sorted list are
included in each bin. The right-hand panel zooms into the [O i]
$\lambda\lambda 6300,6363$ lines for comparison. In both cases the bottom
spectrum (in red) represents an average over H ii regions in NGC 4030.
The features at $\sim 7300$ Å (including lines from [Ca ii], [O ii], [Ni ii],
and [Fe ii]; see Section 4.4) are clearly present in the mean spectrum. The
fact that these features have been previously observed in studies of Galactic
and extragalactic SNRs further strengthens the identification of our sources
as SNRs. Yet, these lines are only clearly visible in about 10 of our sources,
the most spectacular example being the case shown in Fig. 9. For the remaining
ones they are either not present or too weak and immersed in the noise.
To investigate this further we sorted the 59 sources by an index that
quantifies the excess emission in the 7270–7400 Å range and stacked the
spectra into six groups, each containing 9 or 10 sources. Fig. 13 (left-hand
panel) shows the result of this experiment. The top spectrum is the average of
the nine sources with the strongest 7300 Å features, the second one is the
average of numbers 10 to 19 in the sorted list, and so on. By construction,
the lines become weaker as one moves towards the bottom of the plot. The
increased signal-to-noise of these stacks allows us to detect the lines down
to the fourth bin, after which there are only inconclusive hints of their
presence. For comparison, the right-hand panel in Fig. 13 shows the same
sequence of stacked spectra but around the [O i] $\lambda\lambda 6300,6363$
doublet, and using the same luminosity density scale as in the left-hand
panel. This comparison shows that the strength of the lines around 7300 Å
decreases in tandem with the decrease of the [O i] lines (and other lines in
general). We speculate that this progression towards weaker SNR features is
due to evolution of the remnants as they gradually exhaust their finite energy
content.
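The sorting-and-stacking experiment can be sketched as follows. The paper does not give the exact definition of the excess index, so the plain sum over the 7270–7400 Å window below is our simplification, and all names are ours:

```python
import numpy as np

def stack_by_index(wave, spectra, n_bins=6, lo=7270.0, hi=7400.0):
    """Sort spectra by their emission in [lo, hi] and average them in
    n_bins groups, strongest first, as done for Fig. 13.

    The index (a plain sum over the window) is our simplification of
    the excess-emission measure described in the text.
    """
    win = (wave >= lo) & (wave <= hi)
    index = spectra[:, win].sum(axis=1)   # 7300 A strength proxy
    order = np.argsort(index)[::-1]       # strongest first
    groups = np.array_split(order, n_bins)  # sizes 9-10 each for N=59
    return [spectra[g].mean(axis=0) for g in groups]

# toy check: 59 flat spectra whose level encodes their "7300 A strength"
wave = np.linspace(7200.0, 7500.0, 10)
spectra = np.arange(59.0)[:, None] * np.ones(10)
stacks = stack_by_index(wave, spectra)
```

The first stack averages the ten strongest sources, and each successive stack moves down the sorted list, reproducing the top-to-bottom weakening seen in Fig. 13.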
What do H ii regions in NGC 4030 look like in this spectral window? In order
to answer this question we have culled a sample of H ii regions from the MUSE
data and evaluated their stacked spectra. The 143 H ii regions used for this
test were identified with a simple peak finding algorithm applied to an
H$\alpha$ image where spaxels with non-H ii region like line ratios were
excluded – a maximum value of 0.4 was allowed for the [N ii]/H$\alpha$, [O
iii]/H$\beta$, and [S ii]/H$\alpha$ ratios. Regions around all $T=100L_{\odot}$
sources were also masked to avoid contamination, but otherwise the spectra
were extracted as for other sources in this paper. The bottom (red) spectrum
in Fig. 13 shows the result of this analysis. The plot shows that, at least in
NGC 4030, H ii regions have negligible emission features around 7300 Å.
Levenson et al. (1995), in their study of the N63A nebula in the Large
Magellanic Cloud, also find that its H ii region component has much weaker
7300 Å features than the SNR within the system.
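The line-ratio screening used to cull this H ii region sample can be sketched as follows (the candidate values are hypothetical measurements, not real data):

```python
# Sketch of the H ii region screening described above: candidate peaks
# are kept only if all three line ratios stay below the 0.4 ceiling.

RATIO_CEILING = 0.4

def is_hii_like(ratios):
    """ratios: dict with 'NII_Ha', 'OIII_Hb', 'SII_Ha' flux ratios."""
    return all(ratios[k] <= RATIO_CEILING
               for k in ('NII_Ha', 'OIII_Hb', 'SII_Ha'))

candidates = [
    {'NII_Ha': 0.30, 'OIII_Hb': 0.20, 'SII_Ha': 0.15},  # H ii-like
    {'NII_Ha': 0.55, 'OIII_Hb': 0.35, 'SII_Ha': 0.45},  # SNR-like: cut
]
hii_regions = [c for c in candidates if is_hii_like(c)]
print(len(hii_regions))  # 1
```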
To summarize, the 7300 Å features are individually detected in $\sim$ 10
sources, and statistically in $\sim 40$. The comparison with the progression
of the [O i] lines in Fig. 13 suggests that these lines may well be present
even for the remaining sources, but at levels that do not allow us to detect
them even statistically.
### 5.4 [S ii] ratio and line width
Figure 14: The density-sensitive [S ii] 6716/6731 ratio plotted against [S
ii]/H$\alpha$ for SNRs and H ii regions in NGC 4030.
Before closing, let us mention two other indications that the sources we have
identified in this study are indeed SNRs. The first one comes from the density
sensitive $[\mathrm{S}\,\textsc{ii}]6716/[\mathrm{S}\,\textsc{ii}]6731$ ratio.
At least for young remnants, one expects to find larger densities (smaller
$[\mathrm{S}\,\textsc{ii}]6716/[\mathrm{S}\,\textsc{ii}]6731$) than in H ii
regions (e.g. Sabbadin et al., 1977; Leonidaki et al., 2013; Moumen et al.,
2019). The H ii regions used to test this are the same ones defined in Section
5.3, except that no background subtraction was applied in this case. Results
of this comparison are shown in Fig. 14. The plot shows that, despite some
overlap, our SNRs clearly tend to have denser nebulae than H ii regions.444We
note in passing that the sixth principal component expresses an
anticorrelation between the [S ii] 6716 and 6731 lines, such that its image
could, in principle, be used to trace dense spots. The peaks seen in tomogram
6 indeed coincide with our most extreme SNRs, but the image as a whole is very
noisy given that it corresponds to the last PC. In particular, the source with
the lowest [S ii] ratio is the SNR in Fig. 9.
The second indication comes from line widths. The most obvious SNRs in our
sample have emission lines that are visibly broader than those around them,
but for many the difference is not so evident. Statistically, however, the
difference is there. In nearly all (55/59) cases we find the mean width of the
[N ii] 6584 (the strongest forbidden line) within the source aperture to be
larger than that on the corresponding background annulus, with an average
difference of $26$ km s-1 in terms of FWHM. One should however note that in
approximately half of the cases the lines are not actually resolved at the
instrumental resolution of 140 km s-1 at the wavelength of [N ii] in MUSE. In
any case, this difference is qualitatively consistent with the expectation
that gas velocities are larger in SNRs than in their immediate surroundings.
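For sources that are resolved, the intrinsic width implied by the observed FWHM follows from the usual quadrature subtraction of the instrumental resolution (a sketch assuming Gaussian profiles; the example widths are illustrative):

```python
import math

# Recover the intrinsic line width from the observed FWHM by
# subtracting the instrumental resolution in quadrature.
INSTRUMENTAL_FWHM = 140.0  # km/s at the wavelength of [N ii] in MUSE

def intrinsic_fwhm(observed_fwhm):
    if observed_fwhm <= INSTRUMENTAL_FWHM:
        return 0.0  # line unresolved at the instrumental resolution
    return math.sqrt(observed_fwhm**2 - INSTRUMENTAL_FWHM**2)

print(round(intrinsic_fwhm(166.0), 1))  # resolved source
print(intrinsic_fwhm(130.0))            # unresolved: 0.0
```

This also illustrates why roughly half of the sample remains unresolved: any intrinsic width below the 140 km s-1 floor produces only a small excess in the observed FWHM.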
### 5.5 Serendipitous discovery of an SN impostor?
Figure 15: Spectrum of “the beast”, a serendipitously discovered
$M_{V}\sim-12.5$ luminous blue variable on top of an H ii region in NGC 4030.
Images on the right-hand panels are cut-outs of the RGB composites of Figs. 1
and 3.
In the process of screening our PCA-based list of UFLOs for sources with SNR-
like line ratios a very peculiar object was dropped from the sample. Signalled
as an outlier in Fig. 6 because of its weak [O iii] and [S ii] despite strong
[N ii] emission, this source does not show [O i] nor the 7300 Å features.
Located at pixel $(x,y)=119,205$ in Fig. 1
($\mathrm{RA}=12^{\mathrm{h}}00^{\mathrm{m}}24.36^{\mathrm{s}}$,
$\mathrm{Dec.}=-1^{\circ}05{}^{\prime}51.4{}^{\prime\prime}$), “the beast”, as
we have nicknamed it, is also peculiar in a fundamental aspect: Unlike other
UFLOs and SNRs, it has a strong continuum.
Fig. 15 shows its spectrum, as extracted directly from the observed cube. As
in Section 4.2, the spectral extraction was performed on a circle $r_{\rm
src}=2.55$ spaxels wide, and the background was that estimated from the
annulus between $r=3$ and 6 spaxels. Note that the beast is located right on
top of an H ii region, as can be seen from the cut-outs of the emission-line
and PC RGB composites shown in Fig. 15.
The beast exhibits a very blue continuum, strong [N ii] and H$\alpha$, weak
H$\beta$ and [S ii] emission. Other noticeable features include (1) a hint of
a faint broad component in H$\alpha$, (2) a strong Na i D doublet absorption
(3.5 Å in equivalent width), (3) HeI 4921 and 6678 in absorption and 5876 in
emission, (4) weak broad emission features at $\sim 5696$ and 7236 Å that
could be associated with C iii and C ii, respectively. All in all it seems
that we are looking not at an SNR, but at a very luminous star.
From the spectrum in Fig. 15 we estimate an astounding V band absolute
magnitude of $-12.5$. Blackbody fits to its spectrum indicate an effective
temperature of $\sim 10000$ K and an inferred radius of the order of
$600R_{\odot}$. We have also verified that the source is variable. The
datacube analysed throughout this paper is a combination of observations taken
on Feb/6 and Mar/10 of 2016. Comparing these cubes we find that the beast
brightened by $\sim 4.4$ percent at 5200 Å in the 34 days between the two
observations. This may not seem much, but (1) it is the largest variation
found in the whole FoV in both absolute and percentual terms, and (2) it is a
$3.7\sigma$ variation, in the sense that the beast brightened by 3.7 times the
rms variation of all 26 UFLOs in our primary sample, none of which varies by
more than $2\sigma$.
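The significance test behind the $3.7\sigma$ figure can be sketched as follows (the per-source flux changes below are hypothetical; only the 4.4 per cent brightening is taken from the text):

```python
import math

# Compare a target source's fractional flux change between two epochs
# against the rms variation of a comparison sample.

def fractional_change(flux_epoch1, flux_epoch2):
    return (flux_epoch2 - flux_epoch1) / flux_epoch1

def significance(target_change, sample_changes):
    rms = math.sqrt(sum(c * c for c in sample_changes)
                    / len(sample_changes))
    return abs(target_change) / rms

# comparison sample: small +/- changes; the beast brightens by 4.4%
sample = [0.01, -0.012, 0.008, -0.009, 0.011, -0.01]
beast = fractional_change(100.0, 104.4)
print(round(significance(beast, sample), 1))  # ~4.4 sigma for this toy sample
```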
The beast is thus luminous blue, and variable. Because of its extreme
luminosity, we speculate that it is not just an LBV star, but an SN impostor
(Van Dyk et al., 2000; Smith et al., 2011). Thöne et al. (2017) recently
presented an extensive study on one such case: SN 2015bh, an extragalactic LBV
that exhibited extreme luminosity and variability for at least 20 yr before
undergoing what seemed to be a terminal explosion as a type IIn SN in 2015.
Clearly, this extraordinary source deserves a dedicated study on its own.
## 6 Discussion
Now that we have identified a few dozen SNRs in NGC 4030, let us open a
parenthesis outside the original scope of this paper to evaluate what impact
these sources may have on studies of star-forming regions in galaxies.
### 6.1 Impact on standard emission-line diagnostics
Much of our knowledge about star-forming galaxies comes from nebular
diagnostics based on the hypothesis that all emission lines are powered by
young stars. Other sources of line emission, like SNRs, act as contaminants
which induce biases in the estimates of properties like the star formation
rate (SFR) and nebular metallicity ($Z_{\rm neb}\equiv 12+\log{\rm O/H}$).
In order to assess to what extent SNRs affect the SFR estimates in NGC 4030
we have computed the total H$\alpha$ luminosity with and without SNRs. We find
that only 0.7% of the H$\alpha$ flux in the FoV comes from our 59 SNRs, which
translates into a negligible bias in $L_{\mathrm{H}\alpha}$-based SFR
estimates. This fraction is in the range of those derived by Vučetić et al.
(2015), who have evaluated the contribution of SNRs to the H$\alpha$ flux in
18 galaxies, obtaining fractions ranging from 0.1 to 12.8%.
We have repeated the with vs. without SNRs calculations for galaxy-wide ${\rm
N2}\equiv\log\\{[\mathrm{N}\,\textsc{ii}]/\mathrm{H}\alpha\\}$ and ${\rm
O3N2}\equiv\log\\{([\rm{O}\,\textsc{iii}]/\mathrm{H}\beta)/([\mathrm{N}\,\textsc{ii}]/\mathrm{H}\alpha)\\}$
indices, two popular strong-line calibrators for the nebular metallicity
(Curti et al., 2017, and references therein). Predictably, the effects are
smaller than the already small effect on H$\alpha$, since now the corrections
affect both numerator and denominator. Indeed, we find that N2 and O3N2 are
only affected in their third significant digits.
SNRs thus have a small effect in global SFR estimates and a negligible one in
metallicity. Locally, however, their effects can be significant. To illustrate
this we have defined regions of 2.8 arcsec (14 spaxels, or 78 pc) in diameter
centred on the sources in our SNR sample. The N2 and O3N2 indices were then
computed for the region as a whole and excluding the SNRs.
On average, [N ii]/H$\alpha$ comes out 1.069 times higher when the SNRs are
not masked, a bias that translates to $\Delta Z_{\rm neb}({\rm N2})=0.022$ dex
using the $Z_{\rm neb}({\rm N2})$ calibration of Curti et al. (2017).
Considering only the top 10 sources in terms of [S ii]/H$\alpha$ the bias in
metallicity increases to $\Delta Z_{\rm neb}({\rm N2})=0.041$ dex, while for
the SNR in Fig. 9 alone it reaches 0.071 dex. Numerically, these biases are of
the same order as $Z_{\rm neb}$ variations within galaxies.
O3N2-based metallicities are much less sensitive to contamination by SNRs. The
reason is evident from the BPT diagram itself (Fig. 6, top), where one sees
that SNRs move away from H ii regions along lines of roughly constant O3N2.
(That is the same reason why DIG-related $Z_{\rm neb}$ corrections are smaller
for O3N2; Vale Asari et al. 2019). Accordingly, we obtain an insignificant
$\Delta Z_{\rm neb}({\rm O3N2})=-0.0015$ dex for the sample as a whole. Even
in the extreme case of Fig. 9 the bias is just $-0.019$ dex.
### 6.2 The nature of HII regions with anomalous forbidden lines
The numerical experiments reported just above emulate a situation where an
unnoticed SNR contaminates the emission lines from what was presumed to be an
area composed solely of H ii regions. While the use of circular apertures is
obviously a simplification of the problem, more advanced methods to delineate
H ii regions spatially (e.g. Sánchez et al., 2012b; Casado et al., 2017;
Rousseau-Nepton et al., 2018) would probably also include SNRs within the
resulting contour, especially if observed under coarser resolution.
The possibility that SNRs contaminate the line emission from H ii regions has
long been acknowledged in the literature. Kennicutt et al. (1989), for
instance, discussed it very didactically over 30 yr ago in a comparison of
nuclear and disc H ii regions. More recently, integral field spectroscopy work
with CALIFA and MUSE identified numerous examples of emission regions well
outside the nuclei of galaxies that look just like H ii regions except for
somewhat enhanced forbidden lines, as seen in the studies by Sánchez et al.
(2012a), Sánchez et al. (2015), and Sánchez-Menguiano et al. (2020).
In this context it is appropriate to point out that we have explicitly
demonstrated that, at least in NGC 4030, contamination by SNRs leads to such
“anomalous” H ii regions. As illustrated by virtually every image shown in
this study (starting with Fig. 1), SNRs are very often surrounded by or
immersed within star-forming regions. Then, when examining their positions in
diagnostic diagrams prior to background subtraction (red stars in Figs. 6 and
7), we saw that, besides the obvious non-H ii cases (the clearest SNR
detections), some populate regions that slightly trespass the H ii region
demarcation frontier due to the SNR component. Source 31 in Table 1 is a good
example. Its raw (uncorrected for background) BPT coordinates are
($\log[\mathrm{N}\,\textsc{ii}]/\mathrm{H}\alpha,\log[\rm{O}\,\textsc{iii}]/\mathrm{H}\beta)=(-0.34,-0.31)$,
a bit above the Stasińska et al. (2006) limit. After background removal its
line ratios move into the realm of SNRs, as indicated by the arrows in Fig. 6.
Conversely, removing the SNR component moves them to $(-0.50,-0.53)$, well
within the star-forming zone.
The fact that we were able to isolate the SNR contribution in such complex
star-forming environment is ultimately due to the superb spatial sampling of
MUSE. Observed under the much coarser resolutions of CALIFA or MaNGA most of
the SNRs identified in this study would not be recognized as individual
sources, leaving only trace indications of their existence in the form of
slight “anomalies” in diagnostic diagrams.
## 7 Summary
We have reported the discovery of SNRs based on MUSE integral field
spectroscopy of NGC 4030. As the storyline of this paper reveals, this
discovery came not from a pre-planned search for SNRs, but from an
exploratory investigation on the nature of compact sources with enhanced
forbidden line emission (UFLOs) spotted in a cursory examination of emission-
line images derived from the data (Fig. 1). Here is a summary of how this
investigation unfolded.
1. 1.
PCA tomography of the H$\beta$, [O iii] 5007, H$\alpha$, [N ii] 6584, [S ii]
6716, and 6731 emission lines (the strongest in the MUSE range) proved to be a
useful tool to locate these sources. The second eigen line spectrum obtained
from this analysis contrasts [O iii], [N ii], and [S ii] with H$\beta$ and
H$\alpha$ (Fig. 2), such that UFLOs stand out clearly in the image of this PC
(Fig. 3).
2. 2.
Due to the intense nebular emission throughout the field of view, extracting
the source properties proved far more complicated than spotting them (Fig. 5).
To mitigate this problem our aperture photometry removes spaxels with
excessive H$\alpha$ contribution to the background level.
3. 3.
For about half of the UFLOs the background subtraction leads to an increase in
standard diagnostic flux ratios of collisional to recombination lines ([N
ii]/H$\alpha$, [S ii]/H$\alpha$, [O iii]/H$\beta$, [O i]/H$\alpha$; Figs. 6
and 7), making them even more different from the H ii regions that dominate
the nebular emission in NGC 4030. Others, however, have H ii-region-like line
ratios.
4. 4.
The smoking gun evidence that at least some UFLOs are associated with SNRs
came from the spectral extraction (Figs. 8 and 9), which revealed anomalous [N
ii], [S ii] and [O i] emission as well as features around 7300 Å (including
lines from [Ca ii], [O ii], [Ni ii], and [Fe ii]) which distinguish them from
H ii regions. These same features, all of which are indicative of shocks, have
been previously identified in several studies of Galactic and extragalactic
SNRs, but never before used as a diagnostic.
5. 5.
We have screened our list of PCA-based detections for sources with SNR-like
emission-line ratios. A total of 59 sources fall in the SNR region of the [S
ii]/H$\alpha$ vs. [N ii]/H$\alpha$ diagram (Figs. 10 and 11) according to the
recently proposed H ii region/SNR separation line by Kopsacheili et al.
(2020).
6. 6.
The $\sim 7300$ Å SNR features are evident in the mean spectrum (Fig. 12) as
well as in $\sim 10$ individual sources. They are also apparently present
(though weaker) in other sources. A stacking analysis reveals that indeed
these features are present in about 40 objects (Fig. 13). The remaining
sources also have weaker emission lines in general, so the 7300 Å lines may
well be present but at undetectable flux levels.
7. 7.
The impact of SNRs on global SFR and nebular metallicity estimates in NGC 4030
is small. Locally, ignoring their presence may lead to significant biases in
$Z_{\rm neb}$ estimates based on the N2 index, while O3N2-based metallicities
are not affected.
8. 8.
Another local effect of SNRs is that, when superposed on H ii regions, they
may add enough forbidden lines to move the source beyond the limits for star-
forming regions devised on the basis of photoionization models. This situation
(envisaged over three decades ago) happens in the NGC 4030 data examined here,
and is a likely explanation for the so-called “anomalous” H ii regions found
in recent integral field studies.
9. 9.
An unexpected result was the serendipitous discovery of a luminous blue
variable star which seems to fit the profile of a SN impostor (Fig. 15), and
whose nature deserves further investigation.
Our search for the nature of the intriguing green/blue compact (unresolved)
sources in Fig. 1 has thus led us to the discovery of about five dozen SNRs in
NGC 4030. The true number of SNRs may well be larger, of course, and in fact,
some promising candidates can be found among the sources detected with a
threshold on PC2 lower than the one used for our SNR sample.
The rather unorthodox methodology employed here has plenty of room for
improvement, especially as it was not originally designed to find SNRs. With
the benefit of hindsight, one can envisage ways of improving upon the PCA
tomography technique as used in this paper. For instance, including weaker
emission lines like [O i] and the 7300 Å features in the analysis would
clearly help to identify SNRs, though one would then have to deal with very
uncertain or missing information, perhaps with techniques such as those
described in Budavári et al. (2009). Improvements on the source extraction
strategy should also be attempted, as this is critical to the diagnostic of
physical properties of individual SNRs (e.g., shock velocities, densities).
Combining such tools with the collection of SNRs reported here, presumably
each with a different age, should provide useful empirical information to
better understand the evolution of SNRs.
Finally, it is worth noting that these are the most distant optically detected
SNRs to date. According to Vučetić et al. (2015) the record belonged to NGC
2903, a galaxy 9 Mpc away. At 29 Mpc, NGC 4030 triples this distance. This
record should not hold for long, however, as there are several other galaxies
in the MUSE archive which can be dissected with the same kind of methodology
employed in this paper.
## Acknowledgements
The authors would like to thank Christina Thöne, Enrique Pérez, Grażyna
Stasińska, and Lluís Galbany for valuable discussions. We are in great debt to
the (anonymous) referee for her/his very thorough report. Likewise, we would
like to thank the recently deceased Prof. João Steiner for the outstanding and
inspiring work throughout his career. RCF acknowledges support from CNPq. SFS
thanks CONACYT FC-2016-01-1916 and CB285080 projects and PAPIIT IN100519
project for support on this study.
## DATA AVAILABILITY
The data used in this work are available from the ESO Science Archive at
https://archive.eso.org/.
## References
* Anderson et al. (2012) Anderson J. P., Habergham S. M., James P. A., Hamuy M., 2012, MNRAS, 424, 1372
* Baldwin et al. (1981) Baldwin J. A., Phillips M. M., Terlevich R., 1981, PASP, 93, 5
* Blair & Long (1997) Blair W. P., Long K. S., 1997, ApJS, 108, 261
* Blair & Long (2004) Blair W. P., Long K. S., 2004, ApJS, 155, 101
* Budavári et al. (2009) Budavári T., Wild V., Szalay A. S., Dobos L., Yip C.-W., 2009, MNRAS, 394, 1496
* Buta et al. (2015) Buta R. J., et al., 2015, ApJS, 217, 32
* Cardelli et al. (1989) Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245
* Casado et al. (2017) Casado J., Ascasibar Y., García-Benito R., Guidi G., Choudhury O. S., Bellocchi E., Sánchez S. F., Díaz A. I., 2017, MNRAS, 466, 3989
* Chornock et al. (2010) Chornock R., Filippenko A. V., Li W., Silverman J. M., 2010, ApJ, 713, 1363
* Cid Fernandes et al. (2005) Cid Fernandes R., Mateus A., Sodré L., Stasińska G., Gomes J. M., 2005, MNRAS, 358, 363
* Cid Fernandes et al. (2011) Cid Fernandes R., Stasińska G., Mateus A., Vale Asari N., 2011, MNRAS, 413, 1687
* Crowther (2013) Crowther P. A., 2013, MNRAS, 428, 1927
* Curti et al. (2017) Curti M., Cresci G., Mannucci F., Marconi A., Maiolino R., Esposito S., 2017, MNRAS, 465, 1384
* den Brok et al. (2020) den Brok M., et al., 2020, MNRAS, 491, 4089
* Dopita et al. (2010) Dopita M. A., et al., 2010, ApJ, 710, 964
* Dopita et al. (2019) Dopita M. A., Seitenzahl I. R., Sutherland R. S., Nicholls D. C., Vogt F. P. A., Ghavamian P., Ruiter A. J., 2019, AJ, 157, 50
* Erroz-Ferrer et al. (2019) Erroz-Ferrer S., et al., 2019, MNRAS, 484, 5009
* Fabrika et al. (2005) Fabrika S., Sholukhova O., Becker T., Afanasiev V., Roth M., Sanchez S. F., 2005, A&A, 437, 217
* Fesen & Hurford (1996) Fesen R. A., Hurford A. P., 1996, ApJS, 106, 563
* Ganda et al. (2006) Ganda K., Falcón-Barroso J., Peletier R. F., Cappellari M., Emsellem E., McDermid R. M., de Zeeuw P. T., Carollo C. M., 2006, MNRAS, 367, 46
* Ganda et al. (2007) Ganda K., et al., 2007, MNRAS, 380, 506
* Im et al. (2019) Im M., et al., 2019, Journal of Korean Astronomical Society, 52, 11
* Kamann et al. (2016) Kamann S., et al., 2016, A&A, 588, A149
* Kennicutt et al. (1989) Kennicutt Robert C. J., Keel W. C., Blaha C. A., 1989, AJ, 97, 1022
* Kopsacheili et al. (2020) Kopsacheili M., Zezas A., Leonidaki I., 2020, MNRAS, 491, 889
* Koribalski et al. (2004) Koribalski B. S., et al., 2004, AJ, 128, 16
* Kreckel et al. (2017) Kreckel K., Groves B., Bigiel F., Blanc G. A., Kruijssen J. M. D., Hughes A., Schruba A., Schinnerer E., 2017, ApJ, 834, 174
* Lacerda et al. (2018) Lacerda E. A. D., et al., 2018, MNRAS, 474, 3727
* Lehmann et al. (2005) Lehmann I., et al., 2005, A&A, 431, 847
* Leonidaki et al. (2013) Leonidaki I., Boumis P., Zezas A., 2013, MNRAS, 429, 189
* Levenson et al. (1995) Levenson N. A., Kirshner R. P., Blair W. P., Winkler P. F., 1995, AJ, 110, 739
* Long et al. (2018) Long K. S., Blair W. P., Milisavljevic D., Raymond J. C., Winkler P. F., 2018, ApJ, 855, 140
* Long et al. (2019) Long K. S., Winkler P. F., Blair W. P., 2019, ApJ, 875, 85
* López-Cobá et al. (2020) López-Cobá C., et al., 2020, AJ, 159, 167
* Matonick & Fesen (1997) Matonick D. M., Fesen R. A., 1997, ApJS, 112, 49
* Moumen et al. (2019) Moumen I., Robert C., Devost D., Martin R. P., Rousseau-Nepton L., Drissen L., Martin T., 2019, MNRAS, 488, 803
* Osterbrock & Ferland (2006) Osterbrock D. E., Ferland G. J., 2006, Astrophysics of gaseous nebulae and active galactic nuclei. University Science Books
* Roth et al. (2018) Roth M. M., et al., 2018, A&A, 618, A3
* Rousseau-Nepton et al. (2018) Rousseau-Nepton L., Robert C., Martin R. P., Drissen L., Martin T., 2018, MNRAS, 477, 4152
* Russell & Dopita (1990) Russell S. C., Dopita M. A., 1990, ApJS, 74, 93
* Sabbadin et al. (1977) Sabbadin F., Minello S., Bianchini A., 1977, A&A, 60, 147
* Sánchez (2020) Sánchez S. F., 2020, ARA&A, 58, 99
* Sánchez-Menguiano et al. (2020) Sánchez-Menguiano L., Sánchez S. F., Pérez I., Ruiz-Lara T., Galbany L., Anderson J. P., Kuncarayakti H., 2020, MNRAS, 492, 4149
* Sánchez et al. (2012a) Sánchez S. F., et al., 2012a, A&A, 538, A8
* Sánchez et al. (2012b) Sánchez S. F., et al., 2012b, A&A, 546, A2
* Sánchez et al. (2015) Sánchez S. F., et al., 2015, A&A, 574, A47
* Schlafly & Finkbeiner (2011) Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103
* Seth et al. (2008) Seth A., Agüeros M., Lee D., Basu-Zych A., 2008, ApJ, 678, 116
* Smartt et al. (2009) Smartt S. J., Eldridge J. J., Crockett R. M., Maund J. R., 2009, MNRAS, 395, 1409
* Smith et al. (2011) Smith N., Li W., Silverman J. M., Ganeshalingam M., Filippenko A. V., 2011, MNRAS, 415, 773
* Stasińska et al. (1998) Stasińska G., Richer M. G., McCall M. L., 1998, A&A, 336, 667
* Stasińska et al. (2006) Stasińska G., Cid Fernandes R., Mateus A., Sodré L., Asari N. V., 2006, MNRAS, 371, 972
* Stasińska et al. (2008) Stasińska G., et al., 2008, MNRAS, 391, L29
* Steiner et al. (2009) Steiner J. E., Menezes R. B., Ricci T. V., Oliveira A. S., 2009, MNRAS, 395, 64
* Thöne et al. (2017) Thöne C. C., et al., 2017, A&A, 599, A129
* Tully et al. (2013) Tully R. B., et al., 2013, AJ, 146, 86
* Vale Asari et al. (2019) Vale Asari N., Couto G. S., Cid Fernandes R., Stasińska G., de Amorim A. L., Ruschel-Dutra D., Werle A., Florido T. Z., 2019, MNRAS, 489, 4721
* Van Dyk et al. (2000) Van Dyk S. D., Peng C. Y., King J. Y., Filippenko A. V., Treffers R. R., Li W., Richmond M. W., 2000, PASP, 112, 1532
* Vučetić et al. (2015) Vučetić M. M., Arbutina B., Urošević D., 2015, MNRAS, 446, 943
* Weilbacher et al. (2012) Weilbacher P. M., Streicher O., Urrutia T., Jarno A., Pécontal-Rousset A., Bacon R., Böhm P., 2012, Design and capabilities of the MUSE data reduction software and pipeline. Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, p. 84510B, doi:10.1117/12.925114
# Modeling Ground-to-Air Path Loss for Millimeter Wave UAV Networks
Hazim Shakhatreh1, Waed Malkawi1, Ahmad Sawalmeh2, Muhannad Almutiry3, Ali Alenezi3
1Department of Telecommunications Engineering,<EMAIL_ADDRESS>Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
2Computer Science Department, Northern Border University, Arar, Saudi Arabia
3Electrical Engineering Department, Northern Border University, Arar, Saudi Arabia
###### Abstract
Path loss is a significant component of wireless communication channel design
and analysis and reflects the reduction in a transmitted signal’s power
density. Due to the differences in the propagation conditions, wireless aerial
channels’ features differ from those of terrestrial wireless channels;
therefore, unmanned aerial vehicle path loss models are often different from
conventional terrestrial wireless channel path loss models. A mathematical
propagation model is proposed in this paper to estimate the Ground-to-Air path
loss between a wireless device and a low-altitude platform using millimeter
wave frequency bands. The suggested Ground-to-Air path loss model will assist
researchers in formulating several key problems in UAV network design.
###### keywords:
Wireless Propagation; Ground-to-Air Path Loss Model; Unmanned Aerial Vehicle
(UAV); Millimeter Wave Radio (mmWave); Suburban Environment; Urban
Environment; Dense Urban Environment; High-rise Urban Environment
## 1 Introduction
The use of UAVs is growing rapidly across numerous civilian applications,
including wireless coverage, search and rescue, and real-time surveillance
[r1]. UAVs can provide wireless coverage in emergencies, with each UAV serving
as an aerial wireless base station when the terrestrial network goes out of
operation [r2]. They can also be used to provide better coverage for users and
higher data speeds [r3]. The authors in [r4_2] suggested a path loss model
between ground wireless devices and a low-altitude platform used as an aerial
wireless base station, which enabled academic researchers to formulate several
significant problems. The path loss model distinguished two distinct types of
propagation, which were extensively studied and evaluated for outdoor
receivers. Within this model, the authors of [r5] identified a tradeoff: at
higher altitudes, the path loss between the low-altitude platform and the
ground wireless device increases, but so does the probability of
line-of-sight connections; at lower altitudes, the path loss decreases, but
line-of-sight links become less probable. However, this model assumed a
downlink scenario, from the low-altitude platform to a terrestrial terminal,
at frequency bands of 5800, 2000, and 700 MHz. These assumptions constrain
the applicability of the model when the uplink scenario from a terrestrial
terminal to the low-altitude platform is considered, and when millimeter wave
frequency bands are used.
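The altitude tradeoff discussed above is commonly captured by a mean path loss of the form $\mathrm{PL} = \mathrm{FSPL} + P_{\rm LoS}\,\eta_{\rm LoS} + (1 - P_{\rm LoS})\,\eta_{\rm NLoS}$, with a sigmoid line-of-sight probability in the elevation angle. The sketch below uses this widely cited low-altitude-platform form; the environment parameters are illustrative values, not the ones fitted in [r4_2] or [r5]:

```python
import math

# Mean ground-to-platform path loss: free-space loss plus excess losses
# weighted by the line-of-sight probability. Parameters a, b, eta_los,
# eta_nlos are assumed illustrative environment constants.

def atg_path_loss_db(h_m, r_m, f_hz, a=9.61, b=0.16,
                     eta_los=1.0, eta_nlos=20.0):
    c = 3e8
    d = math.sqrt(h_m**2 + r_m**2)               # 3-D distance
    theta = math.degrees(math.atan2(h_m, r_m))   # elevation angle
    p_los = 1.0 / (1.0 + a * math.exp(-b * (theta - a)))
    fspl = 20 * math.log10(4 * math.pi * f_hz * d / c)
    return fspl + p_los * eta_los + (1 - p_los) * eta_nlos

# Raising the platform increases FSPL but also the LoS probability,
# which (for these toy parameters) lowers the total loss:
low = atg_path_loss_db(h_m=100, r_m=500, f_hz=2e9)
high = atg_path_loss_db(h_m=1000, r_m=500, f_hz=2e9)
```

For a fixed ground radius, whether the LoS gain outweighs the extra free-space loss depends on the environment constants, which is exactly the tradeoff the model exposes.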
The uplink scenario, in which data is transmitted to the UAV from ground
wireless devices, has been considered by only a few studies. Owing to the
limited transmit power of wireless devices, users may not be able to
communicate with remote, undamaged ground stations during emergencies (such
as tsunamis, earthquakes, or floods). Moreover, the wireless devices may not
be rechargeable due to physical damage to the energy infrastructure. In such
cases, providing stranded people with wireless coverage becomes critical, as
people in the disaster-affected area seek emergency information, locate
family members and friends, and get advice to help evacuate the affected area
[r6]. Note that in catastrophic circumstances, the energy efficiency of user
devices, which can be addressed by studying the uplink scenario, is more
important than UAV energy efficiency, because the UAV can travel long
distances for recharging. The potential for combining
millimeter wave communications with UAV networks is discussed in [r7] to meet
the high throughput requirements of most UAV applications. Deploying mmWave
communications with UAV networks has two major advantages: 1) the large
bandwidth available at mmWave frequencies can greatly enhance the capacity of
UAV networks, thereby meeting the requirement for prompt responses; 2) mmWave
can substantially increase UAV network data traffic, since mmWave
communications deliver high throughput over short-range transmissions [r7].
In this research work, we suggest a mathematical propagation model using
mmWave frequency bands to estimate a Ground-to-Air (GTA) path loss between a
terrestrial terminal and a low altitude platform. Developing an RF-model
requires a specific study description factors and constraints; the
characteristics of the buildings are one of the most critical conditions in an
urban setting. [r4_2]. Within this prediction model four simulation
environments are used: 1) Suburban Environment, 2) Urban Environment, 3) Dense
Urban Environment, 4) Urban Environment with high-rise buildings. The behavior
of GTA channels for mmWave bands is investigated in this work at two separate
frequencies: 28 GHz and 73 GHz, which are believed to cover a broad spectrum
of applications. For each frequency band, the analysis and simulations are
carried out over four different environments: suburban, urban, dense urban and
high-rise buildings environments. With the proposed GTA path loss model,
researchers can estimate the path loss for wireless devices in mmWave UAV
networks. Numerous quantities, such as the SINR and the throughput, can then
be derived from the path loss. To the best of our knowledge, our research work
is the first work that proposes a path loss model for Ground-to-Air
communication system at Millimeter Wave frequency bands in UAV Networks.
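As a baseline for the two bands studied here, the free-space component alone already separates them by a fixed offset: moving from 28 GHz to 73 GHz adds $20\log_{10}(73/28) \approx 8.3$ dB at any distance, before any environment-specific excess loss. A minimal sketch:

```python
import math

# Free-space path loss (dB) as a function of distance and frequency.
def fspl_db(d_m, f_hz, c=3e8):
    return 20 * math.log10(4 * math.pi * f_hz * d_m / c)

d = 100.0  # metres
delta = fspl_db(d, 73e9) - fspl_db(d, 28e9)
print(round(delta, 1))  # ~8.3 dB, independent of d
```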
The rest of this paper is structured as follows. In Section 2, the related
work is presented. Section 3 discusses the Ground-to-Air path loss model for
four simulation environments, namely: (1) the Suburban Environment, which
includes rural areas; (2) the most common Urban Environment, representing a
typical European city; (3) the Dense Urban Environment, reflecting towns
where buildings are close to each other; and (4) the High-rise Urban
Environment, reflecting modern cities with skyscrapers. Finally, Section 4
presents the conclusion and future work.
## 2 Related Works
Path loss models play an important role in the design and analysis of
wireless communication systems, as they quantify the reduction in the power
density of a transmitted signal. Due to the variations in the propagation
environments, including ground-to-air and air-to-ground communication links,
UAV network path loss models differ from the conventional path loss models
for cellular networks. UAV networks can be classified based on their
operating frequencies; Figure 1 presents the classification of UAV path loss
models by operating frequency band. The figure shows three bands for path
loss models used in UAV communication: (1) fourth-generation (4G) long term
evolution (LTE), operating in microwave bands; (2) fifth-generation (5G),
operating in mmWave bands; (3) the back-haul communication link between the
UAV and the ground base station (GBS), which can operate in either 4G or 5G
bands.
[Figure 1: tree diagram. Classification of path loss models for UAVs:
* 4G/LTE – Downlink [feng2006path, al2014modeling, matolak2016air, sun2016air, matolak2017air]; Uplink [yang2018energy, zeng2016throughput, ranjan2018study]
* 5G/mmWave – Downlink [khawaja2017uav, khawaja2018temporal, yang2019machine]; Uplink: this work
* 4G/5G back-haul link – UAV-GBS/GBS-UAV [al2017modeling, shi2018multiple, gapeyenko2018flexible]; UAV-UAV [azari2019cellular]]
Figure 1: Classification of path loss models for UAVs Based on Operation
Frequency Band.
### 2.1 4G/LTE Path loss Models
Many studies in the literature proposed path loss models for UAV-based aerial
communication frameworks in downlink and uplink scenarios for different
environments, such as overseas, suburban, urban, dense urban, and high-rise
building environments.
The authors in [feng2006path] developed a statistical propagation path loss
model for air-to-ground (ATG) channels in an urban environment. The model
covers a downlink ATG channel between an aerial base station and a mobile
terminal and operates at frequencies between 200 MHz and 5 GHz. The radio
channel is classified into three propagation groups: line of sight, obstructed
line of sight, and non-line of sight. The ATG model yields a higher
line-of-sight probability and a lower non-line-of-sight probability than
terrestrial communication platforms.
The work in [al2014modeling] proposed an ATG path loss model for UAV
networks: a statistical propagation model for predicting the path loss between
an airborne base station on a low altitude platform (LAP) and a ground
terminal. A ray-tracing simulator was used to model three types of rays,
namely (1) direct, (2) reflected, and (3) diffracted rays, for line-of-sight
and non-line-of-sight channels in different environments. The resulting path
loss model is a function of the aerial base station's altitude and the radius
of the coverage area.
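A widely used LAP model of this kind (not necessarily the exact formulation of [al2014modeling]) combines free-space loss over the slant distance with an elevation-angle-dependent line-of-sight probability; the constants `a`, `b` and the excess losses below are illustrative stand-ins, not values from the cited paper:

```python
import math

def p_los_elevation(theta_deg: float, a: float = 9.61, b: float = 0.16) -> float:
    """Sigmoid LOS probability vs. elevation angle.
    a, b are assumed urban-like fit constants, for illustration only."""
    return 1.0 / (1.0 + a * math.exp(-b * (theta_deg - a)))

def lap_path_loss_db(h_m: float, r_m: float, f_hz: float,
                     eta_los: float = 1.0, eta_nlos: float = 20.0) -> float:
    """Average LAP path loss: free-space loss over the slant distance plus
    excess losses weighted by the LOS probability (eta_* are assumed)."""
    d = math.hypot(h_m, r_m)                    # slant distance to the LAP
    theta = math.degrees(math.atan2(h_m, r_m))  # elevation angle at the GT
    fspl = 20.0 * math.log10(4.0 * math.pi * d * f_hz / 3e8)
    p = p_los_elevation(theta)
    return fspl + p * eta_los + (1.0 - p) * eta_nlos
```

As expected from the survey's description, the loss grows with the coverage radius (lower elevation angle means a lower LOS probability and a longer slant path).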
Moreover, in [matolak2016air, sun2016air, matolak2017air], Matolak et al.
developed measurement-based path loss models for the ATG channel in three
environments: over-water [matolak2016air], hilly and mountainous
[sun2016air], and suburban and near-urban [matolak2017air]. These works aim
to provide models for evaluating UAVs as aerial communication systems. They
describe the ATG channel in dual bands (L-band at 970 MHz and C-band at 5 GHz)
for the over-water, hilly and mountainous, and suburban and near-urban
environments, including propagation path loss, root-mean-square delay spreads
(RMS-DS), and the small-scale Rician K-factor, to develop path loss models and
wide-band tapped-delay-line (TDL) dispersive channel models.
On the other hand, ground-to-air (GTA) wireless communication between ground
terminals (GTs) and UAVs is studied in [yang2018energy, zeng2016throughput,
ranjan2018study]. The authors in [yang2018energy] proposed a ground-to-UAV
(GTU) wireless communication system in which the UAV acts as a data collection
platform for the GTs. They identified a power consumption trade-off between
the served GTs and the UAV in the GTU uplink, taking the UAV's trajectory into
account: the transmission power a GT needs to send its data can be reduced if
the UAV flies closer to it. The study therefore characterizes this trade-off
to find the optimal GT transmission power and the best UAV trajectory. In
[zeng2016throughput], the authors used a free-space path loss model to develop
an uplink communication system between a mobile relaying node (UAV) and GTs,
studying throughput maximization by optimizing the UAV trajectory and the
GT-to-UAV power allocation. The authors in [ranjan2018study] analyzed
different path loss models for downlink and uplink channels in an emergency
UAV-based wireless communication system; for the uplink, they used the
WINNER II uplink (WinIIU) [meinila2009winner] and two-ray uplink path loss
models.
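The free-space model underlying [zeng2016throughput] is the standard Friis loss, which can be sketched as:

```python
import math

def fspl_db(d_m: float, f_hz: float) -> float:
    """Friis free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / 299_792_458.0)

# Doubling the distance adds ~6.02 dB of loss,
# which is why flying the UAV closer to a GT saves transmit power.
delta = fspl_db(200.0, 2.4e9) - fspl_db(100.0, 2.4e9)
```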
### 2.2 5G/mmWave Path loss Models
Millimeter-wave (mmWave) frequency bands provide massive bandwidth that 5G
networks can exploit to significantly increase data rates. Several works have
developed and investigated path loss models in downlink (ATG) scenarios for
mmWave [khawaja2017uav, khawaja2018temporal, yang2019machine].
The authors in [khawaja2017uav] studied the large-scale characteristics of
mmWave ATG channels for UAV communication in the time domain. Ray-tracing
simulations with the Wireless InSite simulator were conducted to study the
behavior and characteristics of the ATG mmWave bands at 28 and 60 GHz. Four
environments were considered, namely overseas, rural, suburban, and urban, to
study the received signal strength (RSS) and the root-mean-square delay spread
of multipath components at UAV altitudes between 20 and 150 meters. The
results showed that the RSS follows the two-ray propagation model
[parsons2000mobile, matolak2016air], which adds the sea/earth surface
reflection to the line-of-sight component.
The same authors analyzed in [khawaja2018temporal] the small-scale
characteristics of mmWave propagation channels for UAV communication in the
time and spatial domains for oversea, rural, suburban, and dense-urban
environments, using the Wireless InSite ray-tracing simulator with
omni-directional antennas at the receivers and transmitter. The results showed
that the received multipath components of the ATG mmWave propagation channel
can be grouped into persistent and non-persistent components: the persistent
component consists of the line-of-sight and ground-reflected components, while
the non-persistent component contains the non-line-of-sight components.
Moreover, the work in [yang2019machine] proposed a machine learning-based
prediction method for path loss and delay spread in ATG mmWave channels. Two
algorithms, random forest and K-nearest neighbors, were employed for
prediction, and a feature selection scheme was proposed to improve prediction
accuracy. Ray-tracing software was used in an urban environment at 28 GHz to
generate the data for verifying the performance of the proposed path loss
prediction model.
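The K-nearest-neighbors idea used in [yang2019machine] can be illustrated with a toy regressor over a single distance feature. The "training" data below is synthetic, generated from an assumed log-distance law purely for illustration; the cited work uses ray-tracing data and a richer feature set:

```python
import math

def synthetic_pl(d_m: float) -> float:
    """Synthetic 28 GHz 'ground truth', used only to generate training data.
    The 32.4/20/22 coefficients are an assumed log-distance fit."""
    return 32.4 + 20.0 * math.log10(28.0) + 22.0 * math.log10(d_m)

# (distance, path loss) training pairs
TRAIN = [(d, synthetic_pl(d)) for d in range(50, 501, 25)]

def knn_predict(d_m: float, k: int = 3) -> float:
    """Predict path loss as the mean of the k nearest training distances.
    A real feature set would also include frequency, altitude, angle, etc."""
    nearest = sorted(TRAIN, key=lambda s: abs(s[0] - d_m))[:k]
    return sum(pl for _, pl in nearest) / k
```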
## 3 Ground-to-Air Path Loss Model
To assess Millimeter Wave propagation in High-rise Urban, Dense Urban, Urban
and Suburban environments, extensive measurements are conducted of 28 GHz and
73 GHz channels in (Victoria, Australia as in Figure 3c), (Paris, France as in
Figure 3f), (Mumbai, India as in Figure 3i), and (New York, USA as in Figure
3l) respectively. The 28 GHz and 73 GHz are suitable candidates for early
Millimeter Waves deployments. Previously, Local Multipoint Delivery Systems
were aimed at the 28 GHz frequency band and now consider it an enticing
opportunity for initial cellular deployments of the Millimeter Wave due to its
relatively low frequency within the Millimeter Wave spectrum
[akdeniz2014millimeter]. Moreover, the 73 GHz frequency band has an abundant
spectrum that can be adapted for dense deployment and can accommodate further
expansions if low-frequency bands are congested [akdeniz2014millimeter].
To measure the channel characteristics at these frequencies, an uplink GTA
communication scenario is considered, and the WinProp software is used to
conduct the experiments, with a transmitter height of 1.7 meters and a
receiver (UAV) height of 120 meters. The total path loss is calculated as a
function of the transmitter-receiver distance; at each placement, the path
loss is estimated as:
$PL=P_{t}-P_{r}+G_{t}+G_{r},$ (1)
where $P_{t}$ is the transmitter power, $P_{r}$ is the receiver power and
$G_{t}$ and $G_{r}$ are the gains of the transmitter and receiver antennas,
respectively. For this experiment, $P_{t}$ = 40 dBm and $G_{t}$ = $G_{r}$ = 0
dB.
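Equation (1) is a simple link-budget rearrangement; with the stated settings it can be sketched as follows (the received power of -110 dBm is an illustrative value):

```python
def measured_path_loss_db(p_t_dbm: float, p_r_dbm: float,
                          g_t_db: float = 0.0, g_r_db: float = 0.0) -> float:
    """Eq. (1): PL = Pt - Pr + Gt + Gr, powers in dBm and gains in dB."""
    return p_t_dbm - p_r_dbm + g_t_db + g_r_db

# With the experiment's settings (Pt = 40 dBm, Gt = Gr = 0 dB),
# a received power of -110 dBm implies a 150 dB path loss.
pl = measured_path_loss_db(40.0, -110.0)  # -> 150.0
```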
A scatter plot of the path losses at different placements, as a function of
the transmitter-receiver line-of-sight distance, is shown in Figure 2. Each
placement is manually labeled either line of sight, where the transmitter is
visible to the receiver, or non-line of sight, where the transmitter is
obstructed. As is standard in cellular path loss modeling, the line-of-sight
and non-line-of-sight path losses are fitted separately
[akdeniz2014millimeter].
In Figure 2, we plot a fit using a standard linear model for the line of sight
and non-line of sight points,
$PL(d)\,[\mathrm{dB}]=\alpha+\beta\,10\log_{10}(d)+\zeta,\qquad\zeta\sim\mathcal{N}(0,\sigma^{2}),$
(2)
where $d$ is the 3D distance in meters between the ground user and the UAV,
$\alpha$ and $\beta$ are the least-squares fits of the floating intercept and
slope over the measured distances (200 to 500 m), and $\sigma^{2}$ is the
variance of the lognormal shadowing. The values of $\sigma^{2}$, $\alpha$ and
$\beta$ are given in Table 1 for 28 GHz and Table 2 for 73 GHz.
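As a quick sketch, the floating-intercept model of Eq. (2) can be evaluated directly with the fitted values from the tables; the dense-urban NLOS rows are used below, and the shadowing term is a zero-mean Gaussian draw:

```python
import math
import random

def path_loss_db(d_m: float, alpha: float, beta: float, sigma2: float,
                 rng=None) -> float:
    """Eq. (2): PL(d) = alpha + beta * 10*log10(d) + N(0, sigma^2)."""
    shadow = (rng or random).gauss(0.0, math.sqrt(sigma2))
    return alpha + beta * 10.0 * math.log10(d_m) + shadow

# Dense-urban NLOS fits (Tables 1 and 2) evaluated at d = 300 m
pl28 = path_loss_db(300.0, 98.05, 1.86, 0.59, rng=random.Random(0))
pl73 = path_loss_db(300.0, 105.37, 1.91, 0.46, rng=random.Random(0))
```

With these fits, the 73 GHz loss at 300 m exceeds the 28 GHz loss by roughly 8-9 dB, matching the tables' trend of path loss growing with frequency.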
Tables 1 and 2 show that the path loss increases as frequency and distance
increase. For example, at 28 GHz the values of the $\alpha$ and $\beta$
parameters for the dense environment are 97.81 and 1.87, while at 73 GHz they
are 100.83 and 2.09. Figure 3 likewise shows the path loss increasing with
distance. Moreover, the line-of-sight parameters are close together, because
the transmitter is visible to the receiver. The parameter differences between
the urban and dense-urban environments are also small; this is because the
density of users is not taken into account when deriving the ground-to-air
path loss model, even though the human body acts as a blocker in
millimeter-wave networks.
To account for the blockage of the human body, we utilize the probability of
line of sight, $P_{L}$, for a wireless device $i$ from [gapeyenko2018effects]
as:
$P_{L}(r_{i},h_{D})=exp{(-\lambda
g_{B}\dfrac{r_{i}(h_{B}-h_{R})}{(h_{D}-h_{R})})},$ (3)
where $r_{i}$ is the 2D distance between the ground user and the UAV, $h_{D}$
is the height of the UAV, $\lambda$ is the density of human blockers, $g_{B}$
is the diameter of a human blocker, $h_{B}$ is the height of a human blocker,
and $h_{R}$ is the height of the wireless device (transmitter).
The average path loss between a ground wireless device $i$ and the UAV is
given by:
$PL_{a,i}=P_{L}(r_{i},h_{D})PL_{L,i}+[1-P_{L}(r_{i},h_{D})]PL_{N,i},$ (4)
where $PL_{L,i}$ and $PL_{N,i}$ are the path losses of the line-of-sight and
non-line-of-sight links.
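Equations (3) and (4) can be sketched together; the blocker density, diameter, and heights below are illustrative values, not parameters fitted in this work:

```python
import math

def p_los(r_2d: float, h_d: float, lam: float = 0.3, g_b: float = 0.5,
          h_b: float = 1.7, h_r: float = 1.3) -> float:
    """Eq. (3): LOS probability under human blockage.
    lam (blockers/m^2), g_b (m), h_b (m), h_r (m) are assumed values."""
    return math.exp(-lam * g_b * r_2d * (h_b - h_r) / (h_d - h_r))

def avg_path_loss_db(r_2d: float, h_d: float,
                     pl_los_db: float, pl_nlos_db: float) -> float:
    """Eq. (4): LOS-probability-weighted average of the two fitted losses."""
    p = p_los(r_2d, h_d)
    return p * pl_los_db + (1.0 - p) * pl_nlos_db

# A UAV at 120 m sees mostly unblocked users: p_los is close to 1,
# so the average loss stays near the LOS fit.
p = p_los(100.0, 120.0)
```

Raising the UAV increases the LOS probability (the exponent shrinks with $h_{D}$), pulling the average toward the line-of-sight fit.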
Table 1: Environment Parameters for the GTA Path Loss Model at 28 GHz.
Fitting Parameters | Link Type | Suburban | Urban | Dense-Urban | High-rise Building
---|---|---|---|---|---
| | Victoria, Australia | Paris, France | Mumbai, India | New York, USA
$\alpha$ | NLOS | 113.63 | 97.81 | 98.05 | 66.25
$\beta$ | | 1.16 | 1.87 | 1.86 | 3.30
$\zeta\sim\mathcal{N}(0,\sigma^{2})$ | | 2.58 | 1.69 | 0.59 | 4.48
$\alpha$ | LOS | 84.64 | 82.54 | 78.58 | 88.76
$\beta$ | | 1.55 | 1.68 | 1.85 | 1.68
$\zeta\sim\mathcal{N}(0,\sigma^{2})$ | | 0.12 | 0.79 | 0.49 | 2.47
Table 2: Environment Parameters for the GTA Path Loss Model at 73 GHz.
Fitting Parameters | Link Type | Suburban | Urban | Dense-Urban | High-rise Building
---|---|---|---|---|---
| | Victoria, Australia | Paris, France | Mumbai, India | New York, USA
$\alpha$ | NLOS | 115.40 | 100.83 | 105.37 | 102.10
$\beta$ | | 1.43 | 2.09 | 1.91 | 2.22
$\zeta\sim\mathcal{N}(0,\sigma^{2})$ | | 2.74 | 1.90 | 0.46 | 6.61
$\alpha$ | LOS | 93.63 | 90.86 | 85.71 | 85.49
$\beta$ | | 1.52 | 1.69 | 1.90 | 1.92
$\zeta\sim\mathcal{N}(0,\sigma^{2})$ | | 0.16 | 0.84 | 0.42 | 0.57
(a) Suburban at 28 GHz.
(b) Suburban at 73 GHz.
(c) Urban at 28 GHz.
(d) Urban at 73 GHz.
(e) Dense-urban at 28 GHz.
(f) Dense-urban at 73 GHz.
(g) High-rise buildings at 28 GHz.
(h) High-rise buildings at 73 GHz.
Figure 2: Scatter plot with linear fit of the estimated path loss for
different environments for LOS and NLOS links at 28/73 GHz.
(a) Top view of the targeted area - Suburban.
(b) 3D view of the targeted area - Suburban.
(c) Colored map of the path loss - Suburban.
(d) Top view of the targeted area - Urban.
(e) 3D view of the targeted area - Urban.
(f) Colored map of the path loss - Urban.
(g) Top view of the targeted Dense-Urban.
(h) 3D view of the targeted Dense-Urban
(i) Colored map of the path loss - Dense-Urban.
(j) Top view of the targeted area - High-rise building.
(k) 3D view of the targeted area - High-rise building.
(l) Colored map of the path loss - High-rise building area.
Figure 3: Top view, 3D view and colored map of the path loss for the coverage
areas for Suburban, Urban, Dense-urban and high-rise building environments
when UAV altitude is at 120 m.
## 4 Conclusions
In this research work, a mathematical propagation model for estimating the
ground-to-air path loss between a wireless device and a low altitude platform
was provided for millimeter-wave frequency bands. The model was developed from
simulation experiments in four environments, high-rise building, dense-urban,
urban, and suburban, in New York, USA; Mumbai, India; Paris, France; and
Victoria, Australia, respectively, at 28 GHz and 73 GHz. A scatter plot of the
path losses at different placements was presented as a function of the
transmitter-receiver distance for line-of-sight and non-line-of-sight links,
with each placement manually labeled either line of sight, where the
transmitter is visible to the receiver, or non-line of sight, where it is
obstructed. The proposed path loss model can help formulate several
significant problems for academic researchers. In future work, the
human-blocker factor will be incorporated into the path loss model, different
mmWave frequency bands will be studied, and real experiments will be conducted
and compared with the simulation results.
11institutetext: Department of Computer Science and Engineering,
Shahjalal University of Science and Technology, Sylhet, Bangladesh 11email:
<EMAIL_ADDRESS>22email:
<EMAIL_ADDRESS>33email<EMAIL_ADDRESS>
# A transformer based approach for fighting COVID-19 fake news
S.M. Sadiq-Ur-Rahman Shifath 11 0000-0003-2428-6595 Mohammad Faiyaz Khan 22
0000-0002-2155-5991 Md. Saiful Islam 33 0000-0001-9236-380X
###### Abstract
The rapid outbreak of COVID-19 has brought humanity to a standstill, along
with a plethora of other problems. COVID-19 is the first pandemic in history
to occur when humanity is at its most technologically advanced and relies
heavily on social media platforms for connectivity and other benefits.
Unfortunately, fake news and misinformation regarding the virus also reach
people and cause massive problems, so fighting this infodemic has become a
significant challenge. In this work, we present our solution for the
"Constraint@AAAI2021 - COVID19 Fake News Detection in English" challenge.
After extensive experimentation with numerous architectures and techniques, we
use eight different transformer-based pre-trained models with additional
layers to construct a stacking ensemble classifier and fine-tune them for our
purpose. We achieved 0.979906542 accuracy, 0.979913119 precision, 0.979906542
recall, and 0.979907901 f1-score on the test dataset of the competition.
###### Keywords:
fake news detection, COVID-19, infodemic, Coronavirus, text classification.
## 1 Introduction
The Coronavirus disease 2019 (COVID-19) is an infectious disease caused by
SARS coronavirus 2. It has impacted almost every country and changed the
social, economic, and psychological states of people worldwide. Especially in
the past few months, people have become more information-hungry on this topic
and are therefore exposed to a significant amount of information circulating
about the coronavirus on various platforms. In parallel, the infodemic of
false information and misinformation regarding the virus has been rising.
As "stay-at-home" has proved to be the most effective precaution against the
virus, people's preferred means of communication and entertainment have been
online and social media platforms. Hence, the number of people exposed to
rumors or misinformation is greater than ever before. In [13], Facebook
advertisements across 64 countries were examined, and 5% of the advertisements
were found to contain possible errors or misinformation. Besides, many online
news portals purposefully present misguided news to increase their popularity.
In recent times, a cluster of fake news about lock-downs, possible remedies,
and vaccinations has caused panic: people started to stockpile sanitizers and
masks, and the supply chain was disrupted out of fear. Fighting this
ever-increasing amount of fake or misleading news has proven to be as crucial
to overcoming the pandemic as finding remedies against the virus.
Fake news detection is a critical task, as it requires identifying and
detecting various news types such as clickbait, propaganda, satire,
misinformation, falsification, sloppy journalism, and many more.
Traditionally, machine learning-based classifiers have been the go-to solution
in this domain. In recent years, sequence models like RNN, LSTM, and CNN have
shown good competency; the introduction of transformers, however, has brought
a considerable performance gain. In this work, we propose a solution to the
"Constraint@AAAI2021 - COVID19 Fake News Detection in English" task on the
provided dataset [16]. Our contributions to this task are the following:
* •
We perform extensive experimentation, including training classic machine
learning models and traditional text classification models such as
Bidirectional LSTM and one-dimensional CNN on the competition dataset, and
incorporating publicly available external datasets into training.
* •
We fine-tune state-of-the-art transformer-based pre-trained models as required
by our task. We also experiment with additional R-CNN [8], multichannel CNN
with attention [12], and multilayer perceptron (MLP) modules to increase model
performance. Finally, we construct a stacking ensemble classifier of eight
transformer models, BERT [3], GPT-2 [20], XLNet [24], RoBERTa [11],
DistilRoBERTa, ALBERT [9], BART [10], and DeBERTa [6], each with additional
MLP layers.
* •
We also present our overall workflow, comparison, and performance analysis of
our experiments on the validation set.
## 2 Related Work
The issue of fake news detection has been well studied in various fields.
Existing machine learning-based methods like support vector machine (SVM)[5],
decision tree [2] have worked as baselines for this task. In [21], a simple
classifier was used to leverage the Term Frequency(TF), Term Frequency and
Inverse Document Frequency(TF-IDF), and cosine similarity between vectors as
features to provide a baseline for fake news detection task on the Fake News
Challenge (FNC-1) dataset111http://www.fakenewschallenge.org/.
Deep learning language models such as Convolutional Neural Networks (CNN),
Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM) can analyse
variable-length sequential data and discover hidden complex patterns in text.
In [1], a fake news detection model based on Bidirectional LSTM [22] is
presented. In [14], Bidirectional LSTM along with GloVe [17] and ELMo [18]
embeddings was used to encode the claim and sentences.
Despite the efficacy of the above language models, they still fell short of
producing high accuracy. In recent years, transformers and their variants have
brought considerable performance improvements across natural language
processing tasks. In [7], the contextual relationship between the headline and
main text of news articles was analysed using BERT [3], showing that
fine-tuning pre-trained BERT for specific tasks like fake news detection can
outperform existing sequence models.
Research efforts focusing on COVID-19 fake news detection have also been made.
In [4], a detection model based on the information provided by the World
Health Organization, UNICEF, and the United Nations was proposed. Ten machine
learning algorithms were used along with a voting ensemble classifier. In
[23], a Twitter dataset containing 7623 tweets and corresponding labels was
introduced.
## 3 Models
We use an ensemble of multiple varieties of pre-trained transformers for the
task at hand. Transformers are state-of-the-art tools for natural language
processing. They have stacked blocks of identical encoders and decoders with
self-attention. Following are the short descriptions of the varieties of
transformers that are used:
1. 1.
BERT[3]: BERT(Bidirectional Encoder Representations from Transformers) is a
bidirectional language transformer. It is trained based on masked language
modeling and next sentence prediction on a sizeable unlabelled text corpus.
2. 2.
GPT-2[20]: It is a modified transformer with a larger context and vocabulary
size. In contrast to the ordinary transformers, it has an additional
normalization layer after the self-attention block.
3. 3.
XLNet[24]: It is a modified version of the transformer-XL. It is trained to
learn the bidirectional context with an autoregressive method.
4. 4.
RoBERTa[11]: It is an optimized BERT, trained on more data with a more robust
procedure, and drops the next-sentence prediction objective of the original
BERT.
5. 5.
DistilRoBERTa: It is a distilled and faster version of the RoBERTa base
version.
6. 6.
ALBERT[9]: It splits the embedding matrix into two smaller matrices and uses
two parameter-reduction techniques to increase training speed and decrease
memory consumption.
7. 7.
BART[10]: It was introduced by Facebook as a sequence-to-sequence model and
can be seen as a fusion of BERT [3] and GPT [19].
8. 8.
DeBERTa[6]: It is built on RoBERTa with some modifications like disentangled
attention and enhanced mask decoder.
Multilayer Perceptron (MLP): Each of the previous models is followed by a
multilayer perceptron module. It consists of a fully connected layer with 64
hidden units, followed by a normalization layer, followed by a linear layer
with tanh activation, a dropout layer and a softmax layer for generating two
class classification probabilities. All the weights are initialized by Xavier
initialization.
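The head described above can be sketched in NumPy. Random Xavier-scaled weights stand in for trained parameters, dropout is omitted (it is inactive at inference), and the 768-dimensional input is an assumed pooled transformer output:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_head(x: np.ndarray, hidden: int = 64, n_classes: int = 2) -> np.ndarray:
    """Sketch of the per-model head: dense(64) -> layer norm -> tanh
    -> (dropout, skipped here) -> softmax over two classes."""
    d = x.shape[-1]
    w1 = rng.normal(0.0, np.sqrt(2.0 / (d + hidden)), (d, hidden))  # Xavier scale
    h = x @ w1
    h = (h - h.mean()) / (h.std() + 1e-5)        # layer normalisation
    h = np.tanh(h)                               # linear layer + tanh activation
    w2 = rng.normal(0.0, np.sqrt(2.0 / (hidden + n_classes)), (hidden, n_classes))
    logits = h @ w2
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()

probs = mlp_head(rng.standard_normal(768))       # fake/real probabilities
```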
Ensemble Module: For ensembling, we train a meta-learner consisting of the
following units: a fully connected layer with 64 hidden units, a linear layer
with tanh activation function, followed by a fully connected layer with 128
hidden units, a linear layer with ReLU activation function, and a softmax
classifier. Figure 1 contains an overview of our proposed architecture.
Figure 1: Overview of our proposed method. Tweets are given as inputs to the
individual models to produce outputs; all outputs are then combined into a 1x8
feature vector and fed to the meta-learner to generate the final ensembled
output.
## 4 Dataset
The dataset [16] provided for the competition contains social media posts
related to COVID-19 with labels indicating whether each post is fake or real.
It is divided into train, validation, and test sets. The train set contains
6420 samples; the test and validation sets each contain 2140 posts with their
corresponding labels. The train set consists of 3360 real and 3060 fake posts,
whereas the test and validation sets each consist of 1120 real and 1020 fake
posts. On average, a post contains 27.0, 26.79, and 27.46 words in the train,
validation, and test sets, respectively. As an experiment, we also used
additional data from [23], which has 7623 labeled samples, for training our
model. We used each sample's title in place of the 'tweet' field of the
competition data, as both have similar lengths, and converted the multi-label
classification data to binary according to label type. As a result, 7501
samples were classified as fake and 122 as real.
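The multi-label-to-binary conversion can be sketched as follows; the sample texts and the label names in `FAKE_LABELS` are placeholders, and the actual label scheme of [23] may differ:

```python
# Placeholder samples; the real auxiliary dataset [23] has 7623 rows.
samples = [
    ("5G towers spread the virus", "false"),
    ("Garlic cures COVID-19", "misleading"),
    ("WHO declared a pandemic in March 2020", "true"),
]

# Assumed mapping of fine-grained labels onto the binary fake class.
FAKE_LABELS = {"false", "misleading", "partially false"}

binary = [(text, "fake" if label in FAKE_LABELS else "real")
          for text, label in samples]

counts = {"fake": 0, "real": 0}
for _, label in binary:
    counts[label] += 1
```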
## 5 Experimentation and Result Analysis
We first experiment with traditional language models such as Bidirectional
LSTM (Bi-LSTM) [22] with attention, one-dimensional CNN (1D-CNN), Hierarchical
Attention Networks (HAN) [25], Recurrent Convolutional Neural Networks (RCNN)
[8], and Multichannel CNN with Attention (AMCNN) [12] on the competition
dataset. We also experiment with transformer-based pre-trained models like
BERT and RoBERTa. The results of these experiments are shown in Table 1.
Table 1: Comparison of the performance of transformer-based models with traditional language models on the validation dataset.
Model | Accuracy | f1-score | Precision | Recall
---|---|---|---|---
| | Fake | Real | Fake | Real | Fake | Real
Bi-LSTM + attention | 0.928 | 0.931 | 0.924 | 0.931 | 0.925 | 0.932 | 0.924
1D-CNN | 0.926 | 0.931 | 0.920 | 0.908 | 0.948 | 0.948 | 0.948
HAN | 0.930 | 0.933 | 0.928 | 0.943 | 0.918 | 0.923 | 0.938
AMCNN | 0.926 | 0.931 | 0.920 | 0.908 | 0.949 | 0.956 | 0.893
RCNN | 0.933 | 0.937 | 0.928 | 0.921 | 0.947 | 0.954 | 0.910
BERT | 0.971 | 0.969 | 0.972 | 0.977 | 0.965 | 0.961 | 0.980
RoBERTa | 0.979 | 0.977 | 0.980 | 0.981 | 0.976 | 0.974 | 0.983
Table 1 clearly shows that the transformer-based pre-trained models improve
radically over the traditional models. In these experiments, however, we use
just a dense layer with a softmax classifier on top of the transformers for
training and prediction, so there is still scope to improve further. Table 1
also shows that, among the traditional models, RCNN scores best. We therefore
add RCNN on top of BERT and RoBERTa and train the combined models. We find
that neither addition improves on the transformer-based models with a simple
classification layer. The results of these experiments are shown in Table 2
with the corresponding model names.
Table 2: Comparison of the performance among different models combined with transformer-based models on the validation dataset.
Model | Accuracy | f1-score | Precision | Recall
---|---|---|---|---
| | Fake | Real | Fake | Real | Fake | Real
BERT + RCNN | 0.967 | 0.965 | 0.969 | 0.980 | 0.956 | 0.950 | 0.982
RoBERTa + RCNN | 0.968 | 0.966 | 0.970 | 0.988 | 0.956 | 0.950 | 0.989
RoBERTa + SVM | 0.978 | 0.977 | 0.979 | 0.987 | 0.970 | 0.967 | 0.988
RoBERTa + MLP | 0.979 | 0.977 | 0.980 | 0.981 | 0.976 | 0.974 | 0.983
We experiment with a few classic machine learning classifiers, such as
decision tree (DT) [2] and support vector machine (SVM) [5] with different
kernels, on top of the transformer-based models. Among these, SVM with a
radial basis function (RBF) kernel achieves the highest score, a slight
improvement over the simple classifier. We also experiment with different MLP
configurations on top of RoBERTa and find that the best-performing MLP beats
the SVM with RBF kernel. The results are shown in Table 2.
Since only the MLP on top of RoBERTa outperforms the transformer-based models
with a simple classifier, we add a multi-layer perceptron (MLP) on top of each
transformer-based model, testing different MLP combinations and choosing the
best-performing one. Initially, we experiment only with BERT and RoBERTa among
the transformer-based pre-trained models. To add diversity and capture
different hidden information in the data, we then select the eight most
suitable transformer-based models based on their performance and low resource
requirements: BERT, RoBERTa, XLNet, GPT-2, ALBERT, DistilRoBERTa, BART, and
DeBERTa. We experiment with the base architecture of each, since we do not
have enough resources for larger variants, and add the best-performing MLP on
top of each. These models are tuned individually on the train and validation
datasets. The individual results are shown in Table 3.
Table 3: Comparison of the performance of various modified transformer models on the validation dataset.
Model | Accuracy | f1-score | Precision | Recall
---|---|---|---|---
| | Fake | Real | Fake | Real | Fake | Real
RoBERTa + MLP | 0.979 | 0.977 | 0.980 | 0.981 | 0.976 | 0.974 | 0.983
BERT + MLP | 0.971 | 0.969 | 0.972 | 0.977 | 0.965 | 0.961 | 0.980
XLNet + MLP | 0.977 | 0.975 | 0.978 | 0.982 | 0.972 | 0.969 | 0.984
GPT-2 + MLP | 0.974 | 0.973 | 0.976 | 0.974 | 0.974 | 0.972 | 0.977
DistilRoBERTa + MLP | 0.978 | 0.977 | 0.979 | 0.986 | 0.971 | 0.968 | 0.988
ALBERT + MLP | 0.962 | 0.960 | 0.964 | 0.976 | 0.951 | 0.944 | 0.979
BART + MLP | 0.978 | 0.976 | 0.979 | 0.988 | 0.969 | 0.965 | 0.989
DeBERTa + MLP | 0.973 | 0.971 | 0.975 | 0.988 | 0.960 | 0.955 | 0.989
Table 3 shows that the different transformer-based models with MLP capture
features differently, so combining their predictions can improve overall
performance. We therefore finally experiment with different ensemble
techniques.
For ensembling, we build a dataset from the predictions of the individual
models in Table 3: the feature vector for a training sample consists of each
model's prediction for that sample. We use it to train a meta-learner that
maps the individual predictions to a more generalized final output. We
experiment with random forest and SVM classifiers, achieving accuracies of
0.9785 and 0.9789, respectively, on the validation set. Since these methods do
not yield significant improvement, we introduce a more robust meta-learner
consisting of fully connected layers and achieve a considerable performance
gain. We also try different combinations of transformer-based models in the
ensemble. The results of the best three ensembled models are summarised in
Table 4.
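The stacking step can be sketched as follows; random values stand in for the eight models' probabilities and for the trained meta-learner weights, so only the data flow (1x8 feature vector in, two-class softmax out) is illustrated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "real-class" probabilities from the 8 fine-tuned models
# for one tweet (random values, for illustration only).
model_preds = rng.uniform(0.6, 0.99, size=8)
features = model_preds.reshape(1, 8)             # the 1x8 feature vector

def meta_learner(x: np.ndarray) -> np.ndarray:
    """Sketch of the meta-learner: dense(64)+tanh -> dense(128)+ReLU
    -> softmax. Random weights stand in for trained parameters."""
    w1 = rng.normal(0.0, 0.1, (x.shape[1], 64))
    h = np.tanh(x @ w1)
    w2 = rng.normal(0.0, 0.1, (64, 128))
    h = np.maximum(0.0, h @ w2)                  # ReLU
    w3 = rng.normal(0.0, 0.1, (128, 2))
    logits = h @ w3
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)      # softmax per sample

probs = meta_learner(features)                   # final fake/real probabilities
```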
Table 4: Quantitative comparison of ensemble models on the validation dataset.
Model | Accuracy | f1-score | Precision | Recall
---|---|---|---|---
| | Fake | Real | Fake | Real | Fake | Real
Ensemble-v1 | 0.981 | 0.980 | 0.982 | 0.980 | 0.983 | 0.981 | 0.981
Ensemble-v2 | 0.983 | 0.982 | 0.984 | 0.986 | 0.980 | 0.978 | 0.988
Ensemble-v3 | 0.984 | 0.983 | 0.984 | 0.983 | 0.984 | 0.982 | 0.985
Here, Ensemble-v1 consists of RoBERTa, BERT, XLNet, and GPT-2. Ensemble-v2
adds ALBERT, BART, and DeBERTa to the models of Ensemble-v1, and Ensemble-v3
adds DistilRoBERTa to those seven. Table 4 shows that including more
individual models in the final ensemble classifier usually yields better
generalisation and higher accuracy.
One crucial observation is that despite adding the additional data for
training, the performance of all our models on the validation set decreases.
One probable reason for this decline is the sizeable imbalance between the
fake and real news counts. We therefore discard the additional data for our
final model training.
### 5.1 Hyper-parameters
We test different hyper-parameters, such as the number of layers, the number
of units per layer, learning rate, weight decay, dropout, and normalization,
within a feasible range. Across all models we use learning rates between 1e-3
and 2e-6; 2e-6 is used in most cases, since the dataset is not large and a
small learning rate helps the models converge stably. For the traditional
models, we test different combinations of layer and unit counts and find that
a Bidirectional LSTM with 256 units and 128 hidden units performs best. For
1D-CNN models, we use a layer with 256 filters and filter sizes 1-6. All
traditional models use a similar structure with little variation. For the
transformer-based pre-trained models, we use the base architectures, because
large models on small datasets can over-fit and we also face resource
limitations when experimenting with larger models.
## 6 Conclusion
In this work, we have presented our overall workflow for the fake news
detection task. We have conducted a number of experiments and provided a
comprehensive solution based on modified transformers with additional layers
and an ensemble classifier. Our method achieves competitive accuracy on the
test dataset and secures 20th place in the leaderboard of the "CONSTRAINT
2021 Shared Tasks: Detecting English COVID-19 Fake News and Hindi Hostile
Posts" competition [15]. Based on our experience from this task, we believe
that more balanced additional training data may help our model perform
better. Besides, extracting more information such as parts of speech, named
entities, word counts, punctuation, hashtags, and website links from social
media posts, and using these as meta-data during training, could further
improve the methodology's performance.
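As an illustration of the meta-data idea, the following sketch extracts simple surface statistics (word, hashtag, link, and punctuation counts) from a post. The function name and feature set are hypothetical, not part of our submitted system.

```python
import re
import string

def meta_features(post: str) -> dict:
    """Extract simple surface statistics from a social-media post,
    usable as auxiliary inputs alongside the text itself."""
    return {
        "n_words": len(post.split()),
        "n_hashtags": len(re.findall(r"#\w+", post)),
        "n_links": len(re.findall(r"https?://\S+", post)),
        "n_punct": sum(ch in string.punctuation for ch in post),
    }

feats = meta_features("Vaccines work! See https://who.int #COVID19 #facts")
print(feats)
```

Such features could be concatenated to the pooled transformer representation before the final classification layer.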
## References
* [1] Bahad, P., Saxena, P., Kamal, R.: Fake news detection using bi-directional lstm-recurrent neural network. Procedia Computer Science 165, 74–82 (2019)
  * [2] Breiman, L., Friedman, J., Olshen, R., Stone, C.: Classification and Regression Trees. Wadsworth, Belmont, CA (1984)
* [3] Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
* [4] Elhadad, M.K., Li, K.F., Gebali, F.: Detecting misleading information on covid-19. IEEE Access 8, 165201–165215 (2020)
  * [5] Girosi, F., Niyogi, P., Poggio, T., Vapnik, V.: Comparing support vector machines with gaussian kernels to radial basis function classifiers. Tech. rep., Technical Report 1599, Massachusetts Institute of Technology, MA, USA (1996)
* [6] He, P., Liu, X., Gao, J., Chen, W.: Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654 (2020)
* [7] Jwa, H., Oh, D., Park, K., Kang, J.M., Lim, H.: exbake: Automatic fake news detection model based on bidirectional encoder representations from transformers (bert). Applied Sciences 9(19), 4062 (2019)
* [8] Lai, S., Xu, L., Liu, K., Zhao, J.: Recurrent convolutional neural networks for text classification. In: Twenty-ninth AAAI conference on artificial intelligence (2015)
* [9] Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942 (2019)
* [10] Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., Zettlemoyer, L.: Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461 (2019)
* [11] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
* [12] Liu, Z., Huang, H., Lu, C., Lyu, S.: Multichannel cnn with attention for text classification. arXiv preprint arXiv:2006.16174 (2020)
* [13] Mejova, Y., Weber, I., Fernandez-Luque, L.: Online health monitoring using facebook advertisement audience estimates in the united states: evaluation study. JMIR public health and surveillance 4(1), e30 (2018)
* [14] Nie, Y., Chen, H., Bansal, M.: Combining fact extraction and verification with neural semantic matching networks. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 6859–6866 (2019)
* [15] Patwa, P., Bhardwaj, M., Guptha, V., Kumari, G., Sharma, S., PYKL, S., Das, A., Ekbal, A., Akhtar, S., Chakraborty, T.: Overview of constraint 2021 shared tasks: Detecting english covid-19 fake news and hindi hostile posts. In: Proceedings of the First Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation (CONSTRAINT). Springer (2021)
* [16] Patwa, P., Sharma, S., PYKL, S., Guptha, V., Kumari, G., Akhtar, M.S., Ekbal, A., Das, A., Chakraborty, T.: Fighting an infodemic: Covid-19 fake news dataset. arXiv preprint arXiv:2011.03327 (2020)
* [17] Pennington, J., Socher, R., Manning, C.D.: Glove: Global vectors for word representation. In: Empirical Methods in Natural Language Processing (EMNLP). pp. 1532–1543 (2014), http://www.aclweb.org/anthology/D14-1162
* [18] Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., Zettlemoyer, L.: Deep contextualized word representations. arXiv preprint arXiv:1802.05365 (2018)
* [19] Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Improving language understanding by generative pre-training (2018)
* [20] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019)
* [21] Riedel, B., Augenstein, I., Spithourakis, G.P., Riedel, S.: A simple but tough-to-beat baseline for the fake news challenge stance detection task. arXiv preprint arXiv:1707.03264 (2017)
* [22] Schuster, M., Paliwal, K.K.: Bidirectional recurrent neural networks. IEEE transactions on Signal Processing 45(11), 2673–2681 (1997)
* [23] Shahi, G.K., Nandini, D.: FakeCovid – a multilingual cross-domain fact check news dataset for covid-19. In: Workshop Proceedings of the 14th International AAAI Conference on Web and Social Media (2020), http://workshop-proceedings.icwsm.org/pdf/2020_14.pdf
* [24] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: Xlnet: Generalized autoregressive pretraining for language understanding. In: Advances in neural information processing systems. pp. 5753–5763 (2019)
* [25] Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., Hovy, E.: Hierarchical attention networks for document classification. In: Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies. pp. 1480–1489 (2016)
# Heavy quark expansion of $\Lambda_{b}\to\Lambda^{*}(1520)$ form factors
beyond leading order
Marzia Bordone <EMAIL_ADDRESS> Dipartimento di Fisica, Università di
Torino & INFN, Sezione di Torino, I-10125 Torino, Italy
###### Abstract
I review the parametrisation of the full set of
$\Lambda_{b}\to\Lambda^{*}(1520)$ form factors in the framework of Heavy Quark
Expansion, including next-to-leading-order $\mathcal{O}(\alpha_{s})$ and, for
the first time, next-to-leading-power $\mathcal{O}(1/m_{b})$ corrections. The
unknown hadronic parameters are obtained by performing a fit to recent lattice
QCD calculations. I investigate the compatibility of the Heavy Quark
Expansion with the current lattice data, finding a tension between the two
approaches in the case of the tensor and pseudo-tensor form factors, whose
origin could stem from an underestimation of the current lattice QCD
uncertainties and from higher-order terms in the Heavy Quark Expansion.
## 1 Introduction
The flavour changing neutral current (FCNC)-mediated $b\to s\ell^{+}\ell^{-}$
transition plays an important role in the search for physics beyond the
Standard Model (SM). Its potential has been extensively studied through the
$B\to K^{(*)}\ell^{+}\ell^{-}$ decays. Interestingly, the LHCb experiment
found some discrepancies with respect to the SM predictions in a few
observables: $R_{K}$ and $R_{K^{*}}$, which test universality between the muon
and electron final states, and the angular coefficient $P_{5}^{\prime}$ in the
$B\to K^{*}\mu^{+}\mu^{-}$ angular distribution [1, 2, 3, 4, 5, 6]. These
hints, together with all available $b\to s\ell^{+}\ell^{-}$ data, form a
coherent pattern of discrepancies. They can be addressed by introducing New
Physics (NP) effects. Low-energy fits point toward a breaking of lepton
flavour universality with a combined significance for the NP hypothesis higher
than $5\sigma$ [7, 8, 9, 10]. Whether these data unambiguously show first
signs of NP is not yet clear. Only further measurements with higher
statistics, or measurements of new processes able to corroborate these data,
will give a final answer.
A possibility to better understand these data is studying further $b\to
s\ell^{+}\ell^{-}$-mediated decays, among which baryon decays are promising
candidates. The decay channel involving ground-state baryons
$\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}$ has already received attention both from
the experimental and theoretical point of view. The LHCb experiment measured
the $\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}$ angular distribution [11, 12],
finding good agreement between the measured values of the angular observables
and their SM predictions [13, 14, 15]. Even
though this result might be discouraging for NP searches, Refs. [14, 15]
showed that this is still consistent with the NP hypothesis. In fact, angular
distributions describing baryon decays are very different from those in the
meson cases, and NP affects them differently.
Another possibility is studying excited $\Lambda^{*}$ states. The LHCb
experiment used the decay chain $\Lambda_{b}\to\Lambda^{*}(\to
pK^{-})\ell^{+}\ell^{-}$ to measure $R(pK)$, the universality ratio between
muons and electrons, finding results consistent with both the SM expectation
and the measured values of $R_{K^{(*)}}$ [16]. In this analysis, the various
$\Lambda^{*}$ resonances below a certain mass threshold are not distinguished.
However, Ref. [17] shows that the $\Lambda^{*}(1520)$ is expected to be the
most frequent among the $\Lambda^{*}$ resonances, with a quite narrow mass
distribution. It is therefore well motivated to study the
$\Lambda_{b}\to\Lambda^{*}(1520)\ell^{+}\ell^{-}$ decay in detail from both
the experimental and theoretical points of view, the latter being the main
focus of this paper.
The determination of the form factors of the $\Lambda_{b}\to\Lambda^{*}(1520)$
decay has already been the object of study in the literature [18, 19, 20, 21,
22]. The Heavy Quark Expansion (HQE) of the form factors up to next-to-leading
order (NLO) in $\alpha_{s}$ and leading power in $1/m_{b}$ has been employed
[20, 21], using Quark Models to constrain the unknown hadronic parameters
[19]. Recently, a lattice QCD determination of the full basis of form factors
has become available [22], albeit constrained to the low-recoil region.
In this work, I investigate the compatibility between the HQE form factors and
the recent lattice QCD determination. To this end, I perform a HQE of the form
factors including NLO $\alpha_{s}$ and next-to-leading power (NLP) $1/m_{b}$
corrections, the latter not previously available in the literature. The
results are then matched onto the lattice QCD calculation. In the HQE the
number of independent hadronic parameters is reduced compared to the lattice
QCD case, introducing strict correlations among the form factors. I perform a
fit to the
lattice QCD results to obtain the central values, uncertainties, and
correlations among the HQE parameters. The comparison between lattice QCD
results and HQE predictions shows a tension between the two methods for tensor
and pseudo-tensor form factors, whose origin is not yet completely determined.
This paper is organised as follows: in Sect. 2 I present the HQE of the form
factors; in Sect. 3 I discuss the fit to lattice data; in Sect. 4 I conclude.
Appendix A and Appendix B report details on the calculation of the form
factors and Appendix C contains the covariance matrix for the fitted values of
the HQE parameters.
## 2 Setup
In the following, I investigate the form factors mediating the transition
$\Lambda_{b}(p,s_{b})\to\Lambda^{*}(1520)(k,\eta(\lambda_{\Lambda}),s_{\Lambda})$,
where $p$ and $k$ are the momenta of the initial and final states,
respectively, $s_{b}$ and $s_{\Lambda}$ are the rest-frame helicities of the
two baryons and $\eta(\lambda_{\Lambda})$ is the polarisation vector of the
$\Lambda^{*}$ for each polarisation state $\lambda_{\Lambda}$. Since in this
work I refer only to the $\Lambda^{*}(1520)$, I denote this state as
$\Lambda^{*}$ in the following. It is worth noticing that the $\Lambda^{*}$ is
considered stable in this discussion. The subsequent $\Lambda^{*}$ decay has
to be taken into account when comparing to experimental data. In Refs. [20,
21] the $\Lambda^{*}\to N\bar{K}$111The $\Lambda^{*}\to N\bar{K}$ mode is the
$\Lambda^{*}$ decay with the largest branching fraction and the one that will
be employed by the LHCb collaboration to reconstruct the $\Lambda^{*}$ in
future experimental analyses. decay is discussed, and the four-dimensional
differential decay width of the decay chain $\Lambda_{b}\to\Lambda^{*}(\to
N\bar{K})\ell^{+}\ell^{-}$ is presented.
I define the helicity form factors for
$\Lambda_{b}(p,s_{b})\to\Lambda^{*}(k,\eta(\lambda_{\Lambda}),s_{\Lambda})$ as
$\displaystyle\bra{\Lambda^{*}(k,\eta(\lambda_{\Lambda}),s_{\Lambda})}\bar{s}\gamma^{\mu}b\ket{\Lambda_{b}(p,s_{b})}$
$\displaystyle=+\bar{u}_{\alpha}(k,\eta(\lambda_{\Lambda}),s_{\Lambda})\left[\sum_{i}F_{i}(q^{2})\Gamma^{\alpha\mu}_{V,i}\right]u(p,s_{b})\,,$
(2.1)
$\displaystyle\bra{\Lambda^{*}(k,\eta(\lambda_{\Lambda}),s_{\Lambda})}\bar{s}\gamma^{\mu}\gamma_{5}b\ket{\Lambda_{b}(p,s_{b})}$
$\displaystyle=-\bar{u}_{\alpha}(k,\eta(\lambda_{\Lambda}),s_{\Lambda})\left[\sum_{i}G_{i}(q^{2})\gamma_{5}\Gamma^{\alpha\mu}_{A,i}\right]u(p,s_{b})\,,$
$\displaystyle\bra{\Lambda^{*}(k,\eta(\lambda_{\Lambda}),s_{\Lambda})}\bar{s}i\sigma^{\mu\nu}q_{\nu}b\ket{\Lambda_{b}(p,s_{b})}$
$\displaystyle=-\bar{u}_{\alpha}(k,\eta(\lambda_{\Lambda}),s_{\Lambda})\left[\sum_{i}T_{i}(q^{2})\Gamma^{\alpha\mu}_{T,i}\right]u(p,s_{b})\,,$
$\displaystyle\bra{\Lambda^{*}(k,\eta(\lambda_{\Lambda}),s_{\Lambda})}\bar{s}i\sigma^{\mu\nu}q_{\nu}\gamma_{5}b\ket{\Lambda_{b}(p,s_{b})}$
$\displaystyle=-\bar{u}_{\alpha}(k,\eta(\lambda_{\Lambda}),s_{\Lambda})\left[\sum_{i}T^{5}_{i}(q^{2})\gamma_{5}\Gamma^{\alpha\mu}_{T5,i}\right]u(p,s_{b})\,,$
where $\bar{u}_{\alpha}$ is the spin $3/2$ projector of a Rarita-Schwinger
object [23]. The Dirac structures $\Gamma^{\alpha\mu}_{L,i}$, with
$L=V,A,T,T5$ are given in Appendix A. In the cases $L=V,A$ I adopt the
parametrisation in Ref. [24], while for $L=T,T5$ I follow and adapt the
parametrisation in Refs. [20, 22]. In the following I use the convention
$\sigma^{\mu\nu}=\frac{i}{2}(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma^{\mu})$.
### 2.1 The Heavy Quark Expansion
In the low-recoil limit, a HQE of the $\Lambda_{b}\to\Lambda^{*}$ form factors
can be performed. At leading power in $1/m_{b}$ and leading order in
$\alpha_{s}$, the hadronic matrix element for $\Lambda_{b}\to\Lambda^{*}$
transitions reads:
$\langle\Lambda^{*}(k,\eta,s_{\Lambda})|\bar{s}\Gamma^{\mu}b|\Lambda_{b}(p,s_{b})\rangle=\sqrt{4}\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\zeta^{\alpha}(k,\eta,s_{\Lambda})\Gamma^{\mu}u(m_{\Lambda_{b}}v,s_{b})\,,$
(2.2)
where $v$ is the velocity of the initial state and $\Gamma^{\mu}$ denotes a
Dirac structure. In the following, I focus on the cases
$\Gamma^{\mu}=\gamma^{\mu},\,\gamma^{\mu}\gamma_{5},\,i\sigma^{\mu\nu}q_{\nu},\,i\sigma^{\mu\nu}q_{\nu}\gamma_{5}$.
The most general decomposition for the leading-order and leading-power
contribution $\zeta^{\alpha}$ reads
$\zeta^{\alpha}=v^{\alpha}[\zeta_{1}+\zeta_{2}\not{v}]\,,$ (2.3)
where $\zeta_{1}$ and $\zeta_{2}$ are the leading Isgur-Wise (IW) functions.
The discussion of $1/m_{b}$ and $\alpha_{s}$ corrections closely follows Refs.
[25, 26, 27]. In this spirit, I replace the (axial-)vector and (pseudo-)tensor
currents with:
$\displaystyle\bar{s}\gamma^{\mu}b\mapsto\bar{s}J^{\mu}_{V(A)}h_{v}=\,$
$\displaystyle(1+C_{0}^{(v)})\bar{s}\gamma^{\mu}(\gamma_{5})h_{v}\pm
C_{1}^{(v)}v^{\mu}\bar{s}(\gamma_{5})h_{v}+\frac{1}{2m_{b}}\bar{s}\Delta
J^{\mu}_{V(A)}h_{v}\,,$ (2.4)
$\displaystyle\bar{s}(\gamma_{5})i\sigma^{\mu\nu}q_{\nu}b\mapsto\bar{s}J^{\mu}_{T(5)}h_{v}=\,$
$\displaystyle(1+C_{0}^{(t)})\bar{s}i\sigma^{\mu\nu}q_{\nu}h_{v}+\frac{1}{2m_{b}}\bar{s}\Delta
J^{\mu}_{T(5)}h_{v}\,.$ (2.5)
The matching coefficients at NLO read [25, 26]
$\displaystyle C_{0}^{(v)}(\mu)=$
$\displaystyle-\frac{\alpha_{s}C_{F}}{4\pi}\left[3\log\left(\frac{\mu}{m_{b}}\right)+4\right]+\mathcal{O}(\alpha_{s}^{2})\,,$
(2.6) $\displaystyle C_{1}^{(v)}(\mu)=$
$\displaystyle+\frac{\alpha_{s}C_{F}}{2\pi}+\mathcal{O}(\alpha_{s}^{2})\,,$
$\displaystyle C_{0}^{(t)}(\mu)=$
$\displaystyle-\frac{\alpha_{s}C_{F}}{4\pi}\left[5\log\left(\frac{\mu}{m_{b}}\right)+4\right]+\mathcal{O}(\alpha_{s}^{2})\,.$
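As a quick numerical illustration, the matching coefficients of Eq. (2.6) can be evaluated at $\mu=2\,$GeV. The inputs $\alpha_{s}(2\,\text{GeV})\approx 0.30$ and $m_{b}\approx 4.8\,$GeV are assumed illustrative values, not the ones used in the fit.

```python
import math

C_F = 4.0 / 3.0  # quadratic Casimir of the SU(3) fundamental representation

def matching_coeffs(alpha_s, mu, m_b):
    """NLO matching coefficients of Eq. (2.6); alpha_s(mu) and m_b (GeV)
    are assumed inputs chosen for illustration only."""
    pref = alpha_s * C_F / (4.0 * math.pi)
    log = math.log(mu / m_b)
    c0v = -pref * (3.0 * log + 4.0)
    c1v = alpha_s * C_F / (2.0 * math.pi)
    c0t = -pref * (5.0 * log + 4.0)
    return c0v, c1v, c0t

c0v, c1v, c0t = matching_coeffs(0.30, 2.0, 4.8)
print(f"C0^(v) = {c0v:+.4f}, C1^(v) = {c1v:+.4f}, C0^(t) = {c0t:+.4f}")
```

At this scale the logarithm is negative, so the tensor coefficient $C_{0}^{(t)}$ flips sign relative to $C_{0}^{(v)}$, and all three corrections are at the few-percent level.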
For numerical purposes, the scale of the Wilson coefficients is set to
$\mu\sim 2\,\text{GeV}$. The NLP $1/m_{b}$ corrections due to the expansion of
the current are parametrised as
$\langle\Lambda^{*}(k,\eta,s_{\Lambda})|\bar{s}\Delta
J^{\mu}_{V(A,T,T5)}b|\Lambda_{b}(p,s_{b})\rangle=\sqrt{4}\sum_{i}\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\zeta^{\alpha\beta}[\mathcal{O}_{i}^{V(A,T,T5)}]^{\mu}_{\beta}u(m_{\Lambda_{b}}v,s_{b})\,,$
(2.7)
where
$\zeta^{\alpha\beta}=g^{\alpha\beta}\left[\zeta^{\text{SL}}_{1}+\not{v}\zeta^{\text{SL}}_{2}\right]+v^{\alpha}v^{\beta}\left[\zeta^{\text{SL}}_{3}+\not{v}\zeta^{\text{SL}}_{4}\right]+v^{\alpha}\gamma^{\beta}\left[\zeta_{5}^{\text{SL}}+\not{v}\zeta^{\text{SL}}_{6}\right]\,.$
(2.8)
The functions $\zeta^{\text{SL}}_{1\dots 6}$ are the subleading Isgur-Wise
functions, and they correspond to all the independent Dirac structures that
can appear in $\zeta^{\alpha\beta}$. The possible operators
$\mathcal{O}^{\mu\beta}_{i}$ are listed in Ref. [25]. Out of the possible six
of them, only the operator $[\mathcal{O}_{1}^{\Gamma}]^{\mu}_{\beta}$ arises
at order $1/m_{b}$, while the others are suppressed by
$\mathcal{O}(\alpha_{s}/m_{b})$ and therefore are beyond the precision here
required. Therefore, the only contributions that I consider for this analysis
come from:
$\displaystyle\left[\mathcal{O}_{1}^{V}\right]^{\mu}_{\beta}=$
$\displaystyle+\gamma^{\mu}\gamma_{\beta}\,,$
$\displaystyle\left[\mathcal{O}_{1}^{A}\right]^{\mu}_{\beta}=$
$\displaystyle-\gamma_{5}\gamma^{\mu}\gamma_{\beta}\,,$ (2.9)
$\displaystyle\left[\mathcal{O}_{1}^{T}\right]^{\mu}_{\beta}=$
$\displaystyle+i\sigma^{\mu\nu}q_{\nu}\gamma_{\beta}\,,$
$\displaystyle\left[\mathcal{O}_{1}^{T5}\right]^{\mu}_{\beta}=$
$\displaystyle+i\gamma_{5}\sigma^{\mu\nu}q_{\nu}\gamma_{\beta}\,.$
By means of Dirac algebra, and using the properties of Rarita-Schwinger
objects in Ref. [23], inserting Eq. (2.9) in Eq. (2.8) yields
$\displaystyle\langle\Lambda^{*}(k,\eta,s_{\Lambda})|\bar{s}\Delta
J^{\mu}_{V}b|\Lambda_{b}(p,s_{b})\rangle=2\bigg{\\{}$ $\displaystyle
2\bar{u}_{\alpha}(k,\eta,s_{\Lambda})u(m_{\Lambda_{b}}v,s_{b})g^{\alpha\mu}(\zeta^{\text{SL}}_{1}+\zeta^{\text{SL}}_{2})$
$\displaystyle+\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma^{\mu}(m_{\Lambda_{b}}v,s_{b})v^{\alpha}(\zeta^{\text{SL}}_{3}-\zeta^{\text{SL}}_{4}-2\zeta^{\text{SL}}_{2}-2\zeta^{\text{SL}}_{5})$
$\displaystyle+2\bar{u}_{\alpha}(k,\eta,s_{\Lambda})u(m_{\Lambda_{b}}v,s_{b})v^{\alpha}v^{\mu}(\zeta^{\text{SL}}_{4}+2\zeta^{\text{SL}}_{6})\bigg{\\}}\,,$
(2.10) $\displaystyle\langle\Lambda^{*}(k,\eta,s_{\Lambda})|\bar{s}\Delta
J^{\mu}_{A}b|\Lambda_{b}(p,s_{b})\rangle=2\bigg{\\{}$ $\displaystyle
2\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma_{5}u(m_{\Lambda_{b}}v,s_{b})g^{\alpha\mu}(-\zeta^{\text{SL}}_{1}+\zeta^{\text{SL}}_{2})$
$\displaystyle-\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma_{5}\gamma^{\mu}(m_{\Lambda_{b}}v,s_{b})v^{\alpha}(\zeta^{\text{SL}}_{3}+\zeta^{\text{SL}}_{4}+2\zeta^{\text{SL}}_{2}+2\zeta^{\text{SL}}_{5})$
$\displaystyle+2\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma_{5}u(m_{\Lambda_{b}}v,s_{b})v^{\alpha}v^{\mu}(\zeta^{\text{SL}}_{4}-2\zeta^{\text{SL}}_{6})\bigg{\\}}\,,$
(2.11) $\displaystyle\langle\Lambda^{*}(k,\eta,s_{\Lambda})|\bar{s}\Delta
J^{\mu}_{T}b|\Lambda_{b}(p,s_{b})\rangle=2\bigg{\\{}$
$\displaystyle-\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma^{\mu}u(m_{\Lambda_{b}}v,s_{b})v^{\alpha}\bigg{[}2m_{\Lambda_{b}}\zeta_{1}^{\text{SL}}+2m_{\Lambda^{*}}\zeta_{2}^{\text{SL}}$
$\displaystyle+(m_{\Lambda_{b}}+m_{\Lambda^{*}})(\zeta_{3}^{\text{SL}}+2\zeta_{6}^{\text{SL}})+\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda_{b}}m_{\Lambda^{*}}-q^{2}}{m_{\Lambda_{b}}}\zeta_{4}^{\text{SL}}\bigg{]}$
$\displaystyle+2\bar{u}_{\mu}(k,\eta,s_{\Lambda})u(m_{\Lambda_{b}}v,s_{b})\bigg{[}(m_{\Lambda_{b}}-m_{\Lambda^{*}})\zeta_{1}^{\text{SL}}-\frac{m_{\Lambda^{*}}^{2}-m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2}}{m_{\Lambda_{b}}}\zeta_{2}^{\text{SL}}\bigg{]}$
$\displaystyle+2\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma^{\mu}u(m_{\Lambda_{b}}v,s_{b})k^{\alpha}(\zeta_{1}^{\text{SL}}-\zeta_{2}^{\text{SL}})$
$\displaystyle+\bar{u}_{\alpha}(k,\eta,s_{\Lambda})u(m_{\Lambda_{b}}v,s_{b})v^{\alpha}k^{\mu}(\zeta_{3}^{\text{SL}}+2\zeta_{2}^{\text{SL}}+\zeta_{4}^{\text{SL}}+2\zeta_{6}^{\text{SL}})$
$\displaystyle+4\bar{u}_{\alpha}(k,\eta,s_{\Lambda})u(m_{\Lambda_{b}}v,s_{b})v^{\mu}k^{\alpha}\zeta_{2}^{\text{SL}}$
$\displaystyle+\bar{u}_{\alpha}(k,\eta,s_{\Lambda})u(m_{\Lambda_{b}}v,s_{b})v^{\alpha}v^{\mu}$
$\displaystyle\times\big{[}m_{\Lambda_{b}}\zeta_{3}^{\text{SL}}-2m_{\Lambda_{b}}\zeta_{2}^{\text{SL}}+(2m_{\Lambda^{*}}-m_{\Lambda_{b}})\zeta_{4}^{\text{SL}}+2m_{\Lambda_{b}}\zeta_{6}^{\text{SL}}\big{]}\bigg{\\}}\,,$
(2.12) $\displaystyle\langle\Lambda^{*}(k,\eta,s_{\Lambda})|\bar{s}\Delta
J^{\mu}_{T5}b|\Lambda_{b}(p,s_{b})\rangle=2\bigg{\\{}$
$\displaystyle-\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma_{5}\gamma^{\mu}u(m_{\Lambda_{b}}v,s_{b})v^{\alpha}\bigg{[}m_{\Lambda_{b}}\zeta_{1}^{\text{SL}}+2m_{\Lambda^{*}}\zeta_{2}^{\text{SL}}$
$\displaystyle+(m_{\Lambda_{b}}-m_{\Lambda^{*}})(\zeta_{3}^{\text{SL}}+2\zeta_{6}^{\text{SL}})-\frac{m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}m_{\Lambda^{*}}-q^{2}}{m_{\Lambda_{b}}}\zeta_{4}^{\text{SL}}\bigg{]}$
$\displaystyle+2\bar{u}_{\mu}(k,\eta,s_{\Lambda})\gamma_{5}u(m_{\Lambda_{b}}v,s_{b})\bigg{[}(m_{\Lambda_{b}}+m_{\Lambda^{*}})\zeta_{1}^{\text{SL}}+\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2}}{m_{\Lambda_{b}}}\zeta_{2}^{\text{SL}}\bigg{]}$
$\displaystyle+2\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma_{5}\gamma^{\mu}u(m_{\Lambda_{b}}v,s_{b})k^{\alpha}(\zeta_{1}^{\text{SL}}+\zeta_{2}^{\text{SL}})$
$\displaystyle+\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma_{5}u(m_{\Lambda_{b}}v,s_{b})v^{\alpha}k^{\mu}(+\zeta_{3}^{\text{SL}}-2\zeta_{2}^{\text{SL}}-\zeta_{4}^{\text{SL}}+2\zeta_{6}^{\text{SL}})$
$\displaystyle-4\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma_{5}u(m_{\Lambda_{b}}v,s_{b})v^{\mu}k^{\alpha}\zeta_{2}^{\text{SL}}$
$\displaystyle+\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\gamma_{5}u(m_{\Lambda_{b}}v,s_{b})v^{\alpha}v^{\mu}$
$\displaystyle\times\big{[}m_{\Lambda_{b}}\zeta_{3}^{\text{SL}}+2m_{\Lambda_{b}}\zeta_{2}^{\text{SL}}+(2m_{\Lambda^{*}}+m_{\Lambda_{b}})\zeta_{4}^{\text{SL}}+2m_{\Lambda_{b}}\zeta_{6}^{\text{SL}}\big{]}\bigg{\\}}\,.$
(2.13)
The number of independent subleading IW functions can be reduced by using
equations of motions. In particular, for this decay, the relation
$v_{\beta}\zeta^{\alpha\beta}=0$ gives the following conditions:
$\displaystyle\zeta_{1}^{\text{SL}}+\zeta_{3}^{\text{SL}}+\zeta_{6}^{\text{SL}}=0\,,$
(2.14)
$\displaystyle\zeta_{2}^{\text{SL}}+\zeta_{4}^{\text{SL}}+\zeta_{5}^{\text{SL}}=0\,.$
(2.15)
I choose to retain as independent quantities the subleading IW functions
$\zeta_{1}^{\text{SL}}$, $\zeta_{2}^{\text{SL}}$, $\zeta_{3}^{\text{SL}}$ and
$\zeta_{4}^{\text{SL}}$.
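Explicitly, solving Eqs. (2.14) and (2.15) for the two eliminated functions gives
$\zeta_{6}^{\text{SL}}=-\left(\zeta_{1}^{\text{SL}}+\zeta_{3}^{\text{SL}}\right)\,,\qquad\zeta_{5}^{\text{SL}}=-\left(\zeta_{2}^{\text{SL}}+\zeta_{4}^{\text{SL}}\right)\,,$
so that only four subleading IW functions enter the matching.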
Corrections to the form factors also arise from non-local insertions of the
heavy quark Lagrangian at order $1/m_{b}$. Following the discussions in
Refs. [28, 29, 25], a non-local insertion of the kinetic operator yields a
universal shift proportional to the tree-level matrix elements. Hence I
reabsorb this shift into a redefinition of the leading-order IW function
$\zeta_{1}$. A non-local insertion of the chromomagnetic operator can be
parametrised as [30]
$R_{\mu\nu}^{\alpha}\bar{u}_{\alpha}(k,\eta,s_{\Lambda})\Gamma\frac{1+\not{v}}{2}i\sigma^{\mu\nu}u(m_{\Lambda_{b}}v,s_{b})\,,$
(2.16)
where the object $R_{\mu\nu}^{\alpha}$ is an antisymmetric tensor in the
indices $\mu$ and $\nu$ and contains the velocity $v$. $\Gamma$ symbolises all
the possible Dirac structures which mediate $\Lambda_{b}\to\Lambda^{*}$
transitions. Using the equations of motion, it can be shown that no possible
form of $R_{\mu\nu}^{\alpha}$ yields a non-vanishing contribution from the
chromomagnetic operator.
The expressions of the form factors in terms of the leading and subleading IW
functions are obtained by matching the helicity amplitudes with their HQE
expansion. For the vector current, by comparing Eqs. (A.17)–(A.20) to Eqs.
(A.33)–(A.36), I get
$\displaystyle F_{1/2,0}=$
$\displaystyle\frac{\sqrt{s_{+}}}{2m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{3/2}}\bigg{\\{}\zeta_{1}s_{-}\left[(1+C_{0}^{(v)})+C_{1}^{(v)}\frac{s_{+}}{2m_{\Lambda_{b}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})}\right]$
$\displaystyle+$
$\displaystyle\zeta_{2}s_{-}\left[(1+C_{0}^{(v)})\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2}}{m_{\Lambda_{b}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})}+C_{1}^{(v)}\frac{s_{+}}{2m_{\Lambda_{b}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})}\right]$
$\displaystyle-$
$\displaystyle\zeta_{1}^{\text{SL}}\frac{\lambda+m_{\Lambda_{b}}^{2}(m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}+q^{2})}{m_{\Lambda_{b}}^{2}(m_{\Lambda^{*}}+m_{\Lambda_{b}})}-\zeta_{2}^{\text{SL}}\frac{(m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}+q^{2})}{(m_{\Lambda^{*}}+m_{\Lambda_{b}})}$
$\displaystyle-$ $\displaystyle
s_{-}\left[\zeta_{3}^{\text{SL}}\frac{(2m_{\Lambda^{*}}^{2}+3m_{\Lambda^{*}}m_{\Lambda_{b}}+m_{\Lambda_{b}}^{2}-2q^{2})}{2m_{\Lambda_{b}}^{2}(m_{\Lambda^{*}}+m_{\Lambda_{b}})}-\zeta_{4}^{\text{SL}}\frac{(m_{\Lambda^{*}}^{2}+3m_{\Lambda^{*}}m_{\Lambda_{b}}+2m_{\Lambda_{b}}^{2}-q^{2})}{2m_{\Lambda_{b}}^{2}(m_{\Lambda^{*}}+m_{\Lambda_{b}})}\right]\bigg{\\}}\,,$
(2.17) $\displaystyle F_{1/2,t}=$
$\displaystyle\frac{\sqrt{s_{-}}s_{+}}{2m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{3/2}}\bigg{\\{}\zeta_{1}\left[(1+C_{0}^{(v)})-C_{1}^{(v)}\frac{m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}-q^{2}}{2m_{\Lambda_{b}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}\right]$
$\displaystyle-$
$\displaystyle\zeta_{2}\frac{1}{2m_{\Lambda_{b}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}\left[2(1+C_{0}^{(v)})(m_{\Lambda^{*}}^{2}-m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2})+C_{1}^{(v)}(m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}-q^{2})\right]$
$\displaystyle-$
$\displaystyle\frac{1}{m_{\Lambda_{b}}-m_{\Lambda^{*}}}\left[\frac{s-m_{\Lambda^{*}}^{2}}{m_{\Lambda_{b}}^{2}}\zeta_{1}^{\text{SL}}-\zeta_{2}^{\text{SL}}\right]$
$\displaystyle+$
$\displaystyle\left[\frac{2m_{\Lambda^{*}}^{2}-m_{\Lambda^{*}}m_{\Lambda_{b}}-m_{\Lambda_{b}}^{2}-2q^{2}}{2m_{\Lambda_{b}}^{2}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}\zeta_{3}^{\text{SL}}-\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-2m_{\Lambda_{b}}^{2}-q^{2}}{2m_{\Lambda_{b}}^{2}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}\zeta_{4}^{\text{SL}}\right]\,,$
(2.18) $\displaystyle F_{1/2,\perp}=$
$\displaystyle\frac{\sqrt{s_{+}}}{2m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{3/2}}\bigg{\\{}s_{-}(1+C_{0}^{(v)})(\zeta_{1}-\zeta_{2})-m_{\Lambda^{*}}(\zeta_{1}^{\text{SL}}+\zeta_{2}^{\text{SL}})+\frac{s_{-}}{2m_{\Lambda_{b}}}(\zeta_{3}^{\text{SL}}+\zeta_{4}^{\text{SL}})\bigg{\\}}\,,$
(2.19) $\displaystyle F_{3/2,\perp}=$
$\displaystyle-\frac{\sqrt{s_{+}}}{2m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{1/2}}\bigg{\\{}\zeta_{1}^{\text{SL}}+\zeta_{2}^{\text{SL}}\bigg{\\}}\,,$
(2.20)
and for the axial vector current, by matching Eqs. (A.21)–(A.24) to Eqs.
(A.37)–(A.40) I obtain
$\displaystyle G_{1/2,0}=$
$\displaystyle\frac{\sqrt{s_{-}}}{2m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{3/2}}\bigg{\\{}\zeta_{1}s_{+}\left[(1+C_{0}^{(v)})+C_{1}^{(v)}\frac{s_{-}}{2m_{\Lambda_{b}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}\right]$
$\displaystyle-$
$\displaystyle\zeta_{2}s_{+}\left[(1+C_{0}^{(v)})\frac{m_{\Lambda^{*}}^{2}-m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2}}{m_{\Lambda_{b}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}-C_{1}^{(v)}\frac{s_{-}}{2m_{\Lambda_{b}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}\right]$
$\displaystyle-$
$\displaystyle\zeta_{1}^{\text{SL}}\frac{\lambda+m_{\Lambda_{b}}^{2}(m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}+q^{2})}{m_{\Lambda_{b}}^{2}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}+\zeta_{2}^{\text{SL}}\frac{(m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}+q^{2})}{(m_{\Lambda_{b}}-m_{\Lambda^{*}})}$
$\displaystyle-$ $\displaystyle
s_{+}\left[\zeta_{3}^{\text{SL}}\frac{(2m_{\Lambda^{*}}^{2}-3m_{\Lambda^{*}}m_{\Lambda_{b}}+m_{\Lambda_{b}}^{2}-2q^{2})}{2m_{\Lambda_{b}}^{2}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}+\zeta_{4}^{\text{SL}}\frac{(m_{\Lambda^{*}}^{2}-3m_{\Lambda^{*}}m_{\Lambda_{b}}+2m_{\Lambda_{b}}^{2}-q^{2})}{2m_{\Lambda_{b}}^{2}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}\right]\bigg{\\}}\,,$
(2.21) $\displaystyle G_{1/2,t}=$
$\displaystyle\frac{\sqrt{s_{+}}s_{-}}{2m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{3/2}}\bigg{\\{}\zeta_{1}\left[(1+C_{0}^{(v)})-C_{1}^{(v)}\frac{m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}-q^{2}}{2m_{\Lambda_{b}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})}\right]$
$\displaystyle+$
$\displaystyle\zeta_{2}\frac{1}{2m_{\Lambda_{b}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})}\left[2(1+C_{0}^{(v)})(m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2})-C_{1}^{(v)}(m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}-q^{2})\right]$
$\displaystyle-$
$\displaystyle\frac{1}{m_{\Lambda_{b}}+m_{\Lambda^{*}}}\left[\frac{s-m_{\Lambda^{*}}^{2}}{m_{\Lambda_{b}}^{2}}\zeta_{1}^{\text{SL}}+\zeta_{2}^{\text{SL}}\right]$
$\displaystyle+$
$\displaystyle\left[\frac{2m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-m_{\Lambda_{b}}^{2}-2q^{2}}{2m_{\Lambda_{b}}^{2}(m_{\Lambda_{b}}+m_{\Lambda^{*}})}\zeta_{3}^{\text{SL}}+\frac{m_{\Lambda^{*}}^{2}-m_{\Lambda^{*}}m_{\Lambda_{b}}-2m_{\Lambda_{b}}^{2}-q^{2}}{2m_{\Lambda_{b}}^{2}(m_{\Lambda_{b}}+m_{\Lambda^{*}})}\zeta_{4}^{\text{SL}}\right]\,,$
(2.22) $\displaystyle G_{1/2,\perp}=$
$\displaystyle\frac{\sqrt{s_{-}}}{2m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{3/2}}\bigg{\\{}s_{+}\left[(1+C_{0}^{(v)})(\zeta_{1}+\zeta_{2})\right]+m_{\Lambda^{*}}(\zeta_{1}^{\text{SL}}-\zeta_{2}^{\text{SL}})+\frac{s_{+}}{2m_{\Lambda_{b}}}(\zeta_{3}^{\text{SL}}-\zeta_{4}^{\text{SL}})\bigg{\\}}\,,$
(2.23) $\displaystyle G_{3/2,\perp}=$
$\displaystyle-\frac{\sqrt{s_{-}}}{2m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{1/2}}\bigg{\\{}\zeta_{1}^{\text{SL}}-\zeta_{2}^{\text{SL}}\bigg{\\}}\,.$
(2.24)
For the tensor current, the comparison between Eqs. (A.25)–(A.27) and Eqs.
(A.41)–(A.43) yields
$\displaystyle T_{1/2,0}=$
$\displaystyle\frac{\sqrt{s_{+}}}{m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{1/2}}\bigg{\\{}(1+C_{0}^{(t)})(\zeta_{1}-\zeta_{2})s_{-}-\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda_{b}}^{2}-q^{2}}{m_{\Lambda_{b}}}(\zeta_{1}^{\text{SL}}-\zeta_{2}^{\text{SL}})$
$\displaystyle-\frac{s_{-}}{2m_{\Lambda_{b}}}(\zeta_{4}^{\text{SL}}+\zeta_{3}^{\text{SL}})\bigg{\\}}\,,$
(2.25) $\displaystyle T_{1/2,\perp}=$
$\displaystyle\frac{\sqrt{s_{+}}}{m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{1/2}}\bigg{\\{}(1+C_{0}^{(t)})\left[\zeta_{1}+\frac{m_{\Lambda^{*}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})-q^{2}}{m_{\Lambda_{b}}(m_{\Lambda^{*}}+m_{\Lambda_{b}})}\zeta_{2}\right]s_{-}$
$\displaystyle+$ $\displaystyle
m_{\Lambda^{*}}\frac{m_{\Lambda^{*}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})+q^{2}}{m_{\Lambda_{b}}(m_{\Lambda^{*}}+m_{\Lambda_{b}})}\zeta_{1}^{\text{SL}}+\frac{m_{\Lambda^{*}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}{m_{\Lambda^{*}}+m_{\Lambda_{b}}}\zeta_{2}^{\text{SL}}$
$\displaystyle+$
$\displaystyle\frac{s_{-}}{2m_{\Lambda_{b}}}\left[-\zeta_{3}^{\text{SL}}+\frac{m_{\Lambda^{*}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})-q^{2}}{m_{\Lambda_{b}}(m_{\Lambda^{*}}+m_{\Lambda_{b}})}\zeta_{4}^{\text{SL}}\right]\bigg{\\}}\,,$
(2.26) $\displaystyle T_{3/2,\perp}=$
$\displaystyle+\frac{\sqrt{s_{+}}}{m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{1/2}}\bigg{\\{}-\zeta_{1}^{\text{SL}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})+\frac{m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}m_{\Lambda^{*}}-q^{2}}{m_{\Lambda_{b}}}\zeta_{2}^{\text{SL}}\bigg{\\}}\,,$
(2.27)
while for the axial-tensor form factors the comparison between Eqs.
(A.29)–(A.31) and Eqs. (A.44)–(A.46) gives
$\displaystyle T_{1/2,0}^{5}=$
$\displaystyle\frac{\sqrt{s_{-}}}{m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{1/2}}\bigg{\\{}(1+C_{0}^{(t)})(\zeta_{2}+\zeta_{1})s_{+}-\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda_{b}}^{2}-q^{2}}{m_{\Lambda_{b}}}(\zeta_{1}^{\text{SL}}-\zeta_{2}^{\text{SL}})$
$\displaystyle-$
$\displaystyle\frac{s_{+}}{2m_{\Lambda_{b}}}(\zeta_{3}^{\text{SL}}-\zeta_{4}^{\text{SL}})\bigg{\\}}\,,$
(2.28) $\displaystyle T_{1/2,\perp}^{5}=$
$\displaystyle\frac{\sqrt{s_{-}}}{m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{1/2}}\bigg{\\{}(1+C_{0}^{(t)})\left[\zeta_{1}-\frac{m_{\Lambda^{*}}(-m_{\Lambda_{b}}+m_{\Lambda^{*}})-q^{2}}{m_{\Lambda_{b}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}\zeta_{2}\right]s_{+}$
$\displaystyle+$ $\displaystyle
m_{\Lambda^{*}}\frac{m_{\Lambda^{*}}(+m_{\Lambda_{b}}+m_{\Lambda^{*}})-q^{2}}{m_{\Lambda_{b}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}\zeta_{1}^{\text{SL}}+\frac{m_{\Lambda^{*}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})}{m_{\Lambda_{b}}-m_{\Lambda^{*}}}\zeta_{2}^{\text{SL}}$
$\displaystyle-$
$\displaystyle\frac{s_{+}}{2m_{\Lambda_{b}}}\left[\zeta_{3}^{\text{SL}}+\frac{m_{\Lambda^{*}}(-m_{\Lambda_{b}}+m_{\Lambda^{*}})-q^{2}}{m_{\Lambda_{b}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})}\zeta_{4}^{\text{SL}}\right]\bigg{\\}}\,,$
(2.29) $\displaystyle T_{3/2,\perp}^{5}=$
$\displaystyle-\frac{\sqrt{s_{-}}}{m_{\Lambda_{b}}^{3/2}m_{\Lambda^{*}}^{1/2}}\bigg{\\{}\zeta_{1}^{\text{SL}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})+\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda_{b}}m_{\Lambda^{*}}-q^{2}}{m_{\Lambda_{b}}}\zeta_{2}^{\text{SL}}\bigg{\\}}\,.$
(2.30)
The expressions in Eqs. (2.17)–(2.30) have been checked against the results in
Ref. [21], where the HQE for the form factors including NLO $\alpha_{s}$
corrections is presented. I find agreement with the expressions reported in
Ref. [21], apart from a sign flip in the term proportional to
$C_{1}^{(v)}\zeta_{2}$ in $G_{1/2,t}$. This discrepancy is not expected to
invalidate the analysis in Ref. [21], since it appears in a term that is
$\alpha_{s}$ suppressed. In the following analysis, I stick to my findings and
adopt the signs in Eq. (2.22).
## 3 Form factor relations and comparison with lattice QCD results
The first lattice QCD calculation for the full basis of
$\Lambda_{b}\to\Lambda^{*}$ form factors is presented in Ref. [22]. The
calculation is performed in the low-recoil region, very close to the zero-
recoil point $q^{2}_{\text{max}}=(m_{\Lambda_{b}}-m_{\Lambda^{*}})^{2}$. Two
lattice points per form factor are obtained, allowing one to determine the
normalisation and the slope of each form factor. In the kinematical limit
where the lattice QCD computation is valid, it is more convenient to
substitute the variable $q^{2}$ with the dimensionless variable $w=p\cdot
k/(m_{\Lambda_{b}}m_{\Lambda^{*}})=(m_{\Lambda_{b}}^{2}+m_{\Lambda^{*}}^{2}-q^{2})/(2m_{\Lambda_{b}}m_{\Lambda^{*}})$,
where the zero-recoil point corresponds to $w=1$. The continuum extrapolation
of the results in Ref. [22] is performed using the following functional form
for each form factor $f_{i}$:
$f_{i}=F_{i}+A_{i}(w-1)\,.$ (3.1)
Values for the coefficients $F_{i}$ and $A_{i}$ and their covariance matrix
are given in ancillary files of Ref. [22].
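As an illustration of Eq. (3.1) and of the change of variable from $q^{2}$ to $w$, consider the following minimal sketch. The coefficients `F` and `A` below are hypothetical placeholders for the values in the ancillary files of Ref. [22], and the baryon masses are approximate PDG values:

```python
# Kinematics for Lambda_b -> Lambda*(1520): convert q^2 to the
# dimensionless recoil variable w and evaluate the linear lattice
# parametrisation f_i(w) = F_i + A_i (w - 1) of Eq. (3.1).
M_LB = 5.6196   # m_{Lambda_b} in GeV (approximate PDG value)
M_LS = 1.5195   # m_{Lambda*(1520)} in GeV (approximate PDG value)

def w_of_q2(q2):
    """w = (m_Lb^2 + m_Ls^2 - q^2) / (2 m_Lb m_Ls); w = 1 at zero recoil."""
    return (M_LB**2 + M_LS**2 - q2) / (2.0 * M_LB * M_LS)

def f_lattice(w, F, A):
    """Linear continuum parametrisation of Eq. (3.1)."""
    return F + A * (w - 1.0)

q2_max = (M_LB - M_LS)**2       # zero-recoil point
print(round(w_of_q2(q2_max), 6))        # -> 1.0 by construction
print(f_lattice(1.02, F=1.2, A=-3.0))   # hypothetical coefficients
```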
Given that both the results in Ref. [22] and the parametrisation based on HQE
in Sect. 2 are valid in the low-recoil region, the former can be used to
extract the unknown hadronic parameters of the leading and sub-leading IW
functions introduced in Sect. 2. I stress that it is not possible to
extrapolate these results to the large-recoil region without further
information on the form factors at low $q^{2}$.
The form factor basis employed in Ref. [22] differs from the one presented in
Sect. 2, and the matching between the two is given in Appendix B. In the
following, I denote the HQE basis with capital letters and the basis of
Ref. [22] with lower-case letters.
### 3.1 Relations at the zero-recoil point
I study the form factors first at the zero-recoil point. At this particular
kinematical configuration, all the axial-vector and pseudo-tensor HQE form
factors vanish because they are weighted by the factor $s_{-}$. The remainder
simplifies further, since the terms associated with
$\zeta_{1},\zeta_{2},\zeta_{3}^{\text{SL}},\zeta_{4}^{\text{SL}}$ are always
proportional to $s_{-}$ and hence vanish, leaving only $\zeta_{1}^{\text{SL}}$
and $\zeta_{2}^{\text{SL}}$ to determine the form factors at the zero-recoil
point. Even more interestingly, from Eqs. (2.17)–(2.30) it can be seen that
only the combination $\zeta_{1}^{\text{SL}}+\zeta_{2}^{\text{SL}}$ appears,
with a different weight for each form factor. Therefore, the HQE provides
predictions for ratios of form factors at zero recoil that are independent of
the IW functions. They read:
$\displaystyle\frac{F_{1/2,0}}{F_{1/2,\perp}}=$
$\displaystyle\,\frac{F_{1/2,0}}{F_{3/2,\perp}}=-2\frac{m_{\Lambda_{b}}-m_{\Lambda^{*}}}{m_{\Lambda_{b}}+m_{\Lambda^{*}}}=-1.15\,,$
$\displaystyle\frac{F_{1/2,\perp}}{F_{3/2,\perp}}=$ $\displaystyle\,1\,,$
(3.2) $\displaystyle\frac{T_{1/2,0}}{T_{1/2,\perp}}=$
$\displaystyle-2\frac{m_{\Lambda_{b}}+m_{\Lambda^{*}}}{m_{\Lambda_{b}}-m_{\Lambda^{*}}}=-3.48\,,$
$\displaystyle\frac{T_{1/2,0}}{T_{3/2,\perp}}=$
$\displaystyle\frac{2m_{\Lambda^{*}}}{m_{\Lambda_{b}}-m_{\Lambda^{*}}}=0.74\,,$
$\displaystyle\frac{T_{1/2,\perp}}{T_{3/2,\perp}}=$
$\displaystyle-\frac{m_{\Lambda^{*}}}{m_{\Lambda_{b}}+m_{\Lambda^{*}}}=-0.21\,,$
where errors on the baryon masses have been neglected. These predictions have
to be compared with the results in Ref. [22]. Extracting pseudodata for the
parameters entering the lattice QCD form factors, according to their central
values and covariance matrix, I find (in the HQE basis)
$\displaystyle\frac{F_{1/2,0}^{\text{latt}}}{F_{1/2,\perp}^{\text{latt}}}=$
$\displaystyle-0.63\pm 1.82\,,$
$\displaystyle\frac{F_{1/2,0}^{\text{latt}}}{F_{3/2,\perp}^{\text{latt}}}=$
$\displaystyle-0.94\pm 0.14\,,$
$\displaystyle\frac{F_{1/2,\perp}^{\text{latt}}}{F_{3/2,\perp}^{\text{latt}}}=$
$\displaystyle\,1.48\pm 0.39\,,$ (3.3)
$\displaystyle\frac{T_{1/2,0}^{\text{latt}}}{T_{1/2,\perp}^{\text{latt}}}=$
$\displaystyle-5.62\pm 7.86\,,$
$\displaystyle\frac{T_{1/2,0}^{\text{latt}}}{T_{3/2,\perp}^{\text{latt}}}=$
$\displaystyle+1.03\pm 0.20\,,$
$\displaystyle\frac{T_{1/2,\perp}^{\text{latt}}}{T_{3/2,\perp}^{\text{latt}}}=$
$\displaystyle-0.18\pm 0.05\,.$
The central values shown above are the medians of the distributions, and the
errors correspond to the $68\%$ intervals. Concerning the ratios
$F_{1/2,0}^{\text{latt}}/F_{1/2,\perp}^{\text{latt}}$ and
$T_{1/2,0}^{\text{latt}}/T_{1/2,\perp}^{\text{latt}}$, I stress that their
uncertainties are very large, reflecting the fact that their distributions are
highly non-Gaussian. For the other ratios, the Gaussian approximation
describes the lattice data well. The predictions for the ratios obtained in
the HQE framework and those from the lattice QCD results agree within $\sim
1.1\sigma$.
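Since the ratios in Eq. (3.2) depend only on the baryon masses, they can be reproduced with a few lines of code (a minimal sketch; the masses are approximate PDG values):

```python
# Zero-recoil HQE predictions of Eq. (3.2): the single surviving
# combination zeta_1^SL + zeta_2^SL cancels in each ratio, so the
# predictions depend only on the baryon masses.
m_Lb, m_Ls = 5.6196, 1.5195   # GeV, approximate PDG values

F12_0_over_Fperp    = -2 * (m_Lb - m_Ls) / (m_Lb + m_Ls)  # F_{1/2,0}/F_{1/2,perp}
T12_0_over_T12perp  = -2 * (m_Lb + m_Ls) / (m_Lb - m_Ls)  # T_{1/2,0}/T_{1/2,perp}
T12_0_over_T32perp  =  2 * m_Ls / (m_Lb - m_Ls)           # T_{1/2,0}/T_{3/2,perp}
T12perp_over_T32perp = -m_Ls / (m_Lb + m_Ls)              # T_{1/2,perp}/T_{3/2,perp}

for r in (F12_0_over_Fperp, T12_0_over_T12perp,
          T12_0_over_T32perp, T12perp_over_T32perp):
    print(round(r, 2))   # -> -1.15, -3.48, 0.74, -0.21 as in Eq. (3.2)
```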
### 3.2 Form factors parametrisation and fit
The HQE of the $\Lambda_{b}\to\Lambda^{*}$ form factors presented in Sect. 2
depends on 6 unknown IW functions. The HQE does not predict the $w$ dependence
of the IW functions, which has to be inferred from other principles. Since the
HQE formalism for $b\to s$ transitions is mainly valid in the low-recoil
region, the form factors can be expanded around the zero-recoil point.
Substituting $q^{2}$ with $w$, the IW functions $\zeta_{i}$ can be expanded as
$\zeta_{i}=\sum_{n=0}^{N}\frac{\zeta_{i}^{(n)}}{n!}(w-1)^{n}\,.$ (3.4)
The truncation order $N$ of the expansion depends on the precision required
and on how far from $w=1$ the form factors are evaluated. The parameters
$\zeta_{i}^{(n)}$ are unknown and have to be fixed using some dynamical
information. Notice that, since the $\Lambda^{*}$ is not a ground-state
baryon, the HQE does not predict any normalisation for the leading IW
functions.
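The truncated expansion of Eq. (3.4) can be sketched directly; the coefficients used below are hypothetical, since the actual $\zeta_{i}^{(n)}$ are fit parameters:

```python
import math

def zeta(w, coeffs):
    """Truncated IW expansion of Eq. (3.4):
    zeta(w) = sum_n zeta^(n)/n! * (w - 1)^n,
    with coeffs = [zeta^(0), zeta^(1), ..., zeta^(N)]."""
    return sum(c / math.factorial(n) * (w - 1.0)**n
               for n, c in enumerate(coeffs))

# N = 1 truncation with hypothetical coefficients:
print(round(zeta(1.04, [0.45, 0.11]), 6))   # -> 0.4544
```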
For convenience, I perform a fit to the lattice QCD data using the basis of
Ref. [22], using the chiral and continuum extrapolation provided there to
obtain pseudo-points for the form factors. Since the lattice QCD data do not
provide information on the curvature of the form factors, it is useful to
express the HQE form factors in the form of Eq. (3.1). To this end, I use
the parametrisation in Eq. (3.4) up to $N=1$ and then re-expand the full form
factors up to first order in $w-1$. After this procedure, it can be noticed
that i) the parameters $\zeta_{1}^{(1)}$ and $\zeta_{2}^{(1)}$ always appear
in the combination $\zeta_{1}^{(1)}+\zeta_{2}^{(1)}$, and ii) the parameters
$\zeta_{3}^{\text{SL},(1)}$ and $\zeta_{4}^{\text{SL},(1)}$ always appear in
the combination $\zeta_{4}^{\text{SL},(1)}-\zeta_{3}^{\text{SL},(1)}$. These
parameters therefore cannot be determined on their own; only the combinations
$\zeta_{1}^{(1)}+\zeta_{2}^{(1)}$ and
$\zeta_{4}^{\text{SL},(1)}-\zeta_{3}^{\text{SL},(1)}$ are accessible, which
leaves $10$ independent, unknown HQE parameters.
Before discussing the fit results, a couple of technical comments are in
order:
1.
Given the available information in Ref. [22], it is possible to use two
pseudo-points for each form factor. I choose to evaluate them at $w=1.02$ and
$w=1.04$. Given that the HQE parametrisation depends on fewer parameters than
the lattice QCD one, it is possible to fit only a subset of the lattice QCD
data. In particular, I choose to employ the data on the vector and
axial-vector form factors and to provide predictions for the tensor and
pseudo-tensor form factors based on the fit results. I comment on the
consequences of this choice in the following. I stress that this is a common
procedure that has already been employed in Ref. [31].
2.
The HQE-based form factors are affected by uncertainties due to unknown
contributions from higher orders of the expansion. By naive dimensional
arguments, these contributions are expected to be roughly
$\mathcal{O}(\text{few}\,\%)$. Hence, I introduce an uncorrelated $1\%$
uncertainty on all HQE expressions of the form factors to take these effects
into account. Comments on this choice can be found later in the text.
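The two steps above can be sketched as follows. This is a toy example with hypothetical values for one form factor's $(F_{i},A_{i})$ and their covariance, not the actual lattice inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the (F_i, A_i) of one lattice form factor:
mean = np.array([1.20, -3.00])          # hypothetical (F, A)
cov = np.array([[0.04**2, 0.0],
                [0.0,     0.30**2]])    # hypothetical covariance

# Step 1: pseudo-points at w = 1.02 and w = 1.04 via Eq. (3.1).
samples = rng.multivariate_normal(mean, cov, size=5000)
w = np.array([1.02, 1.04])
pseudo = samples[:, [0]] + samples[:, [1]] * (w - 1.0)   # shape (5000, 2)

# Step 2: add an uncorrelated 1% theory uncertainty in quadrature
# on the HQE side of the comparison.
central = pseudo.mean(axis=0)
cov_latt = np.cov(pseudo, rowvar=False)
cov_tot = cov_latt + np.diag((0.01 * central)**2)
print(pseudo.shape, cov_tot.shape)
```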
Parameter | Best fit point
---|---
$\zeta_{1}^{(0)}$ | $0.454\pm 0.070$
$\zeta_{2}^{(0)}$ | $-0.0303\pm 0.0552$
$\zeta_{1}^{(1)}+\zeta_{2}^{(1)}$ | $0.113\pm 0.024$
$\zeta_{1}^{\text{SL},(0)}$ | $0.125\pm 0.038$
$\zeta_{1}^{\text{SL},(1)}$ | $0.0487\pm 0.0614$
$\zeta_{2}^{\text{SL},(0)}$ | $0.0110\pm 0.0363$
$\zeta_{2}^{\text{SL},(1)}$ | $0.00362\pm 0.06184$
$\zeta_{3}^{\text{SL},(0)}$ | $0.228\pm 0.190$
$\zeta_{4}^{\text{SL},(0)}$ | $0.0883\pm 0.185$
$\zeta_{4}^{\text{SL},(1)}-\zeta_{3}^{\text{SL},(1)}$ | $-0.0267\pm 0.0773$
Table 3.1: Best fit points for the HQE parameters.
The fit is performed with a $\chi^{2}$ minimisation, yielding at the minimum
$\chi^{2}_{\text{min}}/\text{d.o.f.}\sim 0.25$. This low value is a direct
consequence of the poor current knowledge of the exact size of the theory
uncertainties and their correlations. If the theory uncertainties were
uncorrelated, the fit procedure would indicate that their natural size is
smaller than the one inferred from HQE. However, such a low value for the
$\chi^{2}_{\text{min}}/\text{d.o.f.}$ could also indicate that strong
correlations among unknown higher-order terms in the HQE have been neglected.
At present, it is not possible to obtain a more precise estimate of the theory
uncertainties and their correlations. Therefore, I retain the conservative
choice of an uncorrelated $1\%$ uncertainty on all the form factors in the
analysis.
The best-fit point for the hadronic parameters and their uncertainties is
shown in Table 3.1, and the correlation matrix among the parameters is given
in Appendix C. With this result, I compare the HQE form factors to the lattice
QCD ones. The comparison is given in Fig. 3.1, showing excellent agreement
between the two parametrisations.
Figure 3.1: Comparison between the lattice results in Ref. [22] (grey band)
and the fit results for the HQE form factors (red band) for the vector and
axial-vector form factors. The two bands represent the $68\%$ interval.
I use the results of the fit to obtain predictions for the tensor and pseudo-
tensor form factors. The comparison between them and the lattice QCD
computation is shown in Fig. 3.2 (note that the lattice QCD results employed
here for the tensor and pseudo-tensor form factors differ by a global sign
with respect to the first version of Ref. [22]; this sign inconsistency will
be fixed in a forthcoming update of Ref. [22]). The form factors
$h_{\perp^{\prime}}$ and
$\tilde{h}_{\perp^{\prime}}$ show a tension between lattice QCD data and HQE
predictions. This tension manifests itself also when introducing the tensor
and pseudo-tensor form factors in the fit. This procedure increases the
$\chi^{2}/\text{d.o.f.}$, making it much higher than 1. This corroborates the
choice of excluding them from the fit procedure.
Figure 3.2: Comparison between the lattice results in Ref. [22] (grey band)
and the predictions for the tensor and pseudo-tensor form factors based on the
fit results in Table 3.1 (red band). The two bands represent the $68\%$
interval.
Sources of these tensions can be sought both in the lattice QCD data and in
the HQE framework. There are two main hypotheses: i) the uncertainties on the
lattice QCD parameters describing the tensor and pseudo-tensor form factors
are underestimated, and ii) missing corrections in the HQE cause a shift in
the hadronic parameters. For case i), I explicitly checked the result of
inflating the lattice QCD uncertainties by $20\%$ for $h_{\perp^{\prime}}$ and
$\tilde{h}_{\perp^{\prime}}$. The results of this test are shown in Fig. 3.3:
the compatibility improves slightly, but it remains poor in the case of
$\tilde{h}_{\perp^{\prime}}$. Concerning ii), the most important corrections
in the HQE, besides the ones already discussed in this work, arise
at order $\mathcal{O}(\alpha_{s}/m_{b})$, and could produce a
$\mathcal{O}(\text{few}\,\%)$ shift in the central value of the form factors
parameters. However, assessing the quantitative impact of these corrections
requires understanding how they affect the form factors in a correlated way.
From these estimates, it seems that neither i) nor ii) can explain the tension
on their own. Most likely, a combination of the two effects might help to
reconcile the HQE for the tensor and pseudo-tensor form factors with the
current lattice QCD data.
Figure 3.3: Comparison between the lattice QCD results in Ref. [22] (grey
band) and the predictions for the tensor and pseudo-tensor form factors based
on the fit results in Table 3.1 (red band), with the lattice QCD uncertainties
inflated by $20\%$. The two bands represent the $68\%$ interval.
## 4 Conclusions
I revisit the Heavy Quark Expansion of the $\Lambda_{b}\to\Lambda^{*}(1520)$
form factors including next-to-leading order $\alpha_{s}$ corrections and for
the first time next-to-leading power $1/m_{b}$ corrections. In this framework,
form factors depend on unknown hadronic parameters, which are obtained by
fitting the form factor parametrisation discussed here to a recent lattice QCD
computation [22]. I perform the fit using data for the vector and axial-vector
form factors, finding good agreement between the lattice QCD calculation and
the Heavy Quark Expansion predictions. The fit results are used to predict
tensor and pseudo-tensor form factors, showing tensions between the Heavy
Quark Expansion-based predictions and the lattice QCD data. I discuss two
possible sources of the tensions: an underestimation of the uncertainties on
the lattice QCD parameters involved in these form factors and missing higher-
order terms in the Heavy Quark Expansion, e.g. at order
$\mathcal{O}(\alpha_{s}/m_{b})$. Most likely, only a combination of these two
effects could reconcile lattice QCD determination and Heavy Quark Expansion
based parametrisation of tensor and pseudo-tensor form factors. Until then, it
is not possible to claim high precision in the Heavy Quark Expansion
parametrisation of $\Lambda_{b}\to\Lambda^{*}(1520)$ form factors.
Besides this, I want to point out the need to extend the calculation of the
$\Lambda_{b}\to\Lambda^{*}(1520)$ form factors to the large-recoil region.
Quark models [19] are available, although without a consistent treatment of
uncertainties. Up-to-date calculations of the
$\Lambda_{b}\to\Lambda^{*}(1520)$ form factors, using e.g. sum rules at
$q^{2}\leq 0$, are therefore needed. Estimates of this type will allow one to
extrapolate the form factors to the large-recoil region and to assess the
magnitude of their curvature. These studies will be crucial for future LHCb
analyses of $\Lambda_{b}\to\Lambda^{*}(1520)\ell^{+}\ell^{-}$ decays.
## Acknowledgments
I thank Stefan Meinel and Gumaro Rendon for insightful discussions on lattice
QCD data. I am also grateful to Bernat Capdevila, Thorsten Feldmann, Paolo
Gambino, Martin Jung and Danny van Dyk for useful discussions. I also
acknowledge useful inputs on the text from Nico Gubernari.
This work is supported by Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) under grant 396021762 - TRR 257 “Particle Physics
Phenomenology after the Higgs Discovery” and by the Italian Ministry of
Research (MIUR) under grant PRIN 20172LNEEZ.
## Appendix A Details on the form factor parametrisation
Concerning the vector and the axial-vector currents, I follow the notation of
Ref. [24]. For the vector current I have:
$\displaystyle\Gamma_{V,(1/2,t)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{+}}}\,\frac{2m_{\Lambda^{*}}}{\sqrt{s_{+}s_{-}}}p^{\alpha}\,\frac{m_{\Lambda_{b}}-m_{\Lambda^{*}}}{\sqrt{q^{2}}}\,\frac{q^{\mu}}{\sqrt{q^{2}}}\,,$
(A.1) $\displaystyle\Gamma_{V,(1/2,0)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{-}}}\,\frac{2m_{\Lambda^{*}}}{\sqrt{s_{+}s_{-}}}p^{\alpha}\,\frac{m_{\Lambda_{b}}+m_{\Lambda^{*}}}{s_{+}}\left[(p+k)^{\mu}-\frac{m_{\Lambda_{b}}^{2}-m_{\Lambda^{*}}^{2}}{q^{2}}q^{\mu}\right]\,,$
(A.2) $\displaystyle\Gamma_{V,(1/2,\perp)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{-}}}\,\frac{2m_{\Lambda^{*}}}{\sqrt{s_{+}s_{-}}}p^{\alpha}\,\left[\gamma^{\mu}-\frac{2m_{\Lambda^{*}}}{s_{+}}p^{\mu}-\frac{2m_{\Lambda_{b}}}{s_{+}}k^{\mu}\right]\,,$
(A.3) $\displaystyle\Gamma_{V,(3/2,\perp)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{-}}}\,\frac{-4i\epsilon^{\alpha\mu
pk}}{\sqrt{s_{+}s_{-}}}\gamma_{5}+\Gamma_{V,(1/2,\perp)}\,,$ (A.4)
while for the axial vector current:
$\displaystyle\Gamma_{A,(1/2,t)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{-}}}\,\frac{2m_{\Lambda^{*}}}{\sqrt{s_{+}s_{-}}}p^{\alpha}\,\frac{m_{\Lambda_{b}}+m_{\Lambda^{*}}}{\sqrt{q^{2}}}\,\frac{q^{\mu}}{\sqrt{q^{2}}}\,,$
(A.5) $\displaystyle\Gamma_{A,(1/2,0)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{+}}}\,\frac{2m_{\Lambda^{*}}}{\sqrt{s_{+}s_{-}}}p^{\alpha}\,\frac{m_{\Lambda_{b}}-m_{\Lambda^{*}}}{s_{-}}\left[(p+k)^{\mu}-\frac{m_{\Lambda_{b}}^{2}-m_{\Lambda^{*}}^{2}}{q^{2}}q^{\mu}\right]\,,$
(A.6) $\displaystyle\Gamma_{A,(1/2,\perp)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{+}}}\,\frac{2m_{\Lambda^{*}}}{\sqrt{s_{+}s_{-}}}p^{\alpha}\,\left[\gamma^{\mu}+\frac{2m_{\Lambda^{*}}}{s_{-}}p^{\mu}-\frac{2m_{\Lambda_{b}}}{s_{-}}k^{\mu}\right]\,,$
(A.7) $\displaystyle\Gamma_{A,(3/2,\perp)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{+}}}\,\frac{-4i\epsilon^{\alpha\mu
pk}}{\sqrt{s_{+}s_{-}}}\gamma_{5}-\Gamma_{A,(1/2,\perp)}\,.$ (A.8)
Concerning the tensor currents, I modify the parametrisation in Ref. [20] by
rescaling each structure with suitable factors. I have:
$\displaystyle\Gamma_{T,(1/2,0)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{+}}}\,\frac{q^{2}}{s_{+}s_{-}}p^{\alpha}\left[(p+k)^{\mu}-\frac{m_{\Lambda_{b}}^{2}-m_{\Lambda^{*}}^{2}}{q^{2}}q^{\mu}\right]\,,$
(A.9) $\displaystyle\Gamma_{T,(1/2,\perp)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{+}}}\,\frac{m_{\Lambda_{b}}+m_{\Lambda^{*}}}{s_{-}}p^{\alpha}\,\left[\gamma^{\mu}-2\frac{m_{\Lambda^{*}}}{s_{+}}p^{\mu}-2\frac{m_{\Lambda_{b}}}{s_{+}}k^{\mu}\right]\,,$
(A.10) $\displaystyle\Gamma_{T,(3/2,\perp)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{+}}}\,\left[g^{\alpha\mu}+\frac{m_{\Lambda^{*}}}{s_{-}}p^{\alpha}\left(\gamma^{\mu}-2\frac{1}{m_{\Lambda^{*}}}k^{\mu}+2\frac{m_{\Lambda^{*}}}{s_{+}}p^{\mu}+2\frac{m_{\Lambda_{b}}}{s_{+}}k^{\mu}\right)\right]\,,$
(A.11)
and
$\displaystyle\Gamma_{T5,(1/2,0)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{-}}}\,\frac{q^{2}}{s_{+}s_{-}}p^{\alpha}\left[(p+k)^{\mu}-\frac{m_{\Lambda_{b}}^{2}-m_{\Lambda^{*}}^{2}}{q^{2}}q^{\mu}\right]\,,$
(A.12) $\displaystyle\Gamma_{T5,(1/2,\perp)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{-}}}\,\frac{m_{\Lambda_{b}}-m_{\Lambda^{*}}}{s_{+}}p^{\alpha}\,\left[\gamma^{\mu}+2\frac{m_{\Lambda^{*}}}{s_{-}}p^{\mu}-2\frac{m_{\Lambda_{b}}}{s_{-}}k^{\mu}\right]\,,$
(A.13) $\displaystyle\Gamma_{T5,(3/2,\perp)}^{\alpha\mu}$
$\displaystyle=\frac{\sqrt{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}{\sqrt{s_{-}}}\,\left[g^{\alpha\mu}-\frac{m_{\Lambda^{*}}}{s_{+}}p^{\alpha}\left(\gamma^{\mu}+2\frac{1}{m_{\Lambda^{*}}}k^{\mu}-2\frac{m_{\Lambda^{*}}}{s_{-}}p^{\mu}+2\frac{m_{\Lambda_{b}}}{s_{-}}k^{\mu}\right)\right]\,.$
(A.14)
I define the helicity amplitudes as
$\mathcal{A}_{\Gamma}(s_{b},s_{\Lambda},\lambda_{\Lambda},\lambda_{q})=\bra{\Lambda^{*}(s_{\Lambda},\eta(\lambda_{\Lambda}))}\bar{s}\Gamma^{\mu}\epsilon^{*}_{\mu}(\lambda_{q})b\ket{\Lambda_{b}(s_{b})}\,,$
(A.15)
where $\epsilon^{*}_{\mu}(\lambda_{q})$ are a basis of polarisation vectors
for the virtual W exchange with polarisation states
$\lambda_{q}=\\{t,0,+1,-1\\}$. For details see Ref. [24]. The physical
helicity amplitudes are identified by the following set:
$\displaystyle\mathcal{A}_{\Gamma}(+1/2,+3/2,+1)$
$\displaystyle\equiv\mathcal{A}_{\Gamma}(+1/2,+1/2,+1,+1)\,,$ (A.16)
$\displaystyle\mathcal{A}_{\Gamma}(+1/2,+1/2,0)$
$\displaystyle\equiv\sqrt{\frac{2}{3}}\mathcal{A}_{\Gamma}(+1/2,+1/2,0,0)+\sqrt{\frac{1}{3}}\mathcal{A}^{(3/2)}_{\Gamma}(+1/2,-1/2,+1,0)\,,$
$\displaystyle\mathcal{A}_{\Gamma}(+1/2,+1/2,t)$
$\displaystyle\equiv\sqrt{\frac{2}{3}}\mathcal{A}_{\Gamma}(+1/2,+1/2,0,t)+\sqrt{\frac{1}{3}}\mathcal{A}^{(3/2)}_{\Gamma}(+1/2,-1/2,+1,t)\,,$
$\displaystyle\mathcal{A}_{\Gamma}(+1/2,-1/2,-1)$
$\displaystyle\equiv\sqrt{\frac{2}{3}}\mathcal{A}_{\Gamma}(+1/2,-1/2,0,-1)+\sqrt{\frac{1}{3}}\mathcal{A}^{(3/2)}_{\Gamma}(+1/2,+1/2,-1,-1)\,,$
where $\Gamma$ represents the four possible currents. In the case of the
vector and axial vector current, the helicity amplitudes are already
calculated in Ref. [24]. For convenience, I adapt them to this case and report
them here:
$\displaystyle\mathcal{A}_{V}(+1/2,+3/2,+1)$
$\displaystyle=-4\sqrt{m_{\Lambda_{b}}m_{\Lambda^{*}}}F_{3/2,\perp}\,,$ (A.17)
$\displaystyle\mathcal{A}_{V}(+1/2,+1/2,0)$
$\displaystyle=2\sqrt{\frac{2\,m_{\Lambda_{b}}m_{\Lambda^{*}}}{3\,q^{2}}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})F_{1/2,0}\,,$
(A.18) $\displaystyle\mathcal{A}_{V}(+1/2,+1/2,t)$
$\displaystyle=2\sqrt{\frac{2\,m_{\Lambda_{b}}m_{\Lambda^{*}}}{3\,q^{2}}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})F_{1/2,t}\,,$
(A.19) $\displaystyle\mathcal{A}_{V}(+1/2,-1/2,-1)$
$\displaystyle=-\frac{4}{\sqrt{3}}\sqrt{m_{\Lambda_{b}}m_{\Lambda^{*}}}F_{1/2,\perp}\,,$
(A.20)
and for the axial vector current:
$\displaystyle\mathcal{A}_{A}(+1/2,+3/2,+1)$
$\displaystyle=-4\sqrt{m_{\Lambda_{b}}m_{\Lambda^{*}}}G_{3/2,\perp}\,,$ (A.21)
$\displaystyle\mathcal{A}_{A}(+1/2,+1/2,0)$
$\displaystyle=2\sqrt{\frac{2\,m_{\Lambda_{b}}m_{\Lambda^{*}}}{3\,q^{2}}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})G_{1/2,0}\,,$
(A.22) $\displaystyle\mathcal{A}_{A}(+1/2,+1/2,t)$
$\displaystyle=2\sqrt{\frac{2\,m_{\Lambda_{b}}m_{\Lambda^{*}}}{3\,q^{2}}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})G_{1/2,t}\,,$
(A.23) $\displaystyle\mathcal{A}_{A}(+1/2,-1/2,-1)$
$\displaystyle=\frac{4}{\sqrt{3}}\sqrt{m_{\Lambda_{b}}m_{\Lambda^{*}}}G_{1/2,\perp}\,.$
(A.24)
In the case of tensor currents, with the definitions in Eqs. (A.11)–(A.14), I
find
$\displaystyle\mathcal{A}_{T}(+1/2,+3/2,+1)$
$\displaystyle=-2\sqrt{m_{\Lambda_{b}}m_{\Lambda^{*}}}\,T_{3/2,\perp}\,,$
(A.25) $\displaystyle\mathcal{A}_{T}(+1/2,+1/2,0)$
$\displaystyle=-\sqrt{\frac{2}{3}}\sqrt{\frac{m_{\Lambda_{b}}}{m_{\Lambda^{*}}}q^{2}}\,T_{1/2,0}\,,$
(A.26) $\displaystyle\mathcal{A}_{T}(+1/2,-1/2,-1)$
$\displaystyle=+\frac{2}{\sqrt{3}}\sqrt{\frac{m_{\Lambda_{b}}}{m_{\Lambda^{*}}}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})T_{1/2,\perp}\,,$
(A.27) $\displaystyle\mathcal{A}_{T}(+1/2,+1/2,t)$ $\displaystyle=0\,,$ (A.28)
and
$\displaystyle\mathcal{A}_{T5}(+1/2,+3/2,+1)$
$\displaystyle=+2\sqrt{m_{\Lambda_{b}}m_{\Lambda^{*}}}\,T^{5}_{3/2,\perp}\,,$
(A.29) $\displaystyle\mathcal{A}_{T5}(+1/2,+1/2,0)$
$\displaystyle=+\sqrt{\frac{2}{3}}\sqrt{\frac{m_{\Lambda_{b}}}{m_{\Lambda^{*}}}q^{2}}\,T^{5}_{1/2,0}\,,$
(A.30) $\displaystyle\mathcal{A}_{T5}(+1/2,-1/2,-1)$
$\displaystyle=+\frac{2}{\sqrt{3}}\sqrt{\frac{m_{\Lambda_{b}}}{m_{\Lambda^{*}}}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})T_{1/2,\perp}\,,$
(A.31) $\displaystyle\mathcal{A}_{T5}(+1/2,+1/2,t)$ $\displaystyle=0\,.$
(A.32)
In the heavy quark expansion, the helicity amplitudes concerning the vector
current read:
$\displaystyle\mathcal{A}_{V}(+1/2,+3/2,+1)$
$\displaystyle=2\frac{s_{+}}{m_{\Lambda_{b}}}(\zeta_{1}^{\text{SL}}+\zeta_{2}^{\text{SL}})\,,$
(A.33) $\displaystyle\mathcal{A}_{V}(+1/2,+1/2,0)$
$\displaystyle=\frac{\sqrt{2s_{+}}}{m_{\Lambda_{b}}m_{\Lambda^{*}}\sqrt{3q^{2}}}\bigg{\\{}\frac{s_{-}}{m_{\Lambda_{b}}}\left[(1+C_{0}^{(v)})(m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2})+\frac{1}{2}C_{1}^{(v)}s_{+}\right]\zeta_{2}$
$\displaystyle+$ $\displaystyle
s_{-}\left[(1+C_{0}^{(v)})(m_{\Lambda^{*}}+m_{\Lambda_{b}})+\frac{1}{2m_{\Lambda_{b}}}C_{1}^{(v)}s_{+}\right]\zeta_{1}$
$\displaystyle-$
$\displaystyle\left[m_{\Lambda_{b}}^{2}(m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}+q^{2})+s_{-}s_{+}\right]\zeta_{1}^{\text{SL}}-(m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}+q^{2})\zeta_{2}^{\text{SL}}$
$\displaystyle-$
$\displaystyle\frac{s_{-}}{2m_{\Lambda_{b}}^{2}}\left[(2m_{\Lambda^{*}}^{2}+3m_{\Lambda^{*}}m_{\Lambda_{b}}+m_{\Lambda_{b}}^{2}-2q^{2})\zeta_{3}^{\text{SL}}-(m_{\Lambda^{*}}^{2}+3m_{\Lambda^{*}}m_{\Lambda_{b}}+2m_{\Lambda_{b}}^{2}-q^{2})\zeta_{4}^{\text{SL}}\right]\bigg{\\}}\,,$
(A.34) $\displaystyle\mathcal{A}_{V}(+1/2,+1/2,t)$
$\displaystyle=\frac{\sqrt{2s_{-}}s_{+}}{m_{\Lambda_{b}}m_{\Lambda^{*}}\sqrt{3q^{2}}}\bigg{\\{}\frac{1}{m_{\Lambda_{b}}}\left[(1+C_{0}^{(v)})(-m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}+q^{2})+\frac{1}{2}C_{1}^{(v)}(-m_{\Lambda^{*}}^{2}+m_{\Lambda_{b}}^{2}+q^{2})\right]\zeta_{2}$
$\displaystyle+$
$\displaystyle\left[(1+C_{0}^{(v)})(m_{\Lambda_{b}}-m_{\Lambda^{*}})+C_{1}^{(v)}\frac{-m_{\Lambda^{*}}^{2}+m_{\Lambda_{b}}^{2}+q^{2}}{2m_{\Lambda_{b}}}\right]\zeta_{1}+\left[\frac{m_{\Lambda^{*}}^{2}-q^{2}}{m_{\Lambda_{b}}^{2}}\zeta_{1}^{\text{SL}}+\zeta_{2}^{\text{SL}}\right]$
$\displaystyle+$
$\displaystyle\frac{1}{2m_{\Lambda_{b}}^{2}}\left[(2m_{\Lambda^{*}}^{2}-m_{\Lambda^{*}}m_{\Lambda_{b}}-m_{\Lambda_{b}}^{2}-2q^{2})\zeta_{3}^{\text{SL}}-(m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-2m_{\Lambda_{b}}^{2}-q^{2})\zeta_{4}^{\text{SL}}\right]\bigg{\\}}\,,$
(A.35) $\displaystyle\mathcal{A}_{V}(+1/2,-1/2,-1)$
$\displaystyle=\frac{2\sqrt{s_{+}}}{\sqrt{3}m_{\Lambda_{b}}m_{\Lambda^{*}}}\bigg{\\{}s_{-}(1+C_{0}^{(v)})(\zeta_{2}-\zeta_{1})+m_{\Lambda^{*}}(\zeta_{1}^{\text{SL}}+\zeta_{2}^{\text{SL}})-\frac{s_{-}}{2m_{\Lambda_{b}}}(\zeta_{3}^{\text{SL}}+\zeta_{4}^{\text{SL}})\bigg{\\}}\,,$
(A.36)
and for the axial vector current:
$\displaystyle\mathcal{A}_{A}(+1/2,+3/2,+1)$
$\displaystyle=2\frac{s_{-}}{m_{\Lambda_{b}}}(\zeta_{1}^{\text{SL}}-\zeta_{2}^{\text{SL}})\,,$
(A.37) $\displaystyle\mathcal{A}_{A}(+1/2,+1/2,0)$
$\displaystyle=\frac{\sqrt{2s_{-}}}{\sqrt{3q^{2}}m_{\Lambda_{b}}m_{\Lambda^{*}}}\bigg{\\{}\frac{s_{+}}{m_{\Lambda_{b}}}\left[(1+C_{0}^{(v)})(-m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}+q^{2})+\frac{s_{-}}{2}C_{1}^{(v)}\right]\zeta_{2}$
$\displaystyle+s_{+}\left[(m_{\Lambda_{b}}-m_{\Lambda^{*}})(1+C_{0}^{(v)})+\frac{s_{-}}{2m_{\Lambda_{b}}}C_{1}^{(v)}\right]\zeta_{1}$
$\displaystyle+\frac{m_{\Lambda_{b}}^{2}(m_{\Lambda_{b}}^{2}-m_{\Lambda^{*}}^{2}-q^{2})-s_{+}s_{-}}{m_{\Lambda_{b}}^{2}}\zeta_{1}^{\text{SL}}+(m_{\Lambda^{*}}^{2}-m_{\Lambda_{b}}^{2}+q^{2})\zeta_{2}^{\text{SL}}\,,$
$\displaystyle-\frac{s_{+}}{2m_{\Lambda_{b}}^{2}}\left[(2m_{\Lambda^{*}}^{2}-3m_{\Lambda^{*}}m_{\Lambda_{b}}+m_{\Lambda_{b}}^{2}-2q^{2})\zeta_{3}^{\text{SL}}+(m_{\Lambda^{*}}^{2}-3m_{\Lambda^{*}}m_{\Lambda_{b}}+2m_{\Lambda_{b}}^{2}-q^{2})\zeta_{4}^{\text{SL}}\right]\bigg{\\}}$
(A.38) $\displaystyle\mathcal{A}_{A}(+1/2,+1/2,t)$
$\displaystyle=\frac{\sqrt{2s_{+}}s_{-}}{\sqrt{3q^{2}}m_{\Lambda_{b}}m_{\Lambda^{*}}}\bigg{\\{}\frac{1}{m_{\Lambda_{b}}}\left[(m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2})(1+C_{0}^{(v)})+\frac{1}{2}(m_{\Lambda_{b}}^{2}-m_{\Lambda^{*}}^{2}+q^{2})C_{1}^{(v)}\right]\zeta_{2}\,,$
$\displaystyle+\left[(m_{\Lambda_{b}}+m_{\Lambda^{*}})(1+C_{0}^{(v)})+\frac{m_{\Lambda_{b}}^{2}-m_{\Lambda^{*}}^{2}+q^{2}}{2m_{\Lambda_{b}}}C_{1}^{(v)}\right]\zeta_{1}+\frac{m_{\Lambda^{*}}^{2}-q^{2}}{m_{\Lambda_{b}}^{2}}\zeta_{1}^{\text{SL}}-\zeta_{2}^{\text{SL}}$
$\displaystyle+\frac{1}{2m_{\Lambda_{b}}^{2}}\left[(2m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-m_{\Lambda_{b}}^{2}-2q^{2})\zeta_{3}^{\text{SL}}+(m_{\Lambda^{*}}^{2}-m_{\Lambda^{*}}m_{\Lambda_{b}}-2m_{\Lambda_{b}}^{2}-q^{2})\zeta_{4}^{\text{SL}}\right]$
(A.39) $\displaystyle\mathcal{A}_{A}(+1/2,-1/2,-1)$
$\displaystyle=\frac{2\sqrt{s_{-}}}{\sqrt{3}m_{\Lambda_{b}}m_{\Lambda^{*}}}\bigg{\\{}s_{+}(1+C_{0}^{(v)})(\zeta_{1}+\zeta_{2})+m_{\Lambda^{*}}(\zeta_{1}^{\text{SL}}-\zeta_{2}^{\text{SL}})+\frac{s_{+}}{2m_{\Lambda_{b}}}(\zeta_{3}^{\text{SL}}-\zeta_{4}^{\text{SL}})\bigg{\\}}\,.$
(A.40)
In the case of the tensor current, the non-zero helicity amplitudes in the
heavy quark limit are
$\displaystyle\mathcal{A}_{T}(+1/2,+3/2,+1)$
$\displaystyle=\frac{2\sqrt{s_{+}}}{m_{\Lambda_{b}}}\bigg{\\{}(m_{\Lambda_{b}}-m_{\Lambda^{*}})\zeta_{1}^{\text{SL}}+\frac{m_{\Lambda^{*}}m_{\Lambda_{b}}-m_{\Lambda^{*}}^{2}+q^{2}}{m_{\Lambda_{b}}}\zeta_{2}^{\text{SL}}\bigg{\\}}\,,$
(A.41) $\displaystyle\mathcal{A}_{T}(+1/2,+1/2,0)$
$\displaystyle=\frac{\sqrt{2q^{2}s_{+}}}{\sqrt{3}m_{\Lambda_{b}}m_{\Lambda^{*}}}\bigg{\\{}s_{-}(\zeta_{2}-\zeta_{1})+\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda_{b}}^{2}-q^{2}}{m_{\Lambda_{b}}}(\zeta_{2}^{\text{SL}}+\zeta_{1}^{\text{SL}})+\frac{s_{-}}{2m_{\Lambda_{b}}}(\zeta_{3}^{\text{SL}}+\zeta_{4}^{\text{SL}})\bigg{\\}}\,,$
(A.42) $\displaystyle\mathcal{A}_{T}(+1/2,-1/2,-1)$
$\displaystyle=\frac{2\sqrt{s_{+}}}{\sqrt{3}m_{\Lambda_{b}}m_{\Lambda^{*}}}\bigg{\\{}s_{-}\left[(m_{\Lambda^{*}}+m_{\Lambda_{b}})\zeta_{1}+\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2}}{m_{\Lambda_{b}}}\zeta_{2}\right]$
$\displaystyle+\frac{m_{\Lambda^{*}}(m_{\Lambda^{*}}m_{\Lambda_{b}}-m_{\Lambda^{*}}^{2}+q^{2})}{m_{\Lambda_{b}}}\zeta_{1}^{\text{SL}}+m_{\Lambda^{*}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})\zeta_{2}^{\text{SL}}$
$\displaystyle+\frac{s_{-}}{2m_{\Lambda_{b}}}\left[(m_{\Lambda_{b}}+m_{\Lambda^{*}})\zeta_{3}^{\text{SL}}+\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2}}{m_{\Lambda_{b}}}\zeta_{4}^{\text{SL}}\right]\,,$
(A.43)
and for the tensor axial current:
$\displaystyle\mathcal{A}_{T5}(+1/2,+3/2,+1)$
$\displaystyle=-\frac{2\sqrt{s_{-}}}{m_{\Lambda_{b}}}\bigg{\\{}(m_{\Lambda_{b}}+m_{\Lambda^{*}})\zeta_{1}^{\text{SL}}+\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2}}{m_{\Lambda_{b}}}\zeta_{2}^{\text{SL}}\bigg{\\}}\,,$
(A.44) $\displaystyle\mathcal{A}_{T5}(+1/2,+1/2,0)$
$\displaystyle=\frac{\sqrt{2q^{2}s_{-}}}{\sqrt{3}m_{\Lambda_{b}}m_{\Lambda^{*}}}\bigg{\\{}s_{+}(\zeta_{1}+\zeta_{2})+\frac{m_{\Lambda^{*}}^{2}+m_{\Lambda_{b}}^{2}-q^{2}}{m_{\Lambda_{b}}}(-\zeta_{1}^{\text{SL}}+\zeta_{2}^{\text{SL}})+\frac{s_{+}}{2m_{\Lambda_{b}}}(-\zeta_{3}^{\text{SL}}+\zeta_{4}^{\text{SL}})\bigg{\\}}\,,$
(A.45) $\displaystyle\mathcal{A}_{T5}(+1/2,-1/2,-1)$
$\displaystyle=\frac{2\sqrt{s_{-}}}{\sqrt{3}m_{\Lambda_{b}}m_{\Lambda^{*}}}\bigg{\\{}s_{+}\left[(m_{\Lambda_{b}}-m_{\Lambda^{*}})\zeta_{1}+\frac{m_{\Lambda^{*}}m_{\Lambda_{b}}-m_{\Lambda^{*}}^{2}+q^{2}}{m_{\Lambda_{b}}}\zeta_{2}\right]$
$\displaystyle+\frac{m_{\Lambda^{*}}(m_{\Lambda^{*}}^{2}+m_{\Lambda^{*}}m_{\Lambda_{b}}-q^{2})}{m_{\Lambda_{b}}}\zeta_{1}^{\text{SL}}+m_{\Lambda^{*}}(m_{\Lambda^{*}}+m_{\Lambda_{b}})\zeta_{2}^{\text{SL}}$
$\displaystyle+\frac{s_{+}}{2m_{\Lambda_{b}}}\left[-(m_{\Lambda_{b}}-m_{\Lambda^{*}})\zeta_{3}^{\text{SL}}+\frac{m_{\Lambda^{*}}m_{\Lambda_{b}}-m_{\Lambda^{*}}^{2}+q^{2}}{m_{\Lambda_{b}}}\zeta_{4}^{\text{SL}}\right]\bigg{\\}}\,.$
(A.46)
## Appendix B Relations with lattice form factors
The definitions of the form factors employed here differ from other
conventions in the literature. In particular, a translation to the lattice
determination of Ref. [22] is needed. I find
$\displaystyle F_{1/2,t}=$
$\displaystyle\,\frac{1}{2}\sqrt{\frac{s_{-}}{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}f_{0}\,,$
$\displaystyle F_{1/2,0}=$
$\displaystyle\,\frac{1}{2}\sqrt{\frac{s_{+}}{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}f_{+}\,,$
(B.1) $\displaystyle F_{1/2,\perp}=$
$\displaystyle\,\frac{1}{2}\sqrt{\frac{s_{+}}{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}f_{\perp}\,,$
$\displaystyle F_{3/2,\perp}=$
$\displaystyle\,-\frac{1}{2}\sqrt{\frac{s_{+}}{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}f_{\perp^{\prime}}\,,$
$\displaystyle G_{1/2,t}=$
$\displaystyle\,\frac{1}{2}\sqrt{\frac{s_{+}}{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}g_{0}\,,$
$\displaystyle G_{1/2,0}=$
$\displaystyle\,\frac{1}{2}\sqrt{\frac{s_{-}}{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}g_{+}\,,$
$\displaystyle G_{1/2,\perp}=$
$\displaystyle\,\frac{1}{2}\sqrt{\frac{s_{-}}{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}g_{\perp}\,,$
$\displaystyle G_{3/2,\perp}=$
$\displaystyle\,\frac{1}{2}\sqrt{\frac{s_{-}}{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}g_{\perp^{\prime}}\,,$
$\displaystyle T_{1/2,0}=$
$\displaystyle\,s_{+}^{1/2}\sqrt{\frac{m_{\Lambda^{*}}}{4m_{\Lambda_{b}}}}h_{+}\,,$
$\displaystyle T_{1/2,\perp}=$
$\displaystyle\,s_{+}^{1/2}\sqrt{\frac{m_{\Lambda^{*}}}{4m_{\Lambda_{b}}}}h_{\perp}\,,$
$\displaystyle T_{3/2,\perp}=$
$\displaystyle\,\sqrt{\frac{s_{+}}{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}(m_{\Lambda_{b}}+m_{\Lambda^{*}})h_{\perp^{\prime}}\,,$
$\displaystyle T^{5}_{1/2,0}=$
$\displaystyle\,s_{-}^{1/2}\sqrt{\frac{m_{\Lambda^{*}}}{4m_{\Lambda_{b}}}}\tilde{h}_{+}\,,$
$\displaystyle T^{5}_{1/2,\perp}=$
$\displaystyle\,s_{-}^{1/2}\sqrt{\frac{m_{\Lambda^{*}}}{4m_{\Lambda_{b}}}}\tilde{h}_{\perp}\,,$
$\displaystyle T^{5}_{3/2,\perp}=$
$\displaystyle\,-\sqrt{\frac{s_{-}}{4m_{\Lambda_{b}}m_{\Lambda^{*}}}}(m_{\Lambda_{b}}-m_{\Lambda^{*}})\tilde{h}_{\perp^{\prime}}\,.$
## Appendix C Correlations between the fit parameters
The correlation matrix for the HQE parameters is reported in Table C.1. The
order of the various correlation coefficients is the same as in Table 3.1.
$1$ | $-0.879$ | $0.440$ | $0.0458$ | $0.0460$ | $-0.120$ | $-0.0619$ | $0.363$ | $0.337$ | $0.0312$
---|---|---|---|---|---|---|---|---|---
$-0.879$ | $1$ | $-0.160$ | $0.0109$ | $-0.0585$ | $0.130$ | $0.0936$ | $-0.325$ | $-0.343$ | $-0.121$
$0.440$ | $-0.160$ | $1$ | $-0.00723$ | $-0.00712$ | $0.0101$ | $0.0465$ | $0.211$ | $0.139$ | $-0.218$
$0.0458$ | $0.0109$ | $-0.00723$ | $1$ | $0.861$ | $-0.512$ | $-0.382$ | $-0.221$ | $-0.0500$ | $0.611$
$0.0460$ | $-0.0585$ | $-0.00712$ | $0.861$ | $1$ | $-0.406$ | $-0.435$ | $-0.214$ | $-0.0603$ | $0.707$
$-0.120$ | $0.130$ | $0.0101$ | $-0.512$ | $-0.406$ | $1$ | $0.887$ | $0.0941$ | $-0.119$ | $-0.783$
$-0.0619$ | $0.0936$ | $0.0465$ | $-0.382$ | $-0.435$ | $0.887$ | $1$ | $0.160$ | $-0.0287$ | $-0.871$
$0.363$ | $-0.325$ | $0.211$ | $-0.221$ | $-0.214$ | $0.0941$ | $0.160$ | $1$ | $0.966$ | $-0.164$
$0.337$ | $-0.343$ | $0.139$ | $-0.0500$ | $-0.0603$ | $-0.119$ | $-0.0287$ | $0.966$ | $1$ | $0.0606$
$0.0312$ | $-0.121$ | $-0.218$ | $0.611$ | $0.707$ | $-0.783$ | $-0.871$ | $-0.164$ | $0.0606$ | $1$
Table C.1: Correlation matrix for the HQE parameters.
## References
* [1] LHCb Collaboration, R. Aaij et al., “Search for lepton-universality violation in $B^{+}\to K^{+}\ell^{+}\ell^{-}$ decays,” Phys. Rev. Lett. 122 no. 19, (2019) 191801, arXiv:1903.09252 [hep-ex].
* [2] LHCb Collaboration, R. Aaij et al., “Test of lepton universality with $B^{0}\rightarrow K^{*0}\ell^{+}\ell^{-}$ decays,” JHEP 08 (2017) 055, arXiv:1705.05802 [hep-ex].
* [3] LHCb Collaboration, R. Aaij et al., “Test of lepton universality using $B^{+}\rightarrow K^{+}\ell^{+}\ell^{-}$ decays,” Phys. Rev. Lett. 113 (2014) 151601, arXiv:1406.6482 [hep-ex].
* [4] LHCb Collaboration, R. Aaij et al., “Measurement of $CP$-Averaged Observables in the $B^{0}\rightarrow K^{*0}\mu^{+}\mu^{-}$ Decay,” Phys. Rev. Lett. 125 no. 1, (2020) 011802, arXiv:2003.04831 [hep-ex].
* [5] LHCb Collaboration, R. Aaij et al., “Angular analysis of the $B^{0}\to K^{*0}\mu^{+}\mu^{-}$ decay using 3 fb-1 of integrated luminosity,” JHEP 02 (2016) 104, arXiv:1512.04442 [hep-ex].
* [6] LHCb Collaboration, R. Aaij et al., “Measurement of Form-Factor-Independent Observables in the Decay $B^{0}\to K^{*0}\mu^{+}\mu^{-}$,” Phys. Rev. Lett. 111 (2013) 191801, arXiv:1308.1707 [hep-ex].
* [7] J. Aebischer, W. Altmannshofer, D. Guadagnoli, M. Reboud, P. Stangl, and D. M. Straub, “$B$-decay discrepancies after Moriond 2019,” Eur. Phys. J. C 80 no. 3, (2020) 252, arXiv:1903.10434 [hep-ph].
* [8] M. Algueró, B. Capdevila, A. Crivellin, S. Descotes-Genon, P. Masjuan, J. Matias, M. Novoa Brunet, and J. Virto, “Emerging patterns of New Physics with and without Lepton Flavour Universal contributions,” Eur. Phys. J. C 79 no. 8, (2019) 714, arXiv:1903.09578 [hep-ph]. [Addendum: Eur.Phys.J.C 80, 511 (2020)].
* [9] T. Hurth, F. Mahmoudi, and S. Neshatpour, “Implications of the new LHCb angular analysis of $B\to K^{*}\mu^{+}\mu^{-}$ : Hadronic effects or new physics?,” Phys. Rev. D 102 no. 5, (2020) 055001, arXiv:2006.04213 [hep-ph].
* [10] M. Ciuchini, M. Fedele, E. Franco, A. Paul, L. Silvestrini, and M. Valli, “Lessons from the $B^{0,+}\to K^{*0,+}\mu^{+}\mu^{-}$ angular analyses,” arXiv:2011.01212 [hep-ph].
* [11] LHCb Collaboration, R. Aaij et al., “Differential branching fraction and angular analysis of $\Lambda^{0}_{b}\rightarrow\Lambda\mu^{+}\mu^{-}$ decays,” JHEP 06 (2015) 115, arXiv:1503.07138 [hep-ex]. [Erratum: JHEP 09, 145 (2018)].
* [12] LHCb Collaboration, R. Aaij et al., “Angular moments of the decay $\Lambda_{b}^{0}\rightarrow\Lambda\mu^{+}\mu^{-}$ at low hadronic recoil,” JHEP 09 (2018) 146, arXiv:1808.00264 [hep-ex].
* [13] W. Detmold and S. Meinel, “$\Lambda_{b}\to\Lambda\ell^{+}\ell^{-}$ form factors, differential branching fraction, and angular observables from lattice QCD with relativistic $b$ quarks,” Phys. Rev. D 93 no. 7, (2016) 074501, arXiv:1602.01399 [hep-lat].
* [14] S. Meinel and D. van Dyk, “Using $\Lambda_{b}\to\Lambda\mu^{+}\mu^{-}$ data within a Bayesian analysis of $|\Delta B|=|\Delta S|=1$ decays,” Phys. Rev. D 94 no. 1, (2016) 013007, arXiv:1603.02974 [hep-ph].
* [15] T. Blake, S. Meinel, and D. van Dyk, “Bayesian Analysis of $b\to s\mu^{+}\mu^{-}$ Wilson Coefficients using the Full Angular Distribution of $\Lambda_{b}\to\Lambda(\to p\,\pi^{-})\mu^{+}\mu^{-}$ Decays,” Phys. Rev. D 101 no. 3, (2020) 035023, arXiv:1912.05811 [hep-ph].
* [16] LHCb Collaboration, R. Aaij et al., “Test of lepton universality with ${\Lambda}_{b}^{0}\to{pK}^{-}{\mathrm{\ell}}^{+}{\mathrm{\ell}}^{-}$ decays,” JHEP 05 (2020) 040, arXiv:1912.08139 [hep-ex].
* [17] LHCb Collaboration, R. Aaij et al., “Observation of $J/\psi p$ Resonances Consistent with Pentaquark States in $\Lambda_{b}^{0}\to J/\psi K^{-}p$ Decays,” Phys. Rev. Lett. 115 (2015) 072001, arXiv:1507.03414 [hep-ex].
* [18] G. Hiller, M. Knecht, F. Legger, and T. Schietinger, “Photon polarization from helicity suppression in radiative decays of polarized Lambda(b) to spin-3/2 baryons,” Phys. Lett. B 649 (2007) 152–158, arXiv:hep-ph/0702191.
* [19] L. Mott and W. Roberts, “Rare dileptonic decays of $\Lambda_{b}$ in a quark model,” Int. J. Mod. Phys. A 27 (2012) 1250016, arXiv:1108.6129 [nucl-th].
* [20] S. Descotes-Genon and M. Novoa-Brunet, “Angular analysis of the rare decay $\Lambda_{b}\to\Lambda(1520)(\to NK)\ell^{+}\ell^{-}$,” JHEP 06 (2019) 136, arXiv:1903.00448 [hep-ph]. [Erratum: JHEP 06, 102 (2020)].
* [21] D. Das and J. Das, “The $\Lambda_{b}\to\Lambda^{\ast}(1520)(\to N\\!\bar{K})\ell^{+}\ell^{-}$ decay at low-recoil in HQET,” JHEP 07 (2020) 002, arXiv:2003.08366 [hep-ph].
* [22] S. Meinel and G. Rendon, “$\Lambda_{b}\to\Lambda^{*}(1520)\ell^{+}\ell^{-}$ form factors from lattice QCD,” arXiv:2009.09313 [hep-lat].
* [23] A. F. Falk, “Hadrons of arbitrary spin in the heavy quark effective theory,” Nucl. Phys. B 378 (1992) 79–94.
* [24] P. Böer, M. Bordone, E. Graverini, P. Owen, M. Rotondo, and D. Van Dyk, “Testing lepton flavour universality in semileptonic $\Lambda_{b}\to\Lambda_{c}^{*}$ decays,” JHEP 06 (2018) 155, arXiv:1801.08367 [hep-ph].
* [25] M. Neubert, “Heavy quark symmetry,” Phys. Rept. 245 (1994) 259–396, arXiv:hep-ph/9306320.
* [26] B. Grinstein and D. Pirjol, “Exclusive rare $B\to K^{*}\ell^{+}\ell^{-}$ decays at low recoil: Controlling the long-distance effects,” Phys. Rev. D 70 (2004) 114005, arXiv:hep-ph/0404250.
* [27] P. Böer, T. Feldmann, and D. van Dyk, “Angular Analysis of the Decay $\Lambda_{b}\to\Lambda(\to N\pi)\ell^{+}\ell^{-}$,” JHEP 01 (2015) 155, arXiv:1410.2115 [hep-ph].
* [28] A. F. Falk and M. Neubert, “Second order power corrections in the heavy quark effective theory. 2. Baryon form-factors,” Phys. Rev. D 47 (1993) 2982–2990, arXiv:hep-ph/9209269.
* [29] A. F. Falk and M. Neubert, “Second order power corrections in the heavy quark effective theory. 1. Formalism and meson form-factors,” Phys. Rev. D 47 (1993) 2965–2981, arXiv:hep-ph/9209268.
* [30] A. K. Leibovich and I. W. Stewart, “Semileptonic Lambda(b) decay to excited Lambda(c) baryons at order Lambda(QCD) / m(Q),” Phys. Rev. D 57 (1998) 5620–5631, arXiv:hep-ph/9711257.
* [31] F. U. Bernlochner, Z. Ligeti, D. J. Robinson, and W. L. Sutcliffe, “New predictions for $\Lambda_{b}\to\Lambda_{c}$ semileptonic decays and tests of heavy quark symmetry,” Phys. Rev. Lett. 121 no. 20, (2018) 202001, arXiv:1808.09464 [hep-ph].
Department of Computer Science, LMU Munich, Germany

Institut für Logic and Computation 192/4, Technische Universität Wien
lorenz@leutgeb.xyz, https://orcid.org/0000-0003-0391-3430

Department of Computer Science, University of Innsbruck, Austria
georg.moser@uibk.ac.at, https://orcid.org/0000-0001-9240-6128

Department of Computer Science, University of Innsbruck, Austria
david.obwaller@uibk.ac.at, https://orcid.org/0000-0002-4925-4744

Institut für Logic and Computation 192/4, Technische Universität Wien
zuleger@forsyte.at, https://orcid.org/0000-0003-1468-8398

ACM Subject Classification: F.3.2 Program Analysis
# Type-Based Analysis of Logarithmic Amortised Complexity
Martin Hofmann ${\dagger}$ Lorenz Leutgeb Georg Moser David Obwaller
Florian Zuleger
###### Abstract
We introduce a novel amortised resource analysis couched in a type-and-effect
system. Our analysis is formulated in terms of the physicist’s method of
amortised analysis, and is potential-based. The type system makes use of
logarithmic potential functions and is the first such system to exhibit
_logarithmic amortised complexity_. With our approach we target the automated
analysis of self-adjusting data structures, like splay trees, which so far
have only been analysed manually in the literature. In particular, we have
implemented a semi-automated prototype, which successfully analyses the
zig-zig case of _splaying_, once the type annotations are fixed.
###### keywords:
analysis of algorithms; amortised resource analysis; functional programming;
self-adjusting data structures; automation
In Memoriam: Martin Hofmann
## 1 Introduction
Amortised analysis as pioneered by Sleator and Tarjan [52, 53] is a method for
the worst-case cost analysis of data structures. The innovation of amortised
analysis lies in considering the cost of a single data structure operation as
part of a sequence of data structure operations. The methodology of amortised
analysis allows one to assign a low (e.g., constant or logarithmic) amortised
cost to a data structure operation even though the worst-case cost of a single
operation might be high (e.g., polynomial or worse). The setup of amortised
analysis guarantees that for a sequence of data structure operations the
worst-case cost is indeed the number of data structure operations times the
amortised cost. In this way amortised analysis provides a methodology for
worst-case cost analysis.
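To make this guarantee concrete, the following toy example (ours, not taken from the works cited here) counts bit flips in a binary counter: a single increment may flip many bits at once, yet over any sequence of increments the amortised cost is at most 2 per operation.

```python
def increment(bits):
    """Increment a little-endian binary counter in place; return the number of bit flips."""
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0        # clear a trailing run of 1s
        i += 1
    if i == len(bits):
        bits.append(1)
    else:
        bits[i] = 1
    return i + 1           # i cleared bits plus one set bit

bits, n = [], 1000
costs = [increment(bits) for _ in range(n)]
total, worst = sum(costs), max(costs)

assert worst > 2       # a single operation can be expensive (here: 10 flips)
assert total <= 2 * n  # yet the whole sequence costs at most 2 per operation
```

The worst single operation (incrementing 511 to 512) flips 10 bits, but the sequence-level bound of 2n holds, exactly as in the amortised-analysis setup described above.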
Starting with the initial proposal, one of the objectives of amortised
analysis has been to conduct a worst-case cost analysis for self-adjusting
binary search trees, such as _splay trees_ [52, 53]. These data structures
have the behaviour that a single data structure operation might be expensive
(ie. linear in the size of the tree) but the cost is guaranteed to ‘average
out’ in a sequence of data structure operations (ie. logarithmic in the size
of the tree). Amortised analysis has been designed to provide a framework for
this kind of reasoning on the cost of data structure operations.
The automated cost analysis of imperative, functional, logic and object-
oriented programs as well as of more abstract programming paradigms such a
term rewriting systems is an active research topic [1, 3, 11, 20, 5, 2, 4, 21,
33, 6, 34, 8, 17, 19, 42]. Most research has focused on the inference of
polynomial bounds on the worst-case cost of the program under analysis. A few
papers also target the inference of exponential and logarithmic bounds [1, 5,
13, 54, 40, 55]. Some of the cited approaches are able to conduct an automated
amortised analysis in the sense of Sleator and Tarjan: The work on type-based
cost analysis by Martin Hofmann and his collaborators [32, 38, 37, 22, 24, 33,
34, 30, 31, 39, 26], which we discuss in more detail below, directly employs
potential functions as proposed by Sleator and Tarjan [52, 53]. For imperative
programs, a line of work infers cost bounds from lexicographic ranking
functions using arguments that implicitly achieve an amortised analysis [49,
50, 51, 16] (for details we refer the reader to [51]). The connection between
ranking functions and amortised analysis has also been discussed in the
context of term rewrite systems [33]. Proposals that incorporate amortised
analysis within the recurrence relations approach to cost analysis have been
discussed in [4, 17]. Still, to the best of our knowledge none of the cited
approaches is able to conduct a worst-case cost analysis for self-adjusting
binary search trees such as splay trees. One notable exception is [43] where
the correct amortised analysis of splay trees [52, 53] and other data
structures is certified in Isabelle/HOL with some tactic support. However, it
is not clear at all if the approach can be further automated.
In this article we take the first step towards the automated analysis of
logarithmic amortised cost. We extend the line of work by Martin Hofmann and
his collaborators on amortised analysis, where the search for suitable
potential functions is encoded as a type-and-effect system. This line of work
has led to several successful tools for deriving accurate bounds on the
resource usage of functional [24, 38, 6, 7], imperative programs [36, 29], as
well as term rewriting systems [8, 33, 34, 42]. The cited approaches employ a
variety of potential functions: While initially confined to inferring linear
cost [32], the methods were subsequently extended to cover polynomial [28],
multivariate polynomial [24], and also exponential cost [36]. We propose, for
the first time, a type system that supports logarithmic potential functions,
significantly extending and correcting an earlier note towards this goal [35].
Our analysis is couched in a simple core functional language just sufficiently
rich to provide a full definition of our motivating example: _splaying_. We
employ a big-step semantics, following similar approaches in the literature.
Further, our type system is geared towards runtime as computation cost (ie. we
assign a unit cost to each function call and zero cost to every other program
statement). It is straightforward to generalise this type system to other
monotone cost models. With respect to non-monotone costs, like eg. heap usage,
we expect that the type system can also readily be adapted, but this is outside
the scope of the paper.
The type system has been designed with the goal of automation. As in previous
work on type-based amortised analysis, the type system infers constraints on
unknown coefficients of template potential functions in a syntax directed way
from the program under analysis. Suitable coefficients can then be found
automatically by solving the collected constraints with a suitable constraint
solver (ie. an SMT solver that supports the theory of linear arithmetic). The
derivation of constraints is straightforward for all syntactic constructs of
our programming language. However, our automated analysis also requires
_sharing_ and _weakening_ rules. The latter supports the comparison of
different potential functions. As our potential functions are logarithmic, we
cannot directly encode the comparison between logarithmic expressions within
the theory of linear arithmetic. Here, we propose several ideas for
_linearising_ the required comparison of logarithmic expressions. The obtained
linear constraints can then be added to the constraint system. Our proposed
linearisation makes use of (i) mathematical facts about the logarithm,
referred to as _expert knowledge_ , (ii) Farkas’ lemma for turning the
universally-quantified premise of the weakening rule into an existentially-
quantified statement that can be added to the constraint system and (iii)
finally a subtle modification of Schoenmakers' potential.
We report on preliminary results for the automated amortised analysis of the
splay function. Our implementation semi-automatically verifies the correctness
of a type annotation with logarithmic amortised cost for the splay function
(more specifically for the zig-zig case of the splay function) using the
constraints generated by the type system. We believe that the ideas presented
in this article can be extended beyond the case of splay trees in order to
support the analysis of similar self-adjusting data structures such as the
ones used in the evaluation of [43]. Summarising, we make the following
contributions:
* •
We propose a new class of template potential functions suitable for
logarithmic amortised analysis; these potential functions in particular
include a variant of Schoenmakers’ potential (a key building block for the
analysis of the splay function) and logarithmic expressions.
* •
We present a type-and-effect system for potential-based resource analysis
capable of expressing logarithmic amortised costs, and prove its soundness.
* •
We report on a preliminary implementation for the logarithmic amortised
analysis of the splay function. With respect to the zig-zig case of the splay
function, our prototype is able to automatically check that the amortised cost
is bounded by $3\log(\lvert{t}\rvert)+1$. All previous results in this respect
required a manual analysis.
#### Outline
The rest of this paper is organised as follows. In Section 2, we review the
key concepts underlying type-based amortised analysis and present our ideas
for their extension. In Section 3, we introduce a simple core language
underlying our reasoning and provide a full definition of splaying, our
running example. The employed class of potential functions is provided in
Section 4, while the type-and-effect system is presented in Section 5. In
Section 7 we report on our ideas for implementing the _weakening_ rule.
Concretely, we see these ideas at work in Section 6, where we employ the
established type-and-effect system on the motivating example of _splaying_. In
Section 8, we present our implementation and design choices in automation. In
Section 9, we present related work and finally, we conclude in Section 10.
## 2 Setting the Stage
Our analysis is formulated in terms of the physicist’s method of amortised
analysis in the style of Sleator and Tarjan [52, 53]. This method assigns a
_potential_ to data structures of interest and defines the _amortised cost_ of
an operation as the sum of the actual cost plus the change of the potential
through execution of the operation, ie. the central idea of an amortised
analysis as formulated by Sleator and Tarjan is to choose a potential function
$\phi$ such that
$\phi(v)+a_{f}(v)=c_{f}(v)+\phi(f(v))\,,$
holds for all inputs $v$ to a function $f$, where $a_{f}$, $c_{f}$ denote the
amortised and total cost, respectively, of executing $f$. Hofmann et al. [32,
23, 25, 24, 33, 34] provide a generalisation of this idea to a set of
potential functions $\phi,\psi$, such that
$\phi(v)\geqslant c_{f}(v)+\psi(f(v))\,,$
holds for all inputs $v$. This allows us to read off an upper bound on the
amortised cost of $f$, ie. we have $a_{f}(v)\leqslant\phi(v)-\psi(v)$. We add
that the above inequality indeed generalises the original formulation, which
can be seen by setting $\phi(v)\mathrel{:=}a_{f}(v)+\psi(v)$.
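These per-operation (in)equalities can be checked mechanically on a toy instance. The sketch below (our own illustration, using the classic stack-with-multipop example rather than splay trees) takes $\phi(\text{stack})$ to be its length, amortised cost 2 for a push and 0 for a multipop, and verifies the Sleator–Tarjan relation operation by operation:

```python
import random

def phi(stack):
    return len(stack)  # potential: number of elements on the stack

def push(stack, x):
    stack.append(x)
    return 1           # actual cost

def multipop(stack, k):
    n = min(k, len(stack))
    del stack[len(stack) - n:]
    return n           # actual cost: one unit per removed element

random.seed(0)
stack, total_actual, total_amortised = [], 0, 0
for _ in range(10_000):
    before = phi(stack)
    if random.random() < 0.7:
        c, a = push(stack, 0), 2                         # amortised cost of push
    else:
        c, a = multipop(stack, random.randint(1, 5)), 0  # amortised cost of multipop
    # per-operation check of  phi(v) + a_f(v) >= c_f(v) + phi(f(v))
    assert before + a >= c + phi(stack)
    total_actual += c
    total_amortised += a

# telescoping over the sequence (phi(initial) = 0):
assert total_actual <= total_amortised - phi(stack)
```

Telescoping the per-operation relation is exactly what turns a low amortised cost into a worst-case bound for the whole sequence; here the relation even holds with equality for both operations.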
In this paper, we present a type-based resource analysis based on the idea of
potential functions that can infer logarithmic amortised cost. Following
previous work by Hofmann et al., we tackle two key problems in order to
achieve a semi-automated logarithmic amortised analysis: 1) Automation is
achieved by a _type-and-effect system_ that uses _template potential
functions_ , ie. functions of a fixed shape with indeterminate coefficients.
Here, the key challenge is to identify templates that are suitable for
logarithmic analysis and that are closed under the basic operations of the
considered programming language. 2) In addition to the actual amortised
analysis with costs, we employ _cost-free_ analysis as a subroutine, setting
the amortised $a_{f}$ and actual costs $c_{f}$ of all functions $f$ to zero.
This enables a _size analysis_ of sorts, because the inequality
$\phi(v)\geqslant\psi(f(v))$ bounds the size of the potential $\psi(f(v))$ in
terms of the potential $\phi(v)$. The size analysis we conduct allows lifting
the analysis of a subprogram to a larger context, which is crucial for
achieving a _compositional analysis_. We overview these two aspects in the
sequel of the section.
### 2.1 Type-and-effect System
To set the scene, we briefly review amortised analysis formulated as a type-
and effect system up to and including the multivariate polynomial analysis,
cf. [37, 28, 27, 23, 24, 33, 34, 26, 39].
_Polynomial Amortised Analysis._ Suppose that we have types
$\alpha,\beta,\gamma,\dots$ representing sets of values. We write
$\llbracket{\alpha}\rrbracket$ for the set of values represented by type
$\alpha$. Types may be constructed from base types such as Booleans and
integers, denoted by $\mathsf{Base}$, and by type formers such as list, tree,
product, sum, etc. For each type $\alpha$, we define a (possibly infinite) set
of _basic potential functions_
$\mathcal{BF}(\alpha)\colon\llbracket{\alpha}\rrbracket\to{\mathbb{R}^{+}_{0}}$.
Thus, if $p\in\mathcal{BF}(\alpha)$ and $v\in\llbracket{\alpha}\rrbracket$
then $p(v)\in{\mathbb{R}^{+}_{0}}$. An _annotated type_ is a pair of a type
$\alpha$ and a function
$Q:\mathcal{BF}(\alpha)\rightarrow{\mathbb{R}^{+}_{0}}$ providing a
coefficient for each _basic potential function_. The function $Q$ must be zero
on all but finitely many basic potential functions. For each annotated type
${\alpha}{\mid}{Q}$, the _potential function_
$\phi_{Q}:\llbracket{\alpha}\rrbracket\rightarrow{\mathbb{R}^{+}_{0}}$ is then
given by
$\phi_{Q}(v)\mathrel{:=}\sum_{p\in\mathcal{BF}(\alpha)}Q(p)\cdot p(v)\,.$
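As a concrete sketch (our own toy instance; following the cited polynomial systems, we take binomial coefficients as the basic potential functions for lists), $\phi_{Q}$ is just a coefficient-weighted sum:

```python
from math import comb

def phi_Q(Q, v):
    # potential of value v under annotation Q: the sum over basic potential
    # functions p_k of Q(p_k) * p_k(v), here with p_k(v) = C(|v|, k) on lists
    return sum(q * comb(len(v), k) for k, q in Q.items())

# Q assigns coefficient 1 to p_0 (the constant function) and 2 to p_1 (linear);
# only finitely many coefficients are non-zero, as required above
Q = {0: 1, 1: 2}
assert phi_Q(Q, [7, 7, 7]) == 1 * comb(3, 0) + 2 * comb(3, 1)  # = 7
```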
By introducing product types, one can regard functions with several arguments
as unary functions, which allows for technically smooth formalisations, cf.
[28, 27, 22]; the analyses in the cited papers are called _univariate_ as the
set of basic potential functions $\mathcal{BF}(\alpha)$ of a product type
$\alpha$ is given directly. In the later _multivariate_ versions of automated
amortised analysis [23, 24, 34] one takes a more fine-grained approach to
products. Namely, one then sets (for arbitrary $n$)
$\displaystyle\mathcal{BF}(\alpha_{1}\times\dots\times\alpha_{n})\mathrel{:=}\mathcal{BF}(\alpha_{1})\times\dots\times\mathcal{BF}(\alpha_{n})\,,\qquad(p_{1},\dots,p_{n})(v_{1},\dots,v_{n})\mathrel{:=}\prod_{i=1}^{n}p_{i}(v_{i})\,.$
Thus, the basic potential function for a product type is obtained as the
multiplication of the basic potential functions of its constituents.${}^{1}$

${}^{1}$ Suppose that for each type $\alpha$ there exists a distinguished element
$u\in\mathcal{BF}(\alpha)$ with $u(a)=1$ for all
$a\in\llbracket{\alpha}\rrbracket$. Then, the multivariate product types
contain all (linear combinations) of the basic potential functions, extending
earlier univariate definitions of product types.
_Automation._ The idea behind this setup is that the basic potential functions
$\mathcal{BF}(\alpha)$ are suitably chosen and fixed by the analysis designer,
the coefficients $Q(p)$ for $p\in\mathcal{BF}(\alpha)$, however, are left
indeterminate and will (automatically) be fixed during the analysis. For this,
constraints over the unknown coefficients are collected in a syntax-directed
way from the function under analysis and then solved by a suitable constraint
solver. The type-and-effect system formalises this collection of constraints
as typing rules, where for each construct of the considered programming
language a typing rule is given that corresponds to constraints over the
coefficients of the annotated types. Expressing the quest for suitable type
annotations as a type-and-effect system allows one to compose typing
judgements in a syntax-oriented way without the need for fixing additional
intermediate results, which is often required by competing approaches. This
syntax-directed approach to amortised analysis has been demonstrated to work
well for datatypes like lists or trees whose basic potential functions are
polynomials over the length of a list resp. the number of nodes of a tree. One
of the reasons why this works well is, e.g., that functional programming
languages typically include dedicated syntax for list construction and that
polynomials are closed under addition by one (ie. if $p(n)$ is a polynomial,
so is $p(n+1)$), supporting the formulation of a suitable typing rule for list
construction, cf. [28, 27, 22, 23, 24]. The syntax-directed approach has been
shown to generalise from lists and trees to general inductive data types, cf.
[33, 34, 41].
_Logarithmic Amortised Analysis._ We now motivate the design choices of our
type-and-effect system. The main objective of our approach is the automated
analysis of data-structures such as splay trees, which have _logarithmic_
amortised cost. The amortised analysis of splay trees is tricky and requires
choosing an adequate potential function: our work makes use of a variant of
Schoenmakers’ potential, $\operatorname{\mathsf{rk}}(t)$ for a tree $t$, cf.
[47, 43], defined inductively by
$\displaystyle\operatorname{\mathsf{rk}}(\mathsf{leaf})\mathrel{:=}1\,,\qquad\operatorname{\mathsf{rk}}((l,d,r))\mathrel{:=}\operatorname{\mathsf{rk}}(l)+\log(\lvert{l}\rvert)+\log(\lvert{r}\rvert)+\operatorname{\mathsf{rk}}(r)\,,$
where $l$, $r$ are the left resp. right child of the tree ($l$, $d$, $r$),
$\lvert{t}\rvert$ denotes the number of leaves of a tree $t$, and $d$ is some
data element that is ignored by the potential function. Besides Schoenmakers’
potential we need to add further basic potential functions to our analysis.
This is motivated as follows: Similar to the polynomial amortised analysis
discussed above we want that the basic potential functions can express the
construction of a tree, e.g., let us consider the function
$f(x,d,y)\mathrel{:=}(x,d,y)\,,$
which constructs the tree ($x$, $d$, $y$) from some trees $x,y$ and some data
element $d$, and let us assume a constant cost $c_{f}(x,y)=1$ for the function
$f$. A type annotation for $f$ is given by
$\displaystyle\underbrace{\operatorname{\mathsf{rk}}(x)+\log(\lvert{x}\rvert)+\operatorname{\mathsf{rk}}(y)+\log(\lvert{y}\rvert)+1}_{\phi(x,y)}\geqslant c_{f}(x,y)+\underbrace{\operatorname{\mathsf{rk}}(f(x,d,y))}_{\psi(f(x,y))}\,,$
ie. the potential $\phi(x,y)$ suffices to pay for the cost $c_{f}$ of
executing $f$ and the potential of the result $\psi(f(x,y))$ (the correctness
of this type can be established directly from the definition of Schoenmakers’
potential). As mentioned above, the logarithmic expressions in $\phi(x,y)$,
ie. $\log(\lvert{x}\rvert)+\log(\lvert{y}\rvert)+1$, specify the _amortised
costs_ of the operation.
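This type annotation can be checked numerically on random trees. The sketch below (ours; it fixes base-2 logarithms, which the text leaves implicit) represents a leaf as `()` and an inner node as a triple `(l, d, r)`:

```python
import math
import random

def leaves(t):
    # |t|: number of leaves of a tree; () is a leaf, (l, d, r) an inner node
    return 1 if t == () else leaves(t[0]) + leaves(t[2])

def rk(t):
    # variant of Schoenmakers' potential:
    # rk(leaf) = 1,  rk((l,d,r)) = rk(l) + log|l| + log|r| + rk(r)
    if t == ():
        return 1.0
    l, _, r = t
    return rk(l) + math.log2(leaves(l)) + math.log2(leaves(r)) + rk(r)

def random_tree(depth):
    if depth == 0 or random.random() < 0.3:
        return ()
    return (random_tree(depth - 1), 0, random_tree(depth - 1))

random.seed(1)
for _ in range(100):
    x, y = random_tree(5), random_tree(5)
    lhs = rk(x) + math.log2(leaves(x)) + rk(y) + math.log2(leaves(y)) + 1
    rhs = 1 + rk((x, 0, y))   # c_f = 1 plus the potential of the result
    assert abs(lhs - rhs) < 1e-9   # the annotation holds, here with equality
```

Unfolding the definition of $\operatorname{\mathsf{rk}}$ on the constructed node shows why the inequality holds with equality: the two logarithmic summands of $\phi(x,y)$ are exactly the potential increase of building the new root, and the remaining $+1$ pays the cost $c_{f}$.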
We see that in order to express the potential $\phi(x,y)$ we also need the
basic potential functions $\log(\lvert{t}\rvert)$ for a tree $t$. In fact, we
will choose the slightly richer set of basic potential functions
$p_{(a,b)}(t)=\log(a\lvert{t}\rvert+b)\,,$
where $a,b\in{\mathbb{N}}$ and $t$ is a tree. We note that by setting $a=0$
and $b=2$ this choice allows us to represent the constant function $u$ with
$u(t)=1$ for all trees $t$. We further note that this choice of potential
functions is sufficiently rich to express that $p_{(a,b)}(t)=p_{(a,b+a)}(s)$
for trees $s,t$ with $\lvert{t}\rvert=\lvert{s}\rvert+1$, which is needed for
precisely expressing the change of potential when a tree is extended by one
node. Further, we define basic potential functions for products of trees by
setting
$p_{(a_{1},\dots,a_{n},b)}(t_{1},\dots,t_{n})=\log({a_{1}\cdot\lvert{t_{1}}\rvert}+\dots+{a_{n}\cdot\lvert{t_{n}}\rvert}+b)\;,$
where $a_{1},\dots,a_{n},b\in{\mathbb{N}}$ and $t_{1},\dots,t_{n}$ is a tuple
of trees. This is sufficiently rich to state the equality
$p_{(a_{0},a_{1},\dots,a_{n},b)}(x_{1},x_{1},x_{2},\dots,x_{n})=p_{(a_{0}+a_{1},a_{2},\dots,a_{n},b)}(x_{1},x_{2},\dots,x_{n})$,
which supports the formulation of a _sharing_ rule, which in turn is needed
for supporting the let-construct in functional programming; cf. [23, 24, 34]
for a more detailed exposition on the sharing rule and the let-construct.
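Both identities above can be checked numerically. The following Python sketch (our own illustration, not part of the formal development) evaluates the basic potential functions $p_{(a_{1},\dots,a_{n},b)}$ on concrete tree sizes:

```python
import math

def p(coeffs, b, sizes):
    """Basic potential log2(a1*|t1| + ... + an*|tn| + b); sizes are tree sizes."""
    return math.log2(sum(a * s for a, s in zip(coeffs, sizes)) + b)

# |t| = |s| + 1  implies  p_(a,b)(t) = p_(a,b+a)(s)
a, b, s_size = 3, 2, 10
assert p([a], b, [s_size + 1]) == p([a], b + a, [s_size])

# sharing: p_(a0,a1,...,an,b)(x1,x1,x2,...,xn) = p_(a0+a1,a2,...,an,b)(x1,x2,...,xn)
a0, a1, a2, bb = 2, 3, 5, 1
x1, x2 = 4, 7
assert p([a0, a1, a2], bb, [x1, x1, x2]) == p([a0 + a1, a2], bb, [x1, x2])
```

Both equalities hold exactly, since in each case the argument of the logarithm evaluates to the same integer.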
### 2.2 Cost-free Semantics
_Polynomial Amortised Analysis._ We begin by reviewing the cost-free semantics
underlying previous work [27, 22, 23, 24] on polynomial amortised analysis.
Assume that we want to analyse the composed function call
$g(f(\vec{x}),\vec{z})$ using already established analysis results for
$f(\vec{x})$ and $g(y,\vec{z})$. Suppose we have already established that for
all $\vec{x}$, $y$, $\vec{z}$ we have:
$\displaystyle\phi_{0}(\vec{x})\geqslant c_{f}(\vec{x})+\beta(f(\vec{x}))$ (1)
$\displaystyle\phi_{i}(\vec{x})\geqslant\phi^{\prime}_{i}(f(\vec{x}))\quad\text{for all $i$ ($0<i\leqslant n$)}$ (2)
$\displaystyle\beta(y)+\gamma(\vec{z})+\sum_{i=1}^{n}\phi^{\prime}_{i}(y)\phi_{i}^{\prime\prime}(\vec{z})\geqslant c_{g}(y,\vec{z})+\psi(g(y,\vec{z}))\;,$ (3)
where, as in the multivariate case above, $n$ is arbitrary and equations (1)
and (3) assume cost, while equation (2) is _cost-free_. Then, we can conclude
for all $\vec{x},\vec{z}$ that
$\underbrace{\phi_{0}(\vec{x})+\gamma(\vec{z})+\sum_{i=1}^{n}\phi_{i}(\vec{x})\phi_{i}^{\prime\prime}(\vec{z})}_{\phi(\vec{x},\vec{z})}\geqslant c_{f}(\vec{x})+c_{g}(f(\vec{x}),\vec{z})+\psi(g(f(\vec{x}),\vec{z}))\;,$
guaranteeing that the potential $\phi(\vec{x},\vec{z})$ suffices to pay for
the cost $c_{f}(\vec{x})$ of computing $f(\vec{x})$, the cost
$c_{g}(f(\vec{x}),\vec{z})$ of computing $g(f(\vec{x}),\vec{z})$ and the
potential $\psi(g(f(\vec{x}),\vec{z}))$ of the result $g(f(\vec{x}),\vec{z})$.
We note that the correctness of this inference hinges on the fact that we can
multiply equation (2) with $\phi^{\prime\prime}_{i}(\vec{z})$ for $i=1\dots
n$, using the monotonicity of the multiplication operation (note that
potential functions are non-negative). We highlight that the multiplication
argument works well with cost-free semantics, and enables lifting the resource
analysis of $f(\vec{x})$ and $g(y,\vec{z})$ to the composed function call
$g(f(\vec{x}),\vec{z})$.
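The chaining can be made fully explicit; the following derivation (our own expansion, using only equations (1)–(3) and monotonicity of multiplication) spells out the two inference steps:

```latex
\begin{align*}
\phi_{0}(\vec{x})+\gamma(\vec{z})+\sum_{i=1}^{n}\phi_{i}(\vec{x})\phi_{i}^{\prime\prime}(\vec{z})
  &\geqslant c_{f}(\vec{x})+\beta(f(\vec{x}))+\gamma(\vec{z})
     +\sum_{i=1}^{n}\phi^{\prime}_{i}(f(\vec{x}))\phi_{i}^{\prime\prime}(\vec{z})
  && \text{by (1), (2), monotonicity}\\
  &\geqslant c_{f}(\vec{x})+c_{g}(f(\vec{x}),\vec{z})+\psi(g(f(\vec{x}),\vec{z}))
  && \text{by (3) with } y=f(\vec{x})\,.
\end{align*}
```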
_Remark._ We point out that the above exposition of cost-free semantics in the
context of polynomial amortised analysis differs from the motivation given in
the literature [27, 22, 23, 24], where cost-free semantics are motivated by
the quest for _resource polymorphism_ , which is the problem of computing (a
representation of) all polynomial potential functions (up to a fixed maximal
degree) for the program under analysis; this problem has been deemed of
importance for the handling of non-tail-recursive programs. We add that for
the amortised cost analysis of inductively generated data-types, the cost-free
semantics proved necessary even for handling basic data-structure
manipulations [33, 34, 42]. In our view, cost-free semantics incorporate a
_size analysis_ of sorts. We observe that equation (2) states that the
potential of the result of the evaluation of $f(\vec{x})$ is bounded by the
potential of the function arguments $\vec{x}$, without accounting for the
costs of this evaluation. Thus, suitably chosen potential functions
$\phi_{i},\phi^{\prime}_{i}$ can act as _norms_ and capture the size of the
result of the evaluation $f(\vec{x})$ in relation to the size of the argument.
As stated above, a separated cost and size analysis enables a compositional
analysis, an insight that we also exploit for logarithmic amortised analysis.
_Logarithmic Amortised Analysis._ Similar to the polynomial case, we want to
analyse the composed function call $g(f(\vec{x}),\vec{z})$ using already
established analysis results for $f(\vec{x})$ and $g(y,\vec{z})$. However, now
we extend the class of potential functions to sublinear functions. Assume that
we have already established that
$\displaystyle\phi_{0}(\vec{x})\geqslant c_{f}(\vec{x})+\beta(f(\vec{x}))$ (4)
$\displaystyle\log(\phi_{i}(\vec{x}))\geqslant\log(\phi_{i}^{\prime}(f(\vec{x})))\qquad\text{for all $i$ ($0<i\leqslant n$)}$ (5)
$\displaystyle\beta(y)+\gamma(\vec{z})+\sum_{i=1}^{n}\log(\phi^{\prime}_{i}(y)+\phi_{i}^{\prime\prime}(\vec{z}))\geqslant c_{g}(y,\vec{z})+\psi(g(y,\vec{z}))\;,$ (6)
where equations (4) and (6) assume cost, while equation (5) is _cost-free_.
Equations (4) and (5) represent the result of an analysis of $f(\vec{x})$
(note that these equations do not contain the parameters $\vec{z}$, which will
however be needed for the analysis of $g(f(\vec{x}),\vec{z})$), and equation
(6) the result of an analysis of $g(y,\vec{z})$. Then, we can conclude for all
$\vec{x},y,\vec{z}$ that
$\underbrace{\phi_{0}(\vec{x})+\gamma(\vec{z})+\sum_{i=1}^{n}\log(\phi_{i}(\vec{x})+\phi_{i}^{\prime\prime}(\vec{z}))}_{\phi(\vec{x},\vec{z})}\geqslant c_{f}(\vec{x})+c_{g}(f(\vec{x}),\vec{z})+\psi(g(f(\vec{x}),\vec{z}))\;,$
guaranteeing that the potential $\phi(\vec{x},\vec{z})$ suffices to pay for
the cost $c_{f}(\vec{x})$ of computing $f(\vec{x})$, the cost
$c_{g}(f(\vec{x}),\vec{z})$ of computing $g(f(\vec{x}),\vec{z})$ and the
potential $\psi(g(f(\vec{x}),\vec{z}))$ of the result $g(f(\vec{x}),\vec{z})$.
Here, we crucially use monotonicity of the logarithm function, as formalised
in Lemma 5.4. This reasoning allows us to lift isolated analyses of the
functions $f(\vec{x})$ and $g(y,\vec{z})$ to the composed function call
$g(f(\vec{x}),\vec{z})$; this is what is required for a compositional
analysis!
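The lifting step behind this inference is the monotonicity of the logarithm: if $u \geqslant v$ then $\log(u+w) \geqslant \log(v+w)$. A quick Python sanity check of this lifted inequality (illustrative only; `log` here is the modified logarithm $\log'$ introduced in Section 4) over random inputs:

```python
import math
import random

log = lambda n: math.log2(max(n, 1))   # the modified logarithm log' of Section 4

# u >= v implies log(u + w) >= log(v + w): the lifting used in the composition
random.seed(0)
for _ in range(1000):
    v = random.randint(1, 100)
    u = v + random.randint(0, 100)     # ensure u >= v
    w = random.randint(0, 100)
    assert log(u + w) >= log(v + w)
```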
_Example._ We now illustrate the compositional reasoning on an example. We
consider the function
$f(x,d,y)\mathrel{:=}(x, d, y)$,
which takes two trees $x,y$ and some data element $d$ and returns the tree
($x$, $d$, $y$). Assume that we already have established that
$\displaystyle\psi(x)+\psi(y)+1\geqslant c_{f}(x,y)+\operatorname{\mathsf{rk}}(f(x,d,y))$ (7)
$\displaystyle\log(\lvert{x}\rvert+\lvert{y}\rvert)\geqslant\log(\lvert{f(x,d,y)}\rvert)\;,$ (8)
where $\psi(u)=\operatorname{\mathsf{rk}}(u)+\log(\lvert{u}\rvert)$,
$c_{f}(x,y)=1$, and $d$ is an arbitrary data element, which is not relevant
for the cost analysis of $f$. We now want to analyse the composed function
$h(x,a,y,b,z):=f(f(x,a,y),b,z)$. We will use the above reasoning,
instantiating equations (4) and (5) with equations (7) and (8) for the inner
function call $f(x,a,y)$, and equation (6) with the sum of equations (7) and
(8) for the outer function call $f(u,b,z)$. As argued above, we can then
conclude for all $x,y,z$ that
$\displaystyle\psi(x)+\psi(y)+\psi(z)+\log(\lvert{x}\rvert+\lvert{y}\rvert)+\log(\lvert{x}\rvert+\lvert{y}\rvert+\lvert{z}\rvert)+2\geqslant c_{f}(x,a,y)+c_{f}(f(x,a,y),b,z)+\psi(f(f(x,a,y),b,z))\;,$
which constitutes a valid resource annotation for
$h(x,a,y,b,z):=f(f(x,a,y),b,z)$; we have
used equation (8) twice in this derivation, once as
$\log(\lvert{x}\rvert+\lvert{y}\rvert)\geqslant\log(\lvert{f(x,a,y)}\rvert)$
and once lifted as
$\log(\lvert{x}\rvert+\lvert{y}\rvert+\lvert{z}\rvert)\geqslant\log(\lvert{f(x,a,y)}\rvert+\lvert{z}\rvert)$.
Note that the above example appears in similar form as part of the
analysis of the splay function described in Section 6.
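The derived bound for $h$ can be validated on concrete trees. The Python sketch below (our own encoding: a leaf is `None`, an internal node a triple `(l, d, r)`; `rk` and $\psi$ as defined in this section) checks the inequality on random inputs:

```python
import math
import random

log = lambda n: math.log2(max(n, 1))

def size(t):            # |t|: number of leaves
    return 1 if t is None else size(t[0]) + size(t[2])

def rk(t):              # rk(leaf) = 1; rk((l,d,r)) = rk(l) + log|l| + log|r| + rk(r)
    if t is None:
        return 1
    return rk(t[0]) + log(size(t[0])) + log(size(t[2])) + rk(t[2])

def psi(t):
    return rk(t) + log(size(t))

def f(x, d, y):         # the constructor f(x,d,y) = (x,d,y), unit cost
    return (x, d, y)

def rand_tree(n):       # a tree with n internal nodes
    return None if n == 0 else (rand_tree(n // 2), 0, rand_tree(n - 1 - n // 2))

random.seed(1)
for _ in range(100):
    x, y, z = (rand_tree(random.randint(0, 8)) for _ in range(3))
    h = f(f(x, 0, y), 0, z)
    lhs = (psi(x) + psi(y) + psi(z) + log(size(x) + size(y))
           + log(size(x) + size(y) + size(z)) + 2)
    rhs = 1 + 1 + psi(h)    # two unit costs plus the potential of the result
    assert lhs + 1e-9 >= rhs
```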
## 3 Motivating Example
In this section, we introduce the syntax of a suitably defined core (first-
order) programming language to be used in the following. Furthermore, we
recall the definition of _splaying_ , following the presentation by Nipkow in
[43]. Splaying constitutes the motivating example for the type-based
logarithmic amortised resource analysis presented in this paper.
To make the presentation more succinct, we assume only the following types:
Booleans
$\mathsf{Bool}=\{\mathsf{true},\mathsf{false}\}$, an abstract base type $\mathsf{Base}$ (abbrev. $\mathsf{B}$),
product types, and binary trees $\mathsf{Tree}$ (abbrev. $\mathsf{T}$), whose
internal nodes are labelled with elements ${b}{\colon}\\!{\mathsf{Base}}$. We
use lower-case Greek letters for the denotation of types. Elements
${t}{\colon}\\!{\mathsf{Tree}}$ are defined by the following grammar which
fixes notation.
$t::=\mathsf{leaf}\mid(t_{1}, b, t_{2})\;.$
The size of a tree is the number of leaves:
$\lvert{\mathsf{leaf}}\rvert\mathrel{:=}1$, $\lvert{(t, a, u)}\rvert\mathrel{:=}\lvert{t}\rvert+\lvert{u}\rvert$.
Expressions are defined as follows and given in _let normal form_ to simplify
the presentation of the semantics and typing rules. In order to ease the
readability, we make use of some mild syntactic sugaring in the presentation
of actual code.
###### Definition 3.1.
$\displaystyle\circ$ $\displaystyle::={<}\mid{>}\mid{=}$
$\displaystyle e$ $\displaystyle::=f\ x_{1}\ \dots\ x_{n}$
$\displaystyle\mid\mathsf{true}\mid\mathsf{false}\mid e_{1}\circ e_{2}$
$\displaystyle\mid\mathsf{if}\ x\ \mathsf{then}\ e_{1}\ \mathsf{else}\ e_{2}$
$\displaystyle\mid(x_{1}, x_{2}, x_{3})\mid\mathsf{leaf}$
$\displaystyle\mid\mathsf{match}\ x\ \mathsf{with}\ |\ \mathsf{leaf}\to e_{1}\ |\ (x_{1}, x_{2}, x_{3})\to e_{2}$
$\displaystyle\mid\mathsf{let}\ x=e_{1}\ \mathsf{in}\ e_{2}\mid x$
We skip the standard definition of integer constants $n\in{\mathbb{Z}}$ as
well as variable declarations, cf. [45]. Furthermore, we omit binary
operations other than the comparison operators. These are of no importance for
the resource analysis, as long as we assume that they emit no actual costs.
A _typing context_ is a mapping from variables $\mathcal{V}$ to types. Type
contexts are denoted by upper-case Greek letters. A program $\mathsf{P}$
consists of a signature $\mathcal{F}$ together with a set of function
definitions of the form $f(x_{1},\dots,x_{n})=e$, where the $x_{i}$ are
variables and $e$ is an expression. A _substitution_ (or _environment_) $\sigma$
is a mapping from variables to values that respects types. Substitutions are
denoted as sets of assignments: $\sigma=\\{{x_{1}\mapsto
t_{1}},\dots,{x_{n}\mapsto t_{n}}\\}$. We write
$\operatorname{\mathsf{dom}}(\sigma)$ ($\operatorname{\mathsf{rg}}(\sigma)$)
to denote the domain (range) of $\sigma$. Let $\sigma$, $\tau$ be
substitutions such that
$\operatorname{\mathsf{dom}}(\sigma)\cap\operatorname{\mathsf{dom}}(\tau)=\varnothing$.
Then we denote the (disjoint) union of $\sigma$ and $\tau$ as
$\sigma\mathrel{\uplus}\tau$. We employ a simple cost-sensitive big-step
semantics based on eager evaluation, whose rules are given in Figure 1. The
judgement $\sigma\vdash^{\ell}e\Rightarrow v$ means that under environment
$\sigma$, expression $e$ is evaluated to value $v$ in exactly $\ell$ steps.
Here, only function applications emit (unit) costs. If we do not take costs
into account, we simply write $\sigma\vdash e\Rightarrow v$.
$\sigma\vdash^{0}\mathsf{false}\Rightarrow\mathsf{false}$ \qquad $\sigma\vdash^{0}\mathsf{true}\Rightarrow\mathsf{true}$ \qquad $\sigma\vdash^{0}\mathsf{leaf}\Rightarrow\mathsf{leaf}$
$\sigma\vdash^{0}(x_{1}, x_{2}, x_{3})\Rightarrow(t, b, u)$, provided $x_{1}\sigma=t$, $x_{2}\sigma=b$, $x_{3}\sigma=u$
$\sigma\vdash^{0}x\Rightarrow v$, provided $x\sigma=v$
$\sigma\vdash^{0}x_{1}\circ x_{2}\Rightarrow b$, where $b$ is the value of $x_{1}\sigma\circ x_{2}\sigma$
$\sigma\vdash^{\ell+1}f\ x_{1}\ \dots\ x_{k}\Rightarrow v$, provided $f(x_{1},\dots,x_{k})=e\in\mathsf{P}$ and $\sigma\vdash^{\ell}e\Rightarrow v$
$\sigma\vdash^{\ell_{1}+\ell_{2}}\mathsf{let}\ x=e_{1}\ \mathsf{in}\ e_{2}\Rightarrow v$, provided $\sigma\vdash^{\ell_{1}}e_{1}\Rightarrow w$ and $\sigma[x\mapsto w]\vdash^{\ell_{2}}e_{2}\Rightarrow v$
$\sigma\vdash^{\ell}\mathsf{match}\ x\ \mathsf{with}\ |\ \mathsf{leaf}\to e_{1}\ |\ (x_{0}, x_{1}, x_{2})\to e_{2}\Rightarrow v$, provided $x\sigma=\mathsf{leaf}$ and $\sigma\vdash^{\ell}e_{1}\Rightarrow v$
$\sigma\vdash^{\ell}\mathsf{match}\ x\ \mathsf{with}\ |\ \mathsf{leaf}\to e_{1}\ |\ (x_{0}, x_{1}, x_{2})\to e_{2}\Rightarrow v$, provided $x\sigma=(t, a, u)$ and $\sigma^{\prime}\vdash^{\ell}e_{2}\Rightarrow v$
$\sigma\vdash^{\ell}\mathsf{if}\ x\ \mathsf{then}\ e_{1}\ \mathsf{else}\ e_{2}\Rightarrow v$, provided $x\sigma=\mathsf{true}$ and $\sigma\vdash^{\ell}e_{1}\Rightarrow v$
$\sigma\vdash^{\ell}\mathsf{if}\ x\ \mathsf{then}\ e_{1}\ \mathsf{else}\ e_{2}\Rightarrow v$, provided $x\sigma=\mathsf{false}$ and $\sigma\vdash^{\ell}e_{2}\Rightarrow v$
Here $\sigma[x\mapsto w]$ denotes the update of the environment $\sigma$ such
that $\sigma[x\mapsto w](x)=w$ and the value of all other variables remains
unchanged. Furthermore, in the second match rule, we set
$\sigma^{\prime}\mathrel{:=}\sigma\mathrel{\uplus}\\{x_{0}\mapsto
t,x_{1}\mapsto a,x_{2}\mapsto u\\}$.
Figure 1: Big-Step Semantics
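The cost discipline of Figure 1 can be prototyped directly. The following Python sketch (our own encoding of expressions as tagged tuples; not part of the paper's formalisation) implements the big-step judgement as an evaluator returning a value together with the emitted cost; only function applications add a unit cost.

```python
# expression encoding (our convention): ('var', x), ('leaf',), ('node', x1, x2, x3),
# ('cmp', op, x1, x2), ('if', x, e1, e2), ('let', x, e1, e2),
# ('match', x, e_leaf, (x0, x1, x2, e_node)), ('call', f, [x1, ..., xk])
def eval_exp(P, sigma, e):
    """Return (value, cost); trees are None (leaf) or triples (l, a, r)."""
    tag = e[0]
    if tag == 'var':
        return sigma[e[1]], 0
    if tag == 'leaf':
        return None, 0
    if tag == 'node':
        return (sigma[e[1]], sigma[e[2]], sigma[e[3]]), 0
    if tag == 'cmp':
        op, a, b = e[1], sigma[e[2]], sigma[e[3]]
        return {'<': a < b, '>': a > b, '=': a == b}[op], 0
    if tag == 'if':
        return eval_exp(P, sigma, e[2] if sigma[e[1]] else e[3])
    if tag == 'let':
        w, c1 = eval_exp(P, sigma, e[2])
        v, c2 = eval_exp(P, {**sigma, e[1]: w}, e[3])
        return v, c1 + c2
    if tag == 'match':
        x, e1, (x0, x1, x2, e2) = sigma[e[1]], e[2], e[3]
        if x is None:
            return eval_exp(P, sigma, e1)
        t, a, u = x
        return eval_exp(P, {**sigma, x0: t, x1: a, x2: u}, e2)
    if tag == 'call':
        params, body = P[e[1]]
        v, c = eval_exp(P, {p: sigma[x] for p, x in zip(params, e[2])}, body)
        return v, c + 1          # only function application emits unit cost
    raise ValueError(tag)

# example: left t = match t with | leaf -> leaf | (l, a, r) -> l
P = {'left': (['t'], ('match', 't', ('leaf',), ('l', 'a', 'r', ('var', 'l'))))}
v, c = eval_exp(P, {'y': (None, 7, None)}, ('call', 'left', ['y']))
assert v is None and c == 1      # one function application, hence unit cost
```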
splay a t = match t with
  | leaf -> leaf
  | (cl, c, cr) ->
    if a = c then (cl, c, cr)
    else if a < c then match cl with
      | leaf -> (cl, c, cr)
      | (bl, b, br) ->
        if a = b then (bl, a, (br, c, cr))
        else if a < b
          then if bl = leaf then (bl, b, (br, c, cr))
            else match splay a bl with
              | (al, a', ar) -> (al, a', (ar, b, (br, c, cr)))
          else if br = leaf then (bl, b, (br, c, cr))
            else match splay a br with
              | (al, a', ar) -> ((bl, b, al), a', (ar, c, cr))
    else match cr with
      | leaf -> (cl, c, cr)
      | (bl, b, br) ->
        if a = b then ((cl, c, bl), a, br)
        else if a < b
          then if bl = leaf then ((cl, c, bl), b, br)
            else match splay a bl with
              | (al, a', ar) -> ((cl, c, al), a', (ar, b, br))
          else if br = leaf then ((cl, c, bl), b, br)
            else match splay a br with
              | (al, x, xa) -> (((cl, c, bl), b, al), x, xa)
Figure 2: Function splay.
_Splay trees_ have been introduced by Sleator and Tarjan [52, 53] as self-
adjusting binary search trees with strictly increasing in-order traversal.
There is no explicit balancing condition. All operations rely on a tree
rotating operation dubbed _splaying_ ; splay a t is performed by rotating
element $a$ to the root of tree $t$ while keeping in-order traversal intact.
If $a$ is not contained in $t$, then the last element found before reaching a
leaf is rotated to the root of the tree. The complete definition is given in
Figure 2. Based on
splaying, searching is performed by splaying with the sought element and
comparing it to the root of the result. Similarly, the definitions of
insertion and deletion depend on splaying. As an example, the definitions of
insert and delete are given in Figures 3 and 4, respectively. See also [43]
for full algorithmic, formally verified descriptions.
All basic operations can be performed in $O(\log n)$ amortised runtime. The
logarithmic amortised complexity is crucially achieved by local rotations of
subtrees in the definition of splay. Amortised cost analysis of splaying has
been provided for example by Sleator and Tarjan [52], Schoenmakers [47],
Nipkow [43], Okasaki [44], among others. Below, we follow Nipkow’s approach,
where the actual cost of splaying is measured by counting the number of calls
to
$\mathsf{splay}\colon\mathsf{B}\times\mathsf{T}\to\mathsf{T}$.
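A direct transliteration of Figure 2 into Python (our own rendering, with a leaf encoded as `None` and nodes as triples; the hypothetical counter `calls` mirrors the cost measure of counting splay calls) lets one test that splaying an element of a binary search tree moves it to the root while preserving the in-order traversal:

```python
import random

calls = 0                       # counts calls to splay, mirroring the cost measure

def splay(a, t):
    """Transliteration of Figure 2; leaf is None, nodes are triples (l, a, r)."""
    global calls
    calls += 1
    if t is None:
        return None
    cl, c, cr = t
    if a == c:
        return (cl, c, cr)
    if a < c:
        if cl is None:
            return (cl, c, cr)
        bl, b, br = cl
        if a == b:
            return (bl, a, (br, c, cr))
        if a < b:
            if bl is None:
                return (bl, b, (br, c, cr))
            al, a2, ar = splay(a, bl)
            return (al, a2, (ar, b, (br, c, cr)))
        if br is None:
            return (bl, b, (br, c, cr))
        al, a2, ar = splay(a, br)
        return ((bl, b, al), a2, (ar, c, cr))
    if cr is None:
        return (cl, c, cr)
    bl, b, br = cr
    if a == b:
        return ((cl, c, bl), a, br)
    if a < b:
        if bl is None:
            return ((cl, c, bl), b, br)
        al, a2, ar = splay(a, bl)
        return ((cl, c, al), a2, (ar, b, br))
    if br is None:
        return ((cl, c, bl), b, br)
    al, a2, ar = splay(a, br)
    return (((cl, c, bl), b, al), a2, ar)

def inorder(t):
    return [] if t is None else inorder(t[0]) + [t[1]] + inorder(t[2])

def bst_insert(t, v):           # plain BST insertion, used only to build test inputs
    if t is None:
        return (None, v, None)
    l, a, r = t
    if v < a:
        return (bst_insert(l, v), a, r)
    if v > a:
        return (l, a, bst_insert(r, v))
    return t

random.seed(0)
for _ in range(50):
    vals = random.sample(range(100), 15)
    t = None
    for v in vals:
        t = bst_insert(t, v)
    for v in vals:
        s = splay(v, t)
        assert s[1] == v                   # the sought element ends up at the root
        assert inorder(s) == inorder(t)    # in-order traversal is preserved
assert calls > 0
```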
insert a t = if t = leaf then (leaf, a, leaf)
  else match splay a t with
    | (l, a', r) ->
      if a = a' then (l, a, r)
      else if a < a' then (l, a, (leaf, a', r))
      else ((l, a', leaf), a, r)
Figure 3: Function insert.
delete a t = if t = leaf then leaf
  else match splay a t with
    | (l, a', r) ->
      if a = a' then if l = leaf then r
        else match splay_max l with
          | (l', m, r') -> (l', m, r)
      else (l, a', r)

splay_max t = match t with
  | leaf -> leaf
  | (l, b, r) -> match r with
    | leaf -> (l, b, leaf)
    | (rl, c, rr) ->
      if rr = leaf then ((l, b, rl), c, leaf)
      else match splay_max rr with
        | (rrl, x, xa) -> (((l, b, rl), c, rrl), x, xa)
Figure 4: Functions delete and splay_max.
## 4 Resource Functions
In this section, we detail the basic potential functions employed and clarify
the definition of potentials used.
Only trees are assigned non-zero potential. This is not a severe restriction,
as potentials for basic datatypes would only become essential if constructing
values of such types emitted actual costs, which is not the case in our
context. Moreover, note that lists can be conceived as trees of particular
shape. The potential $\Phi(t)$ of a tree $t$ is given as a non-negative linear
combination of basic functions, which essentially amount to “sums of logs”,
cf. Schoenmakers [47]. It suffices to specify the basic functions for the type
of trees $\mathsf{T}$. As already mentioned in Section 2, the _rank_
$\operatorname{\mathsf{rk}}(t)$ of a tree is defined as follows
$\displaystyle\operatorname{\mathsf{rk}}(\mathsf{leaf})$ $\displaystyle\mathrel{:=}1$
$\displaystyle\operatorname{\mathsf{rk}}((t, a, u))$ $\displaystyle\mathrel{:=}\operatorname{\mathsf{rk}}(t)+\log^{\prime}(\lvert{t}\rvert)+\log^{\prime}(\lvert{u}\rvert)+\operatorname{\mathsf{rk}}(u)\;.$
We set $\log^{\prime}(n)\mathrel{:=}\log_{2}(\max\{n,1\})$, that is, the (binary) logarithm function is totalised over all natural numbers. This is merely a technicality, introduced to ease the presentation. In the following, we denote the modified logarithm function simply as $\log$. Furthermore, recall that $\lvert{t}\rvert$ denotes the number of leaves in the tree $t$. The definition of “rank” is inspired by the definition of potential in [47, 43], but subtly changed to suit our context.
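The size measure, the totalised logarithm and the rank function admit a direct computation. The following Python sketch is illustrative only; the encoding of leaf as None and of a node as a triple (l, a, r) is an assumption made here:

```python
import math

def logp(n):
    # log'(n) := log2(max(n, 1)); in particular log'(0) = log'(1) = 0.
    return math.log2(max(n, 1))

def size(t):
    # |t| = number of leaves of the tree t.
    if t is None:
        return 1
    l, _, r = t
    return size(l) + size(r)

def rk(t):
    # rk(leaf) = 1; rk((l, a, r)) = rk(l) + log'|l| + log'|r| + rk(r).
    if t is None:
        return 1
    l, _, r = t
    return rk(l) + logp(size(l)) + logp(size(r)) + rk(r)
```

For instance, the complete tree of three nodes ((leaf, a, leaf), b, (leaf, c, leaf)) has 4 leaves and rank 2 + 1 + 1 + 2 = 6.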
###### Definition 4.1.
The _basic potential functions_ of $\mathsf{Tree}$ are
* •
$\lambda t.\operatorname{\mathsf{rk}}(t)$, or
* •
$p_{(a,b)}\mathrel{:=}\lambda t.\log(a\cdot\lvert{t}\rvert+b)$, where $a,b$
are natural numbers.
The basic functions are denoted as $\mathcal{BF}$. Note that the constant function $1$ is representable: $p_{(0,2)}=\lambda t.\log(0\cdot\lvert{t}\rvert+2)=\lambda t.\,1$.
Following the recipe of the high-level description in Section 2, potentials or
more generally _resource functions_ become definable as linear combinations of
basic potential functions.
###### Definition 4.2.
A _resource function_ $r\colon\llbracket{\mathsf{T}}\rrbracket\to{\mathbb{R}^{+}_{0}}$ is a non-negative linear combination of basic potential functions, that is,
$r(t)\mathrel{:=}\sum_{i\in{\mathbb{N}}}q_{i}\cdot p_{i}(t)\;,$
where $p_{i}\in\mathcal{BF}$, $q_{i}\in{\mathbb{Q}^{+}_{0}}$ and all but finitely many $q_{i}$ are zero. The set of resource functions is denoted as $\mathcal{RF}$.
We employ $\ast$, natural numbers $i$ and pairs of natural numbers
$(a,b)_{a,b\in{\mathbb{N}}}$ as indices of the employed basic potential
functions. A _resource annotation over $\mathsf{T}$_, or simply _annotation_, is a sequence $Q=[q_{\ast}]\cup[(q_{(a,b)})_{a,b\in{\mathbb{N}}}]$, where $q_{\ast},q_{(a,b)}\in{\mathbb{Q}^{+}_{0}}$ and all but finitely many of the coefficients $q_{\ast},q_{(a,b)}$ are equal to 0. It represents a (finite) linear combination of basic potential functions, that is, a resource function. The empty annotation, that is, the annotation where all coefficients are set to zero, is denoted as $\varnothing$.
###### Remark 4.3.
We use the convention that the sequence elements of resource annotations are
denoted by the lower-case letter of the annotation, potentially with
corresponding sub- or superscripts.
###### Definition 4.4.
The _potential_ of a tree $t$ with respect to an annotation $Q$, that is,
$Q=[q_{\ast}]\cup[(q_{(a,b)})_{a,b\in{\mathbb{N}}}]$, is defined as follows.
$\Phi({t}{\mid}{Q})\mathrel{:=}q_{\ast}\cdot\operatorname{\mathsf{rk}}(t)+\sum_{a,b\in{\mathbb{N}}}q_{(a,b)}\cdot p_{(a,b)}(t)\;,$
where $p_{(a,b)}(t)=\log(a\cdot\lvert{t}\rvert+b)$ and $\operatorname{\mathsf{rk}}$ is the rank function defined above.
###### Example 4.5.
Let $t$ be a tree, then its potential could be defined as follows:
$\operatorname{\mathsf{rk}}(t)+3\cdot\log(\lvert{t}\rvert)+1$. With respect to
the above definition this potential becomes representable by setting
$q_{\ast}\mathrel{:=}1,q_{(1,0)}\mathrel{:=}3,q_{(0,2)}\mathrel{:=}1$. Thus,
$\Phi({t}{\mid}{Q})=\operatorname{\mathsf{rk}}(t)+3\cdot\log(\lvert{t}\rvert)+1$.
∎
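The potential of Example 4.5 can be evaluated mechanically. The sketch below represents an annotation as the rank coefficient $q_\ast$ plus a finite dictionary of coefficients $q_{(a,b)}$; the encoding of trees as None/triples is an assumption made here for illustration:

```python
import math

def logp(n):
    # log'(n) := log2(max(n, 1))
    return math.log2(max(n, 1))

def size(t):
    # |t| = number of leaves; None encodes leaf.
    return 1 if t is None else size(t[0]) + size(t[2])

def rk(t):
    # rk(leaf) = 1; rk((l, a, r)) = rk(l) + log'|l| + log'|r| + rk(r).
    if t is None:
        return 1
    l, _, r = t
    return rk(l) + logp(size(l)) + logp(size(r)) + rk(r)

def potential(t, q_star, q):
    # Phi(t | Q) = q_* * rk(t) + sum over (a,b) of q_(a,b) * log'(a|t| + b)
    return q_star * rk(t) + sum(c * logp(a * size(t) + b)
                                for (a, b), c in q.items())
```

With q_star = 1 and q = {(1, 0): 3, (0, 2): 1}, potential(t, q_star, q) evaluates $\operatorname{\mathsf{rk}}(t)+3\log(\lvert{t}\rvert)+1$, as in the example.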
We emphasise that the representation as a linear combination defined above is not unique, as the basic potential functions are not linearly independent. Consider, for example, $\log(2\lvert{t}\rvert+2)=\log(\lvert{t}\rvert+1)+1$.
_Analysis of Products of Trees._ We now lift the basic potential functions
$p_{(a,b)}$ of a single tree to products of trees. As discussed in Section 2,
we define the potential functions $p_{(a_{1},\dots,a_{m},b)}$ for a sequence
of $m$ trees $t_{1},\dots,t_{m}$, by setting:
$p_{(a_{1},\dots,a_{m},b)}(t_{1},\dots,t_{m})\mathrel{:=}\log({a_{1}\cdot\lvert{t_{1}}\rvert}+\dots+{a_{m}\cdot\lvert{t_{m}}\rvert}+b)\;,$
where $a_{1},\dots,a_{m},b\in{\mathbb{N}}$. Equipped with this definition, we
generalise annotations to sequences of trees. An annotation for a sequence of
length $m$ is a sequence
$Q=[q_{1},\dots,q_{m}]\cup[(q_{(a_{1},\dots,a_{m},b)})_{a_{i},b\in{\mathbb{N}}}]$,
again vanishing almost everywhere. Note that an annotation of length $1$ is
simply an annotation as defined above, where the coefficient $q_{1}$ is set
equal to the coefficient $q_{\ast}$. Based on this, the potential of a
sequence of trees $t_{1},\dots,t_{m}$ is defined as follows:
###### Definition 4.6.
Let $t_{1},\dots,t_{m}$ be trees and let
$Q=[q_{1},\dots,q_{m}]\cup[(q_{(a_{1},\dots,a_{m},b)})_{a_{i},b\in{\mathbb{N}}}]$
be an annotation of length $m$ as above. We define
$\Phi({t_{1},\dots,t_{m}}{\mid}{Q})\mathrel{:=}\sum_{i=1}^{m}q_{i}\cdot\operatorname{\mathsf{rk}}(t_{i})+\sum_{a_{1},\dots,a_{m},b\in{\mathbb{N}}}q_{(a_{1},\dots,a_{m},b)}\cdot p_{(a_{1},\dots,a_{m},b)}(t_{1},\dots,t_{m})\;,$
where
$p_{(a_{1},\dots,a_{m},b)}(t_{1},\dots,t_{m})\mathrel{:=}\log({a_{1}\cdot\lvert{t_{1}}\rvert}+\dots+{a_{m}\cdot\lvert{t_{m}}\rvert}+b)$
as defined above. Note that for an empty sequence of trees, we have $\Phi({\epsilon}{\mid}{Q})\mathrel{:=}\sum_{b\in{\mathbb{N}}}q_{(b)}\log(b)$.
Let $t$ be a tree. Note that the rank function $\operatorname{\mathsf{rk}}(t)$
amounts to the sum of the logarithms of the size of subtrees of $t$. In
particular if the tree $t$ simplifies to a list of length $n$, then
$\operatorname{\mathsf{rk}}(t)=(n+1)+\sum_{i=1}^{n}\log(i)$. Moreover, as
$\sum_{i=1}^{n}\log(i)\in\Theta(n\log n)$, the above defined potential
functions are sufficiently rich to express linear combinations of sub- and
super-linear functions.
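The closed form for the rank of a list-shaped tree can be checked numerically. The sketch below builds a right-spine of $n$ nodes and compares $\operatorname{\mathsf{rk}}$ against $(n+1)+\sum_{i=1}^{n}\log i$; the None/triple encoding of trees is an assumption made here for illustration:

```python
import math

def logp(n):
    return math.log2(max(n, 1))

def size(t):
    return 1 if t is None else size(t[0]) + size(t[2])

def rk(t):
    if t is None:
        return 1
    l, _, r = t
    return rk(l) + logp(size(l)) + logp(size(r)) + rk(r)

def spine(n):
    # A list of length n as a tree: every left child is a leaf.
    t = None
    for i in range(n):
        t = (None, i, t)
    return t

def closed_form(n):
    # (n + 1) + sum_{i=1}^{n} log(i)
    return (n + 1) + sum(math.log2(i) for i in range(1, n + 1))
```

For example, rk(spine(10)) agrees with closed_form(10) up to rounding, and the sum term grows as $\Theta(n\log n)$.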
Let $\sigma$ denote a substitution, let $\Gamma$ denote a typing context and
let ${{x_{1}}{\colon}\\!{\mathsf{T}}},\dots,{{x_{m}}{\colon}\\!{\mathsf{T}}}$
denote all tree types in $\Gamma$. A _resource annotation for $\Gamma$_ or
simply _annotation_ is an annotation for the sequence of trees
${x_{1}\sigma},\dots,{x_{m}\sigma}$. We define the _potential_ of $\Gamma$ with respect to $\sigma$ and $Q$ as
$\Phi({\sigma};{\Gamma}{\mid}{Q})\mathrel{:=}\Phi({{x_{1}\sigma},\dots,{x_{m}\sigma}}{\mid}{Q})$.
###### Definition 4.7.
An _annotated signature_ $\overline{\mathcal{F}}$ is a mapping from functions
$f$ to sets of pairs consisting of the annotation type for the arguments of
$f$, ${\alpha_{1}\times\dots\times\alpha_{n}}{\mid}{Q}$ and the annotation
type ${\beta}{\mid}{Q^{\prime}}$ for the result:
$\overline{\mathcal{F}}(f)$ is the set of all annotated types ${\alpha_{1}\times\dots\times\alpha_{n}}{\mid}{Q}\to{\beta}{\mid}{Q^{\prime}}$ such that $f$ takes $n$ arguments, of which $m$ are trees, $Q$ is a resource annotation of length $m$, and $Q^{\prime}$ is a resource annotation of length $1$.
Note that $m\leqslant n$ by definition.
We confuse the signature and the annotated signature and denote the latter
simply as $\mathcal{F}$. Instead of
${\alpha_{1}\times\dots\times\alpha_{n}}{\mid}{Q}\to{\beta}{\mid}{Q^{\prime}}\in\mathcal{F}(f)$,
we typically write
${f}{\colon}\\!{{\alpha_{1}\times\dots\times\alpha_{n}}{\mid}{Q}\to{\beta}{\mid}{Q^{\prime}}}$.
As our analysis makes use of a _cost-free semantics_, any function symbol may additionally be equipped with a _cost-free_ signature, independent of $\mathcal{F}$.
The cost-free signature is denoted as $\mathcal{F}^{\text{cf}}$.
###### Example 4.8.
Consider the function splay: $\mathsf{B}\times\mathsf{T}\to\mathsf{T}$. The
induced annotated signature is given as
${\mathsf{B}\times\mathsf{T}}{\mid}{Q}\to{\mathsf{T}}{\mid}{Q^{\prime}}$,
where $Q\mathrel{:=}[q_{\ast}]\cup[(q_{(a,b)})_{a,b\in{\mathbb{N}}}]$ and
$Q^{\prime}\mathrel{:=}[q^{\prime}_{\ast}]\cup[(q^{\prime}_{(a,b)})_{a,b\in{\mathbb{N}}}]$.
The logarithmic amortised cost of splaying is then expressible through the
following setting: $q_{\ast}\mathrel{:=}1$, $q_{(1,0)}=3$, $q_{(0,2)}=1$,
$q^{\prime}_{\ast}\mathrel{:=}1$. All other coefficients are zero.
This amounts to a potential of the arguments
$\operatorname{\mathsf{rk}}(t)+3\log(\lvert{t}\rvert)+1$, while for the result
we consider only its rank, that is, the annotation expresses
$3\log(\lvert{t}\rvert)+1$ as the logarithmic cost of splaying. The
correctness of the induced logarithmic amortised costs for the zig-zig case of
splaying is verified in Section 6 and is also automatically verified by our
prototype. ∎
Suppose $t_{1},\dots,t_{n},u_{1},u_{2}$ is a sequence of trees of length $n+2$ with annotation $Q$. Suppose $u_{1}=u_{2}$ and we want to _share_ the values $u_{i}$, that is, the corresponding function arguments appear multiple times in the body of the function definition. Then we make use of the operator $\curlyvee\\!({Q})$, which adapts the potential suitably. This operator is also called the _sharing operator_.
###### Lemma 4.9.
Let $t_{1},\dots,t_{n},u_{1},u_{2}$ denote a sequence of trees of length $n+2$
with annotation $Q$. Then there exists a resource annotation
$\curlyvee\\!({Q})$ such that
$\Phi({t_{1},\dots,t_{n},u_{1},u_{2}}{\mid}{Q})=\Phi({t_{1},\dots,t_{n},u}{\mid}{\curlyvee\\!({Q})})$,
if $u_{1}=u_{2}=u$.
###### Proof 4.10.
Wlog. we assume $n=0$. Thus, let
$Q=[q_{1},q_{2}]\cup[(q_{(a_{1},a_{2},b)})_{a_{i}\in{\mathbb{N}}}]$. By
definition
$\Phi({u_{1},u_{2}}{\mid}{Q})=q_{1}\cdot\operatorname{\mathsf{rk}}(u_{1})+q_{2}\cdot\operatorname{\mathsf{rk}}(u_{2})+\sum_{a_{1},a_{2},b\in{\mathbb{N}}}q_{(a_{1},a_{2},b)}\cdot p_{(a_{1},a_{2},b)}(u_{1},u_{2})\;,$
where
$p_{(a_{1},a_{2},b)}(u_{1},u_{2})=\log(a_{1}\cdot\lvert{u_{1}}\rvert+a_{2}\cdot\lvert{u_{2}}\rvert+b)$.
By assumption $u=u_{1}=u_{2}$. Thus, we obtain
$\displaystyle\Phi({u,u}{\mid}{Q})$
$\displaystyle=q_{1}\cdot\operatorname{\mathsf{rk}}(u)+q_{2}\cdot\operatorname{\mathsf{rk}}(u)+\sum_{a_{1},a_{2},b\in{\mathbb{N}}}q_{(a_{1},a_{2},b)}\cdot p_{(a_{1},a_{2},b)}(u,u)$
$\displaystyle=(q_{1}+q_{2})\cdot\operatorname{\mathsf{rk}}(u)+\sum_{a,b\in{\mathbb{N}}}\Bigl(\sum_{a_{1}+a_{2}=a}q_{(a_{1},a_{2},b)}\Bigr)\cdot p_{(a,b)}(u)$
$\displaystyle=\Phi({u}{\mid}{\curlyvee\\!({Q})})\;,$
for a suitably defined annotation $\curlyvee\\!({Q})$, whose definition can be directly read off from the above constraints.
We emphasise that the definability of the sharing annotation
$\curlyvee\\!({Q})$ is based on the fact that the basic potential functions
$p_{(a_{1},\dots,a_{m},b)}$ have been carefully chosen so that
$p_{(a_{0},a_{1},a_{2},\dots,a_{m},b)}(x_{1},x_{1},x_{2},\dots,x_{m})=p_{(a_{0}+a_{1},a_{2},\dots,a_{m},b)}(x_{1},x_{2},\dots,x_{m})$
holds, cf. Section 2.
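Lemma 4.9 suggests a direct computation of the sharing annotation. In the sketch below (the case $n=0$; the None/triple tree encoding and the dictionary representation of annotations are assumptions made for illustration), an annotation over two trees with coefficients $q_1$, $q_2$ and $q_{(a_1,a_2,b)}$ is contracted to one over a single tree by summing rank coefficients and collapsing $(a_1,a_2,b)$ to $(a_1+a_2,b)$:

```python
import math

def logp(n):
    return math.log2(max(n, 1))

def size(t):
    return 1 if t is None else size(t[0]) + size(t[2])

def rk(t):
    if t is None:
        return 1
    l, _, r = t
    return rk(l) + logp(size(l)) + logp(size(r)) + rk(r)

def phi2(u1, u2, q1, q2, q):
    # Phi(u1, u2 | Q) for an annotation of length 2.
    return (q1 * rk(u1) + q2 * rk(u2) +
            sum(c * logp(a1 * size(u1) + a2 * size(u2) + b)
                for (a1, a2, b), c in q.items()))

def phi1(u, q_star, q):
    # Phi(u | Q) for an annotation of length 1.
    return q_star * rk(u) + sum(c * logp(a * size(u) + b)
                                for (a, b), c in q.items())

def share(q1, q2, q):
    # Sharing operator: q'_* = q1 + q2, and q'_(a1+a2, b) accumulates q_(a1, a2, b).
    out = {}
    for (a1, a2, b), c in q.items():
        out[(a1 + a2, b)] = out.get((a1 + a2, b), 0) + c
    return q1 + q2, out
```

For any tree u, phi2(u, u, q1, q2, q) equals phi1(u, *share(q1, q2, q)), as the lemma asserts.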
###### Remark 4.11.
We observe that, if the type system is conceived as a proof system, the proof-theoretic analogue of the sharing operation constitutes a contraction rule.
Let $Q=[q_{\ast}]\cup[(q_{(a,b)})_{a,b\in{\mathbb{N}}}]$ be an annotation and
let $K\in{\mathbb{Q}^{+}_{0}}$. Then we define $Q^{\prime}\mathrel{:=}Q+K$ as
follows:
$Q^{\prime}=[q_{\ast}]\cup[(q^{\prime}_{(a,b)})_{a,b\in{\mathbb{N}}}]$, where
$q^{\prime}_{(0,2)}\mathrel{:=}q_{(0,2)}+K$ and for all $(a,b)\not=(0,2)$
$q_{(a,b)}^{\prime}\mathrel{:=}q_{(a,b)}$. By definition, the coefficient $q_{(0,2)}$ belongs to the basic potential function $p_{(0,2)}(t)=\log(0\cdot\lvert{t}\rvert+2)=1$, so the annotation $Q+K$ adds cost $K$ to the potential induced by $Q$. Further, we define the multiplication of
an annotation $Q$ by a constant $K$, denoted as $K\cdot Q$ pointwise.
Moreover, let $P=[p_{\ast}]\cup[(p_{(a,b)})_{a,b\in{\mathbb{N}}}]$ be another
annotation. Then the addition $P+Q$ of annotations $P,Q$ is similarly defined
pointwise.
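The operations on annotations just defined are straightforward to implement. The following sketch represents an annotation as a dictionary mapping the index '*' and pairs (a, b) to rational coefficients; this representation is an assumption made here for illustration:

```python
from fractions import Fraction

def add_const(q, k):
    # Q + K: add K to the coefficient of p_(0,2), i.e. of the constant function 1.
    out = dict(q)
    out[(0, 2)] = out.get((0, 2), Fraction(0)) + k
    return out

def scale(k, q):
    # K . Q: pointwise multiplication by the constant K.
    return {idx: k * c for idx, c in q.items()}

def add(p, q):
    # P + Q: pointwise addition of two annotations.
    out = dict(p)
    for idx, c in q.items():
        out[idx] = out.get(idx, Fraction(0)) + c
    return out
```

For example, starting from the annotation of Example 4.5, add_const adds cost K via the $(0,2)$ coefficient while leaving $q_\ast$ untouched.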
## 5 Logarithmic Amortised Resource Analysis
$\dfrac{k=q^{\prime}_{\ast}\qquad\forall c\geqslant 2\colon\ q_{(c)}=\sum_{a+b=c}q^{\prime}_{(a,b)}}{{\varnothing}{\mid}{Q+k}\vdash{\textsf{leaf}}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}}}\ (\mathsf{leaf})$

$\dfrac{{\Gamma}{\mid}{Q}\vdash{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}\qquad K\geqslant 0}{{\Gamma}{\mid}{Q+K}\vdash{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}+K}}\ (\mathsf{shift})$

$\dfrac{{\Gamma}{\mid}{R}\vdash{e}{\colon}\\!{\beta}{\mid}{Q^{\prime}}\qquad r_{i}=q_{i}\qquad r_{(\vec{a},b)}=q_{(\vec{a},0,b)}}{{\Gamma,{x}{\colon}\\!{\alpha}}{\mid}{Q}\vdash{e}{\colon}\\!{\beta}{\mid}{Q^{\prime}}}\ (\mathsf{w{:}var})$

$\dfrac{q_{1}=q_{2}=q^{\prime}_{\ast}\qquad q_{(1,0,0)}=q_{(0,1,0)}=q^{\prime}_{\ast}\qquad q_{(a,a,c)}=q^{\prime}_{(a,c)}}{{{x_{1}}{\colon}\\!{\mathsf{T}},{x_{2}}{\colon}\\!{\mathsf{B}},{x_{3}}{\colon}\\!{\mathsf{T}}}{\mid}{Q}\vdash{(x_{1},x_{2},x_{3})}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}}}\ (\mathsf{node})$

$\dfrac{\circ\in\{<,>,=\}}{{{x_{1}}{\colon}\\!{\alpha},{x_{2}}{\colon}\\!{\alpha}}{\mid}{Q}\vdash{x_{1}\circ x_{2}}{\colon}\\!{\mathsf{Bool}}{\mid}{Q}}\ (\mathsf{cmp})$

$\dfrac{{\Gamma}{\mid}{Q}\vdash{e_{1}}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}\qquad{\Gamma}{\mid}{Q}\vdash{e_{2}}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}}{{\Gamma,{x}{\colon}\\!{\mathsf{Bool}}}{\mid}{Q}\vdash{\textsf{if }x\textsf{ then }e_{1}\textsf{ else }e_{2}}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}}\ (\mathsf{ite})$

$\dfrac{\begin{array}{c}{\Gamma}{\mid}{P+q_{m+1}}\vdash{e_{1}}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}\qquad{\Gamma,{x_{1}}{\colon}\\!{\mathsf{T}},{x_{2}}{\colon}\\!{\mathsf{B}},{x_{3}}{\colon}\\!{\mathsf{T}}}{\mid}{R}\vdash{e_{2}}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}\\[2pt] q_{i}=r_{i}=p_{i}\qquad r_{m+1}=r_{m+2}=q_{m+1}\qquad r_{(\vec{0},1,0,0)}=r_{(\vec{0},0,1,0)}=q_{m+1}\\[2pt] r_{(\vec{a},a,a,b)}=q_{(\vec{a},a,b)}\qquad p_{(\vec{a},c)}=\sum_{a+b=c}q_{(\vec{a},a,b)}\end{array}}{{\Gamma,{x}{\colon}\\!{\mathsf{T}}}{\mid}{Q}\vdash{\textsf{match }x\textsf{ with | leaf -> }e_{1}\textsf{ | }(x_{1},x_{2},x_{3})\textsf{ -> }e_{2}}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}}\ (\mathsf{match})$

$\dfrac{\begin{array}{c}{\Gamma}{\mid}{P}\vdash{e_{1}}{\colon}\\!{\mathsf{T}}{\mid}{P^{\prime}}\qquad\forall\vec{b}\neq\vec{0},d\neq 0\lor e\neq 0\colon\ {\Gamma}{\mid}{P^{(\vec{b},d,e)}}\vdash^{\text{cf}}{e_{1}}{\colon}\\!{\mathsf{T}}{\mid}{{P^{\prime}}^{(\vec{b},d,e)}}\qquad{\Delta,{x}{\colon}\\!{\mathsf{T}}}{\mid}{R}\vdash{e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}\\[2pt] p_{i}=q_{i}\qquad p_{(\vec{a},c)}=q_{(\vec{a},\vec{0},c)}\qquad r_{j}=q_{m+j}\qquad r_{k+1}=p^{\prime}_{\ast}\qquad r_{(\vec{0},d,e)}=p^{\prime}_{(d,e)}\qquad\forall\vec{b}\neq\vec{0}\colon\ r_{(\vec{b},0,0)}=q_{(\vec{0},\vec{b},0)}\\[2pt] \forall\vec{b}\neq\vec{0},\vec{a}\neq\vec{0}\lor c\neq 0\colon\ q_{(\vec{a},\vec{b},c)}=\sum_{(d,e)}p^{(\vec{b},d,e)}_{(\vec{a},c)}\\[2pt] \forall\vec{b}\neq\vec{0},d\neq 0\lor e\neq 0\colon\ \Bigl(r_{(\vec{b},d,e)}={p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}\wedge\forall(d^{\prime},e^{\prime})\neq(d,e)\colon\ {p^{\prime}}^{(\vec{b},d,e)}_{(d^{\prime},e^{\prime})}=0\wedge\sum_{(\vec{a},c)}p^{(\vec{b},d,e)}_{(\vec{a},c)}\geqslant{p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}\wedge\forall\vec{a}\neq\vec{0}\lor c\neq 0\colon\ \bigl(p^{(\vec{b},d,e)}_{(\vec{a},c)}\neq 0\rightarrow{p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}\leqslant p^{(\vec{b},d,e)}_{(\vec{a},c)}\bigr)\Bigr)\end{array}}{{\Gamma,\Delta}{\mid}{Q}\vdash{\textsf{let }x\textsf{ = }e_{1}\textsf{ in }e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}}\ (\mathsf{let{:}T})$

$\dfrac{\begin{array}{c}{\Gamma}{\mid}{P}\vdash{e_{1}}{\colon}\\!{\alpha}{\mid}{\varnothing}\qquad{\Delta,{x}{\colon}\\!{\alpha}}{\mid}{R}\vdash{e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}\qquad\alpha\not=\mathsf{T}\\[2pt] p_{i}=q_{i}\qquad p_{(\vec{a},c)}=q_{(\vec{a},\vec{0},c)}\qquad r_{j}=q_{m+j}\qquad q_{(\vec{0},\vec{b},c)}=r_{(\vec{b},c)}\ (\vec{b}\not=\vec{0})\end{array}}{{\Gamma,\Delta}{\mid}{Q}\vdash{\textsf{let }x\textsf{ = }e_{1}\textsf{ in }e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}}\ (\mathsf{let{:}gen})$

$\dfrac{{\Gamma,{x}{\colon}\\!{\alpha},{y}{\colon}\\!{\alpha}}{\mid}{Q}\vdash{e[x,y]}{\colon}\\!{\beta}{\mid}{Q^{\prime}}}{{\Gamma,{z}{\colon}\\!{\alpha}}{\mid}{\curlyvee\\!({Q})}\vdash{e[z,z]}{\colon}\\!{\beta}{\mid}{Q^{\prime}}}\ (\mathsf{share})$

$\dfrac{{\Gamma}{\mid}{P}\vdash{e}{\colon}\\!{\alpha}{\mid}{P^{\prime}}\qquad\Phi({\Gamma}{\mid}{P})\leqslant\Phi({\Gamma}{\mid}{Q})\qquad\Phi({\Gamma}{\mid}{P^{\prime}})\geqslant\Phi({\Gamma}{\mid}{Q^{\prime}})}{{\Gamma}{\mid}{Q}\vdash{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}}\ (\mathsf{w})$

$\dfrac{x\ \text{a variable}}{{{x}{\colon}\\!{\alpha}}{\mid}{Q}\vdash{x}{\colon}\\!{\alpha}{\mid}{Q}}\ (\mathsf{var})$

$\dfrac{{\alpha_{1}\times\cdots\times\alpha_{n}}{\mid}{P}\to{\beta}{\mid}{P^{\prime}}\in\mathcal{F}(f)\qquad{\alpha_{1}\times\cdots\times\alpha_{n}}{\mid}{Q}\to{\beta}{\mid}{Q^{\prime}}\in\mathcal{F}^{\text{cf}}(f)\qquad K\in{\mathbb{Q}^{+}_{0}}}{{{x_{1}}{\colon}\\!{\alpha_{1}},\dots,{x_{n}}{\colon}\\!{\alpha_{n}}}{\mid}{P+K\cdot Q}\vdash{f(x_{1},\dots,x_{n})}{\colon}\\!{\beta}{\mid}{(P^{\prime}+K\cdot Q^{\prime})-1}}\ (\mathsf{app})$
To ease notation, we set $\vec{a}\mathrel{:=}a_{1},\dots,a_{m}$,
$\vec{b}\mathrel{:=}b_{1},\dots,b_{k}$ for vectors of indices
$a_{i},b_{j}\in{\mathbb{N}}$. Further, $i\in\\{1,\dots,m\\}$,
$j\in\\{1,\dots,k\\}$ and $a,b,c,d,e\in{\mathbb{N}}$. Sequence elements of
annotations, which are not constrained are set to zero.
Figure 5: Type System for Logarithmic Amortised Resource Analysis
In this section, we present the central contribution of this work. We
delineate a novel type-and-effect system incorporating a potential-based
amortised resource analysis capable of expressing _logarithmic_ amortised
costs. Soundness of the approach is established in Theorem 5.6.
Our potential-based amortised resource analysis is couched in a type system,
given in Figure 5. If the type judgement
${\Gamma}{\mid}{Q}\vdash{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}$ is
derivable, then the worst-case cost of evaluating the expression $e$ is bounded from above by the difference between the potential
$\Phi({\sigma};{\Gamma}{\mid}{Q})$ before the execution and the potential
$\Phi({v}{\mid}{Q^{\prime}})$ of the value $v$ obtained through the evaluation
of the expression $e$. The typing system makes use of a _cost-free_ semantics,
which does not attribute any costs to the calculation. The cost-free typing
judgement is denoted as
${\Gamma}{\mid}{Q}\vdash^{\text{cf}}{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}$
and based on a cost-free variant of the application rule, denoted as
$(\mathsf{{app:cf}})$. The rule $(\mathsf{{app:cf}})$ is defined like the rule $(\mathsf{{app}})$; however, no costs are accounted for. Wrt. the cost-free
semantics, the _empty signature_ , denoted as
${\alpha_{1}\times\dots\times\alpha_{n}}{\mid}{\varnothing}\to{\beta}{\mid}{\varnothing}$,
is always admissible.
###### Remark 5.1.
Note that if
${\alpha_{1}\times\dots\times\alpha_{n}}{\mid}{P}\to{\beta}{\mid}{P^{\prime}}$
and
${\alpha_{1}\times\dots\times\alpha_{n}}{\mid}{Q}\to{\beta}{\mid}{Q^{\prime}}$
are both cost-free signatures for a function $f$, then any linear combination is admissible as a cost-free signature of $f$. That is, we can assume
${\alpha_{1}\times\dots\times\alpha_{n}}{\mid}{K\cdot P+L\cdot
Q}\to{\beta}{\mid}{K\cdot P^{\prime}+L\cdot
Q^{\prime}}\in\mathcal{F}^{\text{cf}}(f)$, where $K,L\in{\mathbb{Q}^{+}_{0}}$.
###### Remark 5.2.
In principle, the type system can be parameterised in the resource metric (see e.g. [24]). In this paper, we focus on amortised and worst-case runtime complexity, symbolically measured through the number of function applications. It is straightforward to generalise this type system to other monotone cost models. Wrt. non-monotone costs, like e.g. heap usage, we expect that the type system can also be readily adapted, but this is outside the scope of the paper.
We consider the typing rules in turn; recall the convention that sequence
elements of annotations are denoted by the lower-case letter of the
annotation. Further, note that sequence elements which do not occur in any
constraint are set to zero. The variable rule $(\mathsf{{var}})$ types a
variable of unspecified type $\alpha$. As no actual costs are required, the annotation is unchanged. Similarly, no resources are lost through the use of control operators. Hence the definition of the rules $(\mathsf{{cmp}})$ and
$(\mathsf{{ite}})$ is straightforward.
As exemplary constructor rules, we have rule $(\mathsf{{leaf}})$ for the empty
tree and rule $(\mathsf{{node}})$ for the node constructor. Both rules define
suitable constraints on the resource annotations to guarantee that the
potential of the values is correctly represented.
The application rule $(\mathsf{{app}})$ represents the application of a rule given in $\mathsf{P}$. Each application emits actual cost $1$, which is accounted for in the annotations of the conclusion. In its simplest form, that is, for the factor $K=1$, the rule allows to directly read off the required annotations for the typing context.
In the pattern matching rule $(\mathsf{{match}})$, the potential freed through the destruction of the tree constructor is added to the annotation $R$, which
is used in the right premise of the rule. Note that the length of the
annotation $R$ is $m+2$, where $m$ equals the number of tree types in the type
context $\Gamma$.
The constraints expressed in the typing rules $(\mathsf{{let:T}})$ and $(\mathsf{{let:gen}})$ guarantee that the potential provided through annotation $Q$ is distributed among the calls to $e_{1}$ and $e_{2}$, that is,
this rule takes care of function composition. The numbers $m$, $k$,
respectively, denote the number of tree types in $\Gamma$, $\Delta$. Due to
the sharing rule—discussed in a moment—we can assume wlog. that each variable
in $e_{1}$ and $e_{2}$ occurs at most once.
First, consider the rule $(\mathsf{{let:gen}})$, that is, the expression $e_{1}$ evaluates to a value $w$ of arbitrary type $\alpha\not=\mathsf{T}$.
In this case the resulting value $w$ cannot carry any potential. This is
indicated through the empty annotation $\varnothing$ in the typing judgement
${\Gamma}{\mid}{P}\vdash{e_{1}}{\colon}\\!{\alpha}{\mid}{\varnothing}$.
Similarly, in the judgement
${\Delta,{x}{\colon}\\!{\alpha}}{\mid}{R}\vdash{e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}$
for the expression $e_{2}$, all available potential prior to the execution of
$e_{2}$ stems from the potential embodied in the type context $\Delta$ wrt.
annotation $Q$. This is enforced by the corresponding constraints. Suppose that, for some $\vec{a}\not=\vec{0}$ and $\vec{b}\not=\vec{0}$, $q_{(\vec{a},\vec{b},c)}$ were non-zero. Then the corresponding potential shared between the contexts $\Gamma$ and $\Delta$ wrt. $Q$ is discarded by the rule, as there is no possibility to attach this potential to the result type $\alpha$.
Second, consider the more involved rule $(\mathsf{{let:T}})$. To explain this
rule, we momentarily assume that in $Q$ no potential is shared, that is,
$q_{(\vec{a},\vec{b},c)}=0$, whenever
$\vec{a}\not=\vec{0},\vec{b}\not=\vec{0}$. In this sub-case the rule can be
simplified as follows:
$\dfrac{\begin{array}{c}{\Gamma}{\mid}{P}\vdash{e_{1}}{\colon}\\!{\mathsf{T}}{\mid}{P^{\prime}}\qquad{\Delta,{x}{\colon}\\!{\mathsf{T}}}{\mid}{R}\vdash{e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}\\[2pt] p_{i}=q_{i}\qquad p_{(\vec{a},c)}=q_{(\vec{a},\vec{0},c)}\qquad r_{j}=q_{m+j}\qquad r_{k+1}=p^{\prime}_{\ast}\\[2pt] r_{(\vec{0},a,c)}=p^{\prime}_{(a,c)}\qquad r_{(\vec{b},0,c)}=q_{(\vec{0},\vec{b},c)}\ (\vec{b}\not=\vec{0})\end{array}}{{\Gamma,\Delta}{\mid}{Q}\vdash{\textsf{let }x\textsf{ = }e_{1}\textsf{ in }e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}}\ (\mathsf{let{:}T})$
Again the potential in $\Gamma,\Delta$ (wrt. annotation $Q$) is distributed
for the typing of the expressions $e_{1}$, $e_{2}$, respectively, which is
governed by the constraints on the annotations. The simplified rule is obtained because the assumption that no shared potential exists makes almost all constraints vacuous. In particular, the cost-free derivation ${\Gamma}{\mid}{P^{(\vec{b},d,e)}}\vdash^{\text{cf}}{e_{1}}{\colon}\\!{\mathsf{T}}{\mid}{{P^{\prime}}^{(\vec{b},d,e)}}$ is not required.
Finally, consider the most involved sub-case, where shared potentials are
possible. Contrary to the simplified rules discussed above, such shared
potential cannot be split between the type contexts $\Gamma$ and $\Delta$,
respectively. Thus, the full rule necessarily employs the _cost-free
semantics_. Consequently, the premise
${\Gamma}{\mid}{P^{(\vec{b},d,e)}}\vdash^{\text{cf}}{e_{1}}{\colon}\\!{\alpha}{\mid}{{P^{\prime}}^{(\vec{b},d,e)}}$
expresses that for all non-zero vectors $\vec{b}$ and arbitrary indices $d$, $e$, the potential $\Phi({\Gamma}{\mid}{P^{(\vec{b},d,e)}})$ suffices to cover the potential $\Phi({\alpha}{\mid}{{P^{\prime}}^{(\vec{b},d,e)}})$, if no extra costs are emitted (compare Section 2). Intuitively, this represents that the values do not increase during the evaluation of $e_{1}$ to the value $w$.
Finally, the type system makes use of structural rules, like the _sharing_ rule $(\mathsf{{share}})$ and the weakening rules $(\mathsf{{w:var}})$ and $(\mathsf{{w}})$. The _sharing_ rule employs the sharing operator, defined in
Lemma 4.9. Note that the variables $x,y$ introduced in the assumption of the
typing rule are fresh variables, that do not occur in $\Gamma$. Similarly, the
rule $(\mathsf{{shift}})$ allows to shift the potential before and after
evaluation of the expression $e$ by a constant $K$.
Note that the weakening rules embody changes in the potential of the type context of the expressions considered. This amounts to a comparison of logarithmic expressions, principally a non-trivial task that cannot be directly represented as constraints in the type system. Instead, the rule $(\mathsf{{w}})$ employs symbolic potential expressions for these comparisons, replacing actual tree values by variables. Let $\Gamma$
denote a type context containing the type declarations
${x_{1}}{\colon}\\!{\mathsf{T}},\dots,{x_{m}}{\colon}\\!{\mathsf{T}}$ and let
$Q$ be an annotation of length $m$. Then the _symbolic potential_ , denoted as
$\Phi({\Gamma}{\mid}{Q})$, is defined as follows.
$\Phi({x_{1},\dots,x_{m}}{\mid}{Q})\mathrel{:=}\sum_{i=1}^{m}q_{i}\cdot\operatorname{\mathsf{rk}}(x_{i})+\sum_{a_{1},\dots,a_{m},b\in{\mathbb{N}}}q_{(a_{1},\dots,a_{m},b)}\cdot p_{(a_{1},\dots,a_{m},b)}(x_{1},\dots,x_{m})\;,$
where
$p_{(a_{1},\dots,a_{m},b)}(x_{1},\dots,x_{m})=\log({a_{1}\cdot\lvert{x_{1}}\rvert}+\dots+{a_{m}\cdot\lvert{x_{m}}\rvert}+b)$.
In order to actually solve these constraints over symbolic potentials, we have
to _linearise_ the underlying comparisons of logarithmic expressions. This is
taken up again in Section 7.
###### Definition 5.3.
A program $\mathsf{P}$ is called _well-typed_ if for any rule
$f(x_{1},\dots,x_{k})=e\in\mathsf{P}$ and any annotated signature
${\alpha_{1}\times\dots\times\alpha_{k}}{\mid}{Q}\to{\beta}{\mid}{Q^{\prime}}\in\mathcal{F}(f)$,
we have
${{x_{1}}{\colon}\\!{\alpha_{1}},\dots,{x_{k}}{\colon}\\!{\alpha_{k}}}{\mid}{Q}\vdash{e}{\colon}\\!{\beta}{\mid}{Q^{\prime}}$.
A program $\mathsf{P}$ is called _cost-free_ well-typed, if the cost-free
typing relation is employed.
Before we state and prove the soundness of the presented type-and-effect system, we establish the following auxiliary result, which is employed in the correct assessment of the transfer of potential in the case of function composition, see Figure 5. See also the high-level description provided in Section 2.
###### Lemma 5.4.
Assume $\sum_{i}q_{i}\log a_{i}\geqslant q\log b$ for some rational numbers
$a_{i},b>0$ and $q_{i}\geqslant q$. Then, $\sum_{i}q_{i}\log(a_{i}+c)\geqslant
q\log(b+c)$ for all $c\geqslant 1$.
###### Proof 5.5.
Wlog. we can assume $q=1$ and $q_{i}\geqslant 1$, as otherwise we simply
divide the assumed inequality by $q$. Further, observe that the assumption
$\sum_{i}q_{i}\log a_{i}\geqslant q\log b$ is equivalent to
$\prod_{i}a_{i}^{q_{i}}\geqslant b\;.$ (9)
First, we prove that
$(x+y)^{r}\geqslant x^{r}+y^{r}\quad\text{for all }r\geqslant 1\text{ and }x,y\geqslant 0\;.$ (10)
This is proved as follows. Fix some $x\geqslant 0$ and consider $(x+y)^{r}$
and $x^{r}+y^{r}$ as functions in $y$. It is then sufficient to observe that
$(x+y)^{r}\geqslant x^{r}+y^{r}$ for $y=0$ and that
$\nicefrac{{d}}{{dy}}(x+y)^{r}\geqslant\nicefrac{{d}}{{dy}}(x^{r}+y^{r})$ (the
derivatives with regard to $y$) for all $y\geqslant 0$. Indeed, we have
$\nicefrac{{d}}{{dy}}(x+y)^{r}=r(x+y)^{r-1}$ and
$\nicefrac{{d}}{{dy}}(x^{r}+y^{r})=ry^{r-1}$. Because of $r\geqslant 1$ and
$x\geqslant 0$, we can thus deduce that
$\nicefrac{{d}}{{dy}}(x+y)^{r}\geqslant\nicefrac{{d}}{{dy}}(x^{r}+y^{r})$ for
all $y\geqslant 0$.
Now we consider some $c\geqslant 1$. Combining (9) and (10), we get
$\prod_{i}(a_{i}+c)^{q_{i}}\geqslant\prod_{i}(a_{i}^{q_{i}}+c^{q_{i}})\geqslant\prod_{i}a_{i}^{q_{i}}+\prod_{i}c^{q_{i}}\geqslant b+c\;,$
where we have used that there is at least one index $i$, and that $q_{i}\geqslant 1$ and $c\geqslant 1$ imply $\prod_{i}c^{q_{i}}\geqslant c$. By taking the logarithm on both sides of the inequality we obtain the claim.
Finally, we obtain the following soundness result, which roughly states that if a program $\mathsf{P}$ terminates, then the difference in potential has paid its execution costs. (As stated, soundness assumes termination of $\mathsf{P}$, but our analysis is not restricted to terminating programs. To avoid this assumption, the soundness theorem would have to be formulated wrt. a partial big-step or a small-step semantics, cf. [27, 42]. We consider this outside the scope of this work.)
###### Theorem 5.6 (Soundness Theorem).
Let $\mathsf{P}$ be well-typed and let $\sigma$ be a substitution. Suppose
${\Gamma}{\mid}{Q}\vdash{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}$ and
${\sigma}\sststile{}{\ell}{{e}\Rightarrow{v}}$. Then
$\Phi({\sigma};{\Gamma}{\mid}{Q})-\Phi({v}{\mid}{Q^{\prime}})\geqslant\ell$.
Further, if
${\Gamma}{\mid}{Q}\vdash^{\text{cf}}{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}$,
then $\Phi({\sigma};{\Gamma}{\mid}{Q})\geqslant\Phi({v}{\mid}{Q^{\prime}})$.
###### Proof 5.7.
The proof embodies the high-level description given in Section 2. It proceeds
by main induction on $\Pi\colon{\sigma}\sststile{}{\ell}{{e}\Rightarrow{v}}$
and by side induction on
$\Xi\colon{\Gamma}{\mid}{Q}\vdash{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}$,
where the latter is employed in the context of the weakening rules. We consider only a few cases of interest. As an example of a case not covered, the variable rule $(\mathsf{{var}})$ types a variable of unspecified type $\alpha$; since no actual costs are incurred, the annotation is unchanged and the theorem follows trivially.
_Case_. $\Pi$ derives ${\sigma}\sststile{}{0}{{\mathsf{leaf}}\Rightarrow{\mathsf{leaf}}}$. Then $\Xi$ consists of a single application of the rule $(\mathsf{{leaf}})$:
$\dfrac{\forall c\geqslant 2\ \ q_{(c)}=\sum_{a+b=c}q^{\prime}_{(a,b)}\qquad K=q^{\prime}_{\ast}}{{\varnothing}{\mid}{Q+K}\vdash{\mathsf{leaf}}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}}}\;.$
By assumption $Q=[(q_{(c)})_{c\in{\mathbb{N}}}]$ is an annotation for the
empty sequence of trees. On the other hand
$Q^{\prime}=[(q^{\prime}_{(a,b)})_{a,b\in{\mathbb{N}}}]$ is an annotation of
length $1$. Note that $\operatorname{\mathsf{rk}}(\mathsf{leaf})=1$ by definition. Thus, we obtain:
$\displaystyle\Phi({\epsilon}{\mid}{Q+K})$
$\displaystyle=K+\sum_{c}q_{(c)}\cdot\log(c)$ $\displaystyle=K+\sum_{c\geqslant 2}q_{(c)}\cdot\log(c)$ $\displaystyle=q^{\prime}_{\ast}+\sum_{a+b\geqslant 2}q^{\prime}_{(a,b)}\cdot\log(a+b)$
$\displaystyle=q^{\prime}_{\ast}+\sum_{a,b}q^{\prime}_{(a,b)}\cdot\log(a+b)$
$\displaystyle=q^{\prime}_{\ast}\operatorname{\mathsf{rk}}(\mathsf{leaf})+\sum_{a,b}q^{\prime}_{(a,b)}\,p_{(a,b)}(\mathsf{leaf})=\Phi({\mathsf{leaf}}{\mid}{Q^{\prime}})\;.$
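The identity $\Phi({\epsilon}{\mid}{Q+K})=\Phi({\mathsf{leaf}}{\mid}{Q^{\prime}})$ can be checked numerically for concrete annotations. The Python sketch below is our own illustration; it assumes $\operatorname{\mathsf{rk}}(\mathsf{leaf})=1$, $\lvert\mathsf{leaf}\rvert=1$, base-2 logarithms, and coefficients with $a+b\geqslant 2$, and it derives $Q$ and $K$ from the constraints of the $(\mathsf{{leaf}})$ rule:

```python
import math

def phi_empty(qc, K):
    # Phi(eps | Q + K) = K + sum_{c >= 2} q_(c) * log2(c)
    return K + sum(q * math.log2(c) for c, q in qc.items() if c >= 2)

def phi_leaf(qstar, qab):
    # Phi(leaf | Q') = q'_* * rk(leaf) + sum_{a,b} q'_(a,b) * log2(a*|leaf| + b),
    # assuming rk(leaf) = 1 and |leaf| = 1
    return qstar + sum(q * math.log2(a + b) for (a, b), q in qab.items())

# arbitrary Q' with a + b >= 2 throughout; Q and K follow from the (leaf) rule
qab = {(1, 1): 2, (2, 0): 0.5, (1, 2): 1}
qstar = 3
K = qstar                                  # K = q'_*
qc = {}
for (a, b), q in qab.items():
    qc[a + b] = qc.get(a + b, 0) + q       # q_(c) = sum_{a+b=c} q'_(a,b)

print(abs(phi_empty(qc, K) - phi_leaf(qstar, qab)) < 1e-9)  # True
```

The restriction to $a+b\geqslant 2$ sidesteps the degenerate coefficients (for $a+b=1$ the logarithm vanishes, and $a=b=0$ contributes nothing), mirroring the step in the derivation that extends the sum from $a+b\geqslant 2$ to all $a,b$.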
_Case_. Suppose $\Pi$ has the following form:
$\dfrac{x_{1}\sigma=u\qquad x_{2}\sigma=b\qquad x_{3}\sigma=v}{{\sigma}\sststile{}{0}{{(x_{1},x_{2},x_{3})}\Rightarrow{(u,b,v)}}}\;.$
Wlog. $\Xi$ consists of a single application of the rule $(\mathsf{{node}})$:
$\dfrac{q_{1}=q_{2}=q^{\prime}_{\ast}\qquad q_{(1,0,0)}=q_{(0,1,0)}=q^{\prime}_{\ast}\qquad q_{(a,a,b)}=q^{\prime}_{(a,b)}}{{{x_{1}}{\colon}\\!{\mathsf{T}},{x_{2}}{\colon}\\!{\mathsf{B}},{x_{3}}{\colon}\\!{\mathsf{T}}}{\mid}{Q}\vdash{(x_{1},x_{2},x_{3})}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}}}\;.$
By definition, we have
$Q=[q_{1},q_{2}]\cup[(q_{(a_{1},a_{2},b)})_{a_{i},b\in{\mathbb{N}}}]$ and
$Q^{\prime}=[q^{\prime}_{\ast}]\cup[(q^{\prime}_{(a^{\prime},b^{\prime})})_{a^{\prime},b^{\prime}\in{\mathbb{N}}}]$.
We set
$\Gamma\mathrel{:=}{x_{1}}{\colon}\\!{\mathsf{T}},{x_{2}}{\colon}\\!{\mathsf{B}},{x_{3}}{\colon}\\!{\mathsf{T}}$;
by the premises above, $x_{1}\sigma=u$, $x_{2}\sigma=b$, and $x_{3}\sigma=v$. Thus
$\Phi({\sigma};{\Gamma}{\mid}{Q})=\Phi({u,v}{\mid}{Q})$ and we obtain:
$\displaystyle\Phi({u,v}{\mid}{Q})$
$\displaystyle=q_{1}\cdot\operatorname{\mathsf{rk}}(u)+q_{2}\cdot\operatorname{\mathsf{rk}}(v)+\sum_{a_{1},a_{2},b}q_{(a_{1},a_{2},b)}\cdot\log(a_{1}\cdot\lvert{u}\rvert+a_{2}\cdot\lvert{v}\rvert+b)$
$\displaystyle\geqslant q^{\prime}_{\ast}\cdot\operatorname{\mathsf{rk}}(u)+q^{\prime}_{\ast}\cdot\operatorname{\mathsf{rk}}(v)+q_{(1,0,0)}\cdot\log(\lvert{u}\rvert)+q_{(0,1,0)}\cdot\log(\lvert{v}\rvert)+{}$
$\displaystyle\quad{}+\sum_{a,b}q_{(a,a,b)}\cdot\log(a\cdot\lvert{u}\rvert+a\cdot\lvert{v}\rvert+b)$
$\displaystyle=q^{\prime}_{\ast}\cdot(\operatorname{\mathsf{rk}}(u)+\operatorname{\mathsf{rk}}(v)+\log(\lvert{u}\rvert)+\log(\lvert{v}\rvert))+\sum_{a,b}q^{\prime}_{(a,b)}\cdot\log(a\cdot(\lvert{u}\rvert+\lvert{v}\rvert)+b)$
$\displaystyle=q^{\prime}_{\ast}\cdot\operatorname{\mathsf{rk}}((u,b,v))+\sum_{a,b}q^{\prime}_{(a,b)}\cdot p_{(a,b)}((u,b,v))=\Phi({(u,b,v)}{\mid}{Q^{\prime}})\;.$
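The inequality established in this case can likewise be tested on concrete trees and annotations. The sketch below is our own illustration; trees are modelled as nested pairs, the rank and size follow the recursive definitions used above ($\operatorname{\mathsf{rk}}(\mathsf{leaf})=1$, $\operatorname{\mathsf{rk}}((l,\_,r))=\operatorname{\mathsf{rk}}(l)+\log\lvert l\rvert+\log\lvert r\rvert+\operatorname{\mathsf{rk}}(r)$), base-2 logarithms are assumed, and the coefficients instantiate the $(\mathsf{{node}})$ constraints:

```python
import math

log2 = math.log2

# trees as nested pairs: "leaf" or (left, right); data fields play no role here
def size(t):
    return 1 if t == "leaf" else size(t[0]) + size(t[1])

def rk(t):
    # rk(leaf) = 1, rk((l,_,r)) = rk(l) + log|l| + log|r| + rk(r)  (assumed)
    if t == "leaf":
        return 1
    l, r = t
    return rk(l) + log2(size(l)) + log2(size(r)) + rk(r)

def phi_pair(u, v, q1, q2, qcoeff):
    # Phi(u, v | Q) with Q = [q1, q2] and coefficients q_(a1, a2, b)
    return (q1 * rk(u) + q2 * rk(v)
            + sum(q * log2(a1 * size(u) + a2 * size(v) + b)
                  for (a1, a2, b), q in qcoeff.items()))

def phi_node(u, v, qstar, qprime):
    # Phi((u, b, v) | Q'), using rk((u,b,v)) = rk(u) + log|u| + log|v| + rk(v)
    return (qstar * (rk(u) + rk(v) + log2(size(u)) + log2(size(v)))
            + sum(q * log2(a * (size(u) + size(v)) + b)
                  for (a, b), q in qprime.items()))

# annotation satisfying the (node) constraints:
# q1 = q2 = q_(1,0,0) = q_(0,1,0) = q'_*  and  q_(a,a,b) = q'_(a,b)
qstar, qprime = 1, {(1, 0): 2, (1, 1): 0.5}
qcoeff = {(1, 0, 0): qstar, (0, 1, 0): qstar,
          (1, 1, 0): qprime[(1, 0)], (1, 1, 1): qprime[(1, 1)]}
u, v = ("leaf", ("leaf", "leaf")), "leaf"
print(phi_pair(u, v, qstar, qstar, qcoeff) >= phi_node(u, v, qstar, qprime) - 1e-9)  # True
```

With an annotation that carries exactly the constrained coefficients, the two potentials coincide; the inequality becomes strict once $Q$ contains additional coefficients that are dropped in the second step of the derivation.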
_Case_. Suppose ${\sigma}\sststile{}{\ell}{{e}\Rightarrow{v}}$ and let the
last rule in $\Xi$ be of the following form:
$\dfrac{{\Gamma}{\mid}{Q}\vdash{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}}{{\Gamma}{\mid}{Q+K}\vdash{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}+K}}\;,$
where $K\geqslant 0$. By SIH, we have that
$\Phi({\sigma};{\Gamma}{\mid}{Q})-\Phi({v}{\mid}{Q^{\prime}})\geqslant\ell$,
from which we immediately obtain:
$\Phi({\sigma};{\Gamma}{\mid}{Q})+K-\Phi({v}{\mid}{Q^{\prime}})-K=\Phi({\sigma};{\Gamma}{\mid}{Q})-\Phi({v}{\mid}{Q^{\prime}})\geqslant\ell\;.$
_Case_. Consider the first $(\mathsf{{match}})$ rule, where $\Pi$ ends as follows:
$\dfrac{x\sigma=\mathsf{leaf}\qquad{\sigma}\sststile{}{\ell}{{e_{1}}\Rightarrow{v}}}{{\sigma}\sststile{}{\ell}{{\mathtt{match}\ x\ \mathtt{with}\ \mathtt{|}\ \mathsf{leaf}\to e_{1}\ \mathtt{|}\ (x_{1},x_{2},x_{3})\to e_{2}}\Rightarrow{v}}}\;.$
Wlog. we may assume that $\Xi$ ends with the related application of the $(\mathsf{{match}})$ rule:
$\dfrac{\begin{array}{c}q_{i}=r_{i}=p_{i}\qquad r_{(\vec{a},a,a,b)}=q_{(\vec{a},a,b)}\qquad p_{(\vec{a},c)}=\sum_{a+b=c}q_{(\vec{a},a,b)}\\[2pt] r_{m+1}=r_{m+2}=q_{m+1}\qquad r_{(\vec{0},1,0,0)}=r_{(\vec{0},0,1,0)}=q_{m+1}\\[2pt] {\Gamma}{\mid}{P+q_{m+1}}\vdash{e_{1}}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}\qquad{\Gamma,{x_{1}}{\colon}\\!{\mathsf{T}},{x_{2}}{\colon}\\!{\mathsf{B}},{x_{3}}{\colon}\\!{\mathsf{T}}}{\mid}{R}\vdash{e_{2}}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}\end{array}}{{\Gamma,{x}{\colon}\\!{\mathsf{T}}}{\mid}{Q}\vdash{\mathtt{match}\ x\ \mathtt{with}\ \mathtt{|}\ \mathsf{leaf}\to e_{1}\ \mathtt{|}\ (x_{1},x_{2},x_{3})\to e_{2}}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}}\;.$
Let $Q$ be an annotation of length $m+1$ while $Q^{\prime}$ is of length $1$. Thus the annotations $P$, $R$ have lengths $m$, $m+2$, respectively. We write $\vec{t}\mathrel{:=}t_{1},\dots,t_{m}$ for the substitution instances of the variables in $\Gamma$. Further, $t\mathrel{:=}x\sigma=\mathsf{leaf}$, where the latter equality follows from the assumption on $\Pi$. By definition and the constraints given in the rule, we obtain:
$\displaystyle\Phi({\sigma};{\Gamma,{x}{\colon}\\!{\mathsf{T}}}{\mid}{Q})$
$\displaystyle=\sum_{i}q_{i}\operatorname{\mathsf{rk}}(t_{i})+q_{m+1}\operatorname{\mathsf{rk}}(\mathsf{leaf})+\sum_{\vec{a},a,c}q_{(\vec{a},a,c)}\log(\vec{a}\lvert{\vec{t}}\rvert+a\lvert{\mathsf{leaf}}\rvert+c)$
$\displaystyle=\sum_{i}q_{i}\operatorname{\mathsf{rk}}(t_{i})+q_{m+1}+\sum_{\vec{a},a,c}q_{(\vec{a},a,c)}\log(\vec{a}\lvert{\vec{t}}\rvert+a+c)$
$\displaystyle=\Phi({\sigma};{\Gamma}{\mid}{P})+q_{m+1}\;.$
Thus
$\Phi({\sigma};{\Gamma,{x}{\colon}\\!{\mathsf{T}}}{\mid}{Q})=\Phi({\sigma};{\Gamma}{\mid}{P+q_{m+1}})$
and the theorem follows by an application of MIH.
Now, consider the second $(\mathsf{{match}})$ rule, that is, $\Pi$ ends as follows:
$\dfrac{x\sigma=(u,b,v)\qquad{\sigma^{\prime}}\sststile{}{\ell}{{e_{2}}\Rightarrow{v}}}{{\sigma}\sststile{}{\ell}{{\mathtt{match}\ x\ \mathtt{with}\ \mathtt{|}\ \mathsf{leaf}\to e_{1}\ \mathtt{|}\ (x_{1},x_{2},x_{3})\to e_{2}}\Rightarrow{v}}}\;,$
where $\sigma^{\prime}\mathrel{:=}\sigma[x_{1}\mapsto u,x_{2}\mapsto b,x_{3}\mapsto v]$.
As above, we may assume that $\Xi$ ends with the related application of the $(\mathsf{{match}})$ rule. In this subcase, the assumption on $\Pi$ yields $t\mathrel{:=}x\sigma=(u,b,v)$. By definition and the constraints given in the rule, we obtain:
$\displaystyle\Phi({\sigma};{\Gamma,{x}{\colon}\\!{\mathsf{T}}}{\mid}{Q})$
$\displaystyle=\sum_{i}q_{i}\operatorname{\mathsf{rk}}(t_{i})+q_{m+1}\operatorname{\mathsf{rk}}((u,b,v))+\sum_{\vec{a},a,c}q_{(\vec{a},a,c)}\log(\vec{a}\lvert{\vec{t}}\rvert+a\lvert{(u,b,v)}\rvert+c)$
$\displaystyle=\sum_{i}q_{i}\operatorname{\mathsf{rk}}(t_{i})+q_{m+1}(\operatorname{\mathsf{rk}}(u)+\log(\lvert{u}\rvert)+\log(\lvert{v}\rvert)+\operatorname{\mathsf{rk}}(v))+{}$
$\displaystyle\phantom{=}{}+\sum_{\vec{a},a,c}q_{(\vec{a},a,c)}\log(\vec{a}\lvert{\vec{t}}\rvert+a(\lvert{u}\rvert+\lvert{v}\rvert)+c)$
$\displaystyle=\Phi({\sigma};{\Gamma,{x_{1}}{\colon}\\!{\mathsf{T}},{x_{2}}{\colon}\\!{\mathsf{B}},{x_{3}}{\colon}\\!{\mathsf{T}}}{\mid}{R})\;,$
where we write $\vec{a}\lvert{\vec{t}}\rvert$ as shorthand for the componentwise product $\sum_{i}a_{i}\cdot\lvert{t_{i}}\rvert$.
Thus
$\Phi({\sigma};{\Gamma,{x}{\colon}\\!{\mathsf{T}}}{\mid}{Q})=\Phi({\sigma};{\Gamma,{x_{1}}{\colon}\\!{\mathsf{T}},{x_{2}}{\colon}\\!{\mathsf{B}},{x_{3}}{\colon}\\!{\mathsf{T}}}{\mid}{R})$
and the theorem follows by an application of MIH.
_Case_. Consider the $(\mathsf{{let}})$ rule, that is, $\Pi$ ends in the following rule:
$\dfrac{{\sigma}\sststile{}{\ell_{1}}{{e_{1}}\Rightarrow{w}}\qquad{\sigma[x\mapsto w]}\sststile{}{\ell_{2}}{{e_{2}}\Rightarrow{v}}}{{\sigma}\sststile{}{\ell_{1}+\ell_{2}}{{\mathtt{let}\ x=e_{1}\ \mathtt{in}\ e_{2}}\Rightarrow{v}}}\;,$
where $\ell=\ell_{1}+\ell_{2}$. First, we consider the sub-case where the type of $e_{1}$ is an arbitrary type $\alpha$ other than $\mathsf{Tree}$. That is, we assume that $\Xi$ ends in the following application of the $(\mathsf{{let:gen}})$ rule:
$\dfrac{\begin{array}{c}p_{i}=q_{i}\qquad p_{(\vec{a},c)}=q_{(\vec{a},\vec{0},c)}\qquad r_{j}=q_{m+j}\qquad q_{(\vec{0},\vec{b},c)}=r_{(\vec{b},c)}\ (\vec{b}\not=\vec{0})\qquad\alpha\not=\mathsf{T}\\[2pt] {\Gamma}{\mid}{P}\vdash{e_{1}}{\colon}\\!{\alpha}{\mid}{\varnothing}\qquad{\Delta,{x}{\colon}\\!{\alpha}}{\mid}{R}\vdash{e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}\end{array}}{{\Gamma,\Delta}{\mid}{Q}\vdash{\mathtt{let}\ x=e_{1}\ \mathtt{in}\ e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}}\;.$
Recall that $\vec{a}=a_{1},\dots,a_{n}$, $\vec{b}=b_{1},\dots,b_{m}$,
$i\in\\{1,\dots,m\\}$, $j\in\\{1,\dots,k\\}$ and $a_{i},b_{j},a,b,c,d,e$ are
natural numbers. Further, the annotations $Q$, $P$, $R$ are of length $m+k$,
$m$ and $k$, respectively, while the corresponding resulting annotations
$Q^{\prime}$, $P^{\prime}$ and $R^{\prime}$, are of length $1$.
By definition and due to the constraints expressed in the typing rule, we
have:
$\displaystyle\Phi({\sigma};{\Gamma,\Delta}{\mid}{Q})$
$\displaystyle=\sum_{i}q_{i}\operatorname{\mathsf{rk}}(t_{i})+\sum_{j}q_{m+j}\operatorname{\mathsf{rk}}(u_{j})+\sum_{\vec{a},\vec{b},c}q_{(\vec{a},\vec{b},c)}\log(\vec{a}\lvert{\vec{t}}\rvert+\vec{b}\lvert{\vec{u}}\rvert+c)$
$\displaystyle\Phi({\sigma};{\Gamma}{\mid}{P})$
$\displaystyle=\sum_{i}q_{i}\operatorname{\mathsf{rk}}(t_{i})+\sum_{\vec{a},c}q_{(\vec{a},\vec{0},c)}\log(\vec{a}\lvert{\vec{t}}\rvert+c)$
$\displaystyle\Phi({w}{\mid}{\varnothing})$ $\displaystyle=0$
$\displaystyle\Phi({\sigma};{\Delta,{x}{\colon}\\!{\alpha}}{\mid}{R})$
$\displaystyle=\sum_{j}q_{m+j}\operatorname{\mathsf{rk}}(u_{j})+\sum_{\vec{b},c}q_{(\vec{0},\vec{b},c)}\log(\vec{b}\lvert{\vec{u}}\rvert+c)\;,$
where we set $\vec{t}\mathrel{:=}t_{1},\dots,t_{m}$ and
$\vec{u}\mathrel{:=}u_{1},\dots,u_{k}$, denoting the substitution instances of
the variables in $\Gamma$, $\Delta$, respectively. Thus, we obtain
$\Phi({\sigma};{\Gamma,\Delta}{\mid}{Q})\geqslant\Phi({\sigma};{\Gamma}{\mid}{P})+\Phi({\sigma};{\Delta,{x}{\colon}\\!{\alpha}}{\mid}{R})\;.$
By main induction hypothesis, we conclude that
$\Phi({\sigma};{\Gamma}{\mid}{P})-\Phi({w}{\mid}{\varnothing})\geqslant\ell_{1}$
and
$\Phi({\sigma};{\Delta,{x}{\colon}\\!{\alpha}}{\mid}{R})-\Phi({v}{\mid}{Q^{\prime}})\geqslant\ell_{2}$,
from which the sub-case follows.
Second, we consider the more involved sub-case, where $e_{1}$ is of $\mathsf{Tree}$ type. Thus, wlog. $\Xi$ ends in the following application of the $(\mathsf{{let:T}})$ rule:
$\dfrac{\begin{array}{c}p_{i}=q_{i}\qquad p_{(\vec{a},c)}=q_{(\vec{a},\vec{0},c)}\qquad r_{j}=q_{m+j}\qquad r_{k+1}=p^{\prime}_{\ast}\qquad r_{(\vec{0},d,e)}=p^{\prime}_{(d,e)}\\[2pt] \forall\vec{b}\neq\vec{0}\ \left(r_{(\vec{b},0,0)}=q_{(\vec{0},\vec{b},0)}\right)\qquad\forall\vec{b}\neq\vec{0},\vec{a}\neq\vec{0}\lor c\neq 0\ \left(q_{(\vec{a},\vec{b},c)}=\sum_{(d,e)}p^{(\vec{b},d,e)}_{(\vec{a},c)}\right)\\[2pt] \forall\vec{b}\neq\vec{0},d\neq 0\lor e\neq 0\ \Big(r_{(\vec{b},d,e)}={p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}\wedge\forall(d^{\prime},e^{\prime})\neq(d,e)\ \left({p^{\prime}}^{(\vec{b},d,e)}_{(d^{\prime},e^{\prime})}=0\right)\wedge{}\\[2pt] \quad{}\wedge\sum_{(\vec{a},c)}p^{(\vec{b},d,e)}_{(\vec{a},c)}\geqslant{p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}\wedge\forall\vec{a}\neq\vec{0}\lor c\neq 0\ \left(p^{(\vec{b},d,e)}_{(\vec{a},c)}\neq 0\rightarrow{p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}\leqslant p^{(\vec{b},d,e)}_{(\vec{a},c)}\right)\Big)\\[2pt] {\Gamma}{\mid}{P}\vdash{e_{1}}{\colon}\\!{\mathsf{T}}{\mid}{P^{\prime}}\qquad\forall\vec{b}\neq\vec{0},d\neq 0\lor e\neq 0\ \left({\Gamma}{\mid}{P^{(\vec{b},d,e)}}\vdash^{\text{cf}}{e_{1}}{\colon}\\!{\mathsf{T}}{\mid}{{P^{\prime}}^{(\vec{b},d,e)}}\right)\\[2pt] {\Delta,{x}{\colon}\\!{\mathsf{T}}}{\mid}{R}\vdash{e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}\end{array}}{{\Gamma,\Delta}{\mid}{Q}\vdash{\mathtt{let}\ x=e_{1}\ \mathtt{in}\ e_{2}}{\colon}\\!{\beta}{\mid}{Q^{\prime}}}\;,$
where the annotations $Q$, $P$, $R$, $Q^{\prime}$, $P^{\prime}$ and the sequences $\vec{a}$, $\vec{b}$ are as above. Further, for each sequence $\vec{b}\not=\vec{0}$, $P^{(\vec{b},d,e)}$ and ${P^{\prime}}^{(\vec{b},d,e)}$ denote annotations of length $m$. By definition and due to the constraints expressed in the typing rule, we have for all $\vec{b}\not=\vec{0}$:
$\displaystyle\Phi({\sigma};{\Gamma,\Delta}{\mid}{Q})$
$\displaystyle=\sum_{i}q_{i}\operatorname{\mathsf{rk}}(t_{i})+\sum_{j}q_{m+j}\operatorname{\mathsf{rk}}(u_{j})+\sum_{\vec{a}\neq\vec{0}\lor\vec{b}\neq\vec{0}\lor c\neq 0}q_{(\vec{a},\vec{b},c)}\log(\vec{a}\lvert{\vec{t}}\rvert+\vec{b}\lvert{\vec{u}}\rvert+c)$
$\displaystyle\Phi({\sigma};{\Gamma}{\mid}{P})$
$\displaystyle=\sum_{i}q_{i}\operatorname{\mathsf{rk}}(t_{i})+\sum_{\vec{a}\neq\vec{0}\lor c\neq 0}q_{(\vec{a},\vec{0},c)}\log(\vec{a}\lvert{\vec{t}}\rvert+c)-K$
$\displaystyle\Phi({w}{\mid}{P^{\prime}})$
$\displaystyle=r_{k+1}\operatorname{\mathsf{rk}}(w)+\sum_{d\neq 0\lor e\neq 0}r_{(\vec{0},d,e)}\log(d\lvert{w}\rvert+e)$
$\displaystyle\Phi({\sigma};{\Gamma}{\mid}{P^{(\vec{b},d,e)}})$
$\displaystyle=\sum_{\vec{a}\neq\vec{0}\lor c\neq 0}p^{(\vec{b},d,e)}_{(\vec{a},c)}\log(\vec{a}\lvert{\vec{t}}\rvert+c)$
$\displaystyle\Phi({w}{\mid}{{P^{\prime}}^{(\vec{b},d,e)}})$
$\displaystyle={p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}\log(d\lvert{w}\rvert+e)$
$\displaystyle\Phi({\sigma};{\Delta,{x}{\colon}\\!{\mathsf{T}}}{\mid}{R})$
$\displaystyle=\sum_{j}q_{m+j}\operatorname{\mathsf{rk}}(u_{j})+r_{k+1}\operatorname{\mathsf{rk}}(w)+\sum_{\vec{b}\neq\vec{0}\lor d\neq 0\lor e\neq 0}r_{(\vec{b},d,e)}\log(\vec{b}\lvert{\vec{u}}\rvert+d\lvert{w}\rvert+e)+K\;,$
where we set $\vec{t}\mathrel{:=}t_{1},\dots,t_{m}$ and
$\vec{u}\mathrel{:=}u_{1},\dots,u_{k}$, denoting the substitution instances of
the variables in $\Gamma$, $\Delta$, respectively.
By main induction hypothesis, we conclude that
$\Phi({\sigma};{\Gamma}{\mid}{P})-\Phi({w}{\mid}{P^{\prime}})\geqslant\ell_{1}$.
Further, for all $\vec{b}\not=\vec{0},d\neq 0\lor e\neq 0$, we have, due to
the cost-free typing constraints
$\Phi({\sigma};{\Gamma}{\mid}{P^{(\vec{b},d,e)}})\geqslant\Phi({w}{\mid}{{P^{\prime}}^{(\vec{b},d,e)}})$.
The latter yields more succinctly (for all $\vec{b}\not=\vec{0}$, $d\neq 0\lor e\neq 0$) that
$\sum_{\vec{a},c}p^{(\vec{b},d,e)}_{(\vec{a},c)}\log(\vec{a}\lvert{\vec{t}}\rvert+c)\geqslant{p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}\log(d\lvert{w}\rvert+e)\;.$ (11)
A third application of MIH yields that
$\Phi({\sigma};{\Delta,{x}{\colon}\\!{\mathsf{T}}}{\mid}{R})-\Phi({v}{\mid}{Q^{\prime}})\geqslant\ell_{2}$.
Due to the conditions $\sum_{(\vec{a},c)}p^{(\vec{b},d,e)}_{(\vec{a},c)}\geqslant{p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}$, that ${p^{\prime}}^{(\vec{b},d,e)}_{(d^{\prime},e^{\prime})}=0$ for all $(d^{\prime},e^{\prime})\neq(d,e)$, and that $p^{(\vec{b},d,e)}_{(\vec{a},c)}\neq 0$ implies ${p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}\leqslant p^{(\vec{b},d,e)}_{(\vec{a},c)}$ for all $\vec{a}$, $c$, we can apply Lemma 5.4 to Equation (11) and obtain
$\sum_{\vec{a}\neq\vec{0}\lor c\neq 0}p^{(\vec{b},d,e)}_{(\vec{a},c)}\log(\vec{a}\lvert{\vec{t}}\rvert+\vec{b}\lvert{\vec{u}}\rvert+c)\geqslant{p^{\prime}}^{(\vec{b},d,e)}_{(d,e)}\log(\vec{b}\lvert{\vec{u}}\rvert+d\lvert{w}\rvert+e)\;.$
Due to the condition $q_{(\vec{a},\vec{b},c)}=\sum_{(d,e)}p^{(\vec{b},d,e)}_{(\vec{a},c)}$ for all $\vec{b}\neq\vec{0}$, $\vec{a}\neq\vec{0}\lor c\neq 0$, we can sum these inequalities over all $d\neq 0\lor e\neq 0$ and obtain (for all $\vec{b}\not=\vec{0}$) that
$\sum_{\vec{a}\neq\vec{0}\lor c\neq 0}q_{(\vec{a},\vec{b},c)}\log(\vec{a}\lvert{\vec{t}}\rvert+\vec{b}\lvert{\vec{u}}\rvert+c)\geqslant\sum_{d\neq 0\lor e\neq 0}r_{(\vec{b},d,e)}\log(\vec{b}\lvert{\vec{u}}\rvert+d\lvert{w}\rvert+e)\;.$
Combining the above facts concludes the case.
_Case._ Finally, we consider the application rules $(\mathsf{{app}})$ and $(\mathsf{{app:cf}})$. As the cost-free variant differs only insofar as costs are not counted by $(\mathsf{{app:cf}})$, it suffices to consider the rule $(\mathsf{{app}})$. Let $f(x_{1},\dots,x_{k})=e\in\mathsf{P}$ and let $\Pi$ derive
${\sigma}\sststile{}{\ell+1}{{f(x_{1},\dots,x_{k})}\Rightarrow{v}}$. We consider the costed typing
${{x_{1}}{\colon}\\!{\alpha_{1}},\dots,{x_{k}}{\colon}\\!{\alpha_{k}}}{\mid}{(P+K\cdot Q)+1}\vdash{f(x_{1},\dots,x_{k})}{\colon}\\!{\alpha}{\mid}{P^{\prime}-1+K\cdot Q^{\prime}}$, where $K\in{\mathbb{Q}^{+}_{0}}$. Set
$\Gamma\mathrel{:=}{x_{1}}{\colon}\\!{\alpha_{1}},\dots,{x_{k}}{\colon}\\!{\alpha_{k}}$.
As $\mathsf{P}$ is well-typed, we have
${\Gamma}{\mid}{P}\vdash{e}{\colon}\\!{\alpha}{\mid}{P^{\prime}}\qquad\text{and}\qquad{\Gamma}{\mid}{Q}\vdash^{\text{cf}}{e}{\colon}\\!{\alpha}{\mid}{Q^{\prime}}\;.$
We can apply MIH wrt. the evaluation $\Pi^{\prime}$ of
${\sigma}\sststile{}{\ell}{{e}\Rightarrow{v}}$ to conclude
$\Phi({\sigma};{\Gamma}{\mid}{P})-\Phi({v}{\mid}{P^{\prime}})\geqslant\ell$ as
well as
$\Phi({\sigma};{\Gamma}{\mid}{Q})\geqslant\Phi({v}{\mid}{Q^{\prime}})$. By
monotonicity of addition and multiplication:
$\displaystyle\Phi({\sigma};{\Gamma}{\mid}{P+K\cdot Q})$
$\displaystyle=\Phi({\sigma};{\Gamma}{\mid}{P})+K\cdot\Phi({\sigma};{\Gamma}{\mid}{Q})$
$\displaystyle\geqslant(\Phi({v}{\mid}{P^{\prime}})+\ell)+K\cdot\Phi({v}{\mid}{Q^{\prime}})=\Phi({v}{\mid}{P^{\prime}+K\cdot Q^{\prime}})+\ell\;.$
Thus
$\displaystyle\Phi({\sigma};{\Gamma}{\mid}{P+K\cdot Q})-\Phi({v}{\mid}{P^{\prime}-1+K\cdot Q^{\prime}})=(\Phi({\sigma};{\Gamma}{\mid}{P+K\cdot Q})-\Phi({v}{\mid}{P^{\prime}+K\cdot Q^{\prime}}))+1\geqslant\ell+1\;.$
From this, the case follows, which completes the proof of the soundness
theorem.
###### Remark 5.8.
We note that the basic resource functions can be generalised to additionally
represent linear functions in the size of the arguments. The above soundness
theorem is not affected by this generalisation.
In the next section, we exemplify the use of the proposed type-and-effect
system, cf. Figure 5, on the motivating example.
## 6 Analysis
As promised, in this section we apply the proposed type-and-effect system to obtain an optimal analysis of the amortised cost of the zig-zig case of _splaying_ , once the type annotations are fixed. As a preparatory step, and also to emphasise the need for the cost-free semantics, we make precise the informal account of compositional reasoning given in Section 2.
### 6.1 Let-Normal Form
We consider the expression ($al$, $a$, ($ar$, $b$, ($br$, $c$, $cr$))) =: t’,
which becomes the following expression $e$ in let-normal form:
    let t''' = (br, c, cr) in (
      let t'' = (ar, b, t''') in ((al, a, t''))
    )
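For comparison, the two builds denote the same tree; a minimal OCaml sketch (the concrete tree type and integer labels are assumptions made for illustration, independent of the paper's object language):

```ocaml
type tree = Leaf | Node of tree * int * tree

(* Direct nested construction of t' = (al, a, (ar, b, (br, c, cr))). *)
let t_direct al a ar b br c cr =
  Node (al, a, Node (ar, b, Node (br, c, cr)))

(* The same tree in let-normal form: every compound argument is bound
   to a variable before use, mirroring the expression e above. *)
let t_letnf al a ar b br c cr =
  let t3 = Node (br, c, cr) in   (* t''' *)
  let t2 = Node (ar, b, t3) in   (* t''  *)
  Node (al, a, t2)
```

Let-normalisation only names intermediate results; it does not change the constructed value.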
The expression $e$ is typable with the following derivation, where the
expression $e^{\prime}$ abbreviates let t'' = (ar, b, t''') in ((al, a, t'')).
(We have ignored expressions of base type to increase readability.)
$\dfrac{{{br}{\colon}\\!{\mathsf{T}},{cr}{\colon}\\!{\mathsf{T}}}{\mid}{Q_{1}}\vdash{(br,c,cr)}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}_{1}}\qquad{{ar}{\colon}\\!{\mathsf{T}},{al}{\colon}\\!{\mathsf{T}},{t^{\prime\prime\prime}}{\colon}\\!{\mathsf{T}}}{\mid}{Q_{2}}\vdash{e^{\prime}}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}}}{{{br}{\colon}\\!{\mathsf{T}},{cr}{\colon}\\!{\mathsf{T}},{ar}{\colon}\\!{\mathsf{T}},{al}{\colon}\\!{\mathsf{T}}}{\mid}{Q}\vdash{e}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}}}$
where the second premise is in turn derived from
${{ar}{\colon}\\!{\mathsf{T}},{t^{\prime\prime\prime}}{\colon}\\!{\mathsf{T}}}{\mid}{Q_{3}}\vdash{(ar,b,t^{\prime\prime\prime})}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}_{3}}$ and (12),
subject to the side conditions $q^{3}_{1}=q^{3}_{2}={q^{\prime}}^{3}_{\ast}$,
$q^{3}_{(1,0,0)}=q^{3}_{(0,1,0)}={q^{\prime}}^{3}_{\ast}$ and $q^{3}_{(1,1,0)}={q^{\prime}}^{3}_{(1,0)}$.
Here, we employ the derivability of the following type judgement (12) by a
single application of $(\mathsf{{node}})$, wrt. the annotations $Q_{4}$ and
$Q^{\prime}$ given below.
${{al}{\colon}\\!{\mathsf{T}},{t^{\prime\prime}}{\colon}\\!{\mathsf{T}}}{\mid}{Q_{4}}\vdash{(al,a,t^{\prime\prime})}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}}\,.$ (12)
It is not difficult to check that the above derivation indeed proves well-
typedness of the expression $e$ wrt. the type annotations given below.
\begin{align*}
Q&\colon q_{1}=q_{2}=q_{3}=q_{4}=1;\ {\color[rgb]{1,0,0}q_{(1,1,1,0,0)}=1};\ q_{(1,1,0,0,0)}=1;\ q_{(0,0,0,0,2)}=1\,,\\
&\quad q_{(1,0,0,0,0)}=q_{(0,1,0,0,0)}=q_{(0,0,1,0,0)}=q_{(0,0,0,1,0)}=1\,,\\
Q^{\prime}&\colon q^{\prime}_{\ast}=1;\ q^{\prime}_{(0,2)}=1\,,\\
Q_{1}&\colon q^{1}_{1}=q_{1}=1;\ q^{1}_{2}=q_{2}=1;\ q^{1}_{(0,0,2)}=q_{(0,0,0,0,2)}=1\,,\\
&\quad q^{1}_{(1,1,0)}=q_{(1,1,0,0,0)};\ q^{1}_{(1,0,0)}=q_{(1,0,0,0,0)}=1;\ q^{1}_{(0,1,0)}=q_{(0,1,0,0,0)}=1\,,\\
Q^{\prime}_{1}&\colon {q^{\prime}}^{1}_{\ast}=1;\ {q^{\prime}}^{1}_{(1,0)}=1;\ {q^{\prime}}^{1}_{(0,2)}=1\,,\\
P^{(1,0,1,0)}&\colon {\color[rgb]{1,0,0}{p}^{(1,0,1,0)}_{(1,1,0)}=q_{(1,1,1,0,0)}=1}\,,\\
{P^{\prime}}^{(1,0,1,0)}&\colon {\color[rgb]{1,0,0}{p^{\prime}}^{(1,0,1,0)}_{(1,0)}=1}\,,\\
Q_{2}&\colon q^{2}_{1}=q_{3}=1;\ q^{2}_{2}=q_{4}=1;\ q^{2}_{3}={q^{\prime}}^{1}_{\ast}=1\,,\\
&\quad {q}^{2}_{(1,0,0,0)}=q_{(0,0,1,0,0)}=1;\ {q}^{2}_{(0,1,0,0)}=q_{(0,0,0,1,0)}=1\,,\\
&\quad {q}^{2}_{(0,0,1,0)}={q^{\prime}}^{1}_{(1,0)}=1;\ {q}^{2}_{(0,0,0,2)}={q^{\prime}}^{1}_{(0,2)}=1;\ {\color[rgb]{1,0,0}q^{2}_{(1,0,1,0)}={p^{\prime}}^{(1,0,1,0)}_{(1,0)}=1}\,,\\
Q_{3}&\colon q^{3}_{1}=q^{2}_{1}=1;\ q^{3}_{2}=q^{2}_{3}=1;\ q^{3}_{(0,0,2)}={q}^{2}_{(0,0,0,2)}=1\,,\\
&\quad q^{3}_{(1,0,0)}={q^{2}}_{(1,0,0,0)}=1;\ q^{3}_{(0,1,0)}={q^{2}}_{(0,0,1,0)}=1;\ q^{3}_{(1,1,0)}={q^{2}}_{(1,0,1,0)}=1\,,\\
Q_{3}^{\prime}&\colon {q^{\prime}}^{3}_{\ast}=1;\ {q^{\prime}}^{3}_{(1,0)}=1;\ {q^{\prime}}^{3}_{(0,2)}=1\,,\\
Q_{4}&\colon q^{4}_{1}=q^{2}_{2}=1;\ q^{4}_{(0,0,2)}={q^{\prime}}^{3}_{(0,2)}=1;\ q^{4}_{(1,0,0)}={q}^{2}_{(0,1,0,0)}=1;\ q^{4}_{(0,1,0)}={q^{\prime}}^{3}_{(1,0)}=1\,.
\end{align*}
In the inference marked with $(\ast)$, we employ the (almost trivial)
correctness of the following _cost-free_ typing derivation for
${{br}{\colon}\\!{\mathsf{T}},{cr}{\colon}\\!{\mathsf{T}}}{\mid}{P^{(1,0,1,0)}}\vdash^{\text{cf}}{(br,c,cr)}{\colon}\\!{\mathsf{T}}{\mid}{{P^{\prime}}^{(1,0,1,0)}}$.
(For instantiation of the rule $(\mathsf{{let:T}})$ note $\vec{b}=(1,0)$.)
$\dfrac{p^{(1,0,1,0)}_{(1,1,0)}={p^{\prime}}^{(1,0,1,0)}_{(1,0)}}{{{br}{\colon}\\!{\mathsf{T}},{cr}{\colon}\\!{\mathsf{T}}}{\mid}{P^{(1,0,1,0)}}\vdash^{\text{cf}}{(br,c,cr)}{\colon}\\!{\mathsf{T}}{\mid}{{P^{\prime}}^{(1,0,1,0)}}}\,.$ (13)
For all $\vec{b}\not=(0)$, $\vec{b}\not=(1)$ and arbitrary $d$, $e$, we set
$P^{(\vec{b},d,e)}={P^{\prime}}^{(\vec{b},d,e)}\mathrel{:=}\varnothing$. Our
prototype checks the correctness of the annotations given above fully
automatically.
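The side condition $p^{(1,0,1,0)}_{(1,1,0)}={p^{\prime}}^{(1,0,1,0)}_{(1,0)}$ of (13) transports the base function $\log(\lvert br\rvert+\lvert cr\rvert)$ into $\log(\lvert t^{\prime\prime\prime}\rvert)$, which rests on the size identity $\lvert(br,c,cr)\rvert=\lvert br\rvert+\lvert cr\rvert$. A minimal OCaml check, where the leaf-counting definition of $\lvert\cdot\rvert$ is an assumption spelled out in the code:

```ocaml
type tree = Leaf | Node of tree * int * tree

(* |t| counts leaves, so that |Node (l, _, r)| = |l| + |r|. *)
let rec size = function
  | Leaf -> 1
  | Node (l, _, r) -> size l + size r
```

Under this definition, building the node (br, c, cr) preserves the total size of its subtrees, so the logarithmic base function carries over unchanged.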
We emphasise that the $(\mathsf{{let}})$-rule employed in step $(\ast)$ cannot
be avoided. In particular, the additional cost-free derivation (13) is
essential. Observe the annotations marked in red in the calculation above.
Eventually these amount to a shared potential employed in step ($\ast$). The
cost-free semantics allows us to exploit this shared potential, which
otherwise would have to be discarded.
To wit, assume momentarily that the rule $(\mathsf{{let}})$ did not make use
of cost-free reasoning, similar to the simplified $(\mathsf{{let}})$-rule
that we used in the explanations on page 5. Then the shared potential
represented by the coefficient $q_{(1,1,1,0,0)}\in Q$ is discarded by the
rule. However, this potential is then missing if we attempt to type the
judgement
${{ar}{\colon}\\!{\mathsf{T}},{al}{\colon}\\!{\mathsf{T}},{t^{\prime\prime\prime}}{\colon}\\!{\mathsf{T}}}{\mid}{R}\vdash{e^{\prime}}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}}\,,$
where $R$ is defined as $Q_{2}$, except that $r_{(1,0,1,0)}=0$. Thus, this
attempt fails. (Note that the corresponding coefficient of $Q_{2}$, marked in
red, is non-zero.)
###### Remark 6.1.
To some extent this is in contrast to the use of cost-free semantics in the
literature [22, 24, 34, 26, 42]. While cost-free semantics appears in these
works as an add-on, essential only if one wants to capture non-tail-recursive
programs, it is indispensable in our context: it is already required for the
representation of simple values.
### 6.2 Splay Trees
(Figure 6 displays the partial typing derivation: starting from
${{a}{\colon}\\!{\mathsf{B}},{t}{\colon}\\!{\mathsf{T}}}{\mid}{Q}\vdash{e}{\colon}\\!{\mathsf{T}}{\mid}{Q^{\prime}}$,
successive $(\mathsf{{match}})$ and $(\mathsf{{ite}})$ steps lead through the
annotations $Q_{1}$, $Q_{2}$ and $Q_{3}$ to the judgements for $e_{3}$ and
$e_{4}$; the recursive call splay a bl is typed via the signature
${\mathsf{T}}{\mid}{Q}\to{\mathsf{T}}{\mid}{Q^{\prime}}$ with result
annotation $Q^{\prime}-1$, and the final step types $t^{\prime}$ from the
annotation $Q_{5}$ via $Q_{4}$.)
Figure 6: Partial Typing Derivation for splay, focusing on the zig-zig Case.
(Figure 7 displays the corresponding cost-free derivation: it follows the
same path through the zig-zig case, now wrt. the annotations $P$, $P_{1}$,
$P_{2}$, $P_{3}$ and $P_{4}$; here the recursive call splay a bl is typed
cost-free with the empty annotations
${\mathsf{T}}{\mid}{\varnothing}\to{\mathsf{T}}{\mid}{\varnothing}$.)
Figure 7: Cost-Free Derivation for splay, focusing on the zig-zig Case.
In this subsection, we exemplify the use of the type system presented in the
previous section on the function splay, cf. Figure 2. Our amortised analysis
of splaying yields that the amortised cost of splay a t is bounded by
$3\log(\lvert{t}\rvert)+1$, where the actual cost counts the number of
recursive calls to splay, cf. [52, 47, 43]. To verify this amortised cost, we
derive
${{a}{\colon}\\!{\mathsf{Base}},{t}{\colon}\\!{\mathsf{Tree}}}{\mid}{Q}\vdash{e}{\colon}\\!{\mathsf{Tree}}{\mid}{Q^{\prime}}\,,$ (14)
where the expression $e$ is the definition of splay given in Figure 2 and the
annotations $Q$ and $Q^{\prime}$ are as follows:
\begin{align*}
Q&\colon q_{1}=1,\ q_{(1,0)}=3,\ q_{(0,2)}=1\,,\\
Q^{\prime}&\colon q^{\prime}_{\ast}=1\,.
\end{align*}
Remark that the amortised cost of splaying is represented by the coefficients
$q_{(1,0)}$ and $q_{(0,2)}$, which in sum express
$3\log(\lvert{t}\rvert)+1$. Note further that the coefficients $q_{1}$ and
$q^{\prime}_{\ast}$ represent Schoenmakers’ potential, that is,
$\operatorname{\mathsf{rk}}(t)$ and
$\operatorname{\mathsf{rk}}(\textsf{splay}\ a\ t)$, respectively.
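Concretely, the potential described by $Q$ evaluates to $\operatorname{\mathsf{rk}}(t)+3\log_2(\lvert t\rvert)+\log_2 2$. The following OCaml sketch computes it; the leaf-counting size and the particular variant of the rank function written out below are assumptions of this sketch, chosen for illustration:

```ocaml
type tree = Leaf | Node of tree * int * tree

(* |t| counts leaves, so that |Node (l, _, r)| = |l| + |r|. *)
let rec size = function Leaf -> 1 | Node (l, _, r) -> size l + size r

let log2 n = log (float_of_int n) /. log 2.

(* One common variant of the rank function:
   rk(leaf) = 0,  rk((l,a,r)) = rk l + rk r + log2 |l| + log2 |r|. *)
let rec rk = function
  | Leaf -> 0.
  | Node (l, _, r) -> rk l +. rk r +. log2 (size l) +. log2 (size r)

(* Base function for an index (a,b): log2 (a*|t| + b). *)
let basef t (a, b) = log2 (a * size t + b)

(* Phi(t | Q) for Q: q_1 = 1, q_(1,0) = 3, q_(0,2) = 1. *)
let phi t = rk t +. 3. *. basef t (1, 0) +. 1. *. basef t (0, 2)
```

Note that the index $(0,2)$ contributes the constant $\log_2 2 = 1$, accounting for the "+1" in the amortised bound.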
We restrict to the zig-zig case: t = (($bl$, $b$, $br$), $c$, $cr$) together
with the recursive call splay a bl = (al,a’,ar) and $a<b<c$. Thus splay a t
yields ($al$, $a^{\prime}$, ($ar$, $b$, ($br$, $c$, $cr$))) =: t’. Recall that
$a$ need not occur in $t$; in this case, the last element $a^{\prime}$ found
before a leaf is rotated to the root. Our prototype checks the correctness of
these annotations automatically.
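The zig-zig rebuild itself is a pure rearrangement of subtrees. A minimal OCaml sketch (the tree type and the packaging of the recursive result as a labeled argument are assumptions made for illustration):

```ocaml
type tree = Leaf | Node of tree * int * tree

(* Zig-zig: t = ((bl, b, br), c, cr) with a < b < c. Given the recursive
   result  splay a bl = (al, a', ar),  rebuild
   (al, a', (ar, b, (br, c, cr))). *)
let zigzig ~recur b br c cr =
  match recur with
  | Node (al, a', ar) -> Node (al, a', Node (ar, b, Node (br, c, cr)))
  | Leaf -> Leaf  (* cannot occur for a non-leaf recursive result *)
```

The rebuild uses each subtree exactly once, matching the linear treatment of tree variables in the type system.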
Let $e_{1}$ denote the subexpression of the definition of splaying starting
in program line $4$. Let $e_{2}$ denote the subexpression defined from line
$5$ to $15$, and let $e_{3}$ denote the program code within $e_{2}$ starting
in line $8$. Finally, the expression in lines $11$ and $12$ expands to the
following, once part of the syntactic sugar is removed:
$e_{4}\mathrel{:=}\textsf{let}\ x=\textsf{splay}\ a\ bl\ \textsf{in}\ \textsf{match}\ x\ \textsf{with}\ \mid\ \textsf{leaf}\to\textsf{leaf}\ \mid\ (al,a^{\prime},ar)\to t^{\prime}\,.$
Figure 6 shows a simplified derivation of (14), where we have focused only on
the particular path in the derivation tree suited to the considered zig-zig
case of the definition of splaying. Omission of premises or rules is
indicated by double lines in the inference step. Again we make crucial use of
the cost-free semantics in this derivation. The corresponding inference step
in Figure 6 is marked with ($\ast$) and the employed shared potentials are
marked in red.
We abbreviate
$\Gamma\mathrel{:=}{a}{\colon}\!{\mathsf{B}},{b}{\colon}\!{\mathsf{B}},{c}{\colon}\!{\mathsf{B}}$,
$\Delta\mathrel{:=}{b}{\colon}\!{\mathsf{B}},{c}{\colon}\!{\mathsf{B}}$. In
addition to the original signature of splaying,
${\mathsf{B}\times\mathsf{T}}{\mid}{Q}\to{\mathsf{T}}{\mid}{Q^{\prime}}$, we
use the following annotations, induced by constraints in the type system, cf.
Figure 5. As in Section 6.1, we mark annotations that require cost-free
derivations in the $(\mathsf{{let:T}})$ rule in red.
$\displaystyle Q_{1}\colon\; q^{1}_{1}=q^{1}_{2}=q_{1}=1,\quad q^{1}_{(1,1,0)}=q_{(1,0)}=3,\quad q^{1}_{(1,0,0)}=q^{1}_{(0,1,0)}=q_{1}=1,\quad q^{1}_{(0,0,2)}=q_{(0,2)}=1\;,$
$\displaystyle Q_{2}\colon\; q^{2}_{1}=q^{2}_{2}=q^{2}_{3}=1,\quad q^{2}_{(0,0,0,2)}=1,\quad q^{2}_{(1,1,1,0)}=q^{1}_{(1,1,0)}=3,\quad q^{2}_{(0,1,1,0)}=q^{1}_{(1,0,0)}=1,$
$\displaystyle\phantom{Q_{2}\colon\;} q^{2}_{(1,0,0,0)}=q^{1}_{(0,1,0)}=1,\quad q^{2}_{(0,1,0,0)}=q^{2}_{(0,0,1,0)}=q^{1}_{1}=1\;,$
$\displaystyle Q_{3}\colon\; q^{3}_{1}=q^{3}_{2}=q^{3}_{3}=1,\quad q^{3}_{(0,0,0,2)}=2,$
$\displaystyle\phantom{Q_{3}\colon\;} q^{3}_{(0,1,0,0)}=3,\quad q^{3}_{(1,0,0,0)}=q^{3}_{(0,0,1,0)}=q^{3}_{(1,0,1,0)}={\color[rgb]{1,0,0}q^{3}_{(1,1,1,0)}}=1\;.$
In the step marked with the rule $(\mathsf{{w}})$ in Figure 6, a _weakening_
step is applied, which amounts to the following inequality:
$\Phi({\Gamma,{cr}{\colon}\!{\mathsf{T}},{bl}{\colon}\!{\mathsf{T}},{br}{\colon}\!{\mathsf{T}}}{\mid}{Q_{2}})\geqslant\Phi({\Gamma,{cr}{\colon}\!{\mathsf{T}},{bl}{\colon}\!{\mathsf{T}},{br}{\colon}\!{\mathsf{T}}}{\mid}{Q_{3}})\;.$
We emphasise that this step can neither be avoided, nor easily moved to the
axioms of the derivation. We verify the correctness of _weakening_ through a
direct comparison. Let $\sigma$ be a substitution. Then, we have
$\displaystyle\Phi({\sigma};{{cr}{\colon}\!{\mathsf{T}},{bl}{\colon}\!{\mathsf{T}},{br}{\colon}\!{\mathsf{T}}}{\mid}{Q_{2}})$
$\displaystyle=1+\operatorname{\mathsf{rk}}(cr)+\operatorname{\mathsf{rk}}(bl)+\operatorname{\mathsf{rk}}(br)+3\log(\lvert{cr}\rvert+\lvert{bl}\rvert+\lvert{br}\rvert)+{}$
$\displaystyle\quad{}+\log(\lvert{bl}\rvert+\lvert{br}\rvert)+\log(\lvert{cr}\rvert)+\log(\lvert{bl}\rvert)+\log(\lvert{br}\rvert)$
$\displaystyle=1+\operatorname{\mathsf{rk}}(cr)+\operatorname{\mathsf{rk}}(bl)+\operatorname{\mathsf{rk}}(br)+2\log(\lvert{t}\rvert)+\log(\lvert{t}\rvert)+{}$
$\displaystyle\quad{}+\log(\lvert{bl}\rvert+\lvert{br}\rvert)+\log(\lvert{cr}\rvert)+\log(\lvert{bl}\rvert)+\log(\lvert{br}\rvert)$
$\displaystyle\geqslant 1+\operatorname{\mathsf{rk}}(cr)+\operatorname{\mathsf{rk}}(bl)+\operatorname{\mathsf{rk}}(br)+\log(\lvert{bl}\rvert)+\log(\lvert{br}\rvert+\lvert{cr}\rvert)+2+{}$
$\displaystyle\quad{}+\log(\lvert{bl}\rvert+\lvert{br}\rvert+\lvert{cr}\rvert)+{}$
$\displaystyle\quad{}+\log(\lvert{bl}\rvert+\lvert{br}\rvert)+\log(\lvert{cr}\rvert)+\log(\lvert{bl}\rvert)+\log(\lvert{br}\rvert)$
$\displaystyle\geqslant\operatorname{\mathsf{rk}}(bl)+1+3\log(\lvert{bl}\rvert)+\operatorname{\mathsf{rk}}(cr)+\operatorname{\mathsf{rk}}(br)+\log(\lvert{br}\rvert)+{}$
$\displaystyle\quad{}+\log(\lvert{cr}\rvert)+\log(\lvert{br}\rvert+\lvert{cr}\rvert)+{}$
$\displaystyle\quad{}+\log(\lvert{bl}\rvert+\lvert{br}\rvert+\lvert{cr}\rvert)+1=\Phi({\sigma};{{cr}{\colon}\!{\mathsf{T}},{bl}{\colon}\!{\mathsf{T}},{br}{\colon}\!{\mathsf{T}}}{\mid}{Q_{3}})\;.$
Note that we have used Lemma 7.1 in the third line to conclude
$2\log(\lvert{t}\rvert)\geqslant\log(\lvert{bl}\rvert)+\log(\lvert{br}\rvert+\lvert{cr}\rvert)+2\;,$
as we have
$\lvert{t}\rvert=\lvert{((bl,\,b,\,br),\,c,\,cr)}\rvert=\lvert{bl}\rvert+\lvert{br}\rvert+\lvert{cr}\rvert$.
Furthermore, we have only used monotonicity of $\log$ and formal
simplifications.
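As an informal sanity check (our own sketch, not part of the formal development), the weakening inequality $\Phi(Q_{2})\geqslant\Phi(Q_{3})$ can be evaluated numerically for concrete sizes, assuming $\log$ denotes the binary logarithm. The rank terms $\operatorname{\mathsf{rk}}(bl)$, $\operatorname{\mathsf{rk}}(br)$, $\operatorname{\mathsf{rk}}(cr)$ occur identically on both sides and are therefore omitted.

```python
import math
from itertools import product

def phi_q2(bl, br, cr):
    # non-rank part of Phi(Q2); the rank terms rk(bl), rk(br), rk(cr)
    # appear identically in Phi(Q3) and cancel in the comparison
    s = bl + br + cr
    return (1 + 3 * math.log2(s) + math.log2(bl + br)
            + math.log2(cr) + math.log2(bl) + math.log2(br))

def phi_q3(bl, br, cr):
    # non-rank part of Phi(Q3)
    s = bl + br + cr
    return (2 + 3 * math.log2(bl) + math.log2(br)
            + math.log2(cr) + math.log2(br + cr) + math.log2(s))

# the weakening step claims Phi(Q2) >= Phi(Q3); test on a grid of sizes
assert all(phi_q2(bl, br, cr) >= phi_q3(bl, br, cr)
           for bl, br, cr in product(range(1, 25), repeat=3))
print("weakening inequality holds on all tested sizes")
```

The check mirrors the derivation above: the difference of the two sides is at least $1+\log(\lvert{bl}\rvert+\lvert{br}\rvert)-\log(\lvert{bl}\rvert)$, so no floating-point tolerance is needed.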
Further, we verify the use of the $(\mathsf{{let:T}})$-rule, marked with
$(\ast)$ in the proof. Consider the following annotation $Q_{4}$:
$\displaystyle Q_{4}$ $\displaystyle\colon
q^{4}_{1}=q^{3}_{1};q^{4}_{2}=q^{3}_{3};q^{4}_{3}=q^{\prime}_{\ast};q^{4}_{(1,0,0,0)}=q^{3}_{(1,0,0,0)};q^{4}_{(0,1,0,0)}=q^{3}_{(0,0,1,0)};$
$\displaystyle\quad
q^{4}_{(1,1,0,0)}=q^{3}_{(1,0,1,0)};q^{4}_{(1,1,1,0)}={p^{\prime}}^{(1,1,1,0)}_{(1,0)}=1$
$\displaystyle P^{(1,1,1,0)}$
$\displaystyle\colon{\color[rgb]{1,0,0}p^{(1,1,1,0)}_{(1,0)}}={\color[rgb]{1,0,0}q^{3}_{(1,1,1,0)}}={\color[rgb]{1,0,0}1}$
$\displaystyle{P^{\prime}}^{(1,1,1,0)}$
$\displaystyle\colon{\color[rgb]{1,0,0}{p^{\prime}}^{(1,1,1,0)}_{(1,0)}}={\color[rgb]{1,0,0}1}$
To see that $Q_{4}$ is consistent with the constraints on resource annotations
in the $(\mathsf{{let:T}})$-rule, we first note that
$\displaystyle Q+1\colon\; q=q^{3}_{2}=1,\quad q_{(1,0)}=q^{3}_{(0,1,0,0)}=3,\quad q_{(0,2)}=q^{3}_{(0,0,0,2)}=2\;.$
Hence the constraints on the annotations for the left typing tree in the
$(\mathsf{{let:T}})$-rule amount to the following:
$q=q^{3}_{2}=1\quad q_{(1,0)}=q^{3}_{(0,1,0,0)}=3\quad q_{(0,2)}=q^{3}_{(0,0,0,2)}=2\quad q^{\prime}_{\ast}=q^{4}_{3}=1\;,$
which are fulfilled. Further, the right typing tree yields the constraints:
$\displaystyle q^{4}_{1}=q^{3}_{1}=1\quad q^{4}_{2}=q^{3}_{3}=1\quad q^{4}_{(1,0,0,0)}=q^{3}_{(1,0,0,0)}=1\quad q^{4}_{(0,1,0,0)}=q^{3}_{(0,0,1,0)}=1\quad q^{4}_{(1,1,0,0)}=q^{3}_{(1,0,1,0)}=1\;,$
which are also fulfilled. Hence, it remains to check the correctness of the
constraints for the actual cost-free derivation. First, note that for the
vector $\vec{b}=(1,1)$, the cost-free derivation needs to be checked wrt. the
annotation pair $P^{(1,1,1,0)}=[p^{(1,1,1,0)}_{(1,0)}]$ and
${P^{\prime}}^{(1,1,1,0)}=[{p^{\prime}}^{(1,1,1,0)}_{(1,0)}]$. Second, the
various constraints in the rule $(\mathsf{{let:T}})$ simplify to the
inequality $p^{(1,1,1,0)}_{(1,0)}\geqslant{p^{\prime}}^{(1,1,1,0)}_{(1,0)}$,
which holds. Third, the actual cost-free type derivation reads as follows:
${{a}{\colon}\!{\mathsf{B}},{bl}{\colon}\!{\mathsf{T}}}{\mid}{P^{(1,1,1,0)}}\vdash{\textsf{splay a bl}}{\colon}\!{\mathsf{T}}{\mid}{{P^{\prime}}^{(1,1,1,0)}}\;.$ (15)
The typing judgement (15) is derivable if the following cost-free signatures
are employed for splaying:
$\textsf{splay}\colon{\mathsf{Tree}}{\mid}{P}\to^{\text{cf}}{\mathsf{Tree}}{\mid}{P^{\prime}}\qquad{\mathsf{Tree}}{\mid}{\varnothing}\to^{\text{cf}}{\mathsf{Tree}}{\mid}{\varnothing}\;,$
where $P=[p_{(1,0)}]$, $P^{\prime}=[p^{\prime}_{(1,0)}]$, with
$p_{(1,0)}=p^{\prime}_{(1,0)}\mathrel{:=}1$. Recall that $\varnothing$ denotes
the empty annotation, where all coefficients are set to zero. By definition,
$P=P^{(1,1,1,0)}$ and $P^{\prime}={P^{\prime}}^{(1,1,1,0)}$. Informally, this
cost-free signature is admissible, as the following equality holds:
$\Phi({\sigma};{{a}{\colon}\!{\mathsf{B}},{bl}{\colon}\!{\mathsf{T}}}{\mid}{P})=\log(\lvert{bl}\rvert)=\log(\lvert{(al,a^{\prime},ar)}\rvert)=\Phi({{(al,a^{\prime},ar)}{\colon}\!{\mathsf{T}}}{\mid}{P^{\prime}})\;.$
Recall that we have $\textsf{splay a bl} = (al, a^{\prime}, ar)$ for the recursive
call and that $\lvert{bl}\rvert=\lvert{(al,a^{\prime},ar)}\rvert$.
Formally, the type derivation of (15) proceeds similarly to the derivation in
Figure 6, in conjunction with the analysis in Subsection 6.1; see Figure 7. As
indicated, the cost-free derivation also requires the use of the full version
of the rule $(\mathsf{{let:T}})$, as marked by $(\ast)$. In particular, the
informal argument on the size of the argument and the result of splaying is
built into the type system. We use the following annotations:
$\displaystyle P$ $\displaystyle\colon p_{(1,0)}=1$ $\displaystyle P^{\prime}$
$\displaystyle\colon p^{\prime}_{(1,0)}=1$ $\displaystyle P_{1}$
$\displaystyle\colon p^{1}_{(1,1,0)}=p_{(1,0)}=1$ $\displaystyle P_{2}$
$\displaystyle\colon{\color[rgb]{1,0,0}p^{2}_{(1,1,1,0)}=p^{1}_{(1,1,0)}=1}$
$\displaystyle P_{3}$ $\displaystyle\colon
p^{3}_{(1,1,1,0)}=p^{\prime}_{(1,0)}=1$ $\displaystyle P_{4}$
$\displaystyle\colon p^{4}_{(1,1,1,1,0)}=p^{3}_{(1,1,1,0)}=1$
Finally, one further application of the $(\mathsf{{match}})$-rule yields the
desired derivation for a suitable $Q_{5}$; see also the previous subsection.
Note that one further application of the _weakening_ rule is required here.
## 7 Linearisation and Expert Knowledge
In the context of the presented type-and-effect system (see Figure 5) an
obvious challenge is the requirement to compare potentials symbolically (see
Section 5) rather than compare annotations directly. This is in contrast to
results on resource analysis for constant amortised costs, see eg. [38, 37,
24, 26, 39]. Furthermore, the presence of logarithmic basic functions seems to
necessitate the embodiment of nonlinear arithmetic. In particular, we need to
make use of basic laws of the $\log$ function, such as the following one. A
variant of the fact below has already been observed by Okasaki, cf. [44].
###### Lemma 7.1.
Let $x,y\geqslant 1$. Then $2+\log(x)+\log(y)\leqslant 2\log(x+y)$.
###### Proof 7.2.
We observe
$(x+y)^{2}-4xy=(x-y)^{2}\geqslant 0\;.$
Hence $(x+y)^{2}\geqslant 4xy$ and from the monotonicity of $\log$ we conclude
$\log(xy)\leqslant\log(\frac{(x+y)^{2}}{4})$. By elementary laws of $\log$ we
obtain:
$\log\left(\frac{(x+y)^{2}}{4}\right)=\log\left(\left(\frac{x+y}{2}\right)^{2}\right)=2\log(x+y)-2\;,$
from which the lemma follows as $\log(xy)=\log(x)+\log(y)$.
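Since this lemma underpins several steps of the derivation above, a quick numerical check is cheap to perform. The sketch below (ours, assuming the binary logarithm) tests the inequality on a grid and confirms that equality is attained at $x=y$.

```python
import math

def lemma_7_1(x, y, eps=1e-9):
    # 2 + log(x) + log(y) <= 2 * log(x + y), for x, y >= 1 (log base 2);
    # eps absorbs floating-point rounding at the boundary case x == y
    return 2 + math.log2(x) + math.log2(y) <= 2 * math.log2(x + y) + eps

assert all(lemma_7_1(x, y) for x in range(1, 200) for y in range(1, 200))
# equality is attained at x == y, e.g. x = y = 3:
assert math.isclose(2 + 2 * math.log2(3), 2 * math.log2(6))
print("Lemma 7.1 verified on the sampled grid")
```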
A refined and efficient approach which targets linear constraints is
achievable as follows. All logarithmic terms, that is, terms of the form
$\log(\cdot)$, are replaced by new variables, of which only finitely many are
needed. For the latter, we exploit the condition that in resource annotations
only finitely many coefficients are non-zero. Consider the following
inequality as a prototypical example. Validity of the constraint ought to
incorporate the monotonicity of $\log$.
$a_{1}\log(\lvert{t}\rvert)+a_{2}\log(\lvert{cr}\rvert)\geqslant b_{1}\log(\lvert{t}\rvert)+b_{2}\log(\lvert{cr}\rvert)\;,$ (16)
where we assume $t=(cl,\,c,\,cr)$
for some value $c$ and thus $\lvert{t}\rvert\geqslant\lvert{cr}\rvert$, cf.
Section 6.2. Replacing $\log(\lvert{t}\rvert),\log(\lvert{cr}\rvert)$ with new
unknowns $x$, $y$, respectively, we represent (16) as follows:
$\forall x,y\geqslant 0.\ a_{1}x+a_{2}y\geqslant b_{1}x+b_{2}y\;.$ (17)
Here we keep the side-condition $x\geqslant y$ and observe that the unknowns
$x$, $y$ can be assumed to be non-negative, as they represent values of the
$\log$ function. Thus properties like e.g. monotonicity of $\log$, as well as
properties like Lemma 7.1 above, can be expressed as inequalities over the
introduced unknowns. E.g., the inequality $x\geqslant y$ above represents the
axiom of monotonicity $\log(\lvert{t}\rvert)\geqslant\log(\lvert{cr}\rvert)$.
All such obtained inequalities are collected as _expert or prior knowledge_.
This entails that (17) is equivalent to the following existential constraint
satisfaction problem:
$\exists c,d.\ a_{1}\geqslant b_{1}+c\land a_{2}\geqslant d\land b_{2}\leqslant c+d\;.$ (18)
We seek to systematise the derivation of inequalities such as (18) from expert
knowledge. For that, we assume that the gathered prior knowledge is
represented by a system of inequalities as
$A\vec{x}\leqslant\vec{b},\vec{x}\geqslant 0$, where $A$ denotes a matrix with
as many rows as we have prior knowledge, $\vec{b}$ a column vector and
$\vec{x}$ the column vector of unknowns of suitable length; $\vec{x}\geqslant
0$ because $\log$ evaluates to non-negative values.
Below we discuss a general method for the derivation of inequalities such as
(18) based on the affine form of Farkas’ lemma. First, we state the variant of
Farkas’ lemma that we use in this article, cf. [48]. Note that $\vec{u}$ and
$\vec{f}$ denote column vectors of suitable length.
###### Lemma 7.3 (Farkas’ lemma).
Suppose $A\vec{x}\leqslant\vec{b},\vec{x}\geqslant 0$ is solvable. Then the
following assertions are equivalent.
$\displaystyle\forall\vec{x}\geqslant 0.\
A\vec{x}\leqslant\vec{b}\Rightarrow\vec{u}^{T}\vec{x}\leqslant\lambda$ (19)
$\displaystyle\exists\vec{f}\geqslant 0.\
\vec{u}^{T}\leqslant\vec{f}^{T}A\land\vec{f}^{T}\vec{b}\leqslant\lambda$ (20)
###### Proof 7.4.
It is easy to see that (20) implies (19). Assume (20), and assume further
that $A\vec{x}\leqslant\vec{b}$ for some column vector $\vec{x}$. Then we have
$\vec{u}^{T}\vec{x}\leqslant\vec{f}^{T}A\vec{x}\leqslant\vec{f}^{T}\vec{b}\leqslant\lambda\;.$
Note that for this direction the assumption that
$A\vec{x}\leqslant\vec{b},\vec{x}\geqslant 0$ is solvable is not required.
With respect to the opposite direction, we assume (19). By assumption,
$A\vec{x}\leqslant\vec{b},\vec{x}\geqslant 0$ is solvable. Hence, maximisation
of $\vec{u}^{T}\vec{x}$ under the side condition
$A\vec{x}\leqslant\vec{b},\vec{x}\geqslant 0$ is feasible. Let $w$ denote the
maximal value. Due to (19), we have $w\leqslant\lambda$.
Now, consider the dual asymmetric linear program to minimise
$\vec{y}^{T}\vec{b}$ under side condition $\vec{y}^{T}A=\vec{u}^{T}$ and
$\vec{y}\geqslant 0$. Due to the Dualisation Theorem, the dual problem is also
solvable with the same solution
$\vec{y}^{T}\vec{b}=\vec{u}^{T}\vec{x}=w\;.$
We define $\vec{f}\mathrel{:=}\vec{y}$, which attains the optimal value $w$.
Then $\vec{f}^{T}A=\vec{u}^{T}$, $\vec{f}\geqslant 0$, and
$\vec{f}^{T}\vec{b}=w\leqslant\lambda$. This yields (20).
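The two directions of the lemma can be illustrated on a concrete instance. In the sketch below, the data $A$, $\vec{b}$, $\vec{u}$ and the certificate $\vec{f}$ are toy values of our own choosing: we verify a certificate in the sense of (20), and then confirm (19) on sampled feasible points.

```python
# concrete instance of prior knowledge A x <= b with
# A = [[1, 1], [2, 1]], b = [4, 6]; target: u^T x <= lam for u = (3, 2)
A = [[1, 1], [2, 1]]
b = [4, 6]
u = [3, 2]

# certificate f >= 0 as in (20): u^T <= f^T A and f^T b <= lam
f = [1, 1]
fA = [f[0] * A[0][j] + f[1] * A[1][j] for j in range(2)]
lam = f[0] * b[0] + f[1] * b[1]          # f^T b
assert all(u[j] <= fA[j] for j in range(2))

# (19) now follows: sample points x >= 0 with A x <= b and check u^T x <= lam
step = 0.1
pts = [(i * step, j * step) for i in range(61) for j in range(61)]
feasible = [(x1, x2) for x1, x2 in pts
            if x1 + x2 <= 4 and 2 * x1 + x2 <= 6]
assert all(u[0] * x1 + u[1] * x2 <= lam + 1e-9 for x1, x2 in feasible)
print("Farkas certificate checked; lam =", lam)
```

Here the certificate is tight: maximising $\vec{u}^{T}\vec{x}$ over the feasible region attains exactly $\vec{f}^{T}\vec{b}$, as in the duality argument of the proof.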
Second, we discuss a method for the derivation of inequalities such as (18)
based on Farkas’ lemma. Our goal is to automatically discharge symbolic
constraints such as
$\Phi({\Gamma}{\mid}{P})\leqslant\Phi({\Gamma}{\mid}{Q})$—as well as
$\Phi({\Gamma}{\mid}{P^{\prime}})\geqslant\Phi({\Gamma}{\mid}{Q^{\prime}})$—as
required by the _weakening_ rule $(\mathsf{{w}})$ (see Section 5).
According to the above discussion we can represent the inequality
$\Phi({\Gamma}{\mid}{P})\leqslant\Phi({\Gamma}{\mid}{Q})$ by
$\vec{p}^{T}\vec{x}+c_{p}\leqslant\vec{q}^{T}\vec{x}+c_{q}\;,$
where $\vec{x}$ is a finite vector of variables representing the base
potential functions, $\vec{p}$ and $\vec{q}$ are column vectors representing
the unknown coefficients of the non-constant potential functions, and $c_{p}$
and $c_{q}$ are the coefficients of the constant potential functions. We
assume the expert knowledge is given by the constraints
$A\vec{x}\leqslant\vec{b},\vec{x}\geqslant 0$. We now want to derive
conditions for $\vec{p}$, $\vec{q}$, $c_{p}$, and $c_{q}$ such that we can
guarantee
$\forall\vec{x}\geqslant 0.\ A\vec{x}\leqslant\vec{b}\Rightarrow\vec{p}^{T}\vec{x}+c_{p}\leqslant\vec{q}^{T}\vec{x}+c_{q}\;.$ (21)
By Farkas’ lemma it is sufficient to find coefficients $\vec{f}\geqslant 0$
such that
$\vec{p}^{T}\leqslant\vec{f}^{T}A+\vec{q}^{T}\land\vec{f}^{T}\vec{b}+c_{p}\leqslant c_{q}\;.$ (22)
Hence, we can ensure Equation (21) by Equation (22) using the new unknowns
$\vec{f}$.
We illustrate Equation (22) on the above example. We have $A=(-1\ 1)$, $b=0$,
$\vec{p}=(b_{1}\ b_{2})^{T}$, $\vec{q}=(a_{1}\ a_{2})^{T}$ as well as
$c_{p}=c_{q}=0$. Then, the inequality $fb+c_{p}\leqslant c_{q}$ simplifies to
$0\leqslant 0$ and can in the following be omitted. With the new unknown
$f\geqslant 0$ we have
$(b_{1}\ b_{2})\leqslant f(-1\ 1)+(a_{1}\ a_{2})\;,$
which we can rewrite to
$b_{1}\leqslant-f+a_{1}\land b_{2}\leqslant f+a_{2}\;,$
easily seen to be equivalent to Equation (18).
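The equivalence can be tested exhaustively for small integer coefficients. In the sketch below (ours), the side condition $x\geqslant y\geqslant 0$ is sampled on a grid, and we additionally take $c\geqslant 0$ in (18), since $c$ stems from the non-negative Farkas multiplier $f$; the choice $c=a_{1}-b_{1}$, $d=a_{2}$ is then optimal.

```python
from itertools import product

def forall_check(a1, a2, b1, b2, grid=range(0, 11)):
    # sampled version of (17), under the side condition x >= y >= 0
    return all(a1 * x + a2 * y >= b1 * x + b2 * y
               for x, y in product(grid, repeat=2) if x >= y)

def exists_check(a1, a2, b1, b2):
    # (18): exists c, d with a1 >= b1 + c, a2 >= d, b2 <= c + d,
    # where c >= 0 since it stems from the Farkas multiplier f >= 0;
    # the best choice is c = a1 - b1 and d = a2
    return a1 - b1 >= 0 and b2 <= (a1 - b1) + a2

# the two formulations agree on all small integer coefficients
coeffs = range(-3, 4)
for a1, a2, b1, b2 in product(coeffs, repeat=4):
    assert forall_check(a1, a2, b1, b2) == exists_check(a1, a2, b1, b2)
print("sampled (17) agrees with (18) on all tested coefficients")
```

For integer coefficients the sampled grid is decisive: the extreme points $x=10, y=0$ and $x=y=10$ already detect any violation of $a_{1}\geqslant b_{1}$ or $a_{1}+a_{2}\geqslant b_{1}+b_{2}$.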
Thus, the validity of constraints incorporating the monotonicity of log
becomes expressible in a systematic way. Further, the symbolic constraints
enforced by the _weakening_ rule can be discharged systematically and become
expressible as existential constraint satisfaction problems. Note that the
incorporation of Farkas’ lemma in the above form subsumes the well-known
practise of coefficient comparison for the synthesis of polynomial
interpretations [15], ranking functions [46] or resource annotations in the
context of constant amortised costs [24].
In the next section, we briefly detail our implementation of the established
logarithmic amortised resource analysis, based on the observations in this
section.
## 8 Implementation
Based on the principal approach, delineated in Section 2, we have provided a
prototype implementation of the logarithmic amortised resource analysis
detailed above. The prototype is capable of _type checking_ a given resource
annotation and requires user interaction to specify the structural inferences
sharing and weakening. These can be applied manually to improve efficiency of
type checking. In future work, we will strive for full automation, capable of
_type inference_. In this section, we briefly indicate the corresponding
design choices and heuristics used. Further, we present restrictions and
future development areas of the prototype developed.
_Template potential functions._ Our potential-based method employs linear
combinations of basic potential functions $\mathcal{BF}$, cf. Definition 4.1.
In order to fix the cardinality of the set of resource functions to be
considered, we restrict the coefficients of the potential functions
$p_{(a_{1},\dots,a_{m},b)}$. For the non-constant part, we demand that
$a_{i}\in\{0,1\}$, while the coefficient $b$, representing the constant
part, is restricted to $\{0,1,2\}$. This restriction to a relatively small set
of basic potential functions suitably controls the number of constraints
generated for each inference rule in the type-and-effect system.
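The restricted index set is easy to enumerate. The following sketch (the function name is ours) lists all coefficient indices $(a_{1},\dots,a_{m},b)$ with $a_{i}\in\{0,1\}$ and $b\in\{0,1,2\}$, giving $2^{m}\cdot 3$ basic potential functions per typing context with $m$ trees.

```python
from itertools import product

def template_indices(m):
    """All coefficient indices (a_1, ..., a_m, b) of the basic potential
    functions p_(a_1,...,a_m,b), with a_i in {0,1} and b in {0,1,2}."""
    return [avec + (b,)
            for avec in product((0, 1), repeat=m)
            for b in (0, 1, 2)]

# a context with m trees yields m rank coefficients plus 2^m * 3 indices
for m in (1, 2, 3):
    print(m, len(template_indices(m)))   # prints 1 6, 2 12, 3 24
assert len(template_indices(3)) == 24
```

This matches the annotations used in Section 6: for three tree variables, e.g. $Q_{3}$ draws its indices $(1,0,0,0)$, $(0,1,0,0)$, $(1,1,1,0)$, $(0,0,0,2)$, etc., from this set.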
_Type-and-effect system._ Following ideas of classical Hindley-Milner type
inference, we collect for each node in the abstract syntax tree (AST) of the
given program the constraints given by the corresponding inference rules in
the type system (see Figure 5). As a pre-requisite, we restrict ourselves to
three type annotations employed for each function symbol. (i) One
indeterminate type annotation representing a function call with costs; (ii)
one indeterminate cost-free type annotation to represent a zero-cost call; and
(iii) one fixed cost-free annotation with the empty annotation that doesn’t
carry any potential. These restrictions were sufficient to handle the zig-zig
case of splaying. A larger, potentially infinite set of type annotations is
conceivable, as long as well-typedness is respected, cf. Definition 5.3. As
noted in the context of the analysis of constant amortised complexity an
enlarged set of type annotations may be even required to handle non-tail
recursive programs, cf. [24, 26]. The collected constraints on the type
annotations are passed to an SMT solver, in our case the SMT solver Z3 (see
https://github.com/Z3Prover/z3/wiki), and solved over the positive rational
numbers. Here we can directly encode the equalities and inequalities of the
constraints given in the type system. Due to the use of Farkas’ lemma (Lemma
7.3), only linear constraints are generated.
Our implementation currently only supports type checking, taking user
guidance into account, and is thus semi-automated. While deriving constraints
for the AST nodes of the program is straightforward (as there is only one type
rule for every syntactic statement of our programming language), we currently
require user interaction for the application of the structural rules.
_Structure rules._ The structural rules can in principle be applied at every
AST node of the program under analysis. However, they introduce additional
variables and constraints and for performance reasons it is better to apply
them sparingly. For the _sharing_ rule we proceed as follows: We recall that
the sharing rule allows us to assume that the type system is linear. In
particular, we can assume that every variable occurs exactly once in the type
context, which is exploited in the definition of the _let_ rules. However,
such an eager application of the sharing rule would directly lead to a size
explosion in the number of constraints, as each fresh variable requires the
generation of exponentially many annotations. Hence, we apply sharing only
when strictly necessary. Similar to the sharing rule
$(\mathsf{{share}})$, variable weakening $(\mathsf{{w:var}})$ is employed only
when required. In this way the typing context can be kept small. This in turn
reduces the number of constraints generated.
For the _weakening_ rule, we employ our novel methods for symbolically
comparing logarithmic expressions, which we discussed in Section 7. Because of
our use of Farkas’ Lemma, weakening introduces new unknown coefficients, which
again may result in a forbiddingly large search space. Thus, weakening steps
are particularly costly. For performance reasons, we need to control the size
of the resulting constraint system. Currently, we rely on the user to specify
the number and place of the applications of the weakening rule. This is
achieved through the provision of suitable _tactics_ for type checking. Note
that the weakening rule may need to be applied in the middle of a type
derivation, see for example the typing derivation for our motivating example
in Figure 6. This contrasts with the literature, where the weakening rule can
typically be incorporated into the axioms of the type system and thus
dispensed with. Perhaps a similar approach is possible in the context of
logarithmic amortised resource analysis. But our current understanding does
not support this.
_Expert Knowledge._ In Section 7, we propose the generation of a suitable
matrix $A$ collecting the _expert or prior knowledge_ on such inequalities. In
particular, wrt. Lemma 7.1, generation of this expert knowledge is
straightforward. The corresponding inequality amounts to a line in the expert
knowledge matrix $A$. Wrt. monotonicity we have experimented with a dedicated
size analysis based on a simple static analysis of the given AST, as well as
exploitation of the type annotations directly. For the latter, note that the
coefficients in the basic potential functions $p_{(a_{1},\dots,a_{m},b)}$ are
reflected in the corresponding type annotations. Hence comparison of these
(unknown) coefficients allows a sufficient size comparison of the data-
structures (ie. trees) used in the program at hand.
For now, our exploitation of the expert knowledge is restricted to the
monotonicity of the logarithm, together with the simple mathematical fact about
logarithms presented in Lemma 7.1. To improve the efficiency and effectiveness
of the methodology, the following additions could be explored: (i) additional
mathematical facts on the logarithm function; (ii) improvement of the
dedicated size analysis supporting the applicability of monotonicity laws;
(iii) incorporation of basic static analysis techniques, like the result of a
reachability analysis, etc.
## 9 Related Work
To the best of our knowledge, the established type-and-effect system for the
analysis of logarithmic amortised complexity is novel, and the semi-automated
resource analysis of self-balancing data-structures like splay trees is
unparalleled in the literature. However, there is a vast amount of literature
on (automated) resource analysis. Without any claim to completeness, we
briefly mention [1, 3, 11, 20, 2, 4, 21, 6, 8, 17, 19] for an overview of the
field.
(Constant) amortised cost analysis has been in particular pioneered by Martin
Hofmann and his collaborators. Starting with seminal work on the static
prediction of heap space usage [32, 36], the approach has been generalised to
(lazy) functional programming [38, 37, 23, 25, 24] and rewriting [33, 34].
Automation of amortised resource analysis has also been greatly influenced by
Hofmann, yielding sophisticated tools for the analysis of higher-order
functional programs [28, 27, 22], as well as of object-oriented programs [36,
9]. We mention here the highly sophisticated analysis behind the RaML
prototype developed in [29, 30, 31, 26] and the RAJA tool [36].
We now overview alternatives to conducting amortised cost analysis through the
means of a type-and-effect system. The line of work [58, 49, 50, 51, 16] has
focused on identifying abstractions resp. abstract program models that can be
used for the automated resource analysis of imperative programs. The goal has
been to identify program models that are sufficiently rich to support the
inference of precise bounds and sufficiently abstract to allow for a scalable
analysis, employing the size-change abstraction [58], (lossy) vector-addition
systems [49] and difference-constraint systems [50, 51]. This work has led to
the development of the tool LOOPUS, which performs amortised analysis for a
class of programs that cannot be handled by related tools from the literature.
Interestingly, LOOPUS infers worst-case costs from lexicographic ranking
functions using arguments that implicitly achieve an amortised analysis (for
details we refer the reader to [51]). Another line of work has targeted the
resource bound analysis of imperative and object-oriented programs through the
extraction of recurrence relations from the program under analysis, whose
closed-form solutions then allow one to infer upper bounds on resource usage
[1, 2, 4, 17]. Amortised analysis with recurrence relations has been discussed
for the tools COSTA [4] and CoFloCo [17]. Amortised analysis has also been
employed in the resource analysis for rewriting [42] and non-strict functional
programs, in particular if _lazy evaluation_ is considered, cf. [39].
Sublinear bounds are typically not in the focus of these tools, but can be
inferred by some tools. In the recurrence relations based approach to cost
analysis [1, 2] refinements of linear ranking functions are combined with
criteria for divide-and-conquer patterns; this allows their tool PUBS to
recognise logarithmic bounds for some problems, but examples such as
_mergesort_ or _splaying_ are beyond the scope of this approach. Logarithmic
and exponential terms are integrated into the synthesis of ranking functions
in [13], making use of an insightful adaption of Farkas’ and Handelman’s
lemmas. The approach is able to handle examples such as _mergesort_ , but is
again not suitable for self-balancing data-structures. A type based
approach to cost analysis for an ML-like language is presented in [54], which
uses the Master Theorem to handle divide-and-conquer-like recurrences. Very
recently, support for the Master Theorem was also integrated for the analysis
of rewriting systems [55], extending [7] on the modular resource analysis of
rewriting to so-called logically constrained rewriting systems [18]. The
resulting approach also supports the fully automated analysis of _mergesort_.
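The polynomial-driver case of the Master Theorem mentioned above, covering divide-and-conquer recurrences of the form $T(n) = a\,T(n/b) + \Theta(n^k)$, can be sketched as follows. This is a minimal illustration only; the function name is ours, and the cited tools implement far more general analyses than this case split:

```python
import math

def master_theorem(a, b, k):
    """Asymptotic bound for T(n) = a*T(n/b) + Theta(n^k), with a >= 1, b > 1.
    Returns a human-readable Theta-expression for the three classic cases."""
    crit = math.log(a, b)               # critical exponent log_b(a)
    if abs(crit - k) < 1e-9:            # f(n) matches the recursion tree
        return f"Theta(n^{k} log n)"
    if crit > k:                        # leaves dominate
        return f"Theta(n^{crit:g})"
    return f"Theta(n^{k})"              # root dominates
```

For instance, the mergesort recurrence $T(n) = 2\,T(n/2) + \Theta(n)$ falls into the balanced case and yields $\Theta(n \log n)$.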
We also mention the quest for abstract program models whose resource bound
analysis problem is decidable, and for which the obtainable resource bounds
can be precisely characterised. We list here the size-change abstraction,
whose worst-case complexity has been completely characterised as polynomial
(with rational coefficients) [14, 56], vector-addition systems [12, 57], for
which polynomial complexity can be decided, and LOOP programs [10], for which
multivariate polynomial bounds can be computed. We are not aware of similar
results for program models that induce logarithmic bounds.
## 10 Conclusion
We have presented a novel amortised resource analysis based on the potential
method. The method is rendered in a type-and-effect system. Our type system
has been designed with the goal of automation. The novelty of our contribution
is that this is the first approach towards the automation of a _logarithmic_
amortised complexity analysis. In particular, we show how the precise
logarithmic amortised cost of _splaying_ —a central operation of Sleator and
Tarjan’s splay trees—can be checked semi-automatically in our system. As our
potential functions are logarithmic, we cannot directly encode the comparison
between logarithmic expressions within the theory of linear arithmetic. This
however is vital for, e.g., expressing Schoenmakers’ and Nipkow’s (manual)
analysis [47, 43] in our type-and-effect system. In order to overcome this
algorithmic challenge, we proposed several ideas for the _linearisation_ of
the induced constraint satisfaction problem. These efforts can be readily
extended by expanding upon the _expert knowledge_ currently employed, e.g. via
incorporation of the results of a static analysis performed in a pre-
processing step. In future work, we aim at extending the developed
prototypes to a fully automated analysis of logarithmic amortised complexity.
Here it may be profitable to expand the class of potential functions to take
_linear_ potential functions into account. This does not invalidate our
soundness theorem.
#### In Memoriam.
Martin Hofmann and the 4th author have discussed and developed a large part of
the theoretical body of this work together. Unfortunately, Martin’s tragic
hiking accident in January 2018 prevented the conclusion of this
collaboration. Due to Martin’s great interest and contributions to this work,
we felt it fitting to include him as first author. We’ve tried our best to
finalise the common conceptions and ideas. Still automation and continued
research clarified a number of issues and also brought a different focus on
various matters of the material presented. Martin Hofmann’s work was
revolutionary in a vast amount of fields and it will continue to inspire
future researchers—like he inspired us.
## References
* [1] E. Albert, P. Arenas, S. Genaim, and G. Puebla. Automatic inference of upper bounds for recurrence relations in cost analysis. In Proc. 15th SAS, volume 5079, pages 221–237, 2008.
* [2] E. Albert, P. Arenas, S. Genaim, and G. Puebla. Closed-form upper bounds in static cost analysis. JAR, 46(2), 2011.
* [3] C. Alias, A. Darte, P. Feautrier, and L. Gonnord. Multi-dimensional rankings, program termination, and complexity bounds of flowchart programs. In Proc. 17th SAS, volume 6337 of LNCS, pages 117–133, 2010.
* [4] D. E. Alonso-Blas and S. Genaim. On the limits of the classical approach to cost analysis. In A. Miné and D. Schmidt, editors, Proc. 19th SAS, volume 7460 of LNCS, pages 405–421. Springer, 2012.
* [5] M. Avanzini, N. Eguchi, and G. Moser. A path order for rewrite systems that compute exponential time functions. In Proceedings of the 22nd RTA, volume 10 of LIPIcs, pages 123–138, 2011.
* [6] M. Avanzini, U. D. Lago, and G. Moser. Analysing the complexity of functional programs: higher-order meets first-order. In Proc. 20th ICFP, pages 152–164. ACM, 2015.
* [7] M. Avanzini and G. Moser. A combination framework for complexity. IC, 248:22–55, 2016.
* [8] M. Avanzini, G. Moser, and M. Schaper. TcT: Tyrolean Complexity Tool. In Proc. 22nd TACAS, volume 9636 of LNCS, pages 407–423, 2016.
* [9] S. Bauer, S. Jost, and M. Hofmann. Decidable inequalities over infinite trees. In G. Barthe, G. Sutcliffe, and M. Veanes, editors, Proc. 22nd LPAR, volume 57 of EPiC Series in Computing, pages 111–130. EasyChair, 2018.
* [10] A. M. Ben-Amram and G. W. Hamilton. Tight worst-case bounds for polynomial loop programs. In M. Bojanczyk and A. Simpson, editors, Proc. 22nd FOSSACS, volume 11425 of LNCS, pages 80–97. Springer, 2019.
* [11] R. Blanc, T. A. Henzinger, T. Hottelier, and L. Kovács. ABC: algebraic bound computation for loops. In Proc. 16th LPAR, volume 6355 of LNCS, pages 103–118, 2010.
* [12] T. Brázdil, K. Chatterjee, A. Kucera, P. Novotný, D. Velan, and F. Zuleger. Efficient algorithms for asymptotic bounds on termination time in VASS. In A. Dawar and E. Grädel, editors, Proc. 33rd LICS, pages 185–194. ACM, 2018.
* [13] K. Chatterjee, H. Fu, and A. K. Goharshady. Non-polynomial worst-case analysis of recursive programs. In Proc. 29th CAV, volume 10427 of LNCS, pages 41–63, 2017.
* [14] T. Colcombet, L. Daviaud, and F. Zuleger. Size-change abstraction and max-plus automata. In E. Csuhaj-Varjú, M. Dietzfelbinger, and Z. Ésik, editors, Proc. 39th MFCS, volume 8634 of LNCS, pages 208–219. Springer, 2014.
* [15] E. Contejean, C. Marché, A.-P. Tomás, and X. Urbain. Mechanically proving termination using polynomial interpretations. JAR, 34(4):325–363, 2005.
* [16] T. Fiedor, L. Holík, A. Rogalewicz, M. Sinn, T. Vojnar, and F. Zuleger. From shapes to amortized complexity. In I. Dillig and J. Palsberg, editors, Proc. 19th VMCAI, volume 10747 of LNCS, pages 205–225. Springer, 2018.
* [17] A. Flores-Montoya. Cost Analysis of Programs Based on the Refinement of Cost Relations. PhD thesis, Darmstadt University of Technology, Germany, 2017.
* [18] C. Fuhs, C. Kop, and N. Nishida. Verifying procedural programs via constrained rewriting induction. TOCL, 18(2):14:1–14:50, 2017.
* [19] J. Giesl, C. Aschermann, M. Brockschmidt, F. Emmes, F. Frohn, C. Fuhs, J. Hensel, C. Otto, M. Plücker, P. Schneider-Kamp, T. Ströder, S. Swiderski, and R. Thiemann. Analyzing program termination and complexity automatically with AProVE. JAR, 58(1):3–31, 2017.
* [20] S. Gulwani and F. Zuleger. The reachability-bound problem. In B. G. Zorn and A. Aiken, editors, PLDI, pages 292–304. ACM, 2010.
* [21] M. Hermenegildo, F. Bueno, M. Carro, P. López-García, E. Mera, J. Morales, and G. Puebla. An overview of ciao and its design philosophy. TPLP, 12(1-2):219–252, 2012.
* [22] J. Hoffmann. Types with Potential: Polynomial Resource Bounds via Automatic Amortized Analysis. PhD thesis, Ludwig-Maximilians-Universiät München, 2011.
* [23] J. Hoffmann, K. Aehlig, and M. Hofmann. Multivariate amortized resource analysis. In Proc. 38th POPL, pages 357–370. ACM, 2011.
* [24] J. Hoffmann, K. Aehlig, and M. Hofmann. Multivariate amortized resource analysis. TOPLAS, 34(3):14, 2012.
* [25] J. Hoffmann, K. Aehlig, and M. Hofmann. Resource aware ML. In Proc. 24th CAV, volume 7358 of LNCS, pages 781–786, 2012.
* [26] J. Hoffmann, A. Das, and S.-C. Weng. Towards automatic resource bound analysis for ocaml. In Proc. 44th POPL, pages 359–373. ACM, 2017.
* [27] J. Hoffmann and M. Hofmann. Amortized resource analysis with polymorphic recursion and partial big-step operational semantics. In Proc. 8th APLAS, volume 6461 of LNCS, pages 172–187, 2010.
* [28] J. Hoffmann and M. Hofmann. Amortized resource analysis with polynomial potential. In Proc. 19th ESOP, volume 6012 of LNCS, pages 287–306, 2010.
* [29] J. Hoffmann and Z. Shao. Type-based amortized resource analysis with integers and arrays. In Proc. 12th FLOPS, volume 8475 of LNCS, pages 152–168, 2014.
* [30] J. Hoffmann and Z. Shao. Automatic static cost analysis for parallel programs. In Proc. 24th ESOP, volume 9032 of LNCS, pages 132–157, 2015.
* [31] J. Hoffmann and Z. Shao. Type-based amortized resource analysis with integers and arrays. JFP, 25, 2015.
* [32] M. Hofmann and S. Jost. Static prediction of heap space usage for first-order functional programs. In Proc. 30th POPL, pages 185–197. ACM, 2003.
* [33] M. Hofmann and G. Moser. Amortised resource analysis and typed polynomial interpretations. In Proc. of Joint 25th RTA and 12th TLCA, volume 8560 of LNCS, pages 272–286, 2014.
* [34] M. Hofmann and G. Moser. Multivariate amortised resource analysis for term rewrite systems. In Proc. 13th TLCA, volume 38 of LIPIcs, pages 241–256, 2015.
* [35] M. Hofmann and G. Moser. Analysis of logarithmic amortised complexity, 2018. extended version.
* [36] M. Hofmann and D. Rodriguez. Automatic type inference for amortised heap-space analysis. In M. Felleisen and P. Gardner, editors, Proc. 22nd ESOP, volume 7792 of LNCS, pages 593–613. Springer, 2013.
* [37] S. Jost, K. Hammond, H.-W. Loidl, and M. Hofmann. Static determination of quantitative resource usage for higher-order programs. In Proc. 37th POPL, pages 223–236. ACM, 2010.
* [38] S. Jost, H.-W. Loidl, K. Hammond, N. Scaife, and M. Hofmann. “Carbon Credits” for resource-bounded computations using amortised analysis. In Proc. 2nd FM, volume 5850 of LNCS, pages 354–369, 2009.
* [39] S. Jost, P. Vasconcelos, M. Florido, and K. Hammond. Type-based cost analysis for lazy functional languages. JAR, 59(1):87–120, 2017.
* [40] D. M. Kahn and J. Hoffmann. Exponential automatic amortized resource analysis. In J. Goubault-Larrecq and B. König, editors, Proc. 23rd FOSSACS, volume 12077 of LNCS, pages 359–380. Springer, 2020.
* [41] G. Moser and M. Schneckenreither. Automated amortised resource analysis for term rewrite systems. In Proc. 14th FLOPS, volume 10818 of LNCS, pages 214–229, 2018.
* [42] G. Moser and M. Schneckenreither. Automated amortised resource analysis for term rewrite systems. Sci. Comput. Program., 185, 2020.
* [43] T. Nipkow. Amortized complexity verified. In Proc. 6th ITP, volume 9236 of LNCS, pages 310–324, 2015.
* [44] C. Okasaki. Purely functional data structures. Cambridge University Press, 1999.
* [45] B. Pierce. Types and programming languages. MIT Press, 2002.
* [46] A. Podelski and A. Rybalchenko. A complete method for the synthesis of linear ranking functions. In Proc. 5th VMCAI, volume 2937 of LNCS, pages 239–251, 2004.
* [47] B. Schoenmakers. A systematic analysis of splaying. IPL, 45(1):41–50, 1993.
* [48] A. Schrijver. Theory of linear and integer programming. Wiley, 1999.
* [49] M. Sinn, F. Zuleger, and H. Veith. A simple and scalable static analysis for bound analysis and amortized complexity analysis. In Proc. 26th CAV, volume 8559 of LNCS, pages 745–761, 2014.
* [50] M. Sinn, F. Zuleger, and H. Veith. Difference constraints: An adequate abstraction for complexity analysis of imperative programs. In R. Kaivola and T. Wahl, editors, FMCAD, pages 144–151. IEEE, 2015.
* [51] M. Sinn, F. Zuleger, and H. Veith. Complexity and resource bound analysis of imperative programs using difference constraints. JAR, 59(1):3–45, 2017.
* [52] D. Sleator and R. Tarjan. Self-adjusting binary search trees. JACM, 32(3):652–686, 1985.
* [53] R. Tarjan. Amortized computational complexity. SIAM J. Alg. Disc. Meth, 6(2):306–318, 1985.
* [54] P. Wang, D. Wang, and A. Chlipala. TiML: A functional language for practical complexity analysis with invariants. Proc. ACM Program. Lang., 1(OOPSLA), 2017.
* [55] S. Winkler and G. Moser. Runtime complexity analysis of logically constrained rewriting. In Proc. LOPSTR 2020, LNCS, 2020.
* [56] F. Zuleger. Asymptotically precise ranking functions for deterministic size-change systems. In L. D. Beklemishev and D. V. Musatov, editors, Proc. 10th CSR, volume 9139 of LNCS, pages 426–442. Springer, 2015.
* [57] F. Zuleger. The polynomial complexity of vector addition systems with states. In J. Goubault-Larrecq and B. König, editors, Proc. 23rd FOSSACS, volume 12077 of LNCS, pages 622–641. Springer, 2020.
* [58] F. Zuleger, S. Gulwani, M. Sinn, and H. Veith. Bound analysis of imperative programs with the size-change abstraction. In E. Yahav, editor, Proc. 18th SAS, volume 6887 of LNCS, pages 280–297. Springer, 2011.
11institutetext: Department of Physics, P.O.Box 64, FI-00014, University of
Helsinki, Finland 22institutetext: Institut de Radioastronomie Millimétrique
(IRAM), 300 rue de la Piscine, 38406 Saint-Martin d’Hères, France
33institutetext: LERMA & UMR 8112 du CNRS, Observatoire de Paris, PSL Research
University, CNRS, Sorbonne Universités, UPMC Univ. Paris 06, 75014 Paris,
France 44institutetext: Institut d’Astrophysique Spatiale, CNRS, Univ. Paris-
Sud, Université Paris-Saclay, Bât. 121, 91405 Orsay cedex, France
# Multi-wavelength observations and modelling of a quiescent cloud LDN1512
Mika Saajasto 11 Mika Juvela 11 Charlène Lefèvre 22 Laurent Pagani 33 Nathalie
Ysard 44
(Received day month year / Accepted day month year)
###### Abstract
Context. Light scattering at near-infrared wavelengths has been used to study
the optical properties of the interstellar dust grains, but these studies are
limited by the assumptions on the strength of the radiation field. On the
other hand, thermal dust emission can be used to constrain the properties of
the radiation field, although this is hampered by uncertainty about the dust
emissivity.
Aims. Combining light scattering and emission studies allows us to probe the
properties of the dust grains in detail. We wish to study if current dust
models allow us to model a molecular cloud simultaneously in the near infrared
(NIR) and far infrared (FIR) wavelengths and compare the results with
observations. Our aim is to place constraints on the properties of the dust
grains and the strength of the radiation field.
Methods. We present computations of dust emission and scattered light of a
quiescent molecular cloud LDN1512. We use NIR observations covering the J, H,
and KS bands, and FIR observations between 250 $\mu$m and 500 $\mu$m from
Herschel space telescope. We construct radiative transfer models for LDN1512
that include an anisotropic radiation field and a three-dimensional cloud
model.
Results. We are able to reproduce the observed FIR observations, with a
radiation field derived from the DIRBE observations, with all of the tested
dust models. However, with the same density distribution and the assumed
radiation field, the models fail to reproduce the observed NIR scattering in
all cases except for models that take into account dust evolution via
coagulation and mantle formation. The intensity from dust models resembling
the diffuse interstellar medium (ISM) can be increased to match the observed
one by reducing the derived density and by increasing the intensity of the
background sky and the strength of the radiation field by factors of 2 to 3. We find
that the column densities derived from our radiative transfer modelling can
differ by a factor of up to two, compared to the column densities derived from
the observations with modified blackbody fits. The discrepancy in the column
densities is likely caused by the temperature difference between a
modified blackbody fit and the real spectrum. The difference between the fitted
temperature and the true temperature could be as high as $\Delta T=+1.5$ K.
Conclusions. We show that the observed dust emission can be reproduced with
several different assumptions about the properties of the dust grains.
However, in order to reproduce the observed scattered surface brightness dust
evolution must be taken into account.
###### Key Words.:
Interstellar medium (ISM): Clouds – Physical processes: Emission – Physical
processes: Scattering – Methods: Radiative Transfer – Stars: Formation
## 1 Introduction
Understanding how stars are formed is one of the crucial questions in
astronomy. The Herschel space observatory has provided us with detailed
observations of nearby molecular clouds and shown that star forming regions
have vastly diverse morphologies, from dynamically active filamentary fields
to more quiescent clouds with simple geometries (Molinari et al., 2010;
Men’shchikov et al., 2010; Juvela et al., 2012). These far infrared (FIR)
observations can be used to derive column density estimates and to study
possible variations in dust properties. However, these studies are limited by
our understanding of the emission properties of the grains, in particular the
dust opacity and to a lesser degree the dust opacity spectral index.
The light scattered by dust grains at near-infrared (NIR) and mid-infrared
(MIR) wavelengths can be studied and analysed without explicit assumptions of
the FIR thermal emission properties of the grains, thus the scattering
observations can be used to place additional constraints on dust properties
and the density distribution. Lehtinen & Mattila (1996) were the first to
study the extended surface brightness of dense interstellar cores at NIR
wavelengths and later Foster & Goodman (2006) showed that it is possible to
make large-scale maps of this extended surface brightness, naming it
’Cloudshine’, and attributed it to scattered light. Padoan et al. (2006) used
radiative transfer modelling to show that the observed Cloudshine was
consistent with the hypothesis of light scattering and could be used for
studies of cloud structure. Star-forming clouds, but not the cores, are
usually only moderately optically thick in the NIR and therefore there is a
near-linear dependence between the surface brightness and the column density,
assuming that the dust properties do not change with column density.
At MIR wavelengths, especially in the Spitzer 3.6 and 4.5 $\mu$m bands, a
surprisingly high surface brightness was detected towards several cloud cores
(Steinacker et al., 2010; Pagani et al., 2010; Juvela et al., 2012).
Explaining the higher-than-expected surface brightnesses with the classical
grain size distributions (Mathis et al., 1977) proved difficult, implying the
presence of larger grains with sizes of the order of $\sim 1\,\mu$m
(Steinacker et al., 2010; Pagani et al., 2010; Andersen et al., 2013; Lefèvre
et al., 2014; Steinacker et al., 2015). The high surface brightness towards
cloud centres was named ’Coreshine’ and is considered direct evidence
of dust growth in dense clouds.
The comparison of thermal dust emission and light scattering provides crucial
information on the dust properties. On the other hand, because of limited
resolution, or issues caused by anisotropic illumination and line-of-sight
confusion, careful radiative transfer modelling is often needed to place
constraints on theoretical models of dust.
We have chosen the cloud LDN1512 (hereafter L1512) to study and model NIR
light scattering and thermal dust emission at FIR wavelengths. The cloud has a
simple cometary morphology, is nearby (140 $\pm\,20$ pc, Launhardt et al.,
2013), and based on the reported small line width of $\rm N_{2}H^{+}$ (Caselli
et al., 1995, 2002; Lin et al., 2020), appears to be quiescent.
The cloud has been previously mapped with the Herschel space observatory using
both the photodetecting array camera and spectrometer (PACS) (Poglitsch et
al., 2010) and the spectral and photometric imaging receiver (SPIRE) (Griffin
et al., 2010) instruments. As discussed by Launhardt et al. (2013) and Lippok
et al. (2013) the Herschel observations show a single starless core surrounded
by a diffuse envelope. Based on the fitting of the $\rm{}^{13}CO$ line, Lippok
et al. (2013) showed that the envelope of the core has a higher gas
temperature compared to the central regions and their stability analysis shows
that the core is thermally supercritical. The $\rm N_{2}H^{+}$ observations of
Lin et al. (2020) show a low kinetic temperature of $\sim 8\,\pm\,1$ K within
the innermost $\sim 0.017$ pc of the core, and based on their chemical
modelling, the cloud is sufficiently evolved that the $\rm N_{2}$ chemistry
has reached a steady state. Their results suggest that L1512 is likely older
than 1.4 Myr and that ambipolar diffusion has led to the formation of the
core.
We use J, H, and KS bands images, shown as a three-colour image in Fig. 1,
from the Wide InfraRed CAMera (WIRCAM) on the Canada-France-Hawaii Telescope
(CFHT). The observations are presented in Lin et al. (2020), where only the
stellar content of the images has been exploited. In this paper we concentrate
on the extended emission. After background subtraction, all three bands show a
clear extended surface brightness component.
In this study, we will simultaneously model the cloud at NIR and FIR
wavelengths using an anisotropic radiation field and a cloud model derived
from the Herschel observations together with dust models for the diffuse ISM
dust, based on the model described by Compiègne et al. (2011) and three models
that take into account the evolution of dust grains in shape and size. The aim
of our study is to find a solution, or place constraints, for the cloud, the
radiation field and the properties of the dust grains, that would give a
consistent explanation for the observed extinction, NIR scattering, and dust
emission.
Figure 1: Three colour image of the field L1512. The colours correspond to the
NIR surface brightness at J (blue colour), H (green), and KS bands (red). All
point sources have been masked.
This paper is organised as follows. In section 2, we give an overview of the
archival observations used in the paper and describe our NIR observations. In
section 3, we describe our radiative transfer methodology and explain our
radiation field and cloud models. In section 4 we present our results and
discuss our findings in section 5. Finally, in section 6, we summarise our
findings and provide our conclusions.
## 2 Observations
The Herschel observations were downloaded from the Herschel science archive
and are a part of the Herschel key project The Earliest Phases of Star
Formation (EPoS, PI: O. Krause). The EPoS project used both the PACS and the
SPIRE instruments to cover a total of 12 different fields between wavelengths
from 100 $\mu$m to 500 $\mu$m. In the project’s source list, the source CB 27
corresponds to the cloud L1512. The nominal scanning speeds for the PACS and
SPIRE instruments were set to 20$\arcsec$ s-1 and 30$\arcsec$ s-1,
respectively. The general noise level in the EPoS project’s maps is $\sim 6$
mJy/beam, but for L1512 the noise level is slightly higher, $\sim 11$
mJy/beam. A more detailed description of the data reduction is given by
Launhardt et al. (2013).
The NIR observations cover the wide filters J (1.25 $\mu$m), H (1.6 $\mu$m),
and KS (2.15 $\mu$m) (see Fig. 23) and were obtained in 2013 using the WIRCAM
instrument on the CFHT. The observations were carried out using the Sky-
Target-Sky (STS) dithering mode, to subtract the atmospheric IR-emission and
to preserve any extended scattered light from the source. The seeing
conditions during the observations were good, with typical values less than
$1\arcsec$. Data reduction was performed at the TERAPIX center using a
specific reduction method to recover the extended emission.
## 3 Methods
Our aim is to simultaneously model the cloud in NIR and FIR and to compare our
modelling results with observations. In this section we describe our model of
the density distribution within the cloud and our radiation field model.
### 3.1 Cloud model
We use a three-dimensional model cloud discretised onto a grid of $225^{3}$ cells
with a Gaussian density distribution along the line-of-sight (LOS). The
angular cell size of our model cloud is 6$\arcsec$ which corresponds to the
pixel size of the Herschel SPIRE 250 $\mu$m map. Thus, the total angular
extent of our model is $22.5\arcmin\times 22.5\arcmin$, which at a distance of
140 pc corresponds to a physical size of $\sim 0.4\times 0.4$ pc. The density
distribution is based on the Herschel SPIRE 350 $\mu$m surface brightness
(determining the column density distribution on the plane of the sky) and
scaled along the LOS (z-coordinate) by a Gaussian distribution
$\rho(z)=\frac{1}{\sigma\sqrt{2\pi}}\exp\,\left(-\frac{(z-\mu)^{2}}{2\sigma^{2}}\right),$
(1)
where $\mu=0.0$ pc and with either $\sigma=0.07$ pc, labeled Narrow, or with
$\sigma=0.11$ pc, labeled Wide. The column density distribution is optimised
during the model fitting, taking into account the temperature structure of the
model cloud.
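The construction above, in which a 2D column-density map is distributed along the LOS by the normalised Gaussian of Eq. (1), can be sketched as follows. This is a minimal illustration under the paper's assumptions ($\mu = 0$, a 0.4 pc deep cube); the function names are ours, not those of the authors' pipeline:

```python
import numpy as np

def gaussian_los_profile(n_z, sigma_pc, extent_pc):
    """Normalised Gaussian weights along the line of sight (Eq. 1, mu = 0 pc)."""
    z = np.linspace(-extent_pc / 2, extent_pc / 2, n_z)
    rho = np.exp(-z**2 / (2 * sigma_pc**2)) / (sigma_pc * np.sqrt(2 * np.pi))
    return rho / rho.sum()  # normalise so the column density is preserved

def build_density_cube(column_density, sigma_pc=0.07, extent_pc=0.4):
    """Scale a 2D column-density map into a 3D cube by the Gaussian LOS profile.
    sigma_pc = 0.07 corresponds to the 'Narrow' model, 0.11 to 'Wide'."""
    n_z = column_density.shape[0]
    w = gaussian_los_profile(n_z, sigma_pc, extent_pc)
    # broadcast: every LOS receives the same normalised z-profile
    return column_density[None, :, :] * w[:, None, None]
```

Summing the cube along the z-axis recovers the input column-density map exactly, which is what the subsequent per-pixel optimisation relies on.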
### 3.2 Radiation field
The radiation field used in our modelling is based on the Diffuse Infrared
Background Experiment
(DIRBE111http://lambda.gsfc.nasa.gov/product/cobe/dirbe_exsup.cfm) Zodi-
Subtracted Mission Average (ZSMA) maps (Hauser et al., 1998) at 1.2 - 240
$\mu$m. Outside the DIRBE wavelengths we follow the Mathis et al. (1983)
model, scaling the values below 1 $\mu$m by 1.4 to match the level of the
DIRBE data. Similarly, we use linear interpolation in the range from 240
$\mu$m to 650 $\mu$m, to avoid a discontinuity in our model (Fig. 2). The
intensity is distributed over the map pixels following the surface brightness
distribution of the DIRBE maps. The spatial distribution at wavelengths
shorter than 1 $\mu$m follows the distribution of the DIRBE J band (Fig. 3).
At wavelengths longer than 240 $\mu$m, we assume the spatial distribution of
the DIRBE 240 $\mu$m band. Compared to the Mathis et al. (1983) model, our
radiation field has significantly higher intensity in the range [10, 240]
$\mu$m, but the contribution from these wavelengths to the heating of the dust
particles is not significant. However, the increased radiation field strength
at NIR and MIR wavelengths will affect our light scattering modelling.
Figure 2: Intensity of the radiation field, green line, as a function of
wavelength. The purple line shows the Mathis et al. (1983) model. The black
dots show the intensity of the DIRBE observations averaged over the HEALPIX
map. Figure 3: Model for the all-sky sky brightness distribution in the J
band. The map has been rotated so that the direction towards L1512 is in the
centre. The intensity of the model has been truncated at 20 MJy sr-1 for
plotting and the image has been smoothed by a Gaussian with $\rm
FWHM=2^{\circ}$.
### 3.3 Radiative transfer
To solve the dust emission and the surface brightness of the scattered
radiation, we use the Monte Carlo radiative transfer program SOC (Juvela,
2019). A schematic overview of our modelling is shown in Fig 36.
The dust grain properties are defined by the absorption and scattering
efficiencies, $\rm Q_{\rm abs}$ and $\rm Q_{\rm sca}$, and a scattering
function described using the asymmetry parameters of the Henyey–Greenstein
scattering functions, $g$, summed over the individual dust components. The
default parameters are taken from the Compiègne et al. (2011) model for the
diffuse ISM. Variations in these parameters, as well as three models including
dust evolution, are also considered. A more detailed description of the dust
models is provided in Appendix A. The absorption and scattering properties of
the dust grains vary significantly between the models; an example of the
optical properties of selected models is shown in Fig. 4.
Figure 4: Comparison between the optical depth per H as a function of
frequency for the Default, LGM, SIGMA, and THEMIS models. The yellow shaded
regions show the frequency range from 250 to 500 $\mu$m and from 1.2 to 2.5
$\mu$m.
As a first step, the dust properties, the radiation field, and cloud model are
used to compute the dust emission at the Herschel SPIRE wavelengths of 250,
350, and 500 $\mu$m. We fit our model to the observed dust emission by an
iterative process where the density of the model cloud and the strength of the
radiation field are fitted to the observations. For each iteration step, we
compare the difference between the observed and simulated surface brightness
at 350 $\mu$m, $I_{\rm obs}(350)-I_{\rm sim}(350)$ and, for each map pixel,
scale the density of each model cloud cell along the corresponding LOS by
$K_{\rho}=\frac{I_{\rm obs}(350)}{I_{\rm sim}(350)}.$ (2)
Similarly, we scale the radiation field by an average scalar factor
$K_{ISRF}=\biggl{<}\frac{I_{\rm obs}(250\,\mu\rm{m})/I_{\rm
obs}(500\,\mu\rm{m})}{I_{\rm sim}(250\,\mu\rm{m})/I_{\rm
sim}(500\,\mu\rm{m})}\biggr{>}.$ (3)
In the $K_{\rm ISRF}$ computation, we use only the pixels in the central part
of the map, within the red circle shown in Fig. 5. The scaled radiation field
and cloud model are then used to compute a new estimate for the dust emission.
Once the emission fit has converged, we use the final scaled cloud model and
radiation field in a separate radiative transfer calculation to predict the
scattered light at NIR wavelengths.
The observed surface brightness excess is relative to the background sky
$I_{\nu}^{\Delta}=I_{\rm sca}+I_{\rm bg}\times(e^{-\tau}-1),$ (4)
where $I_{\rm sca}$ is the intensity of the scattered light, $I_{\rm bg}$ is
the absolute value of the background radiation seen towards the cloud and
$\tau$ is the optical depth of the cloud. To compare the observations and
simulations a background needs to be subtracted from the simulated maps. For
the J and KS bands, the background is estimated from the DIRBE ZSMA maps as an
average over the four closest pixels to L1512 and for the H band we estimate
the value from the J and KS band values following the Mathis et al. (1983)
model. However, the DIRBE observations include the contribution from point
sources which for Eq. 4 needs to be removed. We estimate this contribution
from the 2MASS point source catalogue by integrating the intensity of all
point sources that are within the DIRBE pixels (see Lefèvre et al. (2014) for
details). The integrated intensity values are then subtracted from the average
DIRBE values resulting in background levels of 0.059, 0.061, and 0.040 $\rm
MJy/sr$ for the J, H, and KS bands, respectively. The pixel-to-pixel
variations in the DIRBE observations are at most $\sim 15\,\%$ for the J and
KS bands. Because of the relative proximity of the cloud, $\rm D=140$ pc, we
assume that all of the background radiation is emitted from behind the cloud
(i.e. there is no foreground emission).
## 4 Results
In this section we report the main results of our study. In Section 4.1, we
analyse the NIR and Herschel observations and describe the results of our
modelling in Section 4.2.
### 4.1 Observations
Figure 5 shows the column density map obtained with modified blackbody (MBB)
fits of the $Herschel$ SPIRE data. We assume a spectral index $\beta=1.8$,
which corresponds to the average in nearby molecular clouds (Planck
Collaboration XXV, 2011). The SPIRE observations were convolved to a common
resolution of 40$\arcsec$ and colour corrected using the factors of Sadavoy et
al. (2013). The J band surface brightness has a local minimum at the location
of the column density maximum, showing that the J-band scattering has
saturated because of high optical depth. The dust emission peaks south of the
column density peak in all the SPIRE channels, indicating stronger heating
from that direction.
The Gaia data release 2 (DR2, Gaia Collaboration et al., 2018) provides the
parallaxes and magnitudes of over 1.6 billion sources, a catalogue that can be
used to estimate distances within the Galaxy. To locate stars that could
produce additional illumination, we select all stars that are within two
degrees of the cloud centre and brighter than 15${}^{\rm\,mag}$ in the Gaia G
band. We convert the parallaxes to distance estimates assuming
$r=1/\bar{\omega}$, where $\bar{\omega}$ is given in seconds of arc (see
discussion by Bailer-Jones (2015) and Luri et al. (2018) for the accuracy of
this method). We find nine stars that are likely to be closer than 100 pc to
the cloud and that are brighter than 10th magnitude in the Gaia G band. Only
three of these stars are on the southern side of the cloud (Fig. 5). However,
their estimated G band illumination compared to the observed NIR surface
brightness is $I_{\rm stars}/I_{\rm SB}=0.001$, with $I_{\rm stars}=5.6\times
10^{-5}$ MJy/sr and assuming an average surface brightness of 0.07 MJy/sr.
Thus, compared to the illumination from the ISRF, the contribution from these
stars to the illumination of the cloud is minimal.
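The parallax inversion and the illumination comparison above can be sketched in a few lines (the parallax values below are illustrative placeholders, not the actual Gaia DR2 measurements; the intensity values are those quoted in the text):

```python
import numpy as np

# Hypothetical parallaxes (arcsec) for three nearby stars; illustrative
# values, not the actual Gaia DR2 measurements used in the text.
parallax_arcsec = np.array([0.012, 0.009, 0.015])

# Naive inversion r = 1/parallax (adequate for small relative parallax
# errors; see Bailer-Jones 2015 for the caveats).
distance_pc = 1.0 / parallax_arcsec

# Ratio of the estimated stellar illumination to the mean observed NIR
# surface brightness, using the values quoted in the text.
I_stars = 5.6e-5  # MJy/sr, summed G band contribution of the stars
I_SB = 0.07       # MJy/sr, average observed surface brightness
ratio = I_stars / I_SB  # ~0.001, i.e. the stellar contribution is minimal
```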
Figure 5: Contours of J-band surface brightness on $Herschel$ column density
map. The plus signs show the locations of the 250 $\mu$m, 350 $\mu$m, and 500
$\mu$m emission maxima (red, green, and black, respectively). The white arrows
indicate the projected direction of the three brightest nearby stars and the
numbers next to the arrows indicate the 3D distance to the star and the Gaia G
band average magnitude. The red circle shows the area that is used to compute
the $K_{\rm ISRF}$ scaling factor.
We derive the J band optical depth from the WIRCam observations using the
Near-Infrared Color Excess Revisited (NICER; Lombardi & Alves, 2001) method
and assuming a standard extinction curve (Cardelli et al., 1989). The
submillimetre optical depth was obtained by fitting the spectral energy
distribution (SED) of the Herschel SPIRE observations with MBB curves,
assuming an opacity spectral index $\beta=1.8$ and a dust opacity of
$\kappa_{\nu}=0.1(\nu/1000\rm GHz)^{\beta}$ cm${}^{2}\,$g-1 (Beckwith et al.,
1990), thus
$\tau_{250}=\frac{I_{250}}{B_{250}(T)},$ (5)
where $I_{250}$ is the fitted 250 $\mu$m intensity, $B_{250}$ is the Planck
function at 250 $\mu$m, and $T$ is the colour temperature from the SED fit.
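A minimal sketch of this procedure, an MBB fit with fixed $\beta=1.8$ followed by Eq. (5), might look as follows (the input intensities are synthetic, generated from assumed values of $\tau_{250}$ and $T$, not the actual SPIRE data):

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8  # SI units

def planck(nu, T):
    """Planck function B_nu(T), converted from SI to MJy/sr."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T)) * 1e20

def mbb(nu, tau250, T, beta=1.8, nu0=c / 250e-6):
    """Modified blackbody with the opacity normalised at 250 um."""
    return tau250 * (nu / nu0)**beta * planck(nu, T)

# SPIRE band frequencies (250, 350, 500 um)
nu = c / np.array([250e-6, 350e-6, 500e-6])

# Synthetic intensities for tau250 = 2.5e-3 and T = 15 K (illustrative)
I_obs = mbb(nu, 2.5e-3, 15.0)

(tau_fit, T_fit), _ = curve_fit(mbb, nu, I_obs, p0=[1e-3, 12.0])

# Eq. (5): tau250 from the fitted 250 um intensity and colour temperature
tau250 = mbb(nu[0], tau_fit, T_fit) / planck(nu[0], T_fit)
```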
Figure 6: J band extinction map and the optical depth at 250 $\mu$m. The white
contours in the J band extinction map show the estimated uncertainties of
$0.21, 0.3, 0.41$ mag, from lowest to highest contour level.
The maps for J-band extinction and $\tau_{\rm 250}$ are shown in Fig. 6 and
the correlation between the two quantities in Fig. 7. In Fig. 7, we have
excluded the central region where the extinction estimates are uncertain due
to the low number of background stars (the region within the contours in Fig.
6). A linear least squares fit gives $\tau_{250}/A_{\rm J}=(4.9\pm 0.3)\times
10^{-4}$ $\rm mag^{-1}$. Masking the high values above $\tau_{250}>0.002$
reduces the ratio to $\tau_{250}/A_{\rm J}=(2.3\pm 0.3)\times 10^{-4}$ $\rm
mag^{-1}$. The ratio of $\tau_{250}/A_{\rm J}=4.9\times 10^{-4}$ $\rm
mag^{-1}$ is slightly higher than what one would expect for diffuse clouds
(Bohlin et al., 1978; Planck Collaboration XI, 2014), but is lower than the
average value in dense clumps found in Juvela et al. (2015).
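The ratio estimate can be sketched as a least-squares slope through the origin (the data below are synthetic, generated from the fitted relation with noise chosen to give a correlation comparable to the observed one; the real pixel values are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (A_J, tau250) pairs following the fitted relation
# tau250 = 4.9e-4 * A_J (slope from the text; data are illustrative).
A_J = rng.uniform(0.2, 4.0, 200)                 # mag
tau250 = 4.9e-4 * A_J + rng.normal(0.0, 6e-4, 200)

# Least-squares slope through the origin gives the ratio tau250/A_J
slope = np.sum(A_J * tau250) / np.sum(A_J**2)    # ~4.9e-4 mag^-1

# Pearson correlation coefficient between the two quantities
r = np.corrcoef(A_J, tau250)[0, 1]
```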
Figure 7: Correlation between the optical depth at 250 $\mu$m and the J band
extinction. The purple line is a least squares fit to the data and the Pearson
correlation coefficient is 0.65 ($p\ll 0.01$).
### 4.2 Radiative transfer models
In this section we describe the results of our emission and scattering
modelling. We use the Compiègne et al. (2011) dust model as the default model,
but will also test variations, for example the effects of different size
distributions and grain optical properties. We will refer to these as 'COM
models'.
Table 1 lists the dust models used; more detailed explanations are provided in
Appendix A. For each model we computed the column density, the
background-subtracted intensity in the J, H, and KS bands, the J band and 250
$\mu$m optical depths ($\tau_{\rm J,em}$ and $\tau_{250}$), and the scaling
factor of the radiation field $K_{\rm ISRF}$. The values in Table 1, including
those derived from observations, are averages over $5\times 5$ map pixels
centred on point 1. The temperature of the core is computed as an average over
$5^{3}$ cells centred on the core; the observed core temperature is based on
the $\rm N_{2}H^{+}$ observations by Lin et al. (2020). In addition to the
models in Table 1, we tested three further modifications of the dust
properties. The differences from the Default model were minor, and these
results are presented only in Appendix C.
Our radiative transfer computations do not include stochastically heated
grains (SHG), as the emission from these grains is minimal in the SPIRE bands
and including the stochastic heating would increase the computations times
significantly. We computed stochastic heating only for two test cases, for
models that were previously fitted assuming dust at an equilibrium
temperature. The results are shown in Appendix B (Figs. 20 and 21). Including
the stochastic heating decreases the emission in the SPIRE bands by $\sim
10-15\,\%$, because some of the energy is now emitted in the MIR wavelengths.
The emission at 100 $\mu$m is 30-45$\,\%$ and at 160 $\mu$m 15$\,\%$ above the
observed values. For the Default model, the morphology of the simulated 100
and 160 $\mu$m maps agrees with the observed maps. For the THEMIS model, the
bright rim in both maps is up to 40 MJy sr-1 brighter than observed. For these
two cases, we also computed the predicted SEDs for the full wavelength range
3.6-500 $\mu$m (Fig. 22).
Table 1: Summary of the radiative transfer models, including the column
densities, NIR intensities, J band optical depth, 250 $\mu$m optical depth,
radiation field scaling, and the core temperature.
Model name | Description | $N(\rm H_{2})$ (1) | J (1) | H (1) | KS (1) | $\tau_{\rm J,em}$ (1) | $\tau_{250}$ (1) | $K_{\rm ISRF}$ | $\rm T_{core}$ (2)
---|---|---|---|---|---|---|---|---|---
| | $(\rm cm^{-2})$ | $(\rm MJy/sr)$ | $(\rm MJy/sr)$ | $(\rm MJy/sr)$ | | ($\times 10^{-3}$) | | $\rm(K)$
OBS | Values derived from observations | $4.51\times 10^{21}$ | 0.08 | 0.15 | 0.10 | 1.50 | 2.56 | - | 7.5
| COM models | | | | | | | |
Default | Compiègne et al. (2011) | $1.59\times 10^{22}$ | 0.055 | 0.047 | 0.054 | 5.80 | 2.67 | 0.447 | 9.34
Scaled2 | Emissivity of $\lambda>60$ $\mu$m scaled by 2 | $8.31\times 10^{21}$ | 0.074 | 0.057 | 0.053 | 3.03 | 2.80 | 0.515 | 10.31
Scaled4 | Emissivity of $\lambda>60$ $\mu$m scaled by 4 | $4.26\times 10^{21}$ | 0.082 | 0.051 | 0.040 | 1.55 | 2.87 | 0.663 | 10.66
LG | Included grains up to a size of 5 $\mu$m | $1.62\times 10^{22}$ | 0.112 | 0.111 | 0.128 | 4.50 | 2.93 | 0.462 | 10.36
LGM | Mass of large grains $\times$2, PAH mass $\times$0.5 | $9.21\times 10^{21}$ | 0.111 | 0.106 | 0.131 | 5.00 | 3.21 | 0.440 | 10.24
| Models with dust evolution | | | | | | | |
SIGMA | Two components, diffuse and dense components (3) | $9.58\times 10^{21}$ | 0.187 | 0.212 | 0.220 | 3.42 | 3.73 | 0.574 | 10.58
THEMIS | Dust model using the THEMIS framework (4) | $1.44\times 10^{22}$ | 0.082 | 0.127 | 0.173 | 1.05 | 2.74 | 0.471 | 7.61
DDust | Two dust components, Default and LG | $1.82\times 10^{22}$ | 0.079 | 0.085 | 0.108 | 14.38 | 3.53 | 0.745 | 9.31
(1) The column density, background subtracted intensity, and the optical
depths of J band and 250 $\mu$m band have been computed as average values over
$5\times 5$ map pixels centred on point 1 (see left panel of Fig. 11).
(2) The value derived from observations based on the $\rm N_{2}H^{+}$ line
observations by Lin et al. (2020). The modelled values are computed as
averages over $10^{3}$ cells centred at the core.
(3) The diffuse component is the Default model and the dense component is
built with SIGMA (Lefèvre et al, in prep.).
(4) For details see Köhler et al. (2015); Ysard et al. (2016)
#### 4.2.1 COM models: dust emission
Figure 8 compares the dust emission in the Default model with the
observations. The fit residuals (observed value minus the model prediction) of
the simulated 350 $\mu$m emission are $\pm 1.5\,\%$. For the 250 and 500
$\mu$m bands, the residuals are on average $\pm 3\,\%$, but increase up to
$-10\,\%$ in the densest region.
The fitted radiation field strength is lower than our reference model for all
test cases, but on average, the observations can be fitted with a scaling
factor of $K_{\rm ISRF}=0.5$. The lowest value is $K_{\rm ISRF}\sim 0.44$ for
the model LGM, while the highest value is $K_{\rm ISRF}\sim 0.66$ for the
Scaled4 model. Thus, increasing the emissivity of the grains, models Scaled2
and Scaled4, increases the required $K_{\rm ISRF}$ to compensate for the
grains being colder for a given radiation field. The $\tau_{250}$ values of
the models are higher than, but within $15\,\%$ of, the values derived from
the observations with SED fits. In models with larger grain sizes, such as LG
and LGM, the difference increases to $\sim 30\,\%$.
Figure 8: Observed (first row) and simulated (second row) emission maps using
the Default model at 250, 350, and 500 $\mu$m. The third row shows the
difference between the observations and simulations.
The column density of the simulated Default case has a maximum of $\sim
2.6\times 10^{22}$ cm-2, whereas the maximum value derived from the Herschel
observations via MBB analysis is $1.2\times 10^{22}$ cm-2. Changes in the
assumed emissivity of the dust grains naturally affect the derived column
density, see also the discussion by Malinen et al. (2011) and Ysard et al.
(2012) on the uncertainties of the MBB analysis.
A low kinetic temperature of $\sim 8\,\pm\,1$ K within the innermost $\sim
0.017$ pc of the core was derived by Lin et al. (2020) using $\rm N_{2}H^{+}$
line observations. We computed for each model an average temperature $T_{\rm
core}$ over a cube of $5^{3}$ cells centred on the core. For the Default model
$\rm T_{\rm core}=9.34$ K, which is $\sim 1$ K higher than the $\rm
N_{2}H^{+}$ estimate. The other models also show temperatures 1-2.5 K above
the N2H+ estimate. The models with higher emissivity, Scaled2 and Scaled4, and
with larger grains, LG and LGM, are $0.7-1.5$ K warmer than the Default model.
This is caused by the higher submillimetre emissivity leading to lower column
density and lower cloud optical depth at the short wavelengths responsible for
dust heating. This, in turn, results in higher dust temperatures in the core.
On the other hand, the increased LOS illumination, model Wide, only increases
the core temperature marginally to 9.7 K.
#### 4.2.2 COM models: light scattering
We calculated the scattered surface brightness for the J, H, and KS bands
using the density distribution and the radiation field obtained from the fits
to the dust emission and by adding the net effect of the background $I_{\rm
BG}\times(e^{-\tau}-1)$, resulting in simulated surface brightness maps.
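The background term can be illustrated with a one-line calculation (the values of $I_{\rm sca}$ and $\tau$ are illustrative assumptions; the background level is the J band value derived from the DIRBE data):

```python
import numpy as np

# Net surface brightness after background subtraction:
#   I_net = I_sca + I_BG * (exp(-tau) - 1),
# where the second term is the attenuation of the sky background shining
# through the cloud.
I_sca = 0.05   # MJy/sr, simulated scattered light (illustrative)
I_BG = 0.059   # MJy/sr, J band background level
tau = 1.5      # line-of-sight optical depth (illustrative)

I_net = I_sca + I_BG * np.expm1(-tau)  # ~0.004 MJy/sr for these values
```

For large optical depths the attenuation term can dominate, so the net background-subtracted signal can even become negative toward the densest positions.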
The computed surface brightness maps are shown in Fig. 9 and the individual
components of the signal in Fig. 29. The modelled surface brightnesses are up
to a factor of four lower than the observed surface brightness.
Furthermore, the general morphology of the simulated maps does not match the
observed maps. The faint striations (see Fig. 9 E) are more pronounced and the
morphology of the bright rim is narrower in the simulated maps. The central
part of the cloud has a low intensity, $\sim 0.01$ $\rm MJy\,sr^{-1}$, but the
low-surface-brightness area is more extended. In the simulated H and KS band
maps, the rim and the striations are clearly visible, unlike in the
observations. As in the case of the J band, the morphology of the core does
not match the observations. In all simulated maps, the Veil is considerably
more prominent than in the observations. Since the Veil cannot be seen in the
WIRCam observations but is clearly seen in the Herschel observations, it is
possibly not connected to L1512 but lies further away along the same line of
sight.
The increase in emissivity, and thus lower column density, of the Scaled2 and
Scaled4 models has increased the surface brightness by up to $40\,\%$ in the J
band. In the H and KS bands the intensity has decreased by $10\,\%$ in the
diffuse regions and increased by up to $30\,\%$ in the dense regions. This
produces a more compact core and the Veil is less prominent. However, the
intensities are still a factor of 2-3 below the observed values. For the
models that include larger grains, LG and LGM, the surface brightness values
are only $\sim 30\,\%$ lower in the H and Ks bands and $\sim 20\,\%$ higher in
the J band than the observed values. However, the morphology of the maps does
not match the observations: the core is less compact and the Striations and
the Veil more prominent.
The values of $\tau_{\rm J,em}$ are on average three times higher than the
NICER estimates (see Table 1). The model Scaled4 is an exception, the optical
depth $\tau_{\rm J,em}=1.55$ being within $5\,\%$ of the value derived from
observations.
Figure 9: Observed (first row) and estimated surface brightness maps from the
Default model at the J, H, and KS bands (second row). All maps have been
background subtracted.
#### 4.2.3 Models with dust evolution
The Default model is designed for diffuse lines of sight, but in dense cores
dust grains are expected to grow due to coagulation and mantle formation.
Thus, we also test dust models that take into account the evolution of dust
grains. A comparison between the optical properties of the models is shown in
Fig. 4.
We test three cases where the dust evolution is modelled by changing the
properties of the dust grains with increasing density. A more detailed
explanation of the models is provided in the Appendix A. As a summary: the
model DDust is a combination of models Default and LG, where the relative
abundance of the LG component increases with increasing density. The SIGMA
model uses the Default model for diffuse gas and a combination of aggregate
silicate and carbon grains with ice mantles in the dense regions. Finally, the
THEMIS model uses a diffuse component with core-mantle (CM) grains, an
intermediate component with core-mantle-mantle grains (CMM), and a dense dust
component with amorphous core-mantle-mantle aggregates that have ice mantles
(AMMI) (Köhler et al., 2015; Ysard et al., 2016). The evolution of the dust
grains with increasing density affects all dust parameters. One cannot single
out an individual parameter that changes: the physical properties
(emissivity, absorption and scattering properties, and size distribution)
evolve together with the chemical composition through the formation of
aggregate grains and (ice) mantles. Because the changes in chemical
composition in turn alter the optical properties, the overall effect of dust
evolution is intricate.
In the fits of the dust emission, the 250 and 500 $\mu$m residuals in the
central region are smaller for SIGMA (Fig. 34) than for the default dust model
(e.g. Fig. 8). The residuals are slightly larger in the case of the THEMIS
model (Fig. 34) and in the case of the DDust (Fig. 34) model are similar to
the Default model.
Compared to the Default case, the SIGMA and DDust models have higher $K_{\rm
ISRF}$ factors, 0.67 and 0.75, respectively, while the $K_{\rm ISRF}$ factor of
the THEMIS model is 0.47. As with the models that include larger grains, LG
and LGM, $\tau_{250}$ of the SIGMA, THEMIS, and DDust models are higher by
$\sim 30\,\%$ compared to the Default model. The NIR optical depths are
$\tau_{\rm J}$=3.4, 1.05, and 14.4 for the SIGMA, THEMIS, and DDust models,
respectively. The corresponding core temperatures read from the 3D models are
10.6 K, 7.6 K, and 9.3 K. Thus, in spite of the lowest $\tau_{\rm J}$ value
(even below the NICER measurement), THEMIS results in the lowest dust
temperatures, some 0.5 K below the estimate derived from N2H+ line
observations.
The THEMIS and DDust models produce column densities close to the Default
model with $N(\rm H_{2})=1.4\times 10^{22}$ cm-2 and $1.8\times 10^{22}$ cm-2,
respectively. The column density of the SIGMA model is lower, with $N(\rm
H_{2})=9.61\times 10^{21}$ cm-2. The model column density profiles (Fig. 10)
agree with the profile derived from the Herschel observations via MBB fits,
but are narrower than the FWHM=2.0$\arcmin$ estimated from the Herschel
estimate. The profile of the Default model, FWHM $=$ $1.4\,\arcmin$, is close
to the modelled N2H+ profile with FWHM $=$ $1.32\,\arcmin$ (Lin et al., 2020).
Figure 10: Column density of H2 derived from Herschel observations and the
comparison between column density profiles for selected models. The profiles
have been computed as average values over $3\times 3$ pixels along the arrow
in the left panel.
THEMIS produces two to three times higher NIR surface brightness than the
Default model (Fig. 28), but the SED shape and the morphology still do not
match the observations. Although the central region of the cloud in H and KS
bands has surface brightness values within 0.02 $\rm MJy\,sr^{-1}$ of the
observed value, the more diffuse regions are brighter than in the observations.
The striations are clearly visible in the THEMIS maps (H and KS bands),
whereas in observations no extended surface brightness is seen. The model
SIGMA has the highest surface brightness of all our test cases: the
intensities in the diffuse regions are in excess of 0.1 $\rm MJy\,sr^{-1}$ for
the H and KS bands and even for the J band the values are in range [0.04,0.12]
MJy sr-1. The morphology is close to the observed morphology, with a compact
core in the H and KS bands and the surface brightness of the Veil is lower.
The model DDust shows a clear dip in the core even in the KS band map, has a
factor of 2 to 3 lower surface brightness than observed, and clearly
overestimates the surface brightness of the diffuse regions.
### 4.3 NIR Spectra
Figure 11 shows NIR spectra for observations and the Default model, for the
four positions indicated in the figure. Point 1 corresponds to the brightest
part of the J band map and the three other points trace density variations
across the densest part of the cloud. In addition to the low intensities, the
general shapes of the simulated spectra do not match the observations, the
relative brightness of the KS and J bands being too high. A factor of three
increase in the scattered light would increase the net surface brightness by a
larger factor but would not improve the match with the SED shape. However, the
observed H-band value has a somewhat higher uncertainty because of the lack of
direct background sky brightness measurements at that wavelength.
Figure 11: Observed map of the J band scattered light (left panel), the red
numbers from one to four indicate the locations from where we have extracted
NIR spectra, shown in the panel on the right. The solid lines show the
observed spectra and the dashed lines show the spectra derived from the
Default model. The dotted lines show the simulated spectra derived from the
Default model, but the intensity of the scattered light has been multiplied by
a factor of three before background subtraction.
Figure 12 shows the NIR spectra for all test cases and points 1, 3, and 4.
Point 2 is similar to point 4 and its spectra are shown in the appendix (Fig.
35).
The error bars assume an uncertainty in the background sky brightness that is
$30\,\%$ in the H and $20\,\%$ in the other bands. The Default-case simulated
intensities are a factor two to three lower than the observed values. The
models LG (Default model with grains up to 5 $\mu$m in size) and LGM (As LG
but relative amount of large grains increased) produce higher intensities, but
the intensity of the H band is $\sim 30\,\%$ lower than the observed value
while the J and KS bands are 30 to $40\,\%$ brighter. The SIGMA model (two
dust components, Default for diffuse and a component derived with SIGMA for
dense LOS) is clearly overestimating the intensity of the scattered light in
all three channels. In point 1, the THEMIS model (three dust component based
on the THEMIS framework) is closest to the observations, but the intensity of
the KS band is 0.05 $\rm MJy\,sr^{-1}$ too bright. However, in the point 3,
the intensity and the shape of the spectrum of the THEMIS model are within
$10\,\%$ of the observed intensity. In general, the spectrum from point 3 is
more easily reproduced, and the models LG, LGM, and DDust all produce
approximately the correct SED shape but do not match the intensity level.
As at point 1, the SIGMA model overestimates the intensity, but the shape of
the spectrum is correct.
For point 4, the COM models tend to underestimate the NIR intensities, with
the exception of the LG and LGM models that overestimate the J and KS bands,
but underestimate the H band. However, models Scaled2 and Albedo give very
similar results.
Figure 12: J, H, and KS band intensities from observations (horizontal lines)
and different models (symbols) for the map point 1 (lower panel), point 3
(middle panel), and point 4 (upper panel). The colours correspond to the J
(purple), H (red), and KS (black) bands. All intensity values have been
background subtracted. We assume a $20\,\%$ uncertainty in background sky
estimates for the J and KS bands and an uncertainty of $30\,\%$ for the H
band.
The increased FIR/submillimetre emissivity in the cases Scaled2 (emissivity
for $\lambda>60$ $\mu$m scaled by a factor of 2) and Scaled4 (as Scaled2 but scaled
by a factor of 4) decreases the column density, thus, the higher intensity of
the H band compared to the KS band can be understood as a saturation effect.
However, increasing the emissivity further does not improve the match as the
column density becomes too low and the intensity of the scattered light is
reduced. The increased emissivity of the grains decreases the J band optical
depth, which for model Scaled4 is close to the NICER estimate, but the
morphology of the surface brightness maps and the shape of the NIR spectra are
further away from the observations. Thus, emissivity alone can not reconcile
observations and simulations.
## 5 Discussion
Based on our modelling of the LDN1512 observations, its thermal dust emission
can be fitted with many different assumptions on dust properties. However, we
can not simultaneously fit both the dust emission and NIR scattered light with
the COM models. The Default model was designed for high-latitude diffuse lines
of sight, while the L1512 cloud has a dense central core. The disparity
between the observations and our radiative transfer models based on the
Default model is a clear indication for the need of evolution of the dust
grains and we thus tested additional models, DDust, SIGMA, and THEMIS. We are
able to reproduce the observed intensity of dust emission and scattered light
at NIR wavelengths with the THEMIS model. However, the morphology of the
surface brightness maps shows considerable deviation from the observations. In
this section we discuss these results in more detail.
### 5.1 Uncertainties of the radiation field
The ISRF used in our simulations is based on the DIRBE observations and the
shape of the Mathis et al. (1977) model. We did not test changes in the
anisotropy of the external field. However, based on Gaia point sources, the
contribution from the nearby stars is minimal.
Based on the intensity variation between the DIRBE pixels, the background
uncertainty is of the order of $\sim 20\,\%$, but for the H band, for which
there are no direct DIRBE measurements, we have assumed a value of $30\,\%$.
With the exception of the model SIGMA (model with two dust components, Default
dust for diffuse regions and dust created with SIGMA for dense LOS), the
simulated spectra at points 1 and 3 systematically underestimate the
intensity of the H band. For the THEMIS model (model with three dust
components based on the THEMIS framework), the intensity of the J and KS bands
from point 3 are comparable with the observed values. However, the H band
intensity from both points is still significantly below the observed values.
For point 4, the J and H band intensities of the THEMIS model are comparable
with the observed values, but the KS band is over-estimated by a factor of 2.
Furthermore, the shapes of the SIGMA model spectra agree with the
observed spectra, although the intensity is clearly overestimated. Thus, the
discrepancy between the observed and simulated values can be explained to a
certain degree with uncertainties in the background sky estimate, but since
the shape of the simulated spectra also depends on the optical depth of the
cloud, uncertainties in the optical depth of the model or variations in the
relative abundances of the dust components will also affect the estimated
intensity.
### 5.2 NIR extinction
In the models fitted to dust emission observations, the NIR extinction is
typically three times higher than the direct NICER estimate of $\tau_{\rm J}$.
In addition to NICER estimates using the Cardelli et al. (1989) extinction
curve, we also calculated estimates for the extinction curves of the
respective dust models. The values are computed as averages over a $5\times 5$
pixel region around point 1 (the resolution of the optical depth maps was set
to 40$\arcsec$). For the Default model $\tau_{\rm J,ext}=1.32$, which is lower
than the value derived using the Cardelli et al. (1989) extinction
curve, $\tau_{\rm J,Card.}=1.50$. For the models LG (model with grains up to 5
$\mu$m in size), LGM (as the LG model but the relative amount of large grains
increased), and SIGMA, the optical depths are $\tau_{\rm J,ext}=2.19-3.0$. For
the THEMIS model, $\tau_{\rm J,ext}=1.42$, which is within $\sim 10\,\%$ of
the Cardelli et al. (1989) estimate.
Table 2: Comparison of $\tau_{\rm J}$ values derived from the dust-emission models and from the observations of background stars. The values are computed as averages over a $5\times 5$ pixel region centred on point 1.
Model | $\tau_{\rm J,em}$ | $\tau_{\rm J,ext}$ | $\tau_{\rm J,em}/\tau_{\rm J,ext}$
---|---|---|---
Default | 5.80 | 1.32 | 4.39
LG | 4.50 | 2.90 | 1.55
LGM | 5.00 | 3.00 | 1.66
SIGMA | 3.42 | 2.19 | 1.56
THEMIS | 1.05 | 1.42 | 0.74
The ratio for the Default model is $\tau_{\rm J,em}/\tau_{\rm J,ext}\sim
4.39$, while it is lower, $\tau_{\rm J,em}/\tau_{\rm J,ext}=1.55-1.66$, for
the models LG and LGM (Table 2). For the models with dust evolution, the
ratios are smaller, 0.74 and 1.56 for THEMIS and SIGMA, respectively. Thus the
models with larger grains, or some form of dust evolution, are more consistent
between the sub-millimetre emission and NIR extinction.
### 5.3 Uncertainties of cloud models derived from FIR fits
We derived the cloud density distribution and strength of the external
radiation field by fitting the observations of FIR dust emission. The
estimated column density is sensitive to the dust temperature, which in turn is
sensitive to the strength of the radiation field. In this section, we quantify
the dependences between the radiation field, the dust temperatures, and the
optical depth of the cloud.
The effects of scaling and attenuation of the radiation field on the dust
temperature can be solved from the equilibrium equation
$\int_{0}^{\infty}Q_{\rm abs}(\nu)\times I_{\rm
ISRF}\,d\nu=\int_{0}^{\infty}Q_{\rm abs}(\nu)\times B_{\nu}(T)\,d\nu,$ (6)
where $Q_{\rm abs}(\nu)$ is the dust absorption efficiency, $I_{\rm ISRF}$ is the
intensity of the radiation field, and $B_{\nu}(T)$ is a black-body function at
temperature $T$. Figure 13 shows the dependence of the temperature of the
Default and LGM dust grains on the energy density of the radiation field.
Results are calculated for grains at an equilibrium temperature and using the
Mathis et al. (1983) radiation field with a linear scaling factor $\epsilon$
and an attenuation by $e^{-\tau}$. We use values in the range of [0.1,10]
for both $\epsilon$ and $\tau/\tau_{\rm J}$.
Compared to the Default model, the LGM model contains more large grains, so a
stronger radiation field is required to reach a similar temperature. Because
of their higher emissivity, larger grains have a lower temperature for a given
radiation field, and a lower column density is needed to reach the same
temperature as with the Default model.
Figure 13: Comparison between the dust temperature and the energy density of
the radiation field for the Default and LGM models. The strength of the
radiation field has been normalised by the Mathis et al. (1983) model.
Dust emission spectra are typically analysed as modified black-body radiation.
However, the spectra will always deviate from this model, because of
temperature variations in the sources and because the spectral index of dust
opacity is not constant over the examined wavelength range. Figure 14 shows
emission spectra obtained by multiplying the absorption cross sections of the
Default and LGM models with a $\rm T=15$ K black-body function. The figure
also shows the MBB fits using three points at 250, 350, and 500 $\mu$m and
keeping both $T$ and $\beta$ as free parameters. The fitted temperatures
$T_{\rm c}=16.48$ K and $T_{\rm c}=15.85$ K for the Default model and LGM
model, respectively, are higher than the true temperature of 15 K. The fitted
$\beta$ values are $\beta=1.62$ and $\beta=1.80$, for the Default and LGM
models respectively, lower than the $\beta$ values of the dust models in the
$250-350$ and $350-500$ $\mu$m intervals, $\beta\sim 1.88$ and $\beta\sim
1.94$ for the Default and LGM models, respectively. Thus, an observer relying
on MBB fits would underestimate the column density (see also Fig. 10). Similar
results of dust models containing only bare astrosilicates (Draine & Lee,
1984; Draine & Li, 2001, 2007) failing to reproduce the SED of the observed
dust emission, have been discussed by Fanciullo et al. (2015); Planck
Collaboration XXII (2015); Planck Collaboration XXIX (2016).
The errors in column density and radiation field will also affect the
predictions of NIR scattering. However, the low intensity of our modelled NIR
surface brightness is not caused by the uncertainties in the column density,
but is related to the scattering efficiency of the dust models, or in the case
of the evolved dust models (e.g. SIGMA and THEMIS) the chosen relative
abundances of the different grain populations. Furthermore, MBB results are
sensitive to temperature variations that can lead to a severe underestimation
of dust column densities (Shetty et al., 2009; Juvela & Ysard, 2012). For
example, Pagani et al. (2015), who studied the cloud LDN 183, showed that
Herschel observations can not be used to set strong constraints on the amount
of very cold dust. Additional methods may be needed, such as observations of
molecular lines and NIR/MIR extinction.
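The temperature-mixing bias mentioned above can be illustrated with a toy two-temperature example (this uses a simple power-law opacity with $\beta=1.8$ and assumed temperatures, not the actual dust models of this paper):

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8  # SI units

def planck(nu, T):
    """Planck function B_nu(T) in SI units."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def mbb(nu, A, T, beta, nu0=c / 250e-6):
    """Modified blackbody with the opacity normalised at 250 um."""
    return A * (nu / nu0)**beta * planck(nu, T)

# SPIRE band frequencies (250, 350, 500 um)
nu = c / np.array([250e-6, 350e-6, 500e-6])

# Equal mixture of 12 K and 18 K dust, both with a true beta of 1.8
I = mbb(nu, 0.5, 12.0, 1.8) + mbb(nu, 0.5, 18.0, 1.8)

# A single-temperature MBB with free T and beta fitted to the mixture
(A_fit, T_fit, beta_fit), _ = curve_fit(mbb, nu, I, p0=[1.0, 15.0, 1.8])
# The mixture flattens the spectrum: beta_fit drops below the true 1.8,
# while the colour temperature lands between the two true temperatures.
```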
Figure 14: Comparison of dust emission spectra and their MBB fits. The black
and purple lines show the emission spectra for Default and LGM models with
grains at 15 K temperature. The dashed lines show MBB spectra fitted to the
250, 350, and 500 $\mu$m points. The resulting fit parameters are $T_{\rm
c}=16.48$, $\beta=1.62$ and $T_{\rm c}=15.85$, $\beta=1.79$, for the Default
and LGM models, respectively. The red curve (scaled by a factor of $1.0\times
10^{-30}$ to fit the figure) is from a simulated Default model map pixel where
the fitted temperature is similar to that of the black curve. The blue
vertical lines show the 250, 350, and 500 $\mu$m frequencies. The relative
differences between the black-bodies and the MBB fits near the 250, 350, and
500 $\mu$m points are shown in the plot below the emission spectra curves.
The column densities based on the Default model vary within a factor of 2,
while the variation in the radiation field strengths is within a factor of
$\sim 1.5$. The column density differences are not much larger between the
SIGMA, THEMIS, and DDust models, but the differences in the radiation
field strength reach a factor of two. However, the relationships between the
true $Q(\nu)B(T_{\rm d})$ of the dust, the emission predicted by RT models,
and the results of MBB fits are complex (Fig. 15). The emission from the dust
grains might not follow any MBB curve; $T_{\rm d}$ and $T_{\rm c}$ differ; and
the fitted spectral index $\beta$ differs from the opacity spectral index of
the dust grains. These problems are exacerbated when temperatures also vary
within the beam.
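The temperature-mixing bias noted above (Shetty et al., 2009) can be demonstrated with a toy calculation, again a sketch rather than the actual analysis: two MBB components at 10 K and 20 K, both with an intrinsic $\beta=1.8$, are summed and fitted with a single MBB at the 250, 350, and 500 $\mu$m points.

```python
import numpy as np
from scipy.optimize import curve_fit

H = 6.62607e-34     # Planck constant [J s]
K_B = 1.380649e-23  # Boltzmann constant [J/K]
C = 2.99792458e8    # speed of light [m/s]

def planck(nu, T):
    # Planck function, rescaled for numerical stability
    return 1e20 * 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * T))

def mbb(nu, A, T, beta, nu0=C / 250e-6):
    # Single modified black body with opacity spectral index beta
    return A * (nu / nu0) ** beta * planck(nu, T)

nu = C / np.array([250e-6, 350e-6, 500e-6])  # SPIRE band frequencies

# Line-of-sight mixture of two MBB components, both with intrinsic beta = 1.8
mixed = mbb(nu, 1e-3, 10.0, 1.8) + mbb(nu, 1e-3, 20.0, 1.8)

popt, _ = curve_fit(mbb, nu, mixed, p0=(2e-3, 15.0, 1.8))
_, T_fit, beta_fit = popt
# The single-MBB fit returns a colour temperature that matches neither
# component and a spectral index below the intrinsic value of 1.8
```

In this toy case the fitted $\beta$ drops well below 1.8, mimicking a shallower opacity spectral index, while $T_{\rm c}$ no longer corresponds to the temperature of either dust component.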
Figure 15: Schematic overview of the assumptions made during an MBB fit,
compared to the simulated spectra and a real cloud.
We further quantify the discussion above by computing H band surface
brightness maps using the Default model, with the assumption that we have
underestimated the strength of the radiation field and overestimated the
column density. Our aim is to see if by modifying these parameters, the
Default model can be made to agree with the observations. We use a radiation
field that is 1, 2, or 3 times and a column density that is 1, 0.6, or 0.3
times the value previously obtained for the Default model. Another component
affecting the shape and strength of the NIR scattered light is the sky
background behind the cloud; we assume that this is 1, 2, or 3 times the
value derived from the DIRBE observations. The resulting H band surface
brightness maps are shown in Fig. 16 and in Figs. 30, 31. Increasing the
strength of the radiation field will increase the surface brightness excess,
but increasing the background sky brightness will decrease the surface
brightness. The two values can be used to fix the level of the modelled
surface brightness, but they are not enough to reproduce the observed surface
brightness morphology (Fig. 30). For that, the cloud column density also needs
to be decreased.
When the strength of the radiation field is increased by a factor of 2 and the
column density is simultaneously decreased by $\sim 30\,\%$ (e.g. Fig. 16
panel G), the intensity of the simulated H band surface brightness is within
$15\,\%$ of the observed one. This can also be achieved by increasing the
strength of the radiation field and the assumed background sky brightness by a
factor of 3 and decreasing the column density by $\sim 30\,\%$ (e.g. Fig. 16
panel L). In both cases, the morphology of the surface brightness map in the
central part is comparable with the observations, with a bright rim towards
the south and decreasing surface brightness towards the core. However, the
Veil (Fig. 9) is still prominent. If the density of the cloud is decreased
further (Fig. 31), the central region of the cloud becomes excessively compact,
although the correct intensity can be reached by scaling the radiation field
or the assumed intensity of the background sky brightness (e.g. Fig. 16 panels
G, K, and L).
Although suitable parameters can be found to correct the H band intensity of
the Default model, the required changes are substantial. Thus, it is evident
that dust evolution needs to be taken into account, since the models SIGMA and
THEMIS produce results closer to the observations without the need to fine
tune other parameters.
Figure 16: Simulated H band surface brightness maps for the Default model with
density scaled by a factor of 0.6 during the scattering computations and with
different assumptions on the strength of the radiation field and on the sky
brightness behind the cloud. Shown on the first row is the observed H band
surface brightness map. Shown on the rows 2 to 4 are the simulated H band
maps, for which the strength of the radiation field has been scaled with a
factor between 1 and 3, and the intensity of the background has been scaled
by a factor between 1 and 3.
### 5.4 Effects of grain properties
Launhardt et al. (2013) showed that L1512 is a cold ($\rm T\approx 12$ K) core
with high column density ($N(\rm H_{2})\approx 2.0\times 10^{22}\,\rm cm^{-2}$)
and in the study by Lippok et al. (2013), the envelope of the core was better
fitted by assuming a higher temperature compared to the core, which is
consistent with the absence of internal heating. However, to explain the
observed molecular abundances, Lippok et al. (2013) had to increase the
hydrogen density derived by Launhardt et al. (2013) and the abundances of all
modelled chemical species by a factor of 2 to 3. In their chemical models
Lippok et al. (2013) used a slightly coagulated grain model for the whole
cloud with a coagulation time of $10^{5}$ yr at a gas density of $10^{5}$ cm-3
(Ossenkopf & Henning, 1994). Lippok et al. (2013) discuss that by using a
model of non-coagulated grains, they would have increased the hydrogen density
by a factor of $\sim 2.5$.
Larger grains lead to stronger scattering as discussed by Steinacker et al.
(2010) and Ysard et al. (2018), but the increase in grain size also increases
the absorption coefficient. On the other hand, more complex or ’fluffy’ grains
can have a larger surface area, and have higher scattering efficiency with
respect to their absorption efficiency (Lefèvre et al., 2016). This can be
seen with the dense dust components of the models SIGMA and DDust (Fig. 28),
where the dense regions of the model have higher intensity and their
morphology agrees better with the observations.
Our Default model consists of only PAH, silicate, and carbon grains; however,
in dense regions of the ISM, gas-phase freeze-out can create mantles on the
grains. The initial studies trying to explain the detected coreshine required
a high fraction of dust mass in large grains (Pagani et al., 2010; Steinacker
et al., 2010; Andersen et al., 2013), with maximum grain size of the order of
$\sim 1\,\mu$m. However, reaching such large grain sizes through coagulation is
difficult with respect to the time-scales, cloud densities, and turbulence
(Steinacker et al., 2014). On the other hand, as discussed by Ysard et al.
(2016), a dust model including mantle formation and low-level coagulation is
able to reproduce the observed cloudshine levels without the need of very
large grains. Our results using the THEMIS model show that the model can fit
the observed emission, the intensity of the scattered light is within $15\,\%$
of the observed values, and the morphology of the central region of the
surface brightness maps is comparable to the observed surface brightness (Fig.
12). However, the intensity of the diffuse region surrounding the cloud is a
factor of 3 to 5 higher than observed. Furthermore, as discussed by Bazell &
Dwek (1990) and Ossenkopf (1991), porosity increases the absorption
efficiencies at FIR and sub-millimetre wavelengths while the compositional
inhomogeneities will have an effect upon the shape and strength of broadband
features. Thus, these structural differences of the grains will affect both
the true temperature and color temperature derived from sub-millimetre flux
ratios (Kruegel & Siebenmorgen, 1994).
Thus it is evident that a larger maximum grain size or the relative amount of
large grains in the dust mixture is not sufficient to reproduce the observed
NIR surface brightness. Evolution of dust grains, in the form of aggregates
and mantle formation, appears necessary. This is supported by our core
temperature estimates that tend to overestimate the core temperature by $\sim
1.5$ K. However, the THEMIS model, with three different dust populations,
produces a colder core, with $T_{\rm core}=7.6$ K, similar to the estimated
temperature of 8 K derived from line observations. Nevertheless, even for the
evolved grain models, the diffuse regions are still problematic as they are
systematically brighter compared to the observations. The high surface
brightness is likely related to the relative abundances of the dust
populations: for example, the THEMIS model has a relatively high abundance
of the CMM population in the Veil and Striation regions, and similarly, the
SIGMA model has a high abundance of the dense component in the Veil and
Striations (Figs. 17 and 19). Decreasing the relative abundances of the dense
dust components in these regions would decrease the surface brightness and
improve the match with the observed surface-brightness morphology.
## 6 Conclusions
We have studied the cloud L1512 and modelled simultaneously the scattered NIR
light at J, H, and KS bands and the FIR emission at 250, 350, and 500 $\mu$m.
We have used several dust models based on the Compiègne et al. (2011) dust
model, and three separate dust models that take dust evolution into account.
The radiation
field used in the modelling is derived from the DIRBE observations. The NIR
surface brightness is estimated using a density field and radiation field
strength that are obtained from first fitting the FIR dust emission. The key
results of our study are:
* •
The morphologies of the observed NIR surface brightness maps are in good
agreement with the column density map derived from the Herschel observations.
However, the low-column-density Veil seen above the cloud in the Herschel
observations is not visible in scattered light.
* •
In the radiative transfer modelling, we can fit the observed emission with any
of the tested dust models. The average fit residual in the 350 $\mu$m band is
$\pm 1.5\,\%$, and for the 250 and 500 $\mu$m bands at most $-15\,\%$.
Depending on the model, the estimated H2 column density at the cloud centre
(point 1) ranges from $4.3\times 10^{21}$ cm-2 to $1.6\times 10^{22}$ cm-2,
and the relative radiation field strength from 0.3 to 0.75.
* •
The core temperature of the radiative transfer models is on average $\sim 1.5$
K higher than the value of $\sim 8\,\pm\,1$ K suggested by $\rm N_{2}H^{+}$ line
observations. With $T_{\rm core}=7.61$ K, the THEMIS dust model gives the core
temperature closest to the gas-inferred value.
* •
The radiative transfer models matching the dust emission predict a J band
optical depth that is on average three times higher than the value measured
with the background stars. The exceptions are Scaled4 and THEMIS, with
$\tau_{\rm J,em}=1.55$ and $\tau_{\rm J,em}=1.05$, respectively, in agreement
with the value $\tau_{\rm J,Card}=1.5$ derived using the Cardelli et al.
(1989) extinction curve.
* •
The NICER estimates of $\tau_{\rm J,ext}$ obtained with the NIR extinction
curves of the tested dust models are mostly within $15\,\%$ of the values
obtained with the Cardelli et al. (1989) extinction curve. However, dust
models containing larger grains (e.g. LG and LGM) increase $\tau_{\rm J,ext}$
by a factor of two.
* •
For the models based on the Compiègne et al. (2011) model, the predicted
surface brightness excess $I_{\nu}^{\Delta}$ in the central region of the
cloud is a factor of 2 to 4 below the observed values and the morphology of
the simulated scattered light maps does not match the observations. Increasing
the maximum grain size of the dust grains or extending the width of the cloud
along the line-of-sight will increase the intensity of the scattered light,
but not enough. Thus, dust grain evolution (e.g. aggregates) is needed.
* •
Increasing the FIR emissivity by a factor of two (model Scaled2) increases
the predicted NIR surface brightness by up to a factor of two. It also
decreases the column density and produces more compact scattered NIR surface
brightness maps. However, further increase of emissivity (model Scaled4) again
decreases the NIR signal.
* •
The observed H-band surface brightness and its morphology could only be
matched with the Default model by making substantial modifications to the
values derived from the emission fitting and observations, indicating the need
of changes in the dust properties.
* •
The observed thermal emission and scattered NIR surface brightness can be
reasonably reproduced only by using dust models that take into account grain
evolution. However, the J-band intensity and the H- and KS-band intensities of
the diffuse regions are far above the observed values.
It is easy to fit the dust emission alone but the simultaneous fitting of
emission and scattering is challenging. Dust evolution must be taken into
account to produce sufficient amounts of scattered light with the correct
morphology. The uncertainty of the sky brightness behind the studied cloud can
have a significant effect on both the intensity and morphology and should be
constrained with a high precision before drawing conclusions on the dust
properties.
###### Acknowledgements.
This work has made use of data from the European Space Agency (ESA) mission
Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing
and Analysis Consortium (DPAC,
https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has
been provided by national institutions, in particular the institutions
participating in the Gaia Multilateral Agreement.
## References
* Andersen et al. (2013) Andersen, M., Steinacker, J., Thi, W. F., et al. 2013, A&A, 559, A60
* Bailer-Jones (2015) Bailer-Jones, C. A. L. 2015, PASP, 127, 994
* Bazell & Dwek (1990) Bazell, D. & Dwek, E. 1990, ApJ, 360, 142
* Beckwith et al. (1990) Beckwith, S. V. W., Sargent, A. I., Chini, R. S., & Guesten, R. 1990, AJ, 99, 924
* Bertin (2011) Bertin, E. 2011, Astronomical Society of the Pacific Conference Series, Vol. 442, Automated Morphometry with SExtractor and PSFEx, ed. I. N. Evans, A. Accomazzi, D. J. Mink, & A. H. Rots, 435
* Bertin & Arnouts (1996) Bertin, E. & Arnouts, S. 1996, A&AS, 117, 393
* Bohlin et al. (1978) Bohlin, R. C., Savage, B. D., & Drake, J. F. 1978, ApJ, 224, 132
* Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245
* Caselli et al. (2002) Caselli, P., Benson, P. J., Myers, P. C., & Tafalla, M. 2002, ApJ, 572, 238
* Caselli et al. (1995) Caselli, P., Myers, P. C., & Thaddeus, P. 1995, ApJ, 455, L77
* Compiègne et al. (2011) Compiègne, M., Verstraete, L., Jones, A., et al. 2011, A&A, 525, A103
* Draine & Lee (1984) Draine, B. T. & Lee, H. M. 1984, ApJ, 285, 89
* Draine & Li (2001) Draine, B. T. & Li, A. 2001, ApJ, 551, 807
* Draine & Li (2007) Draine, B. T. & Li, A. 2007, ApJ, 657, 810
* Fanciullo et al. (2015) Fanciullo, L., Guillet, V., Aniano, G., et al. 2015, A&A, 580, A136
* Foster & Goodman (2006) Foster, J. B. & Goodman, A. A. 2006, ApJ, 636, L105
* Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1
* Griffin et al. (2010) Griffin, M. J., Abergel, A., Abreu, A., et al. 2010, A&A, 518, L3
* Hauser et al. (1998) Hauser, M. G., Arendt, R. G., Kelsall, T., et al. 1998, ApJ, 508, 25
* Hirashita & Nozawa (2013) Hirashita, H. & Nozawa, T. 2013, Earth, Planets, and Space, 65, 183
* Hirashita & Voshchinnikov (2014) Hirashita, H. & Voshchinnikov, N. V. 2014, MNRAS, 437, 1636
* Juvela (2019) Juvela, M. 2019, A&A, 622, A79
* Juvela et al. (2015) Juvela, M., Ristorcelli, I., Marshall, D. J., et al. 2015, A&A, 584, A93
* Juvela et al. (2012) Juvela, M., Ristorcelli, I., Pagani, L., et al. 2012, A&A, 541, A12
* Juvela & Ysard (2012) Juvela, M. & Ysard, N. 2012, A&A, 539, A71
* Köhler et al. (2015) Köhler, M., Ysard, N., & Jones, A. P. 2015, A&A, 579, A15
* Kruegel & Siebenmorgen (1994) Kruegel, E. & Siebenmorgen, R. 1994, A&A, 288, 929
* Launhardt et al. (2013) Launhardt, R., Stutz, A. M., Schmiedeke, A., et al. 2013, A&A, 551, A98
* Lefèvre et al. (2014) Lefèvre, C., Pagani, L., Juvela, M., et al. 2014, A&A, 572, A20
* Lefèvre et al. (2016) Lefèvre, C., Pagani, L., Min, M., Poteet, C., & Whittet, D. 2016, A&A, 585, L4
* Lefèvre et al. (2019) Lefèvre, C., Min, M., Pagani, L., et al. 2019, SIGMA: Simple Icy Grain Model for Aggregates
* Lehtinen & Mattila (1996) Lehtinen, K. & Mattila, K. 1996, A&A, 309, 570
* Lin et al. (2020) Lin, S.-J., Pagani, L., Lai, S.-P., Lefèvre, C., & Lique, F. 2020, A&A, 635, A188
* Lippok et al. (2013) Lippok, N., Launhardt, R., Semenov, D., et al. 2013, A&A, 560, A41
* Lombardi & Alves (2001) Lombardi, M. & Alves, J. 2001, A&A, 377, 1023
* Luri et al. (2018) Luri, X., Brown, A. G. A., Sarro, L. M., et al. 2018, A&A, 616, A9
* Malinen et al. (2011) Malinen, J., Juvela, M., Collins, D. C., Lunttila, T., & Padoan, P. 2011, A&A, 530, A101
* Mathis et al. (1983) Mathis, J. S., Mezger, P. G., & Panagia, N. 1983, A&A, 128, 212
* Mathis et al. (1977) Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, ApJ, 217, 425
* Men’shchikov et al. (2010) Men’shchikov, A., André, P., Didelon, P., et al. 2010, A&A, 518, L103
* Min et al. (2016) Min, M., Rab, C., Woitke, P., Dominik, C., & Ménard, F. 2016, A&A, 585, A13
* Miville-Deschênes & Lagache (2005) Miville-Deschênes, M.-A. & Lagache, G. 2005, ApJS, 157, 302
* Molinari et al. (2010) Molinari, S., Swinyard, B., Bally, J., et al. 2010, A&A, 518, L100
* Ormel et al. (2011) Ormel, C. W., Min, M., Tielens, A. G. G. M., Dominik, C., & Paszun, D. 2011, A&A, 532, A43
* Ossenkopf (1991) Ossenkopf, V. 1991, A&A, 251, 210
* Ossenkopf & Henning (1994) Ossenkopf, V. & Henning, T. 1994, A&A, 291, 943
* Padoan et al. (2006) Padoan, P., Juvela, M., & Pelkonen, V.-M. 2006, ApJ, 636, L101
* Pagani et al. (2015) Pagani, L., Lefèvre, C., Juvela, M., Pelkonen, V. M., & Schuller, F. 2015, A&A, 574, L5
* Pagani et al. (2010) Pagani, L., Steinacker, J., Bacmann, A., Stutz, A., & Henning, T. 2010, Science, 329, 1622
* Planck Collaboration XI (2014) Planck Collaboration XI. 2014, A&A, 571, A11
* Planck Collaboration XXII (2015) Planck Collaboration XXII. 2015, A&A, 576, A107
* Planck Collaboration XXIX (2016) Planck Collaboration XXIX. 2016, A&A, 586, A132
* Planck Collaboration XXV (2011) Planck Collaboration XXV. 2011, A&A, 536, A25
* Poglitsch et al. (2010) Poglitsch, A., Waelkens, C., Geis, N., et al. 2010, A&A, 518, L2
* Pollack et al. (1994) Pollack, J. B., Hollenbach, D., Beckwith, S., et al. 1994, ApJ, 421, 615
* Rieke & Keene (2004) Rieke, G. & Keene, J. 2004, Pre-stellar and Proto-stellar Cores and Cold Dust, Spitzer Proposal
* Sadavoy et al. (2013) Sadavoy, S. I., Di Francesco, J., Johnstone, D., et al. 2013, ApJ, 767, 126
* Shetty et al. (2009) Shetty, R., Kauffmann, J., Schnee, S., Goodman, A. A., & Ercolano, B. 2009, ApJ, 696, 2234
* Steinacker et al. (2015) Steinacker, J., Andersen, M., Thi, W. F., et al. 2015, A&A, 582, A70
* Steinacker et al. (2014) Steinacker, J., Ormel, C. W., Andersen, M., & Bacmann, A. 2014, A&A, 564, A96
* Steinacker et al. (2010) Steinacker, J., Pagani, L., Bacmann, A., & Guieu, S. 2010, A&A, 511, A9
* Stutz et al. (2007) Stutz, A. M., Bieging, J. H., Rieke, G. H., et al. 2007, ApJ, 665, 466
* Stutz et al. (2009) Stutz, A. M., Rieke, G. H., Bieging, J. H., et al. 2009, ApJ, 707, 137
* Ysard et al. (2018) Ysard, N., Jones, A. P., Demyk, K., Boutéraon, T., & Koehler, M. 2018, A&A, 617, A124
* Ysard et al. (2012) Ysard, N., Juvela, M., Demyk, K., et al. 2012, A&A, 542, A21
* Ysard et al. (2016) Ysard, N., Köhler, M., Jones, A., et al. 2016, A&A, 588, A44
## Appendix A Summary of dust models
A more detailed description of all dust models used in our radiative transfer
modelling is provided in this section. Unless otherwise noted, the different
dust models are based on the Compiègne et al. (2011) model. We use the size
distributions and optical properties as included in DustEM
(https://www.ias.u-psud.fr/DUSTEM/index.html; Compiègne et al. 2011).
The changes that we have made to the Compiègne et al. (2011) model do not take
into account any limits placed by the mineralogy or constraints placed by
chemical abundances available in the ISM. The changes, for example to the
albedo or emissivity of the grains, are meant to be relatively small, but
still large enough that differences between the models can be distinguished.
* •
Default: The Compiègne et al. (2011) model. Contains two populations of PAH
grains with log-normal size distributions, a single component of small carbon
grains with a log-normal distribution, and two components of large grains
following power-law size distributions. The large grains consist of a
population of carbon grains and a population of silicate grains. The average
opacity spectral index in the range [$250,350$] $\mu$m is
$\beta_{250/350}=1.887$.
* •
Albedo: The albedo of the grains at NIR wavelengths has been increased by
$20\,\%$ without changing the extinction cross sections during the scattering
computations.
* •
Disttest: Compiègne et al. (2011) assumed a power-law size distribution for
the large silicate and carbon grains. For this model the exponent factor of
the power-law $\gamma$ has been increased by $20\,\%$ for both the silicate
(original $\gamma=-3.4$) and carbon grains (original $\gamma=-2.8$), and
$\beta_{250/350}=1.888$.
* •
Gtest: The asymmetry parameter $g$ of the grains has been increased by
$15\,\%$.
* •
Scaled2: The emissivity at all wavelengths longer than 60 $\mu$m has been
multiplied by a factor of 2.
* •
Scaled4: As Scaled2, but the emissivity has been multiplied by a factor of 4.
* •
Wide: The FWHM of the Gaussian distribution describing the line-of-sight
density distribution has been increased from $\sigma=1.0$ to $\sigma=1.73$.
* •
LG: Extended the maximum grain size of the Compiègne et al. (2011) model to 5
$\mu$m. The value of $\beta_{250/350}$ is 1.903.
* •
LGM: As LG, but in addition increased the relative abundance of the large
grains by a factor of 2 and decreased the relative abundance of the small
carbon grains and PAH components by a factor of 2. The value of
$\beta_{250/350}$ is 1.948.
* •
DDust: Combination of the Default and LG dust models, with the relative abundance of
the latter increasing with density as
$S_{\rm
LG}=\frac{1}{2}+\frac{1}{2}\tanh(\frac{2.0\times\log(\rho)}{\rho_{0}}),$ (7)
where $\rho$ is the density of the modelled cloud and $\rho_{0}$ defines the
limit where the relative abundances of the two dust components are equal. An
example of the abundances is shown in Fig. 17. In our model we have set
$\rho_{0}=1.45$. For the average of the Default and LG models
$\beta_{250/350}=1.955$.
Figure 17: Relative abundances of the diffuse and dense components for the
DDust and SIGMA models across the innermost cells of the model cloud. Left
panel: the relative abundance of the diffuse component. Right panel: relative
abundance of the dense component.
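Equation (7) can be sketched as a short function. This is a minimal sketch with two assumptions flagged in the comments: NumPy as the implementation language, and base 10 for the logarithm, whose base is not stated in the text.

```python
import numpy as np

def s_lg(rho, rho0=1.45):
    """Relative abundance of the dense (LG) component, Eq. (7).

    rho is the local model density; rho0 scales the steepness of the
    transition as the equation is written. The base of the logarithm is
    an assumption (base 10 here).
    """
    return 0.5 + 0.5 * np.tanh(2.0 * np.log10(rho) / rho0)

# As written, the two components are equally abundant where log(rho) = 0,
# with the dense component dominating at higher densities.
```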
* •
SIGMA: A model with two dust components: for the diffuse component we use our
Default model, and the dense component is built using SIGMA (Simple Icy Grain
Model for Aggregates, Lefèvre et al. 2019). The dense component consists of
aggregate silicate and carbon grains from the Min et al. (2016) model (note
that we do not use the iron sulphide component) with added ices (Pollack et
al. 1994). The final mixture consists of $56.8\,\%$ silicates, $18.9\,\%$
carbons, and $24.3\,\%$ ices. The size distribution has a maximum grain size
of 10.0 $\mu$m (see Fig. 18); however, for this work we have truncated the
maximum grain size at 5 $\mu$m, and use a porosity factor of 0.7 for the
grains. The porosity was chosen according to the size distribution obtained
from the coagulation computation. Such a high porosity was necessary to obtain
favourable conditions to stick dust grains together (Ormel et al. 2011;
Hirashita & Nozawa 2013; Hirashita & Voshchinnikov 2014, Pagani et al. in
prep). In practice, there is a significant fraction of fluffy icy aggregates
with a fractal degree close to 2. The dust distribution is representative of
coagulation at a constant density of $1\times 10^{6}$ cm-3 with a constant
porosity of 0.7 for 0.436 Myr. The simplification of a dust distribution
evolving at constant density and porosity will be discussed in a forthcoming
paper (Pagani et al. in prep.).
The relative abundances of the dense and diffuse dust components are set
according to Eq. 7, with $\rho_{0}=1.45$. The threshold was chosen so that the
diffuse dust component would have a relative abundance of less than $\sim
10\,\%$ in the core of the model. For the average of the two dust components
$\beta_{250/350}=2.024$.
Figure 18: Size distribution of the dust grains used for the dense
component of the SIGMA model. For the emission fitting and scattering
computations, the size distribution was truncated at 5 $\mu$m, as indicated by
the purple line.
* •
THEMIS: A dust model as discussed by Köhler et al. (2015) and Ysard et al.
(2016). The model consists of different dust population mixtures that have a
varying relative abundance based on the density of the model. In the following
we adopt the naming convention as defined by Köhler et al. (2015). The diffuse
regions of the model consist of mostly core-mantle (CM) grains which gradually
evolve to core-mantle-mantle grains (CMM) as density increases. In the densest
regions of the model, we assume that the grains have further evolved and are
gradually replaced by aggregate CMM grains with additional ice mantles (AMMI).
The relative abundances of the CM and CMM populations are set according to Eq.
7, with $\rho_{0}=1.45$. The relative abundance of the AMMI grains is then
defined by taking all cells where $\rho>7.5$, setting the relative abundance
of these cells to 1.0 and smoothing the cells with a $3\times 3$ Gaussian
beam. We then reduce the relative abundance of the CMM population in these
cells so that for each cell, the sum $\rm CM+CMM+AMMI=1.0$. An example of the
relative abundances of the three components across the densest part of the
cloud is shown in Fig. 19. The size distributions of the different dust
components are discussed by Köhler et al. (2015) and Ysard et al. (2016)
(Figs. 1 and 2, respectively). The average $\beta_{250/350}$ over the three
dust components is 2.011.
Figure 19: Relative abundances of the CM, CMM, and AMMI dust populations of
the THEMIS model across the innermost cells of the model cloud. Left panel:
the relative abundance of the diffuse CM component. Centre: relative abundance
of the CMM component. Right panel: relative abundance of the dense AMMI
component.
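The three-population abundance construction described above can be sketched as follows. This is an illustrative sketch, not the actual pipeline: the Gaussian sigma standing in for the "$3\times 3$ Gaussian beam" and the base of the logarithm in the Eq. (7) mixing law are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def themis_abundances(rho, rho0=1.45, rho_ammi=7.5, sigma=1.0):
    """Sketch of the CM/CMM/AMMI relative-abundance construction.

    rho: 2-D array of model densities. The CM/CMM split follows the
    tanh mixing law of Eq. (7) (log base assumed); sigma is a stand-in
    for the 3x3 Gaussian beam used to smooth the AMMI region.
    """
    cmm = 0.5 + 0.5 * np.tanh(2.0 * np.log10(rho) / rho0)
    cm = 1.0 - cmm
    # AMMI: set dense cells to 1, smooth with a small Gaussian beam,
    # then take the AMMI fraction out of the CMM population only
    ammi = gaussian_filter((rho > rho_ammi).astype(float), sigma=sigma)
    ammi = np.minimum(ammi, cmm)
    cmm = cmm - ammi
    return cm, cmm, ammi
```

By construction, CM + CMM + AMMI = 1 in every cell, matching the normalisation described in the text.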
## Appendix B Stochastically heated grains
The effects of stochastically heated grains (SHGs) were not included in our
modelling, because they are not directly connected to the submm emission and
NIR scattering. However, in order to study how well our best-fit models agree
with the observed emission at MIR wavelengths, we computed two test cases
using the models Default and THEMIS, including the stochastic heating. We did
not fit the models to the observed emission, but rather used the best-fit
parameters from the computations without SHGs and re-computed the emission
including the SHGs.
Figure 20: Observed (first row) and simulated (second row) emission maps using
the Default model at 100, 160, and 250 $\mu$m. The third row shows the
difference between the observed and simulated maps. The simulated emission has
been computed from the best fit parameters of the Default model and with
taking into account stochastically heated grains.
The results of these computations are shown in Figs. 20 and 21. For both
models, the emission in the SPIRE bands has decreased, by $\sim 10-15\,\%$,
because some of the energy is now emitted at MIR wavelengths. The emission is
30-45$\,\%$ above the observed values at 100 $\mu$m and 15$\,\%$ above at 160
$\mu$m. For the Default model, the morphology of the simulated 100 and 160
$\mu$m maps agrees with the observed maps. For the THEMIS model, the bright
rim in both maps is up to 40 MJy sr-1 brighter than observed. The region where
the CMM grains transition to the AMMI grains is clearly visible, especially in
the 250 $\mu$m map.
Figure 21: As Fig. 20, but for the THEMIS model.
An SED from the observations, computed as an average over a region
corresponding to the red circle in Fig. 5, is shown in Fig. 22 (the orange
line).
observations between 3.6 $\mu$m and 8 $\mu$m are from the Spitzer IRAC
instrument and the 24 $\mu$m data are from the Spitzer MIPS instrument. The
MIPS data are corrected with an aperture correction of 2 MJy sr-1. For the 60
$\mu$m observations, we use the improved reprocessing of the IRAS survey
(IRIS) data (Miville-Deschênes & Lagache 2005) and the 100 to 500 $\mu$m
observations are from the Herschel PACS (100 and 160 $\mu$m) and SPIRE (250,
350, and 500 $\mu$m) instruments.
Figure 22: Spectral energy distributions for the observations (orange line),
the Default model (star symbols), and the THEMIS model (diamond symbols). The
red and blue highlights show the assumed uncertainty in the modelled results
for the Default and THEMIS models, respectively. The data points of the models
are the sum of the scattered surface brightness seen above the background
level and the estimated emission, minus the background seen through the cloud.
SEDs from the Default and THEMIS models with the SHGs included are shown in
Fig. 22. The background levels of the IRAC observations have not been
calibrated to an absolute level. Thus, we assume an uncertainty of
$\pm\,30\,\%$ for the simulated surface brightness at IRAC wavelengths and
because of the uncertainty of the colour corrections, we assume an uncertainty
of $\pm\,30\,\%$ for the 24 $\mu$m data. As discussed by Launhardt et al.
(2013), the nominal calibration uncertainties for PACS and SPIRE are $\sim
5\,\%$ and $\sim 7\,\%$, respectively, thus for the simulated MIR and FIR maps
we assume a flat uncertainty of 10$\,\%$. These uncertainties are highlighted
with red and blue shading in Fig. 22.
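The assembly of the model SED points and their uncertainty bands described above can be sketched as follows; the band list and the example surface-brightness values are hypothetical.

```python
import numpy as np

# Band centres in microns; fractional uncertainties assumed in the text:
# 30% for the IRAC bands and MIPS 24 um, 10% for the PACS/SPIRE bands
bands_um = np.array([3.6, 4.5, 5.8, 8.0, 24.0, 100.0, 160.0, 250.0, 350.0, 500.0])
frac_err = np.where(bands_um <= 24.0, 0.30, 0.10)

def model_sed_point(scattered, emission, background):
    """Model data point: scattered light above the background level plus
    the estimated emission, minus the background seen through the cloud."""
    return scattered + emission - background

# Hypothetical example values (MJy/sr) for a single band
point = model_sed_point(scattered=0.10, emission=1.00, background=0.05)
lower, upper = point * (1 - frac_err), point * (1 + frac_err)
```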
It is clear that the simulations do not precisely match the observations,
although the shapes of the SEDs are in agreement. The THEMIS model is closer
to the observations both in the NIR and FIR, although the Default model at 60
and 160 $\mu$m is closer to the observed values.
## Appendix C Analysis of dust models based on the default model
In addition to the models discussed in the main part of the text, we have
tested dust models that are modifications of the Default model: increased
albedo (model Albedo), an increased value of the $g$ parameter (model Gtest),
and a changed grain size distribution (model Disttest). We have also
tested a case where the LOS density of the cloud is wider compared to our
assumed ’compact core’ model (model Wide). A description of these models is
provided in Appendix A, and the results are summarised in Table 3. The
resulting dust emission and scattered surface brightness maps are shown in
Appendix E. The scaling of the radiation field is similar for all models, with
$K_{\rm ISRF}\sim 0.45$, except for the model Wide, which has a lower scaling
factor of $K_{\rm ISRF}\sim 0.37$.
Compared to the Default model, the scattered surface brightness maps (see
Figs. 25, 26, and 27) of the models Albedo, Disttest, Gtest, and Wide show
only minor differences. The most notable difference is for the model Albedo, which
produces $\sim 20\,\%$ more surface brightness in all three bands compared to
the Default model.
Increasing the line-of-sight extent of the cloud can produce more scattered
light as the amount of illumination along the line-of-sight increases. We test
a case with the Default model where the line-of-sight density distribution is
extended (model Wide). The results indicate that the surface brightness is
increased, but only by $\sim 10\,\%$ (see Fig. 27, second row).
Table 3: Summary of the additional radiative transfer models, including the
column densities, NIR intensities, J band optical depth, 250 $\mu$m optical
depth, radiation field scaling, and the core temperature.
Model name | Description | $N(\rm H_{2})$ (1) | J (1) | H (1) | KS (1) | $\tau_{\rm J,em}$ (1) | $\tau_{250}$ (1) | $K_{\rm ISRF}$ | $\rm T_{\rm core}$ (2)
---|---|---|---|---|---|---|---|---|---
| | $(\rm cm^{-2})$ | $(\rm MJy/sr)$ | $(\rm MJy/sr)$ | $(\rm MJy/sr)$ | | ($\times 10^{-3}$) | | $\rm(K)$
OBS | Values derived from observations | $4.51\times 10^{21}$ | 0.08 | 0.15 | 0.10 | 1.50 | 2.56 | - | 7.5
| COM models | | | | | | | |
Default | Compiègne et al. (2011) | $1.59\times 10^{22}$ | 0.055 | 0.047 | 0.054 | 5.80 | 2.67 | 0.447 | 9.34
Albedo | Albedo of the dust grains increased by $20\,\%$ | $1.59\times 10^{22}$ | 0.098 | 0.074 | 0.077 | 5.77 | 2.67 | 0.447 | 9.34
Gtest | $g$ parameter of the dust grains increased by $15\,\%$ | $1.58\times 10^{22}$ | 0.052 | 0.041 | 0.051 | 5.79 | 2.66 | 0.443 | 9.34
Disttest | $\gamma$ of the dust size distribution increased by $20\,\%$ | $1.58\times 10^{22}$ | 0.074 | 0.075 | 0.088 | 7.30 | 2.69 | 0.445 | 9.33
Wide | Model cloud with a wider LOS density distribution | $1.59\times 10^{22}$ | 0.068 | 0.055 | 0.059 | 5.80 | 2.67 | 0.368 | 9.76
(1) The column density, background subtracted intensity, and the optical
depths of the J band and 250 $\mu$m band have been computed as average values
over $5\times 5$ map pixels centred on point 1 (see left panel of Fig. 11).
(2) The value derived from observations based on the $\rm N_{2}H^{+}$ line
observations by Lin et al. (2020). The modelled values are computed as
averages over $10^{3}$ cells centred at the core.
## Appendix D NIR and MIR observations
The MIR Spitzer observations were acquired from the Spitzer Heritage Archive.
The cloud L1512 has been covered by three programs: Program id. 94 (PI Charles
Lawrence) and Program id. 90109 (PI Roberta Paladini), both using the InfraRed
Array Camera (IRAC) (see Fig. 24), and Program id. 53 (PI George Rieke) with
the Multiband Imaging Photometer for Spitzer (MIPS). However, the program
90109 was carried out during the warm mission, thus only the 3.6 and 4.5
$\mu$m channels were available. The IRAC observations are discussed in more
detail by Stutz et al. (2009) and Steinacker et al. (2015), for the cold and
warm missions, respectively. The MIPS observations are described by Rieke &
Keene (2004) and Stutz et al. (2007).
The Spitzer 3.6 $\mu$m and 4.5 $\mu$m maps show extended surface brightness
towards the central region of the cloud, but in both 5.8 $\mu$m and 8.0 $\mu$m
maps the region is seen in absorption (see Fig. 24). The surface brightness in
the 3.6 and 4.5 $\mu$m maps can be caused by thermal emission by small grains,
but the surface brightness is only seen from the dense central regions. If the
surface brightness were thermal emission, driven by the high optical depth for
the radiation heating the dust grains, one would expect it to be bright over
the more extended region, not only in the cloud centre. On the other hand, as
discussed by Steinacker et al. (2010), in the current PAH emission models the
emission should increase towards the longer wavelengths and the Spitzer 4.5
and 5.8 $\mu$m bands, but in the Spitzer images the opposite is seen, as the
cloud appears in absorption towards the longer wavelengths. Thus, we can
conclude that the extended surface brightness seen in the 3.6 $\mu$m band, and
at shorter wavelengths, is scattered light.
Shown in Figs. 23 and 24 are the observations from the WIRCam instrument and
the Spitzer space telescope. To study the diffuse signal, we have used Source
Extractor (Bertin & Arnouts 1996, SExtractor) and the Point Spread Function
Extractor (Bertin 2011, PSFEx) to remove point sources. The extraction was
carried out in three steps. In the first step, we use SExtractor to detect
only the brightest point sources in each image. The detected bright sources
are then analysed by PSFEx to construct a PSF for each detected source. The
PSFs are then used in a second run with SExtractor to detect all point sources
in the images. This second run produces an image of objects, which we subtract
from the original observations, resulting in an image in which stars appear as
smooth holes. The NIR images are further smoothed over $6\times 6$ map pixels
to better show the morphology of the scattered light.
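The final smoothing step amounts to a $6\times 6$ pixel boxcar (sliding-window mean). A minimal sketch of such a filter is given below; the function name and the pure-NumPy implementation are our own illustration, not part of the SExtractor/PSFEx pipeline.

```python
import numpy as np

def boxcar_smooth(img, size=6):
    """Smooth a 2D surface brightness map by averaging over
    size x size pixel windows (edges use only the available pixels)."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    r = size // 2
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            out[i, j] = img[i0:i1, j0:j1].mean()
    return out
```

In practice, `scipy.ndimage.uniform_filter(img, size=6)` achieves the same effect more efficiently.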
Figure 23: Colour maps show the NIR surface brightness at J band (left frame),
H band (centre), and KS band (right). The surface brightness maps have been
smoothed over 6 $\times$ 6 map pixels. The white contours show the $N(\rm
H_{2})$ column density derived from the Herschel observations. The contour
levels are 15$\,\%$, 45$\,\%$, and 75$\,\%$ of the peak column density of
$1.12\times 10^{22}$ cm-2. Figure 24: Spitzer observations covering the 3.6,
4.5, 5.8, and 8.0 $\mu$m bands. The surface brightness maps have been smoothed
over 6 $\times$ 6 map pixels. The white contours show the $N(\rm H_{2})$
column density derived from the Herschel observations. The contour levels are
15$\,\%$, 45$\,\%$, and 75$\,\%$ of the peak column density of $1.12\times
10^{22}$ cm-2.
## Appendix E Additional figures
Figs. 25 to 28 show all of our surface brightness maps for the different dust
models. Shown in Fig. 29 are the individual components of the simulations.
Figs. 30 to 31 show the simulated surface brightness maps of the H band using
the Default dust model. The rows and columns of the figures correspond to
different assumptions on the strength of the radiation field and the sky
brightness behind the cloud. The model cloud column density is scaled by 1,
0.6, and 0.3 for the three figures, respectively.
Shown in Figs. 32 to 34 are the residuals between our models and the observed
emission (observed value minus the model prediction) for all dust models. Each
row in the figures corresponds to a single model and the columns show the
residuals in the 250, 350, and 500 $\mu$m bands.
Additional NIR spectra extracted from point 2 (see Fig. 11) are shown in Fig.
35, and a schematic overview of our modelling process is shown in Fig. 36.
Figure 25: Observed NIR surface brightness (first row) compared to model
predictions with the Default (second row), Albedo (third row), and Disttest
(fourth row) models. Figure 26: As Fig. 25, but for models Gtest (second row),
Scaled2 (third row), and Scaled4 (fourth row). Figure 27: As Fig. 25, but for
models Wide (second row), LG (third row), and LGM (fourth row). Figure 28: As
Fig. 25, but for models SIGMA (second row), THEMIS (third row), and DDust
(fourth row). Figure 29: Different components in the surface brightness maps
for the Default model. Shown on the first row are the background subtracted
observed surface brightnesses, and the second row shows the simulated scattered
light without the background subtraction. Shown on the third row is the
component of the background that is seen through the cloud (attenuated by a
factor of $e^{-\tau}$), and the fourth row shows the background subtracted
simulated surface brightness. The last row shows the difference between the
observed and simulated background subtracted surface brightness. Figure 30:
Simulated H band surface brightness maps for the Default model with different
assumptions on the strength of the radiation field and on the sky brightness
behind the cloud. Shown on the first row are the observed surface brightness
maps. Shown on the rows 2 to 4 are the simulated H band maps, for which the
strength of the radiation field has been scaled with a factor between 1 and 3,
and the intensity of the background has been scaled between factors 1 and 0.3.
The density of the cloud is the same as in the Default model. Figure 31: As
Fig. 30, but the cloud model has a $70\,\%$ lower column density compared to
the Default model. Figure 32: Relative errors between the observed emission
and our model predictions for 250, 350, and 500 $\mu$m bands. Each row
corresponds to a different dust model. The rows from top to bottom correspond
to models Default, Albedo, Disttest, and Gtest. Figure 33: As Fig. 32, but
for models Scaled2, Scaled4, Wide, and LG. Figure 34: As Fig. 32, but for
models LGM, SIGMA, THEMIS, and DDust. Figure 35: J, H, and KS band intensities
from observations (horizontal lines) and different models (symbols) for the
map position 2. The colours correspond to the J (purple), H (red), and KS
(black) bands. All intensity values have been background subtracted. We assume
a $20\,\%$ uncertainty in background sky estimates for the J and KS bands and
an uncertainty of $30\,\%$ for the H band. Figure 36: Schematic overview of
our modelling process to fit the dust emission and to compute an estimate for
the scattered surface brightness.
# BENDR: using transformers and a contrastive self-supervised learning task to
learn from massive amounts of EEG data.
Demetres Kostas
University of Toronto, Toronto, Canada
Vector Institute, Toronto, Canada
<EMAIL_ADDRESS>
&Stéphane Aroca-Ouellette
University of Toronto, Toronto, Canada
Vector Institute, Toronto, Canada
&Frank Rudzicz
University of Toronto, Toronto, Canada
Vector Institute, Toronto, Canada
Li Ka Shing Knowledge Institute, Toronto, Canada
###### Abstract
Deep neural networks (DNNs) used for brain-computer-interface (BCI)
classification are commonly expected to learn general features when trained
across a variety of contexts, such that these features could be fine-tuned to
specific contexts. While some success is found in such an approach, we suggest
that this interpretation is limited and an alternative would better leverage
the newly (publicly) available massive EEG datasets. We consider how to adapt
techniques and architectures used for language modelling (LM), which appear
capable of ingesting awesome amounts of data, towards the development of
encephalography modelling (EM) with DNNs in the same vein. We specifically
adapt an approach effectively used for automatic speech recognition, which
similarly (to LMs) uses a self-supervised training objective to learn
compressed representations of raw data signals. After adaptation to EEG, we
find that a single pre-trained model is capable of modelling completely novel
raw EEG sequences recorded with differing hardware, and different subjects
performing different tasks. Furthermore, both the internal representations of
this model and the entire architecture can be fine-tuned to a _variety_ of
downstream BCI and EEG classification tasks, outperforming prior work in more
_task-specific_ (sleep stage classification) self-supervision.
## 1 Introduction
To classify raw electroencephalography (EEG) using deep neural networks
(DNNs), discriminative models need to both extract useful features from raw
sequences, and classify those features. This frames both the promise and the
challenge of using DNNs: feature engineering could be almost entirely avoided,
without introducing limitations on classifier complexity, but both feature
extraction and classification need to be learned from a _limited_ supply of
(relevant) high-dimensional data. This challenge is evident in brain-computer
interface (BCI) applications, where DNNs can struggle to determine good
features. A large degree of data variability within and between different
users causes the classification performance of many model types to vary [1, 2,
3, 4]. Fundamentally, this reveals that these models lack generality, and
instead rely on characteristics specific to particular subjects (and/or
sessions). Furthermore, beyond these inter- and intra-personal variations,
different features are relevant for different BCI tasks in the first place.
Hand-selected features (sets possibly pruned later on) are distinct under
different BCI paradigms, as different features better discriminate different
tasks [2], e.g., P300 versus motor imagery (while this is typical, some
procedures, like covariance-based Riemannian classification schemes, do not
necessarily need different features for different tasks [2, 5]). In other
words, unlike domains such as computer vision where there is a clearer
understanding that nearly all DNNs tend to learn “low-level” features in
earlier layers (e.g., edge-detector-like primitives) [6, 7, 8], there is no
such understanding with DNNs used to process raw EEG. There are no known
transferable DNN properties or operations that are easily extended to any
subject, session, or task. Importantly, however, which “low-level” features
DNNs develop was revealed in computer vision through models whose performance
transferred from general to specific tasks [6, 7].
The development of transferable DNNs for raw EEG then appears to be a
promising classification tool on the one hand, but could also serve to
validate existing techniques, and perhaps even suggest novel methods (if early
layers do or do not correspond to existing methodologies respectively).
The difficulty of learning both “lower-level” features and an expressive
classifier simultaneously may help explain why work using DNNs to classify raw
BCI data has tended to prefer shallower networks [9, 10, 2, 11, 12]. With
these shallower networks, the range of _learnable_ features is relatively
limited. By design, these employ constrained linear operations, and a limited
few of these layers include subsequent non-linear activations [9], an
otherwise crucial feature of DNN complexity. Fundamentally, their inability to
uniformly outperform feature-engineering approaches [2] indicates that these
limited features are not entirely sufficient, and more importantly, they may
not always be desirable in a DNN approach [9]. In prior work we presented
evidence that, if inter-personal variability had been adjusted for, the
performance of shallower models more quickly saturates to lower performance
levels as compared to a deeper network alternative [9], suggesting that more
complex raw-BCI-trial features _could_ be developed using DNNs with sufficient
data, notably such that these data provide a reasonable empirical estimate of
the data distribution in question. Inter/intra-person variability sabotages
this approach, limiting the inter-applicability of data from all people and
sessions of an entire dataset. This is disappointing since the labelling
process is much more difficult than in other domains of DNN research to begin
with (consider the difficulty of collecting and labelling 100 more BCI trials
as compared to the same for 100 more images).
In this work, we argue that self-supervised sequence learning would be an
effective approach for developing and deploying more complex DNNs in BCI, as
it can learn from many more people, sessions, and tasks using _unlabelled_
data, thus promising to better model the input distribution of EEG data; it
affords the possibility to learn features with little variability across
traditionally confounding factors. Specifically, we investigate techniques
inspired by language modelling (LM) that have found recent success in self-
supervised end-to-end speech recognition and image recognition. We begin by
comparing fully supervised transfer learning (which has been frequently looked
to as an EEG/BCI TL solution) to self-supervised approaches, finding
inconsistency in the extension of computer vision-style pre-training to BCI
(and by extension the data domain of EEG). We then evaluate a simple
adaptation of previous work in self-supervised speech recognition called
wav2vec 2.0[13] to EEG. With this framework, arbitrary EEG segments are
encoded as a sequence of learned feature vectors we call BErt-inspired Neural
Data Representations (or ‘BENDR’). We ask whether BENDR are: transferable to
novel EEG recorded from unseen subjects, different hardware, and different
tasks, and if BENDR are generally suitable (both as-is or fine-tuned) to a
battery of downstream EEG classification tasks.
### 1.1 Pre-training with DNNs
For inspiration on tackling DNN pretraining in BCI, one can look to successful
applications in other domains. The modern deep learning (DL) “revolution” was
ushered in on the back of computer vision and image recognition [14, 15]. The
successes of DL in this domain have stemmed from a lineage of massive
_labelled_ datasets [15], such as the ImageNet dataset [16]. These datasets
were used to train deep convolutional neural networks, often one of the
variants or progeny of ResNet [17] and DenseNet [18]. Crucially, these are
labelled datasets, featuring – especially in the case of ImageNet – an
enormous number of unique possible classification _targets_ (1000 is common
with ImageNet (image-net.org/challenges/LSVRC/2012/), but more are
possible (http://image-net.org/about-stats)). As mentioned above, leveraging
labelled data (especially for a particular task) of a similar scale in BCI is
impractical but, despite this, a sizeable amount of prior work tries to
fashion a transfer learning strategy after the successes of ImageNet pre-
training. These take the form of transferring knowledge from a network trained
with _more data_ , typically more subjects, to a target domain with _less
data_ , typically a single subject [9, 19, 20, 3, 21, 22], with some work
transferring between entire datasets of the same paradigm, rather than
subjects [23]. On the surface, these embody a general-to-specific supervised
transfer learning scheme reminiscent of ImageNet pre-training. However, these
particular framings lack diversity in pre-training targets. Instead, the
number and type of targets remains the same in both the pre-training and fine-
tuning stages. We remain unaware of any work that pre-trains a DNN with a
_wide gamut of BCI-relevant targets_ to a _more narrow_ target set, as would
be common when using ImageNet as pre-training for more specific computer
vision tasks (it is also worth noting that our own prior work does not
consider or identify this). This is noteworthy, as this is part of what makes
ImageNet a _general task_. Evidence suggests that pre-training label diversity
is important for effective ImageNet transfer learning [24], though an excess
could be detrimental [25, 24]. More fundamentally, however, this pre-training
paradigm has begun to be questioned altogether, with some work finding that it
does not necessarily improve downstream performance, where commonly it has
been assumed that it should (e.g., in medical images or object localization;
though it _speeds up_ training considerably) [26, 6, 27, 25].
What has begun to emerge as a potential alternative in computer vision – and
markedly so when there is limited labelled downstream data – is self-
supervised learning [28, 29, 30, 31]. (Terminology here can be somewhat fuzzy.
What is meant by self-supervision is a supervision-like task that requires
domain-relevant understanding in some sense. Sometimes, ‘semi-supervised’ is
used instead, as it is often also a semi-supervised procedure [28], since the
task is learned in an unsupervised fashion first and then classic supervised
learning is used with labels. Typically, though, semi-supervision involves
inferring labels for unlabelled data during training. Self-supervision is
instead loosely a particular case of representation learning, which is not
historically uncommon in BCI [32], though this work differs in that the loss
is typically domain or data agnostic.) These
works are inspired by the recent success in natural language processing (NLP)
using LMs, which can be used for transfer learning, but also for few-shot and
zero-shot learning [33, 34]. We propose that DNN transfer learning in BCI and
neuroimaging analysis generally could follow a similar line, with
_encephalography models_ (EM) in place of LMs. The important question is how
best to construct such an EM so that it learns features that are general
enough to remain usable for any analysis task.
Prior work has developed approaches for (EEG) self-supervised sleep stage
classification (SSC) through contrastive learning[35]. Contrastive learning in
its most general form consists of identifying positive representations from a
set that also includes incorrect or negative distractor representations [36].
Banville _et al._ proposed two potential contrastive learning tasks – a
“relative positioning” task and an extension they termed “temporal
shuffling”[35]. Underlying both tasks is the notion that neighbouring
representations share a label. This is a fair assumption for SSC, where sleep
stages change slowly, and is generally reasonable for continuous problems,
where some notion of smoothness is assumed. Their proposed “relative
positioning” task is a binary classification problem distinguishing whether a
pair of representations are within a local or positive window $\tau_{pos}$, or
outside a long-range or negative window $\tau_{neg}$ (when
$\tau_{neg}>\tau_{pos}$, those falling within $\tau_{neg}$ but outside
$\tau_{pos}$ are ignored). The representations themselves are a learned
mapping (in their case, a convolutional neural network) of raw EEG time-
windows to a feature vector. Their alternative “temporal shuffling” method
adds a third window or representation with which to contrast that is within
$\tau_{pos}$ of one (arbitrary) window called the ‘anchor’, and again learns
the representations through a binary classification task. In this case, the
classification determines whether the three representations are ordered
sequentially, or are out of order. These tasks ultimately both improved
downstream SSC performance over the same network trained in a fully supervised
manner with randomly initialized weights (with self-supervision being
distinctly better when limiting fine-tuning data, a common theme in the recent
wave of self-supervision literature [37, 33]) and a variant of the same
network but trained under an autoencoder paradigm (alternative pretraining
option; the network was pretrained to reconstruct the original waveform).
“Relative positioning” performed better on average (though no statistical
significance was reported) as compared to its counterpart, but a linear
classification of simple hand-crafted features was still highest performing
overall. These results demonstrate the promise of self-supervised learning
with DNNs for EEG over a supervised approach. This is all the more valuable,
as the self-supervised time-window representations (learned features of a
window), when projected into a 2D visualization, also appeared to model some
sleep-stage information and information about subject age [35] purely through
the contrastive task, without use of sleep-stage labels. The major concern
with these particular schemes, though, is the lengths of the time windows
($\tau_{pos}$ and $\tau_{neg}$). The shortest windows employed were 2 minutes
for both $\tau_{pos}$ and $\tau_{neg}$, which seems prohibitively long. As
it is assumed that representations within $\tau_{pos}$ are similarly labelled,
it may be difficult to expand the use of this technique to time scales closer
to that of a BCI trial (across any paradigm), which tend to be no more than
several seconds at most. Instead, we focus our efforts on adapting a relevant
strategy from the wider ML literature that could develop features on smaller
time scales.
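The “relative positioning” labelling rule described above can be sketched in a few lines. The function name and the illustrative window lengths below are our own, not Banville et al.'s code.

```python
def relative_positioning_label(t_anchor, t_other, tau_pos, tau_neg):
    """Label a pair of window start times: 1 (positive) if the pair is
    within tau_pos of the anchor, 0 (negative) if separated by more than
    tau_neg, and None for the ignored gap in between (tau_neg > tau_pos)."""
    dt = abs(t_anchor - t_other)
    if dt <= tau_pos:
        return 1
    if dt > tau_neg:
        return 0
    return None  # inside tau_neg but outside tau_pos: ignored
```

With the shortest setting reported (2-minute windows, $\tau_{pos}=\tau_{neg}=120$ s), the ignored gap is empty and every pair receives a label.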
Returning to transfer learning successes in NLP, the _masked_ language model
(MLM) is a slight variation on the typical LM which models the probability of
encountering a language token given previous (or, in some cases, also
subsequent) tokens. The MLM scheme instead learns to _reconstruct_ language
token(s) given surrounding context (fashioned after the Cloze task), and is
employed by BERT [38] and its lineage [34] of similar models, which embody
part of the recent wave of successful NLP transfer learning. This family of
models may deploy a variety of auxiliary tasks [39] for transfer learning, but
the task currently at the heart of this family is as follows: given a sequence
of $N$ tokens $t_{1},...t_{N}$, and a subset of token indexes $I_{m}$, for
each token index $i\in I_{m}$, tokens are masked with some mask $M$ so that:
$q_{i}=\begin{cases}M&i\in I_{m}\\ t_{i}&\text{otherwise}\end{cases},\quad\forall i\in\{1,\dots,N\}$ (1)
A transformer encoder [38, 40] then reconstructs the original sequence of
tokens from the _masked_ sequence ($t_{i}$ and $q_{i},\forall i\in N$
respectively in eq. 1). $M$ could be a single learned token [13], or in the
case of BERT: 80% of the time a fixed [MASK] token, 10% a random token or 10%
the original token (with 15% of tokens masked within each sequence) [38].
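A minimal sketch of this masking follows: `mask_sequence` implements eq. (1) directly, and `bert_style_mask` sketches BERT's 80/10/10 variant. The function names and sampling details here are illustrative, not taken from any particular implementation.

```python
import random

def mask_sequence(tokens, masked_indices, mask_token="[MASK]"):
    """Eq. (1): replace t_i with the mask M for every i in I_m,
    leaving all other tokens unchanged."""
    idx = set(masked_indices)
    return [mask_token if i in idx else t for i, t in enumerate(tokens)]

def bert_style_mask(tokens, vocab, p_select=0.15, rng=None):
    """BERT variant: select ~15% of positions as prediction targets;
    of those, 80% become [MASK], 10% a random token from the
    vocabulary, and 10% are left as the original token."""
    rng = rng or random.Random(0)
    out = list(tokens)
    targets = []
    for i in range(len(tokens)):
        if rng.random() < p_select:
            targets.append(i)
            r = rng.random()
            if r < 0.8:
                out[i] = "[MASK]"
            elif r < 0.9:
                out[i] = rng.choice(vocab)
            # else: keep the original token
    return out, targets
```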
Could an EM be developed in this vein, using individual samples rather than
tokens (i.e., direct application of BERT to raw EEG)? Unfortunately, the
highly correlated nature of neighbouring samples in EEG (or most other
continuous data for that matter), is not conducive to this approach. The
likely result would be that, instead of an EM, a method for interpolation
would be learned, as has been argued in similar work in self-supervised
learning with speech [41]. In other words, the smoothness of these data would
make it hard to produce general features simply through recovering missing
points. Masking a contiguous span of tokens instead, which is beneficial in
NLP [42, 34], could avoid simply learning to interpolate missing samples, but
the _reconstruction_ of time-series data is difficult, due to the difficulty
(among other things) of capturing the degree of error in time (within
contiguous sequences) [43]. The losses used for such reconstruction, commonly
mean squared error (or mean absolute error), erroneously assume independence
in the error between elements in the series, causing inappropriate error
signals when (among other things) when simply shifting a reconstruction in
time [43].
Contrastive predictive coding (CPC) is a contrastive learning-based task that
retains the character of sequence learning provided by masked language model-
like approaches, but is not as susceptible to degeneration into interpolation,
nor similarly affected by the issues of time-series reconstruction [31]. With
CPC, the correct _learned representation_ for a particular sequence offset is
predicted relative to distractor representations, typically those of other
positions in the same sequence [31]. This task enables learning both a good
feature representation and an understanding of the progression of those
features end-to-end. Interestingly, both the representations alone [37], and
the addition of the sequence model [13] have proven potentially useful for
supervised fine-tuning after pre-training.
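The core of such a contrastive objective can be sketched as an InfoNCE-style loss, shown below as a minimal NumPy version under our own naming (wav2vec 2.0 additionally uses cosine similarity and a temperature).

```python
import numpy as np

def contrastive_loss(pred, candidates, true_idx):
    """Score a predicted representation against a set of candidate
    representations (the correct one plus distractors) by dot product,
    then return the negative log-softmax of the correct candidate."""
    scores = candidates @ pred
    scores = scores - scores.max()          # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[true_idx]
```

The loss is minimized when `pred` is most similar to the candidate at `true_idx`, which is exactly the "pick the correct representation among distractors" task described above.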
Prior work in self-supervised speech recognition has begun to synthesize parts
of CPC and MLM to produce methodologies for self-learning with raw waveforms
[13, 44, 45, 41, 31]. In our work, we adapt one of these approaches called
wav2vec 2.0 [13] (its particular formulation is detailed in section 2.4.1) to
EEG, and investigate how effective the representations (BENDR) are for
downstream tasks.
## 2 Materials and methods
All experiments are implemented using the _deep neural networks for
neurophysiology_ (DN3) library (https://github.com/SPOClab-ca/dn3). The
source code and pre-trained BENDR models can be found at
https://github.com/SPOClab-ca/BENDR.
### 2.1 Datasets
The ideal pre-training dataset for our purposes would feature many subjects,
each recorded over many sessions. These sessions would also ideally be
distributed across large time-scales and consist of a variety of performed
tasks. In other words, the pre-training dataset should consist of a
representative sample of EEG data in the most general sense. This also means
that these data should include multiple different recording hardware and
configurations. The closest publicly accessible dataset, to our current
knowledge, was the Temple University Hospital EEG Corpus (TUEG) [46]. It
consists of clinical recordings using a mostly conventional recording
configuration (monopolar electrodes in a 10-20 configuration) of over 10,000
people, some with recording sessions separated by as much as eight months
apart. The subjects were 51% female, and ages range from under 1 year old to
over 90 [46]. We focused specifically on versions 1.1 and 1.2 of this dataset
which amounted to approximately 1.5 TB of European-data-format (EDF) EEG
recordings _before_ preprocessing.
Furthermore, we compiled a non-exhaustive battery of publicly accessible EEG
data classification tasks summarized in Table 1. Most of these were BCI task
datasets, which could readily be compared to previous work with DNNs trained
without any additional unlabelled data [9, 11]. We also included one of the
sleep stage classification (SSC) tasks used by Banville _et al._ [35] in their
work on sleep stage self-supervision described above, for comparison. This
dataset afforded some further insight into generality, as BCI data are
typically classified in the context of particular trials or events, and SSC is
a more continuous problem, requiring that large spans of time are labelled
with the particular sleep stage a subject is undergoing. These segments are
distinctly longer than the BCI trials we considered in the remaining battery
(an order of magnitude difference in our case when compared to the largest BCI
task sequence length), and are distinctly closer in length to the pre-training
task. This allowed us to consider how effective our approach was to such a
different time-scale. Another notable difference with the SSC dataset was the
scale of available labels, which seems to have enabled prior work to consider
deeper and more complex models [47]. We segmented these sequences into 30
second periods as in prior work, and focused on 5 labels as in prior work [47,
35].
Dataset | Paradigm | sfreq. (Hz) | # Ch. | Subjects | Targets | Folds
---|---|---|---|---|---|---
MMI [48, 49] | MI (L/R) | 160 | 64 | 105 | 2 | 5
BCIC [50] | MI (L/R/F/T) | 250 | 22 | 9 | 4 | 9
ERN [51] | Error Related Negativity | 200 | 56 | 26 (10) | 2 | 4
P300 [52, 53, 49] | Donchin Speller | 2048 | 64 | 9 | 2 | 9
SSC [54, 55, 49] | Sleep Staging | 100 | 2 | 83 | 5 | 10
Table 1: Summary of downstream dataset battery and number of cross-validation
folds used. Cross validation splits were in a leave-multiple-subjects-out
configuration if $Folds<Subjects$, or leave-one-subject-out if
$Folds=Subjects$ (as in prior work [9]). The ERN dataset was featured in an
online competition (https://www.kaggle.com/c/inria-bci-challenge) with
10 held-out test subjects (not used during training), which we used
as a test dataset for all four validation splits of this dataset.
### 2.2 Preprocessing
The focus of the preprocessing stage was to create a maximally consistent
representation of EEG sequences across datasets, so that the pre-trained
network was well-suited to a _variety_ of “downstream” tasks. More or less,
this amounted to modifying downstream datasets to match the configuration of
the pre-training dataset. The first aspect of this was to remove spurious
differences in channel amplitude. Each sequence gathered for training was
linearly scaled and shifted (a weight and offset for each sequence adjusts
every sample in the sequence) so that the maximum and minimum values within
each sequence equal $1$ and $-1$ respectively. To account for the lost
relative (to the entire dataset) amplitude information, a single channel was
added with the constant value
$\frac{max(s_{i})-min(s_{i})}{max(S_{ds})-min(S_{ds})}$, where $S_{ds}$ is the
set of all samples in the dataset and $s_{i}\subset S_{ds}$ is a particular
sub-sequence (i.e., trial). We additionally addressed the differences in
sampling frequency and electrode sets of the different datasets. Our solutions
to these problems were similarly minimalist and were achieved using standard
features in DN3 [56]. Specifically, we over- or under-sampled (by whole
multiples, for lower and higher sampling frequencies respectively) to get
nearest to the target sampling frequency of 256 Hz. Then, nearest-neighbour
interpolation was used to obtain the precise frequency (described further in
[56]). Additionally, the P300 dataset was low-pass filtered below 120 Hz to
avoid aliasing due to its higher sampling rate (and associated higher original
low pass filter). Furthermore, the SSC dataset featured two bi-polar
electrodes: FPz-Cz and Pz-Oz, which were simply mapped to FPz and Pz,
respectively. The TUEG dataset also features some higher sampling rate
signals; we included those with low-pass filters that did not violate the
Nyquist criterion (and subsequently re-sampled them as above), and ignored the
rest.
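The per-sequence amplitude normalization described above can be sketched as follows; this is our own minimal NumPy version of the idea (DN3 provides the actual implementation), with the function name chosen for illustration.

```python
import numpy as np

def normalize_trial(x, ds_min, ds_max):
    """Linearly scale a (channels, samples) trial so its min/max map to
    -1/1, then append a constant channel carrying the trial's amplitude
    range relative to the whole dataset's range [ds_min, ds_max]."""
    lo, hi = x.min(), x.max()
    scaled = 2.0 * (x - lo) / (hi - lo) - 1.0
    rel_amp = (hi - lo) / (ds_max - ds_min)
    amp_channel = np.full((1, x.shape[1]), rel_amp)
    return np.concatenate([scaled, amp_channel], axis=0)
```

The appended channel restores the relative amplitude information that the per-trial scaling would otherwise discard.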
A reduced subset of the Deep1010 channel mapping from DN3 [56] was used
throughout. This ensured that particular channels were mapped to a consistent
index for each loaded trial. The original mapping was designed to be more
inclusive, and thus assumed up to 77 possible EEG electrodes. In the interest
of minimizing unnecessary electrodes for an already high-dimensional problem,
we focused on the 19 EEG channels of the _unambiguously illustrated 10/20_
channel set (UI 10/20) [57], as the TUEG dataset recordings were done using a
roughly 10/20 channel scheme. We simply ignored reference electrodes, electro-
oculograms, and any other auxiliary channels. When also accounting for the
additional relative amplitude channel described above, every sequence from
every dataset used 20 channels. All surplus channels were ignored, and missing
channels set to $0$.
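A minimal sketch of this fixed-index channel mapping follows (illustrative only: the electrode list below is our assumption of the 19-channel UI 10/20 set, and the actual Deep1010 mapping in DN3 [56] differs in detail):

```python
import numpy as np

# Assumed 19 electrodes of the unambiguously illustrated 10/20 (UI 10/20) set.
UI_1020 = ["Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8", "T7", "C3", "Cz",
           "C4", "T8", "P7", "P3", "Pz", "P4", "P8", "O1", "O2"]

def map_channels(data, channel_names):
    """Place each recorded channel at its fixed UI 10/20 index. Surplus
    channels (references, EOG, etc.) are dropped; missing channels stay 0.
    `data` is (recorded_channels, samples); returns (19, samples)."""
    out = np.zeros((len(UI_1020), data.shape[1]))
    for i, name in enumerate(channel_names):
        if name in UI_1020:
            out[UI_1020.index(name)] = data[i]
    return out
```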
During pre-training, we extracted sequences of 60 seconds (every 60 seconds)
from each usable sequence, which amounted to $15,360$ samples per subsequence.
We observed in early testing that there was better performance with larger
sequences (see figure 2 for more). As can be seen in table 2, the downstream
datasets all used sequence lengths shorter than this, but the architecture we
employed (see section 2.3) was ostensibly agnostic to sequence length (see
section 4 for caveats).
### 2.3 Model architecture
The model architecture closely follows that of wav2vec 2.0 [13] and is
comprised of two stages. A first stage takes raw data and dramatically
downsamples it to a new sequence of vectors using a stack of short-receptive-
field 1D convolutions. The product of this stage is what we call BENDR
(specifically in our case, when trained with EEG). A second stage uses a
transformer _encoder_ [40] (layered, multi-head self-attention) to map BENDR
to some new sequence that embodies the target task.
Raw data is downsampled through the stride (number of skipped samples) of each
convolution block in the first stage (rather than through pooling, which would
have greater memory requirements). Each of our convolution blocks comprises
the sequence: 1D convolution, GroupNorm [58], and GELU activation [59]. Our
encoder features six sequential blocks, each with a receptive field of 2,
except for the first, which was 3. Strides matched the length of the receptive
field for each block. Thus, the _effective sampling frequency_ of BENDR is 96
times smaller ($\approx 2.67$ Hz) than the original sampling frequency ($256$
Hz). Each block consists of 512 filters, meaning each vector has a length of
512.
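The resulting downsampling arithmetic can be verified with a short sketch (illustrative helper names; we assume, as stated above, that each stride equals its receptive-field width, so the blocks form non-overlapping windows):

```python
from math import prod

def encoder_output_length(n_samples, widths=(3, 2, 2, 2, 2, 2)):
    """Sequence length after six conv blocks whose stride equals the
    receptive-field width (non-overlapping windows, no padding assumed)."""
    for w in widths:
        n_samples //= w
    return n_samples

# Total downsampling factor: 3 * 2**5 == 96, so 256 Hz raw EEG becomes
# ~2.67 Hz BENDR vectors, each of dimension 512.
factor = prod((3, 2, 2, 2, 2, 2))
```

For a 60-second, 256 Hz input of 15,360 samples, this yields 160 BENDR vectors.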
The transformer follows the standard implementation of Vaswani _et al._ [40],
but with internal batch normalization layers removed and with an accompanying
weight initialization scheme known as T-Fixup [60]. Our particular transformer
architecture uses 8 layers, with 8 heads, model dimension of 1536 and an
internal feed-forward dimension of 3076. As with wav2vec 2.0, we use GELU
activations [59] in the transformer, and additionally include LayerDrop [61]
and Dropout at probabilities $0.01$ and $0.15$, respectively, during pre-
training but neither during fine-tuning. We represent position using an
additive (grouped) convolution layer [13, 62] with a receptive field of 25 and
16 groups before the input to the transformer. This allows the entire
architecture to be sequence-length independent, although it may come at the
expense of not properly understanding position for short sequences.
Originally, the downstream target of the wav2vec 2.0 process was a downstream
speech recognition _sequence_ (it was fine-tuned on characters and phonemes)
[13]. Instead, here the entire sequence is classified. To do this using a
transformer, we adopt the common practice [38] of feeding a fixed token
(_a.k.a._ [CLS] in the case of BERT or, in our case, a vector filled with an
arbitrary value distinct from the input signal range, in this case: $-5$) as
the first sequence input (prepended to BENDR). The transformer output of this
initial position was not modified during pre-training, and only used for
downstream tasks.
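Prepending this fixed token can be sketched as follows (a minimal NumPy illustration; names are ours):

```python
import numpy as np

CLS_VALUE = -5.0  # arbitrary constant outside the [-1, 1] input signal range

def prepend_cls(bendr):
    """Prepend the fixed classification token to a (seq_len, dim) BENDR
    sequence; the transformer output at this first position is what the
    downstream classifier reads."""
    cls_token = np.full((1, bendr.shape[1]), CLS_VALUE)
    return np.concatenate([cls_token, bendr], axis=0)
```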
The most fundamental differences in our work, as compared to the
speech-specific architecture that inspired it, are: 1. we do not quantize
BENDR for creating pre-training _targets_, and 2. we have _many_ incoming
channels.
In wav2vec 2.0, a _single_ channel of raw audio was used. While a good deal of
evidence [9, 63, 11, 64, 2, 12] supports the advantage of temporally-focused
stages (no EEG channel mixing) separate from a stage (or more) that integrates
channels, we elected to preserve the 1D convolutions of the original work to
minimize any additional confound and to reduce complexity (compute and memory
utilization $\propto N_{filters}$ with 2D rather than
$\propto\frac{N_{filters}}{N_{EEG}}$ for 1D convolutions). This seemed fair,
as there is also evidence that 1D convolutions are effective feature
extractors for EEG, particularly with large amounts of data [65, 56]. Notably,
wav2vec 2.0 downsampled raw audio signals by a much larger factor (320) than
our own scheme, but speech information is localized at much higher frequencies
than encephalographic data is expected to be. The new effective sampling rate
of BENDR is $\approx 2.67$ Hz, or a feature-window (no overlap) of
$\approx 375$ ms. We selected this downsampling factor as it remained stable
(i.e., it did not degenerate to an infinite loss, or simply memorize
everything immediately) during training.
### 2.4 Training
We used the Adam [66] optimizer throughout training, with weight decay set to
$0.01$. We additionally used a cosine learning rate decay with linear warmup
for 5% and 10% of total training steps (batches) for pre-training and fine-
tuning respectively. The peak learning rate itself varied by dataset; this,
and other variable hyperparameters, are further documented in appendix A.
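The cosine decay with linear warmup can be sketched as a stand-alone function (illustrative only; the actual runs used per-dataset peak learning rates, as noted above):

```python
import math

def lr_at_step(step, total_steps, peak_lr, warmup_frac=0.05):
    """Linear warmup to `peak_lr` over `warmup_frac` of all steps, then
    cosine decay to zero over the remaining steps."""
    warmup_steps = max(1, int(warmup_frac * total_steps))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

For fine-tuning, `warmup_frac` would be 0.10 rather than 0.05.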
#### 2.4.1 Pre-training
The pre-training procedure largely follows wav2vec 2.0, but we make some
notable hyperparameter changes. Specifically, the self-supervised loss for a
masked token localized at BENDR position $t$, is defined as:
$\mathcal{L}=-\log\frac{\exp(cossim(c_{t},b_{t})/\kappa)}{\sum_{b_{i}\in B_{D}}\exp(cossim(c_{t},b_{i})/\kappa)}$ (2)
Where $c_{t}$ is the output of the transformer at position $t$, $b_{i}$ is the
BENDR vector at some offset $i$, and $B_{D}$ is a set of 20 uniformly selected
distractors from the same sequence, plus $b_{t}$. We use the cosine similarity
$cossim(x,y)=x^{T}y/(|x||y|)$ function to determine how similar vectors are,
and the sensitivity of this is adjusted by a temperature factor $\kappa$, set
to $0.1$. In essence, this loss operates by adjusting the output of the
transformer at position $t$ to be _most similar to the encoded representation
at $t$, despite that this input to the transformer is masked_. We also added
the mean squared activation of the BENDR to the loss, as was similarly done
previously [13], but set the weight of this additional term to 1 (rather than
10).
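The contrastive loss of equation (2) can be sketched numerically as follows (a minimal NumPy illustration with our own function names, omitting the mean-squared-activation term):

```python
import numpy as np

def cossim(x, y):
    """Cosine similarity: x^T y / (|x||y|)."""
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

def contrastive_loss(c_t, b_t, distractors, kappa=0.1):
    """Negative log-softmax (temperature kappa) of the similarity between
    the transformer output c_t and the true BENDR vector b_t, against the
    distractor set B_D."""
    candidates = [b_t] + list(distractors)
    sims = np.array([cossim(c_t, b) for b in candidates]) / kappa
    sims -= sims.max()  # numerical stability
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())
```

The loss approaches zero when `c_t` is far more similar to `b_t` than to any distractor.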
We learn a single mask vector during pre-training of the same length as each
BENDR vector, and use this as the transformer input to masked positions.
Contiguous sequences of 10 are masked with probability $p_{mask}=0.065$, such
that, for each sample, the likelihood of being the _beginning_ of a contiguous
section was $p_{mask}$, and overlap is allowed. The number of
negatives/distractors was set to 20 and uniformly sampled from the _same_
sequence as the masked vector, i.e., negatives do not cross trials or
sequences.
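This masking procedure can be sketched as follows (illustrative, not the training code itself):

```python
import numpy as np

def mask_spans(seq_len, p_mask=0.065, span=10, rng=None):
    """Boolean mask over a BENDR sequence: each position begins a
    contiguous masked span of `span` steps with probability `p_mask`;
    overlapping spans are permitted."""
    if rng is None:
        rng = np.random.default_rng()
    mask = np.zeros(seq_len, dtype=bool)
    for t in np.flatnonzero(rng.random(seq_len) < p_mask):
        mask[t:t + span] = True  # spans are clipped at the sequence end
    return mask
```

Note that with these defaults roughly $1-(1-0.065)^{10}\approx 49\%$ of positions end up masked.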
After pre-training, we examined how generalizable the sequence model and
vectors were to unseen data, by evaluating the contrastive task, expressed as
the transformer accuracy in constructing $c_{t}$ to be most similar to $b_{t}$
rather than the distractors. During evaluation, we masked half the amount
expected during training, but such that masked spans were evenly spaced
through the sequence (so that there were no overlapping sequences, and
sufficient context was available). That is, for a sequence length of $N_{S}$,
we masked $0.5\times N_{S}\times p_{mask}=N_{m}$ contiguous sequences (of 10),
and spaced them every $\left\lfloor{\frac{N_{S}}{N_{m}}}\right\rfloor$ steps
(starting at the first sample). $N_{S}$ first remained at $15,360$ (60 seconds
as in training, no overlap between subsequent sequence representations) for
all datasets except P300, where sessions were too short and instead $5120$ (20
seconds) was used. We then evaluated the change in performance across the
downstream datasets, excluding P300, as $N_{S}$ varied from 20-60 seconds.
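The evaluation-time spacing can be sketched as follows (an illustrative helper; we assume $N_S$ is expressed at the same granularity as the mask spans):

```python
def evaluation_mask_starts(n_s, p_mask=0.065):
    """Start indices for evenly spaced masked spans at evaluation time:
    N_m = 0.5 * N_S * p_mask spans, spaced every floor(N_S / N_m) steps,
    starting at the first sample."""
    n_m = int(0.5 * n_s * p_mask)
    spacing = n_s // n_m
    return [i * spacing for i in range(n_m)]
```

For a 160-step sequence this yields 5 spans spaced 32 steps apart, so spans of 10 never overlap.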
#### 2.4.2 Downstream fine-tuning
Ultimately, our aim for subject-, session-, and dataset-generalizable
representations was not simply to accurately mimic the correct input; rather,
the intent was that these representations – and potentially the sequence
model itself – could be effectively transferred to specific and arbitrary
tasks. We considered six different variations of TL across the battery of EEG
classification tasks presented in table 8:
1. Add a new softmax classification layer to the first (pre-pended position) output token of the transformer and train the entire model to classify the downstream targets.
2. Ignore the pre-trained transformer, average pool the BENDR to four concatenated vectors, add a new classification layer and train the model (only the first stage and new layer) to classify the downstream targets.
3. The same as (1.), but without pre-training.
4. The same as (1.), but keep the BENDR fixed and continue training the transformer.
5. The same as (2.), but without pre-training.
6. The same as (2.), but keep the first stage weights fixed and train only the new classification layer.
We considered these permutations so that we could speak, at least to some
degree, to the effect each stage had on downstream performance. First, we were
interested in determining 1) whether the new sequence representation (BENDR)
contained valuable features _as-is_ (as they appear to for speech [13]) or
whether they required some further training, and 2) whether the sequence model
learned characteristics of the BENDR that were informative to the
classification task. Finally, ignoring pre-training altogether was, of course,
meant to examine how effective the network would be at learning the task
without pre-training or transfer learning.
The P300, ERN, and SSC datasets all had imbalanced class distributions; we
adjusted for these imbalances by _undersampling_ points of the more frequent
classes with replacement so that the number of samples drawn – per epoch – of
each class was equal to the number of examples of the least frequent target
class.
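This per-epoch balancing can be sketched as follows (an illustrative NumPy helper, not the actual training code):

```python
import numpy as np

def balanced_epoch_indices(labels, rng=None):
    """Indices for one epoch: draw, from every class, a number of examples
    equal to the size of the least frequent class (with replacement)."""
    if rng is None:
        rng = np.random.default_rng()
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n_min = counts.min()
    picks = [rng.choice(np.flatnonzero(labels == c), size=n_min, replace=True)
             for c in classes]
    return np.concatenate(picks)
```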
We also included the sequence regularization proposed by wav2vec 2.0 [13],
though we adjusted it for our more varied trial lengths. That is, in all 6
fine-tuning configurations, contiguous sections of 10% of the entire BENDR of
a trial were masked with the mask token learned during pre-training (not
changed after pre-training) at a probability of $0.01$. In other words, this
was the likelihood of a sample being the beginning of a contiguous masked
section, as in pre-training. Additionally across the BENDR (throughout each
vector in the sequence), a similar procedure dropped features to 0, where
contiguous sections of 10% of the channels (51) were dropped with a
probability of $0.005$.
## 3 Results
### 3.1 Pre-training generalization
Figure 1: Violin plot (inner lines for quartile divisions) of test-subject-
wise accuracy for each downstream dataset. Specifically, accuracy of the
sequence model (transformer stage) at creating a representation that is
closest to the correct representation at masked sequence positions. The P300
dataset is distinctly lower performing than the remaining datasets, though
this was likely due to its shorter evaluation context (see figure 2).
Nonetheless, there is minimal test-subject-wise variation, particularly when
compared to classifier performance generally.
Figure 1 shows how accurate the transformer stage is at producing an
appropriately similar BENDR when compared to distractor representations. There
are two key observations in this figure. The first is that there is little
variability across the first four datasets, and within each of the five
datasets. The latter point implies that this accuracy is not radically
variable across different subjects, as it tends to be when considering
classifier performance [1, 3] (though, when fine-tuning for classification,
this variability returns; see figure 3). This could be because a) the
transformer adequately learns a general model of how BENDR sequences of novel
persons and equipment progressed, b) the BENDR themselves are invariant to
different people, hardware, and tasks, c) some combination of the last two
possibilities, or d) the problem is being solved via some non-signal
characteristics. We return to this question shortly. The second observation
was alluded to already: the P300 dataset distinctly under-performs the other
downstream datasets. However, this coincided with the shortest evaluation
sequence. Looking at figure 2, we see that all five datasets have consistently
similar performance when evaluated with 20 seconds of data, so the dip in P300
performance of figure 1 seems less remarkable. Taken together, figures 1 and 2
indicate that a longer evaluation context makes the contrastive task easier.
This suggests that the contrastive task is, in fact, solved by learning
signal-relevant features, rather than some more crude solution like
interpolation, or by simply creating a sequence of recognizable position
representations (both of which have no reason to exhibit this dependence on
sequence length). We believe the most likely explanation for the rise in
performance with more context is that local representations are more difficult
distractors, implying that the new effective sampling rate remains too high
(and there is still redundant information encoded in local BENDR).
Notwithstanding, there is a strong uniformity of performance across datasets
and subjects (in both figures 1 and 2), meaning this scheme develops features
(whether through the transformer itself, or the BENDR) that generalize to
novel subjects, hardware, and tasks, though their applicability to downstream
contexts remains to be seen.
Figure 2: Contrastive accuracy versus evaluation length in seconds (x-axis
logarithmic). Performance is distinctly similar for all datasets, rising for
longer sequences. We suggest that this implies that samples that are further
apart are easier to distinguish between than neighbouring samples. Thus, while
BENDR encode local signal characteristics well, there is redundancy.
### 3.2 Downstream fine-tuning
Figure 3: Performance of all downstream datasets for each of the six model configurations considered. Metrics vary by dataset, see table 2. Metrics were normalized to range from chance (0) to perfect (1). Individual translucent points are performances of single subjects (within each test fold), solid diamonds indicate mean performance across all subjects/folds, with surrounding bars showing $.95$ confidence intervals using $n=1000$ bootstrap sampling. The discretized pattern of the MMI dataset is due to the limited trials _per subject_, which resulted in limited distribution of performance levels. Notably here, (1.) or (2.) were consistently among the best performing, yet both remained within the confidence levels of each other. The randomly initialized average-pooled BENDR with linear classifier (5.) also performed well, though less consistently. Model configurations are numbered in accordance with the list presented in section 2.4.2.

Dataset | Start (s) | Length (s) | Metric | Best | Model config.
---|---|---|---|---|---
MMI | 0 | 6 | BAC | 86.7 | Linear (2.)
BCIC | -2 | 6 | Accuracy | 42.6 | Linear (2.)
ERN | -0.7 | 2 | AUROC | 0.65 | Linear (2.)
SSC | 0 | 30 | BAC | 0.72 | Linear (2.)
P300 | -0.7 | 2 | AUROC | 0.72 | BENDR (1.)
Table 2: Performances of downstream datasets. Start and length refer to length
of trials and start with respect to event markers in seconds. Best performance
specifies average performance across all subjects (and therefore folds) for
best performing model configuration. BAC: class balanced accuracy; AUROC: area
under the receiver operating characteristic curve. Model configurations are
numbered in accordance with the list presented in section 2.4.2.
Figure 3 and table 2 present a picture of how effectively BENDR could be
adapted to specific tasks. Overall, the fine-tuned linear classification
(listed as downstream configuration 2. above) that bypassed the transformer
entirely after pre-training was highest performing four out of five times,
though using the transformer for classification (1.) performed consistently
similarly (confidence intervals always overlapped), and surpassed the bypassed
transformer (2.) with the P300 dataset (and was highest performing for this
dataset). Deploying the full network (initial stage and transformer) without
pre-training was generally ineffective, though this was not the case with the
SSC dataset, which may have been due to the larger data availability. In fact,
for both the full and linear model architectures trained with the SSC data,
fine-tuning the pre-trained model is mostly on par with the randomly
initialized counterpart. Considering our results with the SSC data relative to
those of Banville _et al._'s [35] proposed contrastive learning for sleep
staging (described in section 1.1), their reported results show that the fine-
tuned variants of our own model (1. and 2.) achieved a higher mean balanced
accuracy relative to their two proposed schemes. Taken in concert with our own
approach’s wider applicability and more fine-grained temporal feature
development, we believe this demonstrates that ours is a promising
alternative. Interestingly, the linear configurations with and without
pre-training (2. and 5.) achieved similar performance to Banville _et al._'s
fully supervised results (where
our configurations and their architecture employ similar 1D convolution-based
schemes), which is notable as with this dataset, both their “temporal-
shuffling” and “relative-positioning” tasks under-performed full supervision
when utilizing the full SSC dataset.
Our fine-tuned approaches similarly appear reasonably competitive with prior
work on the MMI dataset [9, 3], particularly when considering that only 19
channels (rather than the full set of 64) were being used. In all considered
configurations, despite heavy regularization (and the very low learning
rates),
the randomly initialized parameters were consistently prone to overfitting,
all the more so with the full model architecture. Conversely, the pre-trained
networks were slow to fit to the downstream training data (under the exact
same training scheme for fine-tuning). Ultimately, though most of these
results are not necessarily state-of-the-art, this single pre-training scheme
nonetheless shows a breadth of transferability which is apparently unique.
## 4 Discussion
We are unaware of any prior work assessing transformer-based [40] DNNs with
EEG data (raw or otherwise). This is perhaps consistent with the
ineffectiveness we observed with the randomly initialized full architecture
(3.) and could imply that effective use of this powerful emerging architecture
_requires_ pre-training (or at least sufficient data, given the better SSC
performance). Future work should continue to evaluate this architecture,
particularly as it appears to be more widely applicable than the NLP
applications it was originally proposed for [67, 13].
We believe that our approach can be improved through adjusting the neural
network architecture and pre-training configuration such that it becomes more
data-domain (EEG) appropriate. Future work will prioritize effective
integration of spatial information, likely by better isolating temporal and
spatial operations. Evaluation using large downstream datasets that _also_
feature many channels, such as the Montreal Archive of Sleep Studies
(MASS; http://massdb.herokuapp.com/en/) will be considered. Though available
for public access at the time of writing, these data were unavailable while
experiments were prepared and conducted. Prior work shows that DNN approaches
effective for EEG leverage spatial information [64], and it is presently
unclear to what degree this is the case with BENDR. In terms of
data-appropriate temporal modelling, which we have considered with relatively
more zeal in this work, recall that figure 2 presents the possibility that
local representations may be retaining redundant information; further
improvements may therefore be found in better compressing the temporal
resolution of BENDR. Future
work will consider larger downsampling factors in the initial stage, along
with longer sequences, balancing the more difficult problem of summarizing
more data (in effect, further data _compression_), with the apparent increased
effectiveness of the contrastive task (as observed in figure 2) on longer
sequences. A small but potentially fruitful avenue for further improvement
includes reconsidering the additive convolutional layer as a substitute for
explicit position encodings, which are in fact more common [34, 38, 40].
Recall that this was originally for two reasons: wav2vec 2.0 did the same (and
we felt it best to limit excessive changes to the architecture on a first
iteration), and it seamlessly supported flexible input lengths. This
latter point comes, however, with a trade-off – our particular position
encoder had a receptive field of 25 (stride of 1), which means a little over 9
seconds of input. While it seems that convolutional position encodings offer
better performance [62], this input width exceeded the _entire_ length of all
but the sleep classification task (the length we chose was optimized for pre-
training behaviour).
After considering these possible avenues for improving BENDR, we still do not
fully discount the validity of some of the transfer learning paths we appear
to exclude above in our introduction. We will reconsider these paths in future
work. Particularly, given the success we had in crossing boundaries of
hardware in this work, and in prior work [56], it may be possible to construct
an _aggregate_ dataset featuring a variety of EEG classification tasks,
towards better ImageNet-like pre-training. The construction of a more coherent
label set that crosses several BCI paradigms would no doubt be a significant
effort (e.g., problems may include: is a rest period before one task paradigm
the same as rest before another? What about wakeful periods in sleep?). This
would no doubt be imbalanced; the labels would be distributed in a long-tailed
or Zipfian distribution that would likely require well thought-out adjustment
[68, 69]. Furthermore, the value of ImageNet pre-training _seems to be_
localized to very early layers and the internalization of domain-relevant data
statistics [70, 6]. Future work could look into which of these may be
leveraged with a new aggregate (multiple subjects _and_ tasks) pre-training,
or the common subject-specific fine-tuning. This may provide insight into
better weight initialization, or integration of explicit early layers similar
to [6] (one could also argue that SincNet layers [71] are some such layers
that could factor here). Additionally, as temporally-minded reconstruction
losses continue to develop [43], reconsidering the effectiveness of signal
reconstruction as a pre-training objective (and/or regularization) is
warranted, whether this is within an MLM-like scheme similar to BENDR, or a
seq2seq model [72].
## 5 Conclusion
We have proposed MLM-like training as a self-supervised pre-training step for
BCI/EEG DNNs. This is in the interest of diversifying the investigations into
successful transfer learning schemes for DNNs applied to BCI and EEG. While
previous approaches fashioned DNN transfer learning after ImageNet pre-
training, we find this approach inadequate as there is limited applicable data
availability and it is questionably analogous to its forebear. While our
proposed alternative might similarly suffer from this latter point to some
degree (the most distinct MLM success is with discrete sequences, not
continuous ones), it is more conducive to leveraging potentially immense
amounts of unlabelled data, it is not limited to long-term feature
developments as with previous proposals, and it seems to produce
representations equally suited to different users and sessions, which is a
problem ImageNet pre-training appears less suited to solving. In summary, we
see strong paths for the effective deployment of powerful computation and
massive data scales with EEG and BCI. Effective solutions in these specific
applications could help drive application _and_ analysis solutions in
neuroimaging and perhaps physiology generally.
## References
* [1] Claudia Sannelli, Carmen Vidaurre, Klaus-Robert Müller, and Benjamin Blankertz. A large scale screening study with a SMR-based BCI: Categorization of BCI users and differences in their SMR activity. PLOS ONE, 14(1):e0207351, 1 2019.
* [2] F Lotte, L Bougrain, A Cichocki, M Clerc, M Congedo, A Rakotomamonjy, and F Yger. A review of classification algorithms for EEG-based brain-computer interfaces: a 10 year update. Journal of neural engineering, 15(3):031005, 2018.
* [3] Hauke Dose, Jakob S. Møller, Helle K. Iversen, and Sadasivan Puthusserypady. An end-to-end deep learning approach to MI-EEG signal classification for BCIs. Expert Systems with Applications, 114:532–542, 2018.
* [4] Minkyu Ahn and Sung Chan Jun. Performance variation in motor imagery brain-computer interface: A brief review. Journal of Neuroscience Methods, 243:103–110, 2015.
* [5] Paolo Zanini, Marco Congedo, Christian Jutten, Salem Said, and Yannick Berthoumieu. Transfer Learning: A Riemannian Geometry Framework with Applications to Brain-Computer Interfaces. IEEE Transactions on Biomedical Engineering, 65(5):1107–1116, 2018.
* [6] Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Transfusion: Understanding transfer learning for medical imaging. arXiv, (NeurIPS), 2019.
* [7] Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding Neural Networks Through Deep Visualization. 6 2015.
* [8] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS’12, pages 1097–1105, USA, 2012. Curran Associates Inc.
* [9] Demetres Kostas and Frank Rudzicz. Thinker invariance: Enabling deep neural networks for BCI across more people. Journal of Neural Engineering, 17(5):56008, 2020.
* [10] Yannick Roy, Hubert Banville, Isabela Albuquerque, Alexandre Gramfort, Tiago H Falk, and Jocelyn Faubert. Deep learning-based electroencephalography analysis: a systematic review. Journal of Neural Engineering, 16(5):051001, 2019.
* [11] Vernon J. Lawhern, Amelia J. Solon, Nicholas R. Waytowich, Stephen M. Gordon, Chou P. Hung, and Brent J. Lance. EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces. Journal of Neural Engineering, 15(5):aace8c, 2018.
* [12] Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, and Tonio Ball. Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11):5391–5420, 11 2017.
* [13] Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations. 2020.
* [14] Terrence J. Sejnowski. The unreasonable effectiveness of deep learning in artificial intelligence. Proceedings of the National Academy of Sciences, page 201907373, 1 2020.
* [15] Yann LeCun, Yoshua Bengio, Geoffrey Hinton, Lecun Y., Bengio Y., and Hinton G. Deep learning. Nature, 521(7553):436–444, 2015.
* [16] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
* [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. Multimedia Tools and Applications, pages 1–17, 12 2015.
* [18] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. Densely Connected Convolutional Networks. CoRR, 8 2016.
* [19] Fatemeh Fahimi, Zhuo Zhang, Wooi Boon Goh, Tih-Shi Lee, Kai Keng Ang, and Cuntai Guan. Inter-subject transfer learning with an end-to-end deep convolutional neural network for EEG-based BCI. Journal of Neural Engineering, 16(2):026007, 2019.
* [20] Gaowei Xu, Xiaoang Shen, Sirui Chen, Yongshuo Zong, Canyang Zhang, Hongyang Yue, Min Liu, Fei Chen, and Wenliang Che. A Deep Transfer Convolutional Neural Network Framework for EEG Signal Classification. IEEE Access, 7:112767–112776, 2019.
* [21] Michael A. Schwemmer, Nicholas D. Skomrock, Per B. Sederberg, Jordyn E. Ting, Gaurav Sharma, Marcia A. Bockbrader, and David A. Friedenberg. Meeting brain–computer interface user performance expectations using a deep neural network decoding framework, 2018.
* [22] Yuan-Pin Lin and Tzyy-Ping Jung. Improving EEG-Based Emotion Classification Using Conditional Transfer Learning. Frontiers in Human Neuroscience, 11(June):1–11, 2017.
* [23] Apiwat Ditthapron, Nannapas Banluesombatkul, Sombat Ketrat, Ekapol Chuangsuwanich, and Theerawit Wilaiprasitporn. Universal Joint Feature Extraction for P300 EEG Classification Using Multi-Task Autoencoder. IEEE Access, 7:68415–68428, 2019.
* [24] Minyoung Huh, Pulkit Agrawal, and Alexei A. Efros. What makes ImageNet good for transfer learning? CoRR, pages 1–10, 2016.
* [25] Jiquan Ngiam, Daiyi Peng, Vijay Vasudevan, Simon Kornblith, Quoc V. Le, and Ruoming Pang. Domain adaptive transfer learning with specialist models. arXiv, 2018.
* [26] Kaiming He, Ross Girshick, and Piotr Dollar. Rethinking imageNet pre-training. Proceedings of the IEEE International Conference on Computer Vision, 2019-Octob(i):4917–4926, 2019.
* [27] Simon Kornblith, Jonathon Shlens, and Quoc V. Le. Do better imagenet models transfer better? In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2019-June, pages 2656–2666, 2019.
* [28] Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, and Ram Nevatia. ABC-CNN: An Attention Based Convolutional Neural Network for Visual Question Answering. arXiv, 2016.
* [29] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. 200, 2020.
* [30] Olivier J. Hénaff, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron Van Den Oord. Data-efficient image recognition with contrastive predictive coding, 2019.
* [31] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation Learning with Contrastive Predictive Coding. 2018\.
* [32] Xiang Zhang, Lina Yao, Xianzhi Wang, Jessica J. M. Monaghan, David Mcalpine, and Yu Zhang. A survey on deep learning-based non-invasive brain signals: recent advances and new frontiers. Journal of Neural Engineering, 2020.
* [33] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language Models are Few-Shot Learners. CoRR, 5 2020.
* [34] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2019.
* [35] Hubert Banville, Isabela Albuquerque, Aapo Hyvarinen, Graeme Moffat, Denis-Alexander Engemann, and Alexandre Gramfort. Self-Supervised Representation Learning from Electroencephalography Signals. In 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6. IEEE, 10 2019.
* [36] Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, and Nikunj Saunshi. A theoretical analysis of contrastive unsupervised representation learning. In 36th International Conference on Machine Learning, ICML 2019, volume 2019-June, pages 9904–9923, 2019.
* [37] Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. Big Self-Supervised Models are Strong Semi-Supervised Learners. arXiv, (NeurIPS):1–18, 2020.
* [38] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR, 2018.
* [39] Stéphane Aroca-Ouellette and Frank Rudzicz. On Losses for Modern Language Models. pages 4970–4981, 2020.
* [40] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. 2017\.
* [41] Dongwei Jiang, Wubo Li, Ruixiong Zhang, Miao Cao, Ne Luo, Yang Han, Wei Zou, and Xiangang Li. A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition. 5 2020.
* [42] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. SpanBERT: Improving Pre-training by Representing and Predicting Spans. Transactions of the Association for Computational Linguistics, 8:64–77, 7 2020.
* [43] Francois Rivest and Richard Kohar. A New Timing Error Cost Function for Binary Time Series Prediction. IEEE Transactions on Neural Networks and Learning Systems, 31(1):174–185, 2020.
* [44] Alexei Baevski and Abdelrahman Mohamed. Effectiveness of Self-Supervised Pre-Training for ASR. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7694–7698. IEEE, 5 2020\.
* [45] Yu-An Chung, Hao Tang, and James Glass. Vector-Quantized Autoregressive Predictive Coding. In Interspeech 2020, volume arXiv, pages 3760–3764, ISCA, 10 2020\. ISCA.
* [46] Iyad Obeid and Joseph Picone. The temple university hospital EEG data corpus. Frontiers in Neuroscience, 10(May), 2016.
* [47] Sajad Mousavi, Fatemeh Afghah, and U. Rajendra Acharya. SleepEEGNet: Automated sleep stage scoring with sequence to sequence deep learning approach. PloS one, 14(5):e0216456, 2019.
* [48] Gerwin Schalk, Dennis J Mcfarland, Thilo Hinterberger, Niels Birbaumer, Jonathan R Wolpaw, and A Brain-computer Interface B C I Technology. BCI2000 : A General-Purpose Brain-Computer Interface ( BCI ) System. IEEE Transactions on Biomedical Engineering, 51(6):1034–1043, 2004\.
* [49] Ary L Goldberger, L. A. Amaral, Leon Glass, Jeffrey M Hausdorff, Plamen Ch Ivanov, Roger G Mark, Joseph E Mietus, George B Moody, C. K. Peng, and H Eugene Stanley. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation, 101(23), 2000.
* [50] Michael Tangermann, Klaus Robert Müller, Ad Aertsen, Niels Birbaumer, Christoph Braun, Clemens Brunner, Robert Leeb, Carsten Mehring, Kai J. Miller, Gernot R. Müller-Putz, Guido Nolte, Gert Pfurtscheller, Hubert Preissl, Gerwin Schalk, Alois Schlögl, Carmen Vidaurre, Stephan Waldert, and Benjamin Blankertz. Review of the BCI competition IV. Frontiers in Neuroscience, 6(JULY):1–31, 2012.
* [51] Perrin Margaux, Maby Emmanuel, Daligault Sébastien, Bertrand Olivier, and Mattout Jérémie. Objective and Subjective Evaluation of Online Error Correction during P300-Based Spelling. Advances in Human-Computer Interaction, 2012:1–13, 2012.
* [52] Luca Citi, Riccardo Poli, and Caterina Cinel. Documenting, modelling and exploiting p300 amplitude changes due to variable target delays in donchin's speller. Journal of Neural Engineering, 7(5):056006, sep 2010.
* [53] Luca Citi, Riccardo Poli, and Caterina Cinel. Erp-based brain-computer interface recordings, 2014.
* [54] Bastiaan Kemp, Aeilko H. Zwinderman, Bert Tuk, Hilbert A.C. Kamphuisen, and Josefien J.L. Oberyé. Analysis of a sleep-dependent neuronal feedback loop: The slow-wave microcontinuity of the EEG. IEEE Transactions on Biomedical Engineering, 47(9):1185–1194, 2000\.
* [55] Bastiaan Kemp, Aeilko Zwinderman, Bert Tuk, Hilbert Kamphuisen, and Josefien Oberyé. The sleep-edf database [expanded], 2018.
* [56] Demetres Kostas and Frank Rudzicz. Dn3: An open-source python library for large-scale raw neurophysiology data assimilation for more flexible and standardized deep learning. bioRxiv, 2020.
* [57] Valer Jurcak, Daisuke Tsuzuki, and Ippeita Dan. 10/20, 10/10, and 10/5 systems revisited: Their validity as relative head-surface-based positioning systems. NeuroImage, 34(4):1600–1611, 2007.
* [58] Yuxin Wu and Kaiming He. Group Normalization. International Journal of Computer Vision, 128(3):742–755, 2020\.
* [59] Dan Hendrycks and Kevin Gimpel. Gaussian Error Linear Units (GELUs). pages 1–9, 2016.
* [60] Xiao Shi Huang, Felipe Perez, Jimmy Ba, and Maksims Volkovs. Improving transformer optimization through better initialization. Proceedings of Machine Learning and Systems 2020, pages 9868–9876, 2020.
* [61] Angela Fan, Edouard Grave, and Armand Joulin. Reducing transformer depth on demand with structured dropout. arXiv, 103:1–15, 2019.
* [62] Abdelrahman Mohamed, Dmytro Okhonko, and Luke Zettlemoyer. Transformers with convolutional context for ASR. arXiv, 4 2019.
* [63] Demetres Kostas, Elizabeth W Pang, and Frank Rudzicz. Machine learning for MEG during speech tasks. Scientific Reports, 9(1):1609, 12 2019.
* [64] Stanislas Chambon, Mathieu N. Galtier, Pierrick J. Arnal, Gilles Wainrib, and Alexandre Gramfort. A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(4):758–769, 2018.
* [65] Lukas A.W. Gemein, Robin T. Schirrmeister, Patryk Chraba̧szcz, Daniel Wilson, Joschka Boedecker, Andreas Schulze-Bonhage, Frank Hutter, and Tonio Ball. Machine-learning-based diagnostics of EEG pathology. NeuroImage, 220, 10 2020.
* [66] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, pages 1–15, 2015.
* [67] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. pages 1–21, 2020.
* [68] Kaihua Tang, Jianqiang Huang, and Hanwang Zhang. Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect. (NeurIPS):1–12, 2020.
* [69] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. Advances in Neural Information Processing Systems, 32(NeurIPS):1–18, 2019.
* [70] Behnam Neyshabur, Hanie Sedghi, and Chiyuan Zhang. What is being transferred in transfer learning? arXiv, (NeurIPS):1–12, 2020.
* [71] Mirco Ravanelli and Yoshua Bengio. Interpretable Convolutional Filters with SincNet. Arxiv, (Nips), 11 2018.
* [72] Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks. Springer, c2012, Berlin ; New York, 2012.
## Appendix A Downstream hyperparameters
Dataset | Batch Size | Epochs | Learning Rate
---|---|---|---
MMI | 4 | 7 | $1\times 10^{-5}$
BCIC | 60 | 15 | $5\times 10^{-5}$
ERN | 32 | 15 | $1\times 10^{-5}$
P300 | 80 | 20 | $1\times 10^{-5}$
SSC | 64 | 40 | $5\times 10^{-5}$
Table 3: Hyperparameters that varied between datasets; these were not changed between different model configurations (see the list in Section 2.4.2).
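For convenience, Table 3 can be expressed as a configuration mapping. This is an illustrative sketch: the dictionary name, key spellings, and lookup helper are our own and do not come from any released code, but the values match the table.

```python
# Per-dataset downstream fine-tuning hyperparameters (values from Table 3).
# The structure and names here are illustrative, not from the authors' code.
DOWNSTREAM_HPARAMS = {
    "MMI":  {"batch_size": 4,  "epochs": 7,  "lr": 1e-5},
    "BCIC": {"batch_size": 60, "epochs": 15, "lr": 5e-5},
    "ERN":  {"batch_size": 32, "epochs": 15, "lr": 1e-5},
    "P300": {"batch_size": 80, "epochs": 20, "lr": 1e-5},
    "SSC":  {"batch_size": 64, "epochs": 40, "lr": 5e-5},
}

def hparams_for(dataset: str) -> dict:
    """Return the hyperparameters for a dataset; raises KeyError if unknown."""
    return DOWNSTREAM_HPARAMS[dataset]
```

Keeping these in one mapping makes it easy to run all five downstream evaluations in a loop while holding the model configuration fixed, as the caption describes.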